How and Why is Scala Used in the Aerospace Industry?

There’s been a recent thread on the scala-user e-mail list that touched on an interesting topic: How and Why is Scala Used in the Aerospace Industry?

A few highlights from the thread:

* Scala and Akka are currently used for spacecraft telemetry data display, storage, and analysis for the European Space Agency. The software is used for all missions at GSOC (the Columbus Module of the ISS, the SAR Earth observation satellites TerraSAR-X/TanDEM-X, and some other missions) and for LEOPs at Eutelsat.

* DLR GSOC will be using Scala and Spire for space mission planning. The next generation of the GSOC scheduling engine, PLATO, is currently being written in Scala.

* Scala is also used for telemetry analysis at JPL (NASA’s Jet Propulsion Laboratory), and more generally for the development of modeling DSLs, by a team that is part of a research lab (the Laboratory for Reliable Software) working in close interaction with missions.

* In Rüdiger Klaehn’s words: “I am absolutely convinced that functional programming (meaning not just a language that has closures, but programming almost exclusively with pure functions) is the correct path to reliable software. The most ubiquitous and accepted platform in space operations at DLR, and in European space operations in general, is the JVM. Even the next generation European Mission Control System (MCS) is going to be written for the JVM. So you need a functional language that runs on the JVM and can seamlessly consume JVM libraries. This leaves Scala and Clojure as serious contenders. Since I favour strongly typed languages, the choice was clear.”

Some reasons given by programmers who chose Scala for aerospace industry software: Read the rest of this entry »

1 Comment

Posted by on December 22, 2014 in FunctionalProgramming



How to fix class “javax.servlet.FilterRegistration”’s signer information does not match signer information of other classes in the same package (when unit testing with Spark Streaming)

I’ve recently started to write some unit tests for a Spark Streaming application and even the simplest scenario led to the following error:

class “javax.servlet.FilterRegistration”’s signer information does not match signer information of other classes in the same package

This happened when Maven ran the test; in other words, it was a run-time error. An Internet search indicated that I should be suspicious of conflicting versions of javax.servlet on the classpath. The following commands showed that I was on the right track:

mvn dependency:tree
mvn dependency:tree | grep servlet

Modifying the pom.xml to exclude javax.servlet as follows solved the problem:
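A typical exclusion looks like the following. This is only a sketch: whether hadoop-client or some other artifact drags in the conflicting javax.servlet classes depends on your own `mvn dependency:tree` output, so adjust the dependency accordingly.

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <!-- Keep the duplicate, differently-signed servlet API off the classpath -->
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```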


I hope there will be fewer such conflicts in the upcoming Hadoop and Spark versions.

Leave a comment

Posted by on December 4, 2014 in Programlama



How to get better performance from Scala by using Parallel Collections

Today I needed to download the HTML content of some articles from a newspaper, and I decided to write a quick and dirty Scala application to get the job done. I only needed to parse a main HTML page using regular expressions, get a list of URLs, iterate over them getting the contents of each, and finally write them to files. Thanks to Scala I was able to code it comfortably and quickly, but when I ran the code I saw that it took about 50 seconds to grab the contents of 150 URLs. Would it be possible to make it faster? Fortunately, Scala has had Parallel Collections support for a very long time, and I decided to try it out.

All I had to do was to convert the following part:

for (url <- urls) { ...

into:

for (url <- urls.par) { ...

and run it again.
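For context, here is a minimal self-contained sketch of the difference. The `fetch` function and the example URLs are made up: `fetch` just simulates a slow network call with a sleep, whereas the real script read each URL’s contents over HTTP.

```scala
// Sketch: sequential vs. parallel mapping over a collection of URLs.
// `fetch` simulates a slow network call; in a real script it would be
// something like scala.io.Source.fromURL(url).mkString.
def fetch(url: String): String = {
  Thread.sleep(100) // stand-in for network latency
  s"contents of $url"
}

val urls = (1 to 8).map(i => s"http://example.com/article/$i")

val sequential = urls.map(fetch)      // one request at a time
val parallel   = urls.par.map(fetch)  // spread over a thread pool

// Same results, different wall-clock time:
assert(parallel.seq == sequential)
```

Note that on Scala 2.13 and later, `.par` lives in the separate scala-parallel-collections module and needs `import scala.collection.parallel.CollectionConverters._`; on the Scala versions current at the time of this post it worked out of the box.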

The result was better than I expected: the ‘normal’ version ran in the range of 30 to 50 seconds, whereas the parallelized version ran in the range of 8 to 10 seconds, that is, 3 to 5 times faster! Yet another reason to use Scala.

And for those who say “Gist or didn’t happen”, the source code, together with its build.sbt file, is available as a Gist. Don’t take my word for it; spend a few minutes and try it yourself.

Leave a comment

Posted by on October 31, 2014 in Programlama



Functional Programming in Scala: The most advanced Scala and functional programming book for the working programmer

It is safe to say that “Functional Programming in Scala” by Chiusano and Bjarnason is the most advanced Scala programming book published so far (in a sense, it can be compared to SICP). Half of one of my bookshelves is occupied by Scala books, including Scala in Depth, but none of them takes functional programming as seriously as this book, or pushes it so close to its limits. This, in turn, means that most Java programmers (including very senior ones), as well as Scala programmers with some experience, should prepare themselves to feel very much like newbies again.

But why the need for such a book, and what’s all that noise about functional programming? Here is my favorite description of functional programming, given by Tony Morris: “Supposing a program composed of parts A, B, C, D, and a requirement for a program of parts A, B, C, and E. The effort required to construct this program should be proportional to the size of E. The extent to which this is true is the extent to which one achieves the central thesis of Functional Programming. Identifying independent program parts requires very rigorous cognitive discipline and correct concept formation. This can be very (very) difficult after exposure to sloppy thinking habits. Composable programs are easier to reason about. We may (confidently) determine program behaviour by determining the behaviour of sub-programs -> fewer bugs. Composable programs scale indefinitely, by composing more and more sub-programs. There is no distinction between a ‘small’ and a ‘large’ application; only ‘smaller than’ or ‘greater than’.”
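Morris’s “effort proportional to the size of E” point can be seen even in a toy Scala sketch. The parts below are made-up functions, chosen only to show that swapping part D for part E means composing a different function, not rewriting the program:

```scala
// Pure parts compose into programs; the program's behaviour follows
// directly from the behaviour of its parts.
val partA: Int => Int = _ + 1
val partB: Int => Int = _ * 2
val partD: Int => Int = _ - 3
val partE: Int => Int = x => x * x

val programABD: Int => Int = partA andThen partB andThen partD
val programABE: Int => Int = partA andThen partB andThen partE

println(programABD(10)) // ((10 + 1) * 2) - 3 = 19
println(programABE(10)) // ((10 + 1) * 2) squared = 484
```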

The description above not only points to the core idea of functional programming and why it is important and useful, but also draws attention to the fact that getting used to functional program design can be difficult for people who are not accustomed to thinking that way. Fortunately, “Functional Programming in Scala” is here to fill a huge void in that respect.
Read the rest of this entry »

Leave a comment

Posted by on September 13, 2014 in FunctionalProgramming



PostgreSQL 9 High Availability Cookbook

PostgreSQL 9 High Availability Cookbook is a very well written book whose primary audience is experienced DBAs and system engineers who want to take their PostgreSQL skills to the next level by diving into the details of building highly available PostgreSQL-based systems. Reading this book is like drinking from a fire hose: the signal-to-noise ratio is very high; in other words, every single page is packed with important, critical, and very practical information. As a consequence, this also means that the book is not for newbies: not only do you have to know the fundamental aspects of PostgreSQL from a database administrator’s point of view, but you also need a solid GNU/Linux system administration background.

One of the strongest aspects of the book is the author’s principled and well-structured engineering approach to building a highly available PostgreSQL system. Instead of jumping straight to recipes to be memorized, the book teaches you basic but very important principles of capacity planning. More importantly, this planning of servers and networking is not only given as a good template; the author also explains the logic behind it, drawing attention to the reasoning behind the heuristics he uses and why some magic numbers are taken as good estimates in the absence of more case-specific information. This style is applied very consistently throughout the book: each recipe is explained so that you know why you do something in addition to how you do it. Read the rest of this entry »

Leave a comment

Posted by on August 21, 2014 in Books, Linux, sysadmin



Is this the State of the Art for grammar checking on Linux in the 21st century?

Recently, I shared an article with a colleague of mine. The article had been published in a peer-reviewed journal, and its contents were original and interesting. On the other hand, my colleague, being a meticulous reader of scientific texts, immediately spotted a few simple grammar errors. It would have been easy to blame the authors and editors for not correcting such errors before publication, but this triggered another question:

Why don’t we have open source, very high quality grammar checking software that is already integrated into major text editors such as Vim, Emacs, etc.?

Any user of a recent version of MS Word is well aware of on-the-fly grammar checking, at least for English. But as is well known, many academics use LaTeX to typeset their articles and rely either on well-known text editors such as Vim and Emacs, or on specialized software for handling LaTeX easily. Telling these people to check their articles using MS Word, or to copy and paste their text into an online grammar checking service, therefore does not make a lot of sense. Those methods are not convenient, and thus not very usable by the hundreds of thousands of scientists writing articles every day. But what would be the ideal way? The answer is simple in theory: we have high quality open source spell checkers, at least for English, and they have already been integrated into the major text editors, so scientists who write in LaTeX have no excuse for spelling errors; it is simply a matter of activating the spell checker. If only similar software existed for grammar checking, it would be just as straightforward and convenient to eliminate the easiest grammar errors, at least for English.

A quick search on the Internet revealed the following for grammar checking on GNU/Linux:

– Baoqiu Cui has implemented grammar checker integration for Emacs using link-grammar, but unfortunately it is far from easily usable.


Read the rest of this entry »

1 Comment

Posted by on June 10, 2014 in Emacs, Linguistics, Linux



GODISNOWHERE: A look at a famous question using Python, Google and natural language processing

Are there any commonalities among human intelligence, Bayesian probability models, corpus linguistics, and religion? This blog entry presents a piece of light reading for people interested in a combination of those topics.
You have probably heard the famous question:

       “What do you see below?”

       GODISNOWHERE

The stream of letters can be broken down into English words in two different ways: either as “God is nowhere” or as “God is now here.” You can find endless variations on this theme on the Internet, but I will deal with this example in the context of computational linguistics and big data processing.


When I first read the beautiful book chapter titled “Natural Language Corpus Data”, written by Peter Norvig for the book “Beautiful Data“, I decided to run an experiment using Norvig’s code. In that chapter, Norvig presents a very concise Python program that ‘learns’ how to break a stream of letters into English words; in other words, a program capable of ‘word segmentation’.

Norvig’s code, coupled with Google’s language corpus, is powerful and impressive; it is able to take a character string such as

wheninthecourseofhumaneventsitbecomesnecessary

and return a correct segmentation:

‘when’, ‘in’, ‘the’, ‘course’, ‘of’, ‘human’, ‘events’, ‘it’, ‘becomes’, ‘necessary’

But how would it deal with “GODISNOWHERE”? Let’s try it out in a GNU/Linux environment: Read the rest of this entry »
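The idea behind Norvig’s segmenter can be sketched in a few lines of Scala: try every split point, score candidate words by unigram probability, and keep the most probable segmentation, memoizing sub-results. The probabilities below are invented for illustration; the real program uses unigram counts from Google’s corpus, so its verdict on this string may well differ.

```scala
// Toy word segmenter in the spirit of Norvig's Python version.
// Word probabilities are made up; unknown words are heavily penalized.
val prob: Map[String, Double] = Map(
  "god" -> 0.010, "is" -> 0.040, "now" -> 0.020,
  "here" -> 0.020, "nowhere" -> 0.015
).withDefaultValue(1e-9)

// Memoize (probability, best segmentation) for each suffix seen.
val memo = collection.mutable.Map.empty[String, (Double, List[String])]

def segment(text: String): (Double, List[String]) =
  if (text.isEmpty) (1.0, Nil)
  else memo.getOrElseUpdate(text, {
    (1 to text.length).map { i =>
      val (head, tail) = text.splitAt(i)
      val (p, words)   = segment(tail)
      (prob(head) * p, head :: words)
    }.maxBy(_._1)
  })

println(segment("godisnowhere")._2) // List(god, is, nowhere) with these toy numbers
```

With the made-up numbers above, the single word “nowhere” (0.015) outscores “now” times “here” (0.020 × 0.020 = 0.0004), so the toy model picks “god is nowhere”; a different corpus could just as easily tip the balance the other way, which is exactly what makes this string fun.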


Posted by on March 1, 2014 in Linguistics, Programlama, python



