Somehow a few articles that I read some time ago came together under this heading. Here are some bits and pieces to contemplate again and again:
From ‘Computing Science Education: The Road not Taken’ by Niklaus Wirth:
It is instructive to compare a mathematics text book with a computer text book in a high school curriculum. I had the misfortune to do so at some length, and to put it simply: we suck. I assume that we are sending a strong message: don’t even consider computer science as a career unless you are a masochist.
Surely, in this post-modern academic environment the professor has long ago ceased to be the wise, learned man, penetrating deeper and deeper into his beloved subject in his quiet study. The modern professor is the manager of a large team of researchers, the keen fund raiser maintaining close relationships with the key funding agencies, and the untiring author of exciting project proposals and astonishing success stories. It would be suicidal in this highly competitive business to waste time pondering about how to best present trivial subjects to a mass of beginners. When it comes to course material, software and tools, the obvious choice is what lies on the shelf and has been adopted by everyone else anyway. In this fight for success and survival it is best to join the bandwagon. Achievement is measured in the size of a team, the number of publications generated and citations discovered, conferences visited and resources consumed, but not in the devotion to teaching, which is not measurable anyway. Surely, this academic life style is often contrary to the better knowledge of the individual, but is enforced by pressure to convert places of learning into profit centers of high visibility, and it borders on prostitution.
From ‘Systems Software Research is Irrelevant’ by Rob Pike:
What is Systems Research these days?
Web caches, Web servers, file systems, network packet delays, all that stuff. Performance, peripherals, and applications, but not kernels or even user-level applications.
Mostly, though, it’s just a lot of measurement, a misinterpretation and misapplication of the scientific method.
Too much phenomenology: invention has been replaced by observation. Today we see papers comparing interrupt latency on Linux vs. Windows. They may be interesting, they may even be relevant, but they aren’t research.
In a misguided attempt to seem scientific, there’s too much measurement: performance minutiae and bad charts.
By contrast, a new language or OS can make the machine feel different, give excitement, novelty. But today that’s done by a cool Web site or a higher CPU clock rate or some cute little device that should be a computer but isn’t.
The art is gone.
But art is not science, and that’s part of the point. Systems research cannot be just science; there must be engineering, design, and art.
From ‘Why I am Not a Professor OR The Decline and Fall of the British University’ by Mark Tarver:
This year, 2007, marks the eighth year since I ceased to be a tenured lecturer in the UK, what is called, I think, a tenured professor in the USA. I’ve never worked out whether I was, in American terms, an assistant professor or an associate professor. But it really doesn’t matter, because today I am neither. You see I simply walked out and quit the job. And this is my story. If there is a greater significance to it than the personal fortunes of one man, it is because my story is also the story of the decline and fall of the British university and the corruption of the academic ideal. That is why this essay carries two titles – a personal one and a social one. This is because I was privileged to be part of an historical drama. As the Chinese say, I have lived in interesting times.
After seven years of the new regime, I had the opportunity to compare the class of 1999 with the class of 1992. In 1992 I set a course in Artificial Intelligence requiring students to solve six exercises, including building a Prolog interpreter. By 1999, six exercises had shrunk to one: a 12-line Prolog program, for which students were allotted eight weeks. A special class was laid on for students to learn this and many attended, including students who had attended a course incorporating logic programming the previous term. It was a battle to get the students to do this, not least because two senior lecturers criticised the exercise as presenting too much of a challenge to the students. My Brazilian Ph.D. student, who superintended some of these students, told me that the level of attainment of some of our British final year students was lower than that of the first year Brazilian students.
Teaching was not the only criterion of assessment. Research was another and, from the point of view of getting promotion, more important. Teaching being increasingly dreadful, research was both an escape ladder away from the coal face and a means of securing a raise. The mandarins in charge of education decreed that research was to be assessed, and that meant counting things. Quite what things and how wasn’t too clear, but the general answer was that the more you wrote, the better you were. So lecturers began scribbling with the frenetic intensity of battery hens on overtime, producing paper after paper, challenging increasingly harassed librarians to find the space for them. New journals and conferences blossomed and conference hopping became a means to self-promotion. Little matter if your effort was read only by you and your mates. It was there and it counted.
Today this ideology is totally dominant all over the world, including North America. You can routinely find lecturers with more than a hundred published papers and you marvel at these paradigms of human creativity. These are people, you think, who are fit to challenge Mozart who wrote a hundred pieces or more of music. And then you get puzzled that, in this modern world, there should be so many Mozarts – almost one for every department.
From ‘How Should Research be Organised? [PDF]’ by Donald Gillies:
This book is divided into three parts. Part 1 presents a criticism of the Research Assessment Exercise (or RAE) which has been used to organise research in the UK from 1986 to 2008. The RAE is based on peer review, and the criticism consists in pointing out a surprising flaw in peer review. Many works which in retrospect are seen as constituting major advances were judged by contemporaries of the researcher, his peers, to be valueless. An example of this is provided by Frege, whose Begriffsschrift was judged in 6 contemporary reviews to have made no advance in the subject. Nowadays it is seen as having introduced mathematical logic in its modern form.
Part 2 of the book criticises the new system (the Research Excellence Framework, or REF) which has been introduced in the UK to replace the RAE. Where this does not continue to use peer review, it uses metrics such as citation indices. A citation index judges the merit of a research paper by the number of times it is cited by other researchers. However, the papers of pink diamonds like Frege and Semmelweis, whose work is not appreciated by their contemporaries, will not be cited by their contemporaries. They will therefore do badly on metrics such as citation indices as well as on peer review, and so the new system has exactly the same fault as the one it replaces. It is likely to result in pink diamonds being thrown away, and hence in progress in research being held up.
We do live in interesting times, and maybe these pessimistic views are not that pessimistic but just realistic. Once again, these kinds of articles make me think about ‘slowing down’ instead of going so fast and being so ‘innovative’.