Author Archives: hadara

on eKool

For several weeks now, eKool-related topics have been a talking point in the media, both purely technically (it doesn't work, it's slow, it's confusing) and more fundamentally: is it really necessary to force everything into a rigid structure and write it all down?

I have no personal experience with eKool, so the opinions below are based purely on what I have read online and what acquaintances have described.

The problems seem to stem from the fact that eKool, which is essentially a monopoly in its market, is run and developed by Koolitööde AS, a private company.
In a market economy, the primary goal of any company is generally to earn as much profit as possible for its owners. In a market with working competition, meeting that goal forces a company to also take customers' expectations and wishes into account, because otherwise the customers simply go to a competitor. eKool holds a monopoly position in the Estonian market, and it is extremely unlikely that anyone would start competing over such a small market. So they presumably have little reason to work very hard for their customers' favor.

For schools, the current choice is therefore essentially to use eKool and hope that pressure through the press will eventually make things better from their point of view, to not use it at all, or to build their own system (which some schools are indeed doing).

A considerably more logical model than the current one would be for all the schools that have already set out to develop their own systems to stop struggling alone and join forces by founding a non-profit association or foundation that would then build a new eKool.
– Since the developer would then be owned by the schools themselves, they would have a real incentive to build a tool that is as convenient, fast and labor-saving as possible.
– There would be no owners' profit to pay for, so the solution should come out cheaper than the current one.
– It should also be considerably cheaper than every school building its own system.

Ideally, the design of such a system would be as distributed and modular as possible.
A central server would handle only user authentication, but each school could, if it wished, deploy the actual application somewhere else (creating competition between hosting providers, which would push prices down). Each school could thus decide, within certain limits, which version, which modules and which look (skin) it uses. The non-profit would collect feedback from users, develop the core modules and run the central authentication system. The application could be open source, so that everyone involved could easily contribute fixes and, if they wanted, build extra modules that matter specifically to them.

It might not even be necessary to start from scratch, since there are several open source applications around the world with more or less similar goals (for example SchoolTool and Moodle).
Some of them would make it fairly easy to move from e-gradebook functionality to the functionality of a real e-school.

Considering that several parts of the current eKool have long been in English, one suspects that they too built their solution by adapting something that already existed.

PS. I wonder who actually owns Koolitööde AS? It's probably just my lack of skill, but I couldn't find a conclusive answer in half an hour.

europython 2010

We (my wife and I, both programmers) got back from the Europython 2010 conference a couple of days ago, and I decided to write a bit about the stuff we saw there.

This year the conference took place in Birmingham, the UK's second largest city. We were there only for the main part of the conference (19-22 July), but there were a couple of days of tutorials before that and a couple of days of sprints (hack-togethers) afterwards. There were almost 400 delegates this year, which is a bit down from the previous year. Each day started with a keynote from someone well known in the community.

The first keynote was given by Dr. Russel Winder, who talked about the future of processors and the need for better concurrent programming tools than threads, mutexes and shared memory, which he considers too low-level. He also said that cache coherency algorithms will not scale much further: for the 256+ core machines that are not too far in the future, we certainly can't get fast shared memory, because assuring cache coherency across that many cores would be impossible. In short, some kind of architecture with message passing between cores or core clusters with separate memory is inevitable. Russel was pushing CSP (Communicating Sequential Processes) as an answer to that problem, and there were two other talks about it (one by him and one by Sarah Mount). Basically it seems to be a message passing model where functions run as separate processes that communicate with each other over synchronous channels. The examples given used embarrassingly parallel problems, and for me it was kind of hard to imagine that approach in large systems. I think the main "problem" hindering the adoption of this (and similar models) so far is that everyone has grown so accustomed to OOP by now. You can't really use objects in the CSP model, since they tend to have a lot of internal state. So it's rather hard to throw away all the years of OOP thinking and replace it with something else, mostly because of inertia and the fact that people really, really hate change.

Still, some kind of message passing abstraction seems to be unavoidable in the future.
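
To make the model a bit more concrete, here is a rough CSP-flavored sketch using only the Python standard library (the talks used dedicated CSP libraries, so the structure here is merely illustrative; note also that a multiprocessing.Pipe is buffered, whereas true CSP channels synchronize sender and receiver):

import multiprocessing

def producer(chan):
    # a CSP-style "process" talks to the outside world only through its channel
    for i in range(5):
        chan.send(i * i)
    chan.send(None)  # sentinel marking the end of the stream

def consumer(chan):
    while True:
        value = chan.recv()  # blocks until the producer sends something
        if value is None:
            break
        print(value)

if __name__ == "__main__":
    # the two Pipe endpoints stand in for a channel between two processes
    producer_end, consumer_end = multiprocessing.Pipe()
    p = multiprocessing.Process(target=producer, args=(producer_end,))
    c = multiprocessing.Process(target=consumer, args=(consumer_end,))
    p.start(); c.start()
    p.join(); c.join()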

The second keynote was given by Bruce Lawson from Opera, and his main point was that people should use W3C-standardized technology instead of proprietary things like Flash and Silverlight. His argument was that standardized technologies make your site more easily maintainable and accessible: for regular people with different devices, for the screen readers used by the blind, and for search engines. He had a demo of using the CSS media queries standard to write web applications that ran on very different mobile devices. The general idea is to limit the scope of CSS rules based on the capabilities of the presenting device, directly from the CSS, instead of the currently widespread approach of detecting the actual device or just writing your stuff for one single device.
Used in conjunction with W3C widgets (and possibly device APIs), it would allow you to write cross-platform applications that can be installed on users' devices and used as a normal app that just happens to run inside a browser. This seems a far saner way of writing mobile apps than the current approach, which requires a different language & SDK for each major device family. I remember reading an interview with someone from Nokia a while back where he echoed the same sentiment: this is the future for most types of applications. Now if only we could write web apps in some saner language than JS 🙂
Bruce also had a couple of nice demos of the canvas and video tags.

The third keynote was given by Richard Jones and it was an overview of the history and the current state of Python (both CPython and the alternative implementations).

The last keynote was delivered by BDFL Guido van Rossum, and it was basically a Q&A session with questions submitted by participants to a special app beforehand, where they could also be voted on. I had never seen Guido IRL before, so it was nice to see that the guy is really rational and down to earth. There were questions about which language features he hates and which ones he would like to take over from other languages. He really couldn't come up with anything he strongly dislikes, and said that it's really important to keep the language clean and simple so that newcomers, to programming in general and to Python in particular, would find it easy to use.
There were questions about the viability of alternative Python implementations, and Guido said he really has no emotional attachment to CPython and thinks alternative implementations are a great plan B for various cases, but it will certainly take a lot more time and effort for an alternative implementation to reach the level of quality where it can be widely used as a drop-in replacement for CPython, mainly because there are just too many nuances that are really only "documented" by the actual behaviour of CPython.
He was happy that people are finally thinking about approaches to parallelism other than just getting rid of the Global Interpreter Lock.

Raymond Hettinger gave two talks on "Idiomatic Python", which were some of the best talks that I attended. Raymond talked about many things, for example:
– exceptions in Python are cheap, so the meme that they should be avoided for performance reasons (which comes from C++, where exceptions really are slow) doesn't hold in Python
– bound methods are a perfectly legal construct and shouldn’t be considered a hack
– how to optimize Python code (after you have made it stable, of course). One of the main techniques was binding often-used functions from the global and builtin scopes to the local scope, to avoid constantly looking them up through several namespaces (see the sketches after this list)
– the __missing__() special method, which you can define in a subclass of dict; it is called when the dict is accessed with subscript syntax (a[foo]) and the key is not found, and whatever __missing__() returns is returned as the result (see the sketches after this list)
– several interesting objects from the collections module, for example named tuples and ordered dictionaries
– the math.fsum() function, which should be used for adding up floats because, unlike plain sum(), it doesn't accumulate a huge cumulative rounding error (see the example after this list)

There’s actually a page by David Goodger that bears the same name “Idiomatic Python” that is probably not connected to Raimond’s talks in any way but contains a lot of similar very good advice that I would have liked to get back when I started programming Python :-).

I also attended Raymond's talk on Selenium testing, which disappointed me a bit because it felt like a commercial for his company's Selenium-in-the-cloud testing product. Admittedly it looked good, but I would have wanted to see something more technical.

There were several talks about alternative Python implementations like PyPy and HotPy. PyPy seems to be usable for some real-world stuff now and has lately gained a C module interface. HotPy is fast, but it is purely a research project, which means that writing the boring but necessary stuff required to actually run most real-world applications on it is not in the plans yet. Several comparisons also mentioned Google's Unladen Swallow, which can't use all the optimizations that PyPy and HotPy use, since it wants to stay strictly CPython compatible; because of that it should work as a good drop-in replacement, and it still promises up to a 5x speedup over CPython in some use cases.

One of the nice talks that I attended was given by Mark Dickinson about the new float representation in Python 3.1. It produces more intuitive results that are repeatable across platforms, unlike the previous implementation. For example, repr(1.1) will now return 1.1 instead of the 1.1000000000000001 returned previously. Both are within the tolerated error range, but 1.1 is certainly the more intuitive result.
This seemingly small change actually required adding more than 4000 lines of code.
The author was a bit worried that they have now made float behavior almost too intuitive, so that newcomers won't understand right away that float is inherently unsafe for things that require exactness, such as financial calculations (you should use decimal in those cases).
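
Both the new repr and the decimal advice are easy to demonstrate (the repr output assumes Python 3.1 or later):

from decimal import Decimal

print(repr(1.1))         # 1.1 now; 1.1000000000000001 on older versions
print(0.1 + 0.2 == 0.3)  # False: binary floats only approximate these values
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True: exact decimal arithmetic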

The talk about the Shogun machine learning toolbox contained several interesting classifier demos, and the toolbox seemed easy enough to pick up, so I will certainly try it out.

There was another talk, by Tony Ibbs, about kbus, a communication system for Linux that provides a reliable and simple-to-use communication mechanism between processes on the same or even separate systems. Communication is handled by a kernel module, and the bus is primarily intended for use in embedded systems (he works for a set-top box company). It can do both unicast and multicast messaging, and messages have IDs, so you can respond to a specific message and simply block until you either get a reply or see that there can't be one because the conversation partner has died or lost its state in a restart. The format of Tony's talk was really nice too: basically all of his slides were small code snippets showing how to use the system.

On the third conference day we had the conference dinner, which happened to be held at the same hotel we were staying at, so it was really convenient for us. We met some interesting people there, but unluckily there was just too much noise in the dining room, so it was hard to hear anyone besides the person next to you. Money was also being collected for the PSF there; in exchange you could get a shoulder massage if you wanted.

General notes

– In general the conference was well organized and the talks didn't go overtime. Wi-Fi was awful for the first couple of days, but stabilized after it was split into two different SSIDs.
– Sometimes the abstracts weren't a good enough guide for deciding what to listen to, since some speakers talked about completely different things. For example, one talk was supposed to be about the nuances of and differences between various database libraries, but most of the time was spent on introductory material like why you should use a database in your application at all. One speaker didn't even have a talk and said he wanted to do a Q&A session instead. Someone proposed on the mailing list that a short fast-forward session at the beginning of each day, where each speaker gets 30 seconds to advertise their talk, would help, and it really sounds like a reasonable idea.
– Birmingham doesn't seem to like tourists: we couldn't find a map of the city anywhere in the airport or the train station. We asked in several bookstores in the airport and in the city, and not only did they not have one, they were actually surprised by the question, as if we were the first people ever to ask for such a thing 🙂
– There are many sharp contrasts in Birmingham's architecture: examples of modern architecture stand right next to houses that are probably more than a century old and have plywood over the windows. There are lots of old abandoned factories with trees and all kinds of other things growing out of the walls, roofs and chimneys. It's kind of nice to see that nature always wins in the end 😛

on Motorola's Droid X

The current crop of smartphones are really just tiny computers that happen to have a GSM modem attached and are fitted into a really small chassis. You could go out and buy a BeagleBoard, which contains the same SoC (OMAP 3*) that powers many smartphones, and attach a screen and a GSM module to it. It would of course look horribly clunky, but it would be usable as a basic phone, and with a little more engineering effort you could get something that looks like a normal phone. So there really isn't any magic to it.

Keeping that in mind, it's really alarming to see how some manufacturers are trying to lock down their phones so that you can't install your own software on them. The latest and most invasive example of this is the Droid X from Motorola, which contains a technology called eFuse that will semi-brick [1] the phone if you try to install an OS on it that is not signed by them.

Here's a quote from a Motorola representative:

Motorola’s primary focus is the security of our end users and protection of their data, while also meeting carrier, partner and legal requirements. The Droid X and a majority of Android consumer devices on the market today have a secured bootloader. In reference specifically to eFuse, the technology is not loaded with the purpose of preventing a consumer device from functioning, but rather ensuring for the user that the device only runs on updated and tested versions of software. If a device attempts to boot with unapproved software, it will go into recovery mode, and can re-boot once approved software is re-installed. Checking for a valid software configuration is a common practice within the industry to protect the user against potential malicious software threats. Motorola has been a long time advocate of open platforms and provides a number of resources to developers to foster the ecosystem including tools and access to devices via MOTODEV at http://developer.motorola.com.

While this rhetoric might seem reasonable at first glance, it really doesn't make any sense once you think of the Droid X as the computer that it really is, and of Android as a specific version of a specific OS. Would you ever buy a computer from a hardware vendor if their offering came with their specific version of "Windows 7" (branded and modified a bit) that couldn't be upgraded by the user?

The mobile network isn't really any more special than a Wi-Fi network or the internet in general. I have never seen a reasonable argument for why a device containing a GSM modem should be treated any differently from a device containing a Wi-Fi chip. Both have radios in them, and both radios are really distinct modules that run their own firmware/operating system, so messing with the radio has to be prevented there, not in the OS of the main device.

The only real reason seems to be that all the relevant industries want as much control as possible, and in general this interest is in direct conflict with the long-term interests of the users. From Motorola's perspective the main reason is probably just planned obsolescence: they are likely to provide software updates for this device for a year at most, and after that you just have to buy a newer device if you want newer software, even though your hardware might be perfectly capable of running it.

Anyway, currently the most open smartphone that is actually usable as a phone seems to be the Nokia N900.

bidirectional pipes in shell

The other day I wanted to control one interactive command line program from another. So the controlling program A had to have its output connected to program B's input (a normal pipe), and program B's output had to be connected back to program A's input (the tricky part).
After trying different things I found the easiest approach to be with named pipes:

rm -f app_fifo && mkfifo app_fifo && cat app_fifo | ./app_controller | ./myapp > app_fifo && rm -f app_fifo

The ksh shell seems to have an easier built-in way (coprocesses) of doing this kind of thing, but I have never used it.
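
For comparison, the same loop is straightforward to build in Python with subprocess (a sketch reusing the program name from above; it shares the usual caveat that both sides can deadlock if they fill the pipe buffers):

import subprocess

# run myapp with both its stdin and stdout connected to us; this process then
# plays the role that app_controller and the fifo played in the shell version
app = subprocess.Popen(["./myapp"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

app.stdin.write(b"command\n")   # what the controller pipes into myapp
app.stdin.flush()
reply = app.stdout.readline()   # myapp's answer flowing back to the controller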

some fixes for the nginx push module

I have been testing various Comet/push servers lately and finally decided to use the Nginx Push module. My use case is a bit unusual as far as Comet applications go: I need to have ~150k TCP connections open 24/7, but there's no need for broadcasting or message queuing functionality.

I found 2 memory leaks in the Nginx Push module ver. 0.692.

The first one occurs when you send a message to a channel that doesn't have any listeners. Here's a patch I wrote for that. I have only tested it with the push_subscriber_concurrency first and push_store_messages off scenario; it might very well break other scenarios.

The other, far more annoying memory leak occurred for every message that I sent and was rather large (message length + ~200 bytes). So it was leaking about 27 MB for 100k "hello world!" type test messages.

I spent several days hunting this one down, and finally it occurred to me that the nginx pool allocator, which the module used through the ngx_(p|c)alloc() and ngx_pfree() functions, wasn't really built to free memory in the general case. Unless your allocations were larger than a defined threshold (4k), they were made from a memory block whose data structure has no means of actually freeing individual allocations (it only keeps a pointer to the highest allocated position).
So when the small-allocation area ran out of space, it was simply grown.

Larger allocations were kept in a separate list of allocation blocks and were actually freed on ngx_pfree().

Here's the actual source of ngx_pfree():

ngx_int_t
ngx_pfree(ngx_pool_t *pool, void *p)
{
    ngx_pool_large_t  *l;
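    /*
     * NB: only allocations on the pool's "large" list can be matched and
     * freed here; small allocations handed out from the pool's data blocks
     * are never reclaimed individually.
     */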
 
    for (l = pool->large; l; l = l->next) {
        if (p == l->alloc) {
            ngx_log_debug1(NGX_LOG_DEBUG_ALLOC, pool->log, 0,
                           "free: %p", l->alloc);
            ngx_free(l->alloc);
            l->alloc = NULL;
 
            return NGX_OK;
        }
    }
 
    return NGX_DECLINED;
}

This allocator is designed that way to be as efficient as possible at allocating memory for requests, which generally have a short lifetime. For that particular use case, not freeing small amounts of memory is mostly OK, and thanks to the simpler data structure it is also faster. All the memory blocks are freed anyway when the request is done and ngx_destroy_pool() is called.

The Push module used this pool allocator in a different way: the pool was created at bootup and never destroyed, so it just grew and grew even though free was called correctly.

So anyway, here's a second patch that fixes this memory leak by replacing the nginx pool allocator usage with the actual system allocator. I have tested it with 200k TCP connections and 100M messages, and the memory usage didn't change at all.

I also found a segmentation fault when using push_subscriber_concurrency last. This is probably some kind of concurrency/locking issue, since that setting causes internal broadcasting under some conditions. I haven't spent any time hunting that bug down, since I really needed push_subscriber_concurrency first. Besides, the author of the module said the bug was known and someone was already working on it.

the new ID card software saga

The story of building the new ID card software is starting to resemble a soap opera. First the beautiful beginning in mid-2008, when Smartlink won a 10.5 million EEK public tender to create the new ID card software bundle. Within 8 months, so roughly by Midsummer 2009, new software was supposed to be ready for several Windows versions, for the Mac and for the three most common Linux distributions. On the browser side, IE >= 6.0, FF >= 1.5 and Safari >= 3.0 were specified. On top of that came ODF signing support and other odds and ends. All of the software was supposed to be released under an open source (LGPL) license.

By Midsummer 2009 nothing was finished, and hardly any information about new deadlines or the like reached the public. At least the code really was open, and anyone who wanted to could follow the development, test it and report bugs.

Meanwhile, time moved on in the software world, and supporting some things beyond what the contract specified became topical (the Google Chrome browser, Windows 7). Who knows whether any extra requirements were added when the contract was amended in early 2009. In itself this shouldn't have brought much extra work, since on the browser side the NPAPI should probably cover it either way, and in terms of the relevant APIs Windows 7 hardly differs from Vista at all. Either way, N operating systems and M browsers on both 32- and 64-bit architectures add up to quite a few combinations to test.

In early 2010 things got more interesting: RIA, the agency that had commissioned the software, terminated the contract on the last day of February, while Smartlink claimed in the press that the handover of the software had been planned for the very next day.

"We have handed over to RIA the software developments that should make the ID card work for Google Chrome, Mac and Windows 7 users as well, and we want this software to reach users as quickly as possible," said the company's CEO Henrik Põder.

The CEO added that terminating the contract in a situation where Smartlink has fulfilled all of its contractual obligations with only a few days' delay is neither in good faith nor in any way reasonably justified.

What's interesting in that statement is the mention of Google Chrome and Windows 7, which apparently did not appear in the original contract. At the same time there is no reference whatsoever to the Linux software and Firefox, which the original contract most certainly did include. In that light it seems unrealistic to claim that they were ready to fulfil all of their contractual obligations by March 1.
As late as February there was, among other things, a bug under Linux with Firefox that crashed Firefox on startup if no smartcard reader was attached to the machine. The last comments on that bug, from mid-February, say that there is no point in patching the old plugin because a new one is being written.

The current state of affairs is that Smartlink has taken RIA to court. RIA froze access to its Trac, whereupon Smartlink forked the ID card software. The freezing of the Trac seems odd in itself: if that were the issue, the state of the code on any specific date could easily have been retrieved from SVN for the court. To an outsider it rather looked like an attempt to move the development to a new developer and to a closed development model.

On the purely practical side, I have been using the new software for authentication since the end of last year with Linux & FF, and for that purpose it has worked largely without problems (as did the older software made by ID Labor). Since no bank supports signing with the new software, I couldn't test that, which is depressing in itself, since that is exactly what most Linux users have been waiting years to get a stable solution for.

The Java signing applet currently used by Swedbank for Linux is widely known for its instability: basically it works only on the 32-bit i386 architecture, provided you have Sun's own Java (not e.g. OpenJDK), and even then only under some coincidence of circumstances (along the lines of: you can sign once, then you have to restart the browser).

Once I also tried signing with that applet on a combination of 32-bit Windows XP and FF with Sun's own official JRE (1.6, I think), to see whether the thing works anywhere at all in principle. There the Java VM gave up with a segmentation fault as soon as the applet loaded 🙂

In summary, it seems that with this court mess the arrival of properly working signing support in Swedbank will once again be postponed, since they can hide behind the fact that the new software hasn't been officially released.

In other words, Swedbank could pull itself together enough to offer interested Linux users at least experimental support for the new ID card software instead of that non-working applet.

on random

Once, as part of my master's studies, I took a course called "An Introduction to Cryptography" or something along those lines. In the first lecture the lecturer stated that the whole point of the course would be to prove mathematically that it is theoretically possible to generate pseudorandom numbers that are indistinguishable from real random phenomena. After that there were lectures upon lectures of proofs of the various things that were probably required to prove it. I really didn't understand most of it…

Anyway, back in the real world, programmers rarely seem to care much about randomness; they just seed the generator with time() and think it's good enough. Since time() only gives you second precision, this seed can be brute-forced rather easily if you know the timeframe in which the seeding was done. If you know it down to one hour, you have only 60*60 = 3600 possible seeds to try.
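
To make the brute force concrete, here is a rough sketch in Python (Python's random module uses a different generator than C's rand(), so this shows the principle rather than a literal attack on a C program):

import random

def crack_seed(first_output, window_start, window_len=3600):
    # replay every possible seeding second inside the known one-hour window
    for seed in range(window_start, window_start + window_len):
        random.seed(seed)
        if random.random() == first_output:
            return seed  # generator state recovered: all outputs are now predictable
    return None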

So, to knock some rust off my C skills, I wrote a couple of simple random functions that do a bit better seeding by using microseconds in addition to seconds, plus extra entropy from the operating system's address space layout randomization (if available).

C version:

#include <stdio.h>
#include <stdlib.h>
 
#include <sys/time.h>
 
long int randint() {
    static char seeded = 0;
 
    if (0 == seeded) {
        struct timeval t;
 
        gettimeofday(&t, NULL);
        // using microseconds will widen search space for the attacker compared to using just seconds as returned by time()
        // in addition we hope to get some additional machine specific entropy from the OS address space layout randomization
        // by XORing with the address of the var t which should be on different address each time
        srandom((t.tv_usec*t.tv_sec)^((long)&t)); /* srandom() seeds random(); plain srand() only portably seeds rand() */
        seeded = 1;
    }
 
    return random();
}
 
int main(void) {
     printf("%ld\n", randint());
     printf("%ld\n", randint());
}

And here’s the same thing as a C++ functor.

#include <stdio.h>
#include <stdlib.h>
 
#include <sys/time.h>
 
#include <iostream>
 
class Randomize {
public:
    Randomize() {
        struct timeval t;
 
        gettimeofday(&t, NULL);
        // using microseconds will widen search space for the attacker compared to using just seconds as returned by time()
        // in addition we hope to get some additional machine specific entropy from the OS address space layout randomization
        // by XORing with the address of the var t which should be on different address each time
        srandom((t.tv_usec*t.tv_sec)^((long)&t)); /* srandom() seeds random(); plain srand() only portably seeds rand() */
    }
 
    long int operator() () {
        return random();
    }
};
 
int main(void) {
    Randomize r = Randomize();
    printf("%ld\n", r());
    printf("%ld\n", r());
}

a real programmer can write Fortran programs in any language…

Every couple of years I stumble over a couple of classic stories about the real programmers of old, and they still make me laugh every time.

Here are the links in case anyone hasn’t read them yet:

http://www.pbm.com/~lindahl/mel.html

http://www.pbm.com/~lindahl/real.programmers.html

Besides being funny, a lot of these things are true too. For example, I have had to maintain code that was living proof that you really can write Java in almost any language (Python in my case).

the Windows tax

I haven't used Windows on any of my own computers for almost 10 years now, so paying for it when buying a new computer makes no sense to me. But buying a computer without an OS or with Linux from any international manufacturer has usually been a rather complicated matter: even when those options existed, they were well hidden on the manufacturers' sites and limited to only a couple of models.

So people who never use Windows usually still buy a computer with it and reinstall it right away with Linux/*BSD/whatever else, and a rare few bother trying to get a refund for the unused Windows. Under perfect capitalism you would be able to just vote with your wallet, but a couple of years back there basically weren't any alternatives available.

This has never been much of a problem with desktops, since those are easy to build yourself and there are lots of local manufacturers selling them without an OS. Things have always been a bit more complicated with laptops, because it's hard to build one yourself, and even though there are local manufacturers around, their build quality tends to be inferior to the international brands.

About a year and a half ago, when we were searching for a suitable laptop for my wife, we had to settle for a local manufacturer's product, which was cheap for its specs and came with Ubuntu Linux. Over the course of the year some problems started to bother her, among them:

  • the graphics support always broke with updates, since the machine had the notorious SiS graphics that are (somewhat) usable only with a closed source binary driver that was last updated years ago;
  • the glossy screen was very hard to use in an office setting; it could easily be used as a mirror;
  • the build quality is nothing to write home about: the screen flickers when you move it, the keyboard is quite unresponsive, etc.;
  • it's a bit too large and heavy, which is mainly just a side effect of its 15.6″ widescreen.

So we decided to give that laptop to her mother and buy a new, smaller one for her. I was really surprised to see that there were now lots of laptops from Dell, Toshiba, Asus and Acer available in local shops without an OS or with different Linux distributions preinstalled. Most were ~50 to 127 EUR cheaper than the Windows version, depending on the exact Windows version. Instead of the OS, our most restrictive requirement became an anti-glare screen, since almost all laptops seem to come with a glossy one these days. What we settled on was a Dell Vostro 1320, which came with no OS installed; in practice that meant it came with a FreeDOS CD with full source and a printed GPL license.

(photo: the FreeDOS CD)

The store also had the same model with the same hardware specs available with Windows XP Pro and 3 years of warranty (ours has 5 years), and it was 63 EUR more expensive.

Interestingly enough, you can't really just go to Dell's site and configure this model (or any other model I tried) with anything other than Windows as the OS, but local resellers have all these Linux & no-OS versions readily available anyway. So Eastern Europe might be privileged in that way 😛