Archive for September, 2008

14th Annual IF Comp is underway!

Tuesday, September 30th, 2008

Get the games while they’re fresh!

Just in time for my enforced 2-week hiatus from work…

Cancelled

Tuesday, September 30th, 2008

My project got cancelled today.

I don’t have too much to add to the official memo; I am one of the lucky ones who now has his pick of which project to go to next. Many good people I worked with have been let go; others have been shifted to other projects; I expect some will leave.

As painful as the decision is, I think it is the right one for the studio. Mike Verdu is a man of great integrity and has been personally involved in the project, so this decision weighs as heavily on him as on anyone. Such is the nature of the industry.

I hope that those let go can find work at other studios. I am sure that recruiters all over the LA area are having a busy day.

Optician

Thursday, September 25th, 2008

Yesterday I went to the optician for my yearly checkup. I still have 20/20 vision (both eyes at 0) and intraocular pressure of 15 mmHg in both eyes, which is normal. Since I have a family history of glaucoma, my optometrist always does the pressure test.

I also have a small nevus (mole!) on my left retina which apparently is not unusual in fair-skinned northern Europeans. I got to look at a digital image of the inside of my eyes and see everything. The nevus is small, round, flat, and disappears in green light, all of which indicate that it is totally normal.

I asked when I should take mini-Elbeno for a checkup. She recommended age 3. As far as I know, his eyes are perfect – he never squints or tilts his head, and he can read text at a distance – and of course he has had basic eye tests done by his paediatrician. We recently took him to the dentist for the first time, which is why I was also wondering about taking him to the optician.

If you’re in West LA and need an optician, I highly recommend Dr Betsy Blechman’s practice in downtown Culver City.

Money music

Wednesday, September 24th, 2008

If there’s a silver lining to the current financial crisis, it’s that I prefer the “market down” music on NPR’s Marketplace Morning Report to the “market up” music.

Markets are up: they play We’re in the Money
Markets are down: they play Stormy Weather
Markets are a mixture: they play It Don’t Mean a Thing (If It Ain’t Got That Swing)

How many programming languages do you know?

Tuesday, September 23rd, 2008

I had a lecturer at university who claimed to have used something in the region of 15 languages. I was wondering recently how many languages I know and how to count them, given that many of the languages out there are similar to one another.

Do C and C++ count as different? I think most people would say yes. They are sufficiently different that fluency and idiom in either one does not translate to fluency and idiom in the other.

Do Common Lisp and Scheme count as different? Again, I’d say yes, but for different reasons. I think it’s likely that people who know either one would be fluent in the other; they just wouldn’t like it so much. And they would definitely consider them different (but probably for quite technical reasons, e.g. a Lisp-2 vs a Lisp-1).

What about languages that differ only slightly in syntax, e.g. do F# and ML count as different? Well, I’m not fluent in either, so I’m a poor judge, but here it seems a little harder to make the case for separation. At least, if we discount matters of libraries and just consider the core languages.

Do HTML and SQL count as languages? I’m going to say yes even though they’re not Turing complete.

Is it sensible to make a distinction between different assembly languages? I’m inclined to say yes in some cases, no in others. Obviously Intel and ARM are very different. On the other hand, if you can read one RISC assembly language, you can probably get by reading most. And when it comes to assembly, reading is usually a more important skill than writing.

Anyway, to me it probably makes sense to rank language fluency on a 1-6 scale:

  1. Haven’t heard of the language or know only its name
  2. Have heard of the language; know which family it belongs to and what it’s suited to
  3. Have installed the language and/or tools; written an example program or altered an existing one
  4. Have written original programs to solve small but non-trivial problems
  5. Have written medium-sized projects
  6. Have worked on large scale projects; have used it professionally and/or in collaboration with others

So in theory one could take this scale, figure out one’s score for every PL out there, and sum to get an overall score. But given the sheer number of languages, I think it makes sense to consider just one’s top ten. Unlike my lecturer, I haven’t used ten languages to write large scale projects, so I won’t be scoring a 60. On the other hand, scoring 1 for any language in the top ten would be quite a poor show.

I’d imagine a typical score for a working programmer’s top ten would be close to 45: a couple of 6s, a couple of 5s, several 4s, and the remainder 3s. For me it would be: 6 (C++), 6 (C), 5 (Common Lisp), 5 (BASIC), 4 (Haskell), 4 (PHP), 4 (Javascript), 4 (SQL), and take your pick of 3s (Perl, Python, Bourne shell, C#, Ruby, Lua, …).
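
If you want to make the arithmetic concrete, here’s a throwaway sketch (the scores are the ones I listed above, with two 3s standing in for whichever pair you pick):

#include <iostream>

int main()
{
    // My top-ten fluency scores: C++, C, Common Lisp, BASIC, Haskell,
    // PHP, Javascript, SQL, plus two picks from the pile of 3s.
    const int scores[] = { 6, 6, 5, 5, 4, 4, 4, 4, 3, 3 };
    int total = 0;
    for (unsigned i = 0; i < sizeof(scores) / sizeof(scores[0]); ++i)
        total += scores[i];
    std::cout << total << std::endl; // prints 44
    return 0;
}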

In conclusion: go learn a new language! It’s fun!

Dreamhost backups: the appropriate incantation

Sunday, September 21st, 2008

Recently, and without much fanfare, Dreamhost introduced backup users. Each account gets a single backup user, whose function is to provide a remote backup. Dreamhost’s TOS states that regular user content must be served on the web: it is not intended to be a backup service. However, clearly the need for remote backups exists, and so they now allow 50GB (plus more at the rate of $0.10 per GB per month) of backup space for each account, accessed through the backup user.

So I’m backing up my email remotely. Just the job for a shell script. First things first: what do I need to back up? Well, I use Evolution for email, so first I need to stop things from changing while I back up, and make sure I’m starting from the right place:

gconftool-2 --shutdown
evolution --force-shutdown
cd

Next, I need to back up three directories, and of course encrypt them. (The recipient here is changed for security purposes.)

tar -czf - .gconf/apps/evolution .gnome2_private/Evolution .evolution \
    | gpg -e -r me@example.com -o mail.tar.gz.gpg

Now of course I need to upload the backup to the Dreamhost backup server via SFTP (once again, the username here is changed for security purposes):

sftp -b /dev/stdin mybackupusername@example.com <<EOF
put mail.tar.gz.gpg
bye
EOF

And finally remove the intermediate file:

rm mail.tar.gz.gpg

Since I have previously uploaded my authorized ssh key to avoid having to type a password, this process is now fully automated. The only fly in the ointment is that Dreamhost backup users only support FTP or SFTP; ideally they’d have support for rsync.

Book sale time again

Saturday, September 13th, 2008

And this time it seems like it’s been quite a while since the last one. Shelf space in our house is still at a premium, of course. We picked up lots of things for mini-Elbeno, including some really nice non-fiction stuff (a science encyclopedia and a book on weather).

Mrs. Elbeno got some interesting novels that she liked, and I got copies of Genius, Programming in Prolog, Little Brother and another copy of Stroustrup. But I resisted buying a third copy of the Dragon Book.

(William Clocksin, co-author of Programming in Prolog, was my AI/Prolog lecturer at university.)

(Part of) what I do all day

Friday, September 12th, 2008

Lately I’ve been asked what I do at work. What I really do. Well, if Great Biggary can frequently expound on his latest Maya scripts and Python excursions, here goes with my explanations…

Most of what I do all day is read code. Just as an author or poet will read many, many more books and poems than they write, so programmers read much more code than they write. At least, the ones who want to be any good do.

So I read and read and read, and hopefully understand, before I finally write a little bit which pokes the program in just that way and achieves the right result. Changing or writing as little code as possible means introducing as few bugs as possible.

I’m working with a few million lines of code, most of it C++. Today was a day when I didn’t quite get it to do what I wanted, but had to settle for second best. I’m writing some code to analyse how much data we load off disk every frame. (Note for my non-technical readers: a frame – in this case – is 1/30 of a second or 33ms. Quite a long time to an Xbox 360, but not long to you and me. It’s like… now to now. Nowtonow. N-now. You get the idea.)

Where was I? Oh yes. While the game is running, we’re often loading things from the disc, which is why I’m writing this code. During development we run off hard disk, which is really fast; but when we ship the final game, it has to run off DVD, which is really slow by comparison. That’s why it’s important for us to know exactly how much data we’re asking the drive to read over time. Otherwise bad things happen, like the player coming around a corner to discover that the ground ahead has temporarily disappeared. You might have seen something like this in video games from time to time, especially when discs get dirty.

There are basically a few categories of things we load (or “stream”) while the game is playing:

  • Sound effects and music
  • Textures (i.e. graphics; to explain further is outside the scope of this post, as they say)
  • Videos (when they are playing)
  • Extra parts of levels (“the world”, which are triggered to load at certain points as the player moves along)

So, imagine I’ve written a spiffy little display which shows how much data we loaded in each category over the last 2 seconds (or 60 frames, if you were paying attention earlier). All I need to do is actually feed the data into it, which means intercepting the part of the code where we actually read from the disc, and keeping a running total of the number of bytes we read, in each category.
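
To give a flavour of what that bookkeeping looks like, here’s a much-simplified sketch (all the names are invented for illustration; the real code is of course more involved): a ring buffer with one slot per frame, covering the last 60 frames, holding a byte count per category in each slot.

#include <cstddef>

// The categories of streamed data (from the list above).
enum StreamCategory { CAT_SOUND, CAT_TEXTURE, CAT_VIDEO, CAT_LEVEL, CAT_COUNT };

const int FRAME_WINDOW = 60; // 2 seconds at 30 frames per second

class StreamTracker
{
public:
    StreamTracker() : m_frame(0)
    {
        for (int f = 0; f < FRAME_WINDOW; ++f)
            for (int c = 0; c < CAT_COUNT; ++c)
                m_bytes[f][c] = 0;
    }

    // Called from the disc-reading code whenever a read completes.
    void AddBytes(StreamCategory cat, std::size_t bytes)
    {
        m_bytes[m_frame][cat] += bytes;
    }

    // Called once per frame: advance the ring and clear the new slot.
    void NextFrame()
    {
        m_frame = (m_frame + 1) % FRAME_WINDOW;
        for (int c = 0; c < CAT_COUNT; ++c)
            m_bytes[m_frame][c] = 0;
    }

    // What the display reads: total bytes for a category over the window.
    std::size_t TotalBytes(StreamCategory cat) const
    {
        std::size_t total = 0;
        for (int f = 0; f < FRAME_WINDOW; ++f)
            total += m_bytes[f][cat];
        return total;
    }

private:
    std::size_t m_bytes[FRAME_WINDOW][CAT_COUNT];
    int m_frame;
};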

This was quite easy to do for textures and levels, which go through one mechanism for loading. No problem there. But it was a little more tricky to do for the audio and video data. Especially the video. Here I need to get a bit technical to explain why.

To hook up the texture and level data to the right category in the display was fairly easy – at one point in time when a disc load is scheduled, I save a little piece of context along with the load request so that later, when the disc load happens, the saved context tells me what category to add to. The code that knows which category to add to is just one step away from the code that actually does the loading, so this is easy to keep track of.
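
In sketch form (continuing with the invented names from above), the context is just an extra field that rides along with the load request, so that when the read completes we know which bucket to add to:

struct LoadRequest
{
    const char*    filename;
    std::size_t    offset;
    std::size_t    size;
    StreamCategory category; // the saved piece of context
};

// Called when the scheduled disc load actually happens.
void OnLoadComplete(const LoadRequest& req, std::size_t bytesRead,
                    StreamTracker& tracker)
{
    tracker.AddBytes(req.category, bytesRead);
}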

But it’s a large codebase, and the code that reads data off the disc for the audio and video data is different from the code that reads data for the other categories. In fact, it turns out that the video code especially is much more complex. In this case, the code that actually does the loading is many steps away from the code that knows the context of the data. This is because the video and audio code is pretty aggressively multi-threaded.

Explanatory aside for the non-techies: a thread is a single set of instructions the computer is executing, one after another. In the real olden days, one thread was all you got. Then some bright spark (no, really) noticed that a lot of the time the processor was waiting for other stuff to happen (like reading from a disk, say) and the concept of multi-threading was born. If one lot of instructions (e.g. one program) was waiting for something to happen and unable to proceed, well, we could let the computer work at doing something else for a while until it could resume the first task. Heck, we could even interrupt the computer after it had been doing one thing for a while and let it do another thing for a while, and if we did this fast enough, it would look like the computer was doing two things at once!

This concept is now the basis of all modern operating systems, even those written by Microsoft. This is why you can have your email open and browse the web at the same time. And nowadays we don’t just have the illusion of computers (and consoles) doing multiple things at once; we are starting to get machines with more than one processor in them, so that they really can do multiple things in parallel.

So back to the video problem. It’s this: the code that counts the bytes that are loaded is in one thread. The video player control is in another. The code handling the audio is in another. The code that actually reads the data off disc is in yet another. And (this is the real kicker) the code that decodes the video and actually triggers more data to be read off disc when it’s ready for more? Well, that code runs on several threads, which get shuffled around while the program is running according to which processors have spare time.

So getting a 100% accurate attribution of bytes read from disc to the audio and video categories would require passing that bit of context through many handoff points, writing a lot of code, and recompiling many libraries; it would therefore also mean maintaining those changes to libraries which are normally handled by other people in other parts of the company. (A library – being a chunk of code that is self-contained and that you can generally use without detailed knowledge of its innards – is primarily a mechanism for reducing complexity. Audio and video code is typical of the sort of thing that gets put into libraries. So changing the insides of a library, while technically just as possible as changing any other code, is best avoided.)

So my solution? I put in a piece of code to flag when a video is playing, and a piece of code to intercept the low-level disc reads that service the audio and video code: a piece of code at the very top and the very bottom layer of the cake, as it were. When a video is not playing, I add up all the bytes and simply put them in the audio category. When a video is playing, I put them all in the video category (even though at that point they include some audio data). The result is not perfect, since the video and audio bytes can’t be separated, but since videos play rarely, it works well enough for my purposes.
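
In sketch form, the workaround is just a flag at the top layer and a hook at the bottom (again, the names are invented, and a real version would need proper thread synchronization rather than a bare global):

bool g_videoPlaying = false;

// Top layer of the cake: the video player control flips the flag.
void OnVideoStart() { g_videoPlaying = true; }
void OnVideoStop()  { g_videoPlaying = false; }

// Bottom layer: the low-level read that services the audio/video code.
void OnAVBytesRead(std::size_t bytes, StreamTracker& tracker)
{
    // While a video plays, attribute everything to the video category;
    // it includes some audio data, but videos play rarely.
    tracker.AddBytes(g_videoPlaying ? CAT_VIDEO : CAT_SOUND, bytes);
}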

Sometimes, even in programming, one needs to abandon the perfectly correct in favour of the good-enough, pragmatic solution.

Tales of a full hard drive

Monday, September 8th, 2008

In between watching the US Open finals this weekend, I had to fix up my PC. My hard drive (300GB) has been filling up with stuff and finally got full. My partitions looked like this:

/dev/sda1 (NTFS): 40GB at 80%
/dev/sda2 (ext2, /boot): 40MB at 50%
/dev/sda3 (swap): 2GB
/dev/sda4 (extended)
/dev/sda5 (ext3, /): 40GB at 100%
/dev/sda6 (FAT32): remaining space at 90%

The root drive being full had various bad effects on my ability to log in and do things under X. Also, the boot drive is quite small and has in the past got too full during large upgrades. So I wanted to fix both problems.

So I cracked my 500GB backup drive out of its enclosure and put it in the machine. My first thought was to move /home to its own partition. So I reformatted the new drive as ext3, put in the necessary /etc/fstab line to mount it in the right place, renamed the old home directory, and copied everything over.

Things started to go wrong immediately. Gnome wouldn’t load properly, and I got a stern message saying “HARDWARE ERROR”. Ulp. I decided to ignore that and moved on to plan B.

Plan B was to just mount the new drive as another volume and keep my home directory as is. So I reverted my changes there and instead copied a bunch of stuff from the FAT32 volume to the new volume. I didn’t really need to access that stuff from Windows (which I keep around just for games and for VPNing in to work).

Plan B seemed to work better. To eliminate any sources of error from Frankenstein configurations, I decided to start with a clean home directory and restore some things from my backup. Having cleared some space on the FAT32 volume, I then just had to resize the partitions, so I downloaded a GParted live CD and got to work.

I shrank the NTFS partition by 40MB and resized /boot into the gap, doubling the size of /boot; this should help with future kernel updates. Then I shrank the FAT32 partition by 50GB. That took a long while, and I went to bed while it ran; I guess GParted had a lot of work to do. Anyway, I got up this morning and it was done, so I expanded the root partition into the leftover space, which didn’t take nearly as long.

And now I’m done, with an extra 50GB of space on the root partition. I still think the long-term fix (and the desirable one) is plan A: migrating /home. But I’ll leave that for now. After that hardware error message, I’m going to wait and see how the new drive pans out before I commit more important data to it.