Thursday, June 19, 2008, 04:53 PM - log
levelHead received an Honorary Mention in the Interactive Arts category at Ars Electronica this year.
Apparently it will also be on show at the Ars Electronica Centre in September.
Thank you, jury!
Friday, May 23, 2008, 06:00 PM - live

I recently gave an interview for TAGMAG 6 as part of their feature on Augmented Reality. It's quite an interesting issue, surveying AR from a cultural, philosophical and artistic perspective.
Get it here
If you're in the Den Haag region, come to TAG and play the best version of levelHead yet, alongside some great work by other artists like Theo Watson and Jan Torpus.
Monday, April 21, 2008, 05:44 PM - live

As promised, here's a gallery of images of levelHead in action on day 2 of Homo Ludens Ludens. As you can see they were taken by a far better photographer, utilizing a special feature of the camera known as 'autofocus'..
Sunday, April 20, 2008, 12:13 AM - live

Last night at the opening was the first time levelHead has been seen in the wild. As such it's been extremely revealing watching people play it, something I've done for a few hours today.
The response has been very enthusiastic and almost all people seem to 'get' the interface pretty much immediately (with the exception of one woman using the camera to explore her nostrils on the projection at a rather inopportune moment).
That aside, I'm surprised at how widely people vary in their capacity to record and recall information about the room they were last in. Of the 50 or 60 people I watched play levelHead, I twice saw people demonstrate alien-savant powers in this regard, completing the first cube in under 2 minutes. Almost everyone I watched took their capacity to navigate effectively quite personally, even at times stopping to make mental notes before moving to a connecting room.
One thing I'm greatly enjoying about this piece is the constant presence of hands, made gigantic, carefully holding the cube complete with a little world inside.
Aside from changing all the in-game dialogues to Spanish, I'm clear on the few tweaks I'll make for SonarMatica at Sonar08 in June. One thing is certain, the cubes will need to be an extremely durable plastic.
I've uploaded a little gallery of people playing on day 1 of Homo Ludens Ludens, one that expresses most of all just how little I understand our new Ricoh GR Digital camera (or perhaps photography in general). I'll make another one of people playing tomorrow on return home.
Monday, April 14, 2008, 12:56 PM - dev
It's been a good couple of weeks working on levelHead, in preparation for the Homo Ludens Ludens (aka "Man, the player") exhibition at LABoral, Gijon, Asturias, Spain.
The controls are far more robust and a great many bugs have been slain (in a caring and respectful way). There are now 3 playable levels and a bunch of user notifications and other goodies that aid navigation.
At the 11th hour pix came on board to migrate the tracker from ARToolkit to ARToolkitPlus, which has worked splendidly: tracker stability is far better than it was with my previous ARToolkit implementation.
While working together he chose to go on a bug hunt, chasing in particular a graphical glitch where two rooms were being drawn at the same time. I'd written the first version with the intention of drawing just one room at a time (one marker tracked, for simplicity), but with the aid of a stencil buffer he managed to make use of the likely occurrence that two or even three rooms can be seen at once:

Development hasn't all been in code, I also have some lovely new cubes:

So at the end of a fairly fierce two weeks of programming, levelHead is ready to be unleashed on the Asturians, where it will be installed for 5 months. For those that can't make it to Gijon, levelHead will next be exhibited at Sonar, Barcelona this year.
More about that later.
Friday, March 14, 2008, 06:00 PM - log
Here's a video of my Inclusiva-Net conference lecture, Cartofictions: Maps, The Imaginary and GeoSocial Engineering. It's around an hour long. Note that it has one or two misplaced slides at around 34 minutes. That aside, the editor did quite a good job.
Abstract:
From the earliest world maps to Google Earth, cartography has been a vital interface to the world. It guides our perceptions of what the world is and steers our actions in it. As our knowledge about the world has changed, so have maps changed with it (or so we like to think).
In this lecture Julian shows a darker side of map-making, covering various reality-distorting effects innate to the graphic language of cartography and how they can be easily exploited for gain.. In doing so Julian positions cartography as an abstract and influential creative practice, rich with the power to engineer political views, religious ideas and even the material world itself.
Enjoy!
Be sure to check out some of the excellent projects that came out of Inclusiva-Net this year - super stuff people, it was a pleasure.
Big thanks to the Medialab-prado team for making it all happen.
Thursday, February 28, 2008, 08:29 PM - log
.. that's the name of my latest paper, prepared for the Homo Ludens Ludens conference at Laboral, Gijon, Spain in mid April. It'll be published in the symposium book alongside the work of this esteemed bunch.
Download it here. You're free to reproduce and distribute it under the terms of the Creative Commons Attribution 2.0 License.
Out of interest I'd prefer to use a license like the GNU Free Documentation License for my papers but I can't find anything that comes close while remaining suitable to theory.
If you have any ideas I'd be glad to have an email from you.
Wednesday, February 6, 2008, 10:54 PM - dev
More Artvertising..

The two videos below show basic live image substitution of a postcard, as seen by my webcam.
This clip demonstrates playing a movie 'on' the postcard and this video demonstrates cycling through a variety of images while attempting to emulate the local lighting conditions.
It's still not as stable as I'd like but nonetheless it's getting there.
The idea, of course, isn't to substitute images on arbitrary postcards but on big billboards, bus-stops and sign advertising in cities. I do have a clip of a substitution of a road-side sign but it's a bit rubbish due to it being quite dark at the time.
As opposed to (most) other augmented reality techniques - which use specially designed black-and-white fiducial markers - here the image itself is the marker.. This is much more processor intensive than normal marker tracking.
Naturally I'd love to see this working on a mobile phone but having played with a Nokia N95 recently - perhaps the best-specc'd phone for this sort of work - it's clear that fast image detection is well beyond the scope of current phone hardware; at least at more than a few frames a second. That's not to say standard augmentation using fiducial markers doesn't work fine on such a phone (like those used with ARToolkitPlus)..
Nonetheless, a UMPC built into a pair of binoculars is probably a bit more fun out in the field anyway.
Wednesday, January 23, 2008, 03:57 PM - log

This is a project I've been dreaming up for a while. Only recently, however, have developments in both computer vision and mobile hardware platforms made it possible to produce.
Here's the blurb:
The Artvertiser is a computer vision project exploring live, locational substitution of advertising content for the purposes of exhibiting digital artwork.
The Artvertiser takes Puerta del Sol Madrid, Times Square New York, Shibuya Tokyo and other sites dense with advertisements as exhibition space. The Artvertiser is an instrument of conversion and reclamation, taking imagery seen by millions and re-purposing it as a surface for presentation of art.
By 'training' a computer to recognize billboard advertisements, logos and other images of commerce, that content can then be 'replaced' with alternative material when seen through a specially engineered digital video device. If an internet connection is present at the site, it can be documented and published in online galleries such as Flickr and YouTube.
So far the software component is coming along well. It is already possible to perform live substitution of billboards with images, 3D models or movies when seen through a sufficiently good camera. To get this far I've written a C++ application on top of the excellent image tracking library Bazar that supports substituting the detected image with an OpenGL surface upon which I can draw video (live or from file) or static imagery.
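In essence, once the tracker has located the target in the camera frame, the substitution amounts to drawing a textured quad over the detected region. Much simplified, and with an illustrative corner-point interface rather than the real one, it looks something like this:

#include <GL/gl.h>

/* Much-simplified sketch: given the four corners of the detected image
   in camera-image coordinates (as handed back by the tracker), draw a
   textured quad over them in an orthographic projection matching the
   video frame. Names and the corner layout are illustrative only. */
void draw_artvert(GLuint texture, const float corners[4][2])
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 640, 480, 0, -1, 1);      /* assumes a 640x480 camera */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(corners[0][0], corners[0][1]);
    glTexCoord2f(1, 0); glVertex2f(corners[1][0], corners[1][1]);
    glTexCoord2f(1, 1); glVertex2f(corners[2][0], corners[2][1]);
    glTexCoord2f(0, 1); glVertex2f(corners[3][0], corners[3][1]);
    glEnd();
}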
Working with Clara Boj and Diego Diaz - also competent practitioners in Augmented Reality - I hope we can add a network component such that when an 'artvert' is seen in the wild it can be published to Flickr and/or filmed and uploaded to YouTube and similar video hosting services.
Soon I hope to upload videos of early trials of the system out in the wild.
Tuesday, January 1, 2008, 06:41 PM - log
Name: Contemporary Art of Science and Technology
ISBN: 978-7-03-020415-8
Press name: Science Press
Language: Chinese
660 pages (62 pages in color)
We're on pages 319 and 320 next to a couple of great works. Here's a scan the editor was kind enough to send us:

.. and here's a scan of the cover:

Sunday, December 16, 2007, 03:31 PM - code
I recently spent some time looking around the hinternets for a simple method to stream live video, captured using OpenCV, from a webcam or firewire camera, to textures on one or more OpenGL polygons, windowed with something light like GLUT. Having found nothing that achieves this, and seeing that lots of people were trying, I wrote a program in C that does.
Why OpenCV? OpenCV offers advanced texture processing and analysis: being able to find natural features in images on OpenGL surfaces opens up many interesting possibilities.
The trick was just to pass the captured IplImage data, correctly scaled (to a power-of-two size), to glTexSubImage2D every frame; the texture needs to be correctly formatted and bound beforehand.
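The per-frame update boils down to something like this - a minimal sketch rather than the released code, assuming a 512x512 texture was created earlier with glTexImage2D and that 'scaled' is a 512x512, 3-channel IplImage allocated once with cvCreateImage:

#include <GL/gl.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

/* Minimal sketch of the per-frame texture update, not the released code. */
void update_texture(CvCapture *capture, IplImage *scaled, GLuint texture)
{
    IplImage *frame = cvQueryFrame(capture);    /* BGR, 8 bits per channel */
    if (!frame)
        return;
    cvResize(frame, scaled, CV_INTER_LINEAR);   /* scale to a power of 2 */
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    scaled->width, scaled->height,
                    GL_BGR, GL_UNSIGNED_BYTE, scaled->imageData);
}

From there it's just a matter of drawing a textured quad in the GLUT display callback.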
Get the source code here, licensed under the GPLv3. It will compile on a Linux system. OpenCV, FreeGlut and OpenGL are needed. You'll need hardware accelerated 3D too..
Enjoy!
Sunday, December 9, 2007, 12:49 PM - log

The q3apd project has been properly archived, with the inclusion of the LoveBytes06 Festival video documentation and galleries, here.
Friday, December 7, 2007, 02:37 PM - log
Hyperform Net Gallery has been kind enough to make me Artist of the Month for December 07, focussing on levelHead. Big thanks to all those involved at Hyperform.
Friday, December 7, 2007, 10:35 AM - log
Jean Poole was commissioned by Arnolfini to write on one or two aspects of my work over the years. Here's the text.
Thank you, Jean!
Saturday, November 10, 2007, 04:55 PM - log
Here's a manual I wrote introducing the basics of modeling, texturing and rendering using the excellent open-source software Blender for the FLOSSManuals project.
Later on I'll post a section on the Realtime Game Engine part of Blender toward the ends of rapidly prototyping game/3D interface ideas.
If you're interested in translating this manual into languages other than Dutch (Walter Langelaar is working on that) please get in touch!
Monday, October 15, 2007, 06:25 PM - dev

I've just finished the first beta (really an alpha) of my little AR/tangible-interface game levelHead. Admittedly there's not much up on the project page yet, but here's a YouTube video that conveys the general idea pretty well. It still has glitches but I'll iron those out soon enough.
At some point I also want to look into the idea of using invisible markers (I have a few promising possibilities there) or full-colour picture markers (also possible, though requiring much more CPU brawn).

Here's a better quality video in the OGG/Theora format (plays in VLC).
Enjoy.
Thursday, August 2, 2007, 06:09 PM - dev
Here are packages of Packet Garden for Ubuntu 7.04.
To install just download, double-click and go. You might want to install dpkt and pycap first (also found at the above link).
Tuesday, July 31, 2007, 11:37 AM - code
Dilemma: Hotel in foreign country and must wake up very early. Phone critically low on battery, charger missing, hotelier appears to be asleep and no alarm clock in sight. Very tired, reasonably inebriated.
Fix: Write a script that emulates the sound of my phone's alarm before passing out:
# simple alarm script.
# requires the program 'beep'
# turn up your PC speaker volume and use as follows:
# 'python alarm.py HH:MM'
import time
import sys
import os

wakeTime = sys.argv[1].split(':')
while 1:
    time.sleep(1)
    if time.localtime()[3] >= int(wakeTime[0]):
        if time.localtime()[4] >= int(wakeTime[1]):
            os.popen('beep -l 40 -f 2000 -n -l 40 -f 10000 -n -l 40 -f 2000')
Monday, June 18, 2007, 03:56 PM - dev
Aside from moving country I've just finished developing a project at Interactivos at the excellent Media Lab Madrid. I tried to spend as much time as possible there but alas had chores like setting up a new apartment. Nonetheless I had a lot of fun.
Simone Jones was one of the instructors - someone who has a great deal of experience with electronics, especially in the context of motorised cameras. Because my previously offered project proved to be unfeasible in the time frame and Simone wanted to work on something, we decided to team up.
We threw around several ideas, mostly to do with 'editing' the existing architecture of the exhibition space by adding an extra room seen only through a CCTV-like display - a kind of a haunting. However, as the lighting conditions of the space were changing so frequently in the days leading up to the group-show, we couldn't pull this off. For this reason we decided to work small - really small.
The idea was simple: augment a solid cube with 6 little rooms such that the cube becomes a tangible interface for navigating through an architecture: a mind-game - "How are the rooms connected?"
I added some code to ARToolkit so that it could support occlusion - i.e. hiding virtual objects 'behind', or 'inside', real objects - and used a simple mask object to aid the process.
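In rough terms it's the standard trick: draw an invisible stand-in for the real cube into the depth buffer only, so that virtual geometry 'behind' or 'inside' it gets culled. A generic sketch, with purely illustrative function names rather than the actual code:

/* Generic depth-only occlusion sketch. The mask cube is positioned
   with the marker's transform, exactly where the real cube sits. */
void draw_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_camera_image();                        /* video background */

    glEnable(GL_DEPTH_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_mask_cube();                           /* writes depth only */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    draw_virtual_room();                        /* fragments hidden by the
                                                   mask fail the depth test */
}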
Here's a little clip in the OGG Theora format (plays in VLC) that perhaps better explains it all.
Simone and I are already talking about a large version of this for a later show. In the meantime I'm adapting it into a small game where you must help a character to escape the block by leading it from room to room: by turning the cube you select the next room the character will enter. Several cubes can be used so that when a character is finally led to the exit door of one cube it will jump to the entrance of another cube (or 'level') placed nearby. I plan to make this puzzle game around 5 cubes long. More about that later..
The exhibition uses a Sony EyeToy on an Ubuntu Linux system. Worth mentioning is that I used the super Rastageeks OV51x-JPEG drivers: a 640x480 webcam on Linux for less than EUR40? Look no further!
Addendum 19-06-07 For a long list of reasons I have never found character animation a very satisfying task - probably due to me being quite horrible at it. For this reason I'm very open to collaborating with a good character animator on this project. The data needs to come from Blender via the osgCal3D exporter (shipped with recent versions of Blender).. Get in touch with me by interpreting this image.
Sunday, May 20, 2007, 06:52 PM - ideas
In the last few years quantum physicists and mathematicians have told us there may well be grounds for the old "many worlds" theory - that there might be several different versions of any given dimension, or groups of dimensions, at the same time. Hugh Everett III is perhaps the best-known proponent of this theory.
Perhaps a many worlds gaming system would involve several players with the task of governing one simulated world each. Each world starts out with an equal number of objects and agents all of which begin as perfect temporal copies of the next. Gameplay might involve triggering/steering chains of events to the ends of creating the least synchronous world - ie. sequences of highly unlikely events. The world with the least eventful similarity within a given period of time will create a branch, and that player wins. At the point of a branch, identical copies are made and the game begins again, continuing from the point of that new branch.
Perhaps the notion of 'entanglement' could also be used as a strategic means of playing great similarity to an advantage: by successfully mirroring an event in another player's world entanglement could be triggered, giving the antagonist brief remote control over events therein.
While it could easily take on the form of a 2d game or orthographic sim-like title (like Habbo Hotel) the real work would be in creating a procedural event modeling system with an internal sense of consequence and wide potential for very absurd outcomes. Scenarios for an opening game need not be large at all - ordering a falafel or getting a haircut could give plenty of material to begin with.
12-05-07 Updated for clarity.
Saturday, May 5, 2007, 02:30 PM - ideas
Any time I had leading up to Gameworld was spent working on 2ndPS2 (read Second Person Shooter for 2 players). I'd been meaning to make this little mod for years and decided that Gameworld was as good an opportunity as any to put the idea to the test.

Unlike the previous incarnation - a simple prototype written in Blender that far too many people got excited about - your views are switched with another player, not a bot. You are looking through their view and they through yours. When they press the key for forward on their computer, the view you're looking through responds accordingly, and vice versa. As it's all networked it's possible to play over the internet just as you would a normal multiplayer Quake3 game.
Naturally this makes it very tricky to actually play the thing, as you can only navigate yourself with effect when you can see yourself: i.e. when you are within view of your opponent's gaze. In the few tests I did of 2ndPS2 before putting it on show, people with no experience playing first-person-shooter games struggled greatly with this reversal of the control paradigm, and so at the advice of Marta I built a sort of visual radar system so you could see where the other player was and vice versa.

This worked really well in reducing the confusion people would've had otherwise: in an exhibition context on the scale of Laboral people have very short attention spans, and so a bang-for-buck approach like this was perhaps necessary. In practice it actually stood up reasonably well to these ends.
Conceptually however using this radar-helper is a bit of a compromise: why switch the views at all if you're providing a means for people to avoid engaging with a primary dislocation of perspective as an active part of the interface?
With this in mind I've decided to replace the visual radar with a sound-based system. You can hear where you are in the scene in relation to the view of your opponent - the view you're looking through. Events like walking into walls and picking up items are distinct sound events. The orientation of yourself out there in the scene is represented as changes to the pitch and harmonics of a continuous signal.
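I haven't settled on the exact mapping yet; purely as an illustration (not the mod's actual code), even something as simple as sweeping the tone an octave across the relative bearing would do for a first test:

#include <math.h>

/* Purely illustrative: map the relative bearing to the opponent's view
   (-180..180 degrees) onto the pitch of a continuous tone, sweeping one
   octave either side of a 440Hz centre. */
float bearing_to_pitch(float bearing_deg)
{
    return 440.0f * powf(2.0f, bearing_deg / 180.0f);
}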
While I had much of this auditory feedback system already implemented, I didn't use it at Laboral as it was far from ready for use.
I used ioquake3 to make 2ndPS2, spending a fair bit of time coming up with new rendering effects, sprites, weapons and other bits and bobs, simply because I can't help myself when I have the source code in front of me (ahh, the Garden Paths). Admittedly I could've simply taken a stock Quake3 map and considered this strictly as a conceptual piece, but when I started this I had the distinct feeling that I was beginning something much larger. Perhaps this is still the case.

Where to from here? Perhaps a mod that allows many people to play simultaneously; a hoppable second-person view matrix allowing you to change to any view other than your own. There would be a strategic component where views themselves are resources that need to be managed toward the ends of finding yourself in the arena long enough to gain control, bumping others over to a new view as required. Weapons could include a POV-grenade that shuffles all the current views of players within impact range, and FOV-weapons (I've already made a couple) that suddenly throw the target into orthogonal views or warp the current camera as though the world were a rippling surface.
This sort of stuff I wanted to save for another project entirely - a strategic multiplayer game whereby you must find your first-person view in a large, architecturally distributed view matrix - but Eddo suggested it would probably make a pretty nice addition to 2ndPS2.

Perhaps I will do this.. I'm always open to other suggestions and even collaborations.
Sunday, April 8, 2007, 03:56 AM - live
Here are a few galleries, broken up into categories based on when the images were taken during the cycle of action. I think what's in here is a little more interesting than what's seen in the earlier video as it also gives coverage of some live palette manipulation.
beginnings
fields
instants
endings
Monday, March 19, 2007, 11:23 PM - dev
The piece I made while serving as Artist in Residence at Georgia Tech finally took shape as 'ioq3aPaint'; an automatic painting mechanism using QuakeIII in which software agents in perpetual combat drag texture data as they fight, rendering attack vectors as graphic gesture. Here's a short clip (64M, 4'50", Ogg Theora) of one of the many iterations. It will play in VLC.

The exhibition was brief but the opening night and talk brought many thoughtful questions. Game designer and theorist Michael Nitsche was responsible for a lot of great commentary, some of which he wrote about here.
ioq3aPaint develops upon q3aPaint quite heavily, introducing a fresh palette and providing audiences with the ability to cycle through palettes in real time.
Not far off is the ability to send screenshots straight to a printer; the idea being that on the opening night of a future exhibition the audience could take screenshots while the abstractions evolve, which are in turn sent off to a large-format canvas printer. The show itself would continue the following day as a normal painting exhibition.
If you're interested in playing around with QuakeIII as a painting tool you can get fairly far working only in the console. Play with r_fov, r_drawWorld and r_showTris, especially once you've entered 'team s' and there are a few bots in the scene. From there, start manipulating GL functions in code/render/ and drive them by adding new keybinds to code/client/cl_input.c.

A big thanks to all those in the LCC department for making it happen - an extra special thanks to Celia Pearce for setting up the initiative in the first place. Celia is one of the few people really pushing experimental game development practices in both institutional and industry contexts, and has been doing so for some years. Cheers to that.
Saturday, March 17, 2007, 02:05 AM - live
While teaching at Georgia Tech I've been in the company of some big screens, so I took the opportunity to film a long overdue clip of Fijuu2 in use.
We'd hoped for an inset of the gamepad but I didn't have access to two cameras at the time. Regardless, this clip should explain what it's all about.
Get it here. It's in the Ogg Theora format. If you don't know what that is, just use VLC.
Saturday, March 3, 2007, 09:30 PM - code
While here at Georgia Tech I'm giving a class on the development of 'expressive games', and for the purpose I chose Nintendo Wiimotes as the control context for class designs. The final projects will be produced in Blender, using the Blender game engine.
Only having Windows machines at my disposal, I wrote a basic Python script that exposes accelerometer, tilt and button events from GlovePIE over Open Sound Control (which is natively supported by that application) to the realtime engine of Blender. I decided to go this way rather than create a bluetooth interface inside Blender for a few reasons: GlovePIE is a great environment for building useful control models from raw input, it supports the network-capable protocol OSC and I wanted to keep input-server-like code out of Blender (for reasons you'd understand if you used Python in Blender).
GlovePIE, however, is more than I need, and it's no use to me on Linux - nor do I have a Windows machine near me most of the time. I looked into various options for getting control data from a few different 'drivers' out over OSC and into Blender. Preferring to work in Python, I tried WMD but found it too awkward to develop with, although it is nothing short of comprehensive. I finally settled on the very neatly written (Linux only) libwiimote and wrote a simple little application in C to provide what I need. Here it is, wiiOSC.
To run it on your system you'll need libwiimote, Steve Harris's lightweight OSC implementation liblo, a bluetooth dongle (of course) and bluetooth support in your kernel (most modern distros support popular bt dongles out-of-the-box). wiiOSC will send everything libwiimote supports (IR, accelerometer, tilt, button events etc) to any computer you specify, whether 127.0.0.1 or a machine on the internet.
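For the curious, the sending side with liblo is tiny. Something along these lines - simplified, and with an illustrative OSC path and argument layout rather than necessarily what wiiOSC actually sends:

#include <lo/lo.h>

/* Simplified sketch of the sending side. The "/wii/accel" path and
   three-float layout are illustrative only. */
int send_accel(lo_address target, float x, float y, float z)
{
    return lo_send(target, "/wii/accel", "fff", x, y, z);
}

/* where the target is created once, e.g.:
   lo_address target = lo_address_new("192.168.1.102", "4950"); */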
wiiOSC is invoked as follows:
wiiOSC MAC remote_host port
For instance, to send wiimote data to a machine with the IP 192.168.1.102 on port 4950, I:
wiiOSC 00:19:1D:2C:31:E1 192.168.1.102 4950
To get the MAC addr of your wiimote, just use
hcitool scan.
I use Blender as my listener context but you can of course pick up the wiimote data in any application that supports it: PureData, Veejay etc. To use Blender as your listener you'll need Wiretap's Python OSC implementation and this Blender file.

Enjoy.
Tuesday, February 6, 2007, 02:00 AM - code
I've picked out the packet capture part of PG and turned it into a reasonably useful and lightweight logger that should run on any UNIX system (tested on Linux). Packet length, remote IP, transaction direction, Country Code and port are all logged. Packet lengths are added over time, so you see an accumulation of traffic per IP.
Use (as root):
./pcap_collate <DEVICE> <PATH>

This script will capture, log and collate TCP and UDP packets going over <DEVICE> (eth0, eth1 etc). The <PATH> argument sets the location the resulting GZIPped log will be written to, which will be updated every 1000 packets.
For this reason the script will automatically generate a new log on a new day and can be restarted at any time without losing more than 1000 packets of traffic.
The log is a dump of the dict, containing comma-separated fields structured as follows:
IP, direction, port, geo, length
It will filter out all the packets on the local network, and so is intended for use in recording Internet traffic going over a single host.
Ports to be filtered for can be set in the file config/filter.config
Stop capture with the script 'stop_capture'.
Get it here. Unpack and see the file README.txt.
Friday, February 2, 2007, 09:09 PM - dev

Two new projects are in the wings, the first of which I'll announce now.
This project takes a wooden chess-board and repurposes it as a musical pattern sequencer, where chess pieces in the course of a game define when and which notes will be played.
Each side has a different timbre so it is easily distinguishable from the other. Pawns have different sounds than bishops, which in turn have different sounds than knights, and so on.
As the game progresses and pieces are removed, the score increasingly simplifies.
It'll be developed at Pickled Feet laboratories with the eminent micro-CPU expert Martin Howser.
Thursday, February 1, 2007, 06:00 PM - dev
After several months hacking on this, I've finally released PG for all three platforms simultaneously. It's now considered 'stable'. Head over to http://packetgarden.com and take it for a ride.

A big thanks to: Jerub for detailed testing of the OS X PPC port, Marmoute for the OS X PPC package, Ababab for providing PPPoE test packets, extensive beta testing of the Windows port and his feature suggestions, Davman for beta testing the Windows port and for some fine feature requests, Krishean Draconis for porting/compiling Python GeoIP for Windows, pix for optimisation advice, Marta for both her practical suggestions and eye for aesthetic detail, Atomekk for his early testing of the Win32 port and for the Win32 build of Soya, Jiba for Soya itself, and all the other people who have sent bug reports and hung out in IRC to help me fix them. A big and final thanks to Arnolfini (esp. Paul Purgas) for the opportunity to learn a lot about packet sniffing, this thing called 'The Internet' and a fair bit more about 3D programming along the way. I've really enjoyed the process.
Now for something completely different..
Sunday, January 14, 2007, 03:22 AM - dev

As the topic doesn't suggest, http://packetgarden.com is now live. BETA testing is also well underway, with packages for Linux, Win32 and OS X going out the door and into the hot mitts of guinea pigs. If you're also up for a little beta testing, don't hesitate to get in touch!
I've had a lot of questions about this project, some about privacy, some about the development and engineering side of things. For this reason I've put up an 'about Packet Garden' page here.
Thanks to open standards, undertaking the task of writing for 3 different operating systems simultaneously wasn't the madness it could have been. That said, a big thanks to marmoute for help with the reasonably grisly task of packaging the OS X beta.
< rant >
It's clear that developing a free-software project on a Linux system involves substantially less guesswork than on Windows and OS X.
Determining at which point the UNIX way stops and the Apple way begins in OS X Tiger is pretty tricky, with /System/Framework libraries often conflicting with libraries installed into /usr/local/lib or just libraries linked against locally. Because there is no ldconfig I don't have the advantage of a 'linker' and so I couldn't work out how to force my compiler to ignore libs in /System/Frameworks and link against my locally installed libraries. If there is any rhyme or reason to this, or some FM I should RT, I'm keen to hear about it.

Acquiring development software on the Mac is also tricky: in Debian I have access to a pool of 16000+ packages readily available, pre-packaged and tested for system compatibility. A proverbial fish out of water, I took the advice of a seasoned Apple software developer and tried Darwin Ports and Fink, but both had less than a third of the software I'm used to in Debian and were both pretty broken on the Mac I used anyway. So, it was back to Google, hunting around websites to find and download development libraries. I managed to find all the software OK, but as a result of finding it online, I'm never sure which version is compatible with the system as a whole - neither Windows nor OS X have any compatibility policy database or watchdog in place to anticipate or deal with software conflicts potentially introduced by software not written by Microsoft or Apple respectively. This is still a major shortcoming of both OSes I think. I can't see this happening with Microsoft in future but perhaps Apple will get it together one day and create it in the form of a compatibility database/software channel or similar that allows developers to test and register their projects for compatibility against Apple's own libraries (and ideally those by others), sorted by license. Maybe this already exists and I don't know about it.
At this stage my development environment was nearly complete, but the libs I'd downloaded were causing odd errors in GCC. It turns out I needed to download a new version of the compiler, which is bundled into a 900Mb package called XCode that contains a ton of other stuff I don't need..
Getting a development environment up and running on Windows wasn't as difficult, though it suffers the same problems surrounding finding and installing software, let alone determining whether you're allowed to redistribute it or not; if the software I'm looking at is in Debian main, I can be sure it's free-software, hence affording me the legal right of redistribution.
One great advantage of developing on Windows again, the first time in around 6 years, is having to write code for an operating system that has such poor memory management. Everything written to memory has to be addressed with such caution that it greatly improved my code in several parts, for all platforms. Linux however has excellent memory management, and gracefully dances around exceptions where possible. Perhaps developing on Windows every once in a while is, in the end, actually a healthy exercise.
That said, working with anything relating to networking on Windows is absolute voodoo at the best of times. Thankfully OS X has the sanity of a UNIX base so at least I can find out what is actually going on with my network traffic and the devices it passes over.
< /rant >





