pmuellr is Patrick Mueller


Tuesday, December 02, 2008

more android thoughts

I remember right when Android came out, I downloaded the SDK and tried playing with it, and there was some kind of silly show-stopping problem with it on the Mac. Thoughts from then, just a little under a year ago. IIRC, the bug got fixed fairly quickly, but not quick enough for me to get a chance to get back to looking at it in any depth. A long time goes by, and right before this Thanksgiving, I got the urge to look at it again, so I did. Here are some unorganized thoughts.

Note that I still didn't really spend a lot of time with it; the time I spent probably raised more questions than it answered, in a good way. I did the obligatory HelloWorld demo, the Notepad tutorial, and lots of browsing and reading in between.

Please comment if I misspeak about something; lots of guessing and incomplete research on my part here (this thing is big!).

Wow. Lots of stuff here.

Hats off to the team. This thing is huge, and seems fairly well put together, especially considering the size. The source takes up 1.4GB on disk; Linux and WebKit and on and on and on. It's fun just to troll around the source directories to see what's there. But I recommend downloading the wad and browsing with your usual editors/tools.

Rev 1

So sure, maybe it's not as sexy and polished as the iPhone, and sure the devices got shipped with a silly little bug, but I'm willing to forgive early releases. Given the scope of the effort, again, I think they've done a great job. It's early days.

But I'm also not willing to shell out my own pocket money for a rev 1 device or o/s. Just like I (no longer) buy rev 1 Apple products or immediately install new versions of OS X. Tired of playing pioneer.

I guess I'm most worried about the potential immaturity of the application and UI frameworks. Usually good frameworks require evolution, and I've used lots of "intelligently designed" frameworks over the years, so I'm a bit of a doubter when I see a new one pop up overnight. Might be interesting to get some background on them, kinda like Google has done with BigTable and some of their other technologies.

Here's an example worry: the Touch Mode blog post has me a bit confused. Touching the screen causes selection and focus to be lost? Eh? I've been using my finger on PDA touch screens since the Palm. I'm used to touching the screen causing selection and focus events to occur, not to be lost. Either I'm not understanding what's going on here, or this seems wrong. And here's a funny little note: "This is why some developers are confused when they create a custom view and start receiving key events only after moving the trackball once: their application is in touch mode, and they need to use the trackball to exit touch mode and resurrect the focus." Um, developers are confused? I think users will be confused by this also, won't they? I'll give them the benefit of the doubt here; I'd like to see how well this works on an actual device before passing judgement. Lastly, I'm glad to see that the wiggly mouse is back - it's been years! The referenced mail doesn't go into details, but I've been dealing with self-inflicted wiggly mouse problems since the Smalltalk days. Those were due to bugs though, not design.

Devices other than the T-Mobile G1?

I kinda hate to even provide a link to the aurally offensive T-Mobile G1 site, but I'll let you feel my pain.

So what other devices are going to be available? More phones, I guess, but I wonder if anyone will do a non-phone version, like Apple's iPod Touch? I'm currently not willing to pay the data charges for presumably light bandwidth usage (over cell); I'm too much of a cheapskate and live fine with my cheapo phone. But I'm more than willing to carry a device I can connect to wifi.

I didn't see much (any?) information on any newer devices coming out, which is a bit worrisome. Before spending a significant amount of time playing with this, I'd like to know this thing has a future. It seems like a no-brainer to me that device manufacturers would take advantage of Android (many caveats apply), but I'm not willing to bet on common sense.

How would you do a port to something like the Nokia N800?

In lieu of new devices, what about porting to existing devices, like (my) Nokia N800? Or say, a netbook?

Turns out folks have ported Android to the N800; see here and here.

It appears what these folks have done is port the entire stack to the device, starting with Linux. Youch. What I was thinking of was more like porting the minimum number of things needed to run Android apps on it somehow. Basically, get something like the emulator running as a native app on the device, as an application container. I don't really feel the need to either blow away my existing device, or run another Linux on top of (beside?) it. And I'd be willing to live with having to run Android apps in a container, instead of as native apps themselves.

We did this sort of thing back when we were working on embedded Java; we had a MIDP environment that could run on the native operating system, with the UI layer (for instance) implemented in SWT. In fact, I assumed this was probably how the emulator that ships with the Android SDK was implemented, but it slowly dawned on me that it's actually running the entire stack, from Linux up, in a virtualization environment of some kind. Wowsa. That's why it takes 23 seconds for the emulator to start up.

To implement an emulator for Android, you need to decide what level of access you want to support. And the answer is obviously: the "Java" code that you use to write applications. All I'd want to do is run existing apps, which can only (?) access "Java" APIs, so all you have to do is have a matching set of Java APIs that do the right thing on the platform (note the 'all' is italicized).

In terms of implementation, since this is, really, Java source, you could (?) just run the apps in plain old Java. Alternatively, if you didn't have a Java available, you could port the Dalvik VM, and run it that way.

For the N800, there is a Java available (I've not tried it); it's a J2ME flavor, so may not be enough anyway. Might be better off porting the Dalvik VM, then porting the UI bits to Hildon/GTK/whatever; the N800 is already running Linux, so presumably other natively accessed things would have a bit of hope of running out of the box (but not camera, GPS, etc). WebKit's got me nervous.

Lots and lots of caveats here, and it's entirely possible that for some reason or another, you really, really need that Linux kernel to even run the plain ole "Java" applications.


So I'll admit Java isn't my first choice for a language on a device like this. I like the option of Python on Maemo.

But I can live with Java on a constrained device like this; I wasn't unhappy doing embedded Java years ago. One of my pain points with Java is the sheer bulk of crap needed to get anything done, pulling in, eventually, loads of Apache and Eclipse projects to do some heavy lifting for you. The nice thing here is that everything is included, or seems to be. But even if we end up needing more stuff (why do I even bother saying "if"), we're talking about a constrained device here, where you'll get laughed at if you want to install a 1MB library to pull in some function. People will have to learn how to write small code. Forced sanity.


Mark Murphy's blog post on using Java-based scripting languages on Android sums up the major points, especially as to language implementations that generate Java byte code directly; those will be a problem, and I think that affects a number of the interesting scripting languages. Presumably, someday you'll be able to generate and load Dalvik bytecode in Android. Maybe there will be a way to load Java bytecode and have it internally generate creamy Dalvik output automagically. Many possibilities.

But I wonder how appropriate scripting languages would be anyway? I've written about this before. If my option is to write Java code, or to write Java code in a Python dialect, I'll just stick to Java.


Of course there's always JavaScript available, either in the native browser via web applications, or in some other way through the use of the WebView UI View class. And check out this interesting looking method, in that class: WebView.addJavascriptInterface(). hmmm ...

Dalvik docs

Some Dalvik docs like bytecode descriptions, file shapes, etc are available here. Of course, almost no one needs this info, but in case you just had to know ...

I was a little surprised the Java-ness of Dalvik went so far as to support JNI, but I guess that really makes a lot of sense.

Proprietary stuff

The licensing situation with maps is a little disconcerting, and at 11 pages, a bit long. But this feels a lot more like an application-level service than, say, core infrastructure, and I'm fine with people setting limits on application services like this. Of course, the core infrastructure / application service line is a bit fuzzy. I use Google Maps a lot; it's starting to feel like core infrastructure. Google's got us right where they want us!

How about a native desktop version?

Imagine something similar to Apple's Dashboard, only running Android applications. You know I just want to start using that WebView.addJavascriptInterface() API on the desktop ...

I had the same thoughts on MIDP, back in the day, of using it as a container for little useful desktop apps. But that was pretty silly, given the lameness of MIDP.

Again, I think the nice way to do this would be to run the apps "native" instead of in a virtualized Linux box. I think if you looked at some of the common componentry used in Android and Chrome, you might end up thinking that being able to run Android apps in Chrome would be an interesting idea.

Now, you might say "the UI capabilities of Android are too lame to be useful on a desktop!" Thing is, I hear that complicated GUIs are on the outs. Even on the internets! I generally agree with those thoughts, and I have to admit, one of the most interesting UIs I've seen recently was Muxtape (R.I.P.); simple.

Speaking of simple UIs, after watching this video discussing ListView, I couldn't help but think of our old, dear friend Gopher (R.I.P.?). Once upon a time, I imagined having Gopher server interfaces on everything, so I could have a simple universal client that could talk to anything. Good times.

Friday, November 14, 2008

javascript service frameworks

In my "brainwashed" post, I managed to get a diss in for Yahoo!'s BrowserPlus™, calling it a band-aid. However, I carefully, pro-actively covered my arse with an earlier Twitter message "Y! BrowserPlus™ looks interesting; maybe the service framework moreso than the function provided".

The existing services provided by BrowserPlus™ seem like eye-candy. Not sure I really need an IRC client in my web apps, nor text to speech. And I'm worried about the kill switch. Mainly because the implication is that I'm somehow always tethered in an uncacheable way to Yahoo!. shiver

But I do like the service framework stuff. At this point, I don't really care how useful it is; I'm glad to see people playing in the space. Why?

The level of functionality browsers provide today for componentizing/modularizing JavaScript code is <script src="">. Basically, all the power, flexibility, and functionality of the C preprocessor's #include statement (good and bad). But I'm looking at my calendar to see what year this is again. Doh!

The big JavaScript frameworks build their component stories on top of this, and the whole thing just gives me the willies looking at it. While I often wonder if JavaScript actually needs real structuring capabilities in the language, like a class statement, et al., I'm happy to try living without it, and try playing with some other mechanisms.
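To make the willies concrete, here's a sketch of the kind of component story you end up with on top of <script src="">; the names here (MYLIB and friends) are made up for illustration, not any real library:

```javascript
// The state of the art atop <script src="">: hang everything off a global
// object with a well-known name, and hide "private" state in a closure.
// (Hypothetical names; this is the pattern, not a real framework.)
var MYLIB = MYLIB || {};

MYLIB.greeter = (function () {
  var greeting = "hello";            // "private" only by closure convention
  return {
    greet: function (name) { return greeting + ", " + name; }
  };
})();

var result = MYLIB.greeter.greet("world");
console.log(result);                 // hello, world

// ... and nothing stops any later <script> from clobbering the global:
MYLIB.greeter = null;
```

Every script shares one global namespace, so the whole scheme holds together only by gentleman's agreement - which is exactly the willies.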

Java likewise has crap component/modularity capabilities out of the box: jar files and class loaders. OSGi plugs that hole. JavaScript likewise needs its hole filled (that link is sfw, I swear).

So let's start experimenting! Go, go Yahoo! BrowserPlus™!

Could we do this with Google Gears WorkerPools?

In my "fun with WorkerPools" blog post from a while back, I wondered about building a service framework on top of the WorkerPool bits from Google Gears. Seems like building something similar to what BrowserPlus™ provides is pretty do-able.

The great thing about the WorkerPool stuff is that it brings another dimension to the JavaScript story - separate environments / contexts / spaces. The ability to have a bunch of code running, separated into protected object spaces, with very explicit communication channels open between these spaces. The best you can do with <script src=""> is namespacing via fixed name objects. Ugly and unsafe. I assume BrowserPlus™ is doing the multi-context thing as well, but I haven't looked close enough.

The other neat thing about Google Gears is that something like it has leaked into the WHATWG work, via a draft recommendation called Web Workers. Meaning we may be able to do this in a browser-independent way sometime around 2022. Or, our Chrome Overlords will render it moot, since Chrome already ships Google Gears. Or all the browsers will start shipping Gears. Whatever. Workers FTW!

So let's start experimenting! Go, go Google Gears WorkerPool-based service frameworks!

Thursday, November 13, 2008

hiking in the triangle

I've been doing a fair amount of hiking around the area lately, for some reason. Since it seems like there are a lot of folk unfamiliar with the fantastic parks available in the triangle area, and North Carolina in general, thought I'd dump some information here.

Local Hikes

For the hikes listed below, you won't need any special equipment. Outdoors-y clothes, something you'll be comfortable walking in for a couple of miles (you're hiking!). And sweating in. Some kind of tennis shoes/sneakers. Bug spray if it's a buggy season. Sunscreen. A hat? And a bottle of water. You'll be outside, walking around for an hour or two; use your head.

  • Hemlock Bluffs Nature Preserve

    This park is maintained by the Town of Cary, and is a little bit of forest in the southern end of the city, right off of Kildaire Farm Road. Just a couple of short paths through the woods, alongside Swift Creek, with some unexpected bluffs to overlook. I don't think any of the individual trails in the park are more than two miles.

    The Stevens Nature Center, co-located here, has some information on the park available in one of the buildings at the park entrance. They also hold occasional classes on various topics, through the Town of Cary.

  • Swift Creek bluffs

    A small park maintained by the Triangle Land Conservancy, off Holly Springs Road, between Cary Parkway and Penney Road. Admit it, you had no idea there was a park in that swampy-looking land, did you?

  • William B. Umstead State Park

    Oddly enough, I haven't been there in years. It's odd, because I drive by this park every day I drive to work; it's directly to the south of the airport, filling the area between I-40 on the west, and US-70 on the east. Joe Miller (see below) frequently writes about hiking, running, and biking in the park. Sounds great, and I feel like an idiot for not having visited in so long.

  • Raven Rock State Park [pix] [pix]

    This is one of the family favorites. The main attraction is Raven Rock itself, a rocky outcrop on the banks of the Cape Fear River. It's a fairly simple hike out, followed by a set of stair steps down to the river. Lots of rocks to climb on, for the kids. The Raven Rock Loop Trail is 2.6 miles.

  • Occoneechee Mountain State Natural Area [pix]

    A small park in Hillsborough, bordering the Eno River. While you can't get to the summit itself, you can walk up to the second highest point, and around an old abandoned quarry with some resultant man-made cliffs. You can both look out over the cliffs, from above, then walk down to the Eno River and get a view from below. The eastern section of the Loop Trail is about two miles.

Less Local Hikes

Listed here are a couple of parks that aren't really local to the triangle, but close enough to make a day trip. I'm totally geared up when I go to one of these: boots, pack with a bunch of crap in it, staff, GPS, etc. Though I've done some of the shorter hikes with the kids with just a bottle of water.

The trails marked "strenuous" generally mean lots of climbing, which usually also means great views. There will be lots of sweat.

Do some research before you go.

  • Hanging Rock State Park [pix] [pix]

    Hanging Rock is the main draw here, a fairly easy hike with great views and some rock scrambling when you get to the top. Lots of people.

    Moore's Wall Loop Trail takes you up a different mountain. At the top is an observation deck with fantastic views. You can do a bit of rock scrambling at the top, and some points along the ridge. Not many people.

  • Stone Mountain State Park [pix] [pix]

    A big mound of granite that's pretty breathtaking. To see the view of the mountain, you'll have to venture on the trail to Hutchinson Homestead, which is a pretty easy hike. The hike up the mountain is a different story. There will be sweat. They now have steps that take you almost the entire way up the side of the mountain; not as fun as before they had the steps, but certainly a lot safer.

    When I was up there last week, I met a 70-year-old couple from Charlotte who hit the mountain every year. I can only hope to be so lucky.

  • Crowders Mountain State Park [pix]

    The main draw is Crowders Mountain; lots of people. King's Pinnacle is just as nice, with far fewer people.

Web Sites

  • North Carolina State Parks by NC Division of Parks and Recreation

    This is a site maintained by the same folks who actually maintain the fantastic North Carolina state park system. Information on all the parks and natural areas (what's the difference?) is provided, including maps to the parks, maps of the facilities and trails in the parks, and other general information. The maps are quite detailed, and are PDF versions of the slightly better quality print versions available at the parks themselves. You'll want to use the maps provided at the park while you're hiking, because they are a bit larger than a home-printed PDF, printed on nice heavy paper, pre-folded, and have additional park information on the flip side. But print one of the PDF maps before you go, just in case they're out, or in case you go to a part of a park which doesn't have a maps kiosk or park station nearby.

  • Triangle Land Conservancy

    You can read all about what the conservancy is about on their web site, but for purposes of this blog post, their web site contains links to a couple of hikable areas that they provide access to.

  • The Town of Cary Parks, Recreation & Cultural Resources Department

    Contains links to Cary's parks, trails and greenways.

  • North Carolina Parks - Google Maps by me

    This is a crude "mashup" I wrote a while back, recently refurbished to fix some broken links. It's a view of North Carolina in Google Maps, with a marker placed at all the state parks, and a few other random parks I've added. Clicking on a marker gives you a popup with a link to the park's web site, and the current weather and coupla day forecast, with links to more detailed weather information. A right click (if you're a righty) might bring up a context menu that offers to get directions to the park from somewhere else, like your house.

    You can also load the park data into Google Earth, by using the Add / Network Link menu item (on the Mac version anyway), and pasting the URL to the KML file into the Link field. Once you've added it to your My Places list, you can use the context menu on the entry for the parks to refresh the data from the KML file (basically, the weather). Of course, Google Earth can show you the weather itself. It can also show you pictures taken from within the parks, links to Wikipedia entries, and so on. Google Earth is a great way to get familiar with the layout of the hilly parks, especially if you set "Elevation Exaggeration" to the maximum value of 3.


  • Get Out! Get Fit! by Joe Miller

    Joe works for Raleigh's News and Observer newspaper. He authors a print column, "Take It Outside", which is also republished to the web. He usually references his print columns in his blog, so there's not much need to watch for the print columns; just follow his blog instead.

    Joe covers all manner of outdoor activities, not just hiking, but he spends quite a bit of time covering hiking and biking in North Carolina. He has also written a few books, which I'll reference below.


  • 100 Classic Hikes in North Carolina by Joe Miller - $16

    This is Joe's most recent book, and covers, as it claims, 100 hikes in North Carolina - east, west, north, south and in between. He covers a lot of the state parks, so provides additional information beyond what's available on the state parks web site. In the Introduction, he discusses clothing and gear, though as I mention above, for short hikes, you won't need much. It's a good introduction to other gear you might want to get though.

  • Take It Outside: Hiking in the Triangle by Joe Miller - $13

    This is Joe's earlier book, which covers hikes in the Triangle area of North Carolina. Ten years old but still relevant. Simple hikes, places you should definitely check out because they're right in your backyard, if you live here.

Wednesday, November 12, 2008


I remember reading Michael Lewis's "Liar's Poker" back in the day, and enjoying it, so it's not a big surprise that I also enjoyed reading his recent article on our financial crisis, "The End".

But my reaction to the article is a little different than Tim Bray's "Angry" blog post. Anger is an understandable reaction; we've essentially been fleeced. But when you get fleeced, you have to realize that often you walked right into the scam. One of the people to be angry at, is yourself.

How did this happen?

One word: brainwashing. We all wanted to believe there really was a lot of money to be made, that the level of wealth building going on was somehow sustainable, and that we could all be a part of it.

DJIA from 1930 to today - link to live chart

But honestly, did you really believe your realtor when they told you the property prices in the neighborhood you bought into were appreciating 10% a year, with no end in sight? Does it really seem ok to spend $30,000 on an automobile? Can you really look at the historical record of the Dow Jones Industrial Average from the 1930's to today, and say: "I don't see anything abnormal".

We all wanted to believe, so we ignored the signs, if we saw them at all. Were there some bad apples out there? Sure. But there were far more folks who weren't really bad apples, just duped like the rest of us. Deluded by fantasies of riches. Brainwashed.

Relating this to computer tech

I couldn't help but extrapolate from my take on Lewis's article to some of the goings on in the computer tech field today, because I think there's a fair amount of brainwashing going on in tech also. What do you think about the following ideas:

  • Using LaTeX files or Microsoft Word .doc files as serialization formats for structured data.

  • Using C or C++ to build your next web-based application.

To me, these sound crazy. Why would you use a word processing system as the basis for a data serialization format? Nuts, right? But we are. XML traces its roots back to IBM's Generalized Markup Language, a text markup language not unlike LaTeX or nroff (good times, good times). XML is frankly not all that different from GML; I'd say it's easier on the eyes (brackets are easier to visually parse than colons), with a more regularized syntax. But it's fundamentally a language to apply bits of text formatting to large amounts of raw text. A typical BookMaster document was mainly plain old text, with occasional tags to mark paragraph boundaries, etc.

Same sort of nutso thinking with Java. A potentially decent systems-level programming language, it could have been a successor to C and C++ had things worked out a bit differently. But as an application programming language? Sure, some people can do it. But there's a level of complexity there, over and above application programming languages we've used in the past - COBOL and BASIC, for instance - that really renders it unsuitable for a large potential segment of the programmer market.

How did these modern-day accidents occur? Hype. Being in the right place at the right time. They more or less worked for simple cases. We all wanted to believe. We brainwashed ourselves.

Luckily, evolution is taking its toll on these two languages. XML isn't the only game in town now for structured data; JSON and YAML are often used where they make sense; Roy Fielding notes that perhaps GIF images are an interesting way to represent sparse bit arrays (ok, that's a random mutation in evolutionary terms). We're seeing an upswing in alternative languages where Java used to be king: Ruby, Python, Groovy, etc (with many of these languages having implementations in Java - perfect!).
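The "use JSON where it makes sense" point is easy to see in code: a JSON document is just your language's own data structures written down, with no markup-oriented machinery in between. A small sketch:

```javascript
// Structured data as data: JSON maps directly onto JavaScript objects,
// where a markup language would need a parser plus a mapping convention.
var park = { name: "Raven Rock", loopTrailMiles: 2.6 };

var wire = JSON.stringify(park);       // serialize
var copy = JSON.parse(wire);           // deserialize

console.log(wire);                     // {"name":"Raven Rock","loopTrailMiles":2.6}
console.log(copy.loopTrailMiles);      // 2.6
```

Contrast that round trip with extracting the same two fields from a tagged text document; the impedance mismatch is the whole argument.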

Reality is setting in; do what makes sense; think different; the hype curve doesn't always point to the right answer.

My new favorite example of tech brainwashing

So while I can't complain about XML and Java as much as I used to (it's beating a dead horse at this point), I do have a new whipping boy in the tech world for where we've been brainwashed: "web applications". Those things that run in our web browser, giving us the power, beauty and flexibility of desktop apps. Web 2.0. RIAs. GMail and Google Maps.

I don't have to explain to anyone who's ever tried putting together a "Web 2.0 application" the utter pain of doing this. Ignoring server-side issues, you have to deal with three different languages - HTML, CSS, and JavaScript - typically interspersed betwixt themselves, which behave differently in the three to five popular browsers you might hope to support. And the iPhone. Server side programming throws more wrenches in the gears, as in our simpler web 1.0 world, we frequently mixed HTML, CSS, and JavaScript with whatever programming language(s) we used on the server. And we still do, at times, in this new world also, only there's lots more HTML, CSS and JavaScript, so it's even messier.

Or maybe you're using Flex or Silverlight or some other more constrained environment for your web app. But then you have different problems; your users might expect to be able to bookmark in the middle of the app - it's running on the web after all. Or cut and paste some of the content. Etc. And you're probably still deploying in a web browser anyway! There is no escape!

What really frustrates me about the situation we're in is that we've painted ourselves into a corner. We started with a very useful, mainly read-only, networked hypertext viewer, and over the years bolted new geegaws on the sides. Blink tags. Tables. JavaScript. Frames. DHTML. File uploads. XMLHttpRequest. SVG. HTML Canvas. (stop me, my head's about to explode!) All great stuff to add to a mainly read-only, networked hypertext viewer. Can you use it to build a word processor? Yes, you can. Well, a team of software developers at Google can. Not sure that I can.

But I used to be able to build GUI apps without a lot of difficulty. High function, richly formatting text editors even. In C, for gawd's sake, though life got a lot easier in Smalltalk.

It's time to stop thinking we can apply bandages to the status quo and make everything better. We want to believe. Just another gadget or framework is going to make everything better! We've brainwashed ourselves.

Wake up! We fundamentally have the wrong tools to do the job. We're using a spoon where we should be using a backhoe. Look down! You're using a spoon for *bleep*'s sake! Don't you realize it? Slap yourself around a little and clear the fog from your eyes. Expect better.

To be a little more concrete, I actually do think that the fundamentals of our web tech economy are sound. I'm not unhappy with HTTP, HTML, CSS, and JavaScript when viewed as separate technologies. I don't think we've fit them together in the best way, to make it easy to build applications with. And they're wrapped in a shell optimized around mainly read-only, history-enabled, page-by-page navigation, which doesn't seem like the best base on which to build an application. It's time for some new thinking here. Can we take some of the basic building blocks we already have and build a better application runtime platform than what we've got in front of us? I have to believe the answer is: Yes we can.

Saturday, October 18, 2008


As a reader of the great blog create digital music, I've been reading about and playing a bit with all the great Nintendo DS music software, and just reading about the iPhone/touch software (I am currently iPhone/iTouch challenged). Fun stuff.

Of most interest to me are MIDI controllers; hardware or software that you can use to control some other software via the enduring MIDI protocol. Having a MIDI controller on a portable device is a great idea; the more devices you have available, the better. But one of the problems with MIDI controllers is that there's never one that fits your needs exactly. Most of the existing handheld software is fairly fixed function.

So what I really want is some software that easily lets me construct a MIDI controller that does exactly what I want. That I can use on one of my portable devices: a Nokia N800 and a Nintendo DS.

There are two general problems to solve: making it easy to construct a new controller, and getting input from the handheld into a MIDI device on my desktop where my music software is running.

At some point it struck me that having the MIDI controllers implemented as UIs that can run in a web browser solves some of the general constraints. Writing UIs in HTML, CSS, and JavaScript is (to some extent) easier than writing to native GUI toolkits. Plus, there's the hope of writing a UI once and being able to reuse it on several devices. Writing code specific to an iPhone doesn't help me if I later want to run it on a Nokia N800 or Nintendo DS.

On the MIDI side, providing an HTTP interface on the desktop to MIDI devices makes it easy to interface with them from other devices. There are some existing ways to interface with MIDI over tcp/ip, including Network MIDI (RFC 4695 and RFC 4696; Apple provides an implementation of these protocols, but doesn't document some of the control messaging bits) and DSMI. But none of these actually flow over HTTP, and it just seems so appropriate to actually surface the interface over HTTP. Duh.

And thus, webimidi is born.

Below is a movie of the single device that I've included with the code, which is a simple two octave keyboard.

I wrote webimidi in Python, although I was tempted to use the Ruby MIDI interface that Giles Bowkett provides in Archaeopteryx, but that would have been too easy. ha ha. Actually, I've been feeling more python-y lately than ruby-y, and this was a great excuse to learn two parts of Python I haven't played with:

  • ctypes - a foreign function interface (FFI). ctypes allows you to interface with native shared libraries, the native data structures they use, etc. The world (of C) is your oyster.

  • wsgi - Python's Web Server Gateway Interface. For folks familiar with Java, wsgi is the "servlet" of the Python world.

Currently, webimidi only runs on Mac OS X, and probably only 10.5. Porting to Windows would really just be a matter of doing a ctypes interface to Windows MIDI functions (I did this MANY years ago in Smalltalk; it's doable, I just have no interest in actually doing it today).

Up next, a controller for my Line6 TonePort UX1. Also, seeing if I can get something useful to run on my Nintendo DS; it's not clear that it supports XMLHttpRequest, which is currently a pre-req for controllers built like this.

Friday, September 19, 2008

fun with WorkerPools


I suspect I'e looked at Google Gears a half dozen or so times since it was originally released. Always looked kinda intarstin', but hadn't mightily any use for it. I took another look when Chrome was announced, since, o' all the stuff in Chrome, the fact that Gears was baked in was the most intarstin' bit t' me.

(Bein' a bit curmudgeonly har, as in, the flashy chrome bits don't excite me. "When I was a kid" my web browser (OS/2's WebExplorer) could only remember 10 bookmarks. The Nintendo DS Browser works for me, in a pinch. Now GET OFF MY YARD!)


Pirate-speak translation above provided by the Pirate Speak Translator.

Gears is the most interesting bit in Chrome because it's the programmable bit.

What I decided to look through the other day was the WorkerPool bits, since I hadn't really looked into them much before. What I realized pretty quickly is that they should have actually called these ActorPools. You may be familiar with the Actor paradigm via the recent Erlang hotness.

In a nutshell, the WorkerPool facility provides the following capabilities:

  • the ability to run a hunk of JavaScript code in a new context
  • that context does not share code or state with any other JavaScript context, including the context which 'launched' the code
  • the only way to communicate with the code is via asynchronous one-way message sends of arbitrary JavaScript objects (basically, the same sorts of objects describable in JSON; no functions or non-trivial objects allowed, for instance)
  • the ability to run that code independently of other contexts. Think threads, though that's just an implementation detail.

If that's all it was, it wouldn't be terribly interesting. You can get aspects of this type of stuff with traditional JavaScript code, though it's often messy. Gears provides, at least, a fairly clean way of providing these capabilities.

But here's where it gets interesting. That hunk of code that you've created a worker for can be loaded from an arbitrary server, referenced via a URL. And then that code follows the "same origin policy" rules for other Gears APIs available to workers, for instance, the Database APIs and HttpRequest APIs. In other words, your worker code can access HTTP-resources on the server it came from, and have a 'protected' Database that it manages that is only visible to workers that also were loaded from that server.

Very cool, because this means that you can build workers that act as self-contained service modules to allow access to HTTP resources for other applications to use. None of the usual cross-site chicanery we've had to deal with. In addition, these modules can manage their own protected cache of data. Also, such modules can be reused across multiple web applications, with each one reusing the same code and database store.

This seems like powerful mojo.

Building an RPC mechanism on top of the message send APIs

One of the downers, for most people, with the current WorkerPool APIs, is going to be the message sending paradigm. It's pretty low-level and raw. The great thing is that asynchronous message sends are a type of atomic building block upon which other forms of IPC can easily be built. The QNX operating system is famously built up on this core concept, slightly expanded.

I've taken a run at building a simple RPC mechanism on top of the message send API. The proof of concept is available here. Here's how it works:

To build your RPC-styled worker, create a JavaScript file that includes the services you want to expose, implemented as plain old functions, along with a list of the functions you want to expose. Here's an example of some math services:

    // service function to add a list of numbers
    function add() {
        var result = 0
        for (var i=0; i<arguments.length; i++) {
            result += arguments[i]
        }
        return result
    }

    // list of exported services
    services = ["add"]

    // boiler-plate from here to end

At the bottom of this file is some boiler-plate code that deals with the messaging interface. Basically, messages are received that are serialized versions of the function invocation: function name, arguments, and an identifier to indicate which invocation this was (needed to match up return values later). The boiler-plate code cracks open the message, reflectively calls the function, then sends back the function result as a message to the original message sender.
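The dispatch half of that boiler-plate might look something like this sketch (the message shape and the names here are my guesses for illustration, not the actual code from the proof of concept):

```javascript
// Hypothetical sketch of the boiler-plate dispatcher. A request
// message looks like: { id: 17, fn: "add", args: [1, 2, 3] }.
// sendReply is whatever posts a message back to the caller -- in a
// Gears worker, a wrapper around workerPool.sendMessage().
function dispatch(message, serviceTable, sendReply) {
    var fn = serviceTable[message.fn];
    if (!fn) {
        sendReply({ id: message.id, error: "unknown service: " + message.fn });
        return;
    }
    // reflectively invoke the service function with the serialized
    // arguments, then ship the result back tagged with the invocation
    // id so the caller can match it to the original call
    var result = fn.apply(null, message.args);
    sendReply({ id: message.id, result: result });
}
```

The invocation id is the key piece: since the sends are one-way and asynchronous, it's the only way to pair a reply with the call that caused it.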

On the client end, in your main HTML / JavaScript code, you'll be using the service like this:

   math_service = new ggw_services_Service("math_services.js")

   // callback function to display the result of our service call
   function print_sum(sum) {
       alert("sum is: " + sum)
   }

   // handler that invokes our service call
   function do_sum() {
       math_service.services.add(print_sum, 1,2,3,4,5,6,7,8,9,10)
   }
In this code, we instantiate the services with the ggw_services_Service constructor, passing it the URL of the JavaScript code we want to run as a worker - this would be the service implementation file described above. The resulting object will then expose proxies for the exposed functions in the services field of the object. You call these just like normal functions. One trick: because the message sends are one-way, and you'll probably want to get a result back, the first parameter can be a call-back function which is invoked when the service method returns its value. In this case, that would be the print_sum function.

Neat. But crude. Doing this right would require a bit more infrastructure, as well as making sure you can catch all the sorts of error conditions that can happen. The end result won't be (shouldn't be) as transparent as the example above, but you can probably get pretty close.


  • Because a worker shares no code or data with anything else, things like debugging get hard: you can't access the DOM, you can't access the document, you can't access alert(). In Firebug, you can see the message text from errors, which is useful, especially when you throw them yourself. But the source of the code isn't identified, just that the error occurred on "worker 0" or the like.

  • Speaking of Firebug, I was able to pretty consistently lock up Firefox while debugging my example. Given my browser-coding naiveté, I don't know if this was me, Firebug, Gears, or some combination of them. But it got old fast.

  • It's not clear what the best way is to handle security credentials for HTTP requests from within workers. My gut tells me that some Gears APIs to manage sensitive data like passwords and keys would be useful. Storing credentials in a server-specific database doesn't sound great, but doesn't sound terrible either. But it's not even clear how a worker would go about prompting a user for credentials, and do it safely.

  • The current story for worker code is that all the worker code has to be in a single file. Not great. It would be nice to have an API like loadScript() or some such that would allow you to add additional JavaScript code to your context. In lieu of that, you can always XHR GET your additional code, and eval() it into the context. Icky, but should work.

  • One of the nice things about the stark bleakness of the context in which the workers run is that it makes these workers applicable to other environments. For instance, it's not a huge stretch to consider porting the Gears APIs to Rhino, allowing reuse of the workers from within Java. Very cool for Java, where loading "live code" is typically not something that is done, for various reasons, but this makes it relatively straight-forward.

  • In terms of higher-level frameworks for this stuff, I think I'd start looking at OSGi and its Bundle and Service concepts. I suspect there's a pretty good fit there. Combined with the previous thought of reusing workers in Java itself, why not design it around OSGi, so that I could design a worker such that it could easily be spun into an OSGi bundle, and so be directly consumable by OSGi-friendly code without having to have them deal with "yucky" JavaScript. Having access to JavaScript in Java isn't always considered a "plus" by Java programmers, but they can be easily fooled.
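The XHR-GET-plus-eval() workaround mentioned in the single-file bullet above can be sketched like this (loadScript and httpGet are invented names; in a Gears worker, httpGet would wrap the Gears HttpRequest API, since there's no built-in way to pull extra code into the context):

```javascript
// Hypothetical loadScript() built from the pieces Gears does give you:
// fetch the source over HTTP, then eval() it into the worker's context.
// httpGet is assumed to be a synchronous GET returning the response text.
function loadScript(url, httpGet) {
    var source = httpGet(url);
    // indirect eval runs in the global scope, so definitions made by
    // the loaded script become visible to the rest of the worker
    (0, eval)(source);
}
```

Icky, as noted, but it gets you multi-file worker code until something like a real loadScript() API shows up.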

Thursday, August 21, 2008

better than json

Well, JSON is the new hotness. This week. Again. Thought I'd jot down some thoughts I had on JSON last summer, in the area of improving on JSON.

Before doing that, let me go over some of my core beliefs about JSON:

  • It defines an ASCII serialization format for a lowest common denominator of primitive and composite types used in most common programming languages, so it can be used to serialize natural data structures in many environments.

  • JSON doesn't attempt to handle traditional OO mechanics, it's really just plain old simple data and data structures, keeping things simpler.

  • The fact that JSON is legal JavaScript, and so can be eval()'d and <script src="">'d in web browsers via JSONP, is not a benefit, it's a security nightmare waiting to happen. JSON parsers are easy to write, and with JSONP you're giving up too much control over your HTTP requests.

  • The 'readability' and 'writeability' of JSON, by humans, isn't a big priority for me. But it does bug me that it could easily be more readable and writable, by humans.

Basically, I like what it can express, and I care less about the syntax.

So, what would I change? Turns out, not much, mainly syntax changes to aid in human readability and writability.

Add some additional primitive data types / literal forms. Some obvious ones would be decimal and date. Most languages can support these, somehow. Date gets dicey in terms of subsetting, like date with no time, time with no date, etc.

Remove the idiotic "\/" escaping rule. People think they have to encode the "/" character because it shows up in the 'backslash escapeable character list'. Section 8, Examples, of RFC 4627 has an example which clearly does not use escaped "/" characters. I assume this escaping rule had something to do with the way JavaScript is/was parsed in some web browsers. feh.

Remove extraneous punctuation. The "," separators used between elements in {} and [] structures aren't needed. Make them optional. Same with the ":" characters between key / value pairs in {} structures. For human readable output, I'd not use "," but would use ":", because I like the visual parsing you get between key value pairs. I would typically write key / value pairs on a single line anyway, obviating the need for the "," separator.

Stop requiring quotes around strings unless they're actually needed. This is the biggest noise reducer. There are only 3 keywords in JSON - true, false, and null. For property keys specifically, many times the key values aren't one of these keywords, aren't another primitive literal value, and don't include whitespace or other separator characters. Their values are basically 'identifiers', which get used unquoted in JavaScript itself, in dot notation and object literals.

I assume this quoted property name rule exists to keep people from inadvertently using JavaScript keywords as property keys, which would then screw up the parse-by-eval() trick. For program-generated JSON, it's probably just easier to always use quotes, but when JSON is being rendered for humans, I'd like quotes removed if they could be. When writing literal JSON, I'd also not use quotes if I didn't have to.

The same rules apply for all string usage, not just property keys. But I'd probably typically want to read and write strings as quoted values everywhere but property keys, as that's just what I'm used to.

Using 'bare strings' means that you might write out a property name that in the future becomes a new JSON keyword. Not good. So perhaps something like the Smalltalk/Ruby symbol syntax construct could be used. Basically a new character prefix, with no terminator, indicating that the characters that follow, up to a separator, are the string value. If "#" was the prefix, then #abc would be the same thing as "abc".

Or perhaps we can find and set some rules that would allow new keywords and operators to be introduced without breaking serializations that weren't aware of them. The NextREXX language does this.

Provide single line and multi-line comments. Duh.


Here are some examples of using some of these changes, based on the following JSON object that I found on the web somewhere. To be fair, since these examples are supposed to show readability enhancements, I did a reformatting in TextMate. Here's the original, but reformatted, legal JSON.

    "members": [
            "href": "34KJDCSKJN2HHF0DW20394",
            "etag": "0hf0239hf2hf9fds09sadfo90ua093j",
            "entity": {
                "id": "",
                "name": {
                    "unstructured": "Jane Doe"
                "gender": {
                    "displayvalue": "女性",
                    "key":          "FEMALE"
            "href": "aaaaaaaaaaa11111",
            "etag": "alsjdfieflsajfajsfjadsljfalksjd",
            "entity": {
                "id": "",
                "name": {
                    "unstructured": "Joe Gregorio"
                "gender": {
                    "displayvalue": "Male",
                    "key":          "MALE"
    "next": null

Let's remove the commas. Comma removal doesn't help much with readability, but it's much easier to not have to add the commas when you're writing JSON literals by hand.

    "members": [
            "href": "34KJDCSKJN2HHF0DW20394"
            "etag": "0hf0239hf2hf9fds09sadfo90ua093j"
            "entity": {
                "id": ""
                "name": {
                    "unstructured": "Jane Doe"
                "gender": {
                    "displayvalue": "女性"
                    "key":          "FEMALE"
            "href": "aaaaaaaaaaa11111"
            "etag": "alsjdfieflsajfajsfjadsljfalksjd"
            "entity": {
                "id": ""
                "name": {
                    "unstructured": "Joe Gregorio"
                "gender": {
                    "displayvalue": "Male"
                    "key":          "MALE"
    "next": null

Now remove the quotes around property keys. That's really nice, but for compatibility with potential new keywords, it probably is best to not use 'bare strings'.

    {
        members: [
            {
                href: "34KJDCSKJN2HHF0DW20394"
                etag: "0hf0239hf2hf9fds09sadfo90ua093j"
                entity: {
                    id: ""
                    name: {
                        unstructured: "Jane Doe"
                    }
                    gender: {
                        displayvalue: "女性"
                        key:          "FEMALE"
                    }
                }
            }
            {
                href: "aaaaaaaaaaa11111"
                etag: "alsjdfieflsajfajsfjadsljfalksjd"
                entity: {
                    id: ""
                    name: {
                        unstructured: "Joe Gregorio"
                    }
                    gender: {
                        displayvalue: "Male"
                        key:          "MALE"
                    }
                }
            }
        ]
        next: null
    }

Now try prefixing property keys with "#" using a symbol-like construct, and not using ":" to separate key value pairs, to make up for having to put the quote in. Add some comments.

    {
        // the list of entries in the list
        #members [
            // Jane Doe's entry
            {
                #href "34KJDCSKJN2HHF0DW20394"
                #etag "0hf0239hf2hf9fds09sadfo90ua093j"
                #entity {
                    #id ""
                    #name {
                        #unstructured "Jane Doe"
                    }
                    #gender {
                        // apparently this data is multi-byte encoded
                        #displayvalue "女性"
                        #key          "FEMALE"
                    }
                }
            }
            // Joe Gregorio's entry
            {
                #href "aaaaaaaaaaa11111"
                #etag "alsjdfieflsajfajsfjadsljfalksjd"
                #entity {
                    #id ""
                    #name {
                        #unstructured "Joe Gregorio"
                    }
                    #gender {
                        #displayvalue "Male"
                        #key          "MALE"
                    }
                }
            }
        ]
        #next null
    }

To me, I see some readability enhancement with these changes. I think the big bang for the buck is the writability enhancement.

As I said above, the syntax isn't a really big deal for me. It's only a big deal when I have to read it with my eyes or type it at a keyboard. Which we seem to be doing more of these days. In fact, I had to fix two typos in Joe's original sample. Can you find them?

It's funny that with all the whining I do about unreadable and overly verbose XML, I find it easier to read and write XML than JSON. That might well just be because I'm typically reading and writing XML with syntax coloring viewers and editors, but not doing that so much with JSON.

Sunday, June 08, 2008

hoping for the ide wars

Joe calls it "The professionalization of scripting languages"; I call it "the javascript wars". As it relates to SquirrelFish anyway. As for maglev, I say "thank you for setting a new bar for 'dynlang' runtimes; it was desperately needed".

That's the big takeaway: as we see competing implementations of JavaScript and Ruby, the upside for us consumers is great - all the implementors trying to outdo one another, giving us better and better runtimes. We saw the same with the web browsers and Java VMs in the 90's.

But all of that is just getting the runtimes to run faster, use less memory, and in general just being better citizens on our computers. That doesn't change the way I work all that much, just lets me develop (and test) faster. What else can we take away from the Smalltalk experience?

Development environments.

There were plenty of warts in the Smalltalk IDE story - you had to use the IDE's text editor, your entire development environment was mixed in with your product code, etc. But of course there were lots of advantages to that also. Smalltalk was the last development environment where I actively took advantage of extending the environment with my own code. Before Smalltalk I was using programmable text editors and command lines; I was quite comfortable scripting both, even on mainframes. Since Smalltalk I've generally been using Eclipse for work stuff, but the effort in extending the environment has just been too much work for me to invest in, though I've tried a few times. I settle for writing ant scripts; that's how bad things have gotten - I'm programming in XML.

What's on the horizon? Eclipse has E4. Which seems like it's largely a cleanup/sanitization of Eclipse, vs a complete re-think. I suspect I will write as many extensions for E4 as I have for the previous versions of Eclipse. Perhaps I'll sit that out and wait for Steve Northover's E5. Frankly, I think it's time we started fresh; let's call it F1.

Gilad Bracha has a recent blog post on "Incremental Development Environments" including a link to a paper on Hopscotch. Frankly, the paper didn't do much for me, and the sole screenshot is of something which appears to be a browser-based IDE. Not terribly fascinating at this point, in terms of the specifics. I hope to hear more about Hopscotch after Vassilli Bykov's presentation "Interfaces Without Tools" at Smalltalk Solutions 2008.

It'll be terribly fascinating if people are actually getting interested in IDEs again. Let the IDE Wars begin!

Thursday, May 15, 2008


Bill de hÓra points out some seeming discrepancies in Jeff Atwood's views on XML - or, reading between the lines, on XML-ish markup languages like HTML. Here's my take:

  • XML is a great way to mark up text.
  • XML is a terrible way to represent data.

I see no discrepancies here. My view of XML as a text markup language is biased given that I used GML for many years, since 1983 or so, to produce documentation.

Likewise, though I've often pooped all over Java, claiming it's a terrible language, I think there is (or was anyway) a place for it:

  • Java could have been a great replacement for C, in many cases.
  • Java is a terrible language with which to build applications.

I'd no more want to build a web or desktop app with Java (read: C), than I would want to write a sound driver in JavaScript.

Sunday, March 09, 2008

nokia n800

I recently succumbed to my gadget lust and purchased a Nokia N800, which bills itself as an "Internet Tablet". I'd classify it as a PDA.

So first, why choose this thing over some of the more likely gadget candidates? I've had people ask me about the following devices:

  • iPhone or iPod Touch
    I'm a contrarian; you wouldn't expect me to get the same thing everyone else has, would you? I was hoping for something a little less locked down, that had memory slots, etc. The iPhone itself was definitely out because I can't justify spending that kind of money on a monthly cell phone bill.

  • Nokia N810
    Since the N810 is the successor to the N800, it would seem to have been a natural choice in terms of getting the latest and greatest. But you forgot to factor in my cheapskate-ness. The N800 is a lot cheaper. And the N810 didn't seem to have enough features I wanted / needed to justify the additional cost. Perhaps the GPS.

  • Asus EEE
    Too big. Otherwise, would have been a no-brainer. I want something that will fit in my man-bag, and the N800 is almost the exact same size as a Nintendo DS (it's a little thinner), which is a perfect size.

So, here are some of the things I like about this device:

  • Big (for a PDA) hi-res screen; 800 x 480 resolution, 3.5" x 2.0". Rendered text looks gorgeous.

  • Two SD slots. After upgrading the OS to OS 2008 (from OS 2007, which it shipped with), you can use SDHC cards. I have a 4GB card in now, and an 8GB card on order.

  • Bluetooth, including audio support. Perhaps even A2DP, can't tell for sure.

  • Built-in stand so the device can "sit up" instead of just laying down.

  • Mini USB jack; provides access to the 'internal' SD card as a file device.

  • Development kit freely available.

It's not perfect though:

  • With that big hi-res screen, I thought I'd be able to watch some (relatively) hi-res video. Like perhaps a 4:3 video rendered at 640x480. For example, the Cosmos DVD set I got for Christmas a few years ago that I haven't finished watching. Well ... it would appear the box can't really drive video at that resolution cleanly. I'm still experimenting with HandBrake and QuickTime Pro to figure out what I'm willing to live with in terms of quality and size, but I'm getting some results I'm pleased enough with at a resolution of 320 x 240. But not being able to drive video at, at least 640x480, is a bummer.

  • OS 2008 includes a "Mozilla-based" web browser (previous versions shipped a version of Opera), so I had some hope of being able to run something like Google Reader on the device. My big worry was the screen size, and that does end up being a problem for that particular app. But there's a bigger problem, and that's the speed. Running Google Reader was like watching it run on a desktop in slow motion. In fact, the N800 might serve as a useful debugging device (a slower-downer, as it were). You'll largely want to stick to 'mobile' web sites for this device, and I've cataloged a few of them here.

  • On my previous PDAs, all Palms, I used the wonderful iSilo 'e-Reader' program to do off-line reading of HTML files and scraped web sites. It's really a fantastic program, and I've been spoiled. For the N800, your text-reading options are: the web browser (HTML files), a PDF reader, and you can download the popular FBReader program. Problems: for big files, like a book, the browser takes too long to come up. For PDF files, unless the PDF file has been rendered pretty much like a PowerPoint presentation, you'll only be able to read it with the postage-stamp-sized 'view' window which you drag over the page; painful. FBReader? All I can say is, I've used iSilo, and it would be difficult to use something with less function. I may have to actually write some code or something here; I think this device would make for an excellent eReader platform.

  • The 'camera' hardly deserves to be called a camera. Here's my first picture from the device, and it will probably be my last. I mean, why bother? Mariano's comment is perfect, especially since he made it before I added the title or description about it coming from the N800.

  • Although it claims to support WPA, WPA2, EAPs, PEAPs, and other parts of the WiFi acronym zoo, I was unable to get the device connected to the IBM network. Bummer. I haven't fully exhausted the options yet, but it's not looking good at present.

All in all, pretty much what I expected. Some good, some bad. If you've not used a PDA before, you will most likely be unhappy with things like the browsing experience. For the casual user, there's actually lots of stuff here that I won't use much, but I'm sure other people would: Skype and Gizmo, IM, Email client, RSS client. And of course, Doom.

Saturday, January 19, 2008

schemas are good?

I'm not a "DB guy", by any stretch of the imagination, but something about DeWitt and Stonebraker's "MapReduce: A major step backwards" post seemed immediately ... wrong. Greg Jorgensen's response, "Relational Database Experts Jump The MapReduce Shark", points out at least one obvious thing - they're comparing apples to screwdrivers to a certain extent.

But there was one thing that jumped right out at me, from the original blog post:

"Schemas are good."

Talking specifically about database schemas here, just to be clear :-) And my immediate thought was: That's Not Right. Now, perhaps I'm confusing you; haven't I been yapping for ages about meta-data and schema and how it's going to save the world? Yes I have.

I bristle at the statement, "schemas are good", in relation to DB schemas, in the same way I bristle at folks who make claims about all the benefits of strongly typed languages compared to dynamically typed languages. There's value in being able to talk about types at interface boundaries, where I need to make a contract / commitment about data a program accepts and/or returns, because other people will be using these programs. But most of the code I'm writing is NOT a public interface, nor is the layout of my database; they're both implementation details. Enforcing strict typing at these levels is just getting in my way.

Here's the description in the original paper about why "schemas are good":

"The DBMS community learned the importance of schemas, whereby the fields and their data types are recorded in storage. More importantly, the run-time system of the DBMS can ensure that input records obey this schema. This is the best way to keep an application from adding "garbage" to a data set. MapReduce has no such functionality, and there are no controls to keep garbage out of its data sets. A corrupted MapReduce data-set can actually silently break all the MapReduce applications that use that data-set."

In theory, I agree. It's usually nice to keep your DB clean. But we've also moved beyond the days when applications talked directly to the databases; the smart money seems to be on exposing your 'data' via a set of services, presumably whose implementation uses a database (or persistence service). Or at least you've probably encapsulated your data access in some kind of library. Those seem like perfectly good places to put your data validation and introspection.
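As a sketch of what 'validation at the service layer' amounts to (the field names and function here are invented for illustration):

```python
# Hypothetical sketch: the "schema" lives in the data-access layer
# rather than the database. Records that don't match the expected
# shape are rejected before anything gets persisted.
EXPECTED_FIELDS = {"href": str, "etag": str}

def validate_record(record):
    """Raise ValueError unless record matches the expected shape."""
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record:
            raise ValueError("missing field: %s" % field)
        if not isinstance(record[field], ftype):
            raise ValueError("wrong type for field: %s" % field)
    return record
```

The point being: the garbage-keeping-out job the DBMS folks attribute to schemas can just as well live at the service or library boundary, where you control it.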

And then, of course, not everyone wants to keep their database clean in the first place.

I thought it was interesting that they didn't mention performance, which you might guess would be another positive aspect of having schemas. Wonder why not? Perhaps having schemas doesn't help performance?

Wednesday, January 16, 2008

IDL vs. Human Documentation

Ping pong; in reply to Steve's post on the same subject ...

Like I said in my previous post, interface definition languages exist for machines to generate code. They're totally inadequate, though, for instructing developers on how to write code to use a service. The need for human documentation in this context isn't quaint or impractical at all - it's simply reality.

Steve also points out the reams of documentation produced by OMG, and produced about WS-*, over the years, as a proof point of this.

He's right. Programmers cannot survive on IDL alone, or more generally, on meta-data. Human language is still often required to express subtleties or non-intuitive aspects of programming libraries, services, etc., simply because we have no other formal means of doing so - or none you'd want to read, if we did. Human-readable documentation is also key to providing overview, introductory, and relationship information.

I appreciate having that sort of documentation. Lots of it.

But here's what I see as reality: Flickr Services.

Overlook the fact that parts of the interface aren't terribly RESTy, if you're a REST purist. But go out on a limb for me, and pretend they did the right thing in terms of REST; it's not a big leap, I don't think. Let's ask some questions. Is there some kind of XML schema that describes the input and output documents of these service definitions? Nope; can't find one. What HTTP status codes might get returned? Not documented, though they define their own error codes returned in the HTTP response payload; that commonly seen pattern is something else we need to talk about, BTW. Are ETags in use? Last-Modified checking? Who knows. etc

So, keep in mind that this documentation is all you've got. I'm going to claim it's inadequate.

Now the kicker: this API is still perfectly usable! In fact, I use it every time I write a blog entry, via a GreaseMonkey script to generate my little flickr images with annotations. It's not rocket science! It's HTTP!

While the documentation isn't completely adequate, it's perfectly acceptable, because I'm only using a small number of the APIs. The fact that it's inadequate is containable. As Steve has previously mentioned, it's easy to do REPL-ish experiments with REST APIs. Heck, Flickr includes an API "Explorer" on their site to let you play, live. Nice!

But here is my fear. It's early days for REST. I want to believe that we'll see lots of people using REST over the coming decade to provide 'API's to their services. Where the promise of DCE, CORBA, WS-* got mired down in their complexities, REST is simple enough to actually be practical.

And that's my tension. It's simple/easy (you pick your semantic) to do simple stuff. I'd like to make sure that it's possible to do hard stuff. What happens when I need to deal with MORE than just one interface, and those interfaces are BIGGER than Flickr's? I feel like we're going to get lost in a sea of differently inadequate documentation. Using meta-data to help describe at least the nuts+bolts layer is something that can help. The trick is to make sure it doesn't get too much in the way, and that you can keep it in sync with your code. My belief is that both are entirely possible. BTW, the link in the sentence above is a couple steps back in this conversation thread, if you've lost your place.

Let me finish this with a challenge. We all agree that documentation is good. Do you think there's some kind of a standard 'template' that we might agree on, that could provide all the practical, required information regarding usage of REST services? All text, prose, for now; just identify the information actually required. That seems like a worth-while goal, especially to try to educate people on the "way of REST" - and I'm talking about RESTy service providers here. Lots of people don't get REST. Let's teach them, by example, and by the way, I'm sure I still have a lot to learn myself.

You never know; I may see that prose and say "My gods. I've been such a dumb-ass! I can see that there's no way to formalize that to some machine readable format!". But I'll bet you a beer, that I won't say that. Take that prose, create an XHTML template that you could style, morph it into a MicroFormat (might not be so micro), create a JSON mapping when JSON takes over the universe this year, etc. Now I got my meta-data and I'll stop yammering on about it (not likely). But baby steps for now. I'd be happy with prose for now. Let's go top-down.

One more final note :-) I wrote this post last night, before Tim Bray posted about ''. Synchronicity? Wouldn't it be great to have a whole site full of patterns, best practices, examples, etc on REST? I've posed this question before, and gotten some positive responses. But some action is obviously required. I can't do it alone, but I'd be happy to help. I'm willing to sign a waiver indicating I won't talk about meta-data for the first six months. Who can pass that up?

phpBB on project zero

My colleague Rob Nicholson posted a blog entry yesterday announcing a significant new accomplishment reached by the Project Zero team working on the PHP interpreter, which is written in Java.

They've got phpBB running on Project Zero!

Hats off to the team; they've been working on the PHP interpreter for just over a year, and I'm quite impressed by the progress they've made.

While Project Zero is a commercial product, our PHP team has been making use of the test cases provided by the community, thanks to the generous license of the tests. I'm proud that the team has been able to contribute test cases back to the community as well; Rob tells me they've contributed more than 1000 tests so far. They've been using the existing PHP test cases to validate the interpreter, and have been contributing tests back as they find undocumented or untested behavior that we need to implement, and for which we need to determine the exact semantics.

phpBB is just the start; the team has been busy working on getting another interesting, non-trivial PHP application running. But they told me to keep my big yap shut, so you'll need to wait a bit longer to find out which one ...

Tuesday, January 15, 2008

on naming

No Name
© Patrick (but not me!)

Check out Stephen Wolfram's latest blog post, "Ten Thousand Hours of Design Reviews". In it, he describes how they run design reviews at Wolfram Research, for their Mathematica product.

The whole article is interesting, but the most interesting bit is in the second half of the blog post, where he talks about naming. He couldn't be more right about that; coming up with good names is hard, really hard. And he nails some of the tensions.

I've come up with some schemes to name projects, based on Google searches on words with letter mutations. That's actually not terribly difficult. But those names are just labels; class names, function/method names, constant names, etc. are things that can have a very long lifetime, if you're lucky. Names that your programming peers will have to put up with. It's great when you can come up with the perfect name.

Unlike human languages that grow and mutate over time, Mathematica has to be defined once and for all. So that it can be implemented, and so that both the computers and the people who use it can know what everything in it means.

on qooxdoo

For some reason, I hadn't really looked at the qooxdoo JavaScript framework until last night. I remember noticing it when Eclipse RAP hit my radar, but I guess I didn't drill down on it. And after browsing the doc, playing with the demos, browsing the code, there are a few things that I really like about the framework.

explicit class declarations - So we know that ActionScript and the new drafts of the JavaScript language include explicit class declaration language syntax. Why? Classes provide a type of modularity that makes it easier to deal with building large systems. There are many ways to get this sort of modularity, and other toolkits provide facilities similar to what qooxdoo does. So why do I like the way qooxdoo does it?

It feels pretty familiar. It's fairly concise. Seems easy to introspect on, if you need to. I'm guessing it made it a lot easier to generate the gorgeous doc available in their API Documentation application.

On the downside, some ugliness seeps through. "super" calls. Primitive Types vs. Reference Types. Remembering to use commas, in general, since the class definition is largely a bag of object literals in object literals.
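The flavor of that "class as a bag of object literals" style can be mimicked with a tiny helper. To be clear, this `defineClass` is a hypothetical illustration of the pattern, not qooxdoo's actual API; it also shows the "super call" ugliness mentioned above.

```javascript
// Hypothetical illustration of declarative class definition in plain
// JavaScript; NOT qooxdoo's real API.
function defineClass(spec) {
  var ctor = spec.construct || function () {};
  if (spec.extend) {
    // classic prototype-chain wiring
    var Tmp = function () {};
    Tmp.prototype = spec.extend.prototype;
    ctor.prototype = new Tmp();
    ctor.superclass = spec.extend;
  }
  // copy instance methods out of the `members` literal
  for (var name in (spec.members || {})) {
    ctor.prototype[name] = spec.members[name];
  }
  return ctor;
}

var Animal = defineClass({
  construct: function (name) { this.name = name; },
  members: {
    speak: function () { return this.name + " makes a sound"; }
  }
});

var Dog = defineClass({
  extend: Animal,
  construct: function (name) {
    Dog.superclass.call(this, name);   // the "super call" ugliness
  },
  members: {
    speak: function () { return this.name + " barks"; }
  }
});
```

Note how easy it would be to forget one of those commas between the object-literal entries.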

'window system' vs. 'page' GUI - Running the demos, it's clear that this is a toolkit to build applications that look like 'native' applications - a typical GUI program you would run on your desktop. Compared to the traditional 'page'-based applications that we've been using since we've been surfing the web.

This is of course a contentious issue. Uncanny Valley and all that. Still, it seems like there's potential promise here, for at least certain types of apps.

we don't need no stinkin' HTML or CSS - Rather than follow the typical pattern of allowing/forcing folks to mix and match their 'GUI' bits of code via HTML, CSS and JavaScript, qooxdoo has a "you only need JavaScript" story. It's an interesting idea - it certainly seems like there's potential value in dealing with just one language instead of three.

Here's some rationale on the non-CSS-based styling.

The downside I see is that the API is fairly large, and without syntax assist, I can see that you'd be keeping that API doc open all the time to figure out what methods to call. This style of programming also seems more relevant for 'window system' UIs compared to building typical web 1.0-style 'page' UIs.

Test Runner - 'nuff said.

lots of documentation - I couldn't find any kind of introduction on the widgets in all this though. Still, this level of documentation seems to be above average, relative to other toolkits.

summary - Cool stuff. I'd wondered when I'd see a 'bolted on' class system in JavaScript that I liked, and this one is certainly getting close. Likewise, I've often wondered about completely library-izing the GUI end of browser clients; the shape here seems about right.

Anyone else played with this?

steve vinoski on schema

2008/01/15: if you happen to read through this blog entry, please also read the comments. Steve felt a bit misquoted here, especially in the title, and I offer up my rationale, and apology, but more importantly, we continue the argument :-)

As someone who believes there is some value for 'schema' in the REST world, I'd like to respond to Steve Vinoski's latest blog post "Lying Through Their Teeth: Easy vs. Simple".

Since I'm more firmly in the REST camp than the WS-* camp, and since I love an underdog, I'll side with schema just for the halibut. It's probably more correct to say that I haven't made up my mind w/r/t schema than I believe there is some value - my 'pro' stance on schema is a gut feel - I've seen no significant evidence to sway me either way.

I definitely feel like schema has gotten a bad rap from its association with WS-*. Well-deserved, to some extent; XML Schema is overly complex, resulting in folks having differing interpretations of some of the structures; WSDL itself is so overly indirect it's horribly un-DRY, while on the other hand being terribly brittle at points, like hardcoding URLs to your service in the document.

So? Yeah, they got it wrong. They got a lot of stuff wrong. Anyone consider we might be throwing the baby (schema) out with the bath water (WS-*)?

I'd say my general reaction to the typical RESTafarian's rant against schema is that it's misplaced, guilt-by-association.

Let's look at some of Steve's arguments:

"After all, without such a language, how can you generate code, which in turn will ease the development of the distributed system by making it all look like a local system?"

That's making a bit of a leap - "making it all look like a local system?". That's pretty much what the WS-* world did. Why would he think someone who is pro-schema would necessarily want to repeat the mistakes of the WS-* world? I don't want to!

Anyone who's done any amount of client-side HTTP programming is going to realize there's a fair amount of boilerplate-ish code here. Setting up connections, marshalling inputs, making requests, demarshalling outputs, etc. Want caching? How about connection pooling? You're going to be writing a little framework, my friend, if you don't have one handy. And once you've done that, perhaps you'd like to wrapper some of the per-resource operation bits in some generated code, just to make your life easier. And note, I count things like dynamic proxies as code generation; it's just that it isn't static code generated by a tool that you have to re-absorb back into your application.
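Here's a back-of-the-envelope sketch of the kind of per-resource wrapper you end up hand-rolling. The `Resource` name and the request shape are invented for illustration, and the actual transport (XMLHttpRequest or whatever) is stubbed out as a `send` callback.

```javascript
// Hypothetical sketch of hand-rolled REST client boilerplate; the
// transport itself is passed in as `send` so nothing here hits the wire.
function Resource(baseUri, send) {
  this.baseUri = baseUri;
  this.send = send;   // function(request) -> response
}

Resource.prototype.buildRequest = function (method, path, body) {
  // marshalling inputs: here, just JSON-encoding the body
  return {
    method: method,
    uri: this.baseUri + path,
    headers: { "Accept": "application/json" },
    body: body === undefined ? null : JSON.stringify(body)
  };
};

Resource.prototype.get = function (path) {
  return this.send(this.buildRequest("GET", path));
};

Resource.prototype.put = function (path, body) {
  return this.send(this.buildRequest("PUT", path, body));
};
```

Every resource you talk to needs some variation of this; the temptation to generate such wrappers from a description is obvious.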

And guess what? Your generated code doesn't have to hide the foibles of the network, if it doesn't want to. And as Steve implies, it shouldn't.

"trying to reverse-map your programming language classes into distributed services, such as via Special Object Annotations, is an attempt to turn local design artifacts into distributed ones"

Again, no. Schema and "hiding remoteness" are two separate things, they aren't necessarily related.

Similarly, with REST, you read the documentation and you write your applications appropriately, but of course the focus is different because the interface is uniform.

The documentation? What documentation? I'm picturing here that Steve has in mind a separate document (separate from the code, not generated from the code) written by a developer. But human-generated documentation like this is still schema; it's just not understandable by machines, pretty much guaranteed to get out of sync with the code, and probably incomplete and/or imprecise. Not that machine-generated schema would necessarily fare better, but it couldn't be any worse.

But there are more problems with this thought. The notion of hand-crafted documentation for an API is quaint, but impractical if I'm dealing with more than a handful of APIs. In fact, the "uniform interface" constraint of REST pretty much implies you are going to have a greater number of "interfaces" than had you defined the functionality as a service (or set of services) in WS-*. Though presumably each REST interface has a smaller number of operations. Of course it would be nice to have this documentation highly standardized (at least given a related 'set' of documented interfaces), available electronically, etc. I don't see that happening with documentation generated by a human.

Another problem here is that although the "uniform interfaces" themselves will be generally easy to describe - "GET returns an Xyz in the HTTP response payload" - the format of the data is certainly not uniform. Is this sufficient documentation for data structures sent in HTTP payloads? It's not for me. Documenting data structures 'by example' like that seems to be the status quo, and is of course woefully inadequate.
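To make the contrast concrete, here's a toy sketch of what even a minimal machine-checkable description of a payload buys you over documentation-by-example. The schema format here is invented for illustration; real candidates would be XML Schema, RELAX NG, etc.

```javascript
// Toy, invented schema format: field name -> expected typeof.
// Purely illustrative; not any real schema language.
var photoSchema = {
  id:    "number",
  title: "string",
  tags:  "object"   // arrays are typeof "object" in JavaScript
};

// Return a list of problems with `payload` relative to `schema`.
function validate(schema, payload) {
  var errors = [];
  for (var field in schema) {
    if (!(field in payload)) {
      errors.push("missing field: " + field);
    } else if (typeof payload[field] !== schema[field]) {
      errors.push("wrong type for: " + field);
    }
  }
  return errors;
}
```

Even something this crude can be checked mechanically on every request and response; an example payload in a document can't.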

Lastly, I'll point out that I think the primary reason for having some kind of schema available is specifically to generate standardized, human-readable documentation. But not the only one. I think there are opportunities to have machine-generated 'client libraries', machine-generated test suites, many-flavored documentation (PDFs, web-based, man pages, context-assisted help in editors, etc.), validators, assembly tools for use by non-programmers (think Yahoo! Pipes), etc.

What the proponents of definition languages seem to miss is that such languages are primarily geared towards generating tedious interface-specific code, which is required only because the underlying system forces you to specialize your interfaces in the first place.

Right. Except for the data, which will be different; and URL template variables. Don't forget that sometimes you'll need to set up cache validators, and maybe some special headers. Face it, there's plenty of tedious code here, especially if you're making more than onesy-twosey requests. Especially if you're in a low signal-to-noise-ratio language like Java. Tedious code is no fun. Why not framework-ize some of it?
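Even the URL-template-variable handling is a small framework in the making. A sketch (the `{name}` syntax here is just for illustration, not any particular spec):

```javascript
// Expand "{name}" variables in a URL template; illustrative syntax only.
function expandTemplate(template, vars) {
  return template.replace(/\{([^}]+)\}/g, function (match, name) {
    if (!(name in vars)) {
      throw new Error("missing template variable: " + name);
    }
    return encodeURIComponent(vars[name]);
  });
}
```

Write this once per project, or get it generated from a description; either way, somebody is writing it.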


Even though I have this feeling that schema will be of some value in the REST world, I actually welcome having the argument about it. Absolutely, assume we don't need it until we find a reason that we really need it. We haven't yet hit the big time with REST, so we don't know whether we'll need it or not. When we see APIs on the order of eBay's, only REAL REST, then we'll know REST has hit the big time.

For right now though, none of the anti-schema arguments I've ever heard is very compelling to me.

In the meantime, it's also refreshing to see folks experimenting with stuff like WADL. If we end up needing / wanting schema, it would be nice to have some practical experience with a flavor or two, for REST, before trying to settle on some kind of universal standard.