pmuellr is Patrick Mueller, Senior Node Engineer at NodeSource.


Thursday, December 13, 2007

thoughts on appjet

AppJet. Have you heard of it?

There are plenty of rocks that can be thrown here.

  • All my code needs to go in ONE file?
  • No static resources?
  • Seemingly overly-general db approach, probably yielding terrible performance
  • Where's my debugger?

But there's a whole lot more to like than dislike with AppJet. Like most of the new-fangled things on the web, I enjoy looking and learning, as a breath of fresh air in spaces that we've become too comfortable with; spaces where we've not done enough thinking outside the box (hello, relational databases!). That said, here are some thoughts.

I've been kidding my Rational brethren for years that Eclipse is dead *, everything's happening on the web, and we'll be using web-based IDEs in the near future. A chuckle generator for sure. Time's up though, folks. The future is here. Or at least getting very, very close.

There are lots of challenges in this space, as you can see from even a bit of playing around with AppJet. The code editing is flakey. (Peruse it with Firebug; given that the editor is implemented by rendering HTML DOM, I'm amazed it works as well as it does.) I need to be able to edit more than one resource - I certainly don't want all my code lumped in one file; actually, I don't care if it's one file in the end, but I don't want to see it that way in the IDE. And clearly we need to be able to edit other things like HTML bits, CSS bits, etc.

It's pretty clear to me that Eclipse is too much, and AppJet is not enough, in terms of IDE capabilities, for a space like this. Whadya think - is it easier to remove function from Eclipse, or add function to AppJet? And I worry that Eclipse is veering off on a tangent (runtimes) when there is clearly plenty of work to be done in the IDE space; maybe not in the Java-hosted IDE space - what more does Eclipse really need? But what about spreading the love a little?

The code editing itself was flakey enough that I ran this on Safari just so I could see how well "Edit in TextMate" would work. It did work, but then I have two separate applications open for my "IDE", and that quickly got out of sync. Here's what I'm wondering. If these new-fangled client-based VMs (Silverlight, Flex) are really up to snuff, it should be possible to build a new text editor component with them. From scratch, painting glyphs on the canvas with low-level graphics calls. How do you think Eclipse's rich text editors work, anyway?

Then there's the database. From 50K feet, there's some similarity to couchdb here. The programming language is JavaScript, the objects stored are constrained JavaScript objects, etc. Lots of dissimilarities as well; with couchdb you have a single 'document store' with associated views; with AppJet you have multiple StorageContainers. With couchdb, alternate 'views' of the database (filtered, keyed differently, etc.) are available both transiently and persistently; with AppJet, the views are always transient (a classic time vs. space trade-off. Hint: storage is sometimes cheaper than time).

Just in terms of raw db functionality, what's nice is that there's not much there; just a set of simple operations you can perform against the db. You have to imagine the performance is going to be terrible on this, especially the filter/view operations. But it's clearly an interesting angle to take in the db space, and one I welcome. SQL has always given me the cold pricklies. I hope that the work that's gone on in recent years introducing XML (another thing you happily won't find in AppJet) into the database world, might have paved the way for JSON and JavaScript. Are you listening, Anant?
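To make the time-vs-space worry concrete, here's a minimal Ruby sketch (Ruby rather than AppJet's JavaScript, and all the names are mine, not AppJet's) of a storage container whose 'views' are transient filters:

```ruby
# Minimal model of an AppJet-style storage container (hypothetical
# names, not the real AppJet API). Objects are stored by id; a "view"
# is a full scan with a predicate -- O(n) on every call, which is
# exactly why you'd expect filter/view performance to be terrible.
class StorageContainer
  def initialize
    @objects = {}
    @next_id = 0
  end

  def put(obj)
    id = (@next_id += 1)
    @objects[id] = obj
    id
  end

  def get(id)
    @objects[id]
  end

  # Transient view: recomputed from scratch each time, trading time
  # for space (couchdb can instead persist the view).
  def filter
    @objects.values.select { |o| yield o }
  end
end

db = StorageContainer.new
db.put(name: "couchdb", persistent_views: true)
db.put(name: "appjet",  persistent_views: false)
transient = db.filter { |o| !o[:persistent_views] }
```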

This space, in general, has lots of low-hanging fruit to be picked; it's great to see things like AppJet making a run for it. I certainly hope that someone doesn't buy them and then send them to a dark place.

For more info, Dion Almaer posted a short interview with an AppJet dev yesterday.

* update on 2007/12/13 at 1:00pm

Of course, no direct offense to Eclipse meant here; I use it on a daily basis, and simply couldn't live without it. I meant to implicate the whole notion of desktop-based development, Eclipse being my flavorite for Java, in light of our new web-based IDE overlord takeover; or at least the possibility of that happening. I'll try to attack things more broadly in the future. :-)

Tuesday, November 20, 2007

rest won, what now?

So, REST won, what now?

Design tools.

Ugh, you're saying! We don't need no stinkin' tools! We're tough men and women, we use text!

Yeah, well, even fuzzy seemed to like these drawrings (sic; remembering Simon from SNL). And I have to admit, there's certainly some value in being able to visualize services like this.

The images though, are quite crude. You'd see much better stuff coming out of something like Rational Software Architect. Can it model REST services like this? Would be nice.

But even just visualizing these isn't quite enough. I might want to actually develop my services in a tool like this. Look at the pictures again; not much there; a URL template, an HTTP verb, a mime-type indicating input and output data. That's just meta-data, stuff that you could easily extract from a service definition. We don't need to go very deep to be able to reason about that kind of stuff.
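To make the "just meta-data" point concrete, here's a sketch of what that extracted service definition might look like; the structure and all the names here are hypothetical:

```ruby
# The meta-data a REST design tool would need per operation: a URL
# template, an HTTP verb, and media types for input and output.
# (Everything here is made up for illustration.)
ServiceOp = Struct.new(:verb, :url_template, :consumes, :produces)

ops = [
  ServiceOp.new("GET",    "/accounts/{id}", nil,               "application/xml"),
  ServiceOp.new("POST",   "/accounts",      "application/xml", "application/xml"),
  ServiceOp.new("DELETE", "/accounts/{id}", nil,               nil),
]

# A visual tool could render each op as a box, or group the ops by
# resource by stripping the template variables.
by_resource = ops.group_by { |op| op.url_template[/\A\/\w+/] }
```

Not much to it, which is the point: there's enough here to draw the pictures, and to go bi-directional later.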

It's easy to imagine even being able to go bi-directional on it. Start by drawing your services, implement the code, and then iterate, modifying your service definitions and having a visual tool pick up the changes. Easy pickings.

Let's take it a step further; imagine having your REST services available in a palette that you can connect up to some kind of user interface builder. How hard is that? Easy to imagine anyway.

And that's where we need to get. For all the grumbling and debates over the complexity of implementing mutating REST services, that's actually the easy part. Building a user interface around usage of those services is the hard part. Building UIs has actually gotten harder over the years, since we have much more flexible user interface capabilities at hand. Built a spiffy web-based user interface in HTML recently? Not easy. Integrating a lot of RESTy service invocations into this mix just adds even more complexity. I think we can make things a little easier.

Tools like TIBCO General Interface point the way to how us mortals will be building web-based user interfaces in the future. The main problem with TIBCO GI is that even it's too complicated; folks would be willing to live with something less rich, yielding a cruftier user interface for their application, if you gave them something an order of magnitude easier to use. It's coming. It needs to be coming.

Until we're there, it's time to start thinking about what we can do to make life easier later. Basically, we need to start figuring out how to describe our services and data in a machine-readable way. If I were dumb, I'd bring up the word 'schema', but ... I won't. Instead, I'll bring up the trendy, happy word 'microformats'. Documenting our web services in hand-crafted HTML, or Visio diagrams, isn't enough.

Thursday, November 15, 2007


In "iPhone and text entry" K1v1n complains about the iPhone / iTouch 'keyboard', and wonders if instead of spending time correcting the endemic finger checks, "Can we libe with good eoifg?"

No, we can't.

Here is a step in the future direction we'll be going with mobile keyboards: FrogPad. Obviously not good enough; it's too big. I'm thinking I'd want something like the stick of a joystick, that I could grip and do entry with my fingers, with buttons where the fingers rest. A TrackPoint on the end, controllable via your thumb, for moving the 'cursor'. Hell, I'll live with one finger button and learn Morse code.

Anything is better than a soft keyboard. Multi-tap is better than a soft keyboard.

The funniest thing to me is the assumption that a soft keyboard is acceptable at all. We've been stuck with this same damn keyboard layout for over 100 years. I think we can move on. Apple, you're supposed to be innovative, right? Let's see some real innovation, please.

Once we have that nasty 'keyboard' problem solved, and since we already have the audio problem solved, the last problem is video. That solved, I can just keep my iBoxen, which only needs an on/off switch and maybe a power plug, in my pocket or man-purse, where it belongs.

We're not talking about flying cars here folks.

BTW, the FrogPad looks like it would be fun to try as a MIDI controller, except for the fact that it costs $150.

Tuesday, November 13, 2007

initial android thoughts

android wallpaper thumbnail

Some initial, random thoughts / wild speculation on Android, given the limited amount of information currently available.

  • I don't think the mobile app 'problem' has ever been about the programming experience. Frankly, all the wonky tools, languages, frameworks are nothing but an enticement for interested hackers. More puzzles. Certainly, that's why I got involved with these guys about ten years ago. I'll go ahead and make the obligatory Smalltalk reference now, and then just shut up about it, mkay?

  • No, the problem with the mobile app 'problem' has always been about lock-in by the various entities. Service providers that charge exorbitant prices for network data access; that don't allow the download and install of Joe Rando application, instead monetizing the application story; not even having a toolkit available to build apps in the first place. Maybe times have changed since I stopped paying attention to the market, but I doubt it. Perhaps Google, and indirectly Apple, can help break down some of these barriers. That's the real problem to be solved. Or perhaps Google could more directly fix the problem. I should be able to write an app for my mom that she can easily install on her plain old phone. That's got to be the long term story, anyway.

  • Android is Java, and not Java. You write your application in Java, but it's translated to Dalvik format for running on the Dalvik VM. The Dalvik VM is not a Java VM, presumably.

  • Dalvik is a register-based VM, instead of a stack-based VM. What does that mean? A friendly link in the android mailing list points to a document describing the Inferno VM. Note one of the authors, Rob Pike. How conveeeeenient. :-) BTW, he's got a great email address!

  • I've heard folks claim "it's not J2ME", or whatever it's called today. Not quite right. Esmertec says it will support J2ME per customer request. Presumably, customers being service providers. But you could imagine Sprint charging you a few bucks for a MIDlet capability. Of course, your MIDlets will have to be translated to Dalvik VM format, presumably, before they're run.

  • There are actually three J2ME-ish packages included in the android.jar file that shows up in Eclipse when you create an "Android project". javax.microedition.[khronos.opengles|lcdui|], though the packages appear to be radically stripped down. Why are they even there? The khronos.opengles package appears to be 'public', not sure why the others are there.

  • Apache Harmony classes are included. Any Java lover that ever dissed Apache Harmony as pointless, needs to eat crow right now. Because, had it not been for Apache Harmony, you might be programming Android in Python instead of Java. Or C++.

  • Includes an XML pull parser. Thank &deity;! Well, it's in the android.jar used in Eclipse, but I don't see how you get to it via a published API.

  • Includes SQLite. I'd rather have seen LittleTable. heh

  • The .class files in the jars seem to have (or at least some have) debug information. Decompile away! Though you're probably not allowed to per whatever click-through license you clicked through.

  • Ian Skerrett has a blog post titled "What Does Android Mean for Sun's OpenJDK" which is pretty self-descriptive. Read up, if you're interested in licensing hoo-haa, especially the Java variety.

  • Miguel de Icaza has some thoughts on "Android's VM" and Mono. We'll see a lot more of this, including folks wanting to target existing languages to the Dalvik VM, presumably. I wonder if any of the PyPy developers work at TheGoog?

  • Could 'typical' JavaDoc have been so hard to build?

  • I got to install Gutsy in a VMware session on my MacBook, and then Java, and then Eclipse, and then the Android SDK, then the Android plugin. Because there's a little problem running the Android plugin in Eclipse for some Mac users. Perhaps that's why there's a soft keyboard in the emulator?

  • Personally, I think Google is shooting a bit low here; the platform sounds like it would be good enough to run little 'mini-apps' on your desktop. Like say, AIR does. Or like Google Gadgets.

  • It's not a real platform until it has IDL. Woo Hoo!!
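As an aside on the register-based vs. stack-based VM point above: here's a toy Ruby illustration of the difference. This is NOT real Dalvik or JVM bytecode; the comments are just evocative mnemonics.

```ruby
# Computing a + b two ways, to show the shape of the two VM styles.
a, b = 2, 3

# Stack machine: operands flow through an implicit stack.
stack = []
stack.push(a)                      # "push a"
stack.push(b)                      # "push b"
stack.push(stack.pop + stack.pop)  # "add" -- pops two, pushes the sum
stack_result = stack.pop

# Register machine: each instruction names its operands directly.
regs = { v0: a, v1: b }
regs[:v2] = regs[:v0] + regs[:v1]  # "add-int v2, v0, v1"
register_result = regs[:v2]
```

Same result either way; the interesting differences are in instruction count, instruction size, and what's easy for an interpreter to dispatch.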

Friday, November 09, 2007

how buildings learn

My hall-mate Bill Higgins has often recommended the book "How Buildings Learn" by Stewart Brand as "the best book about software design that never mentions software". He's right, it's a great book, and although it doesn't talk about software, there's a lot to take away from a software perspective.

In general, Brand's premise is that most buildings have fairly long lives, and end up serving multiple distinct uses during their lives. Those uses end up forcing the buildings to change, and much of the book discusses aspects of buildings that work in favor of buildings successfully changing to suit new uses, and factors that work against it.

In terms of software, I think one of the sins we most frequently commit is not future-proofing our products. If your product is successful, it might well have a very long life. Have you constructed it such that it can easily be modified to handle future requirements placed on it? The answer is typically, no. For example, see Steve Northover's "API grows like fungus!".

I don't want to spoil the book for you, but I will advise you to do this: after reading a section of the book, spend a little time imagining how what you just read applies to software development. It doesn't all apply; but a lot does.

If you've already read the book, you'll get a kick out of the following article from Slashdot this week:

MIT Sues Frank Gehry Over Buggy $300M CS Building

If you haven't read the book, here's a little spoiler; Brand spends a number of pages complaining about a different "modern architecture" building on MIT's campus, the Wiesner Building. Brand also spends a fair bit of time complaining about problems endemic to modern architecture, including leaks, and includes the following quote about Frank Lloyd Wright:

His stock response to clients who complained of leaking roofs was, "That's how you can tell it's a roof."

Note Brand's book was published in 1994; construction on the Stata Center (Gehry's building) began in 2001. The Stata Center replaced Building 20, which was the 'vernacular' building that Brand praised so highly. Buildings might learn, but people apparently don't.

The only real complaint I have is the form factor of the book. It's 11" wide by 8.5" high (or so). The only comfortable place I've found to read it is in bed. I assume the layout was chosen because of the large number of photos included in the book; it might well have been the only practical way to publish it. But it does seem a bit ironic, for a book that is pushing "function over form", to be such a difficult physical read.

Highly recommended.

defining simple

Let me expound a bit on "simple", since Sam referenced me.

  • I consider myself a Ruby n00b.

  • I'm very familiar with HTTP.

  • I'm pretty familiar with AtomPub, and the Blogger posting interface is basically AtomPub-ish.

Most of the time spent writing the feedBlogger.rb script was translating the pseudo-code in my head into Ruby. Also figuring out the ClientLogin protocol. The Blogger Data API Reference Guide was all I really needed for the business logic. Not much there, but there's not much to know, assuming you're already familiar with Atom and AtomPub.

So keep in mind, this is fairly low-level code, executing HTTP transactions pretty close to the metal, from someone familiar with doing that. And it's a one-off program; I didn't worry about error checking, caching, etc. - things that a program which isn't a one-off would likely have to concern itself with.
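For the curious, the flavor of that close-to-the-metal code looks something like this. This is a sketch, NOT the actual feedBlogger.rb; the blog id and auth token are fake, and the headers are the ones the Blogger/GData docs describe:

```ruby
require "net/http"

# Build (but don't send) a POST of an Atom entry to a Blogger feed
# URL, authenticated with a ClientLogin token.
def build_post_entry(feed_url, auth_token, title, html_body)
  uri = URI(feed_url)
  req = Net::HTTP::Post.new(uri.path)
  req["Authorization"] = "GoogleLogin auth=#{auth_token}"
  req["Content-Type"]  = "application/atom+xml"
  req.body = <<~ATOM
    <entry xmlns="http://www.w3.org/2005/Atom">
      <title type="text">#{title}</title>
      <content type="html">#{html_body}</content>
    </entry>
  ATOM
  req
end

req = build_post_entry(
  "http://www.blogger.com/feeds/0000000/posts/default",  # fake blog id
  "FAKE_TOKEN", "hello", "&lt;p&gt;world&lt;/p&gt;"
)
# Sending it would be:
#   Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
```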

If you're lucky enough to not be working in Ruby, you might have gotten by with one of the Google Data API Client Libraries, available for Java, .NET, PHP, Python, Objective-C (???), and JavaScript. Not sure if you could really do what I did with those libraries or not; since I couldn't use them, I didn't even look at them.

Now, as a thought experiment, let's consider if the Blogger API was actually WS-* ish rather than RESTful.

  • In theory, I could take a WSDL description of the API, and use it with a dynamic WSDL/SOAP library, if one exists for Ruby (I know one exists for PHP).

  • Or I could generate some stubs in Ruby that I could call as functions, assuming there is a stub-generator program available for Ruby.

  • Or knowing HTTP pretty well, as well as SOAP (not much there), as well as being able to translate WSDL into an HTTP payload because I'm familiar with THAT story, I could have written it in a low-level style as well.

The problem with the first option is dealing with a thick runtime stack that I might have to debug because it's doing a lot of stuff. The typical problem is dealing with the XML payloads; getting all the data generated the way the server expects it. I've had to deal with that many, many times over the years.

The problem with the second is that there's additional stuff I have to do, and maintain, for my simple little program. Instead of just a script, I'd have to keep the WSDL, the build script to generate the stubs, and the stubs themselves. And even then I'm still left with a thick runtime library I might have to debug (see paragraph above).

The third option is just more work I'd have to do, and generally yuckifying the client program even more.

However, let me throw a little water on the REST fire. The Google Data API Client Libraries suffer some of the same issues as WSDL. In particular, while I think people have the impression that RESTful clients can get by with just a decent HTTP library, the fact of the matter is, that's too much for some folks, and having client libraries available is a nice thing - libraries which I will then have to maintain along with my programs.

And I did have to do some squirrelly stuff with the XML payload I was sending to Blogger; bolting the Atom xmlns onto the entries. There's probably a cleaner way to do this, so it might just be my unfamiliarity with REXML.
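For reference, the "bolting" amounts to something like this with REXML (this is one way to do it; there may well be a cleaner one):

```ruby
require "rexml/document"

# An entry plucked out of some other document has no xmlns, so bolt
# the Atom namespace on before sending it off.
entry = REXML::Document.new("<entry><title>hi</title></entry>").root
entry.add_namespace("http://www.w3.org/2005/Atom")
# entry.to_s now carries xmlns='http://www.w3.org/2005/Atom'
```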

The big difference, to me, is that REST is less filling, as opposed to tasting great; i.e., there's still a fair bit of highly technical skill needed to use this stuff, there's just a little bit less with REST compared to even simplistic WS-* WSDL/SOAP usage.

moving back

As I mentioned on my developerWorks blog, I've decided to do my blogging back here at Blogger.

As part of that process, I set up a virtual feed at FeedBurner, so that when I move next time, it'll be completely transparent, for feed aggregators anyway.

Moved my content over with a simple matter of programming.

Monday, November 05, 2007

RubyConf 2007

A few notes from RubyConf 2007:

(The sessions at RubyConf 2007 were recorded by Confreaks, and will hopefully be available soon on the web. I'll update this post with a link when they are.)

  • Ropes: an alternative to Ruby strings by Eric Ivancich
    Interesting. Strings have always been a problem area in various ways, and SGI's ropes provide an interesting solution for some use cases. In particular, Strings always show up as pigs in Java memory usage, what with the many long-ish class names and method signatures that have lots of duplication internally; I wonder if something like ropes might help.

    See notes by James Avery.

  • Ruby Town Hall by Matz
    Didn't take any notes, but the one thing that stuck out for me was the guy from Texas Instruments who said they were thinking of putting Ruby in one of their new calculators. He asked Matz if future releases would be using the non-GPL'd regex library, as the TI lawyers were uncomfortable (or something, can't remember the exact words) with it (the licensing). See also the notes under Rubinius, below, regarding licensing.

    But the big news for me, from this question, was the calculator with Ruby. Awesome, if it comes to be. I talked to the guy later and he indicated it was, as I guessed/hoped, the TI-Nspire. Sadly, he also indicated the calculator probably wouldn't be available till mid-2008.

    See notes by my evil twin, Rick DeNatale.

  • IronRuby by John Lam
    Didn't go into enough technical depth, sadly. And the technical presentation included info on the DLR and then XAML, which I don't think were really required. John had a devil's tail attached to his jacket or pants, which appeared about half way through the presentation. Really, I think everyone seemed to be quite open to IronRuby, no one seems to be suggesting it's evil or anything. Are they?

    See notes by Nick Sieger.

  • JRuby by Charles Nutter and Thomas Enebo
    Lots of good technical info. Tim Bray did a very brief announcement at the beginning about some kind of research Sun is doing with a university on multi-VM work; sounded like it didn't involve Java, and given the venue, I assume it had something to do with Ruby. Sounds like we'll hear more about it in the coming weeks.

    The JRuby crew seems to be making great progress, including very fresh 1.0.2 and 1.1 beta 1 releases.

    One thing that jumped right out at me when the 'write a Ruby function in Java' capability was discussed, was how similar it seemed to me to what I've seen in terms of the capabilities of defining extensions in the PHP implementation provided in Project Zero. That deserves some closer investigation. It would be great if we could find some common ground here - perhaps a path to a nirvana of defining extension libraries for use with multiple Java-based scripting languages?

  • Rubinius by Evan Phoenix
    Chad Fowler twittered: "Ruby community, if you're looking for a rock star, FYI it's Evan Phoenix. Please adjust accordingly. kthxbai".

    I happened to hit the site a few times this weekend, and at one point noticed the following at the bottom of the page: Distributed under the BSD license. It's been a while since I looked at the Ruby implementations, in terms of licensing, but I like the sound of this, because I know some of the other implementations' licenses were not completely permissive (see Ruby Town Hall above). Ruby has still not quite caught on in the BigCo environments yet, and I suspect business-friendly licensing may be needed to make that happen. It certainly won't hurt.

    See notes by Nick Sieger.

  • Mac OS X Loves Ruby by Laurent Sansonetti
    Oh boy, does it ever. Laurent covered some of the new stuff for Ruby in Leopard, and had people audibly oohing and ahhing. The most interesting was the Cocoa bridge, which allows you to build Cocoa apps in Ruby, using XCode, which (now?) supports Ruby (syntax highlighting, code completion?). Most of the oohing had to do with the capability of injecting the Ruby interpreter into running applications, and then controlling the application from the injector. Laurent's example was to inject Ruby into TextEdit, to create a little REPL environment, right in the editor. Lots of ooh for the scripting of Quartz Composer as well.

    Apple also now has something called BridgeSupport, which is a framework whereby existing C programming APIs (and Objective-C?) are fully described in XML files, for use by frameworks like the Cocoa bridge, as well as code completion in XCode. That's fantastic. I've had to do this kind of thing several times over the years, and, assuming the ideas are 'open', it would be great to see more people step up to this, so we can stop hacking C header file parsers (for instance). And I think I could live with never having to write a Java .class file reader again, thankyouverymuch.

    I suspect all this stuff is available for Python as well.

    Laurent also showed some of the DTrace support. No excuse not to look at DTrace now. Well, once I upgrade to Leopard anyway.

    Someone asked "Will Ruby Cocoa run on the iPhone?" Laurent's reply: "Next question". Much laughter from the crowd. Funny, in a sad way, I guess.

    See notes by Alex Payne.

  • Matz Keynote
    Matz covered some overview material, mentioned Ruby will get enterprisey: "The suit people are surrounding us". He then dove into some of the stuff coming in 1.9. Most of it sounds great, except for the threading model moving from green threads to native threads, and a mysterious new loop-and-increment beast, which frankly looked a bit too magical to me. The green vs. native threads thing is a personal preference of mine; I'd prefer that languages not be encumbered with the threading foibles of the platform they're running on. Green threads also give you much finer control over your threads. On the other hand, given our multi-core future, there's probably no way to avoid interacting with OS-level threads at some level.

  • Behaviour Driven Development with RSpec by David Chelimsky and Dave Astels
    I really need to catch up on this stuff, I'm way behind the times here. They showed some new work they were doing that better captured design aspects like stories, including executable stories, with a web interface that can be used to build the stories. That's going to be some fun stuff. Presentation available as a PDF.

    See notes by Nick Sieger.

  • Controversy: Werewolf considered harmful?
    Charles Nutter wonders if the ever-popular game is detracting from collaborative hack-fests. The game certainly is quite popular. I played one game, my first, and it was a bit nerve-wracking for me. But then, I was a werewolf, and the last one killed (the villagers won), the game came down to the final play, and I'm a lousy liar.

I kept notes again on a Moleskine Cahier pocket notebook, which works out great for me. Filled up about 3/4 of the 64 pages, writing primarily on the 'odd' pages, leaving the 'even' pages for other doodling, meta-notes, drawing lines, etc. I can get a couple of days in one notebook. The only downsides are that you need something to write on for support with the Cahiers, and that the last half of the notebook pages are perforated. I don't really need the perforation, but it wasn't a big problem. They end up costing about $2 and change for each notebook.

I was, like usual, primarily surfing during the conference on my Nintendo DS with the Opera browser; good enough to twitter, check email, check Google Reader. It's a good conversation starter, also. At one point, a small-ish, slightly scruffy Asian gentleman leaned over my shoulder to see what in the world I was doing, so I gave him my little spiel on how it was usable for the simple stuff, yada yada. He seemed amused.

Wednesday, October 24, 2007

only the server knows

In "Is It Atompub?", James Snell discusses how an arbitrary binary file could be posted to a feed via AtomPub, along with its meta-data, at the same time, in a single HTTP transaction.

I'm assuming here that by "metadata", James means "stuff in the media link entry".

All James' options look like reasonable things a server CAN do. But it raises the question: how does the client know which of these methods is actually supported? I guess one of the answers is via exposing capabilities via features, which is something James is currently working on.

I think the answer is, if you're writing generic-ish client code, that you can't, today.

Doesn't mean you can't get close. The sequence described in Section 9.6.1 of the brand spankin' new RFC 5023 seems to describe it pretty well. You can shortcut through that example by issuing a PUT against the entry returned by the original POST, I assume, eliminating the secondary GET. It's just not one transaction anymore, it's two.
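Sketched as code, the two-transaction flow looks roughly like this. These are request objects only; nothing is sent, and the URLs, slug, and meta-data are made up:

```ruby
require "net/http"

# Step 1: POST the binary to the collection; per RFC 5023 the server
# creates a media resource plus a media link entry for it.
post = Net::HTTP::Post.new("/collection")
post["Content-Type"] = "image/png"
post["Slug"]         = "beach.png"
post.body            = "\x89PNG...".b  # stand-in for the raw bytes

# Step 2: suppose the response's media link entry had this edit URI;
# PUT the edited entry back to fill in the meta-data.
put = Net::HTTP::Put.new("/collection/beach-entry")
put["Content-Type"] = "application/atom+xml"
put.body = <<~ATOM
  <entry xmlns="http://www.w3.org/2005/Atom">
    <title>Beach</title>
    <summary>Low tide at sunset</summary>
  </entry>
ATOM
```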

I'll also add that James' first option, having the server retrieve the meta-data from the binary blob's intrinsic meta-data, seems wrong. It would require the server to know about meta-data formats for all manner of binary blobs users might want to store. I think it's fine for a server to support that, if they want, but it doesn't seem right to assume that every server DOES support it. But don't get me wrong, I would love to have the exif data extracted out of my photos upon upload.

So the next interesting question is, if the server DOES populate the meta-data for the resource on my behalf, based on the resource, how do I as a generic client know that? Check for a non-empty atom:summary element?

BTW, I hadn't gotten a chance to extend thanks to the folk working on the Atom and AtomPub specs, for all the work they've done over the years. Standards work is hard and thankless. Kudos all around.


Edited at 9:03pm: oops - I wanted to title this "only the server knows", not "only the shadow knows"

Thursday, October 18, 2007

the vice of debugging

In Giles Bowkett's "Debugger Support Considered Harmful", Giles makes the claim that "Ruby programmers shouldn't be using a debugger".

First, as a meta-comment to this, I truly love to see folks expressing such polarizing and radical opinions. Great conversation starters, gets you thinking in someone else's viewpoint, devil's advocate, etc. At the very least, it's a change of pace from the REST vs everything else debate. :-)

Back to the issue at hand, debugging Ruby. I certainly understand where Giles is coming from, as I've questioned a few Rubyists about debugging, only to have them claim "I don't need a debugger" and "the command-line debugger is good enough". There is clearly a notion in the Ruby community, unlike one I've seen almost anywhere else, that debuggers aren't important.

I twittered this morning, in reference to Giles' post, the following: "Seems like Stockholm syndrome to me". As in, if you don't have a decent debugger to use in the first place, it seems natural to be able to rationalize reasons why you don't need one. I have the exact same feelings for folks who spend almost all of their time programming Java making claims "the only way to do real programming is in an IDE"; because it's pretty clear to me, at this point, the only way to do Java programming is with an IDE. I've personally not needed an IDE to do programming in any other language, except of course Smalltalk, where it was unavoidable. Of course, extremely programmable text editors like emacs, TextMate, and even olde XEDIT kinda blur the line between a text editor and an IDE.

My personal opinion: I love debuggers. If I'm stuck with a line-mode debugger, then fine, I'll make do, or write a source-level debugger myself (aside: you just haven't lived life if you haven't written a debugger). But I'll usually make use of a source-level debugger, if it's easy enough to use. Sometimes they aren't.

Honestly, I love all manner of program-understanding tools. Doc-generators, profilers, tracers, test frameworks, code-generators, etc. They're tools for your programming kit-bag. I use 'em if I need 'em and I happen to have 'em. They're all 'crutches', because in a pinch, there's always printf() & friends. But why hop on one leg if I can actually make better headway, with a crutch, to the finish line of working code? Seems like a puritanical view to me to say "you shouldn't use crutches", and especially ironic for Ruby itself, which is a language full of crutches!

Perhaps there is something about Ruby itself, which makes debugging bad. Or as Giles seems to be indicating, that testing frameworks can completely take the place of debugging. Some unique aspect of Ruby that sets it apart from other languages where debuggers are deemed acceptable and desirable. As a Ruby n00b, I'm not yet persuaded.

As for Ruby debuggers themselves, NetBeans 6.0 (beta) has a fairly nice source-level debugger that pretty much works out-of-the-box. Eclipse fanatics can get Aptana or the dltk (Dynamic Languages Toolkit). I think it would be nice to have a stand-alone source-level Ruby debugger, outside the IDE, because honestly I think TextMate is a good enough IDE for Ruby anyway.

BTW, Rubyists, I'm getting tired of the Sapir-Whorf references.

Friday, October 12, 2007

on links

When people think about links, with regard to tying together information on the web, the usual thoughts are of URLs. Either absolute URLs, or a URL relative to some base (either implicitly the URL of the resource that contains the link, or explicitly via some kind of xml:base-like annotation).

But I wrestle with this.

Here's one issue. Let's say I have multiple representations of my resources available; today you see this typically as services exposing data as either JSON or XML. If that representation includes a link to other data that can be exposed as either JSON or XML, do you express that link as some kind of "platonic URL"? Or if you are doing content-negotiation via 'file name extension' sort of processing, does your JSON link point to a .json URL, but your XML link point to a .xml URL?

See a discussion in the JSR 311 mailing list for some thoughts on this; I stole the term "platonic URL" from this discussion.

The godfather had something interesting to say in a recent presentation. In "The Rest of REST", on slide 22, Roy Fielding writes:

Hypertext does not need to be HTML on a browser
- machines can follow links when they understand the data format and relationship types

Where's the URL? Perhaps tying links to URLs is a constraint we can relax. Consider, as a complementary alternative, that just a piece of data could be considered a link.

Here's an example: let's say I have a banking system with a resource representing a person, that has a set of accounts associated with it. I might typically represent the location of the account resources as a URL. But if I happen to know, a priori, the layout of the URLs, I could just provide an account number (assuming that's the key). With the account number, and knowledge of how to construct a URL to an account given that information (and perhaps MORE information), the URL to the account can easily be constructed.

The up-side is that the server doesn't have to calculate the URL if all it has is the account number; it just provides the account number. The notion of content-type-specific URLs goes away; there is only the account number. Resources on the server also become a bit more independent of one another; a resource doesn't have to know where another resource actually resides just to generate a URL to it.
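To make that concrete, here's a minimal Ruby sketch of the client side of this arrangement. The host and URL template are hypothetical; the point is that the server ships only the account number as plain data, and the optional format extension shows how the .json / .xml content-negotiation question dissolves when the client builds the URL itself.

```ruby
# Hypothetical URL template the client knows a priori; the server
# sends only the account number as plain data, never a URL.
ACCOUNT_URL_TEMPLATE = "http://bank.example.com/accounts/%s"

# Build the URL client-side; the optional format extension covers the
# content-negotiation-by-file-extension case (.json vs .xml).
def account_url(account_number, format = nil)
  url = ACCOUNT_URL_TEMPLATE % account_number
  format ? "#{url}.#{format}" : url
end
```

The "link" in the representation is just the string "12345"; `account_url("12345", "json")` is something only the client ever computes.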

Code-wise, on the server, this is nice. There's always some kind of translation step on the server that pulls your URLs apart, figures out what kind of resource you're after, and then invokes some code to process the request. "Routing". For that code to also generate URLs pointing back at other resources, it needs the reverse of that mapping as well.
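That symmetry can be sketched with a toy route table in Ruby: the same templates drive both directions, URL-to-resource ("routing") and resource-data-to-URL (the reverse). The route names and templates here are made up for illustration.

```ruby
# A toy route table. The same templates serve both "routing"
# (URL -> resource + data) and the reverse (data -> URL).
# Route names and paths are hypothetical.
ROUTES = {
  person:  "/people/:id",
  account: "/accounts/:id",
}

# Reverse direction: generate a URL from a route name and data.
def url_for(name, params)
  ROUTES[name].gsub(/:(\w+)/) { params[$1.to_sym].to_s }
end

# Forward direction: pull a URL apart into a route name and data.
def recognize(path)
  ROUTES.each do |name, template|
    pattern = Regexp.new("\\A" + template.gsub(/:(\w+)/, '(?<\1>[^/]+)') + "\\z")
    if (m = pattern.match(path))
      return [name, m.named_captures]
    end
  end
  nil
end
```

A framework that only keeps the forward mapping forces you to hand-build URLs; keeping one table for both directions is what makes server-generated links cheap.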

The down-side, of course, is that you can't use a dumb client anymore; your client now needs to know things like how to get to an account given just an account number.

And just generally, why put more work on the client, when you can do it on the server? Well, server performance is something we're always trying to optimize - why NOT foist the work back to the client?

But let's also keep in mind that the Web 2.0 apps we know and love today aren't dumb clients. There's user-land code running there in your browser. Typically provided by the same server that's providing your resources in the first place. ie, JavaScript.

I realize that's a bad example for me to use; me being the guy who thinks browsers are a relatively terrible application client, but what the heck; that's the way things are today.

For folks who just want the data, and not the client code, because they have their own client code, well, they'll need some kind of description of how everything's laid out; the data within a resource representation, and the locations of the resources themselves. But the server already knows all that information, and could easily provide it in one of several formats (human- and machine-readable).

As a proof point for all of this, consider Google Maps. Think about how the map tiles are downloaded, and how they might be referenced as "links". Do you think that when Google Maps first displays a page, all the map tiles for that first map view are sent down as URLs? Think about what happens when you scroll the map area, and new tiles need to be downloaded. Does the client send a request to the server asking for the URLs of the new tiles? Or maybe those URLs were even sent down as part of the original request.

All rhetorical questions, for me anyway. I took a quick look at the JavaScript for Google Maps in FireBug, and realized I've already debugged enough obfuscated code for a few lifetimes. Probably a TOS violation to do that anyway. Sigh. I'll leave that exercise to younger pups. But ... what would you do?

For Google Maps, it's easy to imagine programmatically generating the list of tiles based on location, map zoom level, and map window size, assuming the tiles are all accessible via URLs that include location and zoom level somewhere in the URL. In that case, the client code for calculating the URLs of the tiles needed is just a math problem. Why make it more complex than that?
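That math problem can be sketched directly. The code below assumes the common "slippy map" z/x/y spherical-mercator tile scheme; Google's actual tile URL layout isn't public, so the base URL and path format here are assumptions, not how Google Maps really does it.

```ruby
# Convert a lat/lon and zoom level to tile coordinates, using the
# standard spherical-mercator tiling math (2^zoom tiles per axis).
def tile_coords(lat, lon, zoom)
  n = 2**zoom
  x = ((lon + 180.0) / 360.0 * n).floor
  lat_rad = lat * Math::PI / 180.0
  y = ((1 - Math.log(Math.tan(lat_rad) + 1 / Math.cos(lat_rad)) / Math::PI) / 2 * n).floor
  [x, y]
end

# The "link" is just data: the client computes the URL itself.
# The host and /z/x/y.png layout are hypothetical.
def tile_url(lat, lon, zoom)
  x, y = tile_coords(lat, lon, zoom)
  "http://tiles.example.com/#{zoom}/#{x}/#{y}.png"
end
```

No round-trip to the server is needed to learn a tile's URL; scrolling the map just changes the inputs to the math.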

I think there are problem domains where dealing with 'links' as just data, instead of explicit URLs, makes sense, as outlined with Google Maps. Remember what Roy wrote in his presentation: "machines can follow links when they understand the data format and relationship types". Of course, there's plenty of good reason to continue to use URLs for links as well, especially with dumb-ish clients.

the envelope, please

Yaron Goland posted a funny Star Wars-flavored story on jamming square pegs into round holes with Atom. There's a great conversation in the comments of Sam Ruby's reply.

I think it's fair to say there's a tension here; on the one hand, there's no need to wrap your data in Atom or protocols in APP if you don't need to - that's just more stuff you have to deal with. On the other hand, if Atom and APP support becomes ubiquitous, why not take advantage of some of that infrastructure; otherwise, you may find yourself reinventing wheels.

I can certainly feel Yaron's point of "I'm running into numerous people who think that if you just sprinkle some magic ATOM pixie dust on something then it suddenly becomes a standard and is interoperable." I'm seeing this a lot now too. Worrisome. Even more so now that more and more people are familiar with the concept of feeds, but don't understand the actual technology. I've seen people describe things as if feeds were being pushed to the client, for instance, instead of being pulled from the server.

One thing that bugs me is the potentially extraneous bits in feeds / entries that are actually required: atom:name, atom:title, and atom:summary. James Snell is right; it's simple enough to boilerplate these when needed. But to me, there's enough beauty in Atom and APP, to really make it reusable across a wide variety of problem domains, that these seem like warts.

Another thorn in my side is the notion of two ways of providing the content of an item. atom:content with embedded content, or linked to with the src attribute. The linked-to style is needed, clearly, for binary resources, like image files. But it's a complication; it would clearly be easier to have just one way to do it, and that would of course have to be the linked-to style.

The picture in my mind is that Atom becomes something like ls; just get me the list of the 'things' in the collection, and some of their metadata. I'll get the 'things' separately, if I even ever need them. Works for operating systems.

Of course, the tradeoff there is that there are big performance gains to be had by including your content IN the feed itself, with the embedded style; primarily in reducing the number of round-trips between client and server from 1 + the number of resources, to 1. It doesn't help that our current web client of choice, the web browser, doesn't provide programmatic control over its own cache of HTTP requests. Maybe if it did, the linked-to style would be less of a performance issue.

I suspect I'm splitting hairs to some extent; one of my many vices. I'm keeping an open mind; I'm glad people are actually playing with using Atom / APP as an application-level envelope / protocol. It's certainly worth the effort to try. There's plenty to learn here, and we're starting to have some nice infrastructure here to help us along, and the more people play, the more infrastructure we'll get, and ...

Tuesday, October 09, 2007

web and rest punditry

  • In "Google Map 'Documents'", Stefan Tilkov notes:

    In a comment, Patrick Mueller asks whether I'd consider Google Maps "document-oriented".

    Given that I can say that this is where I live and this is where I work I'd claim it's document-oriented enough for me :-)

    Stefan is referring to a comment discussion in his blog post on Lively. How we got from Lively to whether Google Maps is "document-oriented" ... well, read the original post and the comments.

    w/r/t "document oriented", the term is used on the Lively page, but I think Stefan is misinterpreting it. I can only infer from Stefan's post that he believes a web application is "document oriented" if you can generate a URL to somewhere in the middle of it. Like he did with two map links he included.

    I'd refer to this capability as having addressable pages. Addressable in terms of URLs that I can paste into a browser window, email to my mom, or post in a blog entry. Important quality. It's especially nice if you can just scrape the URL from the address bar of your browser, but not critical, as Google Maps proves; a button to generate a link is an acceptable, but less friendly substitute. Being addressable is especially hard for "web 2.0" apps, which is why I mention it at all.

    My read of the Lively page's usage of "document-oriented" is more a thought on application flow. In particular, pseudo-conversational processing. Which is both the way CICS "green screen" applications are written, as well as the way Web 1.0 applications are written. Turns out to be an incredibly performant and scalable style of building applications that can run on fairly dumb clients. The reason I read this as "document-oriented" is that, in the web case, you literally go from one document (one URL) to another document (another URL) as you progress through the application towards your final goal. Compare that to the style of imperative and event-driven programming you apparently do with Lively.

    So, with that thought in mind, Google Maps is clearly not "document oriented". The means by which you navigate through the map is 'live'; you aren't traversing pages like you did in old school MapQuest (which, btw, is also now "live").

    But even still, given my interpretation of Stefan's definition, I'd say there's no reason why a Lively app can't be just as "document-oriented" as Google Maps; that is, exposing links to application state inside itself as URLs. You may need to work at it, like you would with any Web 2.0 app, but I don't see a technical reason why it can't be done. Hint: client code running in your browser can inspect your URLs.

    Back to Stefan's original note about Lively: "it might be a good idea to work with the Web as opposed to fight against it". I think I missed Stefan's complaints about Google Maps fighting against the web. Because if Lively is fighting against the web, then so is Google Maps.

    Lastly, a note that Lively is built out of two technologies, JavaScript and SVG, and it runs in a web browser. I'm finding it really difficult to figure out how Lively is fighting the web.

  • In "What is SOA?", Pete Lacey notes:

    Another problem with the SOA name is the "service" bit. At least for me, the term "service" connotes a collection of non-uniform operations. I don't even like the phrase "REST Web services." Certainly, SOAP/WS-*, CORBA, DCOM, etc. fit this definition. But REST? Not so much. In REST the key abstraction is the resource, not the service interface. Therefore SOA (and I know this is not anyone's strict definition) encompasses the above mind set, but includes SOAP and similar technologies and excludes REST.

    If you change Pete's definition of "service" to be "a collection of operations", independent of whether they are uniform or not, then REST fits the definition of service. Next, you can simply say the resource (URL) is the service interface, for REST. Just a bunch of constraints / simplifications / specializations of the more generic 'service' story.

    Sure there are plenty of other details that separate REST from those other ... things. But you can say that about all of them; they're ALL different from each other, in the details. And at 10K feet, they're all doing generally the same thing.

    As a colleague mentioned to me the other day, REST is just another form of RPC.

    I feel like we might be throwing out the baby with the bath water here. It's true that I never want to go back to the CORBA or SOAP/WS-* worlds (I have the scars to prove I was there), but that doesn't mean there's nothing to learn from them. For instance, the client-facing story for REST seems a bit ugly to me. I know this isn't real REST, but if this is the kind of 'programming interface' that we have to look forward to, in terms of documentation and artifacts to help me as a programmer ... we got some problems.

    I look forward to seeing what Steve Vinoski brings to the table, as a fellow scar-bearer.

Saturday, October 06, 2007

php in project zero

Project Zero's ebullient leader, Jerry Cuomo, just published an article at his blog talking about the PHP support in Project Zero. If you didn't already know, PHP is supported in Project Zero using a PHP interpreter written in Java. Pretty decent 10K ft summary of why, what, etc. With links to more details.

Wanted to point out two interesting things, to me, from Jerry's post ...

"The idea with XAPI-C is to take the existing extensions, apply some macro magic, potentially some light code rewrite, and make those extension shared libraries available to Java through JNI."

Interesting programming technique; trying to reuse all the great, existing PHP extensions out there. One of PHP's great strengths is the breadth of the extension libraries. It would be great to be able to reuse this code, if possible. It's a really interesting idea to me in general, to be able to reuse extension libraries, not just across different implementations of the same language, but across multiple languages. It just seems like a terrible waste to have lots of interesting programming language libraries available, but not be able to use them in anything but a single language.

"We have debated the idea of moving our PHP interpreter into a project unto it's own - where it can explore better compatibility with as well as pursue supporting a full complement of PHP applications. Thoughts?"

This seems like it makes a lot of sense. Especially if it meant being able to open source the interpreter to allow a wider community to develop it. I think the main question is, is there a wider community out there interested in continuing to develop the interpreter and libraries?

Lastly, want to point out how impressed I've been by the team in Hursley, UK and RTP, NC for getting the interpreter as functional as it is so quickly. The team maintains a wiki page at the Project Zero web site describing the current state of functionality, if you're interested in seeing how far along they are.

Thursday, September 20, 2007

which flavor do you favor

A question: if you had to provide a client library to wrapper your RESTful web services, would you rather expose it as a set of resources (urls) with the valid methods (request verbs) associated with it, or provide a flat 'function library' that exposed the resources and methods in a more human-friendly fashion?

Example. Say you want to create a to-do application, which exposes two resources: a list of to-do items, and a to-do item itself. A spin on the "Gregorio table" might look like this:

resource      URL template      HTTP VERB  description
to-do items   /todo-items       GET        return a list of to-do items
to-do items   /todo-items       POST       create a new to-do item
to-do item    /todo-items/{id}  GET        return a single to-do item
to-do item    /todo-items/{id}  PUT        update a to-do item
to-do item    /todo-items/{id}  DELETE     delete a to-do item

(Please note: the examples below are painfully simple, exposing just the functional parameters (presumably uri template variables and HTTP request content) and return values (presumably HTTP response content), and not contextual information like HTTP headers, status codes, caches, socket pools, etc. A great simplification. Also note that while I'm describing this in Java code, the ideas are applicable to other languages.)

If you were going to convert this, mechanically, to a Java interface, it might look something like this:

For the first two rows of the table, you use the following interface:

public interface ToDoItems {
    public ToDo[] GET();
    public ToDo POST(ToDo item);
}

For the last three rows of the table, you use the following interface:

public interface ToDoItem {
    public ToDo GET(String id);
    public ToDo PUT(String id, ToDo item);
    public void DELETE(String id);
}

I'll call this the 'pure' flavor.

A different way of thinking about this is to think about the table as a flat list of functions. In that flavor, add another column to the table, named "function", where the value in the table will be unique across all rows. Presumably the function names are arbitrary, but sensible, like ToDoList() for the /todo-items - GET operation.

Here's our new table:

function    resource      URL template      HTTP VERB  description
ToDoList    to-do items   /todo-items       GET        return a list of to-do items
ToDoCreate  to-do items   /todo-items       POST       create a new to-do item
ToDoGet     to-do item    /todo-items/{id}  GET        return a single to-do item
ToDoUpdate  to-do item    /todo-items/{id}  PUT        update a to-do item
ToDoDelete  to-do item    /todo-items/{id}  DELETE     delete a to-do item

This flavor might yield the following sort of interface:

public interface ToDoService {
    public ToDo[] ToDoList();
    public ToDo   ToDoCreate(ToDo item);
    public ToDo   ToDoGet(String id);
    public ToDo   ToDoUpdate(String id, ToDo item);
    public void   ToDoDelete(String id);
}

I'll call this the 'applied' flavor.

Now, if you look at the combination of the two 'pure' interfaces, compared with the 'applied' interface, there's really no difference in function. In fact, the code to implement all these methods across both flavors would be exactly the same.

The only difference is how they're organized.

Now, the question is, which one is better?

Now, you might say I'm crazy, who would ever choose the 'pure' story over the 'applied' story? And my gut tells me you're right. The 'applied' story seems to be a better fit for humans, who are largely going to be the clients of these interfaces, writing programs to use them.

But this flies in the face of transparency, where we don't want to hide stuff so much from the user. HTTP is in your face, and all that. At what point do we hide HTTP-ness? If you don't want to hide stuff from your users, you might choose 'pure'.

And I wonder, are there other advantages of the 'pure' interface? You might imagine some higher-level programming capabilities (mashup builders, or even meta-programming facilities if your programming language can deal with functions/methods as first class objects) that would like to take advantage of the benefits of uniform interface (as in the 'pure' interface).

And of course, there's always the option of supporting both interfaces, as in something like this:

public interface ToDoService2 {
    public ToDo[] ToDoList();
    public ToDo   ToDoCreate(ToDo item);
    public ToDo   ToDoGet(String id);
    public ToDo   ToDoUpdate(String id, ToDo item);
    public void   ToDoDelete(String id);

    static public interface ToDoItems {
        public ToDo[] GET();
        public ToDo POST(ToDo item);
    }

    static public interface ToDoItem {
        public ToDo GET(String id);
        public ToDo PUT(String id, ToDo item);
        public void DELETE(String id);
    }
}

Even though you have 10 methods to implement, each real 'operation' is duplicated, in both the 'pure' and 'applied' flavors, so you really only have 5 methods to implement. No additional code for you to write (relatively), but double the number of ways people can interact with your service.

Wednesday, September 19, 2007

NetBeans 6.0 Beta 1

Shiver me timbers, yes, I'm blogging a bit about NetBeans. After hearing such gushing reviews of it, I figured I'd take a look. It's been at least a year, and probably more, since I last looked at it. And I should note, I'm just looking at the Ruby bits.

Thought I'd provide some quick notes on NetBeans 6.0 Beta 1 as a confirmed Eclipse user. I'd give the history of my usage of Eclipse, but then Smalltalk enters the picture around pre-Eclipse VisualAge Java time-frame, and you don't want me to go there, do ya matey? That would just make me sad anyway, remembering the good old days.

I've also used the RDT functionality available in Aptana, and will make comparisons as appropriate.


  • On the mac, uses a package installer, instead of just a droppable .app file.

  • Mysterious 150% cpu usage after setting my Ruby VM to be my MacPorts installed ruby. I didn't see any mention in the IDE of what it was doing, but I figured it was probably indexing the ruby files in my newly-pointed-to runtime. Only lasted a minute or so. If it had lasted much longer, I might have killed it, and then uninstalled it.

  • Can only have a single Ruby VM installed; Eclipse language support usually allows multiple runtimes to be configured, one as default, but overrideable on a per-project, or per-launch basis. What do JRuby folks do, who want to run on JRuby or MRI alternatively?

  • Plenty of "uncanny valley" effects going on, because Swing is in use. Of course, Eclipse also has a lot of custom ui elements; I'm becoming immune to the uncanny valley; and FireFox on the mac doesn't help there either.


  • I see the Mac .app icon changed from the sharp-cornered version to a kinder, gentler version (see the image at the top), but I think I can still validly compare Eclipse and NetBeans to two of my favorite sci-fi franchises, given their logo choices. But it's certainly less Borg-ish than older versions.

  • Install now ships as a .dmg for the Mac (disk image file) instead of an embedded zip file in a shell script.

  • Debugging works great. Same as Eclipse with the RDT.

  • I can set Eclipse key-bindings.

  • F3 works most of the time ("find the definition") like in Eclipse. In fact, this is cool: F3 on a 'builtin' like Time, and NetBeans generates a 'dummy' source file showing you what the source would look like, sans method bodies, but with comments, and the method navigator populated. Nice!

  • A Mercurial plugin is available for easy installation through the menus, and CVS and SVN support is built-in. I played a bit with the Mercurial plugin in a previous milestone build, and it was easy enough to use, but I never could figure out how to 'push' back to my repo. Why Eclipse doesn't ship SVN support, built-in, in this day and age, is a mystery to me.

  • Don't need to add the "Ruby project nature" to a project just to edit a ruby file in the project. How I despise Eclipse's project natures.

  • Provides great competition for Eclipse.

Quite usable, overall. Hats off to the NetBeans folks! I'll probably start using it for one-off-ish Ruby work I do.

Thursday, September 06, 2007

nc seas

Well, it's September, which means it's time to start planning my "take a day off work and head to the beach to spend some quality time on the boogey board" day.

Plan is the operative word; it's easy to find out when the weather is going to be good; it's a lot harder to find out when the waves are going to be good. I finally spent some time trolling through the massive amount of data at NOAA, and found something close enough: forecasts of sea wave heights going out a couple of days. Not the same as the height of the waves at the beach, but close enough.

Wrote a little Ruby to generate a KML file after parsing the interesting data out of the reports, and the result is a little Google Maps / Earth mapplication, North Carolina Seas.

Source here. Note the program is set up to regenerate the KML file via a cron job every few hours, rather than as a 'web app'.
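The KML generation itself is mostly string templating. Here's a stripped-down sketch of that step; the record fields and station data are hypothetical, and the real script does the hard part (parsing the NOAA forecast text) before anything like this runs.

```ruby
# Turn parsed forecast records into a KML document with one Placemark
# per forecast point. The record fields (:station, :lat, :lon,
# :height_ft) are hypothetical stand-ins for the parsed NOAA data.
def to_kml(forecasts)
  placemarks = forecasts.map do |f|
    "  <Placemark>\n" \
    "    <name>#{f[:station]}</name>\n" \
    "    <description>seas #{f[:height_ft]} ft</description>\n" \
    "    <Point><coordinates>#{f[:lon]},#{f[:lat]},0</coordinates></Point>\n" \
    "  </Placemark>\n"
  end.join

  "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" \
  "<kml xmlns=\"http://www.opengis.net/kml/2.2\">\n" \
  "<Document>\n" \
  "#{placemarks}" \
  "</Document>\n" \
  "</kml>\n"
end
```

A cron job writing the output of something like this to a static file is all the "web app" there needs to be; Google Maps / Earth just fetches the file.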

But then, after all that, maybe I'll go for a hike in the mountains at a state park instead of a day at the beach. Life is rough.

Saturday, August 18, 2007

response to jsr 311

Marc Hadley responded quite quickly to my blog entry on JSR 311. Excellent. I'm really digging the transparency. I'm thinking Twitter may have played a part in the quick response.

w/r/t the GPL vs CDDL licensing issue; sorry, I'm sure I saw CDDL on some of the pages, I should have included it in the blog post. I frankly am not all that familiar with the CDDL, but Dave Johnson got me curious about something, and sure enough, it looks like Apache allows CDDL licensed binary software to be shipped with Apache projects. Note that the link above is a proposal, but it sounds like Apache is already doing this in some cases with CDDL. Very cool.

As for the other issues Marc raised: it's silly to debate via blog posts; the JSR 311 mailing list looks like the most appropriate place to have the discussion.

But I can't resist making two notes.

From Marc: "On the Java ME side, the fact that 311 is primarily a server side API ... means it is less suitable for the ME environment anyway."

To quote from the Java ME Sun Products overview, "As CDC is not limited to just the wireless market, and expands into other emerging markets like digital television, telematics, multi-function printers, ...". Those sound like platforms that could make use of a server supporting RESTful web services.

From Marc: "Jersey will automatically produce a WADL description"

Wonderful. The point isn't even necessarily that you should make this part of the spec, but to ensure that this is do-able within the current spec's design. Because you don't want to find out later, after the spec has been finalized, that the design prevents something like generating a description. Sounds like you're doing that with the WADL generation. I'd like to see the same done with the client.

Friday, August 17, 2007

on JSR 311

Some thoughts on JSR 311 - JAX-RS: The Java™ API for RESTful Web Services. The description of the JSR, from the web page is: "This JSR will develop an API for providing support for RESTful (Representational State Transfer) Web Services in the Java Platform."

Here are a few related links:

Some of the positive aspects of the ongoing effort:

  • It's good to be thinking about some facility to help people with RESTful services, in Java.

  • This JSR seems to be much, much more transparent than most JSRs; published meeting minutes (though a bit skimpy), mailing lists, reference implementation work being done in the open, etc. Wonderful. Wonderful!

  • Even the current drafts of the spec are available, on-line, without a soul-sucking click-through license agreement! The most recent one is the "Editors draft - July 3rd, 2007 (PDF)".

  • The general approach as currently implemented / described by Jersey feels more or less right. Of course, I say that because I've been recently looking at doing something similar, without having seen JSR 311, and there's a lot of commonality between the road I was heading down, and the one JSR 311 is going down. Use of annotations to augment methods and classes which describe the services, specifically. Some of the annotations I had come up with were in fact the exact same names that Jersey uses!

OK, now for the bad news.

  • The licensing for the open source side only accommodates the GPL half of the universe, leaving folks wanting something more like a BSD-style license to fend for themselves. Of course, that's not necessarily bad; it's always useful to have some competition. But it's also nice to have a single common implementation that everyone can use. There are many licenses that would be compatible with the GPL and acceptable to a wider audience.

  • Specific issues from the 2007/07/03 spec draft: Section 1.2 Non-Goals:

    • Support for Java versions prior to J2SE 5.0: The API will make extensive use of annotations and will require J2SE 5.0 or later.

      Read: Sorry, J2ME. Sorry, folks stuck on Java 1.4.

    • Description, registration and discovery: The specification will neither define nor require any service description, registration or discovery capability.

      Read: We'll figure this out later; hopefully we won't have to change anything in this spec once we start thinking about this aspect of the problem.

    • Client APIs: The specification will not define client-side APIs. Other specifications are expected to provide such functionality.

      Read: We'll figure this out later; hopefully we won't have to change anything in this spec once we start thinking about this aspect of the problem.

The licensing issue is bad enough that it's really going to force an alternate implementation to be developed. This might be something that Apache would typically do, but given the Apache / Sun disagreement on JCP issues, it's not really clear to me that Apache will ever be interested in working on JSR implementations again.

Another interesting wrench to throw into the gears is Eclipse. As of Eclipse 3.3, the Equinox project has been shipping Jetty, along with some goop to allow you to use OSGi within servlets, or start an HTTP server as an OSGi service. Adding RESTful service support to this story seems like a logical next step to me. Note that the existing JSR 311 non-goal of support for <= Java 5 support violates an OSGi constraint of running in their smaller (1.4.x-ish) environments.

Seems to me, if we're going to have to work on an alternate implementation anyway (to solve the licensing issues), we might as well solve some of the technical problems as well (J2ME / Java 1.4 support, service descriptions, client apis, etc).

And yes, I do know of RESTlet™ but have not looked at it extensively; the licensing and Joint Copyright Agreement and trademark issues are non-starters for me. It is also on a pathway towards becoming a JSR.

Anyone up for a "Battle of the RESTful frameworks"?

Sunday, August 12, 2007

Ruby Hoedown, day 2

Saturday was day 2 of the Ruby Hoedown 2007. I had to leave early and so missed "Using C to Tune Your Ruby (or Rails) Application" by Jared Richardson, and the final keynote by Marcel Molina, Jr.

Nathaniel Talbott, one of the conference organizers, mentioned before one of the sessions that the sessions are being recorded, audio and video, along with the slides presented, and will be available on the web shortly. With some kind of a Creative Commons license. Wow! I wish more conferences did that. I won't dive into details on the sessions below, since the info will be available soon, verbatim.

Sessions I did attend:

  • Building Games with Ruby - Andrea O.K. Wright

    Andrea discussed a handful of gaming toolkits for Ruby. Supporting everything from MUDs to 3D. A few of the toolkits were wrappers over SDL.

  • Lightning talks

    • test/spec - Clinton Nixon - layers an RSpec-inspired interface on top of Test::Unit
    • a strategy game written in JRuby, using Swing - didn't get the presenter's name
    • methodphitamine - Jay Phillips - a more powerful alternative to Symbol#to_proc
    • Ruleby - Joe Kutner - a pure Ruby rule engine
    • tyrant - Patrick Reagan - a way to run Rails applications without each user needing a development environment
    • generating Ruby classes - Luke Kanies - dynamic code gen
    • Iron Ruby - Brian Hitney - Ruby implemented on the CLR
    • static sites with Rails - Brian Adkins - how to enable mostly static pages with Rails
    • Thoughtworks test strategies - Dan Manges - how TW designs tests for teams
    • myDecisionHelper - Tyler Start (sp?) - self-help web app

  • BOFs over lunch

    Went to Rick Denatale's "Smalltalk for Rubyists; Ruby for Smalltalkers". More the former than the latter, unfortunately for me. But still interesting and informative.

  • Does Ruby Have a Chasm to Cross? - Ken Auer

    Great talk; like Bruce Tate's keynote on Friday, referenced the book "Crossing the Chasm" (just added it to my 'wanna read' list). Ken's talk came at it from a different angle: being there, when Smalltalk was just about to 'cross the chasm', but didn't make it. Why didn't it? What can we learn from Smalltalk's tragic history?
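
The methodphitamine lightning talk above billed itself as a more powerful alternative to Symbol#to_proc. I haven't studied the gem's actual API, but the general trick behind tools like it can be sketched: a recorder object captures a chain of method calls via method_missing, and to_proc replays that chain on each element. Everything below (the It class, the it helper) is an illustrative reimplementation, not the gem's real code.

```ruby
# Sketch of the "recording proxy" technique behind methodphitamine-style
# helpers. Hypothetical names; not the gem's actual API.
class It
  def initialize
    @calls = []
  end

  # Record every call (name, arguments, block) and return self,
  # so calls can be chained: it.reverse.upcase
  def method_missing(name, *args, &block)
    @calls << [name, args, block]
    self
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  # Replay the recorded chain against a target object.
  def to_proc
    calls = @calls
    ->(target) do
      calls.inject(target) { |obj, (name, args, block)| obj.public_send(name, *args, &block) }
    end
  end
end

def it
  It.new
end

# Symbol#to_proc handles only a single, argument-less call:
%w(foo bar).map(&:upcase)            # => ["FOO", "BAR"]

# The recorder handles chains and arguments:
%w(foo bar).map(&it.reverse.upcase)  # => ["OOF", "RAB"]
[[1, 2], [3, 4]].map(&it.reduce(:+)) # => [3, 7]
```

A real blank-slate implementation would also undefine most inherited methods (or subclass BasicObject) so that calls like `.class` get recorded rather than answered by the proxy itself; this sketch skips that for brevity.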

All in all, great conference. I'll be there next year, assuming they have it again. Maybe I'll have some more Ruby under my belt by then.

Forgot to mention in the previous post that I used my Nintendo DS-Lite with the Opera browser, again, to do some lite surfing, email checking, twittering, during the conference. Very convenient.

Saturday, August 11, 2007

Ruby Hoedown, day 1

theRab tweets: "i'm afraid to google 'ruby hoedown' so i'll just patiently wait for your blog post". Here you go. BTW, no need to be afraid of Google (at the moment).

The Ruby Hoedown 2007 is a small conference covering Ruby topics, held in Raleigh, NC. At Red Hat HQ, the same place where barcampRDU was held last week.

Friday was just a half day afternoon session, though there was a testing workshop in the morning that I didn't attend. A bit over a hundred attendees, I'd guess, many from out of town. Usual list of sponsors, with one surprise: Microsoft; though obviously that's not a complete surprise.

I recognized a lot of folks there from last week's barcamp, and some from the erlounge meet-up. Some "Old Dudes Who Know Smalltalk" were present, including my evil twin, Rick Denatale, and someone I hadn't talked to in ages, Ken Auer.

Sessions this afternoon:

  • Exploring Merb - Ezra Zygmuntowicz

    Merb is Ezra's pocket-framework for web serving. Lighter than Rails. Ezra stressed twice that Merb "doesn't use cgi.rb" - I'm showing my n00by-ness by assuming this is a good thing.

    Between this session, and the Camping session at barcamp, there appears to be active and interesting development still in the web server framework arena, at the lowest levels. Like the early days of Java, before servlets. Sigh.

    One cool feature of Merb is support for direct streaming of content. For example, if you need to return the contents of a file, you just return its file handle where you would normally return the content, and the file gets streamed on your behalf. It handles uploads too, and it sounded like a lot of people use Merb just for the upload capability. Ezra also mentioned someone using Amazon S3 as their store and making use of this feature, with Merb really just acting as the gateway between the streams (to/from S3, to/from the client).

  • Next-Gen VoIP Development with Ruby and Adhearsion - Jay Phillips

    The general notion here is that setting up Asterisk is hard, because the configuration files are large, monolithic, and complicated. Adhearsion makes that simpler: your configuration becomes small Ruby programs, which you could imagine sharing with other people ("Want my call-center scripts?"), compared to the situation today where granular sharing isn't really possible.

    I don't have much need to set up a VoIP box, otherwise I'm sure this would be fun to play with. Jay did mention that Asterisk has VMWare images available, which would make it even easier to play with, although Adhearsion is not currently available on those images. If only I had more time.

  • Keynote - Bruce Tate

    Really good talk discussing how Ruby became as popular as it is, how Java became as popular as it was (is?), what has dragged Java down in recent years, and some future thinking and advice.

    Best piece of advice: "Java didn't start as something that was complicated, it evolved into something that was complicated. Ruby is not immune to this."

    SmallTalk was mentioned; yes, sadly, with the capital T; so Bruce wasn't one of us, I guess, but he at least knew it was good stuff. OTI was even mentioned.

    Bruce mentioned the ultimate plan for Ruby: TGD - Total Global Domination. Hmmm, that reminds me of something.
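
The Merb streaming feature from Ezra's session — return an IO where you'd normally return a string, and the framework streams it — is a nice little pattern on its own. Here's a hedged sketch of the idea (a hypothetical write_response dispatcher, not Merb's actual implementation): duck-type on read, and copy in chunks rather than buffering the whole body.

```ruby
require 'stringio'

# Sketch of "return an IO and the framework streams it".
# write_response and CHUNK_SIZE are illustrative names, not Merb API.
CHUNK_SIZE = 16 * 1024

def write_response(body, out)
  if body.respond_to?(:read)
    # The handler returned an IO (file handle, S3 stream, ...):
    # copy it through in fixed-size chunks instead of slurping it.
    while (chunk = body.read(CHUNK_SIZE))
      out.write(chunk)
    end
  else
    # Plain string body: write it as-is.
    out.write(body.to_s)
  end
end

# A string body and an IO body come out the same way:
out = StringIO.new
write_response("hello", out)
write_response(StringIO.new(" world"), out)
out.string  # => "hello world"
```

The appeal is that the handler code doesn't change shape at all — it just returns a different kind of object — which fits the story of Merb acting as a pure gateway between an S3 stream and the client.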

I asked whether the presentations would be available on the web, and the answer was yes. In addition, the sessions are being recorded (audio and video) and will also be available on the web at some point.

Looking forward to Saturday's sessions.