
pmuellr is Patrick Mueller


Friday, November 09, 2007

moving back

As I mentioned on my developerWorks blog, I've decided to do my blogging back here at Blogger.

As part of that process, I set up a virtual feed at FeedBurner, so that when I move next time, it'll be completely transparent, for feed aggregators anyway.

Moved my content over with a simple matter of programming.

Monday, November 05, 2007

RubyConf 2007

A few notes from RubyConf 2007:

(The sessions at RubyConf 2007 were recorded by Confreaks, and will hopefully be available soon on the web. I'll update this post with a link when they are.)

  • Ropes: an alternative to Ruby strings by Eric Ivancich
    Interesting. Strings have always been a problem area in various ways; SGI's ropes provide an interesting solution for some use cases. In particular, strings always show up as pigs in Java memory usage, what with the many long-ish class names and method signatures that have lots of duplication internally; I wonder if something like ropes might help.

    See notes by James Avery.

  • Ruby Town Hall by Matz
    Didn't take any notes, but the one thing that stuck out for me was the guy from Texas Instruments who said they were thinking of putting Ruby in one of their new calculators. He asked Matz if future releases would be using the non-GPL'd regex library, as the TI lawyers were uncomfortable with its licensing (or something like that; I can't remember the exact words). See also the notes under Rubinius, below, regarding licensing.

    But the big news for me, from this question, was the calculator with Ruby. Awesome, if it comes to be. I talked to the guy later and he indicated it was, as I guessed/hoped, the TI-Nspire. Sadly, he also indicated the calculator probably wouldn't be available till mid-2008.

    See notes by my evil twin, Rick DeNatale.

  • IronRuby by John Lam
    Didn't go into enough technical depth, sadly. And the technical presentation included info on the DLR and then XAML, which I don't think were really required. John had a devil's tail attached to his jacket or pants, which appeared about halfway through the presentation. Really, I think everyone seems to be quite open to IronRuby; no one seems to be suggesting it's evil or anything. Are they?

    See notes by Nick Sieger.

  • JRuby by Charles Nutter and Thomas Enebo
    Lots of good technical info. Tim Bray did a very brief announcement at the beginning about some kind of research Sun is doing with a university on multi-VM work; sounded like it didn't involve Java, and given the venue, I assume it had something to do with Ruby. Sounds like we'll hear more about it in the coming weeks.

    The JRuby crew seems to be making great progress, including very fresh 1.0.2 and 1.1 beta 1 releases.

    One thing that jumped right out at me, when the 'write a Ruby function in Java' capability was discussed, was how similar it seemed to the way extensions are defined in the PHP implementation provided in Project Zero. That deserves some closer investigation. It would be great if we could find some common ground here - perhaps a path to a nirvana of defining extension libraries for use with multiple Java-based scripting languages?

  • Rubinius by Evan Phoenix
    Chad Fowler twittered: "Ruby community, if you're looking for a rock star, FYI it's Evan Phoenix. Please adjust accordingly. kthxbai".

    I happened to hit the rubini.us site a few times this weekend, and at one point noticed the following at the bottom of the page: Distributed under the BSD license. It's been a while since I looked at the Ruby implementations, in terms of licensing, but I like the sound of this, because I know some of the other implementations' licenses were not completely permissive (see Ruby Town Hall above). Ruby has not quite caught on in the BigCo environments yet, and I suspect business-friendly licensing may be needed to make that happen. It certainly won't hurt.

    See notes by Nick Sieger.

  • Mac OS X Loves Ruby by Laurent Sansonetti
    Oh boy, does it ever. Laurent covered some of the new stuff for Ruby in Leopard, and had people audibly oohing and ahhing. The most interesting was the Cocoa bridge, which allows you to build Cocoa apps in Ruby, using Xcode, which (now?) supports Ruby (syntax highlighting, code completion?). Most of the oohing had to do with the capability of injecting the Ruby interpreter into running applications, and then controlling the application from the injector. Laurent's example was to inject Ruby into TextEdit, to create a little REPL environment, right in the editor. Lots of ooh for the scripting of Quartz Composer as well.

    Apple also now has something called BridgeSupport, a framework whereby existing C programming APIs (and Objective-C?) are fully described in XML files, for use by frameworks like the Cocoa bridge, as well as code completion in Xcode. That's fantastic. I've had to do this kind of thing several times over the years, and, assuming the ideas are 'open', it would be great to see more people step up to this, so we can stop hacking C header file parsers (for instance). And I think I could live with never having to write a Java .class file reader again, thankyouverymuch.

    I suspect all this stuff is available for Python as well.

    Laurent also showed some of the DTrace support. No excuse not to look at DTrace now. Well, once I upgrade to Leopard anyway.

    Someone asked "Will Ruby Cocoa run on the iPhone?" Laurent's reply: "Next question". Much laughter from the crowd. Funny, in a sad way, I guess.

    See notes by Alex Payne.

  • Matz Keynote
    Matz covered some overview material, mentioned Ruby will get enterprisey: "The suit people are surrounding us". He then dove into some of the stuff coming in 1.9. Most of it sounds great, except for the threading model moving from green threads to native threads, and a mysterious new loop-and-increment beast, which frankly looked a bit too magical to me. The green vs. native threads thing is a personal preference of mine; I'd prefer that languages not be encumbered with the threading foibles of the platform they're running on. Green threads also give you much finer control over your threads. On the other hand, given our multi-core future, I think there's probably no way to avoid interacting with OS-level threads, at some level.

  • Behaviour Driven Development with RSpec by David Chelimsky and Dave Astels
    I really need to catch up on this stuff, I'm way behind the times here. They showed some new work they were doing that better captured design aspects like stories, including executable stories, with a web interface that can be used to build the stories. That's going to be some fun stuff. Presentation available as a PDF.

    See notes by Nick Sieger.

  • Controversy: Werewolf considered harmful?
    Charles Nutter wonders if the ever-popular game is detracting from collaborative hack-fests. The game certainly is quite popular. I played one game, my first, and it was a bit nerve-wracking for me. But then, I was a werewolf, and the last one killed (the villagers won); the game came down to the final play, and I'm a lousy liar.

I kept notes again in a Moleskine Cahier pocket notebook, which works out great for me. Filled up about 3/4 of the 64 pages, writing primarily on the 'odd' pages, leaving the 'even' pages for other doodling, meta-notes, drawing lines, etc. I can get a couple of days in one notebook. The only downsides are that you need something to rest the Cahier on for support while writing, and that the last half of the notebook's pages are perforated. I don't really need the perforation, but it wasn't a big problem. They end up costing about $2 and change for each notebook.

I was, like usual, primarily surfing during the conference on my Nintendo DS with the Opera browser; good enough to twitter, check email, check Google Reader. It's a good conversation starter, also. At one point, a small-ish, slightly scruffy Asian gentleman leaned over my shoulder to see what in the world I was doing, so I gave him my little spiel on how it was usable for the simple stuff, yada yada. He seemed amused.

Wednesday, October 24, 2007

only the server knows

In "Is It Atompub?", James Snell discusses how an arbitrary binary file could be posted to a feed via AtomPub, along with it's meta-data, at the same time, in a single HTTP transaction.

I'm assuming here that by "metadata", James means "stuff in the media link entry".

All James' options look like reasonable things a server CAN do. But it raises the question: how does the client know which of these methods is actually supported? I guess one answer is exposing capabilities via features, which is something James is currently working on.

I think the answer is, if you're writing generic-ish client code, that you can't, today.

Doesn't mean you can't get close. The sequence described in Section 9.6.1 of the brand spankin' new RFC 5023 seems to describe it pretty well. You can shortcut through that example by issuing a PUT against the entry returned by the original POST, I assume, eliminating the secondary GET. It's just not one transaction anymore, it's two.
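Here's a minimal sketch of that two-step sequence, using plain HttpURLConnection; the collection URL, file name, and entry body are all hypothetical, and a real client would start its PUT from the entry the POST returned, so as to preserve server-assigned elements like atom:id:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MediaPost {
    public static void main(String[] args) throws Exception {
        byte[] photo = Files.readAllBytes(Paths.get("photo.jpg"));

        // transaction 1: POST the binary to the collection; the server
        // creates a media resource and a media link entry
        HttpURLConnection post = (HttpURLConnection)
            new URL("http://example.com/photos").openConnection();
        post.setRequestMethod("POST");
        post.setRequestProperty("Content-Type", "image/jpeg");
        post.setRequestProperty("Slug", "photo.jpg");
        post.setDoOutput(true);
        try (OutputStream out = post.getOutputStream()) { out.write(photo); }

        // the Location header points at the new media link entry
        String entryUrl = post.getHeaderField("Location");

        // transaction 2: PUT the meta-data against that entry, skipping
        // the intermediate GET from the RFC 5023 Section 9.6.1 example
        String entry =
            "<entry xmlns='http://www.w3.org/2005/Atom'>" +
            "<title>photo.jpg</title>" +
            "<summary>meta-data supplied by the client</summary>" +
            "</entry>";
        HttpURLConnection put = (HttpURLConnection)
            new URL(entryUrl).openConnection();
        put.setRequestMethod("PUT");
        put.setRequestProperty("Content-Type", "application/atom+xml;type=entry");
        put.setDoOutput(true);
        try (OutputStream out = put.getOutputStream()) {
            out.write(entry.getBytes("UTF-8"));
        }
        System.out.println("PUT status: " + put.getResponseCode());
    }
}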

I'll also add that James' first option, having the server retrieve the meta-data from the binary blob's intrinsic meta-data, seems wrong. It would require the server to know about meta-data formats for all manner of binary blobs users might want to store. I think it's fine for a server to support that, if it wants, but it doesn't seem right to assume that every server DOES support it. But don't get me wrong, I would love to have the exif data extracted out of my photos upon upload.

So the next interesting question is, if the server DOES populate the meta-data for the resource on my behalf, based on the resource, how do I as a generic client know that? Check for a non-empty atom:summary element?

BTW, I haven't yet extended thanks to the folks working on the Atom and AtomPub specs, for all the work they've done over the years. Standards work is hard and thankless. Kudos all around.

---

Edited at 9:03pm: oops - I wanted to title this "only the server knows", not "only the shadow knows".

Thursday, October 18, 2007

the vice of debugging

In Giles Bowkett's "Debugger Support Considered Harmful", Giles makes the claim that "Ruby programmers shouldn't be using a debugger".

First, as a meta-comment to this, I truly love to see folks expressing such polarizing and radical opinions. Great conversation starters, gets you thinking in someone else's viewpoint, devil's advocate, etc. At the very least, it's a change of pace from the REST vs everything else debate. :-)

Back to the issue at hand, debugging Ruby. I certainly understand where Giles is coming from, as I've questioned a few Rubyists about debugging, only to have them claim "I don't need a debugger" and "the command-line debugger is good enough". There is clearly a notion in the Ruby community, unlike one I've seen almost anywhere else, that debuggers aren't important.

I twittered this morning, in reference to Giles' post, the following: "Seems like Stockholm syndrome to me". As in, if you don't have a decent debugger to use in the first place, it seems natural to rationalize reasons why you don't need one. I have the exact same feeling about folks who spend almost all of their time programming Java and claim "the only way to do real programming is in an IDE"; because it's pretty clear to me, at this point, that the only way to do Java programming is with an IDE. I've personally not needed an IDE to do programming in any other language, except of course Smalltalk, where it was unavoidable. Of course, extremely programmable text editors like emacs, TextMate, and even olde XEDIT kinda blur the line between a text editor and an IDE.

My personal opinion: I love debuggers. If I'm stuck with a line-mode debugger, then fine, I'll make do, or write a source-level debugger myself (aside: you just haven't lived life if you haven't written a debugger). But I'll usually make use of a source-level debugger, if it's easy enough to use. Sometimes they aren't.

Honestly, I love all manner of program-understanding tools. Doc-generators, profilers, tracers, test frameworks, code-generators, etc. They're tools for your programming kit-bag. I use 'em if I need 'em and I happen to have 'em. They're all 'crutches', because in a pinch, there's always printf() & friends. But why hop on one leg if I can actually make better headway, with a crutch, to the finish line of working code? Seems like a puritanical view to me to say "you shouldn't use crutches", and especially ironic for Ruby itself, which is a language full of crutches!

Perhaps there is something about Ruby itself, which makes debugging bad. Or as Giles seems to be indicating, that testing frameworks can completely take the place of debugging. Some unique aspect of Ruby that sets it apart from other languages where debuggers are deemed acceptable and desirable. As a Ruby n00b, I'm not yet persuaded.

As for Ruby debuggers themselves, NetBeans 6.0 (beta) has a fairly nice source-level debugger that pretty much works out-of-the-box. Eclipse fanatics can get Aptana or the dltk (Dynamic Languages Toolkit). I think it would be nice to have a stand-alone source-level Ruby debugger, outside the IDE, because honestly I think TextMate is a good enough IDE for Ruby anyway.

BTW, Rubyists, I'm getting tired of the Sapir-Whorf references.

Friday, October 12, 2007

on links

When people think about links, with regard to tying together information on the web, the usual thoughts are of URLs. Either absolute URLs, or a URL relative to some base (either implicitly the URL of the resource that contains the link, or explicitly via some kind of xml:base-like annotation).

But I wrestle with this.

Here's one issue. Let's say I have multiple representations of my resources available; today you see this typically as services exposing data as either JSON or XML. If that representation includes a link to other data that can be exposed as either JSON or XML, do you express that link as some kind of "platonic URL"? Or if you are doing content-negotiation via 'file name extension' sort of processing, does your JSON link point to a .json URL, but your XML link point to a .xml URL?

See a discussion in the JSR 311 mailing list for some thoughts on this; I stole the term "platonic URL" from this discussion.

The godfather had something interesting to say in a recent presentation. In "The Rest of REST", on slide 22, Roy Fielding writes:

Hypertext does not need to be HTML on a browser
- machines can follow links when they understand the data format and relationship types

Where's the URL? Perhaps tying links to URLs is a constraint we can relax. Consider, as a complementary alternative, that just a piece of data could be considered a link.

Here's an example: let's say I have a banking system with a resource representing a person, that has a set of accounts associated with it. I might typically represent the location of the account resources as a URL. But if I happen to know, a priori, the layout of the URLs, I could just provide an account number (assuming that's the key). With the account number, and knowledge of how to construct a URL to an account given that information (and perhaps MORE information), the URL to the account can easily be constructed.
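As a toy sketch of that, assuming an entirely hypothetical URL layout of /accounts/{accountNumber}:

public class AccountLinks {
    // the a priori knowledge: the base location and layout of account URLs
    static final String BASE = "https://bank.example.com";

    static String accountUrl(String accountNumber) {
        return BASE + "/accounts/" + accountNumber;
    }

    public static void main(String[] args) {
        // the representation carried only "12345678"; the client builds the link
        System.out.println(accountUrl("12345678"));
        // -> https://bank.example.com/accounts/12345678
    }
}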

The up-side is that the server doesn't have to calculate the URL if all it has is the account number; it just provides the account number. The notion of content-type-specific URLs goes away; there is only the account number. The resources on the server can be a bit more independent of each other; they don't have to know where a resource actually resides just to generate a URL to it.

Code-wise, on the server, this is nice. There's always some kind of translation step on the server that pulls your URLs apart, figures out what kind of resource you're going after, and then invokes some code to process the request. "Routing". For that code to also know how to generate URLs going back to other resources means it needs the reverse information.
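To illustrate (with the same hypothetical /accounts/{accountNumber} layout as above), the reverse information can be as simple as keeping the match pattern and the generation template side by side:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AccountRoute {
    // routing and reverse routing share one piece of knowledge
    static final Pattern MATCH    = Pattern.compile("/accounts/(\\d+)");
    static final String  TEMPLATE = "/accounts/%s";

    // routing: pull the URL apart to find the account number
    static String parse(String path) {
        Matcher m = MATCH.matcher(path);
        return m.matches() ? m.group(1) : null;
    }

    // reverse routing: generate the URL for a given account number
    static String generate(String accountNumber) {
        return String.format(TEMPLATE, accountNumber);
    }

    public static void main(String[] args) {
        System.out.println(parse("/accounts/12345678"));  // 12345678
        System.out.println(generate("12345678"));         // /accounts/12345678
    }
}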

The down-side, of course, is that you can't use a dumb client anymore; your client now needs to know things like how to get to an account given just an account number.

And just generally, why put more work on the client, when you can do it on the server? Well, server performance is something we're always trying to optimize - why NOT foist the work back to the client?

But let's also keep in mind that the Web 2.0 apps we know and love today aren't dumb clients. There's user-land code running there in your browser. Typically provided by the same server that's providing your resources in the first place. ie, JavaScript.

I realize that's a bad example for me to use; me being the guy who thinks browsers are a relatively terrible application client, but what the heck; that's the way things are today.

For folks who just want the data, and not the client code, because they have their own client code, well, they'll need some kind of description of how everything's laid out; the data within a resource representation, and the locations of the resources themselves. But the server already knows all that information, and could easily provide it in one of several formats (human- and machine-readable).

As a proof point of all of this, consider Google Maps. Think about how the map tiles are being downloaded, and how they might be referenced as "links". Do you think that when Google Maps first displays a page, all the map tiles for that first map view are sent down as URLs? Think about what happens when you scroll the map area, and new tiles need to be downloaded. Does the client send a request to the server asking for the URLs for the new tiles? Or maybe those URLs were even sent down as part of the original request.

All rhetorical questions, for me anyway. I took a quick look at the JavaScript for Google Maps in FireBug, and realized I've already debugged enough obfuscated code for a few lifetimes. Probably a TOS violation to do that anyway. Sigh. I'll leave that exercise to younger pups. But ... what would you do?

For Google Maps, it's easy to imagine programmatically generating the list of tiles based on location, map zoom level, and map window size, assuming the tiles are all accessible via URLs that include location and zoom level somewhere in the URL. In that case, the client code for calculating the URLs of the tiles needed is just a math problem. Why make it more complex than that?
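For instance, here's a sketch of that math, using the widely documented Mercator tile-numbering scheme; the tile URL template is made up, not Google's:

public class TileLinks {
    // compute a tile URL for a point, directly on the client;
    // no server round-trip needed to 'discover' the link
    static String tileUrl(double lat, double lon, int zoom) {
        int n = 1 << zoom;  // tiles per axis at this zoom level
        int x = (int) Math.floor((lon + 180.0) / 360.0 * n);
        double latRad = Math.toRadians(lat);
        int y = (int) Math.floor(
            (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
        return "http://tiles.example.com/" + zoom + "/" + x + "/" + y + ".png";
    }

    public static void main(String[] args) {
        // Raleigh, NC at zoom level 12
        System.out.println(tileUrl(35.78, -78.64, 12));
    }
}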

I think there are problem domains where dealing with 'links' as just data, instead of explicit URLs, makes sense, as outlined with Google Maps. Remember what Roy wrote in his presentation: "machines can follow links when they understand the data format and relationship types". Of course, there's plenty of good reason to continue to use URLs for links as well, especially with dumb-ish clients.

the envelope, please

Yaron Goland posted a funny Star Wars-flavored story on jamming square pegs into round holes with Atom. There's a great conversation in the comments of Sam Ruby's reply.

I think it's fair to say there's a tension here; on the one hand, there's no need to wrap your data in Atom or protocols in APP if you don't need to - that's just more stuff you have to deal with. On the other hand, if Atom and APP support becomes ubiquitous, why not take advantage of some of that infrastructure; otherwise, you may find yourself reinventing wheels.

I can certainly feel Yaron's point of "I'm running into numerous people who think that if you just sprinkle some magic ATOM pixie dust on something then it suddenly becomes a standard and is interoperable." I'm seeing this a lot now too. Worrisome. Even more so now that more and more people are familiar with the concept of feeds, but don't understand the actual technology. I've seen people describe things as if feeds were being pushed to the client, for instance, instead of being pulled from the server.

One thing that bugs me is the set of potentially extraneous bits in feeds / entries that are actually required: atom:name, atom:title, and atom:summary. James Snell is right; it's simple enough to boilerplate these when needed. But to me, Atom and APP have enough beauty, and enough potential for reuse across a wide variety of problem domains, that these seem like warts.

Another thorn in my side is the notion of two ways of providing the content of an item. atom:content with embedded content, or linked to with the src attribute. The linked-to style is needed, clearly, for binary resources, like image files. But it's a complication; it would clearly be easier to have just one way to do it, and that would of course have to be the linked-to style.

The picture in my mind is that Atom becomes something like ls; just get me the list of the 'things' in the collection, and some of their metadata. I'll get the 'things' separately, if I even ever need them. Works for operating systems.

Of course, the tradeoff there is that there are big performance gains to be had by including your content IN the feed itself, with the embedded style; primarily in reducing the number of round-trips between client and server from 1 + the number of resources, to 1. It doesn't help that our current web client of choice, the web browser, doesn't provide programmatic control over its own cache of HTTP requests. Maybe if it did, the linked-to style would be less of a performance issue.

I suspect I'm splitting hairs to some extent; one of my many vices. I'm keeping an open mind; I'm glad people are actually playing with using Atom / APP as an application-level envelope / protocol. It's certainly worth the effort to try. There's plenty to learn here, and we're starting to have some nice infrastructure here to help us along, and the more people play, the more infrastructure we'll get, and ...

Tuesday, October 09, 2007

web and rest punditry

  • In "Google Map 'Documents'", Stefan Tilkov notes:

    In a comment, Patrick Mueller asks whether I'd consider Google Maps "document-oriented".

    Given that I can say that this is where I live and this is where I work I'd claim it's document-oriented enough for me :-)

    Stefan is referring to a comment discussion in his blog post on Lively. How we got from Lively to whether Google Maps is "document-oriented" ... well, read the original post and the comments.

    w/r/t "document oriented", the term is used on the Lively page, but I think Stefan is misinterpreting it. I can only infer from Stefan's post that he believes a web application is "document oriented" if you can generate a URL to somewhere in the middle of it. Like he did with two map links he included.

    I'd refer to this capability as having addressable pages. Addressable in terms of URLs that I can paste into a browser window, email to my mom, or post in a blog entry. Important quality. It's especially nice if you can just scrape the URL from the address bar of your browser, but not critical, as Google Maps proves; a button to generate a link is an acceptable, but less friendly substitute. Being addressable is especially hard for "web 2.0" apps, which is why I mention it at all.

    My read of the Lively page's usage of "document-oriented" is more a thought on application flow. In particular, pseudo-conversational processing. Which is both the way CICS "green screen" applications are written, as well as the way Web 1.0 applications are written. It turns out to be an incredibly performant and scalable style of building applications that can run on fairly dumb clients. The reason I infer this as "document-oriented" is that, in the web case, you literally are going from one document (one URL) to another document (another URL) as you progress through the application towards your final goal. Compare that to the style of imperative and event-driven programming you apparently do with Lively.

    So, with that thought in mind, Google Maps is clearly not "document oriented". The means by which you navigate through the map is 'live'; you aren't traversing pages like you did in old school MapQuest (which, btw, is also now "live").

    But even still, given my interpretation of Stefan's definition, I'd say there's no reason why a Lively app can't be just as "document-oriented" as Google Maps; that is, exposing links to application state inside itself as URLs. You may need to work at it, like you would with any Web 2.0 app, but I don't see any technical reason why it can't be done. Hint: client code running in your browser can inspect your URLs.

    Back to Stefan's original note about Lively: "it might be a good idea to work with the Web as opposed to fight against it". I think I missed Stefan's complaints about Google Maps fighting against the web. Because if Lively is fighting against the web, then so is Google Maps.

    Lastly, a note that Lively is built out of two technologies, JavaScript and SVG, and it runs in a web browser. I'm finding it really difficult to figure out how Lively is fighting the web.

  • In "What is SOA?", Pete Lacey notes:

    Another problem with the SOA name is the "service" bit. At least for me, the term "service" connotes a collection of non-uniform operations. I don't even like the phrase "REST Web services." Certainly, SOAP/WS-*, CORBA, DCOM, etc. fit this definition. But REST? Not so much. In REST the key abstraction is the resource, not the service interface. Therefore SOA (and I know this is not anyone's strict definition) encompasses the above mind set, but includes SOAP and similar technologies and excludes REST.

    If you change Pete's definition of "service" to be "a collection of operations", independent of whether they are uniform or not, then REST fits the definition of service. Next, you can simply say the resource (URL) is the service interface, for REST. Just a bunch of constraints / simplifications / specializations of the more generic 'service' story.

    Sure there are plenty of other details that separate REST from those other ... things. But you can say that about all of them; they're ALL different from each other, in the details. And at 10K feet, they're all doing generally the same thing.

    As a colleague mentioned to me the other day, REST is just another form of RPC.

    I feel like we might be throwing out the baby with the bath water here. It's true that I never want to go back to the CORBA or SOAP/WS-* worlds (I have the scars to prove I was there), but that doesn't mean there's nothing to learn from them. For instance, the client-facing story for REST seems a bit ugly to me. I know this isn't real REST, but if this is the kind of 'programming interface' that we have to look forward to, in terms of documentation and artifacts to help me as a programmer ... we got some problems.

    I look forward to seeing what Steve Vinoski brings to the table, as a fellow scar-bearer.

Saturday, October 06, 2007

php in project zero

Project Zero's ebullient leader, Jerry Cuomo, just published an article at his blog talking about the PHP support in Project Zero. If you didn't already know, PHP is supported in Project Zero using a PHP interpreter written in Java. Pretty decent 10K ft summary of why, what, etc. With links to more details.

Wanted to point out two interesting things, to me, from Jerry's post ...

"The idea with XAPI-C is to take the existing php.net extensions, apply some macro magic, potentially some light code rewrite, and make those extension shared libraries available to Java through JNI."

Interesting programming technique; trying to reuse all the great, existing PHP extensions out there. One of PHP's great strengths is the breadth of the extension libraries. It would be great to be able to reuse this code, if possible. It's a really interesting idea to me in general, to be able to reuse extension libraries, not just across different implementations of the same language, but across multiple languages. It just seems like a terrible waste to have lots of interesting programming language libraries available, but not be able to use them in anything but a single language.

"We have debated the idea of moving our PHP interpreter into a project unto it's own - where it can explore better compatibility with php.net as well as pursue supporting a full complement of PHP applications. Thoughts?"

This seems like it makes a lot of sense. Especially if it meant being able to open source the interpreter to allow a wider community to develop it. I think the main question is, is there a wider community out there interested in continuing to develop the interpreter and libraries?

Lastly, want to point out how impressed I've been by the team in Hursley, UK and RTP, NC for getting the interpreter as functional as it is so quickly. The team maintains a wiki page at the Project Zero web site describing the current state of functionality, if you're interested in seeing how far along they are.

Thursday, September 20, 2007

which flavor do you favor

A question: if you had to provide a client library to wrapper your RESTful web services, would you rather expose it as a set of resources (URLs) with the valid methods (request verbs) associated with them, or provide a flat 'function library' that exposes the resources and methods in a more human-friendly fashion?

Example. Say you want to create a to-do application, which exposes two resources: a list of to-do items, and a to-do item itself. A spin on the "Gregorio table" might look like this:

resource      URL template       HTTP VERB   description
to-do items   /todo-items        GET         return a list of to-do items
to-do items   /todo-items        POST        create a new to-do item
to-do item    /todo-items/{id}   GET         return a single to-do item
to-do item    /todo-items/{id}   PUT         update a to-do item
to-do item    /todo-items/{id}   DELETE      delete a to-do item

(Please note: the examples below are painfully simple, exposing just the functional parameters (presumably URI template variables and HTTP request content) and return values (presumably HTTP response content), and not contextual information like HTTP headers, status codes, caches, socket pools, etc. A great simplification. Also note that while I'm describing this in Java code, the ideas are applicable to other languages.)

If you were going to convert this, mechanically, to a Java interface, it might look something like this:

For the first two rows of the table, you use the following interface:

public interface ToDoItems {
    public ToDo[] GET();
    public ToDo POST(ToDo item);
}

For the last three rows of the table, you use the following interface:

public interface ToDoItem {
    public ToDo GET(String id);
    public ToDo PUT(String id, ToDo item);
    public void DELETE(String id);
}

I'll call this the 'pure' flavor.

A different way of thinking about this is to think about the table as a flat list of functions. In that flavor, add another column to the table, named "function", where the value in the table will be unique across all rows. Presumably the function names are arbitrary, but sensible, like ToDoList() for the /todo-items - GET operation.

Here's our new table:

function     resource      URL template       HTTP VERB   description
ToDoList     to-do items   /todo-items        GET         return a list of to-do items
ToDoCreate   to-do items   /todo-items        POST        create a new to-do item
ToDoGet      to-do item    /todo-items/{id}   GET         return a single to-do item
ToDoUpdate   to-do item    /todo-items/{id}   PUT         update a to-do item
ToDoDelete   to-do item    /todo-items/{id}   DELETE      delete a to-do item

This flavor might yield the following sort of interface:

public interface ToDoService {
    public ToDo[] ToDoList();
    public ToDo   ToDoCreate(ToDo item);
    public ToDo   ToDoGet(String id);
    public ToDo   ToDoUpdate(String id, ToDo item);
    public void   ToDoDelete(String id);
}

I'll call this the 'applied' flavor.

Now, if you look at the combination of the two 'pure' interfaces, compared with the 'applied' interface, there's really no difference in function. In fact, the code to implement all these methods across both flavors would be exactly the same.

The only difference is how they're organized.

Now, the question is, which one is better?

You might say I'm crazy; who would ever choose the 'pure' story over the 'applied' story? And my gut tells me you're right. The 'applied' story seems to be a better fit for humans, who are largely going to be the clients of these interfaces, writing programs to use them.

But this flies in the face of transparency, where we don't want to hide stuff so much from the user. HTTP is in your face, and all that. At what point do we hide HTTP-ness? If you don't want to hide stuff from your users, you might choose 'pure'.

And I wonder, are there other advantages of the 'pure' interface? You might imagine some higher-level programming capabilities (mashup builders, or even meta-programming facilities if your programming language can deal with functions/methods as first class objects) that would like to take advantage of the benefits of uniform interface (as in the 'pure' interface).

And of course, there's always the option of supporting both interfaces, as in something like this:

public interface ToDoService2 {
    public ToDo[] ToDoList();
    public ToDo   ToDoCreate(ToDo item);
    public ToDo   ToDoGet(String id);
    public ToDo   ToDoUpdate(String id, ToDo item);
    public void   ToDoDelete(String id);

    static public interface ToDoItems {
        public ToDo[] GET();
        public ToDo POST(ToDo item);
    }

    static public interface ToDoItem {
        public ToDo GET(String id);
        public ToDo PUT(String id, ToDo item);
        public void DELETE(String id);
    }
}

Even though you have 10 methods to write, each real 'operation' is duplicated in both the 'pure' and 'applied' flavors, so you really only have 5 methods to implement. No additional code for you to write (relatively), but double the number of ways people can interact with your service.
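To make that concrete, here's a sketch of an implementation where the 'pure' methods just delegate to the 'applied' ones. The in-memory store is hypothetical plumbing for the example, and ToDo is the data bean assumed by the interfaces above:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ToDoServiceImpl
        implements ToDoService2, ToDoService2.ToDoItems, ToDoService2.ToDoItem {

    private final Map<String, ToDo> store  = new ConcurrentHashMap<String, ToDo>();
    private final AtomicInteger     nextId = new AtomicInteger();

    // the five 'applied' methods - the only real implementations
    public ToDo[] ToDoList()                       { return store.values().toArray(new ToDo[0]); }
    public ToDo   ToDoCreate(ToDo item)            { store.put(String.valueOf(nextId.incrementAndGet()), item); return item; }
    public ToDo   ToDoGet(String id)               { return store.get(id); }
    public ToDo   ToDoUpdate(String id, ToDo item) { store.put(id, item); return item; }
    public void   ToDoDelete(String id)            { store.remove(id); }

    // the five 'pure' methods delegate; no new logic to write
    public ToDo[] GET()                     { return ToDoList(); }
    public ToDo   POST(ToDo item)           { return ToDoCreate(item); }
    public ToDo   GET(String id)            { return ToDoGet(id); }
    public ToDo   PUT(String id, ToDo item) { return ToDoUpdate(id, item); }
    public void   DELETE(String id)         { ToDoDelete(id); }
}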

Wednesday, September 19, 2007

NetBeans 6.0 Beta 1

Shiver me timbers, yes, I'm blogging a bit about NetBeans. After hearing such gushing reviews of it, I figured I'd take a look. It's been at least a year, and probably more, since I last looked at it. And I should note, I'm just looking at the Ruby bits.

Thought I'd provide some quick notes on NetBeans 6.0 Beta 1 as a confirmed Eclipse user. I'd give the history of my usage of Eclipse, but then Smalltalk enters the picture around the pre-Eclipse VisualAge Java time-frame, and you don't want me to go there, do ya matey? That would just make me sad anyway, remembering the good old days.

I've also used the RDT functionality available in Aptana, and will make comparisons as appropriate.

cons:

  • On the mac, uses a package installer, instead of just a droppable .app file.

  • Mysterious 150% cpu usage after setting my Ruby VM to be my MacPorts installed ruby. I didn't see any mention in the IDE of what it was doing, but I figured it was probably indexing the ruby files in my newly-pointed-to runtime. Only lasted a minute or so. If it had lasted much longer, I might have killed it, and then uninstalled it.

  • Can only have a single Ruby VM configured; Eclipse language support usually allows multiple runtimes to be configured, one as default, but overrideable on a per-project or per-launch basis. What do JRuby folks do, who want to run on JRuby or MRI alternatively?

  • Plenty of "uncanny valley" effects going on, because Swing is in use. Of course, Eclipse also has a lot of custom ui elements; I'm becoming immune to the uncanny valley; and FireFox on the mac doesn't help there either.

pros:

  • I see the Mac .app icon changed from the sharp-cornered version to a kinder, gentler version (see the image at the top), but I think I can still validly compare Eclipse and NetBeans to two of my favorite sci-fi franchises, given their logo choices. But it's certainly less Borg-ish than older versions.

  • Install now ships as a .dmg for the Mac (disk image file) instead of an embedded zip file in a shell script.

  • Debugging works great. Same as Eclipse with the RDT.

  • I can set Eclipse key-bindings.

  • F3 works most of the time ("find the definition") like in Eclipse. In fact, this is cool: F3 on a 'builtin' like Time, and NetBeans generates a 'dummy' source file showing you what the source would look like, sans method bodies, but with comments, and the method navigator populated. Nice!

  • A Mercurial plugin is available for easy installation through the menus, and CVS and SVN support is built-in. I played a bit with the Mercurial plugin in a previous milestone build, and it was easy enough to use, but I never could figure out how to 'push' back to my repo. Why Eclipse doesn't ship SVN support, built-in, in this day and age, is a mystery to me.

  • Don't need to add the "Ruby project nature" to a project just to edit a ruby file in the project. How I despise Eclipse's project natures.

  • Provides great competition for Eclipse.

Quite usable, overall. Hats off to the NetBeans folks! I'll probably start using it for one-off-ish Ruby work I do.