
pmuellr is Patrick Mueller


Friday, April 28, 2006

A Reel Man making the rounds

My buddy Todd Lothery's movie, A Reel Man, is making the rounds. Here's a short blurb I pulled from the Indianapolis International Film Festival:

A Reel Man is a profile of Skip Elsheimer, a Raleigh, North Carolina resident who collects educational films. Skip started collecting these 16mm films many years ago, and he now has almost 17,000. His archive includes films from the 1940s, ‘50s, ‘60s, ‘70s and ‘80s on every conceivable instructional topic – sex education, personal hygiene, public safety, driver’s ed, occupational training, the dangers of drugs and alcohol, how to be popular, etc. The films in Skip’s archive are campy, amusing and not a little propagandistic, but they’re also valuable, fascinating time capsules that reflect America’s social attitudes and cultural values throughout the last half-century.

A Reel Man also appears to have been shown on a public TV network in South Carolina.

I saw A Reel Man as Todd's project at the Center for Documentary Studies at Duke, where he honed his craft. The movie had its first public showing at the Asheville Film Festival.

Cool.

For Raleigh-ites, you may remember Todd as a movie reviewer for the News and Observer a few years back. He and his wife Kay Wiles headed back home to Indy to be closer to family. We miss 'em!

Also for Raleigh-ites, Skip's A/V Geeks web site has a list of events where he'll be showing his movies, including free showings fairly often at the NC Museum of Natural Sciences.

For everyone, A/V Geeks has lots of stuff available for purchase, and some downloadable movies.

I gotta get Todd to upload his movie to YouTube after it makes the rounds.

Monday, April 24, 2006

The problems with JSON

My love affair with JSON is over. I'm going to outline the problems I've got with it, and then also reiterate some of the things I do still like about it.

  • Licensing
    I will admit this point is rather weak. It's based on the license that Douglas Crockford ("father of JSON"?) uses for the JSON code he makes available. It's an MIT-ish license, I suppose, but it has an odd additional stipulation: "The Software shall be used for Good, not Evil." There is some discussion of the license here (scroll about half way down the page). My experience dealing with IBM lawyers for the past 15 years is that this is exactly the kind of thing that drives them nuts. I'm with Mark on this (read the blog post), and I'll admit that when I see stuff like this, I just immediately assume a battle ahead with lawyers trying to get permission to reship it. I'm getting too old for that shit.

    So, like I said, this is a weak point, since as Crockford himself says, JSON parsers and generators are easy to write. It's just ... a shame.

  • Lack of complex data types
    While JSON has great support for basic data types (something XML lacks), it has no built-in support for complex data types. In JSON, instead of objects, you really have hash tables: the properties/attributes of the 'object' are string keys, and the values are any of the valid JSON types, including another 'object' (hash table).

    Unfortunately, for my purposes, this isn't good enough. The objects I'm dealing with right now at work are classically designed object-oriented instances of classes, classes that can be subclassed. JSON has no built-in way to indicate the actual type of an object, just the properties on it. Every object is 'anonymous'; or again, just a hash table.

    You can kind of fix this by including a type name as a property of the object, but this is a bit of a hack, and it's non-standard. And if you actually looked at some of the stuff we were JSON-ing, you'd realize that you'd really like something like XML namespaces to help cut down on string duplication when your type names are long (i.e., they include an XML namespace, package, or module name). That's hard to fix.

    XML, on the other hand, does a pretty good job of this. Type names can become actual element names, using namespaces to end up with a short form of the longer, universally unique class name, with the long name declared up front, kinda like a Java import statement. Nice.
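For what it's worth, the type-name-as-a-property hack might look like the sketch below. Everything here is made up for illustration: the `$type` key, the class name, and the constructor registry are hypothetical, and JSON.parse stands in for whatever strict JSON parser you have on hand.

```javascript
// Hypothetical wire format: smuggle the long class name in a reserved key.
var wire = '{"$type": "com.example.finance.Account", "id": 42, "balance": 100.5}';

// A classically designed class we'd like to get back out on the other end.
function Account(props) {
  this.id = props.id;
  this.balance = props.balance;
}

// Constructors we know about, keyed by their long type name.
// Note the string duplication problem: every instance repeats the full name.
var registry = { "com.example.finance.Account": Account };

// "Revive" the anonymous hash table into a typed instance.
function revive(json) {
  var data = JSON.parse(json);
  var ctor = registry[data["$type"]];
  return ctor ? new ctor(data) : data; // fall back to the plain hash table
}

var acct = revive(wire);
console.log(acct instanceof Account); // true
console.log(acct.balance);            // 100.5
```

It works, but nothing in JSON itself blesses the `$type` key, so every producer and consumer has to agree on the convention out-of-band.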

  • Executable
    At the end of the day, this is really the biggest problem: JSON is executable code. How long have we known that you really shouldn't be sending executable code over the network? You gotta have a lot of trust in your partner if you're going to execute code they give you. With JSON in JavaScript, you really should forgo using eval(), or any other function that uses eval(), unless you are really sure you can trust your partner. And I would claim that's hard. Ask MySpace.

    Instead of using eval(), you should really use some code that actually parses the JSON text, like every other language does. But you gotta wonder: what's faster, a JSON parser written in JavaScript, or the existing DOM engines parsing XML? My bet's on the DOM engines. Complete guess. But even if a JSON parser were somehow faster, it still means you have to carry around a wad of JS code to parse JSON for your pages. Baggage.
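To make the eval() risk concrete, here's a sketch with a hypothetical hostile payload; JSON.parse stands in for any strict parser. The point: eval() executes whatever it's handed, while a real parser treats the same bytes as data and rejects anything that isn't actually JSON.

```javascript
// A response that *looks* like JSON but smuggles in executable code.
var hostile = '{"name": (function(){ stolen = true; return "x"; })()}';

var stolen = false;

// eval() happily runs the embedded function -- the side effect fires.
var viaEval = eval('(' + hostile + ')');
console.log(stolen);       // true -- the payload executed
console.log(viaEval.name); // "x"

// A strict parser treats the same text as data and rejects it.
var rejected = false;
try {
  JSON.parse(hostile);
} catch (e) {
  rejected = true;         // SyntaxError: a function call is not valid JSON
}
console.log(rejected);     // true
```

Here the side effect is just setting a flag; on a real page it could be reading your cookies.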

So, what do I still like about it?

  • Support for basic data types
    I still really love the self-descriptive typing for basic data types in JSON. Compare XML, where just because you see a text node or attribute value of "true", you really have no idea whether it's a string, or a boolean, or maybe even some specialized marshalling of another type. No problem with JSON: "true", the string, is rendered in a completely different way than true, the boolean. This is a fantastic property of JSON; hopefully future structured serialization languages will likewise incorporate it.
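A quick illustration of that self-descriptive typing (JSON.parse standing in for any strict JSON parser): the quotes alone tell you the type, so it survives the round trip.

```javascript
// Three values that would all look identical as XML text nodes.
var parsed = JSON.parse('{"asString": "true", "asBoolean": true, "asNumber": 1.5}');

console.log(typeof parsed.asString);  // "string"
console.log(typeof parsed.asBoolean); // "boolean"
console.log(typeof parsed.asNumber);  // "number"
```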

  • Easy for JavaScript users
    Really easy. My friends using dojo say that handling JSON from REST calls is quite simple. Anything to make life a little easier in that world is nice. If nothing else, using JSON for prototyping is probably a great idea.