pmuellr is Patrick Mueller, an IBMer doing dev advocate stuff for node.js on BlueMix.

other pmuellr thangs: muellerware.org, twitter.com/pmuellr

Friday, June 06, 2014

debugging node apps running on Cloud Foundry

For node developers, the node-inspector package is an excellent tool providing debugger support, when you need it. It reuses the Chrome DevTools debugger user interface, in the same kinda way my old weinre tool for Apache Cordova does. So if you're familiar with Chrome DevTools when debugging your web pages, you'll be right at home with node-inspector.

If you haven't tried node-inspector in a while, give it another try; the new-ish node-debug command orchestrates the dance between your node app, the debugger, and your browser, making it dirt simple to get the debugger launched.
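For example, a minimal way to kick that off (assuming node-inspector is installed globally, and a hypothetical entry point of app.js):

npm install -g node-inspector
node-debug app.js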

Lately I've been doing node development with IBM's Bluemix PaaS, based on Cloud Foundry. And I've been wanting to use node-inspector. But there's a problem. When you run node-inspector, the following constraints are in play:

  • you need to launch your app in debug mode
  • you need to run node-inspector on the same machine as the app
  • node-inspector runs a web server which provides the UI for the debugger

All well and good, except if the app you are trying to debug is a web server itself. Because with Cloud Foundry, an "app" can only use one HTTP port - but you need two: one for your app and one for node-inspector.

And so, the great proxy-splitter hack.

Here's what I'm doing:

  • instead of running the actual app, run a shell app
  • that shell app is a proxy server
  • launch the actual app on some rando port, only visible on that machine
  • launch node inspector on some rando port, only visible on that machine
  • have the shell app's proxy direct traffic to node-inspector if the incoming URL matches a certain pattern
  • for all other URLs the shell app gets, proxy to the actual app

AND IT WORKS!
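Here's a minimal sketch of the splitter idea, using the http-proxy package; the ports and the /--debugger/ URL prefix are made-up placeholders, and this is not the actual cf-node-debug code:

var http      = require("http")
var httpProxy = require("http-proxy")

var APP_PORT      = 3000                       // the actual app, listening only on localhost
var DEBUGGER_PORT = 3001                       // node-inspector, listening only on localhost
var PUBLIC_PORT   = process.env.PORT || 8080   // the one port Cloud Foundry gives the "app"

var proxy = httpProxy.createProxyServer({})

// requests whose path starts with the debugger prefix go to node-inspector,
// everything else goes to the actual app
function targetFor(url) {
  var port = /^\/--debugger\//.test(url) ? DEBUGGER_PORT : APP_PORT
  return "http://localhost:" + port
}

var server = http.createServer(function (req, res) {
  proxy.web(req, res, { target: targetFor(req.url) })
})

// node-inspector talks over web sockets, so proxy upgrade requests as well
server.on("upgrade", function (req, socket, head) {
  proxy.ws(req, socket, head, { target: targetFor(req.url) })
})

server.listen(PUBLIC_PORT)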

And then imagine my jaw-dropping surprise, when last week at JSConf, Mikeal Rogers did a presentation on his occupy cloud deployment story, which ALSO uses a proxy splitter to do its business.

This is a thing, I think.

I've cobbled the node-inspector proxy bits together as cf-node-debug. Still a bit wobbly, but I just finished adding some security support so that you need to enter a userid/password to be able to use the debugger; you don't want strangers on the intertubes "debugging" your app on your behalf, amirite?

This works on BlueMix, but doesn't appear to work correctly on Pivotal Web Services; something bad is happening with the web sockets; perhaps we can work through that next week at Cloud Foundry Summit?

Monday, May 19, 2014

enabling New Relic in a Cloud Foundry node application

Something you will want to enable for your apps running in a Cloud Foundry environment such as BlueMix is monitoring. You'll want a service to watch your app, show you when it's down, how much traffic it's getting, etc.

One such popular service is New Relic.

Below are some instructions showing how you can enable a node.js application to optionally make use of monitoring with New Relic, and keep hard-coded names and license keys out of your source code.

The documentation for using New Relic with a node application is available in the Help Center documentation "Installing and maintaining Node.js".

But we're going to make a few changes, to make this optional, and apply indirection in getting your app name and license key.

  • instead of copying node_modules/newrelic/newrelic.js to the root directory of your app, create a newrelic.js file in the root directory, along the lines of the sketch following this list
  • This module is slightly enhanced from the version New Relic suggests you create (see https://github.com/newrelic/node-newrelic/blob/master/newrelic.js). Rather than hard-code your app name and license key, we get them dynamically.

  • The app name is retrieved from your package.json's name property, and the license key is obtained from an environment variable. Note this code is completely portable, and can be copied from one project to another without having to change keys or names.

  • To set the environment variable for your Cloud Foundry app, use the command

    cf set-env <app-name> NEW_RELIC_LICENSE_KEY 983498129....
    
  • To run the code in the initialize() function, call it from your application startup, as close to the beginning as possible (see the startup snippet after this list)

  • This code is different from what New Relic suggests you use at the beginning of your application code; instead of doing the require("newrelic") directly in your code, it will be run via the require("./newrelic").initialize() code.

  • If you don't have the relevant environment variable set, then New Relic monitoring will not be enabled for your app.
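Here's a sketch of the kind of newrelic.js described in the list above; it's not the exact file I use, and the logging level is just a placeholder:

var path = require("path")

var pkg        = require(path.join(__dirname, "package.json"))
var licenseKey = process.env.NEW_RELIC_LICENSE_KEY

// standard New Relic agent configuration, with the app name and license key
// filled in dynamically instead of hard-coded
exports.config = {
  app_name:    [ pkg.name ],
  license_key: licenseKey || "",
  logging:     { level: "info" }
}

// called from application startup; only loads the newrelic agent
// when a license key is actually available in the environment
exports.initialize = function initialize() {
  if (!licenseKey) return
  require("newrelic")
}

And the startup snippet is then just:

// as early as possible in your application startup
require("./newrelic").initialize()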

Another option for keeping your license key un-hard-coded and dynamic is to use a Cloud Foundry service. For instance, you can create a user-provided service instance using the following command:

cf cups NewRelic -p '{"key":"983498129...."}'

You can then bind that service to your app:

cf bind-service <app-name> NewRelic

Your application code can then get the key from the VCAP_SERVICES environment variable.
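For example, a sketch of pulling the key back out of VCAP_SERVICES for the user-provided service created above (assuming the service is named NewRelic):

var services = JSON.parse(process.env.VCAP_SERVICES || "{}");

// user-provided services show up under the "user-provided" key
var newRelicKey = null;
(services["user-provided"] || []).forEach(function (service) {
  if (service.name === "NewRelic") newRelicKey = service.credentials.key;
});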

I would actually use services rather than environment variables in most cases, as services can be bound to multiple apps at the same time, whereas you need to set the environment variables for each app.

In this case, I chose to use an environment variable, as you really should be doing the New Relic initialization as early as possible, and there is less code involved in getting the value of an environment variable than parsing the VCAP_SERVICES values.

You may want to add some other enhancements, such as appending a -dev or -local suffix to the app name if you determine you're running locally instead of within Cloud Foundry.

I've added optional New Relic monitoring to my node-stuff app, so you can see all this in action in the node-stuff source at GitHub.


update on 2014/05/19

After posting this blog entry, Chase Douglas (@txase) from New Relic tweeted that "the New Relic Node.js agent supports env vars directly", pointing to the New Relic Help Center doc "Configuring Node.js with environment variables".

Thanks Chase. Guess I need to learn to RTFM!

What this means is that you can most likely get by with a much easier setup if you want to live the "environment variable configuration" life. There may still be some value in a more structured approach, like what I've documented here, if you'd like to be a little more explicit.

Also note that I specified using an environment variable of NEW_RELIC_LICENSE_KEY, which is the exact same name as the environment variable that the New Relic module uses itself! (Great minds think alike?) As such, it would probably be a good idea, if you want to do explicit configuration as described here, to avoid using NEW_RELIC_* as the name of your environment variables, as you may get some unexpected interaction. In fact, my read of the precedence rules is that the environment variables override the newrelic.js config file settings, so the setting in the newrelic.js is ignored in favor of the environment variable, at least in this example.

Tuesday, April 15, 2014

my new job at IBM involving node.js

Over the last year, as my work on weinre (part of the Apache Cordova project) has wound way down, folks have been asking me "What are you working on now?"

The short answer was "cloud stuff". The long answer started with "working with the Cloud Foundry open source PaaS project".

IBM has been involved in the Cloud Foundry project for a while now. I've been working on different aspects of using Cloud Foundry, almost all of them focused around deploying node.js applications to Cloud Foundry-based platforms.

About a month and a half ago, IBM announced our new BlueMix PaaS offering1, based on Cloud Foundry.

And as of a few weeks ago, I've taken a new job at IBM that I call "Developer Advocate for BlueMix, focusing on node.js". Gotta word-smith that a bit. In this role, I'll continue to be able to work on different aspects of our BlueMix product and the open source Cloud Foundry project, using node.js, only this time, more in the open.

This is going to be fun.

I already have a package up at npm - cf-env - which makes it a bit easier to deal with your app's startup configuration. It's designed to work with Cloud Foundry based platforms, so works with BlueMix, of course.

I've also aggregated some node.js and BlueMix information together into a little site, hosted on BlueMix:

http://node-stuff.mybluemix.net

I plan on working on node.js stuff relating to:

  • things specifically for BlueMix
  • things more generally for Cloud Foundry
  • things more generally for using any PaaS
  • things more generally for using node.js anywhere

I will be posting things specific to BlueMix on the BlueMix dev blog, and more general things on this blog.

If you'd like more information on using node.js on BlueMix or Cloud Foundry, don't hesitate to get in touch with me. The easiest ways are @pmuellr on twitter, or email me at Patrick_Mueller@us.ibm.com.

Also, did you know that IBM builds its own version of node.js? I'm not currently contributing to this project, but I've known the folks working on it for a long time. Like, from the Smalltalk days. :-)


note on weinre

I continue to support weinre. You can continue to use the following link to find the latest information, links to forums and bug trackers, etc.

http://people.apache.org/~pmuellr/weinre/docs/latest/Home.html

I will add that most folks have absolutely no need for weinre anymore; both Android and iOS have great stories for web app debug.

As Brian LeRoux has frequently stated, one of the primary goals of Apache Cordova is to cease to exist. weinre has nearly met that goal.


1 Try Bluemix for free during the beta - https://bluemix.net/

Tuesday, November 12, 2013

gzip encoding and compress middleware

Hopefully everyone who builds stuff for the web knows about gzip compression available in HTTP. Here's a quick intro if you don't: https://developers.google.com/speed/articles/gzip

I use connect or express when building web servers in node, and you can use the compress middleware to have your content gzip'd (or deflate'd), like in the little snippet below.
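Something like this, say, with express 3.x (a sketch; the static directory is just a placeholder):

var express = require("express")
var app     = express()

app.use(express.compress())                    // gzip/deflate responses the client can accept
app.use(express.static(__dirname + "/public"))

app.listen(3000)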

However ...

Let's think about what's going on here. When you use the compress middleware like this, for every response that sends compressible content, your content will be compressed. Every time. Of course, for your "static" resources, the result of that compression is the same every time, so for those resources it's really kind of pointless to run the compression for each request. You should do it once, and then reuse that compressed result for future requests.

Here are some tests using the play Waste, by Harley Granville-Barker. I pulled the HTML version of the file, and then also gzip'd the file manually from the command-line for one of the tests.

The HTML file is ~300 KB. The gzip'd version is ~90 KB.

And here's how the server I built serves the files:

The server runs on 3 different HTTP ports, each one serving the file, but in a different way.

Port 4000 serves the HTML file with no compression.

Port 4001 serves the HTML file with the compress middleware.

Port 4002 serves the pre-gzip'd version of the file that I stored in a separate directory; the original file was waste.html, but the gzip'd version is in gz/waste.html. It checks the incoming request to see if a gzip'd version of the file exists (caching that result), and internally redirects the server to that file by resetting request.url, setting the appropriate Content-Encoding, etc. headers.

What a hack! Not quite sure that "fixing" request.url is kosher, but it worked great for this test.
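Here's a sketch of a server along those lines, using express 3.x; it's not the original code, but it matches the description above:

var fs      = require("fs")
var path    = require("path")
var express = require("express")

// port 4000: static serving, no compression
var plain = express()
plain.use(express.static(__dirname))
plain.listen(4000)

// port 4001: the compress middleware gzips the response on every request
var compressed = express()
compressed.use(express.compress())
compressed.use(express.static(__dirname))
compressed.listen(4001)

// port 4002: if a pre-gzip'd copy exists in gz/, point the request at it instead
var pregzipped = express()
var gzExists   = {}   // cache of "does gz/<file> exist?" checks

pregzipped.use(function (req, res, next) {
  var gzFile = path.join("gz", req.url)

  if (!(gzFile in gzExists)) {
    gzExists[gzFile] = fs.existsSync(path.join(__dirname, gzFile))
  }

  var clientAcceptsGzip = /\bgzip\b/.test(req.headers["accept-encoding"] || "")

  if (gzExists[gzFile] && clientAcceptsGzip) {
    req.url = "/" + gzFile                      // the hack: redirect express to the gz'd copy
    res.setHeader("Content-Encoding", "gzip")
    res.setHeader("Vary",             "Accept-Encoding")
  }

  next()
})
pregzipped.use(express.static(__dirname))
pregzipped.listen(4002)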

Here are some curl invocations.

$ curl --compressed --output /dev/null --dump-header - \
   --write-out "%{size_download} bytes" http://localhost:4000/waste.html

X-Powered-By: Express
Accept-Ranges: bytes
ETag: "305826-1384296482000"
Date: Wed, 13 Nov 2013 01:21:13 GMT
Cache-Control: public, max-age=0
Last-Modified: Tue, 12 Nov 2013 22:48:02 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 305826
Connection: keep-alive

305826 bytes

Looks normal.

$ curl --compressed --output /dev/null --dump-header - \
   --write-out "%{size_download} bytes" http://localhost:4001/waste.html

X-Powered-By: Express
Accept-Ranges: bytes
ETag: "305826-1384296482000"
Date: Wed, 13 Nov 2013 01:21:13 GMT
Cache-Control: public, max-age=0
Last-Modified: Tue, 12 Nov 2013 22:48:02 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
Content-Encoding: gzip
Connection: keep-alive
Transfer-Encoding: chunked

91071 bytes

Nice seeing the Content-Encoding and Vary headers, along with the reduced download size. But look ma, no Content-Length header; instead the content comes down chunked, as you would expect with a server-processed output stream.

$ curl --compressed --output /dev/null --dump-header - \
   --write-out "%{size_download} bytes" http://localhost:4002/waste.html

X-Powered-By: Express
Content-Encoding: gzip
Vary: Accept-Encoding
Accept-Ranges: bytes
ETag: "90711-1384297654000"
Date: Wed, 13 Nov 2013 01:21:13 GMT
Cache-Control: public, max-age=0
Last-Modified: Tue, 12 Nov 2013 23:07:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 90711
Connection: keep-alive

90711 bytes

Like the gzip'd version above, but this one has a Content-Length!

Here are some contrived, useless benches using wrk, that confirm my fears.

$ wrk --connections 100 --duration 10s --threads 10 \
   --header "Accept-Encoding: gzip" http://localhost:4000/waste.html

Running 10s test @ http://localhost:4000/waste.html
  10 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    71.72ms   15.64ms 101.74ms   69.82%
    Req/Sec   139.91     10.52   187.00     87.24%
  13810 requests in 10.00s, 3.95GB read
Requests/sec:   1380.67
Transfer/sec:    404.87MB

$ wrk --connections 100 --duration 10s --threads 10 \
   --header "Accept-Encoding: gzip" http://localhost:4001/waste.html

Running 10s test @ http://localhost:4001/waste.html
  10 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   431.76ms   20.27ms 493.16ms   63.89%
    Req/Sec    22.47      3.80    30.00     80.00%
  2248 requests in 10.00s, 199.27MB read
Requests/sec:    224.70
Transfer/sec:     19.92MB

$ wrk --connections 100 --duration 10s --threads 10 \
   --header "Accept-Encoding: gzip" http://localhost:4002/waste.html

Running 10s test @ http://localhost:4002/waste.html
  10 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    48.11ms   10.66ms  72.33ms   67.39%
    Req/Sec   209.46     24.30   264.00     81.07%
  20795 requests in 10.01s, 1.76GB read
Requests/sec:   2078.08
Transfer/sec:    180.47MB

Funny to note that the server using compress middleware actually handles fewer requests/sec than the one that doesn't compress at all. But this is a localhost test, so the network bandwidth/throughput isn't realistic. Still, makes ya think.

Tuesday, October 08, 2013

sourcemap best practices

Source map support for web page debugging has been a thing now for a while. If you don't know what a sourcemap is, Ryan Seddon's article "Introduction to JavaScript Source Maps", provides a technical introduction. The "proposal" for source map seems to be here.

In a nutshell, sourcemaps allow you to ship minified .js files (and maybe .css files) which point back to the original source, so that when you debug your code in a debugger like Chrome Dev Tools, you'll see the original source.

It's awesome.

Even better, we're starting to see folks ship source maps with their minified resources. For example, looking at the minified angular 1.2.0-rc.2 js file, you can see the sourcemap annotation near the end:

/*
//@ sourceMappingURL=angular.min.js.map
*/

The contents of that .map file look like this (I've truncated bits of it):

{
"version":3,
"file":"angular.min.js",
"lineCount":183,
"mappings":"A;;;;;aAKC,SAAQ,CAACA,CAAD,...",
"sources":["angular.js","MINERR_ASSET"],
"names":["window","document","undefined",...]
}

You don't need to understand any of this, but what it means is that if you use the angular.min.js file in a web page, and also make angular.min.js.map and angular.js files available, when you debug with a sourcemap-enabled debugger (like Chrome Dev Tools), you'll see the original source instead of the minified source.

But there are a few issues. Although I haven't written any tools to generate source maps, I do consume them in libraries and with tools that I use (like browserify). Sometimes I do a bit of surgery on them, to make them work a little better.

Armed with this experience, I figured I'd post a few "best practices", based on some of the issues I've seen, aimed at folks generating sourcemaps.

do not use a data-url with your sourceMappingURL annotation

Using a data-url, which browserify does, ends up creating a huge single line for the sourceMappingURL annotation. Many text editors will choke on this if you accidentally, or purposely, edit the file containing the annotation.

In addition, using a data-url means the mapping data is base64-encoded, which means humans can't read it. The sourcemap data is actually sometimes interesting to look at, for instance, if you want to see what files got bundled with browserify.

Also, including the sourcemap as a data-url means you just made your "production" file bigger, especially since it's base64-encoded.

Instead of using a data-url with the sourcemap information inlined, just provide a map file like angular does (described above).

put the sourceMappingURL annotation on a line by itself at the end

In the angular example above, the sourceMappingURL annotation is in a // style comment inside a /* */ style comment. Kinda pointless. But worse, it no longer works with Chrome 31.0.1650.8 beta.

Presumably, Chrome Dev Tools got a bit stricter with how they recognize the sourceMappingURL annotation; it seems to like having the comment at the very end of the file. See https://code.google.com/p/chromium/issues/detail?id=305284 for more info.

Browserify also has an issue here, as it adds a line with a single semi-colon to the end of the file, right before the sourceMappingURL annotation, which also does not work in the version of Chrome I referenced.

name your sourcemap file <min-file>.map.json

Turns out these sourcemap files are JSON. But no one uses a .json file extension, which seems unfortunate, as the browser doesn't know what to make of them if you happen to load one of the files directly. Not sure if there's a restriction about naming them, but it seems like there shouldn't be, and it makes sense to just use a .json extension for them.

embed the original source in the sourcemap file

Source map files contain a list of the names of the original source files (URLs, actually), and can optionally contain the original source file content in the sourcesContent key. Not everyone does this - I think most people do not put the original source in the sourcemap files (e.g., jquery, angular).

If you don't include the source with the sourcesContent key, then the source will need to be retrieved by your browser. Not only is this another HTTP GET required, but you will need to provide the original .js files as well as the minized .js files on your server. And they will need to be arranged in whatever manner the source files are specified in the source map file (remember, the names are actually URLs).

If you're going to provide a separate map file, then you might as well add the source to it, so the whole wad is available in one file.
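For example, a map file that embeds the original source might look something like this (contents truncated, file names made up):

{
"version":3,
"file":"foo.min.js",
"sources":["foo.js"],
"sourcesContent":["// the entire original contents of foo.js, as one string ..."],
"names":["window","document",...],
"mappings":"AAAA,SAASA,..."
}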

generate your sourcemap json file so that it's readable

I'm looking at you, http://code.jquery.com/jquery-2.0.3.min.map.

The file is only used at debug time, so it's unlikely there will be much of a performance/memory hit from pretty-printing it rather than single-lining it. Remember, as I said above, it's often useful to be able to read the sourcemap file, so ... make it readable.

if you use the names key in your sourcemap, put it at the end of the map file

Again, sourcemap files are interesting reading sometimes, but the names key ends up being painful to deal with if you generate the sourcemap with something like JSON.stringify(mapObject, null, 4). Because the value of the names key is an array of strings, it's pretty long, and it's not nearly as interesting to read as the other bits in the source map. Add the property to your map object last, before you stringify it, so it doesn't get in the way.
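A sketch of what that might look like when writing the map file (readable output, with names re-added last so it serializes at the end):

var fs = require("fs")

function writeMapFile(fileName, mapObject) {
  // pull names out and re-add it, so JSON.stringify emits it as the last property
  var names = mapObject.names
  delete mapObject.names
  mapObject.names = names

  fs.writeFileSync(fileName, JSON.stringify(mapObject, null, 4))
}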

where can we publish some sourcemap best practices?

I'd like to see somewhere we can publish and debate sourcemap best practices. Idears?

bugs opened

Some bugs relevant to the "sourceMappingURL not the last thing in the file" issue/violation:

Thursday, February 07, 2013

dustup

A few months ago, I took a look at Nathan Marz's Storm project. As someone who has used and implemented Actor-like systems a few times over the years, it's always fun to see another take. Although Storm is aimed at the "cloud", it's easy to look at it and see how you might be able to use it for local computation. In fact, I have a problem area I'm working in right now where something like Storm might be useful, but I just need it to work in a single process. In JavaScript.

Another recent problem area I've been looking at is async, and using Promises to help with that mess. I've been looking at Kris Kowal's q specifically. It's a nice package, and there's a lot to love. But as I was thinking about how I was going to end up using them, I imagined building up these chained promises over and over again for some of my processing. As live objects. Which, on the face of it, is insane. If I can build a static structure and have objects flow through it instead, I should be able to cut down on the number of live objects created, and it will probably be easier to debug.

So, with that in mind, I finally sat down and reinterpreted Storm this evening, with my dustup project.

Some differences from Storm, besides the super obvious ones (JavaScript vs Java/Clojure, and local vs distributed):

  • One of the things that didn't seem right to me in Storm was differentiating spouts from bolts. So I only have bolts.

  • I'm also a fan of the PureData real-time graphical programming environment, and so borrowed some ideas from there. Namely, that bolts should have inlets where data comes in, and outlets where they go out, and that to hook bolts together, that you'll connect an inlet to an outlet.

  • Designed with CoffeeScript in mind. So you can do nice looking things like this:

    x = new Bolt
        outlets:
           stdout: "standard output"
           stderr: "standard error"
    

    which generates the following JavaScript:

    x = new Bolt({
        outlets: {
           stdout: "standard output",
           stderr: "standard error"
        }
    })
    

    and this:

    Bolt.connect a: boltA, b: boltB
    

    which generates the following JavaScript

    Bolt.connect({a: boltA, b: boltB})
    

    Even though I designed it for and with CoffeeScript, you can of course use it in JavaScript, and it seems totally survivable.

Stopping there left me with something that seems useful, and still small. I hope to play with it in anger over the next week or so.

Here's an example of a topology that just passes data from one bolt to another, in CoffeeScript:
(here's the JavaScript for you CoffeeScript haters)
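The original example isn't reproduced here, but a sketch of it might look like this (the emit call and the data value are my guesses at the dustup API; the line-by-line walkthrough below refers to the lines of this sketch):

dustup = require "dustup"
Bolt   = dustup.Bolt

boltA = new Bolt
    outlets:
        a: "outlet a"

boltB = new Bolt
    inlets:
        b: (data) ->
            console.log data

Bolt.connect a: boltA, b: boltB

boltA.emit "a", "some data"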

In lines 1-2, we get access to the dustup package and the Bolt class that it exports.

On lines 4-6, we're creating a new bolt that only has a single outlet, named "a" (the string value associated with the property a - "outlet a" - is ignored).

On lines 8-11, we're creating a new bolt that only has a single inlet, named "b". Inlets need to provide a function which will be invoked when the inlet receives data from an outlet. In this case, the inlet function will write the data to "the console".

On line 13, we connect the a outlet from the first bolt to the b inlet of the second bolt. So that when we send a message out the a outlet, the b inlet will receive it.

Which we test on line 15, by having the first bolt emit some data. Guess what happens.

next

The next obvious thing to look at, is to see how asynchronous functions, timers, and intervals fit into this scheme.

Friday, November 30, 2012

weinre installable at heroku (and behind other proxy servers)

I finally resolved an issue with weinre relating to using it behind proxy servers.

Matti Paksula figured out what the problem was, but I wasn't happy with the additional complexity involved, so I made a simpler and more drastic change (in the getChannel() function).

Used to be, subsequent weinre connections after the first were checked to make sure they came from the same "computer"; that was just a silly sort of security check that can be worked around, if need be. So why bother with it, if it just gets in the way.

So, now weinre can run on heroku. I have a wrapper project at GitHub you can use to create your own server: https://github.com/pmuellr/weinre-heroku

Since Heroku provides 750 free dyno-hours per app per month, weinre is one app that uses one dyno at a time, and there are at most 744 hours in a month, you can run a weinre server on the Intertubes for free.

As usual, weinre has been updated on npm.

Friday, March 30, 2012

unofficial weinre binary packages for your convenience at my apache home page

weinre is now getting ready to go for the brass ring at Apache: an "official" release.

NOT THERE YET - NO OFFICIAL RELEASES YET.

Until that time, I have unofficial binary packages available for your convenience.

I've gone ahead and created a little download site at my personal Apache page, here: http://people.apache.org/~pmuellr/.

The download site contains all the doc and binary packages for every weinre version I've 'shipped', in the 1.x directories. It also contains unofficial binary packages of recent cuts of weinre for your convenience, as the bin, doc, and src directories in the weinre-builds directory, and the same version of the doc, unpacked, in the latest directory of weinre-docs.

As with previous binary releases for your convenience, you can npm install weinre. Here's the recipe:

npm install {base-url}apache-cordova-weinre-{version}-bin.tar.gz

{base-url} = http://people.apache.org/~pmuellr/weinre-builds/bin/
{version}  = 2.0.0-pre-H0FABA3W-incubating

That version number will change, so I'm not providing an exact link. Check the directory.

help me help you

If you'd like to accelerate the process of getting weinre to "official" status, you can help!

  • install it
  • use it
  • review it
  • build it
  • enhance it

Feel free to post to the cordova mailing list at Apache, the weinre google group, clone the source from Apache or GitHub, etc. All the links are here: http://people.apache.org/~pmuellr/weinre-docs/latest/.

One link I forgot to add, and feel free to open a bug about this, is the link to open a bug. It's hideous (the URL). Add component "weinre" and maybe a "[weinre]" prefix on the summary.

Thank you for helping yourself!

you f'ing moved the downloads again you f'er!

Chill. And some bad news. The official releases won't be here (I don't think). But I will provide a link on that site to the official releases, when they become real.

I intend to keep this page around until I can get all the weinre bits available in more appropriate places at Apache. Not there yet.

personal home pages at Apache

Apache provides "home pages" for Apache committers. That's what I'm using for this archive site. If you're a committer, and curious about how I did the directory indexing, the source for the site (modulo actual archives), is here: https://github.com/pmuellr/people.apache.org.

2012/03/30 1pm update

Jan Lehnardt pointed out to me on twitter that I should avoid the use of the word "release", so that word has been changed to things like "binary package for your convenience".

Friday, February 17, 2012

npm installable version of weinre

As part of the port of weinre from Java to node.js, it seemed logical to make it, somehow, npm installable.

Success.

There is currently a tar.gz download of the weinre runtime available at the pmuellr/incubator-cordova-weinre downloads page. I may keep a 'recent' build here until I get the continuous builds of weinre at Apache working. At which point, the tar.gz should be available somewhere at Apache.

Of course, let me know if you have any issues. Preferred place for discussion is the weinre Google Group.

install locally

npm install https://github.com/downloads/pmuellr/incubator-cordova-weinre/weinre-node-1.7.0-pre-2012-02-17--16-47-15.tar.gz

After that install, you can run weinre via

./node_modules/.bin/weinre

install globally

sudo npm -g install https://github.com/downloads/pmuellr/incubator-cordova-weinre/weinre-node-1.7.0-pre-2012-02-17--16-47-15.tar.gz

After that install, you can run weinre via

weinre

notes/caveats

  • The command-line options have not changed, although the --reuseAddr option is no longer supported.

  • There is no longer a Mac 'app' version of weinre.

  • This is not an 'official build' of weinre, in the Apache sense. I need to figure out what that even means first.

  • These 'unofficial' builds have horrendous semver patch version names. Deal. If you have a better scheme, please let me know.

  • I don't have plans to ever put weinre up at npmjs.org, since I have no idea how this would work, in practice. I don't want to own the weinre package up there; ideally it would be owned by Apache. Can't see how that would work either. If I ever do put a weinre package up at npm, I think it would be a placeholder with instructions on how to get it from Apache. Or maybe there's a way we can auto-link the npmjs.org package to Apache mirrors somehow?

  • I need to look at whether I should continue to ship all my package's pre-reqs (and their pre-reqs, recursively) or not. And relatedly, whether I should actually make fixed version pre-reqs of all the recursively pre-req'd modules. An issue came up today that I "refreshed" the node_modules, and got an unexpected update to the formidable package (a pre-req of connect which is a pre-req of express which is a pre-req of weinre). That's not great. And could be fixed by having explicit versioned dependencies of all the recursively pre-req'd modules in weinre itself. Sigh. What do other people do here?

  • Prolly need to improve the user docs. The remainder of this blog post is the shipped README.md file. So, to read the docs, start the server and read the docs it embeds, online. Sorry.

running

For more information about running weinre, you can start the server and browse the documentation online.

Start the server with the following command

node weinre

This will start the server, and display a message with the URL to the server. Browse to that URL in your web browser, and then click on 'documentation' link, which will display weinre's online documentation. From there click on the 'Running' page to get more information about running weinre.


Update: Patrick Kettner posted a link to this blog post on Hacker News, so you can discuss there, since I'm one of those "disable comments on blog" people. http://news.ycombinator.com/item?id=3604291.

Thursday, January 19, 2012

AMD - the good parts

Based on my blog post yesterday "More on why AMD is Not the Answer", you might guess I'm an AMD-hater. I strive to not be negative, but sometimes I fail. Mea culpa.

I do have a number of issues with AMD. But I think it's also fair to point out "the good parts".

my background

I've only really played with a few AMD libraries/runtimes. I've mostly been fiddling around with ways of taking node/CommonJS-style modules and making them consumable by AMD runtimes. The AMD runtimes I've played with the most are almond and (my own) modjewel. My focus has been on just the raw require() and define() functions.

But I follow a lot of the AMD chatter around the web, and have done at least reading and perusing through other AMD implementations. To find implementations of AMD, look at this list and search for "async". The ones I hear the most about are require.js, Dojo, PINF, and curl.

And now, "the good parts".

the code

Browsing the source repositories and web sites, you'll note that folks have spent a lot of time on the libraries - the code itself, documentation, test suites, support, etc. Great jobs, all. I know a lot of these libraries have been battle-tested by folks in positions to do pretty good battle-testing with them.

the evangelism

Special mention here to James Burke (not this one), who has done a spectacular job leading and evangelizing the AMD cause, both within the "JavaScript Module" community, and outside. In particular, he's been trying to get AMD support into popular libraries like jQuery, backbone, underscore and node.js. Sometimes with success, sometimes not. James has been cool and collected when dealing with jerks like me, pestering him with complaints, questions, requests and bug reports. Some open source communities can be cold and bitter places, but the AMD community is not one of those, and I'm sure that's in part due to James.

Thanks for all your hard work James!

the async support

AMD is an acronym for Asynchronous Module Definition. That first word, Asynchronous, is key. For the most part, if you're using JavaScript, you're living in a world of single-threaded processing, where long-running functions are implemented as asynchronous calls that take callback functions as arguments. It makes sense, at least in some cases, to extend this to the function of loading JavaScript code.

If you need to load JavaScript in an async fashion, AMD is designed for you, and uses existing async function patterns that JavaScript programmers should already be familiar with.
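For example, the basic AMD shape: dependencies are listed up front, and the callback runs once they've been loaded asynchronously (the module names here are made up):

// defining a module in terms of other modules
define(["dep/foo", "dep/bar"], function (foo, bar) {
    return {
        doSomething: function () { return foo.frob(bar.value) }
    }
})

// and the same callback shape when loading code on demand from application code
require(["dep/foo"], function (foo) {
    foo.frob()
})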

support for CommonJS

This is a touchy subject, because when you ask 10 people what "CommonJS" is, you'll get 15 different answers. For the purposes here, and really whenever I use the phrase "CommonJS", I'm technically referring to one of the Modules/1.x specifications, and more specifically, the general definitions of the require() function and the exports and module variables used in the body of a module. These general definitions are also applicable, with some differences, in node.js.

When I use the phrase "CommonJS", I am NEVER referring to the other specifications in the CommonJS specs stable (linked to above).

AMD supports this general definition of "CommonJS", in my book. Super double plus one good!
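Concretely, that support shows up as the "simplified CommonJS wrapping" flavor of define(), where the module body gets to use require(), exports, and module just like a node module (the module name here is made up):

define(function (require, exports, module) {
    // looks just like a node/CommonJS module body
    var foo = require("foo")

    exports.doSomething = function () {
        return foo.frob()
    }
})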

Aside: I'm looking for another 'name' to call my simplistic notion of CommonJS. So far, things like simple require()y support seem to capture my intent. But you know how bad I am with naming.

the design for quick edit-debug cycles

One of the goals of AMD is to facilitate a quick edit-debug cycle while you are working on your JavaScript. To do this, AMD specifies that you wrap the body of your module in a specific type of function wrapper when you author it. Because of the way it's designed, these JavaScript files can be loaded directly with a <script src=> element in your HTML file, or via <script src=> elements injected into your live web page dynamically, by a library. This means that as soon as you save the JavaScript you're editing in your text editor/IDE, you can refresh the page in your browser without an intervening "build step" or special server processing - the same files you are editing are loaded directly in the browser.

the resources

Beyond just loading JavaScript code, AMD supports the notion of loading other things. The things you can load are determined by what Loader Plugins you have available. Examples include:

Coming from the Java world, this is familiar territory; see: Class.getResource*() methods. For the most part, I consider this to be amongst the core bits of functionality that a programming library should provide. It's a common pattern to ship data with your code, and you need some way to access it. And, of course, this resource loading is available in an async manner with AMD.
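One well-known example of the idea is require.js's text! loader plugin, which loads a non-JavaScript resource shipped alongside your code, async like everything else (the paths here are made up):

// the template is fetched asynchronously, then handed to the callback as a string
require(["text!templates/widget.html"], function (widgetHtml) {
    document.getElementById("widget").innerHTML = widgetHtml
})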

Aside: I'm not sure what the analog of this is in the node.js world. You can certainly do it by hand, making use of the __dirname global, and then performing i/o yourself. I'm just not sure if someone has wrapped this up nicely in a library yet.

the build

At the end of the day, hopefully, your web application is going to be available to some audience, who will be using it for realz. It's unlikely that you will want to continue to load your JavaScript code (and resources) as individual pieces as you do in development; instead, you'll want to concatenate and minimize your wads of goo to keep load time to a minimum for your customers.

And it seems like most AMD implementations provide a mechanism to do this.