<h1>reboot (2019-01-22)</h1>
<p>After 3.5 years at <a href="https://nodesource.com/">NodeSource</a>, I retired at the end of last week. I really
loved working there - great people, fun products, challenging problems to solve.
Alas, <a href="https://muellerware.org/resume/Patrick-Mueller-Resume.html">my work history</a> shows that I tend to not stick around an organization
more than 3-4 years, even with great projects. I kinda enjoy mixing things up
every few years, and as I get older ... there realistically won't be that many
more for me, before I <em>really</em> retire.</p>
<p>So, shakin' things up!</p>
<p>Taking a few months off, but here's my current rough list of TODOs:</p>
<ul>
<li>relax</li>
<li>hike, bike, maybe kayak if it gets warm enough</li>
<li>fiddle with guitar / uke / <a href="https://www.ableton.com/en/live/">Ableton</a> / <a href="https://cycling74.com/products/max-features">Max</a></li>
<li>level up on my web client skillz:
<a href="https://reactjs.org/">React</a>, <a href="https://webpack.js.org/">webpack</a>, <a href="https://developers.google.com/web/fundamentals/primers/service-workers/">service workers</a>, <a href="https://abookapart.com/products/progressive-web-apps">progressive web apps</a></li>
<li>polish up the ole résumé</li>
<li>re-read <a href="https://pragprog.com/book/algh/land-the-tech-job-you-love">Land the Tech Job You Love</a> by the awesome <a href="https://twitter.com/petdance">Andy Lester</a></li>
<li>play with some more FaaS / PaaS / notebook envs:
<a href="https://arc.codes/">architect</a>, <a href="https://glitch.com/">Glitch</a>, <a href="https://zeit.co/">Zeit NOW</a>, <a href="https://runkit.com/">RunKit</a>, <a href="https://observablehq.com">Observable</a></li>
<li>start looking for my next gig, or maybe I can start a business (but sounds like work)</li>
</ul>
<p>In order to kill more birds with fewer stones, there's a particular app I'd like
to build to display weather information - something I've been wanting to do
for a while. So I guess I can try building that with the client bits, deploying
that a bunch of different ways, and blog about it. A <a href="https://cycling74.com/tutorials/advanced-max-standalones-part-1">Max standalone app</a> for
it is a stretch goal :-)</p>
<h1>moar profiling! (2017-09-10)</h1>
<p>I'm a long-time fan of using <a href="https://en.wikipedia.org/wiki/Profiling_(computer_programming)">CPU profilers</a> to analyze programs.</p>
<p>CPU profilers are great because they provide insight into your programs that you
can't easily get any other way - how fast, or more likely how slow, the
functions in your code are. I've learned with profilers that there's no point in
<strong>guessing</strong> what code you think is fast or slow, because:</p>
<ul>
<li>you're likely <strong>wrong</strong>, or will find <strong>surprises</strong></li>
<li>it's so easy to profile some code, just do it!</li>
</ul>
<p>For Node.js, there are some useful profilers available:</p>
<ul>
<li><a href="https://medium.com/@paul_irish/debugging-node-js-nightlies-with-chrome-devtools-7c4a1b95ae27">Chrome DevTools</a></li>
<li><a href="https://nodesource.com/products/nsolid">NodeSource's N|Solid</a></li>
</ul>
<p>Of course I'm partial to N|Solid since I work on it, <wink/>.</p>
<p>And yet ... I've grown a little restless with our current crop of profiling
tools.</p>
<p>The de facto visualization for profiles is <a href="http://www.brendangregg.com/flamegraphs.html">flame graphs</a>. But I have issues:</p>
<ul>
<li><p>the interesting dimension to look at is the <strong>widths</strong> of the boxes,
but my eye is often drawn to the <strong>heights</strong></p>
</li>
<li><p>they don't provide much room for textual information, since many of the boxes
can be very thin</p>
</li>
<li><p>usually, all I care about when looking at a profile are the most expensive
functions, but visually sorting box widths across a two-dimensional graph of
boxes doesn't make a lot of sense</p>
</li>
</ul>
<p>The age-old profile visualization is the table/tree, where functions are
displayed as rows in a table, but can be expanded to show the callers or callees
of the individual functions. While a grand idea - merging tables and trees into
a single UI widget - it's also as <strong>tedious as you can imagine it might be</strong>.
Lots of clicking to expand/contract the tree bits, and then the whole thing gets
very noisy.</p>
<p>Both of these tools are sufficient, and do provide a lot of value, but ... <strong>I
feel like we can do better</strong>.</p>
<p>And so I've started playing with some <strong>new tooling</strong>:</p>
<ul>
<li><p><a href="https://github.com/moar-things/moar-profiler#readme"><code>moar-profiler</code></a> - a command-line tool to generate profiles
for Node.js processes, which adds even <strong>moar</strong> information to the profile
than what v8 provides (e.g., source, package info, process metadata). These
profiles are compatible with every v8 <code>.cpuprofile</code>-digesting tool; they
just extend the data provided in the "standard" profile (see the sketch after this list).</p>
</li>
<li><p><a href="https://moar-things.github.io/moar-profile-viewer/"><code>moar-profile-viewer</code></a> - a web app to view v8
<code>.cpuprofile</code> files generated from Node.js profiling tools, that provides
even <strong>moar</strong> goodies when displaying profiles generated from <code>moar-profiler</code></p>
</li>
</ul>
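<p>For reference, here's roughly the shape of the data involved: the standard v8
profile fields, plus a sketch of the kind of extra data <code>moar-profiler</code> layers on
top. The extension fields below are illustrative guesses, not the actual schema:</p>
<pre><code class="lang-js">// standard v8 .cpuprofile shape, as produced over the inspector protocol
var profile = {
  nodes: [                  // one entry per function observed while sampling
    { id: 1,
      callFrame: { functionName: '(root)', scriptId: '0', url: '', lineNumber: -1, columnNumber: -1 },
      hitCount: 0,
      children: [2] }
    // ... more nodes ...
  ],
  startTime: 0,             // microseconds
  endTime: 5000000,
  samples: [2, 2, 2],       // node ids, one per sample
  timeDeltas: [100, 100, 100],

  // illustrative guesses at the extra data moar-profiler adds - source text,
  // package info, process metadata - alongside the standard fields
  moar: {
    packages: { express: '4.13.3' },
    process: { title: 'node', pid: 1234 }
  }
}
</code></pre>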
<p>Here's what the profile viewer looks like with a profile loaded:</p>
<p><img border="1" width="100%" alt="moar profile viewer" src="https://github.com/moar-things/moar-profile-viewer/raw/master/docs/images/screen-cap.png"></p>
<p>The viewer is at Minimum Viable Product state. There are no graphs. There are
no call stacks. There are only lists of functions, and the ability to get
more information on those functions - callers and callees - and a view of the
source code of the function. <strong>Gotta start somewhere!</strong></p>
<p>Some of the fun stuff the profile viewer <em>does</em> have:</p>
<ul>
<li><p>displaying the package and version a function came from, along with a link to
the package info at npmjs.org</p>
</li>
<li><p>annotated source code</p>
</li>
<li><p>ability to toggle hiding "system" functions vs "user" functions</p>
</li>
</ul>
<p>Generating a profile with <code>moar-profiler</code> is pretty easy:</p>
<ul>
<li>run your program with the <code>--inspect</code> option (requires Node.js >= version 6):</li>
</ul>
<pre><code class="lang-text">node --inspect myProgram.js arg1 arg2 ...
</code></pre>
<ul>
<li>in a separate terminal, run <code>moar-profiler</code>, which will write the profile
to <code>stdout</code>:</li>
</ul>
<pre><code class="lang-text">moar-profiler --duration 5 9229 > myProgram.cpuprofile
</code></pre>
<p>In this case, a 5-second profile will be collected by connecting to the
Node.js process on inspect port 9229 (the default inspect port).</p>
<p>Now that you have a <code>.cpuprofile</code> file, head over to the <a href="https://moar-things.github.io/moar-profile-viewer/">moar profile viewer web
app</a>, and load the file (or drop it on the page).</p>
<p>If you would like to test drive the web app without generating a <code>.cpuprofile</code>
file first, you can download this file, which is a profile of <code>npm update</code>, and
then load it or drop it in the <a href="https://moar-things.github.io/moar-profile-viewer/">moar profile viewer web
app</a>:</p>
<ul>
<li><a download="npm-update.moar.cpuprofile" href="https://raw.githubusercontent.com/moar-things/moar-profile-viewer/master/test/fixtures/npm-update.moar.cpuprofile">
<tt>npm-update.moar.cpuprofile</tt>
</a></li>
</ul>
<p>If you don't feel comfortable sending a <code>.cpuprofile</code> file over the intertubes,
you can clone the <a href="https://github.com/moar-things/moar-profile-viewer">moar-profile-viewer repo</a> and open the <code>docs/index.html</code> page
on your own machine - it's a single page app in an <code>.html</code> file - <strong>no server
required</strong>.</p>
<p>There's a bunch of cleanup to do with the viewer, but I'd like for it to serve
as a basis for <strong>further investigation</strong>. The cool thing with both
<code>moar-profiler</code> and the moar profile viewer is that they work with existing v8
profile tools. If you really need a flame chart from a profile generated with
<code>moar-profiler</code>, just load it into your flavorite profile visualizer (eg, Chrome
DevTools). Likewise, the moar profile viewer can read profiles generated
with other Node.js profiling tools, but you won't have as nice of an experience
compared to profiles generated with <code>moar-profiler</code>.</p>
<p>I do want to provide the ability to see <strong>call stacks</strong>, somehow. Current
thought is Graphviz visualizations of the call stacks involving a selected
function. Those <em>might</em> be small enough to provide useful visualization.
Previous experiments rendering the entire profile call graph as a single
Graphviz graph were ... <strong>disastrous</strong>, as most extremely large and complex
Graphviz graphs <a href="https://www.dropbox.com/s/59whoizajkzfxxs/jsc-grammar-network.pdf?dl=0">tend to be</a>.</p>
<p>This is all open source, of course, so feel free to
<a href="https://github.com/moar-things">chime in and help out</a>. I'm obviously interested in bug reports,
but also new feature requests. <strong>Dream on!</strong></p>
<h1>watching your projects (2016-03-14)</h1>
<p>I'm a big fan of having an "automated build step" as part of the development
workflow for a project. Figured I'd share what I'm <em>currently</em> doing with
my Node.js projects.</p>
<p>What is an "automated build step"? For me, this means some "shell" code that
gets run when I save files while developing. It may not actually be "building"
anything - often I'm just running tests. But the idea is that as I save files
in my editor, things are run against the code that I just saved.</p>
<p>So there are basically two concepts here:</p>
<ul>
<li>file watchers - watching for updates to files</li>
<li>code runners - when files are updated, run some code</li>
</ul>
<p><strong>file watchers</strong></p>
<p>I have some history with file watchers - <a href="https://www.npmjs.com/package/wr">wr</a>, <a href="https://www.npmjs.com/package/jbuild">jbuild</a>, <a href="https://www.npmjs.com/package/cakex">cakex</a>. The
problem with file watching is that it's not easy. The file watching bits in
jbuild and cakex are also bundled with other build-y things, which is good and
bad - basically bad because you have to buy in to using jbuild and cakex
exclusively. wr is a little better in that it's <em>just</em> a file watcher/runner,
but I ended up finding a replacement for that with <a href="https://www.npmjs.com/package/nodemon">nodemon</a>. The advantage to
using nodemon is that it's already an accepted tool, and I don't have to support
it. :-) Thanks <a href="https://remysharp.com/">Remy</a>!</p>
<p><strong>code runners</strong></p>
<p>There are many options for code runners; <a href="https://www.gnu.org/software/make/manual/">make</a>, <a href="http://coffeescript.org/#cake">cake</a>, <a href="http://npmjs.org/package/grunt">grunt</a>, <a href="http://npmjs.org/package/gulp">gulp</a>,
<a href="https://docs.npmjs.com/cli/run-script">npm scripts</a>, etc. I'm currently using npm scripts, because they're the
easiest thing to do, for non-complex "build steps". </p>
<p>For example, here are the <code>scripts</code> defined in the <code>package.json</code> of the
<a href="https://github.com/nodesource/nsolid-statsd/blob/34f4cf558a07f787d17f625fe445e1bde5dbd0c9/package.json#L17-L22">nsolid-statsd package</a>.</p>
<pre><code class="lang-js"><span class="hljs-string">"scripts"</span>: {
<span class="hljs-string">"start"</span>: <span class="hljs-string">"node daemon"</span>,
<span class="hljs-string">"utest"</span>: <span class="hljs-string">"node test/index.js | tap-spec"</span>,
<span class="hljs-string">"test"</span>: <span class="hljs-string">"npm run utest && standard"</span>,
<span class="hljs-string">"watch"</span>: <span class="hljs-string">"nodemon --exec 'npm test' --ext js,json"</span>
}
</code></pre>
<p>Everything's kicked off by the <code>watch</code> script. It watches for file changes -
in this case, changes to <code>.js</code> and <code>.json</code> files, and runs <code>npm test</code> when
file changes occur. <code>npm test</code> is defined to run the <code>test</code> script, and for
this package, that means running <code>npm run utest</code> and then <code>standard</code>. <code>utest</code>
is the final leg of the scripts here, which runs my <code>tape</code> tests rooted at
<code>test/index.js</code>, piping them through <code>tap-spec</code> for colorization, etc.</p>
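<p>(For context, the <code>tape</code> tests in <code>test/index.js</code> look roughly like this - a
hypothetical minimal example, not the actual tests from the package:)</p>
<pre><code class="lang-js">// test/index.js - minimal tape test (hypothetical example)
const test = require('tape')

test('something works', (t) => {
  t.equal(1 + 1, 2, 'math still works')
  t.end()
})
</code></pre>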
<p>So the basic flow will be this, when running the <code>watch</code> script:</p>
<ul>
<li>run <code>node test/index.js</code>, piping through <code>tap-spec</code></li>
<li>if that passes, run <code>standard</code></li>
<li>wait for a <code>.js</code> or <code>.json</code> file to change</li>
<li>when one of those files changes, start over from the beginning</li>
</ul>
<p>Just run the <code>watch</code> script in a dedicated terminal window, perhaps beside
your editor, and start editing and saving files.</p>
<p>Note that <code>nodemon</code>, <code>standard</code> and <code>tap-spec</code> are available in the local
<code>node_modules</code> directory, because they are <code>devDependencies</code> in the package
itself (<code>tape</code> is used when running <code>test/index.js</code>):</p>
<pre><code class="lang-js"><span class="hljs-string">"devDependencies"</span>: {
<span class="hljs-string">"nodemon"</span>: <span class="hljs-string">"~1.8.1"</span>,
<span class="hljs-string">"tape"</span>: <span class="hljs-string">"~4.2.0"</span>,
<span class="hljs-string">"tap-spec"</span>: <span class="hljs-string">"~4.1.1"</span>,
<span class="hljs-string">"standard"</span>: <span class="hljs-string">"~5.4.1"</span>
}
</code></pre>
<p>As such, I can use their 'binaries' directly in the npm scripts, since <code>npm run</code> puts <code>node_modules/.bin</code> on the <code>PATH</code> for the scripts it runs.</p>
<p><strong>command line usage</strong></p>
<p>You would then kick this all off by running <code>npm run watch</code>. But since it turns
out I've been adding a <code>watch</code> script to projects for a long time, all of which
do basically the same thing (maybe with different tools), I instead wrote a
bash script a while back called <code>watch</code>:</p>
<pre><code class="lang-plain">#!/bin/sh
npm run watch
</code></pre>
<p>So now I just always run <code>watch</code> in the project directory to start my
automated build step.</p>
<p>Here's what the complete workflow looks like:</p>
<ul>
<li>open a terminal in the project directory</li>
<li>open my editor (Atom)</li>
<li>run <code>watch</code> at the command-line</li>
<li>see the tests run successfully</li>
<li>make a change by adding a blank line that <code>standard</code> will provide a diagnostic
for</li>
<li>save file</li>
<li>see tests run, see message from standard</li>
<li>remove blank line, save file</li>
<li>see tests run, no message from standard</li>
</ul>
<p><img width="100%" src="https://4.bp.blogspot.com/-8TRSSzNbcc4/Vua5CsnIXeI/AAAAAAAABjo/2D8GdlQL2gI2UsLTY5Q8Gkr8jYU46turA/s800/watching-your-projects.gif"></p>
<p><a href="https://4.bp.blogspot.com/-8TRSSzNbcc4/Vua5CsnIXeI/AAAAAAAABjo/2D8GdlQL2gI2UsLTY5Q8Gkr8jYU46turA/s1600/watching-your-projects.gif"><em>(open image in a new window)</em></a></p>
<p>It's basically an IDE using the command-line and your flavorite editor,
and I love it.</p>
<p>As projects get more complex, I'll start using some additional tooling like
make, cake, etc, or just invoke stand-alone bash or node scripts available
in a <code>tools</code> directory. Those would just get added as new npm scripts, and
injected into the script flow with nodemon. For instance, you might do
something like this:</p>
<pre><code class="lang-js"><span class="hljs-string">"scripts"</span>: {
<span class="hljs-string">"start"</span>: <span class="hljs-string">"node daemon"</span>,
<span class="hljs-string">"utest"</span>: <span class="hljs-string">"node test/index.js | tap-spec"</span>,
<span class="hljs-string">"test"</span>: <span class="hljs-string">"npm run utest && standard"</span>,
<span class="hljs-string">"build"</span>: <span class="hljs-string">"tools/build.sh"</span>,
<span class="hljs-string">"build-test"</span>: <span class="hljs-string">"npm run build && npm test"</span>,
<span class="hljs-string">"watch"</span>: <span class="hljs-string">"nodemon --exec 'npm run build-test' --ext js,json"</span>
}
</code></pre>
<p>Now when I <code>watch</code>, <code>tools/build.sh</code> will be run before the tests. You can
also run each of these steps individually, via <code>npm run build</code>, <code>npm test</code>,
<code>npm run utest</code>, etc.</p>
<h1>getting started with N|Solid at the command line (2015-09-28)</h1>
<style>
.nsolid { font-family: Source Sans Pro, sans-serif; }
code { font-weight: bold; }
pre code { font-weight: normal; }
</style>
<p>Last week, <a href="https://nodesource.com/">NodeSource</a>
(where I work) announced a new product,
<a href="https://nodesource.com/blog/nsolid-enterprise-node-finally">
<span class='nsolid'>N|Solid</span>
</a>.
<span class='nsolid'>N|Solid</span>
is a platform built on Node.js that provides a number of enhancements
to improve troubleshooting, debugging, managing, monitoring and securing your
Node.js applications.</p>
<p><span class='nsolid'>N|Solid</span> provides a
<a href="https://docs.nodesource.com/nsolid/1.0/docs/cpu-profiling">gorgeous
web-based console</a>
to monitor and introspect your applications,
but it also lets you do that same introspection
at ye olde command line.</p>
<p>Let's explore that command line thing!</p>
<h2>installing <span class='nsolid'>N|Solid Runtime</span></h2>
<p>In order to introspect your Node.js applications, you'll run them
with the <span class='nsolid'>N|Solid Runtime</span>,
which is shaped similarly to a typical
Node.js runtime, but provides some additional executables.</p>
<p>To install <span class='nsolid'>N|Solid Runtime</span>, download
and unpack an <span class='nsolid'>N|Solid Runtime</span> tarball
(<code>.tar.gz</code> file) from the
<a href="https://downloads.nodesource.com/"><span class='nsolid'>N|Solid</span> download site</a>.
For the purposes of this blog post, you'll only need to download
<span class='nsolid'>N|Solid Runtime</span>; the additional downloads
<span class='nsolid'>N|Solid Hub</span> and
<span class='nsolid'>N|Solid Console</span> are not required.</p>
<p>On a Mac, you can alternatively download the native installer <code>.pkg</code> file.
If using the native installer, download the <code>.pkg</code> file, and then
double-click the downloaded file in Finder to start the installation.
It will walk you through the process of installing
<span class='nsolid'>N|Solid Runtime</span> in the usual Node.js installation location,
<code>/usr/local/bin</code>.</p>
<p>If you just want to take a peek at <span class='nsolid'>N|Solid</span>,
the easiest thing is to
download a tarball and unpack it.
On my Mac, I downloaded the
<span class='nsolid'>"Mac OS .tar.gz"</span> for
<span class='nsolid'>"N|Solid Runtime"</span>, and
then double-clicked on the <code>.tar.gz</code> file in Finder to unpack it.
This created the directory <code>nsolid-v1.0.1-darwin-x64</code>. Rename
that directory to <code>nsolid</code>, start a terminal session, <code>cd</code> into that directory,
and prepend its <code>bin</code> subdirectory to the <code>PATH</code> environment variable:</p>
<pre>
$ <b>cd Downloads/nsolid</b>
$ <b>PATH=./bin:$PATH</b>
$ <b>nsolid -v</b>
v4.1.1
$
</pre>
<p>In the snippet above, I also ran <code>nsolid -v</code> to print the version of Node.js
that the <span class='nsolid'>N|Solid Runtime</span> is built on.</p>
<p>This will make the following executables
available on the PATH, for this shell session:</p>
<ul>
<li><code>nsolid</code> is the binary executable version of Node.js that
<span class='nsolid'>N|Solid</span> ships</li>
<li><code>node</code> is a symlink to <code>nsolid</code></li>
<li><code>npm</code> is a symlink into <code>lib/node_modules/npm/bin/npm-cli.js</code>, as it is with
typical Node.js installs</li>
<li><code>nsolid-cli</code> is a command-line interface to the
<span class='nsolid'>N|Solid Agent</span>, explained later in this
blog post</li>
</ul>
<p>Let's write a <code>hello.js</code> program and run it:</p>
<pre>
$ <b>echo 'console.log("Hello, World!")' > hello.js</b>
$ <b>nsolid hello</b>
Hello, World!
$
</pre>
<p>Success!</p>
<h2 id="the-extra-goodies">the extra goodies</h2>
<p><span class='nsolid'>N|Solid Runtime</span>
version 1.0.1 provides the same Node.js runtime as
<a href="https://nodejs.org/en/blog/release/v4.1.1/">Node.js 4.1.1</a>,
with some extra goodies. Anything that can run in
Node.js 4.1.1, can run in <span class='nsolid'>N|Solid</span> 1.0.1.
NodeSource will release new versions of
<span class='nsolid'>N|Solid</span> as new releases of Node.js
become available.</p>
<p>So what makes <span class='nsolid'>N|Solid</span> different from regular Node.js? </p>
<p>If you run <code>nsolid --help</code>, you'll see a listing of additional options and
environment variables at the end:</p>
<pre>
$ <b>nsolid --help</b>
...
{usual Node.js help here}
...
N|Solid Options:
--policies file provide an NSolid application policies file
N|Solid Environment variables:
NSOLID_HUB Provide the location of the NSolid Hub
NSOLID_SOCKET Provide a specific socket for the NSolid Agent listener
NSOLID_APPNAME Set a name for this application in the NSolid Console
$
</pre>
<p><span class='nsolid'>N|Solid</span> policies allow you to harden your
application in various ways. For
example, you can have all native memory allocations zero-filled by
<span class='nsolid'>N|Solid</span>,
by using the <code>zeroFillAllocations</code> policy. By default, Node.js does not zero-fill
memory it allocates from the operating system, for performance reasons.</p>
<p>For more information on policies, see the
<a href="https://docs.nodesource.com/nsolid/1.0/docs/policies">
<span class='nsolid'>N|Solid</span> Policies documentation</a>.</p>
<p>Besides policies, the other extra goody that <span class='nsolid'>N|Solid</span>
provides is an agent that
you can enable to allow introspection of your Node.js processes. To enable the
<span class='nsolid'>N|Solid Agent</span>, you'll use the environment variables
listed in the help text above.</p>
<p>For the purposes of the rest of this blog post, we'll just focus on interacting
with a single Node.js process, and will just use the <code>NSOLID_SOCKET</code> environment
variable. The <code>NSOLID_HUB</code> and <code>NSOLID_APPNAME</code> environment variables are used
when interacting with multiple Node.js processes, via the
<span class='nsolid'>N|Solid</span> Hub.</p>
<p>The <span class='nsolid'>N|Solid Agent</span> <strong>is enabled</strong> if the
<code>NSOLID_SOCKET</code> environment variable is set,
and <strong>is not enabled</strong> if the environment variable is not set.</p>
<p>Let's start a Node.js <a href="https://nodejs.org/api/repl.html">REPL</a>
with the <span class='nsolid'>N|Solid Agent</span> enabled:</p>
<pre>
$ <b>NSOLID_SOCKET=5000 nsolid</b>
> 1+1 // just show that things are working
2
>
</pre>
<p>This command starts up the typical Node.js REPL, with the
<span class='nsolid'>N|Solid Agent</span> listening
on port 5000. When the <span class='nsolid'>N|Solid Agent</span> is enabled,
you can interact with it using
<span class='nsolid'>N|Solid</span> Command Line Interface (CLI), implemented as
the <code>nsolid-cli</code> executable.</p>
<h2 id="running-nsolid-cli-commands">running <code>nsolid-cli</code> commands</h2>
<p>Let's start with a <code>ping</code> command.
Leave the REPL running, start a new terminal window,
<code>cd</code> into your <code>nsolid</code> directory again, and set the <code>PATH</code> environment variable:</p>
<pre>
$ <b>cd Downloads/nsolid</b>
$ <b>PATH=./bin:$PATH</b>
$
</pre>
<p>Now let's send the <code>ping</code> command to the
<span class='nsolid'>N|Solid Agent</span> running in the REPL:</p>
<pre>
$ <b>nsolid-cli --socket 5000 ping</b>
"PONG"
$
</pre>
<p>In this case, we passed the <code>--socket</code> option on the command line,
which indicates the <span class='nsolid'>N|Solid Agent</span> port to connect to. And we
told it to run the <code>ping</code> command. The response was the string <code>"PONG"</code>.</p>
<p>The <code>ping</code> command just validates that the <span class='nsolid'>N|Solid Agent</span>
is actually running.</p>
<p>Let's try the <code>system_stats</code> command, with the REPL still running in the other window:</p>
<pre>
$ <b>nsolid-cli --socket 5000 system_stats</b>
{"freemem":2135748608,"uptime":2414371,"load_1m":1.17431640625,"load_5m":1.345703125,"load_15m":1.3447265625,"cpu_speed":2500}
$
</pre>
<p>The <code>system_stats</code> command provides some system-level statistics, such as amount
of free memory (in bytes), system uptime, and load averages.</p>
<p>The output is a single line of JSON. To make the output more readable,
you can pipe the output through the
<code>json</code> command, available at <a href="https://www.npmjs.com/package/json">npm</a>:</p>
<pre>
$ <b>nsolid-cli --socket 5000 system_stats | json</b>
{
"freemem": 1970876416,
"uptime": 2414810,
"load_1m": 1.34765625,
"load_5m": 1.26611328125,
"load_15m": 1.29052734375,
"cpu_speed": 2500
}
$
</pre>
<p>Another <code>nsolid-cli</code> command is <code>process_stats</code>,
which provides some process-level statistics:</p>
<pre>
$ <b>nsolid-cli --socket 5000 process_stats | json</b>
{
"uptime": 2225.215,
"rss": 25767936,
"heapTotal": 9296640,
"heapUsed": 6144552,
"active_requests": 0,
"active_handles": 4,
"user": "pmuellr",
"title": "nsolid",
"cpu": 0
}
$
</pre>
<p>The full list of commands you can use with <code>nsolid-cli</code> is
available at the doc page
<a href="https://docs.nodesource.com/nsolid/1.0/docs/using-the-cli">
<span class='nsolid'>N|Solid Command Line Interface (CLI)</span>
</a>.</p>
<h2 id="generating-a-cpu-profile">generating a CPU profile</h2>
<p>Let's try one more thing - generating a CPU profile. Here's a
link to a sample program to run, that will keep your CPU busy:
<a href="https://gist.github.com/pmuellr/b55948a7eb3d29b3ef6a"><code>busy-web.js</code></a></p>
<p>This program is an HTTP server that issues an HTTP request to itself, every
10 milliseconds. It makes use of some of the
<a href="https://nodejs.org/en/docs/es6/">new ES6 features available in Node.js 4.0</a>,
like
<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/template_strings">template strings</a>
and
<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions">arrow functions</a>.
Since the <span class='nsolid'>N|Solid Runtime</span> is using the
latest version of Node.js, you can make use of those features with
<span class='nsolid'>N|Solid</span> as well.</p>
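<p>If you'd like a feel for the program without clicking through, here's a rough
sketch of the same idea - an HTTP server that requests itself on a timer, using
template strings and arrow functions. The linked gist is the real
<code>busy-web.js</code> and differs in the details:</p>
<pre><code class="lang-js">'use strict'
// sketch of a self-pinging HTTP server (illustrative only)
const http = require('http')

let sent = 0
let recv = 0

const server = http.createServer((req, res) => {
  recv = recv + 1
  res.end('hello\n')
})

server.listen(0, () => {
  const port = server.address().port
  console.log(`server listening at http://localhost:${port}`)

  // send ourselves a request every 10 milliseconds
  setInterval(() => {
    sent = sent + 1
    http.get(`http://localhost:${port}/`, (res) => res.resume())
  }, 10)

  // report counts every second, like the output shown below
  setInterval(() => {
    console.log(`send: ${sent} requests`)
    console.log(`recv: ${recv} requests`)
  }, 1000)
})
</code></pre>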
<p>Let's run it with the agent enabled:</p>
<pre>
$ <b>NSOLID_SOCKET=5000 nsolid busy-web</b>
server listing at http://localhost:53011
send: 100 requests
recv: 100 requests
...
</pre>
<p>In another terminal window, run the <code>profile_start</code> command,
wait a few seconds and run the <code>profile_stop</code> command, redirecting
the output to the file <code>busy-web.cpuprofile</code>:</p>
<pre>
$ <b>nsolid-cli --socket 5000 profile_start</b>
{"started":1443108818350,"collecting":true}
... wait a few seconds ...
$ <b>nsolid-cli --socket 5000 profile_stop > busy-web.cpuprofile</b>
</pre>
<p>The file <code>busy-web.cpuprofile</code> can then be loaded into Chrome Dev Tools
for analysis:</p>
<ul>
<li>in Chrome, select the menu item View / Developer / Developer Tools</li>
<li>in the Developer Tools window, select the Profiles tab</li>
<li>click the "Load" button</li>
<li>select the <code>busy-web.cpuprofile</code> file</li>
<li>in the CPU PROFILES list on the left, select "busy-web"</li>
</ul>
<p>For more information on using Chrome Dev Tools to analyze a CPU
profile, see Google's
<a href="https://developers.google.com/web/tools/profile-performance/rendering-tools/js-execution">Speed Up JavaScript Execution</a>
page.</p>
<p>Note that we didn't have to instrument our program with any special profiling
packages - access to the V8 CPU profiler is baked right into
<span class='nsolid'>N|Solid</span>!
<strong>About time someone did that, eh?</strong></p>
<p>You can easily write a script to automate the creation of a CPU profile, where
you add a <code>sleep</code> command to wait some number of seconds between
the <code>profile_start</code> and <code>profile_stop</code> commands.</p>
<pre><code class="lang-text">#!/bin/sh
echo "starting CPU profile"
nsolid-cli --socket 5000 profile_start
echo "waiting 5 seconds"
sleep 5
echo "writing profile to busy-web.cpuprofile"
nsolid-cli --socket 5000 profile_stop > busy-web.cpuprofile
</code></pre>
<p>Or instead of sleeping, if your app is an HTTP server, you can drive some
traffic to it with
<a href="http://httpd.apache.org/docs/2.2/programs/ab.html">Apache Bench</a> (<code>ab</code>),
by running something like this instead of the <code>sleep</code> command:</p>
<pre><code class="lang-text">ab -n 1000 -c 100 http://localhost:3000/
</code></pre>
<h2 id="generating-heap-snapshots">generating heap snapshots</h2>
<p>You can use the same technique to capture heap snapshots, using the
<code>snapshot</code> command. The <code>snapshot</code> command produces output which
should be redirected to a file with a <code>.heapsnapshot</code> extension:</p>
<pre>
$ <b>nsolid-cli --socket 5000 snapshot > busy-web.heapsnapshot</b>
</pre>
<p>You can then load those files in
Chrome Dev Tools for analysis, the same way the CPU profiles are loaded.</p>
<p>For more information on using Chrome Dev Tools to analyze a heap
snapshot, see Google's
<a href="https://developers.google.com/web/tools/profile-performance/memory-problems/heap-snapshots">How to Record Heap Snapshots</a>
page.</p>
<h2 id="more-info">more info</h2>
<p>The full list of commands you can use with <code>nsolid-cli</code> is
available at the doc page
<a href="https://docs.nodesource.com/nsolid/1.0/docs/using-the-cli">
<span class='nsolid'>N|Solid Command Line Interface (CLI)</span>
</a>.</p>
<p>All of the documentation for <span class='nsolid'>N|Solid</span>
is available at the doc site
<a href="https://docs.nodesource.com/nsolid/1.0/docs/">
<span class='nsolid'>N|Solid Documentation</span></a>.</p>
<p>If you have any questions about N|Solid, feel free to post them at
<a href="http://stackoverflow.com/questions/ask">Stack Overflow</a>, and add
the tag <code>nsolid</code>.</p>
<h1>resuscitating a 2006 MacBook (2015-05-13)</h1>
<p>One bad assumption I made about leaving IBM is that I'd be able to get a new MacBook Pro quickly. Nope. 2-3 weeks. NO LAPTOP! HOW AM I SUPPOSED TO TAKE TIME OFF AND RELAX?
<p>Poking through my shelves of computer junk, I spied my two old MacBooks. I forgot I had two of them.
<p>The first was a G4 (PPC) iBook. That was the laptop I bought in 2003 to kick the tires on Apple hardware and OS X 10.2. I was hooked.
<p>The second was an Intel Core Duo (x86) MacBook: two 2GHz x86 cores, 2GB RAM, 120GB drive. I bought that in 2006, and remember it being a productive machine. Eventually replaced that with a line of MacBook Pros IBM provided me, up until this week.
<p>Hmmm, is that MacBook still usable? It powered up fine. Poking around, it seemed the hardware / OS constraints meant the best I could do was get this box from OS X 10.5 to 10.6. Also - this being a 32-bit machine - some apps wouldn't run. E.g., Chrome.
<p>Luckily I still had an OS X 10.6 DVD lying around in my piles of old stuff. Upgraded easily. So now I can run PathFinder, Sublime, iTerm, Firefox, and Skype. Can't run Chrome, Atom, or Twitter. Or io.js. Again, HOW AM I SUPPOSED TO TAKE TIME OFF AND RELAX?
<p>The <a href="https://github.com/iojs/io.js#build">build requirements for io.js</a> looked like something that I might be able to meet. Rolled up my sleeves.
<p>First I wanted to update git. Current version was at 1.6. It was already complaining about <tt>git clone</tt> operations on occasion. So, first installed <a href="http://brew.sh/">homebrew</a>, then installed git 2.4.
<p>That was easy. Now to install gcc/g++ - they were not already on the machine, and the latest stable at brew is 4.9.2. After a long time, it installed that fine. But it doesn't override the built-in gcc/g++. Instead, it provides <tt>gcc-4.9</tt> and similar named tools for that version of the gcc toolchain.
<p>To get the iojs Makefile to use these instead of the built-in gcc tools, I set env vars in my <tt>.bash_profile</tt>:
<pre>
export CC=gcc-4.9
export CXX=g++-4.9
export AR=gcc-ar-4.9
export NM=gcc-nm-4.9
export RANLIB=gcc-ranlib-4.9
</pre>
<p>Ready to build iojs! Ran <tt>./configure</tt> - it completed successfully but was a little complain-y about some unrelated looking things. Ran <tt>make</tt>. And then fell asleep. Woke up, it had completed successfully, so ran a little test and ... WORKED! Finally ran <tt>sudo make install</tt> and NOW IT IS EVERYWHERE.
<p>Got some test failures, but they may be environmental. Tried some typical workflow stuff and things seem fine.
<p>Not a dream machine, by any means. Slow. Constantly watching memory usage and quitting apps as appropriate. And the fan is quite noisy. And the display seems like it doesn't have long to live (display adapter problems are my lot in life as a MacBook owner).
<p>But kinda fun.Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-17951429176230136932015-05-13T10:56:00.000-04:002015-05-13T10:56:20.173-04:00leaving IBM, taking a break, going somewhere else<p>This last Monday - May 11, 2015 - was my last day as an IBMer.
<p>I had a great run at IBM. Worked on a lot of great, diverse projects with great people.
<p>But I'm ready for a change. I've been planning on retiring this year, for a while now, but it kinda snuck up on me. Mid-life crisis? Maybe. Anyway, it's time!
<p>First, a breather.
<p>Then, looking forward to starting my next adventure(s) in software development.
<p>I'm talking to some potential employers right now, and would like to talk to more. Contact info is in my resume, linked to below.
<p>In terms of what I'm looking for, I'd like to continue working in the node.js environment. Still having a lot of fun there. My favorite subject matter is working on tools for developers. But I'm handy - and interested - in lots of stuff.
<p>I live in the Raleigh/NC area, and can't relocate for a few years. I'm quite comfortable working remote, or local.
<p>My resume is available here:
<blockquote>
<a href="http://muellerware.org/resume/Patrick-Mueller-Resume.html">http://muellerware.org/resume/Patrick-Mueller-Resume.html</a>
</blockquote>
<h1>wiki: something old, something new (2015-03-26)</h1>
<p>Back in 1995, <a href="http://en.wikipedia.org/wiki/Ward_Cunningham">Ward Cunningham</a> - a fellow <a href="http://en.wikipedia.org/wiki/Purdue_Boilermakers">BoilerMaker</a> -
created a program called <a href="http://c2.com/cgi/wiki?WikiHistory">WikiWikiWeb</a>, which spawned a new species of web
content we now know as the "wiki".</p>
<p>My colleague <a href="http://c2.com/cgi/wiki?RickDeNatale">Rick DeNatale</a> showed it to me, and I was hooked. So hooked,
I wrote a <a href="http://c2.com/cgi/wiki?PatsWiki">clone in REXX for OS/2</a>, which I've sadly been unable to find. It's
funny to see the names of some other friends on the list of <a href="http://c2.com/cgi/wiki?VisitorsInNinetyFive">initial visitors</a>.</p>
<p>Since then, I've always had some manner of wiki in my life. We still
use wikis to document project information at IBM, I use Wikipedia all the time for
... whatever ..., and occasionally use the free wiki service that GitHub provides
for projects there. I think some credit also goes to Ward for helping push the
"simplified HTML markup" story (e.g., Markdown), to make editing and creating web
content more approachable to humans.</p>
<p>Ward started a new project a few years back, <a href="http://fed.wiki.org/">Smallest Federated Wiki</a>, to
start exploring some interesting aspects of the wiki space - federating wikis, providing
plugin support, multi-page views, rich change history. It's fascinating.</p>
<p>I've had in the back of my mind to get Federated Wiki to run on Bluemix for a
while, and it seemed like an appropriate time to make something publishable. So, I created
the <a href="https://github.com/IBM-Bluemix/cf-fed-wiki">cf-fed-wiki</a> project, which will let you easily deploy a version of
Federated Wiki on Cloud Foundry (and thus Bluemix), and is also set up to use
with the new easy-to-deploy Bluemix Button (below).</p>
<p><a target="_blank" href="https://bluemix.net/deploy?repository=https://github.com/IBM-Bluemix/cf-fed-wiki.git">
<img src="http://bluemix.net/deploy/button.png" alt="Deploy to Bluemix">
</a>
<!-- __ those two underscores are needed to fix atom hilighting - grumble --></p>
<p>There are still some rough spots, but it seems workable, or at least something
to play with. </p>
<p>The best way to learn about Federated Wiki is to watch <a href="http://video.fed.wiki.org/view/welcome-visitors/view/federated-wiki-videos">Ward's videos</a>.</p>
<p>Enjoy, and thanks for the wiki, Ward!</p>
<!-- ======================================================================= -->
<h1>having your cake and cakex'ing it too (2015-02-13)</h1>
<p>As a software developer, I care deeply about my build tools. I strive to
provide easy-to-use and understand build scripts with my projects, so that other
folks can rebuild a project I'm working on, as simply as possible. Often one
of those other folks is me, coming back to a project after being away from it
for months, or years.</p>
<p>I cut my teeth on <a href="http://www.gnu.org/software/make/manual/"><code>make</code></a>. So awesome, for its time. Even today, all kinds
of wonderful things about it. But there are problems. Biggest is ensuring
<code>Makefile</code> compatibility across all the versions of <code>make</code> you might encounter.
I've used <code>make</code> on OS/2, Windows, AIX, Linux, Mac, and other Unix-y platforms.
The old version of Windows <code>nmake</code> was very squirrelly to deal with, and if
you had to build a <code>Makefile</code> that could run on Windows AND anywhere else,
well best of luck, mate! <code>make</code> is also a bit perl-ish with all the crazy
syntax-y things you can do. It's driven me to near-insanity at times.</p>
<p>I was also a big user of <a href="http://ant.apache.org/"><code>Ant</code></a>. I don't do Java anymore, and would prefer
to not do XML either, but I do still
<a href="https://github.com/apache/cordova-weinre/blob/master/weinre.build/build.xml">build weinre with Ant</a>.</p>
<p>Since I'm primarily doing node.js stuff these days, it makes a lot of sense
to use a build tool implemented on node.js; I've already got the runtime
for that installed, guaranteed. In the past, I've used <a href="http://jakejs.com/"><code>jake</code></a> with the
Apache Cordova project, and have flirted with the newest build darlings of the
node.js world, <a href="http://gruntjs.com/"><code>grunt</code></a> and <a href="http://gulpjs.com/"><code>gulp</code></a>.</p>
<p>I tend towards simpler tools, and so am happier at the <code>jake</code> level of simple-ness
compared to the whole new way of thinking required to use <code>grunt</code> or <code>gulp</code>, which
also have their own sub-ecosystem of plugins to gaze upon.</p>
<p>Of course, there are never enough build tools out there, so I've also built some
myself: <a href="https://www.npmjs.com/package/jbuild"><code>jbuild</code></a>. I've been using <code>jbuild</code> for projects since I built it, and
have been quite happy with it. But it's my own tool, and I don't really want to
<strong>own</strong> a tool like this. The interesting thing about <code>jbuild</code> was the
additional base-level functionality it provided to the actual code you'd
write in your build scripts, and not the way tasks were defined and what not.</p>
<p>As a little experiment, I've pulled that functionality out of <code>jbuild</code> and
packaged up as something you can use easily with <a href="http://coffeescript.org/#cake"><code>cake</code></a>. <code>cake</code> is one of
the simplest tools out there, in terms of what it provides, and lets me
write in CoffeeScript, which is closer to the "shell scripting" experience
with make (which is awesome) compared to most other build tools.</p>
<p>Those extensions are in the <a href="https://www.npmjs.com/package/cakex"><code>cakex</code> package</a> available via <code>npm</code>.</p>
<p><strong>why <code>cakex</code></strong></p>
<ul>
<li><p><a href="http://documentup.com/arturadib/shelljs"><code>shelljs</code></a> functions built in as globals. So I can do things like</p>
<pre><code>mkdir "-p", "tmp"
rm "-rf", "tmp"
</code></pre>
<p>(that's CoffeeScript) right in my <code>Cakefile</code>. Compare to how you'd do that
in Ant. heh.</p></li>
<li><p>scripts in <code>node_modules/.bin</code> added as global functions that invoke those
scripts. Hat tip, <a href="https://docs.npmjs.com/cli/run-script"><code>npm run</code></a>. So I can do things like</p>
<pre><code>opts = """
--outfile tmp/#{oBase}
--standalone ragents
--entry lib/ragents.js
--debug
"""
opts = opts.trim().split(/\s+/).join(" ")
log "running browserify ..."
browserify opts
</code></pre></li>
<li><p>functions acting as watchers, and server recyclers. I always fashion build
scripts so that they have a <code>watch</code> task, which does a build, runs tests,
restarts the server that's implemented in the project, etc. So that when
I save a file in my text editor, the build/test/server restart happens
all the time. These are hard little things to get right; I know, I've been
trying for years to get them right. Here's an example usage:</p>
<pre><code>taskWatch = ->
watchIter() # run watchIter when starting
watch
files: sourceFiles # watch sourceFiles for changes
run: watchIter # run watchIter on changes
watchIter = ->
taskBuild() # run the build
taskServe() # run the server
taskServe = ->
log "restarting server at #{new Date()}"
# starts / restarts the server, whichever is needed
daemon.start "test server", "node", ["lib/server"]
</code></pre></li>
</ul>
<p>The <code>cakex</code> npm page - <a href="https://www.npmjs.com/package/cakex">https://www.npmjs.com/package/cakex</a> - includes a
complete script that is the kind of thing I have in all my projects, so you
can take in the complete experience. I love it.</p>
<p>Coupla last things:</p>
<ul>
<li><p>yup, globals freaking everywhere. It's awesome.</p></li>
<li><p>I assume this would be useful with other node.js-based build tools, but
wouldn't surprise me if the "globals everywhere" strategy causes problems
with other tools.</p></li>
<li><p>I'm using <a href="https://www.npmjs.com/package/gaze"><code>gaze</code></a> to watch files, but it appears to have a
<a href="https://github.com/shama/gaze/issues/174">bug</a> where single file patterns end up matching too many
things; hence having to do extra checks when you're watching a single
file.</p></li>
<li><p>I've wrestled with the demon that are the <code>daemon</code> functions in <code>cakex</code>
for a long time. Never completely happy with any story there, but it's
usually possible to add enough hacks to keep things limping. Wouldn't
be surprised if I have to re-architect the innards there, again, but hopefully
the API can remain the same.</p></li>
<li><p>Please also check the section in the README titled
"<a href="https://www.npmjs.com/package/cakex#integration-with-npm-start">integration with <code>npm start</code></a>", for what I believe to be a best
practice of including all your build tools as dependencies in your package,
instead of relying on globally installed tools. For node.js build tools
anyway.</p></li>
</ul>
<!-- ======================================================================= -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@pmuellr" />
<meta name="twitter:title" content="having your cake and cakex'ing it too" />
<meta name="twitter:description" content="blog post from Patrick Mueller" />
<meta name="twitter:image" content="http://muellerware.org/images/logo-32x32.gif" />
<h1>ragents against the machine (2015-01-01)</h1>
<p>A few years back, I developed a tool called
<b>we</b>b <b>in</b>spector <b>re</b>mote - aka <strong><a href="http://people.apache.org/~pmuellr/weinre-docs/latest/">weinre</a></strong>.
It's an interesting hack where I took the WebKit Web Inspector UI
code, ran it in a plain old (WebKit-based) browser, and
hooked things up so it could debug
a web browser session running somewhere else. I was specifically targeting
mobile web browsers and applications using web browser views, but there wasn't
much (any?) mobile-specific code in weinre.</p>
<p><em>(btw, I currently run a publicly accessible <a href="http://weinre.mybluemix.net/">weinre server at Bluemix</a>)</em></p>
<p>One of the interesting aspects of weinre is the way
the browser running the debugger user interface (the client)
communicates with
the browser being debugged (the target).
The WebKit Web Inspector code was originally designed so that the
client would connect to the target in a traditional client/server pattern.
But on mobile devices, or within a web browser
environment, it's very difficult to run any kind of a traditional "server" -
tcp, http, or otherwise.</p>
<p>So the trick to make this work with weinre, is to have both the client and
target connect as HTTP clients to an HTTP server that shuttles messages
between the two. That HTTP server <em>is</em> the weinre server.
It's basically just a message switchboard between debug clients and targets.</p>
<p><a href="https://www.flickr.com/photos/usnationalarchives/3660047829" title="Photograph of Women Working at a Bell System Telephone Switchboard by The U.S. National Archives, on Flickr">
<img src="https://farm4.staticflickr.com/3613/3660047829_7e26b20599_z.jpg" width="640" height="524" alt="Photograph of Women Working at a Bell System Telephone Switchboard">
</a></p>
<p>This switchboard server pattern is neat, because it allows two programs
to interact with each other, where neither has to be an actual "server" of any
kind (tcp, http, etc).</p>
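<p>To make the relay idea concrete, here's a minimal sketch of a switchboard
using WebSockets. Note this is just an illustration - weinre itself uses plain
HTTP for its transport - and it assumes the <code>ws</code> package:</p>
<pre><code class="lang-js">'use strict'
// minimal switchboard sketch: every message from a connected agent is
// relayed to all of the other connected agents (assumes the `ws` package)
const WebSocket = require('ws')

const server = new WebSocket.Server({ port: 9000 })
const agents = new Set()

server.on('connection', (agent) => {
  agents.add(agent)

  agent.on('message', (message) => {
    // shuttle the message to everyone else on the switchboard
    for (const other of agents) {
      if (other === agent) continue
      if (other.readyState === WebSocket.OPEN) other.send(message)
    }
  })

  agent.on('close', () => agents.delete(agent))
})
</code></pre>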
<p>And this turns out to be a useful property in other environments as well.
For instance, dealing with server farms running
"in the cloud". Imagine you're running some kind of web application server,
and decide to run 5 instances of your server to handle the load. And now you
want to "debug" the server. How do you connect to it? Presumably the 5 instances
are running behind one ip address - which server instance are you going to
connect to? How could you connect to all of them at once?</p>
<p>Instead of using the typical client/server pattern for debugging in this environment,
you can use the switchboard server pattern, and have each web application server
instance connect to the switchboard server itself, and then have a single debug
client which can communicate with all of the web application server
instances.</p>
<p>I've wanted to extract the switchboard server pattern code out of weinre for
a while now, and ... now's the time. I anticipate being able to define a generic
switchboard server, and messaging protocol, that can be used to allow multiple,
independent, orthogonal tools to communicate with each other, where the tools
are running in various environments. The focus here isn't specifically
traditional graphical step debugging, but diagnostic tooling in general. Think
loggers, REPLs, and other sorts of development tools.</p>
<p>One change I've made as part of the extraction is terminology. In weinre,
there were "clients" and "targets" that had different capabilities. But there
really doesn't need to be a distinction between the two, in terms of naming or
functionality. Instead, I'm referring to these programs as "agents". </p>
<p>And sometimes <strong>agents</strong> will be communicating with <strong>agents</strong> running on
other computers - <strong>r</strong>emote <strong>agents</strong> -
"<strong>ragents</strong>" - hey, that would be a
cool name for a project!</p>
<p>Another thing I'm copying from the Web Inspector work is the messaging
protocol. At the highest level, there are two types of messages that agents
can send - request/response pairs, and events.</p>
<ul>
<li><p>A request message can be sent from one agent to another, and then a response
message is sent the opposite direction. Like RPC, or an HTTP request/response.</p></li>
<li><p>An event message is like a pub/sub message; multiple agents can listen for
events, and then when an agent posts an event message, it's sent to every
agent that is listening.</p></li>
</ul>
<p>You can see concrete examples of these request/response and event messages in
the <a href="https://code.google.com/p/v8-wiki/wiki/DebuggerProtocol">V8 debugger protocol</a> and the <a href="https://developer.chrome.com/devtools/docs/protocol/1.1/index">Chrome Remote Debugging Protocol</a>.</p>
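<p>As a sketch, borrowing the shape of those protocols, the message kinds might
look like this - the field names are illustrative, not the actual ragents
protocol:</p>
<pre><code class="lang-js">// a request is sent from one agent to another; the id correlates the response
const stepRequest  = { type: 'request',  id: 17, method: 'step', params: {} }

// the response travels in the opposite direction, carrying the same id
const stepResponse = { type: 'response', id: 17, result: { ok: true } }

// an event is posted by one agent and delivered to every agent listening for it
const pausedEvent  = { type: 'event', name: 'paused', body: { line: 42 } }
</code></pre>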
<p>This pattern makes it very easy to support things like having multiple
debugger clients connect to a single debug target; all state changes in the target
get emitted as events, so every connected client will see the change. But
you can also send specific requests from the client to the target, and only
the client will see the responses.</p>
<p>For example, one debugger client might send a request "step" to have the
debugger execute the next statement then pause. The response for this might be
just a success/failure indicator. A set of events would end up being posted that
the program started, and then stopped again (on the next statement). That way,
every debugger client connected would see the effects of the "step" request.</p>
<p>Turns out, you really <strong>need</strong> to support multiple clients connected
simultaneously if your clients are running in a web browser.
Because if you only support a single client connection to your program,
what happens when the user can't find their browser tab running the client?
The web doesn't want to work with singleton connections.</p>
<p>I'm going to be working on the base libraries and server first, and then build
some diagnostic tools that make use of them.</p>
<p>Stay tuned!</p>
<!-- ======================================================================= -->
<h1>keeping secrets secret (2014-09-10)</h1>
<p>If you're building a web app, you probably have secrets you have to deal with:</p>
<ul>
<li>database credentials</li>
<li>session keys</li>
<li>etc</li>
</ul>
<p>So, where do you keep these secrets? Typical ways are:</p>
<ul>
<li>hard-code them into your source</li>
<li>require them to be passed on the command-line of your program</li>
<li>get them from a config file</li>
<li>get them from environment variables</li>
</ul>
<p>Folks using Cloud Foundry based systems have another option:</p>
<ul>
<li>get them from <a href="http://docs.cloudfoundry.org/devguide/services/user-provided.html">user-provided services</a></li>
</ul>
<p>This blog post will go over the advantages and disadvantages of these approaches.
Examples are provided for node.js, but are applicable to any language.</p>
<h2>secrets via hard-coding</h2>
<p>The documentation for the <a href="https://www.npmjs.org/package/express-session">express-session package</a> shows the following
example of hard-coding your secrets into your code:</p>
<script src="https://gist.github.com/pmuellr/43c146e2286fb138ce09.js?file=expressjs_session.js"></script>
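<p>In rough form, that example boils down to something like this - a sketch
adapted from the express-session docs; the embedded gist is the canonical
version:</p>
<pre><code class="lang-js">// DO NOT DO THIS: the session secret is hard-coded right in the source
var express = require('express')
var session = require('express-session')

var app = express()

app.use(session({
  secret: 'keyboard cat',   // hard-coded secret; now it lives in your SCM too
  resave: false,
  saveUninitialized: true
}))
</code></pre>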
<p>This is awful:</p>
<ul>
<li><p>If you need to change the secret, you need to change the code; apply
some separation of concerns, and keep your code separate from your
configuration.</p></li>
<li><p>If you happen to check this code into a source code management (SCM) system,
like GitHub, then everyone with access to that SCM will have access to your
password. That might be literally everyone.</p></li>
</ul>
<p>Please, <strong>DO NOT DO THIS!!</strong></p>
<p>Don't be <a href="http://www.webmonkey.com/2013/01/users-scramble-as-github-search-exposes-passwords-security-details/">one of these people</a>. Use one of the techniques below, instead.</p>
<h2>secrets via config files</h2>
<p>Here is an example using <code>require()</code> to get a secret from
a JSON file:</p>
<script src="https://gist.github.com/pmuellr/43c146e2286fb138ce09.js?file=secret-file.js"></script>
<p>This example takes advantage of the node.js feature of being able to load
a JSON file and get the parsed object as a result.</p>
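<p>A minimal sketch of that approach (hypothetical file and property names; the
gist above is the canonical version):</p>
<pre><code class="lang-js">// secret-config.json (NOT checked in to your SCM) contains:
//   { "sessionSecret": "keyboard cat" }

var config = require('./secret-config.json')
var sessionSecret = config.sessionSecret
</code></pre>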
<p>If you're going to go this route, you should do the following:</p>
<ul>
<li><p>Do <strong>NOT</strong> store the config file in your SCM, because otherwise you may still
be making your secret available to everyone who has access to your SCM.</p></li>
<li><p>To keep the config file from being stored, add the file to your
<a href="http://git-scm.com/docs/gitignore"><code>.gitignore</code> file</a> (or equivalent for your SCM).</p></li>
<li><p>Create an example config file, say <code>secret-config-sample.json</code>, which folks
can copy to the actual <code>secret-config.json</code> file, and use as an example.</p></li>
<li><p>Document the example config file usage.</p></li>
</ul>
<p>You now have an issue of how to "manage" or save this file, since it's not
being stored in an SCM.</p>
<h2>secrets via command-line arguments</h2>
<p>Here is an example using the <a href="https://www.npmjs.org/package/nopt">nopt package</a> to get a secret from
a command-line argument:</p>
<script src="https://gist.github.com/pmuellr/43c146e2286fb138ce09.js?file=secret-arg.js"></script>
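<p>In rough form, the nopt usage looks like this (a sketch; the gist above is the
canonical version):</p>
<pre><code class="lang-js">// sketch: read the secret from a command-line argument with nopt
var nopt = require('nopt')

var knownOpts  = { sessionSecret: String }
var shortHands = { s: ['--sessionSecret'] }

var parsed = nopt(knownOpts, shortHands, process.argv, 2)
var sessionSecret = parsed.sessionSecret
</code></pre>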
<p>You can then invoke your program using either of these commands:</p>
<pre><code>node secret-arg.js --sessionSecret "keyboard cat"
node secret-arg.js -s "keyboard cat"
</code></pre>
<p>This is certainly nicer than having secrets hard-coded in your app, but it
also means you will be typing the secrets a lot. If you decide to "script"
the command invocation, keep in mind your script now has your secrets in it.
Use the "example file" pattern described above in "secrets via config files"
to keep the secret out of your SCM.</p>
<h2>secrets via environment variables</h2>
<p>Here is an example using <code>process.env</code> to get a secret from
an environment variable:</p>
<script src="https://gist.github.com/pmuellr/43c146e2286fb138ce09.js?file=secret-env.js"></script>
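<p>This one is the simplest of the bunch - a sketch:</p>
<pre><code class="lang-js">// sketch: read the secret from an environment variable
var sessionSecret = process.env.SESSION_SECRET
</code></pre>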
<p>You can then invoke your program using the following command:</p>
<pre><code>SESSION_SECRET="keyboard cat" node secret-env.js
</code></pre>
<p>Like using command-line arguments, if you decide to script this, keep in mind
your secret will be in the script.</p>
<p>You likely have other ways of setting environment variables when you run your
program. For instance, in Cloud Foundry, you can set environment variables
via a <a href="http://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html#env-block"><code>manifest.yml</code> file</a> or with the <code>cf set-env</code> command.</p>
<p>If you decide to set the environment variable in your <code>manifest.yml</code> file, keep
in mind your secret will be in the manifest.
Use the "example file" pattern described above in "secrets via config files"
to keep the secret out of your SCM. Eg, put <code>manifest.yml</code> in your <code>.gitignore</code>
file, and ship a <code>manifest-sample.yml</code> file instead.</p>
<h2>secrets via Cloud Foundry user-provided services</h2>
<p>Here is an example using the <a href="https://www.npmjs.org/package/cfenv">cfenv package</a> to get a secret from
a user-provided service:</p>
<script src="https://gist.github.com/pmuellr/43c146e2286fb138ce09.js?file=secret-vcap.js"></script>
<p>This is my favorite way to store secrets for Cloud Foundry.
In the example above, the code is expecting a service whose
name matches the regular expression <code>/session-secret/</code> to contain the secret
in the credentials property named <code>secret</code>. You can create the user-provided
service with the <a href="http://docs.cloudfoundry.org/devguide/services/user-provided.html#user-cups"><code>cf cups</code></a> command:</p>
<pre><code>cf cups a-session-secret-of-mine -p secret
</code></pre>
<p>This will prompt you for the value of the property <code>secret</code>, and then create
a new service named <code>a-session-secret-of-mine</code>. You will need to <code>cf bind</code>
the service to your application to get access to it.</p>
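<p>Pulling that together, a minimal sketch of the lookup, assuming the
<code>cfenv</code> API described here:</p>
<pre><code>var cfenv  = require("cfenv")
var appEnv = cfenv.getAppEnv()

// find the bound service whose name matches /session-secret/,
// and pull the secret out of its credentials
var creds = appEnv.getServiceCreds(/session-secret/)

var sessionSecret = creds ? creds.secret : null
</code></pre>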
<p>There are a number of advantages to storing your secrets in
user-provided services:</p>
<ul>
<li><p>A service can be bound to multiple applications; this is a great way
to store secrets that need to be shared by "micro-services", if you're
into that kind of thing.</p></li>
<li><p>Once created, these values are persistent until you delete the service or
use the new <a href="http://docs.cloudfoundry.org/devguide/services/user-provided.html#user-uups"><code>cf uups</code></a> command to update them.</p></li>
<li><p>These values are only visible to users who have the appropriate access to the
service.</p></li>
<li><p>Using regular expression matching for service names makes it easy to switch
services: keep multiple services with regexp-matchable names,
and bind only the one you want. See my project
<a href="https://github.com/pmuellr/bluemix-service-switcher">bluemix-service-switcher</a> for an example of doing this.</p></li>
</ul>
<h2>secrets via multiple methods</h2>
<p>Of course, for your all singing, all dancing wunder-app, you'll want to allow
folks to configure secrets in a variety of ways. Here's an example that uses
all of the techniques above - including hard-coding an <code>undefined</code> value in the
code! That should be the only value you ever hard-code. :-)</p>
<p>The example uses the
<a href="http://underscorejs.org/#defaults"><code>defaults()</code> function from underscore</a>
to apply precedence for obtaining a secret from multiple techniques.</p>
<script src="https://gist.github.com/pmuellr/43c146e2286fb138ce09.js?file=secret-multi.js"></script>
<!-- ======================================================================= -->
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-32387195451748820542014-09-03T10:31:00.000-04:002014-09-03T10:33:54.501-04:00cfenv 1.0.0 with new getServiceCreds() method<p>I've updated the node.js <a href="https://www.npmjs.org/package/cfenv">cfenv package at npm</a>:</p>
<ul>
<li>moved from the netherworld of 0.x.y versioned packages to version 1.0.0</li>
<li>updated some of the package dependencies</li>
<li>added a new <code>appEnv.getServiceCreds(spec)</code> method</li>
</ul>
<p>In case you're not familiar with the <code>cfenv</code> package, it's intended to be the
swiss army knife of handling your Cloud Foundry runtime environment variables,
including: <code>PORT</code>, <code>VCAP_SERVICES</code>, and <code>VCAP_APPLICATION</code>. </p>
<p>Here's a quick example that doesn't include accessing services in
<code>VCAP_SERVICES</code>:</p>
<script src="https://gist.github.com/pmuellr/f9f3dc1922ceb2895e78.js?file=cfenv-server.js"></script>
<p>You can start your server with this kind of snippet, which provides the
correct port, binding address, and url of the running server; and it will
run locally as well as on CloudFoundry.</p>
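<p>In case the gist doesn't render, the pattern looks roughly like this:</p>
<pre><code>var express = require("express")
var cfenv   = require("cfenv")

var app    = express()
var appEnv = cfenv.getAppEnv()

// appEnv.port, appEnv.bind, and appEnv.url are all sensible whether you're
// running locally or on Cloud Foundry
app.listen(appEnv.port, appEnv.bind, function () {
  console.log("server starting on " + appEnv.url)
})
</code></pre>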
<p>For more information, see the <a href="https://github.com/cloudfoundry-community/node-cfenv#readme">cfenv readme</a>.</p>
<h2>new API <code>appEnv.getServiceCreds(spec)</code></h2>
<p>Lately I've been finding myself just needing the <code>credentials</code> property value from
service objects. To make this just a little bit easier than:</p>
<script src="https://gist.github.com/pmuellr/f9f3dc1922ceb2895e78.js?file=cfenv-creds-before.js"></script>
<p>you can now do this, using the new
<a href="https://github.com/cloudfoundry-community/node-cfenv#appenvgetservicecredsspec"><code>appEnv.getServiceCreds(spec)</code></a>
API:</p>
<script src="https://gist.github.com/pmuellr/f9f3dc1922ceb2895e78.js?file=cfenv-creds-after.js"></script>
<p>No need to get the whole service if you don't need it, and you don't have to
type out <code>credentials</code> all the time :-)</p>
<h2>what else?</h2>
<p>What other gadgets does <code>cfenv</code> need? If you have thoughts, don't hesitate
to <a href="https://github.com/cloudfoundry-community/node-cfenv/issues">open an issue</a>, send a pull request, etc.</p>
<!-- ======================================================================= -->
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-213914549014586432014-06-06T13:45:00.000-04:002014-06-06T15:43:33.076-04:00debugging node apps running on Cloud Foundry<p>For node developers, the
<a href="https://www.npmjs.org/package/node-inspector">node-inspector</a>
package is an excellent tool providing debugger support, when you need it.
It reuses the
<a href="https://developer.chrome.com/devtools/index">Chrome DevTools debugger</a>
user interface, in the same kinda way my old
<a href="http://people.apache.org/~pmuellr/weinre/docs/latest/Home.html">weinre</a>
tool for Apache Cordova does. So if you're familiar with Chrome DevTools when debugging your
web pages, you'll be right at home with node-inspector.</p>
<p>If you haven't tried node-inspector in a while, give it another try; the
new-ish <code>node-debug</code> command orchestrates the dance between your node app,
the debugger, and your browser, that makes it dirt simple to get the
debugger launched.</p>
<p>Lately I've been doing node development with
<a href="https://bluemix.net">IBM's Bluemix PaaS</a>,
based on the
<a href="http://cloudfoundry.org/">Cloud Foundry</a>.
And wanting to use node-inspector. But there's a problem. When you run
node-inspector, the following constraints are in play:</p>
<ul>
<li>you need to launch your app in debug mode</li>
<li>you need to run node-inspector on the same machine as the app</li>
<li>node-inspector runs a web server which provides the UI for the debugger</li>
</ul>
<p>All well and good, except if the app you are trying to debug is a web server
itself. Because with CloudFoundry, an "app" can only use one HTTP port -
but you need two - one for your app and one for node-inspector.</p>
<p>And so, the great proxy-splitter hack.</p>
<p>Here's what I'm doing (a rough sketch of the proxy follows the list):</p>
<ul>
<li>instead of running the actual app, run a shell app</li>
<li>that shell app is a proxy server</li>
<li>launch the actual app on some rando port, only visible on that machine</li>
<li>launch node inspector on some rando port, only visible on that machine</li>
<li>have the shell app's proxy direct traffic to node-inspector if the
incoming URL matches a certain pattern</li>
<li>for all other URLs the shell app gets, proxy to the actual app</li>
</ul>
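<p>Here's that rough sketch, using the
<a href="https://www.npmjs.org/package/http-proxy">http-proxy package</a>; this
just shows the idea, not the actual code I ended up with - the ports and the
URL pattern are made up:</p>
<pre><code>var http      = require("http")
var httpProxy = require("http-proxy")

var APP_PORT       = 3000                    // the actual app, listening locally
var INSPECTOR_PORT = 3001                    // node-inspector, listening locally
var proxy          = httpProxy.createProxyServer({})

var server = http.createServer(function (req, res) {
  // requests matching the magic prefix go to node-inspector,
  // everything else goes to the actual app
  var target = req.url.match(/^\/--debugger/) ?
    "http://localhost:" + INSPECTOR_PORT :
    "http://localhost:" + APP_PORT

  proxy.web(req, res, { target: target })
})

// node-inspector uses web sockets, so proxy upgrade requests as well
server.on("upgrade", function (req, socket, head) {
  proxy.ws(req, socket, head, { target: "http://localhost:" + INSPECTOR_PORT })
})

// the shell app listens on the single port Cloud Foundry gives you
server.listen(process.env.PORT || 8080)
</code></pre>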
<p><strong>AND IT WORKS!</strong></p>
<p>And then imagine my jaw-dropping surprise, when last week at JSConf,
Mikeal Rogers did a presentation on his
<a href="https://www.npmjs.org/package/occupy">occupy cloud deployment story</a>,
which <strong>ALSO</strong> uses a proxy splitter to do its business.</p>
<p>This is a thing, I think.</p>
<p>I've cobbled the node-inspector proxy bits together as
<a href="https://www.npmjs.org/package/cf-node-debug">cf-node-debug</a>.
Still a bit wobbly, but I just finished adding some security support so that
you need to enter a userid/password to be able to use the debugger; you don't
want strangers on the intertubes "debugging" your app on your behalf, amirite?</p>
<p>This works on BlueMix, but doesn't appear to work correctly on
<a href="https://run.pivotal.io/">Pivotal Web Services</a>;
something bad is happening with the web sockets; perhaps we can work through
that next week at
<a href="http://cfsummit.com/">Cloud Foundry Summit</a>?</p>
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-9934994806090156092014-05-19T12:54:00.000-04:002014-05-19T17:38:16.981-04:00enabling New Relic in a Cloud Foundry node application<p>Something you will want to enable with your apps running in a
<a href="http://cloudfoundry.org/">Cloud Foundry</a> environment such as <a href="https://bluemix.net">BlueMix</a>,
is
monitoring. You'll want a service to watch your app, show you when it's down,
how much traffic it's getting, etc.</p>
<p>One such popular service is <a href="https://newrelic.com">New Relic</a>.</p>
<p>Below are some instructions showing how you can enable a <a href="http://nodejs.org/">node.js</a> application to
optionally make use of monitoring with New Relic, and keep hard-coded names
and license keys out of your source code.</p>
<p>The documentation for using New Relic with a node application is available
in the Help Center documentation "<a href="https://docs.newrelic.com/docs/nodejs/installing-and-maintaining-nodejs">Installing and maintaining Node.js</a>".</p>
<p>But we're going to make a few changes, to make this optional, and apply
indirection in getting your app name and license key.</p>
<ul>
<li>instead of copying <code>node_modules/newrelic/newrelic.js</code> to the root directory
of your app, create a <code>newrelic.js</code> file in the root directory with the
following contents:</li>
</ul>
<script src="https://gist.github.com/pmuellr/d598afa2a40a16d41949.js"></script>
<ul>
<li><p>This module is slightly enhanced from the version that New Relic suggests
that you create
(see <a href="https://github.com/newrelic/node-newrelic/blob/master/newrelic.js">https://github.com/newrelic/node-newrelic/blob/master/newrelic.js</a>).
Rather than hard-code your app name and license key, we get them dynamically.</p></li>
<li><p>The app name is retrieved from your <code>package.json</code>'s <code>name</code> property;
and the license key is obtained from an environment variable. Note this
code is completely portable, and can be copied from one project to another
without having to change keys or names.</p></li>
<li><p>To set the environment variable for your CloudFoundry app, use the command</p>
<pre><code>cf set-env <app-name> NEW_RELIC_LICENSE_KEY 983498129....
</code></pre></li>
<li><p>To run the code in the <code>initialize()</code> function, use the following code in
your application startup, as close to the beginning as possible:</p></li>
</ul>
<script src="https://gist.github.com/pmuellr/4997d5829fee09d10a88.js"></script>
<ul>
<li><p>This code is different from what New Relic suggests you use at the
beginning of your application code;
instead of doing the <code>require("newrelic")</code> directly in your code, it will be
run via the <code>require("./newrelic").initialize()</code> call.</p></li>
<li><p>If you don't have the relevant environment variable set, then New Relic monitoring
will not be enabled for your app.</p></li>
</ul>
<p>Another option to keeping your license key un-hard-coded and dynamic is to
use a Cloud Foundry service. For instance, you can create a
<a href="http://docs.cloudfoundry.org/devguide/services/user-provided.html">user-provided service instance</a> using the following command:</p>
<pre><code>cf cups NewRelic -p '{"key":"983498129...."}'
</code></pre>
<p>You can then bind that service to your app:</p>
<pre><code>cf bind-service <app-name> NewRelic
</code></pre>
<p>Your application code can then get the key from the
<a href="http://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#VCAP-SERVICES"><code>VCAP_SERVICES</code></a> environment variable.</p>
<p>I would actually use services rather than environment variables in most cases, as
services can be bound to multiple apps at the same time, whereas you need to set
the environment variables for each app. </p>
<p>In this case, I chose to use an environment variable, as you <em>really</em> should be
doing the New Relic initialization as early as possible, and there is less
code involved in getting the value of an environment variable than parsing the
<code>VCAP_SERVICES</code> values.</p>
<p>You may want to add some other enhancements, such as appending a <code>-dev</code> or <code>-local</code>
suffix to the app name if you determine you're running locally instead of within
Cloud Foundry.</p>
<p>I've added optional New Relic monitoring to my <a href="http://node-stuff.ng.bluemix.net">node-stuff</a> app, so you can
see all this in action in the <a href="https://github.com/node-at-bluemix/node-stuff-web">node-stuff source</a> at GitHub.</p>
<hr>
<p><b>update on 2014/05/19</b>
<p>After posting this blog entry,
Chase Douglas (<a href="https://twitter.com/txase">@txase</a>) from New Relic
<a href="https://twitter.com/txase/status/468464754021773321">tweeted that
"the New Relic Node.js agent supports env vars directly"</a>, pointing to the
New Relic Help Center doc
<a href="https://docs.newrelic.com/docs/nodejs/configuring-nodejs-with-environment-variables">"Configuring Node.js with environment variables"</a>.
<p>Thanks Chase. Guess I need to learn to RTFM!
<p>What this means is that you can most likely get by with a much easier
set up if you want to live the "environment variable configuration" life.
There may still be some value in a more structured approach, like what I've
documented here, if you'd like to be a little more explicit.
<p>Also note that I specified using an environment variable of
<tt>NEW_RELIC_LICENSE_KEY</tt>, which is <b>the exact same name</b>
as the environment variable that the New Relic module uses itself!
(Great minds think alike?)
As such, it would probably be a good idea, if you want to do explicit
configuration as described here, to avoid using <tt>NEW_RELIC_*</tt> as
the name of your environment variables, as you may get some unexpected
interaction. In fact, my read of the precedence rules is that the environment
variables override the <tt>newrelic.js</tt> config file settings, so the setting
in the <tt>newrelic.js</tt> is ignored in favor of the environment variable, at
least in this example.Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-43791434126571297002014-04-15T13:18:00.000-04:002014-07-07T22:53:18.366-04:00my new job at IBM involving node.js<p>Over the last year, as my work on <code>weinre</code>
(part of the <a href="http://cordova.apache.org/">Apache Cordova</a> project) has wound <em>way</em> down,
folks have been asking me "What are you working on now?"</p>
<p>The short answer was "cloud stuff". The long answer started with "working with
the <a href="http://cloudfoundry.org/">Cloud Foundry</a> open source <a href="http://en.wikipedia.org/wiki/Platform_as_a_service">PaaS</a> project".</p>
<p>IBM has been involved
in the Cloud Foundry project for a while now.
I've been
working on different aspects of using Cloud Foundry, almost all of them focused
around deploying <a href="http://nodejs.org/">node.js</a> applications to Cloud Foundry-based platforms.</p>
<p>About a month and a half ago,
<a href="http://www-03.ibm.com/press/us/en/pressrelease/43257.wss">IBM announced</a>
our new
<a href="https://bluemix.net">BlueMix</a>
PaaS offering<sup><a id="fn-2014-04-15-1" href="#fn-2014-04-15-1-ref">1</a></sup>,
based on Cloud Foundry.</p>
<p>And as of a few weeks ago, I've taken a new job at IBM that I call
"<em>Developer Advocate for BlueMix, focusing on node.js</em>".
Gotta word-smith that a bit.
In this role, I'll
continue to be able to work on different aspects of our BlueMix product and
the open source Cloud Foundry project,
using node.js, only this time, <strong>more in the open</strong>.</p>
<p>This is going to be fun.</p>
<p>I already have a package up at npm - <a href="https://www.ibmdw.net/bluemix/2014/03/12/cf-env-package-node-js/"><code>cf-env</code></a> - which makes it a bit
easier to deal with your app's startup configuration. It's designed to work with
Cloud Foundry based platforms, so works with BlueMix, of course.</p>
<p>I've also aggregated some node.js and BlueMix information together into
a little site, hosted on BlueMix:</p>
<blockquote>
<p><a href="http://node-stuff.mybluemix.net">http://node-stuff.mybluemix.net</a></p>
</blockquote>
<p>I plan on working on node.js stuff relating to:</p>
<ul>
<li>things specifically for BlueMix</li>
<li>things more generally for Cloud Foundry</li>
<li>things more generally for using any PaaS</li>
<li>things more generally for using node.js anywhere</li>
</ul>
<p>I will be posting things specific to BlueMix on the
<a href="https://www.ibmdw.net/bluemix/blog/">BlueMix dev blog</a>,
and more general things on this blog.</p>
<p>If you'd like more information on using node.js on BlueMix or CloudFoundry,
don't hesitate to get in touch with me.
The easiest ways are
<a href="https://twitter.com/pmuellr">@pmuellr</a> on twitter,
or email me at
<a href="mailto:Patrick_Mueller@us.ibm.com">Patrick_Mueller@us.ibm.com</a>.</p>
<p>Also, did you know that
<a href="http://node-stuff.mybluemix.net/ibm-node">IBM builds it's own version of node.js</a>
? I'm not currently contributing to this project, but I've known the folks
that are working on it, for a long time. Like, from the Smalltalk days. :-)</p>
<hr />
<p><strong>note on <code>weinre</code></strong></p>
<p>I continue to support <code>weinre</code>.
You can continue to use the following link to find the latest information,
links to forums and bug trackers, etc.</p>
<blockquote>
<p><a href="http://people.apache.org/~pmuellr/weinre/docs/latest/Home.html">http://people.apache.org/~pmuellr/weinre/docs/latest/Home.html</a></p>
</blockquote>
<p>I will add that most folks
have absolutely no need for <code>weinre</code> anymore;
<a href="http://prototest.com/guide-to-remote-debugging-ios-and-android-mobile-devices/">both Android and iOS have great
stories for web app debug</a>.</p>
<p>As
<a href="https://rawgit.com/brianleroux/slides/gh-pages/pgday2012/index.html#/13">Brian LeRoux has frequently stated</a>,
one of the primary goals of Apache Cordova is to <strong>cease to exist</strong>.
<code>weinre</code> has nearly met that goal.</p>
<hr />
<p><a href="#fn-2014-04-15-1"><sup id="fn-2014-04-15-1-ref">1</sup></a> Try Bluemix for free during the beta -
<a href="https://bluemix.net/">https://bluemix.net/</a></p>
<!-- ======================================================================= -->
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-59873927709491877672013-11-12T21:13:00.000-05:002013-11-12T21:13:33.919-05:00gzip encoding and compress middleware<p>Hopefully everyone who builds stuff for the web knows about gzip compression
available in HTTP. Here's a quick intro if you don't:
<a href="https://developers.google.com/speed/articles/gzip">https://developers.google.com/speed/articles/gzip</a></p>
<p>I use
<a href="http://www.senchalabs.org/connect/">connect</a>
or
<a href="http://expressjs.com/">express</a> when building web servers in node, and
you can use the
<a href="http://www.senchalabs.org/connect/compress.html">compress middleware</a>
to have your content gzip'd (or deflate'd), like in
<a href="http://expressjs.com/api.html#compress">this little snippet</a>.</p>
<p>However ...</p>
<p>Let's think about what's going on here. When you use the compress
middleware like this, for every response that sends compressible content,
your content will be compressed. Every time. Of course,
for your "static" resources, the result of that
compression is the same every time,
and so for those resources, it's really kind of pointless
to run the compression for each request. You should do it once, and
then reuse that compressed result for future requests.</p>
<p>Here are some tests using the play
<a href="http://www.gutenberg.org/ebooks/15788">Waste</a>,
by Harley Granville-Barker. I pulled the HTML version of the file, and
then also gzip'd the file manually from the command line for one of the tests.</p>
<p>The HTML file is ~300 KB. The gzip'd version is ~90 KB.</p>
<p>And here's a server I built to serve the files:</p>
<script src="https://gist.github.com/pmuellr/7441742.js"></script>
<p>The server runs on 3 different HTTP ports, each one serving the file, but
in a different way.</p>
<p>Port 4000 serves the HTML file with no compression.</p>
<p>Port 4001 serves the HTML file with the compress middleware.</p>
<p>Port 4002 serves the pre-gzip'd version of the file that I stored in
a separate directory; the original file was <code>waste.html</code>, but the gzip'd
version is in <code>gz/waste.html</code>. It checks the incoming request to see
if a gzip'd version of the file exists (caching that result), internally
redirects the server to that file by resetting <code>request.url</code>, and sets
the appropriate <code>Content-Encoding</code>, etc., headers.</p>
<p>What a hack! Not
quite sure that "fixing" <code>request.url</code> is kosher, but, worked great for
this test.</p>
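<p>A rough sketch of that port-4002 approach, in case the gist doesn't render;
file layout as described above, and this isn't the exact gist (it also skips the
caching of the existence check):</p>
<pre><code>var fs      = require("fs")
var path    = require("path")
var express = require("express")

var app = express()

app.use(function (req, res, next) {
  // only bother if the client accepts gzip
  if (!/gzip/.test(req.headers["accept-encoding"] || "")) return next()

  // only bother if a pre-gzip'd version of the file exists
  var gzFile = path.join(__dirname, "gz", req.url)
  if (!fs.existsSync(gzFile)) return next()

  res.setHeader("Content-Encoding", "gzip")
  res.setHeader("Vary",             "Accept-Encoding")

  // the hack: internally redirect to the pre-gzip'd file
  req.url = "/gz" + req.url

  next()
})

app.use(express.static(__dirname))

app.listen(4002)
</code></pre>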
<p>Here's some <code>curl</code> invocations.</p>
<pre>
$ <b>curl --compressed --output /dev/null --dump-header -
--write-out "%{size_download} bytes" http://localhost:4000/waste.html</b>
X-Powered-By: Express
Accept-Ranges: bytes
ETag: "305826-1384296482000"
Date: Wed, 13 Nov 2013 01:21:13 GMT
Cache-Control: public, max-age=0
Last-Modified: Tue, 12 Nov 2013 22:48:02 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 305826
Connection: keep-alive
305826 bytes
</pre>
<p>Looks normal.</p>
<pre>
$ <b>curl --compressed --output /dev/null --dump-header -
--write-out "%{size_download} bytes" http://localhost:4001/waste.html</b>
X-Powered-By: Express
Accept-Ranges: bytes
ETag: "305826-1384296482000"
Date: Wed, 13 Nov 2013 01:21:13 GMT
Cache-Control: public, max-age=0
Last-Modified: Tue, 12 Nov 2013 22:48:02 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
Content-Encoding: gzip
Connection: keep-alive
Transfer-Encoding: chunked
91071 bytes
</pre>
<p>Nice seeing the <code>Content-Encoding</code> and <code>Vary</code> headers, along with the reduced
download size. But look ma, no <code>Content-Length</code> header; instead the content
comes down chunked, as you would expect with a server-processed output stream.</p>
<pre>
$ <b>curl --compressed --output /dev/null --dump-header -
--write-out "%{size_download} bytes" http://localhost:4002/waste.html</b>
X-Powered-By: Express
Content-Encoding: gzip
Vary: Accept-Encoding
Accept-Ranges: bytes
ETag: "90711-1384297654000"
Date: Wed, 13 Nov 2013 01:21:13 GMT
Cache-Control: public, max-age=0
Last-Modified: Tue, 12 Nov 2013 23:07:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 90711
Connection: keep-alive
90711 bytes
</pre>
<p>Like the gzip'd version above, but this one has a <code>Content-Length</code>!</p>
<p>Here are some contrived, useless benches using <a href="https://github.com/wg/wrk">wrk</a>,
that confirm my fears.</p>
<pre>
$ <b>wrk --connections 100 --duration 10s --threads 10
--header "Accept-Encoding: gzip" http://localhost:4000/waste.html</b>
Running 10s test @ http://localhost:4000/waste.html
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 71.72ms 15.64ms 101.74ms 69.82%
Req/Sec 139.91 10.52 187.00 87.24%
13810 requests in 10.00s, 3.95GB read
Requests/sec: 1380.67
Transfer/sec: 404.87MB
$ <b>wrk --connections 100 --duration 10s --threads 10
--header "Accept-Encoding: gzip" http://localhost:4001/waste.html</b>
Running 10s test @ http://localhost:4001/waste.html
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 431.76ms 20.27ms 493.16ms 63.89%
Req/Sec 22.47 3.80 30.00 80.00%
2248 requests in 10.00s, 199.27MB read
Requests/sec: 224.70
Transfer/sec: 19.92MB
$ <b>wrk --connections 100 --duration 10s --threads 10
--header "Accept-Encoding: gzip" http://localhost:4002/waste.html</b>
Running 10s test @ http://localhost:4002/waste.html
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 48.11ms 10.66ms 72.33ms 67.39%
Req/Sec 209.46 24.30 264.00 81.07%
20795 requests in 10.01s, 1.76GB read
Requests/sec: 2078.08
Transfer/sec: 180.47MB
</pre>
<p>Funny to note that the server using compress middleware actually handles
fewer requests/sec than the one that doesn't compress at all.
But this is a localhost test so
the network bandwidth/throughput isn't realistic. Still, makes ya think.</p>
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-46880531564396833122013-10-08T13:47:00.000-04:002013-10-08T17:56:15.518-04:00sourcemap best practices<p>Source map support for web page debugging has been a thing now for a while.
If you don't know what a sourcemap is, Ryan Seddon's article
"<a href="http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/">Introduction to JavaScript Source Maps</a>", provides
a technical introduction. The "proposal" for source map seems to be
<a href="https://docs.google.com/document/d/1U1RGAehQwRypUTovF1KRlpiOFze0b-_2gc6fAH0KY0k/edit">here</a>.</p>
<p>In a nutshell, source maps allow you to ship minified .js files (and maybe .css
files), which point back to the original source, so that when you debug your
code in a debugger like Chrome Dev Tools, you'll see the original source.</p>
<p>It's awesome.</p>
<p>Even better, we're starting to see folks ship source maps with their minified
resources. For example, looking at the
<a href="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.0-rc.2/angular.min.js">minified angular 1.2.0-rc.2 js file</a>
you can see the sourcemap annotation near the end:</p>
<pre><code>/*
//@ sourceMappingURL=angular.min.js.map
*/
</code></pre>
<p>The contents of that .map file look like this (I've truncated bits of it):</p>
<pre><code>{
"version":3,
"file":"angular.min.js",
"lineCount":183,
"mappings":"A;;;;;aAKC,SAAQ,CAACA,CAAD,...",
"sources":["angular.js","MINERR_ASSET"],
"names":["window","document","undefined",...]
}
</code></pre>
<p>You don't need to understand any of this, but what it means is that
if you use the <code>angular.min.js</code> file in a web page, and also make
<code>angular.min.js.map</code> and <code>angular.js</code> files available, when you debug
with a sourcemap-enabled debugger (like Chrome Dev Tools), you'll see
the original source instead of the minified source.</p>
<p>But there are a few issues. Although I haven't written any tools to generate
source maps, I do consume them in libraries and with tools that I use
(like <a href="http://browserify.org/">browserify</a>). Sometimes I do a bit of
surgery on them, to make them work a little better.</p>
<p>Armed with this experience, I figured I'd post a few "best practices", based
on some of the issues I've seen, aimed at folks generating sourcemaps.</p>
<p><strong>do not use a data-url with your sourceMappingURL annotation</strong></p>
<p>Using a data url, which browserify does, ends up creating a huge single line
for the sourceMappingURL annotation. Many text editors will choke on this
if you accidentally, or purposely, edit the file containing the annotation.
<p>In addition, using a data-url means the mapping data is base64-encoded, which
means humans can't read it. The sourcemap data is actually sometimes interesting
to look at, for instance, if you want to see what files got bundled with
browserify. </p>
<p>Also, including the sourcemap as a data-url means you just made your "production" file bigger, especially since it's base64-encoded.</p>
<p>Instead of using a data-url with the sourcemap information inlined, just provide
a map file like angular does (described above).</p>
<p><strong>put the sourceMappingURL annotation on a line by itself at the end</strong></p>
<p>In the angular example above, the sourceMappingURL annotation is in a <code>//</code> style
comment inside a <code>/* */</code> style comment. Kinda pointless. But worse,
it no longer works with Chrome 31.0.1650.8 beta.</p>
<p>Presumably, Chrome Dev Tools got a bit stricter with how they recognize the
sourceMappingURL annotation; it seems to like having the comment at the
very end of the file. See <a href="https://code.google.com/p/chromium/issues/detail?id=305284">https://code.google.com/p/chromium/issues/detail?id=305284</a>
for more info.</p>
<p>Browserify also has an issue here, as it adds a line with a single semi-colon
to the end of the file, right before the sourceMappingURL annotation, which
also does not work in the version of Chrome I referenced.</p>
<p><strong>name your sourcemap file <code><min-file>.map.json</code></strong></p>
<p>Turns out these sourcemap files are JSON. But no one uses a .json file extension,
which seems unfortunate, as the browser doesn't know what to do with them if you
happen to load one of the files directly. Not sure if there's a restriction about
naming them, but there shouldn't be, and it makes
sense to just use a .json extension for them.</p>
<p><strong>embed the original source in the sourcemap file</strong></p>
<p>Source map files contain a list of the names of the original source files
(urls, actually), and can optionally contain the original source file content
in the <code>sourcesContent</code> key.
Not everyone does this - I think most people do not put the original source
in the sourcemap files (eg, jquery, angular). </p>
<p>If you don't include the source with the <code>sourcesContent</code> key, then the source
will need to be retrieved by your browser. Not only is this another HTTP GET
required, but you will need to provide the original .js files as well as the
minified .js files on your server. And they will need to be arranged in whatever
manner the source files are specified in the source map file (remember, the
names are actually URLs).</p>
<p>If you're going to provide a separate map file, then you might as well add
the source to it, so the whole wad is available in one file.</p>
<p><strong>generate your sourcemap json file so that it's readable</strong></p>
<p>I'm looking at you, <a href="http://code.jquery.com/jquery-2.0.3.min.map">http://code.jquery.com/jquery-2.0.3.min.map</a>.</p>
<p>The file is only used at debug time, so it's unlikely there's going to be
much of a performance/memory cost from pretty-printing it instead of
single-lining it. Remember,
as I said above, it's often useful to be able to read the sourcemap
file, so ... make it readable.</p>
<p><strong>if you use the <code>names</code> key in your sourcemap, put it at the end of the map file</strong></p>
<p>Again, sourcemap files are interesting reading sometimes, but the <code>names</code> key
ends up being painful to deal with if you generate the sourcemap with
something like <code>JSON.stringify(mapObject, null, 4)</code>. Because
the value of the <code>names</code> key is an array of strings, it's pretty long,
and it's not nearly as interesting to read as the other bits in the source map.
Add the property to your map object last, before you stringify it, so it
doesn't get in the way.</p>
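<p>A tiny sketch of that, assuming you're building the map object yourself; the
<code>map</code> variable here is whatever your tool produced, and the output file
name is made up:</p>
<pre><code>var fs = require("fs")

// pull "names" off the map object, then re-add it, so it ends up as the
// last key when the object is stringified
var names = map.names
delete map.names
map.names = names

fs.writeFileSync("lib.min.js.map.json", JSON.stringify(map, null, 4))
</code></pre>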
<p><strong>where can we publish some sourcemap best practices?</strong></p>
<p>I'd like to see somewhere we can publish and debate sourcemap best practices.
Idears?</p>
<p><strong>bugs opened</strong></p>
<p>Some bugs relevant to the "sourceMappingURL not the last thing in the file" issue/violation:</p>
<ul>
<li><a href="https://code.google.com/p/chromium/issues/detail?id=299001">https://code.google.com/p/chromium/issues/detail?id=299001</a>
<li><a href="https://code.google.com/p/chromium/issues/detail?id=305284">https://code.google.com/p/chromium/issues/detail?id=305284</a>
<li><a href="http://bugs.jquery.com/ticket/14433">http://bugs.jquery.com/ticket/14433</a>
<li><a href="https://github.com/angular/angular.js/issues/4331">https://github.com/angular/angular.js/issues/4331</a>
<li><a href="https://github.com/substack/node-browserify/issues/496">https://github.com/substack/node-browserify/issues/496</a>
</ul>Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-39827670925639429982013-02-07T23:22:00.000-05:002013-02-07T23:22:55.528-05:00dustup<p>A few months ago, I took a look at Nathan Marz's <a href="http://storm-project.net/">Storm project</a>.
As someone who has used and implemented <a href="http://en.wikipedia.org/wiki/Actor_model">Actor-like</a>
systems a few times over the years, always fun to see another take. Although
Storm is aimed at the "cloud", it's easy to look at it and see how you
might be able to use it for local computation. In fact, I have a problem area
I'm working in right now where something like Storm might be useful, but I just
need it to work in a single process. In JavaScript.</p>
<p>Another recent problem area I've been looking at is async, and
using Promises to help with that mess. I've been looking at
Kris Kowal's <a href="http://documentup.com/kriskowal/q/"><code>q</code></a>
specifically.
It's a nice package, and there's a lot to love.
But as I was thinking about how I was going to end up using them, I imagined
building up these chained promises over and over again for some of my
processing. As live objects. Which, on the face of it, is insane. If
I can build a static structure and have object flow through them instead.
Should be able to cut down on the amount of live objects created, and it will
probably be easier to debug.</p>
<p>So, with that in mind, I finally sat down and reinterpreted Storm
this evening, with my
<a href="https://github.com/pmuellr/dustup">dustup</a> project.</p>
<p>Some differences from Storm, besides the super obvious ones (JavaScript vs
Java/Clojure, and local vs distributed):</p>
<ul>
<li><p>One of the things that didn't seem right to me in Storm was differentiating
spouts from bolts. So I only have bolts.</p></li>
<li><p>I'm also a fan of the <a href="http://puredata.info/">PureData</a>
real-time graphical programming environment, and so borrowed some ideas
from there. Namely, that bolts should have inlets where data comes in, and
outlets where they go out, and that to hook bolts together, that you'll connect
an inlet to an outlet.</p></li>
<li><p>Designed with CoffeeScript in mind. So you can do nice looking things like this:</p>
<pre><code>x = new Bolt
outlets:
stdout: "standard output"
stderr: "standard error"
</code></pre>
<p>which generates the following JavaScript:</p>
<pre><code>x = new Bolt({
outlets: {
stdout: "standard output",
stderr: "standard error"
}
})
</code></pre>
<p>and this:</p>
<pre><code>Bolt.connect a: boltA, b: boltB
</code></pre>
<p>which generates the following JavaScript</p>
<pre><code>Bolt.connect({a: boltA, b: boltB})
</code></pre>
<p>Even though I designed it for and with CoffeeScript, you can
of course use it in JavaScript, and it seems totally survivable.</p></li>
</ul>
<p>Stopping there left me with something that seems useful, and still small.
I hope to play with it in anger over the next week or so.</p>
<p>Here's an example of a topology that just passes data from one bolt to another,
in CoffeeScript:
<br>
(<a href="https://github.com/pmuellr/dustup/blob/master/test/sample.js">here's the JavaScript</a>
for you CoffeeScript haters)</p>
<script src="https://gist.github.com/pmuellr/4736380.js"></script>
<p>In the lines 1-2, we get access to the <code>dustup</code> package and
the <code>Bolt</code> class that it exports.</p>
<p>On lines 4-6, we're creating a new bolt that only has a single outlet, named
<code>"a"</code>
(the string value associated with the property <code>a</code> - <code>"outlet a"</code> - is ignored).</p>
<p>On lines 8-11, we're creating a new bolt that only has a single inlet, named
<code>"b"</code>. Inlets need to provide a function which will be invoked when the
inlet receives data from an outlet. In this case, the inlet function will
write the data to "the console".</p>
<p>On line 13, we connect the <code>a</code> outlet from the first bolt to the <code>b</code> inlet of
the second bolt. So that when we send a message out the <code>a</code> outlet, the
<code>b</code> inlet will receive it.</p>
<p>Which we test on line 15, by having the first bolt emit some data. Guess what
happens.</p>
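<p>Pulling that description together, the sample looks roughly like this in
JavaScript - a sketch, not the exact gist:</p>
<pre><code>var dustup = require("dustup")
var Bolt   = dustup.Bolt

// a bolt with a single outlet named "a" (the string value is ignored)
var boltA = new Bolt({
  outlets: { a: "outlet a" }
})

// a bolt with a single inlet named "b"; the function runs when data arrives
var boltB = new Bolt({
  inlets: {
    b: function (data) { console.log(data) }
  }
})

// connect boltA's "a" outlet to boltB's "b" inlet
Bolt.connect({ a: boltA, b: boltB })

// the last line of the gist then has boltA send some data out its "a" outlet,
// which ends up printed on the console by boltB's inlet function
</code></pre>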
<p><strong>next</strong></p>
<p>The next obvious thing to look at, is to see how asynchronous functions,
timers, and intervals fit into this scheme.</p>
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-64182382606550399092012-11-30T14:23:00.001-05:002012-11-30T14:23:55.385-05:00weinre installable at heroku (and behind other proxy servers)<p>I finally resolved
<a href="https://issues.apache.org/jira/browse/CB-1494">an issue with weinre relating to using it behind
proxy servers</a>.</p>
<p>Matti Paksula figured out what the problem was, but I wasn't happy with the
additional complexity involved, so I
<a href="https://git-wip-us.apache.org/repos/asf?p=cordova-weinre.git;a=blobdiff;f=weinre.server/lib/channelManager.coffee;h=d55bb0130179e4b0e6c3f62d19ef99a92fa6e7e9;hp=48abca6ea13d86582632efca0adfc073b9b0614e;hb=19454cfe2cd3c4515f7fb615f35fa2c7b2565401;hpb=de980f8b0de7c6472576aef1e4ab0b07d54534f7">made a simpler and more drastic change</a> (in the <code>getChannel()</code> function).</p>
<p>Used to be, subsequent weinre connections after the first were
checked to make sure they came from the same "computer"; that was just a
silly sort of security check that can be worked around, if need be. So
why bother with it, if it just gets in the way.</p>
<p>So, now weinre can run on <a href="http://www.heroku.com/">heroku</a>.
I have a wrapper project at GitHub you can use to create your own server:
<a href="https://github.com/pmuellr/weinre-heroku">https://github.com/pmuellr/weinre-heroku</a></p>
<p>Since Heroku provides
<a href="https://devcenter.heroku.com/articles/usage-and-billing">750 free dyno-hours per app per month</a>,
and weinre is one app, and uses one dyno at a time, and 750 hours are in a
month, you can run a weinre server on the Intertubes for free.</p>
<p>As usual, weinre has been updated on <a href="https://npmjs.org/package/weinre">npm</a>.</p>
Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-68441699114337690252012-03-30T12:50:00.000-04:002012-03-30T13:39:27.932-04:00unofficial weinre binary packages for your convenience at my apache home page<p>weinre is now getting ready to go for the brass ring at Apache:
an "official" release. </p>
<p><strong>NOT THERE YET - NO OFFICIAL RELEASES YET</strong>.</p>
<p>Until that time, I have <strong>unofficial</strong> binary packages available for your convenience.</p>
<p>I've gone ahead and created a little download site at my personal Apache
page, here: <a href="http://people.apache.org/~pmuellr/">http://people.apache.org/~pmuellr/</a>.</p>
<p>The download site contains all the doc and binary packages for every weinre version
I've 'shipped', in the 1.x directories. It also contains <strong>unofficial</strong>
binary packages of recent cuts of weinre for your convenience, as the <code>bin</code>, <code>doc</code>, and <code>src</code> directories
in the <a href="http://people.apache.org/~pmuellr/weinre-builds/">weinre-builds</a> directory, and the same version of the doc, unpacked,
in the <code>latest</code> directory of <a href="http://people.apache.org/~pmuellr/weinre-docs/">weinre-docs</a>.</p>
<p>As with previous binary releases for your convenience, you can <code>npm install</code> weinre. Here's the recipe:</p>
<pre><code>npm install {base-url}apache-cordova-weinre-{version}-bin.tar.gz
{base-url} = http://people.apache.org/~pmuellr/weinre-builds/bin/
{version} = 2.0.0-pre-H0FABA3W-incubating
</code></pre>
<p>That version number will change, so I'm not providing an exact link. Check
the directory.</p>
<p><strong>help me help you</strong></p>
<p>If you'd like to accelerate the process of getting weinre to "official" status,
you can help!</p>
<ul>
<li>install it</li>
<li>use it</li>
<li>review it</li>
<li>build it</li>
<li>enhance it</li>
</ul>
<p>Feel free to post to the cordova mailing list at Apache, the weinre google group,
clone the source from Apache or GitHub, etc. All the links are here:
<a href="http://people.apache.org/~pmuellr/weinre-docs/latest/">http://people.apache.org/~pmuellr/weinre-docs/latest/</a>.</p>
<p>One link I forgot to add, and feel free to open a bug about this, is the link
to open a bug. <a href="https://issues.apache.org/jira/secure/CreateIssue.jspa?pid=12312420&issuetype=1&Create=Create">It's hideous</a> (the URL).
Add component "weinre" and maybe a "[weinre]" prefix on the summary.</p>
<p>Thank you for helping yourself!</p>
<p><strong>you f'ing moved the downloads again you f'er!</strong></p>
<p>Chill. And some bad news. The official releases won't be here (I don't think).
But I will provide a link on that site to the official releases, when they become
real.</p>
<p>I intend to keep this page around until I can get all the weinre bits available
in more appropriate places at Apache. Not there yet.</p>
<p><strong>personal home pages at Apache</strong></p>
<p>Apache provides "home pages" for Apache committers. That's what I'm using for
this archive site. If you're a committer, and curious about how I did the directory indexing,
the source for the site (modulo actual archives), is here:
<a href="https://github.com/pmuellr/people.apache.org">https://github.com/pmuellr/people.apache.org</a>.</p>
<p><strong>2012/03/30 1pm update</strong></p>
<p><a href="https://twitter.com/#!/janl">Jan Lehnardt</a> pointed out to me
<a href="https://twitter.com/#!/janl/status/185771996561420288">on twitter</a>
that I should avoid the use of the word "release", so that word has been
changed to things like "binary package for your convenience".</p>Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-23284033985703298982012-02-17T12:56:00.001-05:002012-02-17T13:54:18.432-05:00npm installable version of weinre<p>As part of the
<a href="https://issues.apache.org/jira/browse/CB-83">port of weinre from Java to node.js</a>
it seemed logical to make it, somehow,
<a href="https://issues.apache.org/jira/browse/CB-259">npm installable</a>.</p>
<p>Success.</p>
<p>There is currently a tar.gz download of the weinre runtime
available at the
<a href="https://github.com/pmuellr/incubator-cordova-weinre/downloads">pmuellr/incubator-cordova-weinre downloads page</a>.
I may keep a 'recent' build here until I get the
<a href="https://issues.apache.org/jira/browse/INFRA-4343">continuous builds of weinre at Apache</a>
working. At which point, the tar.gz should be available
somewhere at apache.</p>
<p>Of course, let me know if you have any issues. Preferred place for discussion
is the
<a href="https://groups.google.com/forum/?fromgroups#!forum/weinre">weinre Google Group</a>.</p>
<h2>install locally</h2>
<pre><code>npm install https://github.com/downloads/pmuellr/incubator-cordova-weinre/weinre-node-1.7.0-pre-2012-02-17--16-47-15.tar.gz
</code></pre>
<p>After that install, you can run weinre via</p>
<pre><code>./node_modules/.bin/weinre
</code></pre>
<h2>install globally</h2>
<pre><code>sudo npm -g install https://github.com/downloads/pmuellr/incubator-cordova-weinre/weinre-node-1.7.0-pre-2012-02-17--16-47-15.tar.gz
</code></pre>
<p>After that install, you can run weinre via</p>
<pre><code>weinre
</code></pre>
<h2>notes/caveats</h2>
<ul>
<li><p>The command-line options have not changed, although the <code>--reuseAddr</code> option
is no longer supported.</p></li>
<li><p>There is no longer a Mac 'app' version of weinre. </p></li>
<li><p>This is <strong>not</strong> an 'official build' of weinre, in the Apache
sense. I need to figure out what that even means first.</p></li>
<li><p>These 'unofficial' builds have horrendous
<a href="http://semver.org/">semver</a>
patch version names. Deal. If you have a better scheme, please
<a href="https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12312420&components=12316604&issuetype=1">let me know</a>.</p></li>
<li><p>I don't have plans to ever put weinre up at
<a href="http://npmjs.org/">npmjs.org</a>,
since I have no idea how this would work, in practice.
I don't want to <strong>own</strong> the weinre package up there; ideally
it would be <strong>owned</strong> by Apache. Can't see how that would
work either. If I ever do put a weinre package up at
npm, I think it would be a placeholder with instructions on
how to get it from Apache. Or maybe there's a way we can
auto-link the npmjs.org package to Apache mirrors somehow?</p></li>
<li><p>I need to look at whether I should continue to ship all my package's
pre-reqs (and their pre-reqs, recursively) or not. And relatedly,
whether I should actually make fixed version pre-reqs of all the
recursively pre-req'd modules. An issue came up today that
I "refreshed" the node_modules, and got an unexpected update
to the <code>formidable</code> package (a pre-req of <code>connect</code> which is a pre-req
of <code>express</code> which is a pre-req of <code>weinre</code>). That's not great.
And could be fixed by having explicit versioned dependencies of
all the recursively pre-req'd modules in <code>weinre</code> itself.
Sigh. What do other people do here?</p></li>
<li><p>Prolly need to improve the user docs. The remainder of this blog
post is the shipped <code>README.md</code> file. So, to read the docs,
start the server, read the docs the server embeds, online. Sorry.</p></li>
</ul>
<h2>running</h2>
<p>For more information about running weinre, you can start the server
and browse the documentation online.</p>
<p>Start the server with the following command</p>
<pre><code>node weinre
</code></pre>
<p>This will start the server, and display a message with the URL to the
server. Browse to that URL in your web browser, and then click on
'documentation' link, which will display weinre's online documentation.
From there click on the 'Running' page to get more information about
running weinre.</p>
<hr>
<p>Update: <a href="https://twitter.com/#!/patrickkettner">Patrick Kettner</a> posted a link to this blog post on Hacker News, so you can discuss there, since I'm one of those "disable comments on blog" people. <a href="http://news.ycombinator.com/item?id=3604291">http://news.ycombinator.com/item?id=3604291</a>.Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-28973385591582407742012-01-19T13:58:00.000-05:002012-01-19T13:58:03.504-05:00AMD - the good parts<!--
wr --exec "markdown AMD-the-good-parts.md > AMD-the-good-parts.html" *.md
-->
<p>Based my blog post yesterday
"<a href="http://pmuellr.blogspot.com/2012/01/more-on-why-amd-is-not-answer.html">More on why AMD is Not the Answer</a>",
you might guess I'm an AMD-hater. I strive to
<a href="http://jsconf.eu/2011/an_end_to_negativity.html">not be negative</a>,
but sometimes I fail. Mea culpa.</p>
<p>I <em>do</em> have a number of issues with AMD. But I think it's also fair to
point out "the good parts". </p>
<p><strong>my background</strong></p>
<p>I've only really played with a few AMD
libraries/runtimes. I've been more fiddling around with ways of
taking node/CommonJS-style modules, and making them
consumable by AMD runtimes. The AMD runtimes I've played with
the most are
<a href="https://github.com/jrburke/almond">almond</a>
and (my own)
<a href="https://github.com/pmuellr/modjewel">modjewel</a>.
My focus has been on just the raw <code>require()</code> and <code>define()</code>
functions.</p>
<p>But I follow a lot of the AMD chatter around the web,
and have done at least reading and perusing through other
AMD implementations. To find implementations of AMD, look at
<a href="http://www.commonjs.org/impl/">this list</a>
and search for "async". The ones I hear the most about are
<a href="http://requirejs.org/">require.js</a>,
<a href="http://dojotoolkit.org/features/">Dojo</a>,
<a href="https://github.com/pinf/loader-js#readme">PINF</a>,
and
<a href="https://github.com/cujojs/curl#readme">curl</a>.</p>
<p>And now, "the good parts".</p>
<h2>the code</h2>
<p>Browsing the source repositories and web sites,
you'll note that folks have spent a
lot of time on the libraries - the code itself, documentation,
test suites, support, etc. Great jobs, all. I know a lot
of these libraries have been battle-tested by folks in
positions to do pretty good battle-testing with them.</p>
<h2>the evangelism</h2>
<p>Special mention here to
<a href="http://tagneto.blogspot.com/">James Burke</a>
(not
<a href="http://en.wikipedia.org/wiki/James_Burke_(gangster)">this one</a>),
who has done a spectacular job leading and evangelizing
the AMD cause, both within the "JavaScript Module"
community, and outside. In particular, he's been
trying to get AMD support into popular libraries like
<a href="http://bugs.jquery.com/ticket/7102">jQuery</a>,
<a href="https://github.com/documentcloud/backbone/issues/search?q=AMD">backbone</a>,
<a href="https://github.com/documentcloud/underscore/issues/search?q=AMD">underscore</a> and
<a href="https://github.com/joyent/node/issues/search?q=AMD">node.js</a>.
Sometimes with success, sometimes not. James has been
cool and collected when dealing with jerks like me,
pestering him with complaints, questions, requests and bug reports.
Some open source communities can be cold and bitter
places, but the AMD community is not one of those, and I'm sure
that's in part due to James.</p>
<p>Thanks for all your hard work James!</p>
<h2>the async support</h2>
<p>AMD is an acronym for
<a href="https://github.com/amdjs/amdjs-api/wiki/AMD">Asynchronous Module Definition</a>.
That first word, <em>Asynchronous</em>, is key.
For the most part, if you're using JavaScript, you're living in a world of
single-threaded processing, where long-running functions are implemented as
asynchronous calls that take
callback functions as arguments. It makes sense, at least in some cases,
to extend this to the function of loading JavaScript code.</p>
<p>If you need to load JavaScript in an async fashion, AMD is designed for you,
and uses existing async function patterns that
JavaScript programmers should already be familiar with.</p>
<h2>support for CommonJS</h2>
<p>This is a touchy subject, because when you ask 10 people what "CommonJS" is,
you'll get 15 different answers. For the purposes here, and really whenever
I use the phrase "CommonJS", I'm technically referring to
<a href="http://www.commonjs.org/specs/">one of the Modules/1.x specifications</a>,
and more specifically, the general definitions of the <code>require()</code> function
and the <code>exports</code> and <code>module</code> variables used in the body of a module.
These general definitions are also applicable, with some differences, in
<a href="http://nodejs.org/docs/latest/api/modules.html">node.js</a>.</p>
<p>When I use the phrase "CommonJS", I am <strong>NEVER</strong> referring to
the other specifications in the CommonJS specs stable (linked to above).</p>
<p>AMD supports this general definition of "CommonJS", in my book.
Super double plus one good!</p>
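<p>Concretely, that support shows up as the "simplified CommonJS wrapping" style
of <code>define()</code>, where the module body reads like a CommonJS module:</p>
<pre><code>define(function(require, exports, module) {
  var other = require("./other")

  exports.greet = function() {
    return "hello from " + module.id
  }
})
</code></pre>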
<p>Aside: I'm looking for another 'name' to call my simplistic notion
of CommonJS. So far, things like <em>simple <code>require()</code>y support</em>,
seem to capture my intent. But you know how
<a href="http://callback.github.com/callback-weinre/">bad I am with naming</a>.</p>
<h2>the design for quick edit-debug cycles</h2>
<p>One of the goals of AMD is to facilitate a quick
edit-debug cycle while you are working on your JavaScript. To
do this, AMD specifies that you wrap the body of your module in
a specific type of function wrapper when you author it.
Because of the way it's designed, these JavaScript files can
be loaded directly with a <code><script src=></code> element in your HTML
file, or via <code><script src=></code> elements injected into your live web page
dynamically, by a library. This means that as soon as you save the JavaScript you're
editing in your text editor/IDE, you can refresh the page in
your browser without an intervening "build step" or special
server processing - the same files you are editing are loaded
directly in the browser.</p>
<h2>the resources</h2>
<p>Beyond just loading JavaScript code, AMD supports the notion
of loading other <em>things</em>. The <em>things</em> you can load
are determined by what
<a href="https://github.com/amdjs/amdjs-api/wiki/Loader-Plugins">Loader Plugins</a>
you have available. Examples include:</p>
<ul>
<li><a href="https://github.com/jrburke/requirejs/blob/master/text.js">the text of a file/resource</a></li>
<li><a href="https://github.com/jrburke/requirejs/blob/master/i18n.js">i18n bundles</a></li>
<li><a href="https://github.com/ZeeAgency/requirejs-tpl">templates</a></li>
<li>etc</li>
</ul>
<p>Coming from the Java world, this is familiar territory; see:
<a href="http://docs.oracle.com/javase/6/docs/api/java/lang/Class.html#getResourceAsStream(java.lang.String)">Class.getResource*() methods</a>.
For the most part, I consider this to be amongst the core bits of
functionality that a programming library should provide. It's a common
pattern to ship data with your code, and you need some way to access it.
And, of course, this resource loading is available in an async manner
with AMD.</p>
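<p>For example, with the text plugin available, loading a template resource looks
something like this (the path is made up):</p>
<pre><code>define(["text!./templates/calendar.html"], function(calendarHtml) {
  // calendarHtml is the contents of the file, as a string
  document.getElementById("calendar").innerHTML = calendarHtml
})
</code></pre>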
<p>Aside: I'm not sure what the analog of this is in the node.js world.
You can certainly do it by hand, making use of the
<a href="http://nodejs.org/docs/latest/api/globals.html#__dirname"><tt>__dirname</tt></a>
global, and then performing i/o yourself. I'm just not sure if
someone has wrapped this up nicely in a library yet.</p>
<h2>the build</h2>
<p>At the end of the day, hopefully, your web application is going to be available
to some audience, who will be using it for realz. It's unlikely that you will want to
continue to load your JavaScript code (and resources) as individual pieces as you do
in development; instead, you'll want to concatenate and minimize your wads of goo
to keep load time to a minimum for your customers.</p>
<p>And it seems like most AMD implementations provide a mechanism to do this.</p>Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-647804191741788522012-01-18T10:38:00.000-05:002012-01-18T10:40:59.557-05:00More on why AMD is Not the Answer<!--
wr --exec "markdown More-on-why-AMD-is-Not-the-Answer.md > More-on-why-AMD-is-Not-the-Answer.html" *.md
-->
<p>Follow-on to Tom Dale's blog post "<a href="http://tomdale.net/2012/01/amd-is-not-the-answer/">AMD is Not the Answer</a>" and James Burke's response "<a href="http://tagneto.blogspot.com/2012/01/reply-to-tom-on-amd.html">Reply to Tom on AMD</a>". I pretty much concur with everything Tom said. Here are some points brought up by James:</p>
<h2>need an alternative</h2>
<p>James says:</p>
<blockquote><i>
<p>what is missing is a solid alternative to AMD. In particular, "use build tools to wrap up CommonJS modules" is not a generic solution. It does not allow for generic dynamic loading of resources, particularly from CDNs. Dynamic loading is needed for big sites that still do builds but want to stage loading of the site as the user uses it.</p>
</i></blockquote>
<p>I don't think using build tools precludes you from building wads of modules that can be loaded dynamically. Generic solutions bother me. See: J2EE.</p>
<h2>ceremony</h2>
<p>James says:</p>
<blockquote><i>
<p>On his concern about too much ceremony -- this is the minimum amount of ceremony:</p>
</i></blockquote>
<pre><code> define(function(require) {
//Module code in here
});
</code></pre>
<p>On the other hand, here's some code pulled from <a href="http://svn.dojotoolkit.org/src/dijit/trunk/Calendar.js"><code>dijit.Calendar.js</code></a>:</p>
<pre><code> define([
"dojo/_base/array", // array.map
"dojo/date",
"dojo/date/locale",
"dojo/_base/declare", // declare
"dojo/dom-attr", // domAttr.get
"dojo/dom-class", // domClass.add domClass.contains domClass.remove domClass.toggle
"dojo/_base/event", // event.stop
"dojo/_base/kernel", // kernel.deprecated
"dojo/keys", // keys
"dojo/_base/lang", // lang.hitch
"dojo/sniff", // has("ie")
"./CalendarLite",
"./_Widget",
"./_CssStateMixin",
"./_TemplatedMixin",
"./form/DropDownButton"
], function(array, date, local, declare, domAttr, domClass, event, kernel, keys, lang, has,
CalendarLite, _Widget, _CssStateMixin, _TemplatedMixin, DropDownButton){
//Module code in here
});
</code></pre>
<p>This isn't just ceremony. It's ugly ceremony of the sort that's difficult to debug when you screw up.</p>
<h2>module consumption</h2>
<p>James says:</p>
<blockquote><i>
<p>the alternative he mentions does not handle the breadth of JS module consumption</p>
</i></blockquote>
<p>The simple <code>require()</code>-y story of <a href="http://npmjs.org">npm</a> modules seems to handle JS module consumption pretty well, for node.js. For the browser, you can use npm modules with <a href="https://github.com/substack/node-browserify">Browserify</a> or other "build tools". Where is the equivalent of npm for AMD? Or maybe npm already contains AMD modules? Or let's say I want to just use a single module out of Dojo in my app. How do I do that?</p>Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-29760124197133459122012-01-12T20:12:00.000-05:002012-01-12T20:12:10.638-05:00making Web SQL easier to use<!--
wr --exec "markdown WebSQLStepper.md > WebSQLStepper.html" *.md
-->
<p>One HTML5-ish area I've not yet played much with, but am starting to now, is Web SQL. Web SQL is the W3 standard for "SQL in the browser", which has been orphaned by the W3C. Here are some warnings from the <a href="http://www.w3.org/TR/webdatabase/">Web SQL Database spec at w3.org</a>:</p>
<ul>
<li><p><em>"Beware. This specification is no longer in active maintenance and the Web Applications Working Group does not intend to maintain it further."</em></p></li>
<li><p><em>"This document was on the W3C Recommendation track but specification work has stopped. The specification reached an impasse: all interested implementors have used the same SQL backend (Sqlite), but we need multiple independent implementations to proceed along a standardisation path."</em></p></li>
</ul>
<p>Despite the fact that the W3C no longer endorses Web SQL and will never 'standardize' it, implementations of Web SQL are shipping in browsers, including <a href="http://mobilehtml5.org/">some mobile platforms</a>. It's not dead yet.</p>
<h2>what's the api like?</h2>
<p>Here's an example right out of the spec, to count the number of rows in a table:</p>
<script src="https://gist.github.com/1603740.js?file=showDocCount.js"></script>
<p>The function <code>showDocCount()</code> calls <code>db.readTransaction()</code>, which takes two anonymous functions as callbacks. One of those callbacks calls <code>t.executeSql()</code>, passing it yet another callback, which finally, mercifully, gets the row count.</p>
<p>It's executing <strong>a single SQL statement</strong>. </p>
<p>But wait, it gets better.</p>
<p>If you need to run multiple SQL statements within a single transaction, you'll have to chain those <code>executeSql()</code> invocations within increasingly nested callbacks.</p>
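<p>A sketch of what that chaining looks like - the table name and the <code>reportError</code>/<code>reportSuccess</code> helpers are hypothetical:</p>
<pre><code> db.transaction(function(tx) {
     tx.executeSql("INSERT INTO docs (title) VALUES (?)", ["one"], function(tx, result) {
         // the second statement is chained off the first one's success callback
         tx.executeSql("INSERT INTO docs (title) VALUES (?)", ["two"], function(tx, result) {
             // ... and one more level of nesting for each additional statement
         }, reportError);
     }, reportError);
 }, reportError, reportSuccess);
</code></pre>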
<p>It's basically a horror show. At least as far as I'm concerned. Nested callbacks are things I hate to have to write, and things I really hate seeing other people have to write. The result is almost always unintelligible.</p>
<h2>can we make this easier to understand?</h2>
<p>For <a href="http://callback.github.com/callback-weinre/">weinre</a>, I implemented a <a href="https://github.com/callback/callback-weinre/blob/master/weinre.web/modules/weinre/target/SqlStepper.coffee">stepper module</a> used by the <a href="https://github.com/callback/callback-weinre/blob/master/weinre.web/modules/weinre/target/WiDatabaseImpl.coffee">inspector's database interface</a>. Not very clear what's going on there though, sorry.</p>
<p>The basic idea is that you provide an array of functions - the steps - that the stepper will arrange to run in order, by managing the callbacks itself. Double bonus good - you get to linearize your database transaction steps <strong>and</strong> you don't have to deal with callbacks!</p>
<p>I found another instance of this kind of processing in Caolan McMahon's <a href="https://github.com/caolan/async">async</a> library - specifically the <a href="https://github.com/caolan/async#waterfall"><code>waterfall()</code></a> function. Not really appropriate for Web SQL, but at least it seems to validate the approach.</p>
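<p>For the curious, here's what <code>waterfall()</code> looks like in use - a trivial sketch, nothing to do with Web SQL:</p>
<pre><code> var async = require("async");

 // each step receives the results of the previous step, plus a callback for the next
 async.waterfall([
     function(callback)         { callback(null, 6 * 7); },
     function(answer, callback) { callback(null, "the answer is " + answer); }
 ], function(err, result) {
     console.log(result);   // "the answer is 42"
 });
</code></pre>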
<p>I decided to write a clean implementation of the stepper, trying to make it easy to use and understand. The result is available as <a href="https://github.com/pmuellr/WebSQLStepper">WebSQLStepper at GitHub</a>. As a use case, I took the <a href="http://documentcloud.github.com/backbone/#examples-todos">Todos sample</a> from <a href="http://documentcloud.github.com/backbone/">Backbone</a>, and modified it to use this library to persist the data to a Web SQL database. The complete sample is <a href="https://github.com/pmuellr/WebSQLStepper/blob/master/samples/backbone-todos/todos-websql.coffee">included in the GitHub repo</a>, and here are the steps used to read all the rows of a database:</p>
<script src="https://gist.github.com/1603847.js?file=some-steps.coffee"></script>
<h2>references</h2>
<ul>
<li><a href="https://github.com/pmuellr/WebSQLStepper">WebSQLStepper at GitHub</a></li>
<li><a href="http://www.w3.org/TR/webdatabase/">Web SQL Database spec at W3</a></li>
<li><a href="http://www.sqlite.org/">SQLite documentation</a> - SQLlite is used in the Web SQL implementation for WebKit-based browsers</li>
</ul>Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-13835198426834292662012-01-12T13:20:00.000-05:002012-01-12T15:27:01.402-05:00wr - an auto-build command for the command-line<!-- wr command invocation
wr --exec "markdown wr.md > wr.html" wr.md
-->
<p>
For much of my programming career, I've been a user of "IDE"s, including <a href="http://www.youtube.com/watch?v=JDnypVoQcPw">BASIC</a>, <a href="http://www.youtube.com/watch?v=0ZQCXXhXq6Q">Turbo Pascal</a>, various Smalltalks, IBM's VisualAge products and Eclipse.
<p>
Lately, I've been spending most of my time working without any IDEs. My tools are text editors, terminal windows, a boat load of unix commands built into my OS, some additional command-line tools from various open source locales, and web browsers. I think of this loosely organized/integrated set of tools as my IDE; they're just not packaged up in a nice single app.
<p>
But there's one thing I dearly love and really depend on in most IDE's that's missing from this command-line-based environment: an "auto-build" capability. Specifically, when I save a file I'm editing, I want a "build" to happen automagically. Without an "auto-build" capability, I'm left in a world where I save a change in my text editor, jump over to the terminal, run <code>make</code>, then jump to my browser to test the build. The jump to the terminal is not desired. Using some existing editor's capability of running shell commands doesn't work either, because I use a wide variety of tools to "edit" things - all the tools would need this capability.
<p>
<em>(Quick aside about "builds". A lot of people see "build" and think "30 minutes of waiting for the compiles to be done". Yes, those builds are long and painful. I'm looking at you WebKit. But in all the IDEs I used to use, builds were <strong>insanely</strong> fast. Sub-second fast. If you're living the "big C++ project" lifestyle, I feel your pain, and "auto-build" is probably not applicable for you.</em>
<p>
<em>So, I'm talking about quick builds here. As an example, once you've done at least one build of <a href="http://callback.github.com/callback-weinre/">weinre</a>, subsequent builds take on the order of 6 seconds or so - I consider that slow, but it's doing a lot of work, and I could probably cut the time in half if I needed to - I just haven't felt the need to.</em>)
<h2>
a command to run something when a file changes</h2>
<p>
A few years ago, I wasn't aware of a tool that could watch for arbitrary file changes and run a command for me, so I wrote one - <a href="https://gist.github.com/240922"><code>run-when-changed</code></a>. This script served me well for a long time, but there were always little thingees I wanted to add to it. And since <code>run-when-changed</code> polled the file system to see when files changed, I was always interested in finding a better story than polling and the inevitable waiting for the polling cycle to hit - even waiting 3 seconds for <code>run-when-changed</code> to realize a file changed was bothersome.
<p>
A port to <a href="http://nodejs.org">node.js</a> seemed like it might be in order, especially since it has a non-polling file watch function <a href="http://nodejs.org/docs/latest/api/fs.html#fs.watch"><code>fs.watch()</code></a>.
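<p>
The core of the idea is tiny; here's a minimal sketch - not <code>wr</code>'s actual code - of watching one file and re-running a command whenever it changes:
<pre><code> var fs   = require("fs");
 var exec = require("child_process").exec;

 // watch wr.md; on any change, regenerate wr.html
 fs.watch("wr.md", function(event, filename) {
     exec("markdown wr.md > wr.html", function(err, stdout, stderr) {
         console.log(err ? "command failed" : "command succeeded");
     });
 });
</code></pre>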
<p>
Earlier this week, I spent a few hours and now I have a replacement for <code>run-when-changed</code>, called <code>wr</code>, <a href="https://github.com/pmuellr/wr">available at GitHub</a>, written using node.js. The name <code>wr</code> comes from "<strong>w</strong>atch and <strong>r</strong>un".
<p>
Some features:
<ul>
<li><p>
installable via <a href="http://search.npmjs.org/#/wr">npm</a>
</li>
<li><p>
colorized stdout and stderr
</li>
<li><p>
reversed video status messages
</li>
<li><p>
when run with no arguments, will look for a <code>.wr</code> file in the current directory to use as arguments
</li>
</ul>
<p>
Here's what it looks like, using it with one of my current projects:
<p>
<a href="http://4.bp.blogspot.com/-5M0PsRQ0gC4/Tw5mzNVj39I/AAAAAAAAArA/XF0Pta31sPU/s1600/wr-screen-shot.jpg">
<img src="http://1.bp.blogspot.com/-k7bmbZoGF2o/Tw8OhYH--EI/AAAAAAAAAsI/4X5KAJRK0uk/s1600/wr-screen-shot-3.jpg" border=0>
</a>
<p>
<em>Click the image for a slightly larger version.</em>
<p>
Legend:
<ul>
<li>the reverse video blue lines are status messages</li>
<li>the reverse video green lines are command success messages</li>
<li>the reverse video red lines are command failure messages</li>
<li>the blue text is stdout</li>
<li>the red text is stderr</li>
</ul>
<p>
As another example, I'm generating this blog post by running the following command:
<pre><code> wr --exec "markdown wr.md > wr.html" wr.md
</code></pre>
<p>
Breaking this command down, we have:
<ul>
<li><code>wr</code> - the command</li>
<li><code>--exec</code> - a command-line option (see below in <strong>gotchas</strong>)</li>
<li><code>"markdown wr.md > wr.html"</code> - the command to run</li>
<li><code>wr.md</code> - the file to watch for changes.</li>
</ul>
<p>
The arguments to <code>wr</code> are options, the command to run, and the names of files or directories to watch. When the <code>wr</code> command is run, it waits for one of the specified files to change, runs the specified command, reports the results of running the command, and then starts waiting for a file to change again. Exit the program by pressing <code>^C</code>.
<p>
The ability to store a <code>.wr</code> file in a project directory makes life even easier. I used to create a target in my <code>make</code> files to invoke <code>run-which-changed</code> ; now I just put the wr invocation for a project in the <code>.wr</code> file, and run <code>wr</code> with no arguments.
<p>
My work-flow for using <code>wr</code> goes something like this:
<ul>
<li>open a terminal window</li>
<li>cd into my project's home directory</li>
<li>launch an editor, returning control to the command-line</li>
<li>run <code>wr</code></li>
<li>move the terminal window where I will always be able to see the last few lines of the window, to see the live status</li>
<li>switch between text editor and browser, for my edit-save-reload-test cycle, all day long, keeping an eye out for red stuff in the <code>wr</code> output</li>
<li>when the day is done, <code>^C</code> out of <code>wr</code>, and then commit work to my SCM</li>
</ul>
<p>
This can all be yours, with a simple command-line incantation:
<pre><code> sudo npm -g install wr
</code></pre>
<h2>
current gotchas</h2>
<p>
<code>wr</code> has two ways of invoking the command to run, determined by the command-line option <code>--exec</code>; see the <a href="https://github.com/pmuellr/wr">README</a>, in the <strong>ENVIRONMENT</strong> section, for more information on the differences.
<p>
The big gotcha though is the non-polling file watching. This is an awesome feature, as it means as soon as you save a file, <code>wr</code> will wake up and run a command. The problem, on the Mac anyway, is that there is a relatively small limit on the number of files that can be watched this way. Like, 200 or so. If you go over the limit, you'll see some error messages from <code>wr</code>, and you'll have to resort to using the <code>--poll</code> option, which uses a polling file watcher.Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.comtag:blogger.com,1999:blog-22367266.post-5354501725614419532011-12-16T12:00:00.003-05:002011-12-16T12:07:59.870-05:00accessible backed up git repos for transient work-related projects<p><b>preface:</b> <i>this is basically a "note to self", but thought others might find it useful</i>
<p>I tend to create a lot of "projects" at work. A "project" is a directory in my "<tt>~/Projects</tt>" directory. Who knows what might be in it; probably code, or maybe just documentation. Many times, it's a "project" that I think may be useful to other people, and I may eventually "publish" to either a public SCM repo farm like GitHub or an internal IBM SCM repo farm. I currently have 317 projects in my "<tt>~/Projects</tt>" directory.
<p>Before "publishing" it though, I'll usually want to crank on it for a bit, see if I can get the crap to work, and figure out if it's actually useful. So there's a window where my project isn't being "backed up" or available to anyone else, because it's sitting on my laptop's spinning disk of rust. That's not great.
<p>In the past, I've used Subversion and/or Mercurial to help manage this "back up" process for personal, non-work-related projects. What I'd do is set up a repo on my shared server, and then just use it like any other remote SCM.
<p>Problems:
<ul>
<li><p>What about work-related stuff? I don't want to publish something that's work-related on a non-IBM site.
<li><p>I made it sound like "<i>set up a repo on my shared server</i>" was easy. It isn't. Depending on the capabilities of your server, it might be impossible.
<li><p>Yeah, we do have various internal-only SCM repo farms in IBM, but those still require some "setup" and "maintenance", just a different flavor than shared hosting, and even that can be too much sometimes. Especially for a project which may only be useful for a couple of days.
</ul>
<p>Bummer.
<p><b style="font-size:120%">a new hope</b>
<p>What I'm doing now is using Git, and storing my Git repos on IBM's internal, backed up, rsync-/smb-/http-accessible cloud file store. Even if you don't have rsync or smb access, there are likely ways you can do something similar, with different tools, on your own server.
<p>Here's how it works:
<ul>
<li><p>mount the file store on my local machine, create a directory for the new repo in my "<tt>git-repos</tt>" directory, which is in a part of my cloud file store that is publicly accessible via http, then unmount the file store.
<li><p>initialize the project on my local machine with git (eg, "<tt>git init</tt>")
<li><p>write a "<tt>backup</tt>" script, which looks like this:
<pre>
#!/bin/sh
#--------------------------
# make the git repo accessible via:
# git clone http://{host-name}/~pmuellr/git-repos/{project-name}/.git
#--------------------------
cd `dirname $0`
git update-server-info
rsync -av . {host-name}:{path-to}/git-repos/{project-name}
</pre>
<li><p>Instead of using a "<tt>git add/commit/push</tt>" workflow, I use a "<tt>git add/commit; ./backup</tt>" workflow.
<li><p>As the comment in the script notes, the git repo is accessible via http, to anyone who has access to that URL. For the general case for me, that's anyone in IBM.
<li><p>The "<tt>backup</tt>" script, as written, also backs up my working directory. Which is a win as far as I'm concerned. Now I can also point folks to web-accessible/-renderable resources in my working directory.
</ul>
<p>I've found this to be a very useful, very lightweight process to keep my immature projects backed up. I don't doubt there are ways to do similar things with SVN or Mercurial or whatever as well, I just never tried.
<p>I can, but haven't yet, set up a similar story for personal projects, using my shared server, but that should be pretty easy; it supports rsync, but not smb. Instead of "mount and create project directory", I can "ssh and create project directory". Even that's probably not needed; rsync can likely create that project directory for me, but I'm an rsync n00b and I kind of like having to make that step explicit.Patrick Muellerhttp://www.blogger.com/profile/04900886008475308281noreply@blogger.com