30 Jan 2012

Thoughts on Socket.IO

I have been hacking away with Socket.IO for the past few months. My initial goal was to explore how Node.js/Socket.IO scales when you need to support a largish audience, compared to my previous solutions involving Erlang/OTP and a handmade COMET library.

I built these COMET-based streaming solutions by hand a few years ago. Back then, there was no HTML5 or websockets, and most off-the-shelf COMET “solutions” were bloated Java-based affairs. We had to roll our own client-side COMET library, because there were no standalone or decoupled open source alternatives. And boy, was it hard, especially when it came to supporting IE (as always). I mention some of the elaborate client-side techniques involved in these slides. The guys at LearnBoost have not only come up with a completely transparent solution that degrades gracefully all the way down to IE 5.5, but have also decoupled the client and server side components. There are third-party server-side bindings for plenty of other languages too. Having said that, the Erlang implementation (and quite a few other languages as well) seems to support only version 0.6.x of Socket.IO. It looks like it will remain that way for a while, as Socket.IO v0.8 is almost a complete rewrite and breaks backward compatibility.

I wrote a simple application using Node.js and Socket.IO that covered most of the basic “features” you would expect in a typical chat/pubsub system. This involves stuff like detecting online/offline status of users, sending messages to a single user, broadcasting messages to everyone in a group/room, handling users who have multiple tabs open, and so on. The code is up on GitHub.
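To give a flavour of how little code that takes, here is a bare-bones sketch (not the actual code from the repo, and the identify event is made up for the sketch) of the trickiest of those features: tracking online/offline status when a user may have several tabs, and therefore several sockets, open at once. It assumes sio is the server returned by require('socket.io').listen(...):

var online = {}; // username -> number of open sockets

sio.sockets.on('connection', function (socket) {
  socket.on('identify', function (username) {
    socket.set('username', username, function () {
      online[username] = (online[username] || 0) + 1;
      if (online[username] === 1) {
        // first socket for this user: they just came online
        socket.broadcast.emit('user.online', { username: username });
      }
    });
  });

  socket.on('disconnect', function () {
    socket.get('username', function (err, username) {
      if (!username || --online[username] > 0) return;
      // last tab closed: the user is truly offline
      delete online[username];
      socket.broadcast.emit('user.offline', { username: username });
    });
  });
});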

If you’re not already familiar with Socket.IO, I urge you to check it out, because I couldn’t believe that I ended up doing all of that in just about 100 lines of code – client and server-side combined! I have always loved JavaScript, but the absolute joy of sharing the same paradigm and code conventions across both the client and the server is just amazing.

Socket.IO provides a lot of what you need out of the box. Consider, for example, sending a message to a group of people. Socket.IO lets a user’s socket join a channel/room, and once it has, you can send messages to that specific channel only:

// make the user's socket join a room called "foo"
socket.join('foo');

// send a message only to "foo"
sio.sockets.in('foo').emit('new_message', { message: 'hello', from: 'bar' });

Want to send a message to everyone else, except the current user (say, when “Jack” comes online)?

socket.broadcast.emit('user.online', {'username': 'jack'});

Doing this by hand would involve writing boilerplate code to store the user sockets in a map, iterate over it, and so on, all of which Socket.IO handles for you.
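For contrast, here is roughly what that hand-rolled bookkeeping looks like, as a bare-bones sketch – and this is before you handle disconnects, sockets in multiple rooms, and so on:

var rooms = {}; // room name -> array of sockets

function join(room, socket) {
  (rooms[room] || (rooms[room] = [])).push(socket);
}

function emitToRoom(room, event, data) {
  (rooms[room] || []).forEach(function (socket) {
    socket.emit(event, data);
  });
}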

What about scaling?

Of course, we all need planet scale, don’t we? The single-threaded nature of Node.js does limit the default Socket.IO set-up: in my crude benchmarks, it topped out at a few thousand moderately active connections. If you are interested in actual benchmark numbers, Drew Harry has an excellent write-up on that.

To get around the limitation imposed by a single core, you can load-balance multiple Node processes with node-http-proxy or HAProxy. Unfortunately, we can’t use Nginx for load balancing, as the current stable version does not support reverse-proxying the HTTP/1.1 Upgrade handshake that WebSockets require.

Alternatively, you can use Redis pubsub to scale past a single process. But even that has limitations due to the currently chatty nature of Socket.IO. I’m sure this is something the LearnBoost guys are already looking to address.
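For reference, here is a sketch of the RedisStore wiring as it is documented for the 0.8.x era; treat the exact require paths and option names as assumptions that may shift between versions. Every Node process is configured the same way, and Redis pubsub keeps them in sync behind the scenes:

var sio = require('socket.io')
  , redis = require('redis') // npm install redis
  , RedisStore = require('socket.io/lib/stores/redis')
  , io = sio.listen(8080);

io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));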

Another idea, which I have not explored fully, is to use Node’s cluster module and its message-passing API to synchronize multiple Socket.IO processes.
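Here is a rough sketch of that idea, assuming a recent Node build where the cluster module exposes the child-process message-passing API: the master relays messages between workers, and each worker would re-emit them to its locally connected sockets:

var cluster = require('cluster');

if (cluster.isMaster) {
  var workers = [];
  for (var i = 0; i < 4; i++) workers.push(cluster.fork());

  workers.forEach(function (worker) {
    worker.on('message', function (msg) {
      // fan a message from one worker out to all of its siblings
      workers.forEach(function (other) {
        if (other !== worker) other.send(msg);
      });
    });
  });
} else {
  // inside a worker: run the Socket.IO server as usual, and re-emit
  // anything relayed from sibling processes to our local sockets, e.g.
  // io.sockets.in(msg.room).emit(msg.event, msg.data);
  process.on('message', function (msg) {
    console.log('worker ' + process.pid + ' received', msg);
  });
}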

It should be noted here that 3-4k concurrent users is still more than sufficient for a lot of web applications, and when you are looking to support hundreds of thousands of users, you are in a different problem space altogether. The right approach also depends on the specific needs of your application. Socket.IO with Node.js offers a ridiculously simple way to get off the blocks when building web applications with soft real-time elements, but I wish it were as simple to scale across multiple cores or machines. In our Erlang/OTP solution, for example, we could scale across multiple machines because Erlang handled that layer of abstraction for us very well. So, going forward, I really wish Node would offer robust process synchronization and message passing. Perhaps the current work on isolates will allow Node applications to scale past a single core, and maybe even a single machine.

If you have something interesting for me to work on in Node, shoot me an email.

17 Jan 2012

First element of an object in JavaScript

This one goes into the bag of clever hacks which you should use sparingly, if at all. Given an object like this:

var obj = { a: "foo", b: "bar", c: "baz" };

Here’s how we get the “first” property of the object/dict:

for (var first in obj) break;
console.log("First property is: "+first);

We’re assuming here that the enumeration order matches the insertion order. The ECMAScript standard itself does not specify any enumeration order, but most browsers I have tested this in default to insertion order. However, there is a catch: as explained in this discussion, if any of the property names can be parsed as an integer (e.g. 123), then V8 (Chrome’s JS engine) makes no guarantee about the ordering of the keys.

There is a lot of debate in that thread on the merits and demerits of preserving insertion order. The V8 guys are pushing back on the basis of performance implications; Firefox 7.0 on my Mac, though, always preserves insertion order.

If you want to try it out in your browser, the following snippet

var a = {"foo":"bar", "3": "3", "2":"2", "1":"1"};
for(var i in a) { console.log(i) };

should print the following if your browser preserves insertion order (Chrome, in line with the V8 behaviour above, prints 1, 2, 3, foo instead):

foo
3
2
1

14 Jan 2012

Git bisect

You are part of a large team, and a bug has secretly crept into your codebase. You are on a mission to find out exactly which commit introduced it. I found myself on such a mission recently, and I thought I would write a quick post on how I used the awesome bisect command to identify the faulty commit quickly.

Let’s assume that the latest git commit id on your master branch is abcd. Let’s say that you do know that this bug did not exist in the previous release, whose last git commit id is wxyz. To identify the git commit which introduced this bug, and which lies between wxyz and abcd, do this:

$ git bisect start
$ git bisect good wxyz
$ git bisect bad abcd

You are essentially telling Git to bisect the commits between wxyz and abcd. Git will now check out a commit roughly in the middle of the two (a temporary, detached state rather than a real branch):

Bisecting: 34 revisions left to test after this
[25d71758dd0e131e9409f3896416eabc81d69ec8] Search field fixes

Verify whether the bug exists at this commit. If it still does, type:

$ git bisect bad

Once again, Git will halve the number of commits left to test by pointing you to the commit in the middle of the previous bad commit and the good one. Continue this process, telling Git at each stage whether the commit is “good” or “bad”, and Git will eventually identify the first bad commit:

5bc592ab9065b12bf1bc516ab9e0fe461699971b is the first bad commit
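As an aside, if the bug can be detected by a script – one that exits with 0 for a good commit and non-zero for a bad one – Git can drive the whole loop for you (the script name here is just a placeholder):

$ git bisect run ./test-for-bug.sh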

Once you are done figuring out what changes caused the bug, you can return to your original working state with:

$ git bisect reset

4 Jan 2012

Where I use HTML5 the most – admin interfaces

Both HTML5 and CSS3 have constantly evolving specs. With browser vendors implementing new features behind vendor prefixes long before they are accepted as part of the standard, we’re faced with a dilemma: we have all these cool toys to play with, but we can’t really put them to use in production right away, because it’s going to take time for the rest of the world to upgrade their browsers.

What I have been doing lately is using admin interfaces as a testing ground for learning about these new features. So long as you get your clients to use the latest build of Firefox or Chrome, you are good to go. With Chrome, that’s pretty easy thanks to its automatic rolling updates, and Firefox will soon be doing the same. Of course, this is not going to work with every client, but it’s not that hard to get them onto a better browser by simply explaining how it saves them development cost. And that’s true.

The other day, I had to alternate the row colors of a table: grey, white, grey, white, and so on. With an older browser, you have to modify your code to give alternate rows class="odd" and then give that class a different color in the CSS file. With CSS3, you can just do:

tr:nth-child(odd) { 
    background-color:#eee; 
}

Want to highlight only the first child of a parent?

#parent-id > :first-child {
    background: #eee;
}

Quick form validation on an internal admin tool? HTML5 form validators FTW.

<input type="text" pattern="[-0-9]+" required />
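And if you want visual feedback without writing a single line of JavaScript, CSS3’s :invalid pseudo-class pairs nicely with those validators:

input:invalid {
    border: 1px solid #c00;
}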

Those are admittedly trivial examples, but these little shortcuts quickly add up and help you get stuff done faster. As an added bonus, you also keep your presentation and your code separated. The next time you look at a CSS3 or HTML5 feature and think you can only use it in 2015, think again!

26 Dec 2011

ECMAScript harmony features in Chrome Canary

The latest Canary build of Google Chrome lets you unlock some new features from ECMAScript Harmony. They are disabled by default; to enable them, type about:flags into your address bar and turn on “Enable Experimental JavaScript”, found right at the bottom of the page.

Now, you have access to proxies, weak maps, maps, and sets!

Proxies

I am very excited about the Proxy API – it lets you create objects whose properties are computed dynamically at run-time, and hook into other objects for auditing and logging purposes. A simple example:

var proxyObj = Proxy.create({
  get: function(obj, propertyName) {
    return 'Hello, '+ propertyName;
  }
});
 
console.log(proxyObj.John); // "Hello, John"

You can do all kinds of meta-programming stuff using proxies (that’s probably another blog post).
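To make the auditing/logging use case concrete, here is a small sketch using the same experimental API (loggedObject is just a name I made up): it wraps a target object and logs every property read and write while forwarding them to the target:

function loggedObject(target) {
  return Proxy.create({
    get: function (receiver, name) {
      console.log('get: ' + name);
      return target[name];
    },
    set: function (receiver, name, value) {
      console.log('set: ' + name + ' = ' + value);
      target[name] = value;
      return true;
    }
  });
}

var audited = loggedObject({ count: 0 });
audited.count = 1;          // logs "set: count = 1"
console.log(audited.count); // logs "get: count", then prints 1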

Maps and Weak Maps

Maps and WeakMaps let you create dictionaries whose keys are objects.

var m = new Map();
var obj = {foo: "bar"};
m.set(obj, "baz");
m.get(obj);   // "baz"

The difference between a WeakMap and a Map is that a WeakMap holds its keys weakly – an entry can be garbage collected once its key object becomes unreachable – and it is therefore not enumerable. A Map is meant to be enumerable, but that relies on the iterator proposal, which is yet to be implemented in V8. So for all practical purposes, Map and WeakMap behave much the same for now.
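That non-enumerability is a feature rather than a limitation in some cases. A quick sketch: using a WeakMap to attach private data to an object without stopping the object from being garbage collected:

var privates = new WeakMap();
var user = { name: 'jack' };

privates.set(user, { password: 'secret' });
privates.get(user); // { password: 'secret' }

// once `user` becomes unreachable, its entry is free to be
// garbage collected along with it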

Set

Again, you can add, delete, and check for the existence of a value in a Set, but you cannot yet iterate through it.

var s = new Set();
s.add(2);
s.has(2);  // true
s.has(10); // false

Finally, there is a small but significant change in the way typeof operates on null. In both ES3 and ES5:

typeof null === 'object';  // true

But with the experimental Harmony semantics enabled:

typeof null === 'null';  // true

This page from the ECMAScript Harmony wiki talks more about this proposed semantic change, and how to write code that works under both the old and the new semantics.

The immediate value of any of these features to browser-side JavaScript development is probably nil, but I’m nevertheless excited to see ES5 and ES6 features rolling out in V8, so we can start using them soon, at least in Node.js.

You can follow me on Twitter right here.