27 Feb 2012

Optimizing for productivity

I came across this interesting anecdote about a waiter in a Swedish restaurant using his computer screen like a static whiteboard. Forgetting for a moment what his real intentions might have been, let’s take his word for it: by requiring too many clicks, the computer system simply wasn’t optimized for his productivity.

This post reminded me of a client for whom we were designing an interface to record eye readings. The client, an ophthalmologist himself, was extremely vocal about how the entire user experience should work. He wanted an interface where you enter eye readings by clicking a series of buttons carrying various prescription numbers. He showed it to a few people, and they felt it was fairly easy to use – everything was obvious and lucid. I had reservations about taking a mouse-driven approach, and just as I had suspected, it ended up slowing down the people who used the interface over and over again. What seemed intuitive, with zero learning curve, eventually turned out to be pretty slow and cumbersome for regular, repeated use.

When designing user interactions, one should balance the long-term productivity goals of an active user against the apparent immediate ease of use of the system for new users. Kevin Fox recently wrote about how Google seems to be “simplifying the UX for current users at the expense of the new user learning curve”. I’m sure Google had its reasons, but nevertheless, it’s not trivial to optimize a user experience for both new and power users.

On the other hand, there are also lots of applications that treat all their users equally. In reality, user behavior and engagement change over time, and so do users’ needs. Yet run-of-the-mill analytics software offers only a broad picture of user engagement. This is where cohort analysis becomes useful.

A cohort analysis is a tool that helps measure user engagement over time. It helps UX designers know whether user engagement is actually getting better over time or is only appearing to improve because of growth. – Cohort analysis – measuring engagement over time.
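At its core, the idea boils down to grouping users by when they signed up and measuring engagement per group, rather than in aggregate. Here is a minimal sketch of that computation; the user records and field names are invented for illustration:

```javascript
// Hypothetical user records: signup month, plus whether the user was
// active in the most recent month. The data and field names are made up.
var users = [
  { signedUp: '2012-01', activeLastMonth: true  },
  { signedUp: '2012-01', activeLastMonth: false },
  { signedUp: '2012-02', activeLastMonth: true  },
  { signedUp: '2012-02', activeLastMonth: true  }
];

// Group users into cohorts by signup month and compute the fraction of
// each cohort that is still active.
function cohortRetention(users) {
  var cohorts = {};
  users.forEach(function (u) {
    var c = cohorts[u.signedUp] || (cohorts[u.signedUp] = { total: 0, active: 0 });
    c.total += 1;
    if (u.activeLastMonth) c.active += 1;
  });
  var retention = {};
  Object.keys(cohorts).forEach(function (month) {
    retention[month] = cohorts[month].active / cohorts[month].total;
  });
  return retention;
}

cohortRetention(users);  // { '2012-01': 0.5, '2012-02': 1 }
```

Aggregate numbers would report 75% engagement here; the cohort view shows that January’s users are churning while February’s are not – exactly the distinction the quote above is getting at.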

With the help of cohort analysis, one could evolve the user experience to make it more productive for power users while, at the same time, keeping it easy enough for new users to get going with the system. We already use graceful degradation as a strategy for enhancing the user experience in modern browsers while still not completely dropping support for people on older browsers. I see optimizing for productivity the same way – user interactions should offer alternative hooks for power users to exploit without making the external interface complex. A good example is Spotlight on OS X: it stays out of the way, but it’s still just a keyboard shortcut away. A well-designed, modern command line interface can really complement a graphical user interface.
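As a rough sketch of what such a hook might look like (the key names and actions here are invented), an interface can keep a simple shortcut registry that stays invisible until a power user reaches for it:

```javascript
// A minimal shortcut registry: the visible UI never changes, but power
// users can trigger the same actions from the keyboard.
function createShortcuts() {
  var bindings = {};
  return {
    register: function (key, action) { bindings[key] = action; },
    // In a browser you would call trigger() from a keydown handler, e.g.
    // document.addEventListener('keydown', function (e) { ... });
    trigger: function (key) {
      if (bindings[key]) { bindings[key](); return true; }
      return false;  // unknown key: let the event fall through untouched
    }
  };
}

var shortcuts = createShortcuts();
var searchOpened = false;
// A Spotlight-style binding: hidden from the interface, one keystroke away.
shortcuts.register('ctrl+space', function () { searchOpened = true; });
shortcuts.trigger('ctrl+space');  // searchOpened is now true
```

New users never see any of this; power users discover it once and keep the speed forever.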

Finally, while designing interfaces, one should make decisions based on facts and data rather than gut feeling. I will end this post with another anecdote. We’re currently trying to convince a client to get rid of the confirm email address field on their signup page. Besides making the user fill in an additional field, the current form also prevents the user from copy-pasting the email address from the previous field. When we asked the client why, they replied, “We don’t want our users to accidentally end up typing a wrong email address”.

This is a classic case of trusting the gut blindly, and it’s clearly not the best way to build a user interface. They are pissing off a lot of users, all the while thinking that they are actually helping them.
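For what it’s worth, the goal the client is after – catching obviously wrong addresses – can be served by a single field with a light sanity check, no confirm field and no paste-blocking. A sketch (the regex is deliberately loose, an assumption for illustration, not a full RFC-grade validator):

```javascript
// One email field plus a light sanity check, instead of a confirm field.
// The pattern is intentionally loose: it catches obvious slips (missing @,
// missing domain) without trying to enforce the full email grammar.
function looksLikeEmail(address) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(address);
}

looksLikeEmail('user@example.com');  // true
looksLikeEmail('user@example');      // false -- likely a typo, worth flagging
```

Showing a gentle inline warning when this check fails helps the careless user without punishing everyone else.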

You can follow me on Twitter right here.


23 Feb 2012

My JavaScript reading list

Of late, quite a few people have been asking me for good resources to learn JavaScript in a structured way. Over the past few years, I have amassed quite a few good links, so I decided to write a quick post highlighting some of them. I can just point people to this post in the future!

The Good Parts

Of course, the first book that people usually recommend is Douglas Crockford’s JavaScript: The Good Parts. I also consider this book an essential read. It talks about how the language evolved into what it is today, and how to avoid tripping over the bad parts of JavaScript.

Learning JavaScript Design Patterns

This online book talks about implementing the various design patterns in JavaScript, and when to use them. It also talks about jQuery related patterns.

JavaScript Patterns

Provides an overview of a large number of design patterns, as well as good JavaScript practices.

Understanding JavaScript OOP

A pretty comprehensive guide on using object literals, prototypes, and prototypal inheritance.

JavaScript Prototypal Inheritance

A thorough and rigorous drilling into prototypal inheritance, classical inheritance patterns, and use of privileged methods and properties.

How Prototypal Inheritance really works

More explanation on prototypal inheritance.
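The core idea those three articles drill into fits in a few lines. A minimal sketch (the object names are mine, not from the articles):

```javascript
// Every object has a prototype; a property lookup that misses on the
// object itself walks up the prototype chain.
var animal = {
  describe: function () { return this.name + ' says ' + this.sound; }
};

// Object.create (ES5) makes a new object whose prototype is `animal`.
var dog = Object.create(animal);
dog.name = 'Rex';
dog.sound = 'woof';

dog.describe();                         // "Rex says woof" -- method found on the prototype
Object.getPrototypeOf(dog) === animal;  // true
```

No classes involved: `dog` delegates to `animal` directly, which is the model everything else (constructors, `new`, inheritance patterns) is built on.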

Private, Privileged, Public And Static Members

Talks about encapsulating logic into private, privileged and public methods in JavaScript.
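The three kinds of members can be shown with one small constructor; this is a generic sketch of the pattern, not code from the article:

```javascript
function Counter() {
  var count = 0;  // private: visible only inside the constructor's closure

  // privileged: defined inside the closure, so it can reach `count`
  this.increment = function () { count += 1; return count; };
}

// public: shared on the prototype, has no access to `count`
Counter.prototype.describe = function () { return 'a counter'; };

var c = new Counter();
c.increment();  // 1
c.increment();  // 2
c.count;        // undefined -- the private variable is not exposed
```

The trade-off the article gets into: privileged methods cost one closure per instance, while public (prototype) methods are shared.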

Understanding this

Well, this is almost always a source of confusion and frustration for newcomers to JavaScript. This post clearly explains how exactly this works.
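The usual trip-up, in miniature (my own example, not the post’s):

```javascript
var person = {
  name: 'Ada',
  greet: function () { return 'hi, ' + this.name; }
};

person.greet();  // "hi, Ada" -- `this` is the object before the dot

var detached = person.greet;
// Calling detached() bare would NOT give "hi, Ada": `this` becomes the
// global object (or undefined in strict mode), so this.name is lost.

var rebound = person.greet.bind(person);
rebound();       // "hi, Ada" -- bind() pins `this` explicitly
```

In short, `this` is decided by how a function is called, not where it was defined.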

A fresh look at JavaScript Mixins

A clever way to use mixins in JavaScript using this. It also covers other, more conventional approaches to achieving mixins.
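The simplest of those conventional approaches – copying methods onto a target object – looks like this (a generic sketch, not the article’s `this`-based trick):

```javascript
// A behavior to be shared across otherwise unrelated objects.
var withDistance = {
  distanceFromOrigin: function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  }
};

// Copy the source's own properties onto the target.
function mixin(target, source) {
  for (var key in source) {
    if (source.hasOwnProperty(key)) target[key] = source[key];
  }
  return target;
}

var point = mixin({ x: 3, y: 4 }, withDistance);
point.distanceFromOrigin();  // 5
```

Unlike inheritance, this flattens the behavior into the object itself, so one object can absorb several mixins.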

Optional parameters in Javascript

JavaScript does not directly support optional parameters. This article discusses ways to mimic them.
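The basic trick (my own minimal example): JavaScript happily calls a function with fewer arguments than it declares, leaving the missing ones `undefined`, so a default is just a check:

```javascript
function greet(name, greeting) {
  if (greeting === undefined) greeting = 'hello';  // mimic a default value
  return greeting + ', ' + name;
}

greet('Ada');           // "hello, Ada"
greet('Ada', 'howdy');  // "howdy, Ada"
```

The `greeting = greeting || 'hello'` shorthand also works, with the caveat that it clobbers legitimate falsy arguments like `''` or `0`.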

Hidden Features of JavaScript

A bag of assorted tips, tricks and hacks. There is a whole series of such posts on StackOverflow on other languages as well.

One-Line JavaScript Memoization

Explores strategies for implementing memoization in JavaScript.
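One common strategy – a cache object keyed by argument – looks like this (a sketch of the general idea; the article explores terser variants):

```javascript
// Wrap a one-argument function so each distinct argument is computed once.
function memoize(fn) {
  var cache = {};
  return function (arg) {
    if (!(arg in cache)) cache[arg] = fn(arg);
    return cache[arg];
  };
}

var calls = 0;
var square = memoize(function (n) { calls += 1; return n * n; });

square(4);  // 16 -- computed
square(4);  // 16 -- served from the cache; `calls` is still 1
```

This trades memory for speed, which pays off whenever the wrapped computation is expensive and the argument space is small.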

Google JavaScript Style Guide

These are just Google’s recommendations. Mix and match them with your own personal styles.

If you’re into JavaScript and Node, you might want to follow me on Twitter right here.


22 Feb 2012

Writing applications versus frameworks

At work, I mostly write web applications, so I usually end up consuming frameworks rather than writing them. Lately, I have been reading a lot of framework-ish code on GitHub, and it has turned out to be an awesome source of clever tricks, patterns and coding conventions.

Writing frameworks requires a slightly different mindset. While writing application code, readability and maintainability are the top things on my mind, and rightly so. When writing frameworks, code cleanliness is still important, but it sometimes has to take a backseat to performance and the ease with which the framework can be consumed.

Consider the following two versions of memoization functions (snippets from Oliver Steele):

// version 1
Bezier.prototype.getLength = function() {
  if ('_length' in this) return this._length;

  var length = ... // expensive computation
  this._length = length;
  return length;
};

// version 2
Bezier.prototype.getLength = function() {
  var length = ... // expensive computation
  this.getLength = function() { return length; };
  return length;
};

The second version (which uses the Russian Doll pattern) is better optimized, but it makes the code harder to understand and debug. I have found this to be true when using other metaprogramming constructs (method_missing, anyone?) as well. There is always a struggle between writing readable, idiomatic code and the clever hacks that make a library faster and nicer for its consumers.

The same could be said about writing truly RESTful services. Ultimately, it is pragmatism, not dogmatism, that should govern our decisions.


20 Feb 2012

Power of the unknown

Today, I spent nearly S-I-X hours cramped inside a budget flight, as dense fog brought air traffic at Chennai airport to a standstill. Looking back, I have no idea how I sat there wide awake through all of those six hours doing absolutely nothing. However, I did not simply sit and wait out a pre-determined six-hour block. Throughout that wait, I kept thinking that the fog would lift any moment, as I was just desperate to get out. It did not, and in hindsight, it was probably stupid of me to expect such a sudden change in the ways of nature.

I am sure that if I had known at 4.30 AM today (when I boarded the plane) that I would be sitting in a stationary flight till 9.30 AM, I would have ended up feeling far worse than I eventually did. People often talk about the fear of the unknown, and how we’re afraid to undertake something when we don’t know what’s in store for us. What we seldom realize, or don’t appreciate enough, is that the unknown can also be a powerful weapon. Sometimes, just being ignorant of the exact amount of work something is going to entail can stop us from giving up in despair.


6 Feb 2012

Currying in JavaScript using bind()

Currying – or, more accurately, partial application – is the process of breaking down a function that takes multiple arguments into a series of functions that each take part of the arguments. It comes in handy whenever you don’t have all of a function’s required arguments at hand.

For example, consider a function add that takes 3 integers, and returns their sum. Using partial application, you can do this:

intermediate = add(1, 2)

// .. do some other calculations ..

result = intermediate(3)  // 6

So, currying simply allows you to apply the arguments to a function in multiple steps. Some languages support curried functions out-of-the-box (like OCaml), while in other languages like JavaScript, you need to use a helper function to achieve this (side note: Functional.js and Underscore are both terrific libraries that offer various functional extensions to JavaScript, including currying).
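A minimal version of such a helper – a sketch of the general idea, not the API of either library – can be built with closures:

```javascript
// Collects arguments across calls and invokes `fn` once enough of them
// (fn.length by default) have arrived.
function curry(fn, arity) {
  arity = arity || fn.length;
  return function partial() {
    var args = Array.prototype.slice.call(arguments);
    if (args.length >= arity) return fn.apply(null, args);
    // Not enough arguments yet: return a function waiting for the rest.
    return curry(function () {
      return fn.apply(null, args.concat(Array.prototype.slice.call(arguments)));
    }, arity - args.length);
  };
}

function add(a, b, c) { return a + b + c; }

var intermediate = curry(add)(1, 2);
intermediate(3);       // 6
curry(add)(1)(2)(3);   // 6 -- one argument at a time also works
```

The helper is what the libraries above give you out of the box; the post below shows how ES5’s `bind()` covers the common case natively.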

ECMAScript 5 introduced bind() which brings (among other things) native currying to JavaScript. Once again, let’s take the add function.

function add(a,b,c) {
  return a+b+c;
}

This is how you curry it using bind().

var intermediate = add.bind(undefined, 1, 2);
var result = intermediate(3);   // 6

The first argument to bind() actually sets the infamous this context of the function. We can leave it undefined here, since it has no effect.
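When the function is a method, that first argument does matter: the same bind() call can pin this and pre-fill arguments at once. A small made-up example:

```javascript
var tracker = {
  offset: 100,
  add: function (a, b) { return this.offset + a + b; }
};

// Pin `this` to tracker AND pre-apply the first argument in one call.
var addTen = tracker.add.bind(tracker, 10);
addTen(5);  // 115 -- this.offset (100) + 10 + 5
```

For a plain function like add, nothing reads this, which is why undefined is safe there.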

So, why curry? Currying is both elegant and useful when you want to cache reusable computations. I am going to steal this converter example, which I have refactored to use bind().

function converter(toUnit, factor, offset, input) {
    offset = offset || 0;
    return [((offset+input)*factor).toFixed(2), toUnit].join(" ");
}
 
var milesToKm = converter.bind(undefined, 'km', 1.60936, 0);
var poundsToKg = converter.bind(undefined, 'kg', 0.45460, 0);
var fahrenheitToCelsius = converter.bind(undefined, 'degrees C', 0.5556, -32);
 
milesToKm(10);              // returns "16.09 km"
poundsToKg(2.5);            // returns "1.14 kg"
fahrenheitToCelsius(98);    // returns "36.67 degrees C"
