28 Oct 2012
I have been tackling some incredibly hard problems at work over the past few months. While I can’t quite talk about all of it yet, I wanted to share a meta observation that I gleaned along the way.
It’s hard to dispute that repeatedly failing at something tends to increase our chances of succeeding at it. This is the basic premise of Gladwell’s 10,000 hour rule of success as well: over time, we learn from our mistakes and get better.
There is, however, one more aspect of repeated failure that’s a lot more subtle, and worth mentioning here: repeated failure helps us understand what it takes to succeed, and in the process even redefine the notion of success. Success is a fairly arbitrary term, and in a complicated field of work, it’s usually not easy to define in an isolated, cut-and-dried fashion. We also tend to have multiple goals, often in conflict with each other. For instance, you might fail to solve a simple problem, but in the process partially solve a different problem that has a far greater impact on your other goals. Or you might even realize that, for various reasons, you did not in fact want to succeed at what you set out to do. Both of these have happened to me in the past few months.
18 Mar 2012
I just finished watching this talk by Jack Diederich (Python core developer) – somewhat flamebait-ishly named “Stop Writing Classes”. In his talk, Jack shows, through various examples, how introducing a class just for the sake of it actually makes the code harder to read and maintain.
While reflecting on the talk, I realized that the actual problem here is that people take OO too far. I don’t know how OO is taught elsewhere in the world, but from my own experience, a large part of this abuse can be attributed to the way OO is preached in various CS courses. A lot of emphasis is placed on drilling into young heads the virtues of OO and its place in the big enterprise world, without actually explaining why OO works when developing large applications. And do courses on OO ever highlight when it does not work?
In interviews, when asked to solve a tricky problem, I find it disturbing that people immediately start off with skeletons of classes. It has become some kind of protocol that one should model everything in classes and objects to come off as a competent software developer. Jack shows a succinct solution to Conway’s Game of Life – a solution that highlights the programmer’s knowledge and elegant use of Python’s yield, but one that would probably be rejected as “poor procedural code” in a lot of places.
In many ways, OO has become a safety belt. With a few classes in place, at least no one can fire you for writing procedural code, right? If poorly written procedural code is death by repetition, poorly written OO code is death by multiple levels of insane abstractions.
One technique that has worked for me when solving a problem from scratch is to first actually solve it using functions that perform highly specific operations. As the solution evolves, I start identifying fragments of code that can either be pulled into a separate class for better abstraction or moved from one class to another. That way, my code moves towards object orientation based on actual need, rather than because of a hypothetical high level “modeling” of the problem. It also means I don’t end up with a class like this:
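(What follows is an illustrative, made-up example of the kind of needless wrapper I mean; the names are invented purely for illustration.)

# an illustrative, made-up example: a class that exists only to wrap a single operation
class GreetingFactory(object):
    def __init__(self, name):
        self.name = name

    def build_greeting(self):
        return "Hello, %s!" % self.name

# all of which could simply have been:
def greeting(name):
    return "Hello, %s!" % name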
Simply because I (hopefully) would have felt stupid refactoring anything into that.
16 Mar 2012
A client wanted to prevent paid users of their product from sending messages with email addresses to the free users. The client felt that allowing such exchanges to happen would make the free users less inclined to upgrade to a paid account. Anyhow, we went ahead and implemented a robust email masking “feature” which blanked out any fragment of text that appeared to be an email address. We felt pretty smug about it because it could even catch smart users who pulled tricks like john at example dot com. Heck, we had automated tests to cover all those edge cases and hairy scenarios!
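To give a flavour of the approach, here’s a rough Python sketch of the kind of masking involved – not our production code, just simplified stand-in patterns:

import re

# simplified stand-ins for the real patterns: plain addresses, plus the
# obvious spelled-out "at" / "dot" variants
EMAIL_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),                  # john@example.com
    re.compile(r"[\w.+-]+\s+at\s+[\w-]+\s+dot\s+\w+", re.I),  # john at example dot com
]

def mask_emails(text):
    # blank out anything that looks like an email address
    for pattern in EMAIL_PATTERNS:
        text = pattern.sub("[hidden]", text)
    return text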
The users defeated the system in the following ways:
You can contact me off here – jack sp 1967 at g mail (all in one address).
Take the first letter from each of the following words: please don’t count rabbits because they increase everyone’s expectations at great mayhem and internal lost dot clouds over mountains.
When we were implementing the email masking functionality, at one point I wondered whether we were going overboard in coming up with all kinds of ways to break the system. In fact, I’m sure I even thought, “Huh, these are probably non-technical folks, so we don’t have to go to really convoluted lengths”. Boy, was I wrong.
My favorite exploit involved sending the email address using NINE separate messages.
I’m sure even if we had spent two more weeks on the masking feature, we wouldn’t have been able to catch that one!
NOTE: the above messages do not of course contain the actual email addresses of the users.
2 Mar 2012
In Git, if you find yourself constantly typing git push origin branchname to push your local commits to the remote branch, here’s a tip: make your local branch automatically track your remote branch.
When creating a new remote branch, here’s how you can make your local branch track its remote branch:
# creates a new local branch
git branch foobranch
# creates a new remote branch, and makes local branch track that (-u)
git push -u origin foobranch
If you want to set up tracking for an existing branch, you can do so with:
# set-up tracking for an existing branch
git branch --set-upstream foobranch origin/foobranch
If you are checking out an already existing remote branch, you can create the local branch and set up tracking in a single command:
# creates a local tracking branch foobranch (without checking it out)
git branch --track foobranch origin/foobranch
# or, check out and create the tracking branch in one step
git checkout --track origin/foobranch
# or, the same with an explicit local branch name
git checkout -b foobranch origin/foobranch
Setting up tracking offers two advantages. First, it reduces the number of characters you have to type to push your changes to a remote branch. More importantly, it prevents you from accidentally typing a bare git push, which would end up pushing the local commits in all your other branches as well (commits which you might not be ready to push yet!).
Of course, you can also set up tracking by directly editing the branch configuration in your .git/config.
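For the foobranch example above, the relevant entries would look something like this:
# in .git/config
[branch "foobranch"]
    remote = origin
    merge = refs/heads/foobranch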
27 Feb 2012
I came across this interesting anecdote about a waiter in a Swedish restaurant using his computer screen like a static whiteboard. Setting aside for a moment what his real intentions might have been, let us just take his word for it: by requiring him to click too many times, the computer system simply wasn’t optimized for his productivity.
This post reminded me of a client for whom we were designing an interface to record eye readings. The client, being an ophthalmologist himself, was extremely vocal about how the entire user experience should be. He wanted an interface in which you enter eye readings by clicking on a series of buttons carrying the various prescription numbers. He showed it to a few people and they felt that it was fairly easy to use – everything was obvious and lucid. I had some reservations about taking a mouse-driven approach, and just as I had suspected, it really ended up slowing down people who were using the interface over and over again. What seemed intuitive, with zero learning curve, eventually turned out to be pretty slow and cumbersome for regular, repeated use.
When designing user interactions, one should balance the long-term productivity goals of an active user against the apparent immediate ease-of-use of the system for new users. Kevin Fox recently wrote about how Google seems to be “simplifying the UX for current users at the expense of the new user learning curve”. I’m sure Google had reasons for doing that, but nevertheless, it’s not trivial to optimize a user experience for both new and power users.
On the other hand, there are also lots of applications that treat all their users equally. In reality, user behavior and engagement change over time, and so do their needs. Yet run-of-the-mill analytics software offers only a broad picture of user engagement. This is where cohort analysis becomes useful.
A cohort analysis is a tool that helps measure user engagement over time. It helps UX designers know whether user engagement is actually getting better over time or is only appearing to improve because of growth. – Cohort analysis – measuring engagement over time.
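In case that sounds abstract, here’s a tiny Python sketch of the underlying idea – group users into cohorts by signup month and measure what fraction of each cohort is still active in later months (the data shapes here are just assumptions for illustration):

from collections import defaultdict

def cohort_retention(signups, activity):
    # signups: {user_id: signup_month}, activity: {user_id: set of active months};
    # months are comparable values such as 'YYYY-MM' strings
    cohorts = defaultdict(set)
    for user, month in signups.items():
        cohorts[month].add(user)

    retention = {}
    for cohort_month, users in cohorts.items():
        months = sorted({m for u in users for m in activity.get(u, ()) if m >= cohort_month})
        retention[cohort_month] = {}
        for m in months:
            active = sum(1 for u in users if m in activity.get(u, ()))
            retention[cohort_month][m] = active / float(len(users))
    return retention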
With the help of cohort analysis, one could evolve the user experience to make it more productive for the power users, while at the same time making it easy enough for new users to get going with the system. We already use graceful degradation as a strategy for enhancing the user experience in modern browsers, while still not completely dropping support for people with older browsers. I see optimizing for productivity the same way – user interactions should offer alternative hooks for the power users to exploit without making the external interface complex. A good example of that is Spotlight on OS X: it stays out of the way, but it’s still just a keyboard shortcut away. A well-designed, modern command line interface can really complement the graphical user interface.
Finally, while designing interfaces, one should make decisions based on facts and data, rather than gut feeling. I will end this post with another anecdote. We’re currently trying to convince a client to get rid of the “confirm email address” field on their signup page. In addition to making the user fill in an additional field, the current form also prevents the user from copy-pasting their email address from the previous field. When we asked the client why they were doing that, they replied, “We don’t want our users to accidentally end up typing a wrong email address”.
This is a classic case of trusting the gut blindly, and it’s clearly not the best way to build a user interface. They are pissing off a lot of users, while all the time thinking that they are actually helping them.