Sunday, November 20, 2011

Never blame yourself. Always blame the machine.

I hear the words "I'm not computer savvy" on a regular basis.  My unequivocal response is: "It's the other way around; the computer is the one that isn't savvy."

In today's software ecosystem, our software tends to treat us as statistics in a user community. It forces us to fit a set of assumptions and constraints. We are, to be blunt, managed by the constraints and assumptions of our software, and, to be perfectly honest, our software sucks. Sometimes we get software that is designed for the average user of our profession, but even in these rare cases the software still sucks. If you don't want to be managed by your software, you must deal with the complexity of writing your own. This is Linux and GNU in a nutshell: if you don't like it, change it yourself. Nonetheless, most of us don't want to live with the tedium of constantly managing our user interfaces, so we throw up our hands and give in to being managed by them. There appears to be no other way.

There are significant cognitive differences between individuals. Software developers, and the computer systems we build, usually ignore these differences, treating individuals as population metrics. The resulting designs may be optimal for the "average" use case but are far from optimal for anyone who doesn't fall near the mean. Human factors testing and human-machine usability are neither simple nor easy, and the distributions are not neat little normal distributions. Just because you can get "decent" performance from a test user population doesn't mean that your solution is optimal for anyone, even for the individuals in your test population.

The software of the future must be better than "optimal on average". Don't just test for a user community; test for individual users. Test for the weird users. Do the opposite of what human factors tells us: test and design for the outliers. We need to begin asking ourselves more difficult questions about what makes an effective user interface. How can we write software that adapts to individual cognitive characteristics? How can we move from software that functions in spite of user differences to software that leverages those differences? In essence, we need adaptive user interfaces, not a plethora of adaptably confusing configuration options. This means that our computers need to start paying attention to us; learning us. We are beyond the point where "learning the machine" is even remotely practical. There are simply too many possibilities for average junk, but there is only one "me" and one "you".
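To make the idea concrete, here is a minimal Python sketch of what a self-adapting interface might look like: a menu that reorders itself around the actions an individual user actually takes, with a little exploration so rarely-used items still get surfaced. Everything here (the class, the method names, the epsilon-greedy strategy) is my own hypothetical illustration, not a description of any existing system.

```python
import random
from collections import defaultdict

class AdaptiveMenu:
    """Toy sketch of a per-user adaptive interface: the menu reorders
    itself around what *this* user actually does, rather than using a
    layout optimized for the 'average' user."""

    def __init__(self, items, explore_rate=0.1):
        self.items = list(items)
        self.explore_rate = explore_rate      # occasionally try other orderings
        self.use_counts = defaultdict(int)    # how often this user picks each item

    def ordered_items(self):
        # Mostly exploit what we've learned about this user...
        ranked = sorted(self.items, key=lambda i: -self.use_counts[i])
        # ...but sometimes explore, so rarely-used items still surface.
        if random.random() < self.explore_rate:
            random.shuffle(ranked)
        return ranked

    def record_selection(self, item):
        self.use_counts[item] += 1            # learn the user, one click at a time

# Each user gets their own instance, so the interface converges on
# their habits rather than on a population average.
menu = AdaptiveMenu(["Copy", "Paste", "Find", "Macro", "Export"])
for _ in range(20):
    menu.record_selection("Macro")            # this user lives in Macro
print(menu.ordered_items()[0])                # 'Macro' floats to the top (usually)
```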

The last decade has seen remarkable developments in the design of user interfaces, including some of the best "optimal on average" interfaces that have ever existed (think iPhone/iPad). Nonetheless, there is a wealth of opportunity to improve user interfaces. I know this because, although we all have different cognitive efficiencies and deficiencies, we are all still using essentially the same small set of user interfaces.

Until user interfaces learn my individual cognitive characteristics and adapt themselves to be optimally suited to me, my mantra to my fellow computer users will stay the same.

"Never blame yourself.  Always blame the machine."


Friday, November 18, 2011

Bans on texting-while-driving are not the solution

The solution to these types of problems is almost never a law that makes the behavior illegal. We have probably all done this at least once, and many of us probably text while at stoplights; arguably safer, but no less illegal and probably still a stupid idea.

A proper solution is one that addresses the problem. In this case the problem isn't texting while driving; it is the way that texting is performed: with thumbs and eyeballs. If we got innovative and stopped driving with our thumbs in our eye sockets, we would realize that there are numerous technological solutions. Many of these rely on text-to-speech and speech-dictation technologies, but some of the more interesting approaches rely on a combination of spoken audio and gesture. Is anyone looking into whether Siri (or the equivalent on Android) will have any impact?
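For illustration only, here is a minimal sketch of the text-to-speech half of that idea, assuming the pyttsx3 Python library (the function name and message content are made up for the example). The dictation half, and anything gesture-based, would of course be the harder part.

```python
# Minimal sketch: read an incoming text message aloud instead of
# forcing eyes and thumbs onto a screen. Assumes the pyttsx3
# offline text-to-speech library (pip install pyttsx3).
import pyttsx3

def speak_incoming_text(sender, body):
    engine = pyttsx3.init()
    engine.setProperty('rate', 150)    # slow the voice down a bit
    engine.say(f"New message from {sender}: {body}")
    engine.runAndWait()                # block until speech finishes

speak_incoming_text("Tim", "Still on for lunch Tuesday?")
```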


Tuesday, November 9, 2010

Blogging from 35,000 feet

I am blogging this somewhere between Dallas and Albuquerque at 35,000 ft.  I'd never had a chance to try inflight wireless before, so, regardless of the extra cost, I thought I'd try it and see how it met my expectations (or failed to).

The service is provided by GoGo, a subsidiary of AirCell. See: http://www.gogoinflight.com

In the order of my efforts at using (and totally and completely abusing) the service:

  • Cost for a single flight: $5 -- hmmm, maybe worth it for a long flight
  • Googling -- WIN
  • Facebooking -- WIN
  • Streaming music using Pandora -- FAIL (it just fails to work at all)
  • Emailing from 35,000 feet -- WIN
  • Text Chat -- WIN (Lunch on Tuesday Tim!)
  • VPN to work -- WIN 
  • SSH over VPN -- FAIL (ugh...totally unusable)
  • Netflix -- WIN (wait, WTF?...I only watched about 2 minutes but the buffer was keeping up.)
  • Ok...gotta try this...Hulu -- FAIL (It loads, but buffering fails ... the Netflix thing had to be a fluke)
  • Pandora (2nd attempt) -- double FAIL 
  • YouTube -- WIN (http://www.youtube.com/watch?v=jHjFxJVeCQs)
  • Blogging -- WIN

I ran a couple of DSL Reports speed tests while in the air. Given that Pandora didn't load at first but Netflix did, I think there are some definite differences in service quality depending on where you are and whether there are Aircell towers nearby. Basic web use is pretty consistent, but don't expect to stream media reliably for a while; streaming performance definitely was not. This is significantly better than old-school dialup, but it has far to go to reach the speeds we are used to at home.
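For the curious, here is a rough back-of-the-napkin way to check throughput yourself, in the spirit of those DSL Reports tests: time how long a known-size download takes. A quick Python sketch follows; the test URL is a placeholder, so substitute any large, stable test file.

```python
# Crude throughput estimate: download a file, time it, report Mbit/s.
import time
import urllib.request

TEST_URL = "http://example.com/testfile.bin"   # placeholder; use a real test file

def measure_mbps(url):
    start = time.time()
    data = urllib.request.urlopen(url).read()  # pull the whole file into memory
    elapsed = time.time() - start
    return (len(data) * 8) / (elapsed * 1_000_000)

print(f"~{measure_mbps(TEST_URL):.2f} Mbit/s")
```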

Ok...we are going to land...time to go.  This thing shuts off at 10,000 feet.   Bye!