Pocobor.

Interface/Off (Part 1)

As the pace of technological development continues to increase, it becomes more and more interesting to me to try to predict the future – I’m impatient and enjoy getting closure on my guesses sooner rather than later. Despite the difficulty of accurately forecasting the future, it’s worth taking a shot because the upside is so high: if you can position yourself or your company as an expert on the next world-changing technology before it breaks big, you will be as popular as a PBR salesman in Dolores Park. Conversely, a poor understanding of where technology is going can be the death knell for even a dominant company – look at how quickly Nokia and RIM have fallen following their inability to anticipate and understand the smartphone revolution.

I would argue that we are in the midst of a pretty significant shift right now in the field of interface technology, specifically with regard to smart products. For a few decades after the advent of the personal computer, the keyboard and mouse were the standard interface technologies without much in the way of alternatives. However, the overlapping rises of mobile computing, smart products, and some new interface technologies have significantly opened the playing field as we look forward (nowadays Minority Report seems a lot less futuristic and a lot more presentistic, which is definitely not a word). Just look at the last 5 years – touchscreens and then voice control have dramatically expanded the realm of possibility for how we interact with our phones. It is possible that touchscreens’ day in the sun will be shorter than one might think, though – new technologies such as gestural interfaces and brain-machine interfaces (BMIs) are showing significant promise and could become ubiquitous within a few years. So, the question is: how will you interact with the (man-made) world around you in 5, 10 or 25 years?

Because this topic deserves a little more space to breathe than I can give it in a single post, I decided to break up my thoughts into 4 follow-up sections, arranged in roughly increasing order of sophistication and system versatility:

  1. In the red corner are Touch-Based Systems: physical button systems such as keyboard and mouse, touch sensors, and touch screens
  2. In the blue corner are Touch-Free (Gestural) Systems: systems that are controlled physically but without contact, such as eye tracking, voice control and gestural interfaces
  3. In the yellow corner (it’s a triangular ring apparently) are Brain-Machine Systems: revisiting thought control, which I’ve touched on several times in previous posts
  4. Finally, the fight can’t end without a Judge’s Decision: what system will claim victory?

Les Machines de L’Ile

Today we’re going to Nantes, France, and a really cool interactive mechatronic art installation located on an island in the Loire River. Called Les Machines de L’Ile and created by Francois Delaroziere and Pierre Orefice, its purpose is to fire the imaginations of visitors by helping them visualize a fantasy world at the intersection of Jules Verne and Leonardo Da Vinci.

There are currently two main installations (with a third scheduled for completion in 2014):

1. The Great Elephant: weighing in at nearly 50 tons, this beast can take up to 49 passengers.

2. The Marine World Carousel: even bigger than the Great Elephant, this carousel boasts 35 underwater creatures over 3 levels.

The site is very interactive, with many of the pieces built such that guests can ride them. I’ve never been there, but as far as I’m concerned, it sounds way better than Disneyland.

Exploded Views

I stopped by the SF Museum of Modern Art the other day for the first time in a few years and was blown away by Exploded Views, an installation by Jim Campbell (an alum of the EE and math departments at MIT – my kind of artist) that is hanging in the atrium. If you look above your head as you walk into the museum, you will notice 2880 white LEDs hanging from the ceiling in the shape of a large box. From below, you can see that various lights are flickering on and off, but the pattern driving them is not immediately apparent. However, when you walk up the stairs to the first balcony and then look back at the array, you immediately find that you are looking at a kind of 3D screen showing footage of moving silhouettes. When I was there, the film was a boxing match, but several different clips have played at various times.

From a technical perspective, this piece is fascinating to me for a variety of reasons. First of all, the conversion of what I assume is originally standard 2D footage to a signal controlling when each LED turns on and off is a meaty design problem, especially given the ability of the 3D array to provide depth of field. Furthermore, I know from experience that driving large-scale LED arrays can be a surprisingly involved process from a hardware perspective, requiring thermal management and a significant wiring effort just to locate, connect and debug nearly 3000 LEDs without sacrificing serviceability.
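To make that first point a bit more concrete, here is a minimal sketch (in Python, using NumPy) of one way the footage-to-array conversion could work. To be clear, the grid dimensions, the brightness-to-depth mapping, and the function name are all my own assumptions for illustration; I have no idea how Campbell’s actual driver works. The idea is simply to downsample each frame to the array’s face resolution, then use each column’s brightness to decide how many LEDs light up along its depth axis.

```python
import numpy as np

# Hypothetical dimensions: the piece has 2880 LEDs, and a 24 x 15 x 8 grid
# (width x height x depth) happens to total 2880, but the real layout is unknown.
GRID_W, GRID_H, GRID_D = 24, 15, 8

def frame_to_led_states(frame: np.ndarray) -> np.ndarray:
    """Map one grayscale video frame (rows x cols, values 0-255) onto
    on/off states for a GRID_H x GRID_W x GRID_D LED array.

    Approach: downsample the frame to the array's face resolution, then
    light a number of depth layers in each column proportional to that
    column's brightness. This is one crude way to get a sense of depth
    out of flat footage.
    """
    src_h, src_w = frame.shape

    # Average blocks of source pixels into one brightness value per LED column.
    ys = np.linspace(0, src_h, GRID_H + 1, dtype=int)
    xs = np.linspace(0, src_w, GRID_W + 1, dtype=int)
    face = np.zeros((GRID_H, GRID_W))
    for i in range(GRID_H):
        for j in range(GRID_W):
            face[i, j] = frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()

    # Brighter columns switch on more of their depth layers.
    depth_on = np.rint(face / 255.0 * GRID_D).astype(int)
    states = np.zeros((GRID_H, GRID_W, GRID_D), dtype=bool)
    for i in range(GRID_H):
        for j in range(GRID_W):
            states[i, j, :depth_on[i, j]] = True
    return states
```

A real installation would presumably also need per-LED dimming (PWM rather than simple on/off) and a mapping from each LED’s physical position to a driver channel, but the downsampling step above is the heart of the 2D-to-3D conversion problem.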

Beyond the engineering points of interest, though, seeing the installation was an extremely compelling artistic experience. Campbell did a great job of executing what was an inspired vision to begin with and created an effect that was surprisingly sticky – I spent a lot longer staring at the piece than I normally do at museums and spent the next few days thinking about ideas for variations that would be cool personal projects. I talk a lot in these posts about how much I appreciate seeing something inspiring, especially something that can get people excited about the potential of mechatronics – this is a perfect example, and if you have a chance to visit SFMOMA while the piece is up (until October 23), I strongly recommend checking it out.