
Interface/Off (Part V): THRILLING CONCLUSION

(Context: this is the final installment in our look at the future of smart product interface technology. Part I set the stage, Part II looked at touch-based systems, Part III covered touch-free approaches, and Part IV dug into brain-computer links.)

Well, this is it – the day that you’ve no doubt been waiting breathlessly for since July, when the gorgeous half-faces of Nic Cage and John Travolta confusingly headlined a mechatronics blog post. We are (finally) concluding our Interface/Off series examining the state and future of smart product interface technology.

How Did We Get Here?
As a very brief refresher, we’ve structured our look so far in roughly increasing order of sophistication. We started with touch-based interface technology such as physical buttons, touch sensing, and touch screens – these systems are simple and well understood (by both designers and users) but are hobbled by physical limitations such as proximity requirements and dimensionality (it’s hard to make a 3-D touch-based interface). Touch-free systems, such as gestural interfaces and voice control, eliminate some of these limitations and broaden the designer’s palette, enabling more intuitive user behavior. However, they have their own disadvantages, including “translation” issues (for both gestures and spoken language) and scaling issues when multiple potential users are present. Finally, brain-computer interfaces (BCIs) have the potential to maximize the communication bandwidth between the user and the smart product, but they require solving thorny problems in fields as diverse as medicine, engineering, and ethics.

Judge’s Decision
So, what is the interface of the future? A realistic answer is inevitably something of a cop-out; I expect to see all of these interface types in applications that they are particularly well suited for throughout the foreseeable future. For instance, there will always be contexts in which the low price, simplicity and reliability of a physical button will make it the preferred solution.

Furthermore, as these technologies mature, I think we will see more blended systems. Let’s not forget – even touch screens have only become ubiquitous with the rise of smartphones over the past five years; nearly all of the technologies we’ve looked at have a tremendous amount of room left to grow and evolve. For example, imagine a product that you can communicate with via voice commands, gestures, or both (actually, this is strikingly similar to face-to-face human conversation, which typically mixes verbal and body language).

However, it would be too anticlimactic to leave you with such a non-answer after all these months. So, if I have to choose one technology to rule them all, there’s no way I can choose anything other than direct brain-computer links.

Think of it this way: the whole history of communication is an effort, with increasing success, to convey thoughts and ideas with more precision and accuracy, in as close to real time as possible. Languages keep coining new words; technology has let us communicate from a distance, then actually talk to each other, then even see each other live (so to speak), all so we can communicate the most accurate possible representation of what we think and feel at a given time. However, even when talking with each other face to face, there is a middleman. Language (both spoken and body) is a double-edged sword: it lets us better understand ourselves and our feelings by giving us a framework with which to think, but it also requires that we translate thoughts and feelings into that framework, sometimes inelegantly or inaccurately. Have you ever struggled to find the word to express something, or felt like your words can’t keep up with your thoughts? Imagine cutting out that middleman and communicating what you are thinking with no mouth-enforced bandwidth limitations or language-driven description restrictions. The possibilities for depth of communication, efficiency, and even privacy are staggering.

The above vision is ambitious, and we may even find that parts of it are physically or physiologically impossible (e.g. directly conveying a feeling from one person to another). And, unlike a lot of mechatronic technology we write about on this blog, it is not around the corner. However, despite those and a whole host of other issues currently associated with the technology, the potential is too great for humankind not to keep reaching for that future.

Inflatable Robotics

I recently saw a cool article and video (below) on a new project from Otherlab, one of the most interesting groups in the Bay Area robotics scene.

The video gets into some inflatable robotics work they are doing, with some really interesting potential applications around human-safe robots and medical robotics. However, what I found most interesting were Otherlab co-founder Saul Griffith’s thoughts on the impact that engineers can and do have on the world around them. The question of how, and how meaningfully, we as engineers can affect the world really resonates with me, and I am happy to see it discussed in a larger forum. I couldn’t agree more with Saul’s challenge to all of today’s (and tomorrow’s) engineers: keep dreaming and stretching your notions of what is possible. The world is a canvas with infinite possibility for improvement and beauty.

21st Century Pack Mule

Boston Dynamics, whom many of you may know from BigDog and the other autonomous quadruped robots they have developed, released a new video a few days ago featuring their Legged Squad Support System (LS3) robot (as well as Friend of Pocobor Alex Perkins, the thespian/engineer who co-stars in the video).

The LS3 is essentially a pack mule for the future: it is meant to carry supplies for US soldiers and Marines in the field, both to reduce their physical burden and to free up cognitive bandwidth for more important tasks. As such, it is designed to carry up to 400 pounds, walk 20 miles, and operate for 24 hours without human intervention (other than voice commands to follow, stop, etc.).

From a mechatronics perspective, an application like this poses a number of interesting design challenges. First of all, the dynamics and controls for a quadruped robot are considerably more complex than those for a wheeled or tracked equivalent. However, quadrupeds (or bipeds like humans, for that matter) can handle much rougher terrain than wheeled or tracked robots, and this expanded mobility drastically increases their value. Ultimately, the hope is that pack systems like the LS3 will be able to follow soldiers anywhere they are capable of going.
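To give a very rough feel for why legs are harder to control than wheels, here is a minimal sketch (in Python) of the inverse kinematics for a single two-link planar leg. Everything in it is an illustrative assumption – the link lengths, the planar simplification, the function names – and none of it comes from the LS3 itself; a real quadruped controller solves a 3-D version of this problem for four legs, many times per second, while also keeping the body balanced on rough ground.

import math

# Hypothetical two-link planar leg. Even placing one foot requires
# solving nonlinear trigonometry, whereas a wheeled robot just sets
# wheel speeds. Link lengths are illustrative, not LS3 values.
L1 = 0.4  # upper leg length in meters (assumed)
L2 = 0.4  # lower leg length in meters (assumed)

def leg_ik(x, y):
    """Return (hip, knee) joint angles in radians that place the
    foot at (x, y) in the hip frame."""
    r_sq = x * x + y * y
    # Law of cosines gives the knee angle
    cos_knee = (r_sq - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
    if abs(cos_knee) > 1.0:
        raise ValueError("foot target is out of reach")
    knee = math.acos(cos_knee)
    # Hip angle is the direction to the foot, corrected for the knee bend
    hip = math.atan2(y, x) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return hip, knee

# One foot placement: 0.3 m forward of and 0.5 m below the hip
print(leg_ik(0.3, -0.5))

A gait is, at its simplest, this calculation repeated for four legs along moving foot trajectories – and that is before accounting for slipping, compliance, or payload shifts.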

The second significant engineering obstacle centers on the sensing and control systems required to follow the soldier and choose the best path through the upcoming terrain at any given time. Considerable hardware, software, and algorithmic issues have to be addressed to arrive at a prototype robot like the one in the video. There are also some interesting overlaps with the technology used for self-driving cars and other mobile autonomous systems.
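As a rough illustration of what the “follow the soldier” behavior boils down to once perception has done its job, here is a toy proportional follower, again in Python. The gains, standoff distance, and speed limit are all made-up values for illustration, and the hard parts (sensing the leader, classifying terrain, choosing footholds) are assumed away; Boston Dynamics’ actual control stack is far more sophisticated than this.

import math

# Toy "follow the leader" loop. Assumes a perception system already
# reports the leader's position in the robot frame (x forward, y left).
STANDOFF = 5.0   # desired following distance in meters (assumed)
K_SPEED = 0.5    # gain on range error (assumed)
K_TURN = 1.0     # gain on bearing error (assumed)
MAX_SPEED = 3.0  # forward speed limit in m/s (assumed)

def follow_step(leader_x, leader_y):
    """Return (forward_speed, turn_rate) commands that close in on
    the leader while holding the standoff distance."""
    rng = math.hypot(leader_x, leader_y)
    bearing = math.atan2(leader_y, leader_x)
    # Speed up when too far behind, slow to zero at the standoff distance
    speed = max(0.0, min(MAX_SPEED, K_SPEED * (rng - STANDOFF)))
    # Turn toward the leader in proportion to the bearing error
    turn_rate = K_TURN * bearing
    return speed, turn_rate

# Leader is 12 m ahead and 2 m to the left
print(follow_step(12.0, 2.0))

Wrapping a loop this simple around a real robot is exactly where the hardware, software, and algorithmic issues mentioned above come in.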

Based on the examples that Boston Dynamics and other groups pursuing similar research and development have publicized over the past few years, I have been deeply impressed by the pace of development of these types of systems. At this rate, even mountaineering porters might be feeling a little nervous about their job security soon…