Pocobor.

Smart Contact Lens

A new product (or at least potential product) announcement from Google caught my eye this week – they have developed and are testing prototypes of a smart contact lens that can measure glucose levels in tears. Diabetes is becoming increasingly common, with upwards of 5% of the world’s population afflicted, and managing blood sugar levels is a vital and challenging element of treating the disease. Most patients have to resort to self-administered blood tests, such as pricking a finger, multiple times per day.

Tears can provide a good barometer of glucose levels in the body, but historically there has not been a good way to access them easily to take measurements. Integration of the sensing technology into a contact lens is an elegant solution to this problem, albeit one with significant technical hurdles to overcome before something like this could become an actual product.

It’s been an eventful few months for Google’s efforts around mechatronics. This news came shortly after they announced they were buying Nest for $3.2B. It’s heartening to see them making both big and small bets on smart technology and I look forward to watching their continued progress.

Robocop

Strictly speaking, this post is about a project that is only tangentially related to mechatronics. However, given Pocobor’s name and history (and my background as a native Michigander), I couldn’t resist passing along an update regarding a unique public art project coming to Detroit.

In 2011, there was a successfully funded Kickstarter project to create a “life-sized” monument in Detroit that would pay tribute to that city’s greatest half-man, half-machine crime-fighting hero. Although the pace has been slow, things are still moving forward and the makers have given us an update.

It’s nice to be reminded that sometimes, science fiction decides to look at the positive side of a future, more mechatronically advanced world. I can’t wait to see the statue.

Pulse of the City

The (legions of) regular readers of this blog have probably noticed some common threads running through at least my posts. One of them is that I’m a sucker for art that provides interesting ways for the audience to interact with the piece, the artist, or each other. Today’s post looks at another really cool project in this vein, this one called Pulse of the City.

Spearheaded by artist George Zisiadis, the project has placed 5 interactive public art installations around Boston. The concept is that passing pedestrians grab the handles of the heart-shaped installation, which turns their heartbeat into a unique piece of music and plays it in real time on embedded speakers. As the creators describe it, “amidst the chaotic rhythms of the city, it helps pedestrians playfully reconnect with the rhythm of their bodies.”

As always, there are some interesting technical elements. Each unit is fully solar powered and contains both a Raspberry Pi and an Arduino. My guess is that the Raspberry Pi handles generating and playing the music while the Arduino handles the classic embedded-system tasks such as reading the heart rate sensor and driving the LEDs, but documentation is still pending, so that’s just a guess at this point. I would also be very curious to learn more about the algorithm that generates a new musical composition each time – that’s an interesting problem in its own right, and the effort to make the exhibit’s response non-deterministic and unique to each interaction really adds to the beauty of the experience for me.
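Just to make that guess concrete, here’s a minimal sketch of what the Pi-side logic could look like, assuming – and this is purely my invention, since nothing is documented yet – that the Arduino sends one byte over serial for each detected beat. The port name, baud rate, protocol, and note mapping are all hypothetical:

```python
# Hypothetical Pi-side sketch: the Arduino (we assume) sends one byte per
# detected heartbeat; the Pi derives tempo from the inter-beat intervals
# and uses them to seed a simple melody generator.
import time
import random

import serial  # pyserial

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # assumed port/baud
beat_times = []

while True:
    if ser.read(1):  # blocks up to 1 s; truthy when a beat byte arrives
        beat_times.append(time.time())
        beat_times = beat_times[-8:]  # rolling window of recent beats
        if len(beat_times) >= 2:
            intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
            bpm = 60.0 / (sum(intervals) / len(intervals))
            # Seeding with the visitor's own rhythm makes each phrase
            # unique to that person but repeatable for the same rhythm.
            rng = random.Random(int(sum(intervals) * 1e6))
            phrase = [rng.choice(PENTATONIC) for _ in range(4)]
            print(f"{bpm:.0f} BPM -> play {phrase}")
            # The real installation would hand `phrase` to a synth here.
```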

This application is also a great example of the engineering challenges that can crop up in interactive art. Measuring heart rate is a finicky undertaking to begin with, and it’s made more so here by the wide range of skin conductivities and signal strengths to expect in this context, which vary with the individual touching the handles and with environmental conditions such as temperature and humidity. Compared to something that only has to function in a controlled setting, this design illustrates both the headaches of building a system robust enough to interact with a random passerby and the reward for doing so – just look at the faces of the people in the video.
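For the curious, one classic way to cope with that variability – a hedged sketch of a standard technique, not a claim about this installation’s actual firmware – is to detect beats against a threshold that tracks the recent signal envelope instead of using a fixed level:

```python
def detect_beats(samples, sample_rate_hz=100, alpha=0.95, min_thresh=0.1):
    """Yield indices of detected beats in a baseline-removed pulse signal.

    The threshold scales with a running envelope of the signal, so weak
    signals (dry skin, poor contact) and strong ones are handled alike.
    """
    envelope = 0.0
    refractory = int(0.3 * sample_rate_hz)  # ignore ~300 ms after each beat
    last_beat = -refractory
    for i, x in enumerate(samples):
        envelope = alpha * envelope + (1 - alpha) * abs(x)
        threshold = max(1.5 * envelope, min_thresh)
        if x > threshold and (i - last_beat) >= refractory:
            last_beat = i
            yield i
```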

Between this and the Color Commons, Boston has seen some really cool interactive public art popping up recently – hopefully some other cities will be inspired and join the trend.

Color Commons

I recently heard about Color Commons, an interactive public artwork created by New American Public Art. The idea is to allow the public to change the color of the Boston Greenway Light Blades via text message. You simply send an SMS to the Light Blades number (917.525.2337) saying what color you want the lights to be, and within about 1 second the lights will change.

I think this is an awesome project on several different levels. First is the element of interactivity – I really like art that engages viewers beyond a passive consumption level. I also think that the creativity and diversity of perspective embodied by “the public” can surface really interesting usage scenarios that the original artist would never have thought of (perhaps this just betrays a lack of confidence in my own creativity and artistic vision, but nevertheless…).

It is also interesting to learn more about how the project was executed from an engineering perspective. The creators have generously published their source code and other implementation details to help inspire others – check out their project page. In short, they used a Rascal MCU that runs Python and has a built-in web server, linked that module to a server, and connected the server to Twilio to receive the text messages. When a message comes in, a Python script parses it to determine which color is being requested and then sends the command to the Light Blades controller (a Color Kinetics iPlayer3 with ColorPlay software).
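Their published code is the place to look for the real thing, but just to illustrate the shape of that parsing step, here’s a toy version – the color table and the controller call below are invented for the example:

```python
# Toy version of the SMS-parsing step (not their published code).
NAMED_COLORS = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "purple": (128, 0, 128),
    "white": (255, 255, 255),
}

def parse_color(sms_body):
    """Return an (r, g, b) tuple for the first recognized color word."""
    for word in sms_body.lower().split():
        if word in NAMED_COLORS:
            return NAMED_COLORS[word]
    return None  # no recognized color; ignore the message

# A Twilio webhook handler would do something like:
#   rgb = parse_color(request.form["Body"])   # Twilio posts the SMS as "Body"
#   if rgb:
#       send_to_light_blades(rgb)             # hypothetical controller call
print(parse_color("Make it PURPLE please"))   # -> (128, 0, 128)
```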

I love seeing this kind of project and hope it inspires others to create interactive art. Hopefully one of these days Pocobor will have time to put together one of the ideas that have been rattling around the office lately…

Studiomates in the NY Times

Recently our shared studio space in Brooklyn, Studiomates, was written up in the NY Times. It’s an interesting article discussing the recent trend of people wanting to work in spaces shared with other creative types rather than isolated at home or in a coffee shop. I completely agree with that premise.

Ball Balancing Robot

Over the last few years, inertial sensing systems have made significant progress as sensors and actuators have become more powerful and cheaper. For instance, although it did not change the world in the way its creators intended, the Segway has become ubiquitous enough that everyone knows what it is, and inertial sensing platforms are regularly included in consumer products from smart phones to golf clubs. However, this ball balancing robot from the Robot Development Engineering Laboratory at Tohoku Gakuin University in Japan is one of the coolest implementations of this type of system that I have seen.

Although ball balancing robots are not new (from a controls perspective, it is a great example of the classic inverted pendulum problem), there are several noteworthy features about this robot. First, the robot is omnidirectional and can rotate around its vertical axis (zero turning radius). Second, it has a passive control mode so you can push it around without exerting much force – the video shows some good examples of this. These two wrinkles considerably increase the versatility of the robot.
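For the controls-curious, the heart of the inverted-pendulum approach fits in a few lines. This is a generic PD sketch, not the lab’s actual controller, and the gains are placeholders:

```python
def balance_step(tilt_rad, tilt_rate_rad_s, kp=25.0, kd=3.0):
    """Return a drive acceleration command from tilt angle and tilt rate.

    Leaning forward (positive tilt) commands forward acceleration, driving
    the ball back under the center of mass; the rate term damps oscillation.
    An omnidirectional robot like this one would run a loop like this for
    each horizontal axis.
    """
    return kp * tilt_rad + kd * tilt_rate_rad_s
```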

The other thing that is cool about this project is the accessibility of the hardware. It uses a 16-bit MCU with a few sets of accelerometers and gyros, all of which are readily available for on the order of a few dollars, even at prototype volumes. And, since the project was publicized in 2010, companies like InvenSense, ST, and Kionix have released integrated 6-axis chips (a 3-axis accelerometer and 3-axis gyro on the same die with integrated signal processing) and announced the imminent release of 9-axis chips (add a 3-axis magnetometer for orientation using the earth’s magnetic field). Advances like these are just more evidence of how feasible it is becoming to build remarkably cool functionality into consumer products. Personally, I can’t watch the video without imagining this robot as a mobile drink tray – hopefully something like it will be bringing me a beer before I know it.
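Those cheap MEMS parts do come with caveats – gyros drift and accelerometers are noisy – and the classic budget fix is a complementary filter that blends the two. Here’s a one-axis sketch (the blend factor is a typical textbook value, not anything from this project):

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_x, accel_z, dt, k=0.98):
    """Estimate tilt by fusing a drifty gyro with a noisy accelerometer."""
    gyro_angle = angle_prev + gyro_rate * dt       # fast, but drifts over time
    accel_angle = math.atan2(accel_x, accel_z)     # absolute, but noisy
    return k * gyro_angle + (1 - k) * accel_angle  # trust the gyro short-term
```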

Mothrabot?

I saw an interesting post on the website for The Atlantic a few weeks ago about moth-driven robots. The idea is that nature has evolved elegant and effective solutions to problems that science and engineering are still struggling mightily with; in this case, the ability to track smells. Researchers at the University of Tokyo built a wheeled robot that could be driven by moths walking on a crude trackball (picture what was in your computer mouse in the dark ages before optical mice became common). The (male) moth-bots were placed in an obstacle course with female moth sex pheromones at the opposite end, towards which they made their way surprisingly quickly and effectively (even when the researchers biased the steering to always pull in one direction).
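I haven’t seen the researchers’ implementation, but the basic trackball-to-wheels mapping is simple enough to sketch – the function, gains, and bias term below are purely illustrative:

```python
def wheel_speeds(ball_dx, ball_dy, gain=0.5, steering_bias=0.0):
    """Map trackball deltas from the walking moth to (left, right) wheel speeds.

    A nonzero `steering_bias` mimics the researchers' trick of skewing the
    mapping so the robot always pulls to one side, which the moths
    compensated for.
    """
    forward = gain * ball_dy               # moth walks forward -> drive forward
    turn = gain * ball_dx + steering_bias  # sideways ball motion -> steer
    return forward - turn, forward + turn
```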

The moral of the story, other than the fact that moths have a very healthy libido, is that it may be possible to harness features of nature that cannot yet easily be replicated artificially. For instance, tracking environmental spills to their source is one potential immediate application in the tracking realm, and the concept can easily be stretched to any number of other fields. Godzilla would be wise to beware – perhaps next time Mothra will bring some new toys to the fight.

Useless Machines

Machines don’t always have a functional purpose; sometimes they are built just to entertain. The machine in the video below takes that concept to a whole new level. Its only function is to turn itself back off.

After finding this video, I came across an even more awesomely useless machine made by a German hobbyist named Andreas Fiessler. He adapted a broken printer for his machine, which is about the best use I can think of for a broken printer. The video is pretty amusing:

Just in case that video inspires you to make your own, Andreas offers a great description of how he made it happen on his website.
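If that page whets your appetite, it’s worth noting just how little logic a basic useless machine needs. Here’s a hedged sketch for a Raspberry Pi using the gpiozero library – the pin numbers and timings are arbitrary choices, not anything from Andreas’s build:

```python
from time import sleep

from gpiozero import Button, Servo

switch = Button(17)  # the toggle switch the human flips on
arm = Servo(18)      # the servo arm that flips it back off

arm.min()  # start with the arm retracted

while True:
    switch.wait_for_press()  # wait for a human to turn the machine "on"
    sleep(0.3)               # a small dramatic pause
    arm.max()                # swing the arm out to flip the switch off
    sleep(0.5)
    arm.min()                # retract and resume waiting
```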

Interface/Off (Part V): THRILLING CONCLUSION

(Context: this is the final installment in our look at the future of smart product interface technology. Part I set the stage, Part II looked at touch-based systems, Part III covered touch-free approaches, and Part IV dug into brain-computer links.)

Well, this is it – the day that you’ve no doubt been waiting breathlessly for since July, when the gorgeous half-faces of Nic Cage and John Travolta confusingly headlined a mechatronics blog post. We are (finally) concluding our Interface/Off series examining the state and future of smart product interface technology.

How Did We Get Here?
As a very brief refresher, we’ve structured our look so far in roughly increasing order of sophistication. We started with touch-based interface technology such as physical buttons, touch sensing, and touch screens – these systems are simple and well-understood (by both designers and users) but are hobbled by physical limitations such as proximity requirements and dimensionality (it’s hard to make a 3-D touch-based interface). Touch-free systems, such as gestural interfaces and voice control, eliminate some of these limitations and broaden the designer’s palette, enabling more intuitive user behavior. However, they have their own disadvantages, including “translation” issues (both gestures and spoken language) and scaling issues when multiple potential users are present. Finally, brain-computer interfaces (BCIs) have the potential to maximize the communication bandwidth between the user and the smart product but require the solution of thorny issues in fields as diverse as medicine, engineering, and ethics.

Judge’s Decision
So, what is the interface of the future? A realistic answer is inevitably something of a cop-out; I expect to see all of these interface types in applications that they are particularly well suited for throughout the foreseeable future. For instance, there will always be contexts in which the low price, simplicity and reliability of a physical button will make it the preferred solution.

Furthermore, as these technologies mature, I think we will see more blended systems. Let’s not forget – even touch screens have only become ubiquitous with the rise of smart phones over the past 5 years; nearly all of the technologies we’ve looked at have a tremendous amount of room left to grow and evolve. For example, imagine a product that you can communicate with via voice commands and/or gestures (actually, this is strikingly similar to face-to-face human conversation, where a mix of verbal and body language is typically used).

However, it would be too anticlimactic to leave you with such a non-answer after all these months. So, if I have to choose one technology to rule them all, there’s no way I can choose anything other than direct brain-computer links.

Think of it this way: the whole history of communication is an effort, with increasing success, to convey thoughts and ideas with more precision and accuracy, in as close to real time as possible. Languages create more and more new words; technology has let us communicate from a distance, then actually talk to each other, then even see each other live (so to speak), all so we can communicate the most accurate possible representation of what we think and feel at a given time.

However, even when talking with each other face to face, there is a middleman. Language (both spoken and body) is a double-edged sword, allowing us to better understand ourselves and our feelings by giving us a framework with which to think but also requiring that we translate thoughts and feelings into that framework, sometimes inelegantly or inaccurately. Have you ever struggled to find the word to express something or felt like your words can’t keep up with your thoughts? Imagine cutting out that middleman and communicating what you are thinking with no mouth-enforced bandwidth limitations or language-driven description restrictions. The possibilities – in depth of communication, in efficiency, even in privacy – are staggering.

The above vision is ambitious, and we may even find that parts of it are physically or physiologically impossible (e.g. directly conveying a feeling from one person to another). And, unlike a lot of mechatronic technology we write about on this blog, it is not around the corner. However, despite those and a whole host of other issues currently associated with the technology, the potential is too great for humankind not to keep reaching for that future.

Inflatable Robotics

I recently saw a cool article and video (below) on a new project from Otherlab, one of the most interesting groups in the Bay Area robotics scene.

The video gets into some inflatable robotics work that they are doing, with some really interesting potential applications around human-safe robots and medical robotics. However, what I found most interesting were some thoughts from Otherlab co-founder Saul Griffith on the impact that engineers can and do have on the world around them. The topic of how and how meaningfully we as engineers can affect the world really resonates with me and I am happy to see it get discussed in a larger forum. I couldn’t agree more with Saul’s challenge to all of today’s (and tomorrow’s) engineers: keep dreaming and stretching your notions of what is possible. The world is a canvas with infinite possibility for improvement and beauty.