Color Commons

I recently heard about Color Commons, an interactive public artwork created by New American Public Art. The idea is to allow the public to change the color of the Boston Greenway Light Blades via text message. You simply send an SMS to the Light Blades number (917.525.2337) that says what color you want the lights to be, and within about a second the lights will change.

I think this is an awesome project on several different levels. First is the element of interactivity – I really like art that engages viewers beyond passive consumption. I also think that the creativity and diversity of perspective embodied by “the public” can produce really interesting usage scenarios that the original artist would never have thought of (perhaps this just betrays a lack of confidence in my own creativity and artistic vision, but nevertheless…).

It is also interesting to learn more about how the project was executed from an engineering perspective. The creators have generously published their source code and other details about their implementation to help inspire others – check out their project page here. In short, they used a Rascal MCU that runs Python and has a built-in web server and linked that module to a server, which they connected to Twilio to receive the text messages. When a message comes in, they parse it using a Python script to determine which color is being requested and then send the command to the Light Blades controller (a Color Kinetics iPlayer3 with ColorPlay software).
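To make the flow concrete, here is a minimal sketch of the color-parsing step, assuming the Twilio webhook delivers the SMS body as a plain string. The color table and function name are my own illustration, not the project's actual code:

```python
# Hypothetical sketch: map an incoming SMS body to an RGB color request.
# The color table is a small example; the real project supports many more.
COLORS = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "purple": (128, 0, 128),
    "white": (255, 255, 255),
}

def parse_color(sms_body):
    """Return the RGB tuple for the first known color word in the
    message, or None if no recognized color is mentioned."""
    for word in sms_body.lower().split():
        word = word.strip(".,!?")
        if word in COLORS:
            return COLORS[word]
    return None
```

The returned RGB value would then be handed off to whatever drives the Light Blades controller.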

I love seeing this kind of project and hope it inspires others to create interactive art. Hopefully one of these days Pocobor will have time to put together one of the ideas that have been rattling around the office lately…

Useless Machines

Machines don’t always have a functional purpose; sometimes they are built just to entertain. The machine in the video below takes that concept to a whole new level: its only function is to turn itself back off.

After finding this video, I came across an even more awesomely useless machine made by a German hobbyist named Andreas Fiessler. He adapted a broken printer for his machine, which is about the best use I can think of for a broken printer. The video is pretty amusing:

Just in case that video inspires you to make your own, Andreas offers a great description of how he made it happen on his website.

Swarm Robotics + Flying Drones = Awesome

This is a very cool video on swarm robotics. Specifically, it demonstrates how flying drones can work together with ground-based robots to accomplish different tasks. Oh, and if you make it to the end of the video, you get to find out the ultimate application of this amazing technology.

Brain Hacking

I just saw this article about the potential to involuntarily extract information from someone’s brain using an off-the-shelf brain-computer interface (BCI), such as the systems we’ve previously blogged about, and decided to take a quick side track from our Interface/Off series to address it. The idea is to use an electroencephalograph (EEG) headset and process the measurements taken while the subject thinks about various topics to extract meaningful data. In the study referenced in the article, performed by researchers from UC Berkeley, Oxford and the University of Geneva, the extracted data included ATM PINs and home addresses.

I find these results to be fascinating, exciting and a little bit disconcerting. Despite the imperfect success rate of this initial study (10-40% chance of obtaining useful information), it is clear that the potential exists to cross a threshold of human privacy that has never been violated – the sanctity of private thought. Obviously, this has the potential to change the world in fairly fundamental ways. I don’t actually think that we are on the cusp of a time in which your thoughts can be plucked out of your head right and left (if nothing else, I believe the necessity of the subject wearing an EEG headset is a limitation that is unlikely to be surmounted any time soon), but these results bring up a really interesting discussion about the ethics of progress.

This is not a new debate – people have been arguing over the benefits and drawbacks of scientific and technological development for centuries, in contexts from economic (robots will steal our jobs!) to medical (cloning, gene therapy, etc.) to apocalyptic (nuclear, biological and chemical weapons). However, a significant difference in this version of the debate is the ubiquity of this technology. I frequently write on this blog about how exciting and powerful I find it that the tools and materials to develop smart products and mechatronic systems are so accessible and inexpensive but this can be a double-edged sword when the resulting technology has the potential for misuse or abuse. For example, the Emotiv and Neurosky BCIs cost around $200-300, including access to their APIs.

I think this post is already long enough, so instead of getting into a detailed look at the philosophy and ethics of science and engineering, I’ll just give my two cents on the big picture and leave it there for now. I think that it is impossible and usually counter-productive to try to restrict the development of science or technology, all the more so when there are no natural barriers (such as enormous capital requirements). I also believe that there is inherent good in the pursuit and acquisition of knowledge. However, I think that as scientists and engineers we have a responsibility to let our work be guided by our personal morals and ethics. Hopefully this is enough to ensure that none of us have to worry about stolen thoughts any time soon.

Exploded Views

I stopped by the SF Museum of Modern Art the other day for the first time in a few years and was blown away by Exploded Views, an installation by Jim Campbell (an alum of the EE and math departments at MIT – my kind of artist) that is hanging in the atrium. If you look above your head as you walk into the museum, you will notice 2880 white LEDs hanging from the ceiling in the shape of a large box. From below, you can see that various lights are flickering on and off, but the pattern driving them is not immediately apparent. However, when you walk up the stairs to the first balcony and then look back at the array, you immediately find that you are looking at a kind of 3D screen showing footage of moving silhouettes. When I was there, the film was a boxing match, but several clips have played at various times.

From a technical perspective, this piece is fascinating to me for a variety of reasons. First of all, the conversion of what I assume is originally standard 2D footage into a signal controlling when each LED turns on and off is a meaty design problem, especially given the ability of the 3D array to provide depth of field. Furthermore, I know from experience that driving large-scale LED arrays can be a surprisingly involved process from a hardware perspective, involving thermal management and a significant wiring effort just to locate, connect and debug nearly 3000 LEDs without sacrificing serviceability.
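Just to illustrate the flavor of that first design problem, here is a toy sketch of one way to turn a 2D grayscale frame into on/off states for a low-resolution LED grid. Campbell's actual pipeline is not something I know, so the approach (average-pooling plus a threshold) and all the numbers here are purely illustrative:

```python
# Toy sketch: downsample a 2-D grayscale frame (rows of 0-255 values)
# to an on/off state per LED in a grid_h x grid_w array.
def frame_to_led_states(frame, grid_w, grid_h, threshold=128):
    """Average-pool the frame into grid cells, then threshold each
    cell's mean brightness to decide whether its LED is lit."""
    src_h, src_w = len(frame), len(frame[0])
    cell_h, cell_w = src_h // grid_h, src_w // grid_w
    states = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            total = 0
            for y in range(gy * cell_h, (gy + 1) * cell_h):
                for x in range(gx * cell_w, (gx + 1) * cell_w):
                    total += frame[y][x]
            avg = total / (cell_h * cell_w)
            row.append(avg >= threshold)
        states.append(row)
    return states
```

A real installation would also need to map each grid cell to a physical LED address and handle the depth dimension, which is where the problem gets genuinely meaty.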

Beyond the engineering points of interest, though, seeing the installation was an extremely compelling artistic experience. Campbell did a great job of executing an inspired vision and created an effect that was surprisingly sticky – I spent a lot longer staring at the piece than I normally do at museums and spent the next few days thinking about ideas for variations that would be cool personal projects. I talk a lot in these posts about how much I appreciate seeing something inspiring, especially something that can get people excited about the potential of mechatronics – this is a perfect example, and if you have a chance to visit SFMOMA while the piece is up (until October 23), I strongly recommend checking it out.

An Eye-Tracking Camera

This is a really cool concept for a camera. It’s called Iris, and it was designed by Mimi Zou, a student at the Royal College of Art in London. As you might guess from the name, you control Iris with your eye: you squint to zoom and blink twice to take a picture. It will track your eye to make sure that it focuses on what you focus on, so that you never have a blurry picture again. It will also identify the person behind the camera and load his or her favorite settings before taking any pictures.

Sadly, the video shows only a possible design; however, Mimi has built a working prototype. This concept is beautiful, and I hope Mimi continues to develop this unique camera.

Smart Milk Jug

I saw an interesting post on Techcrunch the other day about a smart milk jug project developed by GE and Quirky. The 1-quart product uses pH, temperature and weight sensors to track both how much milk is left and whether it has gone bad. There is a discreet LED interface on the base, in addition to a GSM radio and SIM card, so that the user can receive a text message reminder when it is time to buy more milk.
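The decision logic for a product like this could be quite simple. Here is a speculative sketch; the thresholds and messages are entirely my own guesses, not anything from GE or Quirky:

```python
# Speculative sketch of the jug's alert logic (thresholds are made up).
FRESH_PH_MIN = 6.5   # fresh milk is near-neutral; pH falls as it spoils
LOW_WEIGHT_G = 200   # roughly a glass of milk left in a 1-quart jug

def check_milk(ph, weight_g):
    """Return the text-message reminder to send, or None if no alert
    is needed. Spoilage takes priority over running low."""
    if ph < FRESH_PH_MIN:
        return "Your milk has gone bad - time to buy more."
    if weight_g < LOW_WEIGHT_G:
        return "You're almost out of milk - time to buy more."
    return None
```

Whatever string comes back would then go out over the GSM radio as the SMS reminder.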

The value proposition for this particular product is a bit underwhelming in my opinion (but then again, a quart of milk usually disappears long before it would go bad at my house), but I love the thinking behind it. It’s great to see people working to use smart technology to improve their day-to-day lives, especially in areas that aren’t typically associated with high technology.

Bioenergy Harvesting

I came across these boot add-ons that the military is considering. They basically harvest energy from a soldier’s steps to power the growing number of electronics a modern-day soldier must carry (night vision, GPS, light saber…). The hope is that by harvesting energy from the soldiers, they won’t need to carry as many batteries, which can weigh as much as 20 lbs. I wouldn’t mind having a pair of these for my next camping trip – if I ever need to charge my iPhone, all I need to do is go on a hike.

Button Sprouts

A company called Tactus is developing an amazing touchscreen technology that allows buttons to literally sprout from a completely flat touch surface, giving users real 3D buttons (as opposed to vibration-based haptic buttons like on the BlackBerry Storm). When the buttons are no longer needed, they retract back into the touch surface, leaving a smooth, flat touchscreen. Wow.

Robot Swarms

I came across a very cool research video from a lab out of UPenn showing synchronized flying of a flock of nano quadrotors. This stuff is really fun to watch. Let’s just hope they never become self-aware.