The 4 Levels… of Robotics
I created the 4 Levels… model in 1999 while at a conference focused on intelligent user interfaces, with the objective of analyzing where a company, product, technology, team or individual sits in its natural state of evolution. Some of the terminology is a bit dated, but I feel the model continues to be relevant. This post applies the 4 levels to robotics.
Simple Overview
Level 1: Static deals with items that are static. E.g., print on paper, a song in an MP3 file, a golf club, or an old-school switch on a wall that turns the air conditioner on/off.

Level 2: Dynamic is about the ability for items to capture, display and save data, when possible. E.g., a form on a web site, a golf club with sensors that capture and send the swing data to a database, or a thermometer next to the wall switch for the AC that displays the current temperature. This is the level of one-way communication between the user, the experience and the database.

Level 3: Reactive introduces the ability to take the data saved in Level 2, do simple analysis on it and react to it based on dynamic rules that are triggered by the incoming data. E.g., a driver that adapts its center of gravity and the flex of the shaft based on your historic and current game, or a thermostat that automatically turns the AC on/off when the temperature exceeds/falls below a max/min or based on a timer. This is the level of real-time interaction, the feedback loop, the level of Cybernetics and two-way communication between most components.

Level 4: Proactive is the space where the data is mined and analyzed in relation to what has previously been learned in order to make decisions that predict future events and adapt strategies beforehand to achieve whatever the fitness objectives are. E.g., a golf driver + software that coaches you during the 18 holes based on your historic and current game along with the current environmental conditions, and can predict what your game is going to look like so it adjusts itself beforehand; or the Lyric Thermostat, which is proximity-based and can be turned on based on criteria such as the current temperature and humidity, whether family members occupy the house, or how far away from home you are and whether you're heading back. Future smart thermostats, whether it's the Lyric or not, should be smart enough to know the most efficient times of day to achieve their goals based on current and forecast weather data, which rooms to cool based on who's in or near the home, etc., and would work in concert with reactive architecture to adapt the home in ways that minimize AC usage.
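To make the distinctions concrete, here is a minimal, hypothetical sketch of the thermostat example at each level. All of the hardware calls (the sensor, the AC actuator, the forecast and occupancy inputs) are stubbed stand-ins for illustration only, not any real product's API.

```python
# A toy sketch contrasting the four levels with the thermostat example above.
# All hardware calls are stubbed out; none of this reflects a real device.
import random

def read_temperature():          # stand-in for a real temperature sensor
    return 20.0 + random.random() * 10.0

def set_ac(on):                  # stand-in for a real actuator
    print("AC", "on" if on else "off")

# Level 1: Static -- a dumb switch; the user does all the work.
def level_1(switch_on):
    set_ac(switch_on)

# Level 2: Dynamic -- capture and save the data for later analysis (one-way).
def level_2(log):
    log.append(read_temperature())

# Level 3: Reactive -- a feedback loop with simple rules on incoming data.
def level_3(max_temp=26.0, min_temp=22.0):
    t = read_temperature()
    if t > max_temp:
        set_ac(True)
    elif t < min_temp:
        set_ac(False)

# Level 4: Proactive -- predict ahead (forecast, occupancy) and adapt beforehand.
def level_4(forecast_temp, someone_heading_home, target=24.0):
    if someone_heading_home and forecast_temp > target:
        set_ac(True)             # pre-cool before the heat (and the family) arrive
```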
Level 1: Static
The Static level of robotics refers to the functionality we would traditionally expect from a robot set up for a single purpose, like repetitive tasks, that is typically programmed either on the pendant attached to the robot or via an application that can write the robot code to a file, which in turn is copied to the pendant via a flash drive. A typical Level 1 example is pick-and-place, where an object riding on a conveyor passes a sensor that tells the robot it's time to pick it up from the same location as before. After picking it up, the robot drops the object off in a predefined location. Rinse and repeat x infinity.
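A rough sketch of that kind of fixed, pre-programmed cell might look like the following. The robot methods (wait_for_sensor, move_to, grip, release) and the two poses are hypothetical stand-ins for whatever the pendant program actually exposes.

```python
# Level 1 pick-and-place: everything is hard-coded before run-time.
PICK_POSE  = (0.40, 0.10, 0.05)   # fixed, pre-programmed pick location (x, y, z in m)
PLACE_POSE = (0.10, 0.45, 0.05)   # fixed, pre-programmed drop-off location

def run_static_cell(robot):
    while True:                    # rinse and repeat x infinity
        robot.wait_for_sensor()    # conveyor sensor says an object has arrived
        robot.move_to(PICK_POSE)   # same location as before, every time
        robot.grip()
        robot.move_to(PLACE_POSE)
        robot.release()
```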
The behavior of the robot is repetitive and non-realistic. The sequence is linear like a movie or album and plays the same content in a single direction each time. The entire script that defines how the robot will move is created beforehand and is unmodifiable at run-time. It is unintelligent and inarticulate. It is unaware of other robots in its vicinity, if it's even on a network.
Level 2: Dynamic
The Dynamic level inherits the functionality of Level 1 and adds a focus on capturing, displaying and/or saving data. From an anthropomorphic perspective, this is the phase dedicated to memory. Ideally we would be able to see and/or retrieve raw data about the performance of the robot(s) at a later point in time for analysis. That data could include: overall results, errors, time it took to execute each process, joint positions over time, processes/events that were triggered, etc.
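A minimal sketch of that "memory" idea, assuming a hypothetical get_joint_positions() call on the controller: sample whatever the robot reports and save it to a file that can be pulled up later for analysis.

```python
# Level 2: capture the robot's state over time and save it for later analysis.
# robot.get_joint_positions() is a hypothetical stand-in for the controller's API.
import csv, time

def log_session(robot, path="session_log.csv", seconds=60.0, hz=10.0):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "j1", "j2", "j3", "j4", "j5", "j6"])
        t0 = time.time()
        while time.time() - t0 < seconds:
            joints = robot.get_joint_positions()       # e.g. six joint angles
            writer.writerow([time.time() - t0, *joints])
            time.sleep(1.0 / hz)                       # sample at ~10 Hz
```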
This is also the level where we begin one-way communication with the robot in real-time. Complete or partial scripts can be streamed to the robot(s) via a socket over TCP. The constraints are the same ones we have experienced moving from one medium to another: throughput/channel limitations – how big is the pipe, how much data are we trying to send down it at any point in time, and how do we split up the data we are sending to work within this limitation while maintaining low entropy. Claude Shannon & Warren Weaver's The Mathematical Theory of Communication explores the way information travels from source to destination in explicit detail.
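As a sketch of that one-way streaming, assuming a made-up host, port and script format (no particular vendor's protocol): join the script lines, then push them down the socket in chunks sized to fit the pipe.

```python
# One-way streaming (Level 2): push script fragments to the robot over TCP,
# chunked so we never exceed the channel's capacity. Host/port/format are placeholders.
import socket

def stream_script(lines, host="192.168.0.10", port=30002, chunk_size=1024):
    with socket.create_connection((host, port)) as sock:
        payload = "\n".join(lines).encode("utf-8")
        for i in range(0, len(payload), chunk_size):   # split to fit the pipe
            sock.sendall(payload[i:i + chunk_size])
```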
Turbulence: Watercolor + Magic from Dr. Woohoo! on Vimeo.
In the Turbulence video, I'm sending the toolpath data as a stream to the robot, testing the limits of one-way communication in terms of the minimum and maximum amounts of data that can be sent to/received from the robot. Tests ranged from sending the entire script at once down to a single waypoint at a time, as well as different methods for controlling the robot. The data from the robot was displayed in the robot controller app to help with debugging when needed.
Level 3: Reactive
The main characteristics of the Reactive level are the feedback loop (two-way communication), reminiscent of Norbert Wiener's Cybernetics, along with dynamic business rules that react to the incoming data representing how successfully the robot completed its task(s). This is the first level that embraces change. The robot controller software – with or without design capabilities – bypasses programming on the pendant and can talk, listen and respond to the robot via code generated within its own application. If the type of robot supports piping into it, e.g., via a socket over TCP, a wide spectrum of features and functionality that complement real-time, reactive robotics can quickly be integrated.
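A hedged sketch of what that two-way conversation adds on top of the one-way stream from Level 2: send a command, listen to what the robot reports back, and let a simple dynamic rule decide what to send next. The JSON wire format here is invented for illustration; real controllers each speak their own protocol.

```python
# Level 3: two-way communication plus a dynamic rule reacting to incoming data.
import json, socket

def reactive_loop(waypoints, host="192.168.0.10", port=30003):
    with socket.create_connection((host, port)) as sock:
        reader = sock.makefile("r")
        for wp in waypoints:
            sock.sendall((json.dumps({"move_to": wp}) + "\n").encode())
            reply = json.loads(reader.readline())        # robot reports back
            if reply.get("force", 0.0) > 50.0:           # rule: too much resistance,
                sock.sendall(b'{"cmd": "retract"}\n')    # so back off and stop
                break
```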
This example of a video feedback loop comes from Steina and Woody Vasulka's collaborative work with the Rutt/Etra Video Synthesizer in 1973. More info here.
A feedback loop in music, as expressed by the way Jimi Hendrix played his guitar (respect the classics, man)
In the context of robotics, a feedback loop enables us to integrate near-real-time functionality. Computer vision can be integrated for object recognition (which can be used to define the target and ideal tip location(s) and orientation(s) of the end-effector relative to a spectrum of different objects coming down a conveyor belt) and for reacting to dynamic changes within the environment in order to avoid new obstacles (humans). And when working with dynamic materials that change over time (foam, watercolor, etc.), computer vision and custom code can be used to dynamically adjust toolpath strategies to accommodate materials and creative challenges that embrace change.
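Here is a minimal sketch of the vision half of that loop, using OpenCV: find the object on the belt in a camera frame and turn its centroid into a pick target. The threshold value and the pixel-to-millimetre scale are made-up calibration numbers.

```python
# Find the largest blob in a frame and return its centroid as a pick target.
import cv2

def find_pick_target(frame, mm_per_px=0.5):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                   # nothing on the belt yet
    blob = max(contours, key=cv2.contourArea)         # largest blob = our object
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return (cx * mm_per_px, cy * mm_per_px)           # centroid in workpiece mm
```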
Robotic Bead Rolling from robotsinarchitecture on Vimeo.
Even if a real-time feedback loop is not fully realized in Robotic Bead Rolling by Friedman, Hosny and Lee from Harvard's GSD program, they are only a step away from doing so if it made sense for their project. This is significant in that the creative process, if you include topology optimization via structural analysis and procedural modeling (Rhino/Grasshopper/Millipede) as part of it, can easily be combined with the manufacturing process in the same flow (Grasshopper/KUKA|prc or HAL). The current simulation includes virtual feet (loads) applied to the surface, where presumably the support areas are also defined, but there's no reason why this can't be done with real people standing on real surfaces with the necessary sensors in place. A step beyond, which I'll explore a bit in Level 4, is the idea of integrating an evolutionary solver like the Grasshopper component Galapagos, where a range of foot sizes, loads and other variables could be used to quickly find the most fit solution from a range of options, without having a spectrum of individuals stand on the surface. Each approach has its benefits, as do combinations of both, e.g., simulation followed by real-world analysis of the most fit examples it suggests.
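To show the Galapagos-style idea in miniature, here is a toy evolutionary search over (foot size, load) pairs. The simulate_deflection() fitness function is an invented placeholder for the real structural analysis; only the mutate-score-select loop is the point.

```python
# A toy evolutionary search: mutate candidates, score them, keep the fittest.
import random

def simulate_deflection(foot_size_cm, load_kg):
    # placeholder fitness standing in for the structural analysis
    return -abs(load_kg / (foot_size_cm + 1e-6) - 3.0)

def evolve(generations=50, pop_size=20):
    pop = [(random.uniform(20, 35), random.uniform(40, 120)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: simulate_deflection(*p), reverse=True)
        parents = pop[: pop_size // 4]                      # keep the fittest quarter
        pop = [(f + random.gauss(0, 1), l + random.gauss(0, 5))
               for f, l in random.choices(parents, k=pop_size)]  # mutated offspring
    return max(pop, key=lambda p: simulate_deflection(*p))
```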
Robotics, Human Interactions & Space / Eyerobot 2 from robotsinarchitecture on Vimeo.
Batliner and Newsum from SCI-Arc erase the boundaries between the virtual and the real world by streaming data in real time.
Another characteristic of this level is the degree to which two-way communication takes place between, e.g., input devices (iOS/Android, game and custom controllers), sensors (vision, hearing, etc.), multiple robots, end-effectors (touch) and the software, all of which can work in concert with dynamically driven goals to attempt to reach the objective(s) in an environment and/or with materials that are changing.
Information saved from past sessions can be retrieved, replayed or used to influence the behavior of the robot. Third-party online services, along with their data, can also be integrated into the mix across the spectrum of APIs. Here are a few of them: Amazon, DataGov, DropBox, EnviroFacts, Google, PayPal, Twitter, NOAA, WolframAlpha and many more.
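As a small, hedged sketch of folding one of those online data sources into the mix: pull a value from a web API and let it bias a toolpath parameter. The URL and the JSON field are placeholders, not any real service's actual schema.

```python
# Pull a humidity value from a (placeholder) web API and adjust a feed rate.
import json, urllib.request

def humidity_adjusted_feed_rate(base_rate_mm_s=120.0,
                                url="https://api.example.com/weather?station=local"):
    with urllib.request.urlopen(url, timeout=5) as resp:
        humidity = json.load(resp).get("humidity", 50)   # 0-100 %
    # e.g. watercolor dries more slowly when it's humid, so the brush can move slower
    return base_rate_mm_s * (1.0 - 0.3 * (humidity / 100.0))
```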
Additional features at this level include: structural and environmental analysis; collision detection and simulations; kinematic solvers; real-time physics libraries within 3d environments; the ability of the robot to change out its own end-effectors, changing its functional space, e.g., from additive manufacturing (3d printer) to subtractive (CNC router); and scalability in terms of the number of robots interacting with each other as well as their degrees of freedom.
Absolut Originality from robotsinarchitecture on Vimeo.
Robotic Sound Processing from robotsinarchitecture on Vimeo.
If the audio were analyzed in real time and the robot control and milling took place at the same time, Robotic Sound Processing by Simon Lullin from Artis GmbH would be a great example of a Level 3 robotic experience.
an exploration in art + robotics, representing wind through digital fabrication and the tangible from robotsinarchitecture on Vimeo.
Ill.Mannered from robotsinarchitecture on Vimeo.
e-David Robot Painting from eDavid on Vimeo.
e-David can be classified as being in the late stages of Level 3 thanks to a beautiful sequential feedback loop that analyzes what it has painted so far and compares it to the original image via computer vision algorithms (absolute difference, blob detection, contour analysis, vector flow fields with a splash of Line Integral Convolution).
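A hedged sketch of that compare-then-repaint step: take the absolute difference between the current canvas and the source image, threshold it, and detect the blobs where the painting still diverges most, so the next strokes can target them. The threshold and minimum blob area are invented values, not e-David's actual parameters.

```python
# Compare canvas to target and return the worst-matching regions as stroke candidates.
import cv2

def stroke_candidates(canvas_bgr, target_bgr, min_area=25):
    diff = cv2.absdiff(canvas_bgr, target_bgr)                      # absolute difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > min_area]  # blob detection
    return sorted(blobs, key=cv2.contourArea, reverse=True)         # paint the worst first
```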
Level 4: Proactive
The Proactive level leverages all of the previous features and functionality in order to make sense of the data before events happen and achieve a set of (fitness) goals. Features of robots at this level are as much about fast motion planning with instantaneous trajectory analysis as they are about genetic algorithms, neurons & synapses, and (semi)autonomous behaviors with the eventual ability to pass the Turing Test. The behavior(s) of the robot(s) are realistic and adaptable based on real-time analysis of incoming data from sensors and online data streams. It is articulate, intelligent and creative. It can replicate and communicate with its peer robots without assistance from a human. At this level, there is no longer a need for manufacturing equipment to be separate machines like they are today (CNC mills, lathes, waterjets, laser cutters, etc.) because the robot can adapt its toolset to solve whatever manufacturing requirements are at hand.
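A tiny sketch of the reactive-versus-proactive distinction, under assumed numbers: instead of waiting for a reading to cross a limit, fit a trend to recent history and act when the predicted value will cross it. The threshold and prediction horizon are invented.

```python
# Act on where the data is heading, not just where it is (Level 3 -> Level 4).
import numpy as np

def should_act_proactively(history, limit=75.0, horizon_s=30.0, sample_dt=1.0):
    t = np.arange(len(history)) * sample_dt
    slope, intercept = np.polyfit(t, np.asarray(history, dtype=float), 1)
    predicted = slope * (t[-1] + horizon_s) + intercept   # value 30 s from now
    return predicted > limit                              # adapt *before* it happens
```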
Synthetic Biology + Robots
This story is incomplete without a similar exploration into Materials, where the most advanced materials are proactive examples that live under an umbrella where Synthetic Biology + Robots collaborate and test the limits of our imagination, philosophies, ethics and laws. I hope to add to this conversation in the near future from a creative roboticist’s perspective.
The sweet spot integrating neurobiology, robotics and synthetic biology – Joseph Ayers
As a side note, you might have noticed that the majority of the videos included in this post come from the Vimeo user Robots in Architecture, who are the creators of KUKA|prc and are responsible for the incredible Robots in Architecture conference. For more information on them, please check out their website here.