Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.
But that trepidation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.
You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!
The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Čapek’s play R.U.R., or Rossum’s Universal Robots. “Robot” comes from the Czech for “forced labor.” These robots were robots more in spirit than form, though. They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.
R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (think Terminator, The Stepford Wives, Blade Runner) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.
The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers: How autonomous does it need to be, for instance? But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously to some degree. And a robot can sense and manipulate its environment.
Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.
But it wasn’t until the 1960s that researchers built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.
Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.
Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s Honda started up a humanoid robotics program. It developed P3, which could walk pretty darn good and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)
Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.
So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which shoots out lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
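For a feel of what lidar actually hands the robot, here’s a minimal Python sketch (the angles and distances below are invented, not real sensor output): every laser return is just a direction plus a distance, and converting thousands of them into x-y-z points is what builds that 3-D map.

```python
import numpy as np

# Toy stand-in for one horizontal lidar sweep. A real sensor streams
# thousands of (angle, range) pairs per rotation; these are made up.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # beam directions, radians
ranges = 5.0 + np.random.rand(360)                        # measured distances, meters

# Each laser return becomes a point: polar coordinates to Cartesian.
x = ranges * np.cos(angles)
y = ranges * np.sin(angles)
z = np.zeros_like(x)  # a flat 2-D sweep; spinning multi-beam units add elevation too

point_cloud = np.stack([x, y, z], axis=1)  # an (N, 3) snapshot of the surroundings
print(point_cloud.shape)  # (360, 3)
```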
Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.
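If you’re wondering what “picking out landmarks” looks like in practice, here’s a hedged sketch using the open source OpenCV library’s ORB feature detector; the image file name is hypothetical. The robot finds distinctive corners and textures in a camera frame, and matching those against a stored map (or the previous frame) tells it what it’s looking at and roughly where it is.

```python
import cv2

# Hypothetical camera frame; swap in any grayscale image you have on disk.
frame = cv2.imread("hallway.png", cv2.IMREAD_GRAYSCALE)

# ORB is a fast, patent-free keypoint detector built into OpenCV.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(frame, None)

# Each keypoint is a visual landmark; its descriptor is a fingerprint the
# robot can match against landmarks it has seen before.
print(f"{len(keypoints)} landmarks found")
```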
Sensors are what keep robots from smashing into things. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around; machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.
New technologies promise to let robots sense the world in ways that are far beyond humans’ capabilities. We’re talking about seeing around corners: At MIT, researchers have developed a system that watches the floor at the corner of, say, a hallway, and picks out subtle movements being reflected from the other side that the piddling human eye can’t see. Such technology could one day ensure that robots don’t crash into humans in labyrinthine buildings, and even allow self-driving cars to see occluded scenes.
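The core trick is statistical rather than magical: average a patch of floor over time, then watch for the faint per-frame deviations that a person walking out of sight casts onto it. Here’s a toy Python version of that idea; it captures the flavor, not the MIT team’s actual algorithm.

```python
import numpy as np

def subtle_motion_signal(frames, rows, cols):
    """frames: (T, H, W) grayscale video of the floor near a corner.
    rows, cols: slices selecting the patch of floor to watch.
    Returns one number per frame; when it wiggles, something is moving
    just out of direct view."""
    patch = frames[:, rows, cols].astype(np.float64)
    background = patch.mean(axis=0)    # what the floor looks like on average
    deviation = patch - background     # tiny illumination changes, frame by frame
    return deviation.reshape(len(frames), -1).mean(axis=1)
```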
Within each of these robots is the next secret ingredient: the actuator, which is a fancy word for the combo electric motor and gearbox that you’ll find in a robot’s joint. It’s this actuator that determines how strong a robot is and how smoothly or not smoothly it moves. Without actuators, robots would crumple like rag dolls. Even relatively simple robots like Roombas owe their existence to actuators. Self-driving cars, too, are loaded with the things.
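The gearbox half of that combo is a simple trade-off: it multiplies the motor’s torque while dividing its speed. A back-of-the-envelope sketch, with numbers invented purely for illustration:

```python
# Made-up numbers for a small motor behind a 50:1 gearbox, the kind of
# pairing an actuator bundles into a robot's joint.
motor_torque_nm = 0.5      # torque at the motor shaft, newton-meters
motor_speed_rpm = 6000.0   # motor shaft speed
gear_ratio = 50.0          # gearbox reduction
efficiency = 0.85          # gearboxes lose some power to friction

joint_torque_nm = motor_torque_nm * gear_ratio * efficiency  # about 21 N*m at the joint
joint_speed_rpm = motor_speed_rpm / gear_ratio               # 120 rpm at the joint

print(f"joint torque ~ {joint_torque_nm:.1f} N*m, joint speed = {joint_speed_rpm:.0f} rpm")
```

More reduction means more strength but slower, jerkier motion, which is exactly the strength-versus-smoothness balance actuator designers fuss over.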
Actuators are great for powering massive robot arms on a car assembly line, but a newish field, known as soft robotics, is devoted to creating actuators that operate on a whole new level. Unlike mule robots, soft robots are generally squishy, and use air or oil to get themselves moving. So for instance, one particular kind of robot muscle uses electrodes to squeeze a pouch of oil, expanding and contracting to tug on weights. Unlike with bulky traditional actuators, you could stack a bunch of these to magnify the strength: A robot named Kengoro, for instance, moves with 116 actuators that tug on cables, allowing the machine to do unsettlingly human maneuvers like pushups. It’s a far more natural-looking form of movement than what you’d get with traditional electric motors housed in the joints.
And then there’s Boston Dynamics, which created the Atlas humanoid robot for the Darpa Robotics Challenge in 2013. At first, university robotics research teams struggled to get the machine to tackle the basic tasks of the original 2013 challenge and the finals round in 2015, like turning valves and opening doors. But Boston Dynamics has since turned Atlas into a marvel that can do backflips, far outpacing other bipeds that still have a hard time walking. (Unlike the Terminator, though, it does not pack heat.) Boston Dynamics has also begun leasing a quadruped robot called Spot, which can recover in unsettling fashion when humans kick or tug on it. That kind of stability will be key if we want to build a world where we don’t spend all our time helping robots out of jams. And it’s all thanks to the humble actuator.
At the same time that robots like Atlas and Spot are getting more physically robust, they’re getting smarter, thanks to AI. Robotics seems to be reaching an inflection point, where processing power and artificial intelligence are combining to truly ensmarten the machines. And for the machines, just as in humans, the senses and intelligence are inseparable—if you pick up a fake apple and don’t realize it’s plastic before shoving it in your mouth, you’re not very smart.
This is a fascinating frontier in robotics (replicating the sense of touch, not eating fake apples). A company called SynTouch, for instance, has developed robotic fingertips that can detect a range of sensations, from temperature to coarseness. Another robot fingertip from Columbia University replicates touch with light, so in a sense it sees touch: It’s embedded with 32 photodiodes and 30 LEDs, overlaid with a skin of silicone. When that skin is deformed, the photodiodes detect how light from the LEDs changes to pinpoint where exactly you touched the fingertip, and how hard.
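As a toy illustration of that idea (not Columbia’s actual math, which leans on a trained model), you can estimate where a press landed by averaging the photodiode positions, weighted by how much each one’s light reading changed:

```python
import numpy as np

# Hypothetical layout: four photodiode positions (millimeters) on the fingertip
# and the change in light each one registers when the silicone skin is pressed.
positions = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
delta_light = np.array([0.1, 0.7, 0.2, 0.9])  # bigger change = closer to the press

# Weighted centroid: where the light pattern shifted most is where you touched.
contact_xy = (positions * delta_light[:, None]).sum(axis=0) / delta_light.sum()
contact_strength = delta_light.sum()  # crude proxy: more total change, harder press

print(contact_xy, contact_strength)
```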
Far from the hulking dullards that lift car doors on automotive assembly lines, the robots of tomorrow will be very sensitive indeed.
Increasingly sophisticated machines may populate our world, but for robots to be really useful, they’ll have to become more self-sufficient. After all, it would be impossible to program a home robot with the instructions for gripping each and every object it ever might encounter. You want it to learn on its own, and that is where advances in artificial intelligence come in.
Take Brett. In a UC Berkeley lab, the humanoid robot has taught itself to conquer one of those children’s puzzles where you cram pegs into differently shaped holes. It did so by trial and error through a process called reinforcement learning. No one told it how to get a square peg into a square hole, just that it needed to. So by making random movements and getting a digital reward (basically, yes, do that kind of thing again) each time it got closer to success, Brett learned something new on its own. The process is super slow, sure, but with time roboticists will hone the machines’ ability to teach themselves novel skills in novel environments, which is pivotal if we don’t want to get stuck babysitting them.
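Stripped to its bones, reinforcement learning is a loop: try something, collect a reward, nudge your estimate of how good that move was, repeat. Here’s a tiny made-up Python version in which a “robot” learns to scoot a peg toward a hole ten steps away. None of it is Berkeley’s actual setup, but the trial-and-error logic is the same.

```python
import random

# Toy stand-in for the peg-in-hole task: the peg starts at position 0 and the
# hole sits at position 10. A small reward for getting closer, a big one for
# getting in. Invented for illustration only.
HOLE, ACTIONS = 10, [-1, +1]
q = {}  # Q-values: (position, action) -> estimated long-term reward

for episode in range(500):
    pos = 0
    for step in range(50):
        # Mostly do what has worked before, but explore 10% of the time.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((pos, a), 0.0))

        new_pos = max(0, min(HOLE, pos + action))
        reward = 1.0 if new_pos == HOLE else (0.1 if new_pos > pos else 0.0)

        # Q-learning update: shift the estimate toward reward plus future value.
        best_next = max(q.get((new_pos, a), 0.0) for a in ACTIONS)
        old = q.get((pos, action), 0.0)
        q[(pos, action)] = old + 0.1 * (reward + 0.9 * best_next - old)

        pos = new_pos
        if pos == HOLE:
            break
```

After a few hundred episodes the table of values steers the peg straight to the hole, which is the spreadsheet-sized equivalent of Brett’s “yes, do that kind of thing again.”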
Another tack here is to have a digital version of a robot train first in simulation, then port what it has learned to the physical robot in a lab. Over at Google, researchers used motion-capture videos of dogs to program a simulated dog, then used reinforcement learning to get a simulated four-legged robot to teach itself to make the same movements. That is, even though both have four legs, the robot’s body is mechanically distinct from a dog’s, so they move in distinct ways. But after many random movements, the simulated robot got enough rewards to match the simulated dog. Then the researchers transferred that knowledge to the real robot in the lab, and sure enough, the thing could walk—in fact, it walked even faster than the robot manufacturer’s default gait, though in fairness it was less stable.
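The reward in that kind of training can be as simple as “how closely does your pose match the dog’s pose at this instant of the clip?” Here’s a hedged one-function sketch of such a reward; the real project uses a richer, multi-term objective, but this captures the gist.

```python
import numpy as np

def imitation_reward(robot_joint_angles, reference_joint_angles):
    """Reward the simulated robot for matching the motion-capture reference
    at one timestep. Both arguments are arrays of joint angles in radians.
    The exponential keeps the reward between 0 and 1, highest when the
    poses line up exactly. Illustrative only."""
    error = np.sum((robot_joint_angles - reference_joint_angles) ** 2)
    return float(np.exp(-2.0 * error))

# One timestep of training: the closer the match, the closer the reward is to 1.
r = imitation_reward(np.array([0.10, -0.30, 0.50]),
                     np.array([0.12, -0.25, 0.45]))
```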