Artificial intelligence is here. It’s overhyped, poorly understood, and flawed but already core to our lives—and it’s only going to extend its reach. 

AI powers driverless car research, spots otherwise invisible signs of disease on medical images, finds an answer when you ask Alexa a question, and lets you unlock your phone with your face to talk to friends as an animated poop on the iPhone X using Apple’s Animoji. Those are just a few ways AI already touches our lives, and there’s plenty of work still to be done. But don’t worry, superintelligent algorithms aren’t about to take all the jobs or wipe out humanity.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
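To make that distinction concrete, here's a minimal, purely illustrative sketch in Python using the scikit-learn library: rather than writing rules by hand, we hand the model a few labeled examples and let it work out the pattern itself. (The data and task are invented for illustration.)

```python
# Learning from examples instead of hand-coded rules: we never tell the
# model what makes a season a season, we just show it labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Toy examples: [hours of daylight, average temperature in °C] -> season
X = [[9, 2], [10, 4], [15, 24], [16, 27], [12, 14], [13, 16]]
y = ["winter", "winter", "summer", "summer", "spring", "spring"]

model = DecisionTreeClassifier()
model.fit(X, y)                      # "training" on the examples

print(model.predict([[14, 22]]))     # -> likely ['summer']
```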

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. 

He had high hopes of a breakthrough in the drive toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a recognized academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one scored a win over a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for specific tasks, like understanding language. Others were inspired by the importance of learning to understand human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously only be completed by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math known as artificial neural networks, which are loosely inspired by the workings of brain cells. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
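The mechanics are easier to see in miniature. The sketch below (written in Python with NumPy, and not drawn from any real system) shows a single artificial "neuron" whose connection weights are nudged a little each time it passes over the training data, until its outputs match the examples.

```python
import numpy as np

# Toy training data: the neuron should learn a logical OR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)      # the "connections" that get adjusted
bias = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(2000):                       # repeatedly pass over the data
    pred = sigmoid(X @ weights + bias)      # the network's current guesses
    grad = (pred - y) * pred * (1 - pred)   # how wrong, and in which direction
    weights -= 0.5 * X.T @ grad             # adjust each connection slightly
    bias -= 0.5 * grad.sum()

print(np.round(sigmoid(X @ weights + bias)))  # -> [0. 1. 1. 1.]
```

Real deep-learning systems stack millions or billions of these adjustable connections in many layers, but the principle of nudging weights to fit examples is the same.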

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book coauthored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data could give machines new powers of perception. Churning through so much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in processing power. 

In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find. It's important to note, however, that the AI field has gone through several booms and busts (aka "AI winters") in the past, and another downturn remains possible today.

The State of AI Today

Improvements to AI hardware, growth in training courses in machine learning, and open source machine-learning projects have accelerated the spread of AI to other industries, from national security to business support and medicine.   

Alphabet-owned DeepMind has turned its AI loose on a variety of problems: the movement of soccer players, the restoration of ancient texts, and even ways to control nuclear fusion. In 2020, DeepMind said that its AlphaFold AI could predict the structure of proteins, a long-standing problem that had hampered research. This was widely seen as one of the first times a major scientific question had been answered with AI. AlphaFold was subsequently used to study Covid-19 and is now helping scientists study neglected diseases.

Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon, in particular, are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, sells devices with cameras that watch their owners and the world around them.

Much progress has been made in the past two decades, but there's plenty still to do. Despite the flurry of recent advances in AI and wild prognostications about its near future, there are still many things that machines can't do, such as understanding the nuances of language, applying commonsense reasoning, and learning new skills from just one or two examples.

AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans, an idea known as artificial general intelligence that may never be possible. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.

Generative AI and Its Controversies

There’s a particular type of AI making headlines—in some cases, actually writing them too. Generative AI is a catch-all term for AI that can cobble together bits and pieces from the digital world to make something new—well, new-ish—such as art, illustrations, images, complete and functional code, and tranches of text that pass not only the Turing test, but MBA exams. 

Tools such as OpenAI’s ChatGPT text generator and the Stable Diffusion text-to-image model manage this by sucking up unbelievable amounts of data, analyzing the patterns using neural networks, and regurgitating them in sensible ways. The natural language system behind ChatGPT has churned through much of the internet, as well as an untold number of books, letting it answer questions, write content from prompts, and—in the case of CNET—write explanatory articles for websites to match search terms. (To be clear, this article was not written by ChatGPT, though including text generated by the system is quickly becoming an AI-writing cliché.)
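ChatGPT itself relies on enormous transformer neural networks, but the underlying trick of continuing text based on patterns absorbed from training data can be shown in miniature. This toy Python sketch (the corpus and approach are invented purely for illustration) just counts which words tend to follow which, then "generates" text by sampling from those patterns.

```python
import random
from collections import defaultdict

# A stand-in "training corpus"; real systems ingest vast amounts of text.
corpus = ("the cat sat on the mat and the dog sat on the rug and "
          "the cat chased the dog around the mat").split()

# "Training": record which words tend to follow which in the data.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generation": repeatedly sample a plausible next word from those patterns.
word, output = "the", ["the"]
for _ in range(12):
    options = follows.get(word)
    if not options:           # no observed continuation; stop
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))       # new-ish text stitched from old patterns
```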

While investors are drooling, writers, visual artists, and other creators are naturally worried: Chatbots are (or at least appear to be) cheap, and humans require a livable income. Why pay an illustrator for an image when you can prompt DALL-E to make something for next to nothing?

Content makers aren’t the only ones concerned. Google is quietly ramping up its AI efforts in response to OpenAI’s accomplishments, and the search giant should be worried about what happens to people’s search habits when chatbots can answer questions for us. So long Googling, hello ChatGPT-ing?

Challenges loom on the horizon, however. AI models need more and more data to improve, but OpenAI has already used the easy sources; finding new piles of written text to use won’t be easy or free. Legal trouble also looms: OpenAI is training its systems on text and images that may be under copyright, perhaps even created by the very same people whose jobs are at risk from this technology. And as more online content is generated by AI, a feedback loop emerges in which the data used to train future models is produced not by humans but by machines.

Data aside, there’s a fundamental problem with such language models: They spit out text that reads well enough but is not necessarily accurate. As smart as these models are, they don’t know what they’re saying or have any concept of truth—that’s easily forgotten amid the mad rush to make use of such tools for new businesses or to create content. Words aren’t just supposed to sound good, they’re meant to convey meaning too.

The Challenges (and Future) of Artificial Intelligence

There are as many critics of AI as there are cheerleaders—which is good news, given the hype surrounding this set of technologies. Criticism of AI touches on issues as disparate as sustainability, ethics, bias, disinformation, and even copyright, with some arguing the technology is not as capable as most believe and others predicting it’ll be the end of humanity as we know it. It’s a lot to consider. 

To start, deep learning inherently requires huge swaths of data, and though innovations in chips mean we can churn through it faster and more efficiently than ever, there’s no question that AI research consumes a lot of energy. One startup estimated that in teaching a system to solve a Rubik’s Cube using a robotic hand, OpenAI consumed 2.8 gigawatt-hours of electricity—as much as three nuclear plants could output in an hour. Other estimates suggest training a single large AI model emits as much carbon dioxide as five American cars being manufactured and driven over their average lifespans.
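That comparison holds up as rough back-of-envelope math, assuming a typical nuclear reactor delivers on the order of 1 gigawatt of power: 3 plants × 1 GW × 1 hour = 3 gigawatt-hours, roughly the 2.8 gigawatt-hours that training run reportedly consumed.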

There are ways to reduce the impact: Researchers are developing more efficient training techniques, models can be chopped up so only the necessary sections are run, and data centers and labs are shifting to cleaner energy. AI also has a role to play in improving efficiencies in other industries and otherwise helping address the climate crisis. But boosting the accuracy of AI generally means having more complicated models sift through more data—OpenAI’s GPT-2 model reportedly had 1.5 billion parameters, while GPT-3 had 175 billion—suggesting AI’s sustainability could get worse before it improves.

Vacuuming up all the data needed to build these models creates additional challenges, beyond the shrinking availability of fresh data mentioned above. Bias remains a core problem: Data sets reflect the world around us, and that means models absorb our racism, sexism, and other cultural assumptions. This causes a host of serious problems: AI trained to spot skin cancer performs better on white skin; software designed to predict recidivism disproportionately rates Black people as more likely to reoffend; and flawed AI facial recognition software has already incorrectly identified Black men, leading to their arrests. And sometimes the AI simply doesn’t work: One violent crime prediction tool for police was wildly inaccurate because of an apparent coding error.

Again, mitigations are possible. More inclusive data sets could help tackle bias at the source, while forcing tech companies to explain algorithmic decision-making could add a layer of accountability. Diversifying the industry beyond white men wouldn’t hurt, either. But the most serious challenges may require regulating—and perhaps banning—the use of AI decision-making in situations with the most risk of serious damage to people. 

Those are a few examples of unwanted outcomes. But people are also already using AI for nefarious ends, such as creating deepfakes and spreading disinformation. While AI-edited or AI-generated videos and images have intriguing use cases—such as filling in for voice actors after they leave a show or pass away—generative AI has also been used to make deepfake porn that grafts famous faces onto adult performers, and to defame everyday individuals. And AI has been used to flood the web with disinformation, though fact-checkers have turned to the technology to fight back.

As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Meta have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or Black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI. 

But the hype around generative models suggests we still haven’t learned our lesson when it comes to AI. We need to calm down; understand how it works and when it doesn’t; and then roll out this tool in a careful, considered manner, mitigating concerns as they’re raised. AI has real potential to better—and even extend—our lives, but to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.

Learn More
  • AI Shouldn’t Compete With Workers—It Should Supercharge Them
    The economy could get a boost if machine-learning engineers switched from copying human abilities to augmenting them.
  • How to Stop Robots From Becoming Racist
    Algorithms can amplify patterns of discrimination. Robotics researchers are calling for new ways to prevent mechanical bodies acting out those biases.
  • Generative AI Won’t Revolutionize Game Development Just Yet
    Hypesters say artificial intelligence will one day automate all the hard work of video game creation. But it’s not that simple.
  • It’s Time to Teach AI How to Be Forgetful
    By emulating the human ability to forget some of the data, psychological AIs will transform algorithmic accuracy.
  • The Race to Build a ChatGPT-Powered Search Engine
    A search bot you converse with could make finding answers easier—if it doesn’t tell fibs. Microsoft, Google, Baidu, and others are working on it.
  • AI Reveals the Most Human Parts of Writing
    When do writers want help finding inspiration? And when do they want full control? Computers could expose the true future of the medium.
  • ChatGPT Isn’t the Only Way to Use AI in Education
    AI can be a tool to create meaningful connections and learning experiences for children—and may help foster more equitable outcomes.
  • Where the AI Art Boom Came From—and Where It’s Going
    In the past year, algorithms got a lot better at generating illustrations, art, and photo-realistic scenes. Next up? Video.
  • Plus! AI’s hallucination problem and more WIRED artificial intelligence coverage.
