Avi Loeb believes AI could save humanity—but first we have to stop feeding it junk food
Sometimes the thought of Avi Loeb being an extremely advanced AI has crossed my mind. It’s the only way I can explain that a man so prolific in his research, so busy with teaching, trips, and conferences, replies so swiftly to email.
Whether early in the morning, when the sun is barely out—after jogging in the forests by his home near the Harvard University campus in Cambridge, Massachusetts—or from the middle of the Pacific Ocean, after a long day searching for evidence of the first known interstellar meteor on the coast of Papua New Guinea, Loeb always answers my emails, seemingly within seconds of me sending them. But his replies, always kind, warm, and illuminating, can’t come from any of our current AIs.
Loeb—who is the Frank B. Baird Jr. Professor of Science at Harvard, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and bestselling author of Extraterrestrial and Interstellar—does have a lot of thoughts about artificial intelligence.
He has been thinking about the current approach to training AI, and how we can correct the path so that the technology can harness the best of humanity. Loeb believes that AI could ultimately become humanity’s eternal spirit, traveling the universe in the same way that ancient alien civilizations may have already done, sending AI probes across the immensity of the Milky Way.
I spoke with Loeb about all of this via video conference, and came out of the conversation full of both hope and despair. Loeb is a scientist who is never afraid to ask questions that others ignore. He has built a reputation as a (sometimes controversial) maverick in the scientific community by challenging the dominant orthodoxy anchored in the eternal fight for research money and the fear of ridicule in academia.
He genuinely believes that science would have progressed faster if the adults practicing it were guided by their childhood curiosity. “Instead, experts often worry about their public image and pretend that they can explain all new evidence based on their past knowledge,” he says.
Teach AI as we would our own children
Loeb believes AI is being developed too rapidly. Today, most systems are trained on vast amounts of data pulled from across the internet. This approach carries significant risks, Loeb tells me, as it could embed the worst aspects of humanity into the algorithms that will shape our future.
He compares the process of training AI to how we raise children, emphasizing the importance of being just as cautious with AI as we are with young minds. “The way I understand it is that AI is being trained on all texts that are available on the internet. This is equivalent to taking a teenager or a young kid and exposing them to everything that you find in magazines, newspapers, everywhere,” he says.
In very broad terms, this force-feeding is a product of companies’ insatiable need to keep training their large language models on as much information as possible so that they become, in essence, more complex and “smart.” The more text the models eat, the better they get at responding to queries by predicting the most likely next bit of language.
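To make that objective concrete, here is a toy sketch of the idea in Python. It is purely illustrative, not any company’s actual system: it just counts which word follows which in a scrap of training text and predicts the most frequent continuation, the bare-bones version of “predicting the most likely next bit of language.”

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a training corpus, then predict the likeliest continuation.
# Real large language models use neural networks trained on billions
# of pages, but the objective is the same in spirit.
corpus = "the model eats text and the model predicts text".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("model"))  # whichever word followed "model" most often
```

The key point for Loeb’s argument: a model built this way can only ever reflect whatever text it was fed.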
While this has provided some immediate satisfaction to corporations and consumers, the strategy will inevitably lead to long-term harm to AI’s “brain”—and ultimately to everyone who uses AI. “It’s like saying, ‘Okay, we have some kids that we want to grow, and we have to feed them, so we will feed them with junk food so that they grow very fast,’” Loeb says. “You might say, ‘Okay, well, that may be a solution for one generation.’ But I don’t want to give authority to these kids that are eating junk food because they would be unhealthy in their mentality.”
To extend the metaphor, we know that too much junk food leads to unhealthy outcomes; eventually, a bad diet can cause disease and death. It’s not so different for AI. As companies run out of fresh material, the quality of the available text keeps decreasing until, eventually, AI is trained on its own output scattered around the web, causing models to collapse.
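A rough way to see why this happens: if each generation of a model is trained mostly on the previous generation’s output, rare patterns tend to get dropped and the distribution narrows. The simulation below is a deliberately crude illustration of that feedback loop using word frequencies, not a claim about any specific system.

```python
import random
from collections import Counter

random.seed(0)

# Start with a "human" corpus containing both common and rare words.
corpus = ["common"] * 90 + ["rare"] * 10

for generation in range(5):
    counts = Counter(corpus)
    words, weights = zip(*counts.items())
    # Each generation is trained only on samples of the previous
    # generation's output. Rare words drift toward extinction and,
    # once gone, can never come back: a toy version of model collapse.
    corpus = random.choices(words, weights=weights, k=100)
    print(f"gen {generation + 1}: {Counter(corpus)}")
```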
Low-quality training data can lead to systems that reflect and even amplify negative human behaviors, from racism to gender discrimination. Loeb equates it to raising a child in an environment filled with harmful influences. It’s a parent’s job to curate information that will help them raise their children to be responsible adults. In school, kids follow a structured curriculum for a reason. Shouldn’t we be equally careful in selecting the data we use to train AI?
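In practice, the curation Loeb calls for can begin with something as simple as a quality gate applied before training. The filter below is a hypothetical sketch with placeholder terms; production pipelines rely on learned classifiers, deduplication, and human review, but the principle of choosing what the model “eats” is the same.

```python
# Hypothetical pre-training quality gate: admit a document only if it
# passes a few simple checks. The blocklist terms are placeholders.
BLOCKLIST = {"slur_example", "spam_example"}

def passes_quality_gate(doc: str) -> bool:
    words = doc.lower().split()
    if not words:
        return False
    if any(w in BLOCKLIST for w in words):
        return False  # screen out harmful content
    if len(set(words)) / len(words) < 0.3:
        return False  # reject highly repetitive, spammy text
    return True

documents = ["A thoughtful essay on cooperation.", "spam_example " * 20]
training_set = [d for d in documents if passes_quality_gate(d)]
print(len(training_set))  # 1: only the curated document survives
```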
“Some of this material has negative content that is not constructive to society,” Loeb says. “Some people are not constructive to society. Instead [we should] imagine a society that is far better than what we humans were able to produce in the past.”
Better curation of training data is a moral obligation for future generations, Loeb argues.