If 2023 was the year that AI finally broke into the mainstream, 2024 could be the year it gets fully enmeshed in our lives -- or the year the bubble bursts.
But whatever happens, the stage is set for another whirlwind 12 months, coming in the wake of Hollywood's labor backlash against automation; the rise of consumer chatbots, including OpenAI's ChatGPT and Elon Musk's Grok; a half-baked coup against Sam Altman; early inklings of a regulatory crackdown; and, of course, that viral deepfake of Pope Francis in a puffer jacket.
To gauge what we should expect in the new year, The Times asked a slate of experts and stakeholders to send in their 2024 artificial intelligence predictions. The results alternated between enthusiasm, curiosity and skepticism -- an appropriate mix of sentiments for a technology that remains both polarizing and unpredictable.
Regulators will step in, and not everyone will be happy about it.
When a surgeon or a stockbroker goes to work, they do so with the backing of a license or certification. Could 2024 be the year we start holding AI to the same standard?
"In the next year, we may require AI systems to get a professional license," said Amy Webb, chief executive of the Future Today Institute, a consulting firm. "While certain fields require professional licenses for humans, so far algorithms get to operate without passing a standardized test. You wouldn't want to see a urologist for surgery who didn't have a medical license in good standing, right?"
It would be a development in line with the political shifts of the last few months, which brought several efforts to regulate this powerful new technology more conscientiously, including a sweeping executive order from President Joe Biden and a draft Senate policy aimed at reining in deepfakes.
"I'm particularly concerned about the potential impact (generative AI) could have on our democracy and institutions in the run-up to November's elections," Sen. Chris Coons, D-Del., who co-sponsored the deepfakes draft, said of the coming year. "Creators, experts and the public are calling for federal safeguards to outline clear policies around the use of generative AI, and it's imperative that Congress do so."
Regulation isn't just a domestic concern, either. Justin Hughes, a professor of intellectual property and trade law at Loyola Law School, said he expects the European Union to finalize its AI Act next year, triggering a 24-month countdown until broad AI regulations take effect in the EU. Those would include transparency and governance requirements, Hughes said, but also bans on dangerous uses of AI, such as inferring someone's ethnicity or sexual orientation or manipulating their behavior. And as with many European regulations, the effects could trickle down to American firms.
Yet the rising calls for guardrails have already triggered a backlash. In particular, a movement known as effective accelerationism -- or "e/acc" -- has picked up steam by calling for rapid innovation with limited political oversight.
Julie Fredrickson, a tech investor aligned with the e/acc movement, said she envisions the new year bringing further tensions around regulation.
"The biggest challenge we will encounter is that using (tools that) compute IS speech and that raises critical constitutional issues here in the United States that any regulatory framework will need to deal with," Fredrickson said. "The public must make our government understand that it cannot make trade-offs restricting our fundamental rights like speech."
Authenticity will grow more important than ever.
Imagine being able to know with certainty whether that vacation photo your friend just posted on Instagram was taken in real life or generated on a server farm somewhere.
Mike Gioia, co-founder of the AI workflow startup Pickaxe, thinks it might soon be possible. He predicts Apple will launch a "Photographed on iPhone" stamp next year that would certify AI-free photos.
Other experts agree that efforts to bolster trust and authenticity will only grow more important as AI floods the internet with synthetic text, photos and videos (not to mention bots aimed at imitating real people). Andy Parsons, senior director of Adobe's Content Authenticity Initiative, said he anticipates increased adoption of "Content Credentials," or metadata embedded in digital media files that, almost like a nutrition label, would record who made something and with what tools.
Such stopgaps could prove particularly important as America enters a presidential election year -- the first in its history to take place amid a torrent of cheap, viral AI media.
Bill Burton, former deputy press secretary for the Obama administration, predicted: "The most viewed and engaged videos in the 2024 election are generated by AI."
The steam engine of innovation keeps chugging along ...
Last year brought substantial advances in AI technology, from milestones for mainstream products -- ChatGPT, deemed the fastest-growing consumer app in history, was upgraded to its fourth-generation model -- to continued breakthroughs in AI research and development.
Many AI insiders think that pace of innovation will continue into the new year.
"Every business and consumer app user will be using AI and they won't know it," said Ted Ross, general manager of the City of Los Angeles Information Technology Agency. "I predict that artificial intelligence features and high-visibility [generative] AI platforms, such as ChatGPT, will rapidly integrate into existing business and consumer applications with the user often unaware."
Other developments could be more niche but no less impactful. Some experts predict a rise in leaner and more targeted alternatives to the "large language models" that underlie ChatGPT and Grok. The AI itself could get better at self-improvement, too.
"There hasn't been a lot of tooling that targets speeding up AI research," said Anastasis Germanidis, chief technology officer of the synthetic video startup Runway. "We'll likely see more of those tools emerge in the coming year," including to help write or debug code.
... Unless the bubble bursts.
The AI market is frothy, but not everyone thinks the glory days can last.
"A hyped AI company will go bankrupt or get acquired for a ridiculously low price" at some point in 2024, Clément Delangue, chief executive of the open source AI development community Hugging Face, wrote.
Eric Siegel, a former Columbia University professor and the author of "The AI Playbook: Mastering the Rare Art of Machine Learning Deployment," has struck an even warier tone.
"There will be growing consternation as the lack of a killer (generative) AI app becomes increasingly apparent," Siegel said, referencing an app that would drive widespread adoption of AI.