The future of technology is notoriously difficult to predict with any degree of accuracy. Just ask those hoping to be driving a flying car on Mars by now.
Just a few years ago, many experts were predicting the end of the internet as we know it unless net neutrality was restored. That didn’t happen. Now, as generative AI has become the big topic of conversation, doomsday prophecies abound, ranging from age-old fears of worker displacement to human extinction. The long record of failed predictions should serve as a warning for lawmakers to approach these questions with humility and not legislate against problems that do not yet exist.
In 2015, net neutrality became the technology issue of the day when the Federal Communications Commission (FCC) adopted the Obama-era Open Internet Order. The fear was that without a robust regulatory framework to protect the internet, internet service providers (ISPs) would engage in anticompetitive behaviors such as blocking access to content or throttling speeds for particular websites and services. New businesses would be left in the dust, and free speech would be at the mercy of corporations. Net neutrality, advocates argued, was needed so that ISPs would treat all internet traffic the same and consumers would be protected.
But here’s the rub: none of these dreaded outcomes ever materialized. When the Trump-era FCC rescinded the regulations in 2017, consumers were just fine, and the internet thrived. Unfortunately, that hasn’t stopped the government from trying to reinstate net neutrality rules, an effort ultimately blocked in court, despite the negative impact those rules were likely to have on investment and newer capabilities like network slicing.
Today the debate surrounding AI, while different in some important ways, is following a similar pattern to the one surrounding net neutrality. A small but vocal minority is demanding that the government impose burdensome new regulations on AI to get ahead of, at times literal, doomsday predictions. Initial fears about the dangers of AI ranged from mass job displacement, to malicious actors undermining democracy through deepfakes and misinformation, to human extinction.
Some high-profile pieces of AI legislation were written with these potential pitfalls in mind. The EU AI Act, for instance, classifies use cases according to risk levels, only some of which are grounded in reality. In California, Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which, had it been enacted, would have imposed excessive liability on developers of powerful AI models. Fortunately, Newsom recognized the bill’s flaws and is now working on a new AI safety proposal.
Politicians are again looking to pass legislation in a vain attempt to protect consumers from harms that may never materialize. And because technological innovation is extremely difficult to predict, these pre-emptive actions are doomed to fail. AI, in particular, has advanced so quickly and over such a short time frame that the EU AI Act already had to be amended to account for generative AI like ChatGPT.
These repeated failures to regulate the future reveal the problem with rules that aim to resolve a harm before it occurs. In seeking to avoid a future problem, regulators sometimes unintentionally create a new one. They now risk doing the same with AI.
Laws such as the EU AI Act and the failed California AI safety bill naively target potential harms while ignoring the possibility that they will spawn new ones. Ex-post regulations, which are implemented after the fact and target proven harms, work better for a rapidly evolving industry like AI, with so much potential to benefit consumers. Otherwise, we endanger the very benefits artificial intelligence promises.
The failed doomerism of the net neutrality debates is an instructive lesson: doomsday predictions about technology are often wrong, and the future of technology is all but impossible to predict. One can rarely tell what the next big thing will be, or even whether it will stay big. Regulators should learn from the past and regulate technology only on the basis of hard data, after it develops, rather than trying to predict outcomes before they occur.