The U.S. presidential election in November will have an enormous impact on the trajectory our country takes over the next four years. The two candidates have staked out often starkly different positions on some of the most important issues of our time. Yet on one key question the silence has been deafening. Over the next decade, the development and application of Artificial Intelligence will change the way we live and work. It may even change the way we think and vote. But the question of how we should regulate this technology barely registers as a blip on the nation’s political radar.
This is not to say that the candidates don’t care about Artificial Intelligence. Kamala Harris served as the current administration’s AI czar, while Donald Trump has promised to repeal the federal government’s current policy framework. But few voters could tell you where either party stands on even the biggest-picture issues.
We cannot afford to remain complacent about this paradigm-breaking technology.
The impact of AI is already with us. Large language models, such as GPT-4, Claude, and Gemini, have already begun to replace human workers in many industries. Machine learning algorithms are powering new advances in our healthcare system at the same time as they risk entrenching old biases. And while it is unlikely that AI policy will play any role in the outcome of the election, AI itself is already being used by bad actors as a tool to try to sway the results.
Legislation to regulate the development and application of AI is needed and it is needed now. Unfortunately, with partisan divisions feeding the current congressional gridlock, there is little appetite to expend political capital on an issue that does not yet matter to voters. But we cannot afford for this shortsightedness to prevail. The stakes are simply too high.
Over the last year, I have been working on a book about AI – about its enormous potential and its equally enormous dangers, and about how we can harness the former and manage the latter (Transcend: Unlocking Humanity in the Age of AI will be published by Post Hill Press in early 2025). As part of this process, I have been sitting down with distinguished thinkers in the fields of science, philosophy, and politics to hear their views on how we should be tackling the defining issue of our time. I recently met with United States Senator Richard Blumenthal to talk about how we can effectively regulate a technology that we don’t yet fully understand.
Senator Blumenthal is co-author of the Bipartisan Framework for U.S. AI Act, also known as the Blumenthal-Hawley Framework. Created with Senator Josh Hawley, the Bipartisan Framework has five pillars:
- Establish a Licensing Regime Administered by an Independent Oversight Body
- Ensure Legal Accountability for Harms
- Defend National Security and International Competition
- Promote Transparency
- Protect Consumers and Kids
(Source: “Blumenthal & Hawley Announce Bipartisan Framework on Artificial Intelligence Legislation,” U.S. Senator Richard Blumenthal, senate.gov)
As Senator Blumenthal put it when discussing the framework with me, “AI has enormous potential for doing good, whether in medical research and treatment, exploring space, or simply doing the rapid calculations that are necessary for all kinds of other benefits to people around the globe. But there are also perils. Some seem like science fiction, but they are very real. The idea that AI models could be smarter than human beings and could, in effect, program themselves to be out of control, is a frightening prospect.” Senator Hawley frames the need for action in similarly stark terms: “Congress must act on AI regulation … Our American families, workers, and national security are on the line.”
Any serious attempt at regulation has to address both the utopian and dystopian possibilities that the development of AI will open up. Given the extreme nature of some of the risks, which could potentially include human extinction, Senator Blumenthal argues that it is necessary for regulation to pay close attention to the dangers: “we should be very risk-oriented behind what we do.” The regulation of the pharmaceutical industry offers a model here. The United States imposes strict regulations on the development and manufacturing of pharmaceuticals, ensuring that the entire process is subject to government oversight. And yet America’s leading role in the global pharmaceuticals industry shows that this can be done without damaging market competitiveness and vital progress in the sector.
As with pharmaceuticals, Senator Blumenthal believes the risks surrounding AI are simply too grave to permit self-regulation by either businesses or individuals. “I think the government has to support and sustain an oversight mechanism, an entity that provides this kind of set of safeguards or regulations,” he explains. This regulatory entity would have the authority to oversee AI technologies, ensuring that they are developed and deployed responsibly. However, there are important challenges to overcome in crafting the necessary legislation. I spoke to the senator just a few days after the Supreme Court’s decision to overturn the Chevron deference doctrine, which allowed administrative agencies considerable latitude to interpret ambiguous statutes. Blumenthal acknowledges these difficulties but remains steadfast in his belief that creating an effective regulatory body is not only possible but essential. “This body, whatever it’s going to be called, has to feel its way and establish first that it has the power to regulate, and second, what that will include.”
Regulation is essential, then, and it must be risk-oriented. But we must be careful that we don’t throw out the baby with the bathwater. The only way to reduce the risk of AI to zero through legislation would be to introduce a blanket ban on all development in the field. Taking this approach is neither desirable nor possible given the certainty of international competition. Instead, the need to protect against potential harms must be balanced with the imperative to foster innovation. “We don’t want to stifle innovation,” says Blumenthal. “That’s why the regimented regulation has to be risk-based. If it doesn’t pose a risk, we don’t need a regulator.”
One of the key risks of AI is that as it develops it will render vast numbers of workplace roles redundant, contributing to mass unemployment, with all its attendant economic, psychological, and social consequences.
Senator Blumenthal argues that government has a responsibility to manage this risk. Most obviously, he says, “we have to do what we’ve done in the past, which is to provide support for training programs and skills to enable workers to do jobs that are in greater supply.” We cannot and should not attempt to stop the AI revolution from making work more efficient, but we must ensure that the frameworks and support networks are in place to enable workers to retrain and upskill so that they can move from declining career roles to those that will flourish in an AI-powered future.
Generals, they say, are always fighting the last war. Similarly, governments tend to focus their efforts on regulating the last revolution and responding to the last disaster. That is not an option with AI.
As I suggest in the book, and as Senator Blumenthal also argues, it will be essential here, as in few other areas, for governments to get ahead of the curve by developing regulatory tools that are appropriate not just for the last breakthrough but also for the next one.
How we deal with the looming problem of AI-driven misinformation and disinformation will be an important test case. Senator Blumenthal endorses watermarking as a way to verify AI outputs. As I have discussed elsewhere, when partnered with the immutable ledgers of blockchain technology, this could indeed be a powerful way of providing a point of information stability in the shifting sands of a future in which AI can put words in people’s mouths in a heartbeat. But to make this kind of solution work, we need to move quickly. It is essential that we have the necessary technical and regulatory frameworks in place before we are overwhelmed by rapidly developing AI capabilities.
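To make the idea concrete, here is a deliberately simplified sketch in Python — my own illustration, not Senator Blumenthal’s proposal or any deployed standard. It fingerprints a piece of content and records that fingerprint in a toy hash-chained, append-only ledger standing in for a blockchain, so the exact text can later be checked against the record:

```python
import hashlib
import json
import time

def fingerprint(text: str) -> str:
    """Content fingerprint: a SHA-256 hash of the exact text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

class Ledger:
    """A toy append-only, hash-chained ledger (stand-in for a blockchain)."""

    def __init__(self):
        self.entries = []

    def record(self, content_hash: str, publisher: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_hash": content_hash,
            "publisher": publisher,
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        # Chaining each entry to its predecessor makes silent tampering detectable.
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self, text: str) -> bool:
        """Check whether this exact text was ever registered."""
        return any(e["content_hash"] == fingerprint(text) for e in self.entries)

ledger = Ledger()
statement = "Official campaign statement, October 1."  # hypothetical example
ledger.record(fingerprint(statement), publisher="campaign-press-office")

print(ledger.verify(statement))                  # True: provenance confirmed
print(ledger.verify(statement + " (doctored)"))  # False: altered or fabricated
```

A real system would replace the toy ledger with an actual blockchain or trusted timestamping service, and would watermark the model’s output itself rather than simply hash the final text. The point is only that a stable, append-only record gives third parties something to verify a statement against.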
At the moment, regulations are lagging behind the technology, and we cannot afford for this trend to continue.
Senator Blumenthal’s suggestion that we think about AI regulation in terms of the models we use for the pharmaceutical industry provides a valuable analogy. But in my view, it doesn’t take us far enough. There are at least two key differences between AI development and the pharmaceutical industry. The first is that, unlike the drug development process, the barriers to entry are minimal when it comes to adapting existing AI models for new purposes.
For many people with entry-level software development skills, this is something that can be done at home. Second, while the manufacturing and distribution of pharmaceuticals requires significant infrastructure, computer code can be replicated endlessly and transmitted anywhere on the planet in a fraction of a second. The possibility of problematic AI being created and leaking out into the wild is simply much higher than is the case for new and dangerous drugs.
As Mustafa Suleyman, co-founder of DeepMind and CEO of Microsoft AI, argues in his book The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, the day is not far away when individuals will be able to use AI to create potentially devastating biological organisms at low cost and in the privacy of their own homes.
The direct and indirect dangers that arise from possibilities like this go beyond anything posed by pharmaceuticals, and this difference needs to be accounted for in our regulatory processes.
It is also important to consider that our current frameworks for pharmaceutical regulation are not even close to being flawless. There is a very large and flourishing black market for drugs in the United States, which shows the limits of the government’s ability to control manufacturing, while the prescription drug crisis shows that even where regulation does reach, current laws are not always able to contain possible harms.
Given the potential for AI to generate extinction-level outcomes, it may be necessary to think in terms of the regulatory frameworks surrounding nuclear weapons and nuclear energy rather than those that apply in the drug industry.
Whatever approach we take, “We need to do something quickly,” as Senator Blumenthal says. “And so far, we haven’t been doing it.” With the first land war in Europe for more than a generation, concerns about the rise of China, and political turmoil in many Western nations, it is easy to be distracted by immediate dangers that appear more pressing. But we cannot afford to shunt AI to the back of the line. The consequences of allowing ourselves to be distracted from the most important challenge humanity may ever face could be fatal.
Adapted/published with permission from ‘TRANSCEND’ by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2024, Faisal Hoque. All rights reserved.
Look for my latest book, TRANSCEND, available for pre-order now.