On November 8, 2023, Daron Acemoglu testified at the US Senate Committee on Homeland Security and Governmental Affairs Hearing on “The Philosophy of AI: Learning from History, Shaping Our Future.”
Executive Summary: Digital technologies have already ushered in a multifaceted economic, social, and political transformation. Artificial intelligence (AI) promises to amplify these epochal changes, for good and bad. Although these tools have tremendous potential to expand our production, communication, and informational capabilities, they also pose major risks to economic prosperity, social cohesion, democracy, and national security — as did many other transformative technologies in the past.
These risks are rooted in three related social changes: (1) economic shifts, especially greater inequality, brought about by new technologies can create social and political tensions; (2) digital tools, including AI, alter who controls information and how that information can be used and manipulated, with direct implications for political behavior and democracy; (3) these technologies also unleash myriad social changes, affecting aspirations and norms, with potentially far-reaching effects. All of these risks apply to both democracy and national security. It is critical to understand them, learn from history about when humanity has and has not managed to develop institutions and norms to deal with similar risks, and chart a clear-eyed regulatory course to guard against the worst eventualities.
My overarching argument is that there is a pro-human (meaning pro-worker and pro-citizen) direction for AI tools that would be much better for shared prosperity and for democracy, and therefore for national security. We need to take AI risks seriously because, although a pro-human direction for AI could strengthen prosperity, democracy, and security, we are currently on a very different and worrying trajectory.