Frequently Asked Questions about Power and Progress

Based on about 100 presentations and talks given around the world since the book was published in May 2023


Introductory Questions

Who is the intended audience of Power and Progress?

We wrote Power and Progress for anyone who is interested in technology, progress, prosperity, influence, or what will become of artificial intelligence. We are economists, and there is plenty of economics (some long-established principles and some new ideas) throughout the book, but the material is intended to be accessible to anyone interested in these questions.

What is the contribution of Power and Progress — why did you write a whole book?

The last 40–50 years have shown startling growth in wage inequality based on education and occupation in the United States and other industrialized countries. We wanted to understand and explain a) what really delivered shared prosperity over the past 1,000 years, b) what went right during the Industrial Revolution and post-war period in the mid-20th century, c) what went wrong in the digital age from about 1980 onward, and d) how we can get back on the path toward shared prosperity—for example, by using AI in an inclusive fashion.

Could you explain some of the terms you frequently use in Power and Progress?

You can find a modest glossary for some key terms used in our book here.

Modern Day Techno-Optimism

What is wrong with techno-optimism and what is your alternative?

Power and Progress rejects the fundamental premise of techno-optimism, a widespread view among the tech elite. In our reading of the evidence, unchecked technological progress does not necessarily lead to shared prosperity. Genuine progress requires real efforts to help people who would otherwise simply be displaced by machines.

Are you fundamentally anti-technology or anti-AI?

No, but the prevailing ideology of AI development today does not pay close enough attention to the workers whose jobs it will eliminate. This situation will not improve unless and until we develop strong enough countervailing powers that support worker voice and workers’ influence over the use of these technologies.

For all the discontents, haven’t technologies like radio, the internet, and electricity improved our lives writ large?

This misses the point. New technologies are useful—that’s why they get adopted. The question is rather: can we find a path for technology along which more people gain? This happened in the mid-20th century, but much less after 1980. Today, many internet platforms and emerging artificial intelligence systems were not designed with (all) humans’ best interests in mind. Technology is best when it is human-centered.

What do you think of generative AI, like GPT-4 and ChatGPT, and the trajectory of artificial intelligence?

All of our concerns about automation apply to the advent of generative AI and are even more urgent now that these tools are widely available and increasingly used. Given how quickly these artificial intelligence technologies develop and evolve, it is essential to act now to ensure that the direction of AI benefits all workers, not just the elite few who develop it. At present, we worry that artificial intelligence development does not adequately consider workers or their capabilities, and instead aims to replace them. We extend these ideas to emerging developments in AI in our op-eds “Big tech is bad. Big AI will be worse” and “OpenAI’s drama marks a new and scary era in artificial intelligence.”

Where does ChatGPT fall on the spectrum from “automation” to “augmentation”?

Currently, ChatGPT is not designed with the concept of machine usefulness in mind. ChatGPT is very good at giving supposedly authoritative—but sometimes incorrect—answers, replacing human research and decision-making. This behavior is an example of automation. Similar technology could instead be used to augment human skills by offering multiple answers, citations, and explanations, giving users the context they need to engage and make better-informed decisions.

What are the implications of artificial intelligence for the developing world?

Artificial intelligence is likely to fail middle- and lower-income economies. Instead of tackling the concerns that matter most to developing countries, such as improving agriculture and education, artificial intelligence may actually reduce their opportunities for economic growth. Currently, developing countries participate in the global economy partly through manufacturing, based on relatively low labor costs, but many routine factory tasks are at imminent risk of automation.

Other Topics

You don’t discuss Twitter much in the book. Why is that and what do you think about it?

Power and Progress was written before Elon Musk officially took over Twitter and instituted many significant changes (such as renaming the platform X and introducing a subscription tier). For more of our current thoughts on platform models, see our policy brief “The Urgent Need to Tax Digital Advertising.”

What was so special about Britain in the 18th and 19th centuries for it to lead the Industrial Revolution?

As we discuss at length in the book, the Industrial Revolution was led by the “middling sort,” aka the British middle class. The potential for these people to achieve social mobility, along with their enthusiasm for technology built to respond to real-world problems, produced a series of profound breakthroughs in how we interact with the natural world—and how we interact with each other. What set Britain apart at the time were the social and institutional frameworks that allowed for the growth and empowerment of this middle class.

If the same technological innovations were used in some contexts to subjugate or extract, but used to enable something like shared prosperity in other contexts (e.g., the rise of manufacturing in Asia), what makes the difference? Does this suggest that politics and culture matter more than the technology itself?

Yes, what really matters is who has power in a society—this then determines (or strongly affects) which technologies are adopted or further developed.

Were the Luddites right?

It is hard to know the precise motivations of the Luddites when they destroyed early spinning and weaving machines (in several phases, at the end of the 18th century and in the early 19th century). Most textile workers at that time were not against technology per se — for example, handloom weavers famously used improved machines in their home production. However, many of these same weavers naturally resented the specific way that technology came to be used in large factories, as this undermined their ability to earn a decent living. As we write in “Learning from Ricardo and Thompson: Machinery and labor in the early industrial revolution, and in the age of AI,” automation at that time forced people to take jobs in unhealthy factories where they were subject to close surveillance and had little or no autonomy. Automation can increase wages, but only when accompanied by the creation of new tasks that raise the marginal productivity of labor, preferably alongside strong additional hiring in complementary activities. As the British learned the hard way between 1760 and 1840, wages are unlikely to rise when workers cannot effectively push to participate fully in the benefits from productivity growth.

Were the Dark Ages really that dark?

No. Despite their reputation, the so-called “Dark Ages” in Europe after the fall of the Roman Empire saw many technological advances. We discuss some examples in the book, including the spinning wheel, the heavy-wheeled plow, early fireplaces and chimneys, and many others.

Where did the money go during the Middle Ages in Western Europe?

Cathedrals! Thousands of churches, monasteries, and cathedrals were built across Western Europe during the Middle Ages, often relying heavily on taxation.

Concluding Remarks

Can you offer any specific policy recommendations to catalyze shared growth?

You can find a detailed summary of our policy recommendations in “Can We Have Pro-Worker AI?” which builds from ideas in Chapter 11.

Is directed technological change really possible and desirable?

Yes—technology is shaped by choices. For example, the cost of renewable energy has been brought down dramatically in recent decades through exactly the sort of redirection of technology that we recommend. This framework can be extended to improve society in many other ways, including the future of AI development.

What are the biggest challenges we may face when implementing your plan for shared prosperity?

Big Tech is not enthusiastic about changing its business model. The people who run the dominant tech companies in the AI space are very smart and understand exactly what is going on. But they (and their shareholders) are making good money, so their incentives are not pointed toward rethinking the arrangement.

In an ultra-fast-paced world of artificial intelligence, what can the history of technology teach us—and should history really be our guide?

In the early 20th century, George Santayana suggested that “those who cannot remember the past are condemned to repeat it.” When it comes to the history of work, this observation seems spot on. Workers are much more likely to benefit from technological progress when strong countervailing powers challenge corporate decision makers and when political institutions support appropriate regulation of powerful technologies. Some commentators like to oversimplify technological history as an inexorable march toward more widely shared prosperity, but it is often easy to use new technologies in ways that boost profits while squeezing labor. Artificial intelligence could go either way: it could become a tool of pervasive oppression, or it could allow more people to live better lives. It is up to all of us to determine the precise path of development for artificial intelligence.