Over the past 40 years, the diffusion of digital technologies has significantly increased income inequality. Generative Artificial Intelligence (AI) will surely affect inequality as well, but the nature of that effect depends on exactly how this technology is developed and applied. Nothing about the path of this (or any) technology is inevitable.

The private sector is pursuing a path for generative AI that emphasizes automation and the displacement of labor, along with intrusive workplace surveillance. Simply displacing workers is never good for the labor market, even when the displaced are highly paid. Workers displaced from high-paying jobs are forced to compete for jobs with lower-wage workers, leading to a downward cascade in wage levels.

A better path is available, along which generative AI would be complementary to most humans—augmenting their capabilities—including people without a four-year college degree. Choosing the human-complementary path is feasible but will require changes in the direction of technological innovation, as well as in corporate norms and behavior. The goal should be to deploy generative AI to create and support new occupational tasks and new capabilities for workers. If AI tools can enable teachers, nurse practitioners, nurses, medical technicians, electricians, plumbers, and other modern craft workers to do more expert work, this can reduce inequality, raise productivity, and boost pay by leveling workers up.

Public policy has a central role in encouraging this positive path of technology to complement all workers, elevating the achievable level of skill and expertise for everyone. At this time, the five most important federal policies should be:

  1. Equalize tax rates on employing workers and on owning equipment/algorithms to level the playing field between people and machines.
  2. Update Occupational Safety and Health Administration rules to create safeguards (i.e., limitations) on the surveillance of workers. Finding ways to elevate worker voice on the direction of development could also be helpful.
  3. Increase funding for human-complementary technology research, recognizing that this is not currently a private sector priority.
  4. Create an AI center of expertise within the government, to help share knowledge among regulators and other officials.
  5. Use that federal expertise to advise on whether purported human-complementary technology is appropriate to adopt in publicly provided education and healthcare programs, including at the state and local level.