Will a California bill cripple AI or make it better?

from CJR,
8/22/24:

At various moments last year, the question of how to regulate artificial intelligence took center stage. In March, more than a thousand technology leaders, researchers, and others—including Elon Musk, the billionaire who owns SpaceX and Tesla—signed an open letter calling for a six-month moratorium on the development of the most powerful AI systems because of the potential dangers they pose; the same month, dozens of scientists signed an agreement aimed at ensuring the technology can’t be used to create dangerous new bioweapons by recombining DNA. In July, seven of the leading AI companies—including Meta, Google, Microsoft, and OpenAI—met with President Biden and agreed to voluntary safeguards on the technology’s development. In late October, the White House issued an executive order to ensure the “safe, secure, and trustworthy development” of AI, and in early November the British government held a two-day summit on AI safety at Bletchley Park, the site where code-breakers deciphered German messages during World War II.

In spite of all this, the US still doesn’t have a federal law aimed at regulating either the development or use of artificial intelligence technology. (Surprise!) But a wide range of state laws that apply to AI have either been proposed or passed. According to the Cato Institute, as of this month, thirty-one states have passed some form of AI legislation: regulating the use of deepfake imagery for sexual harassment or political messaging, for example, or requiring corporations to disclose when they use AI in their products and services, or when they collect data to train AI models.
