New legislation on artificial intelligence introduced in the New York State Assembly is sparking criticism from industry players who say the bill would place significant restrictions on developers of frontier AI models and stifle innovation.
The Responsible AI Safety and Education Act, introduced by Assemblymember Alex Bores (D), has come under fire from the Chamber of Progress, an industry tech policy coalition, for provisions that the group asserts would harm AI startups while “favoring established AI players.”
The legislation aims to limit the ability of developers in New York to deploy the most powerful and costly AI models unless they take steps to prevent abuses, including the use of AI to develop weapons of mass destruction.
The bill, if it becomes law, would require AI developers to submit a written safety and security plan to the New York attorney general before deploying their models. It would additionally mandate the disclosure of compute costs and require developers to report any safety incidents within 72 hours.
Violators could face fines of up to five percent of their model’s training cost, with repeat offenders facing penalties of up to 15 percent.
“This bill would effectively crown the existing tech giants as the winners of the AI race while small model developers get tied up in red tape,” Brianna January, government relations director at the Chamber of Progress, said in a statement. “Stifling upstart competition won’t do anything to mitigate AI’s risks – in fact, it could make them worse.”
Bores’ legislation also includes requirements that developers “implement appropriate safeguards to prevent unreasonable risk of critical harm,” and would prohibit developers from deploying “a frontier model if doing so would create an unreasonable risk of critical harm.”
The legislation parallels a similar bill introduced in California last fall, which was approved by the state legislature but vetoed by Governor Gavin Newsom, who warned at the time that the law was “well-intentioned” but too “stringent,” and would burden California’s AI companies.
Other states that have enacted laws to regulate AI include Colorado and Utah, which passed tailored regulations to inform people when AI tech is being used by companies and government agencies, and mandate transparency on the use of generative AI in interactions with consumers.
If the New York legislation becomes law, it would be the first of its kind at the state level to mandate preventative steps against model abuses and risks.
Despite numerous efforts, Congress has yet to pass any comprehensive federal legislation to regulate AI development and deployment.