The California lawmaker whose AI safety legislation was vetoed by Gov. Gavin Newsom last year has once again introduced a bill to regulate powerful AI technology.
Sen. Scott Wiener (D-San Francisco) introduced Senate Bill 53, which would protect AI lab whistleblowers from retaliation if they speak up about risks or irresponsible development. The bill would also create a public computing cluster, known as CalCompute, to give a wider range of developers access to the computing power needed to build AI.
Specifically, the bill would prohibit developers of certain artificial intelligence models from enforcing policies that prevent employees from disclosing information about risks, and from retaliating against those who report concerns.
SB 53 defines “significant risk” as a foreseeable and material risk that the development, storage, or deployment of a foundation model could result in the death or serious injury of more than 100 people, or cause more than $1 billion in damages.
Additionally, developers would need to establish an internal process that allows employees to report concerns anonymously.
“We are still in the early stages of the legislative process, and this bill could evolve as the process continues. We are closely monitoring the work of the Governor’s AI Working Group and developments in the AI field for changes that may warrant a legislative response,” Wiener said in a statement.
“California’s leadership on AI is more important than ever as the new federal administration rolls back guardrails intended to keep Americans safe from the known and foreseeable risks that advanced AI systems present.”
When Wiener introduced a similar bill last year, other lawmakers and tech companies opposed it, arguing that it was ill-informed, would do more harm than good, and would ultimately drive developers out of the state.