Technology

The group co-directed by Fei-Fei Li suggests that AI security laws should anticipate future risks

By ADAM
Published March 20, 2025 | Last updated: March 20, 2025, 2:15 pm


In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.

The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.

In the report, Li, along with co-authors Jennifer Chayes (dean of UC Berkeley's College of Computing) and Mariano-Florentino Cuéllar (president of the Carnegie Endowment for International Peace), argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio, as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may require laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also recommends increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.

Li et al. write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. They also argue, however, that AI policy should not only address current risks but also anticipate future consequences that could occur without sufficient safeguards.

"For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could and would cause extensive harm," the report states.

The report recommends a two-pronged strategy to boost transparency in AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.

Although the report, whose final version is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.

Dean Ball, an AI-focused researcher at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulations. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."

The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety advocates, whose agenda lost ground last year.
