AI Regulations and Governance Monthly AI Update

NIST Unveils Strategic Vision for AI Safety: Ensuring a Secure and Innovative Future

In an era of unprecedented advancements in AI, the National Institute of Standards and Technology (NIST) has released its strategic vision for AI safety, focusing on three primary goals: advancing the science of AI safety, demonstrating and disseminating AI safety practices, and supporting institutions and communities in coordinating on AI safety.

The Vision for Safe AI Innovation

NIST envisions a future where AI innovation thrives safely, unlocking scientific discovery, technological progress, and economic growth. However, as AI systems become more powerful and pervasive, they bring significant risks. These include not only recognized harms but also emerging threats that remain unidentified. The frontier of AI capability and risk is vast and still largely uncharted.

Earlier transformative technologies such as aviation, electricity, and automobiles demonstrated that safety is crucial for innovation to take hold. AI, however, presents unique challenges: current models can be opaque, unpredictable, and often unreliable, and their development and deployment lack transparency, posing additional safety concerns. NIST aims to navigate these challenges by emphasizing the importance of public and private institutions dedicated to science-based safety.

The Role of the U.S. AI Safety Institute (AISI)

The U.S. AI Safety Institute (AISI), housed within NIST, is central to this strategic vision. AISI's mission is to advance the understanding and mitigation of AI risks, ensuring that AI's benefits can be fully harnessed. By conducting research, testing, and providing guidance, AISI aims to enable rigorous AI risk assessment, establish adequate safeguards, and build public confidence in AI technologies. This will ultimately lead to more responsible development and widespread adoption of AI.

AISI operates on two fundamental principles: beneficial AI depends on AI safety, and AI safety relies on science. Safety fosters trust, trust encourages adoption, and adoption accelerates innovation. AISI's goal is to define and advance the science of AI safety. This involves understanding advanced AI models and systems, adopting standards for safe AI design and deployment, and developing safety evaluations for these systems and their broader impacts.

Advancing the Science of AI Safety

AISI's mission includes addressing several challenges to advance AI safety science:

  • Lack of Definitions: There is a need for commonly accepted definitions for AI safety, capabilities, and measurements, particularly for frontier models and advanced AI systems.
  • Underdeveloped Methods: Testing, evaluation, validation, and verification (TEVV) methods and best practices must be developed. Holistic risk assessments are required to cover model capabilities, human-AI interactions, and system-level impacts.
  • Risk Mitigations: Scientifically established risk mitigations are needed across the AI lifecycle, from design to deployment.
  • Understanding Model Behavior: Understanding the relationship between model architecture, design, and behavior is crucial, especially post-deployment.
  • Coordination: Industry, civil society, and international actors need to coordinate more on safety practices.

AISI aims to create a connected and diverse ecosystem involving industry, civil society, academia, and government to align resources and efforts in the area of AI safety.

AISI's Strategic Goals

AISI's strategic goals are ambitious, reflecting the rapid and dynamic nature of AI development:

  1. Advancing AI Safety Science: Through research and innovation, AISI aims to build a robust scientific foundation for AI safety.
  2. Demonstrating AI Safety Practices: AISI seeks to demonstrate effective AI safety practices by publishing tools, benchmarks, and guidance.
  3. Supporting AI Safety Coordination: AISI aims to foster national and global networks to coordinate AI safety efforts.

Initiatives and Projects

AISI plans to launch a variety of projects to address AI safety challenges, including:

  • Technical Research: Improving or creating safety guidelines and tools, such as techniques for detecting synthetic content (a sketch of one such technique follows this list) and best practices for model security.
  • Pre-deployment TEVV: Assessing potential and emerging risks of advanced AI models before their deployment.
  • Post-deployment TEVV: Deepening the scientific understanding of risks associated with current AI capabilities, including impacts on individual rights, public safety, and national security.
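
The synthetic-content item above lends itself to a concrete example. Below is a minimal, hypothetical sketch of one published family of detection techniques: statistical "green-list" watermark detection, in the spirit of Kirchenbauer et al. (2023). Nothing here is AISI's actual tooling; the whitespace tokenizer, the hash-based green-list rule, and the GAMMA parameter are all illustrative assumptions.

```python
# Toy "green-list" watermark detector, in the spirit of published
# text-watermarking schemes (e.g., Kirchenbauer et al., 2023). This is
# an illustrative sketch, not AISI tooling: the whitespace tokenizer,
# the hash-based green-list rule, and GAMMA are all assumptions.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green"

def is_green(prev_token: str, token: str) -> bool:
    # A token counts as "green" if a hash seeded by the previous token
    # lands in the bottom GAMMA fraction of the hash range.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GAMMA

def watermark_z_score(text: str) -> float:
    """z-score of the observed green-token count vs. Binomial(n, GAMMA)."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    n = len(tokens) - 1  # number of (previous, current) token pairs
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# Unwatermarked text should score near zero.
print(f"{watermark_z_score('the quick brown fox jumps over the lazy dog'):.2f}")
```

In a real deployment, the generator biases its sampling toward "green" tokens, so watermarked output yields a detectably high z-score while ordinary text hovers near zero.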

California Senate Passes Groundbreaking AI Safety and Innovation Bill

In a significant bipartisan vote, the California Senate has passed SB 1047, a bill spearheaded by Senator Scott Wiener (D-San Francisco) designed to establish robust safety standards for developing large-scale artificial intelligence (AI) systems. The bill passed with a resounding 32-1 vote and now moves to the Assembly, where it must be approved by August 31.

Senator Wiener emphasized the bill's importance in supporting innovation while maintaining safety. "As AI technology continues its rapid improvement, it has the potential to provide massive benefits to humanity. We can support that innovation without compromising safety, and SB 1047 aims to do just that," said Wiener. The bill focuses on the well-resourced developers of the most powerful AI systems, ensuring sensible guardrails against risk while allowing startups to innovate freely.

Experts have raised alarms about the potential dangers of AI if it is not properly regulated. A recent survey revealed that 70% of AI researchers believe safety should be a higher priority in AI research, with 73% expressing significant concern about AI potentially falling into the hands of dangerous groups.

SB 1047 has garnered support from leading AI researchers, including Geoffrey Hinton and Yoshua Bengio, often called the "Godfathers of AI." Hinton praised SB 1047 for its balanced approach, recognizing California as the right place to pioneer such legislation.

The bill aligns with President Biden's Executive Order on Artificial Intelligence and builds on voluntary safety commitments made by several AI developers in California. Governor Newsom has also directed state agencies to prepare for AI's impact, particularly on vulnerable communities, as outlined in a report released last November.

Key Provisions of SB 1047

SB 1047 seeks to balance AI innovation with safety through several key measures:

  • Setting Standards: It establishes clear standards for developers of AI models trained using more than 10^26 floating-point operations of compute and costing over $100 million to train, ensuring these most powerful models are developed responsibly (a back-of-the-envelope illustration of this threshold follows this list).
  • Safety Precautions: Developers of large AI models must implement pre-deployment safety testing, cybersecurity measures, red-teaming, safeguards against misuse, and post-deployment monitoring.
  • Whistleblower Protections: The bill includes protections for employees of AI laboratories who report unsafe practices.
  • Transparent Pricing: It requires transparent pricing and prohibits price discrimination, protecting consumers and ensuring fair competition for startups.
  • Legal Action: It empowers the California Attorney General to take legal action if a powerful AI model causes significant harm or poses an imminent public safety threat.
  • CalCompute Cluster: The bill establishes CalCompute, a public cloud computing resource that will enable startups, researchers, and community groups to develop large-scale AI systems whose benefits align with California's values and needs.
  • Advisory Council: It creates an advisory council to support safe and secure open-source AI.
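
As a back-of-the-envelope illustration of the compute threshold in the first bullet, the sketch below applies the common approximation that dense transformer training costs roughly 6 × parameters × training tokens in FLOPs. Both the rule of thumb and the example model sizes are illustrative assumptions, not part of the bill's text.

```python
# Rough check of whether a training run would cross SB 1047's 10^26
# FLOP threshold, using the common ~6 * parameters * tokens estimate
# for dense transformer training compute. Model sizes and token counts
# below are hypothetical, chosen only to bracket the threshold.
THRESHOLD_FLOPS = 1e26  # SB 1047 compute threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

runs = [
    ("70B params, 15T tokens", 70e9, 15e12),     # ~6.3e24 FLOPs: below
    ("1.8T params, 10T tokens", 1.8e12, 10e12),  # ~1.1e26 FLOPs: above
]
for name, params, tokens in runs:
    flops = training_flops(params, tokens)
    print(f"{name}: {flops:.2e} FLOPs -> covered: {flops > THRESHOLD_FLOPS}")
```

Note that the bill's trigger is a conjunction: a model must both exceed the 10^26 FLOP bar and cost over $100 million to train before the standards apply.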

SB 1047 is co-authored by Senators Richard Roth (D-Riverside) and Henry Stern (D-Los Angeles) and sponsored by the Center for AI Safety Action Fund, Economic Security Action California, and Encode Justice.

As SB 1047 moves to the Assembly, its bipartisan support and comprehensive approach to balancing AI innovation with safety mark a significant step forward for responsible AI development in California.

EU Commission Establishes AI Office to Strengthen Leadership in Safe and Trustworthy Artificial Intelligence

In a significant stride toward advancing artificial intelligence (AI) within the European Union, the European Commission has unveiled the AI Office, dedicated to promoting the development, deployment, and use of AI that brings societal and economic benefits while mitigating the associated risks. The office will be pivotal in implementing the AI Act, particularly for general-purpose AI models, and in positioning the EU as a leader in international AI discussions.

Structure and Units of the AI Office

The AI Office is composed of several specialized units, each with distinct roles:

  1. Regulation and Compliance Unit: This unit will work closely with national bodies to ensure uniform application and enforcement of the AI Act across EU Member States. It will handle investigations into potential infringements and administer sanctions.
  2. AI Safety Unit: This unit is focused on identifying systemic risks associated with powerful general-purpose AI models. It will also propose mitigation measures and develop evaluation and testing methodologies.
  3. Excellence in AI and Robotics Unit: This unit aims to support and fund research and development initiatives, fostering an ecosystem of excellence. It coordinates the GenAI4EU initiative, which stimulates the development of AI models and their integration into innovative applications.
  4. AI for Societal Good Unit: This unit designs and implements international engagement projects, leveraging AI for societal benefits such as weather modeling, cancer diagnosis, and digital twins for reconstruction.
  5. AI Innovation and Policy Coordination Unit: Overseeing the execution of the EU AI strategy, this unit monitors trends and investments in AI, promotes the uptake of AI through European Digital Innovation Hubs, and supports the creation of AI Factories. It also fosters an innovative ecosystem through regulatory sandboxes and real-world testing.

The AI Office will be led by a Head of the AI Office, guided by a Lead Scientific Adviser to ensure scientific excellence, and an Adviser for International Affairs to manage international collaborations on trustworthy AI.

Operational Framework and Tasks

The AI Office will employ over 140 staff, including technology specialists, administrative assistants, lawyers, policy specialists, and economists. It will ensure the coherent implementation of the AI Act by supporting governance bodies in the Member States and directly enforcing rules for general-purpose AI models. In collaboration with AI developers, the scientific community, and other stakeholders, the office will coordinate the creation of state-of-the-art codes of practice, conduct testing and evaluation of AI models, request information, and apply sanctions when necessary.

The AI Office will engage with Member States and the broader expert community to foster well-informed decision-making through dedicated forums and expert groups. At the EU level, it will collaborate closely with the European Artificial Intelligence Board, composed of representatives from Member States. Additionally, the Scientific Panel of independent experts and an Advisory Forum representing diverse stakeholders, including industry, academia, and civil society, will ensure comprehensive expertise and balanced perspectives.
