
Google Establishes New Industry Group Focused on Secure AI Development

With generative AI posing significant risks on various fronts, it seems like every other week the big players are establishing new agreements and forums of their own, in order to police AI development, or at least give the impression of oversight.

Which is good, in that it establishes collaborative discussion around AI projects, and what each company should be monitoring and managing within the process. But at the same time, it also feels like these groups are a means to stave off further regulatory restrictions, which would increase transparency and impose firmer rules on what developers can and can’t do with their projects.

Google is the latest to come up with a new AI guidance group, forming the Coalition for Secure AI (CoSAI), which is designed to “advance comprehensive security measures for addressing the unique risks that come with AI.”

As per Google:

AI needs a security framework and applied standards that can keep pace with its rapid growth. That’s why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. Of course, to operationalize any industry framework requires close collaboration with others – and above all a forum to make that happen.

So it’s not so much a whole new initiative as an expansion of a previously announced one, focused on secure AI development, and on guiding defense efforts to help avoid hacks and data breaches.

A range of big tech players have signed up to the new initiative, including Amazon, IBM, Microsoft, NVIDIA, and OpenAI, with the goal of creating collaborative, open-source solutions to ensure greater security in AI development.

And as noted, it’s the latest in a growing list of industry groups focused on sustainable and secure AI development.

For example:

  • The Frontier Model Forum (FMF) is aiming to establish industry standards and regulations around AI development. Meta, Amazon, Google, Microsoft, and OpenAI have signed up to this initiative.
  • Thorn has established its “Safety by Design” program, which is focused on responsibly sourced AI training datasets, in order to safeguard them from child sexual abuse material. Meta, Google, Amazon, Microsoft and OpenAI have all signed up to this initiative.
  • The U.S. Government has established its own AI Safety Institute Consortium (AISIC), which more than 200 companies and organizations have joined. 
  • Representatives from almost every major tech company have agreed to the Tech Accord to Combat Deceptive Use of AI, which aims to implement “reasonable precautions” in preventing AI tools from being used to disrupt democratic elections.

Essentially, we’re seeing a growing number of forums and agreements designed to address various elements of safe AI development. Which is good, but at the same time, these aren’t laws, and are therefore not enforceable in any way; they’re just AI developers agreeing to adhere to certain rules on certain aspects.

And the skeptical view is that these are only being put in place as an assurance, in order to stave off more definitive regulation.

EU officials are already assessing the potential harms of AI development, and what is, or isn’t, covered under the GDPR, while other regions are weighing the same, with the threat of actual financial penalties behind their government-agreed parameters.

It feels like that’s what’s actually required, but government regulation takes time, and it’s likely that we won’t see actual enforcement systems and structures in place until after the fact.

Once we see the harms, they become much more tangible, and regulatory groups will have more impetus to push through official policies. But until then, we have industry groups, in which each company pledges to play by these established rules, implemented via mutual agreement.

I’m not sure that will be enough, but for now, it’s seemingly what we have.
