Meta Joins AI Safety Collective To Ensure Responsible Development

As it continues to develop more advanced AI models, and work toward artificial general intelligence (AGI), Meta is also keen to establish best-practice guardrails and safety standards to ensure that AI doesn’t… well, enslave the human race.

Among other concerns.

That’s why today, Meta has announced that it’s joining the Frontier Model Forum (FMF), a non-profit AI safety collective that’s working to establish industry standards and regulations around AI development.

As explained by the FMF:

As a non-profit organization and the only industry-supported body dedicated to advancing the safety of frontier AI models, the FMF is uniquely suited to make real progress on identifying shared challenges and actionable solutions. Our members share a desire to get it right on safety – both because it’s the right thing to do, and because the safer frontier AI is, the more useful and beneficial it will be to society.

Meta, along with Amazon, will join Anthropic, Google, Microsoft, and OpenAI as members of the FMF, which will ideally lead to the establishment of best-in-class AI safety regulations. Which could help to save us from relying on John Connor to lead the human resistance.

As per Meta’s President of Global Affairs Nick Clegg:

Meta has long been committed to the continued growth and development of a safer and open AI ecosystem that prioritizes transparency and accountability. The Frontier Model Forum allows us to continue that work alongside industry partners, with a focus on identifying and sharing best practices to help keep our products and models safe.

The FMF is currently working to establish an advisory board, as well as various institutional arrangements including a charter, governance, and funding, with a working group and executive board to lead these efforts.

And while a robot-dominated future may seem far-fetched, there are many other concerns that the FMF will cover, including the generation of illegal content, the misuse of AI (and how to avoid it), copyright, and more. (Note: Meta also recently joined the “Safety by Design” initiative to prevent the misuse of generative AI tools to perpetrate child exploitation.)

Though for Meta specifically, the dangers of AGI are especially pertinent.

Meta’s Fundamental AI Research (FAIR) team is already working toward the development of human-level intelligence, including digitally simulating the neurons of the brain, in what would equate to “thinking” in a simulated environment.

To be clear, we’re not anywhere close to this yet. While the latest AI tools are impressive in what they’re able to produce, they are really highly complex mathematical systems that match queries with responses based on the data they can access. They’re not “thinking”; they’re estimating what logically comes next, based on the parameters of a given question.
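To make that distinction concrete, here’s a toy sketch in Python (purely illustrative, not anything Meta or the FMF actually uses – the corpus and the `predict_next` helper are hypothetical): a tiny bigram counter that “completes” text based solely on statistics from its training data. Real frontier models are vastly larger and more sophisticated, but the underlying idea of estimating what comes next is the same.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: it has no understanding and no ideas of its
# own. It simply counts which word followed each word in its training text.
corpus = "the model predicts the next word the model sees most often".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))     # -> "model": the statistically likely next word
print(predict_next("banana"))  # -> "<unknown>": no data, so no answer
```

The model never reasons about the prompt; it only reproduces patterns from its data, which is the gap between today’s systems and AGI.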

AGI, by contrast, would be able to do all of this by itself, formulating ideas without human prompts.

Which is a little scary, and could, of course, lead to more problems.

Hence the need for groups like the FMF to oversee AI development, and to ensure that those in charge of such experiments don’t accidentally guide us toward the end times.
