
Meta Suspends AI Development in EU and Brazil Over Data Usage Concerns

Meta’s evolving generative AI push appears to have hit a snag, with the company forced to scale back its AI efforts in both the EU and Brazil due to regulatory scrutiny over how it uses people’s data to train its models.

First off, in the EU, Meta has announced that it will withhold its multimodal models, a key element of its coming AR glasses and other tech, due to “the unpredictable nature of the European regulatory environment” at present.

As first reported by Axios, Meta’s scaling back its AI push in EU member nations due to concerns about potential violations of EU rules around data usage.

Last month, advocacy group NOYB called on EU regulators to investigate Meta’s recent policy changes that will enable it to utilize user data to train its AI models, arguing that the changes are in violation of the GDPR.

As per NOYB:

Meta is basically saying that it can use ‘any data from any source for any purpose and make it available to anyone in the world’, as long as it’s done via ‘AI technology’. This is clearly the opposite of GDPR compliance. ‘AI technology’ is an extremely broad term. Much like ‘using your data in databases’, it has no real legal limit. Meta doesn’t say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalised advertising or even a killer drone.

As a result, the EU Commission urged Meta to clarify its processes around user permissions for data usage, which has now prompted Meta to scale back its plans for future AI development in the region.

Worth noting, too, that UK regulators are also examining Meta’s changes, and how it plans to access user data.

Meanwhile, in Brazil, Meta’s removing its generative AI tools after Brazilian authorities raised similar questions about its new privacy policy with regard to personal data usage.

This is one of the key questions around AI development, in that human input is needed to train these advanced models, and a lot of it. And within that, people should arguably have the right to decide whether their content is utilized in these models or not.

Because as we’ve already seen with artists, many AI creations end up looking very similar to actual people’s work. Which opens up a whole new copyright concern, and when it comes to personal images and updates, like those shared to Facebook, you can also imagine that regular social media users will have similar concerns.

At the least, as noted by NOYB, users should have the right to opt out, and it seems somewhat questionable that Meta’s trying to sneak through new permissions within a more opaque policy update.

What will that mean for the future of Meta’s AI development? Well, in all likelihood, not a heap, at least initially.

Over time, more and more AI projects are going to be looking for human data inputs, like those available via social apps, to power their models, but Meta already has so much data that it likely won’t change its overall development just yet.

In future, if a lot of users were to opt out, that could become more problematic for ongoing development. But at this stage, Meta already has large enough internal models to experiment with that the developmental impact would seemingly be minimal, even if it is forced to remove its AI tools in some regions.

But it could slow Meta’s AI rollout plans, and its push to be a leader in the AI race.

Though, then again, NOYB has also called for a similar investigation into OpenAI, so all of the major AI projects could well be impacted in the same way.

The final outcome, then, is that EU, UK, and Brazilian users won’t have access to Meta’s AI chatbot. Which is likely no big loss, considering user responses to the tool, but it may also impact the release of Meta’s coming hardware devices, including new versions of its Ray-Ban glasses and VR headsets.

By that time, presumably, Meta will have worked out an alternative solution, but the situation could raise more questions about data permissions, and what people are signing up to in all regions.

Which may have a broader impact, beyond these regions. It’s an evolving concern, and it’ll be interesting to see how Meta looks to resolve these latest data challenges.  
