
Meta Expands AI Chatbot to More Regions, Adds New Functionality

Despite regulatory challenges in Europe, Meta is moving forward with the next stage of its AI development plan, with the company today announcing an expansion of its Meta AI chatbot to seven more countries and seven new languages, as well as new creative functions in-stream, and the capacity to choose which Meta AI model you use for different tasks.

Which, really, provides a path to the future of AI interaction, but we’ll get to that.

First off, Meta’s expanding access to Meta AI to seven more countries, with Argentina, Chile, Colombia, Ecuador, Mexico, Peru and Cameroon the latest to get access to Meta’s in-app chatbot.

In addition to this, users can also now prompt the bot in seven new languages: French, German, Hindi, Hindi-Romanized Script, Italian, Portuguese and Spanish.

Meta’s in-built AI bot has received mixed reviews, but Meta CEO Mark Zuckerberg says that the bot is on track to become the most used AI assistant in the world.

Which is not really a surprise, when you consider that half the connected world uses Facebook, WhatsApp and IG, and Meta’s put it right in people’s faces when they open each app. As such, I’m not sure that this reflects popularity or utility, so much as ubiquity, but regardless, Zuckerberg is clearly using this as an indicator that the company’s AI development is on the right track.

So now, many millions more people will have that Meta AI prompt appearing when they open each app. Welcome to the future.

Meta’s also adding more functionality to its chatbot, with users now able to create AI-generated images of themselves directly from the chat stream.


As explained by Meta:

“Have you ever dreamed of being a superhero, a rockstar or a professional athlete? Now, you can see yourself in a whole new light with “Imagine me” prompts in Meta AI – a feature we’re starting to roll out in beta in the US to start. Imagine yourself creates images based on a photo of you and a prompt like ‘Imagine me surfing’ or ‘Imagine me on a beach vacation’ using our new state-of-the-art personalization model.”

So it’s the same as Snapchat’s “Dreams” functionality, providing a way to create fantastical images of yourself. And like Dreams, I imagine the novelty will wear off pretty quick, but it could be another way to get more people at least trying out Meta’s AI tools.

Meta’s also adding new editing tools for generative AI images, so you can customize them in-stream.


Meta says that the process will make it easy to add, remove, or change objects within a generated image, while still maintaining the rest of the composition:

“You could say “Imagine a cat snorkeling in a goldfish bowl” and then decide you want it to be a corgi. So you’d simply write “Change the cat to a corgi” to adjust the image. And next month, you’ll see the addition of an Edit with AI button that you can use to fine tune your imagined images even further.”

This could be a handy addition, as one of the main headaches of AI-generated images is the inability to refine and correct them when they’re not quite what you want. Depending on how it works in practice, this could be a valuable tool for AI art in Meta’s apps.

This next addition, though, I like a lot less.


Meta’s also giving users the ability to add an AI-generated image to their Facebook posts. Which is not so bad in itself, but the example Meta’s shared shows it being used to create a fake image of a real place that you’ve supposedly been to.

Like, why?

My biggest concern with Meta’s broader integration of AI content is that it will dilute, and potentially supersede, the human element, the actual “social” aspects of “social media”. For years, people have been complaining about bot-generated content in social apps detracting from the connective experience, but now we’re being encouraged to use bots ourselves.

Is that beneficial? Is that what we really want from interactive communities?

I don’t know. I get that these functions can expand creative capacity in new ways, but I don’t think that this is it.

Meta’s also rolling out a new option that will enable users to choose which AI model they use for different tasks in the app.


It’s a slightly more technical option, but the idea is that giving users access to different Llama models will give them more scope to pose more complex, technical queries, “especially on the topics of math and coding”.

Like, most people will obviously choose the most powerful option if they can, but this won’t be an upfront choice; you’ll have to actually go into your AI model settings to do it, so it’s unlikely to see wide use.

Finally, Meta’s also launching Meta AI in VR in beta with selected users in North America.

“Meta AI will replace the current Voice Commands on Quest, allowing you to control your headset hands-free, get answers to questions, stay up to date with real-time information, check the weather and more.”

This is a significant update, as the true value of Meta’s AI tools will likely be in VR creation. VR development is limited right now due to the technical complexity of building immersive worlds, which requires a lot of expertise, development time, and investment.

But what if Meta could enable its AI tools to create VR experiences based on simple text prompts? Then you could put yourself into any experience that you can imagine, right then and there, by simply speaking things into existence.

That’s the ultimate promise of Meta’s AI tools, which is also why Meta’s so keen to push its AI chatbot, as a means to get people acclimated to the process of asking the tool for whatever they need.

It’s a longer-term vision, but this is the next step, and bringing Meta AI into VR will help shift user behavior in that direction.

That’s also why these current developments, which detract from the “social” aspects, are potentially a lesser concern, because they’re not really designed with this current state in mind. Yes, there’s utility, or at least novelty, to some degree, and yes, Meta is keen to create the best version of its chatbot. But really, it’s all about future habits, and getting new users, in particular, more habitually aligned with interacting via chat prompts.

So, yes, for some, Meta’s AI bot is a little annoying, but in ten years’ time, when we’re all interacting in VR, it’ll be the only way that the next generation engages with digital tools.
