
Hugging Face’s chief science officer worries AI is becoming ‘yes-men on servers’

AI company founders have a reputation for making bold claims about the technology’s potential to reshape fields, particularly the sciences. But Thomas Wolf, Hugging Face’s co-founder and chief science officer, has a more measured take.

In an essay published to X on Thursday, Wolf said that he feared AI becoming “yes-men on servers” absent a breakthrough in AI research. He elaborated that current AI development paradigms won’t yield AI capable of outside-the-box, creative problem-solving — the kind of problem-solving that wins Nobel Prizes.

“The main mistake people usually make is thinking [people like] Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student,” Wolf wrote.

Wolf’s assertions stand in contrast to those from OpenAI CEO Sam Altman, who in an essay earlier this year said that “superintelligent” AI could “massively accelerate scientific discovery.” Similarly, Anthropic CEO Dario Amodei has predicted AI could help formulate cures for most types of cancer.

Wolf’s problem with AI today — and where he thinks the technology is heading — is that it doesn’t generate any new knowledge by connecting previously unrelated facts. Even with most of the internet at its disposal, AI as we currently understand it mostly fills in the gaps between what humans already know, Wolf said.

Some AI experts, including ex-Google engineer Francois Chollet, have expressed similar views, arguing that while AI might be capable of memorizing reasoning patterns, it’s unlikely to generate “new reasoning” in response to novel situations.

Wolf thinks that AI labs are building what are essentially “very obedient students” — not scientific revolutionaries in any sense of the phrase. AI today isn’t incentivized to question and propose ideas that potentially go against its training data, he said, limiting it to answering known questions.

“To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask,” Wolf said. “One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise.”

Wolf thinks that the “evaluation crisis” in AI is partly to blame for this disenchanting state of affairs. He points to benchmarks commonly used to measure AI system improvements, most of which consist of questions that have clear, obvious, and “close-ended” answers.

As a solution, Wolf proposes that the AI industry “move to a measure of knowledge and reasoning” that’s able to elucidate whether AI can take “bold counterfactual approaches,” make general proposals based on “tiny hints,” and ask “non-obvious questions” that lead to “new research paths.”

The trick will be figuring out what this measure looks like, Wolf admits. But he thinks that it could be well worth the effort.

“[T]he most crucial aspect of science [is] the skill to ask the right questions and to challenge even what one has learned,” Wolf said. “We don’t need an A+ [AI] student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed.”
