A new AI lab called Flapping Airplanes launched on Wednesday, with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal — finding a less data-hungry way to train large models — is a particularly interesting one.
Based on what I’ve seen so far, I would rate them as Level Two on the trying-to-make-money scale.
But there’s something even more exciting about the Flapping Airplanes project that I hadn’t been able to put my finger on until I read this post from Sequoia partner David Cahn.
As Cahn describes it, Flapping Airplanes is one of the first labs to move beyond scaling, the relentless buildout of data and compute that has defined most of the industry so far:
The scaling paradigm argues for dedicating a huge amount of society’s resources, as much as the economy can muster, toward scaling up today’s LLMs, in the hopes that this will lead to AGI. The research paradigm argues that we are 2-3 research breakthroughs away from an “AGI” intelligence, and as a result, we should dedicate resources to long-running research, especially projects that may take 5-10 years to come to fruition.
[…]
A compute-first approach would prioritize cluster scale above all else, and would heavily favor short-term wins (on the order of 1-2 years) over long-term bets (on the order of 5-10 years). A research-first approach would spread bets temporally, and should be willing to make lots of bets that have a low absolute probability of working, but that collectively expand the search space for what is possible.
It might be that the compute folks are right, and it’s pointless to focus on anything other than frenzied server buildouts. But with so many companies already pointed in that direction, it’s nice to see someone headed the other way.