---
title: "Flapping Airplanes and the promise of research-driven AI"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/274173313.md"
description: "A new AI lab named Flapping Airplanes has launched with $180 million in seed funding from Google Ventures, Sequoia, and Index. The lab aims to find a less data-intensive method for training large models. Unlike the prevalent scaling approach in AI, Flapping Airplanes focuses on long-term research breakthroughs, potentially taking 5-10 years to achieve significant advancements. This research-first paradigm contrasts with the compute-first approach, which prioritizes immediate results. The lab's innovative direction is a refreshing change in the AI landscape, which is heavily leaning towards scaling."
datetime: "2026-01-29T15:24:19.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/274173313.md)
  - [en](https://longbridge.com/en/news/274173313.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/274173313.md)
---

# Flapping Airplanes and the promise of research-driven AI

A new AI lab called Flapping Airplanes launched on Wednesday, with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal — finding a less data-hungry way to train large models — is a particularly interesting one.

Based on what I’ve seen so far, I would rate them as Level Two on the trying-to-make-money scale.

But there’s something even more exciting about the Flapping Airplanes project that I hadn’t been able to put my finger on until I read this post from Sequoia partner David Cahn.

As Cahn describes it, Flapping Airplanes is one of the first labs to move beyond scaling, the relentless buildout of data and compute that has defined most of the industry so far:

> The scaling paradigm argues for dedicating a huge amount of society’s resources, as much as the economy can muster, toward scaling up today’s LLMs, in the hopes that this will lead to AGI. The research paradigm argues that we are 2-3 research breakthroughs away from an “AGI” intelligence, and as a result, we should dedicate resources to long-running research, especially projects that may take 5-10 years to come to fruition.
> 
> \[…\]
> 
> A compute-first approach would prioritize cluster scale above all else, and would heavily favor short-term wins (on the order of 1-2 years) over long-term bets (on the order of 5-10 years). A research-first approach would spread bets temporally, and should be willing to make lots of bets that have a low absolute probability of working, but that collectively expand the search space for what is possible.

It might be that the compute folks are right, and it’s pointless to focus on anything other than frenzied server buildouts. But with so many companies already pointed in that direction, it’s nice to see someone headed the other way.

## Related Stocks

- [Alphabet Inc. (GOOG.US)](https://longbridge.com/en/quote/GOOG.US.md)
- [Global X Cloud Computing ETF (CLOU.US)](https://longbridge.com/en/quote/CLOU.US.md)
- [Alphabet Inc. (GOOGL.US)](https://longbridge.com/en/quote/GOOGL.US.md)

## Related News & Research

- [TCS Rewires Enterprise Tech With AI](https://longbridge.com/en/news/280993412.md)
- [Google Launches Gemma 4 For Advanced On-Device AI](https://longbridge.com/en/news/281653590.md)
- [The AI Revolution and The 90s Internet Boom](https://longbridge.com/en/news/281005956.md)
- [BullFrog AI Signs Major AI Drug Discovery Partnership](https://longbridge.com/en/news/281092205.md)
- [Google's Nobel Prize-Winning AI Chief: 'We Don't Feel Any Pressure' While OpenAI Burns $14 Billion](https://longbridge.com/en/news/281220147.md)