A Yann LeCun–Linked Startup Charts a New Path to AGI


If you ask Yann LeCun, Silicon Valley has a groupthink problem. Since leaving Meta in November, the researcher and AI luminary has taken aim at the orthodox view that large language models (LLMs) will get us to artificial general intelligence (AGI), the threshold where computers match or exceed human smarts. Everyone, he declared in a recent interview, has been “LLM-pilled.”

On January 21, San Francisco–based startup Logical Intelligence appointed LeCun to its board. Building on a theory LeCun conceived two decades ago, the startup claims to have developed a different form of AI, one better equipped to learn, reason, and self-correct.

Logical Intelligence has developed a reasoning system built on what’s known as an energy-based model (EBM). Whereas LLMs effectively predict the most likely next word in a sequence, EBMs take in a set of constraints, such as the rules of sudoku, score candidate answers against them, and settle on the answer with the fewest violations. This method is supposed to eliminate mistakes and require far less compute, because there’s less trial and error.
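To make that contrast concrete, here is a toy sketch in Python of the energy-based idea: score whole candidate answers against the rules and search for the one with zero violations, instead of generating a word at a time. Everything below, including the energy function, the search loop, and names like solve and clues, is an illustrative assumption; Logical Intelligence has not published how Kona actually works.

    # A toy illustration of energy-based inference, not Logical
    # Intelligence's actual (unpublished) method. The "energy" of a
    # candidate sudoku grid counts rule violations; inference is a
    # search for a grid with energy zero.
    import random

    def rows(grid):
        return [grid[r] for r in range(9)]

    def cols(grid):
        return [[grid[r][c] for r in range(9)] for c in range(9)]

    def boxes(grid):
        return [[grid[r + i][c + j] for i in range(3) for j in range(3)]
                for r in (0, 3, 6) for c in (0, 3, 6)]

    def energy(grid):
        # Count duplicate digits in every row, column, and 3x3 box.
        # A grid that satisfies all of sudoku's rules has energy 0.
        return sum(len(unit) - len(set(unit))
                   for unit in rows(grid) + cols(grid) + boxes(grid))

    def solve(grid, clues, steps=500_000):
        # clues: set of (row, col) positions whose digits are given.
        # Fill the free cells randomly, then repeatedly propose local
        # edits, keeping any edit that does not raise the energy (plus
        # rare uphill moves to escape local minima). This random walk
        # is far cruder than what a trained EBM would do, but the shape
        # is the same: score whole candidates, descend toward zero.
        free = [(r, c) for r in range(9) for c in range(9)
                if (r, c) not in clues]
        for r, c in free:
            grid[r][c] = random.randint(1, 9)
        e = energy(grid)
        for _ in range(steps):
            if e == 0:
                return grid          # every constraint satisfied
            r, c = random.choice(free)
            old = grid[r][c]
            grid[r][c] = random.randint(1, 9)
            new_e = energy(grid)
            if new_e <= e or random.random() < 0.02:
                e = new_e            # accept the proposal
            else:
                grid[r][c] = old     # reject it and restore the cell
        return None                  # step budget exhausted

The point of the sketch is the shape of the computation: the rules live in the energy function, so a wrong answer is detectable, because its energy is nonzero, rather than merely improbable. A production EBM would learn its energy function from data and use far more efficient optimization than this random walk.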

The startup’s debut model, Kona 1.0, can solve sudoku puzzles many times faster than the world’s leading LLMs, even though it runs on just a single Nvidia H100 GPU, founder and CEO Eve Bodnia tells WIRED. (In this test, the LLMs are blocked from using coding capabilities that would let them brute-force the puzzle.)

Logical Intelligence claims to be the first company to have built a working EBM, an idea that until now was just a flight of academic fancy. The plan is for Kona to address thorny problems like optimizing energy grids or automating sophisticated manufacturing processes, in settings with no tolerance for error. “None of these tasks is associated with language. It’s anything but language,” says Bodnia.

Bodnia expects Logical Intelligence to work closely with AMI Labs, a Paris-based startup recently launched by LeCun, which is developing yet another form of AI—a so-called world model, meant to recognize physical dimensions, demonstrate persistent memory, and anticipate the outcomes of its actions. The road to AGI, Bodnia contends, begins with the layering of these different types of AI: LLMs will interface with humans in natural language, EBMs will take up reasoning tasks, while world models will help robots take action in 3D space.

Bodnia spoke to WIRED over videoconference from her office in San Francisco this week. The following interview has been edited for clarity and length.

WIRED: I should ask about Yann. Tell me about how you met, his part in steering research at Logical Intelligence, and what his role on the board will entail.

Bodnia: Yann has a lot of experience from the academic end as a professor at New York University, but he’s been exposed to real industry through Meta and other collaborators for many, many years. He has seen both worlds.

To us, he’s the only expert in energy-based models and different kinds of associated architectures. When we started working on this EBM, he was the only person I could speak to. He helps our technical team to navigate certain directions. He’s been very, very hands-on. Without Yann, I cannot imagine us scaling this fast.

Yann is outspoken about the potential limitations of LLMs and about which model architectures are most likely to push AI research forward. Where do you stand?

LLMs are a big guessing game. That’s why you need a lot of compute. You take a neural network, feed it pretty much all the garbage from the internet, and try to teach it how people communicate with each other.

When you speak, your language is intelligent to me, but not because of the language. Language is a manifestation of whatever is in your brain. My reasoning happens in some sort of abstract space that I decode into language. I feel like people are trying to reverse engineer intelligence by mimicking intelligence.


