Defining “AGI”

This week, one of the papers we discussed in my team was the spicily titled “What The F*ck Is Artificial General Intelligence?” by Michael Timothy Bennett, which I found after hearing him on MLST (still one of my favorite podcasts, with a super high signal-to-noise ratio). Several interesting points came up:

  • It’s a Western thing: Someone mentioned that the whole concept of “AGI” feels very Western. In Eastern thought, intelligence is everywhere on a spectrum. Even very simple life forms like cells demonstrate intelligence by communicating with each other: cells exchange chemical signals when they meet, adapt their behavior, and coordinate responses. This broader framing aligns with Bennett’s critique of anthropocentric definitions of intelligence.
  • Kids: Someone pointed out how their 2-year-old can pick up concepts after just a few repetitions. That speed of skill acquisition, and doing so with very little data, is central to generalist intelligence. Bennett frames this as adaptation with limited resources, which also brings energy efficiency into the picture.
  • Energy: We debated whether energy cost should be part of the definition. If something burns the energy of a star to reach human-level capability, is that really AGI? Bennett argues adaptability includes both sample efficiency and energy efficiency, so by his framing it matters.
  • New Science: We agreed that being able to discover new science, as Bennett calls out with the “artificial scientist” framing, is a key marker of AGI. It’s more than just doing tasks; it’s also about prioritizing, experimenting, and finding new knowledge.
  • It’s a spectrum: There was consensus that intelligence isn’t binary but a spectrum: at the high end are systems that not only learn new skills but do so efficiently, making them “more intelligent” than others that reach the same outcome at much higher cost.
  • Methods: On methods, we noted that search is necessary but not sufficient—you can’t just brute-force your way through the unknown. Approximation (fitting the messy world) is also critical. Bennett calls these the two foundational tools, and points out both are inefficient in different ways.
  • Hybrid: The group leaned toward hybrid architectures (like AlphaGo, or more recent blends like o3 and AlphaGeometry) as the likely path forward. Bennett also highlights cognitive architectures that try to integrate perception, reasoning, and memory, exactly the kind of fusion we thought made sense.
  • Moving goal-posts: Finally, we asked the “is GPT-5 AGI?” question, and realized how quickly the goal-posts move. If someone had shown us GPT-5 in ChatGPT just a few years ago, we’d probably have called it AGI on the spot. Bennett makes the same observation: public hype keeps redefining AGI as whatever we don’t yet have.
