Apr 18, 2025

"AGI" considered harmful

An excerpt from some recent correspondence between Andrew Conner (a) and me, which is perhaps particularly relevant given Tyler Cowen's recent assessment that o3 is AGI.

Milan:

over the last couple of years i've come to the conviction that the term "AGI" isn’t well-specified enough to be useful

to unpack this a bit, the core issue is that AI is already superhuman at many things, and still far below human level at many others

there’s this weird thing where everyone now says "AGI isn’t well-specified as a concept" as one of their talking points, but then proceeds to talk about AGI timelines

(sorta like how there's a popular talking point of "ivy league education doesn’t really track quality" but ~everyone still gives a lot of weight to ivy league degrees in practice... a reflexivity)

Andrew:

there's a parallel to the industrial revolution.

consider an "artificial general machine", as it might have been envisioned in the 19th century.

it turns out that "machining" isn't one thing, and its deployment is massively uneven. there are some tasks machines are so much better at than humans (digging holes, moving dirt, bending metal) that it's not even comparable.

at others, machines are ~equal to us but save time or automate a task (dishwasher).

and at others, machines are only barely viable. most clothing manufacturing is still human-driven because it's a hard problem for machines.

now port this to intelligence. your pocket calculator is superintelligent at a very limited set of tasks. very quickly, simple circuits outperform even the best humans. LLMs expanded this space by many orders of magnitude.

there's clearly still a gap, in ways we're only beginning to understand. there are still classes of problems the average 8-year-old can solve that LLMs cannot (mostly related to intuitive reasoning, world modeling, etc.). sometimes we come up with a very legible set of examples (like ARC), which can then be directly optimized for (and we do).

this isn't to say that AI can't surpass humans in all domains, but that what we're building is SO fundamentally different that the picture will be entirely blurry until it isn't.

just as answering "when did machines outperform humans at mechanical tasks?" is meaningless.

it is indeed meaningful to say things like "AI is better than nearly all humans at nearly all tasks", and to plan for that reality. it's a huge change, but it will probably stay fuzzy. we may not even notice the day it becomes true.

Cross-posted to the EA Forum.