Automating to superintelligence

AI labs are building general AI researchers.

They want to build superintelligence.

But first, they'll build AGI (whatever that means).

I believe that human preferences, and ultimately market forces, will lead to the proliferation of domain-specific AGI. In fact, I think AI is already sufficiently general within certain narrow domains to be called AGI in those fields.

From the perspective of labs intent on building superintelligence, all that matters is whether the AGI is reliable enough to do the work of a human researcher.

They won't need to sleep. No toilet breaks. They think a lot faster than us. They can predict the next token all day, every day. They could eventually communicate with each other in something that isn't language.

How much progress will they make when 1,000 of them work together?

With enough energy and compute, they'll probably come up with the next step change in AI. The big thing after transformers? Maybe an efficient continual learning algorithm? And how long will the ARC-AGI challenge hold up?

The AIs are already here. We invited them to the party and – together – we're inviting their successors.