Why AGI Will Not Happen — Tim Dettmers
A great take on why AGI isn't realistic and on how limited current AI applications still are.

The US and China follow two different approaches to AI. The US follows the idea that there will be one winner who takes it all – the one that builds superintelligence wins. Even falling short of superintelligence or AGI, if you have the best model, almost all people will use your model and not the competition’s model. The idea is: develop the biggest, baddest model and people will come.

China’s philosophy is different. They believe model capabilities do not matter as much as application. What matters is how you use AI. The key indicator of progress is how much AI is integrated into everything and how useful it is. If one model is better than another, it does not automatically mean it will be used more widely. What is important is that the model is useful and yields productivity gains at a reasonable cost. If the current approach is more productive than the previous one, it will be adopted. But hyper-optimization for slightly better quality is not very effective. In most cases, settling on “good enough” yields the highest productivity gain.

… In summary, AGI, as commonly conceived, will not happen because it ignores the physical constraints of computation, the exponential costs of linear progress, and the fundamental limits we are already encountering. Superintelligence is a fantasy because it assumes that intelligence can recursively self-improve without bound, ignoring the physical and economic realities that constrain all systems.
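The “exponential costs of linear progress” claim can be made concrete with a toy sketch. Assuming a power-law scaling relation L(C) = a·C^(−α) between loss and compute, with an illustrative exponent α = 0.05 (roughly the order reported in published LLM scaling-law papers; both the relation and the number are my assumptions, not figures from the post), each constant-factor reduction in loss multiplies the required compute:

```python
# Toy sketch (assumption, not from the post): if loss follows a power law
# in compute, L(C) = a * C**(-alpha), then every constant-factor
# improvement in loss multiplies the compute bill by a fixed huge factor.

alpha = 0.05  # illustrative scaling exponent, roughly the order seen in
              # published LLM scaling laws; treat as an assumption

# Compute multiplier needed to cut loss in half once:
# L2/L1 = (C2/C1)**(-alpha) = 1/2  =>  C2/C1 = 2**(1/alpha)
multiplier = 2 ** (1 / alpha)
print(f"Compute multiplier per halving of loss: {multiplier:.3g}")  # ~1e6

# Successive halvings compound exponentially:
for k in range(1, 4):
    print(f"{k} halving(s) of loss -> {multiplier**k:.3g}x the compute")
```

So quality gains that feel linear ride on multiplicative compute growth, which is exactly the physical and economic wall the post is pointing at.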

I feel like this is the take to have: assuming exponential growth ignores both system constraints (resources) and physical constraints (first principles). Especially damning is that all talk of AGI seems to focus only on the digital world, when in fact it is the physical world that dominates.


Quote Citation: Tim Dettmers, “Why AGI Will Not Happen”, 2025-12-10, https://timdettmers.com/2025/12/10/why-agi-will-not-happen/