Sure, the brain and supercomputers are very different. I just doubt AGI is truly years and years away.
The brain evolved from the ground up to survive in nature, which by necessity gave it kinds of intelligence that are hard for computers to replicate, and in humans it became even more intelligent and adaptable because general intelligence is our niche. Even the biggest supercomputer is still largely a von Neumann architecture with massive parallel processing bolted on. Yet computers are arguably more general and more powerful than the human brain in terms of raw processing and data.
A "Small" amount of ram, 4GB, can store approximately 4,000,000,000 individual ASCII characters, even if the human brain is "more efficient" by storing a word as a "unit", the brain is far beat in terms of both detail, size, and reliability by a poultry amount of RAM.
Another thing is long-term storage. A single hard drive can store TERABYTES with effectively perfect precision, and data can typically be "compressed" so the effective capacity is even larger. Human memory isn't reliable, and most of the detail in our lives is forgotten; it usually "feels" like most of it is remembered because the important stuff sticks while we never think about what was forgotten, since we didn't need it. The brain also evolved to "compress" data by abstracting it and making generalizations: you don't remember visuals as MP4s. The brain "describes" a scene across many neurons in an abstract form, then uses many neurons to rebuild or "paint" the visual when you recall it or try to draw it. It does this for many memories, on the fly. Still, in an objective sense, a hard drive beats human memory simply because it stores data as-is, without the complex abstraction and encoding the brain does.
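A rough illustration of the lossless round trip that drives plus compression give you, as a Python sketch using the standard zlib module (the huge ratio here only happens because the input is deliberately repetitive; real-world ratios depend entirely on the data):

```python
import zlib

# Highly repetitive data compresses extremely well, which is why a
# terabyte-class drive can "effectively" hold far more than a terabyte
# of redundant data.
data = b"the quick brown fox jumps over the lazy dog " * 10_000
packed = zlib.compress(data, level=9)

print(f"raw:        {len(data):,} bytes")
print(f"compressed: {len(packed):,} bytes")
print(f"ratio:      {len(data) / len(packed):.0f}x")

# Lossless round trip: the original comes back bit-for-bit, something
# human recall never guarantees.
assert zlib.decompress(packed) == data
```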
I don't think the problem is just a matter of sheer processing power and data; I think it mostly comes down to implementation. "AI" today is dominated by large language models, which are neural networks, and neural networks are effectively function approximators. The dominant approach is to build something that more or less predicts text, on the theory that a function complex enough to do that will approximate an understanding of language along the way. Then you can refit it into something else, like a chatbot, a DM, or a virtual assistant that can execute commands to an extent.
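To make "function approximator" concrete, here's a minimal toy sketch: a one-hidden-layer network trained with plain backprop to fit y = sin(x). An LLM is the same idea at enormous scale, with the input being a token context and the output a distribution over the next token. (NumPy only; every size and rate here is an arbitrary choice for illustration.)

```python
import numpy as np

# Toy "function approximator": a 1 -> 32 -> 1 tanh network trained
# with plain full-batch gradient descent to fit y = sin(x).
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, (256, 1))
y = np.sin(x)

W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                     # forward pass
    pred = h @ W2 + b2
    err = pred - y                               # MSE gradient
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)  # backprop, no framework
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1               # gradient descent step
    W2 -= lr * gW2; b2 -= lr * gb2

final = np.tanh(x @ W1 + b1) @ W2 + b2
print("mean squared error:", float(((final - y) ** 2).mean()))
```

The network never "knows" what sine is; it just bends a pile of tanh curves until the error shrinks, and it only fits well near the data it saw. That's the same trade the big models make, just at a scale of billions of parameters.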
But fundamentally, it's a huge approximation of a function describing how to respond to text in a certain way. It's trained instead of programmed, which leads to generalization and certain emergent properties, but as of now companies are reliant on just having a larger model, i.e. a bigger and more complex function, for generating and responding to text, and it typically "super auto-completes" its responses word by word. They're going to hit a wall fast and hard if they stay dependent on dumping money and resources into this, because building a mental model of reality by being really good at text prediction isn't a sound approach; at this point it's emulating intelligence rather than being intelligent. The same goes for imagegen, vidgen, and other AI generators: a big function approximating the correlation between text (or some abstract internal description) and an image. You also see plenty of technical and artistic artifacts, because the model isn't really drawing or properly simulating how an image is actually created.
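And "super auto-complete" really is just a loop. Here's a toy sketch where a hard-coded bigram table stands in for billions of learned parameters; the greedy pick-the-likeliest-next-token loop is the same shape real decoders run:

```python
# Toy illustration of word-by-word generation: repeatedly pick a next
# token from a learned distribution and append it. The "model" here is
# a tiny hand-written bigram table, not a trained network.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(token: str, max_len: int = 5) -> list[str]:
    out = [token]
    for _ in range(max_len):
        dist = bigram.get(out[-1])
        if dist is None:                      # no continuation learned
            break
        out.append(max(dist, key=dist.get))  # greedy: take the argmax
    return out

print(" ".join(generate("the")))  # the cat sat down
```

Nothing in that loop plans ahead or checks the output against reality; it only ever asks "what word usually comes next?", which is exactly the limitation I'm pointing at.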
You see clever implementations of these simple and narrow ways of emulating intelligence, but they will always be limited. There's just so much corporate hype and salesmanship around this. Much of the apparent growth is actually people getting better at using the tools; plenty of benchmarks can be fooled by emulation, while real-world performance without a skilled "babysitter" suffers. And you only see people gushing about AI in terms of text, image, and video generation, which, not coincidentally, are exactly the functions these models were ACTUALLY trained to directly approximate.
My tirade about AI boils down to this: the HARDWARE and DATA aren't the real problem, it's the implementation, and not even necessarily the engineering. Theoretically, it could take just one clever human with a novel approach to AI and we'd suddenly be at AGI. But the current paradigm is locked in on LLMs, effectively trying to hide their weaknesses by leveraging the brute-force power of computers. Everyone is focused on the LLM design pattern and less on the fundamentals of neural networks, and there's so much venture capital around that the old strategy of "pour money in and we'll see returns" is in full swing.
But I predict the real limits will hit hard, and the methods we use to measure AI capability will suddenly come into question. Despite all of tech pushing AI, it's largely unused by people outside of mass-producing slop or selling hype. It WILL hit the wall of delusion, and when it does it will drag the economy down with it and push some countries to the BRINK.