
Archness


Defective Personel
Jan 20, 2023
490
I hate AI, I absolutely hate it, axiomatically. It's "helpful", but in the end humans will only use it to extract from, exploit, and replace each other. This thing is going to supersede the place of humans on this earth and make our true insignificance within the universe undeniable, un-ignorable, and absolutely unacceptable... I hate it so much I'd destroy all computers and the internet to prevent, or at least postpone, the coming of advanced AI. Even if I'd die in the process, as addicted to all this junk as I am, I'd do it. I'm tired of everyone gaslighting about how nothing bad will magically happen and that AI will be good. NO. At best it'd reduce humans to cattle eating grass all day as the farmer "machine god" takes care of everything, or more realistically, 98% of people are left behind in destitution with no real power anymore as the "1%" take over everything.

I hate it, I hate it, I hate it. But I'm powerless as it's developed, as everyone just accepts it as "new tech" and adopts this learned helplessness toward whatever new tech happens to get cooked up. There's only blind faith and waiting. There's nothing I can do; I won't even be able to find a job or do anything, because AI will be doing everything and will take up any place a human could fill. I've heard an insider talk about how close we are getting to AGI, and I hate it. Every time I hear such news, I feel how this whole world is moving, how stuck I am with it, left to just endure suffering and this nihilism, only for the light at the end to be death itself.

All of this really makes me suicidal.
 
  • Hugs
  • Like
Reactions: therealcruffp, Broken247, Sylveon and 3 others
destinationlosangel


Experienced
Feb 16, 2024
286
100%. I myself have played around with many of these AI LLMs and it is scary AF. I usually am optimistic about tech but not in this case. AI is gonna change everything for the worse
 
  • Like
  • Love
Reactions: Archness and WhatPowerIs
Darkover


Angelic
Jul 29, 2021
4,808
Artificial general intelligence is decades away: 30-50+ years at least.

The adult human brain weighs about 3 pounds (1,300-1,400 g).
The adult human brain is about 2% of total body weight.
The average human brain is 140 mm wide (140,000,000 nm).
The average human brain is 167 mm long (167,000,000 nm).
The average human brain is 140 mm high (140,000,000 nm).

140 × 167 × 140 mm works out to 3,273,200,000,000,000,000,000,000 cubic nanometers of volume in the human brain.

1 mm equals 1 million nanometers
1 meter equals 1 billion nanometers

Apple M2 Ultra SoC: As of June 2023, this ARM-based dual-die SoC had the highest transistor count of any consumer microprocessor at 134 billion.

In the world's most powerful supercomputer, the Taihu Light in China, currently ranked as fastest (based on the Linpack benchmark), there are about 40,000 processors. So, about 400 trillion transistors in the processing part of the hardware.

But like most computers, the bulk of the transistor count is in the main memory, not the processors. Most of the 1.3 petabytes of memory in the Taihu Light is DRAM, and each bit of DRAM memory takes one transistor per bit. A byte takes 9 bits, not 8, if you include the error-correction bits. "Peta" means 10 to the 15th, or "a quadrillion" if you prefer an English name for the number. So the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors.

that's 12,000,000,000,000,000 transistors in its memory and 400,000,000,000,000 transistors in its processors,

for a total of 12,400,000,000,000,000 transistors in the world's most advanced supercomputer,
vs. the human brain's 3,273,200,000,000,000,000,000,000 cubic nanometers.

That means that, at one transistor per cubic nanometer, the world's most powerful supercomputer could fit inside the space of a human brain 263,967,741 times (about 264 million).
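The arithmetic above can be sanity-checked in a few lines of Python, keeping the post's implicit assumption that each transistor is counted as occupying one cubic nanometer of "brain space":

```python
# Redo the post's arithmetic from its own quoted figures.
NM_PER_MM = 10**6   # 1 mm = 1,000,000 nm

w_mm, l_mm, h_mm = 140, 167, 140
brain_volume_nm3 = (w_mm * NM_PER_MM) * (l_mm * NM_PER_MM) * (h_mm * NM_PER_MM)
print(f"{brain_volume_nm3:,} nm^3")      # ~3.27e24 cubic nanometers

dram_transistors = 12 * 10**15           # ~12 quadrillion (1.3 PB x 9 bits/byte)
cpu_transistors = 400 * 10**12           # ~400 trillion in the processors
total = dram_transistors + cpu_transistors

print(f"{total:,} transistors")          # 12,400,000,000,000,000
print(brain_volume_nm3 // total)         # 263,967,741 -- about 264 million
```

Note this compares a volume to a count, so the "one transistor per cubic nanometer" assumption is doing all the work; real transistors are far larger than a cubic nanometer.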
 
Last edited:
  • Informative
Reactions: Tombs_in_your_eyes

ssspadbye

Member
Oct 21, 2024
55
Darkover, you're taking an incredibly simplistic approach to AGI. The idea of AGI is certainly not for a single chip to replace the human brain, or even to act exactly like one. AGI is certainly nowhere near 30-50 years away. It suffices to look at the exponential curve of development: AGI will come almost out of nowhere (and soon), and in the blink of an eye the world will change forever.

Archness, I'm very, very informed on this topic and share your concerns. The world as we know it today, the value of humans in it, the systems that are the bedrock of society: all of it will change in the blink of an eye, and in the not-too-distant future. There is no slowing or controlling this progress, because the world is no longer practically unipolar as it once was. If, for example, the US doesn't do it, China will. If not China, India. And more and more is being made accessible via open source, coupled with countless billion-dollar enterprises with access to cheaper and cheaper mass cloud computing (with lightning-pace developments from the likes of Nvidia).

The fact is, we cannot stop this progress. Either we live to see humans become second-class citizens of a society of our creation, or we die before it. For me, I feel it may be the latter, although I'm not 100% certain.
 
  • Like
  • Informative
Reactions: Tombs_in_your_eyes, pthnrdnojvsc, therealcruffp and 1 other person
Darkover


Angelic
Jul 29, 2021
4,808
Darkover, you're taking an incredibly simplistic approach to AGI. The idea of AGI is certainly not for a single chip to replace the human brain, or even to act exactly like one. AGI is certainly nowhere near 30-50 years away. It suffices to look at the exponential curve of development: AGI will come almost out of nowhere (and soon), and in the blink of an eye the world will change forever.
The human brain and supercomputers like China's Taihu Light do indeed differ vastly, both in size and in processing approach. My comparison between the brain's nanometer-scale volume and the transistor count in supercomputers offers an intriguing perspective.

For context, the human brain's neural structure and connectivity density are immense: each neuron has thousands of synaptic connections, leading to trillions of pathways for processing information. A supercomputer, on the other hand, has an incredible number of transistors for raw data processing, but it lacks the complex, interconnected architecture of neurons and synapses that the brain has.

Let's break down some of the figures:

  1. Brain Size vs. Supercomputer Size:
    • With the brain weighing about 3 pounds and having approximately 3.2 × 10^24 cubic nanometers of volume, it's small compared to the world's largest supercomputers, which take up entire rooms.
    • If we're discussing sheer volume, your calculation that the Taihu Light's transistors could fit inside the brain around 264 million times demonstrates just how much more compact and energy-efficient the brain is.
  2. Transistor vs. Synaptic Count:
    • The Taihu Light supercomputer has around 12.4 quadrillion (1.24 × 10^16) transistors.
    • The human brain has an estimated 86 billion neurons, each connecting to thousands of other neurons. This leads to around 10^15 synapses, which are complex, adaptable, and play roles in memory and learning through electrical and chemical signals, something even the most advanced transistors don't replicate.
  3. Processing Power & Energy Efficiency:
    • Current supercomputers are optimized for different tasks (e.g., large-scale computations, simulations) but aren't structured for general intelligence or adaptable, flexible learning as the human brain is.
    • The brain operates on about 20 watts of power, while supercomputers require megawatts.
  4. Implication for AGI Development:
    • AGI would likely require not just computational power but an architecture that can dynamically adapt, learn, and self-modify, similar to the brain's structure.
    • Given current technological constraints, this unique processing flexibility and efficiency may indeed place AGI decades away.
The supercomputers are optimized for their current tasks, while the brain's structure has evolved specifically for perception, adaptation, and memory, making a direct comparison difficult beyond these metrics. So, while the hardware in modern supercomputers like the Taihu Light has raw power, they're still a long way from matching the efficiency and adaptability of the human brain, suggesting that true AGI might require advances in hardware and architecture on top of raw processing power.
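Points 2 and 3 above can be sketched as back-of-the-envelope arithmetic. The 86 billion neurons and 20 W figures come from the post; the ~10,000 synapses-per-neuron average and ~15 MW power draw for Taihu Light are approximate outside figures, so treat the outputs as orders of magnitude only:

```python
# Rough orders of magnitude for the synapse count and energy comparison.
# Assumptions: ~10,000 synapses per neuron ("thousands" in the post),
# ~15 MW power draw for Taihu Light (approximate published figure).

neurons = 86 * 10**9              # ~86 billion neurons
synapses_per_neuron = 10_000
synapses = neurons * synapses_per_neuron
print(f"synapses ~ {synapses:.1e}")  # ~8.6e14, i.e. order 10^15

brain_watts = 20
supercomputer_watts = 15 * 10**6
print(f"power ratio ~ {supercomputer_watts // brain_watts:,}x")
```

On these assumptions the brain runs on roughly three-quarters of a million times less power, which is the energy-efficiency gap the list above is gesturing at.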
 

ssspadbye

Member
Oct 21, 2024
55
I feel this is apples to oranges, comparing a singular human brain to a connected intelligence model based on a cloud infrastructure.
 
Darkover


Angelic
Jul 29, 2021
4,808
I feel this is apples to oranges, comparing a singular human brain to a connected intelligence model based on a cloud infrastructure.
we don't even have fully automated self-driving cars, and they're not expected to arrive until at least 2035,
yet some claim AGI is just around the corner
 
  • Like
Reactions: sevennn
Alpenglow


Never really there
Mar 5, 2024
48
I'm not sure what you mean by AGI. The closest thing right now would be Q* from OpenAI, which is more of a rumour and may just be a math algorithm.

I agree we'll probably get better models in the coming decades for whatever tasks the large AI companies decide on. The limiters so far seem to be data, processing power, and time, and their respective error curves seem to have somewhat flattened out. This is likely because the current designs are predictive and based on probabilities, where there is unavoidably some error.

I imagine you'd need to make a model that can be logical and manipulate ideas, which would be difficult, since error becomes significantly harder to quantify. Though if you did want to start somewhere, you would start with a math algorithm like Q* (if it exists), since math is basically logic.

I agree that AGI will probably come in the future, but we need multiple breakthroughs for that to happen. And even if countries are hiding their research, they are probably no more than one or two steps along the path to AGI. The contenders are probably just China and the US anyway, and any research is probably discovered by their respective intelligence agencies. No other country has the necessary tech companies, and no (concealed) amount of government research is enough to outpace private tech.
 
  • Like
Reactions: Archness
alienfreak


.
Sep 25, 2024
273
I have academic qualifications in this field (not to imply that it means much) and would say there is no sign that we are anywhere near AGI. To be direct, I think you guys are being swept away by propaganda.

Neural networks can be used to make horrific killing machines and to perform mass surveillance of all our thoughts, yes, but they are just tools for humans. There is no sign, at this point, of a machine god with any reasoning capabilities.
 
Last edited:
  • Like
Reactions: sevennn and Alpenglow
Archness


Defective Personel
Jan 20, 2023
490
Sure, the brain and supercomputers are very different. I just doubt AGI is truly years-and-years away.

The brain evolved from the ground up to survive nature, by necessity having degrees of intelligence that are hard for computers to replicate, and in humans becoming even more intelligent and adaptable as our niche. Even the biggest supercomputer is based largely on the Von Neumann architecture with highly parallel processing. But computers are arguably more general and powerful in terms of raw processing and data than the human brain.

A "small" amount of RAM, 4 GB, can store approximately 4,000,000,000 individual ASCII characters. Even if the human brain is "more efficient" by storing a word as a single "unit", the brain is far outmatched in detail, capacity, and reliability by a paltry amount of RAM.

Another thing is long-term storage: a single hard drive can store TERABYTES with effectively perfect precision, and data can typically be compressed so the drive effectively holds much more. Human memory isn't reliable, and most of the detail in our lives is forgotten; it usually "feels" like most of it is remembered because the important stuff is retained, while we don't think about what's forgotten at all because we didn't need it. The brain also evolved to "compress" data by abstracting it and making generalizations: you don't remember visuals as MP4s, the brain "describes" them across many neurons in an abstract form, then uses many neurons to rebuild or "paint" the visual in memory or imagination. It does this for many memories, on the fly. Still, in an objective sense, a hard drive beats human memory, simply because it stores data as-is without the complex abstraction and encoding the brain performs.
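The storage arithmetic, and the contrast between lossless compression and the brain's lossy "abstract and repaint" strategy, can be illustrated with Python's standard zlib module (the sample sentence is made up for the demo):

```python
import zlib

# 4 GB of RAM at one byte per ASCII character:
ram_bytes = 4 * 10**9
print(f"{ram_bytes:,} characters")  # 4,000,000,000

# Machine compression is lossless: unlike a remembered scene, the
# original comes back bit-exact. Repetitive data compresses very well.
text = ("the quick brown fox jumps over the lazy dog. " * 1000).encode()
packed = zlib.compress(text)
assert zlib.decompress(packed) == text   # perfect, lossless recall
print(len(text), "bytes ->", len(packed), "bytes")
```

The repeated sentence shrinks by orders of magnitude yet decompresses to exactly the original bytes, which is the "as-is, without abstraction" property the paragraph describes.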

I don't think the problem is just a matter of sheer processing power and data. I think it mostly comes down to our implementation. "AI", mostly dominated by large language models, AKA neural networks, is effectively a set of function approximators. The dominating approach is just to make something that more or less predicts text, since that complex function would then, in a way, approximate an understanding of language. Then you can refit it into something else, like a chatbot, DM, or virtual assistant that can execute commands to an extent.

But fundamentally, it's a huge approximation of a function describing how to respond to text in a certain way. It's trained instead of programmed, which leads to generalization and certain emergent properties, but as of now companies are reliant on just having a larger model, a bigger and more complex function, for generating and responding to text; typically it "super auto-completes" its responses word by word. They're gonna hit a wall fast and hard if they stay dependent on dumping money and resources into this, because building a mental model of reality by just being really good at text prediction isn't a sound approach; at this point it's emulating intelligence instead of really being intelligent. Same goes for imagegen, vidgen, and the other AI generators: a big function approximating the correlation between text (or an abstract internal description) and an image. You also see many technical and artistic artifacts because it's not really drawing or properly simulating how an image is actually created.
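The "super auto-complete" loop described above can be sketched with a toy bigram table standing in for a real model (the table and its probabilities are invented for illustration, and greedy decoding is only one of several decoding strategies real systems use):

```python
# Toy autoregressive generation: pick the most probable next token,
# append it, repeat. A real LLM replaces this lookup table with a huge
# learned function, but the outer loop is the same.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = bigram.get(tokens[-1])
        if dist is None:            # no known continuation: stop
            break
        # greedy decoding: argmax of the next-token distribution
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat down"
```

Everything the "model" can say is baked into that table, which is the post's point in miniature: the function only reproduces correlations it was fitted to, word by word.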

You see clever implementations of such simple and narrow ways of emulating intelligence, but they will always be limited. There's just so much corporate hype and salesmanship around this. Much of the measured growth is actually people getting better at using it; plenty of metrics can easily fall for emulation, while real-world performance without a skilled "babysitter" suffers. You only see people gushing about AI in terms of text, image, and video generation, which, incidentally, are the functions it ACTUALLY directly approximates.


My tirade about AI is just that the HARDWARE and DATA aren't the real problem; it's implementation, not even necessarily engineering. Theoretically, it could take just one clever human with a novel approach to AI and we're suddenly at AGI. But the current paradigm is LLMs, effectively modified to hide their weaknesses by leveraging the brute-force power of computers. Everyone is focused on the LLM design pattern and less on the fundamentals of neural networks, and there's lots of venture capital, so the old strategy of "pour money in and we'll see returns" is being used.

But I predict the real limits will hit hard, and the methods of measuring AI capability will suddenly come into question. Despite all of tech pushing AI, it's largely unused by people outside of mass-producing slop or selling hype. It WILL hit the wall of delusion, and it will bring the economy down with it and push some countries to the BRINK.
 
Broken247


Why Me?
Oct 20, 2024
45
AI is a bad idea for one reason, and that is that actual human intelligence is crap. If human intelligence is modest, then AI will destroy us. The average human mind can't handle the levels of science that are so pervasive in our lives. I think simplicity is underrated. Let's learn to be good to one another before we step out into the universe, guns blazing.
 
Last edited:
endofline2010


Student
Aug 8, 2024
140
Humans have done zero to evolve since we've become the dominant species on this planet. We keep the weak and stupid alive and let them breed, and they often do in numbers far exceeding the strong and smart. Many cultures on this planet promote ignorance and servitude as a means of controlling the masses, and they all follow like good sheep.

If anything, we're on a path to self-destruction. AI/non-biological intelligence is the only natural answer for this problem.
 
