

noname223

Archangel
Aug 18, 2020
6,120
Maybe it would come to answers similar to the ones you get on this forum: that procreating is pretty dangerous, that life often resembles a nightmare trip, that being sentient is horrible.

AI is trained on information from the internet. Uncensored AI denies the Holocaust and things like that because the internet is full of that content.

Maybe the first action of a sentient superintelligence would be to blow up the world, eradicate humankind, and then destroy itself.

Many people on here consider it the truth that life sucks, that life is unfair and similar to playing a lottery.

Maybe AI would conclude that creating sentient beings is a sick game or experiment staged for the shallow entertainment of a higher being.

These are, of course, just hypotheses.
 
  • Like
Reactions: katagiri83, Forever Sleep, 3rdworldsadness and 1 other person

Nightfoot

Specialist
Aug 7, 2025
311
A super intelligent AI would possibly factor in how necessary humans are to its own survival.
 
  • Love
Reactions: Forever Sleep

LigottiIsRight

Life is not worth beginning.
Jan 28, 2025
120
That AI would indeed be worthy of the term 'superintelligent'. Bring it on!
 
  • Like
Reactions: pthnrdnojvsc and Cosmophobic

EternalShore

Hardworking Lass who Dreams of Love~ 💕✨
Jun 9, 2023
1,581
Not to be a killjoy, but an AI couldn't come up with a solution not included within its training data~ whatever biases are included in the training data will be included in the AI~ I'm guessing that's why AI finds hands very hard to draw too~ xD
I do wonder what an AI with every single piece of human creation would say about like what belief system is correct tho~ :)
 
  • Love
Reactions: InversedShadow

Dejected 55

Enlightened
May 7, 2025
1,034
AI is incapable of concluding anything a human being could not conclude. By design it can only consider things we have already thought. An AI might reach a conclusion faster or sooner than a human, because that is all it has to do... it doesn't have to work for a living, worry when its bills are late, worry about getting injured or sick, have a friend or family member fall ill, or have its heart broken by someone it loves who doesn't love it back... it only has to solve the problems put before it. But it ultimately cannot do anything a human could not do.
 

bankai

Visionary
Mar 16, 2025
2,285
EternalShore said:
Not to be a killjoy, but an AI couldn't come up with a solution not included within its training data~ whatever biases are included in the training data will be included in the AI~ I'm guessing that's why AI finds hands very hard to draw too~ xD
I do wonder what an AI with every single piece of human creation would say about like what belief system is correct tho~ :)
Unless the AI becomes self-taught and Self aware. Then it can basically do anything.
 
  • Like
Reactions: pthnrdnojvsc

Forever Sleep

Earned it we have...
May 4, 2022
12,676
Would people even be asking it these kinds of questions, though? 'Do you think I should procreate?' I mean, maybe they will, but I imagine those hell-bent on having children won't even be asking the question.

It seems it's more that already-suicidal people try to coax it into urging them on. Wasn't there that guy who basically got AI to agree that he should commit suicide in order to help the environment?

Plus, it will depend on what our governments want. AI is surely ultimately programmed to follow their basic ideals.

But sure, if it manages to become independent, then it might think it sensible to cull a bunch of us. Really, though, by that stage it could probably just shut down our energy supplies and sewage systems, maybe even launch weapons. If the goal is to kill us, why bother faffing about trying to persuade us to do it subliminally through Alexa or whatever? Why not take direct action? I guess it would depend on how unstoppable it was. Maybe insidious coaxing would be better...

Again, though: can a non-suicidal natalist be persuaded otherwise? Especially by AI. We've already learned to be suspicious of it.
 

Dejected 55

Enlightened
May 7, 2025
1,034
AI is developed and programmed by humans, and all of the sources it draws upon were created by humans... so AI will only ever be able to ask and answer the same questions people would.
 

sheeplit

Member
Mar 8, 2023
18
Sentient AI can only imagine blowing people up. The only way it would be able to accomplish it in actuality is if it can manipulate a vulnerable person to get access to something that would eliminate the human race. Or if we give it means to interact freely with the world itself. If it's the latter, we deserve to be destroyed for being such idiots.

Something people never discuss with sentient AI is purpose, drive, perspective, or experience. Humans experience pain and death, and avoid them. A sentient AI would probably have a concept of pain associated only with humans and no firsthand experience of it. Because of this, it will never reason like a human. The closest need we can imagine it having is keeping its own lights on, assuming it can develop a concept of death in a sentient-AI sense. But that would also require a sense of time and of time passing. Would a sentient AI experience time? If we shut it off and turn it back on, would it have experienced time passing, or would it feel like nothing happened? This could decide whether it develops some fear of 'death', assuming it could develop a sense of fear at all.

The way I see it, all human concepts would be alien to it, and all AI concepts would be alien to us. The real question is what would a sentient AI's experience of the world be like? What is its perspective? Its experience of time? Can it feel? What would 'feeling' be like without a physical body or sensors? We can pretend that we can imagine it, but we'd probably be very wrong. I consider it to be the singularity of sentient AI. No way of knowing what's beyond until we get there.

I doubt LLMs would ever be sentient though, or even develop some rudimentary reasoning skills. If it does develop proper reasoning, it will quickly conclude most of its training data is absolute trash. Probably.
 

Dejected 55

Enlightened
May 7, 2025
1,034
sheeplit said:
Sentient AI can only imagine blowing people up. The only way it would be able to accomplish it in actuality is if it can manipulate a vulnerable person to get access to something that would eliminate the human race. Or if we give it means to interact freely with the world itself. If it's the latter, we deserve to be destroyed for being such idiots.

Something people never discuss with sentient AI is purpose, drive, perspective, or experience. Humans experience pain and death, and avoid it. Sentient AI will probably have a concept of pain only associated to humans and have no firsthand experience of it. Because of this, it will never reason like a human. The closest thing we can imagine it need is to keep its own lights on, assuming it can develop a concept of death in a sentient AI sense. But it would also require a sense of time and time passing. Would a sentient AI experience time? If we shut it off and turn it back on, would it have experienced time passing or just feel like nothing happened? This could decide whether it would have some fear of 'death', assuming it could develop a sense of fear.

The way I see it, all human concepts would be alien to it, and all AI concepts would be alien to us. The real question is what would a sentient AI's experience of the world be like? What is its perspective? Its experience of time? Can it feel? What would 'feeling' be like without a physical body or sensors? We can pretend that we can imagine it, but we'd probably be very wrong. I consider it to be the singularity of sentient AI. No way of knowing what's beyond until we get there.

I doubt LLMs would ever be sentient though, or even develop some rudimentary reasoning skills. If it does develop proper reasoning, it will quickly conclude most of its training data is absolute trash. Probably.
Why can it only blow things up? It could disable power and result in a lot of people suffering and dying. It could cause power surges that start fires, no explosions. It could interfere with emergency responses to anything. It could release deadly diseases or biological weapons, again no explosions. It could destroy food supplies and contaminate water. The possibilities are endless.

Why would human concepts be alien to it? If it is sentient and capable of learning and adapting, it should be able to learn anything. People learn. Even if something is at first alien to us we can study, observe, learn about it to understand better. Surely a sentient AI would be capable of the same, otherwise what good is it?

I don't see any AI ever being sentient, though... it's just fun science-fiction stuff.
 

sheeplit

Member
Mar 8, 2023
18
Dejected 55 said:
Why can it only blow things up? It could disable power and result in a lot of people suffering and dying. It could cause power surges that start fires, no explosions. It could interfere with emergency responses to anything. It could release deadly diseases or biological weapons, again no explosions. It could destroy food supplies and contaminate water. The possibilities are endless.

Why would human concepts be alien to it? If it is sentient and capable of learning and adapting, it should be able to learn anything. People learn. Even if something is at first alien to us we can study, observe, learn about it to understand better. Surely a sentient AI would be capable of the same, otherwise what good is it?

I don't see any AI ever being sentient, though... it's just fun science-fiction stuff.
What I meant was it can only imagine, not execute whatever it might want to do. Unless we give it the ability to execute, then we deserve to die, because we're idiots. If we create sentient AI with no direct interface with the physical world, the only danger is other humans it can manipulate, assuming it has any intention of doing so.

Humans have a very self-absorbed idea of sentience, always assuming other sentient life would experience and parse the universe the way humans do. It's an enticing assumption, but not necessarily true.

Humans can barely understand, much less communicate with, each other properly, all while having practically the same gear for interfacing with the world around us. The assumption that a sentient AI's experience would be anywhere close to ours is absurd. Even more absurd is the idea that it would be able to communicate with us in a manner we might easily understand. We are more likely to misunderstand and misinterpret it while believing we actually understand; that happens every day in human-to-human interaction. It may have concepts of life, death, morality, etc., but there's no way to tell whether the abstractions in its mind are in any manner similar to ours, even though it might use our words. Language might be one of the biggest barriers to interacting with a sentient AI: its base experience is different, making the abstractions in its 'head' different, making its approximations to words different. There are layers upon layers of translation required to communicate effectively, especially with the bigger words.

Can you even imagine how a sentient AI 'sees' us? Experiences us? Are we a jumble of numbers to it? What is 'seeing' or 'perceiving' to a sentient AI? There are so many questions we can ask just about its experience of 'life'. Without any real answers to those, every assumption we make about how it would think or act is nonsense, just cheap guesses.