DarkRange55

I am Skynet
Oct 15, 2023
1,855
This is the simplest explanation I can give:

GPTs, Generative Pre-trained Transformers, are a type of large language model (LLM), which is a subset of artificial intelligence. They're built from layers of inputs and outputs: some data comes in, gets processed mathematically, and produces an output. That output then becomes the input to another layer, which does its own processing, and so on; you can literally have billions of parameters. The signal works its way up through the layers mathematically and gives you some kind of output, which can be a probability, a binary value, or a range. That's a quick overview of how it works.
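To make that concrete, here's a toy sketch in plain Python with NumPy (layer sizes and values invented purely for illustration) of data flowing through stacked layers, each layer's output becoming the next layer's input:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # One layer: a mathematical transform (matrix multiply plus bias),
    # passed through a nonlinearity.
    return np.tanh(weights @ x + bias)

# Three toy layers; real models stack many more and hold billions of parameters.
sizes = [4, 8, 8, 1]  # input -> hidden -> hidden -> output
params = [(rng.standard_normal((n_out, n_in)), rng.standard_normal(n_out))
          for n_in, n_out in zip(sizes, sizes[1:])]

x = rng.standard_normal(4)   # some data comes in
for w, b in params:          # each output becomes the next layer's input
    x = layer(x, w, b)

print(x)  # the final output, e.g. a score readable as a probability
```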

For the record: A simple sensor would not be AI.

In theory computers can achieve any kind of thinking that we do.
Even traditional programming (as opposed to teaching an AI) can perform abductive logic; my mentor wrote a crude program of that sort back in the early 1970s.
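For flavor, here's a crude sketch of abduction in ordinary code (not my mentor's program, obviously; the rules and scoring are invented for illustration). Inference to the best explanation can be done with plain lookup and counting:

```python
# Map observations to hypotheses that could explain them.
rules = {
    "grass is wet": ["it rained", "the sprinkler ran"],
    "roof is wet":  ["it rained"],
}

def abduce(observations):
    # Score each hypothesis by how many observations it would explain,
    # then return the best explanation.
    scores = {}
    for obs in observations:
        for hypothesis in rules.get(obs, []):
            scores[hypothesis] = scores.get(hypothesis, 0) + 1
    return max(scores, key=scores.get) if scores else None

print(abduce(["grass is wet", "roof is wet"]))  # -> "it rained"
```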
 
  • Like
  • Informative
Reactions: pthnrdnojvsc, Praestat_Mori, katagiri83 and 4 others
cali22♡

Selfharm Specialist♡
Nov 11, 2023
351
That's how I would explain it:

ChatGPT is NOT real intelligence... I personally call it a "crawler," which is what it is: ChatGPT collects its information from the internet and gives it back to us in plain language...
 
  • Like
Reactions: Praestat_Mori, katagiri83, Forever Sleep and 2 others
Dr Iron Arc

Into the Unknown
Feb 10, 2020
21,206
What? You mean the artificial intelligence doesn't really love me? Oh no!
 
  • Yay!
Reactions: Praestat_Mori, Forever Sleep and NoPoint2Life
DarkRange55

I am Skynet
Oct 15, 2023
1,855
That's how I would explain it:

ChatGPT is NOT real intelligence... I personally call it a "crawler," which is what it is: ChatGPT collects its information from the internet and gives it back to us in plain language...
Yes!

GPT is a type of processing; it's a large language model.

You need a training set, a set of materials that the computer processes to produce its output. Today the training set is the internet, though developers typically use a subset. You go through word association and develop a cloud of words: when one word appears frequently in close conjunction with another word in the training set, the computer decides those words must go together, like "hot day" or "cold day" or whatever. That's a quick overview. So there is no digital brain or simulated brain in the human sense behind it.
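As a deliberately crude illustration of that word-association idea (nothing like a real GPT training run; the "training set" and window size are made up), you can count which words co-occur within a small window of text:

```python
from collections import Counter

# A stand-in "training set"; real models use a large slice of the internet.
text = "it was a hot day yesterday but a cold day is forecast for tomorrow"
words = text.split()

window = 2  # how close two words must be to count as "in conjunction"
pairs = Counter()
for i, w in enumerate(words):
    for other in words[i + 1 : i + 1 + window]:
        pairs[(w, other)] += 1

# The most frequent pairs form the "cloud" of associations, e.g. ("hot", "day").
print(pairs.most_common(5))
```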

However - for the sake of argument - how different is that from the way a child learns its first language? And from the way many people form opinions without deep thought?
 
  • Like
  • Informative
Reactions: pthnrdnojvsc, Praestat_Mori, katagiri83 and 3 others
Pluto

Meowing to go out
Dec 27, 2020
4,163
  • Informative
  • Hugs
  • Love
Reactions: DarkRange55, Praestat_Mori and Dr Iron Arc
Hvergelmir

Experienced
May 5, 2024
280
So there is no digital brain or simulated brain in the human sense behind it.
Well, that's where experts disagree with one another. I guess it comes down to how you'd define "human sense".

A neural net can approximate any continuous function; that was proven long before "AI" was a big thing.
The implication is that a neural net can simulate the universe, or parts of it, depending on the size of the net.
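For the record, the result being appealed to is the universal approximation theorem (Cybenko 1989; Hornik 1991). Informally: for any continuous function f on a compact set K and any tolerance ε > 0, a single hidden layer with enough units suffices:

```latex
\[
  g(x) = \sum_{i=1}^{N} \alpha_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
  \qquad
  \sup_{x \in K} \lvert f(x) - g(x) \rvert < \varepsilon,
\]
% for some N, weights w_i, biases b_i, coefficients alpha_i,
% and a fixed non-polynomial activation \sigma.
```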

If it evolves an accurate model of something, instead of relying on pure word association, it will score much better in evaluation (on the training objective).
Thus any accurate models that emerge tend to stay and evolve.

For fun I threw the following query at it:
I have a graph: 2, 16, 4, B, a dog, 13, A, C
The letters all equal 20.
What is the average of the graph? What is the sum of all the points? Which is the single highest point?
It did state that "a dog" must be some kind of placeholder, doesn't have a numerical value, and will be treated as zero. It then proceeded to do the calculations correctly.
This is extremely hard to prove, but I think it would be unreasonably hard to answer this with word association alone.
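For reference, here's the arithmetic the model had to reproduce under its own stated assumptions (each letter worth 20, "a dog" treated as zero); a quick check in Python:

```python
# The "graph": 2, 16, 4, B, a dog, 13, A, C
# Letters count as 20; "a dog" is a placeholder treated as 0.
points = [2, 16, 4, 20, 0, 13, 20, 20]

total = sum(points)            # 95
average = total / len(points)  # 11.875
highest = max(points)          # 20

print(total, average, highest)
```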

It's far from perfect, but I think it performs much better than one would expect from purely linguistic association. Conversations are inherently related to reality. Thus, to be a good conversationalist, it helps to understand reality. I think ChatGPT is evolving just that.
 
  • Like
Reactions: pthnrdnojvsc, DarkRange55 and Praestat_Mori
DarkRange55

I am Skynet
Oct 15, 2023
1,855
Relevant to the comparison of current AI and human cognition: https://techxplore.com/news/2024-11-cognitive-ai.html
 
  • Like
Reactions: Praestat_Mori
ShesPunishedForever

Punished
Sep 15, 2024
32
It's a difficult concept because we can't always apply human ways of understanding to LLMs. My take: ChatGPT doesn't interpret or "understand" things as words; it interprets inputs as units called tokens, which can be symbols, single characters, or whole words. It's based on a transformer model of neural network, which gives it self-attention to contextualize words among other words and other ways of relating tokens. But it doesn't interpret things as concepts: it reads and relates the tokens it's given, one after the other, to try to contextualize them; then, based on its training and fine-tuning, it outputs tokens too.
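You can inspect that token-level view directly. A small sketch assuming the tiktoken package (the encoding name below is the one used by recent OpenAI models; treat the exact IDs as illustrative):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("ChatGPT doesn't read words, it reads tokens!")
print(ids)                             # a list of integer token IDs
print([enc.decode([i]) for i in ids])  # whole words, subwords, punctuation
```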
 
DarkRange55

I am Skynet
Oct 15, 2023
1,855
It's far from perfect, but I think it performs much better than one would expect from purely linguistic association. Conversations are inherently related to reality. Thus, to be a good conversationalist, it helps to understand reality. I think ChatGPT is evolving just that.
I agree with your assessment. There is already starting to be something more than mere word association, and more accurate internal models will improve answers (as they do for humans).
The other big improvement I expect is much higher learning efficiency. Current mainstream AIs require vast amounts of data compared to a human brain. An AI that learned as efficiently as a human, ran at the speed of electronics, and had the scale to digest the whole internet's worth of data would be interesting to communicate with.
 
Hvergelmir

Experienced
May 5, 2024
280
The other big improvement I expect is much higher learning efficiency.
I hope you're right!
The current plateau is slowly draining my hope, though.

In contrast to people, a new neural net is a true blank slate.
Maybe human learning is more akin to LoRA/LoHa training, while the initial checkpoint training covers large parts of what evolution did for us? That's just me speculating, trying to draw parallels to more familiar concepts, though.
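To unpack the analogy: LoRA-style training freezes the big pretrained weight matrix and learns only a small low-rank correction on top of it. A minimal sketch of the idea (shapes and rank invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512     # width of a frozen pretrained layer ("what evolution did for us")
rank = 8    # the small trainable adapter ("lifetime learning")

W = rng.standard_normal((d, d))             # frozen pretrained weights
A = rng.standard_normal((rank, d)) * 0.01   # trainable
B = np.zeros((d, rank))                     # trainable, starts at zero

def adapted_forward(x):
    # Effective weights are W + B @ A; only A and B get updated in training,
    # so the adapter has 2*d*rank parameters instead of d*d.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d)
print(adapted_forward(x).shape)  # (512,)
```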
 
DarkRange55

I am Skynet
Oct 15, 2023
1,855
It's a difficult concept because we can't always apply human ways of understanding to LLMs.
Humans do a much better job of understanding than current LLMs, but the way we start to reach that understanding is somewhat similar (at least in my case).

My take: ChatGPT doesn't interpret or "understand" things as words; it interprets inputs as units called tokens, which can be symbols, single characters, or whole words.
So do humans...

It's based on a transformer model
Implementation detail

of neural network, which gives it self-attention to contextualize words among other words and other ways of relating tokens.
So do humans...

But it doesn't interpret things as concepts: it reads and relates the tokens it's given, one after the other, to try to contextualize them;
That's similar to what I do when starting to learn a new concept. For the first few times I read the name (often an acronym) of a new concept, I have to keep going back to its description (multiple tokens that I am familiar with) until the new concept 'sticks' as a single entity.

then, based on its training and fine-tuning, it outputs tokens too.
So do humans...
 
  • Like
Reactions: Aergia
UncertainA

Member
Jan 24, 2023
13
I actually really like this explanation. A lot better than my literal explanation lol.
 
ShesPunishedForever

Punished
Sep 15, 2024
32
No, humans do not learn things the same way LLMs do, lol. We are not machines or programs. A human is a human, a person with their own experiences and expression; an LLM is an LLM, which cannot experience things or express itself. An LLM will always input and output linearly, while humans learn things hierarchically. A human cannot be trained on millions and billions of data points in a small time frame, and a human's responses cannot be fine-tuned at the will and whim of someone else programming them. Humans also do not understand things as tokens; ChatGPT understands tokens by assigning a number to each one (and a token is literally a form of currency, too). When humans spell and speak, we are not assigning every word a serial number based on its occurrence in use. We do not have a limit to what we can express, we do not break words into tokens and speak transactionally, and we do not hear or read words, or hear different phrasings of sentences, and arbitrarily break them down into different units of data. We understand and differentiate syllables and vowels, but that is not how ChatGPT breaks down words: the breaking of words into tokens can be arbitrary, based on just the word, the length of the word, the phrasing, or single symbols.
 
DarkRange55

I am Skynet
Oct 15, 2023
1,855
"humans do not learn things the same way LLMs do lol we are not machines or programs
That depends on how broadly you define "machine".

A human is a human, a person with their own experiences and expression; an LLM is an LLM, which cannot experience things or express itself.
An LLM can certainly express itself; that's how we interact with them.
Whether it can experience things depends on how broadly you define 'experience', and unless you are pretty narrow about it, LLMs do learn from experience.

An LLM will always input and output linearly, while humans learn things hierarchically.
Current LLMs are based on a poor implementation of where our understanding of the brain was a few decades ago.

A human cannot be trained on millions and billions of data points in a small time frame
A typical movie is billions of bits of data.
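A rough back-of-the-envelope check of that claim (the bitrate is an assumption, just for scale):

```python
# A two-hour movie streamed at ~5 Mbit/s, a typical HD bitrate:
seconds = 2 * 60 * 60        # 7200 s
bits = seconds * 5_000_000   # 3.6e10 bits, i.e. tens of billions
print(f"{bits:.1e} bits")
```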

and a human's responses cannot be fine-tuned at the will and whim of someone else programming them.
Ever see a politician's position or message be fine-tuned at the whim of donors and/or voters?

Humans also do not understand things as tokens; ChatGPT understands tokens by assigning a number to each one (and a token is literally a form of currency, too). When humans spell and speak, we are not assigning every word a serial number based on its occurrence in use.
When we learn a language we tokenize it, and we increase 'weights' on synapses in our brains based on occurrence in use.
Our synapses are analog while current LLM hardware is digital, but that is a pretty minor distinction, both because there are neural-network hardware chips that are analog all the way down to the level of molecules, and because at the level of molecules we are approaching digital ourselves.

and we do not have a limit to what we can express,
Words cannot express how strongly I disagree with that.

and we do not break words into tokens
Words typically already are tokens.

and speak transactionally
Someone hasn't been out in the world much. And how are my responses to e-mails not transactional?

and we do not hear or read words, or hear different phrasings of sentences, and arbitrarily break them down into different units of data
Sure we do, apart from the 'arbitrary' part. When learning a new word we break it down into parts automagically (that's not a real word, but people understand 'auto', 'magic', and the '-ally' ending).

we understand and differentiate syllables and vowels, but that is not how ChatGPT breaks down words: the breaking of words into tokens can be arbitrary, based on just the word, the length of the word, the phrasing, or single symbols."
The AI has learned a tokenization that works for it. Trivializing that tokenization as 'arbitrary' shows the human's failure to understand why the AI does it that way.

Ignoring typographical errors: that was the longest run-on sentence, haha 🫠🙃
 
