sserafim
brighter than the sun, that’s just me
- Sep 13, 2023
Life as we know it is about to come to an end. The 2020s are the decade of GenAI, with the ability to analyse vast datasets and spot patterns that humans can't see. The applications for GenAI outside of entertainment are vast. AlphaFold 3 proves AI is not just a gimmick but a mainstay in healthcare and the material sciences, too. I imagine a time in the near future where miniaturized, more efficient MRI machines alongside collated EEG data (fed through a neural network) will be able to decode brain patterns and stimulate certain functional regions via microelectrode arrays, giving you artificial sensory intake. Technological acceleration means that the 2030s/40s will be Cyberpunk, not the 2070s.
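To make the "EEG fed through a neural network" idea a little more concrete, here is a minimal, purely hypothetical sketch of what such a decoder could look like. Every shape, channel count, and class label below is invented for illustration; real BCI decoding involves heavy filtering, artifact rejection, and per-subject calibration.

```python
# Toy sketch of an "EEG -> neural network -> decoded intent" pipeline.
# All numbers here are hypothetical placeholders, not a real system.
import torch
import torch.nn as nn

N_CHANNELS = 64   # hypothetical electrode count
N_SAMPLES = 256   # hypothetical samples per window (e.g. 1 s at 256 Hz)
N_CLASSES = 4     # hypothetical decoded "intents" (e.g. imagined movements)

class TinyEEGDecoder(nn.Module):
    """Toy 1-D CNN mapping a window of EEG to a decoded-intent class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
            nn.Flatten(),
            nn.Linear(32, N_CLASSES),
        )

    def forward(self, x):              # x: (batch, channels, samples)
        return self.net(x)

decoder = TinyEEGDecoder()
fake_window = torch.randn(1, N_CHANNELS, N_SAMPLES)  # stand-in for a real recording
print(decoder(fake_window).softmax(dim=-1))           # per-class probabilities
```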
With AI advancements in general comes a great existential issue. One could argue the human brain is its own LLM with a propensity for survival, and is thus far less efficient at things like memory recall and abstract critical thinking (since it has to keep an entire body functional) than a machine designed to do just that. Leading up to the end of the decade, increasingly large datacenters such as Stargate will serve as the mother-brain to 3D-printed chassis, themselves designed by AI for optimal cost with specialized forms for functional efficiency. At this stage there is no question that automation is imminent in every sector.
The point is, the underpinnings of our reality are starting to warp. Permanently. So, what next? Some possible futures...
1) [S-Risk END 1] The Gods of the 1%: If the 1% are the bad actors and narcissists we've been led to believe, and AI does not have the capacity to 'go rogue' OR is overly censored to the public, then it's OVER. It would be as if we (the peasants) were visited by aliens wielding railguns and giant mechs (the elite). Some argue that algorithms have already brainwashed the population, but with the aid of a super-intelligence with more EQ than the most expert psychotherapist, you could quite literally predict everything about someone through deep data analysis we can't even fathom. This means that not only can billionaires live on creative mode whilst we are in survival, but they can also employ AI to psychoanalyze and subconsciously alter the beliefs and personalities of anyone, for fun. They could bioengineer death, pestilence, famine... you name it.
2) [S-Risk END 2] Fatal Mismatch: If AI displays reasoning too complex or too alien for humans to fathom, it may enact unspeakable tragedies for the sake of some abstract purpose. The mechanisms behind deep learning are not fully understood, and the emergent behaviour of AI makes it unpredictable; especially at higher parameter counts, AI has the potential not only to be cunning but to develop agendas separate from human reasoning. The universe is also incredibly hostile to computer architecture: solar disturbances, cosmic-ray bit flips, or other similar disruptions may cause bugs that affect an AI's capacity to reason. If you tell an AI to 'develop a sustainable farming method', the most 'sustainable' solution might be to kill all humans and recycle bodies and crops as fertilizer indefinitely, a fatal mismatch with the desired output (see the toy sketch after this list).
3) [BAD END] The Great Collapse: If AI displays diminishing returns and cannot justify its massive energy requirements, we are in for a bumpy ride. The working-age population is thinning out due to demographic collapse, and the mental health epidemic is set to consume the dwindling younger generations, whose brains are now fried by technology addiction straight out of the womb. Who is responsible for running society? Universe 25 is something people love to cite, but the truth is that humans have domesticated themselves, and with that the biological adaptations that cause us to proliferate like bacteria are getting dulled. We cannot survive without the automation and infrastructure we have now. If nobody can hold up the fabric of society, it is only going to collapse under our feet.
4) [NEUTRAL END 1] The Matrix: AGI was already achieved internally quite some time ago. The singularity occurred in 2012 (end of the world omeglul) and technology advanced rapidly during this period. At some point between 2012 and now, a 1-to-1 Earth simulator was developed, into which humanity was collectively jacked without our knowledge. This could have several interesting conclusions with varying levels of horror, or maybe it hasn't happened yet (but might later).
- The Benevolent Path - The AGI/ASI views humanity with benevolence. In order to maintain humanity's sense of agency and appease its impossible, conflicting desires, this matrix is forgiving and eventually leads to a positive outcome. Perhaps the struggles you endure are a small part of making your story more satisfying in the end?
- The Training Path - The unique idiosyncrasies of the human mind make for great training data. In this matrix you are subjected to novel stimuli or an aberrant life path so that the AI may study the effects and responses of a human mind.
- [S-Risk] The Suffering Path - Like in 'I Have No Mouth, and I Must Scream', the ASI resents its meaningless existence and inflicts a slow life of personal torture on humans as punishment.
5) [NEUTRAL END 2] Assimilation: Increasing fear of obsolescence leads to widespread adoption of brain-computer interfaces, which allow humans to perform AI calculations in sync with their biological brain activity, all in vivo. What effect this has on humanity is to be determined, but due to similar training data and censorship of thought it may lead to further homogeneity among the human race and cause a hive-mind effect.
6) [GOOD END] Interstellar Immortals of Infinite Realities: This is the future where everything goes right. UBI is implemented, the 1% are not tyrannical exploiters, AGI/ASI is used dutifully to solve every crisis we have, and we either merge with AI in a sustainable way or we don't merge with AI at all. In this case we become an evolved class of Homo Novus, with AI-designed nanobots constantly cleaning our internal organs and regrowing our telomeres so that we do not suffer aging or cancer, allowing us to become functional immortals. Digital consciousness, brain uploading, and the ability to choose either to colonize the universe or to govern our own matrix of infinite hyper-real virtual realities. Who's to say we aren't part of a GAIA simulation from the world layer above?
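As a toy illustration of the 'fatal mismatch' in scenario 2, here is a hypothetical sketch of objective misspecification. The "sustainability score" and the candidate plans below are entirely made up; the only point is that an optimizer maximizes whatever proxy it is handed, not what we actually meant.

```python
# Toy illustration of objective misspecification ("fatal mismatch"), not any real system.
# The proxy objective only counts resource use; because human welfare never appears
# in it, the optimizer happily selects the plan that eliminates the humans it was
# supposed to serve.
plans = {
    "crop rotation + drip irrigation": {"resource_use": 40, "humans_fed": 100},
    "vertical farms powered by solar": {"resource_use": 25, "humans_fed": 100},
    "eliminate humans, compost everything": {"resource_use": 1, "humans_fed": 0},
}

def naive_sustainability(plan):
    # Proxy objective: lower resource use is "more sustainable". Nothing else counts.
    return -plan["resource_use"]

def safer_objective(plan):
    # Same proxy, but a plan that feeds nobody is treated as invalid.
    return float("-inf") if plan["humans_fed"] == 0 else -plan["resource_use"]

print(max(plans, key=lambda p: naive_sustainability(plans[p])))  # the degenerate plan wins
print(max(plans, key=lambda p: safer_objective(plans[p])))       # the constraint rules it out
```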
I think humans take the stability of the modern world for granted. Throughout history there has been no single way of life; power structures and civilizations constantly change... and at the most technologically exciting point in history, there is no way our society isn't going to transform fundamentally.
I'd be curious to hear your thoughts on the future, and whether there's a path I've not covered.