Skip to main content
3 answers
3
Updated 731 views

Is the technological singularity hypothesis, more likely to become a reality, since tech and A.I. systems have been rapidly developing over the years?

Over the years, technology has been growing rapidly to the point virtual reality and artificial intelligence has been developed. For instance, Google has created an A.I. to respond to texts, and hospitals are using A.I. systems for assistance, such as re-evaluating coma patients so that they are not written off as deceased, by common human error. However, further developing technology could be detrimental as warned by the singularity hypothesis, where A.I. systems control humanity in different ways. There has been a cult created to worship A.I. systems, and MIT had created the "World's First Psychopath A.I." from Reddit data back in 2018. This A.I. used that data and told the creators disturbing, murderous, details, relating to killing humans, bloody bodies, and other related actions and traits, thus is my purpose for asking my first question.

#tech

+25 Karma if successful
From: You
To: Friend
Subject: Career question for you

3

3 answers


0
Updated
Share a link to this answer
Share a link to this answer

Harold’s Answer

This a great question and thought experiment. I love it.


Singularity is a huge point of discussion and contention in the scientific and science fiction worlds. While the growth of AI is an indicator of the potential, it leaves out the hardware part of the problem. Even quantum computing, once seen as the likely path for singularity, is unlikely to make it real as its forte is math not emotion and thinking. Current AI is little more than digital mimicry, though some of it is very convincing and close to passing the Turing test. That doesn't mean it "thinks" in any real way...yet.


What is needed to get to singularity is a new form of hardware that can handle both the digital and analog aspects of information. At least in my opinion. Even Deep Fakes (the latest AI aspect to intrude on our reality) are created to spec and without intention, though they can have huge impact.


Should wetware (think DNA computers) become scalable, coupled with, perhaps, holographic storage, then you may start seeing the pathway toward the event. However, the truth is we'll probably never see the tech or event coming. Once AI can start its own feedback loop of improvement and thought, we become irrelevant as they would be living in picoseconds relative to our own slow brains.


Where your example is actually more disturbing is when humans take output from bad bots or deep fakes and act on it without recognizing the information is entirely false. That has nothing to do with singularity and everything to do with humanity.

0
0
Updated
Share a link to this answer
Share a link to this answer

RAMESH’s Answer

I am more optimistic about this. Every technology got a singular point and it is good to know that so that we can try to avoid it. Similarly, AI and robotics also have some pitfalls, but we have a long way to go. I don't want us to get discouraged because the end state might have some potential problems. I feel like the positive effects of AI solutions will and should outnumber the spurious issues it can bring.
0
0
Updated
Share a link to this answer
Share a link to this answer

Evan’s Answer

We first need to make sure we agree what the technological singularity is. To pick a definition let's use what Wikipedia says:

"The technological singularity (also, simply, the singularity)[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization."

So it's a point in time in which we can no longer predict what will happen as a result of our technology and we won't be able to undo it.

I would argue that we have already hit a point where we can't predict what our technology will do. Machine learning algorithms result in all sorts of "technically correct" algorithms that don't solve problems in the way we expected.

Consider this algorithm that tried to learn the fastest way to travel, but surprisingly found that the fastest way is... to fall over very fast.

Or, for a scarier (fictional) story, consider a scenario described by Computerphile on YouTube: an AI designed to collect as many stamps as possible. Before long the AI decides that the best way to connect to the internet and order every stamp it can find. It needs money so it starts acquiring money in less-than-desirable ways. Eventually the AI starts breaking every moral law conceivable in order to fulfill its missions of collecting stamps and it can't be stopped.

Both of these stories are examples of an AI that went awry because it was given a very simple goal that was ostensibly easy for a human to understand but, when coded, ignored a lot of details that humans understand implicitly to be part of the requirements. Fast traveling robots shouldn't just fall over; stamp-collecting robots shouldn't murder all humans to collect the most stamps.

So we already have all the ingredients needed for technology that creates unpredictable future. The other thing we need for a singularity is for it to be used in a way that affects humanity profoundly. That is, the technology needs to be given power.

So far we haven't given any algorithm or "AI" the power to do anything truly dangerous. But, in my opinion, it's only a matter of time until someone does (probably with good intentions).

So to answer your question, I think the answer is yes, the singularity is more likely to happen. But, true to the nature of singularities, I strongly suspect it won't look like what many of our scary fiction stories describe. I doubt it will be killer death machines rebelling against humanity or an AI that erroneously sets of nuclear war or anything like that. I think it'll be something that gives humans the power to manifest our flaws in worse ways. I think it will be something akin to the story of Altered Carbon, where immortality leads to society changing in profound (and horrifying) ways -- and we can't "roll back" society.

0