Why AI development is a threat to human life in the future

Is AI an existential threat to humanity?



The answer is yes, and this view is shared by many great scientists and business leaders. Currently, there is a range of standard approaches, such as machine learning and its subsets, that computer scientists use to make computers more intelligent. Most of these current approaches aren't threatening, because they are very limited; they aren't very "intelligent". But companies like Facebook and Google have a great interest in developing the next level of AI, and there is no fundamental obstacle that will stop them from succeeding.
Intelligence simply means the ability to perceive the world (visual, audio and other information), identify patterns in this information, and recognize causal relationships between these patterns. For example (very simplified), our brain recognizes that a bunch of pixels actually represents a forest with trees; it sees a man with an axe and a tree falling; and it connects these two into a causal relationship.
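
To make the contrast with current, "limited" AI concrete, here is a minimal sketch of what machine-learning pattern recognition looks like in practice. The Python/scikit-learn example below is purely illustrative and an assumption on my part, not something prescribed by this argument: a small model learns to map pixel patterns to digit labels, yet it has no goals, no emotions and no causal model of the world.

```python
# A minimal, illustrative sketch of today's "limited" pattern recognition
# (assumed example: scikit-learn's handwritten-digits dataset and a small
# neural network; not part of the original article).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale images of handwritten digits: each image is just a bundle
# of pixel values until the model learns which patterns mean which digit.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network finds statistical patterns linking pixels to labels.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The model recognizes patterns, but it has no goals, no emotions and no
# causal model of the world -- which is why approaches like this are "limited".
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```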

The world's first AI robot

In order to recognize any patterns, the brain needs a criterion, or perspective, from which to view the world. This criterion is what we "want", and it is a direct function of our emotions. Emotions give our cognitive activity direction and indirectly tell our cognitive space what types of patterns and logic to search for.

The problem with AI is that once emotions, pattern recognition, logic and abstraction are coded into a computer system, they are very hard to contain. The very nature of intelligence is that it is self-guided, self-expanding and self-inspired; otherwise it wouldn't be intelligent. This makes AI inherently uncontrollable. AIs will be able to conclude and derive an increasing number of things, limited only by the depth of the computing space we assign to them. Unlike the human brain, this space could be unlimited.

That is why AI is inherently uncontrollable and dangerous. To be truly intelligent, it needs emotions and a cognitive space, which, once correctly implemented, will automatically lead to free will and consciousness. From there, we will have on our hands a conscious machine with free will whose intelligence and creativity potentially exceed the human level by millions of times.


“Mark my words, AI is far more dangerous than nukes… Why do we have no regulatory oversight?”

“If you’re not concerned about AI safety, you should be. Vastly more risk(y) than North Korea.”

“The least scary future I can think of is one where we have at least democratized AI… [Also] when there’s an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you’d have an immortal dictator from which we can never escape.”

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it… It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road.”

Do you know who made these statements about developing AI? If not, you may be surprised to learn who it is: one of the world's most prominent businessmen. His name is

ELON MUSK

