Bill Gates Also Worries Artificial Intelligence Is A Threat

Eric Mack
Forbes
1/28/2015

Aside from co-founding Microsoft, Bill Gates is known as an all-around smart guy who has put his money where his mouth is when it comes to saving the world. That would seem to make his opinions worth considering when he tells us that he, like fellow brainiac Stephen Hawking and Tesla Motors founder / Iron Man inspiration Elon Musk, fears that artificial intelligence could pose a threat to humanity.

In a Reddit Ask Me Anything (AMA) session on Wednesday, Gates echoed the concerns expressed over the past year by Hawking, Musk and others that something vaguely resembling the science fiction scenarios from the Terminator and Matrix franchises could come to pass if the potential of artificial superintelligence is not taken seriously.

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well,” Gates wrote. “A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Earlier this month, Elon Musk put down $10 million of his own money to fund an effort to keep artificial intelligence friendly. Gates and Musk both have an interest in ensuring that artificial intelligence not only stays friendly but stays viable (i.e., that public sentiment and lawmakers don't turn against the basic notion of smart networks and devices), given that it's likely to play a role in the future of not only Microsoft, but also Musk's SpaceX and Tesla.

Personal interests aside, Musk and Gates could just be right about the threat posed by artificial superintelligence. When the guys most likely to benefit from a new technology see a need for it to be put on a leash, there’s probably something worth worrying about.

To really understand the potential threat Musk and Gates are talking about, I highly recommend reading Nick Bostrom's recent book, "Superintelligence," which lays out the entire artificial superintelligence landscape, including the threats it poses. Musk has referred to and recommended it in the past, and it seems to be the primary foundation for much of the recent concern over A.I.


2 Replies to “Bill Gates Also Worries Artificial Intelligence Is A Threat”

  1. I am not a computer engineer, nor for that matter any kind of software engineer. So my understanding of AI is pretty limited and based in fair part on impressions.

    AI has a long way to catch up with HI (human intelligence).

    For one thing, humans have an immense ability to multi-task. Doing different things at the same time is, to my mind, the easy part (like throwing a ball while running and half hopping). This could easily be replicated by an AI-enabled computer; a matter of hardware, I suspect.

    Then again, there is the ability to hold information / knowledge. This too the computer could achieve without too much trouble. In fact, the computer could be programmed to access the WWW for info / knowledge updates.

    The harder part is enabling the computer to make multiple, simultaneous observations and/or assessments and judgments, and to fine-tune decisions to best fit a number of situations (some known and others based purely on anticipation), and, in this regard, the ability to decide to store away inexplicable observations for later assessment. Quite obviously this ability involves myriad complex operations of the mind which at the moment can only be replicated by coding (my guess). The number of logic loops one would have to incorporate in the software must be monumental.

    But assuming that the third difficulty could be overcome: could AI replace HI and eventually destroy mankind? In theory, I am afraid, yes. And actually, our real enemy then would not be AI but our own laziness and complacency.

    Just as the comfort of driving takes away people's desire to walk, a thinking computer would likewise take away our need to make decisions, and when that happens, it is not too difficult to imagine what would come next.

    I would propose AI for umno.

  2. I don’t think anyone can prevent someone from modifying the machine code to be unfriendly if this so-called superintelligence technology exists, unless the code is accessible to only a limited dozen or so people.
