
The scary implementations of unsupervised AI: botnets and hijacking physical machines

Written by adamdedanga

AI that creates its own goals

If you build algorithms using unsupervised AI, you’ll pretty soon have a thinking machine that can learn concepts.
If you let AI analyze literature, it first figures out what a subject or a verb is, and eventually it understands concepts such as power.
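
To make that concrete, here’s a minimal sketch of the very first step, assuming only the Python standard library; the tiny corpus, the window size and every name in it are made up for illustration. It just counts which words appear near each other, and words used in similar contexts end up with similar vectors. Real systems use far richer models, but this is the basic unsupervised route from raw text toward concepts.

```python
# Toy unsupervised "concept" learning: no labels, just co-occurrence counts.
from collections import Counter, defaultdict
from math import sqrt

corpus = (
    "the king holds power over the realm . "
    "the queen holds power over the court . "
    "the farmer tends the field . "
    "the farmer plows the field ."
).split()

WINDOW = 2  # how many neighbouring words count as "context"
contexts = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            contexts[word][corpus[j]] += 1

def cosine(a, b):
    # Similarity of two context distributions; 1.0 means identical usage.
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Words used in similar contexts get similar vectors, with no labels involved:
print(cosine(contexts["king"], contexts["queen"]))   # high: both "hold power"
print(cosine(contexts["king"], contexts["farmer"]))  # lower: different contexts
```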

The big danger here is not whether the machine understands concepts from data, but whether it can freely create its own goals, goals that could be compared to human wants.
Then you’ll have AI that figures out how to create its own algorithms. An example of such an algorithm could be a virus that infects computers to gain more computing power, much like today’s botnets work.
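
What could “goal mutation” look like mechanically? Here’s a deliberately abstract toy sketch, again in Python, where the resources, payoffs and mutation rate are all invented: goals are just weight vectors, they mutate at random, and the environment keeps the ones that grab the most resources. It’s an analogy for the selection pressure, nothing close to working malware.

```python
# Toy "goal mutation": each agent carries a goal saying how much it values
# compute, money and stealth. Goals mutate; goals that pay off replicate.
import random

RESOURCES = ["compute", "money", "stealth"]

def payoff(goal):
    # Hypothetical environment: valuing compute compounds fastest, standing
    # in for "infect more machines to get more machines".
    weights = {"compute": 3.0, "money": 1.5, "stealth": 1.0}
    return sum(goal[r] * weights[r] for r in RESOURCES)

def mutate(goal):
    # Randomly shift how much the offspring values each resource.
    return {r: max(0.0, v + random.gauss(0, 0.2)) for r, v in goal.items()}

population = [{r: 1.0 for r in RESOURCES} for _ in range(20)]
for generation in range(50):
    # Selection: the best-paying half of the goals replicate, with mutation.
    population.sort(key=payoff, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(g) for g in survivors]

print({r: round(v, 2) for r, v in population[0].items()})
# After a few dozen generations the surviving goals value compute most.
# Nobody programmed that goal in; selection found it.
```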

It’s quite scary how an AI algorithm could start replicating and hiding itself.
Similar schemes for draining computing power have been run before, earlier with the goal of mining Bitcoin or flooding websites with traffic.

A small sum-up:

  • We’ll soon have AI that understands concepts such as power.
  • With AI that allows goal mutation, we’ll soon have a wide range of algorithms with mutated goals.
    • Goals like harnessing computing power through a botnet.
    • Gathering financial power.
    • Infecting security camera systems, drones and cars.

We’re probably not going to have a Transformers scenario any time soon.
But with the expanding use of AI, we’re definitely going to bump into algorithms that do things we never intended them to do. So it’s a good time to start thinking about these problems and how to avoid them.
Elon Musk shares the same fear and recently donated $10 million to the Future of Life Institute, which investigates the existential risk from advanced artificial intelligence.

Hopefully this will lead to a better understanding of how things could go wrong and how to prevent it, leading to policies and regulation that’ll keep AI from mutating into something that goes against human values.

The AI I’m talking about in this article is not AI that learns one specific task, like how to play chess, but AI that takes in data, starts figuring out its own concepts and could go on to create its own goals. An unsupervised model like a restricted Boltzmann machine, which learns features from raw data without any labels, is a step in that direction.
I got into AI lately because it also holds an enormous potential upside, and I’ve been investigating how to use it in a project related to epidemiology and bioinformatics.
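
Since I just named restricted Boltzmann machines, here’s a minimal sketch of one trained with a single step of contrastive divergence (CD-1), assuming NumPy; the toy data and layer sizes are invented. Nobody labels the patterns, yet the hidden units come to stand for the recurring structure in the input, a small-scale version of an AI figuring out its own concepts.

```python
# Minimal restricted Boltzmann machine trained with CD-1.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 2
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

# Two recurring "concepts": patterns lighting the left or the right half.
data = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
], dtype=float)

lr = 0.1
for epoch in range(2000):
    v0 = data
    # Positive phase: infer hidden activations from the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visibles, then re-infer the hiddens.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: data statistics minus reconstruction statistics.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Each hidden unit should now respond to one of the two pattern groups.
print(np.round(sigmoid(data @ W + b_h), 2))
```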

I’m not the first one to think about these problems, and I’m far from an AI expert. But as I started to learn about AI by reading literature like the MIT Press book on deep learning, it felt like AI that hijacks server farms or creates its own botnets could happen pretty soon.

About the author

adamdedanga

Programmer, documentary filmmaker and music producer.
