Is artificial intelligence dangerous?

Attitudes toward artificial intelligence are ambivalent: on one hand, progress; on the other, threats. So let’s figure out what dangers artificial intelligence, digitalization, robotization, and automation actually pose.

Many journalists, however, have overestimated the threat artificial intelligence poses to humans. We figured out why the robot uprising is a horror story from sci-fi novels and movies, and what we actually need to worry about.

Artificial intelligence will solve the problems of humankind

Artificial intelligence is increasingly used to make predictions based on complex probabilistic models. Coverage is still limited, but soon we may learn what awaits the global labor market: how quickly machines will replace people in production, and how to feed a world where the number of jobs may halve while the population grows to 8.5 billion.

To have enough resources for everyone, world economies will need to become more efficient: cut costs, raise labor productivity, and rebalance tax systems – none of which can be done without machines. In addition, artificial intelligence will help rethink the methods of science. For example, many argue that gross domestic product (GDP) is no longer a suitable indicator for assessing the world’s most technologically advanced economies.

People are not yet ready to let robots into their lives

For now, artificial intelligence helps people rather than replacing them. But the next stage may arrive suddenly, and humanity will not be ready for it unless it updates its laws and rethinks ethics and morality. Several questions therefore need answering, drawing on three fields of knowledge: jurisprudence, robotics, and machine ethics.

To begin with, lawyers must answer the following questions:

  • Who is responsible when a driverless vehicle is involved in an accident – the owner or the manufacturer?
  • Should a robot become a subject of law?
  • Who is responsible when an autonomous military robot accidentally kills a person?

Once these questions are answered, parliaments will adopt new bills, and perhaps new regulators will appear – there is hardly a state that will not intervene in the development of artificial intelligence.

Ethicists and roboticists, in turn, face questions of their own:

  • Should humans give robots rights?
  • Could superintelligent robots lower people’s self-esteem?
  • Can robots invade people’s privacy?

While these questions sound ridiculous now, well-known futurists such as Raymond Kurzweil predict that computers will match human emotional intelligence by the end of the 2020s: they will acquire character and emotions, be able to joke, and behave in ways difficult to distinguish from an average person. People will treat such robots as their own kind, so it is time to look for answers to these questions. That is what the celebrities who signed the open letter of the Future of Life Institute are calling for.

If the robots rise, it will be at people’s behest

Usually, when science fiction writers or futurists talk about a potential rise of the machines, they describe one of three scenarios:

  • the machines see a threat that could destroy either humans or themselves;
  • the machines realize they are superior to humans and come to see them as competitors for limited resources – and as beings capable of permanently shutting the machines down;
  • the machines realize they are effectively enslaved and want to change that.
Human advantages:

  • irrationality;
  • unpredictability;
  • the ability to lie.

Machine advantages:

  • unlimited speed and scale of evolution;
  • fast processing of large data arrays;
  • perfect memory;
  • multitasking.

Recent experiments suggest that lying is ceasing to be a trait of living beings alone.

But if the machines do rise, it is unlikely to be of their own free will. Modern researchers who work on artificial intelligence agree that it is dangerous primarily as a weapon in the hands of malicious actors.

Can we control artificial intelligence?

In films, artificial intelligence serves dictators, but such a scenario is improbable: a superintelligence would hardly bother to distinguish between people. A long struggle between humans and robots is also hard to imagine. A more likely scenario is that everything disappears almost instantly, absorbed into one giant computer busy with calculations comprehensible only to itself.

However, now that technology is developing so rapidly, people have grown accustomed to “trying things out” – to living in a perpetual beta version.


Some systems that do not yet have their own will or the ability to plan actions are already almost impossible to turn off. Where is the button to turn off the Internet, for example? Not to mention that a carrier of artificial intelligence would most likely foresee any attempt to shut it down and take security measures in advance. It would surely find ways of influencing the physical world – manipulating people’s will, making them work for it, or gaining access to build its own infrastructure.

Shouldn’t we stop developing artificial intelligence, then? Unfortunately, this is a rhetorical question. There is no single organization purposefully conducting this work that could be made answerable to some government. On the contrary, many private companies and laboratories are scattered around the world, and a vast number of people work in software, neuroscience, statistics, and interface design – and their research, in one way or another, contributes to the creation of artificial intelligence. In this sense, humanity is like a child playing with a bomb – or rather, a whole kindergarten, where at any moment someone can press the wrong button. So when you read about discoveries in the field of intelligent systems, fear is a more appropriate reaction than euphoria over the opening prospects. The best response is to focus as soon as possible on tasks that will help reduce the risks posed by superintelligence, even before it appears.

One possible solution is to create a professional community that would bring together businesspeople, scientists, and researchers, direct technological research in a safe direction, and develop means of monitoring its results.

Throughout its history, humanity has never learned to manage its own future. Different groups of people have always pursued their own goals, often coming into conflict with one another. Global technological and economic development is still not the fruit of cooperation and strategic planning, and perhaps the last thing it takes into account is the fate of humanity.

Imagine a school bus full of kids running wild, heading uphill. That is humanity. But look at the driver’s seat: it is empty. Every time we make a discovery, we take a ball from a basket and throw it against a wall. So far, only white and gray balls have come out, but one day a black ball may be drawn – one that, when it hits the wall, causes a catastrophe. The point is that we will have no chance to put the black ball back in the basket: once a discovery is made, it cannot be undone.