AI: where we stand and where we are going

Author: Antonio Ostuni | R&D Director

In a world in which humanity exercises total power over machines, robots must obey three fundamental laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law;
  3. A robot must protect its own existence, as long as such protection does not conflict with the first two laws.

This is the scenario depicted in I, Robot (2004), a film set in the near future of 2035 and based on the laws described by Isaac Asimov in “Runaround”.

But today, on the threshold of a new decade and just fifteen years from the date envisioned in Alex Proyas’s film, where have we arrived? What is the current state of artificial intelligence? In which direction are we heading?

We can start with the certainty that AI is not just about robots, as movies often suggest. Artificial intelligence is in search algorithms, chatbots, IoT devices, cars, and even autonomous weapons, such as missiles that track their targets.

Today, we can identify two main trends in AI:

  • Narrow AI (or weak artificial intelligence), which tackles single, well-defined tasks and replicates only one facet of the mind;
  • General AI (AGI, or strong artificial intelligence), which refers to the ability of a machine to understand or learn any intellectual task that a human being can perform.

AI: the state of the art

Increasingly advanced technology provides researchers with new tools capable of achieving important goals, and these tools are in turn great starting points for further work. Among the achievements of recent years, progress stands out in the following domains:

  • Machine learning;
  • Reinforcement learning;
  • Deep learning;
  • Natural language processing.

 

Machine Learning (ML)

Machine learning is a subcategory of AI that often uses statistical techniques to give machines the ability to learn from data without being explicitly programmed. This process is known as ‘training’ a ‘model’ using a learning ‘algorithm’ that progressively improves performance on a specific task. The successes achieved in this field have encouraged researchers to push even harder on the accelerator.
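The ‘train a model with a learning algorithm’ loop can be sketched minimally; the following is plain gradient descent on a toy linear model, not any particular library or method from the systems discussed here:

```python
# Minimal sketch of 'training a model with a learning algorithm':
# gradient descent fitting y = w*x + b to toy data (illustrative only).

def train(data, lr=0.01, epochs=1000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        # Gradients of the mean squared error over the data.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        # Progressively improve the model parameters.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated by y = 2x + 1: training should recover w ≈ 2, b ≈ 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = train(data)
```

The point is the shape of the loop, not the model: measure the error on the data, nudge the parameters to reduce it, repeat.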

These successes include the ability of machines to learn how to synthesize molecules. Using a system that combines three neural networks with a Monte Carlo tree search algorithm, trained on about 12.4 million reactions, it was possible to solve retrosynthetic analyses. This method is far faster than the computer-aided synthesis planning in use today: it solves more than 80% of a test set of molecules, with a time limit of 5 seconds per molecule.
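A Monte Carlo tree search repeatedly expands the branch with the best upper-confidence score, trading off branches that have scored well against branches that are still under-explored. A toy sketch of that selection rule (the chemistry networks are omitted, and all numbers below are hypothetical):

```python
import math

def ucb_score(child_value, child_visits, parent_visits, c=1.4):
    """Upper-confidence bound used in MCTS selection: exploitation
    (average value so far) plus an exploration bonus for rarely
    visited branches."""
    if child_visits == 0:
        return float("inf")  # unvisited branches are tried first
    exploitation = child_value / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

# Hypothetical example: branch B has a lower average value but far fewer
# visits, so the exploration bonus makes it the next branch to expand.
parent_visits = 100
score_a = ucb_score(child_value=60, child_visits=90, parent_visits=parent_visits)
score_b = ucb_score(child_value=4, child_visits=5, parent_visits=parent_visits)
```

In the retrosynthesis system, the neural networks supply the value estimates and propose which reactions to try; the tree search only decides where to spend its limited simulation budget.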

Research also continues in the direction of optimizing hyperparameters (the configuration values fixed before a network starts learning) and neural network architectures. The use of evolutionary algorithms allows one to maximize network performance while minimizing the complexity and size of the computation. An example of this is LEAF (Learning Evolutionary AI Framework), which uses precisely these algorithms to optimize both hyperparameters and network architectures by combining smaller, more effective networks.
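The evolutionary idea can be sketched in miniature. This is not LEAF: the ‘validation loss’ below is a hypothetical stand-in for actually training a network, and the only hyperparameter evolved is a single learning rate.

```python
import random

def validation_loss(lr):
    # Hypothetical stand-in for "train a network, measure validation loss";
    # by construction the best learning rate here is 0.1.
    return (lr - 0.1) ** 2

def evolve(generations=30, pop_size=10, seed=0):
    rng = random.Random(seed)
    # Start from a random population of candidate hyperparameters.
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (lower loss is better).
        population.sort(key=validation_loss)
        parents = population[: pop_size // 2]
        # Mutation: each survivor spawns a slightly perturbed child.
        children = [max(0.0, p + rng.gauss(0, 0.05)) for p in parents]
        population = parents + children
    return min(population, key=validation_loss)

best_lr = evolve()
```

Systems like LEAF apply the same select-and-mutate loop to whole network architectures, not just scalar settings, which is where the real gains come from.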

 

Reinforcement Learning (RL)

Reinforcement learning is an area of ML concerned with software agents that learn goal-oriented behavior by trial and error, in environments that provide rewards in response to the agents’ actions; the learned mapping from situations to actions is called the ‘policy’.
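A minimal sketch of this trial-and-error loop, assuming a toy 5-cell corridor (not any benchmark mentioned here): tabular Q-learning, where the agent stumbles around, is rewarded only at the goal, and gradually distills a policy from the rewards.

```python
import random

# Toy environment: a 5-cell corridor; only the rightmost cell pays a reward.
# The learned policy (state -> best action) should become "always move right".
N_STATES, ACTIONS = 5, (0, 1)  # action 0 = left, 1 = right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(500):  # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Mostly greedy, occasionally random (exploration).
        if rng.random() < 0.2:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(N_STATES - 1, max(0, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-update: nudge the estimate toward reward + discounted future value.
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

The systems discussed below use deep networks instead of a lookup table, but the reward-driven update is the same in spirit.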

This field is perhaps the one that has most captured the attention of researchers in the last decade. In 2018, OpenAI, a non-profit research organization whose aim is to promote and develop friendly artificial intelligence, achieved very important results in the game Montezuma’s Revenge. Superhuman performance was reached using a technique called Random Network Distillation (RND), which encourages the RL agent to explore unpredictable states. This technique far surpassed the other AI systems tested on this game.
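RND itself trains two neural networks on game observations; the toy sketch below only illustrates the core idea, with lookup tables standing in for the networks: the exploration bonus is the error of a trained ‘predictor’ trying to match a fixed random ‘target’, so often-visited states become predictable (low bonus) while novel states stay surprising (high bonus).

```python
import random

rng = random.Random(0)
STATES = range(10)
target = {s: rng.random() for s in STATES}    # fixed random 'network'
predictor = {s: 0.0 for s in STATES}          # learned approximation

def intrinsic_reward(s):
    # Prediction error doubles as the exploration bonus.
    return (predictor[s] - target[s]) ** 2

def visit(s, lr=0.5):
    # Each visit trains the predictor a little on that state.
    predictor[s] += lr * (target[s] - predictor[s])

for _ in range(20):  # the agent keeps revisiting state 0...
    visit(0)

familiar = intrinsic_reward(0)  # ...so state 0 yields almost no bonus
novel = intrinsic_reward(7)     # an unvisited state still yields a bonus
```

Because the target is random and fixed, the bonus shrinks only through experience, which is exactly what pushes the agent toward states it has not yet mastered.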

This is just one of several notable results obtained in 2019. Another AI worth mentioning is DeepMind’s AlphaStar, which used multi-agent training to beat top professional players in the real-time strategy game StarCraft 2. It first made agents compete against one another, allowing them to explore the game’s immense strategic space; a new agent was then produced that combined the best strategies developed by the individual agents. In Quake 3 Arena, human-level performance was reached with multiple agents that learned independently and played both with and against one another.

 

Deep Learning

Also within ML, deep learning takes inspiration from the activity of neurons in the brain to learn how to recognize complex patterns in data, through algorithms based mainly on statistical calculations. The word ‘deep’ refers to the large number of layers of neurons that the model learns simultaneously, which helps it acquire rich representations of the data and obtain performance gains.
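The layer-upon-layer idea can be shown with a tiny hand-wired network (illustrative only; real deep networks learn their weights and use many more layers). Two stacked layers are enough to compute XOR, a pattern no single linear layer can represent:

```python
# Each layer: a weighted sum per neuron, followed by a step nonlinearity.
def layer(inputs, weights, biases):
    return [
        1.0 if sum(w * x for w, x in zip(ws, inputs)) + b > 0 else 0.0
        for ws, b in zip(weights, biases)
    ]

def network(x1, x2):
    # Hidden layer: an OR detector and an AND detector (hand-picked weights).
    hidden = layer([x1, x2], [[1, 1], [1, 1]], [-0.5, -1.5])
    # Output layer: OR and not-AND, i.e. XOR.
    (out,) = layer(hidden, [[1, -1]], [-0.5])
    return int(out)

xor_table = [network(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The first layer turns raw inputs into intermediate features (OR, AND), and the second layer combines those features; depth is this composition repeated many times over.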

2019 was a pivotal year for deep learning and its application in various sectors, particularly medicine. For example, a ‘two-stage’ approach has produced expert-level diagnoses and treatment suggestions for various eye diseases. The first stage reconstructs a map of the eye tissue from a 3D scan, while the second stage operates on this map to predict the severity of the disease. Another example is a deep learning model trained on 54 thousand electrocardiogram traces, capable of identifying 12 classes of arrhythmia.

Even more important is what researchers believe they can achieve through future study: the possibility of restoring communication to paralyzed patients and movement to quadriplegics.

In the first case, researchers from Columbia University were able to synthesize speech with a vocoder by measuring the neural activity of five epilepsy patients. The system reached 75% accuracy, and with a deep learning model a 65% improvement in speech intelligibility was observed.

In the even more surprising second case, researchers implanted a microelectrode array in the hand and arm area of the left primary motor cortex of a quadriplegic patient. They trained a neural network to predict the patient’s intended arm movements from the voltage signals recorded inside his skull. The patient was able to sustain high-accuracy reanimation of his paralyzed forearm with functional electrical stimulation for more than a year without supervision.

 

Natural Language Processing (NLP)

Natural language processing is the mechanism by which machines acquire the ability to analyze, understand, and manipulate textual data. 2019 was a great year for NLP, with Google AI’s BERT and Transformer, the Allen Institute’s ELMo, OpenAI’s GPT, Howard and Ruder’s ULMFiT, and finally Microsoft’s MT-DNN. All of these have shown that pre-trained language models can substantially improve performance on a wide variety of NLP tasks.

Let’s take the example of OpenAI’s pre-trained language model GPT-2:

 

INPUT: Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Blvd.

OUTPUT (written entirely by machine on its second attempt): The 19 years old singer was caught on camera being escorted out of the store by security guards. The singer was wearing a black hoodie with the label ‘Blurred lines’ on the front and ‘Fashion Police’ on the back. Scroll down for video.

Shoplifting: Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today (pictured) 

The singer was wearing a black hoodie with the label ‘Blurred lines’ on the front and ‘Fashion Police’ on the back.

The singer was also wearing a pair of black-rimmed glasses, a blank jacket, black jeans and black sandals. She was carrying a pair of black and white striped gloves and small black bag.

You can try this NLP at the following address: transformer.huggingface.co

To understand the progress that has been made, the GLUE (General Language Understanding Evaluation) benchmark, a collection of resources for training, evaluating, and analyzing NLP systems, has put some of these systems to the test. Its tasks evaluate different aspects of language understanding: logic, common sense, and lexical semantics. Over the past 13 months, these systems have shown very promising growth, exceeding the human baseline by one point.

In this sector, there is ever-growing interest in federated learning (FL) for real-world products. As early as 2018, Google was using FL to train Android keyboards: as you may have noticed, after a period of use an Android keyboard begins to suggest the words you use most often, even if they are not in its vocabulary. In 2019, Google generalized its FL system by introducing TensorFlow Federated, a library of functions that can be executed in a decentralized setting. This has attracted a lot of attention, especially given the growing sensitivity around the data used by machine learning systems. TensorFlow Privacy, in turn, allows machine learning systems to train models on user data while providing strong mathematical guarantees that these models do not learn or remember details about specific users.
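The federated-averaging idea behind such systems can be sketched in plain Python (this is not the TensorFlow Federated API): each client improves a shared model on its own local data, and only the updated weights, never the raw data, go back to the server to be averaged.

```python
def local_update(w, data, lr=0.02, steps=20):
    # Client-side training: fit y = w*x on private local data
    # by gradient descent. The raw points never leave the client.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    # Server side: collect and average the clients' trained weights.
    local_weights = [local_update(w_global, data) for data in clients]
    return sum(local_weights) / len(local_weights)

# Two clients whose private data both follow y = 3x; the server only
# ever sees weights, so the shared model should converge to w ≈ 3.
clients = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(5):
    w = federated_round(w, clients)
```

Real deployments add secure aggregation and differential-privacy noise on top of this loop, which is where the privacy guarantees mentioned above come from.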

 

Where are we going?

There are various directions that have already been taken, and some that have yet to be taken. The technology for autonomous vehicles, for example, still remains in the R&D stage. Google, meanwhile, with the leaps and bounds it has made in quantum computing hardware, has facilitated the creation of five start-ups working on quantum machine learning.

AI has spread to many fields. Its ever-expanding applications are helping us carry out more and more complex operations in less time and with less effort. But, what if AI were to take over our lives?

There are other, perhaps less reassuring, scenarios involving less friendly AI. Stuart Armstrong, a philosopher and researcher, has addressed the large-scale risks that AI could carry. He does not talk about intelligent machines exterminating humanity; his real fear is machines, robots, or systems becoming more intelligent than human beings, not only in mechanical operations but also in social domains, from politics to the economy.

Job loss represents one of the main risks, and here is a telling example. Take an AI that has reached the same level of intelligence as a human, copy it 100 times, and teach each copy one of 100 different professions; then copy each of those another 100 times. In less than a week, you would have 10,000 skilled workers across 100 different sectors.

Another more realistic hypothesis comes from the immense amount of data that AI systems acquire in order to feed their own ‘data-driven’ automations. What is the new nature of the work that these systems face with their immense datasets? How can we manage the social impact of these huge amounts of data? How do we deal with privacy, security, and freedom?

In 2018, we witnessed the Cambridge Analytica scandal, but the data problem did not stop there. In 2019, this negative trend continued with other privacy scandals (Facebook, Apple, and Amazon) and data trading. Among the most disconcerting developments is the ‘deepfake’, a technique for synthesizing human images with artificial intelligence: it combines and superimposes existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN).

With greater developments in technology, the level of risk rises, as history has taught us. However, artificial intelligence has now become part of everyday life and is already changing routine processes; see, for example, the voice assistants from Google, Amazon, and Apple. A study by the Boston Consulting Group showed that retailers that implemented machine learning for personalization saw a 6 to 10% increase in sales. AI will certainly continue to shape the future of industries such as the IoT (Internet of Things), transport and logistics, digital health, and many branches of fintech and insurtech.

Regardless of the sector, artificial intelligence is everywhere, and it will certainly change the way we do business.

The question is: Are we ready for change?

Contact us to find out more about ThinkOpen’s news and services.
