The study of the human mind and the attempt to reproduce its functioning by means of computers has been the objective of many scholars: scientists, philosophers, engineers and sociologists have long discussed this fascinating and controversial subject.
From its official birth at the historic 1956 conference at Dartmouth College, organised by the American computer scientist John McCarthy – winner of the Turing Award in 1971 – to the present day, Artificial Intelligence, despite early setbacks, has established itself in every field of work, taking over the most elementary activities of daily life and earning a place among the most important technologies of the 21st century.
It should be stressed, however, that progress in robotics, and consequently in artificial intelligence, has come about through the interplay of two different levels: the technical-engineering and the theoretical-philosophical. Their joint development made it possible to arrive at the current conception of the intelligent machine, sketched in the 1950s by the British scientist Alan Turing, today universally recognised as one of the pioneers of computer science.
In fact, Artificial Intelligence can be defined as ‘the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition and decision-making’, and in recent decades it has reached a level of maturity that allows it to be used pervasively in many fields: from biomedical diagnosis to finance, from management consulting to personnel selection.
We wanted to take a closer look at the recent developments and the growing computational capabilities of Artificial Intelligence systems, examining the potential and the risks by talking to Raffaele Miele, Professor of the Master in Data Science at Rome Business School, who said:
“Artificial Intelligence is based on algorithms capable of replicating human behaviour and is divided into a strong paradigm, relating to cognitive activities, and a weak paradigm, relating to specific activities.
The very ambitious goal (associated with the strong paradigm) of creating machines that can completely replace humans has not been achieved. On the other hand, it is undeniable that, within the weak paradigm, important progress has been made in practical approaches that help and augment human capabilities on many technical tasks. For example, AI can be used to analyse the video streams generated by security cameras: a well-constructed algorithm can run a fairly reliable check over large amounts of footage and identify, for example, the moments when a person enters a sensitive area. In this case we can certainly say that AI-based automation in workplaces improves efficiency.”
At present, when analysing the risks connected with the use of AI, one often speaks of “bias”: systematic distortions in the results produced, which derive from erroneous assumptions in the learning process. Using a biased AI model leads to biased decisions, and the consequences can be many. One example among many: a large company abandoned an algorithm for shortlisting candidates because it tended to systematically penalise women applying for jobs in the technology sector.
Biases can creep into algorithms in different ways, for instance through wrong assumptions introduced into the learning process or through unrepresentative training data. There are, of course, various techniques for measuring and reducing the negative consequences of bias, and thus for making Artificial Intelligence ‘responsible’, in the sense of its ethical and transparent use.
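One common way of measuring the kind of bias described above is to compare selection rates across groups. The sketch below is purely illustrative: the function name, the candidate data and the group labels are all invented, and real fairness auditing involves many more metrics.

```python
# Toy sketch: measuring the "demographic parity difference", i.e. the gap in
# positive-decision rates between groups, on hypothetical hiring decisions.

def demographic_parity_difference(decisions, groups):
    """Difference between the highest and lowest positive-decision rate."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# 1 = candidate shortlisted, 0 = rejected (invented model output)
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups    = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

gap = demographic_parity_difference(decisions, groups)
print(f"selection-rate gap: {gap:.2f}")  # → selection-rate gap: 0.40
```

A gap of zero would mean both groups are shortlisted at the same rate; a large gap, as here, is one warning sign that the model may be penalising a group systematically.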
“In order not to compromise an effective evolution of AI, it is also important that decisions based on algorithms can be explainable. This is referred to as explainability, i.e. the concrete need to understand how and why the model arrived at a certain choice or prediction. This problem is particularly acute since the use of Deep Learning has become widespread, as the decision-making process followed by the algorithms is based on very complex objects that are difficult to interpret.”
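One widely used model-agnostic technique for the explainability discussed in the quote is permutation importance: shuffle one input feature at a time and see how much the model's accuracy drops. The sketch below uses a deliberately trivial hand-written "model" and invented data, just to show the mechanics.

```python
import random

# Minimal sketch of permutation importance: if shuffling a feature barely
# changes accuracy, the model is not using that feature for its decisions.

def model(x):
    # Hypothetical trained classifier: it only looks at feature 0.
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y, predict):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, predict, feature, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(X, y, predict)
    column = [x[feature] for x in X]
    rng.shuffle(column)                    # break the feature's link to y
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, column):
        row[feature] = v
    return baseline - accuracy(X_perm, y, predict)

X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]

print(permutation_importance(X, y, model, feature=0))  # drop: feature matters
print(permutation_importance(X, y, model, feature=1))  # 0.0: feature ignored
```

For a deep network the "model" would be the trained net and the same shuffling trick still applies, which is why this family of techniques is popular for opaque models.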
For companies, understanding consumer behaviour as well as the factors that influence their decision-making is a key driver on which to base business models.
Consumer behaviour is a complex phenomenon to observe: it is related not only to purchasing power but to a combination of factors such as age, cultural level and gender, and it is influenced by reference groups, opinion leaders, brand ambassadors and social networks.
In this scenario, machine learning, i.e. the ability of machines to learn from data and make predictions without being explicitly programmed, is of paramount importance. From this point of view, big data and machine learning platforms are widely used by companies to achieve their objectives.
“Machine learning is the part of computer science that relates to the weak paradigm of artificial intelligence and seeks to increase and amplify people’s ability to carry out tasks. In practice, these are algorithms that study data and extract patterns or regularities.”
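The idea of "extracting a regularity from data" can be made concrete with the simplest possible example: fitting a straight line by least squares, with no ML library involved. The data below is invented so that the underlying pattern (y = 2x + 1) is exact.

```python
# A minimal illustration of learning a pattern from data: closed-form
# least-squares fit of a line y = a*x + b.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Data generated by the rule y = 2x + 1: the "regularity" to be recovered.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0
```

Modern machine learning models differ enormously in scale, but the core loop is the same: study the data, extract the regularity, and use it to predict unseen cases.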
Among the various applications of artificial intelligence, the hybrid cloud is one of the solutions that is becoming more and more widespread among international companies and gradually also among Italian companies. It is a combination of:
– a public cloud environment, which allows access to huge computing resources when needed
– a private cloud environment, where data and applications are accessible only to the company.
This allows companies to obtain all the benefits of the cloud while ensuring very high standards of security and data protection. The benefits are manifold: cost savings, agility and, as a direct consequence, a fast transformation that in business is an asset for gaining a competitive advantage.
“The benefits of the hybrid cloud, especially for digital business, are undeniable. For example, if I own an e-commerce site that has a big peak during the Christmas period, it is in my interest to handle that load, because it is during that traffic peak that my sales reach their expected highs. At the same time, however, it is important to ensure the necessary data protection standards. By using the hybrid cloud during the peak demand period, part of the work is transferred to a public platform, which allows resources to be scaled up in real time so that no sales are lost, while the sensitive data remains on a private cloud under the direct control of the company.”
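The routing logic described in the quote can be sketched as a simple policy: requests touching sensitive data stay on the private cloud, while everything else spills over to the public cloud once private capacity is saturated. The capacity figure, the request format and the policy itself are invented for illustration; real hybrid-cloud orchestration is far more involved.

```python
# Toy sketch of a hybrid-cloud burst policy for a seasonal traffic peak.

PRIVATE_CAPACITY = 100  # concurrent requests the private cloud can serve

def route(request, private_load):
    """Decide where a request should run (hypothetical policy)."""
    if request.get("sensitive"):
        return "private"                    # data-protection requirement
    if private_load < PRIVATE_CAPACITY:
        return "private"                    # use owned capacity first
    return "public"                         # overflow during the peak

print(route({"sensitive": True}, private_load=150))   # → private
print(route({"sensitive": False}, private_load=150))  # → public
print(route({"sensitive": False}, private_load=20))   # → private
```

The key property is visible even in this toy version: scaling happens on the public side, while sensitive data never leaves the private environment.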
“An AI professional spends at least 80% of their time doing one of the following things:
– studying the data and understanding the business problem (if focused on building the AI algorithms);
– putting the results into production (if focused on the engineering part).
These activities require a mix of very different skills: advanced statistical skills to process the data, the ability to understand business problems (e.g. how to stop a haemorrhage of customers or make one’s marketing campaigns more efficient) and IT skills, because all the algorithms created must then be released into the company’s systems. It is difficult to find a single professional capable of mastering all this knowledge, so teamwork becomes important. It is also essential to study and keep up to date through lifelong learning, because the world changes every 5 years and you have to keep up with the changes. I like to repeat that in this era an AI professional must see themselves, at least for part of their time, as a researcher”.
The power and disruptive impact of AI on society is also changing the world of work, where companies are demanding new applications. Countless new professional figures are emerging because, over the last 10 years, artificial intelligence has found massive application in all industrial sectors. It will therefore be important to put in place a regulatory framework capable of governing these issues, and the European Union is working on this. To date, experimentation is taking place mainly in start-ups.
“I feel I can say that the limit of the applicability of Artificial Intelligence in companies lies only in imagination. For example, it is no longer science fiction to say that it is possible to make coffee machines talk to consumers in order to increase their level of satisfaction. As far as new trends are concerned, having moved beyond the classic activity of supervised learning, which is by now slightly overused, I find the ability to carry out ‘unsupervised’ analyses (such as anomaly detection) very important, because they are needed in various sectors. Among the many examples: security (physical and cyber) and various contexts of the IoT (Internet of Things). There is also talk of Quantum Computing, a way of rethinking computation that exploits the laws of quantum mechanics to solve problems too complex for classical computers. Sectors that are very popular with young people include the metaverse, blockchain technology, computer vision and Natural Language Processing.”
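The unsupervised anomaly detection mentioned in the quote can be illustrated in a few lines: flag values that sit far from the rest of the data, with no labelled examples involved. The sensor readings and the 2-sigma threshold below are invented for this toy example; production systems use more robust statistics.

```python
import statistics

# Minimal unsupervised anomaly detection: flag readings whose z-score
# (distance from the mean, in standard deviations) exceeds a threshold.

def anomalies(values, threshold=2.0):
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 25.0]  # last value is an outlier
print(anomalies(readings))  # → [25.0]
```

No one told the algorithm what an anomaly looks like; it learned the "normal" range from the data itself, which is exactly what makes the unsupervised approach useful for security monitoring and IoT sensor streams.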
Data Scientist with technical and business skills. Ten years of academic background in Machine Learning and Statistics and more than ten years of experience in consulting, teaching and coaching. Several years of experience as a remote worker. Since late 2014 he has built from scratch and led a small Data Science team, which he currently manages remotely. Professional experience in applying Artificial Intelligence in a variety of areas, including marketing, fraud, cyber security, forecasting, intelligence and recommendation engines. He holds the chair of Data Mining and Big Data at Unimercatorum University and teaches the “Database Lab” course at the Federico II University of Naples.
If you are interested in Artificial Intelligence, have a look at our Master in Data Science