Professor Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race; it would take off on its own and redesign itself at an increasingly rapid rate. Humans, who are limited by slow biological evolution, could not compete and would be overwhelmed.”
But the good professor also admitted that AI could benefit humanity: “The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems. As capabilities in these and other areas cross the threshold from laboratory research into economically valuable technologies, a virtuous circle sets in where even small performance improvements are worth large sums of money, which encourages further investment in research. There is now a broad consensus that AI research is advancing steadily and its impact on society is likely to increase… Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
What is Artificial Intelligence?
Few people these days haven’t heard of AI, and most of us have been touched by it in one way or another. The simplest example is the smartphone, with its voice assistant and fingerprint and facial recognition. Although AI has been around for years, its true potential is only now being tapped. Scientists have classified AI into four categories:
- Reactive – machines that produce a result based on the input given to them. Their output is predictable, but there is no learning.
- Limited memory – these have a built-in learning engine and can use past data to make accurate predictions.
- Theory of Mind – this is still a work in progress; when ready, it will have human-like decision-making prowess along with emotion-based behavior.
- Self-awareness – machines that are aware of their own emotions and have wants and needs of their own. Nothing like this has been developed so far, but it remains within the realm of the possible.
There is another way to classify AI according to its use:
- Narrow AI can undertake one type of activity or task repeatedly and accurately, such as spam filters or TV show recommendations.
- General AI has human-like abilities. Although this is a theoretical possibility and research is ongoing, we are still far from achieving it. The closest example of this type of AI is the robot Sophia, developed by Hanson Robotics.
- Super AI will have capabilities that surpass humans. It is debatable whether this is even achievable, but if it is, it could put us at odds with the machines.
In its 2022 Hype Cycle for Emerging Technologies, Gartner listed three distinct themes, one of which is accelerated AI automation. It highlights the need to expand the adoption of AI to scale products, services and solutions. It emphasizes the accelerated creation and deployment of AI models with results that include more accurate predictions and decision making.
Applications of artificial intelligence
Currently, the most widely deployed form of AI is Narrow AI: systems designed to perform repetitive tasks independently and accurately. Some examples are:
- Spam filtering and mail sorting, such as that performed by Outlook and Gmail
- Voice assistants like Siri, Alexa, Google Assistant
- TV show recommendations on Netflix, YouTube
- Facial and fingerprint recognition
- Autonomous vehicles
- Machine maintenance and failure alerts in manufacturing
- Fraud detection in the financial services sector
- Telemedicine and healthcare
From the non-exhaustive list above, it is evident that AI is now being deployed in all walks of life and in most sectors of industry to help workers be more productive. It is present in our daily lives in the form of smartphones, home automation systems, robot vacuum cleaners, road maps, etc.
Have you ever wondered what technologies underlie these applications?
Speech recognition is used by the voice assistants that are prevalent in our smartphones and in standalone devices like Alexa. Voice recognition works in conjunction with Natural Language Processing (NLP): by breaking down the structure (morphology) of sentences and then applying syntactic analysis, semantic analysis, sentiment analysis and other techniques from linguistic science, it makes systems intelligent enough to interact with humans.
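As a toy illustration of two of those steps, here is a minimal sketch of tokenization (morphology) followed by lexicon-based sentiment analysis. The word lists and scoring rule are invented for the example; real assistants learn these associations from large corpora rather than from hand-written lists.

```python
import re

# Hand-written lexicon, purely illustrative; production systems learn
# word-sentiment associations from data instead.
POSITIVE = {"great", "good", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "poor", "sad"}

def tokenize(sentence: str) -> list[str]:
    """Morphology step: break a sentence into lowercase word tokens."""
    return re.findall(r"[a-z']+", sentence.lower())

def sentiment(sentence: str) -> str:
    """Sentiment step: score tokens against the lexicon and vote."""
    tokens = tokenize(sentence)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great phone"))   # positive
print(sentiment("The battery life is bad"))   # negative
```

Even this crude pipeline shows the general shape: raw text is first segmented into units, and higher-level analyses are then run over those units.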
Machine learning and deep learning
Machine learning and deep learning algorithms are used to train models for data classification; which of the two to use depends on the type of problem to be solved. They are also used in data analysis, processing and cleaning, especially when the data is unstructured and consists of images, video and voice. This has found critical application in cybersecurity in today’s digital age: based on their ability to decipher patterns in huge volumes of data, these algorithms help identify and prevent threats.
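To make the idea of training a model for classification concrete, here is a sketch of one of the simplest machine learning classifiers, k-nearest neighbors: a new point is labeled by majority vote among the closest labeled examples. The two-dimensional points and the spam/ham labels are fabricated for illustration; real spam filters work on features extracted from messages, not raw coordinates.

```python
import math
from collections import Counter

def knn_classify(samples, query, k=3):
    """Label `query` by majority vote among its k nearest labeled samples.

    `samples` is a list of ((x, y), label) pairs; distance is Euclidean.
    """
    ranked = sorted(samples, key=lambda s: math.dist(s[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Tiny made-up training set: points near the origin are "spam",
# points far from it are "ham".
training = [
    ((0.1, 0.2), "spam"), ((0.3, 0.1), "spam"), ((0.2, 0.4), "spam"),
    ((2.0, 2.1), "ham"),  ((2.3, 1.9), "ham"),  ((1.8, 2.2), "ham"),
]

print(knn_classify(training, (0.2, 0.2)))  # spam
print(knn_classify(training, (2.1, 2.0)))  # ham
```

Deep learning replaces this hand-picked distance rule with many layers of learned transformations, which is what lets it cope with unstructured data like images and voice.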
Computer vision uses machine learning algorithms for simple applications and deep learning for complex applications in the detection and then identification of objects. This is put to good use in the facial recognition features of smartphones and computers on the one hand and in autonomous vehicles on the other. Other examples of computer vision are barcode scanning, automated checkout terminals and medical imaging.
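A minimal sketch of the kind of low-level operation object detection builds on is edge detection: differencing neighboring pixel values to find boundaries. The 5x5 "image" below is invented for the example; real vision systems apply learned stacks of similar filters over megapixel images.

```python
# A 5x5 grayscale "image": dark left half, bright right half.
IMAGE = [
    [0, 0, 255, 255, 255],
    [0, 0, 255, 255, 255],
    [0, 0, 255, 255, 255],
    [0, 0, 255, 255, 255],
    [0, 0, 255, 255, 255],
]

def horizontal_edges(image):
    """Detect vertical edges by differencing horizontally adjacent pixels."""
    return [
        [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
        for row in image
    ]

edges = horizontal_edges(IMAGE)
# Each row reads [0, 255, 0, 0]: a strong response exactly at the
# boundary between the dark and bright halves.
print(edges[0])
```

Deep networks for detection learn thousands of filters like this one automatically, then combine their responses into object outlines and identities.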
That said, today’s AI has limitations and is an evolving science. It lacks the following characteristics that we humans have:
- Common sense – without the right input, it won’t give the right result
- Limited self-learning capability – most AI implementations will require repeated offline training
- Causal reasoning – it cannot understand the cause and effect of an event
- Ethical reasoning – Microsoft had to abruptly withdraw its chatbot Tay, which had inadvertently tweeted offensive messages. It simply didn’t understand the concept of right and wrong.
Where is artificial intelligence going?
Like it or not, AI is ubiquitous in our lives today, whether on a personal level or on a corporate level. This influences our decisions and makes things easier. However, this is just the beginning and with the kind of money that is poured into AI research and development, this field will see new products and solutions. These may well disrupt the status quo for better or for worse.
The main fear is that AI will have a negative impact on the workforce and eliminate routine, repetitive jobs very soon. Whereas earlier the talk was of gradually reducing the workforce, the question now is how large a workforce is needed at all. Then there are huge privacy concerns – in 2018, the human rights organization Article 19 expressed serious reservations about the direction AI is taking: “If implemented responsibly, AI can benefit society. However, as is the case with most emerging technologies, there is a real risk that commercial and state use will have a detrimental impact on human rights. In particular, the applications of these technologies frequently rely on the generation, collection, processing and sharing of large amounts of data, both on individual and collective behavior.” The organization also highlighted the fact that automated decision-making can lead to biased results.
What is Augmented Intelligence?
Gartner defines augmented intelligence as a design pattern for a human-centered partnership model in which people and artificial intelligence (AI) work together to enhance cognitive performance, including learning, decision making and new experiences. As the name suggests, augmented intelligence is the use of machines to support humans, not replace them. The idea is to combine human cognitive faculties with machine speed and precision to improve results.
(This is the first part of a two-part article that aims to shed light on the fascinating field of artificial intelligence)