AIML01: Artificial Intelligence – Past, Present, and Future
Introduction
Welcome to Lecture 1 of Module 1, in which we will study the relationship between AI and the fourth industrial revolution.
In this lecture, we discuss a prediction made by J. C. R. Licklider in 1960 about the way the relationship between humans and computer technology would evolve. We will explain what is meant by AI and how Licklider's prediction is being realized today in what is known as the fourth industrial revolution, during what has been dubbed "The Cognitive Era". We will look at some examples of the use of AI in Africa and finish up by summarizing what we have covered and identifying the articles you should read to consolidate what you have learned.
We have four learning objectives. After studying the material covered in this lecture, you should be able to do the following.
Humans have always used tools to augment and amplify their physical capabilities, whether for cutting or digging.
The computer extended this to mental work, mainly as a tool for greatly increasing the speed of processing. However, the developments in artificial intelligence over the past sixty-five years have ushered in what John Kelly at IBM refers to as the cognitive era (Kelly, 2015).
In 1960, J. C. R. Licklider predicted a symbiotic partnership between humans and computers. Symbiosis refers to a situation where two distinct species cooperate closely to achieve what neither could achieve on their own. Licklider described this relationship as one between "men and computers". (Unfortunately, there was little awareness of gender bias in 1960. Today, we avoid such gender bias and would refer to a relationship between humans and computers.)
This symbiotic partnership between humans and computers will perform intellectual operations much more effectively than humans can perform them on their own. The partnership is currently being realized through AI and machine learning. AI both amplifies and augments human cognitive abilities, improving our existing skills and giving us new ones. Thus, we can do what we used to do, but now we can do it much more quickly, much more efficiently, and much more effectively. We can also solve problems that we simply weren't able to solve before. Licklider also recognized the possibility that computer systems could become more intelligent than humans. We haven't reached that point yet, but if and when we do, we will have reached what is known as the technological singularity: the point in time when the autonomous capabilities of AI exceed those of humans (Shanahan, 2015).
What is the best way to describe the benefits of AI for humans?
So, what do we mean by artificial intelligence?
While the remaining lectures in this course will answer this question in detail, we need an answer here to get started.
Here are some definitions from Smith and Neupane (2018).
The first definition - using a computer to solve [the] kinds of problems now reserved for humans - comes from the original proposal by John McCarthy and other pioneering scientists to hold a workshop on AI. We return to John McCarthy in the next lecture.
The second definition focusses on prospection - the ability to anticipate the future - as the key attribute of intelligence, artificial or natural.
The third definition focusses on action as the key attribute of AI.
The fourth definition notes that there are many elements in AI, one of which is machine learning. Others are perception, reasoning, and natural language processing.
The sixth definition emphasizes the human-level nature of AI and the need to acquire and use knowledge.
It helps to distinguish between what AI can do, that is, the behaviors an AI system can engage in, and how it does it, that is, the underlying techniques. Example behaviors include optimization, pattern recognition, pattern detection, prediction, hypothesis testing, natural language processing, and machine translation. These behaviors are achieved using various AI techniques. Two of the most important of these are machine learning algorithms and knowledge-based systems (or knowledge representation and reasoning systems).
Machine learning techniques, such as deep learning, form the basis of AI applications after being trained using very large datasets (sometimes referred to as big data). In contrast, knowledge-based systems attempt to emulate the problem-solving skills of a human expert by using explicitly encoded knowledge and inference procedures, or reasoning, to solve problems. Knowledge-based systems operate in situations where there is already a corpus of explicit expertise on how to perform a task or solve a particular problem. Knowledge-based systems were among the original AI techniques, developed before big data, at a time when computational power was limited; they used to be called expert systems. We cover these and other techniques in more detail in Module 2: expert systems in Lecture 1 and machine learning in Lectures 2 and 3. There is also a third, related approach: probabilistic Bayesian learning, also referred to as Bayesian networks, Bayes nets, belief networks, decision networks, and probabilistic graphical models. These provide a powerful way to capture the probabilistic, that is, statistical, relationships among the entities being modelled. In turn, this allows the AI system to deal with uncertainty: inferring the most likely outcomes and drawing the conclusions that are most likely to be correct when solving problems. We return to this topic in Module 2, Lecture 3 on statistical machine learning.
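To make this contrast concrete, here is a minimal sketch in Python of the two styles of reasoning just described. The rules, facts, and probabilities are hypothetical, invented purely for illustration; they are not drawn from any real system mentioned in this lecture.

```python
# Minimal sketch of two AI techniques (all rules and probabilities
# below are hypothetical, chosen only for illustration).

# 1. Knowledge-based system: explicitly encoded expertise plus inference.
#    Forward chaining: keep firing rules until no new facts are derived.
rules = [
    ({"leaf_spots", "humid"}, "fungal_infection"),  # if both hold, conclude infection
    ({"fungal_infection"}, "apply_fungicide"),      # if infected, recommend treatment
]
facts = {"leaf_spots", "humid"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
print(facts)  # now includes 'fungal_infection' and 'apply_fungicide'

# 2. Bayesian network with two nodes, Disease -> Symptom:
#    infer the probability of disease once the symptom is observed.
p_disease = 0.01                 # prior P(D)
p_symptom_given_disease = 0.90   # likelihood P(S | D)
p_symptom_given_healthy = 0.05   # false-positive rate P(S | not D)

p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")  # about 0.154
```

The first fragment reasons with explicit, hand-coded knowledge, while the second draws the most likely conclusion from probabilistic relationships. Real expert systems chain thousands of such rules, and real Bayesian networks extend this two-node calculation to large graphs of interdependent variables.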
Who suggested that AI refers to the use of computers to solve the kinds of problems now reserved for humans?
Today, we are in the middle of another industrial revolution - the fourth industrial revolution - and AI is one of its main foundations.
The fourth industrial revolution - sometimes referred to as 4IR or Industry 4.0 - involves the fusion or tight integration of physical, digital, and biological technologies.
These are often referred to as cyber-physical systems.
Think of smart, wearable devices that interface directly with our sensorimotor or brain function and that assist us in our daily lives.
These are powered by AI and machine learning.
And they work by being able to communicate everywhere with other devices and data sources.
The Fourth Industrial Revolution represents a fundamental change in the ways that we live and work.
It is a new chapter in human development, ... merging the physical, digital, and biological worlds and fusing technologies in ways that create both promise and peril.
The World Economic Forum enables a fast-growing network of Centres for the Fourth Industrial Revolution.
There is a centre in Rwanda and one in South Africa.
One of the biggest challenges we face is making sure we harness the power of AI in an ethical manner,
so that the economic benefits and social advances are achieved for everyone, everywhere.
Or, as the World Economic Forum puts it, "in ways that create a more inclusive, human-centred global economy."
For everyone, everywhere ... that includes Africa.
The Fourth Industrial Revolution and the digital transformation of Africa have the potential to greatly increase the rate of growth and advancement in many sectors of life and industry.
What technologies is the Fourth Industrial Revolution based on?
Many people agree, while recognizing that the fourth industrial revolution will, and must, unfold in a way that takes into account, and takes advantage of, the "unique geographical, cultural and political nature of the continent" (Wairegi et al., 2021). Travaly and Muvunyi (2020) say that "AI in particular presents countless avenues for both the public and private sectors to optimize solutions to the most crucial problems facing the continent today, especially for struggling industries." They conclude that "Artificial intelligence for Africa presents opportunities to put the continent at the forefront of the Fourth Industrial Revolution". AI is sometimes associated with displacing workers, but that doesn't have to be the case: AI can also empower low-skilled workers and equip them to take on more complex responsibilities (Novitske, 2018). AI has the potential to overcome some of the most pressing challenges facing Africa and to drive growth and development in core sectors such as agriculture, healthcare, public services, and financial services, among many others.
Here are a few examples. AI and drone technology, also known as UAVs (unmanned aerial vehicles), are being used for precision agriculture: making targeted interventions that optimize the use of available resources to increase the profitability and sustainability of agricultural operations. Their use is growing quickly in situations where crops are grown as a monoculture on large holdings.
And there are several companies, such as Charis Unmanned Aerial Solutions in Rwanda and IAS and Aerobotics in South Africa, that are now addressing the challenges of deployment for small-scale, multi-crop farms. This also opens up opportunities to develop systems that can automatically incorporate agronomic expertise to identify appropriate interventions based on real-time sensor data - for example, soil moisture level, pH level, nitrate level, and temperature - often exploiting IoT platforms. Microsoft is using its FarmBeats platform - combining AI, internet of things technology, and drones - to provide cost-effective solutions for small-holder farmers.
Ircad, a France-based research institute, has opened a training and R&D centre in Rwanda - Ircad Africa - for minimally-invasive surgery using the latest in computer vision and robotics technology. Ircad Africa also conducts research in surgical data science, focussing on digestive cancer prevention to improve early cancer diagnosis and to implement new therapeutic strategies.
Silicon Valley startup Zipline delivers more than 50 types of blood products to rural hospitals and clinics using custom-designed drones.
The Zipline drones have a range of more than 100 kilometers. As soon as a drone leaves the launch catapult, it is fully autonomous.
Hepta Analytics, a startup by seven Carnegie Mellon University Africa graduates, specializes in helping local industry leverage the benefits of data science. One of their products, Najua, focusses on using machine learning to make the web available in local African languages. Another Carnegie Mellon University Africa graduate heads a team of entrepreneurs deploying IoT technology on tea plantations in Uganda. Ubenwa is a mobile app developed by a start-up in Nigeria. It uses AI to analyse acoustic signatures in newborn babies to detect early signs of perinatal asphyxia, a leading cause of neonatal disability and death.
Many developing countries in Africa have an agrarian economy driven primarily by smallholder farmers. uLima is a smartphone app for farmers, agro-dealers, and others in the agriculture sector. It provides access to crop and livestock management information, weather and market price information, as well as customized crop and livestock calendars, all focused on improving farm productivity and the livelihoods of farmers and their families. Since most farmers in Africa are smallholders and don't necessarily have access to smartphone technology, other companies such as iCow provide similar services using lower-tech feature phones.
Summary
Let's summarize the main points in this lecture.
Recommended Reading
Here are two articles on which the material in this lecture is based. Read the introduction to both.
We will revisit these two papers throughout the course.
References
Here are some of the references cited to support the main points in what we covered in this lecture.
Kelly JE (2015) Computing, cognition and the future of knowing. White paper, IBM Corporation.
Shanahan M (2015) The Technological Singularity. MIT Press.