Defining Artificial Intelligence:
A Research Brief

Keketso Kgomosotho

Cite as Keketso Kgomosotho, ‘Defining Artificial Intelligence’ (2023) TECHila LAW Research Brief

The expression “Artificial Intelligence” (AI) was coined in 1956 by John McCarthy, the father of the technical field of AI.[1] His work at the time, which significantly informed the progression of the field, was driven by the desire to get a computer to do things that, when done by humans, are said to involve intelligence. Since 1956, however, experts and scholars have been unable to agree on what AI means as a concept[2] or to which technologies the idea refers.[3] A survey of AI jurisprudence reveals a plethora of definitions of AI.[4] The definitions vary widely, and each role player in the AI life cycle defines the concept differently[5] – depending on their goals, incentives, classification models and theories for understanding and defining AI.

There is currently no legal definition of AI, and of course any legal or regulatory regime must, in the end, define what is being regulated.[6] Below I turn to the question of what AI is and is not, as well as what distinguishes it from other technological advances.

Defining “Artificial” and “Intelligence”

The primary difficulty with defining AI is that nobody really knows what intelligence is,[7] and understanding intelligence is logically a necessary precondition to understanding and defining its artificial manifestation.[8]

  1. “Artificial”

The artificial part is readily understood – it denotes something that is inorganic, man-made and lacking in natural quality.[9] The adjective “artificial” therefore describes a manufactured, rather than natural, manifestation of intelligence.

  2. “Intelligence”

Intelligence, on the other hand, currently has no agreed-upon definition. In more than 120 years of the study of intelligence, its conceptual meaning has continued to evade all who have attempted to understand or measure it.[10] Steven J. Dick likens this lack of success to biologists’ attempts to define “life.”[11] The result of this conceptual ambiguity is a myriad of definitions and theories of intelligence,[12] each organised around a range of intersecting human characteristics which are themselves difficult, if not impossible, to define, including traits like self-awareness, consciousness, and the use of reason or language.[13]

Other authors reason that intelligence is perhaps not capable of hard definition so much as of description in approximate, give-or-take terms.[14] Kaplan goes further, noting that trying to define something as “subjective and abstract” as intelligence is a set-up for failure because that level of definitional precision cannot be achieved. The direct implication, he notes, is that definitions of AI can only point to an “ideal target rather than a measurable research concept.”[15]

There Is No Agreed Definition for Artificial Intelligence

The literature reveals that since AI is the artificial manifestation of intelligence, it necessarily inherits each of these definitional challenges and ambiguities.[16] As such, the term has been characterised as “misleading,” “ambiguous,”[17] “generic,” “a marketing phrase,” “a buzz-word”[18] and “an intellectual wildcard”[19] – which, as Jordan highlights, makes it difficult to engage with the “scope and consequences” of AI, including for purposes of legal regulation.[20]

Several authors have aptly undertaken the work of cataloguing and analysing the different definitions of AI.[21] All of these definitions are anchored to human intelligence[22] because – as McCarthy notes – we have not yet figured out what kinds of computational procedures we want to call intelligent without reference to human intelligence.[23] For McCarthy, AI is “the science and engineering of making intelligent machines, especially intelligent computer programs.”[24] In 1988 he posited that the goal of AI is to make “…machines more capable than humans at solving problems and achieving goals requiring intelligence.” To achieve this in practice, computer scientists attempt to understand the commonsense world in which people achieve their goals, and to develop intelligent computer programs accordingly. The resulting computer systems replicate this intelligence by perceiving and engaging with their environment, processing the information, and taking action appropriate in the circumstances to achieve a specified goal, solve a set problem or make a decision – often with some measure of autonomy.

The AI Watch Report synthesises four common features that may be considered indicative of AI: (a) the ability to perceive the environment; (b) the ability to process information by gathering and interpreting data; (c) the ability to make decisions, including through reasoning, learning and taking actions; and (d) the ability to achieve specified goals.[25]
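These four indicative features can be made concrete with a deliberately trivial sketch. The thermostat-style program below is a hypothetical toy, not drawn from any cited system; it simply shows the perceive–process–decide–act loop those features describe.

```python
# A toy "agent" illustrating the four indicative features from the AI Watch
# report: it perceives an environment, processes the data, decides, and
# acts toward a specified goal. All names and values are illustrative.

def run_thermostat(readings, target=21.0):
    """Decide a heating action for each perceived temperature reading."""
    actions = []
    for temp in readings:           # (a) perceive the environment
        error = target - temp       # (b) process/interpret the data
        if error > 0.5:             # (c) decide through simple reasoning
            actions.append("heat_on")
        elif error < -0.5:
            actions.append("heat_off")
        else:
            actions.append("hold")  # (d) goal: keep temperature near target
    return actions

print(run_thermostat([18.0, 21.2, 23.5]))  # ['heat_on', 'hold', 'heat_off']
```

Trivial as it is, the example shows why commentators find the four features under-inclusive as a definition: a household thermostat satisfies all of them.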

While this is far from precise, these features correlate with the extant literature. For instance, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression defines AI as a “constellation of processes and technologies enabling computers to complement or replace specific tasks otherwise performed by humans, such as making decisions and solving problems.”[26] Likewise, the EC JRC Report on AI defines it as “…a generic term that refers to any machine or algorithm that is capable of observing its environment, learning, and based on the knowledge and experience gained, taking intelligent action or proposing decisions.”[27]

For the HSRC, AI is “a term typically applied today in reference to neural-network-based learning systems that process big data according to algorithmic models, and data processing digital systems that function to produce results and decisions with a level of machine autonomy.”[28] The Organisation for Economic Cooperation and Development (OECD) Group of Experts defines it as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[29] The leading AI textbook by Russell and Norvig presents eight definitions of AI, organised into four categories of computer systems: those that (a) think humanly, (b) act humanly, (c) think rationally, and (d) act rationally.[30]

The European Union’s Artificial Intelligence Act (AI Act) is the first piece of legislation of its kind, poised to regulate the use and development of AI technologies. Article 3(1) of the draft Act defines an ‘artificial intelligence system’ as

…software that is developed with [specific] techniques and approaches [listed in Annex 1] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

This encompassing definition applies to AI systems that function on their own as well as to those operating as part of a product, and aims to capture both contemporary and future AI technological developments.

In an attempt to remain adaptable to the evolving landscape of technology, the Commission has put forth a proposal to establish a technology-neutral definition of AI systems under EU law.

However, this definition has not been without critique. Detractors have voiced concerns that:

  • It is excessively broad. The proposed definition of AI systems encapsulates far more than what is conventionally understood as AI, including the most basic search, sorting and routing algorithms, inadvertently subjecting these simple processes to a new set of regulations.
  • Critics recommend adopting a more precise definition of AI systems, focusing solely on high-risk AI applications, rather than extending to all AI applications, or software more generally.
  • It lacks clarity on how components of larger AI systems – such as pre-trained AI components from different manufacturers, or components that are not released separately – ought to be dealt with.

The Commission has acknowledged these criticisms and has promised to revisit the definition of AI systems during the finalisation of the AI Act.

Different types of AI systems

There are different types of AI systems. The leading classification theory identifies two major categories of AI: Narrow AI (or domain-specific AI) and Artificial General Intelligence (AGI).[31] Since the inception of the field in 1956, its Holy Grail has been the development of AGI – a speculative machine intelligence that can learn, understand and perform any intellectual task human beings can, across the full range of cognitive capabilities, limited only by how far the machine can improve itself.[32] This hypothetical type of AI is usually depicted in sci-fi media as conscious, super-intelligent overlords that take over the world – The Terminator comes to mind here. According to some authors,[33] the successful development of AGI is the point at which we will realise or experience the “AI Singularity” – a point where AI supersedes human intelligence and capabilities in every field, making computers permanently superior and turning computer systems into overlords who will govern our lives.[34]

What we do have, however, is Narrow AI – a category of AI systems capable of intelligence in a specific domain.[35] These systems can do only that for which they were trained, and operate within a predefined environment.[36] This type of AI can outperform human capabilities in only a very narrowly defined task, such as driving a car, specific decision-making, playing chess, making predictions, or diagnosing diseases.[37] Although narrow AI is superior in a limited domain, it cannot re-use its learned knowledge across domains. These systems can rapidly analyse massive amounts of data and draw inferences about people’s identities, preferences, demographic characteristics, likely future behaviours, and/or objects associated with them.[38]

At the moment, this is the most advanced stage of AI capability in the evolution of the technology. Thus all AI systems employed today are narrow AI, in that they can perform only those tasks for which they have been trained, without the capability to engage in a different task. This category of AI will be the subject of this research enquiry.

AI systems can be purely software-based,[39] or embedded in devices such as humanoid robots or self-driving vehicles. The former include virtual assistants like Apple’s Siri, chatbots, data analysis based on machine learning, facial recognition systems, and speech recognition and translation software. AI can also be embedded in devices such as autonomous weapons, robots, self-driving cars and drones.[40]

Understanding Algorithms

An algorithm is “a computer code designed and written by humans, carrying instructions to translate data into conclusions, information or outputs.”[41] An algorithm can also be understood as “[a] finite list of instructions, most often used in solving problems or performing tasks and commonly used in computer science and programming processes.”[42]

Algorithms are at the heart of AI systems.[43] They are often described as a set of coded instructions for the achievement of a set goal[44] – a recipe is frequently cited as the classic example of what an algorithm is. Algorithms are not new; they have long been part of computer functionality. However, they owe their recent popularity to the Machine Learning techniques responsible for the current AI summer.[45]
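The recipe analogy can be made literal: an algorithm is a finite, ordered list of instructions that turns inputs into an output. The tea-making function below is a purely illustrative sketch of that idea.

```python
# The recipe analogy made literal: a finite, ordered list of instructions
# that transforms inputs (ingredients) into an output. Purely illustrative.

def make_tea(water_ml, leaves_g):
    """Return the ordered steps for brewing tea from the given inputs."""
    steps = []
    steps.append(f"boil {water_ml} ml of water")
    steps.append(f"add {leaves_g} g of tea leaves")
    steps.append("steep for 3 minutes")
    steps.append("strain and serve")
    return steps

for step in make_tea(250, 5):
    print(step)
```

Like any algorithm in the definitions above, the procedure is fully specified in advance: given the same inputs, it produces the same output every time.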

Algorithms can be joined to form networks of complex algorithmic systems, such as search engines, lethal autonomous weapons or self-driving cars. Algorithms can be programmed by human programmers, or generated automatically from data through machine learning.[46]

Different ways of training AI algorithms

From the 1970s to the 1990s, the dominant category of AI was symbolic AI, exemplified most keenly by expert knowledge systems.[47] Since then, Machine Learning (ML) has become the dominant AI technique and is credited with the current “AI summer.”[48] ML has been described as “the scientific study of computer algorithms that improve automatically through experience. ML algorithms build a model based on training data, in order to make predictions or decisions without being explicitly programmed to do so.”[49] Lehr and Ohm define it as “an automated process of discovering correlations (sometimes alternatively referred to as relationships or patterns) between variables in a dataset, often to make predictions or estimates of some outcome.”[50] ML has also been called “statistics on steroids.”[51]

ML algorithms can be divided into different techniques. First is supervised learning, which links input to output values based on labelled historic examples of input-output pairs in training data. For instance, to train an ML algorithm to recognise a human face, a programmer would typically use a labelled data set of images containing many different faces. Although the ML algorithm has no conceptual understanding of what a face is, it learns to recognise patterns in the images – to “learn” which constellation of features comprises a human face. This technique requires vast amounts of data, which is often collected and labelled by humans. Hence the term “Big Data,” which refers to the computational analysis of extremely large datasets to reveal patterns, connections and trends.[52]
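A minimal sketch can show what “learning from labelled examples” means in practice. The nearest-centroid classifier below is a deliberately simple stand-in for real supervised methods, and its data and labels are invented for illustration.

```python
# A sketch of supervised learning: a nearest-centroid classifier "learns"
# from labelled input-output pairs, then labels new, unseen inputs.
# The 2-D points and "cat"/"dog" labels are invented for illustration.

def train(examples):
    """examples: list of ((x, y), label) pairs. Returns per-label centroids."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Label a new point by its nearest learned centroid."""
    x, y = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - x) ** 2 +
                               (centroids[lbl][1] - y) ** 2)

training_data = [((1, 1), "cat"), ((2, 1), "cat"),
                 ((8, 9), "dog"), ((9, 8), "dog")]
model = train(training_data)
print(predict(model, (1.5, 1.2)))  # cat
print(predict(model, (8.5, 8.5)))  # dog
```

As in the face-recognition example above, the program has no concept of “cat” or “dog”: it only exploits a statistical regularity in the labelled examples it was given.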

Next is unsupervised learning, a machine learning technique which finds new patterns in datasets of non-labelled data, without a programmer to “teach” it. Here the algorithm aims to explore and define the hidden structure of the data, sometimes by grouping similar items into clusters and then classifying them based on the discovered commonalities. Unsupervised learning is used, for instance, in pattern recognition, facial recognition, identifying data trends, document classification and data mining.[53]
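The clustering idea can be sketched with a bare-bones k-means routine; the data points and starting centroids below are invented for illustration, and real systems operate on far higher-dimensional data.

```python
# A sketch of unsupervised learning: k-means clustering groups unlabelled
# points by similarity, with no labels to "teach" it. Illustrative data only.

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2 +
                                    (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

data = [(1, 2), (2, 1), (1, 1), (8, 8), (9, 9), (8, 9)]
centroids, clusters = kmeans(data, centroids=[(0, 0), (10, 10)])
print(sorted(len(c) for c in clusters))  # [3, 3]
```

No one told the program that two groups exist in the data; the structure is “discovered” purely from the distances between points.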

Third is semi-supervised learning – the machine learning technique considered to lie between supervised and unsupervised learning, requiring both labelled and unlabelled data.
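One common semi-supervised approach, “self-training,” can be sketched in a few lines: a model fitted on the few labelled points assigns provisional (pseudo-) labels to the unlabelled points, and is then refitted on both. The one-dimensional data below is invented for illustration.

```python
# A sketch of semi-supervised learning via "self-training": a model fitted
# on scarce labelled data pseudo-labels the unlabelled data, then is refitted
# on the combined set. All values and labels are invented for illustration.

def centroid_fit(pairs):
    """pairs: list of (value, label). Returns the per-label mean value."""
    sums, counts = {}, {}
    for v, lbl in pairs:
        sums[lbl] = sums.get(lbl, 0.0) + v
        counts[lbl] = counts.get(lbl, 0) + 1
    return {lbl: sums[lbl] / counts[lbl] for lbl in sums}

def centroid_predict(model, v):
    """Label a value by its nearest per-label mean."""
    return min(model, key=lambda lbl: abs(model[lbl] - v))

labelled = [(1.0, "low"), (9.0, "high")]   # scarce labelled data
unlabelled = [2.0, 1.5, 8.5, 10.0]         # plentiful unlabelled data

model = centroid_fit(labelled)             # step 1: fit on labelled data
pseudo = [(v, centroid_predict(model, v)) for v in unlabelled]
model = centroid_fit(labelled + pseudo)    # step 2: refit on both

print(centroid_predict(model, 3.0))  # low
```

The refitted model sits “between” the two paradigms: the labels originate from supervision, but most of the structure is extracted from unlabelled data.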

Finally, reinforcement learning is a machine learning technique exploring how agents take actions in an environment to maximise a reward. An example is an AI agent that plays chess against itself, learns the game, and attains superhuman performance at chess. Because of this wide scope of AI systems and their applications, I propose a study limited to AI algorithms used in the context of decision-making, where decisions have a legal impact on individuals and their internationally protected human rights.
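The trial-and-error loop of reinforcement learning can be sketched with tabular Q-learning on a toy “corridor” environment. The environment, reward and parameters below are invented for illustration; chess-playing systems use vastly more sophisticated variants of the same idea.

```python
# A sketch of reinforcement learning: tabular Q-learning on a five-state
# corridor. The agent is rewarded only at the rightmost state and learns,
# by trial and error, that moving right maximises its reward.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                              # episodes
    state = 0
    while state != GOAL:
        if random.random() < epsilon:             # explore
            action = random.choice(ACTIONS)
        else:                                     # exploit current estimate
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = nxt

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)  # the learned greedy policy favours +1 (right) in every state
```

No one programs the policy; it emerges from the reward signal alone, which is what distinguishes this technique from the supervised setting above.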

Generative AI

Generative AI, as the name suggests, refers to a class of AI systems capable of generating new content by processing data. This could be anything from images and music to text and even entire virtual worlds. This ability comes from the machine learning algorithms generative AI employs, particularly Generative Adversarial Networks (GANs).

GANs were introduced by Ian Goodfellow and his colleagues in 2014. They consist of two main components: a generator, which produces new data instances, and a discriminator, which evaluates the authenticity of the generated data against real data. The two parts work in tandem, with the generator improving its ability to create realistic data and the discriminator improving its ability to detect the generated data, leading to a continuously improving system.
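The adversarial loop can be sketched on a toy one-dimensional problem. The linear generator and logistic discriminator below are invented simplifications, not Goodfellow’s original neural architecture; they only illustrate the generator-versus-discriminator dynamic.

```python
# A toy GAN sketch: generator G(z) = a*z + b tries to turn noise into samples
# resembling real data drawn from N(4, 0.5); discriminator D(x) = sigmoid(w*x + c)
# tries to tell real from generated. All values here are illustrative.
import math
import random

random.seed(1)
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, t))))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 32
REAL_MEAN, REAL_STD = 4.0, 0.5

for _ in range(4000):
    reals = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    noise = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in noise]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = sum(-(1 - sigmoid(w * x + c)) * x for x in reals) / batch \
       + sum(sigmoid(w * x + c) * x for x in fakes) / batch
    gc = sum(-(1 - sigmoid(w * x + c)) for x in reals) / batch \
       + sum(sigmoid(w * x + c) for x in fakes) / batch
    w -= lr * gw
    c -= lr * gc

    # Generator step (non-saturating loss): push D(fake) toward 1.
    ga = sum(-(1 - sigmoid(w * (a * z + b) + c)) * w * z for z in noise) / batch
    gb = sum(-(1 - sigmoid(w * (a * z + b) + c)) * w for z in noise) / batch
    a -= lr * ga
    b -= lr * gb

sample_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(sample_mean, 1))  # close to the real mean of 4.0
```

The generator starts by producing samples centred on 0; by trying to fool the discriminator it is gradually pushed toward the real distribution, mirroring in miniature the continuously improving system described above.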

The remarkable trait of generative AI is its capacity to learn the underlying structure and patterns of the input data, allowing it to generate new data that mirrors the original. OpenAI’s ChatGPT is an example of a generative AI model. ChatGPT is trained to generate human-like text based on the input it is given. It has been trained on a vast amount of data from the internet, and it uses this training to generate responses to user inputs that are contextually relevant and coherent. While ChatGPT can generate creative and seemingly knowledgeable responses, these are produced from patterns it learned during training, rather than from any kind of conscious understanding or reasoning. Other leading examples include Midjourney and DALL-E.
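The point that such systems generate from learned patterns rather than understanding can be illustrated with a deliberately tiny stand-in: a bigram model that counts which word follows which in a small corpus and then emits text by sampling observed continuations. Real models like ChatGPT are neural networks trained on vast corpora, but the sketch shares the principle of next-token prediction; the corpus below is invented.

```python
# A toy illustration of generation from learned patterns, not understanding:
# a bigram model counts which word follows which in a tiny corpus, then
# emits text by repeatedly sampling an observed next word.
import random
from collections import defaultdict

corpus = ("the law regulates the machine and "
          "the machine follows the law").split()

# "Training": record the observed next word for every word.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:          # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every pair of consecutive words in the output was seen in training; the program has no notion of what “law” or “machine” means, only of which words tend to follow which.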

While generative AI has considerable potential for creative applications, it also presents significant legal and ethical challenges. Generative AI technologies can be used to create ‘deepfake’ images or videos – synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. These manipulations can be extremely convincing, raising concerns about privacy, consent, and the potential for misuse in misinformation campaigns. Moreover, the copyright implications of generative AI are still largely unsettled. If an AI generates a piece of music or a work of art, who owns the copyright? Is it the programmer who created the AI, the user who selected the inputs, or does it belong to the AI itself? Although some countries, such as South Africa, have recognised an AI system’s capacity to be named as an inventor on a patent, these complex issues are far from settled and will likely take years of legal debate to resolve.

The technology is recent

Although the scientific study of AI has been underway since 1956, ML has only recently been popularised and mainstreamed, which is why many in industry, academia and regulation often use the phrase “emerging technology” to denote AI and its associated technologies. Three recent conditions have made this possible. First, enabled by recent technological developments, human beings have started creating and digitally recording an abundant amount of data in everyday life. At the time of writing, human beings are reported to create about 2.5 quintillion bytes of data every day.

Second, computers have become exponentially faster, with increased processing power often supplied by cloud services – another recent development. Finally, the cost of developing computers has decreased dramatically. The combination of these factors has made the current “AI summer” possible and has led to the rapid increase in the development and application of ML algorithms in industry and various government functions. Apart from making human existence more convenient, AI is also helping human societies address some of the world’s most pressing global challenges, including the development of Covid-19 vaccines,[54] increasing the accuracy of predicting the time and location of future outbreaks,[55] combating climate change, accurately diagnosing and curing diseases, pre-empting cyber-attacks, and responding to social injustices and inequalities through the achievement of the United Nations 2030 Sustainable Development Goals.[56]

AI algorithms are often perceived as better decision-makers than human beings. They have exponentially more processing power than the human mind,[57] and since AI is seen as lacking consciousness and subjectivity, it is often imbued with absolute neutrality and objectivity – AI algorithms find patterns and connections in large data sets without regard to the normative meanings of those connections and patterns. But this mindlessness is both a benefit and a curse.[58]

Conclusion

Defining Artificial Intelligence presents unique challenges because of the inherent complexities and ambiguities of understanding the concept of intelligence itself. However, a unified definition is necessary, particularly in the context of legal regulation, to provide legal certainty for the development of governance frameworks, regulation, legislation, and standards.

This lack of consensus on the definition of AI is already producing varying interpretations, which will lead to gaps and fragmentation in regulatory oversight and inconsistent legal protections.

This brief also underscores the importance of examining AI’s technological underpinnings, notably algorithms and their training methods. Algorithms form a core part of the concept of AI, and understanding these underlying mechanisms is crucial to responding effectively to the governance challenges that AI systems present.

Further exploration and research are required to adequately define and understand AI, and to develop appropriate and targeted regulatory measures that uphold human rights and ensure fair and equitable AI applications. This research brief provides a starting point for such investigations.

References

[1] John McCarthy, ‘What Is Artificial Intelligence?’ 15.

[2] Pei Wang, ‘On Defining Artificial Intelligence’ (2019) 10 Journal of Artificial General Intelligence 1.

[3] European Commission Joint Research Centre, AI Watch, Defining Artificial Intelligence 2.0: Towards an Operational Definition and Taxonomy for the AI Landscape (Publications Office 2021) <https://data.europa.eu/doi/10.2760/019901> accessed 24 May 2022; Dimiter Dobrev, ‘A Definition of Artificial Intelligence’ (arXiv, 3 October 2012) <http://arxiv.org/abs/1210.1568> accessed 24 May 2022.

[4] European Commission Joint Research Centre, AI Watch, Defining Artificial Intelligence 2.0: Towards an Operational Definition and Taxonomy for the AI Landscape. (Publications Office 2021) <https://data.europa.eu/doi/10.2760/019901> accessed 24 May 2022; Wang (n 34).

[5] Scott H Hawley, ‘Challenges for an Ontology of Artificial Intelligence’ [2019] arXiv <https://arxiv.org/abs/1903.03171>.

[6] Matthew U Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ [2015] SSRN Electronic Journal <http://www.ssrn.com/abstract=2609777> accessed 9 August 2022; Rex Martinez, ‘Artificial Intelligence: Distinguishing Between Types & Definitions’ 19 ARTIFICIAL INTELLIGENCE 28.

[7] Wang (n 34); Scherer (n 38); Shane Legg and Marcus Hutter, ‘Universal Intelligence: A Definition of Machine Intelligence’ (arXiv, 20 December 2007) <http://arxiv.org/abs/0712.3329> accessed 10 August 2022; Sternberg Robert J, Handbook of Intelligence (Cambridge University Press 2006).

[8] Scherer (n 38).

[9] Merriam-Webster Dictionary, ‘Artificial’ <https://www.merriam-webster.com/dictionary/artificial>.

[10] Robert J (n 39); Robert J Sternberg and Scott Barry Kaufman, The Cambridge Handbook of Intelligence (Cambridge University Press); E B Hunt, ‘Intelligence as an Information-Processing Concept’ [1980] British Journal of Psychology.

[11] Steven J Dick, Plurality of Worlds: The Origins of the Extraterrestrial Life Debate from Democritus to Kant (Cambridge University Press 1982).

[12] ibid; Shane Legg and Marcus Hutter, ‘A Collection of Definitions of Intelligence’ (arXiv, 25 June 2007) <http://arxiv.org/abs/0706.3639> accessed 10 August 2022. Sternberg and Gregory propose that there may be as many definitions of intelligence as there are experts defining it. See RL Gregory, The Oxford Companion to the Mind (Oxford University Press 1998).

[13] VM Shamla, ‘Analysis of Theories of Intelligence: Emerging Theme’ [2018] International Journal of Creative Research Thoughts; Scherer (n 38); Robert J (n 39); LM Terman, ‘Intelligence and Its Measurement: A Symposium–II’ [1921] Journal of Educational Psychology. One definition describes intelligence as “…the ability to process information properly in a complex environment. The criteria of properness are not predefined and hence not available beforehand. They are acquired as a result of the information processing.” See Hideyuki Nakashima, ‘AI as Complex Information Processing’ (1999) 9 Minds and Machines 57.

[14] Legg and Hutter (n 45).

[15] Kaplan Jerry, Artificial Intelligence: What Everyone Needs to Know (1st edn, Oxford University Press 2016).

[16] Legg and Hutter (n 39).

[17] European Commission Joint Research Centre (n 36).

[18] Scott H. (n 37).

[19] Michael I Jordan, ‘Artificial Intelligence—The Revolution Hasn’t Happened Yet’ [2019] Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/wot7mkc1> accessed 10 August 2022.

[20] ibid.

[21] European Commission Joint Research Centre (n 36); Legg and Hutter (n 45); Wang (n 34); McCarthy (n 32); Legg and Hutter (n 39).

[22] European Commission Joint Research Centre (n 36).

[23] McCarthy (n 32); Scherer (n 38).

[24] McCarthy (n 32).

[25] European Commission Joint Research Centre (n 36).

[26] Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, October 2018, A/73/348, available at https://documents-dds-ny.un.org/doc/UNDOC/GEN/N18/270/42/PDF/N1827042.pdf?OpenElement

[27] European Commission Joint Research Centre (n 36).

[28] Adams R and others, Human Rights and the Fourth Industrial Revolution in South Africa (HSRC Press 2021) 178.

[29] OECD, Recommendation of the Council on Artificial Intelligence (2022), available at https://legalinstruments.oecd.org/api/print?ids=648&lang=en

[30] Stuart J Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Prentice Hall 1995).

[31] European Commission Joint Research Centre (n 36).

[32] Stuart J Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Prentice Hall 2010); Scott McLean and others, ‘The Risks Associated with Artificial General Intelligence: A Systematic Review’ (2021) Journal of Experimental & Theoretical Artificial Intelligence 1.

[33] Tony J Prescott, ‘The AI Singularity and Runaway Human Intelligence’ in Nathan F Lepora and others (eds), Biomimetic and Biohybrid Systems (Springer Berlin Heidelberg 2013); Adriana Braga and Robert K Logan, ‘AI and the Singularity: A Fallacy or a Great Opportunity?’ (2019) 10 Information <https://www.mdpi.com/2078-2489/10/2/73>.

[34] See Ibid and Toby Walsh, ‘The Singularity May Never Be Near’ (2017) 38 AI Magazine 58.

[35] Mariana Todorova, ‘“Narrow AI” in the Context of AI Implementation, Transformation and the End of Some Jobs’ (2020) 4 Nauchni trudove (University of National and World Economy, Sofia) 15–25.

[36] Ibid.

[37] Miles Brundage et al, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Report, February 2018) 52–3.

[38] Braga and Logan (n 71); Nakashima (n 46); Cheng and others (n 4).

[39] These include

[40] European Commission Joint Research Centre (n 36); Scherer (n 38).

[41] Paul Dourish, Algorithms and Their Others: Algorithmic Culture in Context, BIG DATA & SoC’Y, July-Dec. 2016, 1, 3.

[42] Algorithms can also be understood as a computer program, or “an abstract, formalized description of a computational procedure the outcome of which is a decision.” See Paul Dourish, Algorithms and Their Others: Algorithmic Culture in Context, BIG DATA & SoC’Y, July–Dec 2016, 1, 3; Adams R and others, Human Rights and the Fourth Industrial Revolution in South Africa (HSRC Press 2021) 178.

[43] Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, October 2018, A/73/348, available at https://documents-dds-ny.un.org/doc/UNDOC/GEN/N18/270/42/PDF/N1827042.pdf?OpenElement

[44] European Parliament, Directorate General for Parliamentary Research Services, Understanding Algorithmic Decision-Making: Opportunities and Challenges (Publications Office 2019) <https://data.europa.eu/doi/10.2861/536131> accessed 26 August 2022; Köchling and Wehner (n 16).

[45]

[46] European Parliament Directorate General for Parliamentary Research Services. (n 81); David Lehr and Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, (2017) 51(2) UC Davis Law Review.

[47] Lehr and Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, (2017) 51(2) UC Davis Law Review; Domingos notes that another way of understanding Ai is by classifying it into either symbolic, connectionist, evolutionary, Bayesian, and analogizer. This classification follows the fundamental technique used by the different Ai systems.

[48] Delipetrev et al., 2020

[49] The impact of Artificial Intelligence and Machine Learning on software development, ASTHIT, October 2021, available at https://www.asthait.com/tech-stuffs/the-impact-of-artificial-intelligence-and-machine-learning-on-software-development-2/

[50] Janneke Gerards and Frederik Zuiderveen Borgesius, ‘Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence’ (Colorado Technology Law Journal, June 2021), available at https://ctlj.colorado.edu/?p=860; David Lehr and Paul Ohm, ‘Playing with the Data: What Legal Scholars Should Learn About Machine Learning’ (2017) 51 UC Davis Law Review 653, 671.

[51] Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand The World 32 (MIT Press eds., 2018).

[52] Adams, R., Pienaar, G., Olorunju, N., Gaffley, M., Gastrow, M., Thipanyane, T., Ramkissoon, Y., Van der Berg, S. & Adams, F., Human rights and the fourth industrial revolution in South Africa (2021) Cape Town: HSRC Press.

[53] Guide To Unsupervised Machine Learning: Use Cases, 22 October 2021, available at https://bigdataanalyticsnews.com/unsupervised-machine-learning-use-cases/

[54] See https://www.technologyreview.com/2021/01/14/1016122/these-five-ai-developments-will-shape-2021-and-beyond/; According to experts, AI has the ability to identify viral components that have the necessary properties to stimulate the immune system; Toronto-based BlueDot’s tool scanned 100,000 governmental and media data sources regularly when it issued an alert about a potential outbreak in Wuhan, China, on 31st December 2019.

[55] Apoorva Komarraju, ‘Artificial Intelligence in 2021 – The Developments So Far’ (16 April 2021), available at https://www.analyticsinsight.net/artificial-intelligence-in-2021-the-developments-so-far/

[56] Margaret A. Goralski, Tay Keong Tan, Artificial intelligence and sustainable development, The International Journal of Management Education, Volume 18, Issue 1, 2020; UN Secretary-General’s Strategy on New Technologies (Report, September 2018) 8, available at https://www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologies.pdf

[57] Algorithms can analyse incredibly large data sets in order to support a decision, while human minds can only process limited data.

[58] Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand The World 32 (MIT Press eds., 2018).