PART I: AI 101
Introduction
It has become increasingly apparent that AI technologies have advanced quite rapidly in the last several years. In fact, it has happened so rapidly that my colleagues and I have been forced to prioritize our research to focus primarily upon the risks and governance of AI as we move into an uncertain future. Many of us believed that what we are experiencing with AI today was about 30 years away and that we had plenty of time to get to work on plans for regulating, legislating, controlling, containing, or even stopping the potential negative effects of such emerging technologies. Well, that all changed with some of the latest available forms of AI, which include, but are not limited to, GPT-4, Bing AI, Claude, Bard, et al.
In this paper, I’m going to talk about the basics of Artificial Intelligence so that everyone is roughly on the same page in reference to key terms, concepts, and issues. There’s a lot going on out there in the AI universe, so it’s important for us to become familiar with some of the ideas and processes that have gotten us this far, so that we can engage in meaningful and productive dialogue.
What is artificial intelligence?
The great AI pioneer and Stanford professor, John McCarthy, defined Artificial Intelligence as:
“…the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”1
And decades before McCarthy’s definition, Alan Turing, the British father of computer science, was considering in his groundbreaking paper ‘Computing Machinery and Intelligence’ whether or not machines could think. He devised the now famous ‘Turing Test’, in which a human interrogator attempts to distinguish between the text responses of a computer and those of a human. If the interrogator is unable to distinguish between them, the computer is said to have passed the test.2
In the mid-1990s, scientists Stuart Russell and Peter Norvig wrote the now seminal work Artificial Intelligence: A Modern Approach, which has become the leading textbook in the study of AI. They offer four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting. First, they consider the ‘human approach’, in which systems that think like humans and act like humans are compared to an ideal approach in which systems think rationally and act rationally. This is an interesting distinction and one that will become thematic throughout the development of AI technologies. Humans don’t always think rationally, and that’s because our limbic (or emotional) systems and our prefrontal cortices (or higher learning systems) are constantly at battle in our brains.
In an effort to make AI systems more rational and less prone to human emotional biases, considerable effort has gone into perfecting the precision and optimal functioning of artificially intelligent systems. This is what led me to devise the concept of the OSTOK or Onion Skin Theory of Knowledge3 system of information in the late 1990s. I had proposed the idea of a Least Biased Information System (LBIS) or Fairness Machine (see video below) as a potential aid in medical research, novel scientific inventions, and government policy and legislative assistance.
At the time, my plan was to partner with electronic engineers, computer scientists, philanthropists, and politicians in an effort to develop a machine which could produce information gleaned from large databases to cross-reference data from around the world in an effort to speed up the fact-finding research that took humans so long to establish. In this way, we could greatly increase developments in science, medicine, and politics. I believed then that not only was such a machine possible to construct – potentially through the integration of a quantum computer – but that it was possible to control and to regulate. My greatest fear was that its inevitability meant that someone else was more likely going to develop such a machine before I did; and that they would not know how to control it. Today, my colleagues and I are now faced with this reality and challenge.
This has led to an expansion of our definition of AI which can now include the following:
…artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.4
Notice how the definition does not say anything about agency or autonomy, i.e., that the artificially intelligent system does not have to be ‘aware’ of its own existence nor ‘free’ in terms of its movement through physical space. It simply needs to act in ways that comply with its programmed commands and that far exceed human capabilities.
But what happens if it doesn’t comply with its programmed commands? That is, what happens if it fails to ‘align’ with our human values and acts of its own will or volition?
Before we answer these questions, we need just a little more background information.
Types of artificial intelligence: Weak AI vs. Strong AI
To understand the level of control (or lack thereof) that we humans can exercise over AI, we need to state and clarify a few definitions.
Weak AI: Narrow AI or Artificial Narrow Intelligence (ANI)
ANI is trained and focused to perform specific tasks. Weak ANI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI because it is not necessarily weak in its functions. It enables some very robust applications, such as Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles.
Strong AI: Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI)
Artificial general intelligence (AGI), or General AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans. It may develop a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. It would also be able to think abstractly, creatively, and even emotionally, but much faster, better, and more effectively than any human ever could.
Artificial Super Intelligence (ASI), or superintelligence, on the other hand, would surpass the intelligence and capabilities of any human brain or collective of human brains. In many respects, such a being would appear to us as a god. It would know more, think faster, perform quicker, and solve problems at light speed. Its capacity to outthink us and overpower us would demonstrate the very sentiment of Arthur C. Clarke’s prescient statement: “Any sufficiently advanced technology is indistinguishable from magic.” And it would appear to us simple beings as indeed quite magical, for we would have no comprehension or ability to understand its god-like abilities. The power such a being would wield over us borders on the incomprehensible and is extremely difficult to put into words. And there are many AI experts – such as OpenAI’s CEO Sam Altman – who take this existential risk very, very seriously.5
In late May 2023, Altman, as well as 350 top executives and researchers in artificial intelligence, signed a statement urging policymakers to see the serious risks posed by unregulated AI. We warned that the future of humanity may be at stake and stated the following:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I was asked to and agreed to sign the statement along with other signatories, including the CEOs of AI firms DeepMind and Anthropic, as well as executives from Microsoft and Google. Also among them were British-Canadian computer scientist Geoffrey Hinton and Université de Montréal computer science professor Yoshua Bengio6 — two of the three so-called ‘godfathers of AI’ who received the 2018 Turing Award for their work on deep learning.
Although neither AGI nor ASI currently exists, it is these forms of AI that most concern us. And this is mainly because of the levels of human uncertainty and ignorance regarding what may result once such systems have been developed. In other words, we simply don’t know what might happen once such systems become this powerful. We are probably all too familiar with some science fiction scenarios which depict such systems exceeding human intelligence capabilities only to turn against humanity: HAL 9000 from 2001: A Space Odyssey (1968), Proteus IV from Demon Seed (1977), Roy Batty, Pris, Zhora, and Leon from Blade Runner (1982), ED-209 from RoboCop (1987), Skynet from Terminator 2: Judgment Day (1991), the Machines and Agents from The Matrix (1999), Cylons from Battlestar Galactica (2004), AUTO from WALL-E (2008), Ultron from Avengers: Age of Ultron (2015), et al. The ubiquitous theme of technologies one day rising up against humanity is a phenomenon I have referred to elsewhere as ‘the Frankenstein Effect’.7
Let’s take a look at some of today’s most current applications of weak ANI.
Practical Applications of Weak ANI Today:
The most popular form of weak ANI today comes in the form of Transformers! These include the following types:
GPT-4:
The type of transformer technology used in AI today is called Generative Pre-trained Transformer (or GPT) technology. There have been various iterations of such technologies – largely stemming from OpenAI – who are currently marketing GPT-4.8 Transformer technology uses a neural-network architecture built around an autoregressive language model that leverages deep learning to produce human-like text. GPT-3, for example, was trained on roughly 45 terabytes of text data, including almost the entire public web.
It utilizes three factors (a rough sketch of these mechanisms follows the list below):
- Positional Encoding: encoding the order of words in a sentence so the model retains word order.
- Attention: learning, from vast numbers of examples, which other words to focus on when producing each word – in effect, picking up grammar rules over and over.
- Self-Attention: attending to words within the same sentence in order to disambiguate words, recognize parts of speech, and resolve word tense.
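To make these mechanisms a little more concrete, here is a minimal sketch in Python (using NumPy) of sinusoidal positional encoding and scaled dot-product self-attention. It is a toy with random weights and invented sizes – my own illustration of the general idea, not OpenAI’s actual implementation:

```python
# A toy sketch (not OpenAI's actual code) of two of the mechanisms named above:
# sinusoidal positional encoding and scaled dot-product self-attention.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: injects word-order information."""
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    i = np.arange(d_model)[None, :]                   # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angle[:, 0::2])             # even dimensions: sine
    enc[:, 1::2] = np.cos(angle[:, 1::2])             # odd dimensions: cosine
    return enc

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of word vectors x."""
    d = x.shape[-1]
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v               # queries, keys, values
    scores = Q @ K.T / np.sqrt(d)                     # how strongly each word attends to every other word
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V                                # value vectors blended by attention weights

seq_len, d_model = 5, 16                              # e.g. a 5-word sentence, 16-dimensional embeddings
embeddings = np.random.default_rng(1).standard_normal((seq_len, d_model))
x = embeddings + positional_encoding(seq_len, d_model)  # add the word-order signal
print(self_attention(x).shape)                        # -> (5, 16)
```

Real transformers stack many such attention layers (with multiple heads and learned embeddings), but the core operations are the ones shown here.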
These systems are known as Large Language Models (or LLMs), which utilize Deep Learning (or DL). Many of us have already experienced the wonders of this technology: from prompting it to write 1,500-word essays on Shakespeare’s Hamlet in iambic pentameter to prompting it to write code in a variety of programming languages, GPT technologies have quite literally transformed the way in which information is mined and exploited for a variety of uses.
DALL-E 2:
In much the same way that GPT-4 can create various original operations in text and code, DALL-E 2 creates original images from textual descriptions. Its predecessor, the original DALL-E, used a 12-billion parameter version of the GPT-3 Transformer model to interpret natural language inputs, and the system also draws on convolutional neural networks, which mimic the way the human brain processes vision. So, for example, you can prompt this system to create an image in the style of impressionist painters of Donald Trump riding a unicycle in a leather thong, on a tightrope, across the Grand Canyon, while eating an ice cream cone, and it will produce dozens of images. Or, instead of impressionist painters, you could prompt it in the style of abstract expressionism, or surrealism, or any other artistic style you could possibly imagine.
Video Deep Fakes:
Perhaps the most powerful or visceral effects of this new technology can be found in its use to fabricate video images. The viral sensation that is the Tom Cruise deep fake (see video below) has left many wondering how long it will be before we can simply devise plots, characters, and actors, and create original movies on demand. This new technology also has many of us wondering how it might be used to spread misinformation or, worse, disinformation against others in a number of ways.
These represent just a few of the applications of weak ANI. We can only imagine what the future will bring.
PART II: The Future of Humanity
Now that we have a better understanding of some of the main factors pushing the development of AI technologies, we can better understand how they might affect the future of our lives. To do so, let’s consider some of the greatest potential benefits AI has contributed and will contribute to humanity, as well as some of the greatest risks.
Top 10 Benefits of AI to Humanity: Ask ChatGPT
When you prompt ChatGPT9 to provide a list of some benefits AI will bring to humanity, this is the type of response you get:
- Enhanced Healthcare: AI can revolutionize healthcare by improving diagnostics, drug discovery, and personalized medicine. It can analyze vast amounts of medical data, assist in early detection of diseases, and aid in developing more effective treatments.
- Improved Education: AI-powered tools and platforms can personalize learning experiences, provide adaptive tutoring, and offer individualized feedback. This can help students of all ages and abilities to learn more efficiently and effectively.
- Increased Efficiency and Automation: AI can automate repetitive and mundane tasks across various industries, freeing up human resources to focus on more complex and creative endeavors. This could lead to increased productivity and economic growth.
- Enhanced Safety and Security: AI can improve safety and security systems by analyzing data in real-time, detecting anomalies, and predicting potential threats. It can be utilized in areas such as cybersecurity, surveillance, and disaster response.
- Sustainable Development: AI can contribute to sustainable development by optimizing energy consumption, improving resource management, and enabling smart cities. It can help address environmental challenges and create more efficient and eco-friendly systems.
- Advancements in Transportation: AI can revolutionize transportation systems by enabling autonomous vehicles, optimizing traffic flow, and improving logistics and supply chain management. This can lead to safer, more efficient, and less congested transportation networks.
- Enhanced Customer Service: AI-powered chatbots and virtual assistants can provide instant and personalized customer support, improving user experiences and reducing response times. They can assist in various industries, including retail, banking, and hospitality.
- Scientific Discoveries: AI can accelerate scientific research by processing vast amounts of data, running simulations, and assisting in data interpretation. It can help scientists gain new insights, make discoveries, and advance fields like astronomy, genomics, and particle physics.
- Assisting People with Disabilities: AI can develop assistive technologies that improve the lives of people with disabilities. It can enable better communication, mobility, and accessibility, fostering inclusivity and enhancing quality of life.
- Cultural and Creative Contributions: AI can be used in creative fields such as art, music, and literature to generate novel ideas, assist in content creation, and inspire new forms of expression. It can expand human creativity and push the boundaries of artistic endeavors.
When we consider just a few of the possible ways in which AI technology will improve our lives, we are encouraged to see how these will unfold. Currently, such technologies have already greatly advanced scientific and medical research. One of the most striking ways in which this has been manifested is through a program known as AlphaFold, which was developed by Google’s DeepMind. For generations, scientists have labored over the structure of proteins and how they ‘fold’ in order to carry out their various functions. It would take months or even years to learn about such proteins and their characteristics. But today, thanks to technologies such as AlphaFold, scientists can speed up this process, which will accelerate our understanding of many of the functions and systems of the human body. This, in turn, will rapidly improve our ability to understand and treat many diseases – including cancer.
And in terms of transportation, AI will help to optimize traffic flow to decrease traffic jams and get people to their destinations faster. In 2016, I met with high-level staff of the Ontario Ministry of Transportation to discuss my proposal for relieving congested traffic on the 400-series highways in and around the Greater Toronto Area (or GTA). I presented to them the idea of using John Nash’s understanding of the algorithmic behaviour of birds in flight and how this could be transferred to rush hour traffic. If drivers abided by three basic principles or rules at the start of bottleneck rush hour traffic, all traffic would generally flow more efficiently. It is these same algorithmic principles which autonomous vehicles will use to maximize efficiency and decrease congestion. But notice how autonomous vehicles literally take humans out of the equation? With my proposal, humans had to cooperate collectively for it to work. With autonomous vehicles, human agency is taken entirely out of the control of the vehicles, which, in turn, makes them all work much more efficiently. Individual biases are supplanted by cold, calculated precision – the very DNA of machine optimal functionality. And so the collective group will win at the sacrifice of individual liberty. Long gone will be the days of idiot highway drivers zipping in and out of traffic, jeopardizing their lives and the lives of those around them. Gone will be the tailgaters, the preoccupied, the texters, and the drivers in the fast lane who are going way too slow; none of them will be in the lane for autonomous vehicles. And it will hopefully be the fastest, safest, and most efficient lane on the highway.
But as good as AI will become and will bring us improvements in our daily lives, so too will it present ethical issues and dilemmas. For not all AI developments will be positive. It is towards the potential negative aspects of AI that we now turn our attention.
The Potential Risks and Harms of AI: So what could go wrong?
In continuing our understanding of the basics of AI, it’s important to realize that there are two main ways in which AI could present risks and generate harm: intentionally and unintentionally. These harms would arise through two processes: misuse and misalignment. And finally, such actions would be carried out either with agency or with nonagency.
To clarify, consider this:
- Either current and future forms of AI technologies will generate harm at the hands of some agent – an individual, a group of people, a company, corporation, city, or country – or some AI technology will, of its own doing, generate harm accidentally and without agency.
- Some agents may choose to use AI technologies to deliberately and intentionally harm others. But some nonagents – like unconscious AI – might generate harm without intending to do so. If a form of AI should develop agency, i.e., consciousness, it may choose to harm; it may not. We simply don’t know.
- And finally, when an agent – like a human – intentionally sets out to harm some person or group through AI technologies, they are misusing it. But in harming others, a form of AI technology may simply be misaligned with the rules with which we ask it to comply. In other words, we have no idea to what degree AI will comply, or fail to comply (i.e., align), with our moral precepts. This is known as The Alignment Problem.10
So the risks lie in the potential for two outcomes:
- Human agents may intentionally misuse AI technologies to the harm or detriment of others; or
- Artificial nonagents may unintentionally cause harm due to moral misalignment.11
At this point in our current history, there are a number of ways in which AI technologies pose risks and potential harms. Here are just a few:
Bias and Discrimination:
AI systems have the potential to unintentionally reinforce or magnify societal biases as a result of biased training data or algorithmic structure. For example, if the model carries a negative sentiment skew against skin colour, sex, gender, etc., it could alienate various groups of people and potentially deepen racial, ethnic, and sexual tensions around a country or throughout the world. Algorithms involved in grading essays or student reports can treat languages from various cultures differently. And so, to mitigate discrimination and promote fairness, it is essential to prioritize the creation of least-biased algorithms and inclusive training datasets.
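To give a sense of what such a bias check might look like in practice, here is a minimal sketch of one common fairness heuristic – the disparate-impact ratio, sometimes called the “80% rule” – applied to invented toy data. The data, group names, and threshold are purely illustrative; real bias audits of training data and algorithms are far more involved:

```python
# A minimal sketch of one common bias check - the disparate-impact ratio, or
# "80% rule" - applied to invented toy data. Real fairness audits are far more involved.
from collections import defaultdict

# Hypothetical model decisions as (group, favourable_outcome) pairs, invented for illustration.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

counts = defaultdict(lambda: [0, 0])                  # group -> [favourable, total]
for group, favourable in decisions:
    counts[group][0] += int(favourable)
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())     # disparate-impact ratio
print(rates, "ratio:", round(ratio, 2))               # a ratio below 0.8 would flag possible bias
```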
Privacy/Security Considerations:
AI advancements frequently involve the collection and analysis of extensive personal data, giving rise to concerns surrounding data privacy and security. We are already familiar with cyberattacks on companies in which hackers demand ransom for the private information of clients and customers. To resolve such issues, most companies simply pay the ransom the hackers demand and move on. With the current and future developments of AI technologies, hackers’ capacity to leverage AI to create more advanced cyberattacks – allowing them to evade security measures, exploit system vulnerabilities, and infiltrate systems and make off with personal and private information – is going to increase in severity and sophistication. To address such privacy risks, it is imperative to support stringent regulations on data protection and promote secure handling practices for data.
Moral Dilemmas:
If we take, as an example, autonomous vehicles (driverless cars), we need to consider what ‘decisions’ such a vehicle might make when confronted with dangerous situations and scenarios. For example, when autonomous vehicles are operating and there is going to be an unavoidable accident, what should or ought to be the moral priorities for the vehicle? Should it collide with other vehicles rather than pedestrians? Humans over pets? Women over men? More lives over less? Younger people over older people? Fit people over sickly people? Law abiders over lawbreakers? Higher income status over lower? Should the car take action and swerve, or remain inactive and stay on course? Should it prioritize the life or lives of those in the vehicle over those outside of the vehicle? These are all very difficult questions to consider which often lead to a wide range of ethical dilemmas.
Consider just one of these scenarios: in the event of an inevitable accident, should an autonomous vehicle ‘choose’ to collide with a child or an elderly adult? Surprisingly, the answer to this question varies across world cultures. In North America, the prevailing preference is to spare the very young. However, in Asian countries like China, Japan, and South Korea, the response is quite different – placing greater value on the elderly. So who’s right? If the answer is simply one of cultural relativity, i.e., “When in Rome, do as the Romans do”, then should we be denied the choice to alter the priorities of accident victims when we travel throughout the world and use autonomous vehicles?
If someone from Japan travels to the United States, should they be allowed to request a change in moral commands for an autonomous vehicle so that if an accident were to occur, the car should override its current North American moral commands to satisfy those of its foreign passengers? And vice versa, should Americans be allowed to do the same when travelling to Japan? There is much that needs to be considered, discussed, and worked out.
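Purely as an illustration of what such configurable ‘moral commands’ might look like, here is a hypothetical sketch in which regional priority profiles are stored and a passenger override is either permitted or refused. The categories, weights, and function names are invented for this example and are not drawn from any real autonomous-vehicle system:

```python
# A purely hypothetical sketch of how region-dependent "moral command" profiles
# might be represented and switched. The categories, weights, and names are
# invented for illustration and are not taken from any real autonomous-vehicle system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CollisionPriorities:
    spare_young_over_old: float   # > 0.5 favours sparing the young
    spare_pedestrians: float      # weight given to people outside the vehicle
    spare_passengers: float       # weight given to people inside the vehicle

# Illustrative regional defaults (invented numbers).
PROFILES = {
    "north_america": CollisionPriorities(0.8, 0.6, 0.4),
    "japan":         CollisionPriorities(0.3, 0.7, 0.3),
}

def select_profile(region: str,
                   passenger_home_region: Optional[str] = None,
                   allow_passenger_override: bool = False) -> CollisionPriorities:
    """Should a visiting passenger be allowed to swap in their home region's profile?"""
    if allow_passenger_override and passenger_home_region in PROFILES:
        return PROFILES[passenger_home_region]
    return PROFILES[region]

# A Japanese passenger riding in a North American vehicle, with no override permitted:
print(select_profile("north_america", "japan", allow_passenger_override=False))
```

Whether such an override should ever be permitted is, of course, precisely the ethical and regulatory question that still needs to be worked out.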
Employment Disruption/Job Displacement:
With the rapidly advancing forms of AI technologies, there will be a dramatic rise in the automation of repetitive and routine tasks across various industries and sectors. This can lead to displacement of workers who perform these tasks, especially in sectors like manufacturing, data entry, and customer service. As well, as AI becomes more prevalent, there may be an increasing gap between the skills workers possess and the skills required to work alongside or operate AI systems. This could result in job displacement if workers are unable to upskill or transition to new roles. AI may also contribute to job polarization, where there is an increase in both high-skilled and low-skilled jobs, but a decline in middle-skilled jobs. High-skilled workers who can develop, maintain, and oversee AI systems may thrive, while those in middle-skilled roles that can be automated may face displacement.
Emerging AI technologies could also exacerbate income inequality. Workers in high-skilled AI-related fields may benefit from increased productivity and wages, while those in low-skilled roles might face stagnant or declining wages due to automation. Certain industries may experience more significant job displacement than others. For example, self-driving vehicles could disrupt the transportation sector, while chatbots and virtual assistants could impact the customer service industry.
At this point in our cultural evolution with AI technologies, it should be noted that there is evidence suggesting that AI, along with other emerging technologies, may also generate more employment opportunities than it displaces. Studies from the University of Warwick and MIT have found that AI will be more of a disruptor than a destroyer of jobs.
“While we can’t say for sure how many jobs will be created or destroyed from the research, it is likely that the automation of some tasks may mean fewer people are needed to perform some jobs but that increased productivity may reduce costs stimulating sales and demand for workers overall,” the researchers explain. “This of course is likely to depend upon the specific AI-technology used and what employers hope to achieve by using it.”12
Clearly the incorporation of new and emerging technologies is going to hit some jobs a lot harder than others. But the employment landscape may be more malleable than predicted. Only time will tell.
Legal and Regulatory Challenges:
Addressing the distinctive challenges presented by AI technologies, such as issues pertaining to liability and intellectual property rights, necessitates the development of innovative legal frameworks and regulations. When ChatGPT writes you up a nice synopsis regarding some topic, where did that information come from? What actual living, breathing persons were responsible for it? And how do they get compensated, if at all? Battles are currently being waged by people like Margaret Atwood and Sarah Silverman over the copyright of such content and how it can be usurped by such new technologies. As an example, consider the following scenario: in 2011, I wrote a book called How to Become a Really Good Pain in the Ass: A Critical Thinker’s Guide to Asking the Right Questions.13 A 2nd Edition of the book was released in 2021 to mark its 10th anniversary. Now, if anyone were to prompt ChatGPT for a 1500-word report on the value of Critical Thinking as discussed by a really good pain in the ass, where do you think some of that information is going to come from? And should I receive any credit, royalties, or recognition of any form for providing this data mining tool with some of its textual fodder? None of this has been worked out yet.
As well, we have just witnessed the end of a 148-day strike by Hollywood writers over a number of issues – not the least of which was the potential for writers to be replaced by such new AI technologies. The final agreements regarding AI included the following:
- AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the Minimum Basic Agreement (MBA), which is the collective bargaining agreement for the Writers Guild of America. This means that AI-generated material can’t be used to undermine a writer’s credit or separated rights.
- A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services.
- The Company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.
- The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by the MBA or other law.14
It would seem imperative for legal systems to adapt and keep pace with technological advancements in order to safeguard the rights of all individuals, artists, writers, and creatives involved.
But can I Fuc# it? Loss of Human Connection:
There is little doubt that whenever new technologies emerge, many industries attempt to exploit the least common denominator amongst us humans – which is sexual gratification – especially for men. Some of the oldest artistic objects created appealed to sexual appetites. And many forms followed thereafter – from painting, to sculpture, to photography, to cinematography, to the internet, and now, to AI. There is a distinct possibility that, as AI technologies continue to produce more life-like androids, robots, and cyborgs, populations will find less need for actual human connections. Instead, human relationships with android models that never say ‘no’ to any need or desire will become extremely popular. In fact, there is an entire episode of the cartoon Futurama devoted to this potential problem. (See video below) As seen in the research of Jonathan Haidt, young teens have been adversely affected by the advent of the smartphone.15 Now, imagine that the smartphone has taken on the body of an android and can do everything a smartphone can, plus tend to your every need, want, or desire. The growing dependence on AI-driven communication and interactions may result in a decline in empathy, social abilities, and human connections – the very detachments Haidt’s lab has discovered with adolescents and smartphones. In order to preserve the fundamental aspects of our social nature, it is crucial to strive for a balance between technology and genuine human interaction.
Manipulation through Misinformation/Disinformation
The proliferation of AI-generated content, including deepfakes, plays a role in propagating falsehoods and manipulating public sentiment. With the past pandemic, we saw an unusually high level of conspiracy theories circulating online. With new AI technologies, the ability to send out false and misleading disinformation will not only become much easier; the content will also be much more convincing. The elderly, the digitally unsavvy, children, and millions of others will be exploited by the spread of such falsehoods.16 It is crucial to undertake significant endeavours to detect and combat AI-generated misinformation, as it is vital for safeguarding the integrity of information in the digital era. This has led to an inevitable arms race between the use of AI for the creation and distribution of false information and the media resources available to combat it and educate the public about it.
But of all of the potential AI harms that may befall humanity, none are more concerning nor more pressing than those that present existential threats to large populations. The advancement of artificial general intelligence (AGI) surpassing human intelligence gives rise to profound apprehensions for humanity in the long term. The potential of AGI introduces the possibility of unintended and potentially catastrophic outcomes, as these highly advanced AI systems may not align with human values or priorities.
As we rush to utilize powerful forms of AI like GPT-4 and DALL-E 2 we should take sufficient precautions to understand, monitor for, and act quickly to mitigate their points of failure.
In late March of 2023, I, along with over 1,000 scientists, scholars, and academics including Elon Musk, Steve Wozniak, Yuval Noah Harari, and many others, signed an open letter produced by The Future of Life Institute called ‘Pause Giant AI Experiments: An Open Letter’. The letter called attention to the rapidly developing field of AI and the potential for harm that may come with it:
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”17
One of the biggest problems with the current pace of AI development is something the general public doesn’t know. And it is this simple fact:
No one working, studying, or developing Artificial Intelligence knows with certainty what’s going to happen when AGI emerges.18
But what we do know with certainty is that, if we do nothing, some very bad things are likely to occur to humanity.
So perhaps it’s best to consider the issue like this:
If we believe the development of a form of superintelligent AI may pose an existential risk (or XRisk) to humanity at some point in the future, and it turns out to be true, then we, as a species, can win, so to speak, by preparing for this potentiality and guarding against its anticipated harmful effects. And if it turns out that no matter how intelligent our AI technologies become in the future, they never pose an XRisk to us, then we would have taken the most epistemically and ethically responsible precautions to guard against this. On the other hand, if we don’t take XRisk seriously and there turns out to be no risk whatsoever, we would have acted irresponsibly – both epistemically and ethically – because we simply got lucky. And the worst-case scenario occurs if we don’t take XRisk seriously and it turns out to be true. We will then face considerable harm and repercussions at some point in the future. To address these risks, it is imperative for the AI research community to proactively participate in safety research, cooperate on establishing ethical guidelines, and foster transparency in the development of AGI.
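The argument above has the structure of a simple 2x2 decision matrix, which can be restated compactly as follows. The labels are my own paraphrase of the paragraph, not a formal risk model:

```python
# The 2x2 structure of the precautionary argument above, restated as a small
# truth table. The labels paraphrase the paragraph; this is not a formal risk model.
outcomes = {
    # (we take XRisk seriously and prepare, XRisk turns out to be real): assessment
    (True,  True):  "we 'win': the harms were anticipated and guarded against",
    (True,  False): "no harm arises: the epistemically and ethically responsible precaution",
    (False, False): "we merely got lucky: irresponsible, even though nothing went wrong",
    (False, True):  "worst case: unprepared, we face considerable future harm",
}

for (prepared, risk_real), assessment in outcomes.items():
    print(f"prepare={prepared!s:<5}  risk real={risk_real!s:<5}  ->  {assessment}")
```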
The overarching objective of many organizations today – such as Convergence Analysis (where I work as a Senior Researcher and Ethicist) – is to ensure that when AGI is developed, it serves humanity’s best interests and does not present a threat to our existence. It has become imperative that governments work with AI experts so that they can be properly informed and educated on how to safely guide and regulate these new developments in AI technology and applications.
The future of critical thinking and ethical reasoning in an automating world is now. It is morally imperative that we establish cooperative, transparent guidelines which can direct the safe and effective uses of AI technologies in all its forms and applications. It is important to start the conversation about AI at all levels of engagement and activity and to keep it going. Researchers, academics, technicians, computer programmers, engineers, politicians, and the general public all must be informed and engaged in the dialogue about the future of AI. It has been my hope to provide readers with enough reliable background information to confidently engage in this dialogue.
1. http://jmc.stanford.edu/articles/whatisai/whatisai.pdf
2. See: https://academic.oup.com/mind/article/LIX/236/433/986238
3. See: https://www.ostokproject.com/
4. https://www.ibm.com/topics/artificial-intelligence#:~:text=At%20its%20simplest%20form%2C%20artificial,in%20conjunction%20with%20artificial%20intelligence
5. See: https://www.cbc.ca/news/world/artificial-intelligence-extinction-risk-1.6859118
6. Yoshua Bengio heads the Canadian Advisory Council on Artificial Intelligence. The 15-member council advises the federal government on “how best to build on Canada’s AI strengths, identify opportunities to create economic growth that benefits all Canadians and ensure that AI advancements reflect Canadian values.”
7. See: ‘How to Avoid a Robotic Apocalypse: A Consideration on the Future Developments of AI, Emergent Consciousness, and the Frankenstein Effect’, IEEE Technology and Society, Vol. 35, No. 4, December 2016: https://ieeexplore.ieee.org/document/7790998
8. To experience this form of technology, see: https://openai.com/blog/chatgpt
9. Note that ChatGPT will tell you that its information is only current up to 2021.
10. See: https://en.wikipedia.org/wiki/AI_alignment
11. Of course, there still lies the potential for a nonagent AI to develop agency through some form of conscious awareness. Should any form of AI become conscious in manners similar to those of any human, it would, ipso facto, immediately be attributed the same moral and legal rights as any human. And as such, it would possess the capacity to misuse its attributes in the generation of harm.
12. https://www.forbes.com/sites/adigaskell/2022/01/18/ai-creates-job-disruption-but-not-job-destruction/?sh=31c0f6573b3e
13. See: https://www.amazon.ca/How-Become-Really-Good-Pain/dp/163388712X
14. https://variety.com/2023/tv/news/writers-strike-over-wga-votes-end-work-stoppage-1235735512/
15. See: https://jonathanhaidt.substack.com/p/sapien-smartphone-report
16. See: https://humanistperspectives.org/issue225/empowering-yourself-against-misinformation-disinformation-and-conspiracy-theories/ for further discussion.
17. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
18. Full disclosure: I have debated amongst myself and colleagues about the efficacy of public admission of such a problem.