Do you know, dear reader, how many robots you have in your house?
Over 30 million robots are used worldwide in people's houses and factories, and experts predict that robots will soon outnumber humans. It is legitimate to ask what happens if a self-driving car harms or even kills a person, accidentally or deliberately. This is not a hypothetical question: it has already happened. It is also reasonable to ask which of the many applications we use can be trusted. Are there cases in which the answer is negative?
The well-known science fiction author Isaac Asimov proposed the first ethical laws applicable to the Digital World in his 1942 short story Runaround (included in the 1950 collection of stories titled I, Robot). The Three Laws of Robotics (in short, the Three Laws or Asimov's Laws) are, in fact, four (the Zeroth Law was added last):
Zeroth Law: A robot may not harm humanity or, by inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Laws explicitly protect humans from any direct or indirect harm robots produce. They have influenced the ethics of AI[1], an essential part of the Digital World. The Laws are simple guidelines for containment strategies against potentially dangerous "ordinary" AI[2], like an autonomous car: once incorporated into the system, the Laws are supposed to guarantee its safety.
These Laws offer a simple approach to a highly complex problem: for example, they do not exclude unpredictable scenarios. Minimally, the enforcement of the Laws requires that programmers are (i) willing and (ii) able to program them into their algorithms, and (iii) the resulting systems cannot transcend these Laws autonomously.
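Point (ii) can be made concrete with a toy sketch. Below is a minimal Python caricature, with entirely hypothetical names, of the Laws as ordered safety constraints; even this caricature presupposes predicates (what counts as harm, how inaction is detected) that real systems cannot reliably compute.

```python
# Toy sketch, purely illustrative: Asimov's Laws as ordered constraints.
# All names are hypothetical; no real robot exposes such clean predicates.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # was this action requested by a human?
    risks_self: bool        # would this action damage the robot?

def permitted(action: Action, idle_harms_human: bool) -> bool:
    """Decide whether an action is allowed under the Three Laws."""
    # First Law: never injure a human ...
    if action.harms_human:
        return False
    # ... and do not, through inaction, allow a human to come to harm.
    if idle_harms_human:
        return True  # acting to prevent harm overrides the laws below
    # Second Law: obey human orders (Law 1 is already satisfied here).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, but only as the last priority.
    return not action.risks_self

# A harmless human order is permitted even if it endangers the robot.
print(permitted(Action(False, True, True), idle_harms_human=False))  # True
```

The hard part is not the control flow above but computing its inputs: deciding, in the real world, whether an action "harms a human" is an unsolved perception and prediction problem.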
Are these simple laws enforced? Of course not: think about the military drones used in the Ukraine war. Will these laws be implemented soon? As we continue to see new horror movies[3] about android robots that turn into killer robots, we tend to be pessimistic (or even super-pessimistic).
Ethics
Ethics is a domain of philosophy that seeks to understand and resolve questions of human morality. Ethics includes three main parts. Meta-ethics studies theoretical concepts such as good and evil, right and wrong, virtue and vice, justice and crime. Normative ethics is concerned with the practical means of determining moral courses of action. Applied ethics deals with specific domains of activity, e.g., what a person is obligated (or permitted) to do in a given context.
One of the critical aspects of ethics is its emphasis on practicality. Rather than proposing only abstract theories, ethics seeks to guide how to live a good life in the real world, the Human World. Ethics is also important because it helps promote accountability. Ultimately, ethics is about creating a society based on shared values and principles.[4]
Religion sets high ethical standards and provides strong motivations for ethical behaviour. Of course, ethics is different from religion.[5] In fact, ethics is more general, as it applies as much to the conduct of the atheist as to that of the devout religious person.
Ethics vs Fraud
We say that a behaviour (or person) is ethical when it conforms to a high moral standard; "unethical" means morally wrong.[6]
A more decisive negation of ethical behaviour (or person) is fraudulent. "Fraud involves the false representation of facts, whether by intentionally withholding important information or providing false statements to another party for the specific purpose of gaining something that may not have been provided without the deception."[7] All fraudulent behaviours are unethical, but the opposite implication is not necessarily true: there are unethical behaviours which are not fraudulent. For example, targeting ads within the feed of postings on social media platforms is not fraud, but it may be unethical when the financial gains are not transparent.
Politically targeted ads can be very dangerous. The Cambridge Analytica scandal is one of the most harmful data breaches with global political effects.[8],[9] Commercially targeted ads can be dangerous, too: they can isolate and divide, even when they are not political.[10]
In most cases, unethical behaviour is less damaging than fraud, but the relation is more complex. Some unethical behaviour can be as bad as fraudulent behaviour, for example, unethical behaviour leading to substantial financial gain. There are arguments showing that fraud could be considered an ethical issue.[11] The well-known example of "stealing" shows the complexity of deciding between right and wrong: stealing is generally ethically wrong, but stealing to save lives is right.
Ethics and Fraud in the Digital World: Examples
There are plenty of examples of unethical behaviour across domains of human activity. Do humans tend to behave ethically or not? Are fraud and ethics different in the Digital World?
On average, humans tend to do good for others when they feel secure and their basic needs are satisfied. Often, though, humans wish to have more than they need, so they may breach ethical principles to gain more money, power, etc.
The data security breaches committed using the application Find My iPhone[12] (Apple) and the collaboration of Microsoft with the NSA on Internet surveillance beyond responses to legal processes[13] are two examples of unethical but non-fraudulent behaviour.
Unfortunately, the Digital World is littered with fraudulent behaviour. French lawsuits against Google "for aggressive tax evasion" forced Google, in September 2019, to pay almost 1 billion EUR to France.[14] The Cambridge Analytica scandal is arguably the most notorious fraudulent behaviour in the Digital World. Facebook gave the consulting firm Cambridge Analytica access to the sensitive data of 87 million users, which the firm used with AI algorithms to micro-target political ads in the 2016 U.S. elections.[15] Recently, in December 2022, Meta, the current owner of Facebook, settled the Cambridge Analytica case for $725 million without admitting wrongdoing: the company claimed the settlement was "in the best interest of our community and shareholders". The complaint was filed on behalf of a large class of Facebook users (about 250-280 million people), and claims under the settlement opened only on 20 April 2023.
But even when a corporation is at fault, the fraudulent behaviour is driven by humans. A company committing fraud cannot go to prison: it can only be fined massive amounts of money (as Meta was). In contrast, an individual can be jailed and, in addition, penalised financially. The justice system seems more satisfied when it can demand money than when it puts people in jail (which costs taxpayers money).
Ethics and Fraud in AI
Among the ethical issues in the Digital World, those raised by AI tend to be more complex than their counterparts in the Human World.
There are many examples of unethical uses of AI. Cathy O'Neil's book Weapons of Math Destruction[16] discusses several domains where AI algorithms have been used unethically, even fraudulently, with harmful consequences for people.
Since the data used by programs refer to generic people, one would expect the programs to be neutral with respect to gender, ethnicity, age, etc. However, programmers implement "ideas" and "methods" requested by others (typically their employers) and use different codings and categories, which may introduce biases. Those biases are "hidden" in the code, so only those reading the programs, which are sometimes very long, can know them: it is almost impossible for a larger audience to know or detect them. AI program biases are a significant source of harm to humans.
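To make "hidden in the code" concrete, here is a minimal sketch with entirely hypothetical data, feature names, and numbers: a scoring rule that never mentions ethnicity, yet penalises whole neighbourhoods through an innocuous-looking coding choice that only a reader of the source would notice.

```python
# Minimal sketch (hypothetical names and numbers) of bias hidden in code:
# the model never sees ethnicity, but "zip_code" can act as a proxy for it.

RISKY_ZIP_CODES = {"10001", "60601"}  # chosen by the programmer, unexplained

def credit_score(income: float, zip_code: str) -> float:
    """Toy scoring rule: it looks neutral, but the zip-code penalty
    encodes a judgment about neighbourhoods, not about the applicant."""
    score = min(income / 1000.0, 100.0)
    if zip_code in RISKY_ZIP_CODES:  # the 'hidden' bias: a single line
        score -= 30.0
    return score

# Two identical applicants, different neighbourhoods:
print(credit_score(50_000, "10001"))  # 20.0
print(credit_score(50_000, "94105"))  # 50.0
```

Nothing in the function's name or signature reveals the penalty; detecting it requires reading the source, which in production systems may run to millions of lines.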
The improper use of AI in the justice system is one example. In many cities across the US, police departments have been facing budget cuts. To deal with this situation, they started using "personalised crime prediction software", like PredPol, CompStat, HunchLab, etc. This type of predictive software is based on "predictive crime models". Who decides whether the model reflects reality faithfully? What are the parameters of the model? What data go into the model? It turns out that the police themselves have a decisive role in answering these questions.
When police set up their crime prediction software, they have a choice: they can focus only on violent crimes (homicide, arson, assault, etc.), or they can broaden their focus to include so-called "nuisance" crimes (e.g., vagrancy, aggressive panhandling, selling and consuming small quantities of drugs). These nuisance crimes are endemic to many impoverished neighbourhoods; including them in the model threatens to skew the analysis because the resulting model sends the cops back to patrol the same communities. This creates a pernicious feedback loop, illustrated by the toy simulation below: the policing itself spawns new data, which, in turn, justifies more policing. Thus, even though the software's creators can stress that the model is blind to race and ethnicity, the prisons fill up with hundreds of thousands of people from impoverished neighbourhoods, which historically tend to be populated primarily by black and Hispanic people.
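In this toy simulation, with entirely hypothetical numbers, two districts have the same true rate of nuisance crime, but one starts with more recorded arrests; a "model" that allocates patrols in proportion to past arrests keeps sending officers back to it.

```python
# Toy simulation (hypothetical numbers) of the predictive-policing
# feedback loop: patrols generate arrest data, and the 'model' sends
# the next patrols wherever past data are densest.
import random
random.seed(0)

TRUE_RATE = {"A": 0.05, "B": 0.05}  # identical true crime rates
arrests = {"A": 10, "B": 1}         # historical artifact: A patrolled more

for year in range(10):
    total = sum(arrests.values())
    # 'Model': allocate 100 patrols proportionally to past arrest counts.
    patrols = {d: round(100 * arrests[d] / total) for d in arrests}
    # Arrests occur only where patrols look: equal crime, unequal data.
    for d in arrests:
        arrests[d] += sum(random.random() < TRUE_RATE[d]
                          for _ in range(patrols[d]))

print(arrests)  # district A ends with far more recorded crime than B,
                # despite identical underlying crime rates
```

The model is "blind" to everything except past arrests, yet it reproduces and amplifies whatever imbalance the initial data contained.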
Do We Need Ethical Principles Specific to the Digital World?
In the Human World, religion and philosophy provide some guidelines. Some ethical principles in the Human World should also apply to the Digital World. However, there must be other, specific principles. Since there is no universal and complete set of ethical principles in the Human World, it will be challenging to have such regulations in the Digital World, which is a part of it. By comparison, do we need ethical principles specific to mathematics and mathematicians? According to the American Mathematical Society (AMS), mathematical education, research and the publication of research results in mathematical journals should all be governed by ethical principles.[17] But none of the regulations cited is specific to mathematics. Can the world of mathematics be considered a part of the Human World, like the Digital World? If so, some ethical principles must be specific to mathematics, showing that the AMS ethics guide is incomplete. A negative answer would also be unsatisfactory for most, if not all, mathematicians.
Guidelines for Ethics in the Digital World
A minimal set of universal ethical principles in the Human World consists of autonomy, justice, and fairness.
Autonomy refers to the capability of a person or a group of people to control their actions without external control. Autonomy implies freedom (a fundamental human right), independence, and self-determination. There are various degrees of autonomy, both for individuals and for collectivities. If left unchecked, autonomy can have negative impacts. Some emerging technologies, like AI surveillance, control, and manipulation systems, can threaten human autonomy.
Justice is the principle that people receive what they deserve. The meaning of "what they deserve" depends on several aspects, including ethics and law. Justice and fairness are closely related terms, often used interchangeably. Still, they are not synonymous: usually, justice refers to a standard of rightness; in contrast, fairness often refers to the ability to judge without reference to one's feelings or interests.[18] Justice and fairness are ethical principles that apply to the Digital World, too.
Several ethical principles specific to the Digital World may come from AI: avoiding algorithmic bias and enforcing data privacy, transparency, and accountability. Algorithmic bias can appear because people write the algorithms, choose the data used by the algorithms, and decide how to apply their results. Data privacy is essential when we deal with banks, surf the Internet, use social media websites, and in many other situations. Transparency means that algorithm design assumptions and methods must be open and clear to most technology users. Accountability requires being responsible or answerable for a digital system, its behaviour, and its potential impact; it implies responsibility for actions, decisions, and products. ChatGPT[19] and many similar recent AI applications have accelerated ethical debates because they seem to elude the above principles.
We need legislation to enforce ethical principles in the Digital World. So far, very few laws have been enacted, and they refer only to privacy issues. The European Union passed the GDPR (General Data Protection Regulation) in May 2018, and New Zealand's "Privacy Act 2020 and the Privacy Principles"[20] followed two years later. In the US, there is no such federal legislation yet. Still, on 8 July 2022, the California Privacy Protection Agency commenced the formal rule-making process to adopt regulations implementing the Consumer Privacy Rights Act of 2020. Will we have a "Digital Bill of Rights" in the US?
Is the Future Bleak?
Teaching machines ethical concepts is a fundamental goal of AI. An essential step towards it is enabling machines to "grasp" human-like concepts in the first place, which is still AI's most important open problem.
Some scientists claim that AI ethics principles are useless, the reason being that most people do not care about ethical principles at all! Why? One reason is that many of those in power do not care about ethics or human values. As one reader[21] commented on a Quanta Magazine article[22] on aligning AI with human values: "People don't care as much about 'human values', whatever they are, as one might think. What people and organisations care about is spreading their narrative. Their agenda. Or making money through deception. That's what these so-called AI are, and always will be, optimised to do."
However, it is important to remember that there are always challenges and difficulties in any given period and that humanity has always found ways to overcome them and progress. Individually, it is essential to stay informed and engaged in these efforts. One such effort was the open letter "Pause Giant AI Experiments", calling for a temporary moratorium on training models larger than GPT-4. The letter had 27,572 signatories (as of 20 April 2023), including one of the authors of this article.
[1]. https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
[2]. Not a superintelligence, i.e., "any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills" (philosopher Nick Bostrom).
[3]. The latest such movie is called M3GAN, https://www.imdb.com/title/tt8760708/: it was released in the US in the first week of January 2023.
[4]. https://enlightio.com/religion-vs-ethics-what-is-the-difference
[5]. https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/what-is-ethics
[6]. https://www.merriam-webster.com/dictionary/unethical
[7]. https://www.investopedia.com/terms/f/fraud.asp
[8]. https://www.bbc.co.uk/news/technology-64075067
[9]. https://edubirdie.com/examples/stealing-ethical-dilemma-and-moral-development/
[10]. https://theconversation.com/targeted-ads-isolate-and-divide-us-even-when-theyre-not-political-new-research-163669
[11]. https://maksi.binus.ac.id/2022/03/08/is-fraud-an-ethical-issue/
[12]. https://en.wikipedia.org/wiki/Criticism_of_Apple_Inc.#Data_security
[13]. https://en.wikipedia.org/wiki/Criticism_of_Microsoft#Privacy_issues
[14]. https://tpcases.com/france-vs-google-september-2019-courts-approval-of-cjip-agreement-google-pays-eur-1-billion-in-fines-and-taxes-to-end-supreme-court-casease/
[15]. https://lasserouhiainen.com/unethical-use-of-artificial-intelligence/#:~:text=The%20most%20well%2Dknown%20is,in%20the%202016%20US%20elections
[16]. Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown Publishers, 2016.
[17]. https://www.ams.org/about-us/governance/policy-statements/sec-ethics
[18]. https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/justice-and-fairness/
[19]. https://openai.com/blog/chatgpt
[20]. https://www.privacy.org.nz/privacy-act-2020/privacy-principles/
[21]. https://disqus.com/by/JamesEadon/?
[22]. https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213/