5.2. Essay: Artificial Intelligence

Published December 2, 2019

Note: A postscript was added in October 2022 to acknowledge the White House Blueprint for an AI Bill of Rights.

Reflecting on a talk given by Dr Connor Upton at IADT Dun Laoghaire, November 6, 2019.

1. Introduction

Guest speaker Dr Connor Upton delivered his “Building Centaurs: Designing Human Interaction with Artificial Intelligence” presentation at IADT Dun Laoghaire on November 6, 2019. It fitted our semester’s focus on speculative design well.

Note: Artificial Intelligence is abbreviated to “Ai” throughout because, in sans-serif fonts, “AI” can be misread as “Al”.

2. Highlights

Connor invited interest by describing how he and Fjord explain artificial intelligence (Ai) to their clients in humanistic terms.

Connor discussed Merriam-Webster’s definition of artificial intelligence (n.d.), which includes “intelligent” and “imitate human behaviour”. Marr (2019) is concerned that definitions depend on the goals of the organisation generating them. Ai is less about equalling human intelligence or behaviour and more about using its abilities to enhance our world. Connor reflected on the confusion over what intelligent human behaviour actually is, and on how Ai evolves faster than definitions can keep up.

2.1. Expectations of Ai

Speculative design and Ai are closely associated with popular dystopian science-fiction genres, which encourages a negative perception. Paradoxically, we are accepting Ai regardless (Tsekleves, 2015). The utility of Ai in automating tasks in our everyday lives is clearly attractive and practicable (Mihajlovic, 2013; Naik, 2017).

Advanced technology featured in science fiction can build unrealistic expectations that outstrip what the technology can deliver, leading to disappointment. When technology fails to meet the “hype”, interest can rapidly drop off into the “trough of disillusionment” (Gartner, n.d.).

The opposite may be true for Ai. It pervades a wide range of our daily tasks and activities and goes unnoticed or unrecognised when compared to the sensational iterations in science fiction.

2.2. Communicating Ai Design and Tasks

To focus clients on the advantages and opportunities of Ai, Fjord promote what Ai achieves rather than what it is; Ai is, after all, a moving feast between our fictions, expectations, and beliefs. Using human terms, Fjord dispels myths of what Ai is and encourages creative thought on when, where, why, and how to employ it within an enterprise use case.

Fjord’s six Ai uses are:

  • Computer vision.
  • Natural language processing.
  • Digital signal processing.
  • Robotics.
  • Connection and prediction.
  • Generative Ai.

Framing Ai tasks in human dialogue enables the customer to envisage, if not empathise with, the technology’s viewpoint. By promoting an enterprise opportunity, the fiction can be pressed into reality.

3. Analysis

I am enthralled by Ai’s ability to sift through Big Data to build useful and usable experiences and recommendations.

3.1. Real-world Ai

Arvoia’s Mobacar of Killarney built a powerful team of data scientists to develop a recommendation engine for business travellers (Arvoia, n.d.; McMahon, 2017). Based on times and business venues, the Ai interrogates service provider APIs to collate an itinerary.

The Ai learns the client’s routines, habits, and preferences to improve its recommendations and the client’s satisfaction. The client concentrates on their business rather than on compiling distributed resources. A useful and usable outcome promotes trust in the Ai and the data it consumes.
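
I can only speculate about Mobacar’s internals, but the pattern Connor described, gathering options from provider APIs and then ranking them by preferences learned from the client’s history, might be sketched as below. Every provider, mode, price, and weight here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Option:
    provider: str
    mode: str      # e.g. "taxi", "rail", "hire-car"
    price: float   # euros

def fetch_options(venue: str, arrive_by: str) -> list[Option]:
    """Stand-in for interrogating service provider APIs; a real engine
    would call each provider and normalise the responses."""
    return [
        Option("CityCabs", "taxi", 28.0),
        Option("IrishRail", "rail", 9.5),
        Option("HireCo", "hire-car", 45.0),
    ]

def score(option: Option, prefs: dict[str, float]) -> float:
    """Blend learned per-mode preference weights with cost; higher is better."""
    return prefs.get(option.mode, 0.0) - 0.05 * option.price

def recommend(venue: str, arrive_by: str, prefs: dict[str, float]) -> Option:
    return max(fetch_options(venue, arrive_by), key=lambda o: score(o, prefs))

# A client whose booking history shows a strong preference for rail.
prefs = {"rail": 1.0, "taxi": 0.4, "hire-car": 0.2}
print(recommend("Killarney Convention Centre", "09:00", prefs))
```

Updating those weights after each accepted or rejected recommendation is what would let such an engine improve satisfaction over time.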

eCommerce and marketing product recommendations, as experienced on eBay, Amazon, and Google, may undersell Ai’s potential. They surface products and generate leads that may or may not be useful to the user or merchant. The implementation can be intrusive, and the data collection and collation untrusted.

3.2. Trust and Accuracy

Connor introduced generative Ai. Parameters of engineering, physics, and so on are taught to the Ai, which can then render iterative 3D design solutions to the same specification, and more quickly than a designer constrained by imagination. This was attractive to me until Connor illustrated how an Ai that works so closely to its parameters reflects the bias of its own designers. There can be unintended consequences.
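
Connor’s generative design example can be imagined as a search over candidate geometries scored against the taught parameters. The sketch below, with made-up physics, dimensions, and thresholds, shows the idea, and also where designer bias creeps in: the sampling ranges, constraints, and ranking are all human choices.

```python
import random

# Hypothetical specification for a bracket: mass and strength limits
# play the role of the engineering parameters "taught" to the Ai.
MAX_MASS_KG = 2.0
MIN_STRENGTH = 50.0  # arbitrary units

def strength(w: float, d: float, t: float) -> float:
    """Toy surrogate for a physics model, invented for illustration."""
    return 1200.0 * t * min(w, d)

def mass(w: float, d: float, t: float) -> float:
    return 7.8 * w * d * t  # crude steel-like density factor

def generate(n: int = 10_000, seed: int = 1) -> list[tuple[float, float, float]]:
    """Random search: keep candidates that meet the specification,
    then rank the survivors. The designers' bias lives in the sampling
    ranges, the constraints, and the ranking criterion."""
    rng = random.Random(seed)
    keep = []
    for _ in range(n):
        w, d, t = rng.uniform(0.1, 1.0), rng.uniform(0.1, 1.0), rng.uniform(0.01, 0.1)
        if mass(w, d, t) <= MAX_MASS_KG and strength(w, d, t) >= MIN_STRENGTH:
            keep.append((w, d, t))
    # Ranking by lightness is a value judgement, not a law of physics.
    return sorted(keep, key=lambda c: mass(*c))[:5]

for w, d, t in generate():
    print(f"w={w:.2f}m d={d:.2f}m t={t:.3f}m mass={mass(w, d, t):.2f}kg")
```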

Connor briefly touched on the fear of Ai displacing the workforce. Ai may open new opportunities for employment, and robotics will improve worker safety by taking on dangerous roles. Popular fiction conflicts with this view, promoting a dystopian picture of Ai. An example is Burton’s (2005) version of Roald Dahl’s Charlie and the Chocolate Factory, where Charlie’s father is made redundant by a clumsy robot. Children are fed a fear of future iterations of technology.

Job security is an area of mistrust in Ai. The European Commission (2019) works to regulate Ai and encourage ethical practices. It dispels prejudices around worker health and displacement when Ai and robotics replace human employees, and is investing €1.5 billion in Ai between 2018 and 2020. Presumably, the benefits to Europe are judged to outweigh the risks of unemployment, or those risks judged negligible.

3.3. Ai Ethics and Scope

Perhaps Ai is most untrusted within the area of surveillance, which includes law enforcement and defence.

Autonomous flight-surface control, or “fly-by-wire”, has been an accepted and trusted application of Ai for over half a century (Creech, 2003). Ai-controlled drones and sentient robot warriors have since emerged from science fiction into an at times ridiculous ethical debate. Is it right that a robot kills? Moreover, that the Ai autonomously determines to kill? Meanwhile, young service personnel deliver deadly force on their own autonomous judgement and, if enacting orders, are received as heroes. What of an Ai making life-threatening judgements on orders while policing a busy high street or a whole society?

A similar ethical debate surrounds autonomous road vehicles. The automated surveillance of, and reaction to, a vehicle’s surroundings is an attractive experience: human driver interactions result in injury and death, and insurance companies operate on risk because driving is dangerous. As with ‘killer’ machines, can we trust Ai to choose who dies in a road traffic incident, or trust its programmers to develop and execute the Ai correctly?

Ethical considerations relate to behavioural surveillance, too. Science fiction has exploited the Orwellian backstory of manipulative regimes exploiting surveillance for control, and real-world examples are plentiful. Connor discussed data surveillance proving necessary on social media platforms, which are now accountable for content posted by their patrons. The moderation task is huge, and Ai has played a part in it for some time (Mihajlovic, 2013).

How invasive can Ai be in protecting us from behavioural threats? The TV series Person of Interest compared the combination of text and facial recognition, voice and sound analysis, global positioning, and predictive interventions by two competing good and evil Ai (Newitz, 2014). Is it the Ai we cannot trust, or the people implementing it and broadening its scope? The “wrong hands” is a matter of context and doctrine. How invasive is unethical? Fiction explores the question. Are Alexa, Cortana, and Siri listening?

The three paradigms of trusted assistant (Mobacar), distrusted peeping tom (eCommerce platforms), and surveillance (autonomous machines and intelligence gathering) are paradoxes of our belief. Each attracts significant investment, and none will dissipate from history easily. The question of whether Ai is ethical or beneficial is polarised between perceived gains and losses across financial, sociological, and political applications.

Dignum (2018) warns that suitable Ai ethics are unlikely. Behavioural ethics rely on the subjective frameworks of morality imparted by the designers.

Specia (2019) discusses how Ai assistants’ default female gender reinforces negative gender stereotypes through submissive servitude and a flirtatious style; insults invite humorous retorts that discriminate against women. The United Nations Educational, Scientific and Cultural Organization (UNESCO, 2019) blames male gender bias in development teams. Specia reflects on the popular Ai assistant names (Siri, Alexa, and Cortana) being unmistakably female and, in Cortana’s case, borrowed from an inappropriately dressed fictitious woman.

The European Commission (2019) alludes to the fairness and liability of decision making in its Ai legislation and regulatory work. It lists key requirements for a trustworthy Ai experience, which our guest speaker Connor spoke to without knowing the source when later questioned by our tutor, Hilary. The list implies conscientious inclusion, openness, transparency, and governance:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

These are human needs and values. Can we hold an autonomous Ai accountable? This is an important consideration when promoting trust in our Ai-led experiences.

3.4. Stupid Ai

We know Ai can be employed stupidly, but what if the Ai itself is stupid? Bossmann (2016) reminds us that Ai learns its parameters from available data. If the data or the learning process is unreliable, then errors of application can follow.

Some Tesla Model 3 automated-driving incidents have resulted in death, including cases where the system misjudged threats from other road users (Tesla Deaths, 2019). One vehicle failed to recognise a lorry trailer from its side profile (Lambert, 2019). Dignum (2018) considers responsibility an important issue to determine with Ai. Were the Tesla incidents caused by a failure in the Ai, or by the driver failing to maintain responsibility for control?
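
Bossmann’s point can be made concrete with a toy classifier. In the sketch below, with entirely invented features and data, a nearest-centroid model never sees a trailer’s side profile during training, so it confidently labels one as open road, the same species of failure Lambert describes.

```python
# A toy illustration (all data invented) of Bossmann's point: a model is
# only as good as its training data. This nearest-centroid classifier never
# sees a trailer's side profile, so it confidently labels one as open road.
training = {
    "trailer": [(0.9, 0.2), (0.85, 0.25)],    # rear views: tall, dark
    "open_road": [(0.1, 0.9), (0.15, 0.85)],  # low horizon, bright
}
# Features are (height_ratio, brightness): crude stand-ins for real inputs.

def centroid(points):
    """Mean of each feature across the training points."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(x, model):
    """Return the label whose centroid lies closest to x."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(model, key=lambda label: dist(x, model[label]))

model = {label: centroid(pts) for label, pts in training.items()}

side_profile_trailer = (0.3, 0.8)  # low and bright: resembles open road
print(classify(side_profile_trailer, model))  # -> "open_road", a fatal gap
```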

4. Reflection

Humanising Ai’s six primary task areas may not result in an ethical or popular application, but it will at least ease the development of fantasy into science.

Ai is exciting for our users’ experience, with opportunities to remove work from everyday tasks, from time management to manufacturing. Opportunities to abuse or exploit the appetite for Ai need identifying. Ai is a prominent roadmap feature for easing customer journeys and personalising a delightful user experience. A regulated Ai appears to fit ethical business values, but where is the red line?

To me, recommendation is the most exciting Ai area, with automated vehicles, drones, and robots fulfilling my expectations of science fiction. Surveillance technologies remain a threat and, for the larger part, a necessary invasion.

If data were not open to abuse, I have no doubt our uptake of Ai-led experiences would accelerate. Conversely, our online safety relies on Ai to heavy-lift areas of moderation and security where the human remains our preferred, and possibly most reliable and trusted, arbiter of judgement.

Ai’s growth is accelerating across ever more applications and technology streams that impact our day-to-day experience of life and technology. There are negative effects, and opportunities to exploit people unethically or to manipulate ethics to suit societal, commercial, or political needs. There needs to be a balance.

It is with appreciation that institutions such as the European Commission keep oversight of Ai’s growth in concert with business and banking ethics, values, and legislation. Connor and Fjord’s idea of humanising Ai tasks may also assist enterprise to empathise with, and dispel, our users’ perceptions of risk.

Where Ai is going is still science fiction. Science fiction is where Ai comes from. Perhaps Ai is going home. We only need to hope it takes us with it and doesn’t determine to leave us behind.

5. Postscript, October 2022

The White House’s new Blueprint for an AI Bill of Rights includes protections from Ai. It lists five principles:

  1. You should be protected from unsafe or ineffective systems.
  2. You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  3. You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  4. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

“Thus, this framework uses a two-part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. These Rights, opportunities, and access to critical resources or services should be enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in our lives.”

“Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms. This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities, and access.”

This is encouraging, if late to the party.

6. References

Artificial Intelligence. (n.d.) In Merriam-Webster’s dictionary. Retrieved from https://www.merriam-webster.com/dictionary/artificial%20intelligence.

Arvoia. (n.d.). Leading mobility and hospitality companies use Arvoia’s platform and solutions to create better customer experiences and capture untapped revenues. Retrieved from https://www.arvoia.com/.

Bossmann, J. (2016, October 21). Top 9 ethical issues in artificial intelligence. Retrieved from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/.

Burton, T. (Director). (2005). Charlie and the Chocolate Factory [Motion Picture]. United Kingdom: Warner Brothers Entertainment.

Creech, G. (2003, December 17). Digital Fly By Wire: Aircraft Flight Control Comes of Age. Retrieved from https://www.nasa.gov/vision/earth/improvingflight/fly_by_wire.html.

Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1-3. doi:10.1007/s10676-018-9450-z.

European Commission. (2019, November 21). Digital Single Market Policy, Artificial Intelligence. Retrieved from https://ec.europa.eu/digital-single-market/en/artificial-intelligence.

Gartner. (n.d.). Gartner Hype Cycle. Retrieved from https://www.gartner.com/en/research/methodologies/gartner-hype-cycle.

Lambert, F. (2019, March 1). Tesla Model 3 driver again dies in crash with trailer, Autopilot not yet ruled out. Retrieved from https://electrek.co/2019/03/01/tesla-driver-crash-truck-trailer-autopilot/.

Marr, B. (2019, March 7). The Key Definitions Of Artificial Intelligence (AI) That Explain Its Importance. Forbes. Retrieved from https://www.forbes.com/.

McMahon, C. (2017, August 27). This multimillion-euro Kerry startup is turning down calls from possible buyers. Retrieved from https://fora.ie/mobacar-startup-3560130-Aug2017/.

Mihajlovic, I. (2013, June 13). How Artificial Intelligence Is Impacting Our Everyday Lives. Retrieved from https://towardsdatascience.com/how-artificial-intelligence-is-impacting-our-everyday-lives-eae3b63379e1.

Naik, D. (2017, June 3). Understanding Recommendation Engines in AI. Retrieved from https://medium.com/humansforai/recommendation-engines-e431b6b6b446.

Newitz, A. (2014, October 1). Person of Interest’s producers tell us what the Machine really is. Retrieved from https://io9.gizmodo.com/person-of-interests-producers-tell-us-what-the-machine-1498754032.

Specia, M. (2019, May 22). Siri and Alexa Reinforce Gender Bias, U.N. Finds. The New York Times. Retrieved from https://www.nytimes.com/.

Tesla Deaths. (2019). Tesla Deaths. Retrieved from https://www.tesladeaths.com/.

Tsekleves, E. (2015, August 13). Science fiction as fact: how desires drive discoveries. The Guardian. Retrieved from https://www.theguardian.com.

UNESCO. (2019). I’d blush if I could: closing gender divides in digital skills through education. (GEN/2019/EQUALS/1 REV 2). EQUALS partnership.
