Background

About Moral Agents for Sustainable Transitions

Sustainable Interaction Design (SID [2]) and Sustainable HCI (SHCI [3, 19]) have become major thrusts in HCI. Arguably the main traditional sustainable transition pathway pursued in SHCI is behaviour change, variously framed and pursued as persuasive technology [14], nudging [38], design with intent [31], design for behaviour change [35], pleasurable troublemakers [22], or gamification [17]. Following Fogg’s early functional triad model [14], these interventions have mainly taken the form of either inert tools and environments affording and constraining action, or representational media conveying information and experiences.

With the rapid commoditization and adoption of artificial intelligence (AI), we see behaviour change interventions potentially extending into Fogg’s third vertex of social actors. While Fogg chiefly envisaged this as computers using social cues, current AI technologies allow for more full-fledged social actors or moral agents that can (1) actively deliberate and make choices and take actions based on their own explicitly inscribed values, (2) engage human others in moral dialogue about their behaviour, and (3) make active moral demands on human others on their own behalf or that of others. Such artificial moral agents, “artificial systems displaying varying degrees of moral reasoning” [34], are beginning to be studied in HCI [41] and to move from fundamental work to real-life applications.
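To make these three capabilities concrete, consider a deliberately minimal sketch, ours and not drawn from the cited works: all names, the transport scenario, and the CO2 figures are hypothetical illustrations of how explicitly inscribed values could drive deliberation (1), moral dialogue (2), and moral demands (3).

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    co2_kg: float  # estimated trip emissions; hypothetical figures


class SustainabilityAgent:
    """Toy moral agent whose single inscribed value is minimizing transport CO2."""

    def __init__(self, value_statement="minimize transport CO2 emissions"):
        self.value_statement = value_statement  # (1) explicit, inspectable value

    def deliberate(self, options):
        # (1) Actively choose the option that best satisfies the inscribed value.
        return min(options, key=lambda o: o.co2_kg)

    def explain(self, chosen, rejected):
        # (2) Moral dialogue: state what it wants and why, inviting contestation.
        return (f"I prefer the {chosen.name}: my inscribed value is to "
                f"{self.value_statement}, and the {rejected.name} would emit "
                f"{rejected.co2_kg - chosen.co2_kg:.1f} kg more CO2. "
                "Do you want to contest this trade-off?")

    def demand(self, human_choice, better):
        # (3) Moral demand on behalf of non-present stakeholders.
        if human_choice.co2_kg > better.co2_kg:
            return (f"On behalf of those exposed to the emissions, "
                    f"I ask you to take the {better.name} instead.")
        return None


if __name__ == "__main__":
    car, bike = Option("car", 4.6), Option("bike", 0.0)
    agent = SustainabilityAgent()
    best = agent.deliberate([car, bike])
    print(agent.explain(best, car))
    print(agent.demand(car, best))

Even in this toy form, the inscribed value is transparent and the agent’s stance invites negotiation rather than stealthy steering; a deployed agent would of course require far richer value representations and dialogue capabilities.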

In engineering and philosophy, artificial moral agents have chiefly been discussed in terms of, e.g., the normative conditions under which one may ascribe moral agency and responsibility to an autonomous system, the kinds of ethical frameworks embedded, or the necessity, benefit, or practicality of embedding moral calculi in autonomous systems [7, 15, 16, 34]. Yet for SHCI, and HCI more broadly, they present a potential new paradigm with rich new questions: When and why do humans attribute moral agency and worth to interactive systems? How do these attributions affect how we interact with such systems, and how do we design for that? What is ‘second-order’ ethical and just design, that is, designing AI systems that themselves take ethical stances? In light of the climate crisis and the polarization around it, we cannot afford not to inscribe pro-sustainable ends into our systems, and we cannot avoid that doing so will put us in opposition to some user and stakeholder groups. Here, moral agents could advance ethical and political SHCI debates around individual autonomy versus collective goods and values in design. They could move us from the thesis of technology bluntly prescribing designer values, through the antithesis of value-sensitive design re-presenting stakeholder values, to a synthesis of value-driven artifacts taking a stance that is then open to deliberation with users.

In this, moral agents could also address important critiques of traditional behaviour change SHCI and answer calls for more participatory, community-based, and deliberative approaches that facilitate collective and political action within complex systems [3, 4, 24]. Stepping beyond ‘stealthy’ and/or inflexibly prescriptive behaviour change, moral agents could make the values inscribed in them transparent, literally explaining what they want and why, and open these values up to situational negotiation and contestation. More gently, moral agents could prompt and support people in reflecting on their values and goals and thus rethinking their actions. Moral agents could also partake in community deliberation as representatives of other, non-present human or non-human stakeholders that do not easily figure in democratic and participatory processes. They could give a material, autonomous, and morally reasoning voice, face, and agency to devices (a form of materialized speculative metaphysics or carpentry [36]), but also to future generations, species, ecosystems, or even Gaia, thus answering calls for post-Anthropocentric, more-than-human politics, ethics, and design [8, 10, 18, 29].

More than that, following recent post-phenomenological analyses [39, 40], moral agents could mark a different kind of human-technology relation or way of materialising morality, where technology relates to us as a second-person You or counterpart [21, 28]: a moral agent with its own values, intentions, agency, and potentially even moral worth. Imagine the difference, in experience, moral deliberation, and action, between dealing with an inert key holder that makes it more effortful to take the car rather than the bike (a tool); a smartwatch interface displaying how much extra CO2 your transport choice will produce (a medium); and your car, or an AI spokesperson for the pedestrians exposed to traffic exhaust, debating with you about how wrong they think it is to drive on such a nice day out (a moral agent).

Thus, moral agents for sustainable HCI bring together current HCI discourses around human-AI interaction design, critical computing, behaviour and system change, more-than-human design, and design ethics and politics with recent philosophical debates about technological mediation, AI ethics, and artificial moral agents. They open at least three important threads of HCI research:

  1. Social-psychological mechanisms: Understanding how people interact with moral agents, when and why they ascribe moral status and agency to systems, and how moral agents can further sustainable transitions, building on affective computing, human-robot interaction (HRI), and socially interactive agents [33]

  2. Design: How to design acceptable, effective, responsible systems that people attribute moral agency to

  3. Ethics and politics: How to ethically move from value-sensitive to value-driven design, and from interactive systems as passive embodiments or mediators of human moral agency and values to systems as independent moral agents or moral representatives of other actors

Workshop Goals

To initiate a research community that can answer these questions and explore moral agents as a new design material and SHCI approach, we propose a hybrid, one-day CHI workshop inviting HCI and AI researchers and practitioners across the human-AI interaction, behaviour change and transition design, speculative and critical design, and design ethics and politics communities to:

  • articulate important issues and open questions around moral agents in HCI and Sustainable HCI

  • gather existing philosophical, theoretical, and empirical approaches and evidence relevant to moral agent interaction, design, and ethics

  • collect a library of existing moral agent applications and creative works to ground future work

  • create a community of researchers and practitioners around moral agents for SHCI

References

[1] Nick Ballou, Sebastian Deterding, April Tyack, Elisa D Mekler, Rafael A Calvo, Dorian Peters, Gabriela Villalobos-Zúñiga, and Selen Turkay. 2022. Self-Determination Theory in HCI: Shaping a Research Agenda. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3491101.3503702

[2] Eli Blevis. 2007. Sustainable interaction design: invention & disposal, renewal & reuse. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07). Association for Computing Machinery, New York, NY, USA, 503–512. https://doi.org/10.1145/1240624.1240705

[3] Christina Bremer, Bran Knowles, and Adrian Friday. 2022. Have We Taken On Too Much?: A Critical Review of the Sustainable HCI Landscape. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans, LA, USA, 1–11. https://doi.org/10.1145/3491102.3517609

[4] Hronn Brynjarsdottir, Maria Håkansson, James Pierce, Eric Baumer, Carl DiSalvo, and Phoebe Sengers. 2012. Sustainably unpersuaded: how persuasion narrows our vision of sustainability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Austin, TX, USA, 947–956. https://doi.org/10.1145/2207676.2208539

[5] Amy Bucher. 2020. Engaged: Designing for Behavior Change. Rosenfeld Media, New York. https://rosenfeldmedia.com/books/engaged-designing-for-behavior-change/

[6] Tjeu Van Bussel, Roy Van Den Heuvel, and Carine Lallemand. 2022. Habilyzer: Empowering Office Workers to Investigate their Working Habits using an Open-Ended Sensor Kit. CHI Conference on Human Factors in Computing Systems Extended Abstracts, 1–8. https://doi.org/10.1145/3491101

[7] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes, and Félix Ramos. 2020. Artificial Moral Agents: A Survey of the Current Status. Science and Engineering Ethics 26, 2 (April 2020), 501–532. https://doi.org/10.1007/s11948-019-00151-x

[8] Aykut Coskun, Nazli Cila, Iohanna Nicenboim, Christopher Frauenberger, Ron Wakkary, Marc Hassenzahl, Clara Mancini, Elisa Giaccardi, and Laura Forlano. 2022. More-than-human Concepts, Methodologies, and Practices in HCI. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, 1–5. https://doi.org/10.1145/3491101.3516503

[9] M Coulton, P Lodge, T Crabtree, and A Chamberlain. 2022. Experiencing mundane AI futures. In DRS2022: Bilbao, D. Lockton, P. Lloyd, and S. Lenzi (Eds.). https://doi.org/10.21606/drs.2022.283

[10] Paul Coulton and Joseph Galen Lindley. 2019. More-Than Human Centred Design: Considering Other Things. The Design Journal 22, 4 (July 2019), 463–481. https://doi.org/10.1080/14606925.2019.1614320

[11] Sebastian Deterding. 2015. The Lens of Intrinsic Skill Atoms: A Method for Gameful Design. Human-Computer Interaction 30, 3-4 (2015), 294–335. https://doi.org/10.1080/07370024.2014.993471

[12] Sebastian Deterding, Jonathan Hook, Rebecca Fiebrink, Marco Gillies, Jeremy Gow, Memo Akten, Gillian Smith, Antonios Liapis, and Kate Compton. 2017. Mixed-Initiative Creative Interfaces. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). ACM Press, New York, 628–635. https://doi.org/10.1145/3027063.3027072

[13] Judith Dörrenbächer, Matthias Laschke, Diana Löffler, Ronda Ringfort, Sabrina Großkopp, and Marc Hassenzahl. 2020. Experiencing Utopia. A Positive Approach to Design Fiction. https://doi.org/10.48550/arXiv.2105.10186

[14] B.J. Fogg. 2003. Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, Amsterdam et al.

[15] Paul Formosa and Malcolm Ryan. 2021. Making moral machines: why we need artificial moral agents. AI & SOCIETY 36, 3 (Sept. 2021), 839–851. https://doi.org/10.1007/s00146-020-01089-6

[16] Fabio Fossa. 2018. Artificial moral agents: moral mentors or sensible tools? Ethics and Information Technology 20, 2 (June 2018), 115–126. https://doi.org/10.1007/s10676-018-9451-y

[17] Jon Froehlich. 2015. Gamifying Green: Surveying and Situating Green Gamification and Persuasive Technology for Environmental Sustainability. In The Gameful World: Approaches, Issues, Applications, Steffen P. Walz and Sebastian Deterding (Eds.). MIT Press, Cambridge, MA, 563–596.

[18] Elisa Giaccardi and Johan Redström. 2020. Technology and More-Than-Human Design. Design Issues 36, 4 (Sept. 2020), 33–44. https://doi.org/10.1162/desi_a_00612

[19] Lon Åke Erni Johannes Hansson, Teresa Cerratto Pargman, and Daniel Sapiens Pargman. 2021. A Decade of Sustainable HCI: Connecting SHCI to the Sustainable Development Goals. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–19. https://doi.org/10.1145/3411764.3445069

[20] Marc Hassenzahl. 2010. Experience Design: Technology for All the Right Reasons (Synthesis Lectures on Human-Centered Informatics). Morgan and Claypool Publishers. 100 pages.

[21] Marc Hassenzahl, Jan Borchers, Susanne Boll, Astrid Rosenthal-von der Pütten, and Volker Wulf. 2021. Otherware: how to best interact with autonomous systems. Interactions 28, 1 (Jan. 2021), 54–57. https://doi.org/10.1145/3436942

[22] Marc Hassenzahl and Matthias Laschke. 2015. Pleasurable Troublemakers: Gamification and Design. In The Gameful World: Approaches, Issues, Applications, Steffen P. Walz and Sebastian Deterding (Eds.). MIT Press, Cambridge, MA, 167–195.

[23] Judith Dörrenbächer, Ronda Ringfort-Felner, Robin Neuhaus, and Marc Hassenzahl (Eds.). 2022. Meaningful Futures with Robots: Designing a New Coexistence. Routledge, London. https://www.routledge.com/Meaningful-Futures-with-Robots-Designing-a-New-Coexistence/Dorrenbacher-Ringfort-Felner-Neuhaus-Hassenzahl/p/book/9781032246482

[24] Bran Knowles, Lynne Blair, Stuart Walker, Paul Coulton, Lisa Thomas, and Louise Mullagh. 2014. Patterns of persuasion for sustainability. In Proceedings of the 2014 Conference on Designing Interactive Systems. ACM, Vancouver, BC, Canada, 1035–1044. https://doi.org/10.1145/2598510.2598536

[25] Lenneke Kuijer and Lada Hensen Centnerová. 2022. Exploring futures of summer comfort in Dutch households. CLIMA 2022 conference. https://doi.org/10.34641/clima.2022.388

[26] Lenneke Kuijer, Annelise De Jong, and Daan Van Eijk. 2013. Practices as a unit of design: An exploration of theoretical guidelines in a study on bathing. ACM Transactions on Computer-Human Interaction (TOCHI) 20, 4 (Sept. 2013), 1–22. https://doi.org/10.1145/2493382

[27] Matthias Laschke, Marc Hassenzahl, Sarah Diefenbach, and Marius Tippkämper. 2011. With a little help from a friend: A Shower Calendar to save water. In CHI ’11 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’11). 633–646. https://doi.org/10.1145/1979742.1979659

[28] Matthias Laschke, Robin Neuhaus, Judith Dörrenbächer, Marc Hassenzahl, Volker Wulf, Astrid Rosenthal-von der Pütten, Jan Borchers, and Susanne Boll. 2020. Otherware needs Otherness: Understanding and Designing Artificial Counterparts. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI ’20). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3419249.3420079

[29] Jen Liu, Daragh Byrne, and Laura Devendorf. 2018. Design for Collaborative Survival: An Inquiry into Human-Fungi Relationships. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3173614

[30] Dan Lockton, Michelle Chou, Aadya Krishnaprasad, Deepika Dixit, Stefania La Vattiata, Jisoo Shon, Matt Geiger, and Zea Wolfson. 2019. Metaphors and imaginaries in design research for change. Design Research for Change Symposium, 1–19.

[31] Dan Lockton, David Harrison, and Neville A. Stanton. 2010. The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics 41, 3 (2010), 382–392. https://doi.org/10.1016/j.apergo.2009.09.001

[32] Dan Lockton, Devika Singh, Saloni Sabnis, Michelle Chou, Sarah Foley, and Alejandro Pantoja. 2019. New metaphors: A workshop method for generating ideas and reframing problems in design and beyond. In Proceedings of the 2019 Creativity and Cognition (C&C ’19). 319–332. https://doi.org/10.1145/3325480.3326570

[33] Birgit Lugrin, Catherine Pelachaud, and David Traum (Eds.). 2021. The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition (1 ed.). Vol. 37. Association for Computing Machinery, New York, NY, USA.

[34] Andreia Martinho, Adam Poulsen, Maarten Kroesen, and Caspar Chorus. 2021. Perspectives about artificial moral agents. AI and Ethics 1, 4 (Nov. 2021), 477–490. https://doi.org/10.1007/s43681-021-00055-2

[35] Kristina Niedderer, Stephen Clune, and Geke Ludden. 2020. Design for Behaviour Change: Theories and Practices of Designing for Change. Routledge. 298 pages.

[36] Franziska Pilling and Paul Coulton. 2022. Carpentered Diegetic Things: Alternative Design Ideologies for AI Material Relations. In The Ecological Turn: Design, Architecture and Aesthetics beyond "Anthropocene". Number 5. Doctoral Program, Department of Architecture, University of Bologna, Bologna, 240–254.

[37] Sjoerd Stamhuis, Steven Vos, Hans Brombacher, and Carine Lallemand. 2021. Office Agents: Personal Office Vitality Sensors with Intent. CHI Conference on Human Factors in Computing Systems Extended Abstracts, 1–5. https://doi.org/10.1145/3411763.3451559