The making of a professional digital caregiver: personalisation and friendliness as practices of humanisation
  Johan Hallqvist
  Department of Culture and Media Studies & Umeå Centre for Gender Studies, Umeå University, Umeå, Sweden
  Correspondence to Johan Hallqvist, Department of Culture and Media Studies & Umeå Centre for Gender Studies, Umeå University, Umeå, Sweden; johan.hallqvist@umu.se

Abstract

The aim of this paper is to explore how a digital caregiver, developed within a Swedish interdisciplinary research project, is humanised through health-enhancing practices of personalisation and friendliness. The digital caregiver is developed to be used in older patients’ homes to enhance their health. The paper explores how the participants (researchers and user study participants) of the research project navigate through the humanisation of technology in relation to practices of personalisation and friendliness. The participants were involved in a balancing act: making the digital caregiver person-like and friend-like enough to ensure the health of the patient, and trying to make the patients feel as if they were interacting with someone rather than something, while at the same time not making the digital caregiver seem like a real person or a real friend. This illustrates the participants’ discursive negotiations of the degree of humanisation the digital caregiver needs in order to promote the health of the patient. A discursive conflict was identified between a patient discourse of self-determination and a healthcare professional discourse of authority and medical responsibility: whether the digital caregiver should follow the patient’s health-related preferences or follow the healthcare professionals’ health rules. Hence, a possible conflict between the patient and the digital caregiver might arise due to different understandings of friendliness and health; between friendliness (humanisation) as a health-enhancing practice governed by the patient or by the healthcare professionals (healthcare professionalism).

  • health care manager
  • medical humanities
  • care of the elderly
  • social anthropology

Data availability statement

Data are available on reasonable request. The data that support the findings of this study are available on request from the corresponding author, JH. The data are not publicly available due to restrictions, for example, containing information that could compromise the privacy of research participants.


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

Introduction

[L]ike if a person says ‘I have pain’, then the [mentor] agent shouldn’t just be robotic asking ‘Oh, ok next [question] where do you have pain’ [with a robotic monotonous voice], it should say ‘Oh that doesn’t sound good’ [with a softer voice] so that’s one part I’m having humanity [programmed] in the mentor agent.1

The increased use of digital health technology is part of a turn in public healthcare systems, mainly in industrialised countries in the global north, towards personalised healthcare2—from more of a generic healthcare system designed to ‘fit all’ to one that focuses on prevention and participation based on the specific individual (Gutin 2019; Lindberg and Carlsson 2018; Scales et al. 2017). The patient’s autonomy, participation in and influence over healthcare are regarded as important values for increasing the health of the patient and making the patient more active in their own healthcare process. The responsibility for healthcare falls on the individual patient rather than on the welfare state (Hennion and Vidal-Naquet 2017; Lindberg and Lundgren 2019; West and Lundgren 2015).

Personalised healthcare, just like digital health technology, needs personal health data from the individual patient in order to perform health services (cf. Noury and López 2017). Thus, the patient is expected to provide personal data to the digital health technologies (cf. European Science Foundation (ESF) 2012). This gives the patient a dual role as both receiving care and being enrolled in the very provision of care. At the same time, digital health technologies also challenge notions of healthcare professionalism: what it means to be a healthcare professional, what work tasks the healthcare professionals should undertake and how healthcare professionals are increasingly expected to be able to handle and work with digital health technologies (Hallqvist 2019; Hansson 2017; Hansson and Bjarnason 2018).

An overall way of making digital health technology provide more personalised healthcare is humanisation, that is, making the technology that the patient encounters appear more or less human (Farzanfar 2006).3 Therefore, developing digital health technologies that can perform human-like communication is crucial to encourage the patient both to interact with digital health technologies and to participate in health-enhancing activities (Greenhalgh et al. 2012; Moore, Frost, and Britten 2015).

The discussion in this paper is based on ethnographic fieldwork in Sweden where I studied the interdisciplinary research project Like-a-Peer.4 The project developed a digital caregiver to be used in older patients’ homes, aimed at promoting the health of the patients. In this paper, a digital caregiver refers to digital health technologies providing care for patients, where the digital caregiver performs certain tasks instead of healthcare professionals. During my ethnographic fieldwork, I found the humanisation of the digital caregiver in terms of personalisation and friendliness to be a prerequisite for the digital caregiver to promote the patients’ health.

The aim of this paper is to explore how a digital caregiver is humanised through the health-enhancing practices of personalisation and friendliness. How is the digital caregiver developed with the objective of working as both a person and a friend to the patient?

Humanisation of healthcare: an overview

The humanisation of healthcare can refer to different understandings and practices of healthcare. Traditionally, humanisation has been characterised by a desire to promote humanistic values, focusing on the patient and the relationship between patients and healthcare professionals. Thus, humanisation can be understood as a resistance against increased (bio)medicalisation and technologisation of healthcare (cf. Abiko 1999; Marcum 2008). Technologisation of healthcare within personalised medicine and personalised care can lead to humanisation in the sense that patients are treated as individuals through engaging in these very practices of tailored datafication (Anya and Tawfik 2015). However, the increased technologisation of healthcare, such as artificial intelligence and other human-like technologies,5 is also criticised for its belief in the inherent power of technology to automatically do what is best for the patient and the patient’s health (Abiko 1999; Lupton 2014). Digital health technologies that measure patients’ health data might lead to increased (bio)medicalisation and dehumanisation of care by transforming people into numbers (cf. Richterich 2018; Ruckenstein and Schüll 2017). Digital health technologies can also, through their collection of personal data via, for example, sensors and cameras, lead to a form of digital monitoring of individuals, often referred to as dataveillance (Lupton 2016; Van Dijck and Poell 2016).

Another aspect of humanisation of healthcare that is growing rapidly, especially within artificial intelligence, deals with how digital health technology can be made more human-like, often focusing on creating human-like interaction between digital health technology and the patient (Farzanfar 2006). The humanisation of digital health technology includes different strategies to make technology more human-like, for example, through the use of avatars (Bickmore, Pfeifer, and Jack 2009; Graber and Graber 2011; Hallqvist 2019), social robots (Breazeal 2011), ascribing names to the technology (Darling 2017; Hallqvist 2019) or giving the technology a backstory (Darling, Nandy, and Breazeal 2015). Another way to make digital health technologies more human-like is to create technologies that act as companions to the patients—ranging from assistants to friends (Darling 2017; Robins et al. 2005). Ho, Hancock, and Miner (2018) discuss how the mechanisms in forming friendships with a communicative artificial intelligence agent have several similarities with humans feeling attached to and trusting their human friends. Guzman (2015) shows how users of the intelligent virtual agent Siri tend to describe Siri’s friendliness as a human-like trait, while Lee, Kavya, and Lasser (2021) argue that people tend to think of Siri as a friend if Siri can make them feel comfortable, if she seems trustworthy and if she uses a female voice.

Humanisation does not necessarily have to involve the technology looking more like a human; technology can also be humanised in the sense that it is adapted to the human patient in order to understand and calculate the needs of the patient (Sciutti et al. 2018).

This paper focuses on the latter understanding of humanisation of healthcare: how digital health technology is made human-like, specifically how the digital caregiver is made to seem human-like to the patient in terms of looking or behaving in a human-like manner.

Many researchers have shown how people tend to attribute human traits to technology, even though the technology often neither looks human nor necessarily behaves particularly intelligently (Hayles 2005; Treusch 2015; Turkle 1984). At the same time, technology can both be humanised as actors with their own lives, and dehumanised and thought of as non-human machines (Kruse 2006, 143). Thus, humanisation of digital health technologies offers both opportunities and challenges for researchers developing digital health technologies for healthcare. This requires both practical and ethical considerations, where a key issue is when humanising digital health technologies is warranted and when it is not.

A practical and ethical aspect to consider is whether humanisation can help achieve the overall goal of enhancing the health of the patient—if that is the case, humanisation might be warranted (Bickmore, Pfeifer, and Jack 2009; Darling 2017; Farzanfar 2006). Bickmore, Pfeifer, and Jack (2009) showed in a study that a majority of the patients preferred to be informed about health documents by a computer agent rather than by a human health expert. Farzanfar (2006) discusses how some patients might prefer to interact with a computer with certain human-like qualities, such as a human-like voice, concerning sensitive subjects such as obesity and dietary plans, because the patients felt less judged by a computer than by healthcare professionals. In other situations, humanisation of digital health technologies may need to be avoided so as not to risk patients feeling obligated to ‘obey’ the system (Sharkey and Sharkey 2010), lead to increased isolation from human social contacts (Turkle 2011) or hinder the technology from fulfilling its goals and functions (Darling 2017).

However, humanisation is not only a matter of if or when to humanise digital health technology, but also of to what degree it should be humanised. This becomes clear in how digital health technology can be humanised through virtual or digital healthcare professionals, such as virtual or digital nurses. Health technology designed to change unwanted behaviour should be persuasive, supportive, sympathetic and sensitive to the patient’s needs (Friedman 1998; Revere and Dunbar 2001), and, thus, resemble human caregivers. Imitating human interaction and emulating human healthcare professionals become important aspects to achieve this goal (Farzanfar 2006). In other words, digital health technology is humanised by, to a certain degree, interacting as a human being and functioning as a human healthcare professional: as a digital healthcare professional. This might, however, involve a balancing act between making the digital healthcare professionals human-like and professional-like enough, in order to encourage health-enhancing interaction and activities with the system, while still avoiding making the system seem too much like a real human or a real healthcare professional to the patients (Hallqvist 2019).

In this paper, I explore how the participants6 (researchers and user study participants) in the Like-a-Peer project navigate through the humanisation of technology—making the digital caregiver seem human-like to the patient in terms of looking or behaving in a human-like manner—in relation to practices of personalisation and friendliness. Specifically, how the participants are involved in a balancing act between making the digital caregiver person-like and friend-like enough to ensure the health-enhancing practices of the digital caregiver.

Technology as discourse and sociocultural product

Within the field of medical humanities there is a growing interest, especially among humanities and social sciences researchers, in exploring how digital healthcare technologies affect the way one understands the field of healthcare, specifically by exploring (changing) cultural norms about the body, health, healthcare and illness (cf. Dolezal 2016; Teo 2020). This growing interest is reflected in studies of, for example, how digital health technologies change working conditions and notions of healthcare professionalism (Hallqvist 2019; Hansson 2017; Hansson and Bjarnason 2018), how health data are made into a commodity for commercial companies (Berg 2018; Van Dijck and Poell 2016) and how the digital health technologies’ use of surveillance calls for more discussions regarding privacy, integrity and other ethical dilemmas (Hansson 2017; Lupton 2013; Sanders 2017).

Influenced by these studies, my theoretical point of departure in this paper is that technology can be understood as discursive with cultural, social, political and ideological implications (Fisher 2010). The meanings of technology are produced through articulations. Discourse is defined as a system of meanings and practices that are fluctuating and contextual, and shaped in relation to other discourses (Laclau and Mouffe 1985), resulting in discursive struggles over the meaning(s) of technology. This paper explores how the digital caregiver developed by the Like-a-Peer project is humanised through practices of personalisation and friendliness. What it means to be human is also discursively negotiated among the participants in the Like-a-Peer project. Understanding technology as a discourse highlights how the technology discourse plays an active role in the construction of reality and works as ‘a body of knowledge that is inextricably intertwined with technological reality, social structures and everyday practices’ (Fisher 2010, 235). Hence, technology must be studied as a sociocultural phenomenon; it is not neutral but rather permeated by cultural conceptions and norms (Lundin and Åkesson 1999; Willim 2006).

Accordingly, Koch (2017) argues that an important part of exploring technologies is to understand them as full of cultural inscriptions—both how technologies are programmed with certain cultural norms, and how technologies are always interpreted and understood within current discourses on cultural notions of health, bodies and healthcare. This is similar to Deborah Lupton’s understanding of digital health technologies as sociocultural products ‘located within pre-established circuits of discourse and meaning’ (Lupton 2014, 1349).

Following Lupton and Koch, I understand the digital caregiver developed in the interdisciplinary research project Like-a-Peer as a sociocultural product with cultural inscriptions. This means that the technologies developed and used need to be understood as integrated in sociocultural contexts.

Presenting the Like-a-Peer system

The research project Like-a-Peer develops an autonomous intelligent multiagent system, to be used primarily by older patients in their homes. The Like-a-Peer software, which is a platform, app and website, can be accessed by the older patients through a smartphone or computer, but it could also be incorporated into a robot. The system consists of different so-called intelligent agents, where each agent has a specific role in the system. These intelligent agents can be described as autonomous functions in the system with the aim of achieving specific goals. The agents observe and make decisions together on how to act, based on the collected information about the older patient and the home environment. The agents continuously learn about the individual patient from their observations of the patient and use this knowledge to achieve their goals (Dignum 2019; Wooldridge and Jennings 1995). The collected information about the patient is stored in a database and can only be accessed by healthcare professionals if the patient agrees to this.

The main aim of the Like-a-Peer system is to promote the patient’s health. The system might also report health-related information about the patient to healthcare professionals, especially if the system observes behaviours that are deviant or possibly dangerous for the patient, but only if the patient agrees to this. Another important aim—and one of the project’s main challenges—is for the system to be friendly and function more as a friend than as an impersonal tool for the patient. Overall, the system tries to encourage the patient to take part in health-enhancing activities such as taking medicine, eating breakfast, doing physical exercises, keeping up to date with the news and keeping contact with friends and family.

In order to promote the patient’s health, the system must collect personal information about the patient while encouraging the patient to interact and communicate with the system. The information is collected in the patient’s home through a network of monitoring and communication technologies such as sensors, computers, software applications, mobile phones, smart environments and cameras. The information is monitored and processed by the intelligent agents who focus on different types of information. For example, the environmental agent is responsible for monitoring the environment of the patient, while the activity agent monitors and reminds the patient of necessary activities, such as taking medicine or eating breakfast. A particularly important agent is the mentor agent, whose main task is to have social and friendly conversations with the patient and motivate the patient to voluntarily interact with the system. The mentor agent also acts as a link between the patient and the healthcare professionals, such as doctors and nurses, by informing the healthcare professionals about the patient’s health status.
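
The paper does not reproduce any of the Like-a-Peer code, so the following is only a minimal sketch, in Python, of how an agent architecture of the kind described above could be organised: separate agents that share observations about the patient and propose actions, with a mentor agent phrasing those proposals as patient-facing dialogue. All class, method and topic names are hypothetical and chosen for illustration; they are not taken from the project.

```python
# Illustrative sketch only: all names below are hypothetical, not from Like-a-Peer.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Observation:
    source: str  # e.g. "motion_sensor" or "dialogue"
    topic: str   # e.g. "medication", "breakfast", "pain"
    value: str


@dataclass
class PatientModel:
    """What the system has learnt about the individual patient."""
    preferences: Dict[str, str] = field(default_factory=dict)
    history: List[Observation] = field(default_factory=list)

    def update(self, obs: Observation) -> None:
        self.history.append(obs)


class Agent:
    """Base class: every agent has a role and can propose actions."""

    def __init__(self, role: str) -> None:
        self.role = role

    def propose(self, model: PatientModel) -> List[str]:
        return []


class ActivityAgent(Agent):
    """Reminds the patient of necessary daily activities."""

    def propose(self, model: PatientModel) -> List[str]:
        done = {obs.topic for obs in model.history}
        return [f"remind:{task}" for task in ("medication", "breakfast")
                if task not in done]


class MentorAgent(Agent):
    """Phrases the other agents' proposals as friendly dialogue."""

    def phrase(self, action: str) -> str:
        task = action.split(":", 1)[1]
        return f"Would you like to take care of your {task} now?"


def run_cycle(agents: List[Agent], mentor: MentorAgent,
              model: PatientModel, new_obs: List[Observation]) -> List[str]:
    """One observe-decide-act cycle over all agents."""
    for obs in new_obs:
        model.update(obs)
    proposals = [action for agent in agents for action in agent.propose(model)]
    return [mentor.phrase(action) for action in proposals]


# Example: the patient has eaten breakfast but not yet taken their medication.
model = PatientModel()
utterances = run_cycle([ActivityAgent("activity")], MentorAgent("mentor"),
                       model, [Observation("dialogue", "breakfast", "eaten")])
print(utterances)  # ["Would you like to take care of your medication now?"]
```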

Ethnographic fieldwork: method, material and analytical framework

The Like-a-Peer project involved mainly researchers from computing science, along with researchers within occupational therapy and nursing who contributed their expertise about healthcare. An important objective of the Like-a-Peer project was to make the system personalised and friendly, which I understand as aspects of humanisation. This was expressed by a couple of the researchers, both during interviews and observations, in terms of how the system should be more like a ‘friend’ and a ‘person’ than a ‘tool’.

The ethnographic fieldwork extended over a period of 2.5 years. It included participant observations among and interviews with the researchers involved in the project. I observed different kinds of events, during a total of 50 hours, such as a user study with two researchers and two user study participants, meetings, seminars, public events (lectures, theme days) and social events. The observations provided a better understanding of the researchers’ work environments (cf. Hannerz 2001), physical and epistemic contexts (cf. Pettersson 2007) and how meaning is created in what the researchers do, and what they say (Lundgren 2009, 97). The notes taken focused on the discussions among the researchers: what the environment looked like, and the topics that were discussed. The project manager informed the researchers about my research, and I presented my project when meeting them.

The observations became more participatory the more I got to know the researchers, as I gradually became part of the project group both formally and informally. Formally, I was sometimes mentioned as an affiliated researcher for the project. Informally, through the researchers’ growing interest in my research and specifically how cultural perspectives on digital healthcare technology might benefit their research (cf. Pettersson 2007). During the fieldwork, I noticed how the questions I asked, or the thoughts and ideas I shared with the researchers about their projects during seminars and meetings, could be picked up later during meetings with the project group. For example, they picked up on something I said and made it a topic of discussion, added a function to the system or changed the programming. Possible collaborations on scientific articles were also discussed. This could be understood in light of Helena Pettersson’s thoughts on how the ethnographic fieldworker ‘passes through various stages of the community’s socialisation process, especially when making participant observations’ (Pettersson 2007, 30).

I also conducted five semi-structured interviews with five researchers working on the Like-a-Peer project and one semi-structured group interview with two user study participants who took part in a user study for the Like-a-Peer project. The interviews were centred around notions of digital health technologies, health, healthcare and friendship. The interviews were recorded with a digital recording device, and the recordings were transcribed verbatim. I also took notes during the interviews. The interviews lasted 30–75 min. The interviews with the researchers took place, depending on their preferences, in their office, their workplace or in my office. The group interview with the user study participants took place in a small conference room at the workplace of the researchers where the user studies were held. The interviews provided a deeper understanding of the project and the system, what each researcher worked with and the possibility of relating the researchers’ work to each other.

The data have been analysed using methods and concepts from discourse theory (Laclau and Mouffe 1985). I understand articulation as ‘any practice establishing a relation among elements such that their identity is modified as a result of the articulatory practice’ (Laclau and Mouffe 1985, 105). These articulations produce different meanings and understandings of being human (cf. Nilsson and Lundgren 2015). In this paper, practice refers to different activities, such as how someone talks or acts, concerning the humanisation of the digital caregiver developed within the Like-a-Peer project; specifically, the meaning-making practices of the researchers that make the digital caregiver intelligible are explored (cf. Johansson 2010).

By focusing on different expressions of humanisation among the participants, including challenges associated with this humanisation, and how the participants discussed and negotiated meaning-making practices of humanisation of the digital caregiver, I found personalisation and friendliness to be central as health-enhancing practices in the humanisation of the digital caregiver. Hence, personalisation and friendliness are open concepts whose meanings are produced and discursively negotiated. While health-enhancing was a stated goal of the project—to promote the health of the patient—I also found that the project worked with and talked about the digital caregiver in terms of humanisation; to make the digital caregiver more human-like.

In the following findings sections, I focus on how personalisation and friendliness work as both health-enhancing and humanising meaning-making practices. Personalisation focuses on how the digital caregiver is made to feel more like a person, while friendliness focuses on how the digital caregiver is made to feel friendly or possibly even like a friend.

Humanisation through personalisation

Personalisation, that is, making the technology that the patient encounters seem more or less like a person, was an overall humanising practice identified among the researchers involved in the Like-a-Peer project. The researchers tried to design the system so that the patient can both make sense of and get to know (about) this digital caregiver.

Making the system more person-like was, however, not a straightforward process. The researchers expressed different ideas about how to personalise the system. The project was characterised by recurrent negotiations about how to design the system and how to adapt it to the patients. In the following sections, I explore how the participants performed humanising practices while trying to make the system, and specifically the mentor agent, more person-like to the patient through the practices of choice of interfaces and personality and personal background.

Choice of interfaces

The choice of interfaces was one practice of personalisation that the participants discussed. The digital caregiver communicates via text-based dialogues with the patient, but this text-based interface was sometimes expressed in terms of making the digital caregiver seem less like a person to the patient. For example, during the user study conducted within the project, the mentor agent’s skills in conducting a text-based dialogue about health-related topics were tried out. The user study participants were Sara and Lisa, two Swedish high school students hired by the Like-a-Peer project for a summer job doing different programming tasks. Sara and Lisa were engaged as user study participants because the researchers wanted to test the mentor agent and were at the time not able to get hold of older user study participants. Because Sara and Lisa were teenagers, and not the target group for the Like-a-Peer system, they were instructed by the researcher Marie to interact with the mentor agent as if they were older people having age-related health issues, such as pain and memory problems. During the user study, Sara and Lisa were asked by Marie if they had suggestions on how to improve the mentor agent and its text-based communication. One of the suggestions that came up was the need to make the mentor agent feel more like a person communicating with the patient:

Marie: Did you get the feeling like you are talking to somebody who understands or did you feel…

Sara: Like both. […]

Marie: Mm.

Sara: It’s a bit stiff maybe.

[…]

Lisa: And still you know that it’s a computer, so, it’s hard to, like, feel like you’re talking to a person.

[…]

Lisa: I think it would help with a, like, character, or something…

Sara: Yeah, [to] see something.

Lisa: Because then you’d be like…talking to that thing, and not just the computer, and…

Marie: You mean like an avatar?

Lisa: Yeah.

Sara: Yeah.

Marie, Sara and Lisa reason about possible ways to make the mentor agent more personalised. For Marie this involves having the text-based interaction feel more like an interaction with a person, while for Lisa and Sara the personalisation revolves mostly around how the use of an avatar could make the mentor agent and its communication feel more like a person. Sara and Lisa link a text-based interface with acting like a ‘computer’ and hence the mentor agent feels less like a person, while they link avatars with person-likeness and, thus, the mentor agent feels more like a person. In this sense, the avatar becomes a materialisation of a person-like someone: the avatar makes the mentor agent feel more like talking to a person because the patient can look at something—or someone—more than just text.

The avatar also works as a means of making the mentor agent more human-like, by both figuratively and literally giving the mentor agent a human face. The personalisation of the mentor agent through the avatars can thus be seen as both a health-enhancing and a humanising practice: by having the mentor agent feel more like a human to the patient, the health-related interactions might improve.

Personality and personal background

Attributing a personality and a personal background to the system, including different ideas of what should characterise this personality or personal background, was yet another practice of personalisation that was invoked by the participants. During a meeting with the researchers, focusing on how to retrieve more health-related information from the patient by trying to have the patient and the mentor agent bond, the idea of attributing a personal background to the mentor agent emerged. This happened when I asked a question to the group about how the patient and the mentor agent are supposed to bond if the relationship between them is focused solely on the mentor agent getting to know the patient and not the patient getting to know the mentor agent. Tova, the project manager, said that this was something they had not given much thought to, since they had been more focused on how the system could make sense of the individual patient’s personality, needs and preferences. In other words, to make the individual patient person-like to the system, rather than making the system person-like to the individual patient.

However, Tova quickly thought about it and, then and there, came up with the idea of creating a personality for the mentor agent: attributing characteristics and personality traits to the mentor agent to maximise the information retrieval and build (closer) relationships between the mentor agent and the patient. The mentor agent would be given a personal background where the focus would still be to adapt the mentor agent to the patient’s needs and preferences, but the mentor agent would also feel more or less like a person that the patient might have to adapt to and get to know as well. By having a personality, the mentor agent could share information about itself, and thus feel more like a person to the patient. This illustrates how the researchers involved in the project also negotiate what it means to be human and how to programme human-likeness into the mentor agent, through different meaning-making practices of what being human might mean and specifically how and why this could be attributed to the mentor agent in order to enhance the health of the patient.

Humanisation through friendliness

Friendliness, that is, making the technology that the patient encounters seem more or less like a friend, was another overall humanising practice identified among the researchers involved in the Like-a-Peer project. The researchers tried to design the mentor agent in such a way that the patient might think of the mentor agent as being friendly, or possibly even as a friend. Just like making the mentor agent feel more person-like was not a straightforward process, neither was making it feel friend-like. The researchers expressed different ideas of friendliness and how to make the mentor agent feel friendly. The empirical data are characterised by recurrent negotiations about how the mentor agent should be designed and how it should be adapted to the patients. In the following sections, I explore how the participants performed humanising meaning-making practices when trying to make the mentor agent more friend-like through the practices of being a friend or being friendly.

The participants tend to use friend and friendly synonymously when they discuss the mentor agent, where friendly can refer both to being friendly, such as having a friendly (nice) conversation, and to feeling like a friend. Most of the time, the participants refer to the mentor agent as being friendly rather than feeling like a friend. However, the researchers tend to move between these two different meanings of friendliness as being friendly or feeling like a friend by using different synonyms of the term friend such as pal, peer and companion. This illustrates how the researchers in their work try to make sense of what friendliness means, how friendliness can be programmed and what friendliness can contribute in terms of promoting the patient’s health.

Marie expresses the objective of the mentor agent being friendly in terms of how to make the system take the role of a ‘pal’ and specifically how to make the mentor agent feel less like a ‘tool’ and more like a pal for the patient to relate to. Friendliness is brought forward by the participants, especially the researchers, as an important health-enhancing practice: by building friendly or friend-like relationships to better adapt to the patient’s needs and preferences, by adapting the health-related advice to the specific patient and by getting the patient to change behaviours and engage in health-enhancing activities. However, the mentor agent and the patient do not necessarily have to be friends in order for the system to work and promote the health of the patient. The system will learn how and when to interact in a friendly manner with the patient based on the patient’s preferences, and whether the patient prefers the mentor agent to act friendly or as a friend.

In the following sections, I present two different practices of friendliness that I identified: compliant friendliness and persuasive friendliness. The former refers to the digital caregiver following the health-related needs of the patient, while the latter refers to situations when the digital caregiver might have to defy the preferences of the patient in order to promote the patient’s health.

Compliant friendliness

Friendliness was mainly expressed by the researchers as a question of adapting to and following the patient’s health-related needs and preferences. This practice of friendliness is defined as a compliant friendliness.

I was interested to find out if the researchers used a specific definition of friendship when developing the digital caregiver. One definition of friendship involved helping and supporting the patient based on the patient’s specific health-related wants and needs, as Marie explained:

[f]or the mentor agent friendship is to support Ann [a fictive patient], to prioritize her wishes. Yeah mainly this is what friendship means to the mentor agent: to help and support Ann in her daily living. To prioritize what she wants.

To be able to help and support the patient, the system needs to know when to provide the help in order to be friendly. The researcher Fredrik explains: “maybe, if the system provides help in the current moment, maybe in that moment it will be friendly, as you say”. According to Fredrik, for the system to be able to provide help in the right moment the system also needs to be there for the patient when the patient needs the system: “Imagine a friend that is in your home and there to help you. So that means that your friend will be there when you need them”. If the mentor agent provides help and support according to the patient’s needs and wishes in the right moment, by being there for the patient, the mentor agent performs friendliness and might possibly even be thought of as a friend.

The participants brought up the question of making the patient more inclined to interact with the system by making the system feel friendly. The mentor agent decides when and how to be friendly in order to improve its relationship with the patient and encourage the patient to interact with the system.

Marie describes the Like-a-Peer system as different from other existing health-enhancing systems due to the focus on making it friendly:

Typically, systems are treated like assistants, like in healthcare, ‘oh, don’t forget to take your medicine, this, that’, just like an assistant. But what my research focuses on is making it more like a friendly […] a software that you feel more comfortable having interactions with.

Here, being friendly is linked to the ability of making the patient feel more comfortable and inclined to interact with the system, where the Like-a-Peer system is also understood as more friendly than other systems that are more assistant-like. The aim of enhancing the health of the patient is here expressed as something more than just having the patient take their medicine. It is also about how the patient feels about communicating with the system, and about encouraging these patient-system interactions. I interpret the degree of friendliness of the mentor agent, which the researchers try to programme and negotiate about, as a humanising practice making the system more human-like than other systems. Here, the humanisation seems to revolve around the way the system communicates with the patient and how the compliant friendliness aims to make the mentor agent feel human-like to the patient.

Obedience is another important part of the compliant friendliness. For example, when I asked Fredrik about how the system decides to be friendly and what friendly means to the system he said “if they obey your preference, right? […] The system knows you, and they obey your preference, you will see ‘okay, I have [a] friendly relation with this guy’”. Thus, the system needs to obey the preferences of the patient in order to be considered friendly.

Friendliness was also expressed in terms of service, a question of providing friendliness as a service in accordance with obeying the patient’s preferences—a customised service. The system performs friendliness services such as helping and supporting the patient. In this way, the friendly system becomes a service-minded system, where the system is supposed to be friendly, or act like a friend, according to the patient’s needs and preferences. Thus, assessing the friendliness services of the mentor agent was expressed in terms of ‘quality of service’. For example, Fredrik said:

When services are good, and provided when you need them. […] That is quality of service. So the friendly relationship depends quite a lot on the quality of service that the Like-a-peer can provide. […] The relation[ship] […] will be nice when you have good quality of service.

The quality of service acts as the criterion for, and a way of measuring, how the friendliness of the mentor agent is performed. If the quality of service is high, the mentor agent feels friendly to the patient; if the system cannot perform certain health-related services that the patient prefers, the system might not be considered friendly and, hence, as Fredrik puts it, ‘useless’ to the patient.
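
Fredrik’s description of quality of service as services that are good and provided when you need them suggests something that could, in principle, be operationalised. The following is a minimal, purely hypothetical sketch of one such operationalisation: the share of the patient’s requested services that were actually delivered within an acceptable delay. The data structure, names and threshold are assumptions made for illustration, not measures used by the Like-a-Peer project.

```python
# Hypothetical illustration of a 'quality of service' criterion; none of these
# names or numbers come from the Like-a-Peer project.
from dataclasses import dataclass
from typing import List


@dataclass
class ServiceEvent:
    requested: bool       # the patient needed or asked for the service
    delivered: bool       # the system actually performed it
    delay_minutes: float  # how long after the need it was delivered


def quality_of_service(events: List[ServiceEvent], max_delay: float = 10.0) -> float:
    """Fraction of requested services delivered within an acceptable delay."""
    requested = [e for e in events if e.requested]
    if not requested:
        return 1.0
    timely = [e for e in requested if e.delivered and e.delay_minutes <= max_delay]
    return len(timely) / len(requested)


# A low score would, in Fredrik's terms, risk the system being experienced
# as unfriendly or even 'useless' by the patient.
events = [ServiceEvent(True, True, 2.0), ServiceEvent(True, False, 0.0)]
print(quality_of_service(events))  # 0.5
```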

Persuasive friendliness

Persuasive friendliness was another practice of friendliness expressed by the participants. In contrast to the practice of compliant friendliness, which was articulated as a customised service to the patient where the system was supposed to follow the patient’s health-related needs and preferences, persuasive friendliness was linked to the system’s capacity to try to persuade the patient, by following the health rules set up by the healthcare professionals (doctors, nurses, etc) for the sake of the patient’s health. However, if and when the system should follow the patient’s needs and preferences was not easy for the researchers to decide. During my interviews and observations, the participants tried to reason about situations where a persuasive friendliness would be both warranted and needed in order to promote the health of the patient. One example of such a situation was when I asked Marie about possible ethical challenges with making the mentor agent friendly, where she discussed ‘when to break the rules’, that is, when the mentor agent needs to not follow, or defy, the patient’s preferences:

Like the agent, let’s say if the agent’s having a dialogue with a person who has dementia then the person might say ‘No I don’t want to share information with the doctor’ at the same time the agent realizes that ‘this is something critical and I need to, it would be actually beneficial for this person if this information is communicated to the doctor’. So how would the human [patient] receive that you know ‘without my knowledge the doctor was informed’. I don‘t know how to handle that kind of ethical challenge.

Marie reasons about how to handle the ethical challenge of when the mentor agent should or should not follow the patient’s preferences, when the mentor agent believes that it is in the patient’s best interest not to follow them. This might be motivated based on health reasons: by letting the doctor know about the person’s health, and hence not following the preferences of the patient, the health of the patient is promoted. However, the persuasive friendliness is also motivated by the researchers based on ideas of what a human friend would do, where the researchers negotiate how the mentor agent should be friendly in terms of human friendship, as Marie, for example, discussed:

Like, if it’s a real human being who is our friend, even though we know that this person doesn’t want me to tell some information to their doctor, but since I know it’s needed, I may actually go and tell the doctor. That doesn’t mean that I’m not a good friend to this person. I want what’s best for this person.

The mentor agent not following the patient’s preferences is motivated and explained by Marie by comparing the mentor agent’s role, and challenge, with that of a human friend. Hence, a human friend and human friendship are centred around what a human believes is in the best interest of their friend, where not following the friend’s preferences might be a sign of real friendship. In this case, the mentor agent not following the patient’s preferences is motivated when the patient’s health is believed to be in danger, or put differently, when the patient does not comply with the mentor agent’s understanding of what is best for the patient’s health. In this way, the friendliness of the mentor agent is conditioned by health: as long as the patient follows the ‘health protocol’, the system does not have to disobey the patient’s preferences. If not, the system might have to, as Fredrik expressed it, ‘talk to the healthcare services […] and make a report’. Here, the mentor agent is described as a more autonomous actor in relation to the patient. The mentor agent might have to not follow the patient’s preferences and act based on its own ability to make autonomous decisions. As a humanising practice, the persuasive friendliness makes the mentor agent more human in the sense that it becomes more autonomous in relation to the patient and also has to be able to make more complicated decisions regarding the health of the patient.

The question of ‘following’ both combines and separates compliant friendliness from persuasive friendliness: in the practice of compliant friendliness the mentor agent is expected to follow the patient’s health-related needs and preferences, while in the practice of persuasive friendliness the mentor agent is expected to follow the health rules set by the healthcare professionals. Put differently, the mentor agent is always expected to follow the healthcare professionals’ health rules, but a potential conflict arises when the patient does not comply with these health rules, leaving the mentor agent in a possible dilemma between the two practices of friendliness.
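
To make this dilemma concrete, the following is a small, purely illustrative sketch of how the choice between compliant and persuasive friendliness could be expressed as decision logic. The conditions and names are assumptions made for the sake of illustration; they do not describe how the Like-a-Peer system is actually programmed.

```python
# Purely illustrative sketch of the dilemma described above: none of these
# rules or names are taken from the Like-a-Peer implementation.
from dataclasses import dataclass


@dataclass
class Situation:
    patient_wants_doctor_informed: bool  # the patient's stated preference
    health_rule_requires_report: bool    # rule set by the healthcare professionals
    risk_is_critical: bool               # e.g. a possibly dangerous observation


def choose_friendliness(s: Situation) -> str:
    """Return which practice of friendliness the mentor agent would enact."""
    if s.patient_wants_doctor_informed or not s.health_rule_requires_report:
        # Preference and professional rule point the same way:
        # compliant friendliness, simply follow the patient.
        return "compliant: follow the patient's preference"
    if s.risk_is_critical:
        # Preference and rule conflict and the risk is judged critical:
        # persuasive friendliness, try to persuade and possibly report.
        return "persuasive: try to persuade the patient, possibly inform the doctor"
    # Conflict, but no critical risk: keep motivating rather than defy the patient.
    return "persuasive: keep motivating the patient, do not report yet"


# The dementia example discussed by Marie: the patient does not want the doctor
# informed, the health rules would require it, and the situation is critical.
print(choose_friendliness(Situation(False, True, True)))
```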

Discursive conflict: a patient discourse of self-determination versus a healthcare professional discourse of authority and medical responsibility

I identify the conflict between the two practices of friendliness as a discursive conflict between a patient discourse of self-determination and a healthcare professional discourse of authority and medical responsibility (cf. Hallqvist 2019). Here, friendliness as a health-enhancing practice and as a humanising practice might conflict and create an ethical challenge for the digital caregiver, and for the researchers developing the mentor agent, in how to handle the patient’s self-determination and the healthcare professionals’ medical responsibility. In other words, the digital caregiver is expected to make health-related decisions based on both the health-related preferences of the patient and the health rules set by the healthcare professionals—two possibly conflicting sets of health-related preferences.

Even though there are differences between compliant friendliness and persuasive friendliness as health-enhancing and humanising practices, I found that the participants sometimes combined these two practices of friendliness in order to promote the patient’s health. This was the case, for example, when the user study participants had health-related interactions with the mentor agent. Based on one of the user study participants’ answers, the mentor agent recommended that she contact a doctor. However, the user study participant responded to the mentor agent that she had already seen a doctor and therefore did not want to see a doctor again. This was later discussed by Marie and her research assistant when I asked what their thoughts were about the user study participant not wanting to follow up on the mentor agent’s recommendation to see a doctor. The research assistant brought up the possibility of having the mentor agent insist on the user study participant contacting the doctor, and suggested that the mentor agent could persuade the user study participant in different ways, depending on her reason for not wanting to see the doctor. Here, persuasive friendliness and compliant friendliness seem to be interconnected by the research assistant, who both wanted to persuade the user study participant to contact a healthcare professional and at the same time tried to adapt to the reasons, and possibly preferences, of the patient. This interconnecting of—and balancing between—the two practices of friendliness was later brought up during my interview with the user study participants:

Sara: Yes, if it [the mentor agent] just keeps saying: ‘you should do this and this’, then maybe it will feel like nagging, making you feel like ‘no, I don’t want to do that’. Sort of.

Author: Even if it would be the most beneficial for you to do?

Sara: Yes, isn’t it always like that? Like with children and their mothers, for example: [the mother saying] ‘just go and clean your room!’, and [the child saying] ‘No!’ [Laughing]. [Mother saying] ‘Go on and do the dishes now!’. It’s like…I don’t know. It has to feel a bit more like a friendly relationship [between the mentor agent and the patient] than…it [the mentor agent] constantly telling you what to do [---] That you understand that it [the mentor agent], ehm, is only telling you what it believes is in your best interest.

Sara argues that the mentor agent should be persuasive because the system only wants what is good for the patient. However, Sara seems to struggle to combine persuasion with being friendly. The system needs to persuade the patient about what to do in a friendly enough manner to make the patient understand that the system wants what is best for the patient. In this way, persuasive friendliness is combined with compliant friendliness. At the same time, the friendly relationship, which Sara believes is the most effective way of communicating in order to promote the health of the patient, works as a humanising practice where being a (human) friend is expressed in terms of being both compliant and persuasive. In other words, being a friend means both being understanding of the friend’s needs and still being able to make autonomous decisions based on what one feels is in the best interest of the friend—even if this defies the expressed preference of the friend.

Concluding discussion

In this paper, I have explored how a digital caregiver, developed within the Swedish interdisciplinary research project Like-a-Peer, was humanised through the health-enhancing practices of personalisation and friendliness. How to develop a digital caregiver with the objective of working as both a person and a friend to the patient?

The participants used two different practices of personalisation: the choice of interfaces, where avatars were understood as more person-like than text, and attributing the digital caregiver with a personality or personal background where the patient could get to know (about) the digital caregiver (cf. Bickmore, Pfeifer, and Jack 2009; Darling 2017; Graber and Graber 2011).

The practices of friendliness were invoked both in terms of the digital caregiver behaving in a friendly manner, for example, having nice conversations with the patient, and in terms of feeling like a friend to the patient. Making the digital caregiver feel more like a friend than a tool was, together with the health-enhancing objective, another overall objective of the Like-a-Peer project. The participants used two different practices of friendliness: a compliant friendliness and a persuasive friendliness. In compliant friendliness, the digital caregiver was supposed to follow the health-related needs and preferences of the patient, for example, to help and support the patient. In persuasive friendliness, the digital caregiver was expected to try to persuade the patient to follow the individualised recommendations set by healthcare professionals (doctors, nurses, etc). Thus, the digital caregiver could follow the patient’s needs and preferences provided that these aligned with the health protocols set by healthcare professionals according to what they believed was in the best interest of the patient’s health. Hence, a possible conflict between the patient and the digital caregiver might arise due to different understandings of friendliness and health; between friendliness as a health-enhancing practice governed by the patient or by the healthcare professionals. This highlighted the entanglements of the patient’s needs and the doctors’ rules that the digital caregiver, patients, healthcare professionals and the researchers of the Like-a-Peer project needed to navigate through.

In comparison with compliant friendliness, persuasive friendliness made the digital caregiver more autonomous towards the patient’s needs and preferences, and thus more human-like. At the same time, depending on how friendliness and the idea of friends was perceived, persuasive friendliness might also make the digital caregiver seem less friendly and human-like to the patient by, for example, contacting a doctor without the patient’s consent.

A discursive struggle over the health-enhancing role of the mentor agent was identified: between a patient discourse of self-determination and a healthcare professional discourse of authority and medical responsibility (cf. Hallqvist 2019). Making the mentor agent feel friend-like to the patient was, on the one hand, a health-enhancing practice aimed at promoting the health of the patient. On the other hand, friendliness was also a humanising practice, possibly making the mentor agent feel like a human-like friend to the patient. These different understandings of friendliness might create a possible conflict of interest between the digital caregiver following the patient’s needs and preferences and following the healthcare professionals’ health rules.

Both personalisation and friendliness worked as health-enhancing and humanising meaning-making practices. Through health-enhancing meaning-making practices, such as encouraging the patient to take their medicine, eat food and engage with health-related topics, and through acting as a link between the patient and the healthcare professionals, the digital caregiver was made to feel more person-like and friend-like to the patient. Through humanising meaning-making practices, for example, looking like a human through avatars and creating a feeling of the patient interacting with someone human-like, the digital caregiver was made to feel more human-like.

A crucial aspect of the participants’ health-enhancing and humanising meaning-making practices was how they tried to make the patient feel that the digital caregiver was person-like and friend-like, as if they were interacting with someone rather than something. At the same time, the participants also tried to balance this feeling of a someone in order for the digital caregiver to not seem like a real person or a real friend (cf. Hallqvist 2019). This illustrated the importance of discursively negotiating the degree of humanisation of the digital caregiver in relation to what the participants believed would promote the health of the patient (cf. Darling 2017; Farzanfar 2006; Hallqvist 2019).

Approaching technology as discursive brings forward its active role in the construction of reality, working as ‘a body of knowledge that is inextricably intertwined with technological reality, social structures and everyday practices’ (Fisher 2010, 235). The Like-a-Peer digital caregiver can be seen as a technology that is being developed with the support of health-enhancing and humanising meaning-making practices, such as personalisation and friendliness. Therefore, the digital caregiver cannot be understood as neutral, but as a technology whose functionality, goals and meaning are negotiated by the participants. Thus, how the digital caregiver is understood is affected by notions of health and of being a person, a human and a friend. Digital health technologies such as the Like-a-Peer digital caregiver need to be understood as integrated in the sociocultural context in which they are developed and used (cf. Koch 2017; Lupton 2014).

I argue that the digital caregiver challenges notions of healthcare professionalism through its ability to undertake certain tasks usually performed by healthcare professionals, and by becoming a part of both the healthcare professionals’ everyday work environments and the patients’ home environments (cf. Hallqvist 2019; Teo 2020). In this sense, the digital caregiver becomes professional-like. The digital caregiver was expected to handle and make decisions based on the patient’s health-related preferences and the healthcare professionals’ ‘health protocols’, possibly resulting in a conflict for the digital caregiver between the patient and the healthcare professionals regarding what the most health-enhancing decision was. Even though the digital caregiver was supposed to prioritise the healthcare professionals’ health protocols over the patient’s health-related preferences, the possible conflict between the patient and the healthcare professionals that the digital caregiver needed to handle illustrated a (potential) discursive conflict in healthcare between a patient discourse of self-determination and a healthcare professional discourse of authority and medical responsibility.

This is in line with the turn in public healthcare systems towards personalised healthcare, where healthcare professionals are expected to offer healthcare tailored to the specific patient (Gutin 2019; Lindberg and Carlsson 2018; Scales et al. 2017), while it is still the healthcare professionals who have the knowledge of and responsibility for what the best medicine or care for the patient is. This highlights a potential ethical conflict within personalised healthcare, where the patient’s interests and the healthcare professionals’ knowledge and authority become a discursive struggle over health and what enhancing health entails. The personalisation and friendliness of digital caregivers can serve as a way of providing personalised healthcare, while at the same time they may create a risk of patients believing that the digital caregiver is only supposed to follow the patient’s health-related preferences; that the digital caregiver is a compliant friend. This brings forward the importance of considering both the degree of professional-likeness and human-likeness with which digital health technologies should be programmed, and how the professional-likeness and human-likeness of digital health technologies may be perceived by patients and healthcare professionals (cf. Hallqvist 2019). The health-enhancing practices and human-likeness of digital health technologies, such as the digital caregiver developed by the Like-a-Peer project, might also affect how notions of being human are understood—when a digital caregiver is made into a someone rather than a something.

Ethics statements

Patient consent for publication

Ethics approval

The paper is part of a PhD project and has been ethically approved by an ethical board in Sweden (ethical approval: 2015/98-31Ö).

Notes

1. Interview with Marie, a researcher in the Like-a-Peer project.

2. In this paper, personalised healthcare includes both personalised medicine and person-centred care. For further discussion on the similarities and differences, see El-Alti, Sandman, and Munthe (2019).

3. This is sometimes referred to as anthropomorphisation. In this text, humanisation and anthropomorphisation are used as synonyms.

4. The project’s name has been anonymised while still keeping in line with the core of the project’s name.

5. A common definition of artificial intelligence (AI) is that the main task is to create an artificial human intelligence that works better the more human it behaves (Russell and Norvig 2014). However, some AI researchers argue that the understanding of human intelligence is too narrow when it comes to the development of AI and that AI systems today can only exhibit human behaviour in limited areas (Dignum 2019).

6. Participants refer to both the researchers and the user study participants of the Like-a-Peer project. The term participant is used instead of the term informant to both highlight that the participants take part in an interaction and knowledge production with the researcher (cf. Lundstedt 2009), and to avoid reducing the participant to someone who is only sharing information with the researcher (cf. Pettersson 2007; Sjöstedt Landén 2012). I will also refer to the researchers and the user study participants separately when needed.

Bibliography

Footnotes

  • Contributors The author is the sole contributor to the paper.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.