All 2 entries tagged Artificial


July 04, 2018

Social and ethical implications of the use of AI in health care

Writing about web page https://warwick.ac.uk/fac/med/research/hscience/sssh/research/lyncs/

In this blog post I consider some of the social and ethical implications of using AI for algorithm-driven clinical decision-making.

Developments in AI for health seem to challenge the role of, and even the need for, health practitioners.

In the medium term, at least in the UK, society seems to require AI and clinicians to work together. There is a growing consensus that for a decision with considerable implications for the individual, such as a health decision, the individual should be informed that the decision is being made using AI and should have the right to request that it is not made by AI [1,2]. If a decision is made using AI, they should have the right to ask for the decision to be reconsidered [1,2]. It is not yet clear where responsibility lies for the consequences of advice given directly to the patient based on AI [3], and there is a demand for AI algorithms to be explainable [4].

If part of a clinician's work is undertaken by AI, there is a question of how clinicians maintain their expertise [5] in order to take responsibility for decisions and to provide a second opinion when requested. Further, clinicians need to understand AI in order to be able to explain its decision-making to their patients.

Low- and middle-income countries (LMICs) do not have the health practitioner workforce capacity that high-income countries enjoy, but most have good digital infrastructure, often far better than their water or roads. There is potential for AI to be rapidly taken up in LMICs to keep healthcare costs down whilst providing improved services. In this scenario, the lack of human capacity in the system might mean the decision-making process is not explained to patients and no second opinion is available.

1. House of Lords Select Committee on Artificial Intelligence. AI in the UK: ready, willing and able?, 2018.

2. The Alan Turing Institute. The GDPR and Beyond: Elizabeth Denham, UK Information Commissioner, 2018.

3. Floridi L, Taddeo M. What is data ethics? The Royal Society, 2016.

4. Information Commissioner’s Office. Information Rights Strategic Plan 2017-2021.

5. Yang G-Z, Bellingham J, Dupont PE, et al. The grand challenges of Science Robotics. Science Robotics 2018;3(14):eaar7650.


April 10, 2018

Digital communication between patient and clinical team forms the basis for Artificial Intelligence

Writing about web page https://warwick.ac.uk/fac/med/research/hscience/sssh/research/lyncs/

People with a health condition have long stretches of time between encounters with their healthcare team when they get on with living with their condition. They experience changes in their condition, treatment effects and side effects mostly on their own, without engagement with their healthcare team, and have to interpret these experiences themselves. One of the most memorable comments a patient made to me as a young doctor was:

Going home with a prescription for a new treatment can feel very lonely.

Accessible information and explanation make a difference, but even where we have good evidence about a condition and its treatment, we still only know what usually happens most of the time. We cannot predict exactly what will happen to one particular patient [1].

In our study of young people living with long-term conditions (the LYNC study) [2], it was at times of change - new treatments, worsening symptoms or new life experiences such as going to university - that young people most appreciated and benefited from digital access to their healthcare team. They were able to text, email or phone about what they were experiencing and receive interpretation, reassurance and guidance.

Communication via digital channels is easily recorded and stored. By enabling patients to contact their clinical team digitally between routine appointments, we can build datasets of their concerns and the team's responses. These can be used as the training dataset for a chatbot based on artificial intelligence (AI), working alongside the clinical team. Initially the chatbot might have sufficient data to learn how to respond to common questions, for example, how to take a medication. As the dataset grows, the AI algorithm can learn how to respond to more complex clinical questions. By linking with patient records, responses can be tailored to individual patients. The AI chatbot can also learn from what happens to patients, and it does not forget. In theory an AI chatbot could become as good as, or even better than, a clinician, although uncertainty would still remain.
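To make the idea concrete, here is a minimal sketch of how stored patient-clinician exchanges might seed such a system. It is an illustration only, not the LYNC study's design: the messages are invented, and I use a simple retrieval-based responder rather than a learned model. A real system would need consent, de-identification and clinical governance before any messages were reused.

```python
# A minimal sketch (assumptions, not the LYNC study's design) of reusing
# stored patient-clinician exchanges as a simple retrieval-based responder.
# All messages below are invented for illustration.
from dataclasses import dataclass

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class Exchange:
    patient_message: str
    team_response: str


# Hypothetical dataset built from past digital contacts.
history = [
    Exchange("Should I take the new tablet with food?",
             "Yes, take it with or just after a meal to reduce nausea."),
    Exchange("I missed a dose yesterday, what should I do?",
             "Take the next dose at the usual time; do not take a double dose."),
    Exchange("My rash has got worse since starting the new medication.",
             "Please send us a photo and book a review this week."),
]

# Represent past patient questions as TF-IDF vectors.
vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(
    [e.patient_message for e in history]
)


def respond(new_message: str, min_similarity: float = 0.3) -> str:
    """Return the stored response to the most similar past question,
    escalating to the clinical team when nothing matches well."""
    scores = cosine_similarity(
        vectorizer.transform([new_message]), question_matrix
    )[0]
    best = scores.argmax()
    if scores[best] < min_similarity:
        return "I am not sure; I am forwarding your message to the team."
    return history[best].team_response


print(respond("Can I take my tablet with breakfast?"))
```

As the logged dataset grows, the retrieval step could be replaced by a learned model, and linkage to the patient record could add per-patient context; the escalation threshold is what keeps the clinical team in the loop for anything unfamiliar.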

Clinician and chatbot working together could make quite a team, countering each other's cognitive biases.

This type of development can begin right now, but there are ethical and social implications that need our attention.

1. Gorovitz S, MacIntyre A. Toward a Theory of Medical Fallibility. The Journal of Medicine and Philosophy. 1976;1(1):51-71.

2. Griffiths F, Bryce C, Cave J, et al. Timely digital patient-clinician communication in specialist clinical services for young people: a mixed-methods study (The LYNC Study). Journal of Medical Internet Research. 2017;19(4):e102.

