Wednesday 6 November 2013

The ethical implications of new diagnostic technologies: how personal should we get?

Flickr/samsungtomorrow: New smartphone apps track everything from physical activity to calorie intake
BY ANNIE WILKINSON, POSTDOCTORAL RESEARCHER, INSTITUTE OF DEVELOPMENT STUDIES

[EDITOR'S NOTE: THIS BLOG IS PART OF THE FHS BLOG SERIES, EXPLORING THE IMPLICATIONS OF NEW TECHNOLOGIES FOR THE SELF-MANAGEMENT OF ILLNESS]

Medical historians will tell you that diagnosis has always been a moral concern and that diagnostic categories are as socially constructed as they are medically objective. Western medicine has a long tradition of applying disease labels to people and behaviours that deviate from supposed societal norms. Famous examples are homosexuality and female hysteria; more recent ones include alcoholism, obesity and ADHD. Technologies of varying sophistication have been employed to prove the existence of these ‘diseases’.

Diagnostic categories will always be a site of contestation, often linked as much to contemporary standards of morality and normality as to medicine’s capacity to detect disease. But with the global diagnostics industry worth $49.2 billion in 2012, and with emerging non-communicable disease (NCD) markets in the BRIC countries behind much of that growth, it is timely to ask whether developments in diagnostics are bringing new social, moral or legal concerns to the fore. And how might these play out in the emerging markets?

Genetic testing is an area of innovation that has provoked considerable social and ethical debate and public attention. Angelina Jolie’s double mastectomy made front pages while the US Supreme Court considered whether a biotech company could patent genes associated with breast cancer. The news that Google has invested significantly in 23andMe, a company offering genetic profiling for US$99 with the longer-term aim of building vast databases of human genetic information – of potentially enormous research and economic value – will ensure such debates continue. In this field of ‘techno-ethics’ the issues are startlingly dramatic, raising questions about how we define disease, how we deal with risk, and how we value life and decide who can lay claim to it.

Yet another sea change is underway that has drawn much less moral scrutiny: as Henry Lucas’s earlier blog detailed, devices are now available to identify, define and monitor both communicable and non-communicable diseases at a patient’s side, in an instant. Rapid Diagnostic Tests (RDTs) can already identify HIV or malaria in remote rural clinics, and there is considerable excitement about the potential of similar technology to improve developing country health systems. A recent BBC Horizon programme profiled a few of the hundreds of apps and gadgets produced each week to monitor health in daily life, from blood pressure readers and pedometers to more sophisticated apps that track your social interaction to predict your mental state or read your temperature from a video image.

On the surface, developments in this field of medicine appear uncontroversial: the narrative goes that in both developed and developing country settings there has been significant misdiagnosis of disease because appropriate technology was not available. Now there are accurate tests that make medicine more scientific and specific. This is the era of ‘precision’ or ‘personalised’ medicine. In this narrative, more highly personalised information is undoubtedly a good thing. The BBC’s Horizon programme trumpeted advances in diagnostic and monitoring equipment as the ‘new medicine’. It was suggested that this new medicine could dramatically reduce, if not prevent, many NCDs by putting the power of knowledge into patients’ hands and reaping the reward of their subsequent behaviour change. New diagnostic and monitoring technologies are painted as the next silver bullet, with the potential to have as dramatic an impact on global health as antibiotics and vaccines.

As diagnostic and monitoring technologies pervade more and more of our everyday lives, some people are beginning to question the ethics of this trend towards ‘ubiquitous medicine’, albeit primarily in Western contexts. The increasing collection and storage of health information by non-traditional actors, such as ICT providers, invites a clash of ethics cultures: the established norms of information and media ethics differ from those of traditional medical ethics, and the two need to be aligned.

In many low- and middle-income countries, where neither well-stocked rural clinics nor smartphones are widely accessible, these debates can seem of dubious relevance. However, with an increasingly engaged biotech industry, and with ICT giants like Google and Microsoft working to improve internet access in remote rural settings, they are sure to arrive in some form relatively soon. As such, it is worth considering what visions of health and of improvements to health systems they rest on, and what the consequences of these visions might be.

Of concern is the assumption that information about disease status is empowering, and that knowing and being able to monitor one’s health status equates to having the agency to change behaviour to avoid or control chronic conditions. Underlying these ideas are two very Western notions: that health is the responsibility of the individual and that the future can be controlled. Both have serious implications.

As Slim Slama’s blog in this series has pointed out, the underlying causes of NCDs and many chronic conditions (perhaps misleadingly labelled ‘lifestyle’ diseases) are broader socio-economic trends such as urbanisation, rising inequality and global systems of food production and marketing. Without changing these underlying conditions, will more information about health status actually make us healthier? Or is it a plaster patched over the surface of the problem, leaving the more difficult societal ills unaddressed? Putting the power to diagnose and monitor in people’s hands will undoubtedly help some, but will it be equally empowering for all? There must be a way of ensuring that the increasing specificity of diagnosis does not individualise health so much that its social determinants are lost. As some have suggested, perhaps we should think seriously about broadening the concept of diagnosis to include the social determinants of disease.

At Brocher, the problem of applying conventional Western medical ethics to chronic disease or to developing country settings was noted (this will be explored in more detail by Paula Boddington in the next blog). Many developing countries have plural health systems, and, although Western biomedicine is increasingly common, health has not always been narrowly attributed to biomedical mechanisms or to the individual. Keeping in mind broader concepts of disease and diagnosis that recognise the social determinants of health may be a more effective and sustainable way of addressing the ticking time bomb of NCDs, as it would build on existing local approaches to health. To be sure, more precise and personalised information can improve health, but the focus on individuals should not overshadow the impact and obligations of wider society.