The practice of clinical medicine as an art and as a science
  John Saunders
  Nevill Hall Hospital, Abergavenny

    Abstract

    The practice of modern medicine is the application of science, the ideal of which has the objective of value-neutral truth. The reality is different: practice varies widely between and within national medical communities. Neither evidence from randomised controlled trials nor observational methods can dictate action in particular circumstances. Their conclusions are applied by value judgments that may be impossible to specify in “focal particulars”. Herein lies the art which is integral to the practice of medicine as applied science.

    • Art of medicine
    • medical science
    • empiricism
    • tacit knowledge
    • evidence-based medicine


    “Medicine in industrialised countries is scientific medicine”, write Glymour and Stalker.1 The claim tacitly made by US or European doctors and tacitly relied on by their patients is that their palliatives and procedures have been shown by science to be effective. Although doctors' medical practice is not itself science, it is based on science and on training that is supposed to teach doctors to apply scientific knowledge to people in a rational way. This distinction between understanding nature and power over nature, between pure and applied science, was first made by Francis Bacon in his Novum Organum of 1620.2 Medicine as we practise it today is applied science. Thomas Huxley pointed out in his address at the opening of Mason's College in Birmingham in 1880 that applied science is nothing but the application of pure science to particular classes of problems.3 No one can safely make these deductions unless he or she has a firm grasp of the principles. Yet the idea of the practice of clinical medicine as an art persists. What is this? Does it amount to anything more than romantic rhetoric—a nod in the direction of humanitarianism? Is this what the Royal College of Physicians was referring to as late as 1975 when a guide stated that its membership examination “remains partly a test of culture, although knowledge of Latin, Greek, French and German is no longer required”?4

    Like many large textbooks, Cecil's Textbook of Medicine begins with a discourse on medicine as an art.5 Its focus is the patient—defined as a fellow human seeking help because of a problem relating to his or her health. From this emerges the comment that for medicine as an art, its chief and characteristic instrument must be human faculty. What aspects of the faculty matter? We are offered the ability to listen, to empathise, to inform, to maintain solidarity: for the doctor, in fact, to be part of the treatment. No one would want to dispute the desirability of these properties, but I think they describe, firstly, moral dimensions to care—we listen because of respect for persons, and so on; and, secondly, skills. Interpersonal skills may be frequently lacking, just as technical skills may be. But they can, at least in principle, be observed, taught and tested, and their value assessed, just like any practical technical skill. And I think we could probably say much the same about the third part of the mantra of medical teachers, attitudes. While these may be more dependent on our upbringing and personalities, attitudes can be changed with education or appropriate legislation, can be observed and scored, and can be evaluated in their contribution to patient care or diagnostic technique, at least in principle, even if these things are crudely done. Part of the art of clinical medicine may lie in these areas, but not exclusively so: the art is not just practical performance. I want to suggest that the art and science of medicine are inseparable, part of a common culture. Knowing is an art; science requires personal participation in knowledge.

    Intellectual problems have an impersonal, objective character in that they can be conceived of as existing relatively independently of the particular thought, experiences, aims and actions of individual people. Without such an impersonal, objective character, the practice of medicine would be impossible. “Medical practice depends on generalisations that can be reliably applied and scientifically demonstrated. Without understanding people as objects in this way, there can be no such thing as medical science.”1 In the accumulation of such knowledge, doctors—like engineers—share experiences individually through meetings and publications. Within the community of its discipline, this inter-subjectivity establishes the objectivity of science: it is knowledge that can be publicly tested. We can sum up this approach as a doctrine of standard empiricism in which the specific aim of inquiry is to produce objective knowledge and truth—and to provide explanations and understanding. Science as pure science is knowledge of our natural environment for its own sake, or rather, for understanding. Science as applied science or technology is the exercise of a working control over it. Such is medicine. In its methodology, scientific thinking should, must, be insulated from all kinds of psychological, sociological, economic, political, moral and ideological factors which tend to influence thought in life and society. Without those proscriptions, objective knowledge of truth will degenerate into prejudice and ideology.

    Value-neutral truth

    Although the aim of standard empiricism is value-neutral truth, that does not imply that science is insulated from outside factors. It merely states that such factors are not integral to it—social context, for example. Doctors (and other health carers) are, of course, enmeshed in the obligations and responsibilities of their profession. Such responsibilities may extend from the individual patient, to the health care system, or to society as a whole. Their role as technologically trained practitioners, according to the canons of standard empiricism, does not exclude them adopting other roles—as a consoler or healer, for example. There is no logical bar to combining several roles; nor does standard empiricism form any logical bar to caring, empathy, compassion, “moderated love” or, simply, personal medicine. Nevertheless I think we might consider what happens in practice.

    In an entertaining, but enlightening, editorial, Anthony Clare points out that many doctors like to bask in the reflected glory of medicine as a scientific undertaking that transcends national barriers.6 The international pharmaceutical industry, the vast number of international academic meetings, the ever increasing number of international specialist societies, even the World Health Organisation itself are all evidence of this. Nevertheless much clinical practice is still heavily influenced by national culture and character. Clare gives examples. Take the French disease, “spasmophilia”, a condition that increased sevenfold in the 1970s and, he tells us, is diagnosed on the basis of an abnormal Chvostek sign and oddities on the electromyogram. In the USA, if it exists at all, it is panic disorder. In Britain, it doesn't exist—so presumably sufferers in France might be cured by a trip on Eurostar. The Germans consume six times as many heart drugs as their British counterparts, with cardiac glycosides being the second most prescribed group of drugs after non-narcotic analgesics. One electrocardiogram (ECG) survey of supposedly healthy citizens of Hamburg showed a rate of abnormalities of 40%. Germans have 85 drugs listed for treatment of low blood pressure and annual consultation rates of 163 per million. Hardly anyone in Britain gets treated for low blood pressure. Doctors in the USA think treating low blood pressure amounts to malpractice.

    Fashion is another powerful influence.7 There are treatments of fashion, investigations of fashion, diseases of fashion, operations of fashion. Hypoglycaemia comes and goes; chronic mononucleosis is probably on the way out, and so is ME, even if chronic fatigue syndrome survives. Mitral leaflet prolapse syndrome caught our fancy in the 1970s when everyone who had an echocardiogram had it; then we've had temporomandibular joint syndrome, post-traumatic stress syndromes, osteoporosis, fibromyositis, candidiasis hypersensitivity syndrome, total allergy syndrome, Gulf War syndrome, repetitive strain injury—and so they go on, a disease of fashion almost every month. One could make similar comments on treatment or investigations. The point is not simply whether they “exist”, though this is controversial in many of the examples given: it is the importance that they are accorded in a supposedly objective applied science. Is this evaluation the art of clinical practice?

    Bad science

    Now this, one may object, is all rather unfair. Surely, it doesn't demonstrate any admirable art in medicine: merely bad science or inadequate science or no science. It is science based on poor evidence, insufficient evidence or dogmas without evidence. And its practice is bad medicine; bad medicine pressured by the degree to which disease is the sustenance of TV dramas, magazines, commercial ads, the food industry, the publishing industry, sport and even the weather forecast.8 Isn't it another example of the “fact” that 85% of medical procedures are unproven—a figure, or something like it, that is widely quoted, poorly defined, based on abysmal evidence and almost certainly wrong—but very fashionable in certain circles, of course. Isn't what we need more and better clinical trials—the gold standard on which to base practice?

    The controlled, randomised clinical trial has been a powerful instrument in furthering medical knowledge and, of course, a doctor should know its results, but it is often not enough in recommending treatment for this patient. The double-blind, randomised, controlled trial (RCT) is an experiment: but experiment may be unnecessary, inappropriate, impossible or inadequate.9 A dramatic intervention such as penicillin in meningococcal meningitis does not need an RCT to demonstrate its efficacy. An RCT would be inappropriate if the effect of random allocation reduces the effectiveness of the intervention (when active participation of the subject is required, which, in turn, depends on the subject's beliefs and preferences). For example, in a trial of psychotherapy both clinicians and patients may have a preference, despite agreeing to random allocation. As a result, the lack of any subsequent difference in outcome between the comparison groups may underestimate the benefits of the intervention. The RCT may also be inappropriate if the event is a rare one (the number of subjects will not be sufficient) or likely to take place far into the future (it can't be continued long enough). For example, in the UK Atomic Energy Authority mortality study, 328,000 person-years of experience among radiation workers were examined.10 This was still many times too small and yielded unsatisfactorily wide confidence intervals. In interpreting low-order risks, study situations are usually complex. In a multifactorial disease, a factor which increases the risk by less than half will almost certainly be undetectable. An RCT may be impossible if key people refuse participation, or if there are ethical, legal or political obstacles. Finally it may be inadequate if the trial involves atypical investigators or patient groups, or if patients in the RCT receive better care than they would otherwise receive, regardless of which arm they are in.
    One answer to the failings of the RCT is a plea for “observational methods” (cohort and case control studies). Black argues9 that the RCT provides information on the value of an intervention shorn of all context, such as patients' beliefs and wishes and clinicians' attitudes and beliefs, despite the fact that such aspects may be crucial to determining the success of the intervention. By contrast, observational methods maintain the integrity of the context in which care is provided. He concludes:

    “There is no such thing as a perfect method; each method has its strengths and weaknesses. The two methods should be seen as complementary”.

    How then does one balance the information from two different approaches? If they are complementary, what rules exist to decide how much one looks to one method rather than the other? The answer is surely none. Good doctors use their personal judgment to affirm what they believe to be true in a particular situation. Their knowledge is not purely subjective, for they cannot believe just anything; and their judgment is made responsibly and with universal intent, ie, they take it that anyone in the same position should concur. It is practical wisdom. Medical practice demands such judgments on a daily basis. The good doctor is able to reflect on diverse evidence and to apply it in a particular context. No computer could replace him, for the judgment cannot be reached by logic alone. Here medical practice as art and science merge.

    Rules of thumb

    At least part of the art of medicine lies in those non-scientific rules of thumb that guide decisions in practice, that enable the good doctor to affirm what he believes to be true in a particular situation. These cannot be and aren't science. McDonald argues that these should be discussed, criticised, refined and then taught.11 Ockham's razor tells us to go for the simplest unifying hypothesis in diagnosing the patient's disease; Sutton's law (based on the bank robber who told the judge he robbed banks because that's where the money is) tells us to go for the commonest explanation. Perhaps we could subsume those two principles into the structures of science. Certainly simplicity and elegance have long been recognised as important features of science.12 But by what rules do we decide to extrapolate—for example, it works in the old or the male, so we'll use it in the young or the female? Or it works with one particular drug, so we argue it will work with another drug that has the same effect. For example, we assume that any drug that lowers blood pressure will offer benefits to the patient. Or we assume that only a drug of the same class will have the same benefits; we extrapolate from evidence about one statin drug or one angiotensin-converting enzyme inhibitor to all others in the same class. Or we won't extrapolate in certain other cases. Instead we use the “show me” principle. Practolol was shown to reduce deaths after acute myocardial infarction,13 but other beta blockers were not assumed to be effective until huge trials had been mounted.14 Or we treat numbers: lowering cholesterol, blood glucose or blood pressure at certain extremes is shown by science to benefit patients; noticing this, we assume that “more is better” and then we lower the threshold. Or we assume we know more than we do. Because nothing grew on throat swabs, we assumed sore throats were viral and avoided antibiotics. We now know from DNA sequencing data that many identifiable bacteria were not being isolated.15 Or we treat through plausible hypotheses: in the 1960s, nitrates weren't used to treat angina because of the supposedly well-known phenomenon of coronary steal. Or we believe our tests are more discriminating than they are, for example the claim that no pulmonary embolism could occur if the arterial oxygen tension was over 80 mm Hg.16 Or we have expectations that are too great. Pre-marketing safety data of drugs reveal with confidence only those acute toxicities occurring more often than 1 in 100 administrations; if the frequency is less than 1 in 1000, it will take six months to find out. Chloramphenicol was removed as a front-line antibiotic because of one case of aplastic anaemia in every 20,000.17 Or our expectations are too low: flu immunisation, around for decades, really does work; diabetic eye examination is highly worthwhile. Or our definition of disease is too narrow: thus we have angina without pain,18 toxic shock without shock,19 asthma without wheeze.20 Or we overinvestigate and undertreat, because all treatment becomes subservient to diagnosis. Or we operate on the asymptomatic because we believe it will be worse later—forgetting that it may not be, or that a technical breakthrough may occur (laparoscopic surgery for gallstones, for example). None of these processes of decision, described by McDonald, is logical or scientific in the usual sense of that word, nor are any based on evidence. Some could be, but for many this is impossible even in principle.

    Uncertainty

    Scientific medicine is based on evidence; but uncertainty grows when multiple technologies are combined into clinical strategies.21 Two strategies can be tried in two different sequences; five strategies in 120. Does anyone know definitively how to treat diabetes or ischaemic heart disease? There is no logical or scientific way of deciding between minimalism and an intervention based on inference and experience. Fortunately paralytic indecisiveness is rare. Indeed, we become so easily confident in our educated guesswork that it is easy to confuse personal opinion with evidence, or personal ignorance with genuine scientific uncertainty. We easily forget that the consensus of the guideline writers is not itself “evidence” but, at best, a summary of practical wisdom. Clinical reasoning, with its reliance on experience, extrapolation and the critical application of the other ad hoc rules described, must be applied to traverse the grey zones of practice. As Naylor says,21 the prudent application of evaluative sciences will affirm rather than obviate the need for the art of medicine.
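    The arithmetic behind that remark is factorial growth: n strategies can be sequenced in n! distinct orders, so the space of possible clinical strategies expands far faster than trials could ever evaluate it. A minimal sketch in Python (the function name `orderings` is my own illustration, not from the text):

```python
import math
from itertools import permutations

def orderings(n: int) -> int:
    """Number of distinct sequences in which n strategies can be tried: n factorial."""
    return math.factorial(n)

# Two strategies can be sequenced in 2 ways; five in 120.
print(orderings(2))   # 2
print(orderings(5))   # 120

# Enumerating the two orderings of hypothetical strategies A and B:
print(sorted(permutations(["A", "B"])))  # [('A', 'B'), ('B', 'A')]
```

    Even five strategies yield 120 candidate sequences, already more than any programme of trials could compare head to head, which is the point about uncertainty outrunning evidence.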

    Eliciting patient preferences is especially important when there is doubt about the best course of action. This is difficult with long term treatments when a patient's preferences may change as time passes, but decisions are needed now. A reflective practitioner treating hypertension or diabetes can hardly fail to be aware of this in daily practice. In conditions such as these, the trade-offs between probable short term harms or inconveniences and possible long term benefits are individual, difficult to quantify, full of uncertainty, and likely to change with life's changing circumstances.

    However much information is provided, the doctor decides its nature, and by that advice almost always influences, and often determines, the outcome. As Theodore Fox said, “the patient may be safer with a physician who is naturally wise than with one who is artificially learned”.22 At its best, the apprenticeship system of teaching at the bedside has traditionally given the British graduate at least some insights into these arts—something of quality that is both important and impossible to measure, like so many really important things. Polanyi pointed out in 1958 that “while the articulate contents of science are successfully taught all over the world in hundreds of new universities, the unspecifiable art of scientific research has not yet penetrated to many of these”.23 A master is followed because he is trusted, even when you cannot analyse and account in detail for this. The apprentice picks up the rules of the art, including those which are not explicitly known to the master himself. All the efforts of microscopy and chemistry, mathematics and electronics have failed to reproduce a single violin of the kind which the half-literate Stradivarius turned out routinely more than 200 years ago. “Denigration of value judgment is one of the devices by which the scientific establishment maintains its misconceptions.”24 Judgment and its bedfellow wisdom are concerned with adding weight to the imponderable, with adding values to the unmeasurable or unmeasured.

    In a recent paper, Epstein offers this example.25 A 42-year-old mother of two small girls, despondent over job difficulties, was contemplating genetic screening for breast cancer as she approached the age at which her mother was diagnosed as having the same disease. Aside from the difficulties in taking an evidence-based approach to assigning quantitative risks and benefits to the genetic screening procedure (How much should I trust the available information?) and uncertainty about the effectiveness of medical or surgical interventions (Would knowing the results make a difference, and, if so, to whom?), the case raised important relationship-centred questions about values (What risks are worth taking?), the patient-doctor relationship (What approach would be most helpful to the patient?), pragmatics (Is the geneticist competent and respectful?), and capacity (To what extent is the patient's desire for testing biased by her fears, depression, or incomplete understanding of the illness and test?). In this situation, book knowledge and clinical experience alone are insufficient. Rather there is reliance on personal knowledge of the patient (Is she responding to this situation in a way concordant with her previous actions and values?) and the doctor (What values and biases affect the way I frame this situation for myself and for the patient?) to help us arrive at a mutual decision. The reflective activities applied equally to the technical aspects of medicine (How do I know I can trust the interpretations of medical tests?) and the affective domain (How well can I tolerate uncertainty and risk?). An attitude of critical curiosity, openness and connection allowed the patient and doctor to defer the decision and reconsider testing once the immediate crises had passed.

    It has been said that “we don't see things as they are, we see things as we are”.26 Evidence-based medicine and the doctrines of standard empiricism offer a structure for analysing medical decision making, but are not sufficient to describe the more tacit processes of expert clinical judgment. All data, regardless of their completeness or accuracy, are interpreted by the clinician to make sense of them and apply them to clinical practice. Experts take into account messy details, such as context, cost, convenience, and the values of the patient. “Doctor factors” such as emotions, bias, prejudice, risk-aversion, tolerance of uncertainty, and personal knowledge of the patient also influence clinical judgment. The practice of clinical medicine with its daily judgments is both science and art. It is impossible to make explicit all aspects of professional competence. Evidence-based decision models may be very powerful, but are like computer-generated symphonies in the style of Mozart—correct but lifeless. The art of caring for patients, then, should flourish not merely in the theoretical or abstract grey zones where scientific evidence is incomplete or conflicting, but also in the recognition that what is black and white in the abstract often becomes grey in practice, as clinicians seek to meet their patients' needs. In the practice of clinical medicine, the art is not merely part of the “medical humanities” but is integral to medicine as an applied science.

    Footnotes

    • John Saunders, MA, MD, FRCP, is Consultant Physician at Nevill Hall Hospital, Abergavenny.