
ADDRESS BY THE HONOURABLE SIR GUY GREEN AC KBE CVO

ADMINISTRATOR OF THE COMMONWEALTH OF AUSTRALIA

ON THE OCCASION OF

ARTHUR E. MILLS MEMORIAL ORATION: PROFESSIONALISM AND THE LIMITATIONS OF INFORMATION TECHNOLOGY

HOBART, 26 MAY 2003

I feel greatly honoured to have been asked once again to deliver the Arthur E. Mills Memorial Oration.

One of the defining characteristics of our age has been the enormous increase in the extent to which the creation and application of knowledge and the making of decisions have been systematised and mechanised. Two of the most prominent techniques for doing so are modelling and the use of algorithms. And of course over the last 30 years or so our capacity to employ those techniques has been greatly enhanced by the advent of the computer.

Let me start by giving a brief background sketch of those two techniques.

I am not confining the word algorithm to its strict sense of a set of rules for performing mathematical operations - I am using it in its broader sense of any fixed, predetermined, step-by-step procedure for reaching a conclusion or making a decision.
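To make that broader sense concrete, here is a minimal sketch in Python of such a procedure: a fixed, predetermined sequence of steps which yields a decision. The rule and its thresholds are hypothetical, invented purely for illustration and not drawn from any clinical guideline.

```python
# A minimal sketch of an "algorithm" in the broad sense used here: a fixed,
# predetermined, step-by-step procedure for reaching a decision.
# The rule and thresholds below are hypothetical, chosen purely to illustrate.

def triage(temperature_c: float, heart_rate_bpm: int) -> str:
    """Apply fixed rules in a fixed order and return a decision."""
    # Step 1: check for fever.
    if temperature_c >= 38.0:
        return "refer for assessment"
    # Step 2: check for an elevated heart rate.
    if heart_rate_bpm > 100:
        return "monitor"
    # Step 3: otherwise, take no action.
    return "no action"

print(triage(38.5, 80))   # -> refer for assessment
print(triage(36.8, 110))  # -> monitor
print(triage(36.8, 70))   # -> no action
```

Whatever inputs it is given, the procedure never deviates from its predetermined steps - which, as we shall see, is precisely its strength within its own domain and its weakness outside it.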

The origins of mechanical or automatic calculating or processing can be seen in the abacus, the slide rule and a largely forgotten family of logic machines which replicate logical processes or statements through the settings of wheels on machines, window cards which can be slid over each other in various combinations and geometric figures such as Venn diagrams. But what turned out to be the most fruitful line of development consisted of electrical circuits or electronic states which replicated logical processes and statements; this comprised the use successively of simple switches, relays, vacuum tubes, transistors and finally the integrated circuits of the modern computer.

The concept of mechanical programming was applied in the early 19th century Jacquard loom, which used chains of punched cards to specify the pattern of the cloth. This was followed by perforated paper tape, magnetised wire and tape and finally the modern computer programme.

It can thus be seen that the computer and the algorithm were not solely the product of 20th century science and technology. That is significant because it shows that the impetus to automate and mechanise intellectual and decision making processes is not just a characteristic of our generation - it has been with us for a long time. What distinguished the 20th century was the development of the technology which made the use of algorithms possible on an unprecedented scale.

The origins of that other ubiquitous tool of modern life, the model, go back at least as far as Pythagoras.

A model is in essence a mapping device. It can be a physical device, like a scale model of, say, a building or a bridge, upon which tests can be performed in order to predict how the real thing will behave. But it can also be a theoretical model which uses mathematical or other forms of description to represent and make predictions about the behaviour of some part of the physical world.

Modelling is a useful way of doing science. Indeed, in the general sense of being a technique which involves the representation or mapping of a manageable subdivision of the physical world, a great deal of science can be seen to be, in essence, a form of modelling. As with algorithms, the advent of the computer has resulted in a great increase in the use of modelling and simulation, so that today it would be hard to find any field which does not rely upon some form of the technique. That represents a significant advance. Computer modelling is useful for generating hypotheses; it facilitates research and design and enables investigations to be undertaken and tests conducted that would otherwise be impossible, or at least impossible to conclude within a reasonable period of time.
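To make the idea of a theoretical model concrete, here is a minimal sketch of one of the simplest predictive models in medicine, first-order drug elimination; the parameters are hypothetical, not data for any real drug.

```python
import math

# A toy theoretical model: a mathematical description of part of the
# physical world from which predictions are generated. Here, first-order
# elimination of a drug from the bloodstream: C(t) = C0 * exp(-k * t).
# Both parameters are hypothetical, chosen purely for illustration.

C0 = 100.0  # assumed initial concentration (arbitrary units)
K = 0.3     # assumed elimination rate constant (per hour)

def concentration(t_hours: float) -> float:
    """Predicted concentration after t_hours under the model's assumptions."""
    return C0 * math.exp(-K * t_hours)

for t in (0, 2, 4, 8):
    print(f"t = {t} h: predicted concentration = {concentration(t):.2f}")
```

Everything the model "knows" is contained in those two assumed numbers; its predictions are statements about the model itself, not about any actual patient.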

The scientific revolution of the 16th and 17th centuries marked the liberation of science from what John Dryden called the tyranny of scholastic Aristotelian orthodoxy. In this talk I would like to raise the question of whether the ubiquity and great utility of the new technology, and the rapidity with which it has been introduced, may have given rise to a new form of scholastic orthodoxy: an orthodoxy which is characterised by a failure to appreciate the limitations of models and algorithms and a tendency to apply them indiscriminately to inappropriate fields.

Let me start with modelling.

A basic but common error is to forget that a model is not real. That sounds a rather obvious thing to say, but it needs to be said because the output of models is routinely presented in such a way as to suggest that the thing represented by the model is the thing itself. Thus one frequently reads in the scientific literature statements to the effect that a model or simulation “proves” or “shows” that something in the physical world is the case, whereas the only statements which a model can make are statements about itself.

There is also a tendency to overlook the limitations of models, including that, by definition, all models are incomplete and that the validity of the output of a model depends entirely upon the validity and scope of the assumptions upon which it is based.
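That dependence upon assumptions can be shown with the same toy elimination model sketched earlier: run under two different assumed rate constants, it produces confidently precise but quite different predictions, and nothing within the model can tell us which assumption is right.

```python
import math

# The same toy first-order elimination model, run under two different
# assumed rate constants. Both runs are internally consistent; the model
# itself cannot say which assumption, if either, matches reality.

def concentration(c0: float, k: float, t: float) -> float:
    """Predicted concentration under an assumed rate constant k."""
    return c0 * math.exp(-k * t)

for k in (0.30, 0.20):  # two plausible-looking but different assumptions
    c8 = concentration(100.0, k, 8.0)
    print(f"assumed k = {k:.2f} per hour: predicted concentration at 8 h = {c8:.1f}")
```

The two predictions differ by more than a factor of two, yet each is the faultless output of its model: the validity of the output is entirely at the mercy of the assumed input.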

A dramatic example of over-reliance upon modelling was provided by the construction of the Millennium pedestrian bridge across the Thames. This magnificent structure had to be closed two days after it had been opened because the synchronised responses of pedestrians to random movements in the bridge set up dangerous oscillations. The failure to predict this phenomenon was a direct result of over-reliance upon computer models and a failure to conduct sufficient empirical tests.

I would suggest that similar overconfidence was displayed by a team at Oxford who developed a computer model of a heart which, it was claimed, would enable researchers to see instantly what effect a new drug would have on a patient’s heartbeat and show up any side-effects. “Because this model is functioning in exactly the same way as a real heart,” the leader of the team said, “we can see the consequences of administering a drug and if it could be harmful to humans.” (1)

Given that no model can ever isomorphically map a complete organ, let alone isomorphically map everything with which the organ interacts and which might affect its function, the claim that the virtual heart functioned in exactly the same way as a real heart was manifestly unsustainable. It follows that the implied suggestion that the use of the model would obviate the need for clinical trials was highly questionable, to say the least.

But such was the general enthusiasm for the technique that later that year the model was used to persuade the United States Food and Drug Administration that an apparently dangerous effect reported by clinicians testing a new drug developed for hypertension and angina, mibefradil, was in fact harmless. The drug was approved without further clinical tests - the first time modelling data had ever been the decisive factor in the Administration’s approval process. That episode had two sequels. Shortly after its release mibefradil had to be recalled because of its harmful interactions with other drugs, and the leader of the Oxford team appeared to modify his views about virtual hearts when, as one of the authors of an article published by the Royal Society, he warned: “It is important not to be overconfident. We are still a very long way from capturing the complex physiology of the heart in mathematical models.” (2)

But as well as problems arising out of their misuse or overuse, the advent of the age of the computer model and the application of algorithms to a wide range of decision making and intellectual processes have had other undesirable consequences. They have given rise to a new form of scientific hubris which holds that everything is measurable, computable and knowable. But while that may be true of the finite domain of a model or an algorithm, it is not true of the real world.

The idea that everything is measurable is especially unacceptable when it is applied to human beings. Charles Dickens gave expression to this in Hard Times when he introduced us to Thomas Gradgrind, who arrives “with a rule and a pair of scales, and the multiplication table always in his pocket, sir, ready to weigh and measure every parcel of human nature, and tell you exactly what it comes to.”

One troubling aspect of the idea that everything can be measured and tabulated is its corollary that anything which cannot be measured does not exist or can be dismissed as unimportant. There is an analogous corollary to the economic theory that everything can be evaluated in dollar terms which holds that anything which cannot be evaluated in dollar terms has no value.

One result of the failure to recognise that we can never completely describe or understand a given domain in the real world has been the promulgation of a bizarre version of the precautionary principle. Originally developed to govern decision making and policy making in connection with environmentally sensitive activities, the precautionary principle is now routinely invoked in discussions about a wide range of issues, so it has become a significant and influential concept. Over the years the precautionary principle has been formulated in a variety of ways. One version is that no proposed activity should be undertaken unless it can be positively demonstrated that it will not cause harm.

At first sight that simple prescription seems to make sense. But a little reflection leads to the realisation that until we know everything about everything in the universe it is a logical impossibility to prove a negative of that kind. That is brought home to us on a practical level when we remember what we know about chaotic systems, which are sensitively dependent upon initial conditions and in which even the most limited action is capable of generating large and unpredictable effects: the so-called butterfly effect. It follows that, as we can never prove conclusively that a particular action will not have an adverse consequence somewhere in the world at some time in the future, the application of that form of the precautionary principle precludes us from ever doing anything at all again. In other words, the form of the principle which precludes activity unless it can be positively demonstrated that it will not cause harm is simply unworkable.

The fact that, despite its obvious defects, that form of the precautionary principle has been able to gain currency is a product of the new arrogance which leads us to conclude that we are capable of knowing everything that there is to know about a given domain. I repeat, that may be true of the world of algorithms and models but it is not true of the real world.

I should add that in citing that example I am not criticising the precautionary principle generally, only that particular version of it.
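The sensitive dependence upon initial conditions mentioned above is easy to demonstrate. The sketch below iterates the standard logistic map in its chaotic regime: two trajectories which begin a hair’s breadth apart soon bear no resemblance to one another, which is why proving that an action can never cause harm anywhere, at any time, is an impossible standard.

```python
# Sensitive dependence on initial conditions, illustrated with the standard
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)  # differs in the tenth decimal place

for n in (0, 10, 30, 50):
    print(f"step {n:>2}: difference between trajectories = {abs(a[n] - b[n]):.2e}")
```

An initial difference of one part in ten billion grows, within a few dozen steps, to a difference of the same order as the quantities themselves.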

Another consequence of the new orthodoxy has been the emergence of the myth of certainty in science. Increasingly the media, consumers, decision makers and the community generally are demanding clear-cut, unequivocal answers to questions about everything from climate change, dietary requirements or genetically engineered crops to the efficacy of a new drug. But they fail to appreciate that scientific statements are provisional only, being no more than the best fit for the data as they are currently known, so that it is simply not possible to give unqualified answers to questions of that kind. Unfortunately it is not only non-scientists who perpetuate the myth of certainty in science. Some scientists themselves, through the confidence with which, and the unqualified terms in which, they express their opinions or report the results of their research, bear just as much responsibility for perpetuating that misperception of science.

The idea that everything is measurable and that decision making and intellectual processes can be completely systematised and reduced to a set of rules has been especially influential in the field of quality assurance.

The general concept of quality assurance had its origins in the Middle Ages, when the craft guilds laid down detailed regulations governing the quality of their products and the conduct of their members. With the Industrial Revolution, the rise in the 19th and 20th centuries of large corporations and the development of more sophisticated and efficient methods of production and distribution, the maintenance and improvement of quality came to be seen as an industrial and business necessity.

The quality assurance movement comprises a number of cognate developments including the rise of management theory and the promulgation of precisely defined quality assurance standards.

While acknowledging the great benefits which the quality assurance movement has brought, there are aspects of its approach and methodology which give rise to concern. This is particularly the case with some of the more enthusiastic applications of quality assurance programmes to health care. Let me give some examples. They are largely taken from the American literature, but the kind of thinking behind them can be found everywhere.

A threshold difficulty with the application of some quality assurance programmes to health care is that they are based upon questionable premises.

For example, what was described as a consensus statement published by a New York school of medicine included the claim that “The quality of health care can be precisely defined and measured with a degree of scientific accuracy comparable with that of most measures used in clinical medicine”. (3) No doubt that is true of the measurement of some aspects of health care but there are many aspects in respect of which it is obviously not the case.

To take just one example, outcomes such as patient satisfaction cannot be quantified in numerical terms, and even if they could, differences between patients in their expectations, criteria of satisfaction, readiness to accept risk and definition of a desired outcome make it impossible to devise a universal system of defining and measuring the quality of health care which can be applied to every individual clinical situation. And even if such a system could be devised, it is hard to imagine how it could define and measure the quality of that care with anything like scientific accuracy.

Much discussion about quality assurance is also flawed by the uncritical application to the delivery of health care of concepts developed in industry or the commercial world. For example, in one study the question is posed of why health care providers have lagged behind other sectors such as “utilities, financial services and manufacturing” in the adoption of information technology. (4) But the mere posing of that question reveals a failure to appreciate the crucial distinction between the conduct of a business and the practice of a profession. Amongst other things, their objectives are different: the primary function of a commercial organisation is to serve the interests of the organisation, while the primary function of a medical professional, or indeed any professional, is to serve the interests of the patient or client. In short, there are obviously fundamental differences between practising medicine on the one hand and generating electricity, lending money or manufacturing bicycles on the other, and there is no justification for concluding that the methodologies and concepts employed in one can be applied to the other.

Another questionable attempt to apply structured methods of quality assurance to the practice of medicine entails assessing quality by reference to the extent to which the results in a particular clinical situation are consistent with the outcome predicted by a model.

That too is an approach which needs to be viewed with circumspection. Even the most complete and sophisticated model can never overcome the inherent difficulty of making accurate predictions of what should be the outcome in an individual case based upon what are really epidemiological data; nor can it allow for the possibility that the difference between the predicted outcome and the actual outcome might be the result of a factor not incorporated in the model such as the fact that the clinical situation under review had unusual features.

An example of the sort of position which is reached by the unrelenting application of information technology to health care is provided by this revealing observation contained in a policy perspective produced by a high-level health research unit in Massachusetts, entitled “The Future of Quality Measurement and Management in a Transforming Health Care System”: “The professional ethic used to provide the primary assurance that autonomous professionals … would meet the needs of patients and continually improve the quality of their product. However, the professional ethic is under siege as health care becomes more like other commodities and services …” But the authors of the report also detected what they regarded as a compensating trend in the rise of what they called the “power of the internet”, which they concluded would “provide a way to ensure and improve quality in health care by reducing asymmetries of information between health care providers and a growing number of consumers. It will thereby correct a fundamental flaw in the functioning of health care markets and reduce the importance of … professionalism among individual providers …” (5)

There are two observations one can make about analyses of that kind.

I think that we should view with serious concern the idea that social and economic forces beyond our control will inevitably lead to the marginalisation of the professional ethic and the reduction of the doctor-patient relationship to a set of transactions governed entirely by market forces.

And the suggestion that the internet should be seen as an integral component of the health care system is breathtaking. Some patients do not have access to it, while others will not wish to use it; more importantly, the information on the internet largely comprises a mass of unprocessed material which is of very uneven quality, whose provenance is often unknown and which is searchable only by the crudest means. It is surprising to see a policy statement which is designed to promote quality assurance placing reliance upon an instrument which is itself so palpably devoid of any quality assurance mechanisms.

Let me conclude with these propositions. First, I am not suggesting that we should be Luddites: of course the new technology has brought tremendous benefits, and of course we should embrace it and apply it as fully and effectively as we can. But in doing so we must recognise its limitations. In particular we must recognise that, in contrast to the limited domain of the algorithm and the computer model, in the real world not everything is measurable, and our knowledge about a particular field, including the human organism, is and perhaps always will be incomplete. Secondly, while recognising that the quest for improvement in the quality of health care is important and should be pursued vigorously, we also need to recognise that those endeavours will be counterproductive if they reduce the practice of medicine to a purely economic activity governed by the thinking of the measurers and tabulators. Finally, while recognising the desirability that, to the extent possible, the practice of medicine should be evidence-based and undertaken in accordance with established procedure, we must also recognise that there will never be a universal algorithm which, like some physical theory of everything, is capable of being applied to every individual clinical situation.

Perhaps it is timely to reaffirm that two of the defining qualities of a professional are the courage and capacity to make judgements in situations where knowledge is incomplete and a commitment to professional values which always prevail over all other considerations.

References

1. The Sunday Times, 3 August 1997.

2. Phil. Trans. R. Soc. Lond. (2001); 359: 1049-1054.

3. JAMA (1998); 280: 1000.

4. Australian Health Review (2000); 23: 176.

5. JAMA (1997); 278: 1624.