As lawyers, a lot of what we do is make predictions. At its most basic level, every bit of legal advice as to what the legal position is in any case is essentially a prediction: what would the courts make of it? In practice, the prediction is more subtle. What is the likely outcome of a legal engagement, bearing in mind that the vast majority of cases settle?
A good question is “How do we make these predictions?” And more importantly, “What can we do to make such predictions more accurate?”
These questions have gained additional topicality following the recent row in the UK about Andrew Sabisky. He is a special adviser, only recently appointed by Downing Street, who is what is known as a “superforecaster”. There is more science to this expression than you might think, not least because of the work of Philip Tetlock, a professor at the University of Pennsylvania in the USA. He has done a good deal of work over the years on the subject of forecasting, or predictions, and has organised a series of forecasting tournaments over the last three decades, most recently through the Good Judgment Project. It turns out that the majority of people, including people ordinarily regarded as experts, are pretty rubbish at forecasts, barely better than random. The longer the range of the forecast, the poorer its accuracy. But there are a few people – the so-called superforecasters – who are really rather good at it. It is not just luck. Time and again, these superforecasters (typically the best 2% of forecasters) prove themselves to be much better than run-of-the-mill experts at predicting what is going to happen. And Professor Tetlock and his team have done a good deal of work in identifying what makes these superforecasters better than others.
The answer turns out to be quite complicated; the superforecasters have a range of talents and techniques. One is the use of Bayesian techniques: they continually update their assessments of probabilities as new evidence arrives. Another is the Fermi technique: breaking down uncertainties into smaller parts and making the uncertainties attaching to those smaller parts as transparent as possible. They start from outside the problem and move inwards, rather than the other way around. Even more important, perhaps, is their ability to resist groupthink (Andrew Sabisky got into serious trouble for failing to conceal his resistance to groupthink from public gaze). Mere specialist expertise in any particular field turns out to be a very poor indicator of forecasting accuracy.
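The Bayesian technique mentioned above can be illustrated with a short sketch. The numbers are purely hypothetical: a 60% prior that a claim succeeds, updated on a new piece of evidence (say, a favourable witness statement) which is estimated to be twice as likely to appear if the claim is good as if it is bad:

```python
def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability of a proposition, given a prior
    probability and a likelihood ratio for the new evidence
    (P(evidence | proposition true) / P(evidence | proposition false))."""
    prior_odds = prior / (1 - prior)            # convert probability to odds
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)  # back to a probability

# Hypothetical figures: 60% prior, evidence twice as likely if the claim is good.
p = bayes_update(0.60, 2.0)
print(round(p, 3))  # 0.75
```

The same function can be applied repeatedly as each new piece of evidence arrives, which is exactly the continual updating of probability assessments that the superforecasters practise.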
One might think that it would be a good idea for a legal team engaged in a very substantial case to identify the superforecasters within the team. But on the whole, hierarchies tend to be resistant to any such process. It would be perfectly possible, for example, to make an assessment within a large legal team as to who the best forecasters are. But imagine that you are a senior member of that team. You are used to your estimations being accorded due respect. You are relatively unlikely to be willing to be assessed in competition with a young lawyer in his or her 20s. And so the business of predicting the results in a substantial legal case is typically the province of the senior, rather than the talented.
One might also think that it would be a good idea to insist that predictions as to the outcome of cases be put in numerical terms. All too often, lawyers put their predictions in terms that are barely better than Delphic. They say that a particular argument has a “reasonable” prospect of success, or that a particular application is “arguable”. The reason that predictions are put in such vague terms is that vagueness is thought to be safer: a vague prediction cannot readily be gainsaid, whatever the result. I have never subscribed to that point of view. When I was a solicitor, and it was my job to lead legal teams in substantial cases, I used (with some computer assistance) to generate probability graphs, offering estimates as to the likelihood of the result. For example, in a case where a contractor was pursuing a claim for compensation for unexpected site conditions, both in misrepresentation and in contract, I might have identified three main outcomes (eg that the misrepresentation claim would succeed, that the contractor could overcome its lack of contractual notices, and that the contractor would succeed on neither of those bases). I would weight each of those three outcomes. I would then make estimates as to the likely monetary result for each of those three outcomes, and as to the likely accuracy of those estimates. In short, I would compound three Gaussian distributions, make the appropriate adjustments in terms of legal and other costs and so forth, and show them in graphical form. I would say to the client something like “Look, the likelihood is that you’re going to end up with a result somewhere in the middle of this big hump in the graph. 
But you might well end up in the much less probable left-hand side of the graph, and you might, if you’re lucky, end up in the also improbable right-hand side of the graph.” On the whole, my clients liked this sort of approach, and would follow my advice as to how to proceed. And on the whole, the eventual results tended to be somewhere around the middle of the big humps in these graphs. Now, towards the end of my career and as a barrister, I have effectively demoted myself from battlefield General to mere Colonel of the cavalry. And so it now lies with my instructing solicitors to give whatever advice they think fit as to overall prospects.
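The compounding exercise described above can be sketched in code. All the figures below are hypothetical: three outcomes, each with a weight, an estimated monetary result (in £m, negative for a net costs exposure) and a standard deviation reflecting the accuracy of that estimate, combined into a single weighted mixture of Gaussian distributions:

```python
import math

# Hypothetical outcomes: (probability, mean recovery in £m, std deviation).
outcomes = [
    (0.45, 8.0, 2.0),   # misrepresentation claim succeeds
    (0.35, 5.0, 1.5),   # contractual claim survives the notice point
    (0.20, -1.0, 0.5),  # neither succeeds: net costs exposure
]

def mixture_pdf(x):
    """Probability density at x of the weighted mixture of Gaussian outcomes."""
    total = 0.0
    for weight, mean, sd in outcomes:
        gaussian = math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
        total += weight * gaussian
    return total

def mixture_mean():
    """Expected monetary result across all three outcomes."""
    return sum(weight * mean for weight, mean, _ in outcomes)

print(f"Expected recovery: {mixture_mean():.2f}m")  # 0.45*8 + 0.35*5 + 0.20*(-1) = 5.15
```

A fuller version would plot mixture_pdf over a range of values to produce the “big hump” graph described above, after making the adjustments for legal and other costs.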
The truth of the matter is that it is illusory to treat legal advice as absolute, or binary. It is sufficient to look at the number of successful appeals and adjudication challenges to see the significant uncertainties which attach to any legal action. So much depends on which Judge, arbitrator or adjudicator you get, how the witnesses perform on the day, the financial exigencies operating on the minds of the managers of one’s opponent, and a host of other factors. It is not, of course, like rolling a couple of dice, but there is this similarity: it is not very likely (only one in 36) that you will roll a double one. But it’s possible. And good advice to clients puts numerical values to both risks and rewards.
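The dice arithmetic can be checked by simple enumeration: two fair dice give 36 equally likely outcomes, of which a double one is just one. A minimal, purely illustrative sketch:

```python
from itertools import product

# Enumerate all equally likely rolls of two fair six-sided dice.
rolls = list(product(range(1, 7), repeat=2))
p_double_ones = sum(1 for roll in rolls if roll == (1, 1)) / len(rolls)
print(p_double_ones)  # 1/36, roughly 0.028
```

The point is not the dice, of course, but the habit: improbable outcomes have a definite, calculable likelihood, and advice is better for stating it.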
All sorts of views are possible, of course, as to how the science of forecasting should be applied in the legal profession. What is, perhaps, somewhat surprising is that many of my professional colleagues – well, most of my professional colleagues, in truth – give no thought to that science at all, let alone to how it might usefully be applied.
The practice of law has a long way to go in this regard.