Jan 4 2017

Experience in Artificial Intelligence – lessons for Legal IT in 2017


To start 2017 I thought I’d take a quick look at 2016’s favourite Legal IT topic, Artificial Intelligence (AI). The reason for picking this topic comes down to two pieces of technology I brought into our household at the end of last year, both of which rely heavily on AI: Nest and Alexa.

The former has brought home to me what I think will be a key issue in the adoption of AI in law firms: trust. My Nest thermostat “learns” over time, and one of the key aspects of this is watching for when you go out of the house; the system then reacts by turning the heating down. At the same time it triggers my Nest security camera to turn on. Over Christmas, though, I noticed that on odd occasions when we were out the camera would turn on but the thermostat wouldn’t drop the temperature. Other times it would; there seemed to be no consistency. After a lot of old-style IT troubleshooting and a lot of googling, I eventually found that this wasn’t a bug: Nest had learnt our patterns and locations, and kept the heating on when it thought our trip was local and brief, reasoning that turning off the heating would be less efficient (as otherwise it would need to heat the whole house from scratch on our return). The camera, it had realised, should come on immediately either way.
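The behaviour I eventually uncovered amounts to a simple decision rule. Purely as an illustration (the function, names, and thresholds here are my own invention, not Nest’s actual logic, which is learned rather than hard-coded), it might be sketched like this:

```python
# Hypothetical sketch of the away-mode behaviour described above.
# All names and thresholds are invented for illustration only --
# Nest's real behaviour is learned from usage data, not hard-coded.

def on_everyone_left(predicted_trip_minutes, reheat_cost, away_savings):
    """Decide what to do when the house detects everyone has gone out.

    away_savings: a function estimating the energy saved by dropping
    the temperature for a trip of the given length.
    """
    actions = {"camera_on": True}  # security camera comes on immediately, always

    # Only drop the temperature if the predicted trip is long enough that
    # the energy saved outweighs reheating the whole house on return.
    if away_savings(predicted_trip_minutes) > reheat_cost:
        actions["heating"] = "eco"   # long trip: turn the heating down
    else:
        actions["heating"] = "hold"  # short local trip: cheaper to stay warm

    return actions
```

With a rule like this, a quick nip to the shops keeps the heating on while the camera still activates, which is exactly the “inconsistency” I was seeing.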

This is my trust point: until I fully understood what was going on, I didn’t trust the technology. I thought it was simply not working, so I had taken to manually overriding the settings when we went out. In reality it was working very well, and was actually better at predicting what would save energy than I was! The same issue will apply to Legal IT AI: building trust will take time, and will probably require a full understanding of how the machine is learning before people accept AI into mainstream functions.

Alexa, to me, is voice recognition starting to become useful; moving from the smartphone (Siri) or the computer (Cortana) to being “in the room” makes so much sense and is far more practical. Whether it’s controlling the lights or simply putting on some music, it is genuinely useful rather than a gimmick. It is impressive how Alexa uses all the data it gathers to improve, but it also shows how far AI has to go in terms of human interaction. It is way beyond having to phrase your commands or questions in a rigid, specific way, but there is still a long way to go beyond the handful of “skills” it has now.

These two areas fuel my scepticism around AI. No, that’s not fair: it’s not scepticism, it’s just wanting an injection of reality into AI within Legal. I am impressed with Alexa and Nest, and the more I use them the more impressed I am by the learning; crucially, though, my expectations for the technology (particularly for Alexa) were not overoptimistic. I think if we adopt a realistic approach to AI in Legal in 2017 then it can and will be a great enabling technology for firms. If we don’t temper the hype, though, we’ll be disappointed or fail to trust it, and it’ll go in the bin for years before we try it again!


May 5 2015

“Lawyers are like any other machine….”


“Lawyers are like any other machine. They’re either a benefit or a hazard. If they’re a benefit, it’s not my problem”

Rick Deckard, Los Angeles, Nov 2019.


The last time I really looked at Artificial Intelligence (AI) was when I studied it as a module in my second year at university. Over the last year I’ve seen it pop up again and again at various Legal IT events and in a number of Legal publications. If the talks and articles are to be believed, then in the next 10 years or so we’re going to see AI become pervasive in the legal sector, both through the need to legislate against its usage in society and as a replacement for trainees and junior lawyers (Note: this is a link to a paywalled article, though you can read the synopsis). In fact the latter viewpoint is becoming the new topic of choice on the Legal IT event circuit: Richard Susskind has talked about it, Dr Michio Kaku talked about it in his keynote at the recent British Legal Technology Forum, Rohit Talwar talked about it at last year’s ILTA in Nashville, and there are many, many more examples.

I’m not convinced by the timescales, but it would be daft not to think this will feature in the reasonably near future. But then a recent article in The Spectator made me start to doubt whether this will in fact be our future. The article was actually about this year’s re-release of Ridley Scott’s ‘Final Cut’ of Blade Runner and the announcement of the sequel, but it discussed the concept of “virtual paranoia” in the original Philip K. Dick book: the uncertainty of what is real and what is not. Will this fear be the one to scupper AI in the legal profession? Will virtual paranoia mean we’ll never have the digital lawyer? If you were after a key piece of legal work or advice, would you be happy with a computer giving it to you, or would you insist on a human being? I think there would be a bit of worry; this virtual paranoia is already creeping into society. We see people dislike their online actions being used to tailor adverts for them, so much so that they are removing themselves from sites that facilitate this.

The other point that struck me from the article was the question “What is the meaning of memory, now everything is a click away on Google?”, something this article on the ABA Journal’s site also raised recently. Will this aspect further enhance our virtual paranoia?

Maybe, though, what we’ll see is a future somewhere between the extremes. Rather than a full-on replacement of the lawyer, we’ll see AI support the lawyer: AI used to speed up the legal process, provide knowledge to the lawyer, and become the KM function of law firms. I think it is unlikely that AI will ever become “human” enough; it may pass the Turing Test, but I’m sure there is something in human interaction, behaviour, and thought that will mean the human lawyer is still required for quite some time.