Oct 30 2017

A few thoughts on artificial intelligence for both the sceptic and evangelist!

Jason

I find myself having more and more conversations about AI (artificial intelligence), whether it’s in Legal IT circles such as the recent panel I chaired at LawExpo 2017 in London or just a discussion with my brother on the ethics of the impact of this “next revolution” on the job market. My standpoint is a mix of scepticism about how quickly this will happen and a dose of optimism that we will find a way to navigate this shift without mass unemployment.

I sometimes think I am alone on both counts though. An example is this long but well written article on how we’re all going to lose our jobs to robots; for those in their twenties it basically says you’ll all be out of a job before you retire! But then there are other, more tempered articles that suggest not all jobs will expire, but that we’ll adjust to use the tools in our jobs rather than have the tools replace us. And there are other, more positive views of a future business world with AI.

My sceptical side can’t help but pull out articles like this one from 1992, which talked about the future of speech and pen recognition: “If I were a researcher, I’d feel I had a better return from studying speech recognition” [as opposed to handwriting recognition], along with a comment about how great a stylus would be for painting and drawing. Some 25 years later, with Alexa and the Surface Pen/Apple Pencil, we just about have the tech to achieve each to a reasonably successful degree!

But we will undoubtedly see a revolution that eclipses the industrial revolution and transforms the world of business, whether it’s in 5 or 50 years. And I look to tech leaders like Bill Gates and Elon Musk, who are sounding a warning that pretty much every politician is ignoring.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” – Elon Musk

Bill Gates has proposed possible solutions to managing the problem, such as taxes on robots, and the long article linked to first in this post has other options for tackling the transition. We do need to start looking at what this revolution might mean for business and, more importantly, society. In an increasingly barbell-shaped economy, more technology will only erode the middle further and will probably erase the low-end job market entirely!

However, I think we have time. AI will form a huge part of technology solutions in the next 10 years, I am sure, but in very specific applications, and the majority of these, I think, will assist humans rather than eradicate them. After all, there will be plenty of companies that follow the Dilbert principle in this cartoon!


Jun 7 2017

Yes sorry it’s another AI (and Uber) post – but also Ravn and iManage!

Jason

Honestly, this isn’t Legal AI click bait! But the big news in Legal IT recently (the news regarding Ravn and iManage, not the Tikit and NetDocuments news yesterday) reminded me of this interesting post on Uber’s use of AI that I caught in the tech press.

This was the article: Uber using AI to adjust prices according to the rider. It caught my eye as I am always intrigued by Uber’s continued use of technology. Whether it’s pushing the envelope with the App Store by trying to fingerprint devices so it can track them after a reset, or using a tool called Greyball to detect and avoid city agencies’ sting operations, their innovative use of tech (if not their moral use of it) is fascinating.

The AI article, though, got me thinking: is this a use of AI that law firms could utilise? On the face of it, it looks a little dishonest, or a recipe to annoy your clients even more. But factor in some specifics, like the level of risk the client is likely to accept, speed over quality or vice versa, whether they are a key client, a loyal client or a one-off engagement, or a client in a market you want to break into, their location (i.e. for global firms, jurisdictions where the average local rate is much lower than the firm’s average global rate) and so on, and you could start to get an interesting model that is consistent for the client and tuned to their need, but also financially beneficial to the law firm. Of course, as @jordan_law21 put it on Twitter, “You can have a flat fee, or you can see our dockets, but not both. Build client trust and they won’t ask for the latter”. As with fixed fees, you would need to be up front with the client that this “tailored rate” was in play, but they would get the best value using it!
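To make the idea a little more concrete, here is a minimal sketch of what such a “tailored rate” model could look like. This is purely illustrative: the factor names, weightings and base rate are my own invented assumptions, not anything Uber or any law firm actually uses.

```python
# Hypothetical sketch only: a "tailored rate" calculator along the lines described above.
# All factors, weightings and the base rate are invented for illustration.

BASE_HOURLY_RATE = 400.0  # assumed baseline hourly rate


def tailored_rate(risk_appetite: float,
                  speed_over_quality: float,
                  is_key_client: bool,
                  is_new_market: bool,
                  local_rate_index: float) -> float:
    """Return an adjusted hourly rate for a given engagement.

    risk_appetite       0.0 (risk averse) to 1.0 (high risk tolerance)
    speed_over_quality  0.0 (quality first) to 1.0 (speed first)
    local_rate_index    ratio of the local average rate to the firm's global average
    """
    rate = BASE_HOURLY_RATE

    # Clients prepared to accept more risk, or to prioritise speed over polish,
    # get a discount reflecting the lighter-touch work involved.
    rate *= 1.0 - 0.10 * risk_appetite
    rate *= 1.0 - 0.05 * speed_over_quality

    # Strategic relationships: reward key clients, invest in new markets.
    if is_key_client:
        rate *= 0.90
    if is_new_market:
        rate *= 0.85

    # Align with the going rate in the client's jurisdiction.
    rate *= local_rate_index

    return round(rate, 2)


if __name__ == "__main__":
    # A loyal key client in a lower-rate jurisdiction, happy to trade some risk for price.
    print(tailored_rate(risk_appetite=0.6, speed_over_quality=0.3,
                        is_key_client=True, is_new_market=False,
                        local_rate_index=0.8))
```

The numbers themselves aren’t the point; it’s that the adjustments are explicit and applied the same way every time, which is what would make the rate feel consistent to the client rather than arbitrary.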

Like I say, I’m not sure it would fly, but for innovation to flourish in law firms we need to have a few wild ideas, poke them a bit and be willing to bin them if they just won’t work.

For those that missed it, this was the Ravn/iManage news that came out of ConnectLive17.

“iManage, the leading provider of Work Product Management solutions, today announced that it will revolutionise the way companies find, extract and act on key information from documents and emails through its acquisition of RAVN Systems, who as you know are leading experts in the field of Artificial Intelligence (AI) and Cognitive Search” – press release here.

If you’re a customer of either, it’s got to be exciting, or at the very least interesting, news. The possibilities of all that core law firm data in your document stores combined with Cognitive Search and AI are numerous. It also makes the introduction of AI solutions more about what you’re trying to achieve than about the technical complexity of getting that volume of information through yet another system.


Jan 4 2017

Experience in Artificial Intelligence – lessons for Legal IT in 2017

Jason

To start 2017 I thought I’d have a quick look at 2016’s favourite Legal IT topic, Artificial Intelligence (AI). The reason for picking this topic is two pieces of technology that I brought into our household at the end of last year, both of which utilise AI heavily: Nest and Alexa.

The former has brought home to me what I think is going to be a key issue in the adoption of AI in law firms: trust. My Nest thermostat “learns” over time, and one of the key aspects of this is watching for when you go out of the house; the system then reacts by turning the heating down. At the same time it triggers my Nest security camera to turn on. Over Christmas, though, I noticed that on odd occasions when we were out the camera would turn on but the thermostat wouldn’t drop the temperature. Other times it would; there seemed to be no consistency. After a lot of old-style IT troubleshooting and a lot of googling, I eventually found that this wasn’t a bug, but that Nest had learnt our patterns and locations and kept the heating on when it thought our trip was local and brief; in its mind, turning off the heating was less efficient (as otherwise it would need to heat the whole house from scratch on our return). The camera, though, it realised should be on immediately.
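In effect, the rule it seemed to have learnt looks something like the sketch below. To be clear, this is just my guess at the behaviour, with invented names and thresholds; it is not Nest’s actual algorithm.

```python
# Hypothetical sketch only: my guess at the kind of rule Nest appears to have learnt.
# The function name, thresholds and distances are invented for illustration.

def on_everyone_left(predicted_minutes_away: float,
                     predicted_distance_km: float,
                     reheat_cost_minutes: float = 45.0) -> dict:
    """Decide what to do with the heating and camera when the house is empty."""
    # The camera should come on as soon as the house is empty, every time.
    actions = {"camera_on": True, "heating_setback": False}

    # Only drop the temperature if the trip looks long enough that letting the
    # house cool and reheating it later is cheaper than keeping it warm.
    trip_is_short_and_local = (predicted_minutes_away < reheat_cost_minutes
                               and predicted_distance_km < 5.0)
    if not trip_is_short_and_local:
        actions["heating_setback"] = True

    return actions


# A quick nip to the local shops: camera on, heating left alone.
print(on_everyone_left(predicted_minutes_away=30, predicted_distance_km=2))
# A day out: camera on and heating set back.
print(on_everyone_left(predicted_minutes_away=300, predicted_distance_km=40))
```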

This is my trust point: until I fully understood what was going on, I didn’t trust the technology. I thought it was just not working, so I had taken to manually overriding the settings when we went out. In reality it was working very well, and was actually better at predicting things that would save energy than I was! This issue will be the same with Legal IT AI: getting the trust will take time, and will probably need a full understanding of how the machine is learning before people will accept AI into mainstream functions.

Alexa, to me, is voice recognition starting to become useful. Moving from the smartphone (Siri) or the computer (Cortana) to being “in the room” makes so much sense and is far more practical. Whether it’s controlling the lights or simply putting on some music, it genuinely is useful rather than a gimmick. It is impressive how Alexa is using all the data it is gathering to improve; however, it also shows how far AI has to go in terms of human interaction. It is way beyond having to phrase your commands or questions in a specific way, but there is still a long way to go to get beyond the handful of “skills” it has now.

These two areas fuel my scepticism around AI. No, that’s not fair; it’s not scepticism, it’s just wanting an injection of reality into AI within Legal. I am impressed with Alexa and Nest, and the more I use them the more impressed I am by the learning; however, my expectations for the technology (particularly for Alexa) were not overoptimistic. I think if we adopt a realistic approach to AI in Legal in 2017 then it can and will be a great enabling technology for firms. If we don’t temper the hype though, we’ll be disappointed or fail to trust it, and it’ll go in the bin for years before we try it again!


May 5 2015

“Lawyers are like any other machine….”

Jason

“Lawyers are like any other machine. They’re either a benefit or a hazard. If they’re a benefit, it’s not my problem”

Rick Deckard, Los Angeles, Nov 2019.


The last time I really looked at Artificial Intelligence (AI) was when I studied it in my second year at university as a module for my degree. Over the last year I’ve seen it pop up again and again at various Legal IT events and in a number of Legal publications. If the talks and articles are to be believed, then in the next 10 years or so we’re going to see AI become pervasive in the legal sector, both through the need to legislate against its usage in society and as a replacement for trainees and junior lawyers (Note: this is a link to a paywalled article, though you can read the synopsis). In fact the latter viewpoint is becoming the new topic of choice on the Legal IT event circuit: Richard Susskind has talked about it, Dr Michio Kaku talked about it in his keynote at the recent British Legal Technology Forum, Rohit Talwar talked about it at last year’s ILTA in Nashville, and there are many, many more examples.

I’m not convinced by the timescales, but it’s daft not to think this will feature in the reasonably near future. But then a recent article in The Spectator made me start to doubt whether this will in fact be our future. The article was actually about this year’s re-release of Ridley Scott’s ‘Final Cut’ of Blade Runner and the announcement of the sequel, but it talked about the concept of “virtual paranoia” in the original Philip K. Dick book: the uncertainty of what is real and what is not. Will this fear be the one to scupper AI in the legal profession? Will virtual paranoia mean we’ll never have the digital lawyer? Well, if you were after a key piece of legal work or advice, would you be happy with a computer giving it to you, or would you insist on a human being? I think there would be a bit of worry; this virtual paranoia is already creeping into society. We see people dislike their online actions being used to tailor adverts for them, so much so that they are removing themselves from sites that facilitate this.

The other point that struck me from this article was the question “What is the meaning of memory, now everything is a click away on Google?”, something this article on the ABA Journal’s site also raised recently. Will this aspect further enhance our virtual paranoia?

Maybe, though, what we’ll see is a future somewhere between the extremes. Rather than a full-on replacement of the lawyer, we’ll see AI support the lawyer: AI used to speed up the legal process, provide knowledge to the lawyer and become the KM function of law firms. I think it is unlikely that AI will ever become “human” enough; it may pass the Turing Test, but I’m sure there is something about human interaction, human behaviour and thought that will mean the human lawyer is still required for quite some time.
