KM & Artificial Intelligence

Artificial Intelligence (AI) and Machine Learning (ML) have been on my mind a lot of late. A few weeks ago I was on a website and a photo of a person appeared, asking if I wanted to chat. I asked whether they were real or a Bot, and there was no response. To this day I don’t know which it was, though probably the latter.

What are AI / ML, and what do they mean for KM and a learning organisation? My thinking on this is starting to clarify, and I welcome comments from people.

What if each major Project had its own AI / ML Bot that could interrogate existing Lessons Identified repositories, check with the team that current Best Practices are being followed, and even chat with Bots on other Projects to share learning?

AI / ML depends heavily on the quality of the data it is “trained on”, of course. There are numerous stories of such systems reaching wrong, or at least biased, conclusions simply because of bias in the data they are fed. An extreme case came a couple of years ago, when an AI-controlled Twitter account had to be turned off within 24 hours: it had learned from a flood of abuse by trolls and was soon tweeting abuse itself.

But assuming the input data is both unbiased and vast, AI / ML can do what humans do, only much faster, and can also surface insights that are beyond us.

Imagine a large set of Lessons Identified from Major Projects across Govt, with AI / ML finding patterns and links in them and making these available. But more than that: Bots on current projects actively looking ahead to see which Lessons Identified are most relevant to the upcoming Project Stage, and checking that they are being planned for. I imagine Project team meetings with Bots contributing these as part of the team discussion. Perhaps even a Community of Practice of Project Bots sharing lessons between them as they go… all this doesn’t seem so far off. We are approaching a point where AI / ML is producing new knowledge assets and models, not just IT delivering existing man-made ones.
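As a toy illustration of how such a Bot might rank Lessons Identified against an upcoming Project Stage, here is a minimal sketch using simple word-count (term-frequency) similarity. All of the lesson texts, IDs and the stage description below are invented for the example; a real system would use far richer language models than bag-of-words matching.

```python
import math
from collections import Counter

# Purely illustrative lessons repository -- invented text,
# not drawn from any real Lessons Identified database.
lessons = {
    "L-001": "contractor handover delayed because commissioning tests were not scheduled early",
    "L-002": "stakeholder communications plan reduced rework during design phase",
    "L-003": "commissioning checklist missed environmental permits causing schedule slip",
}
upcoming_stage = "commissioning and handover including final tests and permits"

def tf_vector(text):
    """Crude term-frequency vector: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Rank every lesson by similarity to the upcoming stage description.
stage_vec = tf_vector(upcoming_stage)
ranked = sorted(lessons.items(),
                key=lambda kv: -cosine(stage_vec, tf_vector(kv[1])))
for lesson_id, text in ranked:
    print(f"{cosine(stage_vec, tf_vector(text)):.2f}  {lesson_id}  {text}")
```

Even this crude matching surfaces the handover and commissioning lessons ahead of the design-phase one; the point is the pattern, not the method.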

Could we be heading towards a scenario like the movie “Interstellar”, with TARS and CASE in partnership with each other and with humans?

My sense is that organisations that already apply KM in a structured and strategic way will find the transition to this new world easier. The culture and awareness of knowledge assets, knowledge planning, knowledge flow and learning, with supporting roles and processes, are already there, so the new AI / ML world has something to link to and build on. It remains vital to be business-led in all this, not technology-led or even knowledge-led… but that is how strategic KM works currently in any case (or at least should).

A couple of stories to end with regarding large datasets and computers.

Some years ago I was working on a seismic exploration ship in the North Sea. We were down for weather (as was often the case), so I had several hours over a few days to review data on a hull-mounted sonar that would occasionally give errors that propagated across the full system. 3D seismic exploration is extremely data-intensive, tracking not just the ship but also the towed airguns and the up to four 3 km long streamers with hydrophones, pingers, magnetic compasses, radionav, GPS and so on. All this produces vast amounts of data.

It took me hours, but eventually I found the error in the software: it was linked to the wind strength and direction and the sine of the crabbing angle of the ship. To be honest it was pure luck and persistence (and boredom from being down for weather for so long) that got me there. I was just sorting and re-sorting all sorts of data, looking for patterns in Excel graphs, when suddenly it jumped out at me.
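Just to illustrate what that kind of pattern hunt looks like when automated, here is a minimal sketch of a brute-force correlation scan over sensor channels. The data is synthetic, not the real ship data, and the “wind speed times sine of crab angle” feature is planted deliberately so the scan has something to find; the point is that a machine can sweep candidate relationships in moments that took me hours by hand.

```python
import math
import random

random.seed(42)

# Synthetic illustration: an "error" signal that secretly depends on
# wind speed and the sine of the crabbing angle, sitting alongside an
# unrelated channel (water temperature) as a red herring.
n = 500
wind_speed = [random.uniform(0, 25) for _ in range(n)]       # m/s
crab_angle = [random.uniform(-0.3, 0.3) for _ in range(n)]   # radians
water_temp = [random.uniform(5, 12) for _ in range(n)]       # degrees C

error = [0.4 * w * math.sin(a) + random.gauss(0, 0.5)
         for w, a in zip(wind_speed, crab_angle)]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Brute-force scan of candidate features, including a derived one --
# the kind of sweep a human does by hand in Excel over several hours.
candidates = {
    "wind_speed": wind_speed,
    "crab_angle": crab_angle,
    "water_temp": water_temp,
    "wind_speed * sin(crab_angle)":
        [w * math.sin(a) for w, a in zip(wind_speed, crab_angle)],
}
for name, xs in sorted(candidates.items(),
                       key=lambda kv: -abs(pearson(kv[1], error))):
    print(f"{name:30s} r = {pearson(xs, error):+.2f}")
```

The combined feature comes out on top of the ranking, while the red herring sits near zero. A real AI / ML system would of course search a vastly larger space of channels and derived features than this.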

Yet I understand that AI / ML could find this sort of thing in seconds… and probably provide a whole lot of other insights too, beyond mere humans.

The second story is also to do with data-intensive seismic exploration. To some extent, the quality of the seismic data acquired depended on the steering accuracy of the ship’s Captain in holding to the pre-programmed grid of lines, set up parallel to each other, perhaps 50 m apart and several km long. Captains could be distracted by boredom, fatigue and radio comms negotiating with local vessels, including trawlers towing long nets, competing for the same space. So, people far smarter than me devised a system to steer the ship automatically by computer. I was on the ship when this was first used, and on the face of it, all looked fine.

The problem emerged when the data got back to land for processing: the system had introduced gentle but persistent (though varying in magnitude) sine waves into everything from the ship’s gyro to the magnetic streamer compasses and the acoustic ranging. This was a nightmare to process, far harder to deal with than filtering out the random noise of human steering. So we went back to humans steering the ship.
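To show how that kind of periodic artefact can be spotted in data, here is a small sketch comparing a hypothetical auto-steer heading trace (a gentle sine wave plus noise) with a human-steer trace (random noise only), using a plain discrete Fourier transform. The signals are invented for illustration, not recovered from the real survey data.

```python
import math
import random

random.seed(7)

n = 256  # samples of heading residual (degrees off-line), one per second

# Hypothetical traces: auto-steer carries a gentle persistent oscillation
# (about 16 cycles over the window) plus a little noise; human steering
# is modelled as random noise only.
auto_steer  = [0.8 * math.sin(2 * math.pi * 16 * t / n) + random.gauss(0, 0.3)
               for t in range(n)]
human_steer = [random.gauss(0, 0.5) for _ in range(n)]

def dft_magnitude(xs, k):
    """Magnitude of the k-th discrete Fourier bin of the signal xs."""
    re = sum(x * math.cos(2 * math.pi * k * t / len(xs)) for t, x in enumerate(xs))
    im = sum(x * math.sin(2 * math.pi * k * t / len(xs)) for t, x in enumerate(xs))
    return math.hypot(re, im) / len(xs)

def peak_ratio(xs):
    """Largest spectral line over the median line: big = periodic artefact."""
    mags = sorted(dft_magnitude(xs, k) for k in range(1, len(xs) // 2))
    return mags[-1] / mags[len(mags) // 2]

print(f"auto-steer  peak ratio: {peak_ratio(auto_steer):.1f}")
print(f"human-steer peak ratio: {peak_ratio(human_steer):.1f}")
```

The auto-steer trace shows a sharp spectral spike standing far above the noise floor, while the human-steer spectrum stays roughly flat, which is exactly why the random noise of human steering was the easier thing to filter out.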

These two stories illustrate both the power and the challenge of computers working on data: vast potential to scan and process it much, much faster than humans (and to see new things humans can’t), while also doing strange, unpredictable and possibly dangerous things that we cannot see.