Why AI Drives Better Decision Making Or Does It?

There is no shortage of news stories these days on Artificial Intelligence, and questions abound: “will it help us do our jobs better?”, “will we lose our jobs because of it?”, “will it give the wrong answers?” and so on.  That last question is one that many C-suite executives are struggling with.  Why, you ask?  Well, most executives are still basing their decisions largely on their own judgment rather than on hard data and analytics.  There is still a level of scepticism about data, analytics and any kind of algorithm that predicts a future event.  I’ve had plenty of discussions with senior executives, and most come down to the following conclusions:

  • “I don’t know what the algorithm does”: Most executives are quite suspicious of the output from any kind of advanced algorithm and see it as a bit of a black box or dark art. They can’t trust the algorithm because they don’t know how it arrived at its results.  Do they need to understand what the algorithm does?  Yes, to a degree, and they can get there by understanding what rule sets and variables have been fed into it.  The algorithms learn by themselves and, in my opinion, will always need a human to oversee and review the output.  So, if that review step is built into the process, what have executives got to worry about?
  • “It’s my decision and I will make it!”: Ah, so this is an ego thing, is it? “I’ve been making my own decisions for 20 years and they’ve been pretty consistent.”  Well, we all know that ego needs to be checked at the door, and a computer doesn’t come with one, well, not yet.  So perhaps a better approach is a little testing in parallel: your decision vs. the algorithm’s decision on the same cases (a rough sketch of this follows the list below).  Will you sleep more soundly?
  • “Will the algorithm have an element of bias?”: Good question. A research initiative called AI Now has kicked off work looking at “bad” algorithms and their biases.  Kate Crawford, one of its founders, cites examples of algorithmic bias that have come to light lately, including flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.  A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias.  If that is the case, should executives ignore bias? (A second sketch below shows one simple way to start measuring it.)
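
If you want to try the parallel test mentioned above, it can be as simple as scoring your own past calls and a basic model against the same historical outcomes. The sketch below is only illustrative: the data, the column names (“deal_size”, “exec_decision” and so on) and the logistic regression model are all hypothetical stand-ins for whatever your own decision history looks like.

```python
# Minimal sketch: compare the executive's historical go/no-go calls with a
# simple model scored on the same cases. All data and column names here are
# hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

history = pd.DataFrame({
    "deal_size":     [120, 45, 300, 80, 150, 60, 220, 95],   # e.g. in $k
    "client_tenure": [5, 1, 8, 2, 6, 1, 7, 3],               # years
    "exec_decision": [1, 1, 1, 0, 1, 0, 1, 0],               # 1 = approved
    "outcome":       [1, 0, 1, 0, 1, 0, 1, 1],               # 1 = deal succeeded
})

features = history[["deal_size", "client_tenure"]]
outcome = history["outcome"]

# Fit a simple model. In practice you would train on one period and score a
# held-out period; the same rows are reused here only to keep the sketch short.
model = LogisticRegression().fit(features, outcome)

print("Executive accuracy:", accuracy_score(outcome, history["exec_decision"]))
print("Model accuracy:    ", accuracy_score(outcome, model.predict(features)))
```

Running both on the same cases for a quarter or two gives you an evidence-based answer to “whose decisions hold up better?” without handing anything over to the algorithm just yet.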
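On the bias question, one simple starting point is to break the model’s decisions down by a sensitive attribute and compare approval and error rates across groups. Again, the data and the “group” column below are hypothetical; real fairness auditing goes well beyond this, but even a crude check like this surfaces obvious disparities.

```python
# Minimal sketch: compare a model's approval rate and error rate across groups.
# The data and the "group" attribute are hypothetical placeholders.
import pandas as pd

scored = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0, 1, 0],   # model's go/no-go call
    "outcome":    [1, 0, 0, 1, 0, 1, 1, 0],   # what actually happened
})

scored["error"] = (scored["prediction"] != scored["outcome"]).astype(int)

by_group = scored.groupby("group").agg(
    approval_rate=("prediction", "mean"),
    error_rate=("error", "mean"),
)
print(by_group)
```

If group A is approved far more often than group B, or the model’s mistakes fall disproportionately on one group, that is exactly the kind of signal the AI Now researchers are warning about, and it is cheap to monitor.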

Wow, that’s quite a list of reasons why executives are worried about AI.  I do still ask myself whether there are real benefits to AI and machine learning.  Your thoughts?