Artificial intelligence and machine learning: are the innovations all that they’re cracked up to be?
So far, we’ve just scratched the surface, says Stephen Moody, CIO of Symphony AyasdiAI, a Redwood Shores, California-based provider of AI-powered fraud detection solutions. He predicts the next two years will see extraordinary new applications of these cutting-edge technologies.
But to make sure you’re using them in a meaningful way, it’s important to avoid the hype and do your due diligence. Moody spoke with StrategicCIO360 about “AI-washing,” why “cat recognition” isn’t enough and why you should run from a vendor who can’t easily explain the technology.
Has AI/ML lived up to its expectations—or is the best yet to come?
I think we’re just scratching the surface when it comes to artificial intelligence and machine learning. We’re still very much in the hype phase, where there is a lot of potential for what AI and ML can do, but a lot of that promise has yet to be actualized. For most people, I think the tendency is to think about AI in these very macro and science fiction-type ways—think things like the movie Ex Machina or the idea of robot lords taking over.
The reality is that AI and ML have already revolutionized so many processes and operations for businesses across sectors, but it’s very much behind the scenes. I predict that in the next two years we will witness extraordinary new applications of AI in all industries.
How do we combat “AI-washing”—saying something is AI without it being a practical application?
A lot of the so-called AI/ML solutions out there aren’t using true AI—it’s just window dressing on top of old, outdated approaches. You can’t just apply AI to a solution for the sake of calling it AI. It needs to be a meaningful application.
Specifically, when it comes to financial crime, many of the AI/ML solutions currently on the market rely on a technique called supervised machine learning. Essentially, the algorithm is given a history of examples known to be true, and it then tries to find new examples that match. A classic illustration is cat recognition. You show the AI many photos of cats, and you show it photos that are not cats, and it learns—so it can label a photo as “cat” or “not cat.” But in real life, identifying what is financial crime and what is not is rarely so cut-and-dried, so these techniques on their own will fail.
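The supervised “cat / not cat” idea Moody describes can be sketched in a few lines. This is a deliberately minimal nearest-centroid classifier on made-up two-number feature vectors—the feature names and values are illustrative assumptions, not how production image classifiers (or any Symphony AyasdiAI product) actually work:

```python
# Minimal sketch of supervised classification: learn from labeled examples,
# then assign new inputs to the closest learned class.

def centroid(points):
    """Average of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(labeled):
    """labeled: list of (features, label) pairs. Returns one centroid per label."""
    by_label = {}
    for x, y in labeled:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign the label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(model, key=lambda y: dist(model[y], x))

# Hypothetical two-feature examples: [ear_pointiness, whisker_score]
training = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.1, 0.2], "not cat"), ([0.2, 0.1], "not cat"),
]
model = train(training)
print(predict(model, [0.85, 0.75]))  # lands near the "cat" centroid
```

The limitation Moody points to falls straight out of this sketch: the model can only separate classes it was shown clean labels for, which financial crime rarely provides.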
How is AI being used to transform money laundering and the anti-money laundering industry?
Many institutions and firms still rely on largely manual anti-money laundering compliance processes. And a big part of the problem is that these solutions are overwhelmed by false positive alerts, forcing banks to employ armies of investigators who spend 99 percent of their time looking at completely normal financial behavior.
Criminals are good at hiding, so what’s needed is a more sophisticated type of AI that can enable mapping of behaviors within a system. With the right blend of techniques, banks can start to find the hidden areas of crime. This starts with learning and mapping all the different types of behaviors associated with their different customers.
ML technology available today can learn money flow and financial behavior and compare different customers’ behaviors to each other, helping reveal pockets of higher-risk behavior. A bank’s historic view of risk, based on investigation outcomes, can be overlaid on this map, enabling the system to “focus” on particular areas and generate alerts. This kind of automated solution supports regulatory review and achieves a high level of performance in accuracy and risk coverage.
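One simple way to picture “comparing customers’ behaviors to each other” is peer-group outlier detection: score each customer against the population and surface the ones who deviate most. The sketch below uses a basic z-score on a single made-up feature (monthly wire volume) with an assumed 1.5-standard-deviation threshold—real behavior-mapping systems are far richer, and none of these numbers come from the article:

```python
# Hedged sketch: flag customers whose transaction behavior deviates from peers.
# The feature, the sample values and the 1.5-sigma cutoff are all illustrative.
from statistics import mean, stdev

def flag_outliers(behaviors, threshold=1.5):
    """behaviors: dict of customer -> monthly wire volume.
    Returns customers more than `threshold` std devs from the peer mean."""
    values = list(behaviors.values())
    mu, sigma = mean(values), stdev(values)
    return sorted(c for c, v in behaviors.items()
                  if abs(v - mu) > threshold * sigma)

customers = {"A": 10_000, "B": 12_000, "C": 9_500, "D": 11_000, "E": 95_000}
print(flag_outliers(customers))  # only the customer far from the peer group
```

Investigators then spend their time on the flagged minority instead of the 99 percent of alerts that turn out to be normal behavior.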
What do other CIOs need to understand about AI-washing, transparency and explainability in AI?
AI and machine learning are playing a larger role in almost every industry, but it’s important not to get sold on hype. It’s important to really evaluate solutions and ask questions of the vendor about what kind of AI/ML is being used and how it’s being used.
To make AI broadly useful, humans need to be able to interact with and have confidence in the algorithms being used. This is the notion of explainable AI, which requires that the output of an algorithmic decision be justifiable, accessible and error-aware. This is what’s known as AI transparency.
An AI solution should be explainable. You should be able to ask, and clearly understand, what approach is used, what the algorithm does and what information it uses.
A vendor should be able to tell you what information the solution’s algorithm presents to justify a decision and whether that information is understandable to an analyst and regulator. Exactly how does the model make decisions about specific types of behavior? What about behavior it’s never seen before? And then it also needs to be error-aware: how accurate is a given decision and how many misclassifications occur? How do you make sure you’re not missing anything important?
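To make the explainability requirement concrete, here is a toy scoring model that returns not just a decision but the per-feature contributions behind it—the kind of justification an analyst or regulator could read. The feature names, weights and the 1.0 alert threshold are invented for illustration; they do not describe any vendor’s actual model:

```python
# Hedged sketch of explainable output: every alert comes with a ranked
# breakdown of which features drove the score. All numbers are illustrative.

WEIGHTS = {"cash_intensity": 0.6, "cross_border_ratio": 0.9, "new_account": 0.4}

def score_with_explanation(features, threshold=1.0):
    """features: dict of feature name -> value in [0, 1].
    Returns (alert?, total score, contributions sorted by impact)."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total >= threshold, round(total, 2), ranked

alert, total, why = score_with_explanation(
    {"cash_intensity": 0.9, "cross_border_ratio": 0.8, "new_account": 0.1})
print(alert, total, why[0][0])  # the top line of the "why" an analyst sees
```

A black-box model that returned only `True` or `False` would fail the transparency test Moody describes, even if its accuracy were identical.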
For any AI/ML solution, CIOs need to make sure they’re doing their due diligence to ensure it can fit these attributes and that it’s not just a bunch of AI-washing.