In a recent interview, Donna Achimov, the Chief Compliance Officer and Deputy Director of FINTRAC, Canada's financial intelligence unit, highlighted how the agency increasingly uses artificial intelligence to monitor bank activity for suspected money laundering and terrorist financing. One motivation is that the number of suspicious activity reports has tripled over the past five years, now averaging over 500,000 per year. FINTRAC increased its staff by 28% in fiscal 2023, but that is still insufficient to keep up with the volume, and AI techniques such as machine learning can help identify patterns in massive datasets more efficiently.
This development highlights a potential conflict between regulators and banks as both look to advance technology to improve efficiency and effectiveness. Many banks are interested in deploying AI in the financial crime space to reduce the volume of false positives generated by transaction monitoring systems. False positives are very costly for banks because each alert requires human intervention to assess it and decide whether to investigate further or file a Suspicious Transaction Report (STR). That is time and effort that could be spent on higher-value activities, and the sheer volume of erroneous data that must be interrogated also creates an environment where true positives can be missed. Think of the glassy-eyed TSA agent staring at luggage X-rays all day; the TSA has deployed computer vision technology to flag potentially suspicious items and help human screeners combat fatigue.
Machine learning algorithms can be trained on prior STRs to identify likely false positives, allowing human operations teams to focus on the alerts that are more likely to be accurate indicators of money laundering or terrorist financing. The potential benefit of this technology is significant. Still, many institutions are concerned that regulators might hold AI-based processes to a higher standard, exposing them to the risk of enforcement action if genuinely suspicious activity slips through. This concern is valid: many of the most recent fines assessed against banks for AML-related offenses were imposed because the regulator determined that the bank failed to file an STR when it should have. This creates an environment where the AI-assisted judgments of two different organizations are in conflict.
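To make the triage idea concrete, here is a minimal sketch of how an alert-scoring model could work: a simple logistic regression trained on historical alert outcomes (1 = the alert led to an STR filing, 0 = false positive), then used to rank new alerts for human review. Everything here is synthetic and illustrative; the feature names, the 0.5 review threshold, and the hand-rolled trainer are assumptions for the sketch, not FINTRAC's or any bank's actual model.

```python
import math
import random

def sigmoid(z):
    # Clip to avoid math.exp overflow on extreme scores.
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Probability-like score that an alert is a true positive."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic history. Hypothetical features per alert:
# [transaction amount z-score, counterparty country risk, prior STR count].
# True positives are drawn from a higher-risk cluster than false positives.
random.seed(7)
X, y = [], []
for _ in range(200):
    label = random.random() < 0.3
    X.append([
        random.gauss(2.0 if label else 0.3, 0.5),
        random.gauss(0.8 if label else 0.2, 0.2),
        random.gauss(1.5 if label else 0.1, 0.5),
    ])
    y.append(1 if label else 0)

w, b = train_logistic(X, y)

# Triage a new batch: only alerts above the threshold go to a human analyst.
new_alerts = [[2.1, 0.9, 1.4], [0.2, 0.1, 0.0]]
for alert in new_alerts:
    p = score(w, b, alert)
    action = "route to analyst" if p > 0.5 else "deprioritize"
    print(f"alert {alert} -> STR likelihood {p:.2f} -> {action}")
```

The design point is that the model does not file or suppress STRs on its own; it only orders the queue, so analyst effort shifts toward the alerts most likely to matter while every deprioritized alert remains auditable.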
Are we moving into a world where banks apply AI techniques to reduce their workload while regulators use the same capabilities to increase it? As AI becomes more powerful and accessible, the industry must work together to establish standards and best practices that prevent the churn created by "robot on robot" violence.