Updated: Jan 18
In adopting analytics, one of the main challenges faced by internal auditors is the volume of exceptions generated.
The traditional IA sampling approach typically involved evaluating between 5 and 50 items, so the number of exceptions naturally stayed within that range.
However, when we use analytics across full populations, the number of exceptions produced can be quite high. The sheer volume can result in:
- Difficulty in eliminating false positives (noise)
- Unnecessary burden on business stakeholders, who have to deal with the mountain of results
- Internal audit losing faith in analytics
There are various approaches to overcome this. One is to progressively categorise results into buckets based on key characteristics, review a sample from each bucket, and extrapolate the results of the reviewed sample to the remainder of the exception population. However, when the key characteristics can't be easily categorised - for example, when the exceptions draw on both structured and unstructured data - this traditional approach doesn't quite fit.
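The bucket-sample-extrapolate approach described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function and field names (`extrapolate_exceptions`, `vendor`, `amount`) are hypothetical, and the review step is stubbed out as a simple predicate rather than an actual manual review.

```python
import random
from collections import defaultdict

def extrapolate_exceptions(exceptions, key, sample_size, is_true_exception, rng):
    """Bucket exceptions by a key characteristic, review a random sample of
    each bucket, and extrapolate the sampled true-exception rate to the
    rest of that bucket's population."""
    buckets = defaultdict(list)
    for exc in exceptions:
        buckets[key(exc)].append(exc)

    estimates = {}
    for name, items in buckets.items():
        sample = rng.sample(items, min(sample_size, len(items)))
        # In practice this review would be manual; here it is a predicate.
        rate = sum(is_true_exception(e) for e in sample) / len(sample)
        estimates[name] = {
            "population": len(items),
            "sampled": len(sample),
            "estimated_true_exceptions": round(rate * len(items)),
        }
    return estimates

# Hypothetical usage: exceptions bucketed by vendor, reviewed 5 at a time.
rng = random.Random(0)
exceptions = [{"vendor": "A" if i % 2 else "B", "amount": 100 * i} for i in range(40)]
estimates = extrapolate_exceptions(
    exceptions,
    key=lambda e: e["vendor"],
    sample_size=5,
    is_true_exception=lambda e: e["amount"] > 2000,
    rng=rng,
)
```

The seeded `random.Random` keeps the sampling reproducible, which matters when the approach has to be defensible to audit stakeholders.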
An alternative method that we have been using recently, and that has been working really well, involves machine learning techniques: the same approach, different techniques. The result is a significant (>90%) reduction in the number of false positives, giving business stakeholders relevant information to consider and giving audit stakeholders comfort that we did not simply discard results, but used a well-structured and defensible approach to create a manageable set of exceptions for follow-up.
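One way such a machine learning triage could work is to train a classifier on a manually reviewed sample of exceptions and use it to score the rest, keeping only those likely to be true exceptions. The post does not specify the actual model used; the sketch below assumes scikit-learn is available and uses a simple TF-IDF plus logistic regression pipeline over the unstructured text of each exception. The function name `triage_exceptions` and the threshold are illustrative.

```python
# A minimal sketch, assuming scikit-learn; not the post's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def triage_exceptions(labelled, unlabelled, threshold=0.5):
    """Train on a manually reviewed sample of (exception_text, is_true) pairs,
    then score the remaining exceptions and keep likely true positives."""
    texts, labels = zip(*labelled)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    # Probability of the positive class (a true exception) for each item.
    probs = model.predict_proba(unlabelled)[:, 1]
    return [(text, prob) for text, prob in zip(unlabelled, probs)
            if prob >= threshold]

# Hypothetical usage: a reviewed sample feeds the model, the bulk gets scored.
reviewed = [
    ("possible duplicate invoice detected", 1),
    ("routine payment approved as normal", 0),
] * 10
remaining = ["duplicate invoice detected", "routine payment approved"]
kept = triage_exceptions(reviewed, remaining, threshold=0.0)
```

Because the model only filters the follow-up list, the full scored population can still be retained as evidence that results were triaged rather than discarded.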
We will explore the mechanics further in a future post.