In Part 1 we outlined, broadly, an approach to reducing false positives.
In Part 2, we explained that we were looking for potential product linkages that may not have been captured, and that we had too many false positives to investigate manually.
Let's finish the story here by looking at why traditional approaches wouldn't work in our case, and then explaining the alternative approach that we used.
A traditional approach would typically look like one of these:
- select a random sample for manual review
- profile the data (e.g. within a spreadsheet) to identify common characteristics that enable manual identification and elimination of false positives.
Why did we not opt for either of these?
- random sampling has its merits, but with techniques now available to better target anomalies, such an approach is hard to defend (and adds little real value)
- traditional profiling can work for structured data with a smaller set of features (columns), but our free-text data translated into hundreds of columns, so this was not feasible.
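To see why free text defeats spreadsheet-style profiling, here is a minimal sketch (standard library only, with made-up product descriptions, since the post does not show the actual data): even a tiny set of short text fields expands into one feature column per distinct token.

```python
# Made-up product descriptions standing in for the real free-text data.
descriptions = [
    "stainless steel hex bolt 10mm",
    "steel hex nut 10mm zinc plated",
    "copper washer 12mm",
]

# A bag-of-words layout: one column per distinct token across the corpus.
vocabulary = sorted({token for text in descriptions for token in text.split()})

# Each row counts how often each vocabulary token appears in one description.
rows = [[text.split().count(token) for token in vocabulary]
        for text in descriptions]

# Three short descriptions already produce 11 columns; a real population
# of free-text fields quickly reaches hundreds.
print(len(vocabulary))  # → 11
```

Scale that up to thousands of records and a realistic vocabulary, and manually eyeballing the resulting table is no longer practical.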
We worked through a number of scenarios and options, including the above, and discarded most of them. We knew that the analytics software we were using had strong predictive capability, so we decided to put it to work. The process was:
- Select a subset of the population (a random sample)
- Manually review the sample, marking false positives as such
- Pass the result of the review into a few different learner algorithms (e.g. Random Forest, Gradient Boosted Trees) to create a predictive model (in a feature selection loop)
- Score the models to find the best fit
- Pass the remainder of the population through the selected model to "predict" the outcome.
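The steps above can be sketched in code. This is a hedged illustration only, using scikit-learn and synthetic placeholder data; the post does not name the actual software, features, or labels, and the feature selection loop is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix for the full population of potential matches.
population = rng.random((1000, 20))

# Steps 1-2: draw a random sample and "manually review" it, marking each
# record as a false positive (1) or genuine match (0). Labels here are
# synthetic stand-ins for the manual review.
sample_idx = rng.choice(len(population), size=200, replace=False)
sample = population[sample_idx]
labels = (sample[:, 0] + sample[:, 1] > 1.0).astype(int)

# Step 3: pass the reviewed sample into a few different learners.
candidates = {
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Step 4: score the models (cross-validated accuracy) to find the best fit.
scores = {
    name: cross_val_score(model, sample, labels, cv=5).mean()
    for name, model in candidates.items()
}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(sample, labels)

# Step 5: pass the remainder of the population through the selected model
# to predict which records are likely false positives.
remainder = np.delete(population, sample_idx, axis=0)
predicted_false_positives = best_model.predict(remainder)
print(best_name, int(predicted_false_positives.sum()))
```

The predictions then tell the reviewer which of the remaining records are likely false positives, so manual effort can be focused on the rest.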
The result was a significant reduction in false positives. Such a process will never be 100% accurate, but it is certainly better than random sampling alone.
The tools and techniques to improve IA's use of analytics are now readily available.
Are you using them?