During the International Association of Special Investigation Units (IASIU) conference held virtually on September 14 and 15, 2020, Polonious ran a panel discussion with some of the world’s leading investigation professionals. Our report, Investigation Insights, contains new research into the performance, effectiveness and challenges of special investigation units, and shows how better insights can drive improvements in productivity. You can download the full report here.
In this blog, we will review how referrals are garnered and why it is important to vet them properly.
When respondents were asked how cases were referred to them for investigation, their responses were somewhat predictable. Most SIUs (85%) get referrals from claims units. Beyond that, the other sources of referrals were fairly evenly spread.
Automated tools, such as analytics engines using predictive analytics, machine learning, artificial intelligence or rules-based algorithms, are employed by 50 per cent of organisations; 60 per cent use fraud hotlines; and 65 per cent said they seek out cases proactively.
It is important to note, though, that we did not ask what proportion of cases are referred from each of these sources.
A big surprise for us was that around half (52.63%) of respondents did not record the number of false positive referrals they receive from either an analytics tool or their claims unit: referrals that, at first glance from an experienced investigator, are clearly not going to go anywhere.
A smaller but still significant percentage (38.89%) told us that, when they did receive a false positive, they did not feed the information back to the analytics tool to improve future referrals.
There will always be claims that legitimately warrant suspicion but, upon investigation, turn out to be valid. Where you draw the line on the level of suspicion is a matter for each SIU. You may want to investigate only “slam dunks”, those with a 100 per cent strike rate, but risk a lot of potential fraud slipping through. Or you may want to investigate every possible case, but end up spending a lot of time on claims that turn out to be valid.
This triage process might not take long, but even at about five minutes per case, 100 false positives cost you a whole day of work. In SIUs with high case volumes, this adds up quickly. And in SIUs with low case volumes, there is rarely much budget to waste on spinning your wheels.
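To see how quickly that adds up, here is a rough back-of-the-envelope sketch in Python, assuming five minutes of triage per false positive and an eight-hour working day (both figures are illustrative; plug in your own SIU’s numbers):

```python
# Back-of-the-envelope estimate of investigator time lost to false positive
# referrals. The five-minute triage time and eight-hour day are assumptions
# for illustration only; substitute your own figures.

TRIAGE_MINUTES_PER_CASE = 5   # time to read and dismiss an obvious false positive
WORKING_HOURS_PER_DAY = 8

def days_lost(false_positives: int) -> float:
    """Full working days spent triaging referrals that go nowhere."""
    hours = false_positives * TRIAGE_MINUTES_PER_CASE / 60
    return hours / WORKING_HOURS_PER_DAY

if __name__ == "__main__":
    for n in (100, 500, 1000):
        print(f"{n} false positives = {days_lost(n):.1f} working days lost")
```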
At a minimum, reporting on the raw numbers can identify inefficiencies in the referral process before you resort to putting pressure on investigators.
Compare this to the SIUs that did record false positives. About a sixth of those respondents reported that between 21 and 40 per cent of their referrals were false positives, while a full quarter reported a false positive rate of between 41 and 60 per cent.
While this was just a quick questionnaire with a small sample size (only 12 respondents for this question), if the numbers are representative of the wider industry, a sizeable proportion of SIUs are having around half their cases referred to them when they should never have been referred for investigation at all. What’s more, around half of SIUs would not even know a referral was a false positive, because they do not record them.
As mentioned above, of the units that track false positives, almost 40 per cent are not feeding these back into the detection tool, so we would hope that these are not the units receiving 41 to 60 per cent false positives.
Analytics tools work by learning the flags for fraud, either through AI or through analysts updating the rules as they receive data. If false positives are not fed back into these tools, the rules never get updated, and the tool will keep sending you bad cases.
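To make that feedback loop concrete, here is a minimal sketch in Python of one way it could work. It uses a hypothetical rules-based engine with made-up rule names, weights and learning rate, not the API of any particular analytics product: investigators’ closed case outcomes adjust the weights of the rules that triggered the referral.

```python
from collections import defaultdict

# Hypothetical rules-based referral engine. Each rule carries a weight, and a
# claim is referred to the SIU when the weights of its triggered rules sum to
# at least the threshold. Rule names, weights and the learning rate are
# illustrative only.

RULE_WEIGHTS = {
    "late_night_claim": 0.4,
    "new_policy_large_claim": 0.7,
    "multiple_recent_claims": 0.6,
}
REFERRAL_THRESHOLD = 1.0
LEARNING_RATE = 0.05

outcome_counts = defaultdict(lambda: {"confirmed": 0, "false_positive": 0})

def should_refer(triggered_rules: list[str]) -> bool:
    """Refer the claim if its triggered rules score at or above the threshold."""
    return sum(RULE_WEIGHTS[r] for r in triggered_rules) >= REFERRAL_THRESHOLD

def record_outcome(triggered_rules: list[str], was_fraud: bool) -> None:
    """Feed the investigator's finding back into the engine: nudge each
    triggered rule's weight up for confirmed fraud, down for a false positive."""
    for rule in triggered_rules:
        outcome_counts[rule]["confirmed" if was_fraud else "false_positive"] += 1
        adjustment = LEARNING_RATE if was_fraud else -LEARNING_RATE
        RULE_WEIGHTS[rule] = max(0.0, RULE_WEIGHTS[rule] + adjustment)

# A claim trips two rules and is referred (0.4 + 0.6 meets the 1.0 threshold)...
triggered = ["late_night_claim", "multiple_recent_claims"]
print(should_refer(triggered))   # True

# ...the investigator closes it as a false positive, the weights are dampened,
# and a similar claim would no longer be referred on the next run.
record_outcome(triggered, was_fraud=False)
print(should_refer(triggered))   # False (roughly 0.35 + 0.55 = 0.9)
```

Whether the tool is a simple rules engine like this sketch or a machine learning model retrained on labelled outcomes, the principle is the same: the closed case finding has to make it back to the tool, or it will keep producing the same bad referrals.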
If you are getting 50 per cent false positives, you are paying investigators to read case details that add no value. And if those results are not being used to enhance your detection systems, you will be doing the same thing every quarter.