As fraud professionals, it’s natural to focus on preventing fraud losses, but this focus often comes at the expense of sales conversion. Model-based risk management platforms and machine learning model training share this bias, largely because it is much easier to recognize missed fraud than it is to recognize sales insults (good customers who were declined).
Fraud and risk management strategies tend to focus so heavily on automated risk decisioning that improving manual review performance is often an afterthought. Consider the cost savings and increased revenue an organization could realize by cutting average order review times while also reducing sales insults and missed fraud on reviewed orders. This is why improving the performance of manual reviews is at least as important as efforts to reduce order review rates.
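To illustrate the trade-off between fraud losses and sales insults described above, the following is a minimal sketch of an expected-cost decline rule. The function name, the margin rate, and the cost structure are all hypothetical simplifications, not a method from the article: a real decision would also weigh chargeback fees, customer lifetime value and review costs.

```python
# Hypothetical illustration of balancing missed fraud against sales insults:
# decline an order only when the expected fraud loss exceeds the expected
# margin lost by insulting (declining) a good customer.

def should_decline(p_fraud, order_amount, margin_rate=0.15):
    """Return True when the expected fraud loss outweighs the expected lost margin."""
    expected_fraud_loss = p_fraud * order_amount            # loss if this is fraud
    expected_insult_cost = (1 - p_fraud) * order_amount * margin_rate  # margin lost if legit
    return expected_fraud_loss > expected_insult_cost

print(should_decline(0.05, 200.0))  # 10.0 vs 28.5 -> False: accept the order
print(should_decline(0.30, 200.0))  # 60.0 vs 21.0 -> True: decline the order
```

Note how the break-even probability is low: with a 15 percent margin, declining becomes the cheaper option at roughly a 13 percent fraud probability, which is one reason automated systems drift toward over-declining when insult costs are not measured.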
Guest Post Written by: Daryn Griggs, Co-Founder, Payshield, Certified eCommerce Fraud Professional
COVID-19 has driven a massive increase in online shopping around the world, including among people who had never shopped online before. Naturally, with an increase in online shopping comes an increase in online fraud, and the biggest fraud increase? Friendly fraud!
One of the most common reasons organizations fail to realize significant improvements in risk management after implementing custom modeling solutions can be described with one phrase: junk in, junk out. This article discusses best practices for data management and other factors shown to improve the performance of custom modeling, machine learning and artificial intelligence solutions.
It’s not just the breadth of data but also the quality of data that matters. One of the biggest misconceptions about machine learning (ML) and artificial intelligence (AI) is that you can just flip a switch and let the technology work its magic.
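As a concrete illustration of the "junk in, junk out" point, here is a minimal sketch of pre-training data-quality checks. The record fields (`txn_id`, `amount`, `outcome`) and the specific rules are hypothetical examples, not the article's methodology; the point is that models trained on unlabeled, duplicated or malformed records inherit those flaws.

```python
# Hypothetical pre-training data-quality checks: drop records that would
# degrade a supervised fraud model before they ever reach training.

def clean_training_records(records):
    """Remove unlabeled, duplicate, and malformed records from a training set."""
    seen_ids = set()
    cleaned = []
    for rec in records:
        # A record with no fraud/legit outcome cannot be used for supervised training.
        if rec.get("outcome") not in ("fraud", "legit"):
            continue
        # Duplicate transaction IDs silently inflate the weight of those examples.
        if rec.get("txn_id") in seen_ids:
            continue
        # A non-positive amount indicates a data capture error upstream.
        if not isinstance(rec.get("amount"), (int, float)) or rec["amount"] <= 0:
            continue
        seen_ids.add(rec["txn_id"])
        cleaned.append(rec)
    return cleaned

raw = [
    {"txn_id": "t1", "amount": 25.0, "outcome": "legit"},
    {"txn_id": "t1", "amount": 25.0, "outcome": "legit"},   # duplicate
    {"txn_id": "t2", "amount": -5.0, "outcome": "fraud"},   # bad amount
    {"txn_id": "t3", "amount": 80.0, "outcome": None},      # unlabeled
    {"txn_id": "t4", "amount": 120.0, "outcome": "fraud"},
]
print([r["txn_id"] for r in clean_training_records(raw)])  # ['t1', 't4']
```

In practice these checks run as part of a repeatable data pipeline rather than a one-off script, so that every model refresh sees the same quality standards.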
Fraudulent insurance claims are responsible for more than $80 billion in losses per year in the United States, leading to higher premiums for all policyholders. It is estimated that card issuers will lose $1.3 billion this year from cards issued to synthetic identities. Total fraud losses from synthetic identities receiving loans and credit cards are estimated at $6 billion annually. Lenders, issuers and insurance carriers are increasingly combating fraudulent applications and claims with custom modeling solutions and AI.
Checking pay stubs to verify income is no longer sufficient. A consumer completing an application with their authentic identity can easily fabricate paycheck stubs or purchase them online for less than ten dollars. When fraudsters create synthetic identities to take out an uncollateralized loan with no intention of paying it back, they can just as easily create or buy fake pay stubs to corroborate their synthesized existence.
In partnership with another Alphabet group company, Google recently released a way for consumers to test their skills at detecting phishing emails. It includes eight simulated emails based on characteristics of real phishing attacks, asking the user to indicate whether each email is legitimate or fake.
According to a recent survey, 83 percent of those who deal with phishing attacks against their organizations say such attacks are increasing. The fraud solution provider market is addressing this need with phishing simulation services that help organizations identify the employees most likely to fall victim to real phishing attacks and provide them with training.
“Using Confidence Indicator Services to Enhance Qualification Capabilities in Online Applications” is the latest white paper from The Fraud Practice, released today and available to download for free. This white paper defines and discusses Confidence Indicator services in the context of a “digital body language,” which provides insight into how a consumer answered critical risk questions and may validate or call into question the information the user provided. This can help organizations better detect high-risk applicants and approve more applicants in cases where data was a limiting factor.
This risk management technique is defined and discussed in terms of how it works, the signals it provides and its business use cases, including application onboarding, credit issuance and loan origination, insurance claims fraud and more.
More and more consumers, as well as fraudsters, are browsing, buying and banking via their mobile devices. While this has benefited consumers with increased convenience, it has also created new challenges for merchants, financial institutions and others that want to reach customers in the mobile channel, as they must now also manage new risks.
In this feature article The Fraud Practice discusses the mutually beneficial relationship between mobile devices and identity document verification, focusing on how this risk management technique can be best utilized within the mobile channel and how mobile device technology has made identity document verification a more viable option for merchants and other organizations beyond financial institutions.
Since many merchants rely on manual reviews for effective risk management, these reviews should be performed in the most cost-effective way possible. This white paper focuses on building the business case for maximizing the net benefit of manual reviews, whether they are performed internally, by a third party or through a combination of both.
The Fraud Practice’s latest white paper, titled “Building the Business Case: When it Makes Sense to Outsource Manual Reviews”, seeks to help organizations build out this business case and is available for free today.
Whether an organization is building a custom modeling solution in-house, using a service provider or combining both in-house and third party resources, the fundamental components of an effective custom modeling solution are the same. Statistical models must first be created, which requires historical data, a team of modeling experts, and the right tools and software to design effective models. Next, the organization needs the infrastructure or platform to actually apply the model to live transactions, interpret the results and route the transactions accordingly. A commonly observed problem in the market, however, is that organizations put so much effort into ensuring the statistical models are accurate predictors of risk that the next step, how these models are actually deployed, is overlooked or treated as an afterthought.
This isn’t to say that model design is not a critical step. What good is efficiently deploying custom models if they cannot distinguish fraudulent from legitimate transactions? But organizations must also consider the other side of the coin: even a custom model that accurately predicts fraud nearly all of the time is of no benefit unless it can be applied to transactions, meaning the transactional and customer data can be fed to the model and the results interpreted to decide the course of action for each order.
Deployment is the second major step in executing custom models, after model design, but it is at least as important.
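The deployment layer described above can be sketched in a few lines. This is a hypothetical illustration, not a production design: the scoring function stands in for a trained model (a real deployment would call a model-serving endpoint or an in-process model object), and the thresholds and field names are invented for the example. The point is that the routing logic that interprets scores and decides each order's path is a component in its own right.

```python
# Hypothetical deployment layer: the statistical model produces a risk score,
# and a separate routing layer interprets that score to decide each order's path.

def score_order(order):
    """Stand-in for a trained model; returns a risk score in [0, 1]."""
    risk = 0.0
    if order["amount"] > 500:                          # high-value order
        risk += 0.4
    if order["ship_country"] != order["bill_country"]: # shipping/billing mismatch
        risk += 0.4
    if order["is_new_customer"]:                       # no purchase history
        risk += 0.2
    return risk

def route_order(order, reject_at=0.7, review_at=0.4):
    """Translate a model score into an action: accept, manual review, or reject."""
    score = score_order(order)
    if score >= reject_at:
        return "reject"
    if score >= review_at:
        return "review"
    return "accept"

order = {"amount": 650, "ship_country": "US", "bill_country": "CA",
         "is_new_customer": False}
print(route_order(order))  # risk 0.8 -> "reject"
```

Separating scoring from routing also lets the business tune thresholds (for example, loosening the review band during peak season) without retraining or redeploying the model itself.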