More than Manual Review: 5 Reasons Your Business Needs a Human Intelligence Team in an AI World

written by

Tonya Boyer

September 28, 2023

It’s well known in 2023 that AI is the future. Any casual observer will be aware of the impact ChatGPT is having on content generation. AI and machine learning models are also an increasingly common part of ecommerce fraud prevention strategies. According to the MRC 2023 Ecommerce Payments and Fraud Report, 41% of businesses use machine learning models to support their fraud prevention efforts.

Nevertheless, a human intelligence team has an important part to play in any thorough fraud prevention strategy, particularly when manual review is used in coordination with a machine learning model. According to the same MRC report, 86% of businesses plan to maintain some level of manual review in the future. In fact, in APAC, this percentage is even higher at 94%.

Why manual review?

When we talk about manual review as part of fraud prevention, we’re talking about review of an ecommerce transaction by a human reviewer for indications of fraud. These transactions can take many forms, including a customer purchasing a physical or digital product from an online storefront, signing up for a subscription service, or redeeming rewards points. At IP Services, we have experience handling fraud prevention on each of these transaction types and more.

The key to maximizing your investment in a manual review team is to deploy it strategically; the average manual review takes between 2 and 5 minutes per transaction, so reviewer time needs to go where it adds the most value. According to the MRC report referenced earlier, investment in manual review teams varies by business type and region, with APAC and enterprise-size businesses relying more heavily on manual review. But even for businesses outside those segments, a combination of manual review and machine learning is almost always the most efficient way to prevent fraud.

This is because a manual review team isn’t simply a group of agents who are limited to reactively reviewing customer transactions.

A manual review team is a group of specialized and highly trained individuals who are familiar with the fraud ecosystem and use their human intelligence to provide comprehensive end-to-end fraud support for your business. A good manual review team is really a human intelligence team.

Leveraging a human intelligence team

If you use a machine learning model to prevent fraud in your business, or if you plan to start using one in the future, below are five reasons your business also needs a human intelligence team as part of your comprehensive fraud prevention strategy.

Training the model.

Your human intelligence team should ideally be in place before your machine learning model, as that provides the perfect setup for the team to train the model. Machine learning models are a form of AI that focuses on learning – and they need something to learn from. What better way to train a model than having it learn from the highly accurate work of your human intelligence team as they decision transactions and label data?


Even after your machine learning model is up and running, there will still be a need for ongoing training and course correction. Fraud is not static, and as fraudster tactics evolve, so too will the need for accurately labeled data and correct review decisions. New fraud patterns or attacks, as well as changes to your catalog (such as adding a new product or subscription type), will create new situations in which the model has less data and is less able to make accurate decisions.

Your human intelligence team will be able to perform a thorough analysis of the risks for the new situation and use their human intuition and experience to accurately label data for the model to learn from. In this way, your customers will continue to enjoy a smooth customer experience (for example, not having their account errantly blocked by an outdated model), even as the model continues to learn from the newly labeled data.
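To make this concrete, here is a minimal sketch of what refreshing a model from analyst-labeled decisions might look like. The file name, feature columns, and model choice are illustrative assumptions, not a description of any particular production system.

```python
# Minimal sketch: refreshing a fraud model from analyst-labeled decisions.
# The file name, feature columns, and model choice are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical export of reviewed transactions: the features the model sees
# plus the analyst's final decision (1 = fraud, 0 = legitimate).
reviewed = pd.read_csv("analyst_labeled_transactions.csv")

feature_cols = ["order_amount", "account_age_days", "ip_risk_score", "device_risk_score"]
X = reviewed[feature_cols]
y = reviewed["analyst_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Check the refreshed model against held-out analyst decisions before
# promoting it, so newly labeled fraud patterns are actually covered.
print(classification_report(y_test, model.predict(X_test)))
```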

Handling edge cases.

Machine learning models are great at using data analysis and statistics to decision many transactions very quickly. But their scope is limited; a machine learning model can only make accurate decisions on straightforward transactions where everything is as it seems. For more complicated cases – what we call edge cases – the machine learning model becomes less accurate.

As an example, we’ll look at a hypothetical transaction. The customer is buying a newly released high-dollar product on a new account (potential red flag) from a market and region with low fraud rates (potential green flag) using a device with low fraud rates (potential green flag). The machine learning model may have a difficult time making a decision on a transaction that is neither obviously good nor obviously bad without further investigation.

That’s where human intelligence comes in. A fraud analyst from a human intelligence team can perform a thorough investigative analysis on the transaction in question to reach an accurate decision.

Human analysts have several advantages over a machine learning model in handling this type of transaction. They can identify and incorporate external data (third-party tooling, open-source searches, etc.), which can add new layers to the investigation. Fraud analysts also specialize in “telling the story” – that is, asking whether the transaction makes sense. This is a question that can only be asked or answered by someone using human intelligence.

In the hypothetical transaction given above, the fraud analyst might, for example, locate the customer online and determine whether (based on the customer’s age, location, profession, etc.) it makes sense for the customer to have purchased the product in question.

The fraud analyst may even proactively reach out to the customer (via phone call, text, or email) to confirm that the purchase was authorized, allowing the customer to feel secure in the knowledge that your business makes their ecommerce safety a priority.

Catching fraud the model missed.

A machine learning model only has three options: approve a transaction, reject a transaction, or send a transaction to manual review. This decision is usually made by calculating whether a transaction’s risk score falls above or below a given threshold. By necessity, the model will approve transactions that fall below the risk threshold, even if there are in fact some signs that the transactions may be fraudulent.
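As a rough illustration, a threshold-based decision step might look like the sketch below; the score source and the two threshold values are assumptions for the example, and real systems tune them to the business and its risk appetite.

```python
# Minimal sketch of threshold-based decisioning. The thresholds are
# illustrative; a production system would tune them continuously.
def decide(features, model, approve_below=0.3, reject_above=0.8):
    """Return 'approve', 'reject', or 'manual_review' for one transaction."""
    # Assumes class 1 is the fraud class in the model's output.
    risk = model.predict_proba([features])[0][1]
    if risk < approve_below:
        return "approve"        # low risk: auto-approve (still eligible for a later sweep)
    if risk > reject_above:
        return "reject"         # high risk: auto-reject
    return "manual_review"      # ambiguous: route to the human intelligence team
```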

This is where a human intelligence team once again steps in. Human intelligence teams have more control over both what they review and what decisions they can make. In addition to reactive decisions about model-identified edge cases, they also conduct proactive analysis of transactions that the model has already approved.

This analysis will frequently happen via a “fraud sweep,” where the fraud analyst reviews hundreds or even thousands of transactions per hour for patterns of fraud. The fraud analyst can then flag and block these transactions, allowing the model to learn from the new fraud labels. Human intelligence teams have the edge when it comes to pattern analysis; after all, the human brain is exceptionally good at detecting patterns.
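A minimal sketch of the sweep idea, assuming each transaction is a simple dictionary of attributes: group already-approved transactions on shared fields and surface unusually large clusters for an analyst to investigate. The field names and the cluster-size cutoff are assumptions.

```python
from collections import defaultdict

def fraud_sweep(approved_transactions,
                keys=("device_id", "card_bin", "shipping_address"),
                min_cluster=5):
    """Group approved transactions on shared attributes; return suspicious clusters."""
    clusters = defaultdict(list)
    for tx in approved_transactions:
        signature = tuple(tx.get(k) for k in keys)
        clusters[signature].append(tx)

    # Many approvals sharing the same device, card BIN, and shipping address
    # is the kind of pattern an analyst would flag for review.
    return [txs for txs in clusters.values() if len(txs) >= min_cluster]
```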

This fraud sweep process can be completed without interruption or delay to your customer checkout experience. Many businesses will use the model’s decision as final until or unless the human intelligence team indicates otherwise.

If a transaction is approved by the model, your business can begin prepping products for shipment (for physical goods) or generating accounts or download keys (for digital goods or subscriptions). Because sweeps have such high throughput, they can be performed frequently – sometimes even hourly – allowing fraud to be identified quickly after model approval. Products purchased fraudulently from an online storefront can be recalled before they ship to customers, and access to digital goods can be revoked before fraud loss becomes a larger issue.

Easing friction when the model blocks good customers.

Human intelligence teams prioritize customer experience. Many human intelligence teams, including the one at IP Services, also specialize in providing customer support related to fraud blocks. When a customer is blocked from purchase or signup by the machine learning model, our team is specially trained to receive the customer’s complaint, identify the block, and determine whether the customer can be unblocked.

This determination frequently requires a manual review. If the fraud analyst determines the customer is legitimate, the account will be unblocked as soon as possible. If the customer is not legitimate (for example, the purchase attempt was made by a bad actor with a stolen credit card), the account will remain blocked.

For legitimate customers, unblocking their account is only the first step. The key to real customer satisfaction is how the situation is then communicated back to the customer.

Many fraud prevention systems can automatically send prewritten template responses to customers whose accounts have been restored. But customers frequently find these responses underwhelming, lacking both detail and empathy. Chatbots are slightly better, but customers still find them frustrating and unhelpful. Human customer service provides the best user experience.

According to a 2019 article from Forbes, 86% of consumers prefer to interact with a human agent, and 71% said they would be less likely to use a brand if human customer service wasn’t available.

At IP Services, our team of fraud analysts has experience communicating with customers in a variety of ways – from phone calls to chats to text messages. We pride ourselves on being able to empathize with the customer about their experience and, in many cases, we’re able to turn the customer’s poor experience of being blocked into a good experience of a quick account recovery from someone who understands their concerns.

Holding the model accountable – handling fraud attacks and system vulnerabilities.

As we touched on earlier, machine learning models are great at doing heavy lifting when they’ve been trained for a certain situation. But fraud is constantly changing; bad actors are highly motivated to try to beat the system, and, upon being blocked in one way, they immediately start searching for vulnerabilities. As fraudsters change their approach, they will always eventually find a way to exploit the system.

Machine learning models cannot keep up in real time with bad actors. Your human intelligence team can, of course, continue to label data to train the model for the new situation, but this process will take time. And in the meantime, fraud may be flooding your systems, causing your fraud loss to increase.

A human intelligence team will have a multi-tier approach to handling fraud attacks. The first step is to prevent as much fraud as possible from being pushed through your system. This will likely mean pulling all approved transactions with certain parameters into a sweep format so that the patterns can be easily identified and the accounts rejected. This mitigates fraud loss until the underlying vulnerability can be fixed permanently.
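As an illustration of that containment step, the sketch below pulls approved transactions matching a hypothetical attack signature and marks them for bulk rejection pending analyst review. The field names and status values are placeholders for whatever your order system actually exposes.

```python
def containment_sweep(approved_transactions, attack_params):
    """Pull approved transactions matching the attack's parameters for bulk rejection."""
    matches = [
        tx for tx in approved_transactions
        if all(tx.get(field) == value for field, value in attack_params.items())
    ]
    for tx in matches:
        tx["status"] = "rejected_pending_review"  # halt fulfillment until an analyst confirms
    return matches

# Tiny illustrative run: contain an attack abusing one promo code and card BIN.
approved = [
    {"id": 1, "promo_code": "LAUNCH50", "card_bin": "411111", "status": "approved"},
    {"id": 2, "promo_code": None, "card_bin": "520000", "status": "approved"},
]
print(containment_sweep(approved, {"promo_code": "LAUNCH50", "card_bin": "411111"}))
```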

Your human intelligence team will also be able to assist in identifying the system vulnerabilities that allowed the fraud attack to get through. This is part of the ongoing strategic fraud prevention guidance that a human intelligence team can provide for your business.

The system vulnerability in this case may be a gap in the machine learning model’s parameters, or it may be an exploit that bypasses review by the model entirely. Through testing and data analysis, your human intelligence team will be able to track down the vulnerability and identify ways in which it can be resolved.

A human intelligence team’s top priority is to hold the machine learning model accountable and ensure your business is receiving the best possible fraud prevention service, regardless of the limitations and gaps that may arise.


Machine learning and human intelligence go hand in hand. Both are integral parts of a successful and efficient fraud prevention strategy.

While machine learning models do the heavy lifting of decisioning straightforward transactions, a human intelligence team must be there to investigate edge cases and more complicated transactions, and to course correct when the machine learning model gets a decision wrong. A human intelligence team must take point on labeling data the model can learn from, and on identifying vulnerabilities that bad actors can exploit.

A human intelligence team, working for you, is exactly what your business needs to complement a machine learning model fraud prevention system. No fraud prevention strategy is complete without human intelligence.


    Tonya Boyer

    Tonya has been with IP Services since 2014. After several years serving as a Subject Matter Expert in the cloud computing space, she began managing the Fraud Protection team in 2017. She believes in creating a happy, casual but professional workspace where everyone can live their best lives while doing good work. She is dedicated to community outreach and helps coordinate the IPS Connects volunteer and donation committee.