Training an Effective Manual Review Team

written by

Eddie Farrell

March 14, 2024

On September 26, 1983, human intuition may have saved the world. Stanislav Petrov, a Soviet Lieutenant Colonel, fielded an automated nuclear missile early warning system alert. The alarm described the launch of not one, but five, ballistic missiles supposedly originating from the United States.

At the time, the retaliation protocols for the Soviets required multiple authorizations, but by design, these would often be made without further confirmation of an attack. If Stanislav had greenlit his portion of the plan, the next person in the chain of command likely would have too. Even under immense pressure, Stanislav kept cool. Sensing that something was awry, he halted his confirmation of the attack. As it turned out, the automated system was wrong. The alert went off in error. Luckily, Stanislav listened to his gut and in doing so, may have stopped World War III from starting.

Is this a dramatic intro for a post about training fraud mitigation teams? Absolutely.

But it’s a powerful example of how fraud analysts, operating alongside your machine learning (ML) models, are still paramount. Someday, fully autonomous systems may be my recommendation for most fraud operations, but I don’t think we’re there yet. And honestly, I think that day is further off than most expect.

If you want the best possible defense against fraud losses or customer exploitation, you’re going to want to leverage the speed and efficiency of an ML model, but you need a team of highly trained investigators too.

And although my pitch for this dual setup does involve gut instinct like Petrov used, you’re going to have to foster the right kind of training environment on your teams in order to reap the deep benefits of manual review.

Intuition and instinct are going to take time to build. They really can’t be rushed. But you can take proper steps toward building a manual review team with a deep knowledge base so that these traits come naturally to your investigators. So, what’s the secret? What does it take to staff a good manual review team? You need tinkerers.

Although our interview structure at IPS has changed over the years, it’s had a few common threads and themes that remain the same. Critical thinking is highly sought after. We’re looking for people who default to inquisitive approaches to problem solving. We search for people who don’t just learn processes and then complete them on repeat without thinking. Fraud is an ever-changing landscape. The methods of attack and exploit change daily. So, we’re always looking for people who want to know the why behind the data being presented to them.

Why did the model flag this transaction as risky?
What data attributes specifically caused a risk score to rise?
Why did a shipping address, or gift card, or token delivery address suddenly change after years of consistency?
Why do certain email domains trigger review flags in some regions, but not in others?

We could list out these kinds of questions for ages. And just like a good investigator’s process, they’ll continue to change over time. But, the challenge here isn’t just about selecting the right people for your manual review team – it’s also about how you train them.

In the beginning, you want to train your team in a way that’s a bit counter to what you’re seeking from them as established analysts. In the early days of training entry-level analysts, you should teach them to review much the way your ML model consumes data. It’s much like teaching a decision tree, a rubric: if this, then that. Over the years, we’ve tried a lot of different training methodologies, and this one seems to work best.
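To make the rubric idea concrete, here’s a minimal sketch of what an if-this-then-that trainee checklist might look like in code. The attribute names and thresholds are hypothetical illustrations, not an actual IPS checklist:

```python
# Hypothetical if-this-then-that review rubric for trainees,
# mirroring how a simple decision tree consumes transaction data.

def rubric_review(txn: dict) -> list[str]:
    """Walk a fixed checklist and collect every risk signal found."""
    findings = []
    if txn.get("shipping_address") != txn.get("billing_address"):
        findings.append("shipping/billing mismatch")
    if txn.get("account_age_days", 0) < 30:
        findings.append("new account")
    if txn.get("order_total", 0) > 500:
        findings.append("high order value")
    if txn.get("email_domain") in {"throwaway-mail.example"}:
        findings.append("disposable email domain")
    return findings

txn = {
    "shipping_address": "123 Oak St",
    "billing_address": "9 Elm Ave",
    "account_age_days": 12,
    "order_total": 89.99,
    "email_domain": "gmail.com",
}
print(rubric_review(txn))  # ['shipping/billing mismatch', 'new account']
```

The point of the sketch is the exhaustiveness: the trainee walks every branch every time, exactly the behavior you’ll later ask them to move beyond.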

You do need to be mindful about the pitfalls of this approach – after all, you want your human intelligence to offset and complement your ML model, not mirror its approach. But this is achievable with regular check-ins with new employees and reminders about the end goal.

Those first few weeks or months will likely involve lists of data attributes that analysts-in-training must review on every transaction. You may want to require that they use every tool at their disposal, even when there’s redundancy. You should likely instruct them to list out all the risky evidence they’ve found, even when the offending transaction is clearly part of a known fraud pattern that more seasoned agents are rejecting almost instantly.

Ironically, it’s these robotic steps that create a foundational understanding that allows for dynamic thinking later. The broad investigation style that considers all data now will help fine-tune the skills needed to focus on the most important data later. The timeline here is extremely subjective. At IPS, we let the trainee dictate as much of the pace of their own training as possible. There are loose expectations and typical milestones, yes, but the more fluid you can be, the better the outcome.


So, the check-ins we mentioned above. What should they look like? Good fraud investigation often looks like good storytelling. If you can talk to your trainees about why you’re asking them to review every piece of data during an investigation and explain to them that they’re building a story from the data, they’ll be better off. We use this story metaphor often during training.

We also talk a lot about how the review process can mirror that of putting together a puzzle. New employees are presented with some details of the story at the onset of the review. The fraud tool is showing them a mostly assembled puzzle. But the devil is in the details.

The finer details of the “story” of their review could be anywhere. Sometimes everything they need is right in front of them: all the puzzle pieces are present, but jumbled. More often, though, they’re going to have to seek out details, either somewhere else in your systems or perhaps out on the web. The last puzzle piece could be right under their nose, a SQL query away, or it could be found in a seemingly random post on Facebook confirming a customer vacation or recent geographical relocation.

The more you can describe this meticulous part of the process to the trainee, the more you’ll complement the flow-chart training style they’re using while shadowing with your more seasoned team members. This two-pronged approach helps an agent learn foundational skills all while understanding the vision for the end goal. It can help alleviate potential frustration with the wide-net review style that you’re requiring. And it can help the larger vision come into view with greater accuracy.

Another focal point to discuss with newer members of your team during training is the tendency to over-fixate on certain mismatches, changes in otherwise consistent behavior, or new purchase patterns. Your ML models are going to be sending the most convoluted transactions to your agents, so weird doesn’t always mean fraud.

One good way to help bolster detection of illegitimate transactions in the minds of trainees is to challenge them to focus on the fraudster’s goal – monetization. This can help counter the common tendency to hyper-focus on what are essentially data red herrings.

For example, a customer updates a physical address on their account and it’s a mismatch to some other location information, but the current transaction in question is a digital purchase with a download code going to a historically consistent customer email address. Even with this new mismatching information, what are the chances of this “fraudster” making money if the digital code is going to the true customer? How could they resell this item? Perhaps the email was recently compromised, but if there’s no evidence of that, the mismatch is probably harmless.
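As a thought experiment, you could encode this monetization question as a simple heuristic. This is an illustrative sketch only (field names and the decision logic are invented for the example, not a production risk rule):

```python
# Illustrative (not production) heuristic: weigh a data mismatch
# against whether a fraudster could actually monetize the transaction.

def monetization_risk(txn: dict) -> str:
    address_mismatch = txn["new_address"] != txn["address_on_file"]
    # A digital good delivered to the long-standing customer email
    # gives an attacker no realistic path to cash out.
    delivery_to_known_email = (
        txn["delivery_type"] == "digital"
        and txn["delivery_email"] == txn["historical_email"]
    )
    if delivery_to_known_email:
        return "low"  # the mismatch is likely a red herring
    return "elevated" if address_mismatch else "low"

txn = {
    "new_address": "77 Birch Rd",
    "address_on_file": "123 Oak St",
    "delivery_type": "digital",
    "delivery_email": "jane@mail.example",
    "historical_email": "jane@mail.example",
}
print(monetization_risk(txn))  # low
```

Even as pseudocode on a whiteboard, framing the check this way pushes trainees to ask "how would the attacker get paid?" before escalating on a mismatch alone.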

Driving home messaging about fraudster motivation can help a new team member get a handle on the difference between a normal inconsistency and a risky one.

Sometimes identifying weird vs. fraud is just plain difficult for a newer analyst. Especially when they haven’t been exposed to a large number of investigations yet. Early on in the training process, it can be challenging to understand or visualize what a typical fraudulent transaction looks like.

However, you can start to paint that picture by showing them the opposite. Once they’re used to your fraud tools and they’re getting an understanding of the data being presented to them during review, ask them to visualize what their own transaction would look like. If they were reviewing their own account, what would match? What might not? If there were mismatches, what normal everyday life circumstances could have led to that discrepancy? This practice also reinforces their story building and puzzle completion training in a new way. It’s a powerful tool.

Team Training

These are just a few of the ways you can help a new employee get great at finding the bad guys. No matter how you train, however, a deep understanding of the work takes time.

The good news is that with the right training, your new analysts can be very accurate from the start and can quickly be deployed to make decisions on their own. The foundation of their knowledge base can be expanded and added to over time under the tutelage of your more senior and experienced agents.

The great news is that these things can happen at the same time! At IPS, we have all new fraud team members work with a mentor. The goal here is to have a specific person drive home the big-picture ideas, while daily shadowing and training sessions focus on the smaller building blocks that establish a strong and deep knowledge base.

With this approach, IPS has had success numerous times over the years in deploying medium-to-large teams where many of the team members were new to fraud investigation. Seasoned experts assist managers in guidance and instruction, and within a very short time, the team is effective and accurate. Soon, the team can meet with your data engineers to discuss adjustments to risk rule stacks, ML models, and mitigation strategies alike.

So, let’s go back to our nuclear alert example. Again, we’re not aiming to compare the end of the world to fraud losses or customer friction in the wake of an errant decision, but human instinct can absolutely save you both of those things.

Imagine that it’s Black Friday week and even though your team has planned every detail, a rogue rule starts rejecting good customers buying “risky” items at 3 AM. Who’s going to see this first? Your manual review team. Who’s going to train your model by re-reviewing ML decisions? Your manual review team. Who’s going to notice that slight change in fraudster attack vector minutes or hours after the alteration? You guessed it, your manual review team.

We’ve had many short-term contracts over the years, and these can be very effective at improving your fraud mitigation success. We also think that long-term partnerships are extremely beneficial for our clients early on – blossoming into massively impactful business relationships over time.

If you’d like to explore the idea of working with IPS, please feel free to fill out the questionnaire on our website or send us a message. We’d love to talk to you about how manual review alongside ML can save you funds, frustration, and friction all at once.


    Eddie Farrell

    Eddie Farrell has been one of the Fraud Team Leaders for over 7 years. He enjoys inspiring his team members to rise above the status quo, he loves collaboration and creativity in problem solving, and he’s always willing to have a conversation about video games, board games or why he didn’t think the last season of Game of Thrones was all that bad. He enjoys physical fitness, carpentry projects and has a 9 year old Norwegian Elkhound that barks more than any other dog on planet Earth.