Accounting Fraud in the AI Era: What Your Auditor Still Can't See
Understand how artificial intelligence enhances manual journal entry testing, reduces false positives, and helps detect fraud that traditional tests miss.
Manual journal entry testing is required by ISA 240 and equivalent auditing standards. Every mid-sized to large company in Brazil performs this test in some form, whether to comply with external audit requirements or internal policy.
The problem is that the standard method, established two decades ago, is letting sophisticated fraud slip through undetected.
A growing body of audit research, accumulated since 2017 at institutions like Rutgers, University of St. Gallen, and Humboldt University Berlin, shows that artificial intelligence applied to journal entry testing captures risk that traditional tests can't see. The most recent work, published in 2025, proposes a specific architecture for this.
Imagine a quarter-end journal entry.
- Amount: $847,392.18. Passes the "round amount" filter.
- User: senior financial analyst, authorized to post to this account. Passes the "unusual user" filter.
- Date and time: Tuesday, 2:37 PM. Passes the "atypical timing" filter.
- Account combination: contingency provision against operating expense. Common combination in this company. Passes the "unusual counterpart" filter.
Description:
"provision adjustment per updated legal analysis"
In any traditional journal entry audit, this record goes unnoticed. Nothing in the structured fields draws attention.
But the description is generic, doesn't reference a specific process, and uses a vocabulary pattern that appears disproportionately often in irregular reclassifications.
The risk signal is in the text, which no traditional test analyzes.
This is what we need to discuss.
What is manual journal entry testing and why ISA 240 requires it
Manual journal entry testing, or JET, is the practice of auditing manual entries made to the general ledger: those that don't come automatically from an integrated subsystem.
Its importance comes from an asymmetry that every controller knows in practice but few make explicit.
In mid-sized to large companies, more than 90% of journal entries are automatic. They come from payroll, billing, tax modules, depreciation.
They are auditable by design because the rule that generated them is within the system.
The remaining 5% to 10% are manual.
Someone logged into the ERP and typed.
Reclassifications, provision adjustments, reversals, write-offs, closing entries.
It's in this 5% to 10% that, historically, virtually all relevant accounting fraud and material errors reside.
The international auditing standard recognizes this.
ISA 240 requires external auditors to test manual entries specifically.
The Sarbanes-Oxley Act, after WorldCom, reinforced this point in the American market.
In Brazil, NBC TA 240 brings equivalent requirements, and the CVM increased attention to this topic after Americanas.
The obligation to test exists.
The problem is in how it's tested.
How companies conduct journal entry audits today
The traditional method is a set of predefined rules that flag suspicious entries.
The most common:
- Round amounts ($500,000.00 instead of $497,832.41)
- Posted on weekends, holidays, or outside business hours
- Posted near period end, a classic window for earnings manipulation
- User who normally doesn't post to that account
- Unusual counterpart, like revenue against reserves
- Amount above material threshold
The tool varies.
It could be ACL, IDEA, MindBridge, a Continuous Controls Monitoring module from SAP or Oracle, or even spreadsheets.
The frequency also varies.
Some teams run it quarterly, others only during external audit preparation.
The logic is always the same: binary rules over structured fields.
Either the entry hits the rule or it doesn't.
The rules generate a list of alerts, and someone reviews it.
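To make the mechanics concrete, here is a minimal sketch of this rule logic in Python with pandas. It is illustrative only: the column names (amount, posted_at) and the thresholds are assumptions, not any specific tool's schema.

```python
# A minimal sketch of traditional rule-based journal entry testing.
# Column names and thresholds are illustrative assumptions, not a real ERP schema.
import pandas as pd

def flag_entries(entries: pd.DataFrame, materiality: float = 100_000.0) -> pd.DataFrame:
    """Binary rules over structured fields: an entry either hits a rule or it doesn't."""
    flags = pd.DataFrame(index=entries.index)
    flags["round_amount"] = entries["amount"] % 1_000 == 0
    flags["off_hours"] = ~entries["posted_at"].dt.hour.between(8, 18)
    flags["weekend"] = entries["posted_at"].dt.dayofweek >= 5
    flags["near_period_end"] = entries["posted_at"].dt.is_month_end
    flags["above_materiality"] = entries["amount"].abs() >= materiality
    # Real rule sets also check unusual users and counterparts; omitted here.
    flags["any_alert"] = flags.any(axis=1)  # the binary output: alert or no alert
    return flags
```

Notice that nothing in this sketch ever reads the description field. Every rule runs on structured data.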
This method worked for 20 years.
It's still the standard.
And it has two structural problems that nobody solves.
Why traditional journal entry auditing fails
First problem: massive false positives
A company with 500 manual entries per month, running 15 rules, generates alerts on dozens, sometimes hundreds of entries.
The controller has time to carefully review 10 or 15.
The rest becomes noise.
The consequence is perverse.
The rule exists, is documented, and appears in the audit report, but in practice it doesn't filter risk.
It filters time.
The team looks at the most obvious alerts and assumes the others are false positives.
The rule becomes window dressing.
Second problem: false negatives in sophisticated fraud
Anyone who wants to manipulate earnings or commit fraud learns the rules.
It's the oldest principle of internal control: public control becomes the fraudster's instruction manual.
So the manipulated entry is made on business days, during business hours, by authorized users, with deliberately non-round amounts and common counterparts.
It was designed to pass the filters.
It's no coincidence that major accounting fraud cases of the past two decades — WorldCom, HealthSouth, Wirecard, and Americanas — involved manual entries and passed the traditional audit tests of their time.
The pattern is consistent:
sophisticated fraud learns to camouflage as normal patterns.
How AI improves journal entry auditing
Go back to the opening example.
The $847,392.18 entry passes all traditional filters.
But if you look at the description:
"provision adjustment per updated legal analysis"
and compare it with the pattern of legitimate descriptions in the same ledger, you see an anomaly.
Legitimate descriptions usually reference specific processes, opinion numbers, event dates.
This one references nothing.
It's deliberately vague.
This pattern of disproportionately generic descriptions in irregular entries is a well-established finding in linguistic analysis research applied to accounting fraud.
What changes now is having tools to detect this systematically, at scale.
How AI detects fraud that traditional tests miss
In 2025, Huijue Kelly Duan and Miklos A. Vasarhelyi of Rutgers Business School published the paper "Manual Journal Entry Testing: Integrating Natural Language Processing and Deep Learning" in Intelligent Systems in Accounting, Finance and Management.
The central idea is simple:
the text of the entry description carries risk signals.
Until now, no established practice has used this signal systematically.
The proposed model has three layers.
First layer: quantitative fields
Keeps what traditional testing already does:
- amount
- account
- user
- date
- time
- counterpart
It doesn't discard what already works.
Second layer: textual signals with NLP
AI-powered auditing incorporates signals extracted from the description using natural language processing.
It evaluates:
- description length
- vocabulary used
- linguistic complexity
- similarity to ledger patterns
- defensive justifications
- excessively generic descriptions
Each of these dimensions alone says little.
Combined, they become a strong risk signal.
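As a sketch of how these signals could be computed, assuming scikit-learn: the TF-IDF similarity baseline and the GENERIC_TERMS list below are illustrative choices, not the paper's exact method.

```python
# Sketch of textual risk signals extracted from entry descriptions.
# GENERIC_TERMS and the TF-IDF baseline are illustrative, not the paper's method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

GENERIC_TERMS = {"adjustment", "reclassification", "analysis", "update", "provision"}

def text_features(descriptions: list[str]) -> np.ndarray:
    """One row per entry: [word count, similarity to ledger norm, generic-word ratio]."""
    tfidf = TfidfVectorizer().fit_transform(descriptions)
    centroid = np.asarray(tfidf.mean(axis=0))  # the ledger's "typical" description
    sim_to_ledger = cosine_similarity(tfidf, centroid).ravel()
    word_counts = np.array([len(d.split()) for d in descriptions])
    generic_ratio = np.array([
        sum(w.lower().strip(".,") in GENERIC_TERMS for w in d.split())
        / max(len(d.split()), 1)
        for d in descriptions
    ])
    return np.column_stack([word_counts, sim_to_ledger, generic_ratio])
```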
Third layer: continuous risk score
Instead of "passed or didn't pass," the model generates a continuous score: this entry has 0.87 risk, that other one has 0.12.
The practical effect is direct.
Instead of 200 binary alerts to review, the controller receives an ordered list and reviews the top 20.
And these 20 are, in fact, the highest risk.
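A minimal sketch of that scoring step, assuming a labeled history of entries and a feature matrix that stacks the quantitative and textual signals; gradient boosting is an illustrative model choice, not the paper's prescription.

```python
# Sketch of the continuous risk score. X_history / y_history are assumed
# historical features and labels (0 = normal, 1 = suspicious).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def rank_by_risk(X_history, y_history, X_current,
                 entries: pd.DataFrame, top_n: int = 20) -> pd.DataFrame:
    """Train on labeled history, score current entries, return the riskiest top_n."""
    model = GradientBoostingClassifier().fit(X_history, y_history)
    scored = entries.copy()
    scored["risk_score"] = model.predict_proba(X_current)[:, 1]  # e.g. 0.87 vs 0.12
    return scored.sort_values("risk_score", ascending=False).head(top_n)
```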
Architecture of AI-powered journal entry auditing
The conceptual implementation has five steps, tied together in the code sketch after the list:
1. Extract manual entries from ERP
Fields:
- amount
- account
- counterpart
- user
- date
- time
- description
2. Calculate risk signals
Quantitative:
- what traditional testing already calculates
Textual:
- description length
- atypical vocabulary
- similarity to ledger patterns
- justification signals
3. Combine signals into a composite score
A model trained on a history of entries already classified as normal or suspicious.
4. Order entries by score
From highest to lowest risk.
5. Review the highest-risk entries
Typically 5% to 10% of the total.
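Here is a minimal end-to-end sketch of those five steps, reusing the flag_entries, text_features, and rank_by_risk helpers sketched above. The CSV export and its columns are assumptions about the ERP, not a fixed format.

```python
import numpy as np
import pandas as pd

# 1. Extract manual entries from the ERP (here via a CSV export; format assumed).
entries = pd.read_csv("manual_entries.csv", parse_dates=["posted_at"])

# 2. Calculate risk signals: quantitative flags plus textual features.
quant = flag_entries(entries).drop(columns="any_alert").astype(float)
text = text_features(entries["description"].fillna("").tolist())
X_current = np.column_stack([quant.to_numpy(), text])

# 3-5. Combine into a composite score, order by it, review the top 5% to 10%.
# X_history / y_history: labeled history, as in the previous sketch.
top_n = max(20, int(0.05 * len(entries)))
review_queue = rank_by_risk(X_history, y_history, X_current, entries, top_n=top_n)
print(review_queue[["posted_at", "amount", "description", "risk_score"]])
```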
None of this is cutting-edge technology.
NLP and machine learning have existed for years.
The barrier isn't technical.
It's understanding that the problem exists.
Three questions every CFO should ask
1. How does our test incorporate the description text?
If the answer is "it doesn't," you're using a 20-year-old yardstick.
It's not wrong.
But there's enormous room for improvement.
2. How many entries does our team actually review?
If prioritization is only "those that hit the rules," your control might be window dressing.
3. Does our vendor really use AI?
Ask:
does the model analyze description text or just structured fields?
If the answer is vague, the real answer is usually no.
The future of journal entry auditing
Manual journal entry testing was born to detect fraud in structured data.
Sophisticated fraud learned to look structured.
The next leap isn't looking at more numbers.
It's looking at the text between them and the context that explains why they're there.
Academic research has already shown this works.
The technology already exists.
The barrier is organizational.
The entry that will catch you is the one that passes all the rules.
The question is whether its text will also go unnoticed.
FAQ — AI-powered journal entry auditing
What is journal entry auditing?
It's the process of reviewing entries made directly to the general ledger, especially manual ones, to identify errors, inconsistencies, and possible fraud.
What is manual journal entry testing?
Also called Manual Journal Entry Testing (JET), it's the practice of reviewing entries made manually in the ERP that weren't automatically generated by subsystems like payroll, tax, or billing.
How does AI help in accounting audits?
AI allows analyzing not just amounts and dates, but also the text of entry descriptions, identifying risk patterns that traditional tests can't detect.
Does ISA 240 require this type of audit?
Yes.
ISA 240 requires auditors to consider fraud risk and perform specific tests on manual entries, precisely because they are sensitive points for accounting manipulation.
Can this be implemented without changing ERPs?
Yes.
The analysis typically runs on entries exported from the ERP, with no need to replace the main system.
If your company still reviews journal entries only with fixed rules and manual filters, there's a great opportunity to increase control, reduce risk, and gain operational efficiency.
Abstra helps finance teams build AI-powered audit processes, integrating ERPs, compliance rules, and intelligent analysis to reduce fraud and increase traceability.
Talk to our specialists and see how to apply this to your finance operations.
Catarina Pinheiro
Author