Most mid-market cash flow forecasts are wrong by Tuesday afternoon. Not because the treasury team is incompetent — because the underlying spreadsheet was designed to be re-typed every Monday morning by a human who is also closing the books, chasing AR, and rebuilding the covenant pack. In 2026, that model finally breaks.
Why forecasting matters more in 2026
Mid-market CFOs in Europe are running treasury into a triple squeeze. ECB policy rates sit meaningfully above the post-2015 baseline, so the cost of being wrong about a €5M shortfall is no longer a rounding error — it is roughly €18 000 of avoidable interest per month on an RCF drawing at 4.4%. Working capital lines are also being repriced more aggressively at renewal, with banks demanding tighter covenant headroom in exchange for the same notional limit. And payment terms across European B2B continue to drift: median DSO across 140+ Arxa customers in industrial mid-market sits at 52 days in Q1 2026, up from 47 in 2022.
At the same time, the tooling has finally caught up. The 13-week treasury forecast used to be a Monday-morning artisanal product — copy-paste from the AR aging, hand-stitch the AP run, overlay payroll, hope. That model produced a median absolute percentage error in the 18–25% range at week 8. The companies running modern AI-assisted forecasts are now hitting 6–9% MAPE at week 8 with one analyst instead of three. If your finance function still treats the forecast as a weekly ritual rather than a continuous data product, you are paying for that delta in interest, in covenant cushion, and in the price of every supplier negotiation you walk into without knowing your true cash runway.
Direct vs indirect method: pick the right tool
The direct/indirect debate is not philosophical. It is operational. The two methods answer different questions, and conflating them is the single most common reason finance teams produce a forecast nobody trusts.
The direct method
The direct method builds projected cash from the receipts and disbursements ledgers upward. You start from open AR, apply expected collection timing, layer in scheduled AP runs, payroll, tax, debt service, and capex, and you arrive at a closing cash balance for each period. It is the only method that answers the question "will we meet payroll on the 27th?".
Worked example. A €120M revenue distributor opens week 14 with €3 420 000 of cash. Open AR scheduled to land that week is €4 180 000, but the model applies a 92% on-time probability based on the last 26 weeks of customer-level behaviour, so expected receipts are €3 845 600. Disbursements: €2 100 000 in supplier payments scheduled for Wednesday, €1 280 000 payroll on Friday, €340 000 VAT prepayment. Closing cash week 14 = 3 420 000 + 3 845 600 − 3 720 000 = €3 545 600. The direct method puts that number on the CFO's desk before Monday lunch.
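The arithmetic above is simple enough to sketch in a few lines. The figures below are taken from the worked example; the variable names and structure are illustrative, not a real treasury system.

```python
# Sketch of the week-14 direct-method calculation from the worked example.
# Figures and the 92% on-time probability come from the text; names are
# illustrative.

opening_cash = 3_420_000          # EUR, start of week 14
scheduled_ar = 4_180_000          # open AR due to land this week
on_time_prob = 0.92               # from 26 weeks of customer-level history

expected_receipts = scheduled_ar * on_time_prob   # EUR 3,845,600

disbursements = {
    "supplier_run_wed": 2_100_000,
    "payroll_fri": 1_280_000,
    "vat_prepayment": 340_000,
}

closing_cash = opening_cash + expected_receipts - sum(disbursements.values())
print(f"Closing cash week 14: EUR {closing_cash:,.0f}")  # EUR 3,545,600
```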
The indirect method
The indirect method starts from projected net income and reconciles to cash via non-cash items and working capital movements. It is the method that ties to your P&L and balance sheet, the one your auditors and your board pack expect, and the one that actually answers "is this business generating cash, or are we funding growth out of suppliers?".
Use the indirect method for the 12-month rolling plan, the lender pack, and any conversation with your audit committee. Use the direct method for the 13-week operational forecast. Reconciliation between the two should be a weekly automated check — if your direct cash forecast and your indirect cash forecast disagree by more than 3% on a 13-week overlap, one of them is wrong, and you need to know which.
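That weekly automated check can be as small as a tolerance comparison over the overlapping weeks. The sketch below assumes both forecasts are expressed in the same weekly buckets; the data and the `reconcile` helper are hypothetical.

```python
# Illustrative reconciliation check between the direct and indirect
# forecasts on their overlap. The 3% tolerance is from the text; the
# series are invented.

def reconcile(direct, indirect, tolerance=0.03):
    """Return (week, gap) pairs where the two views disagree by more than tolerance."""
    breaches = []
    for week, (d, i) in enumerate(zip(direct, indirect), start=1):
        gap = abs(d - i) / max(abs(i), 1e-9)   # relative to the indirect view
        if gap > tolerance:
            breaches.append((week, round(gap, 4)))
    return breaches

direct_13w   = [3.55, 3.40, 3.10, 2.95]   # EUR millions, weeks 1-4
indirect_13w = [3.50, 3.45, 3.60, 2.99]

print(reconcile(direct_13w, indirect_13w))  # week 3 disagrees by ~13.9%
```

A breach tells you one of the two views is wrong; the variance log, not a guess, should tell you which.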
13-week vs 12-month: horizons compared
These are not competing forecasts. They are different instruments. Running only the 12-month plan leaves you blind to weekly liquidity stress; running only the 13-week leaves you blind to seasonality, capex timing, and covenant trajectory. The right question is not which one to run; it is how the two interlock.
| Dimension | 13-week direct | 12-month indirect |
|---|---|---|
| Primary question | Will we meet obligations and stay above minimum cash? | Are we generating cash, and what is the funding gap? |
| Granularity | Weekly buckets, customer/supplier level | Monthly buckets, P&L line level |
| Update cadence | Weekly (Monday close) | Monthly, refreshed at month-end close |
| Driver of accuracy | AR aging quality, AP commitment data | Revenue plan, working capital assumptions |
| Typical owner | Treasury / FP&A senior analyst | FP&A lead / Head of Finance |
| Accepted MAPE at midpoint | 5–10% at week 6–7 | 8–12% at month 6 |
| Primary consumers | CFO, treasury, operations | CFO, board, lenders, audit committee |
| Reforecast trigger | Material variance >3% on a single week | Significant business change or quarterly review |
The 13-week is a liquidity instrument. Its job is to surface the Tuesday in week 6 when you breach minimum operating cash because a €2.4M customer slipped two weeks and a €900k VAT payment landed on the same day. That insight is useless at month-level granularity — by the time the monthly forecast catches it, you have already drawn the RCF.
The 12-month is a strategic instrument. Its job is to answer whether the business is funding itself, whether the working capital cycle is stable, whether the capex plan is feasible without a new facility, and whether you will cross a covenant threshold in Q3. A 13-week cannot answer any of those questions because it does not see far enough.
Where AI is finally beating spreadsheets
Until roughly 2023, "AI in treasury" was vendor theatre. The models were not materially better than a competent analyst with a clean dataset, and the integration cost erased any benefit. That has changed in three concrete ways.
Customer-level collections timing
The single biggest accuracy gain comes from forecasting when each open invoice will actually be paid, rather than applying a blended DSO to the whole AR book. A modern model uses 18–24 months of customer-level payment history, segments by payment behaviour cluster, and applies invoice-level features (amount, terms, season, dispute flag, channel). On Arxa benchmarks across industrial and B2B services portfolios, this approach delivers a MAPE of 6.4% at week 8 versus 21.3% for a static DSO model. That is not a marginal improvement. That is the difference between a forecast you act on and a forecast you ignore.
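A minimal version of the idea, assuming per-customer delay distributions have already been learned from history: spread each open invoice across payment weeks by probability, instead of applying one blended DSO to the whole book. The customers, probabilities, and amounts below are invented for illustration.

```python
# Invoice-level receipt timing: each invoice gets its customer's
# payment-delay distribution (in a real system, learned from 18-24
# months of history) rather than a portfolio-average DSO.

from collections import defaultdict

delay_dist = {  # probability of paying N weeks past the due week
    "ACME": {0: 0.80, 1: 0.15, 2: 0.05},   # mostly on time
    "BORE": {0: 0.20, 1: 0.30, 2: 0.50},   # chronically late
}

invoices = [  # (customer, amount EUR, due week)
    ("ACME", 500_000, 14),
    ("BORE", 300_000, 14),
]

expected_receipts = defaultdict(float)
for customer, amount, due_week in invoices:
    for delay, prob in delay_dist[customer].items():
        expected_receipts[due_week + delay] += amount * prob

print(dict(expected_receipts))
# week 14 carries most of ACME's cash but only 20% of BORE's;
# a blended DSO would smear both across the same average date
```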
Anomaly detection on the AP run
AI-driven anomaly detection on accounts payable now catches 72–78% of duplicate payments, vendor master fraud, and invoice timing manipulation before the run, compared to 31% for traditional rule-based controls. Across our customer base, that translates to roughly €140 000 per €100M of AP throughput in detected leakage that would otherwise have left the building. This is not a forecasting feature per se, but it materially improves the quality of the disbursement signal the forecast consumes.
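The simplest control in this family is a near-duplicate check run before the payment run: flag invoice pairs with the same vendor and amount inside a short date window. The fields, thresholds, and data below are illustrative; a production system layers many such rules plus learned features on top.

```python
# Pre-run near-duplicate check on an AP batch. Data and the 14-day
# window are invented for the sketch.

from datetime import date
from itertools import combinations

invoices = [
    {"id": "INV-1001", "vendor": "V-17", "amount": 48_200.00, "date": date(2026, 3, 2)},
    {"id": "INV-1088", "vendor": "V-17", "amount": 48_200.00, "date": date(2026, 3, 9)},
    {"id": "INV-1102", "vendor": "V-23", "amount": 12_500.00, "date": date(2026, 3, 9)},
]

def near_duplicates(batch, window_days=14):
    """Pairs with identical vendor and amount within the date window."""
    flags = []
    for a, b in combinations(batch, 2):
        same = a["vendor"] == b["vendor"] and a["amount"] == b["amount"]
        close = abs((a["date"] - b["date"]).days) <= window_days
        if same and close:
            flags.append((a["id"], b["id"]))
    return flags

print(near_duplicates(invoices))  # [('INV-1001', 'INV-1088')]
```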
Continuous scenario generation
The third win is operational. Generating a stress case used to take half a day of analyst time. Modern systems generate, in seconds, the full distribution of cash outcomes given probabilistic inputs on collections, FX, and demand. The CFO no longer asks "what does the −15% case look like?" and waits until Thursday. The CFO scrolls through it.
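The mechanics behind such a distribution can be shown with a toy Monte Carlo over two probabilistic inputs, collection rate and demand. The distributions and parameters below are invented for the sketch; a real model would fit them to history.

```python
# Toy Monte Carlo over closing cash: sample collection rate and a
# demand multiplier, and read off the distribution instead of a single
# stress case. All parameters are illustrative.

import random
import statistics

random.seed(7)  # reproducible sketch

def simulate_closing_cash(n=10_000):
    outcomes = []
    for _ in range(n):
        collection_rate = random.gauss(0.92, 0.04)   # share of AR landing this week
        revenue_shock = random.gauss(1.00, 0.05)     # demand multiplier
        receipts = 4_180_000 * min(max(collection_rate, 0.0), 1.0) * revenue_shock
        outcomes.append(3_420_000 + receipts - 3_720_000)
    return outcomes

cash = simulate_closing_cash()
cash.sort()
p5, p50 = cash[len(cash) // 20], statistics.median(cash)
print(f"P5 EUR {p5:,.0f} | median EUR {p50:,.0f}")
```

The CFO question changes from "what does the stress case say?" to "what is the 5th percentile, and does it breach minimum cash?".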
> We replaced a Monday-morning ritual with a continuous data product. The forecast is now wrong less often, and when it is wrong, we know within hours rather than weeks. That changed how we talk to our banks.
Common mistakes that wreck forecasts
1. Applying a blended DSO to the whole AR book
A 52-day DSO across the portfolio is the average of customers paying in 28 days and customers paying in 95. Applying 52 days uniformly destroys the timing signal. Segment at minimum into three behaviour clusters (early, on-time, chronic-late) and apply cluster-specific timing distributions. Better still, model at customer level for the top 80% of AR by value.
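As a sketch, the segmentation can start as crudely as bucketing customers on average days past due; the thresholds and customer data below are illustrative.

```python
# Three-way behaviour segmentation on average days past due, as a
# starting point before moving to customer-level models for the top
# 80% of AR by value. Thresholds are illustrative.

avg_days_late = {   # average days past due, from payment history
    "ACME": -3, "NOVA": 1, "BORE": 22, "ZENO": 4, "KORB": 35,
}

def cluster(days_late):
    if days_late <= 0:
        return "early"
    if days_late <= 7:
        return "on_time"
    return "chronic_late"

segments = {name: cluster(d) for name, d in avg_days_late.items()}
print(segments)
```

Each cluster then gets its own timing distribution; the blended 52-day figure disappears from the model entirely.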
2. Treating AP as a single weekly bucket
"Supplier payments week 14: €2.1M" is not a forecast — it is a budget line. Real AP runs split across the week, with some suppliers paid Tuesday, payroll-adjacent suppliers paid Friday, and tax authorities paid on statutory dates. Forecasting AP at run level rather than weekly aggregate is the difference between knowing you have €400k in cash on Wednesday morning and discovering it on Wednesday afternoon.
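To see why run-level placement matters, here is a toy intraweek cash path: the same weekly totals, placed on their actual run days, expose a midweek trough that the weekly aggregate hides. Day placement and balances are illustrative, loosely reusing the figures from the earlier worked example.

```python
# Daily cash path for one week: receipts and AP runs placed on their
# actual days instead of one weekly bucket. All placements are invented.

opening_cash = 3_420_000
receipts = {"Mon": 0, "Tue": 600_000, "Wed": 400_000, "Thu": 1_900_000, "Fri": 945_600}
ap_runs  = {"Tue": 1_400_000, "Wed": 700_000, "Fri": 1_280_000 + 340_000}  # payroll + VAT on Fri

balance, path = opening_cash, {}
for day in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
    balance += receipts.get(day, 0) - ap_runs.get(day, 0)
    path[day] = balance

trough_day = min(path, key=path.get)
print(path)
print(f"Intraweek trough: {trough_day} at EUR {path[trough_day]:,.0f}")
# the weekly view shows a healthy Friday close; the daily view shows
# a Wednesday trough more than EUR 1.2M below it
```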
3. No human overlay layer
A purely model-driven forecast cannot see the €1.8M acquisition earn-out due in October, the new ERP migration that will delay AP processing for two weeks in March, or the customer your CRO told you in confidence is about to terminate. A forecast without an overlay layer where finance can inject known-but-unmodelable items is a forecast that will be embarrassingly wrong at exactly the wrong moments.
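One possible shape for such an overlay layer, sketched with hypothetical names: each item carries its amount, reason, author, and timestamp, and is applied on top of the model output rather than edited into it, so the two signals stay separable and auditable.

```python
# Versioned, attributed overlay items layered onto the model forecast.
# The dataclass shape and example items are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverlayItem:
    week: int
    amount_eur: float          # signed: negative = cash out
    reason: str
    added_by: str
    added_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

overlays = [
    OverlayItem(week=42, amount_eur=-1_800_000, reason="acquisition earn-out", added_by="cfo"),
    OverlayItem(week=10, amount_eur=-250_000, reason="ERP migration: AP slips 2 weeks", added_by="fin-ops"),
]

def apply_overlays(model_forecast, items):
    """Return the model forecast with human overlay items layered on top."""
    adjusted = dict(model_forecast)
    for item in items:
        adjusted[item.week] = adjusted.get(item.week, 0) + item.amount_eur
    return adjusted

print(apply_overlays({42: 5_100_000, 10: 3_900_000}, overlays))
```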
4. Not measuring variance
Most teams produce a forecast and never look back at how it performed. Without a weekly variance log — predicted vs actual, decomposed by line — the forecast cannot improve. The discipline is non-negotiable: every Monday, the prior-week forecast is scored against actuals, variances above 3% are decomposed, and the top driver becomes a model adjustment for the following week.
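The Monday scoring step can be sketched as a per-line variance log using the 3% threshold from the text; the lines and numbers below are invented for illustration.

```python
# Score last week's forecast against actuals per line and surface
# variances above threshold, sorted so the top driver comes first.

forecast = {"receipts": 3_845_600, "supplier_ap": 2_100_000, "payroll": 1_280_000}
actuals  = {"receipts": 3_510_000, "supplier_ap": 2_140_000, "payroll": 1_280_000}

def score(forecast, actuals, threshold=0.03):
    log = []
    for line, f in forecast.items():
        variance = (actuals[line] - f) / f   # signed: negative = actual below forecast
        log.append({"line": line, "variance": round(variance, 4),
                    "breach": abs(variance) > threshold})
    return sorted(log, key=lambda r: abs(r["variance"]), reverse=True)

for row in score(forecast, actuals):
    print(row)
# receipts is the top driver (about -8.7%); per the discipline above,
# it becomes the model adjustment for the following week
```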
5. Naive FX treatment
For any business with more than 15% non-EUR exposure, forecasting cash in nominal local currency and converting at spot at period end is malpractice. Apply forward rates for hedged exposures, apply scenario bands for unhedged, and report the cash forecast in EUR with the FX-driven variance broken out separately. A €230M revenue SaaS business with USD/GBP exposure can easily see €600k–€900k of weekly forecast noise from FX alone if this is done badly.
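A minimal sketch of that treatment: the hedged share of each exposure converts at the forward rate, the unhedged share at a scenario band, and the spread of the resulting EUR band is exactly the FX-driven variance to break out. Rates, amounts, and hedge ratios below are illustrative.

```python
# EUR cash band from mixed hedged/unhedged FX exposures. All inputs
# are invented for the sketch.

exposures = [  # (currency, local amount, hedged share)
    ("USD", 2_000_000, 0.70),
    ("GBP",   800_000, 0.50),
]
forward = {"USD": 0.93, "GBP": 1.17}                 # EUR per unit, hedged leg
spot_band = {"USD": (0.89, 0.97), "GBP": (1.12, 1.22)}  # scenario band, unhedged leg

def eur_forecast(exposures):
    low = high = 0.0
    for ccy, amount, hedged in exposures:
        fixed = amount * hedged * forward[ccy]        # locked by the hedge
        lo, hi = spot_band[ccy]
        low  += fixed + amount * (1 - hedged) * lo
        high += fixed + amount * (1 - hedged) * hi
    return round(low), round(high)

print(eur_forecast(exposures))  # the spread between low and high is pure FX variance
```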
6. Showing the direct forecast to the board
The 13-week direct forecast is an operational artifact. It is full of detail your board does not need and assumptions your board will challenge unproductively. Show the board the 12-month indirect forecast with stress scenarios; keep the 13-week in the executive committee and treasury function. Mixing audiences is how forecasts get watered down to please everyone and inform no one.
KPIs to track
A forecasting function without KPIs is theatre. The following metrics, tracked weekly and reviewed monthly, are the minimum bar in 2026.
- Forecast accuracy at weeks 1, 4, 8, and 13. Target error ranges in mid-market: <3% at week 1, <6% at week 4, <10% at week 8, <14% at week 13. If your errors sit below these thresholds, your process is good. If they sit above, fix the inputs before you fix the model.
- Bias. Average signed error, measured as actual minus forecast. A persistently negative bias means actuals keep landing below forecast: you are systematically over-forecasting cash, which is the dangerous direction. Bias should sit within ±1.5% over a rolling 12-week window.
- DSO and DPO, weekly. Track DSO weekly, not monthly. A 4-day move in DSO on €120M revenue is €1 320 000 of working capital — your forecast must surface that within a week, not at month-end.
- Days-to-close. The time from period-end to a published actual. Best-in-class mid-market is 4 business days; the median is closer to 8. Every day of close delay is a day your forecast is comparing actuals from a stale baseline.
- Forecast refresh latency. Hours between an underlying ledger change (new invoice, new payment) and that change being reflected in the forecast. Manual processes run at 168 hours (weekly). Modern systems run at <1 hour.
- Minimum cash buffer breach count. Number of weeks per quarter where projected cash falls below the policy minimum. This is the metric that gets covenant attention, and it should be on the CFO dashboard, not buried in a treasury report.
- RCF utilisation forecast vs actual. If you draw the facility more than your forecast predicted, your forecast is optimistic and your covenant headroom is smaller than your board pack suggests.
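The first two KPIs on the list are cheap to compute once a variance log exists. The sketch below defines error as actual minus forecast over actual, matching the convention that a negative bias means over-forecasting cash; the series is invented.

```python
# MAPE and signed bias over a window of weekly forecasts vs actuals.
# Data is illustrative.

forecasts = [3.55, 3.40, 3.10, 2.95, 3.20, 3.60]   # EUR millions, by week
actuals   = [3.50, 3.30, 3.05, 2.80, 3.10, 3.45]

def mape(forecasts, actuals):
    errors = [abs(a - f) / abs(a) for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def bias(forecasts, actuals):
    errors = [(a - f) / abs(a) for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

print(f"MAPE {mape(forecasts, actuals):.1%}, bias {bias(forecasts, actuals):+.1%}")
# every forecast in this window is above its actual, so the bias is
# negative: consistent over-forecasting, the direction flagged above
```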
Putting it together: the 2026 playbook
The mid-market CFOs who get this right in 2026 share a small number of operational habits. None of them are exotic.
- One source of truth. Opening cash, AR, AP, and forecast drivers live in one system. Both the direct and indirect views are generated from the same underlying ledger, with the same close timestamp.
- 13-week direct, refreshed daily. The forecast updates as ledgers update. Monday is no longer a forecasting ritual — it is a review meeting on top of a forecast that has been live all week.
- 12-month indirect, refreshed at close. Reconciles to the 13-week on the 3-month overlap. Variance >3% triggers an investigation, not a re-forecast.
- Customer-level collections model. AI-driven, retrained monthly, with a transparent feature set the analyst can audit. Black-box models do not get past treasury committee in 2026.
- Human overlay layer. A structured place to inject capex, M&A, one-offs, and qualitative business intelligence. Versioned, attributed, and time-stamped — so you can later see who said what and when.
- Two stress scenarios, always live. +10 DSO and −15% revenue. The board sees them every quarter. The CFO sees them every week.
- Variance discipline. Monday morning: prior-week forecast scored, variance >3% decomposed, top driver fed back into the model.
The CFOs running this playbook are not generating better forecasts because they have smarter analysts. They are generating better forecasts because they have replaced a weekly Excel ritual with a continuous, instrumented, accountable data product. The analyst time saved gets redeployed into the high-judgment work — covenant strategy, supplier negotiation, capital allocation — where finance actually creates value.
The hardest part is not the technology. It is the cultural shift from "the forecast is what Anne builds on Monday" to "the forecast is a system, and Anne is responsible for its accuracy." CFOs who make that shift in 2026 will spend less on interest, sleep better through covenant resets, and walk into every bank meeting knowing their numbers are tighter than the room expects. The ones who don't will keep paying the spread.
See it run on your numbers.
Connect a single bank in 4 minutes. Get your first AI-prepared cash brief tomorrow morning.
Start your free trial
14 days · No credit card required · Cancel anytime
Written by the Arxa Intelligence team — finance leaders, engineers, and treasury operators sharing what we've learned in the field. We don't ghostwrite under fake names; if you want to talk to whoever wrote a piece, email us at hello@arxaintelligence.com.