Pay creators and affiliates exactly, every time.
End rounding drift, re-run surprises, and spreadsheet reconciliations. Only the proven set releases—with Finance & Tax sign-off in one place.

Who this is for
Large creator / affiliate networks
250k–500k+ payees across ads, ecommerce, sponsorships, bonuses, and refunds.
Multi-source revenue & adjustments
Multiple data feeds (impressions, clicks, orders) + corrections (refunds/chargebacks).
Ops & finance under pressure
Pain today: rounding drift, disputes, slow reconciliations, change/migration risk, audit friction.
What you get (in plain English)
Fewer creator disputes
Statements replay identically; cents add up the same way—every time.
Penny-exact outcomes
Integer math + deterministic carry with a clear, documented policy.
Faster close & clean recon
PSP/GL rows tie back to a transcript digest—no more “why is this off by a cent?”
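The phrase “integer math + deterministic carry” can be illustrated with a minimal sketch (the unit size and function names here are illustrative, not the production engine): compute in integer micro-units, quantize to whole cents exactly once at the end, and roll each payee’s sub-cent remainder forward in a carry ledger so no cent is ever lost or invented.

```python
def quantize_with_carry(earned_micros, carried_micros):
    """Late quantization: pay whole cents now, carry the sub-cent
    remainder forward to the next window.

    earned_micros:  payee earnings this window, in millionths of a dollar (int).
    carried_micros: sub-cent remainder carried in from the prior window (int).
    Returns (payout_cents, new_carry_micros).
    """
    total = earned_micros + carried_micros           # exact integer arithmetic
    payout_cents, new_carry = divmod(total, 10_000)  # 10,000 micros = 1 cent
    return payout_cents, new_carry

# Example: three windows earning $0.033333 each.
carry = 0
paid = 0
for _ in range(3):
    cents, carry = quantize_with_carry(33_333, carry)
    paid += cents
# paid == 9 cents, carry == 9_999 micros: the ledger accounts for every micro.
```

Because the carry is part of the window’s state, a replay of the same inputs reproduces the same payouts and the same ledger, cent for cent.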
Before → After (at a glance)
Traditional stack & flow (before)
- Data: Snowflake/BigQuery (+ Airflow/dbt) aggregate impressions/clicks/revenue.
- Logic: SQL/Python jobs with floating-point math; per-item rounding.
- Payout: Stripe Connect / PayPal Payouts / Tipalti bulk files.
- Back office: NetSuite; close/recs in BlackLine/Trintech.
Proof gaps: batches release because “job finished,” not because replay matched and Finance/Tax approvals were fresh and bound.
Overlay architecture (after)
- Deterministic engine: single-writer partitions, canonical fold order, 128-bit integers.
- Late quantization + carry-ledger: one-time rounding in a documented, reproducible order.
- Acceptance matrix: Finance ACK + Tax/CT (and optional SPV) with freshness & quorum.
- Authorize(window_id): returns ALLOW/HOLD with reason codes; only ALLOW releases.
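A hedged sketch of how an acceptance matrix like this might be evaluated (field names, the freshness threshold, and the quorum value are illustrative assumptions, not the product’s API): each approval must be bound to the exact output digest, still fresh, and sufficient in number before ALLOW is returned; anything else yields HOLD with reason codes.

```python
import time

def evaluate_acceptance(window_digest, approvals, max_age_s=3600, quorum=2):
    """Illustrative acceptance check: approvals (e.g. Finance ACK, Tax/CT)
    must be digest-bound, fresh, and meet quorum; otherwise HOLD."""
    reasons = []
    valid = 0
    for a in approvals:
        if a["digest"] != window_digest:
            reasons.append(f"{a['role']}: not bound to this output digest")
        elif time.time() - a["signed_at"] > max_age_s:
            reasons.append(f"{a['role']}: approval expired")
        else:
            valid += 1
    if valid >= quorum and not reasons:
        return "ALLOW", []
    if valid < quorum:
        reasons.append(f"quorum not met ({valid}/{quorum})")
    return "HOLD", reasons

now = time.time()
decision, reasons = evaluate_acceptance(
    "sha256:abc",
    [{"role": "finance_ack", "digest": "sha256:abc", "signed_at": now},
     {"role": "tax_ct", "digest": "sha256:abc", "signed_at": now}],
)
# decision == "ALLOW"; a stale or mismatched approval would flip it to HOLD.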
How it works (60 seconds)
1) Compute consistently
We compute your weekly window deterministically—no order or rounding drift.
2) Approve once, with evidence
Finance ACK + Tax/Compliance attestations stay fresh and bound to the exact outputs.
3) Release the proven set
We authorize the ALLOW set only; the rest wait with reason-coded HOLDs.
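Step 1’s “no order or rounding drift” can be sketched as a canonical fold (a simplified assumption about the approach, not the engine itself): sort rows by a stable key and aggregate with exact integers in that fixed order, so the serialized outputs—and therefore the digest—are identical on every replay, regardless of input arrival order.

```python
def canonical_fold(rows):
    """Deterministic aggregation: fold rows in a canonical sorted order
    with exact integer arithmetic, so every replay yields identical totals."""
    totals_micros = {}
    for payee_id, amount_micros in sorted(rows):  # canonical fold order
        totals_micros[payee_id] = totals_micros.get(payee_id, 0) + amount_micros
    return totals_micros

# Same rows in any arrival order produce an identical result.
a = canonical_fold([("p1", 5), ("p2", 7), ("p1", 3)])
b = canonical_fold([("p2", 7), ("p1", 3), ("p1", 5)])
# a == b == {"p1": 8, "p2": 7}
```

Integer addition is order-independent on its own; the canonical ordering matters because it also makes the sealed transcript byte-identical across replays.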
Keep your rails—no rip-and-replace.
Fits your stack
Keep what works
Stripe, Adyen, PayPal, Tipalti for payouts; NetSuite for GL. We sit in front as a pre-release gate.
Add three fields
- window_id — identifies the payout window
- output_digest — ties to the sealed transcript (replay equality)
- provider_batch_id — PSP/EBP batch reference
One pre-release call
Right before you create the PSP batch:
POST /authorize { window_id }
→ ALLOW roster + HOLD roster with reason codes
Where these fields go
- PSP/rails batch metadata: window_id, output_digest, provider_batch_id
- GL (NetSuite) custom fields: custbody_payout_window_id, custbody_output_digest, custbody_provider_batch_id
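As a sketch, the pre-release gate could be wired in like this. Only `POST /authorize` and the `window_id` field come from this page; the base URL, bearer-token auth, and the exact response shape (`allow`/`hold` keys) are assumptions for illustration.

```python
import json
import urllib.request

def authorize_window(base_url, window_id, token):
    """Call the pre-release gate right before creating the PSP batch.
    Returns the decision payload: ALLOW roster + HOLD roster with reason codes."""
    req = urllib.request.Request(
        f"{base_url}/authorize",
        data=json.dumps({"window_id": window_id}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},  # auth scheme assumed
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def gate_psp_batch(decision):
    """Release only the ALLOW roster; HOLD rows wait with their reason codes."""
    if decision.get("allow"):
        return decision["allow"]  # rows safe to include in the PSP batch
    raise RuntimeError(f"nothing to release: {decision.get('hold')}")
```

The point of the pattern is that batch creation depends on the gate’s answer, not on “the job finished.”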
Technical details (for software engineers)
View endpoints & mappings
Endpoints
Use these calls to gate release and fetch evidence:
- POST /authorize — evaluates replay equality + the acceptance matrix (ACK/CT/SPV freshness & quorum); returns the ALLOW roster + HOLD roster with reason codes
- Attestation submission — push CT/ACK/SPV payloads; signatures & freshness are verified and decisions updated
- Transcript retrieval — sealed transcript + output_digest for replay & audit
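Replay equality—digest(replay) equals digest(transcript)—can be sketched as hashing a canonically serialized output set (the serialization scheme here is an assumption for illustration; any scheme works as long as it is byte-stable):

```python
import hashlib
import json

def output_digest(payout_rows):
    """Hash canonically serialized outputs: stable row order and compact
    separators make the digest byte-for-byte reproducible across replays."""
    canonical = json.dumps(sorted(payout_rows), separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

# Same outputs, different arrival order: digests match, so replay verifies.
original = output_digest([["p2", 700], ["p1", 833]])
replay = output_digest([["p1", 833], ["p2", 700]])
# original == replay
```

A one-cent difference anywhere in the roster changes the digest entirely, which is what makes “why is this off by a cent?” checkable instead of debatable.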
Artifacts & mappings
What auditors and finance receive & where it lands:
- Sealed transcript (per window) with output_digest
- ALLOW/HOLD rosters + reason codes
- Carry-ledger (optional) for sub-cent traceability
- NetSuite Vendor Bill/Payment custom fields: custbody_payout_window_id, custbody_output_digest, custbody_provider_batch_id, custbody_transcript_url
Results teams aim for
Replays match ≥ 99.99%
Digest(replay) equals digest(transcript) across windows and re-runs.
Disputes ↓ 30–60%
Proof-backed statements and reason-coded holds reduce escalations.
Minutes, not hours
Time from window close to authorized disbursement measured in minutes.
Targets are set together during a pilot; they are goals, not guarantees.
Pilot in 30 days
What we’ll ask in discovery
- What must be true before you release a payout file today? Who signs off?
- Have you ever rolled back or reissued a payout run? Why and how long?
- How do you prove equality across versions or after replay?
Next step
Gate one cohort for two windows; measure replays, reasons, and promotion time.
