Managed inference for financial matching.

SAY HELLO TO

Grath's managed reconciliation inference engine, clearing the path for AI application builders in financial services. Ultra-fast time-to-first-token, low latency, limitless throughput, and resilient scaling built on cutting-edge GPUs.

/

TRUSTED BY

TRUSTED BY 300+ COMPANIES

/

TOPA™ CLOUD

A reconciliation platform that scales as fast as your ideas. Spend less time wrestling with infrastructure, and more time innovating.

General-purpose inference is insufficient for reconciliation. Topa Cloud is dedicated managed inference infrastructure built exclusively for financial services engineers. Topa's specialised proprietary models and deterministic components ensure reliability and auditability every time.

Composable over monolithic

Reconciliation has been sold as a finished application for two decades. Topa inverts the stack: Engine, Inference, and Agent are independent services, each with its own quotas, pricing, and SLAs. Call what you need. Compose the rest. Keep full control of the user experience.

Domain-trained, not domain-adjacent

The Topa model family is trained exclusively on financial services reconciliation data: transaction-level matching, settlement-cycle reasoning, counterparty disambiguation, currency and FX handling.

Auditable by design

Every Topa output carries a confidence score, a reason code, the model version that produced it, and a trace of the rules or inference steps that led to the result. Every run is reproducible, every decision is explainable, and every byte in and out is logged against a request ID.
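
As an illustration only (the field names below are hypothetical, not a documented Topa schema), an audit-ready match result might carry metadata like this:

```python
# Hypothetical shape of an auditable match result; field names are
# illustrative, not a documented Topa schema.
match_result = {
    "request_id": "req_8f3a21",          # every byte in/out is logged against this
    "decision": "matched",
    "confidence": 0.97,                  # confidence score for the decision
    "reason_code": "AMOUNT_WITHIN_TOLERANCE",
    "model_version": "topa-inference-2024-09-01",  # model that produced the result
    "trace": [                           # rules/inference steps behind the decision
        "composite_key: reference+currency",
        "amount_delta: 1 minor unit, within tolerance",
    ],
}
```

Because the model version and trace travel with every result, a run can be replayed and each decision explained after the fact.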

Versioned, not evergreen

Frontier model providers quietly retrain and redeploy. That's fine for consumer chat. It's unacceptable for a reconciliation run you have to defend in Q3 when the auditors ask what model produced the Q1 results. Topa models are explicitly versioned and referenceable for auditability.
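
In practice that means pinning an explicit version per run. A minimal sketch of the idea (the function and config keys are hypothetical, not a published Topa SDK):

```python
# Sketch of pinning an explicit model version per run; the helper and
# config keys are hypothetical, not a published Topa SDK.
def build_run_config(model_version: str) -> dict:
    """Record the exact model version with the run so it can be cited later."""
    if model_version in ("", "latest"):
        raise ValueError("Pin an explicit version; 'latest' is not auditable.")
    return {
        "model": model_version,            # e.g. "topa-inference-2024-09-01"
        "record_version_in_output": True,  # stamp the version on every result
    }

config = build_run_config("topa-inference-2024-09-01")
```

When the auditors ask in Q3 what produced the Q1 results, the answer is a string in the run record, not a guess.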

Scale without rewriting your workflow

Replace fragmented data with a single source of truth

Granular data points for better decision making

End-to-end visibility across your AI infrastructure stack

/

TOPA CLOUD MANAGED INFERENCE

Layers

Engine

Deterministic matching at scale. The rules layer that ingests your ledgers, applies composite key matching with configurable tolerances on amount, date, reference, and currency, and returns structured results with reason codes.
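
A minimal sketch of the idea (pure illustration: the field names, minor-unit amounts, and tolerance values are assumptions, not Engine's actual configuration):

```python
from datetime import date

# Toy composite-key matcher with configurable tolerances, illustrating the
# Engine concept; field names and tolerance values are assumptions.
def match_entries(a: dict, b: dict,
                  amount_tol_minor: int = 1, date_tol_days: int = 2):
    """Return (matched, reason_code) for a pair of ledger entries.

    Amounts are integer minor units (e.g. pence) to avoid float drift.
    """
    if a["reference"] != b["reference"]:
        return False, "REFERENCE_MISMATCH"
    if a["currency"] != b["currency"]:
        return False, "CURRENCY_MISMATCH"
    if abs(a["amount_minor"] - b["amount_minor"]) > amount_tol_minor:
        return False, "AMOUNT_OUTSIDE_TOLERANCE"
    if abs((a["date"] - b["date"]).days) > date_tol_days:
        return False, "DATE_OUTSIDE_TOLERANCE"
    return True, "MATCHED_WITHIN_TOLERANCE"

ledger_entry = {"reference": "INV-1001", "currency": "GBP",
                "amount_minor": 25000, "date": date(2024, 3, 1)}
bank_entry = {"reference": "INV-1001", "currency": "GBP",
              "amount_minor": 25001, "date": date(2024, 3, 2)}
matched, reason = match_entries(ledger_entry, bank_entry)
# matched is True: amount differs by 1 minor unit and date by 1 day,
# both within tolerance, so reason is "MATCHED_WITHIN_TOLERANCE".
```

The deterministic layer returns a structured result with a reason code either way, which is what makes the downstream audit trail possible.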

/01

ENGINE

Inference

A domain-trained model built exclusively on financial services reconciliation data. It handles fuzzy matching across timing, amount, and reference variations; exception classification; counterparty disambiguation; and settlement-cycle reasoning.
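
To make "fuzzy matching across reference variations" concrete, here is a toy similarity score (the normalisation rules and threshold are assumptions for illustration, not how the Topa model actually scores pairs):

```python
from difflib import SequenceMatcher

# Toy fuzzy reference matching; the normalisation and the 0.8 threshold
# are illustrative assumptions, not the Topa model's actual behaviour.
def reference_similarity(ref_a: str, ref_b: str) -> float:
    """Score two transaction references after normalising common variations."""
    def normalise(ref: str) -> str:
        # Drop case, whitespace, and separators like "/" or "-".
        return "".join(ch for ch in ref.upper() if ch.isalnum())
    return SequenceMatcher(None, normalise(ref_a), normalise(ref_b)).ratio()

score = reference_similarity("INV/1001-ACME", "inv 1001 ACME Ltd")
# score is 0.88: the references agree after normalisation except for the
# trailing "Ltd", so they would clear a 0.8 candidate-match threshold.
```

A trained model goes well beyond string similarity (timing, amounts, counterparties), but the scoring-and-threshold shape of the problem is the same.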

/02

INFERENCE

Agent

A reconciliation specialist that runs on your data. Agent is a reasoning layer that orchestrates Engine and Inference and augments them with domain expertise. Use it to identify why a ledger didn't balance, draft exception narratives, and design workflow logic.

/03

AGENT

/

TESTIMONIALS

With Grath, our reconciliation process is seamless and efficient. Automated reports save us so much time!

Toby Singer

FREETRADE

Partnering with Grath has positioned Winterflood for continued success. The platform can now handle our trading volumes efficiently, and the improvements to our data management have been significant.

James Wharton

WINTERFLOOD

The AI agent they built saved our team hours every week and improved our response time. It feels like we hired a new team member who never sleeps!

Windi Kulina

CMO of Bima

/

ABOUT US

How we think differently

We work with some of the world's most successful financial services companies, not because we're the biggest, but because we're the most committed to their success.

/

COMPUTE

Low barrier to entry, ultra-high performance.

Topa Cloud provides an intuitive, powerful, and comprehensive set of developer surfaces for managing your reconciliation resources. Build on the latest accelerated compute technology from NVIDIA to power the most demanding reconciliation tasks.

/

SUPPORT

Global support, local expertise.

With teams located in the UK, Europe, the Middle East, and Australia, Grath support is available 24/7 with an average first reply time of under 15 minutes.

/

FAQ

Frequently Asked Questions

What is the difference between Topa and Grath's reconciliation platform?

The Grath reconciliation platform is the application; Topa is the engine, model, and agent, exposed as infrastructure for teams building their own workflows.

Why not just use a general-purpose LLM?

General-purpose models aren't trained on reconciliation data, return unstructured text instead of match results, and send your ledgers to a horizontal inference provider.

Do I need to replace my current reconciliation tool?

Most customers start by pointing Topa at a specific problem area while their existing system continues to run.

How does pricing work?

You pay per token, metered separately for input and output, at rates that vary by layer.
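
As a worked illustration of metering input and output separately per layer (the rates below are placeholder numbers, not published Topa pricing):

```python
# Illustrative per-token cost calculation; the rates are placeholders,
# not published Topa pricing.
RATES_PER_1K_TOKENS = {            # (input_rate, output_rate) in USD per layer
    "engine":    (0.10, 0.20),
    "inference": (0.50, 1.00),
    "agent":     (1.00, 2.00),
}

def run_cost(layer: str, input_tokens: int, output_tokens: int) -> float:
    """Input and output tokens are metered separately at the layer's rates."""
    in_rate, out_rate = RATES_PER_1K_TOKENS[layer]
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

cost = run_cost("inference", input_tokens=4000, output_tokens=1000)
# cost is 3.0: 4k input tokens at 0.50/1k plus 1k output tokens at 1.00/1k.
```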

Can I use Topa inside a regulated institution?

Yes, models are versioned and pinnable, and audit-grade logging is built in.
