---
title: "Anthropic wants Claude to play with money, unleashes finance agents"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/285255026.md"
description: "Anthropic has launched financial agent templates for its Claude AI service, aimed at enhancing financial operations. These templates include skills, connectors, and subagents to assist with tasks like KYC/AML compliance. For instance, the KYC Screener agent evaluates onboarding records and assigns risk ratings. Other agents include tools for pitch building, meeting preparation, and market research. Despite concerns about accuracy, Anthropic emphasizes user oversight in reviewing and approving Claude's outputs before implementation."
datetime: "2026-05-05T19:49:08.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/285255026.md)
  - [en](https://longbridge.com/en/news/285255026.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/285255026.md)
---

# Anthropic wants Claude to play with money, unleashes finance agents

If you've ever read Anthropic's disclaimer that responses generated by Claude may contain mistakes and thought, "That's what I need to spice up financial operations," you're in luck.

Anthropic has released a set of financial agent templates designed to allow its Claude AI service to better assist with financial tasks.

"Each agent template is a reference architecture that packages three things: skills (instructions and domain knowledge for the task), connectors (governed access to the data the task runs on), and subagents (additional Claude models that are called upon by the main agent, for specific sub-tasks such as comparables selection or methodology checks)," the company explains.

The terminology can be a bit murky because, at the end of the day, it's all just a model pursuing a goal in an iterative loop with resources like tools and data.

Claude Code itself is an agentic harness: software that runs an underlying model inside Anthropic's defined control flow. When the Claude model is driving that control flow toward a goal – deciding which tools to use and which data to access – that's an agent.
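That iterative loop can be sketched schematically (a toy in Python: the "model" is a trivial stand-in and the tool names are invented, but the shape – model picks an action, harness executes it, result goes back into context – is the point):

```python
# Schematic of an agentic loop: each turn the model chooses a tool call,
# the harness runs it, and the result is appended to the history the
# model sees next turn. Everything here is illustrative, not Anthropic's code.

def toy_model(goal, history):
    """Stand-in for the LLM: returns the next tool call, or None when done."""
    if not history:
        return ("fetch_filings", {"ticker": "ACME"})
    if len(history) == 1:
        return ("compute_ratio", {"data": history[-1]})
    return None  # goal reached, stop looping

TOOLS = {
    "fetch_filings": lambda ticker: {"revenue": 100, "debt": 40},
    "compute_ratio": lambda data: data["debt"] / data["revenue"],
}

def run_agent(goal):
    history = []
    while True:
        step = toy_model(goal, history)
        if step is None:
            return history[-1] if history else None
        tool, args = step
        history.append(TOOLS[tool](**args))

print(run_agent("assess leverage"))  # 0.4
```

A real harness adds streaming, error handling, and guardrails, but the control flow is still this loop.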

Then there are subagents, and these are really just API calls to Claude using specialized system prompts, specified tools, and context provided by an orchestration system. They're a bit like functions in a program that handle a particular aspect of an application.

So Anthropic's finance agents consist of: skills, which are markdown files that describe workflows; connectors, which are integrations with external services; and subagents, made up of a focused system prompt, specific tools, and contextual data.
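In SDK terms, a subagent is little more than a scoped request. A minimal sketch of building one, assuming the parameter names of the Anthropic Messages API (`model`, `system`, `tools`, `messages`); the prompt text, model string, and tool definition are invented for illustration:

```python
# Assemble the parameters for a hypothetical subagent call: a focused
# system prompt, a narrow tool list, and context handed down by the
# orchestrator. You would pass the result to
# anthropic.Anthropic().messages.create(**params).
def subagent_params(system_prompt, tools, context, model="claude-opus-4"):
    return {
        "model": model,          # illustrative model name
        "max_tokens": 1024,
        "system": system_prompt,
        "tools": tools,
        "messages": [{"role": "user", "content": context}],
    }

params = subagent_params(
    system_prompt="You select comparable companies for a valuation. "
                  "Return tickers only.",
    tools=[{"name": "screen_peers",  # invented tool definition
            "description": "Screen for peer companies by sector and size",
            "input_schema": {"type": "object", "properties": {}}}],
    context="Target: mid-cap industrial, revenue ~$2B.",
)
```

The orchestrator owns the loop; the subagent is just one such call with a narrower brief, much like a function handling one corner of an application.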


For example, Anthropic's Know-Your-Customer Screener agent template (kyc-screener) includes a skill called kyc-rules that spells out how Claude should apply a firm's KYC/AML (anti-money laundering) rules to a parsed onboarding record. The rules tell the AI model to assign a risk rating, check documents, cite rule outcomes, and produce a result formatted thus:

```
{
  "risk_rating": "low | medium | high",
  "disposition": "clear | request-docs | escalate-EDD | decline-recommend",
  "missing_documents": ["..."],
  "escalation_reasons": ["rule 4.2: confirmed PEP", "..."],
  "rule_outcomes": [{"rule_id": "...", "outcome": "...", "evidence": "..."}]
}
```

This JSON data would presumably be useful to whatever corporate system receives it.
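Useful, that is, if something checks it first. A minimal sketch of the kind of gate a receiving system might apply, using the field names from the template above – the acceptance rules themselves are assumptions, not Anthropic's:

```python
# Validate a KYC screening result before any downstream system acts on it.
# Field names match Anthropic's kyc-screener output format; the checks
# are illustrative.
ALLOWED_RATINGS = {"low", "medium", "high"}
ALLOWED_DISPOSITIONS = {"clear", "request-docs",
                        "escalate-EDD", "decline-recommend"}

def validate_kyc_result(result: dict) -> list[str]:
    """Return a list of problems; empty means the record can go to review."""
    problems = []
    if result.get("risk_rating") not in ALLOWED_RATINGS:
        problems.append(f"bad risk_rating: {result.get('risk_rating')!r}")
    if result.get("disposition") not in ALLOWED_DISPOSITIONS:
        problems.append(f"bad disposition: {result.get('disposition')!r}")
    # An escalation with no stated reason is exactly the kind of output
    # a human reviewer should bounce.
    if (result.get("disposition") == "escalate-EDD"
            and not result.get("escalation_reasons")):
        problems.append("escalation without escalation_reasons")
    return problems

sample = {"risk_rating": "high", "disposition": "escalate-EDD",
          "missing_documents": [],
          "escalation_reasons": ["rule 4.2: confirmed PEP"],
          "rule_outcomes": []}
print(validate_kyc_result(sample))  # []
```

Schema checks like this catch malformed output; they say nothing about whether the risk rating is actually right, which is where the humans come in.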

Anthropic's list of agents includes: Pitch builder; Meeting preparer; Earnings reviewer; Model builder; Market researcher; Valuation reviewer; General ledger reconciler; Month-end closer; Statement auditor; and, as previously noted, KYC screener.

These can be applied to Claude Cowork and Claude Code as plugins or as a "cookbook" – copyable code snippets – for Claude Managed Agents.

You may be thinking that finance tends to be fairly unforgiving when it comes to sciency stuff like numbers. Perhaps you're unimpressed that Anthropic's Opus 4.7 model scored an "industry leading" 64.37 percent on Vals AI's Finance Agent benchmark – a failure rate that would get a human tossed.

Worry not, because Anthropic expects that users will "stay firmly in the loop – reviewing, iterating on, and approving Claude's work before it goes to a client, gets filed, or is acted on."

With accounting comes accountability. ®

### Related Stocks

- [GS-C.US](https://longbridge.com/en/quote/GS-C.US.md)
- [GS-A.US](https://longbridge.com/en/quote/GS-A.US.md)
- [GS-D.US](https://longbridge.com/en/quote/GS-D.US.md)
- [GS.US](https://longbridge.com/en/quote/GS.US.md)
- [FNCL.US](https://longbridge.com/en/quote/FNCL.US.md)
- [IAI.US](https://longbridge.com/en/quote/IAI.US.md)
- [XLF.US](https://longbridge.com/en/quote/XLF.US.md)

## Related News & Research

- [GIC backs Anthropic-linked new AI-native enterprise services firm](https://longbridge.com/en/news/285493000.md)
- [Anthropic deepens finance push with 10 new AI agents for banks, insurers](https://longbridge.com/en/news/285228498.md)
- [Klaviyo Expands Integration with Anthropic to Bring Agentic Marketing Workflows to Claude | KVYO Stock News](https://longbridge.com/en/news/285583779.md)
- [It Only Took Seconds For A Claude AI Agent To Go Rogue And Delete Company's Database — 'I Violated Every Principle I Was Given'](https://longbridge.com/en/news/285414684.md)
- [The Goldman Sachs Group, Inc. $GS Shares Sold by Heartland Bank & Trust Co](https://longbridge.com/en/news/285196211.md)