Tool Calls With Agentic Code Generation Using 'Monty'

Inspired by Anthropic’s recent article on using code execution to improve MCP tool calling, and having just discovered Monty — a new sandboxed Python runtime from Pydantic — I was motivated to spend a weekend building an example project to explore further. Standard LLM tool calling has a fundamental inefficiency: the model calls one tool at a time, waits for the result, decides what to call next, and round-trips back to the model for every step. For questions that require fetching data from multiple sources, the time (and tokens) add up fast. The monty-example project explores a different approach: let the model write code that orchestrates the tool calls, then execute that code in a sandbox running inside your own application! ...
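To make the contrast concrete, here is a minimal sketch of the pattern, not the actual monty-example code: instead of one model round-trip per tool call, the model emits a small program that calls several tools in one execution. The `get_weather` tool, the model-generated snippet, and the use of bare `exec` are all illustrative assumptions; in the real project a sandboxed runtime like Monty would stand in for `exec`, and the tools would proxy to MCP servers.

```python
def get_weather(city: str) -> str:
    # Hypothetical stand-in tool; a real host would proxy this call
    # to an MCP server rather than answer from a local dict.
    return {"Paris": "18C, cloudy", "Tokyo": "22C, clear"}.get(city, "unknown")

# Code the model might generate: multiple tool calls, a loop, and
# aggregation, all resolved in a single execution instead of one
# model round-trip per call.
model_code = """
results = {city: get_weather(city) for city in ["Paris", "Tokyo"]}
summary = "; ".join(f"{c}: {w}" for c, w in results.items())
"""

# Execute with only the whitelisted tool in scope. A real sandbox
# (Monty, here replaced by exec for illustration) would also limit
# builtins, I/O, and resource usage.
namespace = {"get_weather": get_weather}
exec(model_code, namespace)
print(namespace["summary"])
# → Paris: 18C, cloudy; Tokyo: 22C, clear
```

The point of the pattern is that the inner loop runs at code speed inside the host process, and only the final `summary` needs to travel back to the model.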

April 11, 2026 · 11 min · Michael OShea