R-Labs / Case Files
Agentic workflow redesign for a small, high-leverage operating team
Deep-work hours reclaimed; AI use made trackable without added overhead.
- Duration: 4 weeks
- Year: 2025
- Surfaces: Agentic Workflow Design, Prompt and Automation Systems
Context
A compact team running an outsized portfolio across a multi-jurisdiction mandate, with growing, uncoordinated use of AI tools. Judgment-heavy work was repeatedly interrupted by coordination tax; parallel AI use was inconsistent, partially documented, and hard to audit.
Three different tools had been adopted ad-hoc by different seniors for different kinds of work. No one opposed AI use. No one owned it. The principal could feel the drag on the week before anyone could name it.
Problem
The team’s core value — its judgment — was being diluted by operational overhead. Every week, several hours of senior capacity were absorbed by reconciling what different people had asked of different AI tools, in different formats, with no eval pass or shared standard.
A second-order cost had started to show. Reviewers were skimming, because they could not quickly tell which portions of a document were human-drafted, which were AI-drafted, and which had been reworked across both. Quality was holding; the margin on quality was thinning.
Constraint
No new hires. No tool sprawl. No loss of review standards. Absolute discretion — no external vendor could see the work itself.
The final constraint was the heaviest: nothing built in the engagement was to depend on R-Labs continuing to be in the room.
Intervention
Designed an agentic operating loop organised around three explicit decision surfaces — the points where a human judgment had to sit, and nothing else. Between surfaces, scoped AI drafting steps ran in parallel, each with a lightweight eval pass built into the hand-off rather than bolted on afterwards.
The loop was written down — not as a diagram, but as a short operating contract the team could follow without R-Labs in the room. Equally deliberate was what stayed out. No shared prompt library, no vendor-branded workflow platform, no dashboard. The discipline lived in the operating contract; the tooling stayed minimal.
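The engagement's contract was prose, not code, but the shape it describes — drafts produced in parallel, an eval pass built into each hand-off, a human decision surface that only gates what the eval flagged — can be sketched. All names, checks, and structures below are illustrative assumptions, not the team's actual standards or tooling.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch of the operating loop; the real contract was a
# short written document, not software.

@dataclass
class Draft:
    author: str                     # "human" or the AI tool that drafted it
    text: str
    eval_notes: list[str] = field(default_factory=list)

def eval_pass(draft: Draft, checks: list[Callable[[str], Optional[str]]]) -> Draft:
    """Lightweight eval built into the hand-off, not bolted on afterwards:
    run each check, annotate the draft, pass it forward."""
    for check in checks:
        note = check(draft.text)
        if note:
            draft.eval_notes.append(note)
    return draft

def decision_surface(name: str, drafts: list[Draft]) -> list[Draft]:
    """A point where a human judgment must sit, and nothing else.
    Clean drafts pass; flagged drafts are held for senior review."""
    passed = [d for d in drafts if not d.eval_notes]
    held = [d for d in drafts if d.eval_notes]
    print(f"{name}: {len(passed)} passed, {len(held)} held for review")
    return passed

# An example check (an assumption): every draft must arrive labelled
# with its origin, so a reviewer can tell human from AI at a glance.
def has_source_label(text: str) -> Optional[str]:
    return None if text.startswith("[") else "missing author/tool label"

drafts = [
    Draft("tool-a", "[tool-a] memo section one"),
    Draft("human", "memo section two"),
]
annotated = [eval_pass(d, [has_source_label]) for d in drafts]
ready = decision_surface("Surface 1: scope sign-off", annotated)
```

The point of the shape, not the code: because the eval runs inside the hand-off, every draft arrives at the human surface already annotated, and AI use is trackable by construction rather than by form-filling.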
Outcome
Senior review turnaround cut to a fraction of prior time — not because the work got shorter, but because arrival at senior review became predictable, annotated, and pre-evaluated. Deep-work blocks recovered across the seniors. AI use became visible and trackable by construction, without anyone having to file a trace or fill in a form.
The engagement ended cleanly at the four-week mark. The team has been running the loop on its own since; a separately scoped follow-on to build a small internal eval harness was commissioned on the team’s terms, not ours.
Reflection
The lift was not in the AI. It was in the operating shape around the AI — the named surfaces, the eval pass at each hand-off, and the explicit decision about what the team would not automate.