§ Process · DMA-PRC-001

Twelve weeks from kickoff
to production.
Here's exactly what happens.

Things go wrong on every AI project. Honest agencies say so. This page tells you what the process looks like — including the parts that break.

Weeks 01–02
Discovery & architecture
Weeks 03–10
Build
Weeks 11–12
Deploy & handoff
§ 01 — The phases

Week by
week.

Wk 1–2 · Discovery & architecture

We dig in before we write a line of code.

We dig into your data, your existing systems, and your actual goal — which is often different from the brief. We come back with an architecture, a model approach, and a build plan with weekly milestones.

This is also where we find the problems: bad data quality, unclear success metrics, unrealistic scope. We rescope in week 2 if needed. We don't pretend these problems don't exist only to surface them in week eight.

Progress · week 1–2 of 12

Wk 3–10 · Build

Working software every Friday. Not slides.

Weekly iterative demos. You see working software every Friday — not a progress report, not a status update, but a thing you can actually use. We deploy to staging continuously.

Models get trained, evaluated, retrained. The frontend takes shape. By week 8 you're using a near-production version internally. If we're off-track, you know by week 4 — not week 11.

Progress · week 3–10 of 12

Wk 11–12 · Deploy & handoff

Production deployment. You get the keys.

Production deployment. Monitoring set up. Documentation written. Your team trained on the system. We hand over the keys — source code, trained models, deployment scripts, monitoring dashboards.

But we're still on call. Maintenance is a feature, not an upsell. The retainer is available if you want continued coverage. Our number is in the handover doc either way.

Progress · week 11–12 of 12

§ 02 — What goes wrong

Because it does.
Always.

Every AI project hits at least one of these. Agencies that don't tell you this upfront are the ones who surface it in week eleven.

Most common

Data turns out worse than expected.

Roughly half of AI projects hit this. Missing labels, inconsistent formats, insufficient volume, privacy issues that nobody flagged. We rescope in week 2 if needed. We don't pretend the data is fine when it isn't.

Technical

The model doesn't hit the accuracy bar in week 8.

Sometimes happens. We try alternative architectures, collect more data, narrow the scope. We'd rather ship a smaller thing that works than a bigger thing that doesn't. We'll never push a failing system to production.

Stakeholder

A stakeholder changes the brief in week 6.

Common. We re-quote the additional work. You decide if it's worth it. We adjust. We don't absorb scope changes silently — that's how projects break at launch.

§ 03 — What we don't do

Hard limits.
No exceptions.

× Sub-contract to anyone outside our team without your explicit knowledge
× Use your project to train a junior engineer on your budget
× Disappear after launch — maintenance is part of the deal
× Work without an NDA on anything sensitive — always signed before any data is shared
× Build a system we can't honestly maintain or don't understand well enough to debug at 2am
§ 04 — What you get at the end

Everything you
need to own it.

Working AI product in production

Not a prototype. Not a demo. Running on real infrastructure, handling real load.

Source code — yours, fully

Complete repo. No dependency on our infrastructure. No lock-in.

Trained models and weights

Every model we trained is handed over. You can retrain, fine-tune, or replace.

Deployment scripts and infra config

Docker files, CI/CD pipelines, environment configs — everything to redeploy independently.

Documentation

Architecture overview, API reference, runbooks for common failure modes.

Monitoring dashboards

Model drift alerts, error rates, latency. Configured and live from day one of production.

Handover call

A session with your team walking through every part of the system. Recorded if useful.

Our number, in case it breaks at 2am

Optional retainer for ongoing coverage. Mandatory response if it's something we built wrong.

§ 05 — Ready to start

The first call is free. Thirty minutes.

We'll tell you honestly whether the project is a fit, what the architecture looks like, and what the risks are. No pitch if we can't help.

Book the first call · See pricing