
The prototype-to-production gap — bridged.

Vibe coding gets you to a working prototype in days. What it leaves behind: API keys in .env files that get committed to git, SQL queries built from string concatenation, missing authentication on admin routes, and no rate limiting. These are not theoretical risks — they are the four most common vulnerabilities we find in prototype audits. Fixing them before users are onboarded takes hours. Fixing them after takes weeks.

Vibe Code to MVP
The Challenge

The one-person startup narrative is accurate: a founder with Cursor and Claude can build a working full-stack application in days that would have taken months before. This is real and it has changed what early-stage product development looks like. What the narrative glosses over is the prototype-to-production gap: what AI code generation reliably produces and what production deployment reliably requires are different things, and the difference is not small.

The security debt in vibe-coded prototypes is consistent across tools and languages. API keys committed directly to source code. JWT tokens that never expire. No rate limiting on authentication endpoints. CORS configured to allow all origins. SQL queries constructed with string concatenation. Passwords stored without proper hashing. These are not edge cases — they are what AI code generation produces for features that are not visible in happy-path flows. A prototype with these issues is fine for a demo. Onboarding real users to it creates real exposure.

What vibe-coded prototypes consistently lack
  • Authentication and authorization with proper session management and token lifecycle
  • Input validation and sanitization — SQL/command injection vulnerabilities are common
  • Secrets management — API keys and credentials in code or unprotected .env files
  • Rate limiting on authentication and sensitive endpoints
  • Error handling beyond the happy path — unhandled exceptions expose stack traces to users
  • Database connection pooling and query parameterization
  • CI/CD pipeline, staging environment, and deployment automation
  • Observability — Sentry for error tracking, basic uptime monitoring
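The query parameterization gap is the easiest to see concretely. A minimal sketch using Python's built-in sqlite3 driver (table and values are illustrative) shows why string concatenation is dangerous and what the fix looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

email = "alice@example.com' OR '1'='1"  # attacker-controlled input

# Vulnerable: concatenation lets the input rewrite the query.
# f"SELECT * FROM users WHERE email = '{email}'" matches every row.

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (email,)
).fetchall()
print(rows)  # [] -- the injection payload matches nothing
```

The same placeholder-binding pattern exists in every mainstream driver and ORM; the audit checks that it is actually used on every query that touches user input.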
Our Approach

We audit the prototype codebase first. The goal is to understand what exists, what works, what needs hardening versus what needs replacing. The business logic embedded in a working prototype is often correct — it represents validated product thinking. We preserve what works and harden what does not, rather than rewriting for the sake of rewriting.

Security hardening is priority one — before any users are onboarded, regardless of how long other hardening work takes. Secrets rotation, authentication implementation, input validation, injection vulnerability fixes. We produce a security findings list with severity ratings and implement critical and high findings before proceeding. The deployment infrastructure and observability work follows.

Prototype to MVP process

01
Codebase audit

Review the prototype: what works, what is architecturally sound, what needs hardening vs. replacement. Security scan for committed secrets, dependency vulnerabilities, and common injection patterns. Produce a findings list with effort estimates.

02
Security hardening — priority one

Rotate committed secrets. Implement proper authentication (session management, JWT lifecycle, refresh tokens). Add input validation and parameterized queries. Fix CORS configuration. Add rate limiting to auth and sensitive endpoints. Implement proper password hashing if applicable.
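The rate-limiting piece is small in code but absent from nearly every prototype. A minimal in-memory sliding-window sketch (limits and key are illustrative; production deployments typically use middleware backed by Redis so the limit holds across instances):

```python
import time
from collections import defaultdict

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per client key."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(list)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        recent = [t for t in self.hits[key] if now - t < self.window]
        self.hits[key] = recent
        if len(recent) >= self.limit:
            return False  # over the limit: reject (HTTP 429 in practice)
        recent.append(now)
        return True

limiter = SlidingWindowLimiter(limit=5, window=60)
results = [limiter.allow("10.0.0.1", now=0.0) for _ in range(6)]
print(results)  # first five allowed, sixth rejected
```

Applied to a login endpoint, this is the difference between a credential-stuffing attack taking minutes and taking months.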

03
Error handling and resilience

Add error boundaries throughout. Handle failure modes gracefully. Replace stack trace exposures with user-appropriate error messages. Add retry logic for external service calls with proper backoff.
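The retry-with-backoff pattern above can be sketched in a few lines (the helper name and delays are illustrative; in practice you would also cap total retries per request and only retry idempotent calls):

```python
import random
import time

def call_with_retry(fn, *, retries=3, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise  # out of retries: let the error boundary handle it
            # 0.5s, 1s, 2s... plus jitter so callers do not retry in lockstep
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retry(flaky, sleep=lambda s: None)
print(result)  # "ok" after two retried failures
```

Without the jitter term, a burst of failing clients all retry at the same instant and re-overload the recovering service.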

04
Deployment infrastructure

Production hosting on Vercel, Railway, Fly.io, or AWS based on requirements. CI/CD pipeline with staging environment. Database with connection pooling. Basic monitoring.

05
Observability

Sentry for error tracking and performance monitoring. Basic uptime monitoring. Alert routing to a channel your team actually watches. Enough visibility to know when users are hitting errors — not a full observability platform.

What Is Included
01

    Preserve-first codebase audit

    A working prototype often has correct business logic embedded in it — the problem is everything around it. We identify what to keep, what to harden, and what to replace, then scope the engagement against actual findings rather than assumptions. You get an audit report before we write a line of new code.

02

    Security hardening for AI-generated code

    AI code generation reliably misses the same things: secrets committed to git, SQL queries built via string concatenation, CORS open to all origins, and no rate limiting. We audit for the full OWASP Top 10, rotate credentials, parameterize queries, and tighten CORS and CSP headers before any users are onboarded.

03

    Authentication and authorization

    Proper JWT implementation means setting expiry, building refresh token rotation, and invalidating on logout — not just signing a payload. We handle session edge cases (concurrent sessions, forced logout, token theft), wire OAuth2 where the product needs it, and enforce server-side authorization checks on every protected route.
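The expiry half of that lifecycle fits in a short stdlib-only sketch. Everything here is illustrative (the secret, TTL, and claim names are placeholders), and a real product should use a vetted library such as PyJWT plus refresh-token rotation rather than hand-rolled signing — the point is only that a token without an enforced `exp` claim never dies:

```python
import base64, hashlib, hmac, json, time

SECRET = b"rotate-me"  # placeholder; load from a secrets manager in production

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id, ttl=900):
    """HS256-style token with a 15-minute expiry baked into the payload."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": user_id, "exp": int(time.time()) + ttl}).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token, now=None):
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token was tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    now = int(time.time()) if now is None else now
    if now >= claims["exp"]:
        return None  # expired: client must present its refresh token
    return claims

token = issue_token("user-42")
assert verify_token(token)["sub"] == "user-42"
assert verify_token(token, now=int(time.time()) + 901) is None  # expired
```

The prototypes we audit typically implement `issue_token` and skip `verify_token`'s expiry branch entirely, which is how "logged in once" becomes "logged in forever".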

04

    Deployment pipeline and hosting

    We select hosting appropriate to the MVP's actual scale: Vercel for Next.js frontends, Railway or Fly.io for backend services, Supabase or PlanetScale for the database. The pipeline includes a staging environment, branch previews, automated deploys on merge, and one-command rollback via tagged releases.

05

    Sentry observability for early-stage products

    Sentry captures unhandled exceptions, traces slow API calls, and routes alerts to Slack so you know when users hit errors before they email you. We configure source maps for readable stack traces, set performance monitoring baselines, and tune alert thresholds so you get signal without noise.

Deliverables
  • Codebase audit report with security findings and risk ratings
  • Secrets rotation, auth hardening, input validation, rate limiting
  • JWT lifecycle, OAuth2 integration, server-side authorization
  • Production deployment: staging environment and CI/CD pipeline
  • Sentry error tracking with alert routing configured
  • Deployment runbook for ongoing operations
Projected Impact

Remediating security debt after users are onboarded costs significantly more than addressing it before launch — in engineering time, in user trust, and occasionally in breach notification obligations. The cost of a pre-launch audit is predictable; the cost of a post-breach remediation is not.

FAQ

Frequently asked questions

Harden the prototype or rebuild from scratch?

Harden when: the prototype architecture is fundamentally sound (correct data model, reasonable API structure, working core logic) and the missing concerns are additive — security, monitoring, deployment — rather than structural. Rebuild when: the data model is wrong, the API is not designed for actual usage patterns, or the prototype was built in a framework inappropriate for the production use case. The audit tells us which situation applies.

What frameworks do you work with for this?

We meet the prototype where it is. Most Cursor and Claude-generated prototypes use Next.js, React, Python FastAPI, or Node.js/Express — the frameworks AI tools generate fluently. We work with whatever was generated rather than imposing a technology preference. If the framework genuinely cannot support the production requirements, we surface that in the audit.

How do you handle AI-generated code that uses deprecated patterns?

AI code generation tools have knowledge cutoffs, so they sometimes produce patterns that were deprecated after their training data was collected. We flag these in the audit and fix the security-relevant ones before deployment. Dependencies are scanned for known vulnerabilities (npm audit, pip-audit) and updated; remaining deprecated API usage is addressed within the engagement scope.

What hosting platform should we use for an MVP?

For most Next.js or Node.js MVPs: Vercel for the frontend and API routes, Railway or Supabase for the database. For Python backends: Railway, Fly.io, or a small cloud instance. For products expecting significant early traction, starting on a simple Kubernetes or container setup beats migrating to one under load. We recommend based on expected traffic patterns, team operational capacity, and cost constraints.

Can you help after launch?

Yes. We offer ongoing retainer-based engineering support post-launch. Early-stage products iterate fast and need engineering capacity that scales with discovery velocity. Retainer engagement covers feature development, bug fixes, and infrastructure scaling as user load grows.

Ready to get started?

Tell us what you are building. We will scope it, price it honestly, and give you a clear plan.

Start a Conversation

Free 30-min scoping call