Select Interactive
AI · Case Study · Tech Stack & Tools · 8 min read

From Zero to Near-MVP in Under Two Weeks: How We Guided a Small Dev Team Through Agentic AI Scaffolding and Modern Stack Adoption

A real case study showing how Cursor, Claude Code, and targeted prompting accelerated a small team’s React plus TanStack plus shadcn/ui project from a blank slate to a functional near-MVP in record time.

Jeremy Burton

Partner, Select Interactive

Key takeaway


A small engineering team wanted to build a custom app from scratch but didn't quite know the best way to get started with agentic AI and a modern tech stack. With a few days of consulting, example prompts, and a live review of the codebase, they shipped a near-MVP in under two weeks.

The result: a production-ready foundation built on Vite with React and TypeScript, using TanStack Router, TanStack Query, and TanStack Form with shadcn/ui and Tailwind, completed in less than a quarter of the time a typical manual development sprint would have taken.

The Challenge

The client had two developers on the project with limited recent exposure to today’s frontend ecosystem. They needed a scalable foundation without burning weeks debating boilerplate, folder layout, or which libraries to glue together.

The goal was to move fast while adopting practices that hold up over time, including hooks for future agentic AI features alongside clean APIs and observable behavior.

Initial Project Scaffolding With Precise Prompting

We started with a structured discovery conversation: what the application needed to accomplish, roughly who would use it, and what “done” looked like for their first meaningful milestone. That context shaped every prompt that followed.

From there we ran guided prompting sessions in Cursor and Claude Code. The stack we locked in was Vite with React and TypeScript, TanStack Router, TanStack Query, TanStack Form, and shadcn/ui with Tailwind, matching patterns we rely on ourselves.

What good prompts looked like

Concrete beats vague. Effective patterns included naming the frameworks explicitly, declaring non-negotiables such as routing and styling approach, and requesting file-level outputs the team could inspect. Illustrative snippets (domains anonymized):

Scaffold out a Vite React TypeScript project with TanStack Router, TanStack Query, and TanStack Form, using shadcn/ui with Tailwind. This project will run as a SPA on the client while allowing for a Node.js server for any necessary server-side API calls for data or 3rd-party integrations. Please include Vitest and React Testing Library for application-wide test suites and configure a minimum requirement of 80% code coverage.
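A prompt like the one above typically yields a Vitest configuration along these lines. This is an illustrative sketch, not the client's actual config; exact fields vary by Vitest version, but the thresholds mirror the 80% coverage floor stated in the prompt:

```typescript
// vitest.config.ts — illustrative sketch of what the scaffold produces
import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  test: {
    environment: "jsdom",              // DOM APIs for React Testing Library
    globals: true,                     // expose describe/it/expect globally
    setupFiles: "./src/test/setup.ts", // hypothetical path for RTL matchers
    coverage: {
      provider: "v8",
      // Fail the run if coverage dips below the agreed 80% floor
      thresholds: { lines: 80, functions: 80, branches: 80, statements: 80 },
    },
  },
});
```

Encoding the threshold in config, rather than in a README, means the coverage requirement is enforced on every test run instead of relying on reviewer memory.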

Outcome: a fully configured codebase with routing, data fetching, forms, and a UI foundation in minutes rather than days. The next step was to provide prompts for UI development of the account registration, login screen, and initial dashboard concepts.

Project Architecture Review

Once scaffolding landed, we walked the repo live with everyone on the call: route tree, loader patterns, modules that own server adapters versus UI, and where types ought to accumulate.
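To make those boundaries concrete, here is one plausible layout in the spirit of what we reviewed (directory names are illustrative, not the client's actual tree):

```
src/
  routes/            # TanStack Router file-based route tree
    __root.tsx       # root layout: providers, app shell, devtools
    index.tsx
  features/          # UI and hooks grouped by domain
  lib/
    api/             # server adapters; the only place fetch logic lives
    queries/         # query key factories and shared query options
  components/ui/     # shadcn/ui primitives
  types/             # shared types accumulate here
```

The point of the walk-through was less the specific names than the rule behind them: server adapters stay out of UI modules, and types collect in one predictable place.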

Recommendations covered folder conventions, sensible state boundaries, type-safety norms, and a pragmatic testing stance (Vitest plus React Testing Library) so quality did not lag behind momentum.

We tightened a few seams on the spot, explaining why each change mattered so the tradeoffs stuck. Adjustments ranged from narrowing query keys and cache shapes to simplifying a layout that duplicated providers.
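The query-key narrowing is easiest to show in code. This is a minimal sketch of the factory pattern we recommended, using a hypothetical "accounts" domain rather than the client's actual entities:

```typescript
// Illustrative query key factory for a hypothetical "accounts" domain.
// Narrow, hierarchical keys let cache invalidation target exactly one
// slice of the cache instead of clearing everything under a broad key.
export const accountKeys = {
  all: ["accounts"] as const,
  lists: () => [...accountKeys.all, "list"] as const,
  list: (filter: string) => [...accountKeys.lists(), { filter }] as const,
  details: () => [...accountKeys.all, "detail"] as const,
  detail: (id: string) => [...accountKeys.details(), id] as const,
};

// e.g. queryClient.invalidateQueries({ queryKey: accountKeys.lists() })
// refetches every list variant while leaving detail entries cached.
```

Because every key flows through one factory, renaming a domain or reshaping a key is a single-file change, and agentic tools editing the codebase later have one obvious place to look.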

Composable UI Component Building

With architecture stable, attention shifted to reusable UI surfaces: core layouts, form sections, filtered tables, empty states. We leaned on targeted prompts plus pair-style sessions while the group watched edits land in Cursor.

The emphasis stayed on accessibility defaults, responsive behavior, and components that behave predictably whether a human writes the next slice or agentic tooling drafts it.
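"Predictable" here mostly meant pure, variant-driven styling. Below is a minimal sketch of the pattern, a hand-rolled stand-in for a library like class-variance-authority; the Tailwind class strings are illustrative, not the project's actual design tokens:

```typescript
// Variant-driven class resolution for a shadcn/ui-style button.
type ButtonVariant = "default" | "destructive" | "ghost";
type ButtonSize = "sm" | "md" | "lg";

const base =
  "inline-flex items-center justify-center rounded-md font-medium focus-visible:ring-2";

const variants: Record<ButtonVariant, string> = {
  default: "bg-primary text-primary-foreground hover:bg-primary/90",
  destructive: "bg-destructive text-destructive-foreground",
  ghost: "hover:bg-accent hover:text-accent-foreground",
};

const sizes: Record<ButtonSize, string> = {
  sm: "h-8 px-3 text-sm",
  md: "h-10 px-4",
  lg: "h-12 px-6 text-lg",
};

// Pure function: the same inputs always yield the same class string,
// which keeps rendering predictable whether a human or an agent
// writes the next component that calls it.
function buttonClasses(
  variant: ButtonVariant = "default",
  size: ButtonSize = "md"
): string {
  return [base, variants[variant], sizes[size]].join(" ");
}
```

Constraining variants to a typed union also means a generator that invents a nonexistent variant fails at compile time rather than silently rendering an unstyled button.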

Knowledge Transfer and Hand-off

We capped the engagement with a short, repeatable curriculum on prompting: layering constraints, iterating with checkpoints, verifying diffs critically, and when to escalate to human review.

We summarized architecture decisions into plain-language notes aligned with the codebase. The team left with access to ongoing office hours slots for follow-up questions as they marched toward hardened production behavior.

What the Team Experienced

Most collaborative work landed in intentional, short bursts measured in focused hours rather than sprawling calendar blocks: seven work tracks spread across three contiguous hour bands, alternating between prompt engineering and code review checkpoints. Sessions stayed long enough for depth but short enough to avoid derailing roadmap meetings elsewhere.

Feedback after the sprint could be summed up cleanly: they crossed from tentative about unfamiliar tooling to confidently owning routing, forms, styling, and how far to trust generators.

The Results and Why This Approach Works

The team exited the engagement at near production-quality MVP scope with time left for polish, a sharp contrast with prior internal initiatives that ran longer for less visible progress.

  • Initial setup acceleration: ~70% faster versus their prior spreadsheet estimate for manual scaffolding.
  • Quality and consistency benefited from shared scaffolding plus live review cycles instead of one-off merges late in development.
  • The architecture leaves room for later agent-assisted features, observability, and API layering without refactoring the entire frontend.
  • The process also prepares future agents to understand the codebase and its architecture by consistently updating documentation in the codebase via AGENTS.md and various skill files.
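That last point is easiest to see with an example. An AGENTS.md along these lines sits at the repo root (contents, commands, and paths here are illustrative, not the client's actual file):

```
# AGENTS.md (illustrative excerpt)

## Stack
- Vite + React + TypeScript SPA; Node.js server only for API and 3rd-party calls
- TanStack Router (file-based routes in src/routes), TanStack Query, TanStack Form
- shadcn/ui + Tailwind; add primitives via the shadcn CLI rather than hand-editing generated files

## Conventions
- Data fetching goes through query key factories in src/lib/queries
- Run the test suite before proposing changes; coverage must stay at or above 80%
```

Keeping this file current as the codebase evolves gives every future agent session the same grounding the humans got during the live reviews.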

Traditional advisory work often emphasizes theory decks. Pairing disciplined agent tooling with hardened stack instincts lets teams learn by shipping. TanStack ergonomics paired with Cursor and Claude Code amplified every hour we spent together.

If this blend of facilitation sounds useful, our AI and modern web strategy consultations cover the fuller menu of advisory work beyond single accelerations like this one.

The Result

We were able to help a small engineering team move from uncertainty about agentic AI and modern tech stack libraries to a near-MVP application in less than ten working days. Routing, querying, validated forms, and design-system-backed UI snapped into place sooner than their most optimistic planning, and engineers left empowered to steer the codebase forward with agentic tooling they now understand. This mirrors the condensed, outcomes-first AI and tech acceleration work we routinely deliver.

If your team faces a similar sprint from zero to credible software, reach out. We routinely package short, intensive AI and stack alignment consulting like this.

Work With Us

Have a project in mind?

We build the web's most demanding applications. Let's talk about yours.

Get in Touch