Select Interactive
AI · Tech Stack & Tools · 8 min read

From Ticket to Deployed: Our Agentic AI Development Workflow

A behind-the-scenes look at how we connect Linear, Cursor, GitHub, and Slack into a single automated development loop that takes a ticket from open to deployed without a developer touching every step, and what that means for capacity and turnaround.

Jeremy Burton

Partner, Select Interactive

Most people think of AI as a smarter autocomplete, something that helps a developer type code a little faster. That is part of it. But the more interesting shift is what happens when AI stops being a typing assistant and starts being an agent: something that can pick up a task, work through it independently, and hand it back completed. That is exactly what we have built into our development workflow.

At Select Interactive, a developer can open a ticket in our project management tool, assign it to an AI agent, and walk away. By the time they come back with a coffee or a slice of pizza, there is a pull request waiting for review. That is not a demo or a carefully staged example. That is how we work every single day. This article is a full account of how the loop runs: the tools involved, the steps in the pipeline, what it means for capacity and delivery time, and the quality safeguards that keep it reliable.

The Four Tools That Power the Loop

Before walking through the workflow, it helps to understand what each tool brings to the table. None of these tools were originally designed to work together as a unified system. The integration is something we configured, connected, and continue to maintain. But when they are all in sync, they behave like a single automated pipeline.

  • Linear. Our project management and issue tracking system. This is where all work originates: bug reports, feature requests, content updates, and everything else. Every Linear project is mapped to a specific GitHub repository, which is what makes the automation possible.
  • Cursor. An AI-powered coding environment and the engine behind our agentic workflow. Cursor cloud agents can read a project codebase, understand its context, write and test code, and interact with GitHub, all without a developer actively involved. This is the part of the stack that does the actual development work.
  • GitHub. Where all code lives. Source control, pull requests, code review, and the CI/CD pipeline that deploys changes to our Microsoft Azure environments via GitHub Actions. GitHub is both the home for finished work and the quality gate that every change must pass through before it ships.
  • Slack. Our team communication hub. In this workflow, Slack is where the team gets notified when an agent has completed its work and a pull request is ready for human review. It keeps everyone in the loop without anyone having to check dashboards or dig through email.

Each of these tools does its job well independently. Connected and configured as a system, they eliminate most of the manual handoffs that slow projects down.
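To give a sense of how thin the glue between these tools is, the Slack side of the loop amounts to a webhook call when a pull request opens. A minimal Python sketch, assuming a standard Slack incoming webhook; the ticket ID, repository name, and URLs are illustrative, not our actual configuration:

```python
import json
from urllib import request


def build_pr_notification(ticket_id: str, pr_url: str, repo: str) -> dict:
    """Build a Slack message payload announcing an agent-opened PR.

    The wording and Block Kit layout here are illustrative; any
    incoming-webhook integration triggered by pull_request events
    would look similar.
    """
    return {
        "text": f"Agent finished {ticket_id}: PR ready for review",
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": (
                        f"*{ticket_id}* is ready for review in `{repo}`\n"
                        f"<{pr_url}|Open the pull request>"
                    ),
                },
            }
        ],
    }


def post_to_slack(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook (hypothetical URL)."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # fire-and-forget; add error handling in production
```

In practice this kind of notification is handled by an off-the-shelf GitHub-to-Slack integration rather than hand-rolled code; the sketch just shows how little machinery the "keep everyone in the loop" step actually requires.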

The Loop: From a Single Click to Deployed Code

Here is the full workflow from beginning to end, exactly as it runs in practice. The whole sequence can complete with very little human involvement beyond the initial assignment and the final code review.

  1. Create the issue. A new task is added to the relevant Linear project, whether it is a bug report, a feature request, or a content update. The issue includes a clear description and enough context for the agent to know what a successful outcome looks like.
  2. Assign it to Cursor. Once the issue is reviewed and ready, a developer assigns it to Cursor directly from within Linear. That single action is what starts the automation.
  3. The agent begins work. Cursor's cloud agent picks up the ticket, marks it as In Progress in Linear, and creates a dedicated feature branch in the linked GitHub repository. It then reads the codebase, understands the project structure, and writes the necessary code.
  4. A pull request is opened. When the work is done, the agent opens a pull request against the develop branch in GitHub. At the same moment, an automated notification goes to the team in Slack so everyone knows the PR is ready for review.
  5. Linear updates automatically. The ticket status moves to In Review on the Linear board, keeping the project state current without anyone updating it manually.
  6. Human review. A developer reviews the pull request in GitHub. They can leave comments and request changes, pull the branch locally to make edits, or approve and merge it directly.
  7. Tests must pass. Before any PR can be merged, all automated tests configured for the project must pass. This is a hard requirement, not an optional step. If the agent introduced a regression or a failing test, the PR stays open until it is resolved.
  8. Merge, deploy, and close. When the PR is merged, Linear automatically marks the issue complete. GitHub Actions kicks off immediately and deploys the change to the development environment in Azure, where the team and the client can review it in a live build before it moves to staging.
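The CI and deployment half of the loop (steps 7 and 8) can be sketched as a single GitHub Actions workflow. This is an illustrative skeleton under assumed names, not our production configuration; the test command, app name, and deployment action are placeholders:

```yaml
name: ci-and-deploy

on:
  pull_request:
    branches: [develop]   # run tests on every agent-opened PR
  push:
    branches: [develop]   # deploy once a PR is merged

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test        # placeholder test command

  deploy-dev:
    # Only runs on merge to develop, never on the PR itself.
    if: github.event_name == 'push'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/webapps-deploy@v3  # placeholder deployment step
        with:
          app-name: my-app-dev         # illustrative Azure app name
          publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
```

The key design point is the split trigger: the same workflow that gates a pull request on passing tests also performs the deployment once the merge lands, so there is no separate release step to forget.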

The developer in this workflow makes two decisions: is this issue ready to assign, and is this pull request ready to merge? Everything in between runs automatically. The cognitive load stays on the judgment calls, not the logistics.

What This Actually Feels Like in Practice

The workflow above might look like a lot of moving parts on paper. In practice, from a developer's perspective, it is genuinely simple. You review an issue, decide it is ready, assign it to Cursor, and move on. That is the trigger. Everything else runs without your involvement.

Assign the ticket, head to the kitchen for a coffee or grab a slice of pizza, and by the time you are back, there is a pull request waiting for review. We still find it a little satisfying every single time.

To make that concrete: a button rendering incorrectly on mobile (the kind of fix that used to take two or three hours of focused developer time to track down, reproduce, fix, test, and submit) now reaches the pull request stage in fifteen to thirty minutes. A form validation issue, a copy update that needs to apply across multiple pages, a layout adjustment for a specific breakpoint: all of these land in review in a fraction of the time they required before.

What makes the speed possible is that Cursor's agent does not work from a blank slate. It has full access to the codebase and the context in the Linear ticket. It understands the project structure, the conventions in use, and what the issue is asking for. That understanding allows it to make targeted changes rather than broad ones.

This also means the agent's output tends to fit naturally into the existing codebase rather than introducing inconsistencies a developer would need to clean up later. The review step is genuinely a review, not a significant rewrite.

More Than Speed: This Is About Capacity

Speed is the most visible benefit and the easiest to talk about. But it is not the most important one. The real shift is in how much work can move through the pipeline at the same time.

Before agentic workflows, a developer working on three separate issues would handle them sequentially: context switch to the first, write and test the fix, open a PR, then move to the next. Three issues, one developer, most of a day.

With agents in the loop, those same three issues can run in parallel. One agent is working on the first bug while a second handles a feature request and a third makes a content update. The developer reviews the results as they arrive rather than building them one at a time. The total elapsed time drops from most of a day to a few hours.
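The capacity difference is the same one you get from any fan-out pattern: total elapsed time drops from the sum of the tasks to roughly the longest single task. A toy Python sketch, where a short sleep stands in for an agent working a ticket (the ticket IDs are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def agent_task(ticket: str, seconds: float = 0.3) -> str:
    """Stand-in for an agent working a ticket; sleeps instead of coding."""
    time.sleep(seconds)
    return f"{ticket}: PR opened"


tickets = ["SEL-101", "SEL-102", "SEL-103"]

# Sequential: total time is the sum of the three tasks.
start = time.perf_counter()
sequential = [agent_task(t) for t in tickets]
sequential_elapsed = time.perf_counter() - start

# Parallel: total time is roughly the longest single task.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = list(pool.map(agent_task, tickets))
parallel_elapsed = time.perf_counter() - start

assert sequential == parallel  # same results, delivered sooner
print(f"sequential: {sequential_elapsed:.1f}s, parallel: {parallel_elapsed:.1f}s")
```

The sequential run takes about three times as long as the parallel one here, which mirrors the shift from "three issues, most of a day" to "three issues, a few hours" once the developer's role becomes reviewing results as they arrive.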

This is not about reducing headcount or replacing skilled developers. A developer's judgment, product sense, and review capacity are essential at every step of this workflow. What changes is how much work can be in motion at once. Engineers can take on more, deliver faster, and spend their best hours on the architectural decisions, complex problems, and creative work that genuinely requires human expertise, not on boilerplate and routine fixes.

For clients, this translates directly into shorter feedback cycles, faster iteration on requested changes, and more predictable timelines. The capacity increase is real, and it compounds across the life of a project.

Speed Does Not Mean Skipping the Safety Net

When anything in a development process speeds up, the first reasonable question is: does quality take a hit? In this workflow, the answer is no. The reason is not effort or discipline; it is structure. Quality gates are built into the pipeline, not added on top of it.

Tests are a hard gate, not a suggestion

Every repository in this workflow has an automated test suite configured in GitHub Actions. Before any pull request can be merged, all tests must pass. This is enforced at the repository level and cannot be bypassed. If the agent's code introduces a regression or a failing test, the PR stays open until it is addressed. No matter how fast the agent worked, nothing ships until the tests say it is ready.

Agents propose, people decide

The agent never merges its own pull request. Every PR requires a human reviewer who looks at the code, evaluates the approach, and makes the call to merge. The agent produces a proposal; the developer approves or redirects it. That distinction keeps humans in the decision loop at the moment that matters most.

Feature branches, always

Agents always work on dedicated feature branches. The develop branch and production environments are never directly touched. This is standard professional practice, and the agentic workflow follows it exactly. There is no shortcut that bypasses the branch-review-merge cycle.
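All three of these safeguards map onto ordinary GitHub branch protection on develop. A hedged sketch of the settings involved, expressed as the JSON body for GitHub's branch protection endpoint (`PUT /repos/{owner}/{repo}/branches/develop/protection`); the required check name is a placeholder for whatever the repository's test job is called:

```json
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["test"]
  },
  "required_pull_request_reviews": {
    "required_approving_review_count": 1
  },
  "enforce_admins": true,
  "restrictions": null
}
```

With settings like these, the platform itself enforces the rules: a PR cannot merge until the named check passes and a human has approved it, and no one (agent or developer) can push to develop directly.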

A live preview before anything moves further

When a PR merges, the automatic deployment to the dev environment gives the team and the client a real, working version of the change to evaluate before it moves to staging or production. That checkpoint is meaningful. Speed and safety are not in conflict in this system; they coexist because the quality controls are built in from the start.

The Loop Is Already Running

This is not a roadmap item or a pilot program. This is how we work today, on active client projects. Every project we run through this pipeline benefits from the same integrated loop: issues tracked in Linear, development work handled by Cursor agents, code reviewed and tested in GitHub, deployed to Azure, and the team coordinated through Slack.

The integrations are live and the workflow is proven. Small fixes that used to take most of a day now move from assignment to review in under an hour. Larger features progress faster because the routine parts of implementation no longer compete for developer attention alongside the complex parts.

If you are managing a digital product and want a development partner that can move quickly, stay organized, and deliver changes with a short cycle from request to deployed feature, this is how we work. We would love to show you what that looks like on your project.

Work With Us

Have a project in mind?

We build the web's most demanding applications. Let's talk about yours.

Get in Touch