Vibeslop
Vibeslop is what happens when AI-generated output ships without customer insight, process discipline, or feedback loops. OR13’s agentic toolkit combines the best of outcome-driven innovation, agile development, and the agentic software development lifecycle — so every sprint moves toward outcomes people actually pay for.
Framework 1 — Jobs to Be Done
People don’t buy products — they hire them to make progress in their lives. A job statement captures the functional task, the emotional state, and the social context.
Framework 2 — Hook Model
JTBD tells you what job your product is hired for. The Hook Model tells you how to make users come back every day. Four phases, repeated with enough frequency, turn conscious choices into automatic behavior.
A habit is a behavior done with little or no conscious thought. Products that change behavior start by understanding the user’s internal trigger, then build the shortest path to a variable reward.
Nir Eyal, Hooked: How to Build Habit-Forming Products (2014)
Framework 3 — Agile · Shape Up
Methodology without discipline is theater.
Framework 4 — Agentic SDLC
Skills give every role access to domain tools they wouldn’t normally touch. Agents bridge the gap — so a designer can ship, an engineer can prototype, and sales can measure what matters.
Phase 1 of 7 — Plan
What job is the customer hiring us to do?
Write job statements using the template: (action verb) + (object) + (context). Separate the functional task from the emotional need and social context. Score each outcome using importance × satisfaction gap to find the most underserved jobs — the ones where current solutions fail and customers feel the pain most acutely.
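The scoring step above can be sketched in a few lines. This is a minimal illustration using the document’s importance × satisfaction-gap formula; the outcome names and 1–10 survey ratings are hypothetical placeholders, not real data.

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Importance multiplied by the satisfaction gap.

    A large gap on an important outcome marks it as underserved;
    the gap floors at zero so overserved outcomes don't rank high.
    """
    gap = max(importance - satisfaction, 0.0)
    return importance * gap

# Hypothetical survey results: (outcome, importance, satisfaction), each 1-10.
outcomes = [
    ("minimize time to find the right report", 9.1, 3.2),
    ("minimize errors when exporting data",    7.4, 6.8),
    ("share results with the team",            5.0, 4.5),
]

# Rank most-underserved first; the top entries become candidate bets.
ranked = sorted(outcomes, key=lambda o: opportunity_score(o[1], o[2]), reverse=True)
for name, imp, sat in ranked:
    print(f"{opportunity_score(imp, sat):5.1f}  {name}")
```

Note that Ulwick’s original opportunity algorithm uses importance + (importance − satisfaction); the multiplicative form here follows this document’s wording, and either way the ranking surfaces the same pattern: important outcomes with poor satisfaction rise to the top.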
Plan the full hook cycle before you build any of it. Identify which internal triggers (negative emotions — boredom, loneliness, anxiety, uncertainty, FOMO) your product will attach to. Define the simplest possible action, choose the variable reward type (Tribe, Hunt, or Self), and plan what stored value users will create that makes the product better with each visit. Estimate the target cycle frequency — how often must users repeat the loop to form a habit?
Shape the appetite: decide how much time a bet is worth (1 week, 2 weeks, 6 weeks) before scoping. Fixed time, variable scope. The backlog is a menu of bets ranked by customer outcome. Backlog grooming means cutting scope to fit the appetite, not expanding timelines to fit scope. Threat modeling identifies security risks early so they’re designed out, not patched later.
The /plan skill coordinates the planning rituals. Product Manager and Sales define the job and appetite; agents pull opportunity scores, pipeline data, and support themes from connected tools. Engineering and Customer Success validate feasibility and urgency. The skill produces a prioritized backlog with sprint scope captured in the project tracker, fed by customer signals from the CRM.
Phase 2 of 7 — Design
What does progress look like for this job?
Define the “before” state (struggling moment) and the “after” state (desired outcome), then design the shortest path between them. Every screen and flow should move the user closer to their desired outcome. Map each screen to a job dimension: does it solve the functional task, address the emotional need, or fulfill the social context? If a screen doesn’t serve any dimension, cut it.
Apply Fogg’s Behavior Model: B = MAT (Behavior happens when Motivation, Ability, and Trigger converge at the same moment). Design the core action to be as simple as possible by optimizing the six simplicity factors: time, money, physical effort, brain cycles, social deviance, and non-routine. Then design the variable reward: what will users see that’s different every time? Variability — not magnitude — is what keeps users engaged.
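B = MAT can be read as a gating check: a behavior fires only when motivation and ability together clear an activation threshold at the moment a trigger arrives. A hedged sketch, assuming 0–1 scales and an illustrative threshold (neither is part of Fogg’s published formulation):

```python
def behavior_occurs(motivation: float, ability: float,
                    trigger_present: bool,
                    activation_threshold: float = 0.25) -> bool:
    """Motivation and ability (each 0-1) compensate for one another:
    high ability can offset low motivation, and vice versa.
    No trigger means no behavior, regardless of the other two."""
    return trigger_present and (motivation * ability) >= activation_threshold

# A very simple action (high ability) succeeds even with modest motivation.
print(behavior_occurs(motivation=0.3, ability=0.9, trigger_present=True))
```

The practical takeaway matches the six simplicity factors: since you rarely control motivation, raising ability (making the action simpler) is the lever that keeps the product above the activation line.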
Use breadboarding to define flow logic: places (screens), affordances (buttons, fields), and connection lines (transitions) — no visual design yet. Use fat-marker sketches for rough visual concepts drawn at a level where you physically can’t add detail. The architect defines technical boundaries: what’s in scope, what’s integration, what’s deferred. Don’t over-specify — leave room for builders to solve.
The /design skill coordinates design rituals. People set direction and review feasibility; agents generate wireframe variations, run accessibility audits, validate designs against opportunity scores, and flag technical constraints. The skill captures design artifacts in the design tool and component library, ensuring every screen traces back to a job dimension.
Phase 3 of 7 — Build
Are we building toward the outcome or just shipping features?
Every pull request should reference the job statement it serves and the outcome metric it moves. If a PR doesn’t connect to a job, question whether it belongs in this cycle. Engineers should feel empowered to push back on work that doesn’t serve a customer outcome — building the wrong thing fast is still waste.
Build the infrastructure that makes the entire hook cycle work. For Action: optimize the critical path for minimum latency, prefetch the likely next step, eliminate unnecessary confirmation dialogs. For Variable Reward: build the systems that generate variability (recommendation engines, social feeds, personalization). For Investment: build the data layer that stores user contributions and ensure each investment visibly improves the next session.
Small batches, continuous integration, and trunk-based development. The team owns the how — leadership owns the what and why. Standups should surface blockers, not status. Pair on complex unknowns and unfamiliar domains. Code review checks outcome alignment alongside code quality. Secure coding implements mitigations identified during the Plan phase threat model.
The /build skill coordinates build rituals. People own system design, scope decisions, and customer context; agents handle scaffolding, CI pipelines, progress tracking, and pipeline signals. Every PR references the job statement it serves. The skill captures code, infrastructure, and deploy artifacts across connected tools.
Phase 4 of 7 — Test
Does this actually help the customer get the job done?
Write acceptance tests framed as job-completion scenarios, not feature checks. Template: “Given a user experiencing [struggling moment], when they [complete the action], they should achieve [desired outcome].” Measure job completion rate and time-to-value, not just feature coverage. If the feature works as coded but doesn’t help the user complete the job, it’s a failure.
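The template above translates directly into an automated test. A hedged sketch in pytest style: the `ReportFinder` class, its `search` method, and the 60-second time-to-value budget are hypothetical stand-ins for your own product code and metrics.

```python
import time

class ReportFinder:
    """Toy stand-in for the product under test."""
    def search(self, query: str) -> list[str]:
        return ["Q3 revenue report"] if "revenue" in query else []

def test_user_finds_report_during_struggling_moment():
    # Given a user experiencing the struggling moment
    # ("I can't find last quarter's numbers before my meeting")
    app = ReportFinder()
    start = time.monotonic()

    # When they complete the action
    results = app.search("quarterly revenue")

    # Then they achieve the desired outcome: the job is done,
    # and time-to-value stays under the (hypothetical) 60s budget.
    assert results, "job not completed: no report found"
    assert time.monotonic() - start < 60.0
```

The assertion message is the tell: it reports a failed job, not a failed feature.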
Test the entire hook cycle, not just individual phases. Does the trigger fire at the right moment? Is the action achievable in the minimum steps? Does the reward feel genuinely variable (not scripted)? Do users complete the investment step? Measure cycle completion rate, session frequency, and time-between-sessions. If frequency is increasing, habits are forming.
Each testing type catches a different class of failure. Regression: existing job-completion paths still work. Manual QA: catches intent gaps where the feature works as coded but doesn’t serve the user’s job. Penetration testing: validates threat model mitigations from the Plan phase. UAT: real users or stakeholders confirm they can complete the job statement end-to-end.
The /test skill coordinates testing rituals. People write judgment tests, validate acceptance criteria, and run UAT; agents automate regression, E2E happy paths, coverage reports, and demo-path validation. The skill produces test results, coverage data, and acceptance sign-off captured in CI and the project tracker.
Phase 5 of 7 — Review
Did we help the customer make progress on their job?
Review against the original job statement, not the spec. The spec is a bet — the job statement is the truth. If they diverge, update the spec. Measure concrete progress: job completion rate, time-to-value, satisfaction gap closure (did the importance × satisfaction score improve?). Check qualitative signals: did customers stop using their old workaround?
Review the full hook cycle’s performance with real data. Track cycle completion rate (what percentage of users complete trigger → action → reward → investment in a single session). Measure inter-session interval — a decreasing interval means habits are forming. Identify where users drop off: trigger-to-action? action-to-reward? reward-to-investment? Check whether the investment phase is generating enough stored value to load the next trigger.
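The drop-off analysis above is a funnel over the four phases. A minimal sketch, assuming hypothetical per-session analytics events named after the cycle phases:

```python
PHASES = ["trigger", "action", "reward", "investment"]

def funnel(sessions: list[set[str]]) -> dict[str, float]:
    """Fraction of sessions reaching each phase, in cycle order."""
    total = len(sessions)
    reached = {p: 0 for p in PHASES}
    for events in sessions:
        for phase in PHASES:
            if phase not in events:
                break  # a session can't reach later phases after dropping off
            reached[phase] += 1
    return {p: reached[p] / total for p in PHASES}

# Hypothetical sessions: each set holds the phases a user completed.
sessions = [
    {"trigger", "action", "reward", "investment"},
    {"trigger", "action", "reward"},
    {"trigger", "action"},
    {"trigger"},
]
rates = funnel(sessions)
# Cycle completion rate is the fraction reaching the final phase;
# the biggest gap between adjacent phases is where to focus next cycle.
print(f"cycle completion: {rates['investment']:.0%}")
```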
Demo the user journey end-to-end, not individual features in isolation. Capture feedback framed as job evaluations: “How well does this help the customer get the job done?” Produce a clear shipped/cut/carried-forward list. The review is a forcing function for honesty about which bets paid off and which ones missed.
The /review skill coordinates review rituals. People interpret outcomes, evaluate technical health, and assess deal impact; agents generate retention reports, incident summaries, session replays, and support themes. The skill produces an evidence-based shipped/cut/carried-forward list with outcome data for each bet.
Phase 6 of 7 — Launch
How do we position this around the job, not the feature?
Launch messaging should speak to the job the customer is hiring for. “We added dark mode” is a feature. “Work comfortably at night” is a job. Lead with the struggling moment (“Tired of squinting at bright screens?”), present the product as progress on their job, and use social proof from users who got the job done. Feature lists go in the changelog, not the announcement.
Launch is where you activate the first full hook cycle. Fire external triggers — announcements, emails, in-app prompts — that create a mental association between the user’s internal trigger (the emotion) and your product. Don’t send a “new feature” email — send a “struggling with X?” email. Design the first-use experience to deliver an immediate variable reward and prompt an investment, so the user completes the entire cycle on day one.
Stage the rollout: internal dogfood → beta cohort → general availability. Feature flags control exposure. Define rollback criteria before launch: error rate thresholds, satisfaction drops, or performance degradation that trigger automatic rollback. Sales enablement means training the team to position around the job statement, not walk through features.
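Defining rollback criteria before launch means writing them down as code, not deciding in the heat of an incident. A sketch with illustrative thresholds (the numbers are assumptions, not recommendations):

```python
from dataclasses import dataclass

@dataclass
class RollbackCriteria:
    max_error_rate: float = 0.02        # 2% of requests failing
    max_satisfaction_drop: float = 0.5  # CSAT points below baseline
    max_p95_latency_ms: float = 800.0

    def should_roll_back(self, error_rate: float,
                         satisfaction_drop: float,
                         p95_latency_ms: float) -> bool:
        # Any single breached threshold triggers rollback - no debate.
        return (error_rate > self.max_error_rate
                or satisfaction_drop > self.max_satisfaction_drop
                or p95_latency_ms > self.max_p95_latency_ms)

criteria = RollbackCriteria()
print(criteria.should_roll_back(0.005, 0.1, 420.0))  # healthy rollout
print(criteria.should_roll_back(0.031, 0.1, 420.0))  # error-rate spike
```

Wired into the deploy pipeline’s health checks, this is what makes the “automatic rollback” in the /launch skill automatic.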
The /launch skill coordinates launch rituals. People own go-to-market narrative, go/no-go decisions, sales enablement, and support readiness; agents manage the deploy pipeline, validate job-statement positioning, generate battlecards, and monitor first-session completion. The skill orchestrates a staged rollout with automatic rollback on health check failure.
Phase 7 of 7 — Analyze
Which outcomes are still underserved?
Re-score each outcome using the importance × satisfaction framework and compare to pre-launch scores. Identify outcomes where the gap narrowed (your bet paid off), remained (your bet missed), or new gaps emerged (unintended consequence). Task completion rate, time-to-value, and job satisfaction matter more than page views or DAU. These findings feed the next Plan cycle directly.
Diagnose where the hook cycle is breaking. Define your habit threshold (e.g., 3+ sessions per week) and track what percentage of each cohort reaches it. Measure drop-off at each phase transition: trigger-to-action, action-to-reward, reward-to-investment. Compare trigger channels by first-cycle conversion rate. A decreasing inter-session interval means habits are forming; an increasing one means you’re losing the battle for attention.
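Two of the metrics above can be sketched directly: the share of a cohort clearing the habit threshold (3+ sessions per week, per the example), and the mean inter-session interval for one user. The cohort counts and timestamps are hypothetical.

```python
from datetime import datetime, timedelta

def habit_rate(sessions_per_week: list[int], threshold: int = 3) -> float:
    """Fraction of users at or above the weekly session threshold."""
    return sum(1 for s in sessions_per_week if s >= threshold) / len(sessions_per_week)

def mean_interval(timestamps: list[datetime]) -> timedelta:
    """Average gap between one user's consecutive sessions.

    Track this per cohort over time: decreasing means habits are
    forming, increasing means you're losing the battle for attention.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps, timedelta()) / len(gaps)

cohort = [5, 4, 1, 0, 3, 2]  # sessions last week, one entry per user
print(f"habit rate: {habit_rate(cohort):.0%}")

t0 = datetime(2026, 1, 1)
visits = [t0, t0 + timedelta(days=3), t0 + timedelta(days=5), t0 + timedelta(days=6)]
print(mean_interval(visits))  # gaps shrink: 3d, 2d, 1d
```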
Each ritual surfaces a different class of problem. Retros surface process problems (use Start/Stop/Continue). Metrics reviews compare actual outcomes against predicted bet outcomes. Post-mortems produce blameless root cause analysis with concrete action items. Churn analysis identifies which jobs users found a better solution for — they didn’t leave your product, they hired a different one.
The /analyze skill coordinates analysis rituals. People interpret data, evaluate system health, read the pipeline, and flag at-risk accounts; agents generate retention curves, error budgets, usage-to-deal correlations, and churn patterns. The skill produces an evidence-ranked list of bets for the next Plan cycle, closing the loop.
With this toolkit, every sprint moves toward outcomes people actually pay for.
© 2026 Orie Steele