Vibeslop

AI makes building easy.
Without structure, you ship noise.

Vibeslop is what happens when AI-generated output ships without customer insight, process discipline, or feedback loops. OR13’s agentic toolkit combines the best of outcome-driven innovation, agile development, and the agentic software development lifecycle — so every sprint moves toward outcomes people actually pay for.


Framework 1 — Jobs to Be Done

Outcome-Driven Innovation

People don’t buy products — they hire them to make progress in their lives. A job statement captures the functional task, the emotional state, and the social context.

What is Jobs to Be Done?
Job Statement Template
(action verb) + (object of action) + (context)
verb + object → Functional
personal context → Emotional
social context → Social
Spotify — Find the right music vibe for a certain activity
Functional: find the right music vibe
Emotional: for a certain activity (matching mood)
Social: share playlists with friends
Source: Railsware
Uber — Book a ride through the app without communication anxiety
Functional: book a ride through the app
Emotional: without communication anxiety
Social: arrive on time — be seen as reliable
Duolingo — Learn a new language every day without losing motivation
Functional: learn a new language
Emotional: every day without losing motivation
Social: impress others with language skills

Framework 2 — Hook Model

Building Habits Into Your Product

JTBD tells you what job your product is hired for. The Hook Model tells you how to make users come back every day. Four phases, repeated with enough frequency, turn conscious choices into automatic behavior.

4 Keys to Habit-Forming Products — Nir Eyal
The Hook Model — Nir Eyal
Hooked: How to Build Habit-Forming Products
1. Trigger — The Spark
External cues evolve into internal triggers — emotions like boredom, anxiety, or FOMO that pull users back without prompting.
Notification · Email · Boredom · Routine
Instagram: push notification — “X liked your photo”
Slack: red badge count on the app icon
TikTok: boredom while waiting in line
2. Action — The Behavior
The simplest behavior in anticipation of a reward. Motivation + ability + trigger = action. Reduce friction to zero.
Open app · Scroll feed · Tap search
Twitter/X: tap to open, scroll the timeline
Uber: one tap to request a ride
Google: type a query, hit enter
3. Variable Reward — The Hook
Unpredictable rewards trigger dopamine surges. Variability — not magnitude — is what keeps users engaged.
Tribe: likes · Hunt: discovery · Self: mastery
Reddit (Tribe): upvotes, karma, comments
Pinterest (Hunt): endless stream of fresh pins
Duolingo (Self): XP, streaks, leaderboard rank
4. Investment — The Setup
Users put something back — time, data, effort, social capital. Each investment loads the next trigger and raises switching costs.
Follow users · Build playlists · Save preferences
Spotify: build playlists, train the algorithm
LinkedIn: add connections, endorse skills
Notion: create pages, build a knowledge base
Three Types of Variable Reward
Tribe — social validation: likes, comments, followers, reputation
Hunt — pursuit of resources: scrolling feeds, deal-hunting, discovery
Self — mastery & completion: clearing the inbox, leveling up, finishing a task
The Cycle in Practice
Spotify: boredom → open app → discover new songs → build playlists
Duolingo: streak notification → start lesson → XP + leaderboard → follow friends
TikTok: boredom → quick scroll → surprising content → share + create

A habit is a behavior done with little or no conscious thought. Products that change behavior start by understanding the user’s internal trigger, then build the shortest path to a variable reward.

Nir Eyal, Hooked: How to Build Habit-Forming Products (2014)

Framework 3 — Agile · Shape Up

Agile Outcome Delivery

Methodology without discipline is theater.

What is Agile?
Agile Product Ownership

Framework 4 — Agentic SDLC

Agentic Outcome Development

Skills give every role access to domain tools they wouldn’t normally touch. Agents bridge the gap — so a designer can ship, an engineer can prototype, and sales can measure what matters.

Vibe Coding
Agentic Software Development Lifecycle
People
Maya — Senior Product Designer
Owns the experience. Thinks in flows, not features.
Agents: Dex, Parker

James — Staff Engineer
Ships production code. Cares about reliability and speed.
Agents: Kit, Dex

Priya — Account Executive
Closes deals. Needs data, not dashboards.
Agents: Sage, Parker
Agents
Dex — Design
Prototypes ideas, iterates on layouts, keeps the design system honest
/design: Figma · Adobe CC · Storybook
/review: Figma · GitHub

Kit — Engineering
Writes code, runs tests, handles deploys
/build: GitHub · Vercel · Supabase
/test: GitHub Actions · Playwright
/launch: Vercel · GitHub

Parker — Product
Writes specs, prioritizes the backlog, tracks outcomes
/plan: Jira · Linear
/review: GitHub · Clarity
/analyze: Google Analytics · Clarity

Sage — Sales
Pulls pipeline data, drafts proposals, preps call briefs
/plan: HubSpot · Jira
/review: HubSpot · Clarity
/analyze: Google Analytics · HubSpot

Phase 1 of 7 — Plan

Choose what’s worth building next

Jobs to Be Done

What job is the customer hiring us to do?

Write job statements using the template: (action verb) + (object) + (context). Separate the functional task from the emotional need and social context. Score each outcome using importance × satisfaction gap to find the most underserved jobs — the ones where current solutions fail and customers feel the pain most acutely.

  • Interview customers about their struggling moments, not feature wishes
  • Write job statements with all three dimensions: functional, emotional, social
  • Score outcomes: high importance + low satisfaction = underserved opportunity
  • Rank the backlog by opportunity score, not stakeholder loudness
Job Statements · Outcome Scoring · Underserved Needs
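The scoring step above can be sketched in code. This is a minimal illustration of the page's "importance × satisfaction gap" framing; the 1–10 survey scale, the example outcomes, and the multiplicative formula are assumptions (Ulwick's classic ODI formula is additive: importance + max(importance − satisfaction, 0)).

```python
# Hypothetical opportunity scoring on 1-10 survey scores.
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Underserved = important but poorly satisfied."""
    gap = max(importance - satisfaction, 0)  # no credit for overserved outcomes
    return importance * gap

# Illustrative outcomes: (importance, satisfaction)
outcomes = {
    "find the right music quickly": (9, 4),
    "share playlists with friends": (6, 7),   # overserved: gap clamps to 0
    "discover new artists": (8, 5),
}

# Rank the backlog by opportunity score, not stakeholder loudness.
ranked = sorted(outcomes.items(),
                key=lambda kv: opportunity_score(*kv[1]),
                reverse=True)
for name, (imp, sat) in ranked:
    print(f"{opportunity_score(imp, sat):5.1f}  {name}")
```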
Hook Model

Plan the full hook cycle before you build any of it. Identify which internal triggers (negative emotions — boredom, loneliness, anxiety, uncertainty, FOMO) your product will attach to. Define the simplest possible action, choose the variable reward type (Tribe, Hunt, or Self), and plan what stored value users will create that makes the product better with each visit. Estimate the target cycle frequency — how often must users repeat the loop to form a habit?

  • Map the full cycle: trigger → action → variable reward → investment
  • Identify internal triggers (emotions) and the external triggers that bridge to them
  • Choose the reward type: Tribe (social), Hunt (discovery), or Self (mastery)
  • Define the stored value users invest and the target cycle frequency for habit formation
Cycle Design · Trigger Mapping · Investment Strategy
Agile Delivery

Shape the appetite: decide how much time a bet is worth (1 week, 2 weeks, 6 weeks) before scoping. Fixed time, variable scope. The backlog is a menu of bets ranked by customer outcome. Backlog grooming means cutting scope to fit the appetite, not expanding timelines to fit scope. Threat modeling identifies security risks early so they’re designed out, not patched later.

  • Set a fixed time appetite for each bet — don’t let scope creep extend timelines
  • Frame every backlog item as a bet: “If we build X, outcome Y will improve by Z”
  • Cut scope to fit the appetite rather than expanding the timeline to fit scope
  • Run threat modeling early — security constraints shape the architecture
Agentic SDLC

The /plan skill coordinates the planning rituals. Product Manager and Sales define the job and appetite; agents pull opportunity scores, pipeline data, and support themes from connected tools. Engineering and Customer Success validate feasibility and urgency. The skill produces a prioritized backlog with sprint scope captured in the project tracker, fed by customer signals from the CRM.

  • Agents pull inputs: opportunity scores, deal-stage data, support ticket themes, architecture constraints
  • People run the rituals: sprint planning, backlog grooming, threat modeling
  • Skill captures outputs: prioritized backlog, sprint scope, and threat model in the project tracker
  • CRM stays in sync: pipeline context and customer signals feed the next planning cycle
Skill: /plan

People set the appetite; agents score opportunities and draft the backlog.

Phase 2 of 7 — Design

Sketch the solution at the right altitude

Jobs to Be Done

What does progress look like for this job?

Define the “before” state (struggling moment) and the “after” state (desired outcome), then design the shortest path between them. Every screen and flow should move the user closer to their desired outcome. Map each screen to a job dimension: does it solve the functional task, address the emotional need, or fulfill the social context? If a screen doesn’t serve any dimension, cut it.

  • Map the struggling moment → desired outcome journey for each job
  • Tag every screen: which job dimension does it serve (functional, emotional, social)?
  • Define progress metrics at design time — how will you know the user succeeded?
  • Cut screens that don’t advance any job dimension
Struggling Moment · Desired Outcome · Progress Mapping
Hook Model

Apply Fogg’s Behavior Model: B = MAT (Behavior happens when Motivation, Ability, and Trigger converge at the same moment). Design the core action to be as simple as possible by optimizing the six simplicity factors: time, money, physical effort, brain cycles, social deviance, and non-routine. Then design the variable reward: what will users see that’s different every time? Variability — not magnitude — is what keeps users engaged.

  • Identify the single core action users must take — reduce it to the minimum steps
  • Audit the six simplicity factors: time, money, effort, brain cycles, social deviance, routine
  • Design reward variability — what’s different every time the user completes the action?
  • Sketch the investment step: what do users put back that loads the next trigger?
B=MAT Model · Action Simplicity · Reward Type
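Fogg's B = MAT convergence can be sketched as a simple predicate. The 1–10 scales, the multiplicative compensation between motivation and ability, and the activation threshold are illustrative assumptions, not part of the model's formal definition.

```python
from dataclasses import dataclass

@dataclass
class Moment:
    motivation: int   # 1-10: how much the user wants the outcome right now
    ability: int      # 1-10: how easy the action is after the friction audit
    triggered: bool   # did an internal or external trigger fire?

def behavior_occurs(m: Moment, activation_threshold: int = 25) -> bool:
    # Higher motivation can compensate for lower ability and vice versa,
    # but no trigger means no behavior regardless of the other two.
    return m.triggered and m.motivation * m.ability >= activation_threshold

print(behavior_occurs(Moment(motivation=3, ability=9, triggered=True)))   # easy action, mild urge
print(behavior_occurs(Moment(motivation=9, ability=9, triggered=False)))  # no trigger, no behavior
```

The design takeaway the card describes falls out of the predicate: when motivation is fixed, the only levers left are raising ability (the six simplicity factors) and trigger timing.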
Agile Delivery

Use breadboarding to define flow logic: places (screens), affordances (buttons, fields), and connection lines (transitions) — no visual design yet. Use fat-marker sketches for rough visual concepts drawn at a level where you physically can’t add detail. The architect defines technical boundaries: what’s in scope, what’s integration, what’s deferred. Don’t over-specify — leave room for builders to solve.

  • Breadboard the flow: places + affordances + connections, no visual design
  • Fat-marker sketch the UI: intentionally rough so detail can’t creep in
  • Architect defines technical boundaries: in scope, integration, and deferred
  • Leave room for builders to solve implementation details their way
Agentic SDLC

The /design skill coordinates design rituals. People set direction and review feasibility; agents generate wireframe variations, run accessibility audits, validate designs against opportunity scores, and flag technical constraints. The skill captures design artifacts in the design tool and component library, ensuring every screen traces back to a job dimension.

  • Agents pull inputs: opportunity scores, technical constraints, competitive UI patterns, support pain points
  • People run the rituals: usability testing, shaping, design sprint
  • Skill captures outputs: breadboards, fat-marker sketches, and component specs in the design tool
  • Component library stays current: interaction patterns and accessibility standards in Storybook
Skill: /design

People sketch and set constraints; agents generate variations and audit accessibility.

Phase 3 of 7 — Build

Let the team solve the problem their way

Jobs to Be Done

Are we building toward the outcome or just shipping features?

Every pull request should reference the job statement it serves and the outcome metric it moves. If a PR doesn’t connect to a job, question whether it belongs in this cycle. Engineers should feel empowered to push back on work that doesn’t serve a customer outcome — building the wrong thing fast is still waste.

  • Include the job statement and outcome metric in every PR description
  • Push back on features that don’t connect to an underserved outcome
  • Track which job each piece of work serves — no orphan features
  • If the spec and the job diverge during build, flag it immediately
Outcome Alignment · Job Traceability
Hook Model

Build the infrastructure that makes the entire hook cycle work. For Action: optimize the critical path for minimum latency, prefetch the likely next step, eliminate unnecessary confirmation dialogs. For Variable Reward: build the systems that generate variability (recommendation engines, social feeds, personalization). For Investment: build the data layer that stores user contributions and ensure each investment visibly improves the next session.

  • Optimize the critical path: the one action users do most should be fastest
  • Build the systems that generate reward variability — not static content
  • Build persistence for user investments: content, connections, preferences, history
  • Ensure investments are visible on return — the product should feel “mine”
Critical Path · Data Persistence · Friction Reduction
Agile Delivery

Small batches, continuous integration, and trunk-based development. The team owns the how — leadership owns the what and why. Standups should surface blockers, not status. Pair on complex unknowns and unfamiliar domains. Code review checks outcome alignment alongside code quality. Secure coding implements mitigations identified during the Plan phase threat model.

  • Standups: “What’s blocking you from shipping the next outcome?” — not status updates
  • Pair program on complex unknowns and security-sensitive code paths
  • Code review: does this PR move the outcome metric, not just pass lint?
  • Implement threat model mitigations from the Plan phase — don’t defer security
Agentic SDLC

The /build skill coordinates build rituals. People own system design, scope decisions, and customer context; agents handle scaffolding, CI pipelines, progress tracking, and pipeline signals. Every PR references the job statement it serves. The skill captures code, infrastructure, and deploy artifacts across connected tools.

  • Agents pull inputs: sprint scope from tracker, design specs, threat model mitigations, pipeline urgency
  • People run the rituals: daily standup, pair programming, code review, secure coding
  • Skill captures outputs: PRs tagged to job statements, CI green, preview deploys in Vercel
  • Infrastructure stays in sync: database migrations, auth config, and storage in Supabase
Skill: /build

People own design and scope; agents scaffold code and manage CI.

Phase 4 of 7 — Test

Catch the gap between what we built and what the customer needs

Jobs to Be Done

Does this actually help the customer get the job done?

Write acceptance tests framed as job-completion scenarios, not feature checks. Template: “Given a user experiencing [struggling moment], when they [complete the action], they should achieve [desired outcome].” Measure job completion rate and time-to-value, not just feature coverage. If the feature works as coded but doesn’t help the user complete the job, it’s a failure.

  • Frame acceptance tests as: Given [trigger] → When [action] → Then [outcome]
  • Test the full journey from struggling moment to desired outcome
  • Measure job completion rate and time-to-value alongside code coverage
  • A feature that works as coded but doesn’t serve the job is a test failure
Job Completion · Time-to-Value · Acceptance Scenarios
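The Given/When/Then template above can be written directly as a test. Everything here is hypothetical: `MusicApp`, its methods, and the stubbed search result stand in for your product's real API; the 2-second time-to-value target is an example threshold.

```python
import time

class MusicApp:
    """Hypothetical product API, stubbed for illustration."""
    def open_from(self, trigger: str) -> None:
        self.trigger = trigger
    def search_playlist(self, context: str) -> list[str]:
        return ["Focus Flow", "Deep Work Beats"]  # stubbed search result

def test_job_bored_user_finds_music_for_studying():
    # Given a user experiencing a struggling moment
    app = MusicApp()
    app.open_from(trigger="boredom while studying")
    # When they complete the core action
    start = time.monotonic()
    results = app.search_playlist(context="studying")
    elapsed = time.monotonic() - start
    # Then they achieve the desired outcome within the target time-to-value
    assert results, "job not completed: no playlist found"
    assert elapsed < 2.0, "time-to-value target exceeded"

test_job_bored_user_finds_music_for_studying()
print("job-completion scenario passed")
```

Note what the assertions check: an outcome (a playlist within the time budget), not a feature behavior (the search endpoint returned 200).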
Hook Model

Test the entire hook cycle, not just individual phases. Does the trigger fire at the right moment? Is the action achievable in the minimum steps? Does the reward feel genuinely variable (not scripted)? Do users complete the investment step? Measure cycle completion rate, session frequency, and time-between-sessions. If frequency is increasing, habits are forming.

  • Test trigger timing: does it reach users when the internal emotion is active?
  • Validate the action is completable in the designed minimum steps
  • A/B test reward variations — measure surprise, not just satisfaction
  • Track cycle completion rate: what % of users go trigger → action → reward → invest?
Cycle Completion · A/B Testing · Frequency Tracking
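Cycle completion rate can be computed from a raw event log with an in-order subsequence check. The event names and the per-session log shape are assumptions about your analytics schema.

```python
# The four hook phases must appear in order within a session.
CYCLE = ["trigger", "action", "reward", "invest"]

def completes_cycle(events: list[str]) -> bool:
    it = iter(events)
    # `phase in it` consumes the iterator until found, so this
    # checks that CYCLE appears as an in-order subsequence.
    return all(phase in it for phase in CYCLE)

# Illustrative session logs.
sessions = [
    ["trigger", "action", "reward", "invest"],
    ["trigger", "action", "reward"],            # dropped before investing
    ["trigger", "action", "invest"],            # no reward: drop-off or data bug
    ["trigger", "action", "reward", "invest"],
]

rate = sum(completes_cycle(s) for s in sessions) / len(sessions)
print(f"cycle completion rate: {rate:.0%}")  # → 50%
```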
Agile Delivery

Each testing type catches a different class of failure. Regression: existing job-completion paths still work. Manual QA: catches intent gaps where the feature works as coded but doesn’t serve the user’s job. Penetration testing: validates threat model mitigations from the Plan phase. UAT: real users or stakeholders confirm they can complete the job statement end-to-end.

  • Regression: all existing job-completion paths must still pass
  • QA catches intent gaps: works as coded but doesn’t serve the job
  • Pen testing validates threat model mitigations — not just a checkbox
  • UAT: can a real user complete the job statement in the target time?
Agentic SDLC

The /test skill coordinates testing rituals. People write judgment tests, validate acceptance criteria, and run UAT; agents automate regression, E2E happy paths, coverage reports, and demo-path validation. The skill produces test results, coverage data, and acceptance sign-off captured in CI and the project tracker.

  • Agents pull inputs: acceptance criteria from spec, threat model test cases, demo scripts, support scenarios
  • People run the rituals: penetration testing, accessibility testing
  • Skill captures outputs: test results, coverage reports, and pen test findings in CI
  • Acceptance tracked: job-completion scenarios signed off in the project tracker
Skill: /test

People write judgment tests and run UAT; agents automate regression and coverage.

Phase 5 of 7 — Review

Face whether the work moved the needle

Jobs to Be Done

Did we help the customer make progress on their job?

Review against the original job statement, not the spec. The spec is a bet — the job statement is the truth. If they diverge, update the spec. Measure concrete progress: job completion rate, time-to-value, satisfaction gap closure (did the importance × satisfaction score improve?). Check qualitative signals: did customers stop using their old workaround?

  • Compare actual outcomes against the job statement, not the spec
  • Measure: job completion rate, time-to-value, satisfaction gap closure
  • Check switching behavior: did users abandon their old workaround?
  • If spec and job statement diverged during build, update the spec now
Satisfaction Gap · Switching Behavior · Outcome Metrics
Hook Model

Review the full hook cycle’s performance with real data. Track cycle completion rate (what percentage of users complete trigger → action → reward → investment in a single session). Measure inter-session interval — a decreasing interval means habits are forming. Identify where users drop off: trigger-to-action? action-to-reward? reward-to-investment? Check whether the investment phase is generating enough stored value to load the next trigger.

  • Track cycle completion rate: trigger → action → reward → investment per session
  • Measure inter-session interval — decreasing = habit forming, increasing = losing them
  • Identify the biggest drop-off point in the cycle — that’s where to focus next
  • Assess investment stored value: is the product getting better with each visit?
Retention Curves · Session Interval · Stored Value
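The inter-session interval signal can be measured from session-start timestamps. The comparison of first-half vs. second-half average gaps is a deliberately crude trend heuristic, and the timestamps are illustrative; in practice they come from your analytics store.

```python
from datetime import datetime

def intervals_hours(session_starts: list[datetime]) -> list[float]:
    starts = sorted(session_starts)
    return [(b - a).total_seconds() / 3600 for a, b in zip(starts, starts[1:])]

def habit_trend(session_starts: list[datetime]) -> str:
    gaps = intervals_hours(session_starts)
    if len(gaps) < 2:
        return "not enough data"
    # Crude trend: compare average gap in the first and second halves.
    mid = len(gaps) // 2
    early, late = gaps[:mid], gaps[mid:]
    return "forming" if sum(late) / len(late) < sum(early) / len(early) else "fading"

# Sessions on Jan 1, 4, 6, 7, 8 → gaps of 72h, 48h, 24h, 24h: shrinking.
user = [datetime(2026, 1, d) for d in (1, 4, 6, 7, 8)]
print(habit_trend(user))  # → forming
```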
Agile Delivery

Demo the user journey end-to-end, not individual features in isolation. Capture feedback framed as job evaluations: “How well does this help the customer get the job done?” Produce a clear shipped/cut/carried-forward list. The review is a forcing function for honesty about which bets paid off and which ones missed.

  • Demo the user journey end-to-end — never demo features in isolation
  • Capture feedback as job evaluations, not feature opinions
  • Produce a shipped/cut/carried-forward list with outcome data for each bet
  • Identify which bets paid off and which need to be reframed or killed
Agentic SDLC

The /review skill coordinates review rituals. People interpret outcomes, evaluate technical health, and assess deal impact; agents generate retention reports, incident summaries, session replays, and support themes. The skill produces an evidence-based shipped/cut/carried-forward list with outcome data for each bet.

  • Agents pull inputs: retention curves, funnel rates, NPS trends, incident logs, session replays, support tickets
  • People run the rituals: sprint review, security audit
  • Skill captures outputs: shipped/cut/carried-forward list with outcome data per bet in the tracker
  • Behavior data flows back: Clarity heatmaps and replays inform the next design cycle
Skill: /review

People interpret outcomes; agents generate retention reports and session replays.

Phase 6 of 7 — Launch

Get the work into customers’ hands

Jobs to Be Done

How do we position this around the job, not the feature?

Launch messaging should speak to the job the customer is hiring for. “We added dark mode” is a feature. “Work comfortably at night” is a job. Lead with the struggling moment (“Tired of squinting at bright screens?”), present the product as progress on their job, and use social proof from users who got the job done. Feature lists go in the changelog, not the announcement.

  • Lead with the struggling moment: “Tired of [old way]?”
  • Present the product as progress on the job, not a feature list
  • Use social proof from users who completed the job successfully
  • Write launch copy using job statement language: verb + object + context
Job Positioning · Struggling Moment · Social Proof
Hook Model

Launch is where you activate the first full hook cycle. Fire external triggers — announcements, emails, in-app prompts — that create a mental association between the user’s internal trigger (the emotion) and your product. Don’t send a “new feature” email — send a “struggling with X?” email. Design the first-use experience to deliver an immediate variable reward and prompt an investment, so the user completes the entire cycle on day one.

  • Match external triggers to internal emotions — not “new feature” but “struggling with X?”
  • Design first-use to complete the full cycle: trigger → action → reward → invest on day one
  • Plan trigger frequency: enough to build the association, not enough to annoy
  • Track which trigger channels produce the highest first-cycle completion rate
First-Use Experience · External Triggers · Trigger Channels
Agile Delivery

Stage the rollout: internal dogfood → beta cohort → general availability. Feature flags control exposure. Define rollback criteria before launch: error rate thresholds, satisfaction drops, or performance degradation that trigger automatic rollback. Sales enablement means training the team to position around the job statement, not walk through features.

  • Stage rollout: internal → beta → GA with feature flags controlling exposure
  • Define rollback criteria upfront: error rate, latency, satisfaction thresholds
  • Train sales to position around the job: “this helps customers [job]” not “this does [feature]”
  • Monitor support ticket volume and first-session completion rate post-launch
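"Define rollback criteria before launch" can be made concrete as a config checked against live metrics. The threshold values and metric names here are illustrative assumptions, not recommendations.

```python
# Rollback criteria agreed before launch, not invented during an incident.
ROLLBACK_CRITERIA = {
    "error_rate": 0.02,                 # roll back above 2% errors
    "p95_latency_ms": 1200,             # roll back above 1.2s p95
    "first_session_completion": 0.40,   # roll back below 40% (lower bound)
}

def should_rollback(metrics: dict[str, float]) -> list[str]:
    """Return the list of breached criteria (empty means healthy)."""
    breaches = []
    if metrics["error_rate"] > ROLLBACK_CRITERIA["error_rate"]:
        breaches.append("error_rate")
    if metrics["p95_latency_ms"] > ROLLBACK_CRITERIA["p95_latency_ms"]:
        breaches.append("p95_latency_ms")
    if metrics["first_session_completion"] < ROLLBACK_CRITERIA["first_session_completion"]:
        breaches.append("first_session_completion")
    return breaches

live = {"error_rate": 0.031, "p95_latency_ms": 980, "first_session_completion": 0.55}
breached = should_rollback(live)
if breached:
    print(f"ROLLBACK: {', '.join(breached)} outside thresholds")
```

In a staged rollout, a check like this runs per cohort (internal, beta, GA) so a breach rolls back only the exposed slice.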
Agentic SDLC

The /launch skill coordinates launch rituals. People own go-to-market narrative, go/no-go decisions, sales enablement, and support readiness; agents manage the deploy pipeline, validate job-statement positioning, generate battlecards, and monitor first-session completion. The skill orchestrates a staged rollout with automatic rollback on health check failure.

  • Agents pull inputs: go-to-market copy, rollback criteria, sales battlecards, support documentation
  • People run the rituals: continuous delivery, feature toggles, canary release
  • Skill captures outputs: staged deploy via feature flags, changelog, and release tags in GitHub
  • Rollout monitored: auto-rollback on health check failure, first-session completion tracked in Vercel
Skill: /launch

People own go/no-go; agents manage staged rollout and auto-rollback.

Phase 7 of 7 — Analyze

Name what’s not working

Jobs to Be Done

Which outcomes are still underserved?

Re-score each outcome using the importance × satisfaction framework and compare to pre-launch scores. Identify outcomes where the gap narrowed (your bet paid off), remained (your bet missed), or new gaps emerged (unintended consequence). Task completion rate, time-to-value, and job satisfaction matter more than page views or DAU. These findings feed the next Plan cycle directly.

  • Re-score importance × satisfaction for every outcome you targeted
  • Compare pre/post: gaps that narrowed = success, gaps that remained = miss
  • Watch for new underserved outcomes that emerged as unintended consequences
  • Feed findings directly into the next Plan phase — close the loop
ODI Re-scoring · Outcome Gaps · Next Cycle Input
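The pre/post comparison above can be sketched as a small classifier over re-scored outcomes. The (importance, satisfaction) scores and outcome names are illustrative, and the simple gap definition follows the page's satisfaction-gap framing.

```python
def gap(imp: float, sat: float) -> float:
    """Satisfaction gap: important but unsatisfied, clamped at zero."""
    return max(imp - sat, 0)

# Illustrative pre- and post-launch survey scores: (importance, satisfaction).
pre  = {"find music fast": (9, 4), "share with friends": (6, 5)}
post = {"find music fast": (9, 7), "share with friends": (6, 5),
        "resume where I left off": (7, 3)}  # surfaced only after launch

verdicts = {}
for outcome, (imp, sat) in post.items():
    if outcome not in pre:
        verdicts[outcome] = "new gap emerged"
    elif gap(imp, sat) < gap(*pre[outcome]):
        verdicts[outcome] = "narrowed — bet paid off"
    else:
        verdicts[outcome] = "remained — bet missed"
    print(f"{outcome}: gap {gap(imp, sat)} ({verdicts[outcome]})")
```

The resulting verdict list is exactly the input the next Plan phase needs: narrowed gaps retire, remaining and new gaps become candidate bets.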
Hook Model

Diagnose where the hook cycle is breaking. Define your habit threshold (e.g., 3+ sessions per week) and track what percentage of each cohort reaches it. Measure drop-off at each phase transition: trigger-to-action, action-to-reward, reward-to-investment. Compare trigger channels by first-cycle conversion rate. A decreasing inter-session interval means habits are forming; an increasing one means you’re losing the battle for attention.

  • Define the habit threshold: how many sessions per week indicates a habit?
  • Track cohort retention curves — where do users fall off the cycle?
  • Measure drop-off at each transition: trigger→action, action→reward, reward→invest
  • Compare trigger channels: which produce the highest cycle completion rate?
Cohort Analysis · Drop-off Diagnosis · Habit Threshold
Agile Delivery

Each ritual surfaces a different class of problem. Retros surface process problems (use Start/Stop/Continue). Metrics reviews compare actual outcomes against predicted bet outcomes. Post-mortems produce blameless root cause analysis with concrete action items. Churn analysis identifies which jobs users found a better solution for — they didn’t leave your product, they hired a different one.

  • Retros: Start/Stop/Continue — produce process improvements for the next cycle
  • Metrics review: compare actual outcomes vs. predicted outcomes from each bet
  • Post-mortems: blameless RCA + concrete action items, not blame assignment
  • Churn analysis: which job did churned users find a better solution for?
Agentic SDLC

The /analyze skill coordinates analysis rituals. People interpret data, evaluate system health, read the pipeline, and flag at-risk accounts; agents generate retention curves, error budgets, usage-to-deal correlations, and churn patterns. The skill produces an evidence-ranked list of bets for the next Plan cycle, closing the loop.

  • Agents pull inputs: retention curves, error budgets, deal velocity, churn signals, session replays
  • People run the rituals: retrospective, post-mortem
  • Skill captures outputs: evidence-ranked bet list for the next Plan cycle in the tracker
  • Loop closes: findings feed directly into the next /plan invocation
Skill: /analyze

People interpret data and rank bets; agents generate retention curves and churn patterns.

Without structure, you ship noise.

With it, every sprint moves toward outcomes people actually pay for.


Connect

orie@or13.io

© 2026 Orie Steele