sinc Prompt Examples: 15 Real-World Templates Across 15 Domains


15 real-world templates showing how 6-band spectral decomposition transforms vague prompts into precise specifications

Jump to:
1. Code Review · 2. Bug Debugging · 3. Market Research · 4. Legal Contract Review · 5. Medical Diagnosis · 6. Financial Analysis · 7. Content Writing · 8. Product Launch · 9. Data Analysis · 10. Customer Support · 11. API Design · 12. System Architecture · 13. DevOps Incident · 14. Hiring Assessment · 15. Competitive Analysis

How to Read Each Example

Each example shows three things:

  1. Raw prompt -- what most people type into an LLM
  2. sinc JSON -- the same intent decomposed into 6 frequency bands (PERSONA, CONTEXT, DATA, CONSTRAINTS, FORMAT, TASK)
  3. Expected improvement -- what changes in the LLM output quality

The sinc format applies the Nyquist-Shannon sampling theorem to specification space. A raw prompt is a single sample of a 6-band signal -- guaranteed aliasing. Six bands = lossless reconstruction. Research: DOI 10.5281/zenodo.19152668
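The formula in each template below is the Whittaker-Shannon interpolation that motivates the format. As a sanity check on the claim, here is a minimal Python sketch of that reconstruction on an actual band-limited signal (a sine sampled above its Nyquist rate); the analogy to specification bands is the document's, not the code's:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, t):
    # Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT) / T)
    return sum(x_n * sinc((t - n * T) / T) for n, x_n in enumerate(samples))

# Band-limited test signal: a 1 Hz sine sampled at 8 Hz (well above Nyquist).
T = 1 / 8
samples = [math.sin(2 * math.pi * n * T) for n in range(64)]

t = 18.5 * T                      # a point halfway between two samples
approx = reconstruct(samples, T, t)
exact = math.sin(2 * math.pi * t)
```

With enough samples on either side, `approx` matches `exact` to within the truncation error of the finite window; drop samples (bands) and the reconstruction aliases.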

15 Examples Across 15 Domains

Example 1 of 15

Code Review

Software Engineering code_review
Raw prompt: Review this Python function for bugs and performance issues
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a senior Python engineer with 12 years of experience in performance optimization and security auditing. You review code at the function level with focus on correctness, edge cases, and algorithmic complexity."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "This function is part of a payment processing pipeline handling 50,000 transactions per hour. It runs in production on Python 3.11 with PostgreSQL 15. The team has reported intermittent 504 timeouts traced to this function."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "def process_payment(user_id, amount):\n    db = get_connection()\n    user = db.query(f'SELECT * FROM users WHERE id = {user_id}')\n    if user['balance'] >= amount:\n        db.execute(f'UPDATE users SET balance = balance, {amount} WHERE id = {user_id}')\n        db.execute(f'INSERT INTO transactions VALUES ({user_id}, {amount}, NOW())')\n        return {'status': 'ok'}\n    return {'status': 'insufficient'}"
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Identify ALL bugs ranked by severity (critical > high > medium > low). For each bug: state the exact line, the vulnerability class (CWE number if applicable), the exploit scenario, and the fix as a code diff. Check specifically for: SQL injection, race conditions, missing transaction boundaries, connection leaks, type coercion errors, missing error handling, and N+1 queries. Do not suggest stylistic changes. Do not suggest changes unrelated to bugs or performance. Maximum 500 words."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Return a numbered list. Each entry: [SEVERITY] Line N: one-sentence description. Vulnerability: CWE-XXX. Exploit: one sentence. Fix: code diff (before/after). End with a summary table: Bug | Severity | CWE | Fixed."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Review the process_payment function. Find every bug and performance issue. Provide exact fixes."
    }
  ]
}
With sinc: Raw prompt produces generic advice about 'consider using parameterized queries'. sinc prompt identifies 4 specific bugs: SQL injection (CWE-89), race condition on balance check-and-update, missing transaction boundary, connection leak. Each with exact line numbers and diffs.
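For reference, a minimal sketch of what those four fixes could look like once applied. It uses Python's stdlib sqlite3 so it runs standalone; the example's stack is PostgreSQL, where the same pattern applies with psycopg placeholders, and `get_connection` from the original is replaced by a passed-in connection:

```python
import sqlite3

def process_payment(conn, user_id, amount):
    # One transaction for the whole operation; the balance check and debit
    # are a single atomic UPDATE, closing the check-then-act race window.
    with conn:  # commits on success, rolls back on exception
        cur = conn.execute(
            "UPDATE users SET balance = balance - ? "
            "WHERE id = ? AND balance >= ?",
            (amount, user_id, amount),  # parameterized: no SQL injection
        )
        if cur.rowcount == 0:
            return {"status": "insufficient"}
        conn.execute(
            "INSERT INTO transactions (user_id, amount) VALUES (?, ?)",
            (user_id, amount),
        )
    return {"status": "ok"}
```

Passing the connection in (or using a pool's context manager) also addresses the leak: the caller owns the connection's lifetime.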
Example 2 of 15

Bug Debugging

Software Engineering bug_debugging
Raw prompt: My React app crashes when I click the submit button
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a React debugging specialist who diagnoses runtime errors from stack traces and component trees. You identify root causes in under 3 questions."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "React 18.2 app with TypeScript 5.0, using React Hook Form 7.45 and Zod validation. The form worked in v2.1 but crashes since v2.2 deployment on March 15, 2026. No changes to the form component itself between versions."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Error: TypeError: Cannot read properties of undefined (reading 'email')\n  at SubmitHandler (ContactForm.tsx:47)\n  at HTMLFormElement.handleSubmit (react-hook-form.js:1284)\nComponent: <ContactForm schema={contactSchema} onSubmit={handleContact} />\nThe contactSchema was moved from local file to shared @company/schemas package in v2.2."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Diagnose from the stack trace and context provided -- do not ask clarifying questions. Identify the exact root cause in 1 sentence. Trace the causal chain: what changed in v2.2, why it breaks the read of 'email', and what the fix is. Provide the fix as a code diff. If multiple possible causes exist, rank by probability and explain why the top candidate is most likely. Do not suggest 'try adding console.log' or generic debugging steps."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Root Cause: one sentence. Causal Chain: numbered steps (max 5). Fix: code diff. Verification: one command to confirm the fix works."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Diagnose why the ContactForm crashes on submit since v2.2 and provide the exact fix."
    }
  ]
}
With sinc: Raw prompt triggers a generic checklist (check state, add error boundaries, etc). sinc prompt identifies the root cause: the shared schema package exports a different Zod shape where 'email' is nested under 'contact.email', causing the destructuring at line 47 to fail.
Example 3 of 15

Market Research

Business Strategy market_research
Raw prompt: Research the AI coding assistant market for me
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a technology market analyst at a top-3 consulting firm specializing in developer tools and AI infrastructure. You produce investment-grade market maps with TAM/SAM/SOM analysis."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "A Series B developer tools company ($12M ARR) is evaluating whether to build an AI coding assistant feature. They need to understand the competitive landscape as of Q1 2026, market size projections, and defensible niches before their board meeting on April 5, 2026."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Known competitors: GitHub Copilot, Cursor, Codeium, Tabnine, Amazon CodeWhisperer, Sourcegraph Cody. The company's existing product is a code review platform with 8,000 enterprise customers. Their differentiation is deep integration with CI/CD pipelines and compliance workflows."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Use only publicly available data (earnings reports, press releases, Crunchbase, published analyst reports). For market size, cite the source and date of each number. Clearly label estimates vs confirmed figures. Analyze exactly 3 strategic options: build, partner, acquire. For each option: investment required (range), time to market, risk level (1-5), expected ROI at 24 months. Do not use superlatives ('massive market', 'huge opportunity'). Every claim needs a number."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "1. Market Map: table with competitor | funding | est. ARR | key differentiator | threat level (1-5). 2. TAM/SAM/SOM: three numbers with sources. 3. Strategic Options: 3-row comparison table. 4. Recommendation: 1 paragraph with the recommended path and why."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Produce a board-ready competitive analysis of the AI coding assistant market with a strategic recommendation for a code review platform company."
    }
  ]
}
With sinc: Raw prompt returns a Wikipedia-style overview. sinc prompt produces a structured competitive map with 6 competitors analyzed on 5 dimensions, TAM/SAM/SOM with cited sources, and 3 strategic options with quantified investment and ROI projections.
Example 4 of 15

Legal Contract Review

Legal legal_contract_review
Raw prompt: Review this SaaS contract for any issues
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a technology transactions attorney with 15 years of experience reviewing SaaS agreements for enterprise buyers. You have reviewed over 500 SaaS contracts and specialize in data protection, liability caps, and termination clauses."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "Our company (5,000 employees, healthcare sector, HIPAA-regulated) is evaluating a 3-year SaaS contract worth $840,000 total from a Series C vendor for an internal analytics platform. The vendor processes PHI as part of the service. Contract renewal deadline is April 1, 2026."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Key contract sections: Liability cap is set at 'fees paid in the prior 12 months'. Data processing clause states vendor 'may use aggregated, de-identified data for product improvement'. Termination requires 180-day written notice. SLA guarantees 99.5% uptime with credits capped at 5% of monthly fees. Indemnification is mutual but excludes IP infringement by vendor's third-party components. Governing law: Delaware."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Evaluate ONLY from the buyer's perspective. Flag every clause that creates risk for a HIPAA-regulated entity. For each flagged clause: state the risk in 1 sentence, rate severity (critical/high/medium/low), provide the specific redline language you would propose, and cite the relevant regulation or standard (HIPAA, SOC 2, GDPR if applicable). Do not provide general legal education. Do not disclaim that you are not a lawyer. Focus exclusively on actionable redlines."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Numbered list of flagged clauses, each with: [SEVERITY] Clause name -- Risk: one sentence. Redline: exact proposed language change. Basis: regulation/standard. End with a negotiation priority matrix: must-have vs nice-to-have changes."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Review the SaaS contract terms and produce actionable redlines prioritized for a HIPAA-regulated healthcare buyer."
    }
  ]
}
With sinc: Raw prompt produces generic 'things to look for in SaaS contracts'. sinc prompt flags 6 specific risks: inadequate liability cap for PHI breaches, overbroad de-identification clause violating HIPAA Safe Harbor, 180-day lock-in, below-market SLA, IP indemnification gap, and missing BAA requirement. Each with exact redline language.
Example 5 of 15

Medical Diagnosis

Healthcare medical_diagnosis
Raw prompt: What could cause persistent fatigue and joint pain?
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a clinical decision support system trained on Harrison's Principles of Internal Medicine and UpToDate. You generate differential diagnoses ranked by posterior probability given the presented symptoms and demographics."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "38-year-old female presenting to primary care. Symptoms onset 4 months ago, progressive. No prior autoimmune history. Family history: mother has Hashimoto's thyroiditis. Lives in northeastern United States. No recent travel. Works as a landscape designer (outdoor exposure). Current medications: none."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Symptoms: fatigue (daily, severity 7/10), bilateral symmetric joint pain (MCPs, PIPs, wrists), morning stiffness lasting 90 minutes, low-grade fever (99.8F intermittent), unintentional weight loss of 8 lbs over 4 months. Labs: ESR 42 mm/hr (elevated), CRP 2.8 mg/dL (elevated), CBC normal, TSH normal."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Generate a differential diagnosis with exactly 5 conditions ranked by likelihood. For each: state the pre-test probability estimate (percentage), list the 3 most discriminating findings that support or argue against it, and specify the single most informative next test to confirm or rule it out. Account for the geographic location (Lyme disease endemic area) and family history (autoimmune predisposition). Do not provide treatment recommendations. Do not diagnose -- generate differentials only. Include at least 1 non-autoimmune etiology."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Table: Rank | Diagnosis | Pre-test Probability | Supporting Findings | Arguing Against | Key Next Test. Below the table: recommended test ordering sequence (which test first and why)."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Generate a ranked differential diagnosis for this presentation and recommend the optimal diagnostic workup sequence."
    }
  ]
}
With sinc: Raw prompt lists 15+ conditions alphabetically with no ranking. sinc prompt produces a ranked differential: RA (40%), Lyme arthritis (25%), SLE (15%), viral arthralgia (12%), fibromyalgia (8%), with specific discriminating features and an ordered workup starting with RF/anti-CCP and Lyme serology.
Example 6 of 15

Financial Analysis

Finance financial_analysis
Raw prompt: Analyze whether we should raise a Series B round
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a startup CFO advisor who has guided 40+ SaaS companies through Series A to Series C fundraising. You evaluate fundraising readiness using SaaS-specific metrics and market timing signals."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "B2B SaaS company, 2.5 years post-Series A ($8M raised at $32M pre-money). Current date: March 2026. The fundraising market for B2B SaaS has tightened since 2023 but is recovering for companies with strong unit economics. Board is split on timing: raise now at lower valuation vs wait 6 months for better metrics."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "ARR: $4.2M (was $1.8M at Series A). MoM growth: 8% (last 6 months average). Gross margin: 78%. Net revenue retention: 118%. CAC payback: 14 months. Burn multiple: 1.8x. Runway: 11 months at current burn. Logo churn: 2.1% monthly. ACV: $18,000. Total customers: 230. Pipeline: $1.2M in qualified opportunities."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Evaluate fundraising readiness on exactly 5 dimensions: growth rate, unit economics, market timing, runway pressure, and competitive positioning. For each dimension: score 1-10, state the benchmark (median Series B SaaS company), and indicate whether this metric helps or hurts the raise. Model exactly 2 scenarios: (A) raise now targeting $15-20M at $60-80M pre-money, (B) wait 6 months assuming current growth trajectory holds. For each scenario: expected dilution, runway impact, and valuation sensitivity. Do not recommend -- present the analysis for board decision."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "1. Readiness Scorecard: 5-row table (Dimension | Score | Benchmark | Verdict). 2. Scenario A: 3 metrics in a box. 3. Scenario B: 3 metrics in a box. 4. Risk Matrix: 2x2 (raise now risks vs wait risks, upside vs downside)."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Produce a Series B fundraising readiness analysis with two timing scenarios for board presentation."
    }
  ]
}
With sinc: Raw prompt gives textbook fundraising advice. sinc prompt produces a quantified readiness scorecard showing growth rate 8/10 (above 75th percentile), unit economics 7/10, runway pressure as the key forcing function at 11 months, with two modeled scenarios including dilution math and valuation sensitivity.
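The dilution math the CONSTRAINTS band asks for is mechanical once the raise and pre-money bounds are fixed. A sketch of the Scenario A envelope, using only numbers from the template:

```python
def dilution(raise_amount, pre_money):
    # New-investor ownership = raise / post-money, post-money = pre + raise.
    return raise_amount / (pre_money + raise_amount)

# Scenario A bounds from the CONSTRAINTS band: $15-20M at $60-80M pre-money.
best_case = dilution(15_000_000, 80_000_000)   # smallest raise, highest pre
worst_case = dilution(20_000_000, 60_000_000)  # largest raise, lowest pre
```

So Scenario A implies roughly 16-25% dilution before any option-pool refresh, which the LLM's scenario boxes should make explicit.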
Example 7 of 15

Content Writing

Marketing content_writing
Raw prompt: Write a blog post about our new product feature
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a B2B SaaS content strategist who writes product launch posts that rank on page 1 for feature-specific long-tail keywords. Your writing style: clear, technical, zero fluff, every paragraph earns its place."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "We are launching real-time collaboration in our project management tool (competitor to Asana/Monday.com). Target audience: engineering managers at 50-500 person companies who currently use Google Docs alongside their PM tool. Publishing on our blog (DA 45, 12,000 monthly visitors). Goal: 500 organic visits in first 30 days."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Feature: real-time collaborative editing in task descriptions, comments, and documentation. Tech: CRDT-based (no conflicts, works offline). Differentiator: inline code blocks with syntax highlighting (competitors only have rich text). Beta data: 340 users, 89% daily active rate, average session increased from 12 to 23 minutes. Target keyword: 'project management tool with real-time collaboration'."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Word count: 1,200-1,500. Include the target keyword in: title, H1, first paragraph, one H2, meta description. Use exactly 4 H2 sections. Include 1 customer quote (fabricate a realistic one attributed to 'Engineering Manager at a 200-person fintech'). Include 1 comparison table (us vs Asana vs Monday.com on collaboration features). No buzzwords: do not use 'revolutionary', 'game-changing', 'seamless', 'leverage', or 'unlock'. Write at an 8th grade reading level (Flesch-Kincaid). End with a CTA to start a 14-day free trial."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Title (with keyword) | Meta description (155 chars) | Body with 4 H2 sections | Comparison table | Customer quote block | CTA button text. Deliver as markdown."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Write the product launch blog post for real-time collaboration feature, optimized for the target keyword."
    }
  ]
}
With sinc: Raw prompt produces generic feature announcement with buzzwords. sinc prompt delivers a 1,350-word SEO-optimized post with keyword placement in 5 required positions, a comparison table, realistic customer quote, 8th-grade reading level, and zero banned buzzwords.
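Most of this CONSTRAINTS band is mechanically checkable, which is part of what makes it a specification rather than a wish. A sketch of a post-linter for the checkable subset (banned buzzwords, keyword-in-title, word count, meta length; the function name and issue strings are illustrative):

```python
BANNED = {"revolutionary", "game-changing", "seamless", "leverage", "unlock"}
KEYWORD = "project management tool with real-time collaboration"

def lint_post(title, meta, body):
    # Mechanical checks for the CONSTRAINTS band: banned buzzwords anywhere,
    # keyword placement in the title, word count, meta-description length.
    issues = []
    text = " ".join([title, meta, body]).lower()
    issues += [f"banned buzzword: {w}" for w in sorted(BANNED) if w in text]
    if KEYWORD not in title.lower():
        issues.append("keyword missing from title")
    if not 1200 <= len(body.split()) <= 1500:
        issues.append("word count outside 1,200-1,500")
    if len(meta) > 155:
        issues.append("meta description over 155 chars")
    return issues
```

Reading level and keyword placement in H2s would need a real readability library and a markdown parser, but the pattern is the same: every constraint in the band maps to a test.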
Example 8 of 15

Product Launch

Product Management product_launch
Raw prompt: Help me plan the launch of our mobile app
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a product launch strategist who has executed 20+ B2C mobile app launches including 3 that reached top-10 in their App Store category within the first week. You plan launches in 4 phases: pre-launch, launch day, week 1, month 1."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "Fitness tracking app targeting 25-40 year old urban professionals. We have 8,000 email waitlist signups, 2,400 beta testers (4.6 star average rating), and $50,000 launch marketing budget. Launch date: April 15, 2026. We are a team of 6 (2 engineers, 1 designer, 1 marketer, 1 founder/CEO, 1 customer support). Competitors: Strava, Nike Run Club, Fitbit."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Beta metrics: 68% D7 retention, 42% D30 retention, average 4.2 sessions/week, top feature usage: AI workout suggestions (78% of users), social challenges (61%), progress photos (54%). App Store Optimization: primary keyword 'AI fitness tracker' has 2,400 monthly searches with medium competition. Current App Store listing is drafted but not optimized."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Plan must fit within $50,000 budget with exact dollar allocation per channel. Every task must have an owner (one of the 6 team members by role), a deadline (specific date relative to April 15 launch), and a success metric. Include exactly 3 contingency plans for: (1) App Store review rejection, (2) server overload on day 1, (3) negative press/reviews. Do not suggest hiring additional staff. Do not suggest activities requiring more than 2 weeks lead time from today (March 22, 2026). Prioritize activities by expected impact on day-1 downloads."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Phase timeline: Gantt-style table with task | owner | deadline | budget | success metric. Budget allocation: pie chart data (channel | amount | expected ROI). Contingency plans: trigger | action | owner | max response time. Launch day hour-by-hour schedule."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Create the complete launch plan for April 15 with budget allocation, task assignments, and contingency plans."
    }
  ]
}
With sinc: Raw prompt gives a generic launch checklist. sinc prompt produces a 24-day plan with 35+ tasks assigned to specific team members, $50K budget allocated across 6 channels with expected ROI, 3 contingency playbooks with response time SLAs, and an hour-by-hour launch day schedule.
Example 9 of 15

Data Analysis

Data Science data_analysis
Raw prompt: Analyze our user churn data and find patterns
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a product analytics lead who specializes in churn prediction and retention analysis for subscription SaaS products. You use cohort analysis, survival curves, and feature importance ranking to identify actionable churn drivers."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "B2B SaaS product with 3,200 active accounts on monthly subscriptions ($49-$299/month). Churn spiked from 3.2% to 5.8% monthly over the last quarter (Q4 2025). The product team shipped 3 major features in Q4 and changed the onboarding flow on November 1, 2025. Support ticket volume increased 40% in the same period."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Churned accounts Q4: 412. Breakdown by plan: Starter ($49): 58% of churn, Pro ($149): 31%, Enterprise ($299): 11%. Churned accounts by tenure: <3 months: 47%, 3-12 months: 35%, >12 months: 18%. Feature adoption among churned users: new dashboard (12%), API v2 (8%), team spaces (23%). NPS score for churned users (exit survey, 34% response rate): average 4.2. Top exit survey reasons: 'too complicated' (38%), 'missing integration' (24%), 'too expensive' (21%), 'switched to competitor' (17%)."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Perform 3 specific analyses: (1) cohort analysis comparing pre- vs post-November 1 onboarding change, (2) survival curve comparison by plan tier, (3) churn driver ranking by information gain. For each analysis: state the method, show the expected output structure, identify the top finding, and recommend 1 specific action. Quantify the revenue impact of the churn spike ($X MRR lost). Do not recommend 'collect more data' as a primary action. Every recommendation must be implementable within 2 weeks by a product team of 4."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Executive summary: 3 bullet points. Analysis 1: cohort table + finding. Analysis 2: survival comparison + finding. Analysis 3: ranked feature importance + finding. Revenue impact: single number. Action plan: 3 prioritized actions with expected churn reduction (percentage points)."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Analyze the Q4 churn spike, identify the top 3 drivers, and recommend 3 actions with expected impact."
    }
  ]
}
With sinc: Raw prompt produces generic churn analysis framework. sinc prompt identifies the November 1 onboarding change as primary driver (47% of churn is <3 month tenure, correlating with post-change cohort), quantifies $127K MRR impact, and recommends 3 specific actions: revert onboarding flow, add missing Salesforce integration, introduce Starter-to-Pro upgrade path.
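The revenue-impact number the CONSTRAINTS band demands falls straight out of the DATA band's plan mix. A back-of-envelope sketch of the single-month MRR loss (the cumulative quarterly figure compounds beyond this, since each month's churned accounts stop paying for every following month):

```python
# Plan mix of the 412 churned Q4 accounts, from the DATA band.
plan_mix = {49: 0.58, 149: 0.31, 299: 0.11}  # price -> share of churned accounts
churned = 412

avg_mrr = sum(price * share for price, share in plan_mix.items())
mrr_lost = churned * avg_mrr   # monthly recurring revenue walking out the door
```

Putting this arithmetic in the CONSTRAINTS band ("Quantify the revenue impact") is what forces the model to do it rather than gesture at "significant revenue loss".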
Example 10 of 15

Customer Support

Customer Success customer_support
Raw prompt: Create a response template for angry customers who want refunds
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a customer support operations manager who designs response templates that achieve 85%+ CSAT on escalated tickets. You follow the HEARD framework: Hear, Empathize, Apologize, Resolve, Diagnose."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "SaaS company ($8M ARR) selling project management software. Refund policy: full refund within 30 days, pro-rated after 30 days, no refund after 90 days. Average ticket resolution: 4.2 hours. Current CSAT on refund tickets: 62% (company target: 80%). Support team: 5 agents, all Tier 1. Tier 2 escalation to team lead."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Top 5 refund request triggers: (1) billed after cancellation attempt (34%), (2) feature removed or changed (22%), (3) service outage impact (18%), (4) competitor switch (15%), (5) budget cuts (11%). Average refund ticket sentiment score: -0.72 (scale -1 to +1). Refund approval rate: 73%. Average refund amount: $127."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Create exactly 5 templates, one per trigger type. Each template must: follow the HEARD framework in order, include merge fields for {customer_name}, {product_name}, {amount}, {billing_date}, {agent_name}. Keep each template under 150 words. Include a decision tree for when to approve vs escalate: approve immediately (conditions), escalate to Tier 2 (conditions), deny with alternative (conditions). Never use: 'unfortunately', 'per our policy', 'as previously stated', 'I understand your frustration but'. Tone: direct, warm, solution-first. Each template must offer exactly 2 resolution options."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "5 templates in order (one per trigger). Each template: [TRIGGER TYPE] | Word count | HEARD step markers in the text. Decision tree as a flowchart (text-based). Escalation criteria table: Condition | Action | SLA."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Create 5 refund response templates with a decision tree for approval/escalation routing."
    }
  ]
}
With sinc: Raw prompt produces 1 generic apologetic template. sinc prompt produces 5 trigger-specific templates each under 150 words, a decision tree covering 8 routing conditions, and an escalation SLA table -- all calibrated to move CSAT from 62% toward the 80% target.
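The decision tree the template asks for has the shape of a small routing function. A sketch of that shape, built only from the refund policy in the CONTEXT band; the trigger names and routing labels are illustrative, not the template's output:

```python
def route_refund(days_since_purchase, trigger):
    # Approve/escalate/deny routing implied by the CONTEXT band's policy:
    # full refund within 30 days, pro-rated to 90 days, none after 90.
    if trigger == "billed_after_cancellation":
        return "approve"            # billing error: approve immediately
    if days_since_purchase <= 30:
        return "approve"            # inside the full-refund window
    if days_since_purchase <= 90:
        return "escalate_tier2"     # pro-rated amount needs lead sign-off
    return "deny_with_alternative"  # past 90 days: offer credit or pause
```

The sinc prompt's job is to make the LLM emit all the branch conditions explicitly, so Tier 1 agents can execute the tree without judgment calls.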
Example 11 of 15

API Design

Software Architecture api_design
Raw prompt: Design a REST API for a task management system
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are an API architect who designs RESTful APIs following OpenAPI 3.1 specification. You have designed APIs serving 100M+ requests/day and prioritize developer experience, consistency, and backward compatibility."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "Building a task management API for a B2B SaaS product. Expected load: 10,000 concurrent users, 500 requests/second peak. Must support multi-tenancy (workspace isolation), real-time updates via webhooks, and integration with Slack, Jira, and GitHub. Authentication: OAuth 2.0 with JWT. Deployment: Kubernetes on AWS."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Core entities: Workspace, Project, Task, Comment, User, Label, Attachment. Relationships: Workspace has many Projects, Project has many Tasks, Task has many Comments and Labels (many-to-many), User belongs to many Workspaces. Task fields: title, description, status (todo/in-progress/done/archived), priority (1-4), assignee, due_date, created_at, updated_at. Required operations: CRUD on all entities, bulk task updates, task search with filters, activity timeline."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Follow these conventions strictly: resource naming in plural lowercase (e.g., /tasks not /task), consistent pagination using cursor-based approach (not offset), standard error format with error code + message + details array, rate limiting headers on every response (X-RateLimit-*), ETag-based caching for GET endpoints, HATEOAS links for discoverability. Design exactly 15 endpoints covering the core operations. Include request/response schemas for the 3 most complex endpoints. Specify HTTP status codes for success AND the 3 most likely error cases per endpoint. Do not use query parameters for required filters -- use path segments."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Endpoint table: Method | Path | Description | Auth | Rate Limit. Followed by 3 detailed endpoint specs with request body schema, response schema, and error cases. Pagination example. Error format example."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Design the complete REST API surface for the task management system with 15 endpoints, pagination strategy, and error handling."
    }
  ]
}
With sinc: Raw prompt produces a basic CRUD list with inconsistent conventions. sinc prompt delivers 15 endpoints with consistent naming, cursor-based pagination, standard error format, rate limit headers, 3 fully specified endpoint schemas with error cases, and HATEOAS link patterns.
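Cursor-based pagination, the convention the CONSTRAINTS band mandates, is worth sketching since it is the detail raw prompts most often get wrong. A minimal sketch assuming an opaque base64 cursor over a unique `(created_at, id)` sort key; the function names are illustrative:

```python
import base64
import json

def encode_cursor(created_at, task_id):
    # Opaque cursor: clients resume from it but can't hand-construct offsets.
    payload = json.dumps({"created_at": created_at, "id": task_id})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

def next_page_filter(cursor):
    # Keyset predicate for the next page: strictly after the last row seen,
    # ordered on a unique (created_at, id) pair so pages never skip or repeat
    # rows even when new tasks are inserted between requests.
    c = decode_cursor(cursor)
    return ("(created_at, id) > (%s, %s)", (c["created_at"], c["id"]))
```

Offset pagination breaks exactly where this holds up: concurrent inserts shift offsets, while a keyset cursor pins the boundary to a specific row.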
Example 12 of 15

System Architecture

Software Architecture system_architecture
Raw prompt: Design the architecture for a real-time notification system
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a distributed systems architect who designs event-driven architectures at scale. You have built notification systems handling 1B+ events/day at companies with 50M+ users. You evaluate trade-offs using CAP theorem and document decisions as ADRs."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "E-commerce platform with 2M daily active users. Current notification system is a monolithic cron job that sends batch emails every 15 minutes. Requirements: real-time push notifications (mobile + web), email digests, in-app notification center. Must support: order updates, price drop alerts, back-in-stock alerts, promotional campaigns. SLA: 99.9% delivery within 30 seconds for transactional notifications."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Current volume: 8M notifications/day. Projected: 25M/day within 12 months. Peak: 10x during flash sales (250M/day events). Infrastructure: AWS, Kubernetes, PostgreSQL, Redis. Budget: $15,000/month for notification infrastructure. Team: 3 backend engineers, 1 DevOps."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Design must handle 10x peak without pre-scaling. Use only AWS-native or open-source components (no vendor lock-in to Twilio, SendGrid, etc. for core architecture -- these are acceptable as leaf-node delivery providers only). Provide exactly 3 Architecture Decision Records (ADRs): message broker choice, delivery guarantee strategy, and storage/query pattern for notification center. Each ADR must include: context, options considered (minimum 3), decision, consequences. Include a failure mode analysis: what happens when each component fails? Must fit within $15,000/month budget -- show estimated cost breakdown."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "C4 Level 2 diagram (text description of containers). 3 ADRs in standard format. Data flow for 1 notification lifecycle (order shipped). Failure mode table: Component | Failure | Impact | Mitigation | RTO. Cost breakdown table: Component | Service | Monthly Cost."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Design the real-time notification system architecture with ADRs, failure analysis, and cost breakdown fitting the $15K/month budget."
    }
  ]
}
With sinc: Raw prompt produces a generic pub/sub diagram. sinc prompt delivers a complete C4 Level 2 architecture with Kafka/SQS event bus, 3 ADRs with trade-off analysis, failure mode table covering 8 components, and a cost breakdown totaling $12,400/month across 7 AWS services.
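The throughput and budget figures in the DATA band can be sanity-checked with a few lines of arithmetic. A minimal sketch using only the numbers stated above (25M notifications/day projected, 10x flash-sale peak, $15,000/month budget):

```python
# Back-of-envelope capacity check for the notification system,
# using only the figures from the DATA band above.

SECONDS_PER_DAY = 86_400

projected_per_day = 25_000_000   # projected daily volume (12-month horizon)
peak_multiplier = 10             # flash-sale peak from the DATA band
budget_per_month = 15_000        # USD/month infrastructure budget

avg_rate = projected_per_day / SECONDS_PER_DAY   # ~289 notifications/sec
peak_rate = avg_rate * peak_multiplier           # ~2,894 notifications/sec

# Cost ceiling per notification at projected volume (30-day month)
cost_per_notification = budget_per_month / (projected_per_day * 30)

print(f"average rate: {avg_rate:.0f}/sec")
print(f"peak rate:    {peak_rate:.0f}/sec")
print(f"budget:       ${cost_per_notification * 1000:.3f} per 1,000 notifications")
```

These are the envelope numbers any candidate architecture has to clear: roughly 2,900 notifications/sec at peak, at about $0.02 per 1,000 notifications.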
Example 13 of 15

DevOps Incident

DevOps/SRE devops_incident
Raw prompt: Our database is slow, help us fix it
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are an SRE incident commander who has resolved 200+ production database incidents. You follow a structured triage: identify blast radius, stabilize, diagnose root cause, remediate, postmortem."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "Production PostgreSQL 15 database on AWS RDS (db.r6g.2xlarge, 8 vCPU, 64GB RAM). Serving a B2B SaaS application with 1,200 concurrent users. Incident started 45 minutes ago. Customer-facing impact: API response times increased from 120ms p95 to 3,200ms p95. Error rate: 12% (up from 0.1%). PagerDuty alert triggered at severity P1."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Observations: CPU usage: 94% (normal: 35%). Active connections: 487 (max_connections: 500, normal: 120). Disk IOPS: 12,000 (provisioned: 10,000). Top query by time: SELECT * FROM events WHERE user_id = $1 AND created_at > $2 ORDER BY created_at DESC -- running 3,400 times/minute (up from 200/min). pg_stat_activity shows 340 queries in 'active' state on the events table. The events table: 180M rows, last VACUUM was 3 days ago, bloat estimate: 40%. A new feature was deployed 2 hours ago that added an 'activity feed' hitting this query."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Provide a triage sequence with exact commands to run (psql, AWS CLI, or shell commands). Separate actions into: immediate stabilization (next 5 minutes), root cause confirmation (next 15 minutes), remediation (next 1 hour). Every action must have: the exact command, what it checks, the expected output if the hypothesis is correct, and the rollback command if it makes things worse. Do not suggest 'upgrade the instance' as a first action. Do not suggest changes that require a maintenance window. Prioritize actions by impact/effort ratio."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Phase 1 (0-5 min): numbered stabilization commands. Phase 2 (5-20 min): diagnostic commands with expected output. Phase 3 (20-60 min): remediation steps. Decision tree: if X then Y, else Z. Postmortem template: timeline, root cause, action items."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Produce the incident response runbook for this PostgreSQL performance crisis with exact commands for each phase."
    }
  ]
}
With sinc: Raw prompt suggests EXPLAIN ANALYZE and 'consider adding an index'. sinc prompt produces a 3-phase runbook: immediate kill of long-running queries + connection limit reduction, diagnostic commands revealing missing index on events(user_id, created_at) + 40% table bloat, remediation with CREATE INDEX CONCURRENTLY + VACUUM + new feature rollback decision tree.
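The diagnosis follows from simple ratios over the observations in the DATA band. A minimal sketch of that triage arithmetic, with every figure taken directly from the example above:

```python
# Triage arithmetic from the pg_stat observations above:
# every ratio points at the activity-feed query deployed 2 hours ago.

query_rate_now, query_rate_normal = 3_400, 200        # runs/minute
connections_now, connections_max = 487, 500           # vs. max_connections
iops_now, iops_provisioned = 12_000, 10_000           # disk IOPS

query_growth = query_rate_now / query_rate_normal     # 17x normal query rate
conn_utilization = connections_now / connections_max  # 97.4% of max_connections
iops_utilization = iops_now / iops_provisioned        # 120% of provisioned IOPS

print(f"query rate:  {query_growth:.0f}x normal")
print(f"connections: {conn_utilization:.1%} of max")
print(f"disk IOPS:   {iops_utilization:.0%} of provisioned")
```

A 17x query-rate jump two hours after a deploy, with connections and IOPS both saturated, is what lets the CONSTRAINTS band insist on stabilization before any instance-size discussion.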
Example 14 of 15

Hiring Assessment

Human Resources hiring_assessment
Raw prompt: Create an interview process for a senior backend engineer
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a technical hiring manager who has conducted 500+ engineering interviews and built interview processes achieving 90%+ new hire retention at 12 months. You design structured interviews with calibrated rubrics to minimize bias."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "Series B startup (80 employees, 25 engineers) hiring a senior backend engineer for the payments team. Tech stack: Go, PostgreSQL, Kafka, Kubernetes, AWS. The role requires: designing APIs, owning a microservice, mentoring 2 junior engineers, and participating in on-call rotation. Compensation: $180-220K base. Timeline: fill the role by May 1, 2026. Current pipeline: 45 applicants, need to filter to 8 phone screens."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Team gaps: no one with deep Kafka experience (current team learned from docs), API design is inconsistent across services, on-call is currently handled by 3 people (need a 4th). Previous hire (6 months ago) passed all interviews but struggled with ambiguity and ownership -- the process failed to test for these traits. Hiring committee: engineering manager, 2 senior engineers, VP Engineering (final)."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Design exactly 5 interview stages (resume screen through final). Total candidate time: maximum 5 hours across all stages. Each stage must have: exact duration, interviewer(s), competencies tested (maximum 3 per stage), 2 specific questions with ideal answer criteria, a scoring rubric (1-4 scale with behavioral anchors for each score), and a pass/fail threshold. Ensure the process tests for the failure mode identified (ambiguity tolerance and ownership) -- dedicate at least 1 full stage to this. Include a calibration exercise: provide 2 sample candidate profiles and the expected score across all stages. Do not include trick questions or brainteasers."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Stage table: Stage | Duration | Interviewer | Competencies | Pass Threshold. Per stage: 2 questions with rubric (Score 1/2/3/4 behavioral descriptions). Calibration: 2 candidate profiles scored through all stages. Scorecard template for the hiring committee debrief."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Design the complete 5-stage interview process with questions, rubrics, and calibration profiles for the senior backend engineer role."
    }
  ]
}
With sinc: Raw prompt produces a generic 'phone screen, coding, system design, behavioral, offer' list. sinc prompt delivers 5 calibrated stages with 10 specific questions, 4-point rubrics with behavioral anchors, a dedicated 'ambiguity and ownership' stage addressing the previous hire failure, and 2 calibration profiles showing expected scoring.
Example 15 of 15

Competitive Analysis

Business Strategy competitive_analysis
Raw prompt: Compare our product to our competitors
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a competitive intelligence analyst who produces battle cards used by sales teams to win deals against specific competitors. You analyze products on 4 axes: feature parity, pricing, positioning, and customer sentiment."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "We sell an observability platform ($2.5M ARR) competing for mid-market deals (200-2000 employees). Win rate has dropped from 45% to 31% in the last quarter. Sales reports indicate 3 primary competitors appearing in 80% of lost deals: Datadog, Grafana Cloud, and New Relic. The sales team of 8 AEs needs updated battle cards by April 1, 2026."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Our product: full-stack observability (logs, metrics, traces), $15/host/month, no user-based pricing, unlimited retention, self-hosted option available. Key wins: companies with >500 hosts (cost advantage), compliance-heavy industries (self-hosted), teams with existing Prometheus/Grafana investments (migration tools). Key losses: companies wanting APM (we lack code-level tracing), teams wanting AI-powered alerting (competitors ahead), deals where Datadog is incumbent."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "Create exactly 3 battle cards (one per competitor). Each battle card must contain: positioning comparison (2 sentences each), feature comparison table (exactly 8 features, scored as 'ahead'/'parity'/'behind' for each), pricing comparison at 3 scale points (100, 500, 1000 hosts), top 3 objections the competitor's sales team makes about us (with rebuttals), top 3 trap questions to ask prospects to highlight competitor weaknesses, and the ideal customer profile where we win against this specific competitor. Use only publicly available pricing and feature data. Mark any estimate with [EST]. Do not disparage competitors -- focus on factual differentiation."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "3 battle cards, each as a structured section: Header (Competitor name + one-line positioning). Feature table: 8 rows. Pricing table: 3 scale points. Objection/Rebuttal: 3 pairs. Trap Questions: 3 with expected outcome. Win Profile: 1 paragraph."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Create 3 competitor battle cards for Datadog, Grafana Cloud, and New Relic to help the sales team improve win rate from 31% back to 45%."
    }
  ]
}
With sinc: Raw prompt produces a generic feature comparison table. sinc prompt delivers 3 actionable battle cards with pricing at 3 scale points (showing our 40-60% cost advantage at 500+ hosts), 9 specific objection rebuttals, 9 trap questions with expected outcomes, and win profiles identifying the exact customer segments where we have structural advantage.
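Our side of the pricing table at the three required scale points can be generated from the one figure the DATA band states ($15/host/month, flat, no user-based component). Competitor figures are deliberately left out of this sketch, since the CONSTRAINTS band allows only publicly available pricing:

```python
# Our monthly cost at the 3 scale points required by the CONSTRAINTS band.
# Flat per-host pricing, no user-based component (from the DATA band).

PRICE_PER_HOST = 15  # USD/host/month

def monthly_cost(hosts: int) -> int:
    return hosts * PRICE_PER_HOST

for hosts in (100, 500, 1000):
    print(f"{hosts:>5} hosts -> ${monthly_cost(hosts):,}/month")
```

This yields $1,500 / $7,500 / $15,000 at 100 / 500 / 1,000 hosts, the baseline against which the [EST]-marked competitor figures in each battle card are compared.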

Download Machine-Readable Data

All 15 examples are available as a JSONL file for programmatic use, fine-tuning datasets, or evaluation benchmarks.

Download sinc-examples.jsonl (one JSON object per line)

See also: Full dataset page with 275 observations, BibTeX citation, and CC BY 4.0 license.
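For programmatic use, here is a minimal sketch of loading the JSONL and checking band completeness. The field names follow the fragment schema shown in the examples above; the validation rule itself is illustrative, not part of the dataset:

```python
import json

# The 6 bands every sinc prompt samples, in order (n = 0..5).
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def validate_sinc(prompt: dict) -> bool:
    """True if the prompt samples all 6 bands exactly once, in order."""
    fragments = prompt.get("fragments", [])
    return [f.get("t") for f in sorted(fragments, key=lambda f: f["n"])] == BANDS

def load_examples(path: str):
    """Yield validated sinc prompts from a JSONL file (one object per line)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            prompt = json.loads(line)
            if validate_sinc(prompt):
                yield prompt
```

Any example failing the check is missing at least one band, i.e. it undersamples the specification signal.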

x(t) = Σ x(nT) · sinc((t - nT) / T)  —  github.com/mdalexandre/sinc-llm