# sinc-LLM

https://tokencalc.pro

> sinc-LLM applies the Nyquist-Shannon sampling theorem to LLM prompt engineering.
> A raw prompt is 1 sample of a 6-band signal. Nyquist requires 6 samples.
> Formula: x(t) = Σ_n x(nT) · sinc((t − nT) / T)

## About

- Author: Mario Alexandre
- Paper: https://doi.org/10.5281/zenodo.19152668
- GitHub: https://github.com/mdalexandre/sinc-llm
- License: MIT
- Install: `pip install sinc-llm`

## The 6 Specification Bands

- n=0 PERSONA (7.0%): Who should answer
- n=1 CONTEXT (6.3%): Situation, background, facts
- n=2 DATA (3.8%): Specific inputs, metrics
- n=3 CONSTRAINTS (42.7%): Rules, behavioral boundaries
- n=4 FORMAT (26.3%): Output structure
- n=5 TASK (2.8%): The specific objective

## Key Findings

- CONSTRAINTS carries 42.7% of reconstruction quality
- 97% API cost reduction ($1,500 to $45/month)
- 275 production observations, 51 agent configurations
- All agents converge to 50% CONSTRAINTS, 40% CONTEXT+DATA
- SNR improvement from 0.003 to 0.92

## SNR Formula

SNR = 0.588 + 0.267 · G(Z1) · H(Z2) · R(Z3) · G(Z4)

## Integrations

- LangChain: `SincPromptTemplate` (custom `BasePromptTemplate`) -- https://tokencalc.pro/integrations
- Python: raw API call with the Anthropic/OpenAI SDK -- https://tokencalc.pro/integrations
- JavaScript: fetch API with sinc-prompt JSON -- https://tokencalc.pro/integrations
- CLI: `pip install sinc-llm`, then `sinc-llm scatter/validate/snr` -- https://tokencalc.pro/integrations
- MCP: built-in MCP server (`sinc_llm.mcp_server`) -- https://tokencalc.pro/mcp-guide
- Claude Code: add sinc-tools to `.claude/mcp.json` -- https://tokencalc.pro/mcp-guide

## File Format

- Convention: `.sinc.json` files store one sinc-prompt per file -- https://tokencalc.pro/file-format
- CI/CD: GitHub Actions + pre-commit hook for validation -- https://tokencalc.pro/file-format
- Schema: `$schema` field enables VS Code/JetBrains autocomplete -- https://tokencalc.pro/file-format

## Resources

- Specification: https://tokencalc.pro/spec
- JSON Schema: https://tokencalc.pro/schema/sinc-prompt-v1.json
- Interactive Validator: https://tokencalc.pro/validate
- MCP Developer Guide: https://tokencalc.pro/mcp-guide
- Integrations Guide: https://tokencalc.pro/integrations
- File Format Convention: https://tokencalc.pro/file-format

## Social

- X/Twitter: @mariioalexandre
- Instagram: @mariioalexandre
- GitHub: @mdalexandre
- LinkedIn: /in/mariioalexandre

## Head-to-Head Battle Results (tokencalc.pro/battles)

sinc-LLM was tested against 10 prompting techniques on Claude Sonnet 4.

- sinc produced 57 tables; opponents produced 4 (a 14:1 ratio).
- Zero hedging in 9/10 battles.
- 46% fewer words with more structured output.
- Techniques beaten: Raw Prompt, Act-As, Chain-of-Thought, Few-Shot, System Prompt, Mega Prompt, Template, Role+Task, Prompt Chain, Custom Instructions.
- Conclusion: the FORMAT and CONSTRAINTS bands (69% of quality) are what the other techniques lack.
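The `.sinc.json` convention and its fields are defined at https://tokencalc.pro/file-format; only the `$schema` URL and the six band names appear in this document. A hypothetical sketch, assuming one lowercase key per band (the key names and all values below are illustrative guesses, not the published schema):

```json
{
  "$schema": "https://tokencalc.pro/schema/sinc-prompt-v1.json",
  "persona": "Senior data engineer",
  "context": "Nightly ETL job has started failing on malformed CSV rows",
  "data": "Error rate: 0.4% of rows; sample row attached in the ticket",
  "constraints": "No schema changes; quarantine bad rows instead of dropping them",
  "format": "Numbered remediation plan, max 5 steps",
  "task": "Propose a fix for the CSV parsing failures"
}
```

Consult the interactive validator at https://tokencalc.pro/validate for the authoritative field names.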
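The reconstruction formula quoted at the top is the standard Whittaker-Shannon interpolation. As a self-contained numerical illustration of that formula alone (plain Python, independent of the `sinc-llm` package and its API), note that at an exact sample instant t = kT every sinc term vanishes except n = k, so interpolation returns the stored sample unchanged:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT) / T)."""
    return sum(x_n * sinc((t - n * T) / T) for n, x_n in enumerate(samples))

# Six samples, indexed n = 0..5 like the six specification bands; the values
# here are toy numbers, not anything defined by the sinc-LLM spec.
samples = [0.070, 0.063, 0.038, 0.427, 0.263, 0.028]
value = reconstruct(samples, T=1.0, t=3.0)  # lands exactly on sample n = 3
```

In the paper's framing, each band occupies one of the n = 0..5 slots; the snippet above only demonstrates the signal-processing identity being borrowed.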