FAQs
General
What is Trunkate AI?
Trunkate AI is a deterministic prompt optimizer. It strips conversational fluff, converts unstructured data into compact structured forms, and prunes redundant context from your LLM prompts before they are sent to the model API.
How is this different from Prompt Caching?
Prompt Caching (like Anthropic’s feature) saves money when you resend the exact same prompt prefix repeatedly. Trunkate AI saves money on dynamic prompts by shrinking the text itself. The two approaches can be combined for maximum savings.
Technical
What is “Structure Encoding”?
LLMs are excellent at reading prose, but prose spends more tokens than structured data to convey the same information. Structure Encoding converts sentences like “The user is John, he is 30” into “name:John,age:30”.
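As an illustration only (the production encoder uses grammatical parsing and is not public), a naive sketch of the transformation might look like this; the function name and the regex rule are invented for this example:

```python
import re

def structure_encode(text: str) -> str:
    """Toy sketch of Structure Encoding: collapse one known prose
    pattern into a compact key:value form. Not the real algorithm."""
    # Match "The user is <name>, he/she/they is/are <age>"
    m = re.match(r"The user is (\w+), (?:he|she|they) (?:is|are) (\d+)", text)
    if m:
        return f"name:{m.group(1)},age:{m.group(2)}"
    return text  # pass through anything this toy rule cannot encode

print(structure_encode("The user is John, he is 30"))
```

The same input always produces the same output, which is the property the next answer relies on.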
Is this safe for my app?
Yes. The optimization pipeline is entirely deterministic. It uses a combination of dictionary-based rewrites and grammatical parsing. We do not use hallucination-prone models to edit your prompt.
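To make “dictionary-based rewrites” concrete, here is a minimal sketch of the idea; the rewrite table and function name are invented for illustration, not the actual rule set:

```python
# Toy table of deterministic rewrites: each entry maps a verbose
# phrase to a shorter equivalent. The real rule set is larger and
# proprietary; these entries are invented for illustration.
REWRITES = {
    "in order to": "to",
    "due to the fact that": "because",
    "at this point in time": "now",
}

def apply_rewrites(prompt: str) -> str:
    """Apply every rule in a fixed order; identical input always
    yields identical output, so no model (and no hallucination risk)
    is involved."""
    for verbose, concise in REWRITES.items():
        prompt = prompt.replace(verbose, concise)
    return prompt

print(apply_rewrites("Summarize this in order to help the user."))
```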
Security & Privacy
Is my data private?
Yes.
- Stateless Processing: The optimization occurs entirely in memory.
- No Data Logging: We do not store your prompts or train any models on user data.
- Local Mode: We offer a completely offline CLI and local SDK for enterprise users who require air-gapped security.
Do I need to change my LLM?
No. Trunkate is a middleware layer: it sits between your application logic and the LLM API provider (OpenAI, Anthropic, Bedrock, etc.) and is completely agnostic to the underlying provider.
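The middleware pattern can be sketched as follows; `optimize`, `chat`, and the provider client below are illustrative names under assumed interfaces, not a documented API:

```python
def optimize(prompt: str) -> str:
    # Stand-in for the optimizer; a trivial transformation here.
    return prompt.strip()

def chat(provider_client, prompt: str) -> str:
    """Shrink the prompt first, then forward it to whichever
    provider client the application already uses."""
    compact = optimize(prompt)
    return provider_client.complete(compact)

class FakeProvider:
    """Minimal stand-in so the sketch runs without network access;
    a real client (OpenAI, Anthropic, Bedrock) would go here."""
    def complete(self, prompt: str) -> str:
        return f"echo:{prompt}"

print(chat(FakeProvider(), "  Hello  "))
```

Because the optimized prompt is still plain text, swapping providers changes nothing on the Trunkate side.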