Structure Encoding
LLMs are excellent at reading prose, but unstructured text requires significantly more tokens (and internal computation) to convey the same information than structured data does. Structure Encoding is Trunkate AI’s method of condensing verbose instructions into compact data formats.
The Transformation
Trunkate automatically detects when a prompt describes complex objects, lists, or conditional logic in plain English, and re-encodes that content into a compact structured form. For example:
“The user’s name is John. He is 30 years old.” → “name:John,age:30”
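A minimal sketch of what a key-value projection pass could look like is shown below. This is an illustration only: the regex patterns and the `project_key_values` helper are hypothetical, and Trunkate’s production encoder is not a regex pass.

```python
import re

# Illustrative sketch of key-value projection. The patterns and helper name
# are hypothetical; Trunkate's actual encoder is not exposed this way.
PATTERNS = [
    # "The user's name is John."  ->  ("name", "John")
    (re.compile(r"name is (\w+)", re.IGNORECASE), "name"),
    # "He is 30 years old."       ->  ("age", "30")
    (re.compile(r"is (\d+) years old", re.IGNORECASE), "age"),
]

def project_key_values(prose: str) -> str:
    """Condense descriptive sentences into a compact k:v string."""
    pairs = []
    for pattern, key in PATTERNS:
        match = pattern.search(prose)
        if match:
            pairs.append(f"{key}:{match.group(1)}")
    return ",".join(pairs)

print(project_key_values("The user's name is John. He is 30 years old."))
# -> name:John,age:30
```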
Extraction Capabilities
| Feature | Description | Token Savings |
|---|---|---|
| Key-Value Projection | Converts descriptive sentences into simple k:v tuples. | High |
| Implicit List Collapsing | Changes “First do A, then do B, finally do C” into [A, B, C]. | Medium |
| Logic Simplification | Translates “If the user says X, you should respond with Y” into rule-based pseudocode. | Very High |
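To make the Logic Simplification row above concrete, here is a toy sketch of turning an English conditional into a rule-style line. The regex and the `when X -> Y` output syntax are assumptions for illustration, not Trunkate’s documented rule format.

```python
import re

# Illustrative sketch of logic simplification. The "when X -> Y" rule syntax
# is an assumed encoding, not Trunkate's documented output format.
CONDITIONAL = re.compile(
    r'if the user says "?(?P<trigger>[^",]+)"?,? (?:you should )?respond with (?P<response>[^.]+)',
    re.IGNORECASE,
)

def simplify_logic(instruction: str) -> str:
    """Collapse an English conditional into a compact rule line."""
    match = CONDITIONAL.search(instruction)
    if not match:
        return instruction  # leave anything we can't parse untouched
    return f"when {match.group('trigger').strip()} -> {match.group('response').strip()}"

print(simplify_logic('If the user says "cancel", you should respond with a confirmation prompt.'))
# -> when cancel -> a confirmation prompt
```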
Preserving Nuance
The primary risk in structuring text is losing the nuance of the original request. Trunkate AI employs an Importance Gradient algorithm to decide how aggressively each part of a prompt can be compressed (a simplified sketch follows the list below).
- Strict Constraints: Retained exactly as written (e.g., “Must be under 50 words”).
- Contextual Details: Compressed into KV pairs (e.g., “The audience is children”).
- Redundant Information: Safely pruned if the model’s pre-training already covers the concept.
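The tiering above can be pictured with a deliberately simplified heuristic. The keyword rules, tier names, and `classify_segment` function are assumptions made purely for illustration; the actual Importance Gradient is model-driven, not a keyword lookup.

```python
from enum import Enum

# Illustrative heuristic stand-in for the Importance Gradient.
# Keyword markers and tier names are assumptions for this sketch.
class Tier(Enum):
    STRICT = "retain verbatim"
    CONTEXTUAL = "compress to k:v"
    REDUNDANT = "prune"

STRICT_MARKERS = ("must", "never", "always", "exactly", "under", "at most")

def classify_segment(segment: str, known_concepts: set[str]) -> Tier:
    """Assign a prompt segment to a compression tier."""
    lowered = segment.lower()
    if any(marker in lowered for marker in STRICT_MARKERS):
        return Tier.STRICT
    if any(concept in lowered for concept in known_concepts):
        return Tier.REDUNDANT
    return Tier.CONTEXTUAL

segments = [
    "Must be under 50 words",
    "The audience is children",
    "Remember that Paris is the capital of France",
]
known = {"paris is the capital of france"}
for seg in segments:
    print(f"{seg!r} -> {classify_segment(seg, known).value}")
```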
Why It Works
Given dense, structured context rather than sprawling prose, the model spends less of its context window on understanding the request and more of it on generating the answer. This translates directly into higher output quality and lower hallucination rates.