Lots of talk about how JSON/XML formatting in prompts will 10x your output from ChatGPT, Claude, Grok, etc. It's 0% true. The model has the same context window whether you're asking for War and Peace or {"story": "War and Peace"}.

People think JSON/XML tricks the model because they see longer outputs in structured formats, but that's correlation, not causation: you're just asking better questions with clearer expectations.

What actually works to 10x output is "boring" advice, but here it is anyways: break complex requests into chunks, use "continue" prompts, and be specific about what you want instead of hoping XML/JSON tags will somehow bypass the fundamental architecture of transformer models.
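To make the chunking + "continue" pattern concrete, here's a minimal sketch assuming the OpenAI Python SDK; the model name, the prompts, and the generate_long helper are illustrative placeholders, not anything from the original post:

```python
# A minimal sketch of the "chunk + continue" pattern, assuming the
# OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def generate_long(task: str, n_continues: int = 3) -> str:
    """Build a long output in pieces instead of hoping for one giant reply."""
    messages = [{"role": "user", "content": task}]
    parts = []
    for i in range(n_continues + 1):
        resp = client.chat.completions.create(model="gpt-4o", messages=messages)
        parts.append(resp.choices[0].message.content)
        if i == n_continues:
            break
        # Feed the model its own output, then ask it to pick up where it stopped.
        messages.append({"role": "assistant", "content": parts[-1]})
        messages.append({"role": "user", "content": "Continue exactly where you left off."})
    # Joining strategy depends on the task; newline works for outline-style output.
    return "\n".join(parts)

print(generate_long("Write a detailed outline for a 10-chapter novel, one chapter at a time."))
```

Same idea applies to any provider: the win comes from splitting the request and being explicit about each piece, not from the output format.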