Recursive Language Model Prompting
A friend pointed me towards an AI research paper from MIT that introduces Recursive Language Models (RLMs) and reframes how large language models handle extremely long contexts: instead of treating the input as a static prompt, the model treats it as an external environment it can actively explore.
Instead of forcing all information into a fixed context window, an RLM uses a programmatic interface (i.e., a Python REPL) to inspect, filter, and recursively decompose large datasets, invoking smaller sub-queries as needed.
This recursive, hierarchical approach aims to mitigate “context rot” and enable reasoning over millions of tokens, shifting the scaling bottleneck from ever-larger context windows to a more structured interaction with external data.
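
To make the idea concrete, here is a minimal sketch of that recursive loop in Python. This is illustrative only, not the paper's implementation; the llm() helper and the token budget are assumptions you would wire to your own model and tokenizer.

```python
# A minimal sketch of the RLM loop. Illustrative only, not the paper's code.

CHUNK_TOKENS = 4_000  # assumed per-call budget, well under the context window


def llm(prompt: str) -> str:
    """Hypothetical single model call; wire this to your provider of choice."""
    raise NotImplementedError


def token_count(text: str) -> int:
    """Rough heuristic: roughly four characters per token."""
    return len(text) // 4


def rlm_query(question: str, document: str) -> str:
    """Answer `question` over `document` without loading it all into one prompt."""
    if token_count(document) <= CHUNK_TOKENS:
        # Base case: this piece fits in a single bounded call.
        return llm(f"Context:\n{document}\n\nQuestion: {question}")
    # Recursive case: decompose, answer sub-queries in isolation, then synthesize.
    mid = len(document) // 2
    partials = [rlm_query(question, part) for part in (document[:mid], document[mid:])]
    return llm(
        "Synthesize a final answer to the question from these partial answers.\n"
        f"Question: {question}\n\n" + "\n---\n".join(partials)
    )
```

Each sub-call only ever sees a bounded slice of the input, which is the property the paper exploits: no single model invocation has to reason over the whole thing.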
Even though today's LLMs lack native recursion, external memory pointers, and REPL execution, it is possible to coax this type of behavior through a structured prompt that works to prevent context flooding, reduce hallucination, and encourage the hierarchical reasoning approach described in the RLM paper.
I would say the resulting prompt is a reasonable "poor man's RLM" for longish documents where you want more disciplined reasoning. It won't match the performance gains from the paper, which come from genuinely constrained and isolated context, but in my testing it gives noticeably better results on large documents, long transcripts, codebases, and compliance and audit tasks.
The way I like to think about it is that RLM-style prompting enforces critical thinking at the model level in a specific, checkable way.
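
To give a sense of the shape, here is an illustrative skeleton of such a structured prompt. This is not the actual template from my repo, just a sketch of the pass-based structure that forces the model to explore the document rather than absorb it in one go:

```python
# Illustrative skeleton only; the real template lives in the GitHub repo.
# Fill the {document} and {question} placeholders via .format() or an f-string.
POOR_MANS_RLM_PROMPT = """\
You will answer a question about a long document. Treat the document as
external data to explore, not as context to absorb in one pass.

Work in numbered passes:
1. Skim the document and write a brief outline of its sections.
2. For each section relevant to the question, extract only the minimal
   quotes or facts needed, citing the section they came from.
3. Reason over your extracts alone; do not re-read the full document.
4. Synthesize a final answer, flagging anything you could not verify.

Document:
{document}

Question: {question}
"""
```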
You can download the prompt template from my GitHub if you want to try it out. Just fill in the blanks with what you are trying to achieve.

