Two interesting points from #OpenAI’s AI agent explainer:
⁃ #Codex’s compaction uses data from the #LLM’s internal latent representation of the conversation rather than a text summary. Given how expensive reprocessing tokens is, this must be a big efficiency win.
⁃ The client gets this data encrypted, with the keys held by OpenAI. That satisfies #ZDR compliance, but I suspect it also prevents the model’s internals from being reverse-engineered from the compacted state. Rough sketch of the flow below.
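
Here’s a minimal sketch of how I picture that opaque-blob flow. This is not OpenAI’s actual API: the names (server_compact, server_resume, CompactionBlob) and the toy XOR "encryption" are stand-ins, purely to show the shape of it, with the server holding the only key and the client just storing and returning the blob.

```python
# Conceptual sketch only -- hypothetical names, not OpenAI's API.
from dataclasses import dataclass
import os

SERVER_KEY = os.urandom(32)  # held server-side only; the client never sees it


def _xor_stream(key: bytes, data: bytes) -> bytes:
    # Stand-in for real authenticated encryption (e.g. AES-GCM).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


@dataclass
class CompactionBlob:
    ciphertext: bytes  # opaque to the client


def server_compact(latent_state: bytes) -> CompactionBlob:
    # Server serializes its internal representation of the conversation
    # and encrypts it before handing it to the client.
    return CompactionBlob(_xor_stream(SERVER_KEY, latent_state))


def server_resume(blob: CompactionBlob) -> bytes:
    # Next turn, the client sends the blob back; the server decrypts it
    # and continues without reprocessing the full token history.
    return _xor_stream(SERVER_KEY, blob.ciphertext)


# Client side: store the blob, return it later. Nothing readable is
# persisted client-side, which is how the ZDR posture is kept.
state = b"...latent representation of the conversation..."
blob = server_compact(state)
assert server_resume(blob) == state
```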
