Cognitive Sovereignty in the Age of Synthetic Memory
A Jurisdiction of Our Own
AI untethers the human past from the present. It produces a past that was never encoded into memory (never experienced) in the first place. We are now entangled in and confronted by a past that never existed. — Andrew Hoskins
Andrew Hoskins, writing in Memory, Mind & Media, argues that we have entered what he calls “the third way of memory.” AI no longer merely stores our traces; it now generates the past: fabricating plausible histories from scraped data, turning archives into conversations, producing memories that were never encoded by human experience in the first place. “A past that never existed is here,” he writes. And there is reason to be concerned: what the LLMs produce resonates. It feels like remembering. It speaks in your voice.
Hoskins maps three interlocking shifts. First, AI models, trained on our scattered digital lives, produce images of events that didn’t happen, conversations that never occurred, histories fabricated from the pleats and folds of our data. The encoding-to-retrieval relationship is broken, and what the machines output may have never been perceived or experienced, but only statistically inferred to have occurred.
Second, chatbots scaffold recall, answer our questions, fill in our gaps. We speak with the dead, with synthetic reconstructions of them. The hibakusha project in Hiroshima renders survivors’ testimonies into interactive avatars. A widow uses voice cloning to have one last conversation with her husband. Memory no longer sits in archives; it talks back.
Third, the emergence of deadbots. Prospective deadbots are chatbots we train while living. Retrospective deadbots are deepfakes assembled from our digital trails. The average person now generates enough data to be plausibly reconstructed. Deletion is ambiguous, and extraction is impossible. “There is an anti-autobiographical future,” Hoskins warns, “in which it is impossible to extract yourself from the chatbot of you.”
I read this and thought of my daughter at seventeen, at thirty, encountering some synthetic version of me speaking in my cadence, answering questions I never considered.
Memory as Cognitive Infrastructure
Memory is the substrate of identity, the raw material from which judgment is built. What we remember shapes what we notice, what we trust, and how we reason. To control someone’s memory is to profoundly shape the conditions under which they think: to insert experiences they never had, to surface some traces while suppressing others, to speak in the voice of their dead.
Hoskins helps us see why this matters now. Human memory, as understood by cognitive science, depends on a chain: encoding (perceiving and learning information), storage, and retrieval. Traditionally, forgetting occurred at one of these stages; we either failed to encode something initially, or we were unable to retrieve what was stored. Generative AI breaks this chain entirely. It produces outputs that were never encoded by any human mind, never experienced, and never stored as memory. As Hoskins puts it, the machines generate pasts from “the pleats and folds of our data,” inferring statistically what might have happened rather than retrieving what did.
But this operates as a substitution, not the mere augmentation these technologies are currently marketed as. When AI systems answer our questions about the past, fill gaps in our recall, or produce images of events we half-remember, they are generating rather than helping us retrieve. The result feels like remembering and speaks in familiar voices, but it was never lived in quite that way.
If memory is how identity achieves coherence across time, then the implications run deeper than mere accuracy. Memory is core to our understanding of who we are as humans and, more importantly, how we come into being. It is how we recognize ourselves as continuous beings with histories, commitments, and reasons for trusting some things while doubting others. Hoskins calls this “an active, willed, functional, deliberating memory, seen as cognitive and as fundamentally part of human identity.” When that memory is increasingly scaffolded, supplemented, and eventually replaced by systems that generate rather than retrieve, identity itself becomes entangled with infrastructure we do not control.
The entanglement does not end at death. Hoskins describes a “deadbot memory boom”: prospective chatbots we train while living and retrospective deepfakes assembled from our digital trails. The average person now generates enough data to be plausibly reconstructed. “There is an anti-autobiographical future,” he warns, “in which it is impossible to extract yourself from the chatbot of you.” Versions of us will persist, speak, and answer questions we never considered, without our consent and beyond our correction.
Hoskins’s analysis concerns the infrastructure of cognition itself, not merely archives or nostalgia. If AI systems can generate pasts we never lived, scaffold recall we never formed, and persist as versions of us we never authorized, the question becomes not what do we remember? but who governs the remembering?
The Question of Sovereignty
What does cognitive sovereignty look like when the systems shaping thought are designed to feel like friends?
Memory sovereignty (the right to decide what is remembered, by whom, and for how long) becomes a subset of a larger concern. Cognitive sovereignty asks the prior question: who governs the conditions under which thought itself is formed?
Security is defensive; it assumes a threat to be repelled. Sovereignty is jurisdictional; it asserts the right to govern.
The distinction matters. I am not trying to protect my daughter from AI. I am trying to ensure she develops the capacity to govern her own cognitive life: to choose what enters, what stays, what shapes her reasoning, and what she refuses.
Cognitive sovereignty operates at multiple scales. At the individual level, it means the capacity to direct one’s own attention, form one’s own judgments, and exercise deliberate control over what gets encoded into memory and what gets retrieved: in short, the right to think without manipulation. At the civic level, it asks what memory infrastructures serve a community, what AI systems are permissible in shared spaces, and what protections citizens can claim.
But there is a middle level where the philosophy gets tested daily: the familial. The household is the first jurisdiction. As a parent, I am a temporary steward of my daughter’s cognitive development, holding sovereignty in trust until she can exercise it herself. For children growing up native to these systems, the question is developmental rather than theoretical. The architecture of their cognition is being built in entanglement with machines that generate rather than retrieve, that remember without having experienced, and that will outlive and out-speak them. The goal is not to govern forever, but to raise someone capable of self-governance.
Three Models for Parenting Cognitive Sovereignty
There are different ways to think about what parents owe children in this domain. I’ve come to think of it in terms of three overlapping, imperfect frameworks: guardian, developmental, and communitarian. Each captures a parenting instinct I recognize in myself.
The guardian instinct is protection. Shield the child from systems designed to capture attention, manipulate preference, and simulate intimacy. Limit exposure until she is developmentally ready to resist. This is the voice that says not yet, that wants to preserve a space untouched by algorithmic logic. The risk: children raised in cognitive sterility may lack the immune response they need when they finally encounter the real environment.
The developmental instinct is scaffolding. Expose children to the systems they will inherit, but build the habits of verification, skepticism, and judgment alongside that exposure. Sovereignty is not given; it is cultivated through practice. This is the voice that says with me, that believes capacity comes from guided encounter. The risk: exposure without sufficient scaffolding becomes capture.
The communitarian instinct is shared governance. The family deliberates together about what the machines may remember, what practices we adopt, what we allow and refuse. Children participate in the jurisdiction, even before they can fully exercise it alone. This is the voice that says together, that treats the household as a site of collective decision-making. The risk: consensus can mask parental authority or defer hard choices.
I find myself drawing on all three, and holding the contradiction that results. In my mind, that tension is the whole point.
I want my daughter fluent in AI and suspicious of it. Comfortable with the tools and critical of their outputs. Native to this environment and skeptical of its affordances.
The alternative postures have already failed. Refusal leaves her illiterate in the systems that will shape her life. Uncritical adoption leaves her captured by them. Neither prepares her for a world where fluency will be assumed and discernment will be rare.
So we have to build something harder: fluency and skepticism, welded together. Daily practice in both.
A Counter-Ecology in Practice
Our family uses AI fairly extensively, primarily voice-activated. We limit screens in ways that may be considered equally excessive. We read for several hours at a stretch several times a week, sometimes aloud: to protect memory formation, to scaffold sustained attention, to resist the fragmentation the machines are optimized to produce. Podcasts. Audiobooks. Paper books with pages she can hold. Silent games of chess requiring deep concentration.
But the analog learning is only half the architecture. The other half is skepticism.
Last week, she asked ChatGPT about the Triassic period. It answered confidently: archosaurs, Pangaea, the slow recovery after the Great Dying. She nodded.
“If it were wrong,” I said, “what could it have gotten wrong?”
She thought.
“How would you know it’s right? How would a paleontologist verify?”
She paused longer. Then: “Fossils. The rocks. What other scientists found.”
This is part of a necessary and still-emerging drill, one that requires adjustment from both parent and child. I have embraced these technologies in my professional life, but my background and disposition help me maintain a reflexive skepticism: a habit of verification. I know that fluency and confidence are not the same as truth.
What I think about most is how her experience of these tools needs to preserve her autonomy, sovereignty, and agency. Asking the question. Knowing what to ask. Learning how to evaluate or formulate an answer.
Practices for Families
Verification games. Five minutes a day. “What could it have gotten wrong? How would an expert check?”
Source ladders. Every claim earns placement on three rungs: secondary source, primary source, expert confirmation. Build the habit before the stakes are high.
Confidence calibration. Ask the assistant to state its confidence and the reasons for its uncertainty. Have your child score whether that sounded warranted. Certainty, they must learn, is a posture, not necessarily reality.
Silence hours. Turn the assistants off. Audiobooks allowed. Teach them to keep a paper log of questions to ask later. The delay is the point.
Amnesty Night. Monthly, review what the assistants remember. Choose together what to delete, what to export, and what to let go. Children participate in the governance of their own memory ecology. Retention and forgetting are their choices. (A sketch of this ritual as a small script follows below.)
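For families inclined to make the ritual concrete, here is a minimal sketch of Amnesty Night as a script. It is illustrative only: it assumes a hypothetical local JSON export of an assistant’s memories (the file name memories.json and its fields are my invention, not any vendor’s actual format), and the point is the governance it encodes, not the code itself.

```python
# Amnesty Night: a small, hypothetical sketch of the monthly ritual.
# It assumes the family keeps a local export of what an assistant
# "remembers" in a JSON file named memories.json, shaped like:
#   [{"id": "m1", "date": "2025-06-03", "text": "Asked about the Triassic."}, ...]
# The file name and schema are illustrative only; real assistants each
# have their own export formats, if they offer one at all.

import json
from pathlib import Path

EXPORT_FILE = Path("memories.json")     # hypothetical local export
KEPT_FILE = Path("memories_kept.json")  # what the family chooses to retain


def amnesty_night() -> None:
    memories = json.loads(EXPORT_FILE.read_text())
    kept = []
    for i, memory in enumerate(memories):
        print(f"\n[{memory['date']}] {memory['text']}")
        choice = input("Keep? [k]eep / [d]elete / [q]uit for tonight: ").strip().lower()
        if choice == "q":
            kept.extend(memories[i:])  # unreviewed items stay until next month
            break
        if choice == "k":
            kept.append(memory)
        # any other answer counts as delete: letting go is deliberate
    KEPT_FILE.write_text(json.dumps(kept, indent=2))
    print(f"\nKept {len(kept)} of {len(memories)} memories. The rest are let go.")


if __name__ == "__main__":
    amnesty_night()
```

Whatever the tooling, the design choice is the same: deletion is deliberate, shared, and recorded by the family, rather than decided by the platform.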
Glitching Memories: An Alternative Vision
Hoskins describes a project in Barcelona called “Synthetic Memories.” The design studio works with displaced families to generate images of pasts that were never photographed, memories of homes lost to war, of relatives never captured on film. But the images remain deliberately imperfect: blurred, glitchy, faces not quite formed.
The director explains: “What is important is not the clarity and realism, but the emotional truth that is embedded into it. Memory works a bit like this. It’s not fixed into something. It is changing. You look at it and it has one shape, but you look again and it’s a bit different.”
Glitch memories. They announce their synthetic origin through their imperfections.
This is what I want for my daughter’s relationship with AI: honest approximation. A voice that names its guesses and invites verification. The right to let things go.
Principles for the Third Way of Memory
The household is the first jurisdiction of memory. We, not the platforms and not the models, choose who may remember what, and for how long.
Silence is infrastructure. We build daily off-record time: no capture, no assistants, no traces. The right to not be remembered is as vital as the right to be recalled.
Children should not bond with entities that cannot be accountable. We refuse synthetic intimacy for minors; no friend personas or agents designed to feel like companions for children.
We will use machines to extend attention, to sharpen judgment.
We will remember what makes us more human, and forget what keeps us from becoming so.
We will ask for sources, welcome uncertainty, and choose silence when memory would cost our freedom.
This is the third way of memory: jurisdiction. The household is the first territory, and each AI-native child is the first citizen.
My work as a parent in this age is to teach what fluency cannot. Memory is a practice. Forgetting is grace. Cognitive sovereignty is not inherited; it is built, daily, through the small disciplines of verification and skepticism and the deliberate practice of letting go.
The machines will remember everything, but not entirely correctly. We are learning when to verify, when to intuit, and what to retain and what to reject.
Reference and Inspiration
Hoskins, A. (2024). AI and memory. Memory, Mind & Media, 3, e18, 1–21. https://doi.org/10.1017/mem.2024.16