
    Human vs Digital Brain – Part 2

    In Part 1, we took a tour of the human mind, exploring the seven fascinating rooms of human memory. But what happens when we try to build those same rooms inside a computer?

    In Part 2, we are going to leave the heavy textbooks behind. Instead, we will look at how scientists are trying to teach machines to remember like we do—and where the machines are still catching up.

    The Diary of a Robot (Episodic Memory)

    Imagine you are sitting on your porch, remembering your tenth birthday party. You can see the cake, smell the candles, and feel the excitement. This is Episodic Memory. It is the ability to mentally time-travel back to a specific moment in your life. It is your personal story. Evolution seems to have given us episodic memory for a practical reason: replaying specific moments lets us learn from them and imagine similar situations before they happen. But what about AI?

    What’s happening in AI?

    For a long time, Artificial Intelligence (AI) had amnesia. It lived only in the “now.” Once it finished a task, it forgot it ever happened.

    Today, researchers are giving AI a “diary.” One straightforward approach is non-parametric episodic memory. Instead of just reacting, the AI writes down what it sees, what it did, and what happened next into a digital logbook. 

    • The Robot Vacuum: If a robot bumps into a chair, it writes in its diary: “Hit object at coordinates X, Y. Bad idea.” Next time, it reads the diary and goes around.
    • The Money Bot: A financial AI remembers that you bought a house last year, so it won’t annoy you with ads for apartment rentals today.

    Maintaining and querying a growing memory store efficiently is a challenge. Should one use an append-only log or a graph database? Researchers are investigating memory management strategies (what to store, when to forget) and how to ensure relevant memories are retrieved at the right time.
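    To make the “diary” idea concrete, here is a tiny sketch of what a non-parametric episodic memory could look like: an append-only log of (observation, action, outcome) entries with a very naive keyword lookup. The class names and the scoring rule are my own illustrative choices, not any particular system’s API; real agents typically retrieve with vector embeddings or a database.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Episode:
    """One entry in the agent's 'diary': what it saw, what it did, what happened."""
    observation: str
    action: str
    outcome: str
    timestamp: datetime = field(default_factory=datetime.now)

class EpisodicLog:
    """Append-only episodic memory with a naive keyword lookup."""
    def __init__(self) -> None:
        self._episodes: list[Episode] = []

    def record(self, observation: str, action: str, outcome: str) -> None:
        self._episodes.append(Episode(observation, action, outcome))

    def recall(self, query: str, top_k: int = 3) -> list[Episode]:
        # Rank episodes by how many query words they mention. Real systems
        # would use vector embeddings or a graph database for this step.
        words = set(query.lower().split())
        def score(e: Episode) -> int:
            text = f"{e.observation} {e.action} {e.outcome}".lower()
            return len(words & set(text.split()))
        return sorted(self._episodes, key=score, reverse=True)[:top_k]

# The robot vacuum writes down a collision, then consults its diary later.
diary = EpisodicLog()
diary.record("obstacle at (3, 7)", "moved forward", "collision - bad idea")
diary.record("open floor at (5, 2)", "moved forward", "no problem")
for episode in diary.recall("obstacle at (3, 7)"):
    print(episode.action, "->", episode.outcome)
```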

    The World Library (Semantic Memory)

    If Episodic memory is your personal diary, Semantic Memory is the world’s library. It is knowing that Paris is in France, or that a “cat” is a fuzzy animal that meows. You don’t remember when you learned these facts; you just know them. It is the brain’s encyclopedia.

    What’s happening in AI?

    This is where AI shines. Computers are great at storing facts. AI uses two types of memory. Non-parametric memory is like a vast library—the AI looks up specific facts (e.g., product specs, medical details) as needed. Parametric memory is the AI’s internalized, learned knowledge, woven into its training and accessed intuitively, like an expert’s wisdom. Most modern AI systems combine both: using their inner understanding alongside a “living library” (like the web) for fresh details.

    • Baked-in Knowledge: Some AI models are like students who have memorized the entire library. They don’t need to look up “What is gravity?” because the answer is already woven into their learned weights (their “parameters”).
    • The Cheat Sheet (RAG): Other times, the AI doesn’t memorize the book. Instead, when you ask a question, it quickly runs to a digital library (like Wikipedia or a company database), reads the relevant page, and then answers you. This is called Retrieval-Augmented Generation. It’s like taking a test with an open textbook.
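    Below is a minimal sketch of that open-book approach. The tiny keyword retriever and the `generate_answer` placeholder are stand-ins for what a real RAG pipeline would do with embedding-based search and an actual language model call.

```python
# A toy library of passages; a real system would index thousands of documents.
LIBRARY = {
    "gravity": "Gravity is the force by which a planet or other body draws objects toward its center.",
    "paris": "Paris is the capital of France.",
}

def retrieve(question: str) -> str:
    """Pick the passage whose key appears in the question (real systems use embeddings)."""
    for key, passage in LIBRARY.items():
        if key in question.lower():
            return passage
    return ""

def generate_answer(question: str, context: str) -> str:
    # Placeholder for the language model call; here we just echo the retrieved context.
    return f"Based on what I looked up: {context}" if context else "I don't know."

print(generate_answer("What is gravity?", retrieve("What is gravity?")))
```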

      When the Diary Becomes the Library

      Here is a beautiful thing the human brain does: it turns stories into facts. Think of a child learning the word “dog.”

      1. The Story (Episodic): The child points to their family pet, Rusty. “Dog!”
      2. The Pattern: The child sees a Poodle at the park. “Dog!” Then, a Golden Retriever on TV. “Dog!”
      3. The Fact (Semantic): Eventually, the specific memory of Rusty fades, but the child now understands the concept of a “dog” (four legs, barks, tail).

        What’s happening in AI?

        Scientists are trying to teach computers to do this, too. They want the AI to look at its “diary” of specific events and summarize them into general rules. This way, the AI gets smarter the more it experiences the world, just as we do.
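        As a toy illustration of that consolidation step, the sketch below counts how often a feature shows up across diary entries and promotes the recurring ones into a general “semantic” entry. The threshold and the shape of the rule are illustrative assumptions; research systems use far more sophisticated summarization, often driven by a language model.

```python
from collections import Counter

# The "diary": specific sightings of the family pet and other dogs.
sightings = [
    {"animal": "dog", "features": ("four legs", "barks", "tail")},
    {"animal": "dog", "features": ("four legs", "barks", "tail")},
    {"animal": "dog", "features": ("four legs", "barks", "tail", "fetches")},
]

def consolidate(episodes, threshold=2):
    """Promote features seen at least `threshold` times into a general concept."""
    counts = Counter(f for e in episodes for f in e["features"])
    return {
        "concept": episodes[0]["animal"],
        "typical_features": [f for f, n in counts.items() if n >= threshold],
    }

print(consolidate(sightings))
# {'concept': 'dog', 'typical_features': ['four legs', 'barks', 'tail']}
```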

        The Muscle Memory (Procedural Memory)

        Do you have to think about how to tie your shoes? Do you recite instructions when you ride a bike? No. Your body just knows what to do. This is Procedural Memory. It is the “know-how” hidden in your muscles.

        What’s happening in AI?

        In the AI world, this is often called Reinforcement Learning. Imagine training a dog with treats.

        • If a robot arm correctly picks up a box, it gets a “digital treat” (a positive score).
        • If it drops the box, it gets a “penalty.”

        Over thousands of tries, the robot learns the perfect muscle movement to pick up the box without thinking. The problem? AI is not very flexible. A robot that learns to pick up a box might be totally confused if you ask it to pick up a slipper. Humans can adapt; robots currently struggle to change their habits.
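        The feedback loop itself is simple enough to sketch. The toy example below uses the most basic form of reward-driven learning (a bandit with an epsilon-greedy rule): the robot tries different grips, collects digital treats and penalties, and settles on whatever works. The grip options and success probabilities are invented purely for illustration; real robot learning uses far richer state and algorithms.

```python
import random

ACTIONS = ["light grip", "medium grip", "firm grip"]
values = {a: 0.0 for a in ACTIONS}   # estimated value of each action
counts = {a: 0 for a in ACTIONS}

def try_pickup(action: str) -> float:
    """Toy environment: medium grip usually succeeds, the others usually fail."""
    success_prob = {"light grip": 0.2, "medium grip": 0.9, "firm grip": 0.4}[action]
    return 1.0 if random.random() < success_prob else -1.0  # treat or penalty

for _ in range(500):
    # Epsilon-greedy: mostly exploit the best-known grip, sometimes explore.
    action = random.choice(ACTIONS) if random.random() < 0.1 else max(values, key=values.get)
    reward = try_pickup(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running average

print(max(values, key=values.get))  # almost always "medium grip"
```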

        Episodic memory (recollecting specific moments) can also boost AI skills. In a line of research called “episodic curiosity,” the agent stores “snapshots” of past experiences and rewards itself for reaching situations that do not resemble anything already in that memory. This pushes it toward surprising or useful situations, much as people learn from memorable mistakes.
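        Here is a rough sketch of that idea, assuming observations can be compared with a simple distance measure: anything far from everything already in memory counts as surprising and earns a bonus. Published methods learn this comparison rather than hard-coding it, so treat the distance and threshold here as placeholders.

```python
import math

memory: list[tuple[float, float]] = []   # stored observation "snapshots"

def curiosity_bonus(obs: tuple[float, float], threshold: float = 1.0) -> float:
    """Return an intrinsic reward when `obs` is unlike anything in memory."""
    if not memory:
        memory.append(obs)
        return 1.0
    nearest = min(math.dist(obs, m) for m in memory)
    if nearest > threshold:          # nothing in memory looks like this
        memory.append(obs)           # remember the surprising moment
        return 1.0                   # reward the agent for finding it
    return 0.0

print(curiosity_bonus((0.0, 0.0)))   # 1.0 - the first observation is always novel
print(curiosity_bonus((0.1, 0.2)))   # 0.0 - too close to what we've already seen
print(curiosity_bonus((5.0, 5.0)))   # 1.0 - genuinely new territory
```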

        While procedural memory enables AI to achieve superhuman feats in tasks such as game-playing and robotics, its flexibility is limited. To advance, researchers are pursuing meta-learning: teaching AI not just skills, but the human-like ability to quickly learn new skills.

        The Workbench (Working Memory)

        Working Memory is the sticky note of the brain. It is where you hold information for just a few seconds while you use it. Like when you carry a phone number in your head just long enough to dial it, or when you remember the beginning of this sentence so you can understand the end. Alan Baddeley’s influential model subdivides working memory into components: the phonological loop (for verbal information), the visuospatial sketchpad (for visual/spatial information), a central executive that directs attention, and an episodic buffer that links working memory with long-term memory.

        What’s happening in AI?

        AI uses something called a “Context Window.” Imagine the AI can only see a certain amount of text at one time.

        • The Scratchpad: As the AI talks to you, it keeps the recent conversation on its “workbench.”
        • The Limit: Just like you can’t hold 50 phone numbers in your head at once, the AI has a limit. If the conversation gets too long, the beginning falls off the workbench and is forgotten.

        Researchers are working hard to make this workbench bigger so the AI can “hold” entire books in its mind while working on a problem.
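        A simple way to picture the workbench is a budget that the newest parts of the conversation fill first, with the oldest turns falling off when the budget runs out. The sketch below counts words instead of real model tokens, which is an approximation purely for illustration.

```python
MAX_TOKENS = 50   # illustrative budget; real context windows are measured in model tokens

def fit_to_window(turns: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    """Keep the most recent turns whose combined word count fits the budget."""
    kept, used = [], 0
    for turn in reversed(turns):           # newest first
        cost = len(turn.split())
        if used + cost > max_tokens:
            break                          # the beginning falls off the workbench
        kept.append(turn)
        used += cost
    return list(reversed(kept))            # restore chronological order

conversation = [f"Turn {i}: " + "blah " * 10 for i in range(20)]
print(fit_to_window(conversation))         # only the last few turns survive
```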

        Conclusion: The Symphony of Thought

        The human mind is like an orchestra. The Diary (Episodic), the Library (Semantic), the Muscles (Procedural), and the Workbench (Working Memory) all play together in perfect harmony. You don’t notice them working separately; you just experience “thinking.”

        So, how do we compare?

        Feature  | The Human Way                                                                                     | The AI Way
        Learning | We are fluid. We use old skills to solve new, strange problems easily.                           | AI is often rigid. It is great at specific tasks (like Chess) but clumsy at new ones.
        Storage  | Our memories fade and change. We summarize stories into wisdom.                                  | AI can store exact logs perfectly, but struggles to know what is important to keep.
        Capacity | Our working memory is small (we forget phone numbers easily), but our long-term storage is vast. | AI working memory is growing rapidly (reading entire books at once), but it doesn’t have a “life story” yet.

        We have built the instruments, but we are still learning how to conduct the orchestra.

        Now that we understand the pieces, how do we put them together to create a machine that truly thinks? Join me in Part 3 as I unveil a new framework that mimics the ultimate architecture: the human brain itself.

