Core Concepts
Llongterm is built around three core concepts: Minds, Memory, and Middleware. Together, they provide a persistent and contextually rich experience for your AI, allowing it to remember, structure, and augment conversations intelligently.
Minds
A "Mind" in llongterm is a persistent entity that accepts user information and stores it as "memory." Minds are the central data holders that evolve with user input, allowing you to build consistent and intelligent conversational experiences.
Accept Information: Minds can accept pieces of information users provide, such as preferences, opinions, or ongoing discussions.
Structure Memory: Once information is received, the mind structures it into memory—a compact and meaningful representation that can be reused in future interactions.
A distinct Mind is created for each user, enabling personalized experiences and providing the basis for long-term consistency across interactions.
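The per-user Mind pattern can be sketched as follows. This is an illustrative model only, not the real llongterm SDK: the `Mind` class, `remember`, `recall`, and `mindFor` names are assumptions introduced here to show the idea of one persistent, evolving Mind per user.

```typescript
// Hypothetical sketch: a Mind accepts user information and structures
// it into memory entries. The real llongterm API may differ.
type MemoryEntry = { topic: string; detail: string };

class Mind {
  private memory: MemoryEntry[] = [];
  constructor(public readonly userId: string) {}

  // Accept a piece of user information and store it as memory.
  remember(topic: string, detail: string): void {
    this.memory.push({ topic, detail });
  }

  // Return a copy of everything this Mind has retained.
  recall(): MemoryEntry[] {
    return [...this.memory];
  }
}

// One distinct Mind per user keeps each user's long-term context separate.
const minds = new Map<string, Mind>();
function mindFor(userId: string): Mind {
  if (!minds.has(userId)) minds.set(userId, new Mind(userId));
  return minds.get(userId)!;
}

mindFor("alice").remember("preferences", "prefers dark mode");
mindFor("bob").remember("projects", "building a chess engine");
```

Because `mindFor` always returns the same Mind for the same user, information remembered in one exchange is still available in the next.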
Memory
Memory is the core data structure within llongterm. It is where all user-provided information is stored, compactly organized to provide efficient access to relevant data.
Compact Storage: Memory in llongterm is designed to store all relevant user details in a streamlined format, allowing quick and easy retrieval of necessary information.
Rich Representation: By structuring incoming data into a unified memory model, llongterm makes it possible for the AI to recall both granular details and broader context.
Persistent Context: Memory allows long-term context to persist across different sessions, enabling the AI to hold coherent, ongoing conversations with users, rather than starting from scratch each time.
For examples of stored memory, see the specification.
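As a rough illustration of compact, structured, persistent memory, consider the shape below. The exact schema is defined in the llongterm specification; every field name here (`summary`, `knowledge`, `preferences`, `ongoingTopics`) is an assumption chosen for the example, not the real format.

```typescript
// Illustrative memory shape only, not the real llongterm schema:
// a compact summary plus structured detail, serializable so that
// context persists across sessions.
const memory = {
  summary: "Alice is a TypeScript developer who prefers concise answers.",
  knowledge: {
    preferences: ["dark mode", "concise answers"],
    ongoingTopics: ["migrating a service to TypeScript"],
  },
};

// Persistent context: serialize at the end of one session and
// restore at the start of the next instead of starting from scratch.
const serialized = JSON.stringify(memory);
const restored = JSON.parse(serialized);
```

The same structure supports both granular recall (a single preference) and broader context (the summary line).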
Llongterm as Middleware
Llongterm also acts as a middleware layer between the user and your LLM (Large Language Model). When new information is remembered, llongterm folds it into the existing memory and returns an enriched system message.
Augmentation of Memory: Each time new information is remembered, llongterm enriches the memory context by associating it with past interactions and insights. This enriched memory is then used to augment ongoing interactions.
Returning Enriched Context: When you call the remember function, llongterm not only stores the new data but also generates a system message. This message contains all relevant context extracted from memory, ready to be used in subsequent LLM queries.
Seamless Integration: The system message is passed back to the developer, who can then forward it to the LLM. This middleware approach ensures that the LLM receives all the context it needs to generate informed responses, leading to richer and more coherent user interactions.
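The middleware flow above can be sketched as a minimal example. The stand-in `remember` function and the shape of its result (`{ systemMessage }`) are assumptions made for illustration; consult the llongterm API reference for the real call and return type.

```typescript
// Hypothetical stand-in for llongterm's remember call: store the new
// information and return a system message carrying accumulated context.
type RememberResult = { systemMessage: string };

const stored: string[] = [];
function remember(info: string): RememberResult {
  stored.push(info);
  return {
    systemMessage: `Known about this user: ${stored.join("; ")}`,
  };
}

// 1. Remember new information; receive an enriched system message back.
const { systemMessage } = remember("User is allergic to peanuts");

// 2. Forward the system message to the LLM alongside the user's query,
//    so the model answers with full context.
const llmRequest = {
  messages: [
    { role: "system", content: systemMessage },
    { role: "user", content: "Suggest a dessert recipe." },
  ],
};
```

The developer stays in control of the LLM call itself; llongterm only supplies the context-bearing system message.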
Human Readable
Llongterm's memory structure is designed to be human readable, enhancing user trust and interaction with the system.
Ease of Understanding: The data format is intuitive, ensuring that both developers and users can easily interpret stored information.
Clear Debugging: With a transparent memory view, developers can quickly identify issues and understand the memory's impact on interactions.
Improved Transparency: Users gain insights into how their data is used, fostering a sense of transparency and control over their information.
User-Friendly Documentation: The human-readable format facilitates easier documentation, sharing, and collaboration among teams.