LangGraph and CoreCast: A Complete Agent State Example
By Marcus Chen • March 3, 2026 • 11 min read
LangGraph is excellent at what it does: managing the state and execution flow of complex agent workflows. It handles the graph of nodes and edges, the conditional branching, the checkpointing of execution state, and the orchestration of multi-step agentic processes. What it doesn't handle — by design — is persistent cross-session memory. LangGraph's state lives for the duration of a graph run. CoreCast's memory persists across sessions, users, and time. The two solve complementary problems, and together they form a complete agent state architecture that most production agents actually need.
What LangGraph State Is and Is Not
In LangGraph, every graph run has a state object — a typed data structure that nodes can read from and write to as the run progresses. The state captures the in-progress execution: the messages exchanged, the tool results accumulated, the intermediate decisions made. When checkpointing is enabled, LangGraph can persist this state to allow runs to be resumed after interruption — human-in-the-loop flows, asynchronous processing, and fault-tolerant execution all depend on this capability.
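The state-and-node relationship can be sketched in a few lines of plain Python. This is an illustrative reduction, not LangGraph's actual API surface — in a real graph this TypedDict would be passed to `StateGraph` — but the core idea holds: nodes are functions that read state and return partial updates.

```python
from typing import TypedDict

# Illustrative run state; the field names are hypothetical.
class RunState(TypedDict):
    messages: list[str]       # messages exchanged during this run
    tool_results: list[dict]  # tool outputs accumulated so far
    next_step: str            # an intermediate routing decision

# A node is just a function: it reads state and returns a partial update.
def plan_node(state: RunState) -> dict:
    last = state["messages"][-1]
    return {"next_step": "retrieve" if "find" in last else "synthesize"}

state: RunState = {"messages": ["find papers on X"], "tool_results": [], "next_step": ""}
state.update(plan_node(state))  # the graph runtime does this merge for you
```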
What LangGraph checkpoints are not is long-term memory. They're execution state snapshots — they capture where a specific run was at a specific point in time. They're not designed to be queried semantically, to span multiple runs across different users, or to serve as the foundation for cross-session context retrieval. LangGraph's checkpoint stores are also typically in-memory or lightly persisted — not built for the query patterns, retention policies, and isolation requirements of a production memory system.
This is not a criticism of LangGraph — it's a description of what it's for. The boundary is clean and intentional. LangGraph owns the execution graph and intra-run state. Something else owns the inter-session, cross-user memory layer. CoreCast is designed to be that something else.
The Integration Architecture
The integration pattern has two main interaction points: memory retrieval at run start, and memory writes at run end. At the start of a LangGraph run, a setup node retrieves relevant memories from CoreCast using the current user and context as query parameters. The retrieved memories are injected into the LangGraph state, where they're available to all downstream nodes in the graph — just like any other state field. At the end of a successful run, a finalization node writes the important outcomes from the run back to CoreCast as new memories: what was decided, what actions were taken, what the user expressed.
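The two interaction points can be sketched as a pair of node functions. The CoreCast client below is a stub — the method names (`search`, `write`) are assumptions for illustration, not documented SDK calls — and the state fields are hypothetical.

```python
from typing import TypedDict

# Stub standing in for a hypothetical CoreCast client.
class FakeCoreCast:
    def __init__(self):
        self.store = []

    def search(self, user_id, query, limit=5):
        # Real semantic search is assumed; here we just filter by user.
        return [m for m in self.store if m["user_id"] == user_id][:limit]

    def write(self, user_id, text, tags=()):
        self.store.append({"user_id": user_id, "text": text, "tags": list(tags)})

corecast = FakeCoreCast()

class GraphState(TypedDict):
    user_id: str
    messages: list[str]
    memories: list[dict]  # populated at run start
    outcome: str          # produced by the run

def setup_node(state: GraphState) -> dict:
    # Run start: pull relevant memories into the graph state.
    hits = corecast.search(state["user_id"], query=state["messages"][-1])
    return {"memories": hits}

def finalize_node(state: GraphState) -> dict:
    # Run end: persist the important outcome as a new memory.
    corecast.write(state["user_id"], state["outcome"], tags=["session-outcome"])
    return {}
```

In a real graph, `setup_node` would be wired as the entry node and `finalize_node` as the last node before the run ends, so every downstream node sees `state["memories"]`.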
Mid-run memory operations are also possible. Tool nodes that make significant decisions can write memories as they execute, without waiting for the run to complete. This is useful for long-running graphs where a record of important decisions should survive a run failure — the memory is written as a side effect of execution, not only at successful completion.
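A mid-run write might look like this sketch — the memory client is a stub standing in for CoreCast, and the node name, decision text, and `write` signature are all hypothetical:

```python
# Stub standing in for a hypothetical CoreCast client.
class MemoryStub:
    def __init__(self):
        self.records = []

    def write(self, user_id, text, tags=()):
        self.records.append((user_id, text, tuple(tags)))

memory = MemoryStub()

def source_check_node(state: dict) -> dict:
    # A significant mid-run decision worth persisting immediately.
    decision = "Marked source X as unreliable; excluded from synthesis"
    # Side-effect write: this survives even if a later node fails.
    memory.write(state["user_id"], decision, tags=["decision"])
    return {"tool_results": state["tool_results"] + [decision]}

out = source_check_node({"user_id": "u1", "tool_results": []})
```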
The state schema in a LangGraph + CoreCast architecture typically has two categories of fields. There are execution fields — the fields that LangGraph manages natively, like message history, tool outputs, and branching decisions. And there are memory fields — the fields populated from CoreCast at run start, which provide the long-term context the agent needs to reason well. The separation keeps the state schema clean and makes clear which part of the state is ephemeral and which is retrieved from persistence.
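One way to express that separation in a schema — all field names here are illustrative, not a prescribed layout:

```python
from typing import TypedDict

class AgentState(TypedDict):
    # Execution fields: ephemeral, owned by LangGraph for this run.
    messages: list[str]
    tool_outputs: list[dict]
    route: str
    # Memory fields: retrieved from CoreCast at run start.
    user_memories: list[dict]
    research_thread: str

# A fresh run starts with empty execution fields; the setup node
# fills the memory fields before any other node executes.
fresh: AgentState = {
    "messages": [],
    "tool_outputs": [],
    "route": "",
    "user_memories": [],
    "research_thread": "",
}
```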
A Concrete Example: Research Agent
Consider a research agent built on LangGraph — the kind that helps users investigate topics across multiple sessions, building a body of knowledge over time. The graph has nodes for query planning, document retrieval, synthesis, and response generation. Each run handles one user request.
Without persistent memory, each run starts cold. The user has to re-orient the agent: "I'm researching X, I've already covered Y, focus on Z." With CoreCast integrated, the setup node retrieves what the agent already knows about this user's research thread — the topics covered, the sources consulted, the conclusions reached, the open questions that were deferred. This context is injected into the state as a memory field, and the query planning node uses it to design a search that builds on previous work rather than repeating it.
At the end of the run, the finalization node extracts the key findings — new topics explored, conclusions reached, sources that were high-quality — and writes them back to CoreCast as memories tagged with the research thread identifier. The next run picks these up automatically. Over multiple sessions, the agent builds a rich, queryable body of knowledge about the user's research area — without the user having to maintain or re-present that context manually.
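The research agent's finalization step might look like this sketch, with a stub standing in for the CoreCast write API and a hypothetical thread-tagging scheme:

```python
# Stub standing in for a hypothetical CoreCast client.
class MemoryStub:
    def __init__(self):
        self.records = []

    def write(self, user_id, text, tags):
        self.records.append({"user_id": user_id, "text": text, "tags": tags})

memory = MemoryStub()

def finalize_research(state: dict) -> dict:
    thread = state["thread_id"]
    for finding in state["key_findings"]:
        # Tag each memory with the thread id so the next run's setup
        # node can retrieve this thread's accumulated context.
        memory.write(state["user_id"], finding, tags=["research", thread])
    return {}

finalize_research({
    "user_id": "u1",
    "thread_id": "quantum-error-correction",
    "key_findings": ["Surface codes dominate the recent literature"],
})
```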
Handling State That Should Not Be Memorized
Not everything in a LangGraph run should be written to persistent memory. Intermediate tool call results, debug outputs, retry states, and procedural messages are all useful within a run but don't contribute to long-term agent intelligence. Writing all of this to CoreCast would pollute the memory store with low-signal noise.
The finalization node should apply a filter — extracting only the semantically important outcomes from the run and discarding the procedural scaffolding. CoreCast's memory extraction capability can assist with this, but the primary filter should be in the LangGraph finalization logic, where the developer knows exactly which state fields contain meaningful outcomes.
A good heuristic: if the information would be useful to a future agent run for the same user, it's worth storing. If it's only relevant to the execution of the current run, it belongs in LangGraph's ephemeral state, not in CoreCast's persistent store. This boundary keeps both systems clean and ensures retrieval quality stays high over time.
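That heuristic translates into a simple filter in the finalization logic. The `kind` taxonomy below is an illustrative assumption, not a CoreCast schema — the point is that the developer, not the memory layer, decides what crosses the boundary:

```python
# Categories worth persisting: useful to a future run for this user.
DURABLE_KINDS = {"decision", "finding", "preference"}

def extract_memorable(run_items: list[dict]) -> list[dict]:
    """Keep durable outcomes, drop procedural scaffolding."""
    return [item for item in run_items if item["kind"] in DURABLE_KINDS]

items = [
    {"kind": "retry", "text": "search API timed out, retried twice"},   # procedural: dropped
    {"kind": "finding", "text": "User's dataset is licensed CC-BY"},    # durable: kept
    {"kind": "decision", "text": "Chose source X as primary reference"} # durable: kept
]
```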
Why This Combination Works
The fundamental reason LangGraph and CoreCast work well together is that they're solving problems at different time horizons. LangGraph operates over the millisecond-to-minute timescale of a single execution run. CoreCast operates over the hour-to-year timescale of a user relationship. Agents that need to be effective across both timescales — which is most production agents — need both layers working together.
Teams building serious agent products with LangGraph will hit the cross-session memory problem before they expect to. The integration is not complex — the setup node and finalization node pattern can be implemented in an afternoon. The value compounds quickly: by the second or third session with memory enabled, the agent's responses are measurably better informed. That's the compounding return on getting the architecture right from the start.
CoreCast's native LangGraph integration adds persistent memory to your agent graph in a few lines. Start with the free tier.