LangChain: Architecture, LCEL, Agents, LangGraph, Retrieval, and Production Patterns

4 Jul 2023

By Alex, in AI

LangChain is no longer best understood as a grab bag of prompt helpers, legacy chains, and one-off agent wrappers. In the current Python ecosystem, LangChain is the high-level framework for building agentic applications on top of a shared runtime, while LangGraph is the lower-level orchestration layer for stateful workflows that need persistence, branching, interrupts, and precise control.

What LangChain Is Now

The old mental model of LangChain centered on LLMChain, prompt templates, memory objects, and a long catalog of agent types. That model still explains a large amount of older code on GitHub, but it does not describe the modern stack very well.

Today the center of gravity is different:

  • langchain provides the high-level developer experience for models, tools, structured output, agents, middleware, retrieval composition, streaming, and runtime configuration.
  • langgraph provides the durable runtime for stateful agent execution. It is the layer you use when an application has real workflow structure instead of a single request-response loop.
  • The common abstractions live in langchain-core: messages, documents, prompts, tools, runnables, output parsers, callbacks, and other shared interfaces.

The practical consequence is simple. If an application is a straightforward tool-using agent, start with LangChain. If it needs long-lived state, resumability, branching control flow, human approval, or multi-agent coordination, drop to LangGraph while keeping the same model, message, tool, and runnable abstractions.

Package Layout

The package split matters because most outdated tutorials assume everything lives under one import tree. Modern LangChain is intentionally modular.

  • langchain: high-level Python framework for agents, models, tools, middleware, structured output, streaming, and application composition. Install for almost every new LangChain project.
  • langchain-core: shared interfaces and primitives such as messages, documents, prompts, tools, and runnables. Usually comes in as a dependency rather than a package you install directly.
  • langgraph: state graphs, checkpoints, persistence, interrupts, human-in-the-loop flows, and durable execution. Install when the workflow has state beyond a single agent call.
  • langchain-openai, langchain-anthropic, and similar provider packages: provider-specific chat models, embeddings, and integration code. Install only the providers you actually use.
  • langchain-community: community-maintained integrations such as loaders, vector stores, and third-party utilities. Install when you need community loaders or storage integrations.
  • langchain-text-splitters: text splitting utilities separated into their own package. Install for ingestion and chunking pipelines.
  • langchain-classic: compatibility package for many legacy chains, retrievers, and v0-style APIs. Install only when migrating or maintaining older code.
Installation

Install the framework, the orchestration runtime, and only the integrations you need. The old langchain[all] habit is obsolete.

Shell
pip install -U langchain langgraph langchain-openai
 
# Common optional packages
pip install -U langchain-community langchain-text-splitters faiss-cpu
 
# Only when you are migrating old tutorials or old production code
pip install -U langchain-classic
Core Abstractions
Models and Messages

Modern LangChain treats chat models as the default interface. A chat model receives a list of messages and returns an AI message that may contain plain text, tool calls, or provider-specific metadata. The framework normalizes enough of this structure that application code can stay stable while the provider changes.

The most important message kinds are:

  • system for developer instructions and global behavior.
  • human for user input.
  • ai for model output.
  • tool for tool execution results returned to the model.

Recent LangChain versions also standardize message content through typed content blocks. That matters for multimodal input, citations, reasoning traces exposed by some providers, and tool call metadata. A modern agent loop is therefore message-centric, not string-centric.

Python
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
 
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You write concise technical summaries."),
        ("human", "Summarize {topic} in three sentences."),
    ]
)
 
model = init_chat_model("openai:gpt-4.1")
chain = prompt | model | StrOutputParser()
 
print(chain.invoke({"topic": "LangChain middleware"}))
Runnables and LCEL

The Runnable interface is still one of the most important ideas in LangChain, even though modern marketing material talks more about agents. Models, prompts, retrievers, output parsers, and many custom components all implement the same operational shape. That gives you a common set of execution patterns:

  • invoke and ainvoke for single synchronous or asynchronous calls.
  • batch and abatch for parallel request execution.
  • stream and event streaming APIs for incremental output.
  • Pipe composition with | for sequential flows.
  • Dictionary composition for fan-out and fan-in patterns.

This composition model is often called LCEL, the LangChain Expression Language. Older tutorials treated chains as distinct classes. Current LangChain treats most application logic as composition of runnables.

Python
from operator import itemgetter
 
from langchain.chat_models import init_chat_model
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
 
prompt = ChatPromptTemplate.from_template(
    "Answer strictly from the supplied context.\n\n"
    "Context:\n{context}\n\n"
    "Question:\n{question}"
)
 
model = init_chat_model("openai:gpt-4.1")
 
simple_chain = (
    {
        "context": itemgetter("context"),
        "question": itemgetter("question"),
    }
    | prompt
    | model
    | StrOutputParser()
)
 
print(
    simple_chain.invoke(
        {
            "context": "LangGraph adds persistence, interrupts, and stateful graphs.",
            "question": "What does LangGraph add?",
        }
    )
)
Structured Output

Free-form text is convenient for demos and annoying in production. Modern LangChain therefore makes structured output a first-class workflow instead of an afterthought built from regexes and brittle parsers.

There are two common approaches:

  • Call with_structured_output on a chat model when you want model output parsed directly into a schema.
  • Pass response_format to create_agent when you want the final agent result in a validated structure.

LangChain will use provider-native structured output when a model supports it. Otherwise it can fall back to tool-calling-based strategies. That abstraction removes a large amount of provider-specific branching from application code.

Python
from pydantic import BaseModel, Field
 
from langchain.chat_models import init_chat_model
 
 
class ExtractedFact(BaseModel):
    subject: str = Field(description="Entity or concept being discussed")
    relation: str = Field(description="Relationship or claim")
    value: str = Field(description="Object or fact value")
 
 
model = init_chat_model("openai:gpt-4.1")
structured_model = model.with_structured_output(ExtractedFact)
 
print(
    structured_model.invoke(
        "LangGraph powers the runtime beneath LangChain create_agent."
    )
)
Tools

Tools are still the bridge between the model and the external world, but the modern tool story is more standardized than the old agent-tool abstractions suggested. A tool is simply a function with a name, a description, and a schema the model can call.

In practice, tools fall into three categories:

  • Pure computation tools such as math, formatting, or local business rules.
  • Data access tools such as search, SQL, vector retrieval, and API lookups.
  • Action tools such as ticket creation, email sending, code execution, or deployment operations.

Good tools are narrow. They do one thing, return stable output, validate input strictly, and hide messy implementation details from the model.

Python
from langchain.tools import tool
 
 
@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"{city}: 18 C, light rain"
Agents in LangChain v1

The modern entry point is create_agent. It replaces most of the old discussion around agent classes, planner types, and specialized enums such as AgentType.ZERO_SHOT_REACT_DESCRIPTION.

A current LangChain agent is built from four ingredients:

  • A chat model.
  • A set of tools.
  • Optional middleware that intercepts and modifies execution.
  • Optional response schema, state persistence, and runtime configuration.

The key design change is that the high-level agent API now sits on top of LangGraph. That means the friendly LangChain entry point can still participate in persistence, streaming, memory management, and other stateful runtime features.

Python
from pydantic import BaseModel, Field
 
from langchain.agents import create_agent
from langchain.tools import tool
 
 
@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"{city}: 18 C, light rain"
 
 
class WeatherReport(BaseModel):
    city: str = Field(description="Resolved city name")
    condition: str = Field(description="Weather condition")
    recommendation: str = Field(description="Practical advice for the user")
 
 
agent = create_agent(
    model="openai:gpt-4.1",
    tools=[get_weather],
    response_format=WeatherReport,
)
 
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Should I bring an umbrella in Seattle?",
            }
        ]
    }
)
 
print(result["structured_response"])
Middleware

Middleware is one of the biggest architectural improvements in LangChain v1. Instead of forcing every behavior into prompts or custom wrappers, middleware lets you intervene at clear points in the model and tool execution loop.

Typical middleware responsibilities include:

  • Summarizing old conversation history when token usage grows.
  • Enforcing human approval before selected tool calls.
  • Switching models dynamically based on cost, latency, or risk.
  • Adding retries, fallback behavior, or call limits.
  • Injecting guardrails, context edits, or compliance checks.

This makes modern LangChain agents far easier to reason about than the old approach of stacking callbacks, prompt hacks, and custom chain subclasses.
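The real middleware API lives in the langchain package; the following is a deliberately simplified pure-Python sketch of the concept, not the actual interface. It shows the shape of the idea: a hook that intercepts every model call inside the agent loop.

```python
# Conceptual sketch only -- not the real LangChain middleware API.

class CallLimitMiddleware:
    """Abort the loop after a fixed number of model calls."""

    def __init__(self, max_calls: int):
        self.max_calls = max_calls
        self.calls = 0

    def before_model(self, messages):
        self.calls += 1
        if self.calls > self.max_calls:
            raise RuntimeError("model call budget exhausted")
        return messages


def run_agent_loop(model_call, middlewares, messages):
    """Tiny agent loop: run hooks, call the model, stop when no tool calls."""
    while True:
        for mw in middlewares:
            messages = mw.before_model(messages)
        reply = model_call(messages)
        messages = messages + [reply]
        if not reply.get("tool_calls"):
            return messages


# A fake model that wants to call a tool twice, then answers.
script = iter([
    {"role": "ai", "tool_calls": ["get_weather"]},
    {"role": "ai", "tool_calls": ["get_weather"]},
    {"role": "ai", "tool_calls": [], "content": "done"},
])

result = run_agent_loop(
    model_call=lambda msgs: next(script),
    middlewares=[CallLimitMiddleware(max_calls=5)],
    messages=[{"role": "human", "content": "weather?"}],
)
print(result[-1]["content"])  # done
```

The point of the pattern is that budget limits, approval gates, and context edits live in one explicit layer instead of being smeared across prompts and wrappers.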

Why LangGraph Exists

LangChain is comfortable when the application still looks like an agent loop. LangGraph exists for the point where that abstraction stops being enough.

Use LangGraph when the workflow has explicit state transitions, durable checkpoints, interrupts, or multiple coordinated actors. Common examples include:

  • A research agent that plans, searches, extracts, verifies, and writes through distinct stages.
  • An approval flow where a human reviews a tool call before execution.
  • A multi-agent system where one agent delegates to specialists and merges results.
  • A long-running pipeline that must survive process restarts or resume after failure.

LangGraph models these systems as graphs over shared state. Nodes read and update the state. Edges control where execution goes next. Checkpointers persist thread-scoped state so a run can pause and resume instead of restarting from the beginning.

Short-Term Memory

Short-term memory in the modern stack is not an isolated memory object attached to a chain. It is part of the graph state for a thread. That state normally includes message history, retrieved context, tool outputs, uploaded files, and any other values the workflow needs to carry forward.

This shift matters because memory is now tied to execution semantics. A step can read state, update state, and persist the updated snapshot through a checkpointer. That is a cleaner model than the old pattern of mutating a ConversationBufferMemory object on the side.

Long-Term Memory

Long-term memory is different. It stores information across threads and sessions: user preferences, durable facts, profile data, recurring tasks, or learned application-specific knowledge. LangGraph treats this as stored data with namespaces rather than an ever-growing transcript.

Thread state and durable application storage solve different problems. Thread state holds the execution context for one conversation. Durable storage holds facts and preferences that should survive across conversations. Mixing the two is one of the fastest ways to build a confused agent.

Retrieval and RAG

Retrieval remains the standard way to give a model access to external knowledge without fine-tuning. The core pipeline has not changed, but the framing is clearer than in older LangChain material:

  • Load documents from files, web pages, APIs, or SaaS systems.
  • Split them into chunks that preserve semantic coherence.
  • Embed the chunks into vectors.
  • Store the vectors in a vector database or local vector index.
  • Expose retrieval as a retriever that returns relevant documents for a query.
  • Compose that retriever into a runnable chain or graph.

A vector store is storage plus similarity search. A retriever is the application-facing abstraction. Every vector store can usually be wrapped as a retriever, but not every retriever needs to be backed by a vector store. That distinction is worth keeping straight.

Python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
 
loader = WebBaseLoader(
    web_paths=("https://docs.langchain.com/oss/python/langchain/overview",)
)
docs = loader.load()
 
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
)
chunks = splitter.split_documents(docs)
 
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 4},
)
 
print(retriever.invoke("What does LangGraph add beyond create_agent?"))
Composable RAG

The cleanest modern RAG pattern is to treat retrieval as just another runnable. That keeps the application modular and avoids a large amount of v0-style chain scaffolding.

Python
from operator import itemgetter
 
from langchain.chat_models import init_chat_model
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
 
prompt = ChatPromptTemplate.from_template(
    "Answer from the supplied context only.\n\n"
    "Context:\n{context}\n\n"
    "Question:\n{question}"
)
 
# Reuses the `retriever` built in the earlier FAISS example.
rag_chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
    }
    | prompt
    | init_chat_model("openai:gpt-4.1")
    | StrOutputParser()
)
 
print(rag_chain.invoke({"question": "What does LangGraph add?"}))
Advanced Retrieval Patterns

Production retrieval rarely stops at plain top-k similarity search. Useful advanced patterns include:

  • Hybrid retrieval that mixes lexical and vector search.
  • Maximum marginal relevance to reduce redundant chunks.
  • Reranking with a second model after an initial broad recall step.
  • Context compression to strip irrelevant text from otherwise relevant documents.
  • Parent-child or hierarchical retrieval when the indexed chunk size is smaller than the answering context you want to show.
  • History-aware retrieval when the user asks follow-up questions that depend on prior turns.

Many of the v0 convenience classes for retrieval still exist, but several live in langchain-classic. The durable design principle in v1 is to keep retrieval as a clean component inside a runnable pipeline or a LangGraph workflow instead of treating RAG as a monolithic chain type.
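To make the maximum marginal relevance idea concrete, here is a pure-Python sketch of the greedy selection rule. It illustrates the scoring logic only, not the library implementation, and uses toy similarity vectors:

```python
def mmr(query_vec, doc_vecs, k=2, lambda_mult=0.3):
    """Greedy maximum marginal relevance over similarity vectors.

    Each step picks the document that balances relevance to the query
    against similarity to the documents already selected.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = dot(query_vec, doc_vecs[i])
            redundancy = max(
                (dot(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected


# Docs 0 and 1 are near-duplicates; doc 2 is less relevant but distinct.
docs = [(1.0, 0.0), (0.98, 0.2), (0.3, 0.95)]
print(mmr((1.0, 0.2), docs, k=2))  # [1, 2] -- the near-duplicate is skipped
```

With a diversity-heavy lambda, the second pick goes to the distinct document even though the near-duplicate scores higher on raw relevance, which is exactly the redundancy reduction described above.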

Streaming, Async Execution, and Observability

Modern LangChain is built for interactive systems, so streaming is a first-class feature rather than a callback hack. There are three distinct things you may want to stream:

  • Model tokens for chat-like latency.
  • Agent progress events such as tool calls and intermediate reasoning steps.
  • Custom application updates emitted during a workflow.

The same runtime also supports asynchronous execution and request batching. That matters when an application fans out retrieval calls, runs multiple model invocations in parallel, or serves many concurrent users.

Observability is the other half of production readiness. Without traces, state snapshots, tool call inspection, and prompt/version tracking, agent debugging becomes guesswork. LangSmith is the standard companion product for tracing and evaluating LangChain and LangGraph applications.

MCP and External Context

Model Context Protocol (MCP) is now part of the LangChain story. MCP standardizes how tools and resources are exposed by external servers, which makes it easier to connect agents to editor state, local resources, internal APIs, and hosted services through a common protocol instead of a one-off adapter for each target.

In practice, MCP does not replace ordinary LangChain tools. It expands the set of systems that can be surfaced as tools and context sources. When an environment already exposes an MCP server, LangChain can consume it instead of reimplementing the integration manually.

Production Patterns

Most real failures in LangChain systems are not caused by the model wrapper. They come from weak application boundaries. A production-ready design usually follows a few rules:

  • Keep tools narrow and deterministic where possible.
  • Separate thread state from durable memory.
  • Prefer structured output over free-form parsing.
  • Treat retrieval quality as an indexing and ranking problem, not just a prompt problem.
  • Use middleware for control-plane behavior instead of burying policies inside prompts.
  • Use LangGraph when workflow state and failure recovery matter.
  • Trace everything before calling the system unreliable.

The common anti-pattern is to cram a complex workflow into a single prompt and a pile of tools. That works for a demo. It collapses under real data, real users, and real failure modes.

Migrating Old LangChain Code

Much of the internet still teaches a LangChain that no longer represents the preferred API surface. The table below maps the most common legacy concepts to current replacements.

  • LLMChain → runnable composition such as prompt | model | parser. Simpler composition, better interoperability, fewer bespoke chain classes.
  • initialize_agent(..., agent=AgentType...) → create_agent(...). The new API is simpler and sits on top of LangGraph.
  • ConversationBufferMemory and similar memory classes → thread state, checkpointers, summarization middleware, and long-term stores. Memory is now modeled as execution state plus durable storage.
  • Provider imports under langchain.llms or old chat-model namespaces → provider packages such as langchain-openai, or unified initialization through init_chat_model. Cleaner packaging and fewer unnecessary dependencies.
  • Monolithic chain classes for QA, summarization, or conversational retrieval → composable runnables or LangGraph workflows. Current APIs make dataflow more explicit and easier to customize.
  • Old retriever helpers under langchain.retrievers → current retriever interfaces, plus langchain-classic for legacy helpers. The ecosystem moved several v0 components into a compatibility package.
What To Learn First

For a new project, the shortest correct learning path is:

  1. Learn chat models, messages, prompts, and structured output.
  2. Learn tools and the create_agent API.
  3. Learn runnable composition for non-agent flows and RAG pipelines.
  4. Learn LangGraph when the system needs checkpoints, interrupts, or multi-step stateful control flow.
  5. Learn LangSmith before the project reaches production.

That sequence matches how the current framework is actually designed. It also avoids the trap of spending days on APIs that only exist because old tutorials have a long afterlife.
