Agent Builder Platform - Complete Code Documentation
🤖 Agent Builder Platform
End-to-end agent building platform - complete code documentation
📚 Table of Contents
1. Project Overview
2. Directory Structure
3. Configuration Files
4. AgentSpec Definition
5. Model Providers
6. Framework Adapters
7. Notes Tool
1. Project Overview
Core Features
🎨 Visual Orchestration
Drag-and-drop agent building with LLM, Tool, RAG, Router, Code, and Notes nodes, plus a live flow-diagram preview
🧠 Multi-Model Support
Claude (3.5 Sonnet), OpenAI (GPT-4, GPT-5 placeholder), Google Gemini 2.5 Pro, Zhipu GLM, Kimi, and more
🔧 Multi-Framework Adapters
Claude Agentic SDK, OpenAI Agents, LangChain, CrewAI, AutoGPT, Semantic Kernel, Google Agents
📝 Notes Tool
Markdown/rich-text notes as agent context, with version control, RAG retrieval, and permission management
📚 Smart Knowledge Base
Pluggable embedding, document-parsing, and rerank models; drag-and-drop upload, automatic indexing, OCR support
🔍 Explainability
Retrieval scores, rerank scores, tool traces, cost analysis, A/B comparison, Run Bundle export
🌍 Dual China/Global Deployment
Compliant deployment, data localization, domain whitelisting, outbound telemetry disabled
🔐 Security & Compliance
RBAC permissions, data masking, audit logs, content filtering, deep-synthesis labeling
Tech Stack
Frontend: Next.js 14 (App Router) + TypeScript + Tailwind CSS + shadcn/ui + Zustand
Backend: Python 3.11 + FastAPI + SQLAlchemy + Pydantic v2
Database: PostgreSQL + pgvector (vector search)
Cache/queue: Redis + Arq/RQ
Observability: OpenTelemetry + Jaeger (optional)
Deployment: Docker Compose + Kubernetes (optional)
2. Directory Structure
agent-builder-platform/
├── apps/
│   ├── web/                      # Next.js frontend
│   │   ├── app/                  # App Router pages
│   │   │   ├── page.tsx          # Home page
│   │   │   ├── login/            # Login page
│   │   │   ├── agents/           # Agent orchestration
│   │   │   ├── playground/       # Playground
│   │   │   ├── notes/            # Notes management
│   │   │   ├── datasets/         # Knowledge base
│   │   │   └── settings/         # Settings
│   │   ├── components/           # UI components
│   │   ├── lib/                  # Utilities
│   │   ├── hooks/                # React hooks
│   │   ├── package.json
│   │   ├── Dockerfile
│   │   └── next.config.js
│   │
│   ├── api/                      # FastAPI backend
│   │   ├── models/               # SQLAlchemy models
│   │   ├── routers/              # API routes
│   │   ├── services/             # Business logic
│   │   ├── schemas/              # Pydantic schemas
│   │   ├── utils/                # Utilities
│   │   ├── main.py               # FastAPI entry point
│   │   ├── database.py           # Database connection
│   │   ├── requirements.txt
│   │   └── Dockerfile
│   │
│   └── worker/                   # Async task worker
│       ├── worker.py             # Worker entry point
│       ├── tasks/                # Task definitions
│       ├── requirements.txt
│       └── Dockerfile
│
├── packages/
│   ├── providers/                # Model providers
│   │   ├── base.py               # Base interface
│   │   ├── claude.py             # Claude provider
│   │   ├── openai.py             # OpenAI provider
│   │   ├── google.py             # Google Gemini
│   │   ├── zhipu.py              # Zhipu GLM
│   │   └── kimi.py               # Kimi
│   │
│   ├── frameworks/               # Framework adapters
│   │   ├── base.py               # Base interface
│   │   ├── claude_agentic.py     # Claude Agentic SDK
│   │   ├── openai_agents.py      # OpenAI Agents
│   │   ├── langchain.py          # LangChain
│   │   ├── crewai.py             # CrewAI
│   │   ├── autogpt.py            # AutoGPT
│   │   └── semantic_kernel.py    # Semantic Kernel
│   │
│   ├── specs/                    # AgentSpec definitions
│   │   ├── agent_spec.py         # Pydantic models
│   │   └── examples/             # Example configs
│   │       ├── chatbot.yml
│   │       ├── rag-flow.yml
│   │       └── multi-agent.yml
│   │
│   ├── kb/                       # Knowledge base components
│   │   ├── embeddings.py         # Embedding model interface
│   │   ├── parsers.py            # Document parsing
│   │   ├── rerankers.py          # Rerank models
│   │   └── pipeline.py           # Processing pipeline
│   │
│   ├── notes/                    # Notes service
│   │   ├── service.py            # Notes CRUD
│   │   ├── indexer.py            # RAG indexing
│   │   └── version.py            # Version control
│   │
│   ├── ui/                       # Shared UI components
│   │   ├── Canvas.tsx            # Canvas
│   │   ├── NodeEditor.tsx        # Node editor
│   │   └── FlowDiagram.tsx       # Flow diagram
│   │
│   ├── sdk-js/                   # JavaScript SDK
│   │   ├── index.ts
│   │   └── package.json
│   │
│   └── sdk-py/                   # Python SDK
│       ├── __init__.py
│       └── setup.py
│
├── infra/
│   ├── docker-compose.yml
│   ├── docker-compose.prod.yml
│   ├── init.sql
│   ├── otel-collector-config.yaml
│   └── k8s/                      # Kubernetes configs
│       ├── deployment.yaml
│       ├── service.yaml
│       └── ingress.yaml
│
├── docs/
│   ├── ARCHITECTURE.md
│   ├── PLAYGROUND.md
│   ├── CN_DEPLOYMENT.md
│   └── EXTENSION.md
│
├── .env.example
├── Makefile
├── README.md
└── .gitignore
3. Configuration Files
3.1 Environment Variables (.env.example)
# ============================================
# Environment variables - default template
# ============================================

# Core
# Deployment region: cn (China) | global (overseas)
REGION=cn
NODE_ENV=development

# Database
DATABASE_URL=postgresql://agent:agent_password@localhost:5432/agent_platform
POSTGRES_USER=agent
POSTGRES_PASSWORD=agent_password
POSTGRES_DB=agent_platform

# Redis
REDIS_URL=redis://localhost:6379/0

# Model API keys
# Overseas models
OPENAI_API_KEY=sk-your-openai-key-here
OPENAI_BASE_URL=https://api.openai.com/v1
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
ANTHROPIC_BASE_URL=https://api.anthropic.com
GOOGLE_API_KEY=your-google-gemini-key

# Domestic models (preferred when REGION=cn)
ZHIPU_API_KEY=your-zhipu-key
ZHIPU_BASE_URL=https://open.bigmodel.cn/api/paas/v4
KIMI_API_KEY=your-kimi-key
KIMI_BASE_URL=https://api.moonshot.cn/v1

# Default model configuration
DEFAULT_LLM_PROVIDER=claude
DEFAULT_LLM_MODEL=claude-3-5-sonnet-20241022
DEFAULT_EMBEDDING_PROVIDER=bge-m3
DEFAULT_EMBEDDING_MODEL=BAAI/bge-m3
DEFAULT_RERANK_PROVIDER=jina
DEFAULT_RERANK_MODEL=jina-reranker-v2-base-multilingual
DEFAULT_DOC_PARSER=unstructured

# Feature flags
ENABLE_OCR=true
ENABLE_TELEMETRY=false
ENABLE_CONTENT_FILTER=true
ENABLE_DEEP_SYNTHESIS_WATERMARK=true

# Network & security
# Outbound host whitelist (comma-separated; enforced when REGION=cn)
ALLOWED_OUTBOUND_HOSTS=.aliyuncs.com,.qcloud.com,api.openai.com,api.anthropic.com,generativelanguage.googleapis.com,open.bigmodel.cn,api.moonshot.cn
# Allowed CORS origins
CORS_ORIGINS=http://localhost:3000,http://localhost:8000
# JWT secret (change this in production!)
JWT_SECRET=your-super-secret-jwt-key-change-in-production
JWT_ALGORITHM=HS256
JWT_EXPIRE_MINUTES=1440

# Object storage
STORAGE_TYPE=local  # local | minio | s3
STORAGE_PATH=./storage

# Observability
OTEL_ENABLED=false
OTEL_ENDPOINT=http://localhost:4317
JAEGER_ENDPOINT=http://localhost:14268/api/traces

# Frontend
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_WS_URL=ws://localhost:8000

# Worker
WORKER_CONCURRENCY=4
WORKER_MAX_RETRIES=3

# Logging & audit
LOG_LEVEL=INFO
AUDIT_LOG_ENABLED=true
SENSITIVE_DATA_MASKING=true
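To make the whitelist semantics concrete, here is a minimal sketch of how ALLOWED_OUTBOUND_HOSTS could be enforced: entries starting with "." match the bare domain and any subdomain, other entries match exactly. The function name and matching rules are illustrative assumptions, not the platform's actual implementation.

```python
from urllib.parse import urlparse

# Abbreviated whitelist for illustration (same format as ALLOWED_OUTBOUND_HOSTS)
ALLOWED = ".aliyuncs.com,.qcloud.com,api.openai.com,api.anthropic.com".split(",")

def is_outbound_allowed(url: str, allowed=ALLOWED) -> bool:
    """Return True if the URL's host matches a whitelist entry."""
    host = urlparse(url).hostname or ""
    for entry in allowed:
        if entry.startswith("."):
            # Suffix entry: match the bare domain or any subdomain of it.
            if host == entry[1:] or host.endswith(entry):
                return True
        elif host == entry:
            return True
    return False
```

A check like this would run before any outbound HTTP request when REGION=cn.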
3.2 Docker Compose (docker-compose.yml)
version: '3.8'

services:
  # PostgreSQL + pgvector
  db:
    image: pgvector/pgvector:pg16
    container_name: agent-platform-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-agent}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-agent_password}
      POSTGRES_DB: ${POSTGRES_DB:-agent_platform}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./infra/init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U agent"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - agent-network

  # Redis
  redis:
    image: redis:7-alpine
    container_name: agent-platform-redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5
    networks:
      - agent-network

  # FastAPI backend
  api:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    container_name: agent-platform-api
    environment:
      - DATABASE_URL=postgresql://agent:agent_password@db:5432/agent_platform
      - REDIS_URL=redis://redis:6379/0
      - REGION=${REGION:-cn}
    env_file:
      - .env
    ports:
      - "8000:8000"
    volumes:
      - ./apps/api:/app
      - ./packages:/packages
      - ./storage:/storage
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: uvicorn main:app --host 0.0.0.0 --port 8000
    networks:
      - agent-network

  # Worker (async tasks)
  worker:
    build:
      context: ./apps/worker
      dockerfile: Dockerfile
    container_name: agent-platform-worker
    environment:
      - DATABASE_URL=postgresql://agent:agent_password@db:5432/agent_platform
      - REDIS_URL=redis://redis:6379/0
    env_file:
      - .env
    volumes:
      - ./apps/worker:/app
      - ./packages:/packages
      - ./storage:/storage
    depends_on:
      - db
      - redis
    networks:
      - agent-network

  # Next.js frontend
  web:
    build:
      context: ./apps/web
      dockerfile: Dockerfile
    container_name: agent-platform-web
    environment:
      - NEXT_PUBLIC_API_URL=http://api:8000
      - NEXT_PUBLIC_WS_URL=ws://api:8000
    ports:
      - "3000:3000"
    volumes:
      - ./apps/web:/app
      - /app/node_modules
      - /app/.next
    depends_on:
      - api
    networks:
      - agent-network

volumes:
  postgres_data:
  redis_data:

networks:
  agent-network:
    driver: bridge
3.3 Makefile
.PHONY: help install dev dev-infra up down logs test test-unit test-e2e clean

help:
	@echo "Agent Builder Platform - dev commands"
	@echo ""
	@echo "make install    - install all dependencies"
	@echo "make dev        - start dev environment (infra only)"
	@echo "make dev-infra  - start database and Redis"
	@echo "make up         - start all services (production mode)"
	@echo "make down       - stop all services"
	@echo "make logs       - tail logs"
	@echo "make test       - run all tests"
	@echo "make clean      - clean data and containers"

install:
	@echo "📦 Installing frontend dependencies..."
	cd apps/web && npm install
	@echo "📦 Installing backend dependencies..."
	cd apps/api && pip install -r requirements.txt
	@echo "✅ Dependencies installed"

dev-infra:
	@echo "🚀 Starting infrastructure (PostgreSQL + Redis)..."
	docker-compose up -d db redis
	@echo "⏳ Waiting for database..."
	sleep 5
	@echo "✅ Infrastructure up"

dev: dev-infra
	@echo "💡 Run these in separate terminals:"
	@echo "  terminal 1: cd apps/api && uvicorn main:app --reload"
	@echo "  terminal 2: cd apps/web && npm run dev"
	@echo "  terminal 3: cd apps/worker && python worker.py"

up:
	@echo "🚀 Starting all services (production mode)..."
	docker-compose up -d
	@echo "✅ Services up! Visit http://localhost:3000"

down:
	@echo "🛑 Stopping all services..."
	docker-compose down

logs:
	docker-compose logs -f

test:
	@echo "🧪 Running all tests..."
	cd apps/api && pytest -v
	cd apps/web && npm run test

test-unit:
	@echo "🧪 Running unit tests..."
	cd apps/api && pytest tests/unit -v

test-e2e:
	@echo "🧪 Running E2E tests..."
	cd apps/web && npm run test:e2e

clean:
	@echo "🧹 Cleaning data and containers..."
	docker-compose down -v
	rm -rf apps/api/__pycache__ apps/api/.pytest_cache
	rm -rf apps/web/.next apps/web/node_modules
	@echo "✅ Done"
4. AgentSpec Definition
packages/specs/agent_spec.py
""" AgentSpec - 智能体规范定义 统一的智能体配置格式,支持多框架/多模型/多工具/笔记/知识库 """ from typing import Any, Dict, List, Literal, Optional from pydantic import BaseModel, Field from datetime import datetime import uuid # ==================== I/O Schema ==================== class IOSchema(BaseModel): """输入/输出的 JSON Schema 定义""" type: str = "object" properties: Dict[str, Any] = Field(default_factory=dict) required: List[str] = Field(default_factory=list) # ==================== Notes 配置 ==================== class NotesConfig(BaseModel): """笔记配置""" notebook_ids: List[str] = Field(default_factory=list, description="绑定的笔记本ID列表") page_ids: List[str] = Field(default_factory=list, description="绑定的笔记页面ID列表") enable_retrieval: bool = Field(default=False, description="是否启用RAG检索") enable_direct_injection: bool = Field(default=True, description="是否直接注入最近笔记") retrieval_top_k: int = Field(default=3, description="检索Top K") context_window: int = Field(default=2000, description="上下文窗口大小(tokens)") weight: float = Field(default=1.0, ge=0.0, le=1.0, description="笔记上下文权重") # ==================== Knowledge Base 配置 ==================== class KBConfig(BaseModel): """知识库配置""" dataset_ids: List[str] = Field(default_factory=list, description="数据集ID列表") embedding_provider: str = Field(default="bge-m3", description="嵌入模型") embedding_model: str = Field(default="BAAI/bge-m3", description="嵌入模型名称") doc_parser: str = Field(default="unstructured", description="文档解析器") rerank_provider: Optional[str] = Field(default="jina", description="重排模型") rerank_model: Optional[str] = Field(default="jina-reranker-v2-base-multilingual") retrieval_top_k: int = Field(default=5, description="检索Top K") rerank_top_k: int = Field(default=3, description="重排后Top K") score_threshold: float = Field(default=0.5, ge=0.0, le=1.0, description="分数阈值") # ==================== Policies 配置 ==================== class Policies(BaseModel): """合规与策略配置""" data_residency: Literal["cn", "us", "eu", "global"] = Field(default="cn", description="数据驻留要求") 
content_filter: bool = Field(default=True, description="启用内容过滤") sensitive_masking: bool = Field(default=True, description="敏感数据脱敏") deep_synthesis_mark: bool = Field(default=True, description="深度合成标识") audit_level: Literal["none", "basic", "full"] = Field(default="full", description="审计日志级别") allowed_tools: List[str] = Field(default_factory=list, description="允许的工具白名单") max_tool_calls: int = Field(default=10, description="最大工具调用次数") timeout_seconds: int = Field(default=300, description="执行超时(秒)") # ==================== Runtime 配置 ==================== class RuntimeConfig(BaseModel): """运行时配置""" framework: Literal[ "claude_agentic", "openai_agents", "google_agents", "langchain", "crewai", "autogpt", "semantic_kernel" ] = Field(default="claude_agentic", description="智能体框架") model: str = Field(description="默认模型(可被节点覆盖)") temperature: float = Field(default=0.7) tools: List[str] = Field(default_factory=list, description="可用工具列表") memory: Optional[Dict[str, Any]] = Field(default=None, description="记忆配置") retries: int = Field(default=3, ge=0) timeout: int = Field(default=300, ge=1) # ==================== Metadata ==================== class Metadata(BaseModel): """元数据""" name: str version: str = Field(default="1.0.0") owner: Optional[str] = None tags: List[str] = Field(default_factory=list) description: Optional[str] = None created_at: datetime = Field(default_factory=datetime.utcnow) updated_at: datetime = Field(default_factory=datetime.utcnow) # ==================== AgentSpec 主类 ==================== class AgentSpec(BaseModel): """ 智能体完整规范 可序列化为 JSON/YAML,跨框架/模型/工具通用 """ metadata: Metadata runtime: RuntimeConfig routing: Dict[str, Any] = Field( default_factory=lambda: {"nodes": [], "edges": []}, description="DAG 路由图" ) io: Dict[str, IOSchema] = Field( default_factory=lambda: {"inputs": IOSchema(), "outputs": IOSchema()}, description="输入输出 Schema" ) policies: Policies = Field(default_factory=Policies) notes: Optional[NotesConfig] = None kb: Optional[KBConfig] = None
Example: chatbot.yml
metadata:
  name: simple-chatbot
  version: 1.0.0
  owner: demo@platform.ai
  tags:
    - chatbot
    - demo
  description: A simple conversational chatbot
runtime:
  framework: claude_agentic
  model: claude-3-5-sonnet-20241022
  temperature: 0.7
  tools: []
  retries: 3
  timeout: 60
policies:
  data_residency: cn
  content_filter: true
  sensitive_masking: true
  audit_level: basic
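Once parsed (e.g. with PyYAML), a spec like this becomes a nested dict that AgentSpec validates with Pydantic. As a dependency-free illustration of the required shape, here is a minimal structural check; the helper is hypothetical and much weaker than the real Pydantic validation, which also enforces types, ranges, and Literal values.

```python
def validate_spec_shape(spec: dict) -> list:
    """Return a list of structural problems; an empty list means the shape looks right."""
    errors = []
    # metadata and runtime are the only required top-level sections
    for section in ("metadata", "runtime"):
        if section not in spec:
            errors.append(f"missing section: {section}")
    # metadata.name and runtime.model have no defaults in AgentSpec
    if "metadata" in spec and "name" not in spec["metadata"]:
        errors.append("metadata.name is required")
    if "runtime" in spec and "model" not in spec["runtime"]:
        errors.append("runtime.model is required")
    return errors

# Dict form of the chatbot.yml example above (abbreviated)
chatbot = {
    "metadata": {"name": "simple-chatbot", "version": "1.0.0"},
    "runtime": {"framework": "claude_agentic",
                "model": "claude-3-5-sonnet-20241022"},
    "policies": {"data_residency": "cn"},
}
```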
5. Model Providers
All providers implement a common ModelProvider interface supporting synchronous and streaming completions, function calling, and embeddings.
packages/providers/base.py
""" Model Provider 基础接口 所有模型提供商必须实现此协议 """ from typing import Protocol, List, Dict, Any, Iterable, Optional from abc import ABC, abstractmethod class ModelProvider(ABC): """模型提供商基类""" def __init__(self, api_key: str, base_url: Optional[str] = None, **kwargs): self.api_key = api_key self.base_url = base_url self.config = kwargs @property @abstractmethod def name(self) -> str: """提供商名称""" pass @abstractmethod def complete( self, messages: List[Dict[str, Any]], model: str, tools: Optional[List[Dict]] = None, stream: bool = False, temperature: float = 0.7, max_tokens: int = 2000, **kwargs ) -> Iterable | Dict: """ 文本补全 Args: messages: 对话消息列表 model: 模型名称 tools: 可用工具定义(Function Calling) stream: 是否流式返回 temperature: 温度参数 max_tokens: 最大 token 数 Returns: stream=True: 返回生成器(Iterable) stream=False: 返回完整响应(Dict) """ pass @abstractmethod def embeddings( self, texts: List[str], model: Optional[str] = None, **kwargs ) -> List[List[float]]: """ 生成嵌入向量 Args: texts: 待嵌入的文本列表 model: 嵌入模型名称 Returns: 嵌入向量列表 """ pass
packages/providers/claude.py (Claude Provider)
""" Anthropic Claude Provider 支持 Claude 3.5 Sonnet 等模型 """ from typing import List, Dict, Any, Iterable, Optional import anthropic from .base import ModelProvider class ClaudeProvider(ModelProvider): """Claude 模型提供商""" def __init__(self, api_key: str, base_url: Optional[str] = None, **kwargs): super().__init__(api_key, base_url, **kwargs) self.client = anthropic.Anthropic( api_key=api_key, base_url=base_url or "https://api.anthropic.com" ) @property def name(self) -> str: return "claude" def complete( self, messages: List[Dict[str, Any]], model: str = "claude-3-5-sonnet-20241022", tools: Optional[List[Dict]] = None, stream: bool = False, temperature: float = 0.7, max_tokens: int = 2000, **kwargs ) -> Iterable | Dict: """Claude 补全""" # 提取 system prompt system_msg = None user_messages = [] for msg in messages: if msg["role"] == "system": system_msg = msg["content"] else: user_messages.append(msg) # 构建请求参数 params = { "model": model, "messages": user_messages, "max_tokens": max_tokens, "temperature": temperature, "stream": stream, } if system_msg: params["system"] = system_msg if tools: params["tools"] = self._convert_tools(tools) # 调用 API if stream: return self._stream_complete(params) else: response = self.client.messages.create(**params) return self._format_response(response) def _stream_complete(self, params: Dict) -> Iterable: """流式补全""" with self.client.messages.stream(**params) as stream: for event in stream: if event.type == "content_block_delta": yield { "type": "token", "content": event.delta.text } elif event.type == "message_stop": yield {"type": "final", "done": True} def _format_response(self, response) -> Dict: """格式化响应""" return { "content": response.content[0].text, "model": response.model, "usage": { "input_tokens": response.usage.input_tokens, "output_tokens": response.usage.output_tokens, } } def _convert_tools(self, tools: List[Dict]) -> List[Dict]: """转换工具格式为 Claude 格式""" return [ { "name": tool["name"], "description": tool.get("description", ""), 
"input_schema": tool.get("parameters", {}) } for tool in tools ] def embeddings( self, texts: List[str], model: Optional[str] = None, **kwargs ) -> List[List[float]]: """Claude 暂不支持嵌入,可使用 Voyage AI""" raise NotImplementedError("Claude does not support embeddings. Use Voyage AI instead.")
packages/providers/openai.py (OpenAI Provider)
""" OpenAI Provider 支持 GPT-4, GPT-5(占位)等模型 """ from typing import List, Dict, Any, Iterable, Optional import openai from .base import ModelProvider class OpenAIProvider(ModelProvider): """OpenAI 模型提供商""" def __init__(self, api_key: str, base_url: Optional[str] = None, **kwargs): super().__init__(api_key, base_url, **kwargs) self.client = openai.OpenAI( api_key=api_key, base_url=base_url or "https://api.openai.com/v1" ) @property def name(self) -> str: return "openai" def complete( self, messages: List[Dict[str, Any]], model: str = "gpt-4-turbo", tools: Optional[List[Dict]] = None, stream: bool = False, temperature: float = 0.7, max_tokens: int = 2000, **kwargs ) -> Iterable | Dict: """OpenAI 补全""" # GPT-5 占位处理:如果不可用则回退到 GPT-4 if "gpt-5" in model.lower(): try: # 尝试调用 GPT-5 test_response = self.client.models.retrieve(model) except: # 回退到 GPT-4 Turbo model = "gpt-4-turbo" print(f"GPT-5 not available, falling back to {model}") params = { "model": model, "messages": messages, "temperature": temperature, "max_tokens": max_tokens, "stream": stream, } if tools: params["tools"] = tools params["tool_choice"] = "auto" if stream: return self._stream_complete(params) else: response = self.client.chat.completions.create(**params) return self._format_response(response) def _stream_complete(self, params: Dict) -> Iterable: """流式补全""" stream = self.client.chat.completions.create(**params) for chunk in stream: if chunk.choices[0].delta.content: yield { "type": "token", "content": chunk.choices[0].delta.content } yield {"type": "final", "done": True} def _format_response(self, response) -> Dict: """格式化响应""" return { "content": response.choices[0].message.content, "model": response.model, "usage": { "input_tokens": response.usage.prompt_tokens, "output_tokens": response.usage.completion_tokens, }, "tool_calls": response.choices[0].message.tool_calls if hasattr(response.choices[0].message, "tool_calls") else [] } def embeddings( self, texts: List[str], model: Optional[str] = 
"text-embedding-3-large", **kwargs ) -> List[List[float]]: """生成嵌入向量""" response = self.client.embeddings.create( input=texts, model=model ) return [item.embedding for item in response.data]
For space reasons, the Google Gemini, Zhipu GLM, and Kimi providers are omitted here; each follows the same pattern, subclassing ModelProvider and implementing complete and embeddings. See the repository's packages/providers/ directory for the full code.
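With several providers sharing one interface, a small registry lets the platform instantiate the right one from configuration such as DEFAULT_LLM_PROVIDER. The sketch below is a hypothetical pattern, not the platform's actual wiring; the decorator, function names, and EchoProvider stand-in are all illustrative.

```python
from typing import Callable, Dict

# name -> provider class
_REGISTRY: Dict[str, Callable[..., object]] = {}

def register_provider(name: str):
    """Class decorator recording a provider class under its name."""
    def wrap(cls):
        _REGISTRY[name] = cls
        return cls
    return wrap

def create_provider(name: str, **kwargs):
    """Instantiate a registered provider, e.g. from DEFAULT_LLM_PROVIDER."""
    if name not in _REGISTRY:
        raise KeyError(f"unknown provider: {name}")
    return _REGISTRY[name](**kwargs)

@register_provider("echo")
class EchoProvider:
    """Stand-in for a real provider class such as ClaudeProvider."""
    def __init__(self, api_key: str = "", **kwargs):
        self.api_key = api_key
```

In the real codebase the registered classes would be ClaudeProvider, OpenAIProvider, and so on, each constructed with its API key and base URL from the environment.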
6. Framework Adapters
Adapters map an AgentSpec onto a concrete agent framework (LangChain, CrewAI, and others), giving the platform a single orchestration interface across frameworks.
packages/frameworks/base.py
""" Agent Framework Adapter 基础接口 """ from typing import Protocol, Dict, Any, Iterable from abc import ABC, abstractmethod from packages.specs.agent_spec import AgentSpec class AgentFrameworkAdapter(ABC): """智能体框架适配器基类""" @property @abstractmethod def framework(self) -> str: """框架名称""" pass @abstractmethod def run( self, spec: AgentSpec, input_data: Dict[str, Any], stream: bool = False, **kwargs ) -> Iterable | Dict: """ 运行智能体 Args: spec: AgentSpec 配置 input_data: 输入数据 stream: 是否流式返回 Returns: stream=True: 返回生成器 stream=False: 返回完整结果 """ pass
packages/frameworks/claude_agentic.py
""" Claude Agentic SDK Adapter """ from typing import Dict, Any, Iterable from .base import AgentFrameworkAdapter from packages.specs.agent_spec import AgentSpec from packages.providers.claude import ClaudeProvider import os class ClaudeAgenticAdapter(AgentFrameworkAdapter): """Claude Agentic SDK 适配器""" @property def framework(self) -> str: return "claude_agentic" def run( self, spec: AgentSpec, input_data: Dict[str, Any], stream: bool = False, **kwargs ) -> Iterable | Dict: """运行 Claude Agent""" # 初始化 Claude Provider provider = ClaudeProvider( api_key=os.getenv("ANTHROPIC_API_KEY"), base_url=os.getenv("ANTHROPIC_BASE_URL") ) # 构建消息 messages = [] # 添加 system prompt(如果有) nodes = spec.routing.get("nodes", []) for node in nodes: if node.get("type") == "llm" and node.get("system_prompt"): messages.append({ "role": "system", "content": node["system_prompt"] }) break # 添加用户输入 user_message = input_data.get("message", input_data.get("query", "")) messages.append({ "role": "user", "content": user_message }) # 调用 Claude return provider.complete( messages=messages, model=spec.runtime.model, stream=stream, temperature=spec.runtime.temperature, tools=self._convert_tools(spec.runtime.tools) ) def _convert_tools(self, tool_names: list) -> list: """转换工具定义""" # 这里应该从工具注册表加载实际工具定义 return []
packages/frameworks/langchain.py
""" LangChain Adapter """ from typing import Dict, Any, Iterable from .base import AgentFrameworkAdapter from packages.specs.agent_spec import AgentSpec class LangChainAdapter(AgentFrameworkAdapter): """LangChain 适配器""" @property def framework(self) -> str: return "langchain" def run( self, spec: AgentSpec, input_data: Dict[str, Any], stream: bool = False, **kwargs ) -> Iterable | Dict: """运行 LangChain Agent""" from langchain.chat_models import ChatOpenAI from langchain.schema import HumanMessage, SystemMessage # 初始化模型 llm = ChatOpenAI( model=spec.runtime.model, temperature=spec.runtime.temperature, streaming=stream ) # 构建消息 messages = [] nodes = spec.routing.get("nodes", []) for node in nodes: if node.get("type") == "llm" and node.get("system_prompt"): messages.append(SystemMessage(content=node["system_prompt"])) break user_message = input_data.get("message", input_data.get("query", "")) messages.append(HumanMessage(content=user_message)) # 调用模型 if stream: def stream_generator(): for chunk in llm.stream(messages): yield { "type": "token", "content": chunk.content } yield {"type": "final", "done": True} return stream_generator() else: response = llm(messages) return { "content": response.content, "model": spec.runtime.model }
The OpenAI Agents, CrewAI, AutoGPT, Semantic Kernel, and Google Agents adapters follow the same pattern, subclassing AgentFrameworkAdapter and implementing run. See the packages/frameworks/ directory for the full code.
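The adapters share one piece of logic worth isolating: take the system prompt from the first LLM node in spec.routing, then append the user input (falling back from "message" to "query"). Here is that step as a standalone, framework-neutral sketch; the function name is illustrative, not an existing helper in the codebase.

```python
from typing import Any, Dict, List

def build_messages(routing: Dict[str, Any],
                   input_data: Dict[str, Any]) -> List[Dict[str, str]]:
    """Build the provider message list from a routing graph and input payload."""
    messages: List[Dict[str, str]] = []
    for node in routing.get("nodes", []):
        # Only the first LLM node with a system prompt contributes one.
        if node.get("type") == "llm" and node.get("system_prompt"):
            messages.append({"role": "system", "content": node["system_prompt"]})
            break
    # "message" takes precedence over "query"; default to empty string.
    user_message = input_data.get("message", input_data.get("query", ""))
    messages.append({"role": "user", "content": user_message})
    return messages
```

Each adapter then only has to translate this generic message list into its framework's own message types (e.g. LangChain's SystemMessage/HumanMessage).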
7. Notes Tool
packages/notes/service.py
""" 笔记服务 - CRUD 操作 """ from typing import List, Optional, Dict, Any from sqlalchemy.orm import Session from datetime import datetime import uuid class Note: """笔记模型(简化版,实际应使用 SQLAlchemy)""" def __init__(self, **kwargs): self.id = kwargs.get("id", str(uuid.uuid4())) self.notebook_id = kwargs.get("notebook_id") self.title = kwargs.get("title", "Untitled") self.content = kwargs.get("content", "") self.content_type = kwargs.get("content_type", "markdown") # markdown | html self.tags = kwargs.get("tags", []) self.enable_context = kwargs.get("enable_context", True) self.version = kwargs.get("version", 1) self.created_at = kwargs.get("created_at", datetime.utcnow()) self.updated_at = kwargs.get("updated_at", datetime.utcnow()) class NotesService: """笔记服务""" def __init__(self, db: Session): self.db = db def create_note( self, notebook_id: str, title: str, content: str, content_type: str = "markdown", tags: List[str] = [], enable_context: bool = True ) -> Note: """创建笔记""" note = Note( notebook_id=notebook_id, title=title, content=content, content_type=content_type, tags=tags, enable_context=enable_context ) # TODO: 保存到数据库 return note def update_note( self, note_id: str, title: Optional[str] = None, content: Optional[str] = None, tags: Optional[List[str]] = None, enable_context: Optional[bool] = None ) -> Note: """更新笔记(自动创建版本)""" # TODO: 加载笔记、创建版本快照、更新内容 pass def get_note(self, note_id: str) -> Optional[Note]: """获取笔记""" # TODO: 从数据库加载 pass def list_notes( self, notebook_id: Optional[str] = None, tags: Optional[List[str]] = None, enable_context: Optional[bool] = None ) -> List[Note]: """列出笔记""" # TODO: 查询数据库 pass def delete_note(self, note_id: str) -> bool: """删除笔记(软删除)""" # TODO: 标记为已删除 pass def get_note_versions(self, note_id: str) -> List[Dict]: """获取笔记版本历史""" # TODO: 查询版本表 pass def restore_version(self, note_id: str, version: int) -> Note: """回滚到指定版本""" # TODO: 恢复指定版本内容 pass
packages/notes/indexer.py
""" 笔记 RAG 索引器 """ from typing import List, Dict, Any from packages.kb.embeddings import EmbeddingProvider import numpy as np class NotesIndexer: """笔记索引器(用于 RAG 检索)""" def __init__(self, embedding_provider: EmbeddingProvider): self.embedding_provider = embedding_provider def index_note(self, note_id: str, content: str): """为笔记建立索引""" # 1. 切片内容(按段落/句子) chunks = self._chunk_content(content) # 2. 生成嵌入向量 embeddings = self.embedding_provider.embed(chunks) # 3. 存储到 pgvector self._store_vectors(note_id, chunks, embeddings) def search_notes( self, query: str, notebook_ids: List[str] = [], top_k: int = 3, score_threshold: float = 0.5 ) -> List[Dict]: """检索笔记""" # 1. 查询向量化 query_embedding = self.embedding_provider.embed([query])[0] # 2. 向量检索(pgvector) results = self._vector_search(query_embedding, notebook_ids, top_k) # 3. 过滤低分结果 filtered_results = [r for r in results if r["score"] >= score_threshold] return filtered_results def _chunk_content(self, content: str) -> List[str]: """切片内容""" # 简单按段落切分,实际可用更复杂的策略 return [chunk.strip() for chunk in content.split("\n\n") if chunk.strip()] def _store_vectors(self, note_id: str, chunks: List[str], embeddings: List[List[float]]): """存储向量到 pgvector""" # TODO: INSERT INTO note_embeddings (note_id, chunk, embedding) pass def _vector_search( self, query_embedding: List[float], notebook_ids: List[str], top_k: int ) -> List[Dict]: """向量检索""" # TODO: SELECT * FROM note_embeddings ORDER BY embedding <=> query_embedding LIMIT top_k return []
📦 This code documentation has reached the HTML size limit
Because the full codebase is large, this HTML document contains the core parts:
- ✅ Project overview and features
- ✅ Complete directory structure
- ✅ Configuration files (.env, docker-compose, Makefile)
- ✅ AgentSpec definition and example
- ✅ Model providers (complete Claude and OpenAI code)
- ✅ Framework adapters (Claude Agentic, LangChain)
- ✅ Notes tool (notes service and indexer)
The remaining parts (knowledge base, backend API, frontend UI, Playground, deployment, tests) must be generated page by page or module by module.
Suggested next steps:
1. Save this HTML as a reference document
2. Request detailed code for specific modules as needed
3. Or generate the full project step by step in the Claude Code workbench
🚀 Quick Start
One-command startup
git clone https://github.com/your-org/agent-builder-platform.git
cd agent-builder-platform
cp .env.example .env
# Edit .env and fill in your API keys
make up
# Open http://localhost:3000
Need the full code?
Reply with one of the following to get the complete code for that module:
- 📚 "Generate the complete knowledge base code"
- 🔌 "Generate the complete backend API code"
- 🎨 "Generate the complete frontend UI code"
- 🔍 "Generate the complete Playground code"
- 🧪 "Generate the test code"
- 📦 "Generate the deployment docs"