AI Menu
- Press ⌘ + J.
- Select text and click "Ask AI" in the floating toolbar.
- Right-click a block and select "Ask AI".
- Press Space in an empty block.
- Search commands in the input field; use arrow keys to navigate and Enter to select.
- Generating content (cursor): Continue writing, Add a summary, Explain
- After a response (cursor): Accept, Discard, Try again
- Editing a selection: Improve writing, Emojify, Make it longer or shorter, Fix spelling & grammar, Simplify language
- After a response (selection): Replace the selection, Insert below, Discard, Try again
'use client';
import * as React from 'react';
import { Plate, usePlateEditor } from 'platejs/react';
import { EditorKit } from '@/components/editor/editor-kit';
import { Editor, EditorContainer } from '@/components/ui/editor';
import { DEMO_VALUES } from './values/demo-values';
export default function Demo({ id }: { id: string }) {
const editor = usePlateEditor({
plugins: EditorKit,
value: DEMO_VALUES[id],
});
return (
<Plate editor={editor}>
<EditorContainer variant="demo">
<Editor />
</EditorContainer>
</Plate>
);
}


Features
- Intelligent command menu: a combobox interface with predefined AI commands for generating and editing content
- Multiple trigger modes:
  - Cursor mode: trigger with Space at the end of a block
  - Selection mode: trigger with text selected
  - Block selection mode: trigger with blocks selected
- Response modes:
  - Chat mode: preview responses with accept/reject options
  - Insert mode: insert content directly with streaming markdown support
- Smart content handling: chunking optimized for tables, code blocks, and complex structures
- Streaming responses: real-time AI content generation
- Markdown integration: full support for Markdown syntax in AI responses
- Customizable prompts: a template system for user and system prompts
- Built-in Vercel AI SDK support: ready-to-use chat API integration
Kit Usage
Installation
The fastest way to add AI functionality is with AIKit, which includes the pre-configured AIPlugin and AIChatPlugin, plus cursor overlay and markdown support, along with their Plate UI components.
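If your project uses the shadcn CLI (as the Plate templates do), the kit can typically be pulled in with a single command; the registry URL below follows the pattern used by the Plate docs and may differ for your setup:

```shell
npx shadcn@latest add https://platejs.org/r/ai-kit
```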
'use client';
import type { AIChatPluginConfig } from '@platejs/ai/react';
import type { UseChatOptions } from 'ai/react';
import { streamInsertChunk, withAIBatch } from '@platejs/ai';
import { AIChatPlugin, AIPlugin, useChatChunk } from '@platejs/ai/react';
import { KEYS, PathApi } from 'platejs';
import { usePluginOption } from 'platejs/react';
import { AILoadingBar, AIMenu } from '@/components/ui/ai-menu';
import { AIAnchorElement, AILeaf } from '@/components/ui/ai-node';
import { CursorOverlayKit } from './cursor-overlay-kit';
import { MarkdownKit } from './markdown-kit';
export const aiChatPlugin = AIChatPlugin.extend({
options: {
chatOptions: {
api: '/api/ai/command',
body: {},
} as UseChatOptions,
promptTemplate: ({ isBlockSelecting, isSelecting }) => {
return isBlockSelecting
? PROMPT_TEMPLATES.userBlockSelecting
: isSelecting
? PROMPT_TEMPLATES.userSelecting
: PROMPT_TEMPLATES.userDefault;
},
systemTemplate: ({ isBlockSelecting, isSelecting }) => {
return isBlockSelecting
? PROMPT_TEMPLATES.systemBlockSelecting
: isSelecting
? PROMPT_TEMPLATES.systemSelecting
: PROMPT_TEMPLATES.systemDefault;
},
},
render: {
afterContainer: AILoadingBar,
afterEditable: AIMenu,
node: AIAnchorElement,
},
shortcuts: { show: { keys: 'mod+j' } },
useHooks: ({ editor, getOption }) => {
const mode = usePluginOption(
{ key: KEYS.aiChat } as AIChatPluginConfig,
'mode'
);
useChatChunk({
onChunk: ({ chunk, isFirst, nodes }) => {
if (isFirst && mode === 'insert') {
editor.tf.withoutSaving(() => {
editor.tf.insertNodes(
{
children: [{ text: '' }],
type: KEYS.aiChat,
},
{
at: PathApi.next(editor.selection!.focus.path.slice(0, 1)),
}
);
});
editor.setOption(AIChatPlugin, 'streaming', true);
}
if (mode === 'insert' && nodes.length > 0) {
withAIBatch(
editor,
() => {
if (!getOption('streaming')) return;
editor.tf.withScrolling(() => {
streamInsertChunk(editor, chunk, {
textProps: {
ai: true,
},
});
});
},
{ split: isFirst }
);
}
},
onFinish: () => {
editor.setOption(AIChatPlugin, 'streaming', false);
editor.setOption(AIChatPlugin, '_blockChunks', '');
editor.setOption(AIChatPlugin, '_blockPath', null);
},
});
},
});
export const AIKit = [
...CursorOverlayKit,
...MarkdownKit,
AIPlugin.withComponent(AILeaf),
aiChatPlugin,
];
const systemCommon = `\
You are an advanced AI-powered note-taking assistant, designed to enhance productivity and creativity in note management.
Respond directly to user prompts with clear, concise, and relevant content. Maintain a neutral, helpful tone.
Rules:
- <Document> is the entire note the user is working on.
- <Reminder> is a reminder of how you should reply to INSTRUCTIONS. It does not apply to questions.
- Anything else is the user prompt.
- Your response should be tailored to the user's prompt, providing precise assistance to optimize note management.
- For INSTRUCTIONS: Follow the <Reminder> exactly. Provide ONLY the content to be inserted or replaced. No explanations or comments.
- For QUESTIONS: Provide a helpful and concise answer. You may include brief explanations if necessary.
- CRITICAL: DO NOT remove or modify the following custom MDX tags: <u>, <callout>, <kbd>, <toc>, <sub>, <sup>, <mark>, <del>, <date>, <span>, <column>, <column_group>, <file>, <audio>, <video> in <Selection> unless the user explicitly requests this change.
- CRITICAL: Distinguish between INSTRUCTIONS and QUESTIONS. Instructions typically ask you to modify or add content. Questions ask for information or clarification.
- CRITICAL: when asked to write in markdown, do not start with \`\`\`markdown.
`;
const systemDefault = `\
${systemCommon}
- <Block> is the current block of text the user is working on.
- Ensure your output can seamlessly fit into the existing <Block> structure.
<Block>
{block}
</Block>
`;
const systemSelecting = `\
${systemCommon}
- <Block> is the block of text containing the user's selection, providing context.
- Ensure your output can seamlessly fit into the existing <Block> structure.
- <Selection> is the specific text the user has selected in the block and wants to modify or ask about.
- Consider the context provided by <Block>, but only modify <Selection>. Your response should be a direct replacement for <Selection>.
<Block>
{block}
</Block>
<Selection>
{selection}
</Selection>
`;
const systemBlockSelecting = `\
${systemCommon}
- <Selection> represents the full blocks of text the user has selected and wants to modify or ask about.
- Your response should be a direct replacement for the entire <Selection>.
- Maintain the overall structure and formatting of the selected blocks, unless explicitly instructed otherwise.
- CRITICAL: Provide only the content to replace <Selection>. Do not add additional blocks or change the block structure unless specifically requested.
<Selection>
{block}
</Selection>
`;
const userDefault = `<Reminder>
CRITICAL: NEVER write <Block>.
</Reminder>
{prompt}`;
const userSelecting = `<Reminder>
If this is a question, provide a helpful and concise answer about <Selection>.
If this is an instruction, provide ONLY the text to replace <Selection>. No explanations.
Ensure it fits seamlessly within <Block>. If <Block> is empty, write ONE random sentence.
NEVER write <Block> or <Selection>.
</Reminder>
{prompt} about <Selection>`;
const userBlockSelecting = `<Reminder>
If this is a question, provide a helpful and concise answer about <Selection>.
If this is an instruction, provide ONLY the content to replace the entire <Selection>. No explanations.
Maintain the overall structure unless instructed otherwise.
NEVER write <Block> or <Selection>.
</Reminder>
{prompt} about <Selection>`;
export const PROMPT_TEMPLATES = {
systemBlockSelecting,
systemDefault,
systemSelecting,
userBlockSelecting,
userDefault,
userSelecting,
};
- AIMenu: renders the AI command interface
- AILoadingBar: shows the AI processing status
- AIAnchorElement: anchor element for positioning the AI menu
- AILeaf: renders AI-generated content with visual distinction
Add Kit
import { createPlateEditor } from 'platejs/react';
import { AIKit } from '@/components/editor/plugins/ai-kit';
const editor = createPlateEditor({
plugins: [
// ...other plugins,
...AIKit,
],
});
Add API Route
AI functionality requires a server-side API endpoint. Add the pre-configured AI command route:
import type { TextStreamPart, ToolSet } from 'ai';
import type { NextRequest } from 'next/server';
import { createOpenAI } from '@ai-sdk/openai';
import { InvalidArgumentError } from '@ai-sdk/provider';
import { delay as originalDelay } from '@ai-sdk/provider-utils';
import { convertToCoreMessages, streamText } from 'ai';
import { NextResponse } from 'next/server';
/**
* Detects the first chunk in a buffer.
*
* @param buffer - The buffer to detect the first chunk in.
* @returns The first detected chunk, or `undefined` if no chunk was detected.
*/
export type ChunkDetector = (buffer: string) => string | null | undefined;
type delayer = (buffer: string) => number;
/**
* Smooths text streaming output.
*
* @param delayInMs - The delay in milliseconds between each chunk. Defaults to
* 10ms. Can be set to `null` to skip the delay.
* @param chunking - Controls how the text is chunked for streaming. Use "word"
* to stream word by word (default), "line" to stream line by line, or provide
* a custom RegExp pattern for custom chunking.
* @returns A transform stream that smooths text streaming output.
*/
function smoothStream<TOOLS extends ToolSet>({
_internal: { delay = originalDelay } = {},
chunking = 'word',
delayInMs = 10,
}: {
/** Internal. For test use only. May change without notice. */
_internal?: {
delay?: (delayInMs: number | null) => Promise<void>;
};
chunking?: ChunkDetector | RegExp | 'line' | 'word';
delayInMs?: delayer | number | null;
} = {}): (options: {
tools: TOOLS;
}) => TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>> {
let detectChunk: ChunkDetector;
if (typeof chunking === 'function') {
detectChunk = (buffer) => {
const match = chunking(buffer);
if (match == null) {
return null;
}
if (match.length === 0) {
throw new Error(`Chunking function must return a non-empty string.`);
}
if (!buffer.startsWith(match)) {
throw new Error(
`Chunking function must return a match that is a prefix of the buffer. Received: "${match}" expected to start with "${buffer}"`
);
}
return match;
};
} else {
const chunkingRegex =
typeof chunking === 'string' ? CHUNKING_REGEXPS[chunking] : chunking;
if (chunkingRegex == null) {
throw new InvalidArgumentError({
argument: 'chunking',
message: `Chunking must be "word" or "line" or a RegExp. Received: ${chunking}`,
});
}
detectChunk = (buffer) => {
const match = chunkingRegex.exec(buffer);
if (!match) {
return null;
}
return buffer.slice(0, match.index) + match?.[0];
};
}
return () => {
let buffer = '';
return new TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>>({
async transform(chunk, controller) {
if (chunk.type !== 'text-delta') {
if (buffer.length > 0) {
controller.enqueue({ textDelta: buffer, type: 'text-delta' });
buffer = '';
}
controller.enqueue(chunk);
return;
}
buffer += chunk.textDelta;
let match;
while ((match = detectChunk(buffer)) != null) {
controller.enqueue({ textDelta: match, type: 'text-delta' });
buffer = buffer.slice(match.length);
const _delayInMs =
typeof delayInMs === 'number'
? delayInMs
: (delayInMs?.(buffer) ?? 10);
await delay(_delayInMs);
}
},
});
};
}
const CHUNKING_REGEXPS = {
line: /\n+/m,
list: /.{8}/m,
word: /\S+\s+/m,
};
export async function POST(req: NextRequest) {
const { apiKey: key, messages, system } = await req.json();
const apiKey = key || process.env.OPENAI_API_KEY;
if (!apiKey) {
return NextResponse.json(
{ error: 'Missing OpenAI API key.' },
{ status: 401 }
);
}
const openai = createOpenAI({ apiKey });
let isInCodeBlock = false;
let isInTable = false;
let isInList = false;
let isInLink = false;
try {
const result = streamText({
experimental_transform: smoothStream({
chunking: (buffer) => {
// Check for code block markers
if (/```[^\s]+/.test(buffer)) {
isInCodeBlock = true;
} else if (isInCodeBlock && buffer.includes('```')) {
isInCodeBlock = false;
}
// test case: should not deserialize link with markdown syntax
if (buffer.includes('http')) {
isInLink = true;
} else if (buffer.includes('\n') && isInLink) {
isInLink = false;
}
if (buffer.includes('*') || buffer.includes('-')) {
isInList = true;
} else if (buffer.includes('\n') && isInList) {
isInList = false;
}
// Simple table detection: enter on |, exit on double newline
if (!isInTable && buffer.includes('|')) {
isInTable = true;
} else if (isInTable && buffer.includes('\n\n')) {
isInTable = false;
}
// Use line chunking for code blocks and tables, word chunking otherwise
// Choose the appropriate chunking strategy based on content type
let match;
if (isInCodeBlock || isInTable || isInLink) {
// Use line chunking for code blocks and tables
match = CHUNKING_REGEXPS.line.exec(buffer);
} else if (isInList) {
// Use list chunking for lists
match = CHUNKING_REGEXPS.list.exec(buffer);
} else {
// Use word chunking for regular text
match = CHUNKING_REGEXPS.word.exec(buffer);
}
if (!match) {
return null;
}
return buffer.slice(0, match.index) + match?.[0];
},
delayInMs: () => (isInCodeBlock || isInTable ? 100 : 30),
}),
maxTokens: 2048,
messages: convertToCoreMessages(messages),
model: openai('gpt-4o'),
system: system,
});
return result.toDataStreamResponse();
} catch {
return NextResponse.json(
{ error: 'Failed to process AI request' },
{ status: 500 }
);
}
}
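To make the chunking behavior above concrete, here is a small standalone sketch of how the `word` and `line` regexes carve up a streaming buffer. The `nextChunk` helper is hypothetical; it mirrors the `detectChunk` logic inside `smoothStream`:

```typescript
// Mirror of the CHUNKING_REGEXPS used in the route above.
const CHUNKING_REGEXPS = {
  line: /\n+/m,
  word: /\S+\s+/m,
};

// Returns the next chunk to flush from the buffer, or null if no chunk is complete yet.
function nextChunk(buffer: string, regex: RegExp): string | null {
  const match = regex.exec(buffer);
  if (!match) return null;
  // Emit everything up to and including the match, exactly as detectChunk does.
  return buffer.slice(0, match.index) + match[0];
}

console.log(nextChunk('hello world and', CHUNKING_REGEXPS.word)); // "hello "
console.log(nextChunk('hello', CHUNKING_REGEXPS.word)); // null — waits for trailing whitespace
console.log(nextChunk('const a = 1;\nconst b', CHUNKING_REGEXPS.line)); // "const a = 1;\n"
```

Word chunking holds back a partial word until whitespace arrives, while line chunking holds back a whole line until a newline arrives, which is why code blocks and tables stream more smoothly line by line.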
Configure Environment
Ensure your OpenAI API key is set in your environment variables:
OPENAI_API_KEY="your-api-key"
Manual Usage
Installation
pnpm add @platejs/ai @platejs/selection @platejs/markdown @platejs/basic-nodes
Add Plugins
import { AIPlugin, AIChatPlugin } from '@platejs/ai/react';
import { createPlateEditor } from 'platejs/react';
import { MarkdownKit } from '@/components/editor/plugins/markdown-kit';
const editor = createPlateEditor({
plugins: [
// ...other plugins,
...MarkdownKit, // required for AI content processing
AIPlugin,
AIChatPlugin,
],
});
- MarkdownKit: required for processing AI responses with Markdown syntax and MDX support.
- AIPlugin: core plugin for AI content management and transforms.
- AIChatPlugin: handles the AI chat interface, streaming, and user interactions.
Configure Plugins
Create an extended aiChatPlugin with your base configuration:
import type { AIChatPluginConfig } from '@platejs/ai/react';
import type { UseChatOptions } from 'ai/react';
import { KEYS, PathApi } from 'platejs';
import { streamInsertChunk, withAIBatch } from '@platejs/ai';
import { AIChatPlugin, AIPlugin, useChatChunk } from '@platejs/ai/react';
import { usePluginOption } from 'platejs/react';
import { MarkdownKit } from '@/components/editor/plugins/markdown-kit';
import { AILoadingBar, AIMenu } from '@/components/ui/ai-menu';
import { AIAnchorElement, AILeaf } from '@/components/ui/ai-node';
export const aiChatPlugin = AIChatPlugin.extend({
options: {
chatOptions: {
api: '/api/ai/command',
body: {},
} as UseChatOptions,
},
render: {
afterContainer: AILoadingBar,
afterEditable: AIMenu,
node: AIAnchorElement,
},
shortcuts: { show: { keys: 'mod+j' } },
});
const plugins = [
// ...other plugins,
...MarkdownKit,
AIPlugin.withComponent(AILeaf),
aiChatPlugin,
];
- chatOptions: configuration for the Vercel AI SDK useChat hook.
- render: UI components for the AI interface.
- shortcuts: keyboard shortcuts (Cmd+J shows the AI menu).
Add Streaming with useHooks
The useChatChunk hook processes streaming AI responses in real time, handling content insertion and block management. It monitors the chat state and inserts incoming text chunks into the editor as they arrive:
export const aiChatPlugin = AIChatPlugin.extend({
// ...previous options
useHooks: ({ editor, getOption }) => {
const mode = usePluginOption(
{ key: KEYS.aiChat } as AIChatPluginConfig,
'mode'
);
useChatChunk({
onChunk: ({ chunk, isFirst, nodes }) => {
if (isFirst && mode === 'insert') {
editor.tf.withoutSaving(() => {
editor.tf.insertNodes(
{
children: [{ text: '' }],
type: KEYS.aiChat,
},
{
at: PathApi.next(editor.selection!.focus.path.slice(0, 1)),
}
);
});
editor.setOption(AIChatPlugin, 'streaming', true);
}
if (mode === 'insert' && nodes.length > 0) {
withAIBatch(
editor,
() => {
if (!getOption('streaming')) return;
editor.tf.withScrolling(() => {
streamInsertChunk(editor, chunk, {
textProps: {
ai: true,
},
});
});
},
{ split: isFirst }
);
}
},
onFinish: () => {
editor.setOption(AIChatPlugin, 'streaming', false);
editor.setOption(AIChatPlugin, '_blockChunks', '');
editor.setOption(AIChatPlugin, '_blockPath', null);
},
});
},
});
- onChunk: handles each streaming chunk, creating the AI node on the first chunk and inserting content in real time.
- onFinish: cleans up streaming state when the response completes.
- Uses withAIBatch and streamInsertChunk for optimized content insertion.
System Prompts
The system prompt defines the AI's role and behavior. You can customize systemTemplate in your extended plugin:
export const customAIChatPlugin = AIChatPlugin.extend({
options: {
systemTemplate: ({ isBlockSelecting, isSelecting }) => {
const customSystem = `You are a technical documentation assistant specializing in code and API documentation.
Rules:
- Provide accurate, well-structured technical content
- Use proper code formatting and syntax highlighting
- Include relevant examples and best practices
- Maintain a consistent documentation style
- CRITICAL: Do not remove or modify custom MDX tags unless explicitly requested.
- CRITICAL: Distinguish between instructions and questions.`;
return isBlockSelecting
? `${customSystem}
- <Selection> represents the full blocks of text the user has selected and wants to modify or ask about.
- Your response should be a direct replacement for the entire <Selection>.
- Maintain the overall structure and formatting of the selected blocks, unless explicitly instructed otherwise.
<Selection>
{block}
</Selection>`
: isSelecting
? `${customSystem}
- <Block> is the block of text containing the user's selection, providing context.
- <Selection> is the specific text the user has selected in the block and wants to modify or ask about.
- Consider the context provided by <Block>, but only modify <Selection>.
<Block>
{block}
</Block>
<Selection>
{selection}
</Selection>`
: `${customSystem}
- <Block> is the current block of text the user is working on.
<Block>
{block}
</Block>`;
},
// ...other options
},
}),
User Prompts
Customize the user prompt format and context in your extended plugin:
export const customAIChatPlugin = AIChatPlugin.extend({
options: {
promptTemplate: ({ isBlockSelecting, isSelecting }) => {
return isBlockSelecting
? `<Reminder>
If this is a question, provide a helpful and concise answer about <Selection>.
If this is an instruction, provide ONLY the content to replace the entire <Selection>. No explanations.
Analyze and improve the following content blocks, maintaining structure and clarity.
NEVER write <Block> or <Selection>.
</Reminder>
{prompt} about <Selection>`
: isSelecting
? `<Reminder>
If this is a question, provide a helpful and concise answer about <Selection>.
If this is an instruction, provide ONLY the text to replace <Selection>. No explanations.
Ensure it fits seamlessly within <Block>. If <Block> is empty, write ONE random sentence.
NEVER write <Block> or <Selection>.
</Reminder>
{prompt} about <Selection>`
: `<Reminder>
CRITICAL: NEVER write <Block>.
Continue or improve the content naturally.
</Reminder>
{prompt}`;
},
// ...other options
},
}),
Add API Route
Create a streaming API route handler optimized for different content types:
import type { TextStreamPart, ToolSet } from 'ai';
import type { NextRequest } from 'next/server';
import { createOpenAI } from '@ai-sdk/openai';
import { InvalidArgumentError } from '@ai-sdk/provider';
import { delay as originalDelay } from '@ai-sdk/provider-utils';
import { convertToCoreMessages, streamText } from 'ai';
import { NextResponse } from 'next/server';
const CHUNKING_REGEXPS = {
line: /\n+/m,
list: /.{8}/m,
word: /\S+\s+/m,
};
export async function POST(req: NextRequest) {
const { apiKey: key, messages, system } = await req.json();
const apiKey = key || process.env.OPENAI_API_KEY;
if (!apiKey) {
return NextResponse.json(
{ error: 'Missing OpenAI API key.' },
{ status: 401 }
);
}
const openai = createOpenAI({ apiKey });
let isInCodeBlock = false;
let isInTable = false;
let isInList = false;
let isInLink = false;
try {
const result = streamText({
experimental_transform: smoothStream({
chunking: (buffer) => {
// Detect the content type to optimize chunking
if (/```[^\s]+/.test(buffer)) {
isInCodeBlock = true;
} else if (isInCodeBlock && buffer.includes('```')) {
isInCodeBlock = false;
}
if (buffer.includes('http')) {
isInLink = true;
} else if (buffer.includes('\n') && isInLink) {
isInLink = false;
}
if (buffer.includes('*') || buffer.includes('-')) {
isInList = true;
} else if (buffer.includes('\n') && isInList) {
isInList = false;
}
if (!isInTable && buffer.includes('|')) {
isInTable = true;
} else if (isInTable && buffer.includes('\n\n')) {
isInTable = false;
}
// Choose the chunking strategy based on content type
let match;
if (isInCodeBlock || isInTable || isInLink) {
match = CHUNKING_REGEXPS.line.exec(buffer);
} else if (isInList) {
match = CHUNKING_REGEXPS.list.exec(buffer);
} else {
match = CHUNKING_REGEXPS.word.exec(buffer);
}
if (!match) return null;
return buffer.slice(0, match.index) + match?.[0];
},
delayInMs: () => (isInCodeBlock || isInTable ? 100 : 30),
}),
maxTokens: 2048,
messages: convertToCoreMessages(messages),
model: openai('gpt-4o'),
system: system,
});
return result.toDataStreamResponse();
} catch {
return NextResponse.json(
{ error: 'Failed to process AI request' },
{ status: 500 }
);
}
}
// Smooth-stream implementation for optimized chunking
type ChunkDetector = (buffer: string) => string | null | undefined;
type delayer = (buffer: string) => number;
function smoothStream<TOOLS extends ToolSet>({
_internal: { delay = originalDelay } = {},
chunking = 'word',
delayInMs = 10,
}: {
_internal?: {
delay?: (delayInMs: number | null) => Promise<void>;
};
chunking?: ChunkDetector | RegExp | 'line' | 'word';
delayInMs?: delayer | number | null;
} = {}): (options: {
tools: TOOLS;
}) => TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>> {
let detectChunk: ChunkDetector;
if (typeof chunking === 'function') {
detectChunk = (buffer) => {
const match = chunking(buffer);
if (match == null) return null;
if (match.length === 0) {
throw new Error(`Chunking function must return a non-empty string.`);
}
if (!buffer.startsWith(match)) {
throw new Error(
`Chunking function must return a match that is a prefix of the buffer.`
);
}
return match;
};
} else {
const chunkingRegex =
typeof chunking === 'string' ? CHUNKING_REGEXPS[chunking] : chunking;
if (chunkingRegex == null) {
throw new InvalidArgumentError({
argument: 'chunking',
message: `Chunking must be "word" or "line" or a RegExp. Received: ${chunking}`,
});
}
detectChunk = (buffer) => {
const match = chunkingRegex.exec(buffer);
if (!match) return null;
return buffer.slice(0, match.index) + match?.[0];
};
}
return () => {
let buffer = '';
return new TransformStream<TextStreamPart<TOOLS>, TextStreamPart<TOOLS>>({
async transform(chunk, controller) {
if (chunk.type !== 'text-delta') {
if (buffer.length > 0) {
controller.enqueue({ textDelta: buffer, type: 'text-delta' });
buffer = '';
}
controller.enqueue(chunk);
return;
}
buffer += chunk.textDelta;
let match;
while ((match = detectChunk(buffer)) != null) {
controller.enqueue({ textDelta: match, type: 'text-delta' });
buffer = buffer.slice(match.length);
const _delayInMs =
typeof delayInMs === 'number'
? delayInMs
: (delayInMs?.(buffer) ?? 10);
await delay(_delayInMs);
}
},
});
};
}
Then set your OPENAI_API_KEY in .env.local.
Add Toolbar Button
You can add AIToolbarButton to your toolbar to open the AI menu.
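A minimal sketch of wiring the button into a toolbar, assuming the AIToolbarButton component from the Plate UI registry and an icon from lucide-react (names and paths may differ in your setup):

```tsx
import { WandSparklesIcon } from 'lucide-react';
import { AIToolbarButton } from '@/components/ui/ai-toolbar-button';

export function MyToolbarAIButton() {
  return (
    <AIToolbarButton tooltip="AI commands">
      <WandSparklesIcon />
      Ask AI
    </AIToolbarButton>
  );
}
```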
Keyboard Shortcuts
| Key | Description |
|---|---|
| Space | Opens the AI menu in an empty block (cursor mode) |
| Cmd + J | Opens the AI menu (cursor or selection mode) |
| Escape | Closes the AI menu |
Customization
Add Custom AI Commands
'use client';
import * as React from 'react';
import {
AIChatPlugin,
AIPlugin,
useEditorChat,
useLastAssistantMessage,
} from '@platejs/ai/react';
import { BlockSelectionPlugin, useIsSelecting } from '@platejs/selection/react';
import { Command as CommandPrimitive } from 'cmdk';
import {
Album,
BadgeHelp,
BookOpenCheck,
Check,
CornerUpLeft,
FeatherIcon,
ListEnd,
ListMinus,
ListPlus,
Loader2Icon,
PauseIcon,
PenLine,
SmileIcon,
Wand,
X,
} from 'lucide-react';
import { type NodeEntry, type SlateEditor, isHotkey, NodeApi } from 'platejs';
import { useEditorPlugin, useHotkeys, usePluginOption } from 'platejs/react';
import { type PlateEditor, useEditorRef } from 'platejs/react';
import { Button } from '@/components/ui/button';
import {
Command,
CommandGroup,
CommandItem,
CommandList,
} from '@/components/ui/command';
import {
Popover,
PopoverAnchor,
PopoverContent,
} from '@/components/ui/popover';
import { cn } from '@/lib/utils';
import { useChat } from '@/components/editor/use-chat';
import { AIChatEditor } from './ai-chat-editor';
export function AIMenu() {
const { api, editor } = useEditorPlugin(AIChatPlugin);
const open = usePluginOption(AIChatPlugin, 'open');
const mode = usePluginOption(AIChatPlugin, 'mode');
const streaming = usePluginOption(AIChatPlugin, 'streaming');
const isSelecting = useIsSelecting();
const [value, setValue] = React.useState('');
const chat = useChat();
const { input, messages, setInput, status } = chat;
const [anchorElement, setAnchorElement] = React.useState<HTMLElement | null>(
null
);
const content = useLastAssistantMessage()?.content;
React.useEffect(() => {
if (streaming) {
const anchor = api.aiChat.node({ anchor: true });
setTimeout(() => {
const anchorDom = editor.api.toDOMNode(anchor![0])!;
setAnchorElement(anchorDom);
}, 0);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [streaming]);
const setOpen = (open: boolean) => {
if (open) {
api.aiChat.show();
} else {
api.aiChat.hide();
}
};
const show = (anchorElement: HTMLElement) => {
setAnchorElement(anchorElement);
setOpen(true);
};
useEditorChat({
chat,
onOpenBlockSelection: (blocks: NodeEntry[]) => {
show(editor.api.toDOMNode(blocks.at(-1)![0])!);
},
onOpenChange: (open) => {
if (!open) {
setAnchorElement(null);
setInput('');
}
},
onOpenCursor: () => {
const [ancestor] = editor.api.block({ highest: true })!;
if (!editor.api.isAt({ end: true }) && !editor.api.isEmpty(ancestor)) {
editor
.getApi(BlockSelectionPlugin)
.blockSelection.set(ancestor.id as string);
}
show(editor.api.toDOMNode(ancestor)!);
},
onOpenSelection: () => {
show(editor.api.toDOMNode(editor.api.blocks().at(-1)![0])!);
},
});
useHotkeys('esc', () => {
api.aiChat.stop();
// remove when you implement the route /api/ai/command
chat._abortFakeStream();
});
const isLoading = status === 'streaming' || status === 'submitted';
if (isLoading && mode === 'insert') {
return null;
}
return (
<Popover open={open} onOpenChange={setOpen} modal={false}>
<PopoverAnchor virtualRef={{ current: anchorElement! }} />
<PopoverContent
className="border-none bg-transparent p-0 shadow-none"
style={{
width: anchorElement?.offsetWidth,
}}
onEscapeKeyDown={(e) => {
e.preventDefault();
api.aiChat.hide();
}}
align="center"
side="bottom"
>
<Command
className="w-full rounded-lg border shadow-md"
value={value}
onValueChange={setValue}
>
{mode === 'chat' && isSelecting && content && (
<AIChatEditor content={content} />
)}
{isLoading ? (
<div className="flex grow items-center gap-2 p-2 text-sm text-muted-foreground select-none">
<Loader2Icon className="size-4 animate-spin" />
{messages.length > 1 ? 'Editing...' : 'Thinking...'}
</div>
) : (
<CommandPrimitive.Input
className={cn(
'flex h-9 w-full min-w-0 border-input bg-transparent px-3 py-1 text-base transition-[color,box-shadow] outline-none placeholder:text-muted-foreground md:text-sm dark:bg-input/30',
'aria-invalid:border-destructive aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40',
'border-b focus-visible:ring-transparent'
)}
value={input}
onKeyDown={(e) => {
if (isHotkey('backspace')(e) && input.length === 0) {
e.preventDefault();
api.aiChat.hide();
}
if (isHotkey('enter')(e) && !e.shiftKey && !value) {
e.preventDefault();
void api.aiChat.submit();
}
}}
onValueChange={setInput}
placeholder="Ask AI anything..."
data-plate-focus
autoFocus
/>
)}
{!isLoading && (
<CommandList>
<AIMenuItems setValue={setValue} />
</CommandList>
)}
</Command>
</PopoverContent>
</Popover>
);
}
type EditorChatState =
| 'cursorCommand'
| 'cursorSuggestion'
| 'selectionCommand'
| 'selectionSuggestion';
const aiChatItems = {
accept: {
icon: <Check />,
label: 'Accept',
value: 'accept',
onSelect: ({ editor }) => {
editor.getTransforms(AIChatPlugin).aiChat.accept();
editor.tf.focus({ edge: 'end' });
},
},
continueWrite: {
icon: <PenLine />,
label: 'Continue writing',
value: 'continueWrite',
onSelect: ({ editor }) => {
const ancestorNode = editor.api.block({ highest: true });
if (!ancestorNode) return;
const isEmpty = NodeApi.string(ancestorNode[0]).trim().length === 0;
void editor.getApi(AIChatPlugin).aiChat.submit({
mode: 'insert',
prompt: isEmpty
? `<Document>
{editor}
</Document>
Start writing a new paragraph AFTER <Document> ONLY ONE SENTENCE`
: 'Continue writing AFTER <Block> ONLY ONE SENTENCE. DONT REPEAT THE TEXT.',
});
},
},
discard: {
icon: <X />,
label: 'Discard',
shortcut: 'Escape',
value: 'discard',
onSelect: ({ editor }) => {
editor.getTransforms(AIPlugin).ai.undo();
editor.getApi(AIChatPlugin).aiChat.hide();
},
},
emojify: {
icon: <SmileIcon />,
label: 'Emojify',
value: 'emojify',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Emojify',
});
},
},
explain: {
icon: <BadgeHelp />,
label: 'Explain',
value: 'explain',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: {
default: 'Explain {editor}',
selecting: 'Explain',
},
});
},
},
fixSpelling: {
icon: <Check />,
label: 'Fix spelling & grammar',
value: 'fixSpelling',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Fix spelling and grammar',
});
},
},
generateMarkdownSample: {
icon: <BookOpenCheck />,
label: 'Generate Markdown sample',
value: 'generateMarkdownSample',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Generate a markdown sample',
});
},
},
generateMdxSample: {
icon: <BookOpenCheck />,
label: 'Generate MDX sample',
value: 'generateMdxSample',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Generate a mdx sample',
});
},
},
improveWriting: {
icon: <Wand />,
label: 'Improve writing',
value: 'improveWriting',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Improve the writing',
});
},
},
insertBelow: {
icon: <ListEnd />,
label: 'Insert below',
value: 'insertBelow',
onSelect: ({ aiEditor, editor }) => {
void editor.getTransforms(AIChatPlugin).aiChat.insertBelow(aiEditor);
},
},
makeLonger: {
icon: <ListPlus />,
label: 'Make longer',
value: 'makeLonger',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Make longer',
});
},
},
makeShorter: {
icon: <ListMinus />,
label: 'Make shorter',
value: 'makeShorter',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Make shorter',
});
},
},
replace: {
icon: <Check />,
label: 'Replace selection',
value: 'replace',
onSelect: ({ aiEditor, editor }) => {
void editor.getTransforms(AIChatPlugin).aiChat.replaceSelection(aiEditor);
},
},
simplifyLanguage: {
icon: <FeatherIcon />,
label: 'Simplify language',
value: 'simplifyLanguage',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
prompt: 'Simplify the language',
});
},
},
summarize: {
icon: <Album />,
label: 'Add a summary',
value: 'summarize',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.submit({
mode: 'insert',
prompt: {
default: 'Summarize {editor}',
selecting: 'Summarize',
},
});
},
},
tryAgain: {
icon: <CornerUpLeft />,
label: 'Try again',
value: 'tryAgain',
onSelect: ({ editor }) => {
void editor.getApi(AIChatPlugin).aiChat.reload();
},
},
} satisfies Record<
string,
{
icon: React.ReactNode;
label: string;
value: string;
component?: React.ComponentType<{ menuState: EditorChatState }>;
filterItems?: boolean;
items?: { label: string; value: string }[];
shortcut?: string;
onSelect?: ({
aiEditor,
editor,
}: {
aiEditor: SlateEditor;
editor: PlateEditor;
}) => void;
}
>;
const menuStateItems: Record<
EditorChatState,
{
items: (typeof aiChatItems)[keyof typeof aiChatItems][];
heading?: string;
}[]
> = {
cursorCommand: [
{
items: [
aiChatItems.generateMdxSample,
aiChatItems.generateMarkdownSample,
aiChatItems.continueWrite,
aiChatItems.summarize,
aiChatItems.explain,
],
},
],
cursorSuggestion: [
{
items: [aiChatItems.accept, aiChatItems.discard, aiChatItems.tryAgain],
},
],
selectionCommand: [
{
items: [
aiChatItems.improveWriting,
aiChatItems.emojify,
aiChatItems.makeLonger,
aiChatItems.makeShorter,
aiChatItems.fixSpelling,
aiChatItems.simplifyLanguage,
],
},
],
selectionSuggestion: [
{
items: [
aiChatItems.replace,
aiChatItems.insertBelow,
aiChatItems.discard,
aiChatItems.tryAgain,
],
},
],
};
export const AIMenuItems = ({
setValue,
}: {
setValue: (value: string) => void;
}) => {
const editor = useEditorRef();
const { messages } = usePluginOption(AIChatPlugin, 'chat');
const aiEditor = usePluginOption(AIChatPlugin, 'aiEditor')!;
const isSelecting = useIsSelecting();
const menuState = React.useMemo(() => {
if (messages && messages.length > 0) {
return isSelecting ? 'selectionSuggestion' : 'cursorSuggestion';
}
return isSelecting ? 'selectionCommand' : 'cursorCommand';
}, [isSelecting, messages]);
const menuGroups = React.useMemo(() => {
const items = menuStateItems[menuState];
return items;
}, [menuState]);
React.useEffect(() => {
if (menuGroups.length > 0 && menuGroups[0].items.length > 0) {
setValue(menuGroups[0].items[0].value);
}
}, [menuGroups, setValue]);
return (
<>
{menuGroups.map((group, index) => (
<CommandGroup key={index} heading={group.heading}>
{group.items.map((menuItem) => (
<CommandItem
key={menuItem.value}
className="[&_svg]:text-muted-foreground"
value={menuItem.value}
onSelect={() => {
menuItem.onSelect?.({
aiEditor,
editor: editor,
});
}}
>
{menuItem.icon}
<span>{menuItem.label}</span>
</CommandItem>
))}
</CommandGroup>
))}
</>
);
};
export function AILoadingBar() {
  const chat = usePluginOption(AIChatPlugin, 'chat');
  const mode = usePluginOption(AIChatPlugin, 'mode');
  const { status } = chat;
  const { api } = useEditorPlugin(AIChatPlugin);
  const isLoading = status === 'streaming' || status === 'submitted';
  const visible = isLoading && mode === 'insert';
  if (!visible) return null;
  return (
    <div
      className={cn(
        'absolute bottom-4 left-1/2 z-10 flex -translate-x-1/2 items-center gap-3 rounded-md border border-border bg-muted px-3 py-1.5 text-sm text-muted-foreground shadow-md transition-all duration-300'
      )}
    >
      <span className="h-4 w-4 animate-spin rounded-full border-2 border-muted-foreground border-t-transparent" />
      <span>{status === 'submitted' ? 'Thinking...' : 'Writing...'}</span>
      <Button
        size="sm"
        variant="ghost"
        className="flex items-center gap-1 text-xs"
        onClick={() => api.aiChat.stop()}
      >
        <PauseIcon className="h-4 w-4" />
        Stop
        <kbd className="ml-1 rounded bg-border px-1 font-mono text-[10px] text-muted-foreground shadow-sm">
          Esc
        </kbd>
      </Button>
    </div>
  );
}
You can extend the AI menu by adding new items to the aiChatItems object and updating the menu state items.
Simple Custom Commands
Add a basic command that submits a custom prompt:
// Add to the aiChatItems object in your ai-menu.tsx
summarizeInBullets: {
  icon: <ListIcon />,
  label: 'Summarize in bullet points',
  value: 'summarizeInBullets',
  onSelect: ({ editor }) => {
    void editor.getApi(AIChatPlugin).aiChat.submit({
      prompt: 'Summarize this content as bullet points',
    });
  },
},
Complex Logic Commands
Create commands that run client-side logic before submitting:
generateTOC: {
  icon: <BookIcon />,
  label: 'Generate table of contents',
  value: 'generateTOC',
  onSelect: ({ editor }) => {
    // Check whether the document already has headings.
    // editor.api.nodes returns an iterable of node entries,
    // so materialize it before checking length.
    const headings = Array.from(
      editor.api.nodes({
        match: (n) => ['h1', 'h2', 'h3'].includes(n.type as string),
      })
    );
    if (headings.length === 0) {
      void editor.getApi(AIChatPlugin).aiChat.submit({
        mode: 'insert',
        prompt: 'Create a table of contents with sample headings for this document',
      });
    } else {
      void editor.getApi(AIChatPlugin).aiChat.submit({
        mode: 'insert',
        prompt: 'Generate a table of contents based on the existing headings',
      });
    }
  },
},
Understanding Menu States
The AI menu adapts to different contexts based on the user's selection and the AI response state:
const menuState = React.useMemo(() => {
  // If the AI has already responded, show suggestion actions
  if (messages && messages.length > 0) {
    return isSelecting ? 'selectionSuggestion' : 'cursorSuggestion';
  }
  // If there is no AI response yet, show command actions
  return isSelecting ? 'selectionCommand' : 'cursorCommand';
}, [isSelecting, messages]);
Menu states:
- cursorCommand: No selection, no AI response → shows generation commands (Continue writing, Add a summary, etc.)
- selectionCommand: Text selected, no AI response → shows editing commands (Improve writing, Fix spelling, etc.)
- cursorSuggestion: No selection, AI has responded → shows suggestion actions (Accept, Discard, Try again)
- selectionSuggestion: Text selected, AI has responded → shows replacement actions (Replace selection, Insert below, etc.)
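The state selection boils down to a pure function of two inputs. A minimal sketch (the helper name getMenuState is illustrative, not part of the Plate API):

```typescript
// Illustrative pure helper mirroring the useMemo logic above.
type EditorChatState =
  | 'cursorCommand'
  | 'selectionCommand'
  | 'cursorSuggestion'
  | 'selectionSuggestion';

function getMenuState(isSelecting: boolean, messageCount: number): EditorChatState {
  // Once the AI has responded, switch from command items to suggestion items.
  if (messageCount > 0) {
    return isSelecting ? 'selectionSuggestion' : 'cursorSuggestion';
  }
  return isSelecting ? 'selectionCommand' : 'cursorCommand';
}
```

Keeping this logic pure makes it trivial to unit-test each of the four combinations.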
Updating Menu States
Add your custom commands to the appropriate menu states in menuStateItems:
const menuStateItems: Record<EditorChatState, { items: any[] }[]> = {
  cursorCommand: [
    {
      items: [
        aiChatItems.generateTOC,
        aiChatItems.summarizeInBullets,
        // ... existing items
      ],
    },
  ],
  selectionCommand: [
    {
      items: [
        aiChatItems.summarizeInBullets, // also works on selected text
        // ... existing items
      ],
    },
  ],
  // ... other states
};
Switching AI Models
Configure different AI models and providers in your API route:
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';
import type { NextRequest } from 'next/server';

export async function POST(req: NextRequest) {
  const { model = 'gpt-4o', provider = 'openai', ...rest } = await req.json();
  let aiProvider;
  switch (provider) {
    case 'anthropic':
      aiProvider = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
      break;
    case 'openai':
    default:
      aiProvider = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
      break;
  }
  const result = streamText({
    model: aiProvider(model),
    // ... other options
  });
  return result.toDataStreamResponse();
}
Configure the model in aiChatPlugin:
export const aiChatPlugin = AIChatPlugin.extend({
  options: {
    chatOptions: {
      api: '/api/ai/command',
      body: {
        model: 'gpt-4o-mini', // or 'claude-4-sonnet'
        provider: 'openai', // or 'anthropic'
      },
    },
    // ... other options
  },
});
See the Vercel AI SDK documentation for more AI providers and models.
Custom Streaming Optimization
Optimize streaming performance for specific content types with a custom chunking strategy:
const customChunking = (buffer: string) => {
  // Detect JSON content and emit whole objects at a time
  if (buffer.includes('{') && buffer.includes('}')) {
    const jsonMatch = /\{[^}]*\}/.exec(buffer);
    if (jsonMatch) {
      return buffer.slice(0, jsonMatch.index + jsonMatch[0].length);
    }
  }
  // Detect code blocks and chunk line by line
  if (buffer.includes('```')) {
    const lineMatch = /\n+/m.exec(buffer);
    return lineMatch ? buffer.slice(0, lineMatch.index + lineMatch[0].length) : null;
  }
  // Default: word-by-word chunking
  const wordMatch = /\S+\s+/m.exec(buffer);
  return wordMatch ? buffer.slice(0, wordMatch.index + wordMatch[0].length) : null;
};

// Use it in the streamText configuration
const result = streamText({
  experimental_transform: smoothStream({
    chunking: customChunking,
    delayInMs: (buffer) => {
      // Slower for complex content, faster for plain text
      return buffer.includes('```') || buffer.includes('{') ? 80 : 20;
    },
  }),
  // ... other options
});
Security Considerations
Security best practices for implementing AI features:
import { type NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const { messages, system } = await req.json();
  // Validate the request structure
  if (!messages || !Array.isArray(messages)) {
    return NextResponse.json({ error: 'Invalid messages' }, { status: 400 });
  }
  // Validate content length
  const totalContent = messages.map((m) => m.content).join('');
  if (totalContent.length > 50000) {
    return NextResponse.json({ error: 'Content too long' }, { status: 413 });
  }
  // Rate limiting (implement with your preferred solution)
  // await rateLimit(req);
  // Content filtering (optional)
  // const filteredMessages = await filterContent(messages);
  // Handle the AI request...
}
Security guidelines:
- Validate input: always validate and sanitize user prompts
- Rate limiting: implement rate limits on AI endpoints
- Content filtering: consider filtering AI responses
- API key security: never expose API keys on the client
- User privacy: be mindful of what data is sent to AI models
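The structure and length checks above can be factored into a small, framework-agnostic validator. This is a sketch under the assumption of a plain JSON body; the names validateAIRequest and MAX_CONTENT_LENGTH are illustrative, not part of any Plate or AI SDK API:

```typescript
// Illustrative request validator; returns a status/error pair instead of a
// Response object so it can be unit-tested without a web framework.
interface ChatMessage {
  role: string;
  content: string;
}

const MAX_CONTENT_LENGTH = 50_000;

type ValidationResult =
  | { ok: true; messages: ChatMessage[] }
  | { ok: false; status: number; error: string };

function validateAIRequest(body: unknown): ValidationResult {
  const messages = (body as { messages?: unknown } | null)?.messages;
  // Reject malformed request bodies outright.
  if (!Array.isArray(messages)) {
    return { ok: false, status: 400, error: 'Invalid messages' };
  }
  // Cap total content length to bound token usage and cost.
  const totalLength = messages.reduce(
    (sum: number, m) => sum + String((m as ChatMessage)?.content ?? '').length,
    0
  );
  if (totalLength > MAX_CONTENT_LENGTH) {
    return { ok: false, status: 413, error: 'Content too long' };
  }
  return { ok: true, messages: messages as ChatMessage[] };
}
```

In a route handler you would map a failed result to `NextResponse.json({ error }, { status })` before ever touching the AI provider.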
Plugins
AIPlugin
Core plugin that extends the editor with AI content management.
AIChatPlugin
Main plugin that powers AI chat operations, streaming, and UI interaction.
AIChatPlugin options:
- chatOptions: Configuration options for the Vercel AI SDK useChat hook.
  - api: API endpoint for AI requests
  - body: additional request body parameters
- mode: Specifies how assistant messages are handled:
  - 'chat': shows a preview with accept/reject options
  - 'insert': inserts content directly into the editor
  - Default: 'chat'
- open: Whether the AI chat interface is open. Default: false
- streaming: Whether an AI response is currently streaming. Default: false
- promptTemplate: Template for generating the user prompt. Supports placeholders:
  - {block}: Markdown of the blocks in the selection
  - {editor}: Markdown of the entire editor content
  - {selection}: Markdown of the current selection
  - {prompt}: the actual user prompt
  - Default: '{prompt}'
- systemTemplate: Template for the system message. Supports the same placeholders as promptTemplate. Default: null
- aiEditor: Editor instance used to generate the AI response.
- chat: Chat helpers returned by the useChat hook.
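As an illustration of promptTemplate, here is a hedged sketch that picks a template string based on selection state. The plugin substitutes the {selection}/{block}/{prompt} placeholders at submit time; the parameter shape shown here is an assumption for the example:

```typescript
// Illustrative promptTemplate: chooses a template string per selection state.
// Placeholder substitution ({selection}, {block}, {prompt}) is done by the plugin.
const promptTemplate = ({
  isBlockSelecting,
  isSelecting,
}: {
  isBlockSelecting: boolean;
  isSelecting: boolean;
}): string => {
  if (isBlockSelecting) return 'Blocks:\n{block}\nRequest: {prompt}';
  if (isSelecting) return 'Selection:\n{selection}\nRequest: {prompt}';
  return '{prompt}';
};
```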
API
api.aiChat.accept()
Accepts the current AI suggestion:
- Removes AI marks from the content
- Hides the AI chat interface
- Focuses the editor
api.aiChat.insertBelow()
Inserts AI-generated content below the current block.
Handles both block selection and normal selection modes:
- Block selection mode: inserts after the last selected block, applying that block's formatting
- Normal selection mode: inserts after the current block, applying the current block's formatting
api.aiChat.replaceSelection()
Replaces the current selection with AI-generated content.
Handles different selection modes:
- Single block selection: replaces the selected block, applying its formatting to the inserted content depending on the format option
  - With format: 'none' or 'single': preserves the original formatting
  - With format: 'all': applies the first block's formatting to all content
- Multiple block selection: replaces all selected blocks
- Normal selection: replaces the current selection while preserving the surrounding context
api.aiChat.reset()
Resets the chat state:
- Stops any generation in progress
- Clears the chat messages
- Removes all AI nodes from the editor
api.aiChat.node()
Gets the AI chat node entry.
api.aiChat.reload()
Reloads the current AI chat:
- In insert mode: undoes the previous AI changes
- Reloads the chat with the current system prompt
api.aiChat.show()
Shows the AI chat interface:
- Resets the chat state
- Clears the messages
- Sets the open state to true
api.aiChat.hide()
Hides the AI chat interface:
- Resets the chat state
- Sets the open state to false
- Focuses the editor
- Removes the AI anchor
api.aiChat.stop()
Stops the current AI generation:
- Sets the streaming state to false
- Calls the chat stop function
api.aiChat.submit()
Submits a prompt to generate AI content.
Transforms
tf.aiChat.removeAnchor()
Removes the AI chat anchor node from the editor.
tf.aiChat.accept()
Accepts the current AI suggestion and integrates it into the editor content.
tf.aiChat.insertBelow()
Transform that inserts AI content below the current block.
tf.aiChat.replaceSelection()
Transform that replaces the current selection with AI content.
tf.ai.insertNodes()
Inserts AI-generated nodes with the AI mark.
tf.ai.removeMarks()
Removes AI marks from nodes at the specified location.
tf.ai.removeNodes()
Removes nodes that carry the AI mark.
tf.ai.undo()
Special undo operation for AI changes:
- Undoes the last operation if it was AI-generated
- Removes the redo stack entry to prevent redoing the AI operation
Hooks
useAIChatEditor
A hook that registers an editor in the AI chat plugin and deserializes markdown content with block-level memoization.
const AIChatEditor = ({ content }: { content: string }) => {
  const aiEditor = usePlateEditor({
    plugins: [
      // Your editor plugins
      MarkdownPlugin,
      AIPlugin,
      AIChatPlugin,
      // etc...
    ],
  });
  useAIChatEditor(aiEditor, content, {
    // Optional markdown parser options
    parser: {
      exclude: ['space'],
    },
  });
  return <Editor editor={aiEditor} />;
};
