Contribute to the documentation (#630)

This commit is contained in:
parent 392fc5ade5
commit a427c84c1c

@@ -1,3 +1,4 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
@@ -13,30 +14,27 @@ specific language governing permissions and limitations under the License.

rendered properly in your Markdown viewer.

-->
# Agents

<Tip warning={true}>
Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.

</Tip>
To learn more about agents and tools make sure to read the [introductory guide](../index). This page
contains the API docs for the underlying classes.

## Agents
Our agents inherit from [`MultiStepAgent`], which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution. Read more in [this conceptual guide](../conceptual_guides/react).

We provide two types of agents, based on the main [`Agent`] class:
- [`CodeAgent`] is the default agent; it writes its tool calls in Python code.
- [`ToolCallingAgent`] writes its tool calls in JSON.

Both require the arguments `model` and a list of tools `tools` at initialization.
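The multi-step loop described above can be sketched in plain Python. This is an illustrative stand-in, not the smolagents implementation: the model and the tools here are hypothetical stubs, and a real agent's model would be an LLM.

```python
# Illustrative sketch of a multi-step (ReAct-style) agent loop; NOT the
# smolagents implementation -- the model and tools below are stand-ins.

def run_agent(model, tools, task, max_steps=5):
    """Run thought -> tool call -> execution steps until a final answer."""
    memory = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model(memory)  # one thought plus one tool call per step
        if step["tool"] == "final_answer":
            return step["arguments"]
        observation = tools[step["tool"]](step["arguments"])
        memory.append({"role": "assistant", "content": str(observation)})
    raise RuntimeError("max steps reached")

# Stand-in model: calls the calculator once, then returns a final answer.
def fake_model(memory):
    if len(memory) == 1:
        return {"tool": "calculator", "arguments": "2+2"}
    return {"tool": "final_answer", "arguments": memory[-1]["content"]}

result = run_agent(fake_model, {"calculator": lambda expr: eval(expr)}, "What is 2+2?")
print(result)  # "4"
```

Each iteration appends the tool's observation back into memory, which is what lets the next step condition on earlier results.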
### Classes of agents

[[autodoc]] MultiStepAgent
@@ -44,10 +42,9 @@ Both require arguments `model` and list of tools `tools` at initialization.

[[autodoc]] ToolCallingAgent
### ManagedAgent

_This class is deprecated since 1.8.0: now you just need to pass `name` and `description` attributes to a regular agent to make it callable by a manager agent, as was previously done with a ManagedAgent._
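The managed-agent pattern that this note describes (a sub-agent exposed to a manager through its `name` and `description`) can be sketched without the library. All names below are hypothetical, not the smolagents API:

```python
# Plain-Python sketch of the managed-agent pattern; hypothetical names,
# NOT the smolagents API.

class SubAgent:
    def __init__(self, name, description, run):
        self.name = name                # how the manager refers to this agent
        self.description = description  # tells the manager when to call it
        self.run = run                  # the callable that solves a sub-task

sub = SubAgent("search_agent", "Looks up facts.", lambda task: f"result for {task!r}")
managed = {a.name: a for a in [sub]}   # the manager's registry of sub-agents
print(managed["search_agent"].run("population of France"))
```

The deprecation simply moves `name` and `description` onto the agent itself instead of a separate wrapper class.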
### stream_to_gradio

@@ -56,99 +53,11 @@ _This class is deprecated since 1.8.0: now you just need to pass name and descri
### GradioUI

> [!TIP]
> You must have `gradio` installed to use the UI. Please run `pip install smolagents[gradio]` if it's not the case.

[[autodoc]] GradioUI
## Models
You're free to create and use your own models to power your agent.

You can use any `model` callable for your agent, as long as:
1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
2. It stops generating outputs *before* the sequences passed in the argument `stop_sequences`.

To define your LLM, you can write a `custom_model` callable which accepts a list of [messages](./chat_templating) and returns text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating.
```python
from huggingface_hub import login, InferenceClient

login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")

model_id = "meta-llama/Llama-3.3-70B-Instruct"

client = InferenceClient(model=model_id)

def custom_model(messages, stop_sequences=["Task"]) -> str:
    response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)
    answer = response.choices[0].message.content
    return answer
```
Additionally, `custom_model` can take a `grammar` argument. If you specify a `grammar` upon agent initialization, this argument will be passed along to calls to the model, with the `grammar` you defined, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs.
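The idea behind the `grammar` argument can be illustrated with a small self-contained sketch. Note that real constrained generation happens inside the inference server (e.g. TGI guidance), not by post-hoc validation as below; `constrained_model` is a hypothetical stand-in:

```python
import re

# Illustration only: a stand-in model that treats `grammar` as a regex the
# output must match. Real constrained generation is enforced server-side.

def constrained_model(messages, stop_sequences=None, grammar=None):
    # Stand-in "generation": echo a fixed JSON-shaped tool call.
    output = '{"tool": "final_answer", "arguments": "done"}'
    if grammar is not None and not re.fullmatch(grammar, output):
        raise ValueError("output does not match the required grammar")
    return output

print(constrained_model([], grammar=r"\{.*\}"))  # JSON-shaped output passes the check
```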
### TransformersModel

For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.
```python
from smolagents import TransformersModel

model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
```
```text
>>> What a
```
> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case.

[[autodoc]] TransformersModel
### HfApiModel

The `HfApiModel` wraps an [HF Inference API](https://huggingface.co/docs/api-inference/index) client for the execution of the LLM.
```python
from smolagents import HfApiModel

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "No need to help, take it easy."},
]

model = HfApiModel()
print(model(messages))
```
```text
>>> Of course! If you change your mind, feel free to reach out. Take care!
```
[[autodoc]] HfApiModel
### LiteLLMModel

The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers.
You can pass kwargs upon model initialization that will then be used whenever the model is called; for instance, below we pass `temperature`.
```python
from smolagents import LiteLLMModel

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "No need to help, take it easy."},
]

model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10)
print(model(messages))
```

[[autodoc]] LiteLLMModel
## Prompts

[[autodoc]] smolagents.agents.PromptTemplates
@@ -0,0 +1,166 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Models
<Tip warning={true}>

Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>
To learn more about agents and tools, make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes.
## Models
You're free to create and use your own models to power your agent.

You can use any `model` callable for your agent, as long as:
1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
2. It stops generating outputs *before* the sequences passed in the argument `stop_sequences`.

To define your LLM, you can write a `custom_model` callable which accepts a list of [messages](./chat_templating) and returns an object with a `.content` attribute containing the generated text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating.
```python
from huggingface_hub import login, InferenceClient

login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")

model_id = "meta-llama/Llama-3.3-70B-Instruct"

client = InferenceClient(model=model_id)

def custom_model(messages, stop_sequences=["Task"]):
    response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)
    answer = response.choices[0].message
    return answer
```
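The stop-sequence contract from point 2 above can be made concrete with a small self-contained helper. This is an illustration only, not part of smolagents: it truncates already-generated text before the first stop sequence, whereas a real model stops generating at that point.

```python
# Illustration of the stop_sequences contract: the output is cut *before*
# the first stop sequence that appears in the generated text.

def apply_stop_sequences(text: str, stop_sequences: list[str]) -> str:
    """Return text truncated before the earliest stop sequence, if any."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(apply_stop_sequences("Thought: done. Task solved.", ["Task", "Observation"]))
# prints "Thought: done. "
```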
Additionally, `custom_model` can take a `grammar` argument. If you specify a `grammar` upon agent initialization, this argument will be passed along to calls to the model, with the `grammar` you defined, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs.
### TransformersModel

For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.
```python
from smolagents import TransformersModel

model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": [{"type": "text", "text": "Ok!"}]}], stop_sequences=["great"]))
```
```text
>>> What a
```
> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case.

[[autodoc]] TransformersModel
### HfApiModel

The `HfApiModel` wraps huggingface_hub's [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference) for the execution of the LLM. It supports HF's [Inference API](https://huggingface.co/docs/api-inference/index) as well as all [Inference Providers](https://huggingface.co/blog/inference-providers) available on the Hub.
```python
from smolagents import HfApiModel

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = HfApiModel()
print(model(messages))
```
```text
>>> Of course! If you change your mind, feel free to reach out. Take care!
```
[[autodoc]] HfApiModel
### LiteLLMModel

The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers. You can pass kwargs upon model initialization that will then be used whenever the model is called; for instance, below we pass `temperature`.
```python
from smolagents import LiteLLMModel

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10)
print(model(messages))
```

[[autodoc]] LiteLLMModel
### OpenAIServerModel

This class lets you call any OpenAI-compatible model.
Here is how you can set it up (you can customize the `api_base` URL to point to a different server):
```py
import os

from smolagents import OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-4o",
    api_base="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)
```
[[autodoc]] OpenAIServerModel
### AzureOpenAIServerModel

`AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment.

Below is an example setup. Note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments if you have set the corresponding environment variables: `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.

Note that `OPENAI_API_VERSION` has no `AZURE_` prefix; this is due to the design of the underlying [openai](https://github.com/openai/openai-python) package.
```py
import os

from smolagents import AzureOpenAIServerModel

model = AzureOpenAIServerModel(
    model_id=os.environ.get("AZURE_OPENAI_MODEL"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    api_version=os.environ.get("OPENAI_API_VERSION"),
)
```
[[autodoc]] AzureOpenAIServerModel
### MLXModel
```python
from smolagents import MLXModel

model = MLXModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
```
```text
>>> What a
```

> [!TIP]
> You must have `mlx-lm` installed on your machine. Please run `pip install smolagents[mlx-lm]` if it's not the case.

[[autodoc]] MLXModel
@@ -1,3 +1,4 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
@@ -13,19 +14,17 @@ specific language governing permissions and limitations under the License.

rendered properly in your Markdown viewer.

-->
# Tools

<Tip warning={true}>
Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.

</Tip>
To learn more about agents and tools make sure to read the [introductory guide](../index). This page
contains the API docs for the underlying classes.
## Tools

### load_tool
@@ -43,40 +42,51 @@ contains the API docs for the underlying classes.

[[autodoc]] launch_gradio_demo
## Default tools

### PythonInterpreterTool

[[autodoc]] PythonInterpreterTool
### FinalAnswerTool

[[autodoc]] FinalAnswerTool

### UserInputTool

[[autodoc]] UserInputTool
### DuckDuckGoSearchTool

[[autodoc]] DuckDuckGoSearchTool

### GoogleSearchTool

[[autodoc]] GoogleSearchTool
### VisitWebpageTool

[[autodoc]] VisitWebpageTool
### SpeechToTextTool

[[autodoc]] SpeechToTextTool

## ToolCollection

[[autodoc]] ToolCollection
## Agent Types

Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return
text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to
correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes
around these types.
The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image
object should still behave as a `PIL.Image`.
These types have three specific purposes:

- Calling `to_raw` on the type should return the underlying object
- Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText`
  but will be the path of the serialized version of the object in other instances
- Displaying it in an ipython kernel should display the object correctly
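The `to_raw`/`to_string` contract above can be illustrated with a simplified sketch. This is not the actual smolagents class, just a demonstration that a text wrapper can subclass `str` and therefore keep behaving as one:

```python
# Simplified sketch of an agent text type; NOT the actual smolagents classes.
# Subclassing str means the wrapper keeps behaving like a plain string.

class SketchAgentText(str):
    """Text returned by an agent: behaves as a plain string."""

    def to_raw(self):
        # The underlying object is the string itself.
        return str(self)

    def to_string(self):
        # For text, the string form is the object itself; other types
        # (images, audio) would instead return a path to a serialized file.
        return str(self)

text = SketchAgentText("final answer")
print(text.upper())   # still behaves like a str: "FINAL ANSWER"
print(text.to_raw())  # "final answer"
```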
### AgentText