Fix minor docs (#173)

duydl 2025-01-13 22:31:36 +07:00 committed by GitHub
parent a0b4350409
commit 67ee777370
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
5 changed files with 6 additions and 6 deletions


@@ -59,7 +59,7 @@ Agents are useful when you need an LLM to determine the workflow of an app. But
If the pre-determined workflow falls short too often, that means you need more flexibility.
Let's take an example: say you're making an app that handles customer requests on a surfing trip website.
-You could know in advance that the requests will can belong to either of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases.
+You could know in advance that the requests will belong to either of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases.
1. Want some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base
2. Wants to talk to sales? ⇒ let them type in a contact form.
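For context, the pre-determined two-bucket workflow this passage describes could look roughly like the sketch below. It is purely illustrative, not code from the diffed docs, and every name in it is hypothetical.

```py
def search_knowledge_base(query: str) -> str:
    # Stand-in for a real search over the trip knowledge base.
    return f"Top results for: {query}"

def send_to_sales(message: str) -> str:
    # Stand-in for forwarding a contact-form message to the sales team.
    return "Thanks! Sales will get back to you."

def handle_request(choice: str, user_input: str) -> str:
    # The workflow is fixed in advance: the user's choice picks the branch.
    if choice == "knowledge":
        return search_knowledge_base(user_input)
    if choice == "sales":
        return send_to_sales(user_input)
    raise ValueError(f"Unknown request type: {choice}")
```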


@@ -124,7 +124,7 @@ Now let us create an agent that leverages this tool.
We use the `CodeAgent`, which is smolagents main agent class: an agent that writes actions in code and can iterate on previous output according to the ReAct framework.
-The model is the LLM that powers the agent system. HfApiModel allows you to call LLMs using HFs Inference API, either via Serverless or Dedicated endpoint, but you could also use any proprietary API.
+The model is the LLM that powers the agent system. `HfApiModel` allows you to call LLMs using HFs Inference API, either via Serverless or Dedicated endpoint, but you could also use any proprietary API.
```py
from smolagents import CodeAgent, HfApiModel
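# A hedged sketch of how the example might continue (not the docs' original code);
# `some_tool` is a hypothetical stand-in for the tool defined earlier in the docs.
model = HfApiModel()  # uses a default Inference API model; pass model_id=... to choose one
agent = CodeAgent(tools=[some_tool], model=model)
agent.run("A task that needs the tool goes here.")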


@@ -28,7 +28,7 @@ To initialize a minimal agent, you need at least these two arguments:
- [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood.
- [`LiteLLMModel`] lets you call 100+ different models through [LiteLLM](https://docs.litellm.ai/)!
-- `tools`, A list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.
+- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.
Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), or [LiteLLM](https://www.litellm.ai/). Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), or [LiteLLM](https://www.litellm.ai/).
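Putting these two arguments together, a minimal setup might look like the sketch below. It is illustrative rather than taken from the diffed files, and the `model_id` is an arbitrary example choice.

```py
from smolagents import CodeAgent, HfApiModel

# The model can be any backend; here HfApiModel calls HF's Inference API.
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")  # example choice

# tools can be an empty list; add_base_tools=True layers the default toolbox on top.
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run("What is the 20th Fibonacci number?")
```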


@@ -183,13 +183,13 @@ Would you need some added clarifications?
To provide extra information, we do not recommend to change the system prompt right away: the default system prompt has many adjustments that you do not want to mess up except if you understand the prompt very well.
Better ways to guide your LLM engine are:
-- If it 's about the task to solve: add all these details to the task. The task could be 100s of pages long.
+- If it's about the task to solve: add all these details to the task. The task could be 100s of pages long.
- If it's about how to use tools: the description attribute of your tools.
### 3. Change the system prompt (generally not advised)
-If above clarifications above are not sufficient, you can change the system prompt.
+If above clarifications are not sufficient, you can change the system prompt.
Let's see how it works. For example, let us check the default system prompt for the [`CodeAgent`] (below version is shortened by skipping zero-shot examples).
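As an illustration of the second point above (guidance living in the tool's description rather than in the system prompt), a hypothetical tool might document its usage like this; the tool itself is not from the diffed docs.

```py
from smolagents import tool

@tool
def get_weather(city: str) -> str:
    """Returns a short weather report for a city.

    Use this tool whenever the user asks about today's weather; it does not
    know about multi-day forecasts.

    Args:
        city: Name of the city to look up, e.g. "Paris".
    """
    # Stand-in implementation; a real tool would call a weather API here.
    return f"Sunny and 22°C in {city}."
```

The docstring is what ends up in the tool's `description`, so detailed usage guidance belongs there rather than in the system prompt.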


@@ -204,7 +204,7 @@ agent.run(
### Use a collection of tools
-You can leverage tool collections by using the ToolCollection object, with the slug of the collection you want to use.
+You can leverage tool collections by using the `ToolCollection` object, with the slug of the collection you want to use.
Then pass them as a list to initialize your agent, and start using them!
```py
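# A hedged sketch of what loading a collection by slug could look like, under the
# assumption (from the sentence above) that ToolCollection takes the collection's
# slug; the slug and token below are placeholders, not real values.
from smolagents import CodeAgent, HfApiModel, ToolCollection

collection = ToolCollection(
    collection_slug="some-user/some-tool-collection",  # hypothetical slug
    token="<your_hf_token>",  # placeholder token
)

# Pass the collection's tools as a list when initializing the agent.
agent = CodeAgent(tools=[*collection.tools], model=HfApiModel(), add_base_tools=True)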