# Agents

Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

To learn more about agents and tools, make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes.

## Agents

Our agents inherit from [`ReactAgent`], which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution. Read more in [this conceptual guide](../conceptual_guides/react).

We provide two types of agents, based on the main [`Agent`] class:

- [`JsonAgent`] writes its tool calls in JSON.
- [`CodeAgent`] writes its tool calls in Python code.

### BaseAgent

[[autodoc]] BaseAgent

### React agents

[[autodoc]] ReactAgent

[[autodoc]] JsonAgent

[[autodoc]] CodeAgent

### ManagedAgent

[[autodoc]] ManagedAgent

### stream_to_gradio

[[autodoc]] stream_to_gradio

## Engines

You're free to create and use your own engines with the Agents framework. These engines have the following specification:

1. Follow the [messages format](../chat_templating.md) for their input (`List[Dict[str, str]]`) and return a string.
2. Stop generating outputs *before* the sequences passed in the argument `stop_sequences`.

A minimal sketch of a custom engine is given at the end of this page.

### TransformersEngine

For convenience, we have added a `TransformersEngine` that implements the points above, taking a pre-initialized `Pipeline` as input.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TransformersEngine

model_name = "HuggingFaceTB/SmolLM-135M-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

engine = TransformersEngine(pipe)
engine([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])
```

[[autodoc]] TransformersEngine

### HfApiEngine

The `HfApiEngine` is an engine that wraps an [HF Inference API](https://huggingface.co/docs/api-inference/index) client for the execution of the LLM.

```python
from transformers import HfApiEngine

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "No need to help, take it easy."},
]

HfApiEngine()(messages, stop_sequences=["conversation"])
```

```text
"That's very kind of you to say! It's always nice to have a relaxed "
```

[[autodoc]] HfApiEngine
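
### Writing your own engine

As the engine specification above implies, any callable that accepts a list of messages and a `stop_sequences` argument and returns a string can serve as an engine. The sketch below is illustrative only: the `PipelineEngine` class name and its stop-sequence truncation are not part of the library, and it assumes a chat-capable model that the `text-generation` pipeline can run.

```python
from transformers import pipeline


class PipelineEngine:
    """Minimal custom engine sketch: takes chat messages, returns the generated reply as a string."""

    def __init__(self, model_name: str):
        self.pipe = pipeline("text-generation", model=model_name)

    def __call__(self, messages, stop_sequences=None) -> str:
        # The text-generation pipeline accepts the chat messages format directly and
        # returns the conversation with the generated reply appended as the last message.
        output = self.pipe(messages, max_new_tokens=256)
        answer = output[0]["generated_text"][-1]["content"]
        # Honor the contract: stop *before* any of the stop sequences.
        for stop in stop_sequences or []:
            if stop in answer:
                answer = answer[: answer.index(stop)]
        return answer


engine = PipelineEngine("HuggingFaceTB/SmolLM-135M-Instruct")
engine([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])
```

Any object with this call signature can be passed wherever the built-in engines are used.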