Nodes

Nodes are the building blocks of a TapeAgent, representing atomic units of the agent's behavior.

Classes:

  • CallSubagent

    Node that calls a subagent with inputs from the current tape view.

  • ControlFlowNode

    A node that controls the flow of execution by selecting the next node based on tape content.

  • FixedStepsNode

    A Node that generates a fixed sequence of predefined steps.

  • MonoNode

    A node for simple monolithic agents that handles prompt generation and universal LLM output parsing.

  • ObservationControlNode

    A control flow node that selects the next node based on the last observation in the tape.

CallSubagent

Bases: Node

Node that calls a subagent with inputs from the current tape view.

Source code in tapeagents/nodes.py
class CallSubagent(Node):
    """
    Node that calls a subagent with inputs from the current tape view.
    """

    agent: Agent
    inputs: tuple[str | int, ...] = Field(
        default_factory=tuple,
        description="Names of the subagents whose outputs are required for the current subagent to run",
    )

    def model_post_init(self, __context: Any) -> None:
        self.name = f"{self.agent.name}Node"

    def generate_steps(self, _: Any, tape: Tape, llm_stream: LLMStream):
        view = TapeViewStack.compute(tape)
        yield Call(agent_name=self.agent.name)
        for input_ in self.inputs:
            yield view.top.get_output(input_).model_copy(deep=True)
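
The pattern above can be sketched with plain Python. `Call` and `View` below are simplified stand-ins for the tapeagents types, not the real classes: the node first announces the call, then forwards deep copies of the named prior outputs.

```python
# Minimal sketch of the CallSubagent pattern: yield a Call marker, then
# deep copies of the named subagent outputs taken from the tape view.
# Call and View are simplified stand-ins, not tapeagents classes.
import copy
from dataclasses import dataclass

@dataclass
class Call:
    agent_name: str

class View:
    """Toy tape view exposing prior subagent outputs by name."""
    def __init__(self, outputs: dict):
        self._outputs = outputs

    def get_output(self, name):
        return self._outputs[name]

def generate_steps(agent_name: str, view: View, inputs: tuple):
    yield Call(agent_name=agent_name)
    for input_ in inputs:
        # deep-copy so the subagent cannot mutate the original tape entry
        yield copy.deepcopy(view.get_output(input_))

view = View({"searcher": {"query": "weather"}})
steps = list(generate_steps("summarizer", view, ("searcher",)))
```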

ControlFlowNode

Bases: Node

A node that controls the flow of execution by selecting the next node based on tape content.

This abstract class provides a framework for implementing control flow logic in a node. It determines which node should be executed next based on the current state of the tape.

Example
class MyControlFlow(ControlFlowNode):
    def select_node(self, tape):
        if isinstance(tape[-1], SuccessObservation):
            return 'node_a'
        return 'node_b'

Methods:

  • generate_steps

    Generates steps that move the execution to the next node based on the tape content.

  • select_node

    Select the next node based on the provided tape.

Source code in tapeagents/nodes.py
class ControlFlowNode(Node):
    """
    A node that controls the flow of execution by selecting the next node based on tape content.

    This abstract class provides a framework for implementing control flow logic in a node.
    It determines which node should be executed next based on the current state of the tape.

    Example:
        ```python
        class MyControlFlow(ControlFlowNode):
            def select_node(self, tape):
                if isinstance(tape[-1], SuccessObservation):
                    return 'node_a'
                return 'node_b'
        ```
    """

    def generate_steps(
        self, agent: Any, tape: Tape, llm_stream: LLMStream
    ) -> Generator[Step | PartialStep, None, None]:
        """
        Generates steps that move the execution to the next node based on the tape content.

        Args:
            agent (Any): The agent instance executing the node
            tape (Tape): The tape object containing the context and state
            llm_stream (LLMStream): Stream for language model interaction

        Yields:
            step (SetNextNode): A step indicating which node should be executed next
        """
        yield SetNextNode(next_node=self.select_node(tape))

    def select_node(self, tape: Tape) -> str:
        """
        Select the next node based on the provided tape.

        This method should be implemented in a subclass to define the logic for selecting the next node.

        Args:
            tape (Tape): The tape object containing the necessary information for node selection.

        Returns:
            str: The identifier of the next node.

        Raises:
            NotImplementedError: If the method is not implemented in the subclass.
        """
        raise NotImplementedError("Implement this method in the subclass to set the next node according to your logic")

generate_steps(agent, tape, llm_stream)

Generates steps that move the execution to the next node based on the tape content.

Parameters:

  • agent (Any) –

    The agent instance executing the node

  • tape (Tape) –

    The tape object containing the context and state

  • llm_stream (LLMStream) –

    Stream for language model interaction

Yields:

  • step ( SetNextNode ) –

    A step indicating which node should be executed next

Source code in tapeagents/nodes.py
def generate_steps(
    self, agent: Any, tape: Tape, llm_stream: LLMStream
) -> Generator[Step | PartialStep, None, None]:
    """
    Generates steps that move the execution to the next node based on the tape content.

    Args:
        agent (Any): The agent instance executing the node
        tape (Tape): The tape object containing the context and state
        llm_stream (LLMStream): Stream for language model interaction

    Yields:
        step (SetNextNode): A step indicating which node should be executed next
    """
    yield SetNextNode(next_node=self.select_node(tape))

select_node(tape)

Select the next node based on the provided tape.

This method should be implemented in a subclass to define the logic for selecting the next node.

Parameters:

  • tape (Tape) –

    The tape object containing the necessary information for node selection.

Returns:

  • str ( str ) –

    The identifier of the next node.

Raises:

  • NotImplementedError

    If the method is not implemented in the subclass.

Source code in tapeagents/nodes.py
def select_node(self, tape: Tape) -> str:
    """
    Select the next node based on the provided tape.

    This method should be implemented in a subclass to define the logic for selecting the next node.

    Args:
        tape (Tape): The tape object containing the necessary information for node selection.

    Returns:
        str: The identifier of the next node.

    Raises:
        NotImplementedError: If the method is not implemented in the subclass.
    """
    raise NotImplementedError("Implement this method in the subclass to set the next node according to your logic")
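
A runnable version of the routing pattern, using plain classes as stand-ins for the tapeagents types (`SetNextNode` and `SuccessObservation` here are simplified, not the real ones): subclass, implement `select_node`, and `generate_steps` emits the routing decision.

```python
# Self-contained sketch of ControlFlowNode routing; stand-in classes only.
from dataclasses import dataclass

@dataclass
class SetNextNode:
    next_node: str

@dataclass
class SuccessObservation:
    data: str

class ControlFlowNode:
    def generate_steps(self, tape):
        # The only step a control-flow node emits is the routing decision.
        yield SetNextNode(next_node=self.select_node(tape))

    def select_node(self, tape):
        raise NotImplementedError

class MyControlFlow(ControlFlowNode):
    def select_node(self, tape):
        return "node_a" if isinstance(tape[-1], SuccessObservation) else "node_b"

steps = list(MyControlFlow().generate_steps([SuccessObservation("ok")]))
```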

FixedStepsNode

Bases: Node

A Node that generates a fixed sequence of predefined steps.

This node simply yields a sequence of steps that were provided during initialization, without any dynamic generation or modification.

Attributes:

  • steps (list[Step]) –

    A list of Step objects to be yielded in sequence.

Example
fixed_node = FixedStepsNode(steps=[
    AssistantStep(text="Hello"),
    SetNextNode(next_node="node_a")
])
Source code in tapeagents/nodes.py
class FixedStepsNode(Node):
    """A Node that generates a fixed sequence of predefined steps.

    This node simply yields a sequence of steps that were provided during initialization,
    without any dynamic generation or modification.

    Attributes:
        steps (list[Step]): A list of Step objects to be yielded in sequence.

    Example:
        ```python
        fixed_node = FixedStepsNode(steps=[
            AssistantStep(text="Hello"),
            SetNextNode(next_node="node_a")
        ])
        ```
    """

    steps: list[Step]

    def generate_steps(
        self, agent: Any, tape: Tape, llm_stream: LLMStream
    ) -> Generator[Step | PartialStep, None, None]:
        for step in self.steps:
            yield step
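
Reduced to its essence, the node ignores the tape and replays a canned list. Strings stand in for real `Step` objects in this sketch:

```python
# FixedStepsNode in miniature: no LLM call, no tape inspection, just replay.
class FixedStepsNode:
    def __init__(self, steps):
        self.steps = steps

    def generate_steps(self, tape=None):
        yield from self.steps

node = FixedStepsNode(steps=["AssistantStep: Hello", "SetNextNode: node_a"])
result = list(node.generate_steps())
```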

MonoNode

Bases: Node

A node for simple monolithic agents that handles prompt generation and universal LLM output parsing.

This node performs the following functions:

  • Renders the entire tape into a prompt, trimming if needed
  • Attaches guidance text to the end of the prompt after rendering the tape
  • Parses the LLM output into provided step classes (class provided as annotated union)

Attributes:

  • guidance (str) –

    Guidance text attached to the end of the prompt

  • system_prompt (str) –

    System prompt used in message construction

  • steps_prompt (str) –

    Prompt describing the steps the agent can take

  • agent_step_cls (Any) –

    Class used for step validation, excluded from model

  • next_node (str) –

    Identifier for the next node in sequence

Example
node = MonoNode(
    guidance="Please respond with next action",
    system_prompt="You are a helpful assistant",
    steps_prompt="Available steps: think, act, finish",
    agent_step_cls=AgentStep
)

Methods:

  • generate_steps

    Generates a sequence of steps based on the LLM stream output.

  • get_steps_description

    Get the steps description for the agent's task.

  • make_llm_output

    Creates an LLMOutput from a sequence of steps in the tape that share the same prompt_id.

  • make_prompt

    Create a prompt from tape interactions.

  • parse_completion

    Parse LLM completion output into a sequence of agent steps.

  • postprocess_step

    Post-processes a step after its generation.

  • prepare_tape

    Prepares tape by filtering out control flow steps.

  • tape_to_messages

    Converts a Tape object and steps description into a list of messages for LLM conversation.

  • trim_tape

    Trims the tape by removing unnecessary positions.

Source code in tapeagents/nodes.py
class MonoNode(Node):
    """
    A node for simple monolithic agents that handles prompt generation and universal LLM output parsing.

    This node performs the following functions:

    - Renders the entire tape into a prompt, trimming if needed
    - Attaches guidance text to the end of the prompt after rendering the tape
    - Parses the LLM output into provided step classes (class provided as annotated union)

    Attributes:
        guidance (str): Guidance text attached to the end of the prompt
        system_prompt (str): System prompt used in message construction
        steps_prompt (str): Prompt describing the steps the agent can take
        agent_step_cls (Any): Class used for step validation, excluded from model
        next_node (str): Identifier for the next node in sequence

    Example:
        ```python
        node = MonoNode(
            guidance="Please respond with next action",
            system_prompt="You are a helpful assistant",
            steps_prompt="Available steps: think, act, finish",
            agent_step_cls=AgentStep
        )
        ```
    """

    guidance: str = ""  # guidance text that is attached to the end of the prompt
    system_prompt: str = ""
    steps_prompt: str = ""  # prompt that describes the steps that the agent can take
    agent_step_cls: Any = Field(exclude=True)
    next_node: str = ""

    def make_prompt(self, agent: Any, tape: Tape) -> Prompt:
        """Create a prompt from tape interactions.

        This method constructs a prompt by processing the tape content and agent steps description
        into a format suitable for LLM consumption. It includes token count checks and tape trimming
        if needed to fit within context size limits.

        Args:
            agent (Any): The agent object containing LLM configuration.
            tape (Tape): The tape object containing interaction history.

        Returns:
            Prompt: A Prompt object containing formatted messages for LLM consumption.

        Note:
            The method performs the following steps:

            1. Cleans the tape content
            2. Gets steps description
            3. Converts tape to messages
            4. Checks token count and trims if needed
            5. Reconstructs messages if trimming occurred
        """
        cleaned_tape = self.prepare_tape(tape)
        steps_description = self.get_steps_description(tape, agent)
        messages = self.tape_to_messages(cleaned_tape, steps_description)
        if agent.llm.count_tokens(messages) > (agent.llm.context_size - 500):
            cleaned_tape = self.trim_tape(cleaned_tape)
        messages = self.tape_to_messages(cleaned_tape, steps_description)
        return Prompt(messages=messages)

    def prepare_tape(self, tape: Tape) -> Tape:
        """
        Prepares tape by filtering out control flow steps.

        This method creates a new tape instance with only non-control flow steps,
        specifically excluding SetNextNode instances.

        Args:
            tape (Tape): The input tape containing a sequence of steps.

        Returns:
            Tape: A new tape instance containing only non-control flow steps.
        """
        steps_without_control_flow = [step for step in tape.steps if not isinstance(step, SetNextNode)]
        return tape.model_copy(update=dict(steps=steps_without_control_flow))

    def make_llm_output(self, agent: Any, tape: Tape, index: int) -> LLMOutput:
        """
        Creates an LLMOutput from a sequence of steps in the tape that share the same prompt_id.

        Args:
            agent (Any): The agent instance associated with the output.
            tape (Tape): The tape containing the sequence of steps.
            index (int): The starting index in the tape to process steps from.

        Returns:
            LLMOutput: An output object containing:

                - role: Set to "assistant"
                - content: JSON string of step data, formatted as a single dictionary
                  if there is only one step, or as a list of dictionaries

        Note:
            - Only processes steps with matching prompt_id from the starting index
            - Excludes SetNextNode steps from the output
            - JSON content is formatted with indentation
        """
        steps = []
        i = index
        first_prompt_id = tape.steps[i].metadata.prompt_id
        while i < len(tape) and tape.steps[i].metadata.prompt_id == first_prompt_id:
            if not isinstance(tape.steps[i], SetNextNode):
                steps.append(tape.steps[i])
            i += 1

        # if there is only one step, return it as a single dict, not a list
        content = [step.llm_dict() for step in steps] if len(steps) > 1 else steps[0].llm_dict()
        return LLMOutput(role="assistant", content=json.dumps(content, indent=2, ensure_ascii=False))

    def tape_to_messages(self, tape: Tape, steps_description: str) -> list[dict]:
        """
        Converts a Tape object and steps description into a list of messages for LLM conversation.

        Args:
            tape (Tape): A Tape object containing conversation steps.
            steps_description (str): A description of the conversation steps.

        Returns:
            list[dict]: A list of dictionaries representing the conversation messages.
                       Each dictionary contains 'role' and 'content' keys.
                       Roles can be 'system', 'user', or 'assistant'.
                       The system prompt is always the first message.
                       If steps_description is provided, it's added as a user message.
                       Messages from tape are added with roles based on step type.
                       If guidance exists, it's added as the final user message.
        """
        messages: list[dict] = [
            {"role": "system", "content": self.system_prompt},
        ]
        if steps_description:
            messages.append({"role": "user", "content": steps_description})
        for step in tape:
            role = "assistant" if isinstance(step, AgentStep) else "user"
            messages.append({"role": role, "content": step.llm_view()})
        if self.guidance:
            messages.append({"role": "user", "content": self.guidance})
        return messages

    def get_steps_description(self, tape: Tape, agent: Any) -> str:
        """
        Get the steps description for the agent's task.

        This method returns the predefined steps prompt which describes the sequence of actions
        or steps that the agent should follow.

        Args:
            tape (Tape): The tape object containing the context and state information.
            agent (Any): The agent object that will execute the steps.

        Returns:
            str: The steps prompt describing the sequence of actions.
        """
        return self.steps_prompt

    def generate_steps(
        self, agent: Any, tape: Tape, llm_stream: LLMStream
    ) -> Generator[Step | PartialStep, None, None]:
        """
        Generates a sequence of steps based on the LLM stream output.

        This method processes the output from a language model stream and converts it into a series of steps.
        It handles the parsing of completions and post-processing of steps.

        Args:
            agent (Any): The agent instance that will execute the steps.
            tape (Tape): The tape object containing the execution context and history.
            llm_stream (LLMStream): The stream of language model outputs to process.

        Yields:
            Union[Step, PartialStep]: Individual steps generated from the LLM stream output.

        Raises:
            FatalError: If no completions are generated from the LLM stream.

        Note:
            - If the node has a next_node defined and the final step is not a StopStep,
              it will yield a SetNextNode step to continue the execution flow.
        """
        new_steps = []
        try:
            cnt = 0
            for event in llm_stream:
                if event.output:
                    cnt += 1
                    assert event.output.content
                    for step in self.parse_completion(event.output.content, llm_stream.prompt.id):
                        step = self.postprocess_step(tape, new_steps, step)
                        new_steps.append(step)
                        yield step
            if not cnt:
                raise FatalError("No completions!")
        except FatalError:
            raise

        if self.next_node and not isinstance(new_steps[-1], StopStep):
            yield SetNextNode(next_node=self.next_node)

    def postprocess_step(self, tape: Tape, new_steps: list[Step], step: Step) -> Step:
        """
        Post-processes a step after its generation.

        By default returns the step unchanged.

        Args:
            tape (Tape): The tape
            new_steps (list[Step]): List of new steps that were generated during the current iteration
            step (Step): The step that was just generated

        Returns:
            Step: The processed step, by default returns the original step unmodified
        """
        return step

    def parse_completion(self, llm_output: str, prompt_id: str) -> Generator[Step, None, None]:
        """Parse LLM completion output into a sequence of agent steps.

        This method processes the LLM output string by parsing it as JSON and validating it against
        the agent step class schema. It handles both single step and multi-step outputs.

        Args:
            llm_output (str): The raw output string from the LLM to be parsed
            prompt_id (str): Identifier for the prompt that generated this completion

        Yields:
            Step: Individual validated agent steps with prompt_id metadata
            LLMOutputParsingFailureAction: Error information if parsing or validation fails

        Note:
            All parsing errors are handled internally and yielded as
            LLMOutputParsingFailureAction objects.
        """
        try:
            step_dicts = json.loads(sanitize_json_completion(llm_output))
            if isinstance(step_dicts, dict):
                step_dicts = [step_dicts]
        except Exception as e:
            logger.exception(f"Failed to parse LLM output as json: {llm_output}\n\nError: {e}")
            yield LLMOutputParsingFailureAction(error=f"Failed to parse LLM output as json: {e}", llm_output=llm_output)
            return
        try:
            steps = [TypeAdapter(self.agent_step_cls).validate_python(step_dict) for step_dict in step_dicts]
        except ValidationError as e:
            err_text = ""
            for err in e.errors():
                loc = ".".join([str(loc) for loc in err["loc"]])
                err_text += f"{loc}: {err['msg']}\n"
            logger.exception(f"Failed to validate LLM output: {step_dicts}\n\nErrors:\n{err_text}")
            yield LLMOutputParsingFailureAction(
                error=f"Failed to validate LLM output: {err_text}", llm_output=llm_output
            )
            return
        except Exception as e:
            logger.exception(f"Failed to parse LLM output dict: {step_dicts}\n\nError: {e}")
            yield LLMOutputParsingFailureAction(error=f"Failed to parse LLM output dict: {e}", llm_output=llm_output)
            return
        for step in steps:
            step.metadata.prompt_id = prompt_id
            yield step

    def trim_tape(self, tape: Tape) -> Tape:
        """
        Trims the tape by removing unnecessary positions.

        Args:
            tape (Tape): The tape object to be trimmed.

        Returns:
            Tape: The trimmed tape object.

        Note:
            Currently this is a placeholder method that returns the tape unchanged.
        """
        return tape

generate_steps(agent, tape, llm_stream)

Generates a sequence of steps based on the LLM stream output.

This method processes the output from a language model stream and converts it into a series of steps. It handles the parsing of completions and post-processing of steps.

Parameters:

  • agent (Any) –

    The agent instance that will execute the steps.

  • tape (Tape) –

    The tape object containing the execution context and history.

  • llm_stream (LLMStream) –

    The stream of language model outputs to process.

Yields:

  • Step | PartialStep

    Union[Step, PartialStep]: Individual steps generated from the LLM stream output.

Raises:

  • FatalError

    If no completions are generated from the LLM stream.

Note
  • If the node has a next_node defined and the final step is not a StopStep, it will yield a SetNextNode step to continue the execution flow.
Source code in tapeagents/nodes.py
def generate_steps(
    self, agent: Any, tape: Tape, llm_stream: LLMStream
) -> Generator[Step | PartialStep, None, None]:
    """
    Generates a sequence of steps based on the LLM stream output.

    This method processes the output from a language model stream and converts it into a series of steps.
    It handles the parsing of completions and post-processing of steps.

    Args:
        agent (Any): The agent instance that will execute the steps.
        tape (Tape): The tape object containing the execution context and history.
        llm_stream (LLMStream): The stream of language model outputs to process.

    Yields:
        Union[Step, PartialStep]: Individual steps generated from the LLM stream output.

    Raises:
        FatalError: If no completions are generated from the LLM stream.

    Note:
        - If the node has a next_node defined and the final step is not a StopStep,
          it will yield a SetNextNode step to continue the execution flow.
    """
    new_steps = []
    try:
        cnt = 0
        for event in llm_stream:
            if event.output:
                cnt += 1
                assert event.output.content
                for step in self.parse_completion(event.output.content, llm_stream.prompt.id):
                    step = self.postprocess_step(tape, new_steps, step)
                    new_steps.append(step)
                    yield step
        if not cnt:
            raise FatalError("No completions!")
    except FatalError:
        raise

    if self.next_node and not isinstance(new_steps[-1], StopStep):
        yield SetNextNode(next_node=self.next_node)
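
The control flow above can be sketched with plain Python stand-ins: consume stream events, parse each completion into steps, fail if the stream produced nothing, and append a routing step when `next_node` is set and the run did not end in a stop step. `parse` here stands in for `parse_completion`, and dicts stand in for step objects.

```python
# Skeleton of the MonoNode.generate_steps flow with stand-in types.
def generate_steps(events, parse, next_node=""):
    new_steps = []
    count = 0
    for content in events:       # each event carries one LLM completion
        count += 1
        for step in parse(content):
            new_steps.append(step)
            yield step
    if not count:
        raise RuntimeError("No completions!")   # stand-in for FatalError
    # Fall through to the next node unless the run ended with a stop step.
    if next_node and new_steps[-1] != "stop":
        yield {"set_next_node": next_node}

steps = list(generate_steps(["a b"], str.split, next_node="node_b"))
```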

get_steps_description(tape, agent)

Get the steps description for the agent's task.

This method returns the predefined steps prompt which describes the sequence of actions or steps that the agent should follow.

Parameters:

  • tape (Tape) –

    The tape object containing the context and state information.

  • agent (Any) –

    The agent object that will execute the steps.

Returns:

  • str ( str ) –

    The steps prompt describing the sequence of actions.

Source code in tapeagents/nodes.py
def get_steps_description(self, tape: Tape, agent: Any) -> str:
    """
    Get the steps description for the agent's task.

    This method returns the predefined steps prompt which describes the sequence of actions
    or steps that the agent should follow.

    Args:
        tape (Tape): The tape object containing the context and state information.
        agent (Any): The agent object that will execute the steps.

    Returns:
        str: The steps prompt describing the sequence of actions.
    """
    return self.steps_prompt

make_llm_output(agent, tape, index)

Creates an LLMOutput from a sequence of steps in the tape that share the same prompt_id.

Parameters:

  • agent (Any) –

    The agent instance associated with the output.

  • tape (Tape) –

    The tape containing the sequence of steps.

  • index (int) –

    The starting index in the tape to process steps from.

Returns:

  • LLMOutput ( LLMOutput ) –

    An output object containing:

    • role: Set to "assistant"
    • content: JSON string of step data, formatted as a single dictionary if there is only one step, or as a list of dictionaries
Note
  • Only processes steps with matching prompt_id from the starting index
  • Excludes SetNextNode steps from the output
  • JSON content is formatted with indentation
Source code in tapeagents/nodes.py
def make_llm_output(self, agent: Any, tape: Tape, index: int) -> LLMOutput:
    """
    Creates an LLMOutput from a sequence of steps in the tape that share the same prompt_id.

    Args:
        agent (Any): The agent instance associated with the output.
        tape (Tape): The tape containing the sequence of steps.
        index (int): The starting index in the tape to process steps from.

    Returns:
        LLMOutput: An output object containing:

            - role: Set to "assistant"
            - content: JSON string of step data, formatted as a single dictionary
              if there is only one step, or as a list of dictionaries

    Note:
        - Only processes steps with matching prompt_id from the starting index
        - Excludes SetNextNode steps from the output
        - JSON content is formatted with indentation
    """
    steps = []
    i = index
    first_prompt_id = tape.steps[i].metadata.prompt_id
    while i < len(tape) and tape.steps[i].metadata.prompt_id == first_prompt_id:
        if not isinstance(tape.steps[i], SetNextNode):
            steps.append(tape.steps[i])
        i += 1

    # if there is only one step, return it as a single dict, not a list
    content = [step.llm_dict() for step in steps] if len(steps) > 1 else steps[0].llm_dict()
    return LLMOutput(role="assistant", content=json.dumps(content, indent=2, ensure_ascii=False))
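
The grouping logic can be sketched without the tapeagents types (dicts stand in for steps here): walk forward from `index` while steps share the first step's `prompt_id`, skip control-flow steps, then serialize one dict for a single step or a list for several.

```python
# Sketch of the prompt_id grouping in make_llm_output, stand-in step dicts.
import json

def make_llm_output(steps, index):
    first_id = steps[index]["prompt_id"]
    group = []
    i = index
    while i < len(steps) and steps[i]["prompt_id"] == first_id:
        if steps[i].get("kind") != "set_next_node":   # excluded from output
            group.append(steps[i]["data"])
        i += 1
    # single step -> single dict, several steps -> list of dicts
    content = group if len(group) > 1 else group[0]
    return json.dumps(content, indent=2, ensure_ascii=False)

tape = [
    {"prompt_id": "p1", "kind": "think", "data": {"text": "plan"}},
    {"prompt_id": "p1", "kind": "set_next_node", "data": {}},
    {"prompt_id": "p2", "kind": "act", "data": {"tool": "search"}},
]
out = make_llm_output(tape, 0)
```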

make_prompt(agent, tape)

Create a prompt from tape interactions.

This method constructs a prompt by processing the tape content and agent steps description into a format suitable for LLM consumption. It includes token count checks and tape trimming if needed to fit within context size limits.

Parameters:

  • agent (Any) –

    The agent object containing LLM configuration.

  • tape (Tape) –

    The tape object containing interaction history.

Returns:

  • Prompt ( Prompt ) –

    A Prompt object containing formatted messages for LLM consumption.

Note

The method performs the following steps:

  1. Cleans the tape content
  2. Gets steps description
  3. Converts tape to messages
  4. Checks token count and trims if needed
  5. Reconstructs messages if trimming occurred
Source code in tapeagents/nodes.py
def make_prompt(self, agent: Any, tape: Tape) -> Prompt:
    """Create a prompt from tape interactions.

    This method constructs a prompt by processing the tape content and agent steps description
    into a format suitable for LLM consumption. It includes token count checks and tape trimming
    if needed to fit within context size limits.

    Args:
        agent (Any): The agent object containing LLM configuration.
        tape (Tape): The tape object containing interaction history.

    Returns:
        Prompt: A Prompt object containing formatted messages for LLM consumption.

    Note:
        The method performs the following steps:

        1. Cleans the tape content
        2. Gets steps description
        3. Converts tape to messages
        4. Checks token count and trims if needed
        5. Reconstructs messages if trimming occurred
    """
    cleaned_tape = self.prepare_tape(tape)
    steps_description = self.get_steps_description(tape, agent)
    messages = self.tape_to_messages(cleaned_tape, steps_description)
    if agent.llm.count_tokens(messages) > (agent.llm.context_size - 500):
        cleaned_tape = self.trim_tape(cleaned_tape)
    messages = self.tape_to_messages(cleaned_tape, steps_description)
    return Prompt(messages=messages)
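
The token-budget check can be sketched in isolation: build the messages once, and if the count exceeds the context size minus a 500-token safety margin (the margin the source uses), trim the tape and rebuild. The whitespace-based `count_tokens` below is a crude stand-in for the real tokenizer.

```python
# Sketch of the make_prompt token-budget check with a stand-in tokenizer.
def count_tokens(messages):
    return sum(len(m["content"].split()) for m in messages)

def make_prompt(tape, context_size, trim):
    messages = [{"role": "user", "content": s} for s in tape]
    if count_tokens(messages) > context_size - 500:
        tape = trim(tape)
    # messages are rebuilt unconditionally, mirroring the original code
    return [{"role": "user", "content": s} for s in tape]

long_tape = ["word " * 400, "word " * 400]   # ~800 "tokens"
short = make_prompt(long_tape, context_size=1000, trim=lambda t: t[-1:])
```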

parse_completion(llm_output, prompt_id)

Parse LLM completion output into a sequence of agent steps.

This method processes the LLM output string by parsing it as JSON and validating it against the agent step class schema. It handles both single step and multi-step outputs.

Parameters:

  • llm_output (str) –

    The raw output string from the LLM to be parsed

  • prompt_id (str) –

    Identifier for the prompt that generated this completion

Yields:

  • Step ( Step ) –

    Individual validated agent steps with prompt_id metadata

  • LLMOutputParsingFailureAction ( Step ) –

    Error information if parsing or validation fails

Note

All parsing errors are handled internally and yielded as LLMOutputParsingFailureAction objects.
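
The flow can be sketched with a stand-in validator (the real code validates against `agent_step_cls` via pydantic's `TypeAdapter`; the `"kind"` check below is only an illustration): parse JSON, normalize a single dict to a list, validate each entry, and surface failures as error "steps" instead of raising.

```python
# Sketch of the parse_completion flow; validation is a simple stand-in.
import json

def parse_completion(llm_output: str):
    try:
        step_dicts = json.loads(llm_output)
        if isinstance(step_dicts, dict):
            step_dicts = [step_dicts]       # single step -> one-element list
    except Exception as e:
        yield {"kind": "parsing_failure", "error": str(e)}
        return
    for d in step_dicts:
        if "kind" not in d:                 # stand-in for schema validation
            yield {"kind": "parsing_failure", "error": "missing 'kind'"}
            return
        yield d

ok = list(parse_completion('{"kind": "think", "text": "hi"}'))
bad = list(parse_completion("not json"))
```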

Source code in tapeagents/nodes.py
def parse_completion(self, llm_output: str, prompt_id: str) -> Generator[Step, None, None]:
    """Parse LLM completion output into a sequence of agent steps.

    This method processes the LLM output string by parsing it as JSON and validating it against
    the agent step class schema. It handles both single step and multi-step outputs.

    Args:
        llm_output (str): The raw output string from the LLM to be parsed
        prompt_id (str): Identifier for the prompt that generated this completion

    Yields:
        Step: Individual validated agent steps with prompt_id metadata
        LLMOutputParsingFailureAction: Error information if parsing or validation fails

    Note:
        All parsing errors are handled internally and yielded as
        LLMOutputParsingFailureAction objects.
    """
    try:
        step_dicts = json.loads(sanitize_json_completion(llm_output))
        if isinstance(step_dicts, dict):
            step_dicts = [step_dicts]
    except Exception as e:
        logger.exception(f"Failed to parse LLM output as json: {llm_output}\n\nError: {e}")
        yield LLMOutputParsingFailureAction(error=f"Failed to parse LLM output as json: {e}", llm_output=llm_output)
        return
    try:
        steps = [TypeAdapter(self.agent_step_cls).validate_python(step_dict) for step_dict in step_dicts]
    except ValidationError as e:
        err_text = ""
        for err in e.errors():
            loc = ".".join([str(loc) for loc in err["loc"]])
            err_text += f"{loc}: {err['msg']}\n"
        logger.exception(f"Failed to validate LLM output: {step_dicts}\n\nErrors:\n{err_text}")
        yield LLMOutputParsingFailureAction(
            error=f"Failed to validate LLM output: {err_text}", llm_output=llm_output
        )
        return
    except Exception as e:
        logger.exception(f"Failed to parse LLM output dict: {step_dicts}\n\nError: {e}")
        yield LLMOutputParsingFailureAction(error=f"Failed to parse LLM output dict: {e}", llm_output=llm_output)
        return
    for step in steps:
        step.metadata.prompt_id = prompt_id
        yield step
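The single-object vs. list normalization that `parse_completion` performs before validation can be sketched with the standard `json` module alone (`sanitize_json_completion` and the pydantic validation are omitted here):

```python
import json

def parse_step_dicts(llm_output: str) -> list[dict]:
    # A lone JSON object is wrapped into a one-element list so that
    # single-step and multi-step completions take the same code path.
    data = json.loads(llm_output)
    return [data] if isinstance(data, dict) else data

print(parse_step_dicts('{"kind": "answer"}'))              # [{'kind': 'answer'}]
print(parse_step_dicts('[{"kind": "a"}, {"kind": "b"}]'))  # [{'kind': 'a'}, {'kind': 'b'}]
```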

postprocess_step(tape, new_steps, step)

Post-processes a step after its generation.

By default returns the step unchanged.

Parameters:

  • tape (Tape) –

    The tape

  • new_steps (list[Step]) –

    List of new steps that were generated during the current iteration

  • step (Step) –

    The step that was just generated

Returns:

  • Step ( Step ) –

    The processed step, by default returns the original step unmodified

Source code in tapeagents/nodes.py
def postprocess_step(self, tape: Tape, new_steps: list[Step], step: Step) -> Step:
    """
    Post-processes a step after its generation.

    By default returns the step unchanged.

    Args:
        tape (Tape): The tape
        new_steps (list[Step]): List of new steps that were generated during the current iteration
        step (Step): The step that was just generated

    Returns:
        Step: The processed step, by default returns the original step unmodified
    """
    return step

prepare_tape(tape)

Prepares tape by filtering out control flow steps.

This method creates a new tape instance with only non-control flow steps, specifically excluding SetNextNode instances.

Parameters:

  • tape (Tape) –

    The input tape containing a sequence of steps.

Returns:

  • Tape ( Tape ) –

    A new tape instance containing only non-control flow steps.

Source code in tapeagents/nodes.py
def prepare_tape(self, tape: Tape) -> Tape:
    """
    Prepares tape by filtering out control flow steps.

    This method creates a new tape instance with only non-control flow steps,
    specifically excluding SetNextNode instances.

    Args:
        tape (Tape): The input tape containing a sequence of steps.

    Returns:
        Tape: A new tape instance containing only non-control flow steps.
    """
    steps_without_control_flow = [step for step in tape.steps if not isinstance(step, SetNextNode)]
    return tape.model_copy(update=dict(steps=steps_without_control_flow))
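The filtering itself is a plain `isinstance` exclusion; with a hypothetical stand-in for `SetNextNode` it looks like:

```python
from dataclasses import dataclass

@dataclass
class SetNextNode:
    # Hypothetical stand-in for the real control flow step.
    next_node: str

steps = ["thought", SetNextNode("plan"), "action"]
visible = [s for s in steps if not isinstance(s, SetNextNode)]
print(visible)  # ['thought', 'action']
```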

tape_to_messages(tape, steps_description)

Converts a Tape object and steps description into a list of messages for LLM conversation.

Parameters:

  • tape (Tape) –

    A Tape object containing conversation steps.

  • steps_description (str) –

    A description of the conversation steps.

Returns:

  • list[dict]

    list[dict]: A list of dictionaries representing the conversation messages.
    Each dictionary contains 'role' and 'content' keys; roles can be 'system',
    'user', or 'assistant'. The system prompt is always the first message. If
    steps_description is provided, it is added as a user message. Messages from
    the tape are added with roles based on step type. If guidance exists, it is
    added as the final user message.

Source code in tapeagents/nodes.py
def tape_to_messages(self, tape: Tape, steps_description: str) -> list[dict]:
    """
    Converts a Tape object and steps description into a list of messages for LLM conversation.

    Args:
        tape (Tape): A Tape object containing conversation steps.
        steps_description (str): A description of the conversation steps.

    Returns:
        list[dict]: A list of dictionaries representing the conversation messages.
                   Each dictionary contains 'role' and 'content' keys.
                   Roles can be 'system', 'user', or 'assistant'.
                   The system prompt is always the first message.
                   If steps_description is provided, it's added as a user message.
                   Messages from tape are added with roles based on step type.
                   If guidance exists, it's added as the final user message.
    """
    messages: list[dict] = [
        {"role": "system", "content": self.system_prompt},
    ]
    if steps_description:
        messages.append({"role": "user", "content": steps_description})
    for step in tape:
        role = "assistant" if isinstance(step, AgentStep) else "user"
        messages.append({"role": role, "content": step.llm_view()})
    if self.guidance:
        messages.append({"role": "user", "content": self.guidance})
    return messages
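The role-assignment pattern can be sketched with plain dicts; `from_agent` below is a hypothetical flag replacing the `isinstance(step, AgentStep)` check in the source:

```python
def to_messages(system_prompt, steps_description, steps, guidance):
    # `steps` is a list of (text, from_agent) pairs.
    messages = [{"role": "system", "content": system_prompt}]
    if steps_description:
        messages.append({"role": "user", "content": steps_description})
    for text, from_agent in steps:
        messages.append({"role": "assistant" if from_agent else "user", "content": text})
    if guidance:
        messages.append({"role": "user", "content": guidance})
    return messages

msgs = to_messages(
    "You are an agent.",
    "Allowed steps: ...",
    [("hi", False), ("plan", True)],
    "Respond in JSON.",
)
print([m["role"] for m in msgs])  # ['system', 'user', 'user', 'assistant', 'user']
```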

trim_tape(tape)

Trims the tape by removing unnecessary positions.

Parameters:

  • tape (Tape) –

    The tape object to be trimmed.

Returns:

  • Tape ( Tape ) –

    The trimmed tape object.

Note

Currently this is a placeholder method that returns the tape unchanged.

Source code in tapeagents/nodes.py
def trim_tape(self, tape: Tape) -> Tape:
    """
    Trims the tape by removing unnecessary positions.

    Args:
        tape (Tape): The tape object to be trimmed.

    Returns:
        Tape: The trimmed tape object.

    Note:
        Currently this is a placeholder method that returns the tape unchanged.
    """
    return tape
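Since `trim_tape` is a no-op by default, subclasses are expected to supply the policy. One hypothetical override keeps the first step (typically the task) plus the most recent steps:

```python
def trim_keep_recent(steps: list, keep_last: int = 5) -> list:
    # Keep the first step plus the `keep_last` most recent ones;
    # short tapes are returned unchanged.
    if len(steps) <= keep_last + 1:
        return steps
    return [steps[0]] + steps[-keep_last:]

print(trim_keep_recent(list(range(10)), keep_last=3))  # [0, 7, 8, 9]
```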

ObservationControlNode

Bases: ControlFlowNode

A control flow node that selects the next node based on the last observation in the tape.

This node examines the last observation in the tape and uses it to determine which node to execute next based on a mapping of observation types to node names.

Attributes:

  • observation_to_node (dict[Type, str]) –

    Mapping of observation types to destination node names

  • default_node (str) –

    Default node to jump to if no matching observation type is found

Example
node = ObservationControlNode(
    observation_to_node={
        SuccessObservation: "success_node",
        ErrorObservation: "error_node"
    },
    default_node="fallback_node"
)

Methods:

  • select_node

    Selects the next node based on the type of the last observation in the tape.

Source code in tapeagents/nodes.py
class ObservationControlNode(ControlFlowNode):
    """
    A control flow node that selects the next node based on the last observation in the tape.

    This node examines the last observation in the tape and uses it to determine which node
    to execute next based on a mapping of observation types to node names.

    Attributes:
        observation_to_node (dict[Type, str]): Mapping of observation types to destination node names
        default_node (str): Default node to jump to if no matching observation type is found

    Example:
        ```python
        node = ObservationControlNode(
            observation_to_node={
                SuccessObservation: "success_node",
                ErrorObservation: "error_node"
            },
            default_node="fallback_node"
        )
        ```
    """

    observation_to_node: dict[Type, str] = {}
    default_node: str = ""  # jump to the last node by default

    def select_node(self, tape: Tape) -> str:
        """
        Selects the next node based on the type of the last observation in the tape.

        Returns default_node if no observations exist or no matching type is found.

        Args:
            tape (Tape): The tape object containing the context and state

        Returns:
            str: The name of the next node to execute
        """
        observations = [step for step in tape.steps if isinstance(step, Observation)]
        last_observation = observations[-1] if observations else None
        return self.observation_to_node.get(type(last_observation), self.default_node)

select_node(tape)

Selects the next node based on the type of the last observation in the tape.

Returns default_node if no observations exist or no matching type is found.

Parameters:

  • tape (Tape) –

    The tape object containing the context and state

Returns:

  • str ( str ) –

    The name of the next node to execute

Source code in tapeagents/nodes.py
def select_node(self, tape: Tape) -> str:
    """
    Selects the next node based on the type of the last observation in the tape.

    Returns default_node if no observations exist or no matching type is found.

    Args:
        tape (Tape): The tape object containing the context and state

    Returns:
        str: The name of the next node to execute
    """
    observations = [step for step in tape.steps if isinstance(step, Observation)]
    last_observation = observations[-1] if observations else None
    return self.observation_to_node.get(type(last_observation), self.default_node)
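The type-based dispatch boils down to a dict lookup on `type(last_observation)`; with hypothetical observation classes it can be sketched as:

```python
class SuccessObservation: ...
class ErrorObservation: ...

observation_to_node = {SuccessObservation: "success_node", ErrorObservation: "error_node"}

def select_node(observations: list, default_node: str = "fallback_node") -> str:
    # An empty tape, or an unmapped observation type, falls through to the default.
    last = observations[-1] if observations else None
    return observation_to_node.get(type(last), default_node)

print(select_node([SuccessObservation()]))  # success_node
print(select_node([]))                      # fallback_node
```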