# LLM Node
GraSP supports text generation using LLMs and integrates with various LLMs hosted on different inference servers. To use an LLM node, include the following configuration in your `graph_config.yaml` file:
## Example Configuration
```yaml
paraphrase_question:
  node_type: llm
  prompt:
    - system: |
        You are an assistant tasked with paraphrasing a user query in a {tone} tone acting as a {persona}. Do NOT change/paraphrase the python code and keep it as is. Do NOT generate any conversational text and respond ONLY with the paraphrased query in the following format: "PARAPHRASED QUERY: <query>"
    - user: |
        USER QUERY: Provide a brief description of the problem the code is trying to solve and a brief explanation of the code. Do NOT generate any conversational text and respond ONLY with the problem the code is trying to solve and the explanation of the code.
        {code}
  post_process: tasks.mbpp.code_explanation.task_executor.ParaphraseQuestionNodePostProcessor
  output_keys:
    - rephrased_text
  model:
    name: mistralai
    parameters:
      temperature: 0.3
```
## Configuration Fields
- `node_type`: This should be set to `llm`.
- `prompt`: This is the prompt that will be sent to the LLM. It should contain the system prompt and the user prompt. The system prompt defines the instructions for the LLM, and the user prompt provides the user query.
- `post_process`: This is a functional class of type `NodePostProcessor`, used to post-process the output from the LLM; it is typically defined by the user in their `task_executor` file. The class needs to define an `apply()` method that takes a `GraspMessage` parameter. `GraspMessage` is just a wrapper around the actual LangGraph message object (`AIMessage`, `UserMessage`, etc.). Note that if the variables returned by this method are required as state variables, they should be declared in the node's `output_keys` field. For backward compatibility, you can also set a plain method with the above signature as `post_process`.
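For illustration only, a minimal post-processor for the example node above might look like the sketch below. The import paths for `NodePostProcessor` and `GraspMessage`, the attribute used to reach the wrapped LangGraph message, and the convention of returning the output keys as a dict are assumptions; check your GraSP version for the exact interface.

```python
# NodePostProcessor / GraspMessage imports omitted: their paths depend on your GraSP installation


class ParaphraseQuestionNodePostProcessor(NodePostProcessor):
    def apply(self, message: GraspMessage) -> dict:
        # Assumption: the wrapped LangGraph message is exposed as `message.message`
        raw_text = message.message.content

        # Strip the "PARAPHRASED QUERY:" prefix requested in the system prompt
        rephrased = raw_text.split("PARAPHRASED QUERY:", 1)[-1].strip()

        # Assumption: the returned keys are written to the node's output_keys
        return {"rephrased_text": rephrased}
```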
- `output_keys`: These are the variables used to store the output from the LLM. It can be a list or a single variable.
  - If a postprocessor is not defined, the default postprocessor is invoked, and the output is stored in `output_keys`.
  - If a postprocessor is defined, `output_keys` can include multiple variables.
  - Note: `output_vars` and `output_key` are deprecated. With this change, access the output keys directly from the state variables (see the sketch below).
  - Note: By default, the returned message is an assistant message. To change the role of the message, use `output_role`.
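Once the node has run, downstream code can read those keys straight from the state. A minimal sketch, assuming the `rephrased_text` key from the example configuration above (the helper function name here is hypothetical):

```python
# GraspState import omitted: its path depends on your GraSP installation


def build_critique_input(state: GraspState) -> str:
    # Values written via output_keys are available directly on the graph state
    return f"Critique this question:\n{state['rephrased_text']}"
```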
- `model`: This defines the LLM model to be used. The primary model configuration should be specified in the `models.yaml` file under the config folder. Parameters defined in the node override those in `models.yaml`.
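The exact schema of `models.yaml` is not shown here; purely as an illustration of the override behavior, an entry for the example node might look roughly like the sketch below, where the node-level `temperature: 0.3` takes precedence over the default. Treat this layout as an assumption and refer to your own `config/models.yaml`.

```yaml
# Assumed models.yaml layout (illustrative only)
mistralai:
  parameters:
    temperature: 0.7   # default; the node-level temperature: 0.3 overrides this
```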
- `pre_process`: This is an optional functional class of type `NodePreProcessor`, used to preprocess the input before sending it to the LLM. If not provided, the default preprocessor is used. This class needs to define an `apply` method that takes a `GraspState` parameter.
Example code:

```python
# AIMessage / HumanMessage come from LangChain, which LangGraph builds on
from langchain_core.messages import AIMessage, HumanMessage


class CritiqueAnsNodePreProcessor(NodePreProcessor):
    def apply(self, state: GraspState) -> GraspState:
        if not state["messages"]:
            state["messages"] = []

        # We need to convert user turns to assistant and vice versa
        cls_map = {"ai": HumanMessage, "human": AIMessage}
        translated = [cls_map[msg.type](content=msg.content) for msg in state["messages"]]
        state.update({"messages": translated})
        return state
```
This also has backward compatibility: instead of a class, you can set a plain method with the above signature as `pre_process`.
- `output_key`: The old behavior is still maintained with `output_keys`, but the variable has been renamed; this may impact your `graph_config.yaml` file and the output generator code.
- `input_key`: This is an optional field to specify the input key for the LLM node. If not defined, the default input key (`messages`) will be used.
- `output_role`: This defines the role of the message returned by the LLM. It can be `system`, `user`, or `assistant`. If not specified, the default role (`assistant`) will be used.
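For example, a second LLM node that reads its input from a custom state key and returns its message as a user turn could be configured as sketched below. The node name, prompt, and `critique_text` output key are hypothetical, and using a state key such as `rephrased_text` for `input_key` is an assumption based on the description above.

```yaml
critique_question:
  node_type: llm
  prompt:
    - system: |
        You are a strict reviewer. Critique the question provided by the user.
  input_key: rephrased_text   # assumption: read input from this state key instead of "messages"
  output_role: user           # return the LLM's reply as a user turn instead of the default assistant turn
  output_keys:
    - critique_text
  model:
    name: mistralai
```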