LmEvalEvaluatorConfig

Module: fast_llm.engine.evaluation.config

Variant of: EvaluatorConfig — select with type: lm_eval

Inherits from: EvaluatorConfig, IntervalConfig

Fields

interval

Type: int or None    Default: None

The number of training iterations between evaluations. Set to None to disable.

offset

Type: int    Default: 0

Offset for the first interval.

add_bos_token

Type: bool    Default: False

Whether to prepend a beginning-of-sequence (BOS) token, required for some models like LLaMA; passed to the Fast-LLM lm_eval model wrapper.

cli_args

Type: list[str]    Default: [] (empty list, via factory)

lm_eval CLI arguments, excluding those related to model, wandb, batch sizes, and device.

communication_timeout_sec

Type: float    Default: 600.0

Maximum wait time (in seconds) for tensor-parallel or data-parallel model operations such as forward, generate, or gathering data. A higher limit is needed because some ranks may have no data, or post-processing may be slow, exceeding the default 60-second timeout.

logits_cache

Type: bool    Default: True

Whether to enable logits caching for speedup and avoiding recomputation during repeated evaluations; passed to the Fast-LLM lm_eval model wrapper.

max_length

Type: int or None    Default: None

Maximum sequence length including both prompt and newly generated tokens. If not set, it is inferred from the Fast-LLM model config or tokenizer.

prefix_token_id

Type: int or None    Default: None

Token ID to use as a prefix to the input (e.g., for control codes or prompts); passed to the Fast-LLM lm_eval model wrapper.

tokenizer

Type: TokenizerConfig    Default: (sub-fields optional)

Configuration for the tokenizer.

truncation

Type: bool    Default: False

Whether to use truncation during tokenization (useful when inputs exceed the model's maximum length); passed to the Fast-LLM lm_eval model wrapper.
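
Example

A minimal sketch of selecting this variant in a YAML config. Only the evaluator body (`type: lm_eval` and the fields documented above) comes from this page; the surrounding keys (`training`, `evaluators`, and the evaluator name `lm_eval_run`), as well as the task names in `cli_args`, are illustrative assumptions:

```yaml
# Hypothetical nesting — adapt to your Fast-LLM training config layout.
training:
  evaluators:
    lm_eval_run:
      type: lm_eval                  # selects LmEvalEvaluatorConfig
      interval: 1000                 # evaluate every 1000 training iterations
      offset: 0                      # run the first evaluation at iteration 0
      add_bos_token: true            # e.g. for LLaMA-style models
      communication_timeout_sec: 900.0
      max_length: 4096               # prompt + generated tokens
      truncation: true
      cli_args:                      # lm_eval CLI args; model/wandb/batch/device
        - --tasks                    # flags are excluded per the field docs
        - gsm8k
        - --num_fewshot
        - "5"
```

Fields left unset (e.g. `prefix_token_id`, `logits_cache`, `tokenizer`) keep the defaults listed above.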