VisionMultiModalModelConfig

Abstract

This class cannot be instantiated directly. Use one of the variants listed below.

Module: fast_llm.layers.vision.config

Inherits from: LanguageModelConfig, BlockConfig, ModuleConfig

Fields

decoder (architecture)

Type: BlockSequenceConfig    Default: (sub-fields optional)

Configuration for the language model decoder.

embeddings (architecture)

Type: LanguageModelEmbeddingsConfig    Default: (sub-fields optional)

Configuration for the language model embeddings.

head (architecture)

Type: LanguageModelHeadConfig    Default: (sub-fields optional)

Configuration for the language model head(s).

hidden_size (architecture)

Type: int    Default: 1024

Size of the model's main hidden dimension, e.g., for its input and output layers.

tied_embedding_weight (architecture)

Type: bool    Default: False

Tie the output weights (logits) with the vocabulary embedding.
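In general terms, tying means the output head scores each vocabulary token with the same vector used to embed it, so one matrix serves both roles. A minimal plain-Python illustration of the idea (not Fast-LLM code; the shapes and values are made up):

```python
# One shared matrix: row i is both the embedding of token i and the
# output weight vector that produces the logit for token i.
vocab_size, hidden_size = 5, 3
shared = [[0.1 * (i + j) for j in range(hidden_size)] for i in range(vocab_size)]

def embed(token_id):
    # Look up the embedding row for a token.
    return shared[token_id]

def logits(hidden):
    # The output projection reuses the embedding rows (W_out = W_embed).
    return [sum(w * h for w, h in zip(row, hidden)) for row in shared]

h = embed(2)
scores = logits(h)
assert len(scores) == vocab_size
```

With `tied_embedding_weight: False`, the head would instead own a separate weight matrix of the same shape.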

vision_encoder (architecture)

Type: VisionEncoderConfig    Default: (sub-fields optional)

Configuration for the vision encoder.
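Taken together, the architecture fields above might appear in a YAML config along these lines. This is only a sketch: the top-level key and the sub-field layout are assumptions, not verified against Fast-LLM's config loader; only the field names and defaults come from this page.

```yaml
model:
  hidden_size: 1024            # main hidden dimension
  tied_embedding_weight: false # tie logits with the vocabulary embedding
  vision_encoder: {}           # VisionEncoderConfig; sub-fields optional
  embeddings: {}               # LanguageModelEmbeddingsConfig; sub-fields optional
  decoder: {}                  # BlockSequenceConfig; sub-fields optional
  head: {}                     # LanguageModelHeadConfig; sub-fields optional
```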

image_token_index (optional)

Type: int or None    Default: None

Index of the image token. Unused internally, but required for Hugging Face conversion.

lr_scale (feature)

Type: float or None    Default: None

Scaling factor for the layer learning rate. Combines multiplicatively with the scale set by the parent and child layers, if applicable.
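The multiplicative combination described above can be sketched as follows. The helper name and the treatment of `None` as "no scaling" are assumptions for illustration, not Fast-LLM internals:

```python
def combine_lr_scale(*scales):
    """Multiply all given scales together; None means 'no scaling' (1.0)."""
    effective = 1.0
    for s in scales:
        if s is not None:
            effective *= s
    return effective

# Hypothetical example: parent module scales by 0.5, this layer leaves
# lr_scale at its default None, a child layer scales by 0.1.
base_lr = 3e-4
effective_lr = base_lr * combine_lr_scale(0.5, None, 0.1)
# 3e-4 * 0.5 * 0.1 = 1.5e-5
```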