lambeq.bobcat¶
- class lambeq.bobcat.BertForChartClassification(config: ChartClassifierConfig)[source]¶
Bases:
BertPreTrainedModel
- T_destination = ~T_destination¶
- __call__(*args, **kwargs)¶
Call self as a function.
- __init__(config: ChartClassifierConfig) None [source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- active_adapter() str ¶
- active_adapters() List[str] ¶
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters for inference) returns the list of all active adapters so that users can deal with them accordingly.
For previous PEFT versions (which do not support multi-adapter inference), module.active_adapter will return a single string.
- add_adapter(adapter_config, adapter_name: str | None = None) None ¶
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
Adds a fresh new adapter to the current model for training purposes. If no adapter name is passed, a default name is assigned to the adapter to follow the convention of the PEFT library (in PEFT we use “default” as the default adapter name).
- Args:
- adapter_config (~peft.PeftConfig):
The configuration of the adapter to add, supported adapters are non-prefix tuning and adaption prompts methods
- adapter_name (str, optional, defaults to “default”):
The name of the adapter to add. If no name is passed, a default name is assigned to the adapter.
- add_memory_hooks()¶
Add a memory hook before and after each sub-module forward pass to record increase in memory consumption.
Increase in memory consumption is stored in a mem_rss_diff attribute for each module and can be reset to zero with model.reset_memory_hooks_state().
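A minimal sketch of how these hooks might be used; the model and inputs variables are assumed to exist and are not defined here:
```python
>>> # xdoctest: +SKIP("illustrative sketch; model and inputs not defined here")
>>> model.add_memory_hooks()
>>> _ = model(**inputs)
>>> for name, module in model.named_modules():
...     print(name, getattr(module, "mem_rss_diff", 0))  # RSS growth recorded per module
>>> model.reset_memory_hooks_state()
```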
- add_model_tags(tags: List[str] | str) None ¶
Add custom tags to the model that get pushed to the Hugging Face Hub. Will not overwrite existing tags in the model.
- Args:
- tags (Union[List[str], str]):
The desired tags to inject in the model
Examples:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-cased")
model.add_model_tags(["custom", "custom-bert"])

# Push the model to your namespace with the name "my-custom-bert".
model.push_to_hub("my-custom-bert")
```
- add_module(name: str, module: Module | None) None ¶
Add a child module to the current module.
The module can be accessed as an attribute using the given name.
- Args:
- name (str): name of the child module. The child module can be accessed from this module using the given name
- module (Module): child module to be added to the module.
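A short illustration using plain torch.nn modules (the block and proj names are invented for this sketch):
```python
>>> import torch.nn as nn
>>> block = nn.Module()
>>> block.add_module("proj", nn.Linear(768, 2))
>>> block.proj  # the child is now accessible as an attribute
Linear(in_features=768, out_features=2, bias=True)
```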
- apply(fn: Callable[[Module], None]) T ¶
Apply fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also nn-init-doc).
- Args:
fn (Module -> None): function to be applied to each submodule
- Returns:
Module: self
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- property base_model: Module¶
torch.nn.Module: The main body of the model.
- base_model_prefix = 'bert'¶
- bfloat16() T ¶
Casts all floating point parameters and buffers to bfloat16 datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- buffers(recurse: bool = True) Iterator[Tensor] ¶
Return an iterator over module buffers.
- Args:
- recurse (bool): if True, then yields buffers of this module
and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
torch.Tensor: module buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- call_super_init: bool = False¶
- classmethod can_generate() bool ¶
Returns whether this model can generate sequences with .generate().
- Returns:
bool: Whether this model can generate sequences with .generate().
- children() Iterator[Module] ¶
Return an iterator over immediate children modules.
- Yields:
Module: a child module
- compile(*args, **kwargs)¶
Compile this Module’s forward using torch.compile().
This Module’s __call__ method is compiled and all arguments are passed as-is to torch.compile().
See torch.compile() for details on the arguments for this function.
- compute_transition_scores(sequences: Tensor, scores: Tuple[Tensor], beam_indices: Tensor | None = None, normalize_logits: bool = False) Tensor ¶
Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quickly obtain the scores of the selected tokens at generation time.
- Parameters:
- sequences (torch.LongTensor):
The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
- scores (tuple(torch.FloatTensor)):
Transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
- beam_indices (torch.LongTensor, optional):
Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length). Only required if a num_beams>1 at generate-time.
- normalize_logits (bool, optional, defaults to False):
Whether to normalize the logits (which, for legacy reasons, may be unnormalized).
- Return:
- torch.Tensor: A torch.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing
the transition scores (logits)
Examples:
```python
>>> from transformers import GPT2Tokenizer, AutoModelForCausalLM
>>> import numpy as np

>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> inputs = tokenizer(["Today is"], return_tensors="pt")

>>> # Example 1: Print the scores for each token generated with Greedy Search
>>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, normalize_logits=True
... )
>>> # input_length is the length of the input prompt for decoder-only models, like the GPT family, and 1 for
>>> # encoder-decoder models, like BART or T5.
>>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1]
>>> generated_tokens = outputs.sequences[:, input_length:]
>>> for tok, score in zip(generated_tokens[0], transition_scores[0]):
...     # | token | token string | log probability | probability
...     print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
|   262 | the      | -1.414 | 24.33%
|  1110 | day      | -2.609 | 7.36%
|   618 | when     | -2.010 | 13.40%
|   356 | we       | -1.859 | 15.58%
|   460 | can      | -2.508 | 8.14%

>>> # Example 2: Reconstruct the sequence scores from Beam Search
>>> outputs = model.generate(
...     **inputs,
...     max_new_tokens=5,
...     num_beams=4,
...     num_return_sequences=4,
...     return_dict_in_generate=True,
...     output_scores=True,
... )
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
... )
>>> # If you sum the generated tokens' scores and apply the length penalty, you'll get the sequence scores.
>>> # Tip 1: recomputing the scores is only guaranteed to match with `normalize_logits=False`. Depending on the
>>> # use case, you might want to recompute it with `normalize_logits=True`.
>>> # Tip 2: the output length does NOT include the input length
>>> output_length = np.sum(transition_scores.numpy() < 0, axis=1)
>>> length_penalty = model.generation_config.length_penalty
>>> reconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty)
>>> print(np.allclose(outputs.sequences_scores, reconstructed_scores))
True
```
- config_class¶
alias of ChartClassifierConfig
- contrastive_search(*args, **kwargs)¶
- cpu() T ¶
Move all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns:
Module: self
- static create_extended_attention_mask_for_decoder(input_shape, attention_mask, device=None)¶
- cuda(device: int | device | None = None) T ¶
Move all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Args:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- dequantize()¶
Potentially dequantize the model in case it has been quantized by a quantization method that supports dequantization.
- property device: device¶
torch.device: The device on which the module is (assuming that all the module parameters are on the same device).
- disable_adapters() None ¶
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
Disable all adapters that are attached to the model. This leads to inferring with the base model only.
- disable_input_require_grads()¶
Removes the _require_grads_hook.
- double() T ¶
Casts all floating point parameters and buffers to double datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- property dtype: dtype¶
torch.dtype: The dtype of the module (assuming that all the module parameters have the same dtype).
- property dummy_inputs: Dict[str, Tensor]¶
Dict[str, torch.Tensor]: Dummy inputs to do a forward pass in the network.
- dump_patches: bool = False¶
- enable_adapters() None ¶
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
Enable adapters that are attached to the model. The model will use self.active_adapter()
- enable_input_require_grads()¶
Enables the gradients for the input embeddings. This is useful for fine-tuning adapter weights while keeping the model weights fixed.
- estimate_tokens(input_dict: Dict[str, Tensor | Any]) int ¶
Helper function to estimate the total number of tokens from the model inputs.
- Args:
inputs (dict): The model inputs.
- Returns:
int: The total number of tokens.
- eval() T ¶
Set the module in evaluation mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behaviour in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See locally-disable-grad-doc for a comparison between .eval() and several similar mechanisms that may be confused with it.
- Returns:
Module: self
- extra_repr() str ¶
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
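A hedged sketch of overriding extra_repr in a custom module (the Scale class is invented for illustration):
```python
>>> import torch.nn as nn
>>> class Scale(nn.Module):
...     def __init__(self, factor: float):
...         super().__init__()
...         self.factor = factor
...     def extra_repr(self) -> str:
...         # This string is inserted into the module's printed representation.
...         return f"factor={self.factor}"
>>> print(Scale(2.0))
Scale(factor=2.0)
```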
- float(*args)¶
Casts all floating point parameters and buffers to float datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- floating_point_ops(input_dict: Dict[str, Tensor | Any], exclude_embeddings: bool = True) int ¶
Get number of (optionally, non-embeddings) floating-point operations for the forward and backward passes of a batch with this transformer model. Default approximation neglects the quadratic dependency on the number of tokens (valid if 12 * d_model << sequence_length) as laid out in [this paper](https://arxiv.org/pdf/2001.08361.pdf) section 2.1. Should be overridden for transformers with parameter re-use e.g. Albert or Universal Transformers, or if doing long-range modeling with very high sequence lengths.
- Args:
- batch_size (int):
The batch size for the forward pass.
- sequence_length (int):
The number of tokens in each line of the batch.
- exclude_embeddings (bool, optional, defaults to True):
Whether or not to count embedding and softmax operations.
- Returns:
int: The number of floating-point operations.
- forward(input_ids: LongTensor | None = None, attention_mask: FloatTensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, head_mask: FloatTensor | None = None, inputs_embeds: FloatTensor | None = None, tag_labels: LongTensor | None = None, span_labels: LongTensor | None = None, word_mask: BoolTensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None) ChartClassifierOutput | tuple[Any, ...] [source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
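Since the class follows the standard transformers calling convention, a hedged sketch of invoking it looks like the following; the tensors are placeholders, and in practice the model is normally driven by lambeq's Bobcat parser rather than called directly:
```python
>>> # xdoctest: +SKIP("illustrative sketch; input tensors not defined here")
>>> # Call the module instance, not .forward(), so that registered hooks are run:
>>> outputs = model(
...     input_ids=input_ids,
...     attention_mask=attention_mask,
...     word_mask=word_mask,
...     return_dict=True,
... )  # returns a ChartClassifierOutput
```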
- property framework: str¶
- Str:
Identifies that this is a PyTorch model.
- classmethod from_pretrained(pretrained_model_name_or_path: str | PathLike | None, *model_args, config: PretrainedConfig | str | PathLike | None = None, cache_dir: str | PathLike | None = None, ignore_mismatched_sizes: bool = False, force_download: bool = False, local_files_only: bool = False, token: bool | str | None = None, revision: str = 'main', use_safetensors: bool = None, **kwargs) PreTrainedModel ¶
Instantiate a pretrained pytorch model from a pre-trained model configuration.
The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.
The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
If the model weights are in the same precision as the base model (and the model architecture is supported), weights are lazily loaded using the meta device and brought into memory once an input is passed through that layer, regardless of low_cpu_mem_usage.
- Parameters:
- pretrained_model_name_or_path (str or os.PathLike, optional):
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using [~PreTrainedModel.save_pretrained], e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
A path or url to a model folder containing a flax checkpoint file in .msgpack format (e.g., ./flax_model/ containing flax_model.msgpack). In this case, from_flax should be set to True.
None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
- model_args (sequence of positional arguments, optional):
All remaining positional arguments will be passed to the underlying model’s __init__ method.
- config (Union[PretrainedConfig, str, os.PathLike], optional):
Can be either:
an instance of a class derived from [PretrainedConfig],
a string or path valid as input to [~PretrainedConfig.from_pretrained].
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using [~PreTrainedModel.save_pretrained] and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional):
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [~PreTrainedModel.save_pretrained] and [~PreTrainedModel.from_pretrained] is not a simpler option.
- cache_dir (Union[str, os.PathLike], optional):
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False):
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- from_flax (bool, optional, defaults to False):
Load the model weights from a Flax checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- ignore_mismatched_sizes (bool, optional, defaults to False):
Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels).
- force_download (bool, optional, defaults to False):
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download:
Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
- proxies (Dict[str, str], optional):
A dictionary of proxy servers to use by protocol or endpoint, e.g., {‘http’: ‘foo.bar:3128’, ‘http://hostname’: ‘foo.bar:4012’}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False):
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False):
Whether or not to only look at local files (i.e., do not try to download the model).
- token (str or bool, optional):
The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to “main”):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".
- mirror (str, optional):
Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
- _fast_init(bool, optional, defaults to True):
Whether or not to disable fast initialization.
One should only disable _fast_init to ensure backwards compatibility with transformers.__version__ < 4.6.0 for seeded model initialization. This argument will be removed at the next major version. See [pull request 11471](https://github.com/huggingface/transformers/pull/11471) for more information.
- attn_implementation (str, optional):
The attention implementation to use in the model (if relevant). Can be any of “eager” (manual implementation of the attention), “sdpa” (using [F.scaled_dot_product_attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), or “flash_attention_2” (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual “eager” implementation.
> Parameters for big model inference
- low_cpu_mem_usage (bool, optional):
Tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Generally should be combined with a device_map (such as "auto") for best results. This is an experimental feature and subject to change at any moment.
If the model weights are in the same precision as the model being loaded, low_cpu_mem_usage (without device_map) is redundant and will not provide any benefit in regards to CPU memory usage. However, this should still be enabled if you are passing in a device_map.
- torch_dtype (str or torch.dtype, optional):
Override the default torch.dtype and load the model under a specific dtype. The different options are:
torch.float16, torch.bfloat16 or torch.float: load in the specified dtype, ignoring the model's config.torch_dtype if one exists. If not specified, the model is loaded in torch.float (fp32).
"auto": the torch_dtype entry in the model's config.json file is used if present. If this entry isn't found, the dtype of the first floating-point weight in the checkpoint is used, so the model is loaded in the dtype it was saved in at the end of training. This cannot be used as an indicator of how the model was trained, since it could be trained in a half-precision dtype but saved in fp32.
A string that is a valid torch.dtype, e.g. "float32" loads the model in torch.float32, "float16" loads in torch.float16, etc.
For some models the dtype they were trained in is unknown - you may try to check the model’s paper or reach out to the authors and ask them to add this information to the model’s card and to insert the torch_dtype entry in config.json on the hub.
- device_map (str or Dict[str, Union[int, str, torch.device]] or int or torch.device, optional):
A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. If we only pass the device (e.g., “cpu”, “cuda:1”, “mps”, or a GPU ordinal rank like 1) on which the model will be allocated, the device map will map the entire model to this device. Passing device_map = 0 means put the whole model on GPU 0.
To have Accelerate compute the most optimized device_map automatically, set device_map=”auto”. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- max_memory (Dict, optional):
A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.
- offload_folder (str or os.PathLike, optional):
If the device_map contains any value “disk”, the folder where we will offload weights.
- offload_state_dict (bool, optional):
If True, will temporarily offload the CPU state dict to the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload.
- offload_buffers (bool, optional):
Whether or not to offload the buffers with the model parameters.
- quantization_config (Union[QuantizationConfigMixin,Dict], optional):
A dictionary of configuration parameters or a QuantizationConfigMixin object for quantization (e.g. bitsandbytes, gptq). There may be other quantization-related kwargs, including load_in_4bit and load_in_8bit, which are parsed by QuantizationConfigParser. Supported only for bitsandbytes quantizations and not preferred; consider inserting all such arguments into quantization_config instead.
- subfolder (str, optional, defaults to “”):
In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.
- variant (str, optional):
If specified load weights from variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_tf or from_flax.
- use_safetensors (bool, optional, defaults to None):
Whether or not to use safetensors checkpoints. Defaults to None. If not specified and safetensors is not installed, it will be set to False.
- kwargs (remaining dictionary of keyword arguments, optional):
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ([~PretrainedConfig.from_pretrained]). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Activate the special [“offline-mode”](https://huggingface.co/transformers/installation.html#offline-mode) to use this method in a firewalled environment.
Examples:
```python
>>> from transformers import BertConfig, BertModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = BertModel.from_pretrained("google-bert/bert-base-uncased")
>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
>>> model = BertModel.from_pretrained("./test/saved_model/")
>>> # Update configuration during loading.
>>> model = BertModel.from_pretrained("google-bert/bert-base-uncased", output_attentions=True)
>>> assert model.config.output_attentions == True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower, for example purposes, not runnable).
>>> config = BertConfig.from_json_file("./tf_model/my_tf_model_config.json")
>>> model = BertModel.from_pretrained("./tf_model/my_tf_checkpoint.ckpt.index", from_tf=True, config=config)
>>> # Loading from a Flax checkpoint file instead of a PyTorch model (slower)
>>> model = BertModel.from_pretrained("google-bert/bert-base-uncased", from_flax=True)
```
low_cpu_mem_usage algorithm:
This is an experimental function that loads the model using ~1x model size CPU memory
Here is how it works:
1. save which state_dict keys we have
2. drop the state_dict before the model is created, since the latter takes 1x model size CPU memory
3. after the model has been instantiated, switch to the meta device all params/buffers that are going to be replaced from the loaded state_dict
4. load the state_dict a second time
5. replace the params/buffers from the state_dict
Currently, it can’t handle deepspeed ZeRO stage 3 and ignores loading errors
- generate(inputs: Tensor | None = None, generation_config: GenerationConfig | None = None, logits_processor: LogitsProcessorList | None = None, stopping_criteria: StoppingCriteriaList | None = None, prefix_allowed_tokens_fn: Callable[[int, Tensor], List[int]] | None = None, synced_gpus: bool | None = None, assistant_model: PreTrainedModel | None = None, streamer: BaseStreamer | None = None, negative_prompt_ids: Tensor | None = None, negative_prompt_attention_mask: Tensor | None = None, **kwargs) GenerateDecoderOnlyOutput | GenerateEncoderDecoderOutput | GenerateBeamDecoderOnlyOutput | GenerateBeamEncoderDecoderOutput | LongTensor ¶
Generates sequences of token ids for models with a language modeling head.
Most generation-controlling parameters are set in generation_config which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config by passing the corresponding parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True).
For an overview of generation strategies and code examples, check out the [following guide](../generation_strategies).
- Parameters:
- inputs (torch.Tensor of varying shape depending on the modality, optional):
The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.
- generation_config ([~generation.GenerationConfig], optional):
The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit [~generation.GenerationConfig]’s default values, whose documentation should be checked to parameterize generation.
- logits_processor (LogitsProcessorList, optional):
Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users.
- stopping_criteria (StoppingCriteriaList, optional):
Custom stopping criteria that complements the default stopping criteria built from arguments and a generation config. If a stopping criteria is passed that is already created with the arguments or a generation config an error is thrown. If your stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate. This feature is intended for advanced users.
- prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional):
If provided, this function constrains the beam search to allowed tokens only at each step. If not provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens inputs_ids. This argument is useful for constrained generation conditioned on the prefix, as described in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904).
- synced_gpus (bool, optional):
Whether to continue running the while loop until max_length. Unless overridden this flag will be set to True under DeepSpeed ZeRO Stage 3 multiple GPUs environment to avoid hanging if one GPU finished generating before other GPUs. Otherwise it’ll be set to False.
- assistant_model (PreTrainedModel, optional):
An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you’re calling generate from. As such, the assistant model should be much smaller.
- streamer (BaseStreamer, optional):
Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing.
- negative_prompt_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions.
- negative_prompt_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional):
Attention_mask for negative_prompt_ids.
- kwargs (Dict[str, Any], optional):
Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.
- Return:
[~utils.ModelOutput] or torch.LongTensor: A [~utils.ModelOutput] (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.LongTensor.
If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible [~utils.ModelOutput] types are:
[~generation.GenerateDecoderOnlyOutput],
[~generation.GenerateBeamDecoderOnlyOutput]
If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible [~utils.ModelOutput] types are:
[~generation.GenerateEncoderDecoderOutput],
[~generation.GenerateBeamEncoderDecoderOutput]
- get_adapter_state_dict(adapter_name: str | None = None) dict ¶
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
Gets the adapter state dict that should only contain the weights tensors of the specified adapter_name adapter. If no adapter_name is passed, the active adapter is used.
- Args:
- adapter_name (str, optional):
The name of the adapter to get the state dict from. If no name is passed, the active adapter is used.
- get_buffer(target: str) Tensor ¶
Return the buffer given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.
- Args:
- target: The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
torch.Tensor: The buffer referenced by target
- Raises:
- AttributeError: If the target string references an invalid path or resolves to something that is not a buffer
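A small example on a plain torch.nn module; BatchNorm1d registers running_mean as a buffer, so the fully-qualified name below resolves to it:
```python
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.BatchNorm1d(4))
>>> net.get_buffer("0.running_mean")
tensor([0., 0., 0., 0.])
```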
- get_extended_attention_mask(attention_mask: Tensor, input_shape: Tuple[int], device: device = None, dtype: torch.float32 = None) Tensor ¶
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
- Arguments:
- attention_mask (torch.Tensor):
Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
- input_shape (Tuple[int]):
The shape of the input to the model.
- Returns:
torch.Tensor: The extended attention mask, with the same dtype as attention_mask.dtype.
- get_extra_state() Any ¶
Return any extra state to include in the module’s state_dict.
Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().
Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns:
object: Any extra state to store in the module’s state_dict
- get_head_mask(head_mask: Tensor | None, num_hidden_layers: int, is_attention_chunked: bool = False) Tensor ¶
Prepare the head mask if needed.
- Args:
- head_mask (torch.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional):
The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard).
- num_hidden_layers (int):
The number of hidden layers in the model.
- is_attention_chunked (bool, optional, defaults to False):
Whether or not the attention scores are computed by chunks.
- Returns:
torch.Tensor with shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] or list with [None] for each layer.
- get_input_embeddings() Module ¶
Returns the model’s input embeddings.
- Returns:
nn.Module: A torch module mapping vocabulary to hidden states.
- get_memory_footprint(return_buffers=True)¶
Get the memory footprint of a model. This will return the memory footprint of the current model in bytes. Useful to benchmark the memory footprint of the current model and design some tests. Solution inspired from the PyTorch discussions: https://discuss.pytorch.org/t/gpu-memory-that-model-uses/56822/2
- Arguments:
- return_buffers (bool, optional, defaults to True):
Whether to return the size of the buffer tensors in the computation of the memory footprint. Buffers are tensors that do not require gradients and are not registered as parameters, e.g. mean and std in batch norm layers. Please see: https://discuss.pytorch.org/t/what-pytorch-means-by-buffers/120266/2
- get_output_embeddings() Module ¶
Returns the model’s output embeddings.
- Returns:
nn.Module: A torch module mapping hidden states to vocabulary.
- get_parameter(target: str) Parameter ¶
Return the parameter given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.
- Args:
- target: The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
torch.nn.Parameter: The Parameter referenced by target
- Raises:
- AttributeError: If the target string references an invalid path or resolves to something that is not an nn.Parameter
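A small example on a toy nn.Sequential; the same dotted-path convention applies to this model's own parameters:
```python
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Linear(2, 3))
>>> net.get_parameter("0.bias").shape
torch.Size([3])
```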
- get_position_embeddings() Embedding | Tuple[Embedding] ¶
- get_submodule(target: str) Module ¶
Return the submodule given by target if it exists, otherwise throw an error.
For example, let’s say you have an nn.Module A that looks like this:
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Args:
- target: The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)
- Returns:
torch.nn.Module: The submodule referenced by target
- Raises:
- AttributeError: If the target string references an invalid path or resolves to something that is not an nn.Module
- gradient_checkpointing_disable()¶
Deactivates gradient checkpointing for the current model.
Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.
- gradient_checkpointing_enable(gradient_checkpointing_kwargs=None)¶
Activates gradient checkpointing for the current model.
Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.
We pass the __call__ method of the modules instead of forward because __call__ attaches all the hooks of the module. https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2
- Args:
- gradient_checkpointing_kwargs (dict, optional):
Additional keyword arguments passed along to the torch.utils.checkpoint.checkpoint function.
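A hedged sketch of enabling activation checkpointing before fine-tuning; use_reentrant is one of torch.utils.checkpoint.checkpoint's keyword arguments, and the model is assumed to be loaded already:
```python
>>> # xdoctest: +SKIP("illustrative sketch; model not defined here")
>>> model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})
>>> model.is_gradient_checkpointing
True
```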
- half(*args)¶
Casts all floating point parameters and buffers to half datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- heal_tokens(input_ids: LongTensor, tokenizer: PreTrainedTokenizerBase | None = None) LongTensor ¶
Generates sequences of token ids for models with a language modeling head.
- Parameters:
- input_ids (torch.LongTensor): The sequence used as a prompt for the generation.
- tokenizer (PreTrainedTokenizerBase, optional): The tokenizer used to decode the input ids.
- Return:
torch.LongTensor where each sequence has its tail token replaced with its appropriate extension.
- init_weights()¶
If needed prunes and maybe initializes weights. If using a custom PreTrainedModel, you need to implement any initialization logic in _init_weights.
- invert_attention_mask(encoder_attention_mask: Tensor) Tensor ¶
Invert an attention mask (e.g., switches 0. and 1.).
- Args:
encoder_attention_mask (torch.Tensor): An attention mask.
- Returns:
torch.Tensor: The inverted attention mask.
- ipu(device: int | device | None = None) T ¶
Move all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.
Note
This method modifies the module in-place.
- Arguments:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- property is_gradient_checkpointing: bool¶
Whether gradient checkpointing is activated for this model or not.
Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.
- is_parallelizable = False¶
- load_adapter(peft_model_id: str | None = None, adapter_name: str | None = None, revision: str | None = None, token: str | None = None, device_map: str | None = 'auto', max_memory: str | None = None, offload_folder: str | None = None, offload_index: int | None = None, peft_config: Dict[str, Any] = None, adapter_state_dict: Dict[str, Tensor] | None = None, adapter_kwargs: Dict[str, Any] | None = None) None ¶
Load adapter weights from file or remote Hub folder. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on PEFT official documentation: https://huggingface.co/docs/peft
Requires peft as a backend to load the adapter weights.
- Args:
- peft_model_id (str, optional):
The identifier of the model to look for on the Hub, or a local path to the saved adapter config file and adapter weights.
- adapter_name (str, optional):
The adapter name to use. If not set, will use the default adapter.
- revision (str, optional, defaults to “main”):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".
- token (str, optional):
Whether to use an authentication token to load the remote folder. Useful to load private repositories that are on the HuggingFace Hub. You might need to call huggingface-cli login and paste your token to cache it.
- device_map (str or Dict[str, Union[int, str, torch.device]] or int or torch.device, optional):
A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. If we only pass the device (e.g., “cpu”, “cuda:1”, “mps”, or a GPU ordinal rank like 1) on which the model will be allocated, the device map will map the entire model to this device. Passing device_map = 0 means put the whole model on GPU 0.
To have Accelerate compute the most optimized device_map automatically, set device_map=”auto”. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- max_memory (Dict, optional):
A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.
- offload_folder (str or os.PathLike, optional):
If the device_map contains any value “disk”, the folder where we will offload weights.
- offload_index (int, optional):
offload_index argument to be passed to accelerate.dispatch_model method.
- peft_config (Dict[str, Any], optional):
The configuration of the adapter to add, supported adapters are non-prefix tuning and adaption prompts methods. This argument is used in case users directly pass PEFT state dicts
- adapter_state_dict (Dict[str, torch.Tensor], optional):
The state dict of the adapter to load. This argument is used in case users directly pass PEFT state dicts
- adapter_kwargs (Dict[str, Any], optional):
Additional keyword arguments passed along to the from_pretrained method of the adapter config and find_adapter_config_file method.
- load_state_dict(state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False)¶
Copy parameters and buffers from state_dict into this module and its descendants.
If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.
Warning
If assign is True, the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.
- Args:
- state_dict (dict): a dict containing parameters and persistent buffers.
- strict (bool, optional): whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
- assign (bool, optional): When False, the properties of the tensors in the current module are preserved, while when True, the properties of the Tensors in the state dict are preserved. The only exception is the requires_grad field of Parameters, for which the value from the module is preserved. Default: False
- Returns:
NamedTuple with missing_keys and unexpected_keys fields:
- missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.
- unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.
- Note:
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
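A hedged sketch of restoring weights previously saved with torch.save(model.state_dict(), ...); the checkpoint file name is a placeholder:
```python
>>> # xdoctest: +SKIP("illustrative sketch; checkpoint file is hypothetical")
>>> import torch
>>> state = torch.load("chart_model.pt", map_location="cpu")
>>> missing, unexpected = model.load_state_dict(state, strict=False)
>>> missing, unexpected  # keys absent from the file / keys not used by the module
```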
- load_tf_weights(config, tf_checkpoint_path)¶
Load TF checkpoints in a PyTorch model.
- main_input_name = 'input_ids'¶
- model_tags = None¶
- modules() Iterator[Module] ¶
Return an iterator over all modules in the network.
- Yields:
Module: a module in the network
- Note:
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
- named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Tensor]] ¶
Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- Args:
prefix (str): prefix to prepend to all buffer names.
recurse (bool, optional): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True.
remove_duplicate (bool, optional): whether to remove the duplicated buffers in the result. Defaults to True.
- Yields:
(str, torch.Tensor): Tuple containing the name and buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
- named_children() Iterator[Tuple[str, Module]] ¶
Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- Yields:
(str, Module): Tuple containing a name and child module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
- named_modules(memo: Set[Module] | None = None, prefix: str = '', remove_duplicate: bool = True)¶
Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- Args:
memo: a memo to store the set of modules already added to the result
prefix: a prefix that will be added to the name of the module
remove_duplicate: whether to remove the duplicated module instances in the result or not
- Yields:
(str, Module): Tuple of name and module
- Note:
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
- named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Parameter]] ¶
Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- Args:
prefix (str): prefix to prepend to all parameter names.
recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
remove_duplicate (bool, optional): whether to remove the duplicated parameters in the result. Defaults to True.
- Yields:
(str, Parameter): Tuple containing the name and parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
- num_parameters(only_trainable: bool = False, exclude_embeddings: bool = False) int ¶
Get number of (optionally, trainable or non-embeddings) parameters in the module.
- Args:
- only_trainable (bool, optional, defaults to False):
Whether or not to return only the number of trainable parameters
- exclude_embeddings (bool, optional, defaults to False):
Whether or not to return only the number of non-embeddings parameters
- Returns:
int: The number of parameters.
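A hedged one-liner comparing total and trainable non-embedding parameter counts; the model is assumed to be loaded:
```python
>>> # xdoctest: +SKIP("illustrative sketch; model not defined here")
>>> total = model.num_parameters()
>>> trainable = model.num_parameters(only_trainable=True, exclude_embeddings=True)
>>> print(f"{trainable}/{total} trainable non-embedding parameters")
```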
- parameters(recurse: bool = True) Iterator[Parameter] ¶
Return an iterator over module parameters.
This is typically passed to an optimizer.
- Args:
- recurse (bool): if True, then yields parameters of this module
and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter: module parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
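The usual pattern is to hand this iterator to an optimizer; a minimal sketch with an arbitrary learning rate:
```python
>>> # xdoctest: +SKIP("illustrative sketch; model not defined here")
>>> import torch
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
```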
- post_init()¶
A method executed at the end of each Transformer model initialization, to execute code that needs the model’s modules properly initialized (such as weight initialization).
- prepare_inputs_for_generation(*args, **kwargs)¶
- prune_heads(heads_to_prune: Dict[int, List[int]])¶
Prunes heads of the base model.
- Arguments:
- heads_to_prune (Dict[int, List[int]]):
Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
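A hedged sketch mirroring the dictionary format described above; the model is assumed to be loaded:
```python
>>> # xdoctest: +SKIP("illustrative sketch; model not defined here")
>>> # Prune heads 0 and 2 on layer 1, and heads 2 and 3 on layer 2:
>>> model.prune_heads({1: [0, 2], 2: [2, 3]})
```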
- push_to_hub(repo_id: str, use_temp_dir: bool | None = None, commit_message: str | None = None, private: bool | None = None, token: bool | str | None = None, max_shard_size: int | str | None = '5GB', create_pr: bool = False, safe_serialization: bool = True, revision: str = None, commit_description: str = None, tags: List[str] | None = None, **deprecated_kwargs) str ¶
Upload the model file to the 🤗 Model Hub.
- Parameters:
- repo_id (str):
The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization.
- use_temp_dir (bool, optional):
Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
- commit_message (str, optional):
Message to commit while pushing. Will default to “Upload model”.
- private (bool, optional):
Whether or not the repository created should be private.
- token (bool or str, optional):
The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
- max_shard_size (int or str, optional, defaults to “5GB”):
Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this limit. If expressed as a string, needs to be digits followed by a unit (like "5MB"). We default it to "5GB" so that users can easily load models on free-tier Google Colab instances without any CPU OOM issues.
- create_pr (bool, optional, defaults to False):
Whether or not to create a PR with the uploaded files or directly commit.
- safe_serialization (bool, optional, defaults to True):
Whether or not to convert the model weights in safetensors format for safer serialization.
- revision (str, optional):
Branch to push the uploaded files to.
- commit_description (str, optional):
The description of the commit that will be created
- tags (List[str], optional):
List of tags to push on the Hub.
Examples:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-cased")

# Push the model to your namespace with the name "my-finetuned-bert".
model.push_to_hub("my-finetuned-bert")

# Push the model to an organization with the name "my-finetuned-bert".
model.push_to_hub("huggingface/my-finetuned-bert")
```
- register_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor]) RemovableHandle ¶
Register a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.
- Returns:
torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- register_buffer(name: str, tensor: Tensor | None, persistent: bool = True) None ¶
Add a buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.
Buffers can be accessed as attributes using given names.
- Args:
- name (str): name of the buffer. The buffer can be accessed from this module using the given name
- tensor (Tensor or None): buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module’s state_dict.
- persistent (bool): whether the buffer is part of this module’s state_dict.
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- classmethod register_for_auto_class(auto_class='AutoModel')¶
Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class.
This API is experimental and may have some slight breaking changes in the next releases.
- Args:
- auto_class (str or type, optional, defaults to “AutoModel”):
The auto class to register this new model with.
- register_forward_hook(hook: Callable[[T, Tuple[Any, ...], Any], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any], Any], Any | None], *, prepend: bool = False, with_kwargs: bool = False, always_call: bool = False) RemovableHandle ¶
Register a forward hook on the module.
The hook will be called every time after forward() has computed an output.
If with_kwargs is False or not specified, the input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have an effect on forward since this is called after forward() is called. The hook should have the following signature:
hook(module, args, output) -> None or modified output
If with_kwargs is True, the forward hook will be passed the kwargs given to the forward function and be expected to return the output, possibly modified. The hook should have the following signature:
hook(module, args, kwargs, output) -> None or modified output
- Args:
- hook (Callable): The user defined hook to be registered.
- prepend (bool): If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False
- with_kwargs (bool): If True, the hook will be passed the kwargs given to the forward function. Default: False
- always_call (bool): If True, the hook will be run regardless of whether an exception is raised while calling the Module. Default: False
- Returns:
torch.utils.hooks.RemovableHandle:
a handle that can be used to remove the added hook by calling handle.remove()
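Example (a minimal sketch, not taken from the PyTorch documentation, showing a forward hook on a Linear layer that logs output shapes and is then removed via the returned handle):
```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)

def shape_hook(module, args, output):
    # Runs after forward(); returning None keeps the original output.
    print(type(module).__name__, tuple(output.shape))

handle = layer.register_forward_hook(shape_hook)
_ = layer(torch.randn(3, 4))   # prints: Linear (3, 2)
handle.remove()                # the hook no longer fires
```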
- register_forward_pre_hook(hook: Callable[[T, Tuple[Any, ...]], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any]], Tuple[Any, Dict[str, Any]] | None], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle ¶
Register a forward pre-hook on the module.
The hook will be called every time before forward() is invoked.
If with_kwargs is false or not specified, the input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). The hook should have the following signature:
hook(module, args) -> None or modified input
If with_kwargs is true, the forward pre-hook will be passed the kwargs given to the forward function. And if the hook modifies the input, both the args and kwargs should be returned. The hook should have the following signature:
hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
- Args:
- hook (Callable): The user defined hook to be registered.
- prepend (bool): If true, the provided hook will be fired before all existing forward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward_pre hooks on this torch.nn.modules.Module. Note that global forward_pre hooks registered with register_module_forward_pre_hook() will fire before all hooks registered by this method. Default: False
- with_kwargs (bool): If true, the hook will be passed the kwargs given to the forward function. Default: False
- Returns:
torch.utils.hooks.RemovableHandle:
a handle that can be used to remove the added hook by calling handle.remove()
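Example (a minimal sketch, illustrative only, of a forward pre-hook that rescales the positional input before forward() runs):
```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)

def double_input(module, args):
    # Return a modified input; a single returned value would be wrapped into a tuple.
    return (args[0] * 2,)

handle = layer.register_forward_pre_hook(double_input)
out = layer(torch.ones(1, 4))  # forward() sees the doubled input
handle.remove()
```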
- register_full_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle ¶
Register a backward hook on the module.
The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Args:
- hook (Callable): The user-defined hook to be registered.
- prepend (bool): If true, the provided hook will be fired before all existing backward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward hooks on this torch.nn.modules.Module. Note that global backward hooks registered with register_module_full_backward_hook() will fire before all hooks registered by this method.
- Returns:
torch.utils.hooks.RemovableHandle:
a handle that can be used to remove the added hook by calling handle.remove()
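Example (a minimal sketch, illustrative only, that registers a full backward hook to inspect gradient shapes during backward()):
```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)

def grad_logger(module, grad_input, grad_output):
    # Inspect the gradients without modifying them; returning None keeps grad_input.
    print([None if g is None else tuple(g.shape) for g in grad_output])

handle = layer.register_full_backward_hook(grad_logger)
layer(torch.randn(3, 4)).sum().backward()   # prints: [(3, 2)]
handle.remove()
```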
- register_full_backward_pre_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle ¶
Register a backward pre-hook on the module.
The hook will be called every time the gradients for the module are computed. The hook should have the following signature:
hook(module, grad_output) -> tuple[Tensor] or None
The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs inplace is not allowed when using backward hooks and will raise an error.
- Args:
- hook (Callable): The user-defined hook to be registered.
- prepend (bool): If true, the provided hook will be fired before all existing backward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward_pre hooks on this torch.nn.modules.Module. Note that global backward_pre hooks registered with register_module_full_backward_pre_hook() will fire before all hooks registered by this method.
- Returns:
torch.utils.hooks.RemovableHandle:
a handle that can be used to remove the added hook by calling handle.remove()
- register_load_state_dict_post_hook(hook)¶
Register a post hook to be run after module's load_state_dict is called.
It should have the following signature:
hook(module, incompatible_keys) -> None
The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.
- Returns:
torch.utils.hooks.RemovableHandle:
a handle that can be used to remove the added hook by calling handle.remove()
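Example (a minimal sketch, illustrative only, of a post hook that reports missing and unexpected keys after load_state_dict()):
```python
import torch.nn as nn

model = nn.Linear(2, 2)

def report_keys(module, incompatible_keys):
    # incompatible_keys.missing_keys / .unexpected_keys may be edited in place.
    print(incompatible_keys.missing_keys, incompatible_keys.unexpected_keys)

handle = model.register_load_state_dict_post_hook(report_keys)
model.load_state_dict(model.state_dict())   # prints: [] []
handle.remove()
```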
- register_module(name: str, module: Module | None) None ¶
Alias for add_module().
- register_parameter(name: str, param: Parameter | None) None ¶
Add a parameter to the module.
The parameter can be accessed as an attribute using given name.
- Args:
- name (str): name of the parameter. The parameter can be accessed
from this module using the given name
- param (Parameter or None): parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module's state_dict.
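Example (a minimal sketch, illustrative only, showing that a registered parameter becomes an attribute and appears in the state_dict):
```python
import torch
import torch.nn as nn

module = nn.Module()
module.register_parameter('scale', nn.Parameter(torch.ones(1)))
print(module.scale)                # accessible as an attribute
print(list(module.state_dict()))   # ['scale']
```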
- register_state_dict_pre_hook(hook)¶
Register a pre-hook for the state_dict() method.
These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self. The registered hooks can be used to perform pre-processing before the state_dict call is made.
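Example (a minimal sketch, illustrative only, of a pre-hook that announces each state_dict() call before tensors are collected):
```python
import torch.nn as nn

model = nn.Linear(2, 2)

def announce(module, prefix, keep_vars):
    # Runs just before state_dict() collects this module's tensors.
    print(f'collecting state_dict, prefix={prefix!r}, keep_vars={keep_vars}')

model.register_state_dict_pre_hook(announce)
_ = model.state_dict()
```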
- requires_grad_(requires_grad: bool = True) T ¶
Change if autograd should record operations on parameters in this module.
This method sets the parameters' requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See locally-disable-grad-doc for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Args:
- requires_grad (bool): whether autograd should record operations on parameters in this module. Default: True.
- Returns:
Module: self
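Example (a minimal sketch, illustrative only, of the freezing pattern described above: disable gradients for the whole model, then re-enable them for the part being fine-tuned):
```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
net.requires_grad_(False)     # freeze every parameter
net[1].requires_grad_(True)   # unfreeze only the head
print([p.requires_grad for p in net.parameters()])  # [False, False, True, True]
```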
- reset_memory_hooks_state()¶
Reset the mem_rss_diff attribute of each module (see [~modeling_utils.ModuleUtilsMixin.add_memory_hooks]).
- resize_position_embeddings(new_num_position_embeddings: int)¶
- resize_token_embeddings(new_num_tokens: int | None = None, pad_to_multiple_of: int | None = None) Embedding ¶
Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size.
Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.
- Arguments:
- new_num_tokens (int, optional):
The new number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens torch.nn.Embedding module of the model without doing anything.
- pad_to_multiple_of (int, optional):
If set, will pad the embedding matrix to a multiple of the provided value. If new_num_tokens is set to None, this will just pad the embedding to a multiple of pad_to_multiple_of.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
- Return:
torch.nn.Embedding: Pointer to the input tokens Embeddings Module of the model.
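Example (a hedged sketch of the typical workflow after adding new tokens to a tokenizer; the two added tokens are hypothetical):
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModel.from_pretrained("google-bert/bert-base-cased")

# Hypothetical domain-specific tokens added to the tokenizer.
tokenizer.add_tokens(["<ccg>", "<chart>"])
embeddings = model.resize_token_embeddings(len(tokenizer))
print(embeddings.num_embeddings)   # old vocab size + 2
```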
- retrieve_modules_from_names(names, add_prefix=False, remove_prefix=False)¶
- reverse_bettertransformer()¶
Reverts the transformation from [~PreTrainedModel.to_bettertransformer] so that the original modeling is used, for example in order to save the model.
- Returns:
[PreTrainedModel]: The model converted back to the original modeling.
- save_pretrained(save_directory: str | ~os.PathLike, is_main_process: bool = True, state_dict: dict | None = None, save_function: ~typing.Callable = <function save>, push_to_hub: bool = False, max_shard_size: int | str = '5GB', safe_serialization: bool = True, variant: str | None = None, token: bool | str | None = None, save_peft_format: bool = True, **kwargs)¶
Save a model and its configuration file to a directory, so that it can be re-loaded using the [~PreTrainedModel.from_pretrained] class method.
- Arguments:
- save_directory (str or os.PathLike):
Directory to which to save. Will be created if it doesn’t exist.
- is_main_process (bool, optional, defaults to True):
Whether the process calling this is the main process or not. Useful during distributed training (e.g. on TPUs) when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
- state_dict (nested dictionary of torch.Tensor):
The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).
- save_function (Callable):
The function to use to save the state dictionary. Useful during distributed training (e.g. on TPUs) when you need to replace torch.save with another method.
- push_to_hub (bool, optional, defaults to False):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
- max_shard_size (int or str, optional, defaults to “5GB”):
The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, it needs to be digits followed by a unit (like "5MB"). We default it to 5GB so that models can easily be loaded on free-tier Google Colab instances without CPU OOM issues.
Warning
If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard, which will be bigger than max_shard_size.
- safe_serialization (bool, optional, defaults to True):
Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).
- variant (str, optional):
If specified, weights are saved in the format pytorch_model.<variant>.bin.
- token (str or bool, optional):
The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- save_peft_format (bool, optional, defaults to True):
For backward compatibility with the PEFT library, in case adapter weights are attached to the model, all keys of the adapter state dict need to be prepended with base_model.model. Advanced users can disable this behaviour by setting save_peft_format to False.
- kwargs (Dict[str, Any], optional):
Additional keyword arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.
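Example (a minimal sketch of a save/reload round trip; the directory name is arbitrary):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-cased")
model.save_pretrained("./my-bert")             # writes config.json and safetensors weights
reloaded = AutoModel.from_pretrained("./my-bert")
```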
- set_adapter(adapter_name: List[str] | str) None ¶
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft
Sets a specific adapter by forcing the model to use that adapter and disabling the other adapters.
- Args:
- adapter_name (Union[List[str], str]):
The name of the adapter to set. Can be also a list of strings to set multiple adapters.
- set_extra_state(state: Any) None ¶
Set extra state contained in the loaded state_dict.
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Args:
state (dict): Extra state from the state_dict
- set_input_embeddings(value: Module)¶
Set model’s input embeddings.
- Args:
value (nn.Module): A module mapping vocabulary to hidden states.
- share_memory() T ¶
See torch.Tensor.share_memory_().
- state_dict(*args, destination=None, prefix='', keep_vars=False)¶
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module’s parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of argument destination as it is not designed for end-users.
- Args:
- destination (dict, optional): If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
- prefix (str, optional): a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
- keep_vars (bool, optional): by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
- dict:
a dictionary containing a whole state of the module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- supports_gradient_checkpointing = True¶
- tie_weights()¶
Tie the weights between the input embeddings and the output embeddings.
If the torchscript flag is set in the configuration, parameter sharing cannot be used, so the weights are cloned instead.
- to(*args, **kwargs)¶
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Args:
- device (torch.device): the desired device of the parameters and buffers in this module
- dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module
- tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
- memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
Module: self
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_bettertransformer() PreTrainedModel ¶
Converts the model to use [PyTorch’s native attention implementation](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html), integrated to Transformers through [Optimum library](https://huggingface.co/docs/optimum/bettertransformer/overview). Only a subset of all Transformers models are supported.
PyTorch's attention fastpath allows speeding up inference through kernel fusions and the use of [nested tensors](https://pytorch.org/docs/stable/nested.html). Detailed benchmarks can be found in [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2).
- Returns:
[PreTrainedModel]: The model converted to BetterTransformer.
- to_empty(*, device: int | str | device | None, recurse: bool = True) T ¶
Move the parameters and buffers to the specified device without copying storage.
- Args:
- device (torch.device): The desired device of the parameters and buffers in this module.
- recurse (bool): Whether parameters and buffers of submodules should be recursively moved to the specified device.
- Returns:
Module: self
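Example (a minimal sketch, illustrative only, of a common use: materialising a module that was built on the meta device; the moved parameters are allocated but left uninitialised):
```python
import torch.nn as nn

# Build the module on the meta device (no storage allocated).
layer = nn.Linear(2, 2, device="meta")
layer = layer.to_empty(device="cpu")   # allocate storage without copying values
print(layer.weight.device)             # cpu (values are uninitialised)
```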
- train(mode: bool = True) T ¶
Set the module in training mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Args:
- mode (bool): whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns:
Module: self
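Example (a minimal sketch, illustrative only, of toggling training/evaluation mode, which is what enables or disables Dropout):
```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
net.train()            # dropout active
print(net.training)    # True
net.eval()             # same as net.train(False); dropout disabled
print(net.training)    # False
```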
- training: bool¶
- type(dst_type: dtype | str) T ¶
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Args:
dst_type (type or string): the desired type
- Returns:
Module: self
- warn_if_padding_and_no_attention_mask(input_ids, attention_mask)¶
Shows a one-time warning if the input_ids appear to contain padding and no attention mask was given.
- xpu(device: int | device | None = None) T ¶
Move all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
Note
This method modifies the module in-place.
- Arguments:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- zero_grad(set_to_none: bool = True) None ¶
Reset gradients of all model parameters.
See the similar function under torch.optim.Optimizer for more context.
- Args:
- set_to_none (bool): instead of setting to zero, set the grads to None.
See torch.optim.Optimizer.zero_grad() for details.
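Example (a minimal sketch, illustrative only, showing that with the default set_to_none=True the gradients are reset to None rather than zeroed):
```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)
net(torch.randn(3, 4)).sum().backward()
print(net.weight.grad is None)   # False
net.zero_grad()                  # set_to_none=True by default
print(net.weight.grad is None)   # True
```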
- class lambeq.bobcat.Category(atom: Atom = Atom.NONE, feature: Feature = Feature.NONE, var: int = 0, relation: Relation | None = None, dir: str = '\x00', result: Category | None = None, argument: Category | None = None, type_raising_dep_var: int = 0)[source]¶
Bases:
object
The type of a constituent in a CCG.
A category may be atomic (e.g. N) or complex (e.g. S/NP).
- __init__(atom: Atom = Atom.NONE, feature: Feature = Feature.NONE, var: int = 0, relation: Relation | None = None, dir: str = '\x00', result: Category | None = None, argument: Category | None = None, type_raising_dep_var: int = 0) None ¶
- atom: Atom = Atom.NONE¶
- property bwd: bool¶
Whether this is a backward complex category.
- dir: str = '\x00'¶
- feature: Feature = Feature.NONE¶
- property fwd: bool¶
Whether this is a forward complex category.
- matches(other: Any) bool [source]¶
Check if the template set out in this matches the argument.
Like == but the NONE feature matches with everything.
- static parse(string: str, type_raising_dep_var: str = '+') Category [source]¶
Parse a category string.
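Example (a minimal sketch of parsing category strings and checking the direction properties; the exact printed representations may vary between lambeq versions):
```python
from lambeq.bobcat import Category

fwd_cat = Category.parse('S/NP')                  # forward complex category
bwd_cat = Category.parse(r'S\NP')                 # backward complex category
print(fwd_cat.fwd, fwd_cat.bwd)                   # expected: True False
print(bwd_cat.bwd)                                # expected: True
print(fwd_cat.matches(Category.parse('S/NP')))    # expected: True
```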
- relation: Relation | None = None¶
- slash(dir: str, argument: Category, var: int = 0, relation: Relation | None = None, type_raising_dep_var: int = 0) Category [source]¶
Create a complex category.
- translate(var_map: Mapping[int, int], feature: Feature = Feature.NONE) Category [source]¶
Translate a category.
- Parameters:
- var_map : dict of int to int
A mapping to relabel variable slots.
- feature : Feature, optional
The concrete feature for variable features.
- type_raising_dep_var: int = 0¶
- var: int = 0¶
- class lambeq.bobcat.ChartParser(grammar: Grammar, cats: Iterable[str], root_cats: Iterable[str] | None, eisner_normal_form: bool, max_parse_trees: int, beam_size: int, input_tag_score_weight: float, missing_cat_score: float, missing_span_score: float)[source]¶
Bases:
object
- __init__(grammar: Grammar, cats: Iterable[str], root_cats: Iterable[str] | None, eisner_normal_form: bool, max_parse_trees: int, beam_size: int, input_tag_score_weight: float, missing_cat_score: float, missing_span_score: float) None [source]¶
- calc_score_binary(tree: ParseTree, span_scores: Mapping[int, float]) None [source]¶
Calculate the score for a binary tree.
- calc_score_unary(tree: ParseTree, span_scores: Mapping[int, float]) None [source]¶
Calculate the score for a unary tree (chain).
- class lambeq.bobcat.Grammar(categories: dict[str, str], binary_rules: list[tuple[str, str]], type_changing_rules: list[tuple[int, str, str | None, str, bool]], type_raising_rules: list[tuple[str, str, str]])[source]¶
Bases:
object
The grammar dataclass.
- Attributes:
- categories : dict of str to str
A mapping from a plain category string to a marked-up category string, e.g. '(NP\NP)/NP' to '((NP{Y}\NP{Y}<1>){_}/NP{Z}<2>){_}'
- binary_rules : list of tuple of str
The list of binary rules as tuple pairs of strings, e.g. ('(N/N)', 'N')
- type_changing_rules : list of tuple
The list of type-changing rules, which may occur as either unary rules or punctuation rules, as tuples of:
an integer denoting the rule ID
a string denoting the left category, or the sole category if the rule is unary
a string denoting the right category, or None if the rule is unary
a string denoting the resulting category
a boolean denoting whether to replace dependencies during parsing
e.g. (1, 'N', None, 'NP', False)
(50, 'S[dcl]/S[dcl]', ',', 'S/S', True)
- type_raising_rules : list of tuple
The list of type-raising rules as tuples of:
a string denoting the original category
a string denoting the resulting marked-up category
a character denoting the new variable
e.g. ('NP', '(S[X]{Y}/(S[X]{Y}\NP{_}){Y}){Y}', '+')
- __init__(categories: dict[str, str], binary_rules: list[tuple[str, str]], type_changing_rules: list[tuple[int, str, str | None, str, bool]], type_raising_rules: list[tuple[str, str, str]]) None ¶
- binary_rules: list[tuple[str, str]]¶
- categories: dict[str, str]¶
- type_changing_rules: list[tuple[int, str, str | None, str, bool]]¶
- type_raising_rules: list[tuple[str, str, str]]¶
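Example (a hedged sketch of building a toy Grammar by hand from the tuple formats listed above; the marked-up category strings are hypothetical, and real grammars are normally loaded from the files shipped with a bobcat model rather than written manually):
```python
from lambeq.bobcat import Grammar

toy_grammar = Grammar(
    categories={'N': 'N{_}', 'NP': 'NP{_}'},   # hypothetical markup strings
    binary_rules=[('(N/N)', 'N')],
    type_changing_rules=[(1, 'N', None, 'NP', False)],
    type_raising_rules=[('NP', r'(S[X]{Y}/(S[X]{Y}\NP{_}){Y}){Y}', '+')],
)
print(len(toy_grammar.binary_rules), toy_grammar.type_changing_rules[0])
```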
- class lambeq.bobcat.ParseTree(rule: 'Rule', cat: 'Category', left: 'ParseTree', right: 'ParseTree', unfilled_deps: 'list[Dependency]', filled_deps: 'list[Dependency]', var_map: 'dict[int, Variable]', score: 'float' = 0)[source]¶
Bases:
object
- __init__(rule: Rule, cat: Category, left: ParseTree, right: ParseTree, unfilled_deps: list[Dependency], filled_deps: list[Dependency], var_map: dict[int, Variable], score: float = 0) None ¶
- property bwd_comp: bool¶
- property coordinated: bool¶
- property coordinated_or_type_raised: bool¶
- property deps: list[Dependency]¶
- property deps_and_tags: tuple[list[Dependency], list[str]]¶
- filled_deps: list[Dependency]¶
- property fwd_comp: bool¶
- property is_leaf: bool¶
- rule: Rule¶
- score: float = 0¶
- unfilled_deps: list[Dependency]¶
- var_map: dict[int, Variable]¶
- property variable: Variable¶
- property word: str¶
- class lambeq.bobcat.Sentence(words: list[str], input_supertags: list[list[Supertag]], span_scores: dict[Tuple[int, int], dict[int, float]])[source]¶
Bases:
object
An input sentence.
- Attributes:
- words : list of str
The tokens in the sentence.
- input_supertags : list of list of Supertag
A list of supertags for each word.
- span_scores : dict of tuple of int and int to dict of int to float
Mapping of a span to a dict of category (indices) mapped to their log probability.
- __init__(words: list[str], input_supertags: list[list[Supertag]], span_scores: dict[Tuple[int, int], dict[int, float]]) None ¶
- span_scores: dict[Tuple[int, int], dict[int, float]]¶
- words: list[str]¶
- class lambeq.bobcat.Supertag(category: str, probability: float)[source]¶
Bases:
object
A string category, annotated with its log probability.
- __init__(category: str, probability: float) None ¶
- category: str¶
- probability: float¶
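Example (a minimal sketch, with made-up log probabilities, of building Supertag and Sentence objects by hand; in practice these values come from the Tagger):
```python
from lambeq.bobcat import Sentence, Supertag

supertags = [
    [Supertag('NP', -0.1)],
    [Supertag(r'(S[dcl]\NP)/NP', -0.2)],   # transitive-verb category
    [Supertag('NP', -0.1)],
]
sentence = Sentence(words=['Alice', 'likes', 'Bob'],
                    input_supertags=supertags,
                    span_scores={})
print(sentence.words, len(sentence.input_supertags))
```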
- class lambeq.bobcat.Tagger(model: PreTrainedModel, tokenizer: PreTrainedTokenizerFast, batch_size: int = 1, tag_top_k: int = 1, tag_prob_threshold: float = 1, tag_prob_threshold_strategy: str = 'relative', span_top_k: int = 1, span_prob_threshold: float = 1, span_prob_threshold_strategy: str = 'relative')[source]¶
Bases:
object
- __call__(inputs: Sequence[Sequence[str]], batch_size: int | None = None, verbose: str = 'progress') TaggerOutput [source]¶
Parse a list of sentences.
- __init__(model: PreTrainedModel, tokenizer: PreTrainedTokenizerFast, batch_size: int = 1, tag_top_k: int = 1, tag_prob_threshold: float = 1, tag_prob_threshold_strategy: str = 'relative', span_top_k: int = 1, span_prob_threshold: float = 1, span_prob_threshold_strategy: str = 'relative') None [source]¶