
TrainingMethod

An introduction to the functions in TrainingMethod

TrainingMethod

Trainer(config_file: str)

The model trainer class.
Users need to set the dataset and dataloader manually.

Args:
    config_file: the path of the input file.
    verbose: controls the verbosity of the output.
    device: the device that models run on.

Methods:
    train(model: torch.nn.Module): run model training.
    set_device(device: str | torch.device): manually set the device that the model and data run on.
    set_loss_fn(loss_fn: Any, loss_config: Optional[Dict] = None): manually set a (possibly user-defined) loss function.
    set_metrics(metrics_fn: Dict[str, Callable], metrics_fn_config: Dict[str, Dict] | None = None): manually set (possibly user-defined) metrics functions.
    set_model_config(model_config: Dict[str, Any] | None = None): manually (re)set the configs (hyperparameters) of the model.
    set_lr_scheduler(lr_scheduler: th.optim.lr_scheduler.LRScheduler, lr_scheduler_config: Optional[Dict[str, Any]] = None): set the learning-rate scheduler.
    set_model_param(model_state_dict: Dict, is_strict: bool = True, is_assign: bool = False): manually set model parameters from a given model state dict.
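
A minimal end-to-end sketch of the intended workflow follows. It relies only on the signatures documented on this page; the import path `TrainingMethod`, the file `config.yaml`, and the `MyModel` and `batch_loader` definitions are hypothetical placeholders, not part of the documented API::

    import torch as th
    from TrainingMethod import Trainer        # assumed import path

    class MyModel(th.nn.Module):              # hypothetical model; train() expects the class, not an instance
        def __init__(self, hidden: int = 16):
            super().__init__()
            self.net = th.nn.Sequential(th.nn.Linear(4, hidden), th.nn.ReLU(), th.nn.Linear(hidden, 1))

        def forward(self, x):
            return self.net(x)

    def batch_loader(data, batchsize, device, **kwargs):
        # Hypothetical loader matching the protocol documented under set_dataloader.
        x, y = data["data"], data["labels"]   # assumed to receive the dict passed to set_dataset
        for i in range(0, len(x), batchsize):
            yield x[i:i + batchsize].to(device), y[i:i + batchsize].to(device)

    trainer = Trainer("config.yaml")          # hypothetical YAML input file
    trainer.set_dataset(
        train_data={"data": th.randn(128, 4), "labels": th.randn(128, 1)},
        valid_data={"data": th.randn(32, 4), "labels": th.randn(32, 1)},
    )
    trainer.set_dataloader(batch_loader)
    trainer.set_device("cuda" if th.cuda.is_available() else "cpu")
    trainer.train(MyModel)                    # pass the uninstantiated nn.Module class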

reload_config(config_file_path: str)

    Reload the YAML config file.

set_dataloader(DataLoader, DataLoader_configs: Optional[Dict] = None)

    Set the data loader, which must satisfy:
        * DataLoader(data, batchsize, device, **kwargs) -> Iterable
        * next(iter(DataLoader(data))) -> (data, label)
    The arguments 'batchsize' and 'device' of DataLoader are read from self.BATCH_SIZE and self.DEVICE, respectively.
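
Below is a sketch of a loader class that satisfies this protocol; whether `data` here is the dict passed to set_dataset (assumed) or some other container depends on the package internals::

    import torch as th

    class SimpleLoader:
        """SimpleLoader(data, batchsize, device, **kwargs) -> Iterable yielding (data, label)."""

        def __init__(self, data, batchsize, device, shuffle: bool = False):
            self.x, self.y = data["data"], data["labels"]   # assumed layout, matching set_dataset
            self.batchsize, self.device, self.shuffle = batchsize, device, shuffle

        def __iter__(self):
            idx = th.randperm(len(self.x)) if self.shuffle else th.arange(len(self.x))
            for i in range(0, len(idx), self.batchsize):
                j = idx[i:i + self.batchsize]
                yield self.x[j].to(self.device), self.y[j].to(self.device)

    # The protocol check from the docstring:
    loader = SimpleLoader({"data": th.randn(10, 4), "labels": th.randn(10, 1)}, batchsize=4, device="cpu")
    batch, label = next(iter(loader))

    # trainer.set_dataloader(SimpleLoader, {"shuffle": True})   # 'batchsize' and 'device' are filled in internally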

set_dataset(train_data: Dict[Literal['data', 'labels'], Any], valid_data: Optional[Dict[Literal['data', 'labels'], Any]] = None)

    Load the data to be fed into the DataLoader.
    Parameters:
        train_data: {'data': Any, 'labels': Any}, the dict of the training set.
        valid_data: {'data': Any, 'labels': Any}, the dict of the validation set.
    Both the training and validation data must implement the __len__() method, and they must match the input expected by the dataloader.
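
For illustration only (tensors stand in for whatever containers the dataloader expects, as long as they implement __len__)::

    import torch as th

    train_data = {"data": th.randn(128, 4), "labels": th.randn(128, 1)}
    valid_data = {"data": th.randn(32, 4), "labels": th.randn(32, 1)}
    # trainer.set_dataset(train_data, valid_data)   # `trainer` as constructed in the Trainer sketch above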

set_device(device: str | torch.device)

    Reset the device that the model runs on.

set_layerwise_optim_config(layer_config_dict: Optional[Dict[str, Dict[str, Any]]] = None)

    The optimizer configs of the layers named in `layer_config_dict.keys()` are set to the corresponding values,
    while all unspecified layers use the `OPTIM_CONFIG` from the input file.
    Layers whose `lr` is set to `None` are frozen during training and no gradients are computed for them.
    Layer names can be specified by regular expressions, e.g.,
     {"fc1.*": {"lr": 1e-4, "weight_decay": 1e-4}, "fc2.[a-zA-Z]+Norm.*": {"lr": None}}

    Args:
        layer_config_dict: dict of named layers' learning configs: {layer name: {'lr': ...}}.

    Returns: None
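
A usage sketch with hypothetical layer names (the trainer object is assumed to be constructed as in the Trainer sketch above)::

    layer_config = {
        "fc1.*": {"lr": 1e-4, "weight_decay": 1e-4},   # matched layers get their own optimizer settings
        "fc2.[a-zA-Z]+Norm.*": {"lr": None},           # lr=None freezes these parameters
    }
    # trainer.set_layerwise_optim_config(layer_config)  # every other layer keeps OPTIM_CONFIG from the input file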

set_loss_fn(loss_fn, loss_config: Optional[Dict] = None)

    Reset the loss function, and optionally reset the configs of the loss function.
    Parameters:
        loss_fn: an uninstantiated torch.nn.Module class, a user-defined loss function.
        loss_config: Dict[str, Any] | None, the new configs of the given loss function. If None, loss_config is left unchanged.
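
A sketch of a custom loss passed as an uninstantiated class; the `delta` config key and the (pred, label) call convention are assumptions for illustration::

    import torch as th

    class HuberLikeLoss(th.nn.Module):
        """Hypothetical user-defined loss; presumably instantiated internally with **loss_config."""

        def __init__(self, delta: float = 1.0):
            super().__init__()
            self.fn = th.nn.HuberLoss(delta=delta)

        def forward(self, pred, label):
            return self.fn(pred, label)

    # trainer.set_loss_fn(HuberLikeLoss, {"delta": 0.5})   # pass the class itself, not HuberLikeLoss(...)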

set_lr_scheduler(lr_scheduler, lr_scheduler_config: Optional[Dict[str, Any]] = None)

    Set the learning-rate scheduler, which must inherit from torch.optim.lr_scheduler.LRScheduler.

set_metrics(metrics_fn: Dict[str, Callable], metrics_fn_config: Optional[Dict[str, Dict]] = None)

    Set user-defined metrics functions.
    Parameters:
        metrics_fn: Dict[str, Callable], where each str key is the name of a metrics function.
        metrics_fn_config: Dict[str, Dict] | None, the configs of the metrics functions, keyed by the same function names.
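
A sketch of user-defined metrics; the (pred, label) call convention is an assumption::

    import torch as th

    def mae(pred, label):
        return th.mean(th.abs(pred - label)).item()

    def rmse(pred, label, eps: float = 0.0):
        return th.sqrt(th.mean((pred - label) ** 2) + eps).item()

    # trainer.set_metrics({"MAE": mae, "RMSE": rmse}, {"RMSE": {"eps": 1e-12}})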

set_model_config(model_config: Optional[Dict[str, Any]] = None)

    Set the new configs (hyperparameters) of the model.

set_model_param(model_state_dict: Dict, is_strict: bool = True, is_assign: bool = False)

    Set the trained model parameters from direct input.
    Parameters:
        model_state_dict: Dict, a dict containing parameters and persistent buffers.
        is_strict: bool, whether to strictly enforce that the keys in model_state_dict match the keys returned by the model's state_dict() method.
        is_assign: bool, when False, the properties of the tensors in the current module are preserved; when True,
        the properties of the tensors in the state dict are preserved.
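
A sketch of loading a saved checkpoint; the file name is a placeholder::

    import torch as th

    state_dict = th.load("checkpoint.pt", map_location="cpu")   # produced earlier via th.save(model.state_dict(), ...)
    # trainer.set_model_param(state_dict, is_strict=True)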

set_optimizer(optimizer, optim_config: Optional[Dict] = None)

    Set the optimizer, which must inherit from torch.optim.Optimizer, and optionally reset the optimizer configs.
    Parameters:
        optimizer: torch.optim.Optimizer, a user-defined optimizer.
        optim_config: Dict[str, Any] | None, the new configs of the given optimizer. If None, optim_config is left unchanged.
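
A sketch pairing a stock PyTorch optimizer and scheduler; whether the class or an instance is expected is not stated above, so passing the class (mirroring set_loss_fn) is an assumption::

    import torch as th

    optimizer_cls = th.optim.AdamW                            # subclass of torch.optim.Optimizer
    optim_config = {"lr": 1e-3, "weight_decay": 1e-2}
    scheduler_cls = th.optim.lr_scheduler.CosineAnnealingLR   # subclass of LRScheduler
    scheduler_config = {"T_max": 100}

    # trainer.set_optimizer(optimizer_cls, optim_config)
    # trainer.set_lr_scheduler(scheduler_cls, scheduler_config)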

train(model)

    Start training.

    Here the input model must be an `uninstantiated` nn.Module class.

Predictor(config_file: str)

A base Predictor class.
Users need to set the dataset and dataloader manually.

Args:
    config_file: the path of the input file.
    verbose: controls the verbosity of the output.
    device: the device that models run on.

predict(model, test_model: bool = False, warm_up: bool = False)

    Parameters:
        model: the input model, which must be an uninstantiated nn.Module class.
        test_model: bool, if True, the consumed time per batch and the maximum memory per batch are also returned.
        warm_up: bool, if True, the model idles on some pseudo-samples to warm up.

    Returns:
        None, if `SAVE_PREDICTIONS` in config_file is true.
        np.ndarray, if `SAVE_PREDICTIONS` in config_file is false.
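
An inference sketch, reusing the hypothetical MyModel and SimpleLoader classes from the sketches above; the file names are placeholders::

    import torch as th
    from TrainingMethod import Predictor   # assumed import path

    predictor = Predictor("config.yaml")
    predictor.set_dataset({"data": th.randn(64, 4), "labels": th.zeros(64, 1)})
    predictor.set_dataloader(SimpleLoader)                                      # see the set_dataloader sketch
    predictor.set_model_param(th.load("checkpoint.pt", map_location="cpu"))
    predictions = predictor.predict(MyModel, test_model=True, warm_up=True)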

reload_config(config_file_path: str)

    Reload the YAML config file.

set_dataloader(DataLoader, DataLoader_configs: Optional[Dict] = None)

    Set the data loader, which must satisfy:
        * DataLoader(data, batchsize, device, **kwargs) -> Iterable
        * next(iter(DataLoader(data))) -> (data, label)
    The arguments 'batchsize' and 'device' of DataLoader are read from self.BATCH_SIZE and self.DEVICE, respectively.

set_dataset(train_data: Dict[Literal['data', 'labels'], Any], valid_data: Optional[Dict[Literal['data', 'labels'], Any]] = None)

    Load the data to be fed into the DataLoader.
    Parameters:
        train_data: {'data': Any, 'labels': Any}, the dict of the training set.
        valid_data: {'data': Any, 'labels': Any}, the dict of the validation set.
    Both the training and validation data must implement the __len__() method, and they must match the input expected by the dataloader.

set_device(device: str | torch.device)

    Reset the device that the model runs on.

set_loss_fn(loss_fn, loss_config: Optional[Dict] = None)

    Reset the loss function, and optionally reset the configs of the loss function.
    Parameters:
        loss_fn: an uninstantiated torch.nn.Module class, a user-defined loss function.
        loss_config: Dict[str, Any] | None, the new configs of the given loss function. If None, loss_config is left unchanged.

set_lr_scheduler(lr_scheduler, lr_scheduler_config: Optional[Dict[str, Any]] = None)

    Set the learning-rate scheduler, which must inherit from torch.optim.lr_scheduler.LRScheduler.

set_metrics(metrics_fn: Dict[str, Callable], metrics_fn_config: Optional[Dict[str, Dict]] = None)

    Set user-defined metrics functions.
    Parameters:
        metrics_fn: Dict[str, Callable], where each str key is the name of a metrics function.
        metrics_fn_config: Dict[str, Dict] | None, the configs of the metrics functions, keyed by the same function names.

set_model_config(model_config: Optional[Dict[str, Any]] = None)

    Set the new configs (hyperparameters) of the model.

set_model_param(model_state_dict: Dict, is_strict: bool = True, is_assign: bool = False)

    Set the trained model parameters from direct input.
    Parameters:
        model_state_dict: Dict, a dict containing parameters and persistent buffers.
        is_strict: bool, whether to strictly enforce that the keys in model_state_dict match the keys returned by the model's state_dict() method.
        is_assign: bool, when False, the properties of the tensors in the current module are preserved; when True,
        the properties of the tensors in the state dict are preserved.

set_optimizer(optimizer, optim_config: Optional[Dict] = None)

    Set the optimizer, which must inherit from torch.optim.Optimizer, and optionally reset the optimizer configs.
    Parameters:
        optimizer: torch.optim.Optimizer, a user-defined optimizer.
        optim_config: Dict[str, Any] | None, the new configs of the given optimizer. If None, optim_config is left unchanged.

StructureOptimization(config_file: str, data_type: Literal['pyg', 'dgl'] = 'pyg', *args, **kwargs)

The class of structure optimization for relaxation and transition-state searches.
Users need to set the dataset and dataloader manually.

Args:
    config_file: the path of the input file.
    data_type: graph data type. 'pyg' for torch-geometric BatchData, 'dgl' for dgl DGLGraph.
    VERBOSE: controls the verbosity of the output.
    device: the device that models run on.

Input file parameters:
    # For relaxation tasks:
    RELAXATION:
      ALGO: Literal[CG, BFGS, FIRE], the optimization algorithm.
      ITER_SCHEME: Literal['PR+', 'FR', 'PR', 'WYL'], only used for ALGO=CG, the iteration scheme of CG. Default: PR+.
      E_THRES: float, threshold of the energy difference (eV). Default: 1e-4.
      F_THRES: float, threshold of the maximum force (eV/Ang). Default: 5e-2.
      MAXITER: int, the maximum number of iterations. Default: 300.
      STEPLENGTH: float, the initial step length for line search. Default: 0.5.

      LINESEARCH: Literal[Backtrack, Golden, Wolfe], only used for ALGO=CG, BFGS or MIX.
        'Backtrack' is backtracking with the Armijo condition, 'Golden' is an exact line search by the golden-section algorithm, and 'Wolfe' is an advance-and-retreat algorithm with the weak Wolfe condition.
      LINESEARCH_MAXITER: the maximum number of line-search iterations, only used for CG, BFGS and MIX. Default: 10.
      LINESEARCH_THRES: float, threshold of the exact line search. Only used for LINESEARCH=Golden.
      LINESEARCH_FACTOR: a factor used in the line search: the shrinkage factor for "Backtrack" & "Wolfe", the scaling factor in the interval search for "Golden". Default: 0.6.

      # The following parameters are only for ALGO=FIRE
      ALPHA:
      ALPHA_FAC:
      FAC_INC:
      FAC_DEC: float = 0.5
      N_MIN: int = 5
      MASS: float = 20.0

    # For transition state tasks:
    TRANSITION_STATE:
        ALGO: Literal[DIMER, DIMER_LS], the optimization algorithm.
        X_DIFF: list, the dimer difference coordinate corresponding to the initial coordinate X. Default: a random tensor with the same shape as X.
        E_THRES: float, threshold of the energy difference (eV). Default: 1e-4.
        TORQ_THRES: float, the threshold of the maximum torque of the dimer. Default: 1e-2.
        F_THRES: float, threshold of the maximum force (eV/Ang). Default: 5e-2.
        MAXITER_TRANS: int, the maximum number of iterations for transition steps. Default: 300.
        MAXITER_ROT: int, the maximum number of iterations for rotation steps. Default: 10.
        MAX_STEPLENGTH: float, the maximum step length for dimer transitions. Default: 0.5.
        DX: float, the step length of the finite difference. Default: 1.e-2.

        # The following parameters are only for ALGO=DIMER_LS
        LINESEARCH_MAXITER: int, the maximum number of line-search iterations. Default: 10.
        STEPLENGTH: float, the initial step length for the line search of transition steps. Default: 0.5.
        MOMENTA_COEFF: float, the coefficient of momentum in transition steps. Default: 0.
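
A relaxation sketch follows. The keys in the config string mirror the parameter list above, but the overall file layout, the import path, and the `MyGNN` model class are assumptions::

    import pathlib
    from TrainingMethod import StructureOptimization   # assumed import path

    config = """
    RELAXATION:
      ALGO: CG
      ITER_SCHEME: PR+
      E_THRES: 1.0e-4
      F_THRES: 5.0e-2
      MAXITER: 300
      STEPLENGTH: 0.5
      LINESEARCH: Wolfe
      LINESEARCH_MAXITER: 10
    """
    pathlib.Path("relax.yaml").write_text(config)       # hypothetical input file

    opt = StructureOptimization("relax.yaml", data_type="pyg")
    # opt.set_dataset(...); opt.set_dataloader(...)     # same pattern as for Trainer, with graph data
    # opt.relax(MyGNN)                                  # MyGNN: an uninstantiated nn.Module class
    # opt.transition_state(MyGNN)                       # or the alias opt.ts(MyGNN)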

relax(model)

    Parameters:
        model: the input model, which must be an `uninstantiated` nn.Module class.

reload_config(config_file_path: str)

    Reload the YAML config file.

set_dataloader(DataLoader, DataLoader_configs: Optional[Dict] = None)

    Set the data loader, which must satisfy:
        * DataLoader(data, batchsize, device, **kwargs) -> Iterable
        * next(iter(DataLoader(data))) -> (data, label)
    The arguments 'batchsize' and 'device' of DataLoader are read from self.BATCH_SIZE and self.DEVICE, respectively.

set_dataset(train_data: Dict[Literal['data', 'labels'], Any], valid_data: Optional[Dict[Literal['data', 'labels'], Any]] = None)

    Load the data to be fed into the DataLoader.
    Parameters:
        train_data: {'data': Any, 'labels': Any}, the dict of the training set.
        valid_data: {'data': Any, 'labels': Any}, the dict of the validation set.
    Both the training and validation data must implement the __len__() method, and they must match the input expected by the dataloader.

set_device(device: str | torch.device)

    Reset the device that the model runs on.

set_loss_fn(loss_fn, loss_config: Optional[Dict] = None)

    Reset the loss function, and optionally reset the configs of the loss function.
    Parameters:
        loss_fn: an uninstantiated torch.nn.Module class, a user-defined loss function.
        loss_config: Dict[str, Any] | None, the new configs of the given loss function. If None, loss_config is left unchanged.

set_lr_scheduler(lr_scheduler, lr_scheduler_config: Optional[Dict[str, Any]] = None)

    Set the learning-rate scheduler, which must inherit from torch.optim.lr_scheduler.LRScheduler.

set_metrics(metrics_fn: Dict[str, Callable], metrics_fn_config: Optional[Dict[str, Dict]] = None)

    Set user-defined metrics functions.
    Parameters:
        metrics_fn: Dict[str, Callable], where each str key is the name of a metrics function.
        metrics_fn_config: Dict[str, Dict] | None, the configs of the metrics functions, keyed by the same function names.

set_model_config(model_config: Optional[Dict[str, Any]] = None)

    Set the new configs (hyperparameters) of the model.

set_model_param(model_state_dict: Dict, is_strict: bool = True, is_assign: bool = False)

    Set the trained model parameters from direct input.
    Parameters:
        model_state_dict: Dict, a dict containing parameters and persistent buffers.
        is_strict: bool, whether to strictly enforce that the keys in model_state_dict match the keys returned by the model's state_dict() method.
        is_assign: bool, when False, the properties of the tensors in the current module are preserved; when True,
        the properties of the tensors in the state dict are preserved.

set_optimizer(optimizer, optim_config: Optional[Dict] = None)

    Set the optimizer, which must inherit from torch.optim.Optimizer, and optionally reset the optimizer configs.
    Parameters:
        optimizer: torch.optim.Optimizer, a user-defined optimizer.
        optim_config: Dict[str, Any] | None, the new configs of the given optimizer. If None, optim_config is left unchanged.

transition_state(model)

    Parameters:
        model: the input model, which must be an `uninstantiated` nn.Module class.

ts(model)

    Alias of `self.transition_state`.

MolecularDynamics(config_file: str, data_type: Literal['pyg', 'dgl'] = 'pyg')

The class for molecular dynamics simulations.
Users need to set the dataset and dataloader manually.

Args:
    config_file: the path of the input file.
    data_type: graph data type. 'pyg' for torch-geometric BatchData, 'dgl' for dgl DGLGraph.
    verbose: controls the verbosity of the output.
    device: the device that models run on.

Input file parameters:
    ENSEMBLE: Literal[NVE, NVT], the ensemble for MD.
    THERMOSTAT: Literal[Langevin, VR, CSVR, Nose-Hoover], the thermostat type. Only used for ENSEMBLE=NVT.
                'VR' is velocity rescaling and 'CSVR' is canonical sampling velocity rescaling by Bussi et al. [1].
    THERMOSTAT_CONFIG: Dict, the configs of the thermostat.
                       * For 'Langevin', the optional key 'damping_coeff' (fs^-1) controls the damping coefficient. A large damping_coeff leads to strong coupling. Default: 0.01
                       * For 'CSVR', the optional key 'time_const' (fs) controls the characteristic timescale. A large time_const leads to weak coupling. Default: 10*TIME_STEP
    TIME_STEP: float, the time step (fs) for MD. Default: 1
    MAX_STEP: int, total time (fs) = TIME_STEP * MAX_STEP
    T_INIT: float, the initial temperature (K). Default: 298.15
            * For ENSEMBLE=NVE, T_INIT is only used to generate random initial velocities from a Boltzmann distribution if V_init is not given.
            * For ENSEMBLE=NVT, T_INIT is the target temperature of the thermostat.
    OUTPUT_COORDS_PER_STEP: int, controls how frequently atom coordinates are output. If verbose = 3, atom velocities are also output. Default: 1

References:
    [1] J. Chem. Phys., 2007, 126, 014101.
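
An NVT sketch follows. The keys mirror the parameter list above; the file layout, the import path, and the `MyGNN` model class are assumptions::

    import pathlib
    from TrainingMethod import MolecularDynamics   # assumed import path

    config = """
    ENSEMBLE: NVT
    THERMOSTAT: CSVR
    THERMOSTAT_CONFIG:
      time_const: 10.0          # fs
    TIME_STEP: 1.0              # fs
    MAX_STEP: 5000              # total time = 5 ps
    T_INIT: 298.15              # K
    OUTPUT_COORDS_PER_STEP: 10
    """
    pathlib.Path("md.yaml").write_text(config)      # hypothetical input file

    md = MolecularDynamics("md.yaml", data_type="pyg")
    # md.set_dataset(...); md.set_dataloader(...)   # same pattern as for Trainer, with graph data
    # md.run(MyGNN)                                 # MyGNN: an uninstantiated nn.Module class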

reload_config(config_file_path: str)

    Reload the YAML config file.

run(model)

    Parameters:
        model: the input model, which must be an `uninstantiated` nn.Module class.

set_dataloader(DataLoader, DataLoader_configs: Optional[Dict] = None)

    Set the data loader, which must satisfy:
        * DataLoader(data, batchsize, device, **kwargs) -> Iterable
        * next(iter(DataLoader(data))) -> (data, label)
    The arguments 'batchsize' and 'device' of DataLoader are read from self.BATCH_SIZE and self.DEVICE, respectively.

set_dataset(train_data: Dict[Literal['data', 'labels'], Any], valid_data: Optional[Dict[Literal['data', 'labels'], Any]] = None)

    Load the data to be fed into the DataLoader.
    Parameters:
        train_data: {'data': Any, 'labels': Any}, the dict of the training set.
        valid_data: {'data': Any, 'labels': Any}, the dict of the validation set.
    Both the training and validation data must implement the __len__() method, and they must match the input expected by the dataloader.

set_device(device: str | torch.device)

    Reset the device that the model runs on.

set_loss_fn(loss_fn, loss_config: Optional[Dict] = None)

    Reset the loss function, and optionally reset the configs of the loss function.
    Parameters:
        loss_fn: an uninstantiated torch.nn.Module class, a user-defined loss function.
        loss_config: Dict[str, Any] | None, the new configs of the given loss function. If None, loss_config is left unchanged.

set_lr_scheduler(lr_scheduler, lr_scheduler_config: Optional[Dict[str, Any]] = None)

    Set the learning-rate scheduler, which must inherit from torch.optim.lr_scheduler.LRScheduler.

set_metrics(metrics_fn: Dict[str, Callable], metrics_fn_config: Optional[Dict[str, Dict]] = None)

    Set user-defined metrics functions.
    Parameters:
        metrics_fn: Dict[str, Callable], where each str key is the name of a metrics function.
        metrics_fn_config: Dict[str, Dict] | None, the configs of the metrics functions, keyed by the same function names.

set_model_config(model_config: Optional[Dict[str, Any]] = None)

    Set the new configs (hyperparameters) of the model.

set_model_param(model_state_dict: Dict, is_strict: bool = True, is_assign: bool = False)

    Set the trained model parameters from direct input.
    Parameters:
        model_state_dict: Dict, a dict containing parameters and persistent buffers.
        is_strict: bool, whether to strictly enforce that the keys in model_state_dict match the keys returned by the model's state_dict() method.
        is_assign: bool, when False, the properties of the tensors in the current module are preserved; when True,
        the properties of the tensors in the state dict are preserved.

set_optimizer(optimizer, optim_config: Optional[Dict] = None)

    Set the optimizer, which must inherit from torch.optim.Optimizer, and optionally reset the optimizer configs.
    Parameters:
        optimizer: torch.optim.Optimizer, a user-defined optimizer.
        optim_config: Dict[str, Any] | None, the new configs of the given optimizer. If None, optim_config is left unchanged.

VibrationAnalysis(config_file: str, data_type: Literal['pyg', 'dgl'] = 'pyg')

The class for normal-mode frequency calculations by a finite-difference algorithm.
Due to the large computational cost, it runs sequentially instead of in batches.
Users need to set the dataset and dataloader manually.

Args:
    config_file: the path of the input file.
    data_type: graph data type. 'pyg' for torch-geometric BatchData, 'dgl' for dgl DGLGraph.
    verbose: controls the verbosity of the output.
    device: the device that models run on.

Input file parameters:
    BLOCK_SIZE: int, the number of points (i.e., finite-difference structure images) computed in parallel at one time. Default: 1.
    DELTA: float, the step length of the finite difference. Default: 1e-2.
    SAVE_HESSIAN: bool, whether to save the calculated Hessian matrix. Default: False.
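
A short sketch; the file layout, the import path, and the `MyGNN` model class are assumptions::

    import pathlib
    from TrainingMethod import VibrationAnalysis   # assumed import path

    config = """
    BLOCK_SIZE: 4        # finite-difference images evaluated in parallel
    DELTA: 1.0e-2        # displacement step
    SAVE_HESSIAN: true
    """
    pathlib.Path("vib.yaml").write_text(config)     # hypothetical input file

    vib = VibrationAnalysis("vib.yaml", data_type="pyg")
    # vib.set_dataset(...); vib.set_dataloader(...)
    # vib.run(MyGNN)                                # MyGNN: an uninstantiated nn.Module class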

reload_config(config_file_path: str)

    Reload the YAML config file.

run(model)

    Parameters:
        model: the input model, which must be an `uninstantiated` nn.Module class.

set_dataloader(DataLoader, DataLoader_configs: Optional[Dict] = None)

    Set the data loader, which must satisfy:
        * DataLoader(data, batchsize, device, **kwargs) -> Iterable
        * next(iter(DataLoader(data))) -> (data, label)
    The arguments 'batchsize' and 'device' of DataLoader are read from self.BATCH_SIZE and self.DEVICE, respectively.

set_dataset(train_data: Dict[Literal['data', 'labels'], Any], valid_data: Optional[Dict[Literal['data', 'labels'], Any]] = None)

    Load the data to be fed into the DataLoader.
    Parameters:
        train_data: {'data': Any, 'labels': Any}, the dict of the training set.
        valid_data: {'data': Any, 'labels': Any}, the dict of the validation set.
    Both the training and validation data must implement the __len__() method, and they must match the input expected by the dataloader.

set_device(device: str | torch.device)

    Reset the device that the model runs on.

set_loss_fn(loss_fn, loss_config: Optional[Dict] = None)

    Reset the loss function, and optionally reset the configs of the loss function.
    Parameters:
        loss_fn: an uninstantiated torch.nn.Module class, a user-defined loss function.
        loss_config: Dict[str, Any] | None, the new configs of the given loss function. If None, loss_config is left unchanged.

set_lr_scheduler(lr_scheduler, lr_scheduler_config: Optional[Dict[str, Any]] = None)

    Set the learning-rate scheduler, which must inherit from torch.optim.lr_scheduler.LRScheduler.

set_metrics(metrics_fn: Dict[str, Callable], metrics_fn_config: Optional[Dict[str, Dict]] = None)

    Set user-defined metrics functions.
    Parameters:
        metrics_fn: Dict[str, Callable], where each str key is the name of a metrics function.
        metrics_fn_config: Dict[str, Dict] | None, the configs of the metrics functions, keyed by the same function names.

set_model_config(model_config: Optional[Dict[str, Any]] = None)

    Set the new configs (hyperparameters) of the model.

set_model_param(model_state_dict: Dict, is_strict: bool = True, is_assign: bool = False)

    Set the trained model parameters from direct input.
    Parameters:
        model_state_dict: Dict, a dict containing parameters and persistent buffers.
        is_strict: bool, whether to strictly enforce that the keys in model_state_dict match the keys returned by the model's state_dict() method.
        is_assign: bool, when False, the properties of the tensors in the current module are preserved; when True,
        the properties of the tensors in the state dict are preserved.

set_optimizer(optimizer, optim_config: Optional[Dict] = None)

    Set the optimizer, which must inherit from torch.optim.Optimizer, and optionally reset the optimizer configs.
    Parameters:
        optimizer: torch.optim.Optimizer, a user-defined optimizer.
        optim_config: Dict[str, Any] | None, the new configs of the given optimizer. If None, optim_config is left unchanged.

Energy_Force_Loss(loss_E: Union[Literal['MAE', 'MSE'], torch.nn.Module] = 'MAE', loss_F: Union[Literal['MAE', 'MSE'], torch.nn.Module] = 'MAE', coeff_E: float = 1.0, coeff_F: float = 1.0)

A loss function that evaluates both the predicted energy and the predicted forces.
Both the energy and force losses are averaged over each atom.

Parameters:
    loss_E: the loss function for the energy.
    loss_F: the loss function for the forces.
    coeff_E: the coefficient of the energy loss.
    coeff_F: the coefficient of the force loss.

forward:
    pred: Dict[Literal['energy', 'forces'], th.Tensor], the output of the model;
    label: Dict[Literal['energy', 'forces'], th.Tensor], the labels.
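
A sketch of weighting forces more strongly than energies; the config keys simply mirror the constructor arguments above, and the import path is assumed::

    from TrainingMethod import Energy_Force_Loss   # assumed import path

    loss_cfg = {"loss_E": "MAE", "loss_F": "MSE", "coeff_E": 1.0, "coeff_F": 10.0}
    loss_fn = Energy_Force_Loss(**loss_cfg)        # forward(pred, label) expects dicts with 'energy' and 'forces'

    # When training, pass the uninstantiated class plus its config instead:
    # trainer.set_loss_fn(Energy_Force_Loss, loss_cfg)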

add_module(name: str, module: Optional['Module'])

Add a child module to the current module.

    The module can be accessed as an attribute using the given name.

    Args:
        name (str): name of the child module. The child module can be
            accessed from this module using the given name
        module (Module): child module to be added to the module.

apply(fn: Callable[['Module'], None])

Apply fn recursively to every submodule (as returned by .children()) as well as self.

    Typical use includes initializing the parameters of a model
    (see also :ref:`nn-init-doc`).

    Args:
        fn (:class:`Module` -> None): function to be applied to each submodule

    Returns:
        Module: self

    Example::

        >>> @torch.no_grad()
        >>> def init_weights(m):
        >>>     print(m)
        >>>     if type(m) == nn.Linear:
        >>>         m.weight.fill_(1.0)
        >>>         print(m.weight)
        >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
        >>> net.apply(init_weights)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

buffers(recurse: bool = True)

Return an iterator over module buffers.

    Args:
        recurse (bool): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module.

    Yields:
        torch.Tensor: module buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for buf in model.buffers():
        >>>     print(type(buf), buf.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

children()

Return an iterator over immediate children modules.

    Yields:
        Module: a child module

compile(*args, **kwargs)

    Compile this Module's forward using :func:`torch.compile`.

    This Module's `__call__` method is compiled and all arguments are passed as-is
    to :func:`torch.compile`.

    See :func:`torch.compile` for details on the arguments for this function.

cpu()

Move all model parameters and buffers to the CPU.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

cuda(device: Optional[Union[int, torch.device]] = None)

Move all model parameters and buffers to the GPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on GPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Args:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

double()

Casts all floating point parameters and buffers to double datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

eval()

Set the module in evaluation mode.

    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e. whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    This is equivalent with :meth:`self.train(False) <torch.nn.Module.train>`.

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.eval()` and several similar mechanisms that may be confused with it.

    Returns:
        Module: self

extra_repr()

Return the extra representation of the module.

    To print customized extra information, you should re-implement
    this method in your own modules. Both single-line and multi-line
    strings are acceptable.

float()

Casts all floating point parameters and buffers to float datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

forward(pred: Dict[Literal['energy', 'forces'], torch.Tensor], label: Dict[Literal['energy', 'forces'], torch.Tensor])

get_buffer(target: str)

Return the buffer given by target if it exists, otherwise throw an error.

    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the buffer
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.Tensor: The buffer referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not a
            buffer

get_extra_state()

Return any extra state to include in the module's state_dict.

    Implement this and a corresponding :func:`set_extra_state` for your module
    if you need to store extra state. This function is called when building the
    module's `state_dict()`.

    Note that extra state should be picklable to ensure working serialization
    of the state_dict. We only provide backwards compatibility guarantees
    for serializing Tensors; other objects may break backwards compatibility if
    their serialized pickled form changes.

    Returns:
        object: Any extra state to store in the module's state_dict

get_parameter(target: str)

Return the parameter given by target if it exists, otherwise throw an error.

    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the Parameter
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Parameter: The Parameter referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not an
            ``nn.Parameter``

get_submodule(target: str)

Return the submodule given by target if it exists, otherwise throw an error.

    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
                )
                (linear): Linear(in_features=100, out_features=200, bias=True)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` which has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To check whether or not we have the ``linear`` submodule, we
    would call ``get_submodule("net_b.linear")``. To check whether
    we have the ``conv`` submodule, we would call
    ``get_submodule("net_b.net_c.conv")``.

    The runtime of ``get_submodule`` is bounded by the degree
    of module nesting in ``target``. A query against
    ``named_modules`` achieves the same result, but it is O(N) in
    the number of transitive modules. So, for a simple check to see
    if some submodule exists, ``get_submodule`` should always be
    used.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Module: The submodule referenced by ``target``

    Raises:
        AttributeError: If at any point along the path resulting from
            the target string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.

half()

Casts all floating point parameters and buffers to half datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

ipu(device: Optional[Union[int, torch.device]] = None)

Move all model parameters and buffers to the IPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on IPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

load_state_dict(state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False)

Copy parameters and buffers from :attr:`state_dict` into this module and its descendants.

    If :attr:`strict` is ``True``, then
    the keys of :attr:`state_dict` must exactly match the keys returned
    by this module's :meth:`~torch.nn.Module.state_dict` function.

    .. warning::
        If :attr:`assign` is ``True`` the optimizer must be created after
        the call to :attr:`load_state_dict` unless
        :func:`~torch.__future__.get_swap_module_params_on_conversion` is ``True``.

    Args:
        state_dict (dict): a dict containing parameters and
            persistent buffers.
        strict (bool, optional): whether to strictly enforce that the keys
            in :attr:`state_dict` match the keys returned by this module's
            :meth:`~torch.nn.Module.state_dict` function. Default: ``True``
        assign (bool, optional): When set to ``False``, the properties of the tensors
            in the current module are preserved whereas setting it to ``True`` preserves
            properties of the Tensors in the state dict. The only
            exception is the ``requires_grad`` field of :class:`~torch.nn.Parameter`s
            for which the value from the module is preserved.
            Default: ``False``

    Returns:
        ``NamedTuple`` with ``missing_keys`` and ``unexpected_keys`` fields:
            * **missing_keys** is a list of str containing any keys that are expected
                by this module but missing from the provided ``state_dict``.
            * **unexpected_keys** is a list of str containing the keys that are not
                expected by this module but present in the provided ``state_dict``.

    Note:
        If a parameter or buffer is registered as ``None`` and its corresponding key
        exists in :attr:`state_dict`, :meth:`load_state_dict` will raise a
        ``RuntimeError``.

modules()

Return an iterator over all modules in the network.

    Yields:
        Module: a module in the network

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.modules()):
        ...     print(idx, '->', m)

        0 -> Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )
        1 -> Linear(in_features=2, out_features=2, bias=True)

mtia(device: Optional[Union[int, torch.device]] = None)

Move all model parameters and buffers to the MTIA.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on MTIA while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True)

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

    Args:
        prefix (str): prefix to prepend to all buffer names.
        recurse (bool, optional): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module. Defaults to True.
        remove_duplicate (bool, optional): whether to remove the duplicated buffers in the result. Defaults to True.

    Yields:
        (str, torch.Tensor): Tuple containing the name and buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, buf in self.named_buffers():
        >>>     if name in ['running_var']:
        >>>         print(buf.size())

named_children()

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

    Yields:
        (str, Module): Tuple containing a name and child module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, module in model.named_children():
        >>>     if name in ['conv4', 'conv5']:
        >>>         print(module)

named_modules(memo: Optional[set['Module']] = None, prefix: str = '', remove_duplicate: bool = True)

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

    Args:
        memo: a memo to store the set of modules already added to the result
        prefix: a prefix that will be added to the name of the module
        remove_duplicate: whether to remove the duplicated module instances in the result
            or not

    Yields:
        (str, Module): Tuple of name and module

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.named_modules()):
        ...     print(idx, '->', m)

        0 -> ('', Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        ))
        1 -> ('0', Linear(in_features=2, out_features=2, bias=True))

named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True)

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

    Args:
        prefix (str): prefix to prepend to all parameter names.
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.
        remove_duplicate (bool, optional): whether to remove the duplicated
            parameters in the result. Defaults to True.

    Yields:
        (str, Parameter): Tuple containing the name and parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, param in self.named_parameters():
        >>>     if name in ['bias']:
        >>>         print(param.size())

parameters(recurse: bool = True)

Return an iterator over module parameters.

    This is typically passed to an optimizer.

    Args:
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.

    Yields:
        Parameter: module parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for param in model.parameters():
        >>>     print(type(param), param.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

register_backward_hook(hook: Callable[['Module', Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[None, tuple[torch.Tensor, ...], torch.Tensor]])

Register a backward hook on the module.

    This function is deprecated in favor of :meth:`~torch.nn.Module.register_full_backward_hook` and
    the behavior of this function will change in future versions.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True)

Add a buffer to the module.

    This is typically used to register a buffer that should not to be
    considered a model parameter. For example, BatchNorm's ``running_mean``
    is not a parameter, but is part of the module's state. Buffers, by
    default, are persistent and will be saved alongside parameters. This
    behavior can be changed by setting :attr:`persistent` to ``False``. The
    only difference between a persistent buffer and a non-persistent buffer
    is that the latter will not be a part of this module's
    :attr:`state_dict`.

    Buffers can be accessed as attributes using given names.

    Args:
        name (str): name of the buffer. The buffer can be accessed
            from this module using the given name
        tensor (Tensor or None): buffer to be registered. If ``None``, then operations
            that run on buffers, such as :attr:`cuda`, are ignored. If ``None``,
            the buffer is **not** included in the module's :attr:`state_dict`.
        persistent (bool): whether the buffer is part of this module's
            :attr:`state_dict`.

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> self.register_buffer('running_mean', torch.zeros(num_features))

register_forward_hook(hook: Union[Callable[[T, tuple[Any, ...], Any], Optional[Any]], Callable[[T, tuple[Any, ...], dict[str, Any], Any], Optional[Any]]], prepend: bool = False, with_kwargs: bool = False, always_call: bool = False)

Register a forward hook on the module.

    The hook will be called every time after :func:`forward` has computed an output.

    If ``with_kwargs`` is ``False`` or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    output. It can modify the input inplace but it will not have effect on
    forward since this is called after :func:`forward` is called. The hook
    should have the following signature::

        hook(module, args, output) -> None or modified output

    If ``with_kwargs`` is ``True``, the forward hook will be passed the
    ``kwargs`` given to the forward function and be expected to return the
    output possibly modified. The hook should have the following signature::

        hook(module, args, kwargs, output) -> None or modified output

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If ``True``, the provided ``hook`` will be fired
            before all existing ``forward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``forward`` hooks registered with
            :func:`register_module_forward_hook` will fire before all hooks
            registered by this method.
            Default: ``False``
        with_kwargs (bool): If ``True``, the ``hook`` will be passed the
            kwargs given to the forward function.
            Default: ``False``
        always_call (bool): If ``True`` the ``hook`` will be run regardless of
            whether an exception is raised while calling the Module.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_forward_pre_hook(hook: Union[Callable[[T, tuple[Any, ...]], Optional[Any]], Callable[[T, tuple[Any, ...], dict[str, Any]], Optional[tuple[Any, dict[str, Any]]]]], prepend: bool = False, with_kwargs: bool = False)

Register a forward pre-hook on the module.

    The hook will be called every time before :func:`forward` is invoked.


    If ``with_kwargs`` is false or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    input. User can either return a tuple or a single modified value in the
    hook. We will wrap the value into a tuple if a single value is returned
    (unless that value is already a tuple). The hook should have the
    following signature::

        hook(module, args) -> None or modified input

    If ``with_kwargs`` is true, the forward pre-hook will be passed the
    kwargs given to the forward function. And if the hook modifies the
    input, both the args and kwargs should be returned. The hook should have
    the following signature::

        hook(module, args, kwargs) -> None or a tuple of modified input and kwargs

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``forward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``forward_pre`` hooks registered with
            :func:`register_module_forward_pre_hook` will fire before all
            hooks registered by this method.
            Default: ``False``
        with_kwargs (bool): If true, the ``hook`` will be passed the kwargs
            given to the forward function.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_full_backward_hook(hook: Callable[['Module', Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[None, tuple[torch.Tensor, ...], torch.Tensor]], prepend: bool = False)

Register a backward hook on the module.

    The hook will be called every time the gradients with respect to a module
    are computed, i.e. the hook will execute if and only if the gradients with
    respect to module outputs are computed. The hook should have the following
    signature::

        hook(module, grad_input, grad_output) -> tuple(Tensor) or None

    The :attr:`grad_input` and :attr:`grad_output` are tuples that contain the gradients
    with respect to the inputs and outputs respectively. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the input that will be used in place of :attr:`grad_input` in
    subsequent computations. :attr:`grad_input` will only correspond to the inputs given
    as positional arguments and all kwarg arguments are ignored. Entries
    in :attr:`grad_input` and :attr:`grad_output` will be ``None`` for all non-Tensor
    arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs or outputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``backward`` hooks registered with
            :func:`register_module_full_backward_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_full_backward_pre_hook(hook: Callable[['Module', Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[None, tuple[torch.Tensor, ...], torch.Tensor]], prepend: bool = False)

Register a backward pre-hook on the module.

    The hook will be called every time the gradients for the module are computed.
    The hook should have the following signature::

        hook(module, grad_output) -> tuple[Tensor] or None

    The :attr:`grad_output` is a tuple. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the output that will be used in place of :attr:`grad_output` in
    subsequent computations. Entries in :attr:`grad_output` will be ``None`` for
    all non-Tensor arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``backward_pre`` hooks registered with
            :func:`register_module_full_backward_pre_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_load_state_dict_post_hook(hook)

Register a post-hook to be run after the module's :meth:`~nn.Module.load_state_dict` is called.

    It should have the following signature::
        hook(module, incompatible_keys) -> None

    The ``module`` argument is the current module that this hook is registered
    on, and the ``incompatible_keys`` argument is a ``NamedTuple`` consisting
    of attributes ``missing_keys`` and ``unexpected_keys``. ``missing_keys``
    is a ``list`` of ``str`` containing the missing keys and
    ``unexpected_keys`` is a ``list`` of ``str`` containing the unexpected keys.

    The given incompatible_keys can be modified inplace if needed.

    Note that the checks performed when calling :func:`load_state_dict` with
    ``strict=True`` are affected by modifications the hook makes to
    ``missing_keys`` or ``unexpected_keys``, as expected. Additions to either
    set of keys will result in an error being thrown when ``strict=True``, and
    clearing out both missing and unexpected keys will avoid an error.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_load_state_dict_pre_hook(hook)

Register a pre-hook to be run before the module's :meth:`~nn.Module.load_state_dict` is called.

    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs) -> None  # noqa: B950

    Arguments:
        hook (Callable): Callable hook that will be invoked before
            loading the state dict.

register_module(name: str, module: Optional['Module'])

Alias for :func:`add_module`.

register_parameter(name: str, param: Optional[torch.nn.Parameter])

Add a parameter to the module.

    The parameter can be accessed as an attribute using given name.

    Args:
        name (str): name of the parameter. The parameter can be accessed
            from this module using the given name
        param (Parameter or None): parameter to be added to the module. If
            ``None``, then operations that run on parameters, such as :attr:`cuda`,
            are ignored. If ``None``, the parameter is **not** included in the
            module's :attr:`state_dict`.
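
    For illustration, a small sketch (the ``ScaledReLU`` module is an assumption, not
    part of this package) of registering a learnable scalar::

        import torch
        import torch.nn as nn

        class ScaledReLU(nn.Module):
            def __init__(self):
                super().__init__()
                # "scale" now appears in .parameters(), .named_parameters()
                # and state_dict().
                self.register_parameter("scale", nn.Parameter(torch.ones(1)))

            def forward(self, x):
                return self.scale * torch.relu(x)

        m = ScaledReLU()
        print(dict(m.named_parameters()).keys())  # dict_keys(['scale'])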

register_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook for the :meth:~torch.nn.Module.state_dict method.

    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata) -> None

    The registered hooks can modify the ``state_dict`` inplace.
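
    For illustration, a sketch (hook name assumed; requires a PyTorch version that
    exposes this method) of a post-hook that stores floating point tensors in half
    precision to shrink checkpoints::

        import torch
        import torch.nn as nn

        def half_precision_checkpoint(module, state_dict, prefix, local_metadata):
            # Cast floating point entries to fp16 before the dict is returned.
            for key, value in state_dict.items():
                if torch.is_tensor(value) and value.is_floating_point():
                    state_dict[key] = value.half()

        model = nn.Linear(4, 2)
        handle = model.register_state_dict_post_hook(half_precision_checkpoint)
        print(model.state_dict()["weight"].dtype)  # torch.float16
        handle.remove()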

register_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook for the :meth:~torch.nn.Module.state_dict method.

    It should have the following signature::
        hook(module, prefix, keep_vars) -> None

    The registered hooks can be used to perform pre-processing before the ``state_dict``
    call is made.

requires_grad_

(

requires_grad: <class 'bool'> = True

)

Change if autograd should record operations on parameters in this module.

    This method sets the parameters' :attr:`requires_grad` attributes
    in-place.

    This method is helpful for freezing part of the module for finetuning
    or training parts of a model individually (e.g., GAN training).

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.requires_grad_()` and several similar mechanisms that may be confused with it.

    Args:
        requires_grad (bool): whether autograd should record operations on
                              parameters in this module. Default: ``True``.

    Returns:
        Module: self
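
    For illustration, a sketch of freezing one part of a model while fine-tuning the
    rest (the ``encoder``/``head`` split is an assumption)::

        import torch
        import torch.nn as nn

        encoder = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 4))
        head = nn.Linear(4, 2)

        encoder.requires_grad_(False)               # freeze the encoder in place
        optimizer = torch.optim.SGD(head.parameters(), lr=1e-2)

        x = torch.randn(16, 8)
        loss = head(encoder(x)).sum()
        loss.backward()                             # no grads accumulate in the encoder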

set_extra_state

(

state: Any

)

Set extra state contained in the loaded state_dict.

    This function is called from :func:`load_state_dict` to handle any extra state
    found within the `state_dict`. Implement this function and a corresponding
    :func:`get_extra_state` for your module if you need to store extra state within its
    `state_dict`.

    Args:
        state (dict): Extra state from the `state_dict`
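
    For illustration, a sketch (the ``StepCounter`` module is an assumption) of pairing
    :func:`get_extra_state` and :func:`set_extra_state` to round-trip non-tensor state::

        import torch.nn as nn

        class StepCounter(nn.Module):
            def __init__(self):
                super().__init__()
                self.steps = 0

            def get_extra_state(self):
                return {"steps": self.steps}        # saved under the extra-state key

            def set_extra_state(self, state):
                self.steps = state["steps"]

        src = StepCounter()
        src.steps = 7
        dst = StepCounter()
        dst.load_state_dict(src.state_dict())
        assert dst.steps == 7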

set_submodule

(

target: <class 'str'>

module: Module

strict: <class 'bool'> = False

)

    Set the submodule given by ``target`` if it exists, otherwise throw an error.

    .. note::
        If ``strict`` is set to ``False`` (default), the method will replace an existing submodule
        or create a new submodule if the parent module exists. If ``strict`` is set to ``True``,
        the method will only attempt to replace an existing submodule and throw an error if
        the submodule does not exist.

    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(3, 3, 3)
                )
                (linear): Linear(3, 3)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To override the ``Conv2d`` with a new submodule ``Linear``, you
    could call ``set_submodule("net_b.net_c.conv", nn.Linear(1, 1))``
    where ``strict`` could be ``True`` or ``False``

    To add a new submodule ``Conv2d`` to the existing ``net_b`` module,
    you would call ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1))``.

    In the example above, calling
    ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1), strict=True)`` raises an
    ``AttributeError`` because ``net_b`` does not have a submodule named ``conv``.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)
        module: The module to set the submodule to.
        strict: If ``False``, the method will replace an existing submodule
            or create a new submodule if the parent module exists. If ``True``,
            the method will only attempt to replace an existing submodule and throw an error
            if the submodule doesn't already exist.

    Raises:
        ValueError: If the ``target`` string is empty or if ``module`` is not an instance of ``nn.Module``.
        AttributeError: If at any point along the path resulting from
            the ``target`` string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.
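
    For illustration, a sketch that builds the ``A``/``net_b``/``net_c`` layout above
    by hand and then performs both kinds of call::

        import torch.nn as nn

        a = nn.Module()
        a.net_b = nn.Module()
        a.net_b.net_c = nn.Module()
        a.net_b.net_c.conv = nn.Conv2d(3, 3, 3)
        a.net_b.linear = nn.Linear(3, 3)

        # Replace an existing child (works with strict=True or strict=False) ...
        a.set_submodule("net_b.net_c.conv", nn.Linear(1, 1))
        # ... and attach a brand-new child, which needs strict=False (the default).
        a.set_submodule("net_b.conv", nn.Conv2d(1, 1, 1))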

share_memory

(

)

See :meth:torch.Tensor.share_memory_.

state_dict

(

args: <class 'inspect._empty'>

destination: <class 'inspect._empty'> = None

prefix: <class 'inspect._empty'> =

keep_vars: <class 'inspect._empty'> = False

)

Return a dictionary containing references to the whole state of the module.

    Both parameters and persistent buffers (e.g. running averages) are
    included. Keys are corresponding parameter and buffer names.
    Parameters and buffers set to ``None`` are not included.

    .. note::
        The returned object is a shallow copy. It contains references
        to the module's parameters and buffers.

    .. warning::
        Currently ``state_dict()`` also accepts positional arguments for
        ``destination``, ``prefix`` and ``keep_vars`` in order. However,
        this is being deprecated and keyword arguments will be enforced in
        future releases.

    .. warning::
        Please avoid the use of argument ``destination`` as it is not
        designed for end-users.

    Args:
        destination (dict, optional): If provided, the state of module will
            be updated into the dict and the same object is returned.
            Otherwise, an ``OrderedDict`` will be created and returned.
            Default: ``None``.
        prefix (str, optional): a prefix added to parameter and buffer
            names to compose the keys in state_dict. Default: ``''``.
        keep_vars (bool, optional): by default the :class:`~torch.Tensor` s
            returned in the state dict are detached from autograd. If it's
            set to ``True``, detaching will not be performed.
            Default: ``False``.

    Returns:
        dict:
            a dictionary containing a whole state of the module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> module.state_dict().keys()
        ['bias', 'weight']

to

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

Move and/or cast the parameters and buffers.

    This can be called as

    .. function:: to(device=None, dtype=None, non_blocking=False)
       :noindex:

    .. function:: to(dtype, non_blocking=False)
       :noindex:

    .. function:: to(tensor, non_blocking=False)
       :noindex:

    .. function:: to(memory_format=torch.channels_last)
       :noindex:

    Its signature is similar to :meth:`torch.Tensor.to`, but only accepts
    floating point or complex :attr:`dtype`\ s. In addition, this method will
    only cast the floating point or complex parameters and buffers to :attr:`dtype`
    (if given). The integral parameters and buffers will be moved to
    :attr:`device`, if that is given, but with their dtypes unchanged. When
    :attr:`non_blocking` is set, it tries to convert/move asynchronously
    with respect to the host if possible, e.g., moving CPU Tensors with
    pinned memory to CUDA devices.

    See below for examples.

    .. note::
        This method modifies the module in-place.

    Args:
        device (:class:`torch.device`): the desired device of the parameters
            and buffers in this module
        dtype (:class:`torch.dtype`): the desired floating point or complex dtype of
            the parameters and buffers in this module
        tensor (torch.Tensor): Tensor whose dtype and device are the desired
            dtype and device for all parameters and buffers in this module
        memory_format (:class:`torch.memory_format`): the desired memory
            format for 4D parameters and buffers in this module (keyword
            only argument)

    Returns:
        Module: self

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> linear = nn.Linear(2, 2)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]])
        >>> linear.to(torch.double)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]], dtype=torch.float64)
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
        >>> gpu1 = torch.device("cuda:1")
        >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
        >>> cpu = torch.device("cpu")
        >>> linear.to(cpu)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16)

        >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.3741+0.j,  0.2382+0.j],
                [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
        >>> linear(torch.ones(3, 2, dtype=torch.cdouble))
        tensor([[0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)

to_empty

(

device: Union[int, str, torch.device, NoneType]

recurse: <class 'bool'> = True

)

Move the parameters and buffers to the specified device without copying storage.

    Args:
        device (:class:`torch.device`): The desired device of the parameters
            and buffers in this module.
        recurse (bool): Whether parameters and buffers of submodules should
            be recursively moved to the specified device.

    Returns:
        Module: self
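
    For illustration, a sketch of the common meta-device pattern (layer sizes arbitrary;
    assumes a PyTorch version where ``torch.device`` works as a context manager): build
    the module without storage, then materialize it before loading real weights::

        import torch
        import torch.nn as nn

        with torch.device("meta"):
            net = nn.Linear(1024, 1024)   # parameters exist but have no real storage

        net = net.to_empty(device="cpu")  # allocate uninitialized storage on CPU
        # Values are uninitialized; load a checkpoint (or re-run init) before use.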

train

(

mode: <class 'bool'> = True

)

Set the module in training mode.

    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e., whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    Args:
        mode (bool): whether to set training mode (``True``) or evaluation
                     mode (``False``). Default: ``True``.

    Returns:
        Module: self

type

(

dst_type: Union[torch.dtype, str]

)

Casts all parameters and buffers to :attr:dst_type.

    .. note::
        This method modifies the module in-place.

    Args:
        dst_type (type or string): the desired type

    Returns:
        Module: self

xpu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the XPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on XPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

zero_grad

(

set_to_none: <class 'bool'> = True

)

Reset gradients of all model parameters.

    See similar function under :class:`torch.optim.Optimizer` for more context.

    Args:
        set_to_none (bool): instead of setting to zero, set the grads to None.
            See :meth:`torch.optim.Optimizer.zero_grad` for details.

Energy_Loss

(

loss_E: Union[Literal['MAE', 'MSE', 'SmoothMAE', 'Huber'], torch.nn.modules.module.Module] = MAE

)

A loss function that evaluates the predicted energy.

Parameters:
    loss_E: the loss function of energy.

forward:
    pred: Dict[Literal['energy'], th.Tensor], output of models;
    label: Dict[Literal['energy'], th.Tensor], labels.
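
A minimal usage sketch (the import path below is an assumption and may differ in your
installation)::

    import torch as th
    from TrainingMethod import Energy_Loss  # hypothetical import path

    loss_fn = Energy_Loss(loss_E="MSE")
    pred = {"energy": th.randn(8, 1, requires_grad=True)}
    label = {"energy": th.randn(8, 1)}

    loss = loss_fn(pred, label)
    loss.backward()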

add_module

(

name: <class 'str'>

module: Optional[ForwardRef('Module')]

)

Add a child module to the current module.

    The module can be accessed as an attribute using the given name.

    Args:
        name (str): name of the child module. The child module can be
            accessed from this module using the given name
        module (Module): child module to be added to the module.

apply

(

fn: Callable[[ForwardRef('Module')], NoneType]

)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

    Typical use includes initializing the parameters of a model
    (see also :ref:`nn-init-doc`).

    Args:
        fn (:class:`Module` -> None): function to be applied to each submodule

    Returns:
        Module: self

    Example::

        >>> @torch.no_grad()
        >>> def init_weights(m):
        >>>     print(m)
        >>>     if type(m) == nn.Linear:
        >>>         m.weight.fill_(1.0)
        >>>         print(m.weight)
        >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
        >>> net.apply(init_weights)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )

bfloat16

(

)

Casts all floating point parameters and buffers to bfloat16 datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

buffers

(

recurse: <class 'bool'> = True

)

Return an iterator over module buffers.

    Args:
        recurse (bool): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module.

    Yields:
        torch.Tensor: module buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for buf in model.buffers():
        >>>     print(type(buf), buf.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

children

(

)

Return an iterator over immediate children modules.

    Yields:
        Module: a child module

compile

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

    Compile this Module's forward using :func:`torch.compile`.

    This Module's `__call__` method is compiled and all arguments are passed as-is
    to :func:`torch.compile`.

    See :func:`torch.compile` for details on the arguments for this function.

cpu

(

)

Move all model parameters and buffers to the CPU.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

cuda

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the GPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on GPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Args:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

double

(

)

Casts all floating point parameters and buffers to double datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

eval

(

)

Set the module in evaluation mode.

    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e. whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    This is equivalent to :meth:`self.train(False) <torch.nn.Module.train>`.

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.eval()` and several similar mechanisms that may be confused with it.

    Returns:
        Module: self

extra_repr

(

)

Return the extra representation of the module.

    To print customized extra information, you should re-implement
    this method in your own modules. Both single-line and multi-line
    strings are acceptable.

float

(

)

Casts all floating point parameters and buffers to float datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

forward

(

pred: Dict[Literal['energy'], torch.Tensor]

label: Dict[Literal['energy'], torch.Tensor]

)

get_buffer

(

target: <class 'str'>

)

Return the buffer given by target if it exists, otherwise throw an error.

    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the buffer
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.Tensor: The buffer referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not a
            buffer

get_extra_state

(

)

Return any extra state to include in the module’s state_dict.

    Implement this and a corresponding :func:`set_extra_state` for your module
    if you need to store extra state. This function is called when building the
    module's `state_dict()`.

    Note that extra state should be picklable to ensure working serialization
    of the state_dict. We only provide backwards compatibility guarantees
    for serializing Tensors; other objects may break backwards compatibility if
    their serialized pickled form changes.

    Returns:
        object: Any extra state to store in the module's state_dict

get_parameter

(

target: <class 'str'>

)

Return the parameter given by target if it exists, otherwise throw an error.

    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the Parameter
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Parameter: The Parameter referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not an
            ``nn.Parameter``

get_submodule

(

target: <class 'str'>

)

Return the submodule given by target if it exists, otherwise throw an error.

    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
                )
                (linear): Linear(in_features=100, out_features=200, bias=True)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To check whether or not we have the ``linear`` submodule, we
    would call ``get_submodule("net_b.linear")``. To check whether
    we have the ``conv`` submodule, we would call
    ``get_submodule("net_b.net_c.conv")``.

    The runtime of ``get_submodule`` is bounded by the degree
    of module nesting in ``target``. A query against
    ``named_modules`` achieves the same result, but it is O(N) in
    the number of transitive modules. So, for a simple check to see
    if some submodule exists, ``get_submodule`` should always be
    used.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Module: The submodule referenced by ``target``

    Raises:
        AttributeError: If at any point along the path resulting from
            the target string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.
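
    For illustration, a sketch using the implicit child names of
    :class:`~torch.nn.Sequential`::

        import torch.nn as nn

        net = nn.Sequential(nn.Linear(2, 2), nn.Sequential(nn.Linear(2, 2), nn.ReLU()))
        inner = net.get_submodule("1.0")   # the Linear inside the nested Sequential
        assert isinstance(inner, nn.Linear)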

half

(

)

Casts all floating point parameters and buffers to half datatype.

    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

ipu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the IPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on IPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

load_state_dict

(

state_dict: collections.abc.Mapping[str, Any]

strict: <class 'bool'> = True

assign: <class 'bool'> = False

)

Copy parameters and buffers from :attr:state_dict into this module and its descendants.

    If :attr:`strict` is ``True``, then
    the keys of :attr:`state_dict` must exactly match the keys returned
    by this module's :meth:`~torch.nn.Module.state_dict` function.

    .. warning::
        If :attr:`assign` is ``True`` the optimizer must be created after
        the call to :attr:`load_state_dict` unless
        :func:`~torch.__future__.get_swap_module_params_on_conversion` is ``True``.

    Args:
        state_dict (dict): a dict containing parameters and
            persistent buffers.
        strict (bool, optional): whether to strictly enforce that the keys
            in :attr:`state_dict` match the keys returned by this module's
            :meth:`~torch.nn.Module.state_dict` function. Default: ``True``
        assign (bool, optional): When set to ``False``, the properties of the tensors
            in the current module are preserved whereas setting it to ``True`` preserves
            properties of the Tensors in the state dict. The only
            exception is the ``requires_grad`` field of :class:`~torch.nn.Parameter`s
            for which the value from the module is preserved.
            Default: ``False``

    Returns:
        ``NamedTuple`` with ``missing_keys`` and ``unexpected_keys`` fields:
            * **missing_keys** is a list of str containing any keys that are expected
                by this module but missing from the provided ``state_dict``.
            * **unexpected_keys** is a list of str containing the keys that are not
                expected by this module but present in the provided ``state_dict``.

    Note:
        If a parameter or buffer is registered as ``None`` and its corresponding key
        exists in :attr:`state_dict`, :meth:`load_state_dict` will raise a
        ``RuntimeError``.
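
    For illustration, a minimal round-trip sketch::

        import torch.nn as nn

        src = nn.Linear(4, 2)
        dst = nn.Linear(4, 2)

        result = dst.load_state_dict(src.state_dict(), strict=True)
        assert result.missing_keys == [] and result.unexpected_keys == []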

modules

(

)

Return an iterator over all modules in the network.

    Yields:
        Module: a module in the network

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.modules()):
        ...     print(idx, '->', m)

        0 -> Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )
        1 -> Linear(in_features=2, out_features=2, bias=True)

mtia

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the MTIA.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on MTIA while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

named_buffers

(

prefix: <class 'str'> =

recurse: <class 'bool'> = True

remove_duplicate: <class 'bool'> = True

)

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

    Args:
        prefix (str): prefix to prepend to all buffer names.
        recurse (bool, optional): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module. Defaults to True.
        remove_duplicate (bool, optional): whether to remove the duplicated buffers in the result. Defaults to True.

    Yields:
        (str, torch.Tensor): Tuple containing the name and buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, buf in self.named_buffers():
        >>>     if name in ['running_var']:
        >>>         print(buf.size())

named_children

(

)

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

    Yields:
        (str, Module): Tuple containing a name and child module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, module in model.named_children():
        >>>     if name in ['conv4', 'conv5']:
        >>>         print(module)

named_modules

(

memo: Optional[set['Module']] = None

prefix: <class 'str'> =

remove_duplicate: <class 'bool'> = True

)

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

    Args:
        memo: a memo to store the set of modules already added to the result
        prefix: a prefix that will be added to the name of the module
        remove_duplicate: whether to remove the duplicated module instances in the result
            or not

    Yields:
        (str, Module): Tuple of name and module

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.named_modules()):
        ...     print(idx, '->', m)

        0 -> ('', Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        ))
        1 -> ('0', Linear(in_features=2, out_features=2, bias=True))

named_parameters

(

prefix: <class 'str'> =

recurse: <class 'bool'> = True

remove_duplicate: <class 'bool'> = True

)

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

    Args:
        prefix (str): prefix to prepend to all parameter names.
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.
        remove_duplicate (bool, optional): whether to remove the duplicated
            parameters in the result. Defaults to True.

    Yields:
        (str, Parameter): Tuple containing the name and parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, param in self.named_parameters():
        >>>     if name in ['bias']:
        >>>         print(param.size())

parameters

(

recurse: <class 'bool'> = True

)

Return an iterator over module parameters.

    This is typically passed to an optimizer.

    Args:
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.

    Yields:
        Parameter: module parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for param in model.parameters():
        >>>     print(type(param), param.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

register_backward_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

)

Register a backward hook on the module.

    This function is deprecated in favor of :meth:`~torch.nn.Module.register_full_backward_hook` and
    the behavior of this function will change in future versions.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_buffer

(

name: <class 'str'>

tensor: Optional[torch.Tensor]

persistent: <class 'bool'> = True

)

Add a buffer to the module.

    This is typically used to register a buffer that should not to be
    considered a model parameter. For example, BatchNorm's ``running_mean``
    is not a parameter, but is part of the module's state. Buffers, by
    default, are persistent and will be saved alongside parameters. This
    behavior can be changed by setting :attr:`persistent` to ``False``. The
    only difference between a persistent buffer and a non-persistent buffer
    is that the latter will not be a part of this module's
    :attr:`state_dict`.

    Buffers can be accessed as attributes using given names.

    Args:
        name (str): name of the buffer. The buffer can be accessed
            from this module using the given name
        tensor (Tensor or None): buffer to be registered. If ``None``, then operations
            that run on buffers, such as :attr:`cuda`, are ignored. If ``None``,
            the buffer is **not** included in the module's :attr:`state_dict`.
        persistent (bool): whether the buffer is part of this module's
            :attr:`state_dict`.

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> self.register_buffer('running_mean', torch.zeros(num_features))

register_forward_hook

(

hook: Union[Callable[[~T, tuple[Any, ...], Any], Optional[Any]], Callable[[~T, tuple[Any, ...], dict[str, Any], Any], Optional[Any]]]

prepend: <class 'bool'> = False

with_kwargs: <class 'bool'> = False

always_call: <class 'bool'> = False

)

Register a forward hook on the module.

    The hook will be called every time after :func:`forward` has computed an output.

    If ``with_kwargs`` is ``False`` or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    output. It can modify the input inplace but it will not have effect on
    forward since this is called after :func:`forward` is called. The hook
    should have the following signature::

        hook(module, args, output) -> None or modified output

    If ``with_kwargs`` is ``True``, the forward hook will be passed the
    ``kwargs`` given to the forward function and be expected to return the
    output possibly modified. The hook should have the following signature::

        hook(module, args, kwargs, output) -> None or modified output

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If ``True``, the provided ``hook`` will be fired
            before all existing ``forward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``forward`` hooks registered with
            :func:`register_module_forward_hook` will fire before all hooks
            registered by this method.
            Default: ``False``
        with_kwargs (bool): If ``True``, the ``hook`` will be passed the
            kwargs given to the forward function.
            Default: ``False``
        always_call (bool): If ``True`` the ``hook`` will be run regardless of
            whether an exception is raised while calling the Module.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
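
    For illustration, a sketch (the ``activations`` dict and the small network are
    assumptions) of capturing an intermediate activation::

        import torch
        import torch.nn as nn

        net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
        activations = {}

        def save_output(module, args, output):
            activations["hidden"] = output.detach()

        handle = net[1].register_forward_hook(save_output)
        net(torch.randn(3, 4))
        print(activations["hidden"].shape)   # torch.Size([3, 8])
        handle.remove()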

register_forward_pre_hook

(

hook: Union[Callable[[~T, tuple[Any, ...]], Optional[Any]], Callable[[~T, tuple[Any, ...], dict[str, Any]], Optional[tuple[Any, dict[str, Any]]]]]

prepend: <class 'bool'> = False

with_kwargs: <class 'bool'> = False

)

Register a forward pre-hook on the module.

    The hook will be called every time before :func:`forward` is invoked.


    If ``with_kwargs`` is false or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    input. User can either return a tuple or a single modified value in the
    hook. We will wrap the value into a tuple if a single value is returned
    (unless that value is already a tuple). The hook should have the
    following signature::

        hook(module, args) -> None or modified input

    If ``with_kwargs`` is true, the forward pre-hook will be passed the
    kwargs given to the forward function. And if the hook modifies the
    input, both the args and kwargs should be returned. The hook should have
    the following signature::

        hook(module, args, kwargs) -> None or a tuple of modified input and kwargs

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``forward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``forward_pre`` hooks registered with
            :func:`register_module_forward_pre_hook` will fire before all
            hooks registered by this method.
            Default: ``False``
        with_kwargs (bool): If true, the ``hook`` will be passed the kwargs
            given to the forward function.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
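
    For illustration, a sketch (the ``center_inputs`` hook is an assumption) of
    modifying the positional inputs before ``forward`` runs::

        import torch
        import torch.nn as nn

        net = nn.Linear(4, 2)

        def center_inputs(module, args):
            (x,) = args
            return (x - x.mean(dim=0, keepdim=True),)  # returned tuple replaces args

        handle = net.register_forward_pre_hook(center_inputs)
        out = net(torch.randn(16, 4))
        handle.remove()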

register_full_backward_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

prepend: <class 'bool'> = False

)

Register a backward hook on the module.

    The hook will be called every time the gradients with respect to a module
    are computed, i.e. the hook will execute if and only if the gradients with
    respect to module outputs are computed. The hook should have the following
    signature::

        hook(module, grad_input, grad_output) -> tuple(Tensor) or None

    The :attr:`grad_input` and :attr:`grad_output` are tuples that contain the gradients
    with respect to the inputs and outputs respectively. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the input that will be used in place of :attr:`grad_input` in
    subsequent computations. :attr:`grad_input` will only correspond to the inputs given
    as positional arguments and all kwarg arguments are ignored. Entries
    in :attr:`grad_input` and :attr:`grad_output` will be ``None`` for all non-Tensor
    arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs or outputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``backward`` hooks registered with
            :func:`register_module_full_backward_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
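
    For illustration, a sketch (network and hook name are assumptions) of logging the
    output-gradient norm of a layer::

        import torch
        import torch.nn as nn

        net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

        def log_grad_norm(module, grad_input, grad_output):
            # grad_output[0] is the gradient w.r.t. the module's output
            print(f"{module.__class__.__name__}: |dL/dout| = {grad_output[0].norm():.4f}")

        handle = net[0].register_full_backward_hook(log_grad_norm)
        net(torch.randn(3, 4)).sum().backward()
        handle.remove()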

register_full_backward_pre_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

prepend: <class 'bool'> = False

)

Register a backward pre-hook on the module.

    The hook will be called every time the gradients for the module are computed.
    The hook should have the following signature::

        hook(module, grad_output) -> tuple[Tensor] or None

    The :attr:`grad_output` is a tuple. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the output that will be used in place of :attr:`grad_output` in
    subsequent computations. Entries in :attr:`grad_output` will be ``None`` for
    all non-Tensor arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``backward_pre`` hooks registered with
            :func:`register_module_full_backward_pre_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
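
    For illustration, a sketch (hook name assumed) of clamping the output gradients
    before they propagate into the module::

        import torch
        import torch.nn as nn

        net = nn.Linear(4, 2)

        def clip_output_grad(module, grad_output):
            # Return a new tuple used in place of grad_output downstream.
            return tuple(g.clamp(-1.0, 1.0) if g is not None else None
                         for g in grad_output)

        handle = net.register_full_backward_pre_hook(clip_output_grad)
        net(torch.randn(8, 4)).sum().backward()
        handle.remove()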

register_load_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook to be run after module’s :meth:~nn.Module.load_state_dict is called.

    It should have the following signature::
        hook(module, incompatible_keys) -> None

    The ``module`` argument is the current module that this hook is registered
    on, and the ``incompatible_keys`` argument is a ``NamedTuple`` consisting
    of attributes ``missing_keys`` and ``unexpected_keys``. ``missing_keys``
    is a ``list`` of ``str`` containing the missing keys and
    ``unexpected_keys`` is a ``list`` of ``str`` containing the unexpected keys.

    The given incompatible_keys can be modified inplace if needed.

    Note that the checks performed when calling :func:`load_state_dict` with
    ``strict=True`` are affected by modifications the hook makes to
    ``missing_keys`` or ``unexpected_keys``, as expected. Additions to either
    set of keys will result in an error being thrown when ``strict=True``, and
    clearing out both missing and unexpected keys will avoid an error.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_load_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook to be run before module’s :meth:~nn.Module.load_state_dict is called.

    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs) -> None  # noqa: B950

    Arguments:
        hook (Callable): Callable hook that will be invoked before
            loading the state dict.

register_module

(

name: <class 'str'>

module: Optional[ForwardRef('Module')]

)

Alias for :func:add_module.

register_parameter

(

name: <class 'str'>

param: Optional[torch.nn.parameter.Parameter]

)

Add a parameter to the module.

    The parameter can be accessed as an attribute using given name.

    Args:
        name (str): name of the parameter. The parameter can be accessed
            from this module using the given name
        param (Parameter or None): parameter to be added to the module. If
            ``None``, then operations that run on parameters, such as :attr:`cuda`,
            are ignored. If ``None``, the parameter is **not** included in the
            module's :attr:`state_dict`.

register_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook for the :meth:~torch.nn.Module.state_dict method.

    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata) -> None

    The registered hooks can modify the ``state_dict`` inplace.

register_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook for the :meth:~torch.nn.Module.state_dict method.

    It should have the following signature::
        hook(module, prefix, keep_vars) -> None

    The registered hooks can be used to perform pre-processing before the ``state_dict``
    call is made.

requires_grad_

(

requires_grad: <class 'bool'> = True

)

Change if autograd should record operations on parameters in this module.

    This method sets the parameters' :attr:`requires_grad` attributes
    in-place.

    This method is helpful for freezing part of the module for finetuning
    or training parts of a model individually (e.g., GAN training).

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.requires_grad_()` and several similar mechanisms that may be confused with it.

    Args:
        requires_grad (bool): whether autograd should record operations on
                              parameters in this module. Default: ``True``.

    Returns:
        Module: self

set_extra_state

(

state: Any

)

Set extra state contained in the loaded state_dict.

    This function is called from :func:`load_state_dict` to handle any extra state
    found within the `state_dict`. Implement this function and a corresponding
    :func:`get_extra_state` for your module if you need to store extra state within its
    `state_dict`.

    Args:
        state (dict): Extra state from the `state_dict`

set_submodule

(

target: <class 'str'>

module: Module

strict: <class 'bool'> = False

)

    Set the submodule given by ``target`` if it exists, otherwise throw an error.

    .. note::
        If ``strict`` is set to ``False`` (default), the method will replace an existing submodule
        or create a new submodule if the parent module exists. If ``strict`` is set to ``True``,
        the method will only attempt to replace an existing submodule and throw an error if
        the submodule does not exist.

    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(3, 3, 3)
                )
                (linear): Linear(3, 3)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To override the ``Conv2d`` with a new submodule ``Linear``, you
    could call ``set_submodule("net_b.net_c.conv", nn.Linear(1, 1))``
    where ``strict`` could be ``True`` or ``False``

    To add a new submodule ``Conv2d`` to the existing ``net_b`` module,
    you would call ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1))``.

    In the example above, calling
    ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1), strict=True)`` raises an
    ``AttributeError`` because ``net_b`` does not have a submodule named ``conv``.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)
        module: The module to set the submodule to.
        strict: If ``False``, the method will replace an existing submodule
            or create a new submodule if the parent module exists. If ``True``,
            the method will only attempt to replace an existing submodule and throw an error
            if the submodule doesn't already exist.

    Raises:
        ValueError: If the ``target`` string is empty or if ``module`` is not an instance of ``nn.Module``.
        AttributeError: If at any point along the path resulting from
            the ``target`` string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.

share_memory

(

)

See :meth:torch.Tensor.share_memory_.

state_dict

(

args: <class 'inspect._empty'>

destination: <class 'inspect._empty'> = None

prefix: <class 'inspect._empty'> =

keep_vars: <class 'inspect._empty'> = False

)

Return a dictionary containing references to the whole state of the module.

    Both parameters and persistent buffers (e.g. running averages) are
    included. Keys are corresponding parameter and buffer names.
    Parameters and buffers set to ``None`` are not included.

    .. note::
        The returned object is a shallow copy. It contains references
        to the module's parameters and buffers.

    .. warning::
        Currently ``state_dict()`` also accepts positional arguments for
        ``destination``, ``prefix`` and ``keep_vars`` in order. However,
        this is being deprecated and keyword arguments will be enforced in
        future releases.

    .. warning::
        Please avoid the use of argument ``destination`` as it is not
        designed for end-users.

    Args:
        destination (dict, optional): If provided, the state of module will
            be updated into the dict and the same object is returned.
            Otherwise, an ``OrderedDict`` will be created and returned.
            Default: ``None``.
        prefix (str, optional): a prefix added to parameter and buffer
            names to compose the keys in state_dict. Default: ``''``.
        keep_vars (bool, optional): by default the :class:`~torch.Tensor` s
            returned in the state dict are detached from autograd. If it's
            set to ``True``, detaching will not be performed.
            Default: ``False``.

    Returns:
        dict:
            a dictionary containing a whole state of the module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> module.state_dict().keys()
        ['bias', 'weight']

to

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

Move and/or cast the parameters and buffers.

    This can be called as

    .. function:: to(device=None, dtype=None, non_blocking=False)
       :noindex:

    .. function:: to(dtype, non_blocking=False)
       :noindex:

    .. function:: to(tensor, non_blocking=False)
       :noindex:

    .. function:: to(memory_format=torch.channels_last)
       :noindex:

    Its signature is similar to :meth:`torch.Tensor.to`, but only accepts
    floating point or complex :attr:`dtype`\ s. In addition, this method will
    only cast the floating point or complex parameters and buffers to :attr:`dtype`
    (if given). The integral parameters and buffers will be moved to
    :attr:`device`, if that is given, but with their dtypes unchanged. When
    :attr:`non_blocking` is set, it tries to convert/move asynchronously
    with respect to the host if possible, e.g., moving CPU Tensors with
    pinned memory to CUDA devices.

    See below for examples.

    .. note::
        This method modifies the module in-place.

    Args:
        device (:class:`torch.device`): the desired device of the parameters
            and buffers in this module
        dtype (:class:`torch.dtype`): the desired floating point or complex dtype of
            the parameters and buffers in this module
        tensor (torch.Tensor): Tensor whose dtype and device are the desired
            dtype and device for all parameters and buffers in this module
        memory_format (:class:`torch.memory_format`): the desired memory
            format for 4D parameters and buffers in this module (keyword
            only argument)

    Returns:
        Module: self

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> linear = nn.Linear(2, 2)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]])
        >>> linear.to(torch.double)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]], dtype=torch.float64)
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
        >>> gpu1 = torch.device("cuda:1")
        >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
        >>> cpu = torch.device("cpu")
        >>> linear.to(cpu)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16)

        >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.3741+0.j,  0.2382+0.j],
                [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
        >>> linear(torch.ones(3, 2, dtype=torch.cdouble))
        tensor([[0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)

to_empty

(

device: Union[int, str, torch.device, NoneType]

recurse: <class 'bool'> = True

)

Move the parameters and buffers to the specified device without copying storage.

    Args:
        device (:class:`torch.device`): The desired device of the parameters
            and buffers in this module.
        recurse (bool): Whether parameters and buffers of submodules should
            be recursively moved to the specified device.

    Returns:
        Module: self

train

(

mode: <class 'bool'> = True

)

Set the module in training mode.

    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e., whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    Args:
        mode (bool): whether to set training mode (``True``) or evaluation
                     mode (``False``). Default: ``True``.

    Returns:
        Module: self

type

(

dst_type: Union[torch.dtype, str]

)

Casts all parameters and buffers to :attr:dst_type.

    .. note::
        This method modifies the module in-place.

    Args:
        dst_type (type or string): the desired type

    Returns:
        Module: self

xpu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the XPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on XPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

zero_grad

(

set_to_none: <class 'bool'> = True

)

Reset gradients of all model parameters.

    See similar function under :class:`torch.optim.Optimizer` for more context.

    Args:
        set_to_none (bool): instead of setting to zero, set the grads to None.
            See :meth:`torch.optim.Optimizer.zero_grad` for details.

WrapperBoostLoss

(

loss_E: Union[Literal['MAE', 'MSE', 'SmoothMAE', 'Huber'], torch.nn.modules.module.Module] = MAE

loss_F: <class 'inspect._empty'> = None

)

None

add_module

(

name: <class 'str'>

module: Optional[ForwardRef('Module')]

)

Add a child module to the current module.

    The module can be accessed as an attribute using the given name.

    Args:
        name (str): name of the child module. The child module can be
            accessed from this module using the given name
        module (Module): child module to be added to the module.

apply

(

fn: Callable[[ForwardRef('Module')], NoneType]

)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

    Typical use includes initializing the parameters of a model
    (see also :ref:`nn-init-doc`).

    Args:
        fn (:class:`Module` -> None): function to be applied to each submodule

    Returns:
        Module: self

    Example::

        >>> @torch.no_grad()
        >>> def init_weights(m):
        >>>     print(m)
        >>>     if type(m) == nn.Linear:
        >>>         m.weight.fill_(1.0)
        >>>         print(m.weight)
        >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
        >>> net.apply(init_weights)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )

bfloat16

(

)

Casts all floating point parameters and buffers to bfloat16 datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

buffers

(

recurse: <class 'bool'> = True

)

Return an iterator over module buffers.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
    Args:
        recurse (bool): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module.

    Yields:
        torch.Tensor: module buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for buf in model.buffers():
        >>>     print(type(buf), buf.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

children

(

)

Return an iterator over immediate children modules.

1
2
    Yields:
        Module: a child module

compile

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

1
2
3
4
5
6
    Compile this Module's forward using :func:`torch.compile`.

    This Module's `__call__` method is compiled and all arguments are passed as-is
    to :func:`torch.compile`.

    See :func:`torch.compile` for details on the arguments for this function.
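
A brief sketch, assuming a PyTorch 2.x build where nn.Module.compile is available; extra keyword arguments are forwarded to torch.compile.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 8))
    model.compile()                 # wraps this module's forward with torch.compile

    x = torch.randn(4, 32)
    y = model(x)                    # first call triggers compilation; later calls reuse it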

cpu

(

)

Move all model parameters and buffers to the CPU.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

cuda

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the GPU.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on GPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Args:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

double

(

)

Casts all floating point parameters and buffers to double datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

eval

(

)

Set the module in evaluation mode.

1
2
3
4
5
6
7
8
9
10
11
12
    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e. whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    This is equivalent with :meth:`self.train(False) <torch.nn.Module.train>`.

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.eval()` and several similar mechanisms that may be confused with it.

    Returns:
        Module: self

extra_repr

(

)

Return the extra representation of the module.

1
2
3
    To print customized extra information, you should re-implement
    this method in your own modules. Both single-line and multi-line
    strings are acceptable.

float

(

)

Casts all floating point parameters and buffers to float datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

forward

(

pred: Dict[Literal['energy', 'forces'], List[torch.Tensor]]

label: Dict[Literal['energy', 'forces'], torch.Tensor]

)
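
A hedged usage sketch for WrapperBoostLoss based only on the signatures shown here: the import path, the tensor shapes, and the exact way the energy and force terms are combined are assumptions, not documented behavior.

    import torch
    # Hypothetical import path; adjust to wherever WrapperBoostLoss lives in this package.
    # from trainingmethod.loss import WrapperBoostLoss

    loss_fn = WrapperBoostLoss(loss_E="MAE", loss_F="MSE")   # "MSE" for loss_F is assumed to be accepted

    pred = {
        "energy": [torch.randn(4, requires_grad=True)],      # List[torch.Tensor], per the forward signature
        "forces": [torch.randn(4, 3, requires_grad=True)],   # shapes are illustrative only
    }
    label = {
        "energy": torch.randn(4),                            # plain tensors, per the forward signature
        "forces": torch.randn(4, 3),
    }

    loss = loss_fn(pred, label)   # expected to return a scalar combining both terms (assumption)
    loss.backward()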

get_buffer

(

target: <class 'str'>

)

Return the buffer given by target if it exists, otherwise throw an error.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the buffer
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.Tensor: The buffer referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not a
            buffer

get_extra_state

(

)

Return any extra state to include in the module’s state_dict.

1
2
3
4
5
6
7
8
9
10
11
    Implement this and a corresponding :func:`set_extra_state` for your module
    if you need to store extra state. This function is called when building the
    module's `state_dict()`.

    Note that extra state should be picklable to ensure working serialization
    of the state_dict. We only provide backwards compatibility guarantees
    for serializing Tensors; other objects may break backwards compatibility if
    their serialized pickled form changes.

    Returns:
        object: Any extra state to store in the module's state_dict

get_parameter

(

target: <class 'str'>

)

Return the parameter given by target if it exists, otherwise throw an error.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the Parameter
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Parameter: The Parameter referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not an
            ``nn.Parameter``

get_submodule

(

target: <class 'str'>

)

Return the submodule given by target if it exists, otherwise throw an error.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
                )
                (linear): Linear(in_features=100, out_features=200, bias=True)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To check whether or not we have the ``linear`` submodule, we
    would call ``get_submodule("net_b.linear")``. To check whether
    we have the ``conv`` submodule, we would call
    ``get_submodule("net_b.net_c.conv")``.

    The runtime of ``get_submodule`` is bounded by the degree
    of module nesting in ``target``. A query against
    ``named_modules`` achieves the same result, but it is O(N) in
    the number of transitive modules. So, for a simple check to see
    if some submodule exists, ``get_submodule`` should always be
    used.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Module: The submodule referenced by ``target``

    Raises:
        AttributeError: If at any point along the path resulting from
            the target string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.

half

(

)

Casts all floating point parameters and buffers to half datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

ipu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the IPU.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on IPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

load_state_dict

(

state_dict: collections.abc.Mapping[str, Any]

strict: <class 'bool'> = True

assign: <class 'bool'> = False

)

Copy parameters and buffers from :attr:`state_dict` into this module and its descendants.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
    If :attr:`strict` is ``True``, then
    the keys of :attr:`state_dict` must exactly match the keys returned
    by this module's :meth:`~torch.nn.Module.state_dict` function.

    .. warning::
        If :attr:`assign` is ``True`` the optimizer must be created after
        the call to :attr:`load_state_dict` unless
        :func:`~torch.__future__.get_swap_module_params_on_conversion` is ``True``.

    Args:
        state_dict (dict): a dict containing parameters and
            persistent buffers.
        strict (bool, optional): whether to strictly enforce that the keys
            in :attr:`state_dict` match the keys returned by this module's
            :meth:`~torch.nn.Module.state_dict` function. Default: ``True``
        assign (bool, optional): When set to ``False``, the properties of the tensors
            in the current module are preserved whereas setting it to ``True`` preserves
            properties of the Tensors in the state dict. The only
            exception is the ``requires_grad`` field of :class:`~torch.nn.Parameter`s
            for which the value from the module is preserved.
            Default: ``False``

    Returns:
        ``NamedTuple`` with ``missing_keys`` and ``unexpected_keys`` fields:
            * **missing_keys** is a list of str containing any keys that are expected
                by this module but missing from the provided ``state_dict``.
            * **unexpected_keys** is a list of str containing the keys that are not
                expected by this module but present in the provided ``state_dict``.

    Note:
        If a parameter or buffer is registered as ``None`` and its corresponding key
        exists in :attr:`state_dict`, :meth:`load_state_dict` will raise a
        ``RuntimeError``.
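
A short example of a partial (non-strict) load and the returned key report; standard PyTorch usage.

    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
    ckpt = nn.Sequential(nn.Linear(4, 4)).state_dict()   # deliberately incomplete checkpoint

    result = model.load_state_dict(ckpt, strict=False)
    print(result.missing_keys)      # keys the model expects but the checkpoint lacks: ['1.weight', '1.bias']
    print(result.unexpected_keys)   # keys in the checkpoint the model does not use: []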

modules

(

)

Return an iterator over all modules in the network.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
    Yields:
        Module: a module in the network

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.modules()):
        ...     print(idx, '->', m)

        0 -> Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )
        1 -> Linear(in_features=2, out_features=2, bias=True)

mtia

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the MTIA.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on MTIA while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

named_buffers

(

prefix: <class 'str'> = ''

recurse: <class 'bool'> = True

remove_duplicate: <class 'bool'> = True

)

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
    Args:
        prefix (str): prefix to prepend to all buffer names.
        recurse (bool, optional): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module. Defaults to True.
        remove_duplicate (bool, optional): whether to remove the duplicated buffers in the result. Defaults to True.

    Yields:
        (str, torch.Tensor): Tuple containing the name and buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, buf in self.named_buffers():
        >>>     if name in ['running_var']:
        >>>         print(buf.size())

named_children

(

)

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

1
2
3
4
5
6
7
8
9
    Yields:
        (str, Module): Tuple containing a name and child module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, module in model.named_children():
        >>>     if name in ['conv4', 'conv5']:
        >>>         print(module)

named_modules

(

memo: Optional[set['Module']] = None

prefix: <class 'str'> = ''

remove_duplicate: <class 'bool'> = True

)

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
    Args:
        memo: a memo to store the set of modules already added to the result
        prefix: a prefix that will be added to the name of the module
        remove_duplicate: whether to remove the duplicated module instances in the result
            or not

    Yields:
        (str, Module): Tuple of name and module

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.named_modules()):
        ...     print(idx, '->', m)

        0 -> ('', Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        ))
        1 -> ('0', Linear(in_features=2, out_features=2, bias=True))

named_parameters

(

prefix: <class 'str'> = ''

recurse: <class 'bool'> = True

remove_duplicate: <class 'bool'> = True

)

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
    Args:
        prefix (str): prefix to prepend to all parameter names.
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.
        remove_duplicate (bool, optional): whether to remove the duplicated
            parameters in the result. Defaults to True.

    Yields:
        (str, Parameter): Tuple containing the name and parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, param in self.named_parameters():
        >>>     if name in ['bias']:
        >>>         print(param.size())

parameters

(

recurse: <class 'bool'> = True

)

Return an iterator over module parameters.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
    This is typically passed to an optimizer.

    Args:
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.

    Yields:
        Parameter: module parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for param in model.parameters():
        >>>     print(type(param), param.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

register_backward_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

)

Register a backward hook on the module.

1
2
3
4
5
6
7
    This function is deprecated in favor of :meth:`~torch.nn.Module.register_full_backward_hook` and
    the behavior of this function will change in future versions.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_buffer

(

name: <class 'str'>

tensor: Optional[torch.Tensor]

persistent: <class 'bool'> = True

)

Add a buffer to the module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
    This is typically used to register a buffer that should not be
    considered a model parameter. For example, BatchNorm's ``running_mean``
    is not a parameter, but is part of the module's state. Buffers, by
    default, are persistent and will be saved alongside parameters. This
    behavior can be changed by setting :attr:`persistent` to ``False``. The
    only difference between a persistent buffer and a non-persistent buffer
    is that the latter will not be a part of this module's
    :attr:`state_dict`.

    Buffers can be accessed as attributes using given names.

    Args:
        name (str): name of the buffer. The buffer can be accessed
            from this module using the given name
        tensor (Tensor or None): buffer to be registered. If ``None``, then operations
            that run on buffers, such as :attr:`cuda`, are ignored. If ``None``,
            the buffer is **not** included in the module's :attr:`state_dict`.
        persistent (bool): whether the buffer is part of this module's
            :attr:`state_dict`.

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> self.register_buffer('running_mean', torch.zeros(num_features))

register_forward_hook

(

hook: Union[Callable[[~T, tuple[Any, ...], Any], Optional[Any]], Callable[[~T, tuple[Any, ...], dict[str, Any], Any], Optional[Any]]]

prepend: <class 'bool'> = False

with_kwargs: <class 'bool'> = False

always_call: <class 'bool'> = False

)

Register a forward hook on the module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
    The hook will be called every time after :func:`forward` has computed an output.

    If ``with_kwargs`` is ``False`` or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    output. It can modify the input inplace but it will not have effect on
    forward since this is called after :func:`forward` is called. The hook
    should have the following signature::

        hook(module, args, output) -> None or modified output

    If ``with_kwargs`` is ``True``, the forward hook will be passed the
    ``kwargs`` given to the forward function and be expected to return the
    output possibly modified. The hook should have the following signature::

        hook(module, args, kwargs, output) -> None or modified output

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If ``True``, the provided ``hook`` will be fired
            before all existing ``forward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``forward`` hooks registered with
            :func:`register_module_forward_hook` will fire before all hooks
            registered by this method.
            Default: ``False``
        with_kwargs (bool): If ``True``, the ``hook`` will be passed the
            kwargs given to the forward function.
            Default: ``False``
        always_call (bool): If ``True`` the ``hook`` will be run regardless of
            whether an exception is raised while calling the Module.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
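
A small example of a forward hook that records output shapes and is then removed; standard PyTorch usage.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    shapes = []

    def record_shape(module, args, output):
        # Runs after each forward(); returning None leaves the output unchanged.
        shapes.append((type(module).__name__, tuple(output.shape)))

    handles = [m.register_forward_hook(record_shape) for m in model]
    model(torch.randn(2, 8))
    print(shapes)                   # [('Linear', (2, 16)), ('ReLU', (2, 16)), ('Linear', (2, 4))]

    for h in handles:
        h.remove()                  # always remove hooks you no longer need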

register_forward_pre_hook

(

hook: Union[Callable[[~T, tuple[Any, ...]], Optional[Any]], Callable[[~T, tuple[Any, ...], dict[str, Any]], Optional[tuple[Any, dict[str, Any]]]]]

prepend: <class 'bool'> = False

with_kwargs: <class 'bool'> = False

)

Register a forward pre-hook on the module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
    The hook will be called every time before :func:`forward` is invoked.


    If ``with_kwargs`` is false or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    input. User can either return a tuple or a single modified value in the
    hook. We will wrap the value into a tuple if a single value is returned
    (unless that value is already a tuple). The hook should have the
    following signature::

        hook(module, args) -> None or modified input

    If ``with_kwargs`` is true, the forward pre-hook will be passed the
    kwargs given to the forward function. And if the hook modifies the
    input, both the args and kwargs should be returned. The hook should have
    the following signature::

        hook(module, args, kwargs) -> None or a tuple of modified input and kwargs

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``forward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``forward_pre`` hooks registered with
            :func:`register_module_forward_pre_hook` will fire before all
            hooks registered by this method.
            Default: ``False``
        with_kwargs (bool): If true, the ``hook`` will be passed the kwargs
            given to the forward function.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_full_backward_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

prepend: <class 'bool'> = False

)

Register a backward hook on the module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
    The hook will be called every time the gradients with respect to a module
    are computed, i.e. the hook will execute if and only if the gradients with
    respect to module outputs are computed. The hook should have the following
    signature::

        hook(module, grad_input, grad_output) -> tuple(Tensor) or None

    The :attr:`grad_input` and :attr:`grad_output` are tuples that contain the gradients
    with respect to the inputs and outputs respectively. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the input that will be used in place of :attr:`grad_input` in
    subsequent computations. :attr:`grad_input` will only correspond to the inputs given
    as positional arguments and all kwarg arguments are ignored. Entries
    in :attr:`grad_input` and :attr:`grad_output` will be ``None`` for all non-Tensor
    arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs or outputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``backward`` hooks registered with
            :func:`register_module_full_backward_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
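
A minimal example of a full backward hook that inspects gradients without modifying them; standard PyTorch usage.

    import torch
    import torch.nn as nn

    layer = nn.Linear(4, 2)

    def report_grad(module, grad_input, grad_output):
        # Called once gradients w.r.t. the module's outputs exist; arguments must not be modified in place.
        print("grad_output norm:", grad_output[0].norm().item())

    handle = layer.register_full_backward_hook(report_grad)
    layer(torch.randn(3, 4)).sum().backward()   # triggers the hook
    handle.remove()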

register_full_backward_pre_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

prepend: <class 'bool'> = False

)

Register a backward pre-hook on the module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
    The hook will be called every time the gradients for the module are computed.
    The hook should have the following signature::

        hook(module, grad_output) -> tuple[Tensor] or None

    The :attr:`grad_output` is a tuple. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the output that will be used in place of :attr:`grad_output` in
    subsequent computations. Entries in :attr:`grad_output` will be ``None`` for
    all non-Tensor arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``backward_pre`` hooks registered with
            :func:`register_module_full_backward_pre_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_load_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook to be run after the module's :meth:`~nn.Module.load_state_dict` is called.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
    It should have the following signature::
        hook(module, incompatible_keys) -> None

    The ``module`` argument is the current module that this hook is registered
    on, and the ``incompatible_keys`` argument is a ``NamedTuple`` consisting
    of attributes ``missing_keys`` and ``unexpected_keys``. ``missing_keys``
    is a ``list`` of ``str`` containing the missing keys and
    ``unexpected_keys`` is a ``list`` of ``str`` containing the unexpected keys.

    The given incompatible_keys can be modified inplace if needed.

    Note that the checks performed when calling :func:`load_state_dict` with
    ``strict=True`` are affected by modifications the hook makes to
    ``missing_keys`` or ``unexpected_keys``, as expected. Additions to either
    set of keys will result in an error being thrown when ``strict=True``, and
    clearing out both missing and unexpected keys will avoid an error.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
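
A short example of a post-hook that reports incompatibilities after a non-strict load; standard PyTorch usage.

    import torch.nn as nn

    model = nn.Linear(4, 2)

    def warn_on_incompatibility(module, incompatible_keys):
        # incompatible_keys carries .missing_keys and .unexpected_keys lists.
        if incompatible_keys.missing_keys or incompatible_keys.unexpected_keys:
            print("partial load:", incompatible_keys)

    model.register_load_state_dict_post_hook(warn_on_incompatibility)
    # Load a checkpoint that lacks the bias; the hook reports it.
    model.load_state_dict({"weight": model.weight.detach().clone()}, strict=False)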

register_load_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook to be run before the module's :meth:`~nn.Module.load_state_dict` is called.

1
2
3
4
5
6
    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs) -> None  # noqa: B950

    Arguments:
        hook (Callable): Callable hook that will be invoked before
            loading the state dict.

register_module

(

name: <class 'str'>

module: Optional[ForwardRef('Module')]

)

Alias for :func:`add_module`.

register_parameter

(

name: <class 'str'>

param: Optional[torch.nn.parameter.Parameter]

)

Add a parameter to the module.

1
2
3
4
5
6
7
8
9
    The parameter can be accessed as an attribute using the given name.

    Args:
        name (str): name of the parameter. The parameter can be accessed
            from this module using the given name
        param (Parameter or None): parameter to be added to the module. If
            ``None``, then operations that run on parameters, such as :attr:`cuda`,
            are ignored. If ``None``, the parameter is **not** included in the
            module's :attr:`state_dict`.

register_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook for the :meth:`~torch.nn.Module.state_dict` method.

1
2
3
4
    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata) -> None

    The registered hooks can modify the ``state_dict`` inplace.

register_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook for the :meth:`~torch.nn.Module.state_dict` method.

1
2
3
4
5
    It should have the following signature::
        hook(module, prefix, keep_vars) -> None

    The registered hooks can be used to perform pre-processing before the ``state_dict``
    call is made.

requires_grad_

(

requires_grad: <class 'bool'> = True

)

Change if autograd should record operations on parameters in this module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
    This method sets the parameters' :attr:`requires_grad` attributes
    in-place.

    This method is helpful for freezing part of the module for finetuning
    or training parts of a model individually (e.g., GAN training).

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.requires_grad_()` and several similar mechanisms that may be confused with it.

    Args:
        requires_grad (bool): whether autograd should record operations on
                              parameters in this module. Default: ``True``.

    Returns:
        Module: self
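
A compact example of freezing part of a model for finetuning; standard PyTorch usage.

    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 16), nn.Linear(16, 2))
    model[0].requires_grad_(False)   # freeze the first layer; only the head keeps gradients

    trainable = [name for name, p in model.named_parameters() if p.requires_grad]
    print(trainable)                 # ['1.weight', '1.bias']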

set_extra_state

(

state: Any

)

Set extra state contained in the loaded state_dict.

1
2
3
4
5
6
7
    This function is called from :func:`load_state_dict` to handle any extra state
    found within the `state_dict`. Implement this function and a corresponding
    :func:`get_extra_state` for your module if you need to store extra state within its
    `state_dict`.

    Args:
        state (dict): Extra state from the `state_dict`
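
A toy example of the get_extra_state / set_extra_state pair carrying a plain-Python attribute through the state_dict; standard PyTorch usage.

    import torch.nn as nn

    class Scaler(nn.Module):
        def __init__(self):
            super().__init__()
            self.scale = 1.0                      # neither a Parameter nor a buffer

        def get_extra_state(self):
            return {"scale": self.scale}          # must be picklable

        def set_extra_state(self, state):
            self.scale = state["scale"]

        def forward(self, x):
            return x * self.scale

    m = Scaler()
    m.scale = 2.5
    fresh = Scaler()
    fresh.load_state_dict(m.state_dict())         # extra state travels with the state_dict
    print(fresh.scale)                            # 2.5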

set_submodule

(

target: <class 'str'>

module: Module

strict: <class 'bool'> = False

)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
    Set the submodule given by ``target`` if it exists, otherwise throw an error.

    .. note::
        If ``strict`` is set to ``False`` (default), the method will replace an existing submodule
        or create a new submodule if the parent module exists. If ``strict`` is set to ``True``,
        the method will only attempt to replace an existing submodule and throw an error if
        the submodule does not exist.

    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(3, 3, 3)
                )
                (linear): Linear(3, 3)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To override the ``Conv2d`` with a new submodule ``Linear``, you
    could call ``set_submodule("net_b.net_c.conv", nn.Linear(1, 1))``
    where ``strict`` could be ``True`` or ``False``

    To add a new submodule ``Conv2d`` to the existing ``net_b`` module,
    you would call ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1))``.

    In the above if you set ``strict=True`` and call
    ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1), strict=True)``, an AttributeError
    will be raised because ``net_b`` does not have a submodule named ``conv``.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)
        module: The module to set the submodule to.
        strict: If ``False``, the method will replace an existing submodule
            or create a new submodule if the parent module exists. If ``True``,
            the method will only attempt to replace an existing submodule and throw an error
            if the submodule doesn't already exist.

    Raises:
        ValueError: If the ``target`` string is empty or if ``module`` is not an instance of ``nn.Module``.
        AttributeError: If at any point along the path resulting from
            the ``target`` string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.
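
A small sketch, assuming a PyTorch version that provides set_submodule as documented above, showing a submodule being swapped by its fully-qualified name.

    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
            self.head = nn.Linear(8, 2)

        def forward(self, x):
            return self.head(self.backbone(x))

    model = Net()
    model.set_submodule("backbone.1", nn.GELU())   # replace the ReLU inside the backbone
    print(model.get_submodule("backbone.1"))       # GELU(approximate='none')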

share_memory

(

)

See :meth:`torch.Tensor.share_memory_`.

state_dict

(

args: <class 'inspect._empty'>

destination: <class 'inspect._empty'> = None

prefix: <class 'inspect._empty'> = ''

keep_vars: <class 'inspect._empty'> = False

)

Return a dictionary containing references to the whole state of the module.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
    Both parameters and persistent buffers (e.g. running averages) are
    included. Keys are corresponding parameter and buffer names.
    Parameters and buffers set to ``None`` are not included.

    .. note::
        The returned object is a shallow copy. It contains references
        to the module's parameters and buffers.

    .. warning::
        Currently ``state_dict()`` also accepts positional arguments for
        ``destination``, ``prefix`` and ``keep_vars`` in order. However,
        this is being deprecated and keyword arguments will be enforced in
        future releases.

    .. warning::
        Please avoid the use of argument ``destination`` as it is not
        designed for end-users.

    Args:
        destination (dict, optional): If provided, the state of module will
            be updated into the dict and the same object is returned.
            Otherwise, an ``OrderedDict`` will be created and returned.
            Default: ``None``.
        prefix (str, optional): a prefix added to parameter and buffer
            names to compose the keys in state_dict. Default: ``''``.
        keep_vars (bool, optional): by default the :class:`~torch.Tensor` s
            returned in the state dict are detached from autograd. If it's
            set to ``True``, detaching will not be performed.
            Default: ``False``.

    Returns:
        dict:
            a dictionary containing a whole state of the module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> module.state_dict().keys()
        ['bias', 'weight']
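
A short round-trip example: the state dict is an ordinary mapping of tensors, so it serializes cleanly with torch.save; standard PyTorch usage.

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)
    torch.save(model.state_dict(), "linear.pt")        # the dict holds references; torch.save writes the data

    restored = nn.Linear(4, 2)
    restored.load_state_dict(torch.load("linear.pt"))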

to

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

Move and/or cast the parameters and buffers.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
    This can be called as

    .. function:: to(device=None, dtype=None, non_blocking=False)
       :noindex:

    .. function:: to(dtype, non_blocking=False)
       :noindex:

    .. function:: to(tensor, non_blocking=False)
       :noindex:

    .. function:: to(memory_format=torch.channels_last)
       :noindex:

    Its signature is similar to :meth:`torch.Tensor.to`, but only accepts
    floating point or complex :attr:`dtype`\ s. In addition, this method will
    only cast the floating point or complex parameters and buffers to :attr:`dtype`
    (if given). The integral parameters and buffers will be moved
    :attr:`device`, if that is given, but with dtypes unchanged. When
    :attr:`non_blocking` is set, it tries to convert/move asynchronously
    with respect to the host if possible, e.g., moving CPU Tensors with
    pinned memory to CUDA devices.

    See below for examples.

    .. note::
        This method modifies the module in-place.

    Args:
        device (:class:`torch.device`): the desired device of the parameters
            and buffers in this module
        dtype (:class:`torch.dtype`): the desired floating point or complex dtype of
            the parameters and buffers in this module
        tensor (torch.Tensor): Tensor whose dtype and device are the desired
            dtype and device for all parameters and buffers in this module
        memory_format (:class:`torch.memory_format`): the desired memory
            format for 4D parameters and buffers in this module (keyword
            only argument)

    Returns:
        Module: self

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> linear = nn.Linear(2, 2)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]])
        >>> linear.to(torch.double)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]], dtype=torch.float64)
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
        >>> gpu1 = torch.device("cuda:1")
        >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
        >>> cpu = torch.device("cpu")
        >>> linear.to(cpu)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16)

        >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.3741+0.j,  0.2382+0.j],
                [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
        >>> linear(torch.ones(3, 2, dtype=torch.cdouble))
        tensor([[0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)

to_empty

(

device: Union[int, str, torch.device, NoneType]

recurse: <class 'bool'> = True

)

Move the parameters and buffers to the specified device without copying storage.

1
2
3
4
5
6
7
8
    Args:
        device (:class:`torch.device`): The desired device of the parameters
            and buffers in this module.
        recurse (bool): Whether parameters and buffers of submodules should
            be recursively moved to the specified device.

    Returns:
        Module: self

train

(

mode: <class 'bool'> = True

)

Set the module in training mode.

1
2
3
4
5
6
7
8
9
10
11
    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e., whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    Args:
        mode (bool): whether to set training mode (``True``) or evaluation
                     mode (``False``). Default: ``True``.

    Returns:
        Module: self

type

(

dst_type: Union[torch.dtype, str]

)

Casts all parameters and buffers to :attr:`dst_type`.

1
2
3
4
5
6
7
8
    .. note::
        This method modifies the module in-place.

    Args:
        dst_type (type or string): the desired type

    Returns:
        Module: self

xpu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the XPU.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing optimizer if the module will
    live on XPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

zero_grad

(

set_to_none: <class 'bool'> = True

)

Reset gradients of all model parameters.

1
2
3
4
5
    See similar function under :class:`torch.optim.Optimizer` for more context.

    Args:
        set_to_none (bool): instead of setting to zero, set the grads to None.
            See :meth:`torch.optim.Optimizer.zero_grad` for details.

WrapperMeanLoss

(

loss_E: Union[Literal['MAE', 'MSE', 'SmoothMAE', 'Huber'], torch.nn.modules.module.Module] = MAE

loss_F: Union[Literal['MAE', 'MSE', 'SmoothMAE', 'Huber'], torch.nn.modules.module.Module, NoneType] = None

)

Loss wrapper that combines an energy loss (loss_E) with an optional force loss (loss_F); both accept one of 'MAE', 'MSE', 'SmoothMAE', 'Huber' or a custom torch.nn.Module. See forward below for the expected prediction and label formats.

add_module

(

name: <class 'str'>

module: Optional[ForwardRef('Module')]

)

Add a child module to the current module.

1
2
3
4
5
6
    The module can be accessed as an attribute using the given name.

    Args:
        name (str): name of the child module. The child module can be
            accessed from this module using the given name
        module (Module): child module to be added to the module.

apply

(

fn: Callable[[ForwardRef('Module')], NoneType]

)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
    Typical use includes initializing the parameters of a model
    (see also :ref:`nn-init-doc`).

    Args:
        fn (:class:`Module` -> None): function to be applied to each submodule

    Returns:
        Module: self

    Example::

        >>> @torch.no_grad()
        >>> def init_weights(m):
        >>>     print(m)
        >>>     if type(m) == nn.Linear:
        >>>         m.weight.fill_(1.0)
        >>>         print(m.weight)
        >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
        >>> net.apply(init_weights)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Linear(in_features=2, out_features=2, bias=True)
        Parameter containing:
        tensor([[1., 1.],
                [1., 1.]], requires_grad=True)
        Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )

bfloat16

(

)

Casts all floating point parameters and buffers to bfloat16 datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

buffers

(

recurse: <class 'bool'> = True

)

Return an iterator over module buffers.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
    Args:
        recurse (bool): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module.

    Yields:
        torch.Tensor: module buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for buf in model.buffers():
        >>>     print(type(buf), buf.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

children

(

)

Return an iterator over immediate children modules.

1
2
    Yields:
        Module: a child module

compile

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

1
2
3
4
5
6
    Compile this Module's forward using :func:`torch.compile`.

    This Module's `__call__` method is compiled and all arguments are passed as-is
    to :func:`torch.compile`.

    See :func:`torch.compile` for details on the arguments for this function.

cpu

(

)

Move all model parameters and buffers to the CPU.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

cuda

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the GPU.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on GPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Args:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

double

(

)

Casts all floating point parameters and buffers to double datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

eval

(

)

Set the module in evaluation mode.

1
2
3
4
5
6
7
8
9
10
11
12
    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e. whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    This is equivalent with :meth:`self.train(False) <torch.nn.Module.train>`.

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.eval()` and several similar mechanisms that may be confused with it.

    Returns:
        Module: self

extra_repr

(

)

Return the extra representation of the module.

1
2
3
    To print customized extra information, you should re-implement
    this method in your own modules. Both single-line and multi-line
    strings are acceptable.

float

(

)

Casts all floating point parameters and buffers to float datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

forward

(

pred: Dict[Literal['energy', 'forces'], List[torch.Tensor]]

label: Dict[Literal['energy', 'forces'], torch.Tensor]

)

get_buffer

(

target: <class 'str'>

)

Return the buffer given by target if it exists, otherwise throw an error.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the buffer
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.Tensor: The buffer referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not a
            buffer

get_extra_state

(

)

Return any extra state to include in the module’s state_dict.

1
2
3
4
5
6
7
8
9
10
11
    Implement this and a corresponding :func:`set_extra_state` for your module
    if you need to store extra state. This function is called when building the
    module's `state_dict()`.

    Note that extra state should be picklable to ensure working serialization
    of the state_dict. We only provide backwards compatibility guarantees
    for serializing Tensors; other objects may break backwards compatibility if
    their serialized pickled form changes.

    Returns:
        object: Any extra state to store in the module's state_dict

get_parameter

(

target: <class 'str'>

)

Return the parameter given by target if it exists, otherwise throw an error.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
    See the docstring for ``get_submodule`` for a more detailed
    explanation of this method's functionality as well as how to
    correctly specify ``target``.

    Args:
        target: The fully-qualified string name of the Parameter
            to look for. (See ``get_submodule`` for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Parameter: The Parameter referenced by ``target``

    Raises:
        AttributeError: If the target string references an invalid
            path or resolves to something that is not an
            ``nn.Parameter``

get_submodule

(

target: <class 'str'>

)

Return the submodule given by target if it exists, otherwise throw an error.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
                )
                (linear): Linear(in_features=100, out_features=200, bias=True)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To check whether or not we have the ``linear`` submodule, we
    would call ``get_submodule("net_b.linear")``. To check whether
    we have the ``conv`` submodule, we would call
    ``get_submodule("net_b.net_c.conv")``.

    The runtime of ``get_submodule`` is bounded by the degree
    of module nesting in ``target``. A query against
    ``named_modules`` achieves the same result, but it is O(N) in
    the number of transitive modules. So, for a simple check to see
    if some submodule exists, ``get_submodule`` should always be
    used.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)

    Returns:
        torch.nn.Module: The submodule referenced by ``target``

    Raises:
        AttributeError: If at any point along the path resulting from
            the target string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.

half

(

)

Casts all floating point parameters and buffers to half datatype.

1
2
3
4
5
    .. note::
        This method modifies the module in-place.

    Returns:
        Module: self

ipu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the IPU.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on IPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

load_state_dict

(

state_dict: collections.abc.Mapping[str, Any]

strict: <class 'bool'> = True

assign: <class 'bool'> = False

)

Copy parameters and buffers from :attr:`state_dict` into this module and its descendants.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
    If :attr:`strict` is ``True``, then
    the keys of :attr:`state_dict` must exactly match the keys returned
    by this module's :meth:`~torch.nn.Module.state_dict` function.

    .. warning::
        If :attr:`assign` is ``True`` the optimizer must be created after
        the call to :attr:`load_state_dict` unless
        :func:`~torch.__future__.get_swap_module_params_on_conversion` is ``True``.

    Args:
        state_dict (dict): a dict containing parameters and
            persistent buffers.
        strict (bool, optional): whether to strictly enforce that the keys
            in :attr:`state_dict` match the keys returned by this module's
            :meth:`~torch.nn.Module.state_dict` function. Default: ``True``
        assign (bool, optional): When set to ``False``, the properties of the tensors
            in the current module are preserved whereas setting it to ``True`` preserves
            properties of the Tensors in the state dict. The only
            exception is the ``requires_grad`` field of :class:`~torch.nn.Parameter`s
            for which the value from the module is preserved.
            Default: ``False``

    Returns:
        ``NamedTuple`` with ``missing_keys`` and ``unexpected_keys`` fields:
            * **missing_keys** is a list of str containing any keys that are expected
                by this module but missing from the provided ``state_dict``.
            * **unexpected_keys** is a list of str containing the keys that are not
                expected by this module but present in the provided ``state_dict``.

    Note:
        If a parameter or buffer is registered as ``None`` and its corresponding key
        exists in :attr:`state_dict`, :meth:`load_state_dict` will raise a
        ``RuntimeError``.

modules

(

)

Return an iterator over all modules in the network.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
    Yields:
        Module: a module in the network

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.modules()):
        ...     print(idx, '->', m)

        0 -> Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        )
        1 -> Linear(in_features=2, out_features=2, bias=True)

mtia

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the MTIA.

1
2
3
4
5
6
7
8
9
10
11
12
13
    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on MTIA while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

named_buffers

(

prefix: <class 'str'> = ''

recurse: <class 'bool'> = True

remove_duplicate: <class 'bool'> = True

)

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
    Args:
        prefix (str): prefix to prepend to all buffer names.
        recurse (bool, optional): if True, then yields buffers of this module
            and all submodules. Otherwise, yields only buffers that
            are direct members of this module. Defaults to True.
        remove_duplicate (bool, optional): whether to remove the duplicated buffers in the result. Defaults to True.

    Yields:
        (str, torch.Tensor): Tuple containing the name and buffer

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, buf in self.named_buffers():
        >>>     if name in ['running_var']:
        >>>         print(buf.size())

named_children

(

)

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

1
2
3
4
5
6
7
8
9
    Yields:
        (str, Module): Tuple containing a name and child module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, module in model.named_children():
        >>>     if name in ['conv4', 'conv5']:
        >>>         print(module)

named_modules

(

memo: Optional[set['Module']] = None

prefix: <class 'str'> = ''

remove_duplicate: <class 'bool'> = True

)

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

    Args:
        memo: a memo to store the set of modules already added to the result
        prefix: a prefix that will be added to the name of the module
        remove_duplicate: whether to remove the duplicated module instances in the result
            or not

    Yields:
        (str, Module): Tuple of name and module

    Note:
        Duplicate modules are returned only once. In the following
        example, ``l`` will be returned only once.

    Example::

        >>> l = nn.Linear(2, 2)
        >>> net = nn.Sequential(l, l)
        >>> for idx, m in enumerate(net.named_modules()):
        ...     print(idx, '->', m)

        0 -> ('', Sequential(
          (0): Linear(in_features=2, out_features=2, bias=True)
          (1): Linear(in_features=2, out_features=2, bias=True)
        ))
        1 -> ('0', Linear(in_features=2, out_features=2, bias=True))

named_parameters

(

prefix: <class 'str'> = ''

recurse: <class 'bool'> = True

remove_duplicate: <class 'bool'> = True

)

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

    Args:
        prefix (str): prefix to prepend to all parameter names.
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.
        remove_duplicate (bool, optional): whether to remove the duplicated
            parameters in the result. Defaults to True.

    Yields:
        (str, Parameter): Tuple containing the name and parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for name, param in self.named_parameters():
        >>>     if name in ['bias']:
        >>>         print(param.size())

parameters

(

recurse: <class 'bool'> = True

)

Return an iterator over module parameters.

    This is typically passed to an optimizer.

    Args:
        recurse (bool): if True, then yields parameters of this module
            and all submodules. Otherwise, yields only parameters that
            are direct members of this module.

    Yields:
        Parameter: module parameter

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> for param in model.parameters():
        >>>     print(type(param), param.size())
        <class 'torch.Tensor'> (20L,)
        <class 'torch.Tensor'> (20L, 1L, 5L, 5L)

register_backward_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

)

Register a backward hook on the module.

    This function is deprecated in favor of :meth:`~torch.nn.Module.register_full_backward_hook` and
    the behavior of this function will change in future versions.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``

register_buffer

(

name: <class 'str'>

tensor: Optional[torch.Tensor]

persistent: <class 'bool'> = True

)

Add a buffer to the module.

    This is typically used to register a buffer that should not be
    considered a model parameter. For example, BatchNorm's ``running_mean``
    is not a parameter, but is part of the module's state. Buffers, by
    default, are persistent and will be saved alongside parameters. This
    behavior can be changed by setting :attr:`persistent` to ``False``. The
    only difference between a persistent buffer and a non-persistent buffer
    is that the latter will not be a part of this module's
    :attr:`state_dict`.

    Buffers can be accessed as attributes using given names.

    Args:
        name (str): name of the buffer. The buffer can be accessed
            from this module using the given name
        tensor (Tensor or None): buffer to be registered. If ``None``, then operations
            that run on buffers, such as :attr:`cuda`, are ignored. If ``None``,
            the buffer is **not** included in the module's :attr:`state_dict`.
        persistent (bool): whether the buffer is part of this module's
            :attr:`state_dict`.

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> self.register_buffer('running_mean', torch.zeros(num_features))

register_forward_hook

(

hook: Union[Callable[[~T, tuple[Any, ...], Any], Optional[Any]], Callable[[~T, tuple[Any, ...], dict[str, Any], Any], Optional[Any]]]

prepend: <class 'bool'> = False

with_kwargs: <class 'bool'> = False

always_call: <class 'bool'> = False

)

Register a forward hook on the module.

    The hook will be called every time after :func:`forward` has computed an output.

    If ``with_kwargs`` is ``False`` or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    output. It can modify the input in place, but this will have no effect on
    forward since this is called after :func:`forward` is called. The hook
    should have the following signature::

        hook(module, args, output) -> None or modified output

    If ``with_kwargs`` is ``True``, the forward hook will be passed the
    ``kwargs`` given to the forward function and be expected to return the
    output possibly modified. The hook should have the following signature::

        hook(module, args, kwargs, output) -> None or modified output

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If ``True``, the provided ``hook`` will be fired
            before all existing ``forward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``forward`` hooks registered with
            :func:`register_module_forward_hook` will fire before all hooks
            registered by this method.
            Default: ``False``
        with_kwargs (bool): If ``True``, the ``hook`` will be passed the
            kwargs given to the forward function.
            Default: ``False``
        always_call (bool): If ``True`` the ``hook`` will be run regardless of
            whether an exception is raised while calling the Module.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
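
    For illustration, a minimal sketch of a forward hook that records output shapes and is
    later removed through its handle (the model and tensor sizes are illustrative)::

        import torch
        from torch import nn

        recorded = []

        def shape_hook(module, args, output):
            # runs after forward(); records the hooked module's output shape
            recorded.append((type(module).__name__, tuple(output.shape)))

        model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
        handles = [m.register_forward_hook(shape_hook) for m in model]
        model(torch.randn(3, 4))
        print(recorded)    # [('Linear', (3, 8)), ('ReLU', (3, 8)), ('Linear', (3, 2))]
        for h in handles:
            h.remove()     # detach the hooks once they are no longer needed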

register_forward_pre_hook

(

hook: Union[Callable[[~T, tuple[Any, ...]], Optional[Any]], Callable[[~T, tuple[Any, ...], dict[str, Any]], Optional[tuple[Any, dict[str, Any]]]]]

prepend: <class 'bool'> = False

with_kwargs: <class 'bool'> = False

)

Register a forward pre-hook on the module.

    The hook will be called every time before :func:`forward` is invoked.


    If ``with_kwargs`` is false or not specified, the input contains only
    the positional arguments given to the module. Keyword arguments won't be
    passed to the hooks and only to the ``forward``. The hook can modify the
    input. The user can either return a tuple or a single modified value in the
    hook. We will wrap the value into a tuple if a single value is returned
    (unless that value is already a tuple). The hook should have the
    following signature::

        hook(module, args) -> None or modified input

    If ``with_kwargs`` is true, the forward pre-hook will be passed the
    kwargs given to the forward function. And if the hook modifies the
    input, both the args and kwargs should be returned. The hook should have
    the following signature::

        hook(module, args, kwargs) -> None or a tuple of modified input and kwargs

    Args:
        hook (Callable): The user defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``forward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``forward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``forward_pre`` hooks registered with
            :func:`register_module_forward_pre_hook` will fire before all
            hooks registered by this method.
            Default: ``False``
        with_kwargs (bool): If true, the ``hook`` will be passed the kwargs
            given to the forward function.
            Default: ``False``

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
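
    For illustration, a minimal sketch of a pre-hook that rewrites the positional input
    before ``forward`` runs (the normalisation is only an example transformation)::

        import torch
        from torch import nn

        def normalise_input(module, args):
            # return a tuple to replace the positional arguments passed to forward()
            (x,) = args
            return (x / (x.norm(dim=-1, keepdim=True) + 1e-8),)

        layer = nn.Linear(4, 2)
        handle = layer.register_forward_pre_hook(normalise_input)
        out = layer(torch.randn(3, 4))
        handle.remove()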

register_full_backward_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor], Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

prepend: <class 'bool'> = False

)

Register a backward hook on the module.

    The hook will be called every time the gradients with respect to a module
    are computed, i.e. the hook will execute if and only if the gradients with
    respect to module outputs are computed. The hook should have the following
    signature::

        hook(module, grad_input, grad_output) -> tuple(Tensor) or None

    The :attr:`grad_input` and :attr:`grad_output` are tuples that contain the gradients
    with respect to the inputs and outputs respectively. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the input that will be used in place of :attr:`grad_input` in
    subsequent computations. :attr:`grad_input` will only correspond to the inputs given
    as positional arguments and all kwarg arguments are ignored. Entries
    in :attr:`grad_input` and :attr:`grad_output` will be ``None`` for all non-Tensor
    arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs or outputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward`` hooks on
            this :class:`torch.nn.Module`. Note that global
            ``backward`` hooks registered with
            :func:`register_module_full_backward_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
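
    For illustration, a minimal sketch of a full backward hook that inspects the gradient
    flowing out of a layer (the layer and input sizes are illustrative)::

        import torch
        from torch import nn

        def grad_logger(module, grad_input, grad_output):
            # grad_output[0] is the gradient with respect to the module's output
            print("grad_output norm:", grad_output[0].norm().item())

        layer = nn.Linear(4, 2)
        handle = layer.register_full_backward_hook(grad_logger)
        x = torch.randn(3, 4, requires_grad=True)
        layer(x).sum().backward()
        handle.remove()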

register_full_backward_pre_hook

(

hook: Callable[[ForwardRef('Module'), Union[tuple[torch.Tensor, ...], torch.Tensor]], Union[NoneType, tuple[torch.Tensor, ...], torch.Tensor]]

prepend: <class 'bool'> = False

)

Register a backward pre-hook on the module.

    The hook will be called every time the gradients for the module are computed.
    The hook should have the following signature::

        hook(module, grad_output) -> tuple[Tensor] or None

    The :attr:`grad_output` is a tuple. The hook should
    not modify its arguments, but it can optionally return a new gradient with
    respect to the output that will be used in place of :attr:`grad_output` in
    subsequent computations. Entries in :attr:`grad_output` will be ``None`` for
    all non-Tensor arguments.

    For technical reasons, when this hook is applied to a Module, its forward function will
    receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
    of each Tensor returned by the Module's forward function.

    .. warning ::
        Modifying inputs inplace is not allowed when using backward hooks and
        will raise an error.

    Args:
        hook (Callable): The user-defined hook to be registered.
        prepend (bool): If true, the provided ``hook`` will be fired before
            all existing ``backward_pre`` hooks on this
            :class:`torch.nn.Module`. Otherwise, the provided
            ``hook`` will be fired after all existing ``backward_pre`` hooks
            on this :class:`torch.nn.Module`. Note that global
            ``backward_pre`` hooks registered with
            :func:`register_module_full_backward_pre_hook` will fire before
            all hooks registered by this method.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
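
    For illustration, a minimal sketch of a backward pre-hook that rescales the incoming
    output gradients (the 0.5 factor is only an example)::

        import torch
        from torch import nn

        def halve_output_grads(module, grad_output):
            # returning a new tuple replaces grad_output in subsequent computations
            return tuple(g * 0.5 if g is not None else None for g in grad_output)

        layer = nn.Linear(4, 2)
        handle = layer.register_full_backward_pre_hook(halve_output_grads)
        x = torch.randn(3, 4, requires_grad=True)
        layer(x).sum().backward()
        handle.remove()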

register_load_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook to be run after the module's :meth:`~nn.Module.load_state_dict` is called.

    It should have the following signature::
        hook(module, incompatible_keys) -> None

    The ``module`` argument is the current module that this hook is registered
    on, and the ``incompatible_keys`` argument is a ``NamedTuple`` consisting
    of attributes ``missing_keys`` and ``unexpected_keys``. ``missing_keys``
    is a ``list`` of ``str`` containing the missing keys and
    ``unexpected_keys`` is a ``list`` of ``str`` containing the unexpected keys.

    The given incompatible_keys can be modified inplace if needed.

    Note that the checks performed when calling :func:`load_state_dict` with
    ``strict=True`` are affected by modifications the hook makes to
    ``missing_keys`` or ``unexpected_keys``, as expected. Additions to either
    set of keys will result in an error being thrown when ``strict=True``, and
    clearing out both missing and unexpected keys will avoid an error.

    Returns:
        :class:`torch.utils.hooks.RemovableHandle`:
            a handle that can be used to remove the added hook by calling
            ``handle.remove()``
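
    For illustration, a minimal sketch of a post-hook that reports incompatible keys after
    every call to ``load_state_dict``::

        from torch import nn

        def report_incompatible(module, incompatible_keys):
            if incompatible_keys.missing_keys or incompatible_keys.unexpected_keys:
                print("missing:", incompatible_keys.missing_keys,
                      "unexpected:", incompatible_keys.unexpected_keys)

        model = nn.Linear(2, 2)
        handle = model.register_load_state_dict_post_hook(report_incompatible)
        model.load_state_dict(model.state_dict())   # keys match, so nothing is printed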

register_load_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook to be run before the module's :meth:`~nn.Module.load_state_dict` is called.

    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs) -> None  # noqa: B950

    Arguments:
        hook (Callable): Callable hook that will be invoked before
            loading the state dict.

register_module

(

name: <class 'str'>

module: Optional[ForwardRef('Module')]

)

Alias for :func:`add_module`.

register_parameter

(

name: <class 'str'>

param: Optional[torch.nn.parameter.Parameter]

)

Add a parameter to the module.

    The parameter can be accessed as an attribute using given name.

    Args:
        name (str): name of the parameter. The parameter can be accessed
            from this module using the given name
        param (Parameter or None): parameter to be added to the module. If
            ``None``, then operations that run on parameters, such as :attr:`cuda`,
            are ignored. If ``None``, the parameter is **not** included in the
            module's :attr:`state_dict`.
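
    For illustration, a minimal sketch of registering a parameter explicitly inside a
    custom module (the module itself is illustrative)::

        import torch
        from torch import nn

        class Scale(nn.Module):
            def __init__(self):
                super().__init__()
                # equivalent to `self.scale = nn.Parameter(torch.ones(1))`
                self.register_parameter("scale", nn.Parameter(torch.ones(1)))

            def forward(self, x):
                return self.scale * x

        print(dict(Scale().named_parameters()).keys())   # dict_keys(['scale'])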

register_state_dict_post_hook

(

hook: <class 'inspect._empty'>

)

Register a post-hook for the :meth:`~torch.nn.Module.state_dict` method.

    It should have the following signature::
        hook(module, state_dict, prefix, local_metadata) -> None

    The registered hooks can modify the ``state_dict`` inplace.

register_state_dict_pre_hook

(

hook: <class 'inspect._empty'>

)

Register a pre-hook for the :meth:`~torch.nn.Module.state_dict` method.

    It should have the following signature::
        hook(module, prefix, keep_vars) -> None

    The registered hooks can be used to perform pre-processing before the ``state_dict``
    call is made.

requires_grad_

(

requires_grad: <class 'bool'> = True

)

Change if autograd should record operations on parameters in this module.

    This method sets the parameters' :attr:`requires_grad` attributes
    in-place.

    This method is helpful for freezing part of the module for finetuning
    or training parts of a model individually (e.g., GAN training).

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.requires_grad_()` and several similar mechanisms that may be confused with it.

    Args:
        requires_grad (bool): whether autograd should record operations on
                              parameters in this module. Default: ``True``.

    Returns:
        Module: self
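
    For illustration, a minimal sketch of freezing part of a model for fine-tuning (the
    architecture is illustrative)::

        from torch import nn

        model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))
        model[0].requires_grad_(False)     # freeze the first layer in place
        trainable = [p for p in model.parameters() if p.requires_grad]
        print(len(trainable))              # 2: only the last Linear's weight and bias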

set_extra_state

(

state: Any

)

Set extra state contained in the loaded state_dict.

    This function is called from :func:`load_state_dict` to handle any extra state
    found within the `state_dict`. Implement this function and a corresponding
    :func:`get_extra_state` for your module if you need to store extra state within its
    `state_dict`.

    Args:
        state (dict): Extra state from the `state_dict`

set_submodule

(

target: <class 'str'>

module: Module

strict: <class 'bool'> = False

)

    Set the submodule given by ``target`` if it exists, otherwise throw an error.

    .. note::
        If ``strict`` is set to ``False`` (default), the method will replace an existing submodule
        or create a new submodule if the parent module exists. If ``strict`` is set to ``True``,
        the method will only attempt to replace an existing submodule and throw an error if
        the submodule does not exist.

    For example, let's say you have an ``nn.Module`` ``A`` that
    looks like this:

    .. code-block:: text

        A(
            (net_b): Module(
                (net_c): Module(
                    (conv): Conv2d(3, 3, 3)
                )
                (linear): Linear(3, 3)
            )
        )

    (The diagram shows an ``nn.Module`` ``A``. ``A`` has a nested
    submodule ``net_b``, which itself has two submodules ``net_c``
    and ``linear``. ``net_c`` then has a submodule ``conv``.)

    To override the ``Conv2d`` with a new submodule ``Linear``, you
    could call ``set_submodule("net_b.net_c.conv", nn.Linear(1, 1))``
    where ``strict`` could be ``True`` or ``False``

    To add a new submodule ``Conv2d`` to the existing ``net_b`` module,
    you would call ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1))``.

    In the above if you set ``strict=True`` and call
    ``set_submodule("net_b.conv", nn.Conv2d(1, 1, 1), strict=True)``, an AttributeError
    will be raised because ``net_b`` does not have a submodule named ``conv``.

    Args:
        target: The fully-qualified string name of the submodule
            to look for. (See above example for how to specify a
            fully-qualified string.)
        module: The module to set the submodule to.
        strict: If ``False``, the method will replace an existing submodule
            or create a new submodule if the parent module exists. If ``True``,
            the method will only attempt to replace an existing submodule and throw an error
            if the submodule doesn't already exist.

    Raises:
        ValueError: If the ``target`` string is empty or if ``module`` is not an instance of ``nn.Module``.
        AttributeError: If at any point along the path resulting from
            the ``target`` string the (sub)path resolves to a non-existent
            attribute name or an object that is not an instance of ``nn.Module``.
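
    For illustration, a minimal sketch mirroring the diagram above (all module shapes are
    illustrative)::

        from torch import nn

        A = nn.Module()
        A.net_b = nn.Module()
        A.net_b.net_c = nn.Module()
        A.net_b.net_c.conv = nn.Conv2d(3, 3, 3)
        A.net_b.linear = nn.Linear(3, 3)

        A.set_submodule("net_b.net_c.conv", nn.Linear(1, 1))   # replace an existing submodule
        A.set_submodule("net_b.conv", nn.Conv2d(1, 1, 1))      # create a new one (strict=False)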

share_memory

(

)

See :meth:`torch.Tensor.share_memory_`.

state_dict

(

args: <class 'inspect._empty'>

destination: <class 'inspect._empty'> = None

prefix: <class 'inspect._empty'> = ''

keep_vars: <class 'inspect._empty'> = False

)

Return a dictionary containing references to the whole state of the module.

    Both parameters and persistent buffers (e.g. running averages) are
    included. Keys are corresponding parameter and buffer names.
    Parameters and buffers set to ``None`` are not included.

    .. note::
        The returned object is a shallow copy. It contains references
        to the module's parameters and buffers.

    .. warning::
        Currently ``state_dict()`` also accepts positional arguments for
        ``destination``, ``prefix`` and ``keep_vars`` in order. However,
        this is being deprecated and keyword arguments will be enforced in
        future releases.

    .. warning::
        Please avoid the use of argument ``destination`` as it is not
        designed for end-users.

    Args:
        destination (dict, optional): If provided, the state of module will
            be updated into the dict and the same object is returned.
            Otherwise, an ``OrderedDict`` will be created and returned.
            Default: ``None``.
        prefix (str, optional): a prefix added to parameter and buffer
            names to compose the keys in state_dict. Default: ``''``.
        keep_vars (bool, optional): by default the :class:`~torch.Tensor` s
            returned in the state dict are detached from autograd. If it's
            set to ``True``, detaching will not be performed.
            Default: ``False``.

    Returns:
        dict:
            a dictionary containing a whole state of the module

    Example::

        >>> # xdoctest: +SKIP("undefined vars")
        >>> module.state_dict().keys()
        ['bias', 'weight']

to

(

args: <class 'inspect._empty'>

kwargs: <class 'inspect._empty'>

)

Move and/or cast the parameters and buffers.

    This can be called as

    .. function:: to(device=None, dtype=None, non_blocking=False)
       :noindex:

    .. function:: to(dtype, non_blocking=False)
       :noindex:

    .. function:: to(tensor, non_blocking=False)
       :noindex:

    .. function:: to(memory_format=torch.channels_last)
       :noindex:

    Its signature is similar to :meth:`torch.Tensor.to`, but only accepts
    floating point or complex :attr:`dtype`\ s. In addition, this method will
    only cast the floating point or complex parameters and buffers to :attr:`dtype`
    (if given). The integral parameters and buffers will be moved
    :attr:`device`, if that is given, but with dtypes unchanged. When
    :attr:`non_blocking` is set, it tries to convert/move asynchronously
    with respect to the host if possible, e.g., moving CPU Tensors with
    pinned memory to CUDA devices.

    See below for examples.

    .. note::
        This method modifies the module in-place.

    Args:
        device (:class:`torch.device`): the desired device of the parameters
            and buffers in this module
        dtype (:class:`torch.dtype`): the desired floating point or complex dtype of
            the parameters and buffers in this module
        tensor (torch.Tensor): Tensor whose dtype and device are the desired
            dtype and device for all parameters and buffers in this module
        memory_format (:class:`torch.memory_format`): the desired memory
            format for 4D parameters and buffers in this module (keyword
            only argument)

    Returns:
        Module: self

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> linear = nn.Linear(2, 2)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]])
        >>> linear.to(torch.double)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1913, -0.3420],
                [-0.5113, -0.2325]], dtype=torch.float64)
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
        >>> gpu1 = torch.device("cuda:1")
        >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
        >>> cpu = torch.device("cpu")
        >>> linear.to(cpu)
        Linear(in_features=2, out_features=2, bias=True)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.1914, -0.3420],
                [-0.5112, -0.2324]], dtype=torch.float16)

        >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
        >>> linear.weight
        Parameter containing:
        tensor([[ 0.3741+0.j,  0.2382+0.j],
                [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
        >>> linear(torch.ones(3, 2, dtype=torch.cdouble))
        tensor([[0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j],
                [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)

to_empty

(

device: Union[int, str, torch.device, NoneType]

recurse: <class 'bool'> = True

)

Move the parameters and buffers to the specified device without copying storage.

    Args:
        device (:class:`torch.device`): The desired device of the parameters
            and buffers in this module.
        recurse (bool): Whether parameters and buffers of submodules should
            be recursively moved to the specified device.

    Returns:
        Module: self
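
    This pairs naturally with modules created on the ``meta`` device; a minimal sketch
    (parameters are left uninitialised and must be re-initialised before use)::

        import torch
        from torch import nn

        with torch.device("meta"):
            m = nn.Linear(1024, 1024)      # no real storage is allocated yet
        m = m.to_empty(device="cpu")       # allocate uninitialised storage on the CPU
        nn.init.xavier_uniform_(m.weight)  # re-initialise the materialised parameters
        nn.init.zeros_(m.bias)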

train

(

mode: <class 'bool'> = True

)

Set the module in training mode.

    This has an effect only on certain modules. See the documentation of
    particular modules for details of their behaviors in training/evaluation
    mode, i.e., whether they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    Args:
        mode (bool): whether to set training mode (``True``) or evaluation
                     mode (``False``). Default: ``True``.

    Returns:
        Module: self

type

(

dst_type: Union[torch.dtype, str]

)

Casts all parameters and buffers to :attr:`dst_type`.

    .. note::
        This method modifies the module in-place.

    Args:
        dst_type (type or string): the desired type

    Returns:
        Module: self

xpu

(

device: Union[int, torch.device, NoneType] = None

)

Move all model parameters and buffers to the XPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing the optimizer if the module will
    live on XPU while being optimized.

    .. note::
        This method modifies the module in-place.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self

zero_grad

(

set_to_none: <class 'bool'> = True

)

Reset gradients of all model parameters.

    See similar function under :class:`torch.optim.Optimizer` for more context.

    Args:
        set_to_none (bool): instead of setting to zero, set the grads to None.
            See :meth:`torch.optim.Optimizer.zero_grad` for details.

DglGraphLoader

(

data: Dict[Union[Literal['data', 'labels'], Literal['data', 'names']], Any]

batch_size: <class 'int'>

device: str | torch.device = 'cpu'

shuffle: <class 'bool'> = True

is_train: <class 'bool'> = True

data_names: Optional[Sequence[str]] = None

)

A data loader that builds DGL graphs in memory.

Args:
    data:
        when is_train == True:
            Dict: {
                'data': List[dgl.Graph],
                'labels': Dict{'energy': Sequence[float], 'forces': Sequence[np.NDArray[n_atom, 3]]}
            }, where 'forces' is optional.
        otherwise, see `data_names`.
    batch_size: batch size.
    device: the device that the data are put on.
    shuffle: whether to shuffle the data.
    is_train: if True, `data` must contain labels; otherwise labels are not required and the labels returned by next(iter(dataloader)) depend on `data_names`.
    data_names: only used when `is_train` is False.
            If `data_names` is not None, it should be a sequence of data names in the same order as the data,
            and the returned `labels` [i.e., from next(iter(dataloader))] are the data names instead of "energy" or "forces";
            otherwise `labels` is None.

Yields:
        (dgl.DGLGraph, {'energy': energy, 'forces': force})
     or (dgl.DGLGraph, {'energy': energy, })
     or (dgl.DGLGraph, data_names | None) [when is_train == False]
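
A minimal usage sketch, assuming DglGraphLoader is importable from this package and that the
label key follows the signature above (the import path, toy graphs, and label values are
illustrative)::

    import dgl
    import numpy as np
    from TrainingMethod import DglGraphLoader   # hypothetical import path

    graphs = [dgl.graph(([0, 1], [1, 0])) for _ in range(4)]
    data = {
        "data": graphs,
        "labels": {"energy": [0.1, 0.2, 0.3, 0.4],                   # one energy per graph
                   "forces": [np.zeros((2, 3)) for _ in range(4)]},  # optional, one (n_atom, 3) array per graph
    }
    loader = DglGraphLoader(data, batch_size=2, device="cpu", shuffle=True, is_train=True)
    batched_graph, labels = next(iter(loader))   # labels -> {'energy': ..., 'forces': ...}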

shuffle

(

)

PyGDataLoader

(

data: Dict[Union[Literal['data', 'labels'], Literal['data', 'names']], Any]

batch_size: <class 'int'>

device: str | torch.device = 'cpu'

shuffle: <class 'bool'> = True

is_train: <class 'bool'> = True

data_names: Optional[Sequence[str]] = None

)

A data loader that builds PyG Data objects in memory.

Args:
    data: pyg.Data objects that contain the attributes `pos`, `cell`, `atomic_numbers`, `natoms`, `tags`, `fixed`, `pbc`, `idx`.
          `pos`: Tensor, atom coordinates.
          `cell`: Tensor, cell vectors.
          `atomic_numbers`: Tensor, atomic numbers, corresponding to `pos` one by one.
          `natoms`: int, number of atoms.
          `tags`: Tensor, for compatibility with 'FAIR-CHEM' (https://fair-chem.github.io/):
                  the fixed slab part is tagged 0, the free slab part 1, and the adsorbate 2.
          `fixed`: Tensor, fixed-atom tag, where fixed atoms are 0 and free atoms are 1.
          `pbc`: List[bool, bool, bool], whether the cell is periodic along the x, y, and z directions.
    batch_size: batch size.
    device: the device that the data are put on.
    shuffle: whether to shuffle the data.
    is_train: if True, `data` must contain labels; otherwise labels are not required and the labels returned by next(iter(dataloader)) depend on `data_names`.
    data_names: only used when `is_train` is False.
            If `data_names` is not None, it should be a sequence of data names in the same order as the data,
            and the returned `labels` [i.e., from next(iter(dataloader))] are the data names instead of "energy" or "forces";
            otherwise `labels` is None.

Yields:
        (pyg.Data, {'energy': energy, 'forces': force})
     or (pyg.Data, {'energy': energy, })
     or (pyg.Data, data_names | None) [when is_train == False]

shuffle

(

)

E_MAE

(

pred: Dict[Literal['energy', 'forces'], torch.Tensor]

label: Dict[Literal['energy', 'forces'], torch.Tensor]

reduction: Literal['mean', 'sum', 'none'] = 'mean'

)

Mean absolute error (MAE) between the predicted and reference energies.

E_R2

(

pred: Dict[Literal['energy', 'forces'], torch.Tensor]

label: Dict[Literal['energy', 'forces'], torch.Tensor]

)

Coefficient of determination (R²) between the predicted and reference energies.

F_MAE

(

pred: Dict[Literal['energy', 'forces'], torch.Tensor]

label: Dict[Literal['energy', 'forces'], torch.Tensor]

reduction: Literal['mean', 'sum', 'none'] = 'mean'

)

Mean absolute error (MAE) between the predicted and reference forces.

F_MaxE

(

pred: Dict[Literal['energy', 'forces'], torch.Tensor]

label: Dict[Literal['energy', 'forces'], torch.Tensor]

)

Maximum absolute error between the predicted and reference forces.
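
The four metrics above share one calling convention: ``pred`` and ``label`` are dicts of tensors
keyed by 'energy' and 'forces'. A minimal sketch, assuming the functions are importable from
this package (the import path and values are illustrative)::

    import torch
    from TrainingMethod import E_MAE, F_MAE   # hypothetical import path

    pred  = {"energy": torch.tensor([1.0, 2.0]), "forces": torch.zeros(4, 3)}
    label = {"energy": torch.tensor([1.1, 1.9]), "forces": torch.full((4, 3), 0.01)}

    print(E_MAE(pred, label, reduction="mean"))   # mean absolute error of the energies
    print(F_MAE(pred, label, reduction="mean"))   # mean absolute error of the forces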

This post is licensed under CC BY 4.0 by the author.