TS

An introduction to the transition-state (TS) search functions in /BatchOptim/TS/.

CI_NEB

CI_NEB(
    N_images: int,
    spring_const: float = 0.1,
    optimizer: Literal['CG', 'QN', 'FIRE'] = 'CG',
    iter_scheme: Literal['BFGS', 'Newton', 'PR', 'PR+', 'SD', 'FR'] = 'PR+',
    linesearch: Literal['Backtrack', 'Wolfe', 'NWolfe', '2PT', '3PT', 'Golden', 'Newton', 'None'] = 'Backtrack',
    steplength: float = 0.05,
    E_threshold: float = 0.001,
    F_threshold: float = 0.05,
    maxiter: int = 100,
    device: str | torch.device = 'cpu',
    verbose: int = 2,
)

Ref. TODO

run

run(
    func: Any | torch.nn.Module,
    X_init: torch.Tensor,
    X_fin: torch.Tensor,
    grad_func: Any | torch.nn.Module = None,
    func_args: Sequence = (),
    func_kwargs = None,
    grad_func_args: Sequence = (),
    grad_func_kwargs = None,
    is_grad_func_contain_y: bool = True,
    output_grad: bool = False,
    fixed_atom_tensor: Optional[torch.Tensor] = None,
)

    Run the CI-NEB algorithm.
    Args:
        func: the energy function to be optimized
        X_init: coordinates of the initial structure
        X_fin: coordinates of the final structure
        grad_func: function computing the gradient of func
        func_args: positional arguments of func
        func_kwargs: keyword arguments of func
        grad_func_args: positional arguments of grad_func
        grad_func_kwargs: keyword arguments of grad_func
        is_grad_func_contain_y: whether grad_func takes the dependent variable y as an argument
        output_grad: whether to output the gradient
        fixed_atom_tensor: mask of fixed atoms

    Returns:
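
As a concrete illustration, here is a minimal usage sketch on a toy two-dimensional double-well surface. The import path, the returned value, and the internal handling of `grad_func=None` are assumptions rather than documented behavior; check the BatchOptim source before relying on them.

```python
# Hypothetical usage sketch; import path and return value are assumptions.
import torch
from BatchOptim.TS import CI_NEB  # assumed import path

def energy(X: torch.Tensor) -> torch.Tensor:
    """Toy double-well surface with minima at (-1, 0) and (1, 0)."""
    x, y = X[..., 0], X[..., 1]
    return ((x**2 - 1.0)**2 + y**2).sum()

neb = CI_NEB(N_images=8, spring_const=0.1, optimizer='CG',
             iter_scheme='PR+', linesearch='Backtrack',
             steplength=0.05, E_threshold=1e-3, F_threshold=0.05,
             maxiter=100, device='cpu', verbose=2)

X_init = torch.tensor([[-1.0, 0.0]])  # one minimum
X_fin = torch.tensor([[1.0, 0.0]])    # the other minimum
# grad_func is left at None here, assuming the gradient is handled internally.
band = neb.run(energy, X_init, X_fin)
```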

Dimer

Dimer(
    E_threshold: float = 0.001,
    Torque_thres: float = 0.01,
    F_threshold: float = 0.05,
    maxiter_trans: int = 100,
    maxiter_rot: int = 10,
    maxiter_linsearch: int = 10,
    steplength: float | torch.Tensor = 0.5,
    max_steplength: float = 0.5,
    dx: float = 0.01,
    device: str | torch.device = 'cpu',
    verbose: int = 2,
)

Ref. J. Chem. Phys. 2005, 123, 224101.

run

run(
    func: Any | torch.nn.Module,
    X: torch.Tensor,
    X_diff: torch.Tensor,
    grad_func: Any | torch.nn.Module = None,
    func_args: Sequence = (),
    func_kwargs = None,
    grad_func_args: Sequence = (),
    grad_func_kwargs = None,
    is_grad_func_contain_y: bool = True,
    output_grad: bool = False,
    fixed_atom_tensor: Optional[torch.Tensor] = None,
)

    Run the dimer algorithm.
    Args:
        func: the energy function to be optimized
        X: coordinates of the starting structure, ideally near the saddle point
        X_diff: initial displacement defining the dimer direction
        grad_func: function computing the gradient of func
        func_args: positional arguments of func
        func_kwargs: keyword arguments of func
        grad_func_args: positional arguments of grad_func
        grad_func_kwargs: keyword arguments of grad_func
        is_grad_func_contain_y: whether grad_func takes the dependent variable y as an argument
        output_grad: whether to output the gradient
        fixed_atom_tensor: mask of fixed atoms

    Returns:
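
The same toy surface gives a minimal sketch for Dimer. The starting point and dimer direction below are plain guesses, and the import path and return value are again assumptions, not documented API.

```python
# Hypothetical usage sketch; import path and return value are assumptions.
import torch
from BatchOptim.TS import Dimer  # assumed import path

def energy(X: torch.Tensor) -> torch.Tensor:
    """Toy double-well surface; saddle point at (0, 0)."""
    x, y = X[..., 0], X[..., 1]
    return ((x**2 - 1.0)**2 + y**2).sum()

dimer = Dimer(E_threshold=1e-3, Torque_thres=0.01, F_threshold=0.05,
              maxiter_trans=100, maxiter_rot=10, maxiter_linsearch=10,
              steplength=0.5, dx=0.01, device='cpu', verbose=2)

X = torch.tensor([[0.3, 0.2]])       # starting guess near the saddle
X_diff = torch.tensor([[1.0, 0.0]])  # initial dimer direction
saddle = dimer.run(energy, X, X_diff)
```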

DimerLinsMomt

DimerLinsMomt(
    E_threshold: float = 0.001,
    Torque_thres: float = 0.01,
    F_threshold: float = 0.05,
    maxiter_trans: int = 100,
    maxiter_rot: int = 10,
    maxiter_linsearch: int = 10,
    steplength: float | torch.Tensor = 0.5,
    max_steplength: float = 0.5,
    momenta_coeff: float = 0.9,
    dx: float = 0.01,
    device: str | torch.device = 'cpu',
    verbose: int = 2,
)

Ref. J. Chem. Phys. 2005, 123, 224101, modified with momentum and a line search.

run

run(
    func: Any | torch.nn.Module,
    X: torch.Tensor,
    X_diff: torch.Tensor,
    grad_func: Any | torch.nn.Module = None,
    func_args: Sequence = (),
    func_kwargs = None,
    grad_func_args: Sequence = (),
    grad_func_kwargs = None,
    is_grad_func_contain_y: bool = True,
    output_grad: bool = False,
    fixed_atom_tensor: Optional[torch.Tensor] = None,
)

    Run the dimer algorithm with momentum and line search.
    Args:
        func: the energy function to be optimized
        X: coordinates of the starting structure, ideally near the saddle point
        X_diff: initial displacement defining the dimer direction
        grad_func: function computing the gradient of func
        func_args: positional arguments of func
        func_kwargs: keyword arguments of func
        grad_func_args: positional arguments of grad_func
        grad_func_kwargs: keyword arguments of grad_func
        is_grad_func_contain_y: whether grad_func takes the dependent variable y as an argument
        output_grad: whether to output the gradient
        fixed_atom_tensor: mask of fixed atoms

    Returns:
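
Usage mirrors Dimer; in the sketch below only momenta_coeff is new, and the remaining thresholds are left at their defaults. As before, the import path and return value are assumptions.

```python
# Hypothetical usage sketch; import path and return value are assumptions.
import torch
from BatchOptim.TS import DimerLinsMomt  # assumed import path

def energy(X: torch.Tensor) -> torch.Tensor:
    """Toy double-well surface; saddle point at (0, 0)."""
    x, y = X[..., 0], X[..., 1]
    return ((x**2 - 1.0)**2 + y**2).sum()

# momenta_coeff damps the momentum carried between translation steps.
dimer = DimerLinsMomt(momenta_coeff=0.9, steplength=0.5, dx=0.01,
                      device='cpu', verbose=2)

X = torch.tensor([[0.3, 0.2]])       # starting guess near the saddle
X_diff = torch.tensor([[1.0, 0.0]])  # initial dimer direction
saddle = dimer.run(energy, X, X_diff)
```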