Commit 52b9e6f

Merge pull request brucefan1983#1303 from brucefan1983/doc-gnep
doc gnep

2 parents dc5d39c + 3f1ad2f

1 file changed: doc/installation.rst (28 additions, 0 deletions)
@@ -33,6 +33,34 @@ You can find several examples for how to use both the ``gpumd`` and ``nep`` exec

.. _netcdf_setup:
.. index::
   single: NetCDF setup


GNEP setup
==========

GNEP is a method for training NEP models using analytical Gradients (the G stands for Gradients).
See the `implementation paper <https://doi.org/10.1016/j.cpc.2025.109994/>`_ for details.

To compile the ``gnep`` executable, run ``make gnep`` in the ``src`` directory.

The usage of the ``gnep`` executable is similar to that of the ``nep`` executable.
The major difference is that the training hyperparameters are written in ``gnep.in`` instead of ``nep.in``.
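As in ``nep.in``, each line of ``gnep.in`` holds one keyword followed by its values, with everything after ``#`` treated as a comment. A minimal Python sketch of that line format (the parser here is a hypothetical helper for illustration, not part of GPUMD, which does its own parsing):

```python
# Hypothetical sketch: parse a gnep.in-style file into {keyword: [values]}.
# Format assumed from the example in this section: one keyword per line,
# '#' starts a comment.

def parse_gnep_in(text):
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comment and surrounding whitespace
        if not line:
            continue  # skip blank and comment-only lines
        keyword, *values = line.split()
        params[keyword] = values
    return params

example = """\
type 2 Ge Se  # same usage as in nep.in
lambda_f 2.0
start_lr 1e-3
"""
print(parse_gnep_in(example))
# {'type': ['2', 'Ge', 'Se'], 'lambda_f': ['2.0'], 'start_lr': ['1e-3']}
```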
Below we use an explicit example with default parameters (except for the ``type`` keyword) to illustrate the inputs in ``gnep.in``::

    type         2 Ge Se  # same usage as in nep.in
    prediction   0        # same usage as in nep.in
    cutoff       8 4      # same usage as in nep.in
    n_max        4 4      # same usage as in nep.in
    basis_size   8 8      # same usage as in nep.in
    l_max        4        # same usage as in nep.in, but 4-body and 5-body descriptors are not supported
    neuron       30       # same usage as in nep.in
    lambda_e     1.0      # same usage as in nep.in
    lambda_f     2.0      # same usage as in nep.in, but defaults to 2
    lambda_v     0.1      # same usage as in nep.in
    start_lr     1e-3     # new keyword: the starting learning rate, a non-negative floating-point number
    stop_lr      1e-7     # new keyword: the stopping learning rate, a non-negative floating-point number
    weight_decay 0.0      # new keyword: the weight decay parameter, a non-negative floating-point number
    batch        2        # same usage as in nep.in, but small values are favored
    epoch        50       # one epoch equals #structures/#batchsize training steps
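With the values above, the epoch rule fixes the total number of training steps, and ``start_lr``/``stop_lr`` bound the learning-rate range. A small sketch of that arithmetic follows; the training-set size is hypothetical, and the geometric decay used to interpolate between the two rates is an illustrative assumption, since the exact schedule used by ``gnep`` is not specified here:

```python
# Illustrative arithmetic only; gnep computes this internally.
num_structures = 100   # hypothetical number of structures in the training set
batch = 2              # from gnep.in
epoch = 50             # from gnep.in
start_lr, stop_lr = 1e-3, 1e-7

steps_per_epoch = num_structures // batch   # one epoch = #structures/#batchsize steps
total_steps = steps_per_epoch * epoch
print(total_steps)  # 2500

# Assumed geometric decay from start_lr to stop_lr over all steps
# (the actual schedule in gnep may differ).
decay = (stop_lr / start_lr) ** (1.0 / (total_steps - 1))
lrs = [start_lr * decay**i for i in range(total_steps)]
print(f"{lrs[0]:.1e} -> {lrs[-1]:.1e}")  # 1.0e-03 -> 1.0e-07
```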

NetCDF setup
============
