quantax.model.ResSum
- quantax.model.ResSum(nblocks: int, channels: int, kernel_size: int | Sequence[int], final_activation: Callable | None = None, trans_symm: quantax.symmetry.Symmetry | None = None, dtype: numpy.dtype = jax.numpy.float32)
A convolutional residual network with a summation layer at the end.
- Parameters:
nblocks – The number of residual blocks. Each block contains two convolutional layers.
channels – The number of channels. Every layer has the same number of channels.
kernel_size – The kernel size. Every layer has the same kernel size.
final_activation – The activation function applied in the last layer. By default, Exp is used.
trans_symm – The translation symmetry to be applied in the last layer, see ConvSymmetrize.
dtype – The data type of the parameters.
Tip
This is the recommended architecture for deep neural quantum states.
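A minimal usage sketch based on the signature above. The lattice setup via quantax.sites.Square and its argument are assumptions about the surrounding library, not taken from this section; only the ResSum constructor arguments shown here are documented above.

```python
import jax.numpy as jnp
import quantax as qtx

# Assumption: a lattice is defined before building the network,
# e.g. a 4x4 square lattice (quantax.sites.Square is assumed here).
lattice = qtx.sites.Square(4)

# ResSum with 4 residual blocks (8 convolutional layers in total),
# 32 channels per layer, 3x3 kernels, and the default Exp final activation.
net = qtx.model.ResSum(
    nblocks=4,
    channels=32,
    kernel_size=3,
    dtype=jnp.float32,
)
```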