- compressai.ops.compute_padding(in_h: int, in_w: int, *, out_h=None, out_w=None, min_div=1)
Returns tuples for padding and unpadding.
  - in_h – Input height.
  - in_w – Input width.
  - out_h – Output height (computed from min_div if not given).
  - out_w – Output width (computed from min_div if not given).
  - min_div – Value that the output dimensions must be divisible by.
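The exact padding split is not spelled out above; the pure-Python sketch below assumes each output dimension is rounded up to the next multiple of min_div and the extra pixels are split as evenly as possible per side, in torch.nn.functional.pad order (left, right, top, bottom):

```python
import math

def compute_padding(in_h, in_w, *, out_h=None, out_w=None, min_div=1):
    # Assumed behaviour: round each output dimension up to the next
    # multiple of min_div when it is not given explicitly.
    if out_h is None:
        out_h = math.ceil(in_h / min_div) * min_div
    if out_w is None:
        out_w = math.ceil(in_w / min_div) * min_div

    # Split the extra pixels as evenly as possible on each side,
    # in torch.nn.functional.pad order: (left, right, top, bottom).
    left = (out_w - in_w) // 2
    right = out_w - in_w - left
    top = (out_h - in_h) // 2
    bottom = out_h - in_h - top

    pad = (left, right, top, bottom)
    unpad = tuple(-p for p in pad)  # negative padding crops back
    return pad, unpad

# Pad a 240x240 image so both sides are divisible by 64:
pad, unpad = compute_padding(240, 240, min_div=64)
```

Under these assumptions, the pad tuple can be passed to torch.nn.functional.pad before encoding and the unpad tuple afterwards to crop the output back to the original size.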
- compressai.ops.quantize_ste(x: Tensor) → Tensor
Rounding with non-zero gradients. The gradient of the rounding operation is approximated by replacing its derivative with the identity function (a straight-through estimator).
Implemented with the PyTorch detach() reparametrization trick:
x_round = torch.round(x) - x.detach() + x
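A minimal, self-contained sketch of the trick (standard PyTorch; not necessarily the library's exact source):

```python
import torch

def quantize_ste(x: torch.Tensor) -> torch.Tensor:
    # Forward pass: plain rounding, since x.detach() and x cancel in value.
    # Backward pass: the rounding term carries no gradient and x.detach()
    # is cut from the graph, so only the identity term `x` propagates.
    return torch.round(x) - x.detach() + x

x = torch.tensor([0.3, 1.7], requires_grad=True)
y = quantize_ste(x)   # rounds in the forward pass
y.sum().backward()    # x.grad is all ones: the identity gradient
```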
- class compressai.ops.LowerBound(bound: float)
Lower bound operator, computes torch.max(x, bound) with a custom gradient.
The gradient passes through unchanged when x is above the bound, or when the gradient update would move x back towards the bound; otherwise it is set to zero.
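One way to realize such a custom gradient is a torch.autograd.Function; the sketch below is an illustration of the described behaviour, not the library's source:

```python
import torch

class _LowerBound(torch.autograd.Function):
    # Illustrative sketch of torch.max(x, bound) with a custom gradient.

    @staticmethod
    def forward(ctx, x, bound):
        bound = torch.as_tensor(bound, dtype=x.dtype)
        ctx.save_for_backward(x, bound)
        return torch.max(x, bound)

    @staticmethod
    def backward(ctx, grad_output):
        x, bound = ctx.saved_tensors
        # Keep the gradient when x is above the bound, or when a descent
        # step would increase x (grad < 0), i.e. move it towards the
        # bound from below; zero it otherwise.
        pass_through = (x >= bound) | (grad_output < 0)
        return pass_through * grad_output, None

x = torch.tensor([-1.0, 2.0], requires_grad=True)
y = _LowerBound.apply(x, 0.0)   # clamps the first entry to 0
y.sum().backward()              # gradient is zeroed for the clamped entry
```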
- class compressai.ops.NonNegativeParametrizer(minimum: float = 0, reparam_offset: float = 3.814697265625e-06)
Non-negative reparametrization, used for numerical stability during training.
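As a hedged sketch of how such a parametrization typically works (the internal names, and the use of clamp in place of the custom LowerBound above, are assumptions): the parameter is stored as a square root (shifted by a tiny pedestal), and the forward pass squares it, so the effective value can never drop below minimum. Note that the default reparam_offset equals 2**-18.

```python
import torch

class NonNegativeParametrizer:
    # Illustrative sketch: store v as sqrt(v + pedestal) and recover it
    # by squaring, so the recovered value is always >= minimum.

    def __init__(self, minimum: float = 0.0, reparam_offset: float = 2**-18):
        self.pedestal = reparam_offset**2
        self.bound = (minimum + self.pedestal) ** 0.5

    def init(self, x: torch.Tensor) -> torch.Tensor:
        # Map a raw value to its stored (square-root) representation.
        return torch.sqrt(torch.clamp(x + self.pedestal, min=self.pedestal))

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        # clamp stands in for the custom LowerBound gradient described above.
        out = torch.clamp(x, min=self.bound)
        return out**2 - self.pedestal
```

Round-tripping a value through init and the forward pass recovers it, while values below minimum are pushed up to minimum.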