compressai.ops#

compute_padding#

compressai.ops.compute_padding(in_h: int, in_w: int, *, out_h=None, out_w=None, min_div=1)[source]#

Returns a pair of tuples: the padding values (left, right, top, bottom) to pass to torch.nn.functional.pad, and the matching negative values that crop the padding back off (unpadding).

Parameters:
  • in_h – Input height.

  • in_w – Input width.

  • out_h – Output height.

  • out_w – Output width.

  • min_div – Length that output dimensions should be divisible by.
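A minimal sketch of the documented behaviour (assumptions: dimensions are rounded up to the nearest multiple of min_div, the padding is split symmetrically, and the tuples follow torch.nn.functional.pad ordering; the library source may differ in details):

```python
import math

def compute_padding(in_h, in_w, *, out_h=None, out_w=None, min_div=1):
    # Round each dimension up to the nearest multiple of `min_div`
    # when no explicit output size is given.
    if out_h is None:
        out_h = math.ceil(in_h / min_div) * min_div
    if out_w is None:
        out_w = math.ceil(in_w / min_div) * min_div
    # Split the extra pixels symmetrically between the two sides.
    left = (out_w - in_w) // 2
    right = out_w - in_w - left
    top = (out_h - in_h) // 2
    bottom = out_h - in_h - top
    # F.pad order is (left, right, top, bottom); negative values crop.
    return (left, right, top, bottom), (-left, -right, -top, -bottom)

# Pad a 100x150 image so both sides are divisible by 64, e.g. for a
# model whose downsampling stack requires it, then crop after decoding:
pad, unpad = compute_padding(100, 150, min_div=64)
# pad == (21, 21, 14, 14)  -> padded size 128 x 192
# unpad == (-21, -21, -14, -14)
```

Applying `F.pad(x, pad)` before the model and `F.pad(y, unpad)` after it restores the original spatial size.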

quantize_ste#

compressai.ops.quantize_ste(x: Tensor) Tensor[source]#

Rounding with non-zero gradients. Gradients are approximated by replacing the derivative with the identity function (the straight-through estimator).

Used in “Lossy Image Compression with Compressive Autoencoders” (Theis et al., ICLR 2017)

Note

Implemented with the PyTorch detach() reparametrization trick:

x_round = x_round - x.detach() + x
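The note above can be expanded into a self-contained sketch (assuming plain torch.round for the forward pass; the library implementation may differ in details):

```python
import torch

def quantize_ste(x):
    # Forward value is round(x); the rounding residual is detached,
    # so the backward pass sees only the trailing `+ x` (identity).
    return (torch.round(x) - x).detach() + x

x = torch.tensor([0.4, 1.6], requires_grad=True)
y = quantize_ste(x)   # forward: tensor([0., 2.])
y.sum().backward()    # x.grad: tensor([1., 1.]) -- identity, not zero
```

A plain torch.round would give zero gradient almost everywhere, blocking end-to-end training through the quantizer.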

LowerBound#

class compressai.ops.LowerBound(bound: float)[source]#

Lower bound operator, computes torch.max(x, bound) with a custom gradient.

The derivative is replaced by the identity function when x is above the bound, or when the gradient would move x up towards the bound; otherwise the gradient is set to zero.
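The gradient rule described above can be sketched with a custom torch.autograd.Function (an illustration of the documented behaviour, not the library source):

```python
import torch

class LowerBoundSketch(torch.autograd.Function):
    """Sketch of a lower-bound op with the documented custom gradient."""

    @staticmethod
    def forward(ctx, x, bound):
        ctx.save_for_backward(x, bound)
        return torch.max(x, bound)

    @staticmethod
    def backward(ctx, grad_output):
        x, bound = ctx.saved_tensors
        # Pass the gradient through when x is already above the bound,
        # or when the update (gradient descent step) would move x up
        # towards the bound (grad_output < 0); otherwise zero it.
        pass_through = (x >= bound) | (grad_output < 0)
        return pass_through * grad_output, None

bound = torch.tensor(1.0)
x = torch.tensor([0.5, 2.0], requires_grad=True)
y = LowerBoundSketch.apply(x, bound)   # tensor([1., 2.])
y.sum().backward()
# x.grad: tensor([0., 1.]) -- the entry stuck below the bound gets no
# gradient, because this loss would only push it further down.
```

Compared to a hard clamp, this rule still lets parameters pinned at the bound escape upward when the loss favours it.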

NonNegativeParametrizer#

class compressai.ops.NonNegativeParametrizer(minimum: float = 0, reparam_offset: float = 3.814697265625e-06)[source]#

Non-negative reparametrization.

Used for stability during training.
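A sketch of the reparametrization idea (assumptions: the stored value v is sqrt(w + pedestal) with pedestal = reparam_offset², and the forward pass recovers w = max(v, bound)² − pedestal; the library clamps with its LowerBound op for the custom gradient, while plain torch.clamp is used here for brevity):

```python
import torch

minimum = 0.0
reparam_offset = 2 ** -18       # == 3.814697265625e-06, the default above
pedestal = reparam_offset ** 2
bound = (minimum + pedestal) ** 0.5

def init(w):
    # Map an initial (non-negative) weight to its unconstrained form.
    return torch.sqrt(torch.clamp(w + pedestal, min=pedestal))

def forward(v):
    # Recover the weight; clamping to `bound` and squaring guarantees
    # the result never drops below `minimum`.
    return torch.clamp(v, min=bound) ** 2 - pedestal

w0 = torch.tensor([0.25])
w = forward(init(w0))
# w stays close to w0, and forward() returns a non-negative value even
# if the optimizer drives v negative.
```

The optimizer updates the unconstrained v freely, while the squaring keeps the effective parameter non-negative throughout training.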