Command line usage#

bench#

Collect performance metrics of published traditional or end-to-end image codecs.

usage: python -m compressai.utils.bench [-h] {} ...

Collect codec metrics.

positional arguments:
  {}          Select codec

options:
  -h, --help  show this help message and exit
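
For reference, the per-image measurement that bench performs for a traditional codec can be reproduced in a few lines of Python. The sketch below (using Pillow and NumPy, with a placeholder image path) encodes one image as JPEG and reports bpp and PSNR; the actual utility covers more codecs, more metrics, and whole datasets.

  import io
  import numpy as np
  from PIL import Image

  # Encode a reference image with JPEG at a fixed quality setting.
  img = Image.open("example.png").convert("RGB")   # placeholder path
  buf = io.BytesIO()
  img.save(buf, format="JPEG", quality=75)

  # Rate: compressed size in bits over the number of pixels.
  bpp = 8.0 * buf.getbuffer().nbytes / (img.width * img.height)

  # Distortion: PSNR between the original and the decoded image.
  buf.seek(0)
  ref = np.asarray(img, dtype=np.float64)
  rec = np.asarray(Image.open(buf).convert("RGB"), dtype=np.float64)
  mse = np.mean((ref - rec) ** 2)
  psnr = 10.0 * np.log10(255.0 ** 2 / mse)
  print(f"bpp={bpp:.3f}, psnr={psnr:.2f} dB")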

eval_model#

Evaluate an end-to-end compression model on an image dataset.

usage: python -m compressai.utils.eval_model [-h] {pretrained,checkpoint} ...

Evaluate a model on an image dataset.

positional arguments:
  {pretrained,checkpoint}
                        model source

options:
  -h, --help            show this help message and exit
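
The core of what eval_model computes per image can be sketched with the CompressAI Python API. The example below loads a pretrained model from compressai.zoo and derives bpp and PSNR from a forward pass; the image path is a placeholder, and the input is assumed to already have dimensions compatible with the model's strides (the utility itself pads images as needed).

  import math
  import torch
  from PIL import Image
  from torchvision import transforms
  from compressai.zoo import bmshj2018_factorized

  # Load a pretrained model at one quality level (weights are downloaded).
  net = bmshj2018_factorized(quality=3, pretrained=True).eval()

  # One image as a [1, 3, H, W] tensor in [0, 1]; dimensions are assumed to
  # be multiples of 64 (the utility pads inputs, this sketch does not).
  x = transforms.ToTensor()(Image.open("example.png").convert("RGB")).unsqueeze(0)

  with torch.no_grad():
      out = net(x)

  # Rate estimated from the likelihoods, in bits per pixel.
  num_pixels = x.size(2) * x.size(3)
  bpp = sum(
      torch.log(l).sum() / (-math.log(2) * num_pixels)
      for l in out["likelihoods"].values()
  )

  # Distortion of the reconstruction.
  mse = torch.mean((x - out["x_hat"].clamp(0, 1)) ** 2)
  print(f"bpp={bpp.item():.3f}, psnr={10 * math.log10(1 / mse.item()):.2f} dB")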

find_close#

Find the closest codec quality parameter to reach a given metric (bpp, ms-ssim, or psnr).

Example usages:
  • python -m compressai.utils.find_close webp ~/picture.png 0.5 --metric bpp

  • python -m compressai.utils.find_close jpeg ~/picture.png 35 --metric psnr --save

usage: python -m compressai.utils.find_close [-h]
                                             [-m {bpp,psnr-rgb,ms-ssim-rgb}]
                                             [--save]
                                             {} ... image target

Collect codec metrics and performance.

positional arguments:
  {}                    Select codec
  image                 image filepath
  target                target value to match

options:
  -h, --help            show this help message and exit
  -m {bpp,psnr-rgb,ms-ssim-rgb}, --metric {bpp,psnr-rgb,ms-ssim-rgb}
  --save                Save reconstructed image to disk
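
Conceptually, find_close searches the codec's quality parameter for the value whose resulting metric is closest to the target. The sketch below illustrates the idea with a plain bisection over JPEG quality against a target bpp (placeholder path, Pillow only); the actual utility supports several codecs, also handles the psnr-rgb and ms-ssim-rgb metrics, and may refine the search differently.

  import io
  from PIL import Image

  def jpeg_bpp(img, quality):
      # Bits per pixel of a JPEG encode at the given quality setting.
      buf = io.BytesIO()
      img.save(buf, format="JPEG", quality=quality)
      return 8.0 * buf.getbuffer().nbytes / (img.width * img.height)

  def find_quality(img, target_bpp, lo=1, hi=95):
      # Bisection: JPEG bit-rate grows monotonically with the quality setting.
      while hi - lo > 1:
          mid = (lo + hi) // 2
          if jpeg_bpp(img, mid) < target_bpp:
              lo = mid
          else:
              hi = mid
      return hi

  img = Image.open("picture.png").convert("RGB")   # placeholder path
  q = find_quality(img, target_bpp=0.5)
  print(f"closest quality: {q} (bpp={jpeg_bpp(img, q):.3f})")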

plot#

Simple plotting utility to display rate-distortion (RD) curve comparisons between codecs.

usage: python -m compressai.utils.plot [-h] -f [...] [-m] [-t] [-o]
                                       [--figsize ] [--axes   ] [--backend]
                                       [--show]

options:
  -h, --help            show this help message and exit
  -f [ ...], --results-file [ ...]
  -m , --metric         Metric (default: psnr)
  -t , --title          Plot title
  -o , --output         Output file name
  --figsize             Figure relative size (width, height), default: (9, 6)
  --axes                Axes limit (xmin, xmax, ymin, ymax), default:
                        autorange
  --backend             Change plot backend (default: matplotlib)
  --show                Open plot figure
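
If the built-in backends are not enough, the RD curves can also be drawn directly from the JSON results files with matplotlib. The sketch below assumes each file holds a codec name and parallel lists of bpp and PSNR values (one entry per quality level); the exact keys written by eval_model may differ between CompressAI versions, and the file names are placeholders.

  import json
  import matplotlib.pyplot as plt

  # Assumed file layout: {"name": ..., "results": {"bpp": [...], "psnr": [...]}}
  for path in ("codec_a.json", "codec_b.json"):    # placeholder file names
      with open(path) as f:
          data = json.load(f)
      results = data["results"]
      plt.plot(results["bpp"], results["psnr"], ".-", label=data["name"])

  plt.xlabel("Bit-rate [bpp]")
  plt.ylabel("PSNR [dB]")
  plt.title("Rate-distortion comparison")
  plt.legend()
  plt.grid(True)
  plt.show()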

update_model#

Update the CDF parameters of a trained model.

To be called on a model checkpoint after training. This updates the internal CDF-related buffers required for entropy coding.

usage: python -m compressai.utils.update_model [-h] [-n NAME] [-d DIR]
                                               [--no-update]
                                               [-a {factorized-prior,jarhp,mean-scale-hyperprior,scale-hyperprior,ssf2020,bmshj2018-factorized,bmshj2018_factorized_relu,bmshj2018-hyperprior,mbt2018-mean,mbt2018,cheng2020-anchor,cheng2020-attn,bmshj2018-hyperprior-vbr,mbt2018-mean-vbr,mbt2018-vbr}]
                                               filepath

Export a trained model to a new checkpoint with updated CDFs and a hash prefix
so that it can be loaded later via `load_state_dict_from_url`.

positional arguments:
  filepath              Path to the checkpoint model to be exported.

options:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  Exported model name.
  -d DIR, --dir DIR     Exported model directory.
  --no-update           Do not update the model's CDF parameters.
  -a {factorized-prior,jarhp,mean-scale-hyperprior,scale-hyperprior,ssf2020,bmshj2018-factorized,bmshj2018_factorized_relu,bmshj2018-hyperprior,mbt2018-mean,mbt2018,cheng2020-anchor,cheng2020-attn,bmshj2018-hyperprior-vbr,mbt2018-mean-vbr,mbt2018-vbr}, --architecture {factorized-prior,jarhp,mean-scale-hyperprior,scale-hyperprior,ssf2020,bmshj2018-factorized,bmshj2018_factorized_relu,bmshj2018-hyperprior,mbt2018-mean,mbt2018,cheng2020-anchor,cheng2020-attn,bmshj2018-hyperprior-vbr,mbt2018-mean-vbr,mbt2018-vbr}
                        Set model architecture (default: scale-hyperprior).
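
What update_model does to a checkpoint can be mirrored with the Python API: load the trained weights, call update() to recompute the quantized CDF buffers, and the entropy coders become usable. A minimal sketch, assuming a scale-hyperprior model and a checkpoint that stores the weights under a "state_dict" key (the path and key are placeholders; adjust to your own checkpoint layout):

  import torch
  from compressai.zoo import bmshj2018_hyperprior   # match your architecture

  # Build the model and load the trained weights.
  net = bmshj2018_hyperprior(quality=3, pretrained=False)
  ckpt = torch.load("checkpoint.pth.tar", map_location="cpu")  # placeholder path
  net.load_state_dict(ckpt.get("state_dict", ckpt))

  # Recompute the quantized CDF buffers used by the entropy coders; this is
  # the step update_model performs before exporting the checkpoint.
  net.update(force=True)
  net.eval()

  # With updated CDFs the actual entropy coder can be used.
  x = torch.rand(1, 3, 256, 256)                    # stand-in input
  out = net.compress(x)
  x_hat = net.decompress(out["strings"], out["shape"])["x_hat"]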