compressai_trainer.run#

Runnable CLI utilities, including:

Module name   Description

train         Training.
eval_model    Evaluate metrics and generate outputs (e.g. bitstreams, reconstructed input) on the input dataset.
plot_rd       RD curve plotter that can query metrics from the experiment tracker (Aim).

train#

Train a model.

Please see Walkthrough for a complete guide.
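As a quick sketch, a training run is typically launched from the command line with Hydra-style overrides. The `compressai-train` entry point and the `example` config name below follow the Walkthrough; the model name and lambda value are illustrative and should be adjusted to your setup.

```shell
# Hypothetical training invocation; model/lmbda values are examples.
model="bmshj2018-factorized"
lmbda=0.0130
cmd="compressai-train --config-name=example ++model.name=$model ++criterion.lmbda=$lmbda"
echo "$cmd"   # inspect the command, then run it with: eval "$cmd"
```

The `++` prefix force-adds or overrides a config key even if it is not already present in the config.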

compressai_trainer.run.train.main(conf: omegaconf.dictconfig.DictConfig)[source]#
compressai_trainer.run.train.setup(conf: omegaconf.dictconfig.DictConfig) → tuple[catalyst.runners.runner.Runner, dict[str, Any]][source]#

eval_model#

Evaluate a model.

Evaluates a model on the configured dataset.infer (i.e. the test set). Saves bitstreams and reconstructed images under paths.output_dir, computes metrics, and writes per-file and averaged results to JSON and TSV files.

To evaluate models trained using CompressAI Trainer:

compressai-eval \
    --config-path="$HOME/data/runs/e4e6d4d5e5c59c69f3bd7be2/configs" \
    --config-path="$HOME/data/runs/d4d5e5c5e4e6bd7be29c69f3/configs" \
    ...

To evaluate models from the CompressAI zoo:

compressai-eval \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=1 ++criterion.lmbda=0.0018 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=2 ++criterion.lmbda=0.0035 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=3 ++criterion.lmbda=0.0067 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=4 ++criterion.lmbda=0.0130 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=5 ++criterion.lmbda=0.0250 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=6 ++criterion.lmbda=0.0483 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=7 ++criterion.lmbda=0.0932 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=8 ++criterion.lmbda=0.1800

The above can be written more compactly by prepending a “default” override, ++model.name="bmshj2018-factorized", which then applies to all of the config groups that follow:

compressai-eval \
    ++model.name="bmshj2018-factorized" \
    --config-name="eval_zoo" ++model.quality=1 ++criterion.lmbda=0.0018 \
    --config-name="eval_zoo" ++model.quality=2 ++criterion.lmbda=0.0035 \
    --config-name="eval_zoo" ++model.quality=3 ++criterion.lmbda=0.0067 \
    --config-name="eval_zoo" ++model.quality=4 ++criterion.lmbda=0.0130 \
    --config-name="eval_zoo" ++model.quality=5 ++criterion.lmbda=0.0250 \
    --config-name="eval_zoo" ++model.quality=6 ++criterion.lmbda=0.0483 \
    --config-name="eval_zoo" ++model.quality=7 ++criterion.lmbda=0.0932 \
    --config-name="eval_zoo" ++model.quality=8 ++criterion.lmbda=0.1800
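Since the eight argument groups above differ only in quality index and lambda, they can also be generated with a shell loop instead of being written out by hand. This is a sketch: the lambda values are the ones listed above, paired with quality indices 1 through 8.

```shell
# Build the per-quality argument groups in a loop (bash arrays).
lmbdas=(0.0018 0.0035 0.0067 0.0130 0.0250 0.0483 0.0932 0.1800)
args=("++model.name=bmshj2018-factorized")
for i in "${!lmbdas[@]}"; do
    args+=(--config-name=eval_zoo)
    args+=("++model.quality=$((i + 1))")
    args+=("++criterion.lmbda=${lmbdas[$i]}")
done
# Inspect the generated command line before running it:
echo compressai-eval "${args[@]}"
# compressai-eval "${args[@]}"
```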

Unless specified otherwise, the following options are used by default:

--config-path="conf"
--config-name="config"

++model.source="config"

# if model.source == "config":
++paths.output_dir="outputs/${model.source}-${env.aim.run_hash}-${model.name}"
++paths.model_checkpoint='${paths.checkpoints}/runner.last.pth'

# if model.source == "from_state_dict":
++paths.output_dir="outputs/${model.name}-${Path(paths.model_checkpoint).stem}"

# if model.source == "zoo":
++paths.output_dir="outputs/${model.source}-${model.name}-${model.metric}-${model.quality}"

The model is evaluated on dataset.infer, which may be configured as follows:

dataset:
  infer:
    type: "ImageFolder"
    config:
      root: "/path/to/directory/containing/images"
      split: ""
    loader:
      shuffle: False
      batch_size: 1
      num_workers: 2
    settings:
    transforms:
      - "ToTensor": {}
    meta:
      name: "Custom dataset"
      identifier: "image/custom"
      num_samples: 0  # ignored during eval

To evaluate a model on a custom directory of samples, use the above config and override dataset.infer.config.root.
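For example, such an override can be passed directly on the command line. In this sketch, the run hash in the config path is a placeholder, and the dataset root is the illustrative path used in the config above.

```shell
# Hypothetical: evaluate a trained run on a custom image directory by
# overriding dataset.infer.config.root at the command line.
run_configs="$HOME/data/runs/<run_hash>/configs"   # placeholder run hash
custom_root="/path/to/directory/containing/images"
cmd="compressai-eval --config-path=$run_configs ++dataset.infer.config.root=$custom_root"
echo "$cmd"   # inspect the command, then run it with: eval "$cmd"
```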

compressai_trainer.run.eval_model.main()[source]#
compressai_trainer.run.eval_model.run_eval_model(runner, batches, filenames, output_dir, metrics)[source]#
compressai_trainer.run.eval_model.setup(conf: omegaconf.dictconfig.DictConfig) → catalyst.runners.runner.Runner[source]#

plot_rd#

RD curve plotter.

See Plot an RD curve for more information.

compressai_trainer.run.plot_rd.build_args(argv)[source]#
compressai_trainer.run.plot_rd.create_dataframe(repo, args)[source]#
compressai_trainer.run.plot_rd.main()[source]#
compressai_trainer.run.plot_rd.plot_dataframe(df: pandas.core.frame.DataFrame, args)[source]#
compressai_trainer.run.plot_rd.wrap(s)[source]#