Command Line Interface Reference

supercat

supercat [OPTIONS] COMMAND [ARGS]...

Options

-v, --version

Prints the current version.

--install-completion <install_completion>

Install completion for the specified shell.

Options

bash | zsh | fish | powershell | pwsh

--show-completion <show_completion>

Show completion for the specified shell, to copy it or customize the installation.

Options

bash | zsh | fish | powershell | pwsh
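
For example, to set up tab completion for bash (substitute any of the supported shells; this assumes supercat is installed and on your PATH):

```shell
# Install tab completion for bash
supercat --install-completion bash

# Or print the completion script first, to inspect or customize it
supercat --show-completion bash
```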

bibliography

supercat bibliography [OPTIONS]

bibtex

supercat bibtex [OPTIONS]

infer

supercat infer [OPTIONS]
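
A typical invocation might look like the following; the directory, model location, and sizes are illustrative only:

```shell
# Upscale every image in scans/ with a pretrained model,
# writing 1000x1000 results to outputs/ (paths are hypothetical)
supercat infer \
    --item-dir scans/ \
    --pretrained ./model.pt \
    --width 1000 --height 1000 \
    --output-dir outputs/
```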

Options

--gpu, --no-gpu

Whether to use a GPU for processing, if one is available.

Default

True

--pretrained <pretrained>

The location (URL or filepath) of a pretrained model.

--reload, --no-reload

Whether to download the pretrained model again if it is hosted online and is already present locally.

Default

False

--dim <dim>

The dimension of the dataset: 2 or 3.

Default

2

--items <items>
--item-dir <item_dir>

A directory with images to upscale.

--width <width>

The width of the final image/volume.

Default

500

--height <height>

The height of the final image/volume.

--depth <depth>

The depth of the final image/volume.

--start-x <start_x>
--end-x <end_x>
--start-y <start_y>
--end-y <end_y>
--start-z <start_z>
--end-z <end_z>
--return-data, --no-return-data

Default

False

--output-dir <output_dir>

The location of the output directory. If not given, then it uses the directory of the item.

--suffix <suffix>

The file extension for the output file.

Default

lr-finder

supercat lr-finder [OPTIONS]
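
For example, to sweep learning rates over the 2D DeepRockSR dataset and save the resulting plot (the paths are hypothetical):

```shell
# Sweep learning rates from the default 1e-7 to 10 over 100 iterations
# and save the plot for inspection
supercat lr-finder \
    --deeprock ./DeepRockSR \
    --dim 2 \
    --plot-filename lr_finder.png
```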

Options

--plot-filename <plot_filename>
--start-lr <start_lr>

Default

1e-07

--end-lr <end_lr>

Default

10

--iterations <iterations>

Default

100

--fp16, --no-fp16

Whether the learner's floating-point precision should be set to 16-bit.

Default

True

--output-dir <output_dir>

The location of the output directory.

Default

./outputs

--weight-decay <weight_decay>

The amount of weight decay. If None, the default amount of weight decay in fastai is used.

--dim <dim>

The dimension of the dataset: 2 or 3.

Default

2

--deeprock <deeprock>

The path to the DeepRockSR dataset.

--downsample-scale <downsample_scale>

Whether to use the 2x or 4x downsampled images.

Default

X4

Options

X2 | X4

--downsample-method <downsample_method>

Whether to use the default downsampling method (bicubic) or a random kernel (unknown).

Default

unknown

Options

default | unknown

--batch-size <batch_size>

The batch size.

Default

10

--force, --no-force

Whether to force the bicubic upscaling to be recomputed.

Default

False

--max-samples <max_samples>

If set, then the number of input samples for training/validation is truncated at this number.

--include-sand, --no-include-sand

Whether to include the DeepSand-SR dataset.

Default

False

--pretrained <pretrained>

The location (URL or filepath) of a pretrained model.

--initial-features <initial_features>

The number of features after the initial CNN layer. If not set, it is derived from the MACC.

--growth-factor <growth_factor>

The factor by which the number of convolutional filters grows each time the model downscales.

Default

2.0

--kernel-size <kernel_size>

The size of the kernel in the convolutional layers.

Default

3

--stub-kernel-size <stub_kernel_size>

The size of the kernel in the initial stub convolutional layer.

Default

7

--downblock-layers <downblock_layers>

The number of layers to downscale (and upscale) in the UNet.

Default

4

--macc <macc>

The approximate number of multiply-accumulate (MACC) operations in the model per pixel/voxel. Used to set initial_features if it is not provided explicitly.

Default

132000
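
As a rough illustration of how --initial-features, --growth-factor, and --downblock-layers interact, the sketch below assumes the usual UNet convention that the filter count is multiplied by the growth factor at each downscaling block; this is an assumption about the common pattern, not a statement about supercat's internals:

```python
# Sketch only: assumes the filter count is multiplied by growth_factor
# at each of the UNet's downscaling blocks (the usual UNet convention).
def filter_counts(initial_features: int, growth_factor: float, downblock_layers: int) -> list[int]:
    """Return the (assumed) number of filters at each UNet level."""
    counts = [initial_features]
    for _ in range(downblock_layers):
        counts.append(int(counts[-1] * growth_factor))
    return counts

# With hypothetical values: 32 initial features, growth factor 2.0, 4 downblocks
print(filter_counts(32, 2.0, 4))  # [32, 64, 128, 256, 512]
```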

show-batch

supercat show-batch [OPTIONS]
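
For example, to render a small batch of training samples to an HTML summary (paths and sizes are hypothetical):

```shell
# Summarize a batch of 4 samples from the 2D DeepRockSR dataset as HTML
supercat show-batch \
    --deeprock ./DeepRockSR \
    --batch-size 4 \
    --output-path batch.html
```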

Options

--output-path <output_path>

A location to save the HTML which summarizes the batch.

--dim <dim>

The dimension of the dataset: 2 or 3.

Default

2

--deeprock <deeprock>

The path to the DeepRockSR dataset.

--downsample-scale <downsample_scale>

Whether to use the 2x or 4x downsampled images.

Default

X4

Options

X2 | X4

--downsample-method <downsample_method>

Whether to use the default downsampling method (bicubic) or a random kernel (unknown).

Default

unknown

Options

default | unknown

--batch-size <batch_size>

The batch size.

Default

10

--force, --no-force

Whether to force the bicubic upscaling to be recomputed.

Default

False

--max-samples <max_samples>

If set, then the number of input samples for training/validation is truncated at this number.

--include-sand, --no-include-sand

Whether to include the DeepSand-SR dataset.

Default

False

train

supercat train [OPTIONS]
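
A typical training invocation might look like the following; the dataset path and project name are hypothetical:

```shell
# Train for 20 epochs on the 2D DeepRockSR dataset,
# logging the run to 'Weights and Biases'
supercat train \
    --deeprock ./DeepRockSR \
    --dim 2 \
    --epochs 20 \
    --batch-size 10 \
    --wandb --project-name my-supercat-project
```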

Options

--distributed, --no-distributed

Whether the learner is distributed.

Default

False

--fp16, --no-fp16

Whether the learner's floating-point precision should be set to 16-bit.

Default

True

--output-dir <output_dir>

The location of the output directory.

Default

./outputs

--weight-decay <weight_decay>

The amount of weight decay. If None, the default amount of weight decay in fastai is used.

--dim <dim>

The dimension of the dataset: 2 or 3.

Default

2

--deeprock <deeprock>

The path to the DeepRockSR dataset.

--downsample-scale <downsample_scale>

Whether to use the 2x or 4x downsampled images.

Default

X4

Options

X2 | X4

--downsample-method <downsample_method>

Whether to use the default downsampling method (bicubic) or a random kernel (unknown).

Default

unknown

Options

default | unknown

--batch-size <batch_size>

The batch size.

Default

10

--force, --no-force

Whether to force the bicubic upscaling to be recomputed.

Default

False

--max-samples <max_samples>

If set, then the number of input samples for training/validation is truncated at this number.

--include-sand, --no-include-sand

Whether to include the DeepSand-SR dataset.

Default

False

--pretrained <pretrained>

The location (URL or filepath) of a pretrained model.

--initial-features <initial_features>

The number of features after the initial CNN layer. If not set, it is derived from the MACC.

--growth-factor <growth_factor>

The factor by which the number of convolutional filters grows each time the model downscales.

Default

2.0

--kernel-size <kernel_size>

The size of the kernel in the convolutional layers.

Default

3

--stub-kernel-size <stub_kernel_size>

The size of the kernel in the initial stub convolutional layer.

Default

7

--downblock-layers <downblock_layers>

The number of layers to downscale (and upscale) in the UNet.

Default

4

--macc <macc>

The approximate number of multiply-accumulate (MACC) operations in the model per pixel/voxel. Used to set initial_features if it is not provided explicitly.

Default

132000

--epochs <epochs>

The number of epochs.

Default

20

--freeze-epochs <freeze_epochs>

The number of epochs to train while the learner is frozen and only the last layer is trained. Only used if fine_tune is set on the app.

Default

3

--learning-rate <learning_rate>

The base learning rate (when fine tuning) or the max learning rate otherwise.

Default

0.0001

--project-name <project_name>

The name for this project for logging purposes.

--run-name <run_name>

The name for this particular run for logging purposes.

--run-id <run_id>

A unique ID for this particular run for logging purposes.

--notes <notes>

A longer description of the run for logging purposes.

--tag <tag>

A tag for logging purposes. Multiple tags can be added, each introduced with --tag.

--wandb, --no-wandb

Whether to use ‘Weights and Biases’ for logging.

Default

False

--wandb-mode <wandb_mode>

The mode for ‘Weights and Biases’.

Default

online

--wandb-dir <wandb_dir>

The location for ‘Weights and Biases’ output.

--wandb-entity <wandb_entity>

An entity is a username or team name where you’re sending runs.

--wandb-group <wandb_group>

Specify a group to organize individual runs into a larger experiment.

--wandb-job-type <wandb_job_type>

Specify the type of run, which is useful when you’re grouping runs together into larger experiments using group.

--mlflow, --no-mlflow

Whether to use MLflow for logging.

Default

False

tune

supercat tune [OPTIONS]
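
For example, to run a series of tuning trials with the default skopt engine, where (per the --id description) the ID is the file that stores the results; the values below are hypothetical:

```shell
# Attempt 20 training runs with skopt, storing results in tuning-results.pkl
supercat tune \
    --runs 20 \
    --engine skopt \
    --id tuning-results.pkl \
    --deeprock ./DeepRockSR
```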

Options

--runs <runs>

The number of training runs to attempt.

Default

1

--engine <engine>

The engine to use to perform the hyperparameter tuning. Options: wandb, optuna, skopt.

Default

skopt

--id <id>

The ID of this hyperparameter tuning job. If using wandb, then this is the sweep id. If using optuna, then this is the storage. If using skopt, then this is the file to store the results.

Default

--name <name>

An informative name for this hyperparameter tuning job. If empty, then it creates a name from the project name.

Default

--method <method>

The sampling method to use for the hyperparameter tuning. If not set, the engine's default method is used.

Default

--min-iter <min_iter>

The minimum number of iterations if using early termination. If left empty, then early termination is not used.

--seed <seed>

A seed for the random number generator.

--distributed, --no-distributed

Whether the learner is distributed.

Default

False

--fp16, --no-fp16

Whether the learner's floating-point precision should be set to 16-bit.

Default

True

--output-dir <output_dir>

The location of the output directory.

Default

./outputs

--weight-decay <weight_decay>

The amount of weight decay. If None, the default amount of weight decay in fastai is used.

--dim <dim>

The dimension of the dataset: 2 or 3.

Default

2

--deeprock <deeprock>

The path to the DeepRockSR dataset.

--downsample-scale <downsample_scale>

Whether to use the 2x or 4x downsampled images.

Default

X4

Options

X2 | X4

--downsample-method <downsample_method>

Whether to use the default downsampling method (bicubic) or a random kernel (unknown).

Default

unknown

Options

default | unknown

--batch-size <batch_size>

The batch size.

Default

10

--force, --no-force

Whether to force the bicubic upscaling to be recomputed.

Default

False

--max-samples <max_samples>

If set, then the number of input samples for training/validation is truncated at this number.

--include-sand, --no-include-sand

Whether to include the DeepSand-SR dataset.

Default

False

--pretrained <pretrained>

The location (URL or filepath) of a pretrained model.

--initial-features <initial_features>

The number of features after the initial CNN layer. If not set, it is derived from the MACC.

--growth-factor <growth_factor>

The factor by which the number of convolutional filters grows each time the model downscales.

--kernel-size <kernel_size>

The size of the kernel in the convolutional layers.

--stub-kernel-size <stub_kernel_size>

The size of the kernel in the initial stub convolutional layer.

--downblock-layers <downblock_layers>

The number of layers to downscale (and upscale) in the UNet.

--macc <macc>

The approximate number of multiply-accumulate (MACC) operations in the model per pixel/voxel. Used to set initial_features if it is not provided explicitly.

Default

132000

--epochs <epochs>

The number of epochs.

Default

20

--freeze-epochs <freeze_epochs>

The number of epochs to train while the learner is frozen and only the last layer is trained. Only used if fine_tune is set on the app.

Default

3

--learning-rate <learning_rate>

The base learning rate (when fine tuning) or the max learning rate otherwise.

Default

0.0001

--project-name <project_name>

The name for this project for logging purposes.

--run-name <run_name>

The name for this particular run for logging purposes.

--run-id <run_id>

A unique ID for this particular run for logging purposes.

--notes <notes>

A longer description of the run for logging purposes.

--tag <tag>

A tag for logging purposes. Multiple tags can be added, each introduced with --tag.

--wandb, --no-wandb

Whether to use ‘Weights and Biases’ for logging.

Default

False

--wandb-mode <wandb_mode>

The mode for ‘Weights and Biases’.

Default

online

--wandb-dir <wandb_dir>

The location for ‘Weights and Biases’ output.

--wandb-entity <wandb_entity>

An entity is a username or team name where you’re sending runs.

--wandb-group <wandb_group>

Specify a group to organize individual runs into a larger experiment.

--wandb-job-type <wandb_job_type>

Specify the type of run, which is useful when you’re grouping runs together into larger experiments using group.

--mlflow, --no-mlflow

Whether to use MLflow for logging.

Default

False

validate

supercat validate [OPTIONS]
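
For example, to evaluate a trained model on the DeepRockSR validation data (the model path is hypothetical):

```shell
# Validate a trained model against the 2D DeepRockSR dataset
supercat validate \
    --pretrained ./outputs/model.pt \
    --deeprock ./DeepRockSR \
    --dim 2
```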

Options

--gpu, --no-gpu

Whether to use a GPU for processing, if one is available.

Default

True

--pretrained <pretrained>

The location (URL or filepath) of a pretrained model.

--reload, --no-reload

Whether to download the pretrained model again if it is hosted online and is already present locally.

Default

False

--dim <dim>

The dimension of the dataset: 2 or 3.

Default

2

--deeprock <deeprock>

The path to the DeepRockSR dataset.

--downsample-scale <downsample_scale>

Whether to use the 2x or 4x downsampled images.

Default

X4

Options

X2 | X4

--downsample-method <downsample_method>

Whether to use the default downsampling method (bicubic) or a random kernel (unknown).

Default

unknown

Options

default | unknown

--batch-size <batch_size>

The batch size.

Default

10

--force, --no-force

Whether to force the bicubic upscaling to be recomputed.

Default

False

--max-samples <max_samples>

If set, then the number of input samples for training/validation is truncated at this number.

--include-sand, --no-include-sand

Whether to include the DeepSand-SR dataset.

Default

False