AOT Tuning API
tune
aitune.torch.tune
tune(func, dataset, batch_sizes=None, max_num_batches_per_batch_size=None, device=DEFAULT_DEVICE, dry_run=False, disable_external_logging=False, clear_cache=False, ignore_failing_modules=True)
Tune a callable which runs inference on a pipeline or a model.
Parameters:
- func (Callable) – The function to tune.
- dataset (DatasetLike | DataLoaderFactory | Tensor) – The dataset to tune on. It can be a DataLoaderFactory, any dataset/iterable, or even a torch.Tensor; a Tensor is treated as a single-sample dataset.
- batch_sizes (list[int] | None, default: None) – The batch sizes to use for tuning. At least two different batch sizes are required to determine the batch axis.
- max_num_batches_per_batch_size (int | None, default: None) – The maximum number of batches to use for tuning, per batch size.
- device (str | device | None, default: DEFAULT_DEVICE) – The device to use for tuning.
- dry_run (bool, default: False) – If True, only perform a dry run of the tuning.
- disable_external_logging (bool, default: False) – If True, logging from external libraries is suppressed.
- clear_cache (bool, default: False) – If True, the cache is cleared before tuning.
- ignore_failing_modules (bool, default: True) – If True, failing modules are ignored and tuning continues.
Note
The maximum supported batch size is limited by the batch_sizes specified during tuning.
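The requirement of at least two batch sizes can be illustrated in plain Python. This is a hypothetical sketch, not aitune's actual implementation: given output shapes observed at two different batch sizes, the batch axis is the dimension whose size tracks the batch size, which is ambiguous if only one batch size is observed.

```python
def infer_batch_axis(shape_a, batch_a, shape_b, batch_b):
    """Guess the batch axis by comparing shapes seen at two batch sizes.

    Hypothetical helper, not part of aitune: returns the first axis whose
    size matches the batch size in both runs, or None if none does.
    """
    if batch_a == batch_b:
        raise ValueError("need two different batch sizes to determine the batch axis")
    for axis, (da, db) in enumerate(zip(shape_a, shape_b)):
        if da == batch_a and db == batch_b:
            return axis
    return None

# Shapes produced by a model at batch sizes 1 and 2:
print(infer_batch_axis((1, 10), 1, (2, 10), 2))  # -> 0, the leading axis varies
print(infer_batch_axis((10, 1), 1, (10, 2), 2))  # -> 1, the batch axis need not be first
```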
Source code in aitune/torch/tuning.py
save
aitune.torch.save
Save the tuned module to a file.
Parameters:
- module (Module) – The module to save.
- path (str | Path) – The path to save the module to.
- storage (Storage | None, default: None) – The storage to use to save the module. If not provided, a local storage will be used.
If storage is not provided, a default local storage is used. To save the module to a different folder, pass the storage parameter. For example, to save the module to the ckpt folder:
Example
```python
import torch
import aitune.torch as ait
from torch.nn import Linear

model = ait.Module(
    Linear(10, 10),
    "model",
    strategy=ait.FirstWinsStrategy([ait.backend.TorchEagerBackend()]),
)
dataset = torch.randn(10, 10)
ait.tune(model, dataset, batch_sizes=[1, 2], device="cpu")
# 🎯 Tuning module: model (all graphs)

storage = ait.LocalTorchStorage(base_folder="ckpt")
ait.save(model, "tuned_model.ait", storage=storage)
# ✅ Checkpoint compressed and saved to ...
loaded_model = ait.load(model, "tuned_model.ait", storage=storage)
```
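The base_folder behavior of a local storage can be sketched with pathlib. This is an illustrative stand-in, not LocalTorchStorage's real code: the idea is simply that checkpoint names are resolved relative to the storage's base folder.

```python
from pathlib import Path

class SketchLocalStorage:
    """Hypothetical stand-in for a base_folder-style local storage."""

    def __init__(self, base_folder="."):
        self.base_folder = Path(base_folder)

    def resolve(self, name):
        # Checkpoint names are resolved relative to the base folder.
        return self.base_folder / name

storage = SketchLocalStorage(base_folder="ckpt")
print(storage.resolve("tuned_model.ait"))  # e.g. ckpt/tuned_model.ait
```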
Source code in aitune/torch/tuning.py
load
aitune.torch.load
Load the tuned module from a file.
Parameters:
- module (Module) – The module to load.
- path (str | Path) – The path to load the module from.
- storage (Storage | None, default: None) – The storage to use to load the module. If not provided, a local storage will be used.
- device_map (dict[str, device] | None, default: None) – The device map to load modules to. Overrides the devices stored in the state dict.
- disable_external_logging (bool, default: True) – If True, logging from external libraries is suppressed.

If storage is not provided, a default storage is used. See the save function for more details.
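The override semantics of device_map can be illustrated with plain dictionaries. A hypothetical sketch, not aitune's implementation: each device recorded in the state dict is replaced by the matching entry in the map, when one exists.

```python
def apply_device_map(stored_devices, device_map=None):
    """Override stored per-module devices with entries from device_map.

    Hypothetical helper, not part of aitune: stored_devices maps module
    names to the devices recorded in the state dict; device_map entries
    take precedence when a module name is present in both.
    """
    device_map = device_map or {}
    return {name: device_map.get(name, dev) for name, dev in stored_devices.items()}

stored = {"encoder": "cuda:0", "decoder": "cuda:1"}
print(apply_device_map(stored, {"decoder": "cpu"}))
# -> {'encoder': 'cuda:0', 'decoder': 'cpu'}
```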