OneBackendStrategy API
aitune.torch.tune_strategy.one_backend_strategy.OneBackendStrategy
Bases: TuneStrategyFindMaxBatchSizeExtension
Strategy that uses a single provided backend.
Initializes the strategy.
Source code in aitune/torch/tune_strategy/one_backend_strategy.py
check_correctness
Check outputs for NaN/inf.
Parameters:
- backend (Backend) – The backend to check.
- name (str) – The name of the module.
- graph_spec (GraphSpec) – The graph spec of the module.
- data (list[Sample]) – The data to check.
Note
This method should be called by the _tune method to check the correctness of the backend.
You can disable correctness check by calling enable_correctness_check(False).
Raises:
- CorrectnessCheckError – If the backend fails any check.
Source code in aitune/torch/tune_strategy/tune_strategy.py
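The check above scans backend outputs for NaN/inf values. As a minimal sketch of that kind of scan (not the library's implementation, which operates on backend outputs such as tensors; the helper name is hypothetical):

```python
import math

def has_nan_or_inf(outputs):
    """Recursively scan nested lists/tuples of floats for NaN or inf.

    Illustrative helper only; the real check_correctness works on
    backend outputs and raises CorrectnessCheckError on failure.
    """
    if isinstance(outputs, (list, tuple)):
        return any(has_nan_or_inf(o) for o in outputs)
    return isinstance(outputs, float) and (
        math.isnan(outputs) or math.isinf(outputs)
    )

print(has_nan_or_inf([1.0, [2.0, float("nan")]]))  # True
print(has_nan_or_inf([1.0, 2.0]))                  # False
```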
clone
default_profiling_config
staticmethod
default_profiling_config(batching=True, max_batch_size=DEFAULT_MAX_BATCH_SIZE, window_size=DEFAULT_WINDOW_SIZE, stability_percentage=DEFAULT_STABILITY_PERCENTAGE, throughput_cutoff_threshold=DEFAULT_THROUGHPUT_CUTOFF_THRESHOLD, throughput_backoff_limit=DEFAULT_THROUGHPUT_BACKOFF_LIMIT)
Get profiling config for finding max batch size.
Parameters:
- batching (bool, default: True) – Whether to profile with batching.
- max_batch_size (int, default: DEFAULT_MAX_BATCH_SIZE) – Upper bound used to construct the candidate batch sizes; the candidates are 2^n for n in range(max_batch_size.bit_length()).
- window_size (int, default: DEFAULT_WINDOW_SIZE) – Window size for the measuring stop strategy.
- stability_percentage (float, default: DEFAULT_STABILITY_PERCENTAGE) – Stability percentage for the measuring stop strategy.
- throughput_cutoff_threshold (float, default: DEFAULT_THROUGHPUT_CUTOFF_THRESHOLD) – Throughput cutoff threshold for the profiling stop strategy.
- throughput_backoff_limit (int, default: DEFAULT_THROUGHPUT_BACKOFF_LIMIT) – Throughput backoff limit for the profiling stop strategy.
Returns:
- ProfilingConfig – Profiling config for finding max batch size.
Note
The profiling config will use defaults from the highest-throughput strategy.
Source code in aitune/torch/tune_strategy/extension/find_max_batch_size_extension.py
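The candidate batch sizes described for max_batch_size can be computed directly. A small sketch of that ladder (the function name is illustrative, not part of the aitune API):

```python
def candidate_batch_sizes(max_batch_size: int) -> list[int]:
    """Powers-of-two batch sizes, as documented:
    2**n for n in range(max_batch_size.bit_length())."""
    return [2 ** n for n in range(max_batch_size.bit_length())]

print(candidate_batch_sizes(128))  # [1, 2, 4, 8, 16, 32, 64, 128]
print(candidate_batch_sizes(100))  # [1, 2, 4, 8, 16, 32, 64]
```

Note that a non-power-of-two max_batch_size is effectively rounded down: 100 yields candidates only up to 64.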
describe
enable_correctness_check
enable_find_max_batch_size
Enables or disables the max batch size search.
Source code in aitune/torch/tune_strategy/extension/find_max_batch_size_extension.py
find_max_batch_size
Finds the max batch size for the module.
Source code in aitune/torch/tune_strategy/extension/find_max_batch_size_extension.py
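The documented signature of find_max_batch_size is not shown here, so as a generic illustration only: one common way to find a maximum workable batch size is to probe the powers-of-two ladder until a batch fails. The `fits` callback below is a hypothetical stand-in for a profiling run; this is not aitune's algorithm.

```python
from typing import Callable

def find_max_batch_size_sketch(
    fits: Callable[[int], bool], upper_bound: int = 1024
) -> int:
    """Return the largest power-of-two batch size <= upper_bound
    for which fits(batch_size) succeeds. Illustrative sketch only."""
    best = 0
    bs = 1
    while bs <= upper_bound:
        if not fits(bs):  # e.g. out of memory, or a failed profiling run
            break
        best = bs
        bs *= 2
    return best

# Pretend batches above 100 samples fail (e.g. out of memory):
print(find_max_batch_size_sketch(lambda bs: bs <= 100))  # 64
```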
set_find_max_batch_size_default_backend_class
Sets the default backend class for the max batch size search.
Source code in aitune/torch/tune_strategy/extension/find_max_batch_size_extension.py
set_find_max_batch_size_profiling_config
Sets the profiling config for the max batch size search.
Source code in aitune/torch/tune_strategy/extension/find_max_batch_size_extension.py
tune
Tunes the given torch module with the provided graph_spec and data.
Source code in aitune/torch/tune_strategy/tune_strategy.py
tune_dry_run
Performs a tune dry run.