[2.5.0] - 2024-12-20
Added
- Support for Lightning and PyTorch `2.5.0`
- FTS support for PyTorch's composable distributed (e.g. `fully_shard`, `checkpoint`) and Tensor Parallelism (TP) APIs
- Support for Lightning's `ModelParallelStrategy`
- Experimental 'Auto' FSDP2 Plan Configuration feature, allowing application of the `fully_shard` API using module name/pattern-based configuration instead of manually inspecting modules and applying the API in `LightningModule.configure_model` (a sketch of that manual approach follows this list)
- FSDP2 'Auto' Plan Convenience Aliases, simplifying use of both composable and non-composable activation checkpointing APIs
- Flexible orchestration of advanced profiling combining multiple complementary PyTorch profilers with FTS `MemProfiler`
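
For context on the composable distributed support above, the sketch below illustrates the manual workflow the 'Auto' plan configuration aims to replace: inspecting the module tree and applying PyTorch's composable `checkpoint` and `fully_shard` APIs by hand inside `LightningModule.configure_model` while training under Lightning's `ModelParallelStrategy`. It is a minimal sketch, not FTS's own API: the `ToyEncoder`/`LitToyEncoder` classes, submodule layout, and sharding choices are hypothetical, the import locations assume the composable FSDP2/checkpoint paths used by PyTorch 2.5 (`torch.distributed._composable.fsdp` and `torch.distributed._composable`), and the `self.device_mesh["data_parallel"]` access follows Lightning's documented `ModelParallelStrategy` pattern.

```python
# Minimal sketch (assumptions noted above): manually applying PyTorch's composable
# FSDP2 `fully_shard` and `checkpoint` APIs inside `LightningModule.configure_model`
# under Lightning's `ModelParallelStrategy`. Model and sharding plan are hypothetical.
import torch
import torch.nn as nn
import lightning.pytorch as pl
from lightning.pytorch.strategies import ModelParallelStrategy
from torch.distributed._composable import checkpoint        # composable activation checkpointing
from torch.distributed._composable.fsdp import fully_shard  # FSDP2 location in PyTorch 2.5


class ToyEncoder(nn.Module):
    """Hypothetical model, present only to provide named submodules to shard."""

    def __init__(self, dim: int = 128, num_layers: int = 4) -> None:
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True) for _ in range(num_layers)
        )
        self.head = nn.Linear(dim, 10)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return self.head(x.mean(dim=1))


class LitToyEncoder(pl.LightningModule):
    def __init__(self) -> None:
        super().__init__()
        self.model = ToyEncoder()

    def configure_model(self) -> None:
        # Manual plan: walk the module tree, apply activation checkpointing and
        # shard each transformer layer, then shard the root module. The 'Auto'
        # plan configuration replaces this hand-written loop with module
        # name/pattern-based configuration.
        dp_mesh = self.device_mesh["data_parallel"]  # mesh set up by ModelParallelStrategy
        for layer in self.model.layers:
            checkpoint(layer)
            fully_shard(layer, mesh=dp_mesh)
        fully_shard(self.model, mesh=dp_mesh)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.model(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,
        strategy=ModelParallelStrategy(),  # sets up the device mesh used in configure_model
        max_epochs=1,
    )
    # trainer.fit(LitToyEncoder(), train_dataloaders=...)  # dataloader omitted from this sketch
```

Per the entries above, the experimental 'Auto' plan configuration expresses this kind of mapping declaratively by module name or pattern, so the per-module loop in `configure_model` is no longer written by hand, and the convenience aliases cover both the composable and non-composable activation checkpointing APIs.
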
Fixed
- Added logic to more robustly condition depth-aligned checkpoint metadata updates, addressing edge cases where `current_score` precisely equaled the `best_model_score` at multiple different depths. Resolved #15.
Deprecated
- As upstream PyTorch has deprecated official Anaconda channel builds, `finetuning-scheduler` will no longer be releasing conda builds. Installation of FTS via pip (irrespective of the virtual environment used) is the recommended installation approach.
- Removed support for PyTorch `2.1`
Thanks to the following users/contributors for their feedback and/or contributions in this release:
@CyprienRicque