This is a reinforcement learning library designed for use with the stEVE framework, although it can be used with any environment implementing the Farama Gymnasium interface.
This framework implements the Soft Actor-Critic (SAC) algorithm using PyTorch.
The emphasis of this library is on parallelizing worker agents, which perform episodes in the simulation, alongside a single training agent utilizing one GPU. This is especially helpful for computationally intensive simulations.
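As a rough illustration of this worker/trainer split (this is not the stEVE_rl API; the environment name, queue-based transport, and random policy are placeholder assumptions), the sketch below runs several worker processes that collect transitions while a single trainer consumes them:

```python
# Conceptual sketch of parallel workers feeding one trainer; all names
# here are illustrative, not stEVE_rl identifiers.
import multiprocessing as mp

import gymnasium as gym


def worker(queue: mp.Queue, n_episodes: int) -> None:
    """Collect episodes and push transitions to the shared queue."""
    env = gym.make("Pendulum-v1")  # placeholder environment
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = env.action_space.sample()  # a real agent would act here
            next_obs, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            queue.put((obs, action, reward, next_obs, done))
            obs = next_obs
    queue.put(None)  # sentinel: this worker is finished
    env.close()


def trainer(queue: mp.Queue, n_workers: int) -> None:
    """Consume transitions; a real trainer would run SAC updates on a GPU."""
    finished, n_transitions = 0, 0
    while finished < n_workers:
        item = queue.get()
        if item is None:
            finished += 1
        else:
            n_transitions += 1  # store in replay buffer, do gradient step
    print(f"received {n_transitions} transitions")


if __name__ == "__main__":
    n_workers = 4
    queue: mp.Queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(queue, 2)) for _ in range(n_workers)]
    for p in workers:
        p.start()
    trainer(queue, n_workers)
    for p in workers:
        p.join()
```

Because the workers only step the (potentially expensive) simulation and never touch the GPU, adding workers scales episode collection without contending for the trainer's device.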
- Set up stEVE (including Sofa)
- Install the stEVE_rl package:

  ```bash
  python3 -m pip install -e .
  ```

- Test the installation:

  ```bash
  python3 examples/function_check.py
  ```
- Design your neural network from network components (e.g., MLP, CNN, LSTM). These define the hidden layers.
- Bundle them in a network structure (e.g., Q-Network, Gaussian-Policy-Network). This defines the inputs, outputs, and connections between the components.
- Define an Optimizer and Scheduler for the neural networks.
- Bundle all of them in a neural network model (specific to each algorithm).
- Define the algorithm (e.g., SAC).
- Define a Replay Buffer.
- Define an Agent.
- Write your training loop or use one of the runners (see the sketch after this list).
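To make these steps concrete, here is a minimal, generic PyTorch sketch of the workflow. It is an assumption-laden illustration, not the stEVE_rl API: all names and hyperparameters are placeholders, and the update shown is a simplified one-step TD critic update rather than full SAC (which adds twin critics, target networks, a squashed Gaussian policy, and an entropy term).

```python
# Generic PyTorch sketch of the workflow; names are illustrative
# placeholders, not stEVE_rl identifiers.
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn


def mlp(in_dim: int, hidden: list, out_dim: int) -> nn.Sequential:
    """Network component: an MLP defining the hidden layers."""
    layers, dims = [], [in_dim, *hidden]
    for a, b in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers.append(nn.Linear(dims[-1], out_dim))
    return nn.Sequential(*layers)


env = gym.make("Pendulum-v1")  # placeholder environment
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

# Network structures built from components: a Q-network and a Gaussian
# policy head (outputs mean and log-std per action dimension).
q_net = mlp(obs_dim + act_dim, [256, 256], 1)
policy_net = mlp(obs_dim, [256, 256], 2 * act_dim)

# Optimizer and scheduler for the networks.
optimizer = torch.optim.Adam(
    list(q_net.parameters()) + list(policy_net.parameters()), lr=3e-4
)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.9)

# Replay buffer.
buffer = deque(maxlen=100_000)

# Training loop: collect transitions, then update the critic with a
# one-step TD target (full SAC adds policy and entropy losses).
obs, _ = env.reset()
for step in range(500):
    action = env.action_space.sample()  # exploration placeholder
    next_obs, reward, terminated, truncated, _ = env.step(action)
    buffer.append((obs, action, reward, next_obs, float(terminated)))
    obs = env.reset()[0] if terminated or truncated else next_obs

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        o, a, r, o2, d = (
            torch.as_tensor(np.stack(x), dtype=torch.float32)
            for x in zip(*batch)
        )
        with torch.no_grad():
            mean, _log_std = policy_net(o2).chunk(2, dim=-1)
            next_a = torch.tanh(mean)  # policy mean only, for brevity
            target = r.unsqueeze(-1) + 0.99 * (1 - d.unsqueeze(-1)) * q_net(
                torch.cat([o2, next_a], dim=-1)
            )
        loss = nn.functional.mse_loss(q_net(torch.cat([o, a], dim=-1)), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
env.close()
```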
Have a look at the examples folder! More sophisticated usage examples can be found in stEVE_training.