An environment that allows us to use Gymnasium and BeamNG.tech together, so we can gather observations and perform steps asynchronously.
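A minimal sketch of what such an environment can look like, assuming the beamngpy Python package and a running BeamNG.tech instance; the scenario, sensor names, observation layout, and reward are illustrative placeholders, and method names vary between beamngpy versions:

```python
import gymnasium as gym
import numpy as np
from beamngpy import BeamNGpy, Scenario, Vehicle
from beamngpy.sensors import Electrics


class BeamNGEnv(gym.Env):
    """Gymnasium wrapper around a BeamNG.tech scenario."""

    def __init__(self):
        # Action: [steering, throttle].
        self.action_space = gym.spaces.Box(
            low=np.array([-1.0, 0.0]), high=np.array([1.0, 1.0]), dtype=np.float32)
        # Observation: speed plus placeholder slots for further attributes.
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(6,), dtype=np.float32)
        self.bng = BeamNGpy('localhost', 64256)  # host/port are assumptions
        self.bng.open()
        self.vehicle = Vehicle('ego', model='etk800')
        self.vehicle.sensors.attach('electrics', Electrics())
        scenario = Scenario('west_coast_usa', 'sac_training')
        scenario.add_vehicle(self.vehicle, pos=(-717, 101, 118))
        scenario.make(self.bng)
        self.bng.scenario.load(scenario)
        self.bng.scenario.start()

    def _observe(self):
        self.vehicle.sensors.poll()
        speed = self.vehicle.sensors['electrics']['wheelspeed']
        # The remaining slots would hold the other in-game attributes.
        return np.array([speed, 0, 0, 0, 0, 0], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.bng.scenario.restart()
        return self._observe(), {}

    def step(self, action):
        self.vehicle.control(steering=float(action[0]), throttle=float(action[1]))
        self.bng.control.step(3)  # older beamngpy exposes this as bng.step(3)
        obs = self._observe()
        reward = float(obs[0])  # placeholder: reward forward speed
        return obs, reward, False, False, {}
```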
We developed a way to harness the available in-game AI to generate large training datasets in less time than human-driven data collection allows.
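A hedged sketch of how such AI-driven collection can look, building on the environment above; the AI call names follow beamngpy's documented 'span' mode, while the sampling rate, target speed, and storage format are illustrative assumptions:

```python
import pickle


def collect_dataset(env: BeamNGEnv, n_samples: int = 10_000,
                    out_path: str = 'dataset.pkl') -> None:
    """Let the in-game AI drive and record supervision targets."""
    env.reset()
    env.vehicle.ai.set_mode('span')               # AI drives the map freely
    env.vehicle.ai.set_speed(20.0, mode='limit')  # target speed in m/s
    samples = []
    for _ in range(n_samples):
        env.bng.control.step(3)
        env.vehicle.sensors.poll()
        electrics = env.vehicle.sensors['electrics']
        # Record whatever targets the later networks should learn to predict.
        samples.append({
            'wheelspeed': electrics['wheelspeed'],
            'steering': electrics['steering'],
            'throttle': electrics['throttle'],
        })
    with open(out_path, 'wb') as f:
        pickle.dump(samples, f)
```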
Continued experimentation with different approaches, including DQN, segmentation, frame stacking, and more.
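As one example of these experiments, frame stacking can be added with Gymnasium's built-in wrapper (named FrameStack before gymnasium 1.0, FrameStackObservation since); CartPole stands in for our BeamNG environment here:

```python
import gymnasium as gym
from gymnasium.wrappers import FrameStackObservation

env = gym.make('CartPole-v1')
env = FrameStackObservation(env, stack_size=4)  # keep the last 4 observations
obs, info = env.reset()
print(obs.shape)  # (4, 4): four stacked CartPole observations
```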
Our approach uses the Soft Actor-Critic (SAC) architecture to train an agent whose observations consist of several in-game attributes.
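A minimal training sketch, assuming stable-baselines3's SAC implementation stands in for our agent; BeamNGEnv is the environment sketched above, and the hyperparameters are illustrative defaults rather than our tuned values:

```python
from stable_baselines3 import SAC

env = BeamNGEnv()
model = SAC(
    'MlpPolicy',        # vector of in-game attributes as observation
    env,
    learning_rate=3e-4,
    buffer_size=100_000,
    verbose=1,
)
model.learn(total_timesteps=200_000)
model.save('sac_beamng')
```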
We investigated whether adding the current camera image to the observations improves the performance of the Soft Actor-Critic agent.
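One hedged way to express this is a Dict observation space that keeps the attribute vector alongside the image, which stable-baselines3's SAC can consume via its 'MultiInputPolicy'; the image resolution and vector size are assumptions:

```python
import gymnasium as gym
import numpy as np

observation_space = gym.spaces.Dict({
    'image': gym.spaces.Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8),
    'state': gym.spaces.Box(low=-np.inf, high=np.inf, shape=(6,), dtype=np.float32),
})
# model = SAC('MultiInputPolicy', env, ...)  # CNN for the image, MLP for the rest
```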
The network's purpose is to generate part of the inputs the Soft Actor-Critic agent needs (here, the road curvatures) from an image, instead of using the in-game data.
Much like the road-curvature network, this network's purpose is to generate another part of the inputs the Soft Actor-Critic agent needs from an image, instead of using the in-game data.
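One hedged PyTorch sketch that fits both of these prediction networks: a small CNN regressing the required scalars from a camera frame; the layer sizes, input resolution, and number of outputs are assumptions:

```python
import torch
import torch.nn as nn


class ImageToInputs(nn.Module):
    """CNN that predicts scalar agent inputs from a camera frame."""

    def __init__(self, n_outputs: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size from a dummy frame
            n_flat = self.features(torch.zeros(1, 3, 84, 84)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(n_flat, 256), nn.ReLU(),
            nn.Linear(256, n_outputs),  # predicted inputs for the SAC agent
        )

    def forward(self, x):
        return self.head(self.features(x))


curvature_net = ImageToInputs(n_outputs=3)
pred = curvature_net(torch.zeros(1, 3, 84, 84))  # shape: (1, 3)
```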
Reading out parts of the in-game UI to obtain the current speed and acceleration for the Soft Actor-Critic agent.
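A hedged sketch of one way to read such a value, assuming pytesseract for OCR and a fixed screen region for the speed readout; the crop coordinates are placeholders for wherever the UI draws the value:

```python
import pytesseract
from PIL import ImageGrab


def read_speed():
    # Crop the screen region containing the speed readout (assumed coords).
    ui_patch = ImageGrab.grab(bbox=(1700, 980, 1840, 1030))
    text = pytesseract.image_to_string(
        ui_patch, config='--psm 7 -c tessedit_char_whitelist=0123456789')
    return float(text) if text.strip() else None
```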
We are investigating whether retraining the Soft Actor-Critic agent on the new inputs generated by the different networks matches or improves its performance.
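A hedged sketch of the substitution, using a Gymnasium ObservationWrapper that overwrites the ground-truth attributes with the networks' predictions before the agent sees them; the dict layout follows the sketches above, and which indices get replaced is an assumption:

```python
import gymnasium as gym
import torch


class PredictedInputsWrapper(gym.ObservationWrapper):
    """Replace selected ground-truth attributes with network predictions."""

    def __init__(self, env, net, replaced_indices=(1, 2, 3)):
        super().__init__(env)
        self.net = net.eval()
        self.replaced = list(replaced_indices)

    def observation(self, obs):
        # `obs` is assumed to be a dict with 'image' and 'state' entries.
        frame = torch.from_numpy(obs['image']).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            pred = self.net(frame.unsqueeze(0)).squeeze(0).numpy()
        state = obs['state'].copy()
        state[self.replaced] = pred[:len(self.replaced)]  # overwrite ground truth
        return {'image': obs['image'], 'state': state}
```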
Finalizing our end-to-end network by combining all of the previously mentioned parts into a single network that takes an image as input and outputs the controls needed for the game.
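A hedged sketch of the end-to-end idea: one network mapping a camera frame straight to the control outputs; it reuses the CNN sketched above, and the output ranges (steering in [-1, 1], throttle in [0, 1]) are assumptions about the control interface:

```python
import torch
import torch.nn as nn


class EndToEndDriver(nn.Module):
    """Single network from camera frame to game controls."""

    def __init__(self):
        super().__init__()
        self.backbone = ImageToInputs(n_outputs=64)  # CNN sketched earlier
        self.steering = nn.Linear(64, 1)
        self.throttle = nn.Linear(64, 1)

    def forward(self, frame):
        z = torch.relu(self.backbone(frame))
        return torch.cat([
            torch.tanh(self.steering(z)),     # steering in [-1, 1]
            torch.sigmoid(self.throttle(z)),  # throttle in [0, 1]
        ], dim=-1)


controls = EndToEndDriver()(torch.zeros(1, 3, 84, 84))  # shape: (1, 2)
```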