State-Action Trimming
In the previous sections we established how to initialize and activate a Neural Network, and how to train that Network using Deep Reinforcement Learning.
But Hula wouldn't be Hula if it weren't experimental!
Saving every single state and action will bog down the program and consume memory. There are a variety of other solutions to this problem, but Hula has brought its own to the table.
State-Action Trimming merges actions whose states are close together and cuts out actions that led to bad results.
Using the previous example, we can perform State-Action Trimming with the command `Net.simplify(threshold, min_score)`.
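To make the idea concrete, here is a minimal sketch of what a trimming pass could look like. This is an illustrative assumption, not Hula's actual implementation: the `simplify` function, the `(state, action, score)` memory format, and the Euclidean distance metric are all hypothetical stand-ins. Entries scoring below `min_score` are cut, and entries whose state lies within `threshold` of an already-kept state are merged into it (the higher-scoring entry wins).

```python
import math

def distance(a, b):
    """Euclidean distance between two state vectors (assumed metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def simplify(memory, threshold, min_score):
    """Hypothetical State-Action Trimming pass over a list of
    (state, action, score) tuples.

    - Drops entries whose score is below min_score (bad results).
    - Merges entries whose state is within `threshold` of a state
      that is already kept, preferring the higher-scoring entry.
    """
    kept = []
    # Visit high-scoring entries first so merges keep the best action.
    for state, action, score in sorted(memory, key=lambda e: -e[2]):
        if score < min_score:
            continue  # action led to a bad result: cut it
        if any(distance(state, s) <= threshold for s, _, _ in kept):
            continue  # a close state is already kept: merge (discard)
        kept.append((state, action, score))
    return kept

trimmed = simplify(
    [((0.0, 0.0), "left", 1.0),    # kept
     ((0.05, 0.0), "right", 0.5),  # merged into the entry above
     ((5.0, 5.0), "up", -2.0),     # cut: score below min_score
     ((3.0, 3.0), "down", 0.2)],   # kept: far from other states
    threshold=0.1,
    min_score=0.0,
)
```

Both `threshold` (how close two states must be to merge) and `min_score` (the cutoff below which an action is discarded) trade memory savings against fidelity, so they would typically be tuned per task.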