Hello, nice and clear implementation! I want to ask something about the LSTM usage. While gathering experience, the input to the LSTM has dimensions [1, 1, 64], which represents 1 timestep of 1 episode along with the 64 FC features, right?
Also, when training on a batch, you sample a tensor of this shape, e.g. [20, 1, 64], which corresponds to 20 timesteps?
Finally, shouldn't the hidden state have the same dimensions except for the last one, i.e. also match the timestep dimension? What is the best way to handle the LSTM here, or is it just an implementation choice?
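To make the shapes I'm asking about concrete, here is a minimal sketch assuming PyTorch's `nn.LSTM` with its default `(seq_len, batch, features)` layout and a single layer; the variable names and sizes are just illustrative, not the ones from this repo:

```python
# Illustrative sketch of the shapes in question, assuming PyTorch's nn.LSTM
# with the default (seq_len, batch, features) input layout and num_layers=1.
import torch
import torch.nn as nn

feature_dim, hidden_dim = 64, 64
lstm = nn.LSTM(input_size=feature_dim, hidden_size=hidden_dim, num_layers=1)

# Gathering experience: one timestep of one episode.
x_step = torch.randn(1, 1, feature_dim)      # [seq_len=1, batch=1, features=64]
h0 = torch.zeros(1, 1, hidden_dim)           # [num_layers=1, batch=1, hidden=64]
c0 = torch.zeros(1, 1, hidden_dim)
out_step, (h1, c1) = lstm(x_step, (h0, c0))  # out_step: [1, 1, 64]

# Training: a sequence of 20 timesteps from one episode in a single forward pass.
x_seq = torch.randn(20, 1, feature_dim)      # [seq_len=20, batch=1, features=64]
out_seq, (hn, cn) = lstm(x_seq, (h0, c0))    # out_seq: [20, 1, 64]

# The hidden state keeps shape [num_layers, batch, hidden_size] regardless of
# seq_len; the LSTM unrolls over the first (timestep) dimension internally.
print(out_seq.shape, hn.shape)               # torch.Size([20, 1, 64]) torch.Size([1, 1, 64])
```

So, at least under this convention, the hidden state's shape does not grow with the number of timesteps; is that consistent with how you intended the training batches to be consumed?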