
Query about LSTM #50

@npitsillos

Description


Hello, nice and clear implementation! I want to ask something about the LSTM usage. While gathering experience, the input to the LSTM has shape [1, 1, 64], which represents 1 timestep of 1 episode along with the 64 FC features, correct?

Also, when training on a batch, you sample sequences of this shape, e.g. [20, 1, 64], which corresponds to 20 timesteps?

Finally, shouldn't the hidden state have the same dimensions except the last one? Should it correspond to the timestep dimension, for example? What is the best way to handle using an LSTM, or is it just an implementation choice? A sketch of my current understanding follows below.
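To make the shapes concrete, here is a minimal sketch of how I understand it, assuming a single-layer PyTorch `nn.LSTM` with default `batch_first=False` and a hypothetical hidden size of 64 (not the repo's exact code):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 64 FC features in, hidden size 64, single LSTM layer.
feature_dim, hidden_dim, num_layers = 64, 64, 1
lstm = nn.LSTM(feature_dim, hidden_dim, num_layers)  # input is (seq_len, batch, features)

# Acting / gathering experience: one timestep of one episode -> [1, 1, 64]
x_step = torch.randn(1, 1, feature_dim)
h0 = torch.zeros(num_layers, 1, hidden_dim)  # hidden state has no timestep dimension
c0 = torch.zeros(num_layers, 1, hidden_dim)
out_step, (h1, c1) = lstm(x_step, (h0, c0))
print(out_step.shape)  # torch.Size([1, 1, 64])
print(h1.shape)        # torch.Size([1, 1, 64]) -> (num_layers, batch, hidden)

# Training on a sampled sequence of 20 timesteps -> [20, 1, 64]
x_seq = torch.randn(20, 1, feature_dim)
out_seq, (hn, cn) = lstm(x_seq, (h0, c0))
print(out_seq.shape)   # torch.Size([20, 1, 64]) -> one output per timestep
print(hn.shape)        # torch.Size([1, 1, 64]) -> still no seq_len dimension
```

Is this the intended usage, i.e. the hidden state stays (num_layers, batch, hidden) regardless of how many timesteps are fed in?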
