朱 捍華
Revised edition of the paper: Training Agents with Long-range Information in Deep Reinforcement Learning
Section: Related Work, Paragraph 1
Revised paragraph: The Deep Recurrent Q-Network (DRQN) incorporates an LSTM into the network structure of DQN to help the model capture long-range dependencies in time. Although DRQN performs both convolutional and recurrent operations, spatial and temporal dependencies are computed separately, and the relations between some positions are never computed. For example, the relations between the bottom-right positions and the top-left positions of an earlier feature map are never computed, except in the rare situation where a single neuron’s receptive field covers the whole input before the recurrent operations.
Original paragraph: The Deep Recurrent Q-Network (DRQN) incorporates an LSTM into the network structure of DQN to help the model capture dependencies in time. Although DRQN performs both convolutional and recurrent operations, spatial and temporal dependencies are computed separately, and the relations between some positions may be forgotten during training.
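To make the separation concrete, below is a minimal PyTorch sketch of a DRQN-style forward pass. The layer sizes follow the common DQN/DRQN convolutional stack and are illustrative assumptions, not the paper's exact model; the class name DRQN and the input shape are likewise hypothetical. The convolutions process each frame independently in space, and the LSTM then processes one flattened feature vector per time step, so no operation relates distant spatial positions before the recurrent stage unless a neuron's receptive field already covers them.

import torch
import torch.nn as nn

class DRQN(nn.Module):
    # Illustrative DRQN-style network: convolutions handle space per
    # frame, an LSTM handles time afterwards. Layer sizes follow the
    # usual DQN/DRQN setup and are assumptions, not the paper's model.
    def __init__(self, num_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64 * 7 * 7, hidden_size=512,
                            batch_first=True)
        self.q_head = nn.Linear(512, num_actions)

    def forward(self, frames: torch.Tensor):
        # frames: (batch, time, 1, 84, 84)
        b, t = frames.shape[:2]
        # Spatial stage: the convolutions see one frame at a time and
        # never mix information across time steps.
        x = self.conv(frames.view(b * t, 1, 84, 84)).view(b, t, -1)
        # Temporal stage: the LSTM sees one flattened feature vector
        # per step and never mixes positions within a frame.
        x, _ = self.lstm(x)
        return self.q_head(x)  # Q-values: (batch, time, num_actions)

q = DRQN(num_actions=4)(torch.randn(2, 5, 1, 84, 84))
print(q.shape)  # torch.Size([2, 5, 4])

With this standard stack, each neuron in the last convolutional layer has a 36x36 receptive field on an 84x84 input, so the top-left and bottom-right positions of a frame are indeed never related to each other before the LSTM.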
Latest version: GPW2019.pdf