Learning with Hindsight Experience Replay

We have seen how experience replay is used in Deep Q Networks (DQN) to avoid correlated experiences. We also learned about prioritized experience replay, which improves on vanilla experience replay by prioritizing each experience according to its TD (temporal difference) error. Now we will look at a new technique called hindsight experience replay (HER), proposed by OpenAI researchers for dealing with sparse rewards.


Do you remember how you learned to ride a bike? On your first try, you wouldn't have balanced the bike properly. You would have failed several times before balancing correctly. But all those failures don't mean you learned nothing; they taught you how not to balance a bike. Even though you did not achieve your goal (riding the bike), you learned a different goal, that is, how not to balance a bike. This is how we humans learn, right? We learn from failures, and this is the idea behind hindsight experience replay.

Let us consider the same example given in the paper.

Look at the FetchSlide environment shown in the figure below; the goal in this environment is to move the robotic arm and slide a puck across the table to hit the target (the small red circle).

Image source: https://blog.openai.com/ingredients-for-robotics-research/

Hindsight Experience Replay

In the first few trials, the agent definitely could not achieve the goal, so it only received a reward of -1, which told the agent it was doing something wrong and had not attained the goal.

Hindsight Experience Replay
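
To make the sparse reward setting concrete, here is a minimal sketch of such a reward function. It is only illustrative: the function name sparse_reward and the distance_threshold value are assumptions, not taken from the paper, but the idea is the same as in the Fetch environments: the agent gets 0 only when the achieved goal is close enough to the desired goal, and -1 otherwise.

import numpy as np

def sparse_reward(achieved_goal, desired_goal, distance_threshold=0.05):
    # Return 0 when the achieved goal lies within the threshold of the
    # desired goal, and -1 otherwise (the sparse reward setting).
    # Note: distance_threshold is an illustrative value.
    distance = np.linalg.norm(np.array(achieved_goal) - np.array(desired_goal))
    return 0.0 if distance < distance_threshold else -1.0

With a reward like this, almost every transition in a failed episode carries the same -1 reward, which is exactly why learning is so hard without HER.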

But this doesn't mean that the agent has not learned anything. The agent has achieved a different goal; that is, it has learned to move closer to our actual goal. So instead of considering it a failure, we consider it as having achieved a different goal.

So if we repeat this process over several iterations, the agent will learn to achieve our actual goal. HER can be applied to any off-policy algorithm.
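
The following is a minimal sketch of the goal-relabeling idea, using the simple "final" strategy from the HER paper: after an episode ends, each transition is stored twice, once with the original goal and once with the goal replaced by the goal the agent actually reached at the end of the episode, with the reward recomputed for that substituted goal. The names her_relabel_episode, replay_buffer, reward_fn, and the dictionary keys of each transition are illustrative assumptions, not the paper's own code.

def her_relabel_episode(episode, replay_buffer, reward_fn):
    # 'episode' is assumed to be a list of dictionaries, each holding one
    # transition: state, action, reward, next_state, achieved_goal, goal.
    # The goal the agent actually reached by the end of the episode:
    achieved_final_goal = episode[-1]['achieved_goal']

    for transition in episode:
        state = transition['state']
        action = transition['action']
        next_state = transition['next_state']

        # 1. Store the original transition with the real (unattained) goal.
        replay_buffer.append((state, action, transition['reward'],
                              next_state, transition['goal']))

        # 2. Store a hindsight copy: pretend the goal we actually achieved
        #    was the goal all along and recompute the reward for it.
        hindsight_reward = reward_fn(transition['achieved_goal'],
                                     achieved_final_goal)
        replay_buffer.append((state, action, hindsight_reward,
                              next_state, achieved_final_goal))

Any off-policy algorithm, such as DQN or DDPG, can then sample minibatches from this buffer as usual; the hindsight transitions contain informative rewards even when the original goal was never reached.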

The performance of HER was evaluated by comparing DDPG without HER against DDPG with HER, and the results show that DDPG with HER converges more quickly than DDPG without HER.


You can see the performance of HER in this video https://youtu.be/Dz_HuzgMxzo.
