Ms. Pac-Man is considered by many to be one of the hardest games in the ALE benchmark set. Standard deep reinforcement learning methods have so far failed to come close to human-level performance on it. That Ms. Pac-Man is hard is surprising, because it appears to be a reactive game, a class of games on which deep reinforcement learning methods typically dominate humans. In this talk, we argue that Ms. Pac-Man is hard because its optimal value function is very complex, much more so than that of other reactive ALE games. Furthermore, we show that, by using reward decomposition, a complex value function can be decomposed into a set of low-complexity value functions. Using this strategy, we are able to achieve above-human performance on the challenging game of Ms. Pac-Man.
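
To make the reward-decomposition idea concrete, the sketch below shows it in its simplest tabular form: one Q-table is learned per reward component, and actions are chosen greedily with respect to the sum of the component Q-values. This is only an illustrative sketch of the general principle, not the method presented in the talk; the environment interface, component names, and hyperparameters are assumptions.

import numpy as np
from collections import defaultdict

# Illustrative sketch (assumed setup): tabular Q-learning with a decomposed
# reward. Rather than learning one value function for the full reward, we
# learn one low-complexity Q-function per reward component (e.g., one per
# pellet or ghost) and act greedily on their sum.

N_ACTIONS = 4   # assumed action set: up, down, left, right
GAMMA = 0.99
ALPHA = 0.1

# One Q-table per reward component; each maps state -> action values.
q_tables = defaultdict(lambda: defaultdict(lambda: np.zeros(N_ACTIONS)))

def aggregate_q(state):
    """Sum the component Q-values to score each action."""
    return sum(q[state] for q in q_tables.values())

def select_action(state, epsilon=0.05):
    """Epsilon-greedy action selection on the aggregated Q-values."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(aggregate_q(state)))

def update(state, action, next_state, reward_components, done):
    """One Q-learning update per reward component.

    `reward_components` maps a (hypothetical) component name, e.g.
    'pellet_3', to the scalar reward that component emitted on this
    transition.
    """
    for name, r in reward_components.items():
        q = q_tables[name]
        target = r if done else r + GAMMA * np.max(q[next_state])
        q[state][action] += ALPHA * (target - q[state][action])

Because each component depends on only a small part of the game state, each component value function is far simpler than the full value function, which is the property the talk exploits.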