Cool follow-up to our Physical Atari work. People who have only used RL in fast, simple simulated environments vastly underestimate the complexity of the real world; they end up pursuing research goals that cannot be achieved in complex environments (e.g., zero-shot generalization, learning causal models). Physical Atari is still an extremely simple environment, and yet it is enough to expose the limitations of methods developed for learning with fast simulations. Humans and animals learn in environments that are orders of magnitude more complex than Physical Atari. Developing algorithms that can do the same should be the goal if we want abundant intelligence.