Current reinforcement learning (RL) models still have difficulty generalizing to novel but related tasks. They are also often built from deep neural networks whose computations are not designed with human interpretability in mind. Motivated by these issues of generalization and interpretability, this paper implements a version of a previously proposed cognitive model. This model, referred to as a construal model, dynamically constructs an abstract mathematical representation of the task and plans exactly over it. Results from RL experiments in an Atari-style game show that the construal model generalizes significantly better than a neural network baseline when only a very small number of training levels is available. A qualitative analysis of model visualizations suggests that the construal model also behaves in a more interpretable manner. Additionally, this paper proposes several auxiliary losses that guide the training of the construal model with helpful inductive biases. Qualitative analysis provides evidence that these losses, especially the state-prediction error, are important for preventing the construal model from learning an uninterpretable, degenerate task abstraction. Future research can help clarify how the construal model achieves better generalization and interpretability in this RL setting.
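As an illustrative sketch only (not the implementation described in this paper), a state-prediction auxiliary loss of the kind mentioned above could take roughly the following form. The encoder, transition model, layer sizes, and loss weighting below are assumptions introduced purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrualEncoder(nn.Module):
    """Hypothetical encoder mapping raw observations to an abstract state (a "construal")."""
    def __init__(self, obs_dim: int, abstract_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, abstract_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class TransitionModel(nn.Module):
    """Hypothetical model predicting the next abstract state from the current one and an action."""
    def __init__(self, abstract_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(abstract_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, abstract_dim)
        )

    def forward(self, z: torch.Tensor, action_onehot: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, action_onehot], dim=-1))

def state_prediction_loss(encoder, transition, obs, action_onehot, next_obs):
    """Auxiliary loss: error between the predicted and actually encoded next abstract state.

    Penalizing this error encourages the abstraction to retain information that is
    predictive of task dynamics, rather than collapsing to a degenerate representation.
    """
    z = encoder(obs)
    z_next = encoder(next_obs)
    z_next_pred = transition(z, action_onehot)
    # Stop gradients through the target so the encoder cannot trivially shrink all states to zero.
    return F.mse_loss(z_next_pred, z_next.detach())

# Usage sketch: the auxiliary term is added to the ordinary RL objective with a weight
# (aux_weight is an assumed hyperparameter).
# total_loss = policy_loss + aux_weight * state_prediction_loss(encoder, transition,
#                                                               obs, action_onehot, next_obs)
```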