Artificial Intelligence (AI) is becoming ubiquitous, yet the opacity of its decision-making processes and value systems raises concerns. To address this, my senior thesis combines Reinforcement Learning (a subfield of AI) with Prospect Theory (a theory from behavioral psychology) to develop an AI agent with a human-aligned value system. Prospect Theory models the subjective value humans assign to uncertain outcomes and thereby captures human risk preferences. By incorporating Prospect Theory into the Reinforcement Learning (RL) agent's decision-making process, I create an agent that is more closely aligned with human values and also outperforms baseline models. This approach enhances the transparency and interpretability of AI, because the agent's decisions can be explained in terms of human risk preferences. My research demonstrates the potential of Prospect Theory as a framework for designing reliable AI agents.
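
As a rough illustration of the idea, the sketch below shows one way an RL agent's evaluation of risky actions could be filtered through Prospect Theory: rewards are passed through Kahneman and Tversky's value function (concave for gains, loss-averse for losses) and outcome probabilities through their weighting function. This is a minimal sketch under stated assumptions; the function names, the single weighting exponent, and the parameter values (the commonly cited Tversky and Kahneman 1992 estimates) are illustrative, not the thesis's actual implementation.

```python
# Sketch of a prospect-theoretic action evaluation for an RL agent.
# Parameters follow the widely cited Tversky & Kahneman (1992) estimates;
# the thesis's actual formulation and parameters may differ.

ALPHA = 0.88    # curvature for gains (diminishing sensitivity)
BETA = 0.88     # curvature for losses
LAMBDA = 2.25   # loss aversion: losses loom larger than gains
GAMMA = 0.61    # probability-weighting exponent (single value assumed here)

def prospect_value(reward: float, reference: float = 0.0) -> float:
    """Map an objective reward to its subjective value relative to a reference point."""
    x = reward - reference
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

def weight_probability(p: float) -> float:
    """Probability weighting: small probabilities are overweighted,
    moderate-to-large ones underweighted."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect_utility(outcomes, probabilities, reference: float = 0.0) -> float:
    """Subjective utility of a risky action: weighted sum of transformed outcomes.
    An agent could rank candidate actions by this score instead of by expected reward."""
    return sum(weight_probability(p) * prospect_value(o, reference)
               for o, p in zip(outcomes, probabilities))

# Example: a sure reward vs. a gamble with the same expected reward (1.0).
safe = prospect_utility([1.0], [1.0])
risky = prospect_utility([3.0, -1.0], [0.5, 0.5])
print(f"safe={safe:.3f}  risky={risky:.3f}")  # the gamble scores lower
```

In this toy comparison the sure reward and the gamble have identical expected value, but the prospect-theoretic score penalizes the gamble because of loss aversion and diminishing sensitivity, which is the kind of human-like risk preference the thesis aims to build into the agent's decisions.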