Emergency departments in hospitals worldwide employ triage to determine the urgency of care for incoming patients. Each patient is assigned a score that dictates how quickly they are seen, and assigning this score properly is critical for both patient well-being and resource management. Predictive machine learning models are increasingly being developed to automate triage assignment. In the U.S. in particular, research suggests that human assignment of these scores exhibits racial bias, with White patients prioritized over Black and Hispanic patients. My research combines these two areas to examine racial bias in a U.S. predictive triage model. I train an XGBoost model on human-administered triage scores from the MIMIC-IV dataset; the model performs at the scholarly standard for this task, with AUC-ROC in the 0.78-0.91 range. I then compare model performance by racial group for versions of the model trained with and without race as a feature. Although the training data consist mostly of White patients, both the race-conscious and race-blind models show significant differences in error across racial groups. Interestingly, compared to the race-blind model, the race-conscious model over-predicts the resource needs of White patients and under-predicts those of Black patients at statistically significant rates, suggesting that human-administered triage may be biased specifically on the basis of race. This work highlights existing inequities in the triage process and motivates further examination of the role of race in developing these predictive models.
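
To make the comparison concrete, the sketch below shows one plausible way to train race-blind and race-conscious XGBoost triage models and report per-group AUC-ROC; it is not the paper's actual pipeline. The file name, column names (`acuity`, `race`, the vital-sign features), and the binary high-acuity framing are hypothetical stand-ins for whatever the MIMIC-IV-derived feature table actually contains, and the real study may model all five triage levels rather than a binary target.

```python
# Minimal sketch (assumed column names, not the paper's actual pipeline):
# compare a race-blind and a race-conscious XGBoost triage model and
# report AUC-ROC overall and within each racial group.
import pandas as pd
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("triage_features.csv")  # hypothetical preprocessed MIMIC-IV export

# Illustrative binary target: 1 if the human-assigned acuity score is 1-2
# (high acuity), 0 otherwise.
df["high_acuity"] = (df["acuity"] <= 2).astype(int)

features = ["heartrate", "resprate", "o2sat", "sbp", "dbp", "temperature", "pain"]
race_dummies = pd.get_dummies(df["race"], prefix="race")

X_blind = df[features]                               # race-blind feature set
X_aware = pd.concat([X_blind, race_dummies], axis=1)  # race-conscious feature set
y = df["high_acuity"]

for name, X in [("race-blind", X_blind), ("race-conscious", X_aware)]:
    X_tr, X_te, y_tr, y_te, race_tr, race_te = train_test_split(
        X, y, df["race"], test_size=0.2, random_state=0, stratify=y
    )
    model = xgb.XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: overall AUC = {roc_auc_score(y_te, proba):.3f}")
    # Per-group AUC surfaces differential error by race.
    for group in race_te.unique():
        mask = race_te == group
        if y_te[mask].nunique() == 2:  # AUC is defined only when both classes appear
            print(f"  {group}: AUC = {roc_auc_score(y_te[mask], proba[mask]):.3f}")
```

Comparing per-group predictions between the two models in this fashion is what allows the over- and under-prediction patterns described above to be tested for statistical significance.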