Search and rescue missions are time-sensitive and dangerous tasks that are critical after natural and man-made disasters. In this work, we improve robotic systems to increase the success and efficiency of search and rescue, potentially saving lives and reducing injuries after disasters. Specifically, we study how machine learning can be used to learn efficient collaborative behaviors for heterogeneous multi-robot search and rescue, allowing robots with diverse capabilities to collaborate effectively without human intervention. To do this, we use the spatial action maps framework, which stores and communicates information as map-based state representations. These maps serve as input to a machine learning model that outputs a spatial action map as its action representation for decision-making. We test whether more efficient collaborative behaviors can be learned with additional maps that allow robots to communicate information about robot locations and rescue target locations. Without the additional maps, learned collaborative behaviors include indirect communication, where robots with rescue action capabilities avoid other robots, whose presence indirectly communicates that no rescue targets remain in a particular location. With the additional maps, learned collaborative behaviors include direct communication and coordination, where robots without rescue action capabilities can communicate the locations of rescue targets to robots with rescue action capabilities. We demonstrate this both between ground robots with and without rescue action capabilities, and between ground and aerial robots with different search and rescue capabilities. Overall, we find that efficient collaboration for heterogeneous multi-robot search and rescue can be learned by introducing additional maps for direct communication and collaboration.
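To make the state and action representations concrete, the sketch below illustrates the general spatial action maps idea: the state is a stack of 2D map channels (here, hypothetical obstacle, robot-location, and target-location maps corresponding to the "additional maps" discussed above), a model scores every grid cell to produce a spatial action map, and the chosen action is the highest-scoring cell. The map channels, grid size, and hand-written scoring function are illustrative stand-ins for the paper's learned model, not its actual implementation.

```python
import numpy as np

GRID = 8  # illustrative grid size (an assumption, not from the paper)

def build_state(obstacles, other_robots, targets):
    """Stack per-cell 2D maps into a multi-channel state representation."""
    return np.stack([obstacles, other_robots, targets], axis=0)

def score_actions(state):
    """Stand-in for a learned model (e.g. a fully convolutional network):
    outputs one score per grid cell, forming a spatial action map."""
    obstacles, other_robots, targets = state
    # Toy heuristic: prefer cells with known rescue targets, avoid
    # obstacles and cells already covered by other robots.
    return 2.0 * targets - 1.0 * other_robots - 5.0 * obstacles

def select_action(action_map):
    """The action is the highest-scoring cell: a goal location expressed
    in the same spatial frame as the input maps."""
    return np.unravel_index(np.argmax(action_map), action_map.shape)

obstacles = np.zeros((GRID, GRID)); obstacles[0, :] = 1.0
other_robots = np.zeros((GRID, GRID)); other_robots[4, 4] = 1.0
targets = np.zeros((GRID, GRID)); targets[6, 2] = 1.0

state = build_state(obstacles, other_robots, targets)
action_map = score_actions(state)
print(select_action(action_map))  # → (6, 2), the cell with the target
```

Because the action representation shares the spatial frame of the input maps, adding a channel (such as the target-location map above) directly changes what the model can condition its per-cell decisions on, which is the mechanism the additional maps exploit.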