Brain-computer interfaces are among humanity's most ambitious technological goals. They promise to augment our cognitive and physical abilities by letting us control devices and communicate with others more easily. Companies such as Elon Musk's Neuralink and Facebook's CTRL-labs are already working on technology that could one day become products we use day-to-day. However, the prospect of connecting your iPhone to your stream of thoughts is still far away.

In the meantime, one area of research shows a lot of promise and could soon help people suffering from motor speech disorders or neurodegenerative illnesses. Using state-of-the-art deep learning techniques, we can decode brain signals into speech in real time. These advances promise to bring patients' rate of speech transmission closer to that of natural speech (about 150 words per minute), compared with roughly 10 words per minute for the current generation of assistive technologies.

Our research is a big step towards the first devices that can be used with real patients: it relies on data from 45 patients who were each monitored and recorded for an entire week, more than any previous work in this area. In the context of universal brain-computer interfaces, we hope that our results will provide an important step towards not only better assistive technologies, but also further research in the field.
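To give a flavour of what "decoding brain signals into speech" means computationally, here is a minimal sketch of one common framing: mapping a window of neural features (for example, band power per electrode) to a probability distribution over phoneme classes. All names, shapes, and parameters below are illustrative assumptions, not the actual model used in this work; real systems train deep recurrent or convolutional networks on recorded speech.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative dimensions -- not from the actual study.
N_ELECTRODES = 64   # hypothetical electrode count
N_TIMESTEPS = 20    # hypothetical feature-window length
N_PHONEMES = 40     # rough size of an English phoneme inventory

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decode_window(features, weights, bias):
    """Flatten one neural-feature window and return phoneme probabilities."""
    logits = features.ravel() @ weights + bias
    return softmax(logits)

# Randomly initialised stand-ins for parameters a real model would learn.
W = rng.normal(scale=0.01, size=(N_ELECTRODES * N_TIMESTEPS, N_PHONEMES))
b = np.zeros(N_PHONEMES)

window = rng.normal(size=(N_ELECTRODES, N_TIMESTEPS))
probs = decode_window(window, W, b)
```

A stream of such per-window phoneme distributions would then be turned into words by a language model or sequence decoder; the speed of that whole pipeline is what determines the words-per-minute rates mentioned above.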