Caching is crucial to the end-to-end performance of distributed systems. By storing commonly requested content so that it can be served quickly, a cache improves request latency and reduces load on backend servers. Caching systems pursue three common objectives: object miss ratio (OMR), byte miss ratio (BMR), and miss ratio (MR) for unit-sized object caching; different systems prioritize different objectives. Learning Relaxed Belady (LRB) is an existing machine learning (ML) caching algorithm that achieves substantially better byte miss ratios than state-of-the-art approaches. In this project, we adapt LRB to the other two objectives: object miss ratio and unit-sized object caching. OMR is crucial to a wide range of caches, including CDN in-memory caches and key-value caches for large storage systems, and decreasing OMR translates directly into improved application performance. We apply a novel sampling technique, byte sampling, that allows LRB to outperform other state-of-the-art caching methods on OMR. LRB also outperforms other policies on unit-sized traces, demonstrating the broad applicability of the algorithm. We evaluate LRB on five production traces and demonstrate robust performance across varying workloads. To our knowledge, LRB enhanced with byte sampling is the only algorithm that consistently outperforms state-of-the-art policies on all three caching objectives. By unifying these objectives under LRB, we simplify the path through which further advancements can be made.
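
As an illustrative sketch only: one plausible reading of "byte sampling" is drawing training examples with probability proportional to object size, i.e., sampling uniformly over bytes rather than over requests (the abstract does not define the technique, so the weighting scheme, the `requests` record format, and the `byte_sample` helper below are assumptions, not the project's actual implementation).

```python
import random


def byte_sample(requests, k, rng=random):
    """Draw k training examples from a request log, weighting each
    request by its object size.

    ASSUMPTION: this sketch interprets "byte sampling" as size-weighted
    sampling (uniform over bytes); the report defines the real method.
    """
    # Each request is assumed to carry an object size in bytes.
    sizes = [r["size"] for r in requests]
    # random.choices performs weighted sampling with replacement.
    return rng.choices(requests, weights=sizes, k=k)


# Hypothetical usage: a 99 KB object is drawn ~99x more often than a 1 KB one.
log = [{"id": "a", "size": 1_000}, {"id": "b", "size": 99_000}]
sample = byte_sample(log, 10_000, rng=random.Random(0))
```

Under this reading, large objects dominate the training set in proportion to the bytes they occupy, which changes which eviction mistakes the learned model is penalized for relative to uniform per-request sampling.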