Learning 3D representations of objects is a task at the heart of computer vision, robotic manipulation, scene understanding, medicine, and content generation. Implicit neural representations, which employ neural networks to approximate 3D geometry, have become popular because they achieve lower memory requirements and faster training and inference than conventional explicit representations such as voxels, point clouds, and meshes. Existing methods can learn a handful of objects in extensive detail with fast inference, but they do not generalize to unseen classes. In this work, we propose a novel two-stage meta-learning approach. We evaluate our method on large-scale synthetic and real-world datasets. Extensive experimental results and analysis validate that our method outperforms state-of-the-art methods and generalizes to hundreds of unseen classes.