
A Guide to NeurIPS 2023 — 7 Research Areas and 10 Spotlight Papers to Read

Updated: Dec 9, 2023

NeurIPS is back in New Orleans, with record-breaking numbers across the board: 3584 main papers, 58 workshops, 14 tutorials, and 8 keynote talks. Featuring new developments in fields such as Language Modeling, Reinforcement Learning, Machine Learning Optimization, Representation Learning, and Diffusion Models, this year's edition is once again packed with world-class AI research. At Zeta Alpha, we have curated a short guide to help you navigate the conference, highlighting some of the papers and areas that grabbed our attention.

Image by Zeta Alpha.

Before we start: we will be covering these highlights in this month's installment of our Trends in AI webinar, so join us live this Thursday, December 14th, either in person at LAB42 or online on Zoom, to discuss recent developments in the field, along with a highlight of our favorites from NeurIPS and other papers from the past month!


To visualize the content that will be presented at the conference, we created a semantic map of the published papers, categorizing them into clusters based on their similarity using the VOSviewer tool and a bit of our secret sauce: Language Models that automatically label the cluster centroids.


Pro tip: view the graph below in full screen (or in a new tab) to navigate it freely, and discover the papers of interest in each cluster at your own pace!


As this high-level overview can still be overwhelming, we dove deeper into each research area presented in the visualization above and selected some of the spotlight papers that are worth reading in more detail. We have organized our top-10 spotlight papers in a Zeta Alpha tag for your browsing convenience, and you can also find all 77 oral papers here.


You will notice that most of these papers are already widely known, with some having received dozens of citations and inspired follow-up work, as they have been public on arXiv for a few months now. Even so, this overview doubles as a round-up of some of the most influential works of 2023, with publication at this prestigious venue serving as further proof of their impact.


Throughout this guide, you can use Zeta Alpha to browse more of the papers presented at NeurIPS this year: click the "🔎 Find Similar" button next to any featured title to discover similar work on our platform using neural search.


Before we dive in, we wanted to give a shoutout to the excellent work that came out of Amsterdam, our home base. You can explore the contributions from researchers affiliated with the University of Amsterdam to this year's conference directly in Zeta Alpha!


 

⚠️ Disclaimer ⚠️ Of course, this cannot be a fully comprehensive guide, given the sheer number of papers we're working with here, but we hope this is a useful entry point to the conference.


1. Large Language Models


Spotlight papers:


💡 Researchers from the University of Washington have developed QLoRA, an efficient finetuning approach that backpropagates through a frozen, 4-bit-quantized language model into Low-Rank Adapters (LoRA), making it possible to finetune a 65B-parameter model on a single 48GB GPU while matching full 16-bit finetuning performance and achieving state-of-the-art results on the Vicuna benchmark.
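
As a concrete illustration, here is a minimal sketch of QLoRA-style finetuning on the Hugging Face stack (transformers, peft, bitsandbytes); the checkpoint name and adapter hyperparameters are placeholder choices of ours, and exact argument names can vary across library versions. The authors' reference implementation lives in the artidoro/qlora repository.

```python
# QLoRA-style finetuning sketch: load a base model in 4-bit NF4 precision and
# attach trainable low-rank adapters. Requires: transformers, peft, bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "huggyllama/llama-7b"  # placeholder: any causal LM checkpoint

# 4-bit NormalFloat quantization with double quantization, as in the paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# The frozen 4-bit weights stay fixed; only the LoRA adapters are trained.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```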


💡 This paper presents the "Tree of Thoughts" (ToT) framework, which enhances language model inference with deliberate problem solving: the model generates coherent units of text ("thoughts") as intermediate steps and evaluates them itself while exploring a search tree, yielding markedly better results on tasks that require planning or search.
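
A toy sketch of the search loop, assuming a breadth-first/beam-style strategy (one of those explored in the paper); `propose` and `score` are hypothetical stand-ins for the LLM calls that generate candidate thoughts and rate partial solutions:

```python
# Toy sketch of Tree-of-Thoughts-style breadth-first search. The real method
# prompts an LLM both to propose candidate "thoughts" and to score them; here
# `propose` and `score` are stand-ins you would back with model calls.
from typing import Callable, List

def tree_of_thoughts(root: str,
                     propose: Callable[[str], List[str]],
                     score: Callable[[str], float],
                     beam_width: int = 3,
                     depth: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        # Expand every partial solution, then keep only the best candidates.
        candidates = [c for state in frontier for c in propose(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)

# Toy usage: "thoughts" append digits; the evaluator prefers larger digit sums.
best = tree_of_thoughts(
    root="",
    propose=lambda s: [s + d for d in "123"],
    score=lambda s: sum(map(int, s)) if s else 0.0,
)
print(best)  # "33" after two levels of expansion
```

In the paper, both callables are backed by prompts to the same language model; the toy evaluator here just prefers strings with larger digit sums.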


💡 This work questions the notion of emergent abilities in large language models, arguing that these apparent skills are largely artifacts of metric choice rather than inherent model traits. By switching between discontinuous and continuous evaluation metrics, the authors show how observed abilities can vanish or appear, emphasizing the crucial role of metric selection and the need for rigorous controls when assessing LLM capabilities.
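
The core argument can be reproduced with a few lines of arithmetic (illustrative numbers of our own, not the paper's data): even if per-token accuracy improves perfectly smoothly with scale, an all-or-nothing exact-match metric over a 30-token answer still appears to jump "emergently":

```python
# Illustration of the paper's core argument (not its data): a smooth per-token
# improvement looks "emergent" under a discontinuous all-or-nothing metric.
import math

for log_params in range(6, 13):  # hypothetical models from 1e6 to 1e12 params
    p = 1 - 0.9 * math.exp(-0.5 * (log_params - 6))  # smooth per-token accuracy (toy)
    exact_match = p ** 30  # probability that all 30 answer tokens are correct
    print(f"1e{log_params} params: per-token={p:.3f}  exact-match={exact_match:.3f}")
```

Per-token accuracy glides smoothly from 0.10 to 0.96, yet exact-match sits near zero until about 1e10 parameters and then takes off, which is exactly the kind of curve usually presented as evidence of emergence.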




2. Reinforcement Learning


Spotlight paper:


💡 Direct Preference Optimization (DPO), a novel algorithm for aligning language models with human preferences, directly optimizes the policy through a binary cross-entropy objective on preference pairs, removing the need for an explicit reward model and matching or outperforming RLHF methods like PPO while being computationally efficient and simple to implement.
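
The objective itself fits in a few lines. Here is a minimal sketch in PyTorch; the tensor names are ours, and the log-probabilities are assumed to be summed over the response tokens:

```python
# Minimal sketch of the DPO objective: binary cross-entropy over the policy's
# preference for the chosen response y_w over the rejected one y_l, regularized
# by a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l,   # log pi_theta(y_w|x), log pi_theta(y_l|x)
             ref_logp_w, ref_logp_l,         # same quantities under the frozen reference
             beta: float = 0.1) -> torch.Tensor:
    # Implicit reward of each response: beta * log-ratio against the reference.
    logits = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))
    return -F.logsigmoid(logits).mean()

# Toy usage with a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-10.0, -8.0]), torch.tensor([-12.0, -9.0]),
                torch.tensor([-11.0, -8.5]), torch.tensor([-11.5, -8.8]))
print(loss)
```

In practice, the log-probabilities come from two forward passes (policy and frozen reference) over the same preference pairs; there is no reward model and no on-policy sampling loop, which is where the simplicity gains come from.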




3. Neural Network Optimization


Spotlight papers:


💡 Researchers from Princeton University have developed MeZO, a memory-efficient zeroth-order optimizer that fine-tunes large language models using forward passes only. It achieves performance comparable to backpropagation at a fraction of the memory cost, and it remains compatible with non-differentiable objectives and with parameter-efficient tuning techniques such as LoRA and prefix tuning.
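
The key trick is a classical two-point (SPSA-style) gradient estimate made memory-efficient: the random perturbation z is never stored, only its seed, and is regenerated on demand. Below is our own simplified sketch of one update step; `mezo_step` and its arguments are our naming, not the authors' API:

```python
# Sketch of MeZO's core trick: estimate the gradient from two forward passes and
# regenerate the perturbation z from a stored seed instead of keeping it in memory.
import random
import torch

def mezo_step(params, closure, lr=1e-3, eps=1e-3):
    """One MeZO update. `params`: list of tensors (updated in place);
    `closure()`: returns the scalar loss at the current parameters."""
    seed = random.randrange(2**31)

    def perturb(scale):
        torch.manual_seed(seed)            # regenerate the exact same z every time
        for p in params:
            p.add_(scale * eps * torch.randn_like(p))

    perturb(+1); loss_plus = closure()     # L(theta + eps*z)
    perturb(-2); loss_minus = closure()    # L(theta - eps*z)
    perturb(+1)                            # restore theta
    grad_scale = (loss_plus - loss_minus) / (2 * eps)

    torch.manual_seed(seed)                # same z once more for the update
    for p in params:
        p.sub_(lr * grad_scale * torch.randn_like(p))  # theta -= lr * g_hat * z

# Toy usage: minimize ||w||^2 with forward passes only.
w = torch.randn(8)
print("loss before:", float((w ** 2).sum()))
for _ in range(500):
    mezo_step([w], lambda: float((w ** 2).sum()), lr=0.05)
print("loss after:", float((w ** 2).sum()))
```

Because only the seed is kept, memory usage stays at inference level regardless of model size, which is what lets MeZO fit fine-tuning jobs that backpropagation cannot.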


💡 This paper explores the scaling of language models in data-constrained regimes, finding that training for up to about four epochs on repeated data yields negligible changes in loss compared to training on unique data, after which the value of repetition decays rapidly. The authors propose a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters, and they discuss strategies for mitigating data scarcity, such as augmenting the training set with code or relaxing commonly used quality filters.
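
To illustrate the shape of that decaying value (with made-up constants and a simplified saturating form, not the paper's fitted scaling law), one can model the effective number of training tokens as a function of repetitions:

```python
# Illustrative only: repeated epochs contribute exponentially diminishing value.
# The constants and the exact functional form here are ours, not the paper's fit.
import math

def effective_tokens(unique_tokens: float, epochs: float, r_star: float = 15.0) -> float:
    """Unique tokens count fully; each repetition adds exponentially less."""
    repetitions = epochs - 1
    return unique_tokens * (1 + r_star * (1 - math.exp(-repetitions / r_star)))

for epochs in (1, 2, 4, 16, 64):
    print(f"{epochs:>2} epochs -> {effective_tokens(1.0, epochs):.2f}x unique data")
```

Under these toy constants, two epochs are worth almost exactly twice the unique data and four epochs recover roughly 93% of their nominal value, but by 64 epochs the curve has nearly saturated, so extra repetition buys almost nothing.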




4. Equivariant Representation Learning


Spotlight paper:


💡 Clifford Group Equivariant Neural Networks, utilizing Clifford algebra and the Clifford group, present a novel method for constructing O(n)- and E(n)-equivariant models, achieving state-of-the-art performance in tasks like n-body simulations, convex hull estimation, and top tagging in particle physics.
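
The Clifford-algebra construction itself doesn't fit in a snippet, but the property it guarantees is easy to state and test numerically. Here is a toy O(n)-equivariant layer (ours, not the paper's architecture) built from an invariant Gram matrix, together with a numerical equivariance check:

```python
# Toy numerical check of what O(n)-equivariance means: a layer built only from
# invariants (pairwise inner products) applied to the input vectors commutes
# with any orthogonal transformation Q.
import numpy as np

def toy_equivariant_layer(x: np.ndarray) -> np.ndarray:
    """x: (num_points, n) array of n-dimensional vectors."""
    gram = x @ x.T             # pairwise inner products: O(n)-invariant
    return np.tanh(gram) @ x   # mixing vectors with invariant weights: equivariant

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix

# Equivariance: f(xQ) == f(x)Q up to floating-point error.
print(np.allclose(toy_equivariant_layer(x @ Q), toy_equivariant_layer(x) @ Q))
```

The paper's contribution is a systematic way to build such layers for arbitrary Clifford-algebra elements (scalars, vectors, bivectors, and so on), rather than hand-crafting invariants as we do here.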




5. Adversarial Attacks & Generalization


Spotlight paper:


💡 Despite safety training, large language models remain susceptible to jailbreak attacks. This paper traces the failures to two root causes, competing objectives and mismatched generalization, and shows that the resulting vulnerabilities persist in even the most capable models. The authors argue that scaling alone cannot eliminate them and that safety mechanisms must match the sophistication of the model itself ("safety-capability parity").




6. Diffusion Models


Spotlight paper:


💡 This paper reveals the close relationship between diffusion model objectives and the Evidence Lower Bound (ELBO), showing that diffusion objectives can be understood as weighted integrals of ELBOs and proposing new monotonic weightings that achieve state-of-the-art scores on ImageNet.
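
In rough notation (our paraphrase of the result, not a verbatim statement from the paper), the claim is that a weighted diffusion loss can be rewritten as a weighted integral, over noise levels, of the ELBO of noise-perturbed data:

```latex
% Paraphrased identity (notation approximate): the weighted diffusion loss as a
% weighted integral over noise levels t of per-level ELBOs of the noised data z_t.
\mathcal{L}_w(\mathbf{x})
  \;=\; \int_0^1 \tilde{w}(t)\;
        \mathbb{E}_{q(\mathbf{z}_t \mid \mathbf{x})}
        \big[\, -\mathrm{ELBO}_t(\mathbf{z}_t) \,\big]\, \mathrm{d}t
  \;+\; \mathrm{const}
```

When the weighting is monotonic, the induced weights are non-negative and can be normalized into a distribution over noise levels, which is what licenses the reading of these objectives as the ELBO under Gaussian-noise data augmentation.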




7. 3D Object Representations


Spotlight paper:


💡 This paper introduces Rotating Features, a novel approach to object discovery that generalizes complex-valued features to higher dimensions, allowing a greater number of objects to be represented concurrently. The approach scales from simple toy datasets to complex real-world data, offering a new paradigm for tackling the binding problem.
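
One intuition behind the "more objects" claim, illustrated with a back-of-the-envelope experiment of our own (not from the paper): object identity lives in the orientation of a feature vector, and a high-dimensional sphere holds far more well-separated orientations than the circle available to complex-valued features.

```python
# Conceptual illustration (not the paper's model): orientations in higher
# dimensions keep many more "object" directions well separated than the
# 2D orientations of complex-valued features.
import numpy as np

def min_separation(num_objects: int, dim: int, trials: int = 200, seed: int = 0) -> float:
    """Best minimal pairwise angle (radians) found over random direction sets."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        v = rng.normal(size=(num_objects, dim))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        cos = np.clip(v @ v.T, -1, 1)
        np.fill_diagonal(cos, -1)          # ignore self-similarity
        best = max(best, np.arccos(cos.max()))
    return best

for dim in (2, 6, 12):
    print(f"n={dim}: min angle for 10 objects ~ {min_separation(10, dim):.2f} rad")
```

With ten directions, random search only finds cramped configurations on the circle (n=2), while in higher dimensions the same number of orientations stay close to orthogonal, leaving room to bind many objects at once.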




 

Did we miss anything major? Let us know what you think on X (formerly Twitter): @ZetaVector.
