Visualizing a Semantic Map of ICLR 2021
How do you quickly navigate a very large research conference with hundreds or even thousands of papers? Ever since Lauren Doyle's 1961 (!) article "Semantic Road Maps for Literature Searchers", building truly semantic navigation interfaces for researchers has been on people's minds. Are we there yet? Not fully, but we are making interesting steps in that direction.
Fig 1. An interactive semantic map of the ICLR 2021 papers (c) Zeta Alpha

As a sponsor of ICLR 2021 (International Conference on Learning Representations), one of the premier conferences on Deep Learning, we took the opportunity to use our work on neural document embeddings to produce a semantic map of the contents of the conference. With 860 accepted papers, discovering all the papers that could be relevant to your research project in just a few days is no small task. With the help of our friends at Leiden University's CWTS, we produced a very cool visualization that groups papers by similarity of content while allowing interactive navigation of the links and discovery of the actual papers. The pictures above and below give you a small impression of how it looks. Click on this link to play with the interactive version. We think it is a very nice experience, and we hope it helps you discover relevant work at ICLR 2021 that you would otherwise have missed. Enjoy!
And here is a closer view of our favorites in the NLP area of the map:
Fig 2. Zoom view of the NLP area in the semantic map of the ICLR 2021 papers (c) Zeta Alpha

While this is still an experimental first version, with a few glitches here and there, we are looking forward to integrating it into the Zeta Alpha AI Research Navigator very soon, to visualize not just ICLR 2021 but any document collection you choose to explore!

Just a quick note on how it works: we encode all ICLR papers as vector embeddings, and then we do a simple pairwise similarity calculation. For each paper we create a maximum of 20 links, cutting off the ones that fall below a similarity threshold. With a bit of tuning of the similarity threshold and the clustering parameters, the visualization above is created. This is a very different approach from the more common visualization of citation links, which is usually less 'semantic' in nature, but the two approaches can probably complement each other nicely in the future.

For me personally, this is a very nice milestone in a journey that began in 1996 with my paper "Neural navigation interfaces for Information Retrieval: Are they more than an appealing idea?". It was worth the time to get here, and I'm curious to hear what you think...
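For readers curious about the link-building step, here is a minimal sketch in NumPy of the idea described above: given an embedding vector per paper, compute pairwise cosine similarities, keep at most 20 links per paper, and drop any link below a similarity threshold. The `max_links=20` cap comes from the post; the embedding model and the exact threshold value are not specified there, so `threshold` here is an illustrative placeholder, as is the function name.

```python
import numpy as np

def build_similarity_links(embeddings, max_links=20, threshold=0.5):
    """Sketch of the link-building step: for each paper (row of
    `embeddings`), keep at most `max_links` most-similar neighbours
    and discard pairs whose cosine similarity is below `threshold`.
    Returns a list of (source, target, similarity) tuples."""
    # Normalise rows so a plain dot product equals cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -1.0)  # a paper never links to itself

    links = []
    for i in range(sim.shape[0]):
        # Indices of the most similar papers to paper i, best first.
        top = np.argsort(sim[i])[::-1][:max_links]
        for j in top:
            if sim[i, j] >= threshold:
                links.append((i, int(j), float(sim[i, j])))
    return links
```

The resulting link list can then be fed to a graph-layout or clustering tool to produce a map like the one above; tightening `threshold` sparsifies the graph, which is essentially the tuning knob mentioned in the note on how it works.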