Jakub Zavrel

"Transformers at Work" recordings, slides and photo's

“Transformers at Work” in person at Science Park Amsterdam... Together we made it happen!


And what a day it was! With eight fascinating speakers and close to one hundred in-person participants, it was a much-needed, refreshing, and inspiring start to the Amsterdam AI conference season.


Even the weather was smiling on the Startup Village last Friday, and instead of the promised rain and thunder, we could keep the social part going late into the evening. We hope you enjoyed participating in the workshop as much as we did organizing it.


Yes, in-person discussions and talks are simply in another league compared to their online versions, but whether you were there from start to finish, had to leave early, were watching our improvised live stream, or weren’t able to make it at all (better luck next year!), we have made some digital memorabilia available for you:


A recap of the workshop


We started off with an introduction from Jakub Zavrel and Sergi Castella, where we learned what brought us to the workshop: the introduction of Transformers in 2017, followed by a couple of years of rapid progress in neural retrieval since we last met in January 2020.


The first speaker to take the stage was our Research Lead at Zeta Alpha, Marzieh Fadaee, who told us all about the basics of dense retrieval and how we're building our semantic search at Zeta Alpha with the help of Transformers. She also explained how, a few months ago, we hit a special milestone: our Transformer-based search beat keyword search in our benchmarks, and it has only gotten better since then.
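
For readers new to the idea, here is a minimal sketch of dense retrieval using the open-source sentence-transformers library. The model name, toy corpus, and query are illustrative only; this is not Zeta Alpha's production setup.

```python
# Minimal dense retrieval sketch (illustrative, not Zeta Alpha's system):
# encode the query and the documents into the same vector space with a
# Transformer dual-encoder, then rank documents by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")  # any dual-encoder works

corpus = [
    "Transformers were introduced in the 2017 paper 'Attention Is All You Need'.",
    "BM25 is a classic keyword-based ranking function.",
    "Dense retrieval encodes text as vectors and ranks by similarity.",
]
doc_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("how does neural search work?", convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

# Print documents from most to least relevant.
for score, doc in sorted(zip(scores.tolist(), corpus), reverse=True):
    print(f"{score:.3f}  {doc}")
```

Unlike keyword search, the top hit here can match on meaning even when the query shares no words with the document.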


Moving on to the next talk, Sebastian Hofstätter told us all about training Transformers for dense retrieval in a more cost-effective way with distillation and Topic Aware Sampling, a technique proposed by him and his colleagues. A small sketch of the distillation objective follows below.
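
For the curious, here is a minimal sketch of the Margin-MSE distillation objective from that line of work: a cheap student retriever is trained to reproduce an expensive cross-encoder teacher's score margin between a positive and a negative passage. The tensors below are toy values standing in for real model scores, and the batch-composition part (Topic Aware Sampling) is not shown.

```python
# Sketch of Margin-MSE knowledge distillation for dense retrieval
# (after Hofstätter et al.); names and values here are illustrative.
import torch
import torch.nn.functional as F

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    """Match the student's positive-negative score margin to the teacher's.

    All arguments are 1-D tensors of relevance scores, one entry per
    (query, passage) pair in the batch. Teacher scores are typically
    precomputed offline with a cross-encoder.
    """
    return F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)

# Toy batch of 3 queries with precomputed teacher scores.
teacher_pos = torch.tensor([9.1, 7.4, 8.8])
teacher_neg = torch.tensor([2.3, 4.0, 1.5])
student_pos = torch.tensor([0.8, 0.6, 0.9], requires_grad=True)
student_neg = torch.tensor([0.2, 0.5, 0.1], requires_grad=True)

loss = margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg)
loss.backward()  # gradients flow back into the student's scores
print(loss.item())
```

The appeal of matching margins rather than absolute scores is that the student only has to get the ranking gap right, which is a much easier target for a small model.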



Nils Reimers from Hugging Face wrapped up the first part of the workshop by reminding us of the pitfalls of current Information Retrieval (IR) benchmarks, and of the area where neural retrieval still lags behind traditional search techniques: zero-shot, out-of-domain retrieval.


After a much-needed coffee break, Andrew Yates presented ways forward for neural retrieval that rely less heavily on scaling up models and data. He also pointed out how we can learn about retrieval itself by looking at the kinds of signals Transformers learn.


With Daniel Daza, we took a sidestep into Knowledge Graphs and how Transformers can extract facts from text to populate them, even for unseen entities, making their curation and maintenance easier.


Rodrigo Nogueira, joining remotely from Campinas, Brazil, took us on a journey of ups and downs. First, we learned how a simple Transformer-based pipeline climbed to the top of IR benchmarks; then, how despite all their success, Transformers fail at learning simple arithmetic tasks; and finally, how Google query trends can be predicted by looking only at the document corpus, which reveals interesting insights into the ML community's interests.


After our second break, Thomas Wolf from Hugging Face told us about BigScience, a collaborative effort to train a huge language model in an open and inclusive way, inspired by experimental physicists, who come together to pool resources when they need to build large-scale experiments.


To close the talks, Mostafa Dehghani from Google told us the backstory of how Vision Transformers came to be, why they work surprisingly well, and what the way forward for them might look like.


After some final short remarks from the speakers on the past two years of NLP, IR, and ML research, we moved on to the much-anticipated food and drinks from the catering team at Kok op Stelten. The cherry on top to close the event was the live music from the amazingly funky Valvetronic Brassband!



So that's it for this edition. We'll soon see where Transformers take us in another year... and whether it is worth hosting another workshop. Our bet: yes. If you want to stay on top of what’s going on in AI and Data Science, sign up for our next monthly "Best of AI" webinar on Friday, October 1st, and follow us on Twitter (@ZetaVector). Let’s truly enjoy the continuing pace of innovation in AI and Data Science!


