Trends in AI — June 2023 // Apple's Vision Pro, AI Doomers, Optimists and AI Regulation, and TII's Falcon-40B.

Is Apple finally making a dent in the AI universe? What's the deal with AI doomers pushing for, or maybe rather against, AI regulation? And of course the latest top-of-the-bill open source LLM: Falcon-40B from Abu Dhabi's TII, with a little bit of help from Paris-based LightOn. Watch the Zeta Alpha crew discuss these developments, and catch our monthly top-10 of impactful papers.

Of course we survey the map of this month's AI Research papers...

and present the Zeta Alpha top-10 paper selection, with:

1) QLoRA by Tim Dettmers et al. + a bit of SpQR, and Elo ranking of LLMs.

65 billion parameter LLMs can now be finetuned (thanks to QLoRA's quantization tricks) and run for inference (thanks to more wizardry from SpQR) on a single GPU. En passant, Dettmers et al. produce Guanaco, a new open source LLM that stacks up pretty well against ChatGPT, and introduce a new way to benchmark LLMs using tournament-style Elo rankings. And that's not all: the code is released as open source. Great reading!
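For a taste of what this looks like in practice, here is a minimal sketch of QLoRA-style 4-bit finetuning using the Hugging Face transformers, bitsandbytes and peft integrations; the checkpoint and LoRA hyperparameters below are illustrative choices, not the paper's exact recipe:

```python
# Minimal QLoRA-style finetuning setup (illustrative hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "huggyllama/llama-7b"  # assumption: any causal LM checkpoint works here

# 4-bit NormalFloat quantization with double quantization, as in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Train only small low-rank adapters on top of the frozen 4-bit base model
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```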

2) The new Sophia optimizer beats Adam by 2x

This new drop-in replacement for Adam looks at second-order information about the loss landscape and, through a neat clipping trick, manages to leverage a cheap estimate of the Hessian diagonal to achieve two times faster convergence for LLM pretraining. This can save quite a lot of training compute and hence cash!
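The core of the method fits in a few lines. Here is a simplified sketch of the clipped second-order step; the estimator details and constants are placeholders, not the official implementation:

```python
import torch

# Simplified Sophia-style update (a sketch, not the official implementation).
# m is an EMA of gradients; h is an EMA of a diagonal Hessian estimate,
# refreshed only every few steps in the paper (e.g. a Gauss-Newton-Bartlett estimator).
def sophia_step(param, grad, m, h, lr=1e-4, beta1=0.96, rho=0.05, eps=1e-12):
    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    # Precondition by curvature, then clip elementwise to [-1, 1]:
    # steps along low-curvature directions get capped, keeping the update stable.
    update = torch.clamp(m / torch.clamp(rho * h, min=eps), -1.0, 1.0)
    param.add_(update, alpha=-lr)
```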

3) To RLHF or just to DPO? Or better LIMA instead? In the paper "Direct Preference Optimization: Your Language Model is Secretly a Reward Model", a group of researchers from Stanford University shows that the three-step Reinforcement Learning from Human Feedback (RLHF) pipeline might not be necessary: since a language model implicitly defines its own reward model, the same human preference data can be used directly as a simple classification-style finetuning objective.
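Concretely, the DPO objective reduces to a binary classification-style loss over preference pairs. A minimal sketch, assuming the four inputs are summed token log-probabilities of the chosen and rejected responses under the policy and a frozen reference model:

```python
import torch.nn.functional as F

# Minimal DPO loss sketch: logp_* are summed log-probs of each response.
def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit reward of a response = beta * log-ratio vs. the reference model
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    # Train the policy so the chosen response out-scores the rejected one
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```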


In the related LIMA paper, a team from Meta shows that just 1,000 carefully selected instruction alignment examples can be sufficient to achieve state-of-the-art performance.

4) NVIDIA plays Minecraft using GPT-4

In the paper "Voyager: An Open-Ended Embodied Agent with Large Language Models", a team from NVIDIA builds upon their earlier work on agents playing Minecraft. They harness GPT-4 to develop a curriculum for the agent and a way to build up a library of reusable skills.

This significantly speeds up learning and leads to higher-quality solutions.
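Conceptually, the Voyager loop is quite simple. Here is a hedged sketch of its three components (automatic curriculum, code generation against a skill library, and self-verification); all function names are hypothetical stand-ins, not the paper's code:

```python
# Hypothetical sketch of a Voyager-style agent loop (all names are illustrative).
def voyager_loop(llm, env, skill_library, max_iters=100):
    for _ in range(max_iters):
        # 1. Automatic curriculum: ask the LLM for the next task given progress so far
        task = llm.propose_next_task(env.state, skill_library.descriptions())
        # 2. Write a program for the task, reusing previously learned skills
        program = llm.write_program(task, skill_library.retrieve(task))
        feedback = env.execute(program)
        # 3. Self-verification: on success, store the program as a reusable skill
        if llm.verify_success(task, feedback):
            skill_library.add(task, program)
```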

5) Gorilla masters APIs, and LLMs can act as Tool Makers

Wielding and making tools are also the topics of two of this month's papers: "Gorilla: Large Language Model Connected with Massive APIs" and "Large Language Models as Tool Makers".


Of course, retrieval tools are central to LLMs' ability to generate text with factual knowledge about recent events. At the same time, we are left wondering whether enhancing LLMs with the capability to call arbitrary APIs, and even to generate and execute their own code, is not exactly the type of uncontrolled AI risk that AI doomers are worried about...
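To make the pattern concrete, here is a hedged sketch of Gorilla-style retrieval-aware API calling, with a minimal guardrail bolted on; the retriever, prompt template, and allowlist check are our illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of retrieval-aware API calling, Gorilla-style.
def answer_with_api(llm, retriever, user_request, allowed_apis):
    # Ground the model in up-to-date documentation instead of memorized APIs
    api_docs = retriever.search(user_request, top_k=3)
    prompt = f"API documentation:\n{api_docs}\n\nTask: {user_request}\nAPI call:"
    call = llm.generate(prompt)
    # Minimal guardrail: only execute calls against an explicit allowlist
    if call.split("(")[0] not in allowed_apis:
        raise ValueError(f"Refusing to execute unvetted call: {call}")
    return call
```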

6) A new LLM decoding trick for tasks that require planning

Tree of Thoughts is a new approach to Chain-of-Thought prompting that enables much higher performance on tasks that require planning.


Multiple paths of CoT-like thoughts are generated, evaluated, and then expanded during the LLM generation pass. Very promising, although the prompts are still hard-coded for each task.
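A stripped-down breadth-first variant of the idea, with the LLM acting as both thought generator and evaluator; all helper methods here are hypothetical:

```python
# Minimal breadth-first Tree-of-Thoughts sketch (helper methods are illustrative).
def tree_of_thoughts(llm, problem, depth=3, branch=5, beam=3):
    frontier = [""]  # partial chains of thought
    for _ in range(depth):
        candidates = []
        for partial in frontier:
            # Branch: sample several candidate next thoughts per partial solution
            for thought in llm.propose_thoughts(problem, partial, n=branch):
                candidates.append(partial + "\n" + thought)
        # Prune: keep only the most promising chains, scored by the LLM itself
        frontier = sorted(candidates, key=lambda c: llm.score(problem, c),
                          reverse=True)[:beam]
    return frontier[0]
```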


7) Massively Multilingual Speech

In the paper "Scaling Speech Technology to 1,000+ Languages", a team from Meta AI unveils how they've built speech recognition and generation models for 1,100+ languages using wav2vec 2.0, with readings of the Bible serving as training data for low-resource languages.



The resulting word error rates are half those of OpenAI's Whisper model, and everything is open source. Is Meta becoming the real "Open AI" company?
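MMS comes with a Hugging Face transformers integration. Here is a minimal transcription sketch along the lines of the release documentation; treat the checkpoint id and the adapter-switching details as assumptions to verify against the docs:

```python
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"  # multilingual ASR checkpoint from the MMS release
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Switch the tokenizer vocabulary and language adapter to the target language
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

audio = torch.randn(16_000)  # placeholder: 1 second of 16 kHz mono audio
inputs = processor(audio.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.decode(logits.argmax(dim=-1)[0]))
```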


8) Drag Your GAN

This paper surely deserves this month's very cool demo award. By clicking on a point in an image and dragging it to where you want it to be, you can directly edit your photos. How cool is that?


9) SiamMAE


This paper on representation learning from videos shows how a fully self-supervised approach with aggressive asymmetric masking (reconstructing a heavily masked future frame conditioned on a visible past frame) can get state-of-the-art results.
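The key trick is the asymmetric masking between the two frames. A tiny sketch of what that sampling looks like (the 95% ratio is from the paper; patch counts are illustrative):

```python
import torch

# Asymmetric masking sketch: past frame fully visible, future frame ~95% masked.
def asymmetric_mask(num_patches=196, future_mask_ratio=0.95):
    past_visible = torch.arange(num_patches)             # keep all past-frame patches
    keep = int(num_patches * (1 - future_mask_ratio))    # ~10 of 196 future patches survive
    future_visible = torch.randperm(num_patches)[:keep]  # random subset of future patches
    return past_visible, future_visible

# The siamese encoder sees both visible sets; the decoder must reconstruct the
# masked future patches from the past frame, which forces it to learn
# correspondences between frames.
```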

10) Plus recent cool Information Retrieval papers

And finally, since neural search and IR are the topics closest to our heart, here are our favorites for the month:

- Fusion-in-T5: Unifying Document Ranking Signals gets very good results on MS MARCO by injecting classical reranking signals into the token sequence of a T5 reranker (see the sketch after this list).


- Entity-Seeking Queries with Set Operations: a dataset and a solution? Watch the video for more information.

- A new alternative to late interaction in neural IR: the I^3 Retriever?
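As a footnote to the Fusion-in-T5 entry above, here is a sketch of what injecting ranking signals into the reranker's input sequence could look like; the template and signal names are our assumptions, not the paper's exact serialization:

```python
# Hypothetical input template for feeding classical ranking signals to a
# T5 reranker; the exact serialization in Fusion-in-T5 may differ.
def build_reranker_input(query, doc, bm25_score, title_match):
    return (f"Query: {query} "
            f"BM25: {bm25_score:.2f} TitleMatch: {int(title_match)} "
            f"Document: {doc}")

example = build_reranker_input(
    "what is dense retrieval",
    "Dense retrieval encodes queries and documents as vectors ...",
    bm25_score=12.7, title_match=True)
# The T5 reranker then scores this sequence like any other query-document pair.
```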

Enjoy the discovery! You'll find our full top-10 list and a bit more in the video.
This month's selection is all wrapped up. If you want to stay ahead of the curve, give us a follow on Twitter @zetavector and sign up for next month's webinar!





