Top Twitter Topics by Data Scientists #41

Trending this week: learn attention-based deep Multiple Instance Learning (MIL); use the Transformer-based Sequential Denoising Auto-Encoder (TSDAE) for unsupervised fine-tuning of sentence transformers; build your own question-answering system; DeepMind’s Fictitious Co-Play trains more effective RL agents to collaborate with humans without using human data.

Every week we analyze the most discussed topics on Twitter by Data Science & AI influencers.

The following topics, URLs, resources, and tweets have been automatically extracted using a topic modeling technique based on Sentence BERT, which we have enhanced to fit our use case.

Want to know more about the methodology used? Check out this article for more details, and find the code in this GitHub repository!


In this new post of our technology watch series, we will talk about:

  • DL Update
  • RL Update
  • Projects for Every Aspiring Data Scientist

Discover what Data Science and AI influencers have posted on Twitter this week in the following paragraphs.

DL Update

This week, some Data Science and AI influencers have posted some updates on deep learning.

François Chollet has shared a very interesting tutorial demonstrating classification using attention-based deep Multiple Instance Learning (MIL). This blog post first introduces the concept of Multiple Instance Learning (MIL) and the motivation behind it. Then, it provides a detailed example of a MIL implementation in Keras, with all the code. It also gives key indications on how to create and evaluate multiple instance learning models, and how to avoid several common pitfalls such as overfitting.
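As a toy illustration of the core idea (not Chollet's actual Keras code), the attention-pooling step of MIL can be sketched in NumPy: each instance in a bag is scored, the scores are normalized into attention weights, and the bag embedding is the weighted sum of instances. The parameters `V` and `w` stand in for weights that would be learned end-to-end.

```python
import numpy as np

def attention_mil_pool(bag, V, w):
    """Attention-based MIL pooling: score each instance, softmax the
    scores into attention weights, and weighted-sum the bag."""
    scores = np.tanh(bag @ V) @ w           # one score per instance
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()                # attention weights sum to 1
    return weights, weights @ bag           # weights, pooled bag embedding

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))   # a bag of 5 instance embeddings (dim 8)
V = rng.normal(size=(8, 4))     # stand-in for learned attention params
w = rng.normal(size=(4,))
weights, bag_embedding = attention_mil_pool(bag, V, w)
```

In the tutorial these weights are trained jointly with the classifier, which also makes them interpretable: the high-weight instances are the ones driving the bag-level prediction.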

Illustration of a dual attention Multiple Instance Learning architecture (Image from the paper “Dual attention multiple instance learning with unsupervised complementary loss for COVID-19 screening”).

James Briggs has shared the following two amazing posts on natural language processing, inference, and generation (NLP, NLI, NLG):

A blog post introducing a new pre-training method for unsupervised fine-tuning of sentence transformers: the Transformer-based Sequential Denoising Auto-Encoder (TSDAE). TSDAE was developed by Kexin Wang, Nils Reimers, and Iryna Gurevych in 2021 and is currently one of the best-performing options for unsupervised fine-tuning of sentence transformers. The main idea behind TSDAE is the combination of transformer layers with a (sequential) denoising auto-encoder.

TSDAE introduces noise to input sequences by deleting or swapping tokens (e.g., words). These damaged sentences are encoded by the transformer model into sentence vectors. Another decoder network then attempts to reconstruct the original input from the damaged sentence encoding.

Although TSDAE produces lower-performing models than supervised methods, it opens doors to many previously inaccessible domains and languages. With nothing more than unstructured text, it allows building effective sentence transformers.
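The token-deletion noise described above can be sketched in plain Python. This is a minimal illustration under stated assumptions: `delete_noise` is a hypothetical helper, and the 0.6 deletion ratio mirrors the default reported in the TSDAE paper.

```python
import random

def delete_noise(tokens, deletion_ratio=0.6, seed=None):
    """TSDAE-style input corruption: randomly delete a fraction of tokens.
    The encoder sees the damaged sentence; the decoder must reconstruct
    the original sentence from the single sentence vector."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > deletion_ratio]
    # Never return an empty input sentence
    return kept if kept else [rng.choice(tokens)]

sentence = "unsupervised fine tuning of sentence transformers".split()
damaged = delete_noise(sentence, deletion_ratio=0.6, seed=42)
```

Because reconstruction must go through a single fixed-size sentence vector, the encoder is pushed to pack sentence-level meaning into that vector, which is exactly what a sentence transformer needs.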

The second article gives an introduction to open-domain question-answering. It focuses on open-domain QA (ODQA): systems that handle questions across broad topics and cannot rely on a specific set of rules. It also defines the alternative, closed-domain QA, which focuses on a limited domain or scope and can often rely on explicit logic.

This post details the two types of Question-Answering (QA) systems:

  • Extractive QA, which extracts answers to questions from a predefined corpus of texts and returns them verbatim, without any modification;
  • Abstractive QA, which generates entirely new answers to questions based on a predefined corpus of texts.

Furthermore, this post explains the idea behind semantic similarity and how it is applied to QA models. It explores the various components of QA systems, such as vector databases, retrievers, readers, and generators. It then demonstrates how to implement these stacks using various tools and models.
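To make the retriever component concrete, here is a minimal sketch of how a dense retriever ranks passages by cosine similarity, the role a vector database plays in these stacks. The embeddings here are hand-made toy vectors standing in for model-encoded passages.

```python
import numpy as np

def retrieve(query_vec, passage_vecs, top_k=2):
    """Toy dense retriever: rank stored passage embeddings by cosine
    similarity to the query embedding and return the top-k indices."""
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    return np.argsort(-(p @ q))[:top_k]

# Hand-made embeddings: passage 0 points almost the same way as the query.
query = np.array([1.0, 0.0, 0.0])
passages = np.array([[0.9, 0.1, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.5, 0.5, 0.0]])
ranked = retrieve(query, passages, top_k=2)  # → indices [0, 2]
```

In a full extractive or abstractive pipeline, the retrieved passages would then be handed to a reader or generator model to produce the final answer.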

RL Update

Also, some updates on reinforcement learning were shared in various posts. We have selected the following collection for you:

Mike Tamir has shared an article in which DeepMind says reinforcement learning is “enough” to reach general AI. This post discusses how efforts to create all kinds of complicated mechanisms and technologies that replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life have resulted in AI systems that can efficiently solve specific problems in limited environments, but fall short of the kind of general intelligence seen in humans and animals.

This article cites the paper titled “Reward is Enough” by scientists at U.K.-based AI lab DeepMind, which argues that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence.

For their part, ipfconline have tweeted about how DeepMind’s Fictitious Co-Play trains RL agents to collaborate with novel humans without using human data. The tweet summarizes the research paper Collaborating with Humans without Human Data by DeepMind, which explores how to train agents to collaborate well with novel human partners without using human data, and presents Fictitious Co-Play (FCP), a surprisingly simple approach designed to address this challenge.

The researchers summarize the main contributions of their study as:

  1. Proposing Fictitious Co-Play (FCP) to train agents capable of zero-shot coordination with humans.
  2. Demonstrating that FCP agents generalize better than SP (self-play), PP (population play), and BCP (behavioral cloning play) in zero-shot coordination with a variety of held-out agents.
  3. Proposing a rigorous human-agent interaction study with behavioral analysis and participant feedback.
  4. Demonstrating that FCP significantly outperforms the BCP state-of-the-art, both in task score and in human partner preference.

Illustration of the different methods for RL agent training (image from the paper “Collaborating with Humans without Human Data” by DeepMind).
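The recipe behind contribution 1 can be sketched schematically. This is a hypothetical, non-RL stand-in: `train_self_play` simply records a skill score rather than actually training a policy, but the structure of the partner pool matches the paper's description.

```python
import random

def train_self_play(seed, steps):
    """Hypothetical stand-in for one self-play training run: the returned
    'policy' is just a record whose skill grows with training steps."""
    rng = random.Random(seed)
    return {"seed": seed, "steps": steps, "skill": steps + rng.random()}

def fcp_partner_pool(n_seeds=3, checkpoints=(10, 100)):
    """FCP's core recipe: build a diverse partner pool from several
    independent self-play runs (different seeds) and, crucially, keep
    earlier checkpoints so the pool spans a range of skill levels."""
    return [train_self_play(seed, steps)
            for seed in range(n_seeds) for steps in checkpoints]

pool = fcp_partner_pool()
# The final FCP agent is then trained to maximize reward when paired
# with partners sampled from this pool, so it must adapt to both the
# conventions (seeds) and the skill levels (checkpoints) it meets.
```

Because no human data enters the pool, the diversity across seeds and checkpoints is what lets the trained agent generalize to novel human partners.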

Finally, Sergey Levine has shared a blog post by Kate Rakelly — a researcher at Berkeley Artificial Intelligence Research (BAIR) — discussing Which Mutual Information Representation Learning Objectives are Sufficient for Control?

This post discusses how processing raw sensory inputs is crucial for applying deep RL algorithms to real-world problems, and the challenge of building a direct “end-to-end” RL system that maps sensor data to actions (figure below, left). This challenge is often broken down into two problems (figure below, right):

  • (1) Extract a representation of the sensory inputs that retains only the relevant information;
  • (2) Perform RL with these representations of the inputs as the system state.
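The two-step decomposition above can be sketched as a pipeline. This is a toy sketch, not BAIR's code: `W` and `A` stand in for parameters that would be learned by the representation objective and the RL algorithm respectively.

```python
import numpy as np

def encoder(observation, W):
    """Step (1): compress the raw sensory input into a compact
    representation that should retain only the relevant information."""
    return np.tanh(W @ observation)

def policy(representation, A):
    """Step (2): run the control policy on the representation, treating
    it as the system state instead of the raw sensory input."""
    return int(np.argmax(A @ representation))

rng = np.random.default_rng(1)
obs = rng.normal(size=(64,))   # stand-in for a raw sensory observation
W = rng.normal(size=(8, 64))   # hypothetical learned encoder weights
A = rng.normal(size=(4, 8))    # hypothetical policy head over 4 actions
action = policy(encoder(obs, W), A)
```

The question the post raises is precisely whether the objective used to learn `W` (e.g., a mutual-information objective) keeps enough information for step (2) to recover the optimal policy.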

This post discusses whether sufficiency matters in RL and proposes experiments to evaluate the learned representations.

Projects for Every Aspiring Data Scientist

This week, many data science influencers shared posts on projects every aspiring data scientist should do. Kosta Derpanis has shared the PennAction dataset, which contains 2,326 video sequences of 15 different actions, with human joint annotations for each sequence. The dataset is freely available for download from the link he shared.

KDnuggets has shared various lists of projects and resources that data scientists can use.

This includes a list of the Top 10 Data Science Courses to take in 2021. Another very interesting read is 6 Goals Every Wannabe Data Scientist Should Make. Many people already working in tech-centric fields realize they want to embark on new paths that eventually lead to careers as data scientists. That goal in itself is a worthy one, but it is essential for people to also set intermediate goals that will help them get closer to that broader aim.

They also shared some use cases in the Top 8 Data Science Use Cases in Gaming. Data science has entered various industries and changed how they function. The gaming industry is no exception. Moreover, data science techniques and methodologies have become integral to game development, design, operation, and many other stages.

Hope you enjoyed this new post in our technology watch series. Stay tuned!