Trending this week: DeepMind’s Perceiver deep learning model processes all types of input; AI that can decode your thoughts; an AI-generated immune system “clock” that predicts health and mortality.
Every week we analyze the most discussed topics on Twitter by Data Science & AI influencers.
The following topics, URLs, resources, and tweets have been automatically extracted using a topic modeling technique based on Sentence BERT, which we have enhanced to fit our use case.
This week, Data Science and AI influencers on Twitter have talked about:
- ML Updates
- Amazing Deep Learning Applications
- ML How-Tos
The following sections provide all the details for each topic.
ML Updates
AI & data science influencers have shared some updates on machine learning.
Nige Willson has shared a post summarizing a DeepMind research paper, “Perceiver: General Perception with Iterative Attention.” This deep learning model adapts the Transformer (a.k.a. attention) architecture so it can process and classify all types of input, from audio to images, and perform different tasks. The Perceiver follows a multi-tasking spirit, and it shows meaningful results on benchmark tests, including the classic ImageNet test of image recognition, AudioSet, and ModelNet.
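The paper’s central trick can be illustrated with a toy sketch: a small, learned latent array cross-attends to an arbitrarily large input array, so the expensive attention step scales with the number of latents rather than with the input size. The code below is a minimal, framework-free illustration using random toy vectors, not DeepMind’s actual implementation.

```python
import math
import random

random.seed(0)

def matmul(a, b):
    # Plain-Python matrix multiply: a is (n x k), b is (k x m).
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attend(latents, inputs):
    # Queries come from the small latent array; keys/values come from the
    # (potentially huge) input byte array. The cost is O(latents * inputs),
    # not O(inputs**2) as in standard Transformer self-attention.
    dim = len(latents[0])
    scores = matmul(latents, [list(col) for col in zip(*inputs)])
    weights = [softmax([s / math.sqrt(dim) for s in row]) for row in scores]
    return matmul(weights, inputs)  # shape: (num_latents, dim)

# Toy sizes: 4 latent vectors summarize 100 input vectors of dimension 8.
num_latents, num_inputs, dim = 4, 100, 8
latents = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(num_latents)]
inputs = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(num_inputs)]

out = cross_attend(latents, inputs)
print(len(out), len(out[0]))  # 4 8 -- the small latent shape is preserved
```

Because the output keeps the latent array’s shape regardless of input length, the same module can ingest audio, images, or point clouds by simply flattening them into an input array.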
Sebastian Raschka has shared a research paper titled “PonderNet: Learning to Ponder”. This paper introduces PonderNet, a new algorithm that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet optimizes the tradeoff between training prediction accuracy, computational cost and generalization, and has achieved state-of-the-art results on a complex task designed to test the reasoning capabilities of neural networks.
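PonderNet’s halting mechanism can be sketched as follows: the network emits a conditional halting probability at each step, and these are converted into an unconditional distribution p_n = lambda_n * prod_{k<n}(1 - lambda_k) over halting steps, which weights the per-step losses. The probability values below are illustrative, not taken from the paper.

```python
def halting_distribution(lambdas):
    """Turn per-step *conditional* halting probabilities lambda_n into the
    unconditional distribution p_n = lambda_n * prod_{k<n}(1 - lambda_k),
    forcing a halt at the final step so the probabilities sum to 1."""
    probs, still_running = [], 1.0
    for i, lam in enumerate(lambdas):
        if i == len(lambdas) - 1:
            lam = 1.0  # always halt at the last allowed step
        probs.append(still_running * lam)
        still_running *= (1.0 - lam)
    return probs

# Illustrative conditional halting probabilities for a 4-step budget.
lambdas = [0.1, 0.3, 0.5, 0.2]
p = halting_distribution(lambdas)
expected_steps = sum((i + 1) * pi for i, pi in enumerate(p))
print([round(x, 4) for x in p])  # [0.1, 0.27, 0.315, 0.315]
print(round(expected_steps, 4))
```

The expected number of steps (about 2.85 here) is what PonderNet regularizes, letting easy inputs halt early and hard ones ponder longer.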
Mike Tamir has shared a report introducing AndroidEnv, an open-source platform for Reinforcement Learning (RL) research built on top of the Android ecosystem. Android’s large ecosystem opens up the possibility for defining varied tasks, enabling agents to learn to achieve different types of objectives on the same platform. This report presents an empirical evaluation of some popular reinforcement learning agents on a set of tasks built on this platform.
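The interaction loop such a platform supports looks roughly like the sketch below. The environment class here is a hypothetical stand-in (AndroidEnv itself renders real Android screens and follows the dm_env API); it only illustrates the observe/act/reward cycle an agent runs, with touchscreen-style (x, y) actions.

```python
import random

random.seed(1)

class DummyTouchscreenEnv:
    """Hypothetical stand-in mimicking the reset/step loop of platforms
    like AndroidEnv: observations are pixels (a placeholder here) and
    actions are (x, y) touch coordinates in [0, 1]."""
    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return {"pixels": [0.0] * 4}  # placeholder observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action[0] > 0.5 else 0.0  # toy reward rule
        done = self.t >= self.episode_len
        return {"pixels": [0.0] * 4}, reward, done

def random_agent(observation):
    # A random agent: tap a uniformly random screen location.
    return (random.random(), random.random())

env = DummyTouchscreenEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step(random_agent(obs))
    total_reward += reward
print(total_reward)
```

The report’s empirical evaluation plugs real RL agents (e.g., DQN- or policy-gradient-style learners) into exactly this kind of loop, with tasks defined on top of the Android ecosystem.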
KDnuggets has shared a post talking about TabNet, a high-performance, interpretable deep learning architecture for tabular data that uses sequential attention to choose which features to reason from at each decision step. According to the paper introducing TabNet, this architecture outperforms the leading tree-based models across a variety of benchmarks while offering built-in explainability that boosted tree models lack. It can also be used without any feature preprocessing.
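TabNet’s sequential attention can be sketched in a toy form: each decision step produces a mask over features, and a “prior scale” with relaxation parameter gamma shrinks the budget of features already used, nudging later steps toward fresh ones. The sketch below substitutes softmax for the paper’s sparsemax and uses made-up logits, so it illustrates the mechanism rather than reproducing TabNet.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sequential_masks(logits_per_step, gamma=1.3):
    """Toy version of TabNet's attentive step: each decision step emits a
    mask over features, then the prior scale of heavily-used features
    shrinks, so later steps prefer features not yet consumed.
    (Softmax stands in for the paper's sparsemax.)"""
    n_features = len(logits_per_step[0])
    prior = [1.0] * n_features
    masks = []
    for logits in logits_per_step:
        # Bias each feature's logit by its remaining prior "budget".
        mask = softmax([l + math.log(p + 1e-9) for l, p in zip(logits, prior)])
        prior = [p * (gamma - m) for p, m in zip(prior, mask)]
        masks.append(mask)
    return masks

# Two decision steps over four features; the first step strongly prefers
# feature 0, so the second step is pushed toward the other features.
masks = sequential_masks([[4.0, 0.0, 0.0, 0.0], [4.0, 0.0, 0.0, 0.0]])
print(masks[0][0] > masks[1][0])  # True: feature 0 gets less weight in step 2
```

The per-step masks are also what gives TabNet its built-in explainability: inspecting them shows which features drove each decision step.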
Amazing Deep Learning Applications
The influencers also shared some amazing examples of artificial intelligence applied through deep learning this week.
Nige Willson has shared the following posts:
One talking about an AI used to generate an immune system “clock” that predicts health and mortality. This post explains how investigators at Stanford University School of Medicine and the Buck Institute for Research on Aging have harnessed artificial intelligence to build an inflammatory-aging clock — iAge — that they suggest can predict how strong your immune system is, how soon you’ll become frail, or whether you have — as yet unseen — cardiovascular problems that could in the future become clinically relevant. The team leading this project published its findings in a paper titled “An inflammatory aging clock (iAge) based on deep learning tracks multimorbidity, immunosenescence, frailty and cardiovascular aging.”
Another explaining how a brain implant turns a paralyzed man’s thoughts into ‘speech’. According to this post, a team of scientists tested an AI system on a man in his late 30s who suffered a brainstem stroke in his teens that left him unable to talk. He now communicates by using a pointer attached to a baseball cap to poke letters on a screen. This feat was achieved by a system based on custom neural network models that distinguish between neurological signals. The subject of the experiment was asked to say different short sentences composed from a set of 50 words, and the words were then decoded from his brain activity onto a screen. However, the system is still prone to errors: it decoded the words with a median accuracy of 74% at 15 words per minute, and reached a peak performance of 93% accuracy at 18 words per minute.
On his side, Marcus Borba shared a post explaining how a research team at the University of Southern California has developed a new artificial intelligence system that uses human-like abilities to imagine never-before-seen objects. According to this post, by using systems that extrapolate data, researchers were able to envision an object and change its attributes in a process similar to human imagination. To build their system, the researchers began by exploring disentanglement, an approach infamously used in creating deepfakes. They built an AI that can look at a few sample images of a chair, for example, understand the basic attributes of those chairs, and use that knowledge to create new chairs. They call it “controllable novel image synthesis”, or imagination. The researchers say this framework is compatible with nearly any type of data or knowledge and widens the opportunity for other applications, like self-driving vehicles or synthesizing new medicine.
ML How-Tos
This week, the AI & data science influencers have shared some examples of projects showing how to build specific machine learning (ML) systems.
Here are some posts shared by KDnuggets:
An article presenting How to Create Unbiased Machine Learning Models. This post discusses the concepts of bias and fairness in the Machine Learning world and shows how ML biases often reflect existing biases in society. Also, it presents various methods for testing and enforcing fairness in ML models. Finally, the author states that applying these methods will lead to more just decision-making in AI-assisted systems around the world.
A post discussing Building a Deep Learning-Based Reverse Image Search. This article explains how to perform a reverse image search, or content-based image retrieval: given a target image, search for similar content in a database. This is the same concept Google reverse image search uses to take in an image and return the most similar images in a fraction of a second. The post details the two key points on which reverse image search relies: the use of a deep learning architecture to extract features from images, and the use of a similarity measure to compare them.
A post talking about how to perform Unsupervised Learning for Predictive Maintenance using Auto-Encoders. This article outlines a machine learning approach to detecting and diagnosing anomalies in the context of machine maintenance. It covers machine maintenance, what predictive maintenance is, approaches to machine diagnosis, and machine diagnosis using machine learning; this last point details, in particular, the use of the Multi-Scale Convolutional Recurrent Encoder-Decoder (MSCRED).
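The fairness testing discussed in the first of these posts can be made concrete with one common metric, the demographic parity difference: the gap in positive-prediction rates between groups. The post does not prescribe this exact metric; the sketch below, with made-up predictions, is just one illustrative check.

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups.
    A common (though not the only) fairness test: values near 0 suggest
    the model selects members of each group at similar rates."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Toy predictions (1 = approved) for applicants from groups "A" and "B".
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(y_pred, groups)
print(gap)  # 0.5: group A is approved 75% of the time, group B only 25%
```

Enforcement methods then try to shrink this gap, for example by reweighting training data or post-processing decision thresholds per group.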
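The two key points of the reverse-image-search post, deep features plus a similarity measure, can be sketched with cosine similarity. The feature vectors below are made-up stand-ins for the embeddings a CNN would actually produce.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def reverse_image_search(query_vec, database, top_k=2):
    """Rank database images by feature similarity to the query.
    `database` maps image ids to feature vectors that, in a real system,
    would come from a deep network's penultimate layer."""
    ranked = sorted(database.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [img_id for img_id, _ in ranked[:top_k]]

# Toy 3-d "features" standing in for deep embeddings.
database = {
    "cat_01.jpg": [0.9, 0.1, 0.0],
    "cat_02.jpg": [0.8, 0.2, 0.1],
    "car_01.jpg": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # an unseen cat photo
results = reverse_image_search(query, database)
print(results)  # the two cat images rank above the car
```

At scale, the sort is replaced by an approximate nearest-neighbor index, which is how sub-second lookup over millions of images becomes feasible.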
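The reconstruction-error idea behind autoencoder-based predictive maintenance can be sketched without a deep network: PCA is the optimal linear autoencoder, so a one-component projection serves as a tiny encoder/decoder here. Readings whose reconstruction error exceeds a threshold learned from healthy data are flagged as anomalies. This is a simplified stand-in for MSCRED, not the article’s actual model.

```python
import math
import random

random.seed(0)

def principal_direction(data, iters=100):
    """Power iteration on the covariance of 2-d data: the leading
    eigenvector acts as a 1-d linear 'autoencoder' bottleneck."""
    n = len(data)
    mean = [sum(p[i] for p in data) / n for i in (0, 1)]
    centered = [[p[0] - mean[0], p[1] - mean[1]] for p in data]
    v = [1.0, 0.0]
    for _ in range(iters):
        # Multiply v by the covariance matrix, then renormalize.
        w = [sum(c[i] * (c[0] * v[0] + c[1] * v[1]) for c in centered) / n
             for i in (0, 1)]
        norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
        v = [w[0] / norm, w[1] / norm]
    return mean, v

def reconstruction_error(point, mean, v):
    # Encode: project the centered point on v; decode: map it back;
    # the leftover distance is the reconstruction error.
    c = [point[0] - mean[0], point[1] - mean[1]]
    t = c[0] * v[0] + c[1] * v[1]
    recon = [t * v[0], t * v[1]]
    return math.sqrt((c[0] - recon[0]) ** 2 + (c[1] - recon[1]) ** 2)

# "Healthy machine" readings: two correlated sensors near the line y = x.
normal = [(x + random.gauss(0, 0.05), x + random.gauss(0, 0.05))
          for x in [random.uniform(0, 10) for _ in range(200)]]
mean, v = principal_direction(normal)
errors = [reconstruction_error(p, mean, v) for p in normal]
threshold = max(errors) * 1.5  # well beyond any training error is suspect

anomaly = (5.0, 9.0)  # the sensors disagree: a fault signature
print(reconstruction_error(anomaly, mean, v) > threshold)  # True
```

MSCRED applies the same principle with convolutional-recurrent encoders over multi-sensor correlation matrices, but the detection rule is identical: large reconstruction error means the reading does not look like healthy operation.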