RoBERTa (Robustly Optimized BERT Pretraining Approach) is a powerful transformer model that performs strongly across a wide range of NLP tasks. In this post, I'll explain how to implement text classification with RoBERTa, based on my implementation for multilingual text classification.
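For illustration, here is a minimal sketch of fine-tuning RoBERTa for classification with the Hugging Face `transformers` Trainer. The checkpoint, dataset, and hyperparameters are placeholders, not the post's actual setup (a multilingual run might swap in `xlm-roberta-base`, for instance):

```python
# Minimal RoBERTa text-classification fine-tuning sketch.
# Checkpoint, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "roberta-base"  # assumption; multilingual work might use xlm-roberta-base
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # stand-in corpus for illustration

def tokenize(batch):
    # Truncate only; the Trainer pads batches dynamically when a tokenizer is given.
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="roberta-clf",
        num_train_epochs=2,
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
print(trainer.evaluate())
```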
PaperLink is an AI-powered research assistant that helps researchers navigate academic papers more efficiently. It combines large language models with a graph database to create an intelligent system that understands relationships between papers and assists in writing research documents.
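As a rough sketch of the graph-database side of such a system (assuming Neo4j as the backend; the paper IDs, relationship type, and queries are hypothetical, not PaperLink's actual schema), citation edges might be stored and queried like this:

```python
# Hypothetical sketch: storing and querying paper relationships in Neo4j
# (Python driver 5.x API). None of these names come from PaperLink itself.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_citation(tx, citing_id: str, cited_id: str) -> None:
    # MERGE creates the nodes/edge only if they do not already exist.
    tx.run(
        "MERGE (a:Paper {id: $citing}) "
        "MERGE (b:Paper {id: $cited}) "
        "MERGE (a)-[:CITES]->(b)",
        citing=citing_id, cited=cited_id,
    )

def related_papers(tx, paper_id: str, limit: int = 5):
    # Papers co-cited with the query paper, ranked by overlap -- one simple
    # notion of "paper relationships" an LLM could be fed as context.
    result = tx.run(
        "MATCH (p:Paper {id: $id})-[:CITES]->(c)<-[:CITES]-(other:Paper) "
        "RETURN other.id AS id, count(c) AS shared "
        "ORDER BY shared DESC LIMIT $limit",
        id=paper_id, limit=limit,
    )
    return [record["id"] for record in result]

with driver.session() as session:
    session.execute_write(add_citation, "paper-A", "paper-B")
    print(session.execute_read(related_papers, "paper-A"))
```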
Influence functions are a classic technique from robust statistics (1) for understanding and improving machine learning models. By tracing a model's prediction back to its training data, they identify the training points most responsible for that prediction. This information can be used to improve a model by finding and removing noisy or irrelevant data, and to debug it by revealing errors in the training data or in the model's assumptions. Overall, influence functions are a powerful tool for understanding and improving machine learning models (2). They apply to both linear and non-linear models and can be approximated efficiently, which has made them increasingly popular in machine learning research and practice.
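As a concrete sketch, here is the standard influence-function computation for logistic regression, following the definition I(z_i, z_test) = -∇L(z_test)ᵀ H⁻¹ ∇L(z_i); the synthetic data, training loop, and damping term are illustrative assumptions:

```python
# Influence functions for logistic regression, from the standard definition.
# Dataset, optimizer, and damping value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)  # labels in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by plain gradient descent on the mean log loss (enough for a demo).
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / n

# Per-example gradient of the log loss: (p_i - y_i) * x_i.
p = sigmoid(X @ w)
grads = (p - y)[:, None] * X                       # shape (n, d)

# Hessian of the mean loss, with small damping so it is safely invertible.
S = p * (1.0 - p)
H = (X.T * S) @ X / n + 1e-3 * np.eye(d)

# Influence of every training point on the loss at one query point
# (reusing a training point as the "test" point for brevity).
x_test, y_test = X[0], y[0]
g_test = (sigmoid(x_test @ w) - y_test) * x_test
influence = -grads @ np.linalg.solve(H, g_test)    # shape (n,)

# The most positive entries are the points whose removal the linear
# approximation predicts would *lower* the test loss -- natural candidates
# when hunting for noisy or mislabeled examples.
print(np.argsort(influence)[-5:])
```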
The forward-forward algorithm, proposed by Geoffrey Hinton, is a novel method for training neural networks without backpropagation: instead of a forward pass followed by a backward pass, it runs two forward passes, one on positive (real) data and one on negative data, and trains each layer locally to produce high "goodness" on positive inputs and low goodness on negative ones.
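A minimal PyTorch sketch of the idea follows; the goodness threshold, layer sizes, and the use of random noise as negative data are simplifying assumptions, not the paper's exact recipe:

```python
# Forward-forward sketch: each layer trains on a local goodness objective
# (sum of squared activations); no gradients flow between layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in: int, d_out: int, threshold: float = 2.0):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold  # assumed value
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def forward(self, x):
        # Length-normalize the input so only its *direction* carries
        # information to the next layer, as in the paper.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Push positive goodness above the threshold, negative below it.
        loss = (F.softplus(self.threshold - g_pos) +
                F.softplus(g_neg - self.threshold)).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach the outputs so the next layer's update stays local.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.randn(32, 784)   # stand-in "real" data
x_neg = torch.randn(32, 784)   # crude stand-in negative data (assumption)
for _ in range(10):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```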
This Kaggle competition focused on a particular kind of anomaly: under extrusion. Under extrusion in 3D printing occurs when the printer doesn't feed enough filament for the print job, which can result in gaps, weak structures, or incomplete layers in the printed object. The objective was to detect under extrusion from images captured by a variety of 3D printers.
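As a sketch of the kind of binary image classifier this task calls for (not the competition solution itself), one could fine-tune a pretrained ResNet-18 with a two-class head; the directory layout and hyperparameters below are assumptions:

```python
# Binary image classifier sketch: normal vs. under extrusion.
# Directory layout, transforms, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: prints/train/{normal,under_extrusion}/*.jpg
train_ds = datasets.ImageFolder("prints/train", transform=transform)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # normal vs. under extrusion
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```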