Techno Blender
Browsing Tag: Nakul

Neural Networks as Decision Trees | by Nakul Upadhya | Apr, 2023

Photo by Jens Lelie on Unsplash

Get the power of a Neural Network with the interpretable structure of a Decision Tree

The recent boom in AI has clearly shown the power of deep neural networks in various tasks, especially in the field of classification problems where the data is high-dimensional and has complex, non-linear relationships with the target variables. However, explaining the decisions of any neural classifier is an incredibly hard problem. While many post-hoc methods such as DeepLift and Layer-Wise Relevance…

XAI for Forecasting: Basis Expansion | by Nakul Upadhya | Mar, 2023

Photo by Richard Horvath on Unsplash

NBEATS and other Interpretable Deep Forecasting Models

Forecasting is a critical aspect of many industries, from finance to supply chain management. Over the years, researchers have explored various techniques for forecasting, ranging from traditional time-series methods to machine learning-based models. In recent years, forecasters have turned to deep learning and have gotten promising results with models such as Long Short-Term Memory (LSTM) networks and Temporal Convolution Networks…

SHAP for Time Series Event Detection | by Nakul Upadhya | Feb, 2023

Photo by Luke Chesser on Unsplash

Using a modified KernelSHAP for time-series event detection

Feature importance is a widespread technique used to explain how machine learning models make their predictions. The technique assigns a score or weight to each feature, indicating how much that feature contributes to the prediction. The scores can be used to identify the most important features and to understand how the model is making its predictions. One frequently used version of this is Shapley values, a model-agnostic metric…
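The excerpt cuts off at Shapley values, so as a quick illustration of the underlying idea (this is a toy sketch, not the modified KernelSHAP method from the article), the snippet below computes exact Shapley values for a small additive model by enumerating every coalition of features. The feature names and weights are invented for the example:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all coalitions (exponential in n)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f when added to coalition S.
                marginal = value_fn(set(subset) | {f}) - value_fn(set(subset))
                phi[f] += weight * marginal
    return phi

# Toy "model": the prediction is a weighted sum of whichever features are present.
weights = {"x1": 2.0, "x2": 1.0, "x3": -0.5}
def value_fn(coalition):
    return sum(weights[f] for f in coalition)

print(shapley_values(value_fn, ["x1", "x2", "x3"]))
```

For an additive model like this, each feature's Shapley value equals its weight, and the values sum to the full-coalition prediction (the efficiency property); practical libraries approximate this enumeration rather than computing it exactly.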

Do Transformers Lose to Linear Models? | by Nakul Upadhya | Jan, 2023

Photo by Nicholas Cappello on Unsplash

Long-Term Forecasting using Transformers may not be the way to go

In recent years, Transformer-based solutions have been gaining incredible popularity. With the success of BERT, GPT, and other language transformers, researchers started to apply this architecture to other sequential-modeling problems, specifically in the area of long-term time series forecasting (LTSF). The attention mechanism seemed to be a perfect method to extract some of the…
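To give a flavor of the kind of simple linear baseline such comparisons pit against Transformers (a toy level-plus-trend fit, not the actual DLinear/NLinear models from the LTSF literature; the series values are made up), the sketch below fits a trend line by ordinary least squares and extrapolates it:

```python
# Fit y_t = a + b*t by ordinary least squares over the history, then extrapolate.
def fit_linear_trend(series):
    n = len(series)
    mean_t = (n - 1) / 2
    mean_y = sum(series) / n
    # Closed-form simple-regression slope: cov(t, y) / var(t).
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(series))
    var = sum((t - mean_t) ** 2 for t in range(n))
    b = cov / var
    a = mean_y - b * mean_t
    return a, b

def forecast(series, horizon):
    a, b = fit_linear_trend(series)
    n = len(series)
    return [a + b * (n + h) for h in range(horizon)]

history = [1.0, 2.1, 2.9, 4.2, 5.0]  # roughly y = 1 + t, with noise
print(forecast(history, 3))
```

A model this simple is fully interpretable (one level, one slope), which is part of why linear baselines are such a useful sanity check for heavyweight forecasting architectures.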

Defining Interpretable Features. A summary of the findings and developed… | by Nakul Upadhya | Jan, 2023

Photo by Kevin Ku on Unsplash

A summary of the findings and taxonomy developed by MIT researchers

In February 2022, researchers at the Data to AI (DAI) group at MIT released a paper called “The Need for Interpretable Features: Motivation and Taxonomy”. In this post, I aim to summarize some of the main points and contributions of these authors and discuss some of the potential implications and critiques of their work. I highly recommend reading the original paper if you find any of this intriguing. Additionally,…

Solving Two-Stage Stochastic Programs in Gurobi | by Nakul Upadhya | Oct, 2022

Photo by Taylor Vick on Unsplash

Formulating and solving a two-stage stochastic server farm problem

Stochastic programming (SP) is a framework for modeling optimization problems that involve uncertainty. In many cases, SP models take the form of a two-stage problem. The first stage involves finding the optimal deterministic decisions. These decisions are based on information we know to be certain (AKA the here-and-now decisions). The second stage involves making decisions that rely on randomness (also called the recourse…
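To make the two-stage structure concrete, here is a minimal sketch in pure Python (brute-force enumeration rather than the article's Gurobi model, and the server-farm costs and demand scenarios are invented for illustration): the here-and-now decision is how many servers to build, and the recourse decision is how much of the revealed demand to outsource.

```python
# Tiny two-stage stochastic program solved in extensive form by enumeration.
# First stage: choose number of servers x before demand is known.
# Second stage (recourse): outsource any shortfall once demand d is revealed.
BUILD_COST = 10.0      # cost per server built (hypothetical)
OUTSOURCE_COST = 25.0  # cost per unit of demand served externally (hypothetical)

scenarios = [(0.3, 5), (0.5, 10), (0.2, 15)]  # (probability, demand)

def recourse_cost(x, demand):
    # Optimal second-stage decision here is trivial: outsource exactly the shortfall.
    return OUTSOURCE_COST * max(demand - x, 0)

def total_cost(x):
    # Objective: first-stage cost plus the expected optimal recourse cost.
    expected_recourse = sum(p * recourse_cost(x, d) for p, d in scenarios)
    return BUILD_COST * x + expected_recourse

best_x = min(range(0, 16), key=total_cost)
print(best_x, total_cost(best_x))  # builds 10 servers, expected cost 125
```

With these numbers, covering the middle scenario (x = 10) and outsourcing only in the rare high-demand scenario is cheaper than building for the worst case; a real solver handles the same trade-off when the decision space is too large to enumerate.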