
Visualization, Math, Time Series, and More: Our Best Recent Deep Dives

Welcome to the 150th edition of the Variable! Choosing the articles we share in this space is always one of our weekly high points, as it offers us—and hopefully you, too—an opportunity to appreciate the depth and diversity of experiences our authors bring to TDS.

We couldn’t think of a better way to celebrate this milestone than to put together a selection of some of our best recent deep dives. These are the posts that might require the most effort on the part of both writers and editors, but that also deliver on their ambition. Whether they tackle introductory topics or advanced research, they approach their subject matter with nuance and great detail, and patiently walk the reader through new questions and workflows. Let’s dive in!

  • An Interactive Visualization for Your Graph Neural Network Explanations
    To kick things off, we turn to Benjamin Lee’s detailed tutorial, where we learn how to build an interactive visualization for GNNs in five steps, and where readers can find all the code snippets they’ll need to start tinkering and creating on their own.
  • Deep Learning Illustrated, Part 1: How Does A Neural Network Work?
    Going deep doesn’t mean inaccessible or hard-to-follow writing—on the contrary! Case in point: Shreya Rao’s latest beginner-friendly post, an intro to neural networks that assumes very little prior knowledge and offers a lovingly illustrated explanation of the networks’ inner workings.
  • Handling Gaps in Time Series
    Imputing missing data in time series is as essential a data science task as they come, but its ubiquity doesn’t make it any easier to execute well. Erich Silva’s patient guide covers the common challenges inherent to missing-data analysis and evaluation metrics.
Photo by Malgorzata Bujalska on Unsplash
  • 9 Simple Tips to Take You From “Busy” Data Scientist to Productive Data Scientist in 2024
    List-based articles come with the risk of rushing through too many items and leaving the reader with very few concrete insights. Madison Hunter’s latest career-advice post shows that it’s possible to cover quite a bit of ground and offer actionable advice even when you divide your material into more digestible morsels.
  • Building a Random Forest by Hand in Python
    To truly grasp how an algorithm like random forest works, few approaches are more effective than building it yourself. This may sound daunting, but fortunately Matt Sosna is here to keep you on the right path with a patient guide that implements the algorithm from scratch in Python.
  • Binary Logistic Regression in R
    Whether you’re taking your first steps with logistic regression or looking for some hands-on practice for coding in R, Antoine Soetewey’s new article is the one-stop resource you don’t want to miss—it outlines when and how to use a (univariate and multivariate) binary logistic regression, as well as how to visualize and report results.
  • 12 RAG Pain Points and Proposed Solutions
    We end on a similar note to the one we started with: a comprehensive, practical guide on a timely technical topic—in this case, Wenqi Glantz’s troubleshooting post on common issues you might run into in your retrieval-augmented generation workflows, and how to move past them.

Not every great post has to be very long! We appreciate well-executed articles in all shapes and sizes, as our other weekly standouts show:

  • Learn how to make your charts more accessible by following along with Caroline Arnold’s tutorial on creating visualizations that colorblind people can decipher and analyze.
  • With economic uncertainty and layoffs regularly dominating tech news cycles, Tessa Xie offers a detailed roadmap for making yourself less likely to be affected.
  • How do context windows affect Transformer training and usage? Chris Hughes unpacks the stakes in a clear and concise explainer.
  • Catch up with some cutting-edge research at the intersection of AI and medical imaging: Lambert T Leong, PhD, presents a promising new project that aims to make health assessment more accessible.
  • If you just want to roll up your sleeves and code away, Stephanie Kirmer recently shared an easy-to-follow tutorial on putting ML models into production with AWS Lambda.

Thank you for supporting the work of our authors! If you’re feeling inspired to join their ranks, why not write your first post? We’d love to read it.

Until the next Variable,

TDS Team


Visualization, Math, Time Series, and More: Our Best Recent Deep Dives was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

