Techno Blender
Digitally Yours.
Browsing Tag

PyTorch

How to boost PyTorch Dataset using memory-mapped files | by Tudor Surdoiu | Jul, 2022

This article will discuss the reasoning behind, and the steps of, implementing a PyTorch dataset that uses memory-mapped files.

Photo by Eléonore Kemmel on Unsplash

Introduction: When training a neural network, one of the most common speed-related bottlenecks is the data loading module. If we are bringing the data over the network, besides prefetching and caching there aren't many other easy optimizations we can apply. However, if the data is in local storage, we can optimize the file reading operations by combining…
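The idea the teaser describes can be sketched in a few lines: a minimal `Dataset` backed by a NumPy memory-mapped file, so samples are paged in from disk on access instead of loading the whole array into RAM. This is an illustrative sketch, not the article's actual code; the file name and shapes are made up.

```python
import numpy as np
import torch
from torch.utils.data import Dataset

# Create a toy data file to stand in for a large on-disk dataset.
N_SAMPLES, N_FEATURES = 1000, 16
data = np.random.rand(N_SAMPLES, N_FEATURES).astype(np.float32)
np.save("samples.npy", data)

class MemmapDataset(Dataset):
    def __init__(self, path):
        # mmap_mode="r" maps the file lazily: pages are read from disk
        # only when a sample is actually indexed.
        self.data = np.load(path, mmap_mode="r")

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Copy the single row out of the memmap before wrapping it in a tensor.
        return torch.from_numpy(np.array(self.data[idx]))

ds = MemmapDataset("samples.npy")
sample = ds[0]
```

A `DataLoader` can then wrap `ds` as usual; only the rows touched by each batch are read from disk.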

Implementing RepVGG in PyTorch. Make your CNN >100x faster | by Francesco Zuppichini | Jul, 2022

Photo by Alberto Restifo on Unsplash

Make your CNN >100x faster

Hello there! Today we'll see how to implement RepVGG in PyTorch, proposed in "RepVGG: Making VGG-style ConvNets Great Again". Code is here; an interactive version of this article can be downloaded from here. Let's get started! The paper proposes a new architecture that can be re-parameterized after training to make it faster on modern hardware. And by faster I mean lightning fast: this idea was used in Apple's MobileOne model.

Image by Xiaohan Ding, Xiangyu Zhang, Ningning Ma,…
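The core re-parameterization trick can be shown in miniature: at training time a block sums a 3x3 conv, a 1x1 conv, and an identity branch; for inference all three collapse into a single 3x3 conv. A toy sketch (the real RepVGG also folds BatchNorm into the kernels, omitted here for brevity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C = 8  # channels (in == out, stride 1, so the identity branch is valid)
conv3 = nn.Conv2d(C, C, 3, padding=1, bias=True)
conv1 = nn.Conv2d(C, C, 1, bias=True)

def multi_branch(x):
    # Training-time block: three parallel branches summed.
    return conv3(x) + conv1(x) + x

# Inference-time fusion: pad the 1x1 kernel to 3x3 and write the
# identity mapping as a 3x3 kernel with a 1 at the center.
fused = nn.Conv2d(C, C, 3, padding=1, bias=True)
w = conv3.weight.data.clone()
w += F.pad(conv1.weight.data, [1, 1, 1, 1])   # 1x1 -> center of a 3x3 kernel
ident = torch.zeros_like(w)
for i in range(C):
    ident[i, i, 1, 1] = 1.0                   # identity as a 3x3 kernel
fused.weight.data = w + ident
fused.bias.data = conv3.bias.data + conv1.bias.data

x = torch.randn(2, C, 5, 5)
```

After fusion the network is a plain stack of 3x3 convs, which is what makes it fast on modern hardware.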

Train a Neural Network to Detect Breast MRI Tumors with PyTorch | by Nick Konz | Jul, 2022

A practical tutorial for medical image analysis

An example breast MRI scan from our dataset.

Most research in computer vision with deep learning is conducted on common natural-image datasets such as MNIST, CIFAR-10, and ImageNet. However, an important application area of computer vision is medical image analysis, where deep learning has been used for tasks such as cancer detection, organ segmentation, and data harmonization, among many others. Medical image datasets, though, can often be more involved to "plug" into…

Distribute your PyTorch model in less than 20 lines of code | by Renato Sortino | Jul, 2022

A guide to making parallelization less painful

Photo by Nana Dua on Unsplash

When you approach deep learning for the first time, you learn that you can speed up training by moving the model and data to the GPU. That's definitely a significant improvement over training on the CPU, as you can now train your models and see results in far less time. Great, but suppose your model becomes too big (e.g. transformers) for a single GPU to handle a batch size greater than relatively small values such as 8. Or…
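The standard PyTorch answer the title alludes to is `DistributedDataParallel`. A minimal single-process sketch of the setup is below; in real multi-GPU training you would launch one process per GPU (e.g. with `torchrun`), use the `nccl` backend, and pass `device_ids`. Address and port values here are arbitrary placeholders.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process, CPU-only demo of the DDP setup ritual.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = nn.Linear(10, 2)      # any nn.Module
ddp_model = DDP(model)        # gradients are averaged across ranks on backward

out = ddp_model(torch.randn(4, 10))

dist.destroy_process_group()
```

With more than one process, each rank also gets its own data shard via `DistributedSampler`; the model code itself stays unchanged.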

Backpropagation — Chain Rule and PyTorch in Action | by Robert Kwiatkowski | Jul, 2022

A simple guide from theory to implementation in PyTorch.

Image by gerald on Pixabay

Modern businesses rely more and more on advances in innovative fields like artificial intelligence to deliver the best products and services to their customers. Many production AI systems are based on various neural networks, often trained on tremendous amounts of data. For development teams, one of the biggest challenges is how to train their models within a constrained time frame. There are various techniques and algorithms to do…
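The chain rule the title refers to is exactly what `autograd` applies during backpropagation. A small sketch: for y = (w·x + b)², the chain rule gives dy/dw = 2(w·x + b)·x and dy/db = 2(w·x + b), and PyTorch's `backward()` reproduces both.

```python
import torch

x = torch.tensor(3.0)
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

y = (w * x + b) ** 2   # y = (2*3 + 1)^2 = 49
y.backward()           # applies the chain rule backwards through the graph

# Hand-computed gradients via the chain rule:
manual_dw = 2 * (2.0 * 3.0 + 1.0) * 3.0   # 2 * 7 * 3 = 42
manual_db = 2 * (2.0 * 3.0 + 1.0)         # 2 * 7     = 14
```

`w.grad` and `b.grad` now hold exactly these values, which is the whole point: autograd mechanizes the chain rule so you never differentiate by hand.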

How to Build an Image-Captioning Model in Pytorch | by Saketh Kotamraju | Jun, 2022

A detailed step-by-step explanation of how to build an image-captioning model in PyTorch

Photo by Adam Dutton on Unsplash

In this article, I will explain how you can build an image-captioning model architecture using the PyTorch deep learning library. In addition to explaining the intuition behind the model architectures, I will also provide the PyTorch code for the models. Note that this article was written in June 2022, so earlier or future versions of PyTorch may be a little different, and the code in this article may not…
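The classic captioning architecture the article builds on is a CNN encoder feeding an RNN decoder. A heavily simplified sketch (all sizes are made up, and this is not the article's model): the image is encoded into one embedding, which is prepended to the word embeddings and run through an LSTM that predicts the next word at each step.

```python
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN = 100, 32, 64  # illustrative sizes

class TinyCaptioner(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: conv features pooled into a single image embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, EMBED),
        )
        # Decoder: the image embedding seeds an LSTM over word embeddings.
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, images, captions):
        img = self.encoder(images).unsqueeze(1)   # (B, 1, EMBED)
        words = self.embed(captions)              # (B, T, EMBED)
        seq = torch.cat([img, words], dim=1)      # image first, then words
        out, _ = self.lstm(seq)
        return self.head(out)                     # (B, T+1, VOCAB) logits

model = TinyCaptioner()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, VOCAB, (2, 5)))
```

Training would use teacher forcing with a cross-entropy loss over these next-word logits; inference decodes greedily or with beam search.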

Leveling up Training: NVTabular and PyTorch Lightning | by Dylan Valerio | Jun, 2022

Training a wide-and-deep recommender model on MovieLens 25M

NVTabular is a feature engineering framework designed to work with NVIDIA Merlin. It can process the large datasets typical of production recommender setups. I tried to work with NVIDIA Merlin on free instances, but the recommended approach seems to be the only way forward. Still, I wanted to use NVTabular, since the value of using the GPU for data engineering and data loading is very attractive. In this post, I'm going to use NVTabular with PyTorch Lightning to…
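The "wide and deep" architecture named in the title combines a linear path over sparse IDs (memorization) with an MLP over embeddings (generalization). A bare-bones plain-PyTorch sketch of that idea — not the post's NVTabular/Lightning code, and with illustrative sizes rather than MovieLens 25M's:

```python
import torch
import torch.nn as nn

N_USERS, N_ITEMS, DIM = 1000, 500, 16  # made-up cardinalities

class WideAndDeep(nn.Module):
    def __init__(self):
        super().__init__()
        # Wide path: a direct learned scalar score per user/item ID.
        self.wide_user = nn.Embedding(N_USERS, 1)
        self.wide_item = nn.Embedding(N_ITEMS, 1)
        # Deep path: dense embeddings fed through an MLP.
        self.user_emb = nn.Embedding(N_USERS, DIM)
        self.item_emb = nn.Embedding(N_ITEMS, DIM)
        self.mlp = nn.Sequential(
            nn.Linear(2 * DIM, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, users, items):
        wide = self.wide_user(users) + self.wide_item(items)
        deep = self.mlp(torch.cat(
            [self.user_emb(users), self.item_emb(items)], dim=1))
        # Sum both paths and squash to an interaction probability.
        return torch.sigmoid(wide + deep).squeeze(1)

model = WideAndDeep()
scores = model(torch.tensor([0, 1]), torch.tensor([10, 20]))
```

In the post's setup, NVTabular would produce the categorified ID columns and Lightning would wrap the training loop around a model of this shape.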

Installing PyTorch on Apple M1 chip with GPU Acceleration | by Nikos Kafritsas | Jun, 2022

It finally arrived!

Photo by Ash Edmonds on Unsplash

The trajectory of deep learning support for the macOS community has been amazing so far. Starting with the M1 devices, Apple introduced a built-in graphics processor that enables GPU acceleration. Hence, M1 MacBooks became suitable for deep learning tasks. No more Colab for macOS data scientists! Next on the agenda was compatibility with the popular ML frameworks. TensorFlow was the first framework to become available on Apple Silicon devices. Using the Metal plugin,…
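Once the accelerated build is installed (PyTorch 1.12+), using the M1 GPU is a one-line device switch via the `"mps"` backend. A small sketch that falls back to CPU on machines without Metal support:

```python
import torch

# Pick the Apple-GPU ("mps") backend when available, else CPU.
if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Tensors (and models, via .to(device)) then live on the chosen device.
x = torch.ones(3, 3, device=device)
y = (x * 2).sum()
```

Everything else in a training script stays identical; only the device string changes.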

PyTorch vs. TensorFlow for Transformer-Based NLP Applications | by Mathieu Lemay | Jun, 2022

Deployment Considerations Should Be the Priority When Using BERT-Based Models

Photo by Pixabay from Pexels.

TL;DR: BERT is an incredible advancement in NLP. Both major neural network frameworks have successfully and fully implemented BERT, especially with the support of Hugging Face. However, although at first glance TensorFlow is easier to prototype with and deploy from, PyTorch seems to have advantages when it comes to quantization and to some GPU deployments. This should be taken into consideration when kicking off a…
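The quantization advantage mentioned in the TL;DR typically means PyTorch's post-training dynamic quantization, which converts `Linear` layers (the bulk of a BERT-style model) to int8 weights. A sketch on a toy model rather than an actual BERT, to keep it self-contained:

```python
import torch
import torch.nn as nn

# Stand-in for a transformer's Linear-heavy layers.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic quantization: int8 weights, activations quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
out = quantized(x)
```

On a real BERT the same one-call recipe shrinks the checkpoint substantially and speeds up CPU inference, which is exactly the kind of deployment concern the article argues should come first.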

Binary Image Classification in PyTorch | by Marcello Politi | May, 2022

Photo by Clément Hélardot on Unsplash

Train a convolutional neural network adopting a transfer learning approach

I personally approached deep learning using TensorFlow, which I immediately found very easy and intuitive. Many books also use this framework as a reference, such as Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. I then noticed that PyTorch is often used in research, in both academia and industry. So I started to implement simple projects that I had already developed in TensorFlow, using…