Hands-on Generative Adversarial Networks (GAN) for Signal Processing, with Python | by Piero Paialunga | Dec, 2022

Photo by Michael Dziedzic on Unsplash

In my research, I use Machine (Deep) Learning a lot. Two days ago, I was working on Generative Adversarial Networks (GANs) and exploring how I could apply them to my work.

After the code was ready, I started writing this article on Medium and I tried to find the best words to start with a proper introduction, as I always do.

I started asking myself questions like:

“Why should a reader read this? Why is what I am trying to communicate meaningful? What background do we need before reading this?”

Now, of course, I believe that the reader should read this because I think what I write is meaningful and interesting.

But the truth is that I simply love signal processing, and I love writing about it. This article is about the two things I love the most: signal processing and artificial intelligence. I put all my love, energy, and passion into them (I actually crossed an ocean to research them), and I hope that you will find this topic interesting.

As you might have guessed from the title, we will use Generative Adversarial Networks for Signal Processing. The game we will play is the following:

Imagine running an experiment. The setup of this experiment is built around a generator, and the output of this generator is a time series (a.k.a. a signal).

Image by author

Imagine that this experiment is expensive and takes a lot of energy and computational effort. Eventually, we want to stop running it. To do that, we need to replace our generator with a surrogate.

Image by author

That little salmon-pink brain that you see is our surrogate model. In particular, this surrogate is a Machine Learning model. As the name suggests, this Machine Learning model is a Generative Adversarial Network (GAN).

This article will go like this:

  1. Building our experiment: We will generate our controlled dataset and we will describe it.
  2. Defining our Machine Learning model: We will describe the specific features of our GAN model.
  3. Exploring the results: We will run our generative model and use our surrogate model to extract our signals.

I hope you are as excited as I am. Let’s do it!

Most of the signals that come out of an electrical/mechanical engineering setup are sinusoidal signals.*

*I wrote an article about it if you are interested! Here’s where you can find it.

This means that the output signal is something like y(t) = A·sin(ωt) + b:

Image by author

Where:

  • A is the amplitude of our signal
  • omega is the frequency
  • b is the bias

Actually, in a real-world experiment, we also have a noise component.
Now, there are multiple kinds of noise, and each has its own color (white noise, pink noise, blue noise, green noise…). One of the most typical is the so-called Gaussian white noise: a noise that lives at all frequencies and follows a Gaussian distribution.
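As a quick sanity check, Gaussian white noise is easy to sample with numpy: each sample is independent and drawn from a normal distribution. The seed and sample count below are arbitrary choices of mine.

```python
import numpy as np

# Gaussian white noise: independent samples from N(0, 1)
rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=1.0, size=10_000)

print(round(noise.mean(), 2), round(noise.std(), 2))  # both close to 0 and 1
```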

So the target signal looks more like this:

Image by author

Now, in practice:

  • The mean is usually 0
  • The standard deviation can vary, but it is safe to assume it is 1 and fixed for our experiment.
  • A constant can be placed in front of the noise term, acting as a sort of noise amplitude

So at the end of the day, it looks more like this:

Image by author

Now, this is our perfect world, our Pandora, like they’d say in Avatar 😄

In real life, things work differently.
We don’t get to fix one amplitude: the parameters of the signal change a lot from run to run.
For example, let’s say that:

  1. The amplitude can range from 0.1 to 10 with step = 0.1
  2. The bias can range from 0.1 to 10 with step = 0.1
  3. The frequency can range from 1 to 2 with step = 0.001
  4. The noise amplitude is fixed at 0.3 (the randomness of the noise lives in its probability distribution anyway)

If we want to incorporate all this randomness, we can do so using the following lines of code:
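A minimal numpy sketch of such a data generator, consistent with the ranges above. The signal length (300 samples) and the 10-second time window are assumptions of mine:

```python
import numpy as np

LENGHT_INPUT = 300  # number of samples per signal (the article's variable name)

def generate_real_samples(n):
    """Generate n noisy sinusoids with randomly drawn amplitude, bias, and frequency."""
    t = np.linspace(0, 10, LENGHT_INPUT)  # time axis (assumed 10 s window)
    X = np.zeros((n, LENGHT_INPUT))
    for i in range(n):
        A = np.random.choice(np.arange(0.1, 10.1, 0.1))        # amplitude in [0.1, 10], step 0.1
        b = np.random.choice(np.arange(0.1, 10.1, 0.1))        # bias in [0.1, 10], step 0.1
        omega = np.random.choice(np.arange(1.0, 2.001, 0.001))  # frequency in [1, 2], step 0.001
        noise = 0.3 * np.random.normal(0, 1, LENGHT_INPUT)      # fixed noise amplitude 0.3
        X[i] = A * np.sin(omega * t) + b + noise
    y = np.ones((n, 1))  # label 1 = "real", for the discriminator later on
    return X, y
```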

Here are some outputs:

Images by author, generated using the code above

At this point, the goal should be clear enough:

“Let’s say we don’t have the target function, how can we generate signals that look like they were generated by the source?”

Let’s start with the Machine Learning then 🤗

The Machine Learning model that we are using is the Generative Adversarial Network (GAN).

I really want this article to be about GANs for signal processing rather than a head-to-toe description of GANs, but I will briefly introduce them. DISCLAIMER: some people do it way better than me (big up to Joseph Rocca on this one: Understanding Generative Adversarial Networks (GANs)).

Let’s say that GANs are the models used for deepfakes.
They are generative models, as the name suggests, implemented by training a generative part and a discriminator.

The generative part tries to produce an output that is as close as possible to a real one. If that were all, there would be nothing different from a standard encoder-decoder. The “real deal” is the presence of the discriminative part.

The discriminative part is a classifier that tries to distinguish real instances from “fake” ones (generated by the generative model).

So the game is a competition between the generative model, which tries to build a fake object that looks like a training-data object, and the discriminative model, which tries to tell the training-data objects apart from the fake ones. This “game” is realized through a min-max loss function and an elegant yet simple algorithm from the beautiful mind of Ian J. Goodfellow (the Generative Adversarial Nets paper).
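For reference, the min-max objective from Goodfellow’s paper reads:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D tries to maximize this value (classify real and fake correctly), while the generator G tries to minimize it (fool D).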

Now, a very common variant of the GAN is the Conditional GAN. Conditional GANs are generative models that are conditioned on a certain input.
Let’s say the input is the string

“An adorable cat flying to the moon”

And the output is the image:

Image by author

In this article, the model is more naive than that, and the generative model is not conditioned on a specific input.
The input of this generative model is just noise, so the model tries to go from noise to a signal that could plausibly have been generated by the source.

The architecture of the generative model is the following:

Image by author

The architecture of the discriminative model is the following:

Image by author

The generative model is an LSTM model that takes as input a random noise vector (a three-dimensional vector) and outputs a 300-long vector that is, ideally, the desired signal:

Image by author

The discriminative model distinguishes between a real output (from the training data) and a fake one (generated by the generative model):

Image by author

The hands-on implementation of this GAN is the following:

Now, the length of the signal (the generator’s output and the discriminator’s input) is a parameter of our model:

LENGHT_INPUT = 300

And the dimension of the noise vector is the latent_dim parameter.
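A sketch of the two architectures in tensorflow.keras, following the description above. The layer sizes and activations are illustrative assumptions of mine, not the article’s exact values:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, LSTM, Reshape

LENGHT_INPUT = 300  # length of each signal (the article's variable name)
LATENT_DIM = 3      # dimension of the input noise vector

def define_generator(latent_dim=LATENT_DIM, n_outputs=LENGHT_INPUT):
    # Noise vector -> LSTM -> signal of length n_outputs
    return Sequential([
        Input(shape=(latent_dim,)),
        Reshape((latent_dim, 1)),   # LSTM expects a (timesteps, features) input
        LSTM(32, activation='tanh'),
        Dense(n_outputs, activation='linear'),
    ])

def define_discriminator(n_inputs=LENGHT_INPUT):
    # Signal -> probability that it is real
    model = Sequential([
        Input(shape=(n_inputs,)),
        Dense(64, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def define_gan(generator, discriminator):
    # Stack G on top of D; freeze D so only G is updated through this model
    discriminator.trainable = False
    model = Sequential([generator, discriminator])
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
```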

Now we have to generate our dataset. This means building a function that generates n real signals. We also have to generate n random noise inputs of a given dimensionality, and we have to build the code that generates fake signals from those n random noise vectors.
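The noise and fake-sample helpers might look like this hedged numpy sketch, where `generator` is any model exposing a Keras-style `predict`:

```python
import numpy as np

def generate_latent_points(latent_dim, n):
    # n noise vectors of dimension latent_dim, sampled from N(0, 1)
    return np.random.randn(n, latent_dim)

def generate_fake_samples(generator, latent_dim, n):
    # Push random noise through the generator; label the outputs 0 ("fake")
    x_input = generate_latent_points(latent_dim, n)
    X = generator.predict(x_input)
    y = np.zeros((n, 1))
    return X, y
```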

Last but not least, we have to build our train function.

This code will train our generative model. It will also show, every n_eval steps, the progress of the generative model by plotting the real and fake data (again, by fake we mean “generated by our model”).
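The training loop might look like the following sketch. The data helpers are included in compressed form so the snippet stands alone, and the three models are assumed to expose Keras-style `train_on_batch`/`predict`; hyperparameters are illustrative:

```python
import numpy as np

LENGHT_INPUT = 300

def generate_real_samples(n):
    # Noisy sinusoids with random amplitude/bias/frequency, labeled 1 ("real")
    t = np.linspace(0, 10, LENGHT_INPUT)
    A = np.random.uniform(0.1, 10, (n, 1))
    b = np.random.uniform(0.1, 10, (n, 1))
    omega = np.random.uniform(1, 2, (n, 1))
    X = A * np.sin(omega * t) + b + 0.3 * np.random.normal(size=(n, LENGHT_INPUT))
    return X, np.ones((n, 1))

def generate_latent_points(latent_dim, n):
    return np.random.randn(n, latent_dim)

def generate_fake_samples(g_model, latent_dim, n):
    X = g_model.predict(generate_latent_points(latent_dim, n))
    return X, np.zeros((n, 1))

def train(g_model, d_model, gan_model, latent_dim=3,
          n_epochs=1000, n_batch=128, n_eval=100):
    half = n_batch // 2
    for epoch in range(n_epochs):
        # discriminator step: half real samples, half generated ones
        d_model.train_on_batch(*generate_real_samples(half))
        d_model.train_on_batch(*generate_fake_samples(g_model, latent_dim, half))
        # generator step: push D(G(z)) toward the "real" label 1
        x_gan = generate_latent_points(latent_dim, n_batch)
        gan_model.train_on_batch(x_gan, np.ones((n_batch, 1)))
        if (epoch + 1) % n_eval == 0:
            pass  # plot real vs. generated signals here
```

In the full script, the `pass` would be replaced by a plotting routine that overlays real and generated signals every `n_eval` epochs.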

The whole script that

  • Generates the dataset
  • Builds the GAN model
  • Trains the GAN

is the following:

Let me show you some progress:

Image by author

Now let’s generate 100,000 random signals.

This is great. Imagine that each experiment costs you $0.50: you just “saved” $50k. Imagine that each experiment takes 1 minute: you just “saved” about 70 days. That is the purpose of using these GAN models in the end:
“to save time and effort”.
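The back-of-envelope arithmetic behind those numbers:

```python
# Savings from replacing 100,000 experiments with generated signals
n_signals = 100_000
dollars_saved = n_signals * 0.5         # $0.50 per experiment
days_saved = n_signals * 1 / (60 * 24)  # 1 minute per experiment -> days

print(dollars_saved, round(days_saved, 1))  # 50000.0 69.4 (roughly 70 days)
```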

Now let’s generate 100k real signals.

Let’s plot some results:

In this article we:

  1. Established that artificial intelligence and signal processing are awesome, so we decided to put them together.
  2. Made up a signal processing scenario with a noisy sine generator, where the sine can have different amplitudes, frequencies, and biases.
  3. Briefly described GAN models: what the generative part is, what the discriminative part is, and what the loss of the model is. The input of the generative model is a 3-dimensional noise vector; the output is a signal that looks like the training data.
  4. Trained the GAN model and generated some random signals.

The key part of this model is its generative ability, so the trained generative model can save us time, money, and energy. That is because, instead of running the experiment, you just have to press “run” in your Python environment 🚀

If you liked the article and you want to know more about machine learning, or you just want to ask me something, you can:

A. Follow me on LinkedIn, where I publish all my stories
B. Subscribe to my newsletter. It will keep you updated about new stories and give you the chance to text me to receive all the corrections or doubts you may have.
C. Become a referred member, so you won’t have any “maximum number of stories for the month” and you can read whatever I (and thousands of other Machine Learning and Data Science top writers) write about the newest technology available.


