
Build a Docker Image for Jupyter Notebooks and run on Cloud’s VertexAI | by Jesko Rehberg | Dec, 2022



Ship your apps with dockerized containers (image by Swanson Chan)

Have you successfully programmed a Python application locally and now want to bring it to the cloud? This is an easy-to-follow, step-by-step tutorial on how to turn a Python script into a Docker image and push it to Google Cloud’s Artifact Registry. In the Google Cloud Platform, this Docker image can then be called automatically in VertexAI via Pub/Sub. This tutorial was created on a Windows computer, but for Linux or Mac the essential steps are the same. By the end of this article you will be able to create your own Docker image on your operating system and automatically trigger Python scripts in VertexAI.

What you will go through:

  • Installation of “Docker Desktop”
  • Docker Image and Container: build, tag, run, and push to GCloud
  • Automatically run any Python script in GCloud’s VertexAI via Bash, Scheduler, Pub/Sub, and Function

You have the following files in this directory structure:

The Sourcecode folder contains the Jupyter Notebook “dockerVertexai” and “input.csv” file:

The Jupyter Notebook itself is only a tiny Python application:

import pandas as pd

df = pd.read_csv('input.csv')           # read the input file
df['Output'] = 'Now With Output added'  # add a new column
df.to_csv('output.csv')                 # export the result to a new file

All this script does is read a CSV file called “input.csv”:

..add a new column “Output” to that dataframe, and export the result as a new file, “output.csv”:

Even though this might not be the most impressive script in the world, it serves very well to easily confirm functionality in VertexAI later. Hopefully it also provides a helpful extension to the usual “hello world Flask Docker” examples. Simple as it is, it shows how to work not only with the source code files but also with additional input and output files in the cloud’s notebook instances. With this knowledge, there are almost no limits to what you can do with Python in the cloud.
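
If you want to double-check the result locally before moving to the cloud, a quick sanity check (run in the same folder after executing the notebook) could look like this:

import pandas as pd

# Read the generated file back and show the first rows, including the new "Output" column
print(pd.read_csv('output.csv').head())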

If you have not done so already, you will need to download Docker Desktop before you can start building the Docker image:

Once the (in this case Windows) version is downloaded, you can start Docker Desktop simply by opening the application:

Wait until Docker Desktop has launched (image by author)

For our purpose, nothing more is needed than starting Docker Desktop.

The next preparation step is the Dockerfile.

Take a look at the Dockerfile first:

FROM gcr.io/deeplearning-platform-release/base-cpu:latest

# RUN pip install torch # you can pip install any needed package like this

RUN mkdir -p /Sourcecode

COPY /Sourcecode /Sourcecode

The Dockerfile defines which base image shall be used when the instance runs later on. In this case you go with Google’s deeplearning-platform-release (base-cpu) image.

If any additional Python packages are necessary, they could be pip installed here as well (e.g. RUN pip install python-dotenv). Since you are only going to use Pandas (which already comes with the deeplearning-platform-release image), there is no need to pip install it.

The command RUN mkdir -p /Sourcecode creates a new folder “Sourcecode” inside the image, so it is already in place as soon as the instance is up and running later on.

The last command in the Dockerfile copies all files from the Sourcecode folder (the folder on your local machine from which you are going to build the Docker image shortly) into the newly created Sourcecode folder within the image (you will see this again in a few steps).

In addition, you can optionally add a .dockerignore file. It excludes files and folders that are not required for the execution of the Python code. This keeps the build context, and thus the image, as small as possible and avoids copying unneeded files unnecessarily.

__pycache__
Sourcecode/temp

The .dockerignore file does not matter for the remaining steps of this tutorial.

To build Docker images you can also use a requirements.txt file. You will not use this approach in this post, but you might want to read this article in case you are interested (link).

As the very last preparation (before you can start the Docker build), you need an existing project in the Google Cloud Platform.

I assume that this is already the case for you. If you don’t know how to do that, you might find this a useful link.

My tip at this point is to copy the Project ID, because you will need it many times in this tutorial. The Project ID in GCP for this example is: socialsentiment-xyz

The gcloud (Google Cloud) command-line interface is the primary CLI tool to create and manage Google Cloud resources. It performs many common platform tasks from the command line and can also be used in scripts and other automation. If you haven’t installed the CLI yet, you can find instructions here.

You will first initialize a configuration to be able to work with your Google Cloud Project within the command line:

gcloud init

Sign in with your existing account:

..and finally select the appropriate project number (if any).

By default, Artifact Registry isn’t enabled for new projects, so you must first enable it (otherwise you cannot publish to the GCloud Artifact Registry).

To enable Artifact Registry, execute the following command:

gcloud services enable artifactregistry.googleapis.com

Then configure Docker authentication for the repository host name you are going to use, for example europe-west3-docker.pkg.dev:

gcloud auth configure-docker europe-west3-docker.pkg.dev

Nice, you have finished all preparations, so you can finally start the Docker build.

In your terminal, move to the folder where your local files are stored and enter:

docker build --tag python-docker .

Please note the blank space before the dot.

In case you receive an error message like “Docker daemon is not running”, this could be because you forgot to start the Docker Desktop application. Simply start Docker Desktop (as mentioned at the beginning) and re-run the docker build command afterwards.

After the build..:

..is finished:

There will be no “run pip install” step if you did not add one in the Dockerfile (image by author)

..the Docker image is built and ready. You can get additional assurance of this by taking a look at the Docker Desktop application:

You can also cross-check this in your terminal:

docker images

You will need to tag the Image ID in the next step (image by author)

Copy the Image ID from the docker images output above; you will use it in the docker tag command. Second, you need the target path for your Google Cloud project. This consists of the repository host (europe-west3-docker.pkg.dev), your Google Cloud Project ID (socialsentiment-xyz), and the repository name (from your Artifact Registry settings):

Replace socialsentiment-blank (socialsentiment-xyz) with your project id (image by author)

Even though this step is not of further relevance here, it is briefly mentioned for the sake of completeness: simply use docker run to run your image as a container locally:


docker run python-docker

Apologies for another cross-reference at this point: so far we have not created an Artifact Registry repository in the Google Cloud Platform, but this is essential before you can push the Docker image. Luckily, it is child’s play:

You must create a repository to upload artifacts. Each repository can contain artifacts for a supported format, in your case Docker.

I called the repository “tweetimentrep” (image by author)

Choose a region that suits you. In my case that would be Frankfurt:

Google recommends Artifact Registry over Container Registry, that’s why we go with it (image by author)

Note: before proceeding, make sure you have really created the repository in your GCloud Artifact Registry:

You can only tag the python-docker image if the repository exists (image by author)

Sorry for the inconsistency: wherever tweetiment-xyz is referred to in the following, the same project socialsentiment-xyz is meant.

Now go ahead and tag with your settings accordingly:

docker tag 4f0eac4b3b65 europe-west3-docker.pkg.dev/tweetiment-xyz/tweetimentrep/python-docker

That’s how easily the Docker image is tagged. Now you can push it into GCP’s Artifact Registry:

docker push europe-west3-docker.pkg.dev/tweetiment-xyz/tweetimentrep/python-docker

In case you receive a message similar to this:

..that only means you have to activate the Artifact Registry API (and do not forget to give the repository a name, otherwise you will not be able to tag and push your Docker image correctly).

Storing and managing artifacts in a scalable and integrated way (image by author)

Now you can re-run the push command. If the API is still not active, just wait a few minutes until it is.

After the push has finished:

You will be able to see it in your Artifact Registry:

Great, you have transferred your Python code along with its folder structure and content to Google Cloud as a Docker image! This means you can create a notebook in Google Cloud where you can manually run the Jupyter Notebook (or the Python .py code).

It is helpful to understand how to manually create a notebook instance. If you don’t know how, this link should help. But it is much more exciting to learn how to run such an instance automatically. To do this, first look at how to automatically run the Python script in the VertexAI instance once the instance is created. For this purpose, use a bash script (a .sh file):

#!/bin/bash
# copy the sourcecode files from the docker container to the vm directory
sudo docker cp -a payload-container:/Sourcecode/ /home/jupyter
sudo -i
cd /home/jupyter/Sourcecode
# Execute the notebook
ipython -c "%run dockerVertexai.ipynb"

Now upload the startup script (the .sh file) into your Cloud Storage bucket:

Cloud Storage is a managed service for storing (un)structured data (image by author)
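
If you prefer to upload the script programmatically instead of through the console, a minimal sketch using the google-cloud-storage client could look like this. It assumes the local file is named default-startup-script.sh (matching the gs:// path used later in this tutorial) and that the bucket socialsenti already exists:

from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket("socialsenti")  # assumed: the bucket used later in this tutorial
blob = bucket.blob("default-startup-script.sh")
blob.upload_from_filename("default-startup-script.sh")  # the local .sh file shown above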

Go to VertexAI and activate the VertexAI API and the Notebooks API, if you have not already done so.

Within the Workbench, click on “New Notebook” and choose “Customize” as the first option from the top:

Within the next form, select your container image from your artifact registry.

Usually you will choose the most recent version (image by author)

Also don’t forget to select a script to run after creation. This would be your .sh file, which you just saved in your bucket some minutes ago:

And now you are good to go:

You are now ready to create the Jupyter Lab, which also enables Jupyter Notebooks (image by author)

Thanks to the startup script, the Jupyter Notebook was automatically executed when the JupyterLab instance booted. You can confirm that the script ran by the existence of output.csv (which did not exist before):

Voila, the output.csv has been generated automatically (image by author)

Fine, the script was executed automatically and an output file was created. Unfortunately, this output file is not persisted, so it would be better to save it in Cloud Storage, namely in a bucket created before.

from google.cloud import storage  # Imports the Google Cloud client library

storage_client = storage.Client()  # Instantiates a client
BUCKET_NAME = 'socialsenti'
blob_name = 'output.csv'
bucket = storage_client.bucket(BUCKET_NAME)
blob = bucket.blob(blob_name)
with blob.open("w") as f:  # write a short confirmation text into the blob
    f.write("Output file has been saved into your bucket")
with blob.open("r") as f:  # read it back to verify
    print(f.read())

Add this into a new cell of your Jupyter Notebook and you will see that the output file will be saved in your bucket afterwards.

Save results into Storage if you want to keep them (image by author)
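
Note that the snippet above writes a short confirmation string into the blob rather than the file itself. If you want to persist the actual output.csv that the notebook generated, a minimal sketch (assuming the same bucket and that the file sits in the notebook’s working directory) could be:

from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket('socialsenti')
blob = bucket.blob('output.csv')
blob.upload_from_filename('output.csv')  # uploads the locally generated file into the bucket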

That’s better, but there is still one possible optimization: until now the instance keeps running even after the result (the output file) has been created. So you need a way to shut down the virtual machine (ultimately, VertexAI just uses Google Compute Engine, GCE) at the end of the script, so that you don’t incur further stand-by costs.
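
One way to do that, sketched here under the assumption that the notebook’s service account is allowed to manage Notebooks API resources and that the instance path matches the one created later in this tutorial, is to call the Notebooks API stop method in the very last cell of the notebook:

from google.cloud import notebooks_v1

client = notebooks_v1.NotebookServiceClient()
# Assumed resource path: adjust project, location and instance name to your setup
request = notebooks_v1.StopInstanceRequest(
    name="projects/tweetiment-xyz/locations/us-central1-a/instances/notetweeti",
)
op = client.stop_instance(request=request)
op.result()  # waits until the instance has been stopped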

Now let’s take the next step towards automation. Instead of creating the instance manually, you will now create it with a command from the terminal:

gcloud notebooks instances create instancetweetiment --container-repository=europe-west3-docker.pkg.dev/tweetiment-xyz/tweetimentrep/python-docker --container-tag=latest --machine-type=n1-standard-4 --location=us-central1-b --post-startup-script="gs://socialsenti/default-startup-script.sh"

Or, if you prefer, you can run it directly from a Jupyter notebook.

from google.cloud import notebooks_v1
from google.cloud.notebooks_v1.types import Instance, ContainerImage

client = notebooks_v1.NotebookServiceClient()
notebook_instance = Instance(
    container_image=ContainerImage(repository="europe-west3-docker.pkg.dev/tweetiment-xyz/tweetimentrep/python-docker"),
    machine_type="n1-standard-8",
    post_startup_script="gs://socialsenti/default-startup-script.sh",
)
parent = "projects/tweetiment-xyz/locations/us-central1-a"
request = notebooks_v1.CreateInstanceRequest(parent=parent, instance_id="notetweeti", instance=notebook_instance)
op = client.create_instance(request=request)
op.result()

Just in case you get an error due to missing authentication when you run this for the first time, run the following command. It will create the credentials file:

gcloud auth application-default login

This credentials file will usually be stored under this path (on Windows):

%APPDATA%\gcloud\application_default_credentials.json
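
To quickly verify that these Application Default Credentials are picked up by the Python client libraries, a minimal check could be:

import google.auth

# Loads the Application Default Credentials created by the gcloud command above
credentials, project_id = google.auth.default()
print(project_id)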

Now you can reach GCP programmatically. If you have created a notebook (no matter whether manually or programmatically), you can read its details with this Python code; the instance name at the end of the resource path is the name of the notebook itself. This excerpt is from the GCP documentation:

from google.cloud import notebooks_v1

def sample_get_instance():
    # Create a client
    client = notebooks_v1.NotebookServiceClient()
    # Initialize request argument(s)
    request = notebooks_v1.GetInstanceRequest(
        name="projects/tweetiment-xyz/locations/us-central1-a/instances/test",
    )
    # Make the request
    response = client.get_instance(request=request)
    # Handle the response
    print(response)

sample_get_instance()
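
If you just want an overview of all notebook instances and their current state (for example to see whether an instance is still provisioning or already active), a minimal sketch, assuming the same project and location as above, could be:

from google.cloud import notebooks_v1

client = notebooks_v1.NotebookServiceClient()
# List all notebook instances in the assumed project/location and print their state
request = notebooks_v1.ListInstancesRequest(
    parent="projects/tweetiment-xyz/locations/us-central1-a",
)
for instance in client.list_instances(request=request):
    print(instance.name, instance.state)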

If you want to schedule a recurring Python script, you can use Google’s Cloud Scheduler in conjunction with Cloud Functions and Pub/Sub.

Select Pub/Sub as Target Type and an appropriate Pub/Sub topic:

You do not need to enter a message body (image by author)

You can wait until the scheduled time is reached and the scheduler becomes active, or you can start it manually at any time to test it.
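
If you prefer to test the Pub/Sub part programmatically, a minimal sketch using the google-cloud-pubsub client publishes a message to the topic directly. The topic name tweeti-note-start-topic is only a placeholder for whichever topic you selected above:

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Placeholder topic name: replace it with the Pub/Sub topic you chose for the scheduler
topic_path = publisher.topic_path("tweetiment-xyz", "tweeti-note-start-topic")
future = publisher.publish(topic_path, b"TweetiNoteStartTopic")
print(future.result())  # prints the ID of the published message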

Schedulers are nice. But sometimes you also want the Python script to be executed as soon as an action has been performed on a web page. For this purpose, Cloud Functions are a good choice.

Select Pub/Sub as your function’s trigger (image by author)

Select Python as the runtime and enter e.g. “TweetiNoteStartTopic” as the entry point. That entry point must also be the name of your function in main.py. In the end, the Cloud Function looks like this:

In the requirements.txt you can place all dependencies as needed:

# Function dependencies, for example:
google-cloud-notebooks>=1.4.4
google-cloud>=0.34.0

And your main.py could be similar to this:

import base64  # part of the default Pub/Sub function template (not used further here)
from google.cloud.notebooks_v1.types import Instance, ContainerImage
from google.cloud import notebooks_v1

def TweetiNoteStartTopic(event, context):
    client = notebooks_v1.NotebookServiceClient()
    notebook_instance = Instance(
        container_image=ContainerImage(repository="europe-west3-docker.pkg.dev/tweetiment-xyz/tweetimentrep/python-docker"),
        machine_type="n1-standard-8",
        post_startup_script="gs://socialsenti/default-startup-script.sh",
    )
    parent = "projects/tweetiment-xyz/locations/us-central1-a"
    request = notebooks_v1.CreateInstanceRequest(parent=parent, instance_id="notetweeti", instance=notebook_instance)
    op = client.create_instance(request=request)
    op.result()
    print("finished")

Note that you can also trigger your Cloud Function at any time. Just go to the test tab and enter this JSON:

{"data":"TweetiNoteStartTopic"}

You can now programmatically run any Python script in the cloud: either directly via a Cloud Function as a pure Python script, or as a Docker container on a virtual machine. It is VertexAI that lets you run Jupyter Notebooks in a JupyterLab environment, which itself runs on virtual machines on Google Compute Engine (GCE). Thanks to the Google hardware, which you can set programmatically in VertexAI, there are almost no limits. Or did you ever work with 96 vCPUs and 360 GB of RAM before? I think it is now legitimate to wear your Google Cloud shirt with pride 🙂

Many thanks for reading! I hope this article is helpful for you. Feel free to connect with me on LinkedIn, Twitter or Workrooms.



