Schedule GCP VMs to start or stop
For some reason, one cannot natively schedule VMs on GCP to start and stop.
If you run your development and test environments in the cloud, you quickly end up with a lot of infrastructure that sits idle most of the time.
There are professional tools out there that address this problem, but there are also a few things you can do on your own.
I decided to write a little proof-of-concept script that can start or stop VMs based on the label each VM carries. The script only looks at Compute Engine VMs, not App Engine, Kubernetes, databases, and so on.
The script runs from crontab on a micro VM (free tier) with a service account; any project that wants its VMs controlled by the script simply grants that service account access.
I have wrapped the gcloud tool in Python. (The APIs are native, so really I should have used the APIs directly.)
Still, "gcloud" is a very useful tool, and among the things it can do are:
- Listing all projects
- Listing VMs, with metadata
- Stopping and starting VMs
To keep things simple, you can add labels to your VMs. What I have used for my PoC script is
["run-24-7", "run-24-5", "run-16-5"]
run-24-7 means run all hours, all days.
run-24-5 means run all hours, weekdays only.
run-16-5 means run 16 hours, weekdays only.
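As an illustration, the label-to-schedule decision could be sketched like this. This is a hypothetical helper, not part of the original script, and since the post does not say which 16 hours run-16-5 covers, the 06:00-22:00 window is an assumption:

```python
from datetime import datetime

def should_run(label, now):
    """Decide whether a VM with the given schedule label should be running
    at the given time. The 06:00-22:00 window for run-16-5 is an assumption."""
    if label == "run-24-7":
        return True
    weekday = now.weekday() < 5          # Monday=0 .. Friday=4
    if label == "run-24-5":
        return weekday
    if label == "run-16-5":
        return weekday and 6 <= now.hour < 22
    return True                          # unknown labels: leave the VM alone
```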
A simple script would be:
List all projects:
import json
import subprocess

# The snippets below are methods on a class that holds self.project
# and self.dry_run.
def program_list_all_projects(self):
    p_args = ["gcloud", "projects", "list", "--format", "json",
              "--quiet"]
    o = subprocess.check_output(p_args)
    return json.loads(o)
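As a quick sanity check on the parsed output, the JSON from `gcloud projects list` can be reduced to plain project IDs (the `projectId` field is part of gcloud's JSON output):

```python
def project_ids(projects):
    """Extract the project IDs from the parsed `gcloud projects list` JSON."""
    return [p["projectId"] for p in projects]
```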
For each project, list all VMs:
def program_list_all_compute(self):
    project_tag = "--project={}".format(self.project)
    p_args = ["gcloud", "compute", "instances", "list", "--format",
              "json", project_tag, "--quiet"]
    o = subprocess.check_output(p_args)
    return json.loads(o)
For each VM, start or stop it according to its label:
def program_stop_vm(self, instance_name, instance_zone):
    project_tag = "--project={}".format(self.project)
    p_args = ["gcloud", "compute", "instances", "stop", "--format",
              "json", instance_name, project_tag,
              "--zone={}".format(instance_zone), "--quiet"]
    if self.dry_run:
        return json.loads("[]")
    o = subprocess.check_output(p_args)
    return json.loads(o)

def program_start_vm(self, instance_name, instance_zone):
    project_tag = "--project={}".format(self.project)
    p_args = ["gcloud", "compute", "instances", "start", "--format",
              "json", instance_name, project_tag,
              "--zone={}".format(instance_zone), "--quiet"]
    if self.dry_run:
        return json.loads("[]")
    o = subprocess.check_output(p_args)
    return json.loads(o)
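Tying the steps together, a hypothetical driver loop (not from the original post) could first compute which VMs disagree with their schedule and then call the start/stop methods on each. The helper below assumes the schedule is the label key and that `gcloud compute instances list` returns `name`, `status`, `labels`, and a zone URL, which it does in its JSON output:

```python
SCHEDULE_LABELS = ("run-24-7", "run-24-5", "run-16-5")

def plan_actions(instances, should_run):
    """Return (name, zone, action) tuples for VMs whose state disagrees with
    their schedule label. VMs without a schedule label are left alone.
    `should_run` is any callable mapping a label to True/False."""
    actions = []
    for vm in instances:
        labels = vm.get("labels", {})
        label = next((l for l in SCHEDULE_LABELS if l in labels), None)
        if label is None:
            continue
        zone = vm["zone"].rsplit("/", 1)[-1]   # gcloud returns the zone as a URL
        if should_run(label) and vm["status"] == "TERMINATED":
            actions.append((vm["name"], zone, "start"))
        elif not should_run(label) and vm["status"] == "RUNNING":
            actions.append((vm["name"], zone, "stop"))
    return actions
```

The cron job would then call `program_start_vm` or `program_stop_vm` for each planned action.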
I also have a script that collects the status of all visible VMs and logs it to BigQuery.
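That logging step could be sketched as follows. This is an assumption about the shape of the data, not the original script: the instance list is flattened into timestamped rows that could then be streamed to BigQuery, for example as newline-delimited JSON piped to `bq insert dataset.table`. The field names are made up for illustration:

```python
import datetime

def status_rows(project, instances, now=None):
    """Flatten parsed `gcloud compute instances list` JSON into rows suitable
    for logging. Field names (ts/project/name/status) are illustrative."""
    now = now or datetime.datetime.now(datetime.timezone.utc).isoformat()
    return [
        {"ts": now, "project": project, "name": vm["name"], "status": vm["status"]}
        for vm in instances
    ]
```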
Hopefully, this gives you some ideas on how to schedule your own VMs; it is not really that hard.
Or you can look at this project on GitHub, which looks very promising and probably has had more effort put into it than my few hours of scripting: https://github.com/doitintl/zorya