Deploying Google Cloud Workstations with Terraform

Setting up a custom development environment on Google Cloud Workstations with Terraform

Having your development IDE in the cloud, where your applications already run, makes a lot of sense for speed, security, and flexibility.

Read – Why you should switch to remote development with Google Cloud Workstations?

What is Cloud Workstations by Google Cloud?

Cloud Workstations provides managed development environments on Google Cloud with built-in security and preconfigured yet customizable development environments. Instead of requiring your developers to install software and run setup scripts, you can create a workstation configuration that specifies your environment in a reproducible way.

Deploying Google Cloud Workstations with Terraform

Note: You need the Google beta provider for Terraform to deploy the workstation. Also, the service account used to deploy the workstation must have the workstations.workstationCreator role.
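
If the deploying service account does not already have that role, you can grant it up front. Here is a minimal sketch, assuming the terraform-deploy service account created later in this post (adjust the member to your own account):

# Hypothetical example: grant the Cloud Workstations Creator role to the
# service account that will run Terraform
resource "google_project_iam_member" "workstation-creator" {
  project = var.project_id
  role    = "roles/workstations.workstationCreator"
  member  = "serviceAccount:terraform-deploy@someproject.iam.gserviceaccount.com"
}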

Let's start by creating a provider file, provider.tf

terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
    google-beta = {
      source = "hashicorp/google-beta"
    }
  }
  required_version = ">= 0.13"
}
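
The provider blocks also need to know which project and region to use. Here is a minimal sketch, assuming the project_id and region variables defined below (you can instead supply these through environment variables):

provider "google" {
  project = var.project_id
  region  = var.region
}

provider "google-beta" {
  project = var.project_id
  region  = var.region
}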

variables.tf

variable "vpc_name" {
type = string
}
variable "subnetwork_name" {
type = string
}
variable "subnetwork_range" {
type = string
}
variable “project_id” {
type = string
}
variable “region” {
type = string
}
variable "artifact_repo"{
type = string
}

terraform.tfvars

vpc_name         = "somevpc"
subnetwork_name  = "somesubnet"
subnetwork_range = "some range"
project_id       = "someproject"
region           = "someregion"
artifact_repo    = "somerepo"

network.tf

resource "google_compute_network" “main-vpc" {
name = var.vpc_name
auto_create_subnetworks = false
project = var.project_id
}

resource "google_compute_subnetwork" “main-subnet” {
name = var.subnetwork_name
ip_cidr_range = var.subnetwork_range
region = var.region
project = var.project_id
network = google_compute_network.main-vpc.name
private_ip_google_access = true
}

Enabling the Cloud Workstations and Artifact Registry APIs

resource "google_project_service" "api" {
  for_each = toset([
    "workstations.googleapis.com",
    "artifactregistry.googleapis.com",
  ])
  service            = each.value
  disable_on_destroy = false
}

Now we will create firewall rules for the workstations. As per the official docs:

Cloud Workstations automatically applies the cloud-workstations-instance network tag to workstation VMs. You need to allow egress from VMs with this tag to the control plane over the TCP protocol on ports 980 and 443, and allow internal ingress between instances within the workstation subnet.

resource "google_compute_firewall" "workstation-egress" {
name = "workstation-internal-egress"
network = var.vpc_name
project = var.project_id

allow {
protocol = "tcp"
ports = ["980", "443"]
}

priority = "10"
direction = "EGRESS"
target_tags = ["cloud-workstations-instance"]
}

# workstation internal ingress
resource "google_compute_firewall" "workstation-ingress" {
  name    = "workstation-internal-ingress"
  network = var.vpc_name
  project = var.project_id

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["0-65535"]
  }

  allow {
    protocol = "udp"
    ports    = ["0-65535"]
  }

  source_ranges = [var.subnetwork_range]
  direction     = "INGRESS"
  priority      = 20
}

Next, we will create a service account that the workstation VMs will use to pull the container image from Artifact Registry.

# To pull image for workstation
resource "google_service_account" "image-pull" {
  project      = var.project_id
  account_id   = "image-pull"
  display_name = "Service Account - container image pull"
}

resource "google_project_iam_member" "workstation-image" {
  project = var.project_id
  role    = "roles/artifactregistry.reader"
  member  = "serviceAccount:${google_service_account.image-pull.email}"
}

Before we create a workstation, we need to create an Artifact Registry repository, build a custom image, and push the image to that repository.

artifact.tf

resource "google_artifact_registry_repository" "workstation-repo" {
project = var.project_id
location = var.region
repository_id = var.artifact_repo
format = "DOCKER"
}

You can build the container image in several ways:
– Build the image on your local machine and push it to Artifact Registry manually.
– Use the Docker provider for Terraform to build the image at apply time.
– Build it in your CI workflow and push it to Artifact Registry.

For simplicity, I will build the custom image manually and push it to the repository before deploying the workstation.

You can run terraform init, terraform plan, and terraform apply from your local machine by authenticating as a service account that has the Owner role on the project.

This is not recommended for production environments; there, always deploy through a CI workflow and follow security best practices.

# creating the service account
gcloud iam service-accounts create terraform-deploy \
--display-name="terraform-deploy-svc"

# granting the Owner role at the project level
gcloud projects add-iam-policy-binding someproject \
--member="serviceAccount:terraform-deploy@someproject.iam.gserviceaccount.com" \
--role="roles/owner"

# creating a key for the service account
gcloud iam service-accounts keys create terraform-deploy.json --iam-account \
terraform-deploy@someproject.iam.gserviceaccount.com

# authenticating as the service account
gcloud auth activate-service-account --key-file=terraform-deploy.json

# pointing Terraform's Google provider at the key (it reads Application Default Credentials)
export GOOGLE_APPLICATION_CREDENTIALS=terraform-deploy.json

# then run:
terraform init
terraform plan
terraform apply

Now that the boring part is done, let's get to the fun part: building the custom image and spinning up the workstations.

Dockerfile for the custom workstation image

We will use one of Google's predefined workstation images as the base image (the FROM line below uses the IntelliJ Ultimate image) and customize it as per our needs; in this case, we will configure it for Java development.

Here is the list of predefined container images for the workstation

FROM us-central1-docker.pkg.dev/cloud-workstations-images/predefined/intellij-ultimate:latest

# Install essential packages
RUN apt-get update && apt-get install -y \
curl \
wget \
gnupg2 \
software-properties-common \
unzip

# Install Java 1.8
RUN apt-get install -y openjdk-8-jdk

# Install Git
RUN apt-get install -y git

# Install Node.js 14.17.4 and npm 6.14.14
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN npm install -g npm@6.14.14

# Install PostgreSQL (psql)
RUN apt-get install -y postgresql-client

# Install Kubernetes CLI (kubectl)
RUN curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl && \
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \
rm kubectl

# Install Helm
RUN curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && \
chmod 700 get_helm.sh && \
./get_helm.sh && \
rm get_helm.sh

# Startup configuration: any custom script placed under /etc/workstation-startup.d/
# will run when the workstation starts

COPY custom-setup.sh /etc/workstation-startup.d/

Let’s build the image, tag it and push it.

# the image path will look like this:
# ${var.region}-docker.pkg.dev/${var.project_id}/${var.artifact_repo}
# e.g. someregion-docker.pkg.dev/someproject/somerepo/

# build the Docker image (run from the directory containing the Dockerfile)
docker build -t image .

#tagging image
docker tag image someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0

# authenticate with artifact repo
gcloud auth configure-docker someregion-docker.pkg.dev

#docker push
docker push someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0

Now that the custom image is stored in Artifact Registry, we will use it to deploy the workstations.

Use case – Let's say you have four developers who will use workstations. You will need four workstations, one assigned to each developer, and each developer should have access only to their own workstation. We will use Terraform locals to store their names and emails and use those while creating the workstations.

Let's understand what a workstation cluster and a workstation configuration are in Google Cloud.

A workstation cluster contains and manages a collection of workstations in a single cloud region and VPC network inside your project.

Workstation configurations act as templates for workstations. A workstation configuration defines details such as the workstation virtual machine (VM) instance type, persistent storage, container image, which IDE or code editor to use, and other settings.

Let’s deploy the workstation using a custom image.

We will create the workstation cluster, the workstation configuration, and the workstations, along with IAM policies.

workstation.tf

locals {
  network_id = "projects/${var.project_id}/global/networks/${var.vpc_name}"
  subnet_id  = "projects/${var.project_id}/regions/${var.region}/subnetworks/${var.subnetwork_name}"

  developers_email = [
    "dev1@company.com",
    "dev2@company.com",
    "dev3@company.com",
    "dev4@company.com",
  ]

  developers_name = [
    "dev1",
    "dev2",
    "dev3",
    "dev4",
  ]
}

# Creating the workstation cluster
resource "google_workstations_workstation_cluster" "default" {
  provider               = google-beta
  project                = var.project_id
  workstation_cluster_id = "workstation-terraform"
  network                = local.network_id
  subnetwork             = local.subnet_id
  location               = var.region
}

# Creating the workstation config
resource "google_workstations_workstation_config" "default" {
  provider               = google-beta
  workstation_config_id  = "workstation-config"
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  location               = var.region
  project                = var.project_id

  host {
    gce_instance {
      machine_type                = "e2-standard-4"
      boot_disk_size_gb           = 50
      disable_public_ip_addresses = false
      service_account             = google_service_account.image-pull.email
    }
  }

  container {
    image       = "someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0"
    working_dir = "/home"
  }

  persistent_directories {
    mount_path = "/home"
    gce_pd {
      size_gb        = 200
      disk_type      = "pd-ssd"
      reclaim_policy = "DELETE"
    }
  }
}

# workstation creation
resource "google_workstations_workstation" "default" {
  provider               = google-beta
  count                  = length(local.developers_email)
  workstation_id         = "workstation-${local.developers_name[count.index]}"
  workstation_config_id  = google_workstations_workstation_config.default.workstation_config_id
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  location               = var.region
  project                = var.project_id
}

# IAM permission (roles/workstations.user) so each developer can access only their own workstation
resource "google_workstations_workstation_iam_member" "member" {
  count                  = length(local.developers_email)
  provider               = google-beta
  project                = var.project_id
  location               = var.region
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  workstation_config_id  = google_workstations_workstation_config.default.workstation_config_id
  workstation_id         = google_workstations_workstation.default[count.index].workstation_id
  role                   = "roles/workstations.user"
  member                 = "user:${local.developers_email[count.index]}"
}
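
Optionally, you can output each developer's workstation hostname so they know where to connect. A small sketch, assuming the host attribute exported by google_workstations_workstation:

output "workstation_hosts" {
  # maps each developer name to the HTTPS host of their workstation
  value = {
    for idx, name in local.developers_name :
    name => google_workstations_workstation.default[idx].host
  }
}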

Voila, we are done.

What can we do better?

  • If you disable public IP addresses, you must set up Private Google Access or Cloud NAT for the workstation subnet.
  • To disable root privileges for anyone using the workstation, set the CLOUD_WORKSTATIONS_CONFIG_DISABLE_SUDO environment variable to true.
  • Use the Shielded VM and Confidential VM options (a partial sketch follows this list).
  • If you have compliance requirements, use customer-managed encryption keys.
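
Here is a rough sketch of how some of these options could look on the workstation configuration. The resource name "hardened" is hypothetical, and the shielded_instance_config and container env fields are assumptions based on the google-beta provider; there is also an encryption_key block for customer-managed keys. Verify against the provider documentation before using this.

# Hardened variant of the workstation config (sketch only, not a drop-in replacement)
resource "google_workstations_workstation_config" "hardened" {
  provider               = google-beta
  workstation_config_id  = "workstation-config-hardened"
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  location               = var.region
  project                = var.project_id

  host {
    gce_instance {
      machine_type      = "e2-standard-4"
      boot_disk_size_gb = 50
      # no public IPs: requires Private Google Access or Cloud NAT on the subnet
      disable_public_ip_addresses = true
      service_account             = google_service_account.image-pull.email

      # Shielded VM options
      shielded_instance_config {
        enable_secure_boot          = true
        enable_vtpm                 = true
        enable_integrity_monitoring = true
      }
    }
  }

  container {
    image       = "someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0"
    working_dir = "/home"
    # disable sudo/root for workstation users
    env = {
      CLOUD_WORKSTATIONS_CONFIG_DISABLE_SUDO = "true"
    }
  }
}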

Read the official docs for more on workstations.

Akhilesh Mishra

I am Akhilesh Mishra, a self-taught DevOps engineer with 11+ years of experience working on private and public cloud (GCP & AWS) technologies.

I also mentor DevOps aspirants in their journey by providing guided learning and mentorship.

Topmate: https://topmate.io/akhilesh_mishra/