Deploy Secure Google Cloud VMs with Terraform and GitHub Actions in 2025
Automate VM provisioning, networking, and CI/CD pipelines using Infrastructure as Code best practices

Part 2 of my comprehensive Terraform on Google Cloud series – Building on your VPC foundation with compute resources and automation
Ready to automate your VM deployments like a DevOps pro? This tutorial builds directly on our Part 1 foundation, adding secure Compute Engine instances with automated GitHub Actions workflows. You’ll master firewall rules, Cloud NAT setup, and production-ready CI/CD patterns.
By the end of this guide, you’ll have a complete automated pipeline that deploys VMs to Google Cloud every time you push code to your GitHub repository.
Perfect for teams wanting to eliminate manual infrastructure deployments and embrace true DevOps automation.
What You’ll Learn in This Guide
- Secure VM provisioning with proper service accounts and networking
- Firewall rule management for SSH access and internet connectivity
- Cloud NAT configuration for private VM internet access
- GitHub Actions automation with service account authentication
- Production CI/CD patterns for infrastructure deployment
Prerequisites
Before starting this tutorial, ensure you have:
✅ Completed Part 1 of this series (VPC and storage setup)
✅ GitHub account with repository access
✅ Git installed locally with basic knowledge
✅ GitHub personal access token created
✅ Same GCP project from Part 1 with billing enabled
Understanding Google Cloud Compute Components
What are Firewall Rules in GCP?
Firewall rules in Google Cloud are security policies that control inbound and outbound traffic to your VM instances and other resources within a VPC network. They work at the network level and are stateful – meaning return traffic is automatically allowed.
Key concepts:
- Direction: INGRESS (incoming) or EGRESS (outgoing) traffic
- Priority: Lower numbers = higher priority (0-65535; default 1000)
- Targets: Which resources the rule applies to (tags, service accounts)
- Sources/Destinations: Where traffic can come from or go to
- Protocols and Ports: What type of traffic is allowed
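To illustrate how priority works, consider a hypothetical pair of rules (names and network are placeholders, not part of this tutorial's config): the deny rule at priority 900 is evaluated before the allow rule at priority 1000, so SSH from 203.0.113.0/24 is blocked even though the broader rule would otherwise permit it.

```hcl
# Hypothetical example: lower priority number wins.
resource "google_compute_firewall" "deny_ssh_from_range" {
  name      = "deny-ssh-example"
  network   = "my-vpc" # placeholder network name
  direction = "INGRESS"
  priority  = 900

  deny {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["203.0.113.0/24"]
}

resource "google_compute_firewall" "allow_ssh_broad" {
  name      = "allow-ssh-example"
  network   = "my-vpc"
  direction = "INGRESS"
  priority  = 1000

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
}
```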
What is Cloud NAT?
Cloud NAT (Network Address Translation) provides outbound internet connectivity for VM instances that only have private IP addresses. This is essential for:
- Security: VMs don’t need public IPs to access the internet
- Cost optimization: Fewer external IP addresses needed
- Compliance: Keeps resources private while allowing necessary internet access
What is Cloud Router?
Cloud Router is a networking service that provides dynamic routing capabilities. It’s required for Cloud NAT and enables:
- Dynamic routing between VPCs and on-premises networks
- BGP support for advanced networking scenarios
- NAT gateway functionality when paired with Cloud NAT
Extending Your Terraform Configuration
Let’s add the new variables and resources to your existing Terraform setup from Part 1.
Add VM Variables (variables.tf)
Add these variables to your existing variables.tf file:
```hcl
variable "zone" {
  type        = string
  description = "GCP zone for VM deployment"
  default     = "us-central1-b"

  validation {
    condition     = can(regex("^[a-z]+-[a-z]+[0-9]+-[a-z]$", var.zone))
    error_message = "Zone must be a valid GCP zone format (e.g., us-central1-b)."
  }
}

variable "vm_machine_type" {
  type        = string
  description = "Machine type for the VM instance"
  default     = "e2-standard-2"

  validation {
    condition = contains([
      "e2-micro", "e2-small", "e2-medium", "e2-standard-2",
      "e2-standard-4", "n1-standard-1", "n1-standard-2"
    ], var.vm_machine_type)
    error_message = "Machine type must be a valid GCP machine type."
  }
}

variable "prefix" {
  type        = string
  description = "Prefix for resource naming"
  default     = "main"

  validation {
    condition     = can(regex("^[a-z][a-z0-9-]{1,10}$", var.prefix))
    error_message = "Prefix must start with a letter, contain only lowercase letters, numbers, and hyphens, and be 2-11 characters long."
  }
}

variable "vm_disk_size" {
  type        = number
  description = "Boot disk size in GB"
  default     = 50

  validation {
    condition     = var.vm_disk_size >= 10 && var.vm_disk_size <= 2000
    error_message = "VM disk size must be between 10 and 2000 GB."
  }
}

variable "vm_image" {
  type        = string
  description = "VM boot disk image"
  default     = "debian-cloud/debian-12"

  validation {
    condition = contains([
      "debian-cloud/debian-12", "ubuntu-os-cloud/ubuntu-2204-lts",
      "centos-cloud/centos-7", "rhel-cloud/rhel-8"
    ], var.vm_image)
    error_message = "VM image must be a supported OS image."
  }
}
```
Update Variable Values (terraform.tfvars)
Add these values to your existing terraform.tfvars file:
```hcl
# Existing variables from Part 1
project_id  = "your-gcp-project-id"
environment = "dev"
region      = "us-central1"
vpc_name    = "main-vpc"
subnet_name = "primary-subnet"
subnet_cidr = "10.0.1.0/24"

# New VM variables
zone            = "us-central1-b"
vm_machine_type = "e2-standard-2"
prefix          = "demo"
vm_disk_size    = 50
vm_image        = "debian-cloud/debian-12"
```
Create VM Resources (vm.tf)
Create a new file vm.tf for your compute resources:
```hcl
# Service account for VM instances
resource "google_service_account" "vm_service_account" {
  project      = var.project_id
  account_id   = "${var.prefix}-vm-sa"
  display_name = "VM Service Account for ${var.environment}"
  description  = "Service account for VM instances with minimal required permissions"
}

# IAM roles for VM service account
resource "google_project_iam_member" "vm_sa_logging" {
  project = var.project_id
  role    = "roles/logging.logWriter"
  member  = "serviceAccount:${google_service_account.vm_service_account.email}"
}

resource "google_project_iam_member" "vm_sa_monitoring" {
  project = var.project_id
  role    = "roles/monitoring.metricWriter"
  member  = "serviceAccount:${google_service_account.vm_service_account.email}"
}

# Compute Engine VM instance
resource "google_compute_instance" "main_vm" {
  name         = "${var.prefix}-${var.environment}-vm"
  project      = var.project_id
  machine_type = var.vm_machine_type
  zone         = var.zone
  tags         = ["ssh-allowed", "http-server"]

  allow_stopping_for_update = true

  # Enable deletion protection in production
  deletion_protection = var.environment == "prod"

  # Boot disk configuration
  boot_disk {
    initialize_params {
      image = var.vm_image
      type  = "pd-standard"
      size  = var.vm_disk_size
      labels = {
        environment = var.environment
        managed-by  = "terraform"
      }
    }
    auto_delete = true
  }

  # Network configuration - private IP only
  network_interface {
    network    = google_compute_network.main_vpc.self_link
    subnetwork = google_compute_subnetwork.main_subnet.self_link

    # No external IP - the VM will use Cloud NAT for internet access.
    # Uncomment below for an external IP:
    # access_config {
    #   nat_ip = google_compute_address.vm_external_ip.address
    # }
  }

  # Service account and scopes
  service_account {
    email = google_service_account.vm_service_account.email
    scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring.write"
    ]
  }

  # Metadata for VM configuration
  metadata = {
    enable-oslogin = "TRUE"
    startup-script = <<-EOF
      #!/bin/bash
      apt-get update
      apt-get install -y nginx
      systemctl start nginx
      systemctl enable nginx
      echo "<h1>Hello from ${var.prefix}-${var.environment}-vm</h1>" > /var/www/html/index.html
      echo "<p>Deployed with Terraform and GitHub Actions</p>" >> /var/www/html/index.html
    EOF
  }

  # Labels for resource management
  labels = {
    environment = var.environment
    managed-by  = "terraform"
    team        = "devops"
  }

  # Depends on the subnet being ready
  depends_on = [
    google_compute_subnetwork.main_subnet
  ]
}

# Optional: Static external IP (commented out for security)
# resource "google_compute_address" "vm_external_ip" {
#   name    = "${var.prefix}-${var.environment}-vm-ip"
#   project = var.project_id
#   region  = var.region
# }
```
Create Firewall Rules (firewall.tf)
Create a new file firewall.tf for network security:
```hcl
# Ingress firewall rule for SSH access
resource "google_compute_firewall" "allow_ssh" {
  name        = "${var.environment}-allow-ssh"
  network     = google_compute_network.main_vpc.name
  project     = var.project_id
  description = "Allow SSH access to instances with ssh-allowed tag"
  direction   = "INGRESS"
  priority    = 1000

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  # Restrict source ranges for better security.
  # For production, use your specific IP ranges.
  source_ranges = ["0.0.0.0/0"] # Change this in production!
  target_tags   = ["ssh-allowed"]

  # Log firewall activity
  log_config {
    metadata = "INCLUDE_ALL_METADATA"
  }
}

# Ingress firewall rule for HTTP traffic
resource "google_compute_firewall" "allow_http" {
  name        = "${var.environment}-allow-http"
  network     = google_compute_network.main_vpc.name
  project     = var.project_id
  description = "Allow HTTP access to web servers"
  direction   = "INGRESS"
  priority    = 1000

  allow {
    protocol = "tcp"
    ports    = ["80", "8080"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["http-server"]

  log_config {
    metadata = "INCLUDE_ALL_METADATA"
  }
}

# Egress firewall rule for internet access
resource "google_compute_firewall" "allow_egress" {
  name        = "${var.environment}-allow-egress"
  network     = google_compute_network.main_vpc.name
  project     = var.project_id
  description = "Allow outbound internet access"
  direction   = "EGRESS"
  priority    = 1000

  allow {
    protocol = "tcp"
    ports    = ["80", "443", "53"]
  }

  allow {
    protocol = "udp"
    ports    = ["53", "123"]
  }

  destination_ranges = ["0.0.0.0/0"]
  target_tags        = ["ssh-allowed", "http-server"]
}

# Internal communication firewall rule
resource "google_compute_firewall" "allow_internal" {
  name        = "${var.environment}-allow-internal"
  network     = google_compute_network.main_vpc.name
  project     = var.project_id
  description = "Allow internal communication within VPC"
  direction   = "INGRESS"
  priority    = 1000

  allow {
    protocol = "tcp"
    ports    = ["0-65535"]
  }

  allow {
    protocol = "udp"
    ports    = ["0-65535"]
  }

  allow {
    protocol = "icmp"
  }

  # Allow traffic from the VPC CIDR
  source_ranges = [var.subnet_cidr, "10.1.0.0/16", "10.2.0.0/16"]
}
```
Update Networking with Cloud NAT (networking.tf)
Add these resources to your existing networking.tf file:
```hcl
# Cloud NAT for private VM internet access
resource "google_compute_router_nat" "main_nat" {
  name   = "${var.environment}-nat-gateway"
  router = google_compute_router.main_router.name
  region = var.region

  # NAT IP allocation
  nat_ip_allocate_option = "AUTO_ONLY"

  # Which subnets to provide NAT for
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
  subnetwork {
    name                    = google_compute_subnetwork.main_subnet.id
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }

  # Logging for monitoring
  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }

  # Port allocation and idle timeout settings
  min_ports_per_vm                 = 64
  udp_idle_timeout_sec             = 30
  icmp_idle_timeout_sec            = 30
  tcp_established_idle_timeout_sec = 1200
  tcp_transitory_idle_timeout_sec  = 30
}
```
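The NAT gateway references `google_compute_router.main_router`. If your Part 1 configuration does not already define a Cloud Router, add one alongside the NAT resource; a minimal sketch, assuming the same variable and VPC names used throughout this tutorial:

```hcl
# Cloud Router, required by Cloud NAT
resource "google_compute_router" "main_router" {
  name    = "${var.environment}-router"
  project = var.project_id
  region  = var.region
  network = google_compute_network.main_vpc.id
}
```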
Update Outputs (outputs.tf)
Add these VM-related outputs to your existing outputs.tf file:
```hcl
# Existing outputs from Part 1...

# VM outputs
output "vm_name" {
  value       = google_compute_instance.main_vm.name
  description = "Name of the created VM instance"
}

output "vm_internal_ip" {
  value       = google_compute_instance.main_vm.network_interface[0].network_ip
  description = "Internal IP address of the VM"
}

output "vm_zone" {
  value       = google_compute_instance.main_vm.zone
  description = "Zone where the VM is deployed"
}

output "vm_service_account" {
  value       = google_service_account.vm_service_account.email
  description = "Service account email used by the VM"
}

output "ssh_command" {
  value       = "gcloud compute ssh ${google_compute_instance.main_vm.name} --zone=${var.zone} --project=${var.project_id}"
  description = "Command to SSH into the VM instance"
}

# NAT gateway output
output "nat_gateway_name" {
  value       = google_compute_router_nat.main_nat.name
  description = "Name of the Cloud NAT gateway"
}
```
Setting Up GitHub Actions Automation
Now let’s create an automated CI/CD pipeline that will deploy your infrastructure whenever you push code to GitHub.
Create GitHub Secrets
First, you need to store your service account credentials in GitHub secrets:
- Get your service account key content:

```shell
cat terraform-sa-key.json
```

- In your GitHub repository:
  - Go to Settings > Secrets and variables > Actions
  - Click New repository secret
  - Name: GCLOUD_SERVICE_ACCOUNT_KEY
  - Value: Paste the entire contents of your service account key file
Security Note: This approach using service account keys is for learning purposes. In Part 3, we’ll upgrade to Workload Identity Federation for keyless authentication, which is the production-recommended approach.
Create GitHub Actions Workflow
Create the workflow directory and file:
```shell
mkdir -p .github/workflows
touch .github/workflows/deploy.yml
```
Add this workflow configuration to deploy.yml:
```yaml
name: Terraform Infrastructure Deployment

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main

env:
  PROJECT_ID: your-gcp-project-id # Replace with your project ID
  TF_VERSION: 1.6.0

jobs:
  terraform:
    name: Terraform Plan and Apply
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Authenticate with the service account key. The auth action writes
      # a credentials file and exports GOOGLE_APPLICATION_CREDENTIALS for
      # all later steps, so both gcloud and Terraform pick it up, and it
      # cleans up the credentials file in its post-job step.
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCLOUD_SERVICE_ACCOUNT_KEY }}

      - name: Set up Google Cloud SDK
        uses: google-github-actions/setup-gcloud@v2
        with:
          project_id: ${{ env.PROJECT_ID }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Format Check
        id: fmt
        run: terraform fmt -check -diff -recursive
        continue-on-error: true

      - name: Terraform Initialize
        id: init
        run: terraform init

      - name: Terraform Validation
        id: validate
        run: terraform validate

      - name: Terraform Plan
        id: plan
        run: terraform plan -input=false -out=tfplan
        continue-on-error: true

      - name: Comment PR with Plan
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const output = `#### Terraform Format and Style 🖌 \`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ⚙️ \`${{ steps.init.outcome }}\`
            #### Terraform Validation 🤖 \`${{ steps.validate.outcome }}\`
            #### Terraform Plan 📖 \`${{ steps.plan.outcome }}\`

            <details><summary>Show Plan</summary>

            \`\`\`terraform
            ${{ steps.plan.outputs.stdout }}
            \`\`\`

            </details>

            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })

      # Fail the job if the plan failed; continue-on-error above only
      # defers the failure so the PR comment is still posted.
      - name: Check Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false tfplan

      - name: Output VM Information
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          echo "## 🚀 Deployment Complete!" >> $GITHUB_STEP_SUMMARY
          echo "| Resource | Value |" >> $GITHUB_STEP_SUMMARY
          echo "|----------|-------|" >> $GITHUB_STEP_SUMMARY
          echo "| VM Name | $(terraform output -raw vm_name) |" >> $GITHUB_STEP_SUMMARY
          echo "| Internal IP | $(terraform output -raw vm_internal_ip) |" >> $GITHUB_STEP_SUMMARY
          echo "| SSH Command | \`$(terraform output -raw ssh_command)\` |" >> $GITHUB_STEP_SUMMARY
```
Deploying Your Infrastructure
Local Deployment First
Before pushing to GitHub, test your configuration locally:
```shell
# Format and validate
terraform fmt -recursive
terraform validate

# Plan and apply
terraform plan -out=tfplan
terraform apply tfplan

# Verify deployment
terraform output
```
Expected Output
You should see output similar to:
```
vm_internal_ip = "10.0.1.2"
vm_name = "demo-dev-vm"
vm_zone = "us-central1-b"
ssh_command = "gcloud compute ssh demo-dev-vm --zone=us-central1-b --project=your-project-id"
```
Test VM Connectivity
```shell
# SSH into your VM (if you have gcloud configured)
gcloud compute ssh demo-dev-vm --zone=us-central1-b --project=your-project-id

# Test internet connectivity from the VM
curl -I http://google.com

# Check nginx is running
curl localhost
```
Push to GitHub for Automated Deployment
Once local testing works:
```shell
git add .
git commit -m "Add VM deployment with GitHub Actions automation"
git push origin main
```
GitHub Actions will:
- Validate your Terraform code
- Plan the infrastructure changes
- Apply changes automatically on main branch
- Comment on pull requests with plan details
Monitoring Your Deployment
Check GitHub Actions
- Go to your repository on GitHub
- Click the Actions tab
- Watch your workflow run in real-time
- Check the job summary for deployment details
Verify in GCP Console
- Compute Engine > VM instances – see your VM
- VPC network > Firewall – check your firewall rules
- Network services > Cloud NAT – verify NAT configuration
- Logging > Logs Explorer – monitor VM and firewall logs
Test Your VM
```shell
# List your VMs
gcloud compute instances list

# SSH into the VM
gcloud compute ssh demo-dev-vm --zone=us-central1-b

# Inside the VM, test internet access
curl -I https://www.google.com

# Check nginx status
sudo systemctl status nginx
curl localhost
```
Production Considerations
Security Enhancements
Firewall Rules:
- Restrict SSH access to specific IP ranges
- Use IAP (Identity-Aware Proxy) for SSH access
- Implement network tags consistently
Service Accounts:
- Follow principle of least privilege
- Use separate service accounts for different workloads
- Regularly rotate service account keys
Monitoring and Logging
Enable Monitoring:
```hcl
# Add to vm.tf
metadata = {
  enable-oslogin            = "TRUE"
  google-monitoring-enabled = "TRUE"
  google-logging-enabled    = "TRUE"
}
```
Set up Alerts:
- VM instance health checks
- High CPU or memory usage
- Network connectivity issues
- Unusual SSH access patterns
Cost Optimization
Right-sizing:
- Monitor VM performance and adjust machine types
- Use preemptible instances for non-critical workloads
- Implement auto-shutdown for development VMs
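For the preemptible point, a sketch of how Terraform's `scheduling` block inside the `google_compute_instance` resource can request Spot capacity at a steep discount; these VMs can be reclaimed at any time, so this suits interruptible dev or batch workloads only:

```hcl
# Inside the google_compute_instance resource: request a Spot VM.
# Spot/preemptible VMs cannot auto-restart and may be reclaimed.
scheduling {
  provisioning_model          = "SPOT"
  preemptible                 = true
  automatic_restart           = false
  instance_termination_action = "STOP"
}
```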
Storage Optimization:
- Use appropriate disk types (pd-standard vs pd-ssd)
- Enable disk auto-deletion with VMs
- Implement disk snapshots for backups
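For the snapshot point, one approach is a resource policy with a snapshot schedule attached to the boot disk. A sketch under the assumption that the boot disk carries the instance's name (the default for auto-created boot disks); the resource names here are illustrative:

```hcl
# Daily snapshot schedule, keeping two weeks of backups
resource "google_compute_resource_policy" "daily_backup" {
  name   = "${var.prefix}-daily-backup"
  region = var.region

  snapshot_schedule_policy {
    schedule {
      daily_schedule {
        days_in_cycle = 1
        start_time    = "04:00"
      }
    }
    retention_policy {
      max_retention_days = 14
    }
  }
}

# Attach the schedule to the VM's boot disk
resource "google_compute_disk_resource_policy_attachment" "vm_backup" {
  name = google_compute_resource_policy.daily_backup.name
  disk = google_compute_instance.main_vm.name
  zone = var.zone
}
```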
Troubleshooting Common Issues
Authentication Failures
```shell
# Re-authenticate gcloud
gcloud auth application-default login

# Verify project setting
gcloud config get-value project

# Check service account permissions
gcloud projects get-iam-policy YOUR_PROJECT_ID
```
VM Won’t Start
```shell
# Check VM status
gcloud compute instances describe VM_NAME --zone=ZONE

# View serial console output
gcloud compute instances get-serial-port-output VM_NAME --zone=ZONE

# Check quotas
gcloud compute project-info describe --project=PROJECT_ID
```
Network Connectivity Issues
```shell
# Test firewall rules
gcloud compute firewall-rules list

# Check routes
gcloud compute routes list

# Verify NAT configuration
gcloud compute routers get-nat-mappings ROUTER_NAME --region=REGION
```
What’s Next in Part 3
In our next tutorial, we’ll enhance this setup with:
🔜 Secure PostgreSQL deployment with Cloud SQL
🔜 Workload Identity Federation for keyless GitHub Actions
🔜 Private service connections for database security
🔜 Advanced networking with VPC peering
Key Takeaways
✅ Automated infrastructure reduces human error and increases deployment speed
✅ GitHub Actions integration enables true Infrastructure as Code workflows
✅ Proper firewall rules are essential for both security and functionality
✅ Cloud NAT provides secure internet access without public IPs
✅ Service account best practices improve security posture from the start
Ready to level up your infrastructure automation? This foundation of automated VM deployment sets you up perfectly for the advanced database and security patterns we’ll cover in Part 3!
Connect with me:
- LinkedIn for more Google Cloud and DevOps content
Tags: #Terraform #GoogleCloud #GCP #GitHubActions #DevOps #InfrastructureAsCode #ComputeEngine #Automation