Terraform on Google Cloud V1.3 — Secret Manager

In this blog, I will talk about deploying GCP Secret Manager with Terraform to store a service account key.

Let me start this blog by telling you a work story.

I was working on a Java-based big data project where we were storing Maven and NPM dependencies on Google Cloud Artifact Registry. I had set up AR for Maven and NPM and also created a service account with access to them.

The team was using Gradle to build the application with a GitHub Actions workflow. We needed to use the service account key to communicate with AR for dependencies in our workflow. The easiest approach was to create a GitHub secret with the service account key and use it in the CI workflow, but it raised security concerns.

The service account key should be rotated within a limited time frame, and it should be automated. I suggested using Secret Manager to store the service account key and pull the secret with the CI workflow using gcloud commands. I also recommended setting up a Cloud Function with Cloud Scheduler to regularly rotate the service account key. It was a perfect solution.

Coming back to this blog: I will deploy Secret Manager to store the service account key using Terraform.

In the next blog, I will talk about how to deploy a Cloud Function to rotate the service account key stored in Secret Manager.

Let’s talk briefly about Secret Manager

What is GCP Secret Manager?

Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud.

Secret Manager provides a scalable and robust solution for secrets management, ensuring encryption, access controls, and audit trails.

In simple words

  • You can easily store and retrieve your secrets
  • You can rotate and version your secrets
  • It integrates well with other GCP services
  • It provides audit logs to track access to secrets
  • It encrypts your secret data before storing it
  • You can use customer-managed encryption
  • You can control who can access these secrets
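
To make the versioning and rotation semantics concrete, here is a toy in-memory model in Python (purely illustrative; the real service adds encryption, IAM, and audit logging on top of these semantics):

```python
# Toy model of Secret Manager's versioning semantics (illustration only).
class ToySecretStore:
    def __init__(self):
        self._secrets = {}  # secret_id -> list of versions (index 0 = version 1)

    def add_version(self, secret_id, data):
        """Adding data never overwrites; it appends a new immutable version."""
        versions = self._secrets.setdefault(secret_id, [])
        versions.append(data)
        return len(versions)  # 1-based version number

    def access(self, secret_id, version="latest"):
        versions = self._secrets[secret_id]
        if version == "latest":
            return versions[-1]
        return versions[int(version) - 1]

store = ToySecretStore()
store.add_version("db-password", "hunter2")        # version 1
store.add_version("db-password", "correct-horse")  # rotation = new version 2
print(store.access("db-password"))       # -> correct-horse ("latest")
print(store.access("db-password", "1"))  # old versions stay readable
```

This is why rotation in the next blog can be non-disruptive: consumers reading "latest" pick up the new value, while pinned versions keep working until you disable them.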

Why use Secret Manager?

With Secret Manager, you can store secrets centrally, rather than hardcoding them into your code or configuration files. This approach improves security by reducing the risk of accidental exposure or unauthorized access to sensitive information.

It is designed to help developers securely store, access, and distribute secrets needed by their applications and services.

Think of it like a secure vault where you put all your important keys and passwords, and only the people you trust can access them. Plus, you get a complete record of who accessed what and when.

What we’ll build in this tutorial

Let’s lay out all the steps we will go through using Terraform:

  1. Create a service account for Artifact Registry access
  2. Assign needed permissions
  3. Create the service account key
  4. Create a secret in GCP Secret Manager
  5. Create a secret version with the service account key
  6. Set up proper IAM permissions for secret access
  7. Create additional secrets for our database credentials
  8. Test everything works properly

Setting up Secret Manager with Terraform

Enable required APIs first

Before we start, we need to make sure the required APIs are enabled. Add this to your main.tf:

# Enable required APIs
resource "google_project_service" "secretmanager_api" {
  project = var.project_id
  service = "secretmanager.googleapis.com"
  
  disable_dependent_services = true
  disable_on_destroy         = false
}

resource "google_project_service" "artifactregistry_api" {
  project = var.project_id
  service = "artifactregistry.googleapis.com"
  
  disable_dependent_services = true
  disable_on_destroy         = false
}

resource "google_project_service" "cloudfunctions_api" {
  project = var.project_id
  service = "cloudfunctions.googleapis.com"
  
  disable_dependent_services = true
  disable_on_destroy         = false
}

Create a file secrets.tf

Create a file named secrets.tf and paste all the Terraform code snippets that follow into it. This keeps our secret-management code organized and separate from other resources.

Creating the service account and assigning permission roles

# Service account for Artifact Registry access
resource "google_service_account" "artifact_registry_sa" {
  project      = var.project_id
  account_id   = "artifact-registry-access"
  display_name = "Artifact Registry Access Service Account"
  description  = "Service account for accessing Artifact Registry repositories"
}

# Grant Artifact Registry permissions
resource "google_project_iam_member" "artifact_registry_reader" {
  project = var.project_id
  role    = "roles/artifactregistry.reader"
  member  = "serviceAccount:${google_service_account.artifact_registry_sa.email}"
}

resource "google_project_iam_member" "artifact_registry_writer" {
  project = var.project_id
  role    = "roles/artifactregistry.writer" 
  member  = "serviceAccount:${google_service_account.artifact_registry_sa.email}"
}

# Additional permission for repository management
resource "google_project_iam_member" "artifact_registry_repo_admin" {
  project = var.project_id
  role    = "roles/artifactregistry.repoAdmin"
  member  = "serviceAccount:${google_service_account.artifact_registry_sa.email}"
}

Creating a key for the service account

# Create a key for the service account
resource "google_service_account_key" "artifact_registry_key" {
  service_account_id = google_service_account.artifact_registry_sa.name
  public_key_type    = "TYPE_X509_PEM_FILE"
}

Creating secrets and secret versions

Now here’s where the magic happens. We’ll create multiple secrets to demonstrate different use cases.

# Secret for Artifact Registry service account key
resource "google_secret_manager_secret" "artifact_registry_secret" {
  project   = var.project_id
  secret_id = "artifact-registry-sa-key"

  labels = {
    environment = var.environment
    purpose     = "artifact-registry"
    managed-by  = "terraform"
  }

  replication {
    auto {
      # Add the CMEK block only when a key is supplied; setting kms_key_name
      # to null inside the block is not valid.
      dynamic "customer_managed_encryption" {
        for_each = var.kms_key_id != "" ? [var.kms_key_id] : []
        content {
          kms_key_name = customer_managed_encryption.value
        }
      }
    }
  }
  
  depends_on = [google_project_service.secretmanager_api]
}

# Creating secret version with service account key
resource "google_secret_manager_secret_version" "artifact_registry_version" {
  secret      = google_secret_manager_secret.artifact_registry_secret.id
  secret_data = base64decode(google_service_account_key.artifact_registry_key.private_key)
}
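
Why the base64decode? The private_key attribute that Terraform exposes is the base64-encoded text of the JSON key file, so decoding it stores the plain key JSON in the secret version. A quick Python sketch with a made-up stand-in payload (the field names mirror a real key file, the values do not):

```python
import base64
import json

# Made-up stand-in for google_service_account_key.*.private_key:
# Terraform hands you base64(text of the JSON key file).
fake_key_json = json.dumps({
    "type": "service_account",
    "project_id": "your-project-id",
    "client_email": "artifact-registry-access@your-project-id.iam.gserviceaccount.com",
})
encoded = base64.b64encode(fake_key_json.encode()).decode()

# Terraform's base64decode(...) does the equivalent of:
decoded = base64.b64decode(encoded).decode()
key = json.loads(decoded)  # the secret version stores this JSON text
print(key["type"])         # -> service_account
```

Without the decode, the secret version would hold base64 text, and every consumer would have to remember to decode it before use.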

# Secret for database password (using the one from Part 3)
resource "google_secret_manager_secret" "database_password" {
  project   = var.project_id
  secret_id = "postgres-db-password"

  labels = {
    environment = var.environment
    purpose     = "database"
    managed-by  = "terraform"
  }

  replication {
    auto {}
  }
  
  depends_on = [google_project_service.secretmanager_api]
}

# Store the database password we generated in Part 3
resource "google_secret_manager_secret_version" "database_password_version" {
  secret      = google_secret_manager_secret.database_password.id
  secret_data = random_password.postgres_password.result
}

# Secret for application API keys
resource "google_secret_manager_secret" "app_api_keys" {
  project   = var.project_id
  secret_id = "application-api-keys"

  labels = {
    environment = var.environment
    purpose     = "application"
    managed-by  = "terraform"
  }

  replication {
    auto {}
  }
  
  depends_on = [google_project_service.secretmanager_api]
}

# Generate some sample API keys
resource "random_password" "api_key" {
  length  = 32
  special = false
}

resource "google_secret_manager_secret_version" "app_api_keys_version" {
  secret = google_secret_manager_secret.app_api_keys.id
  secret_data = jsonencode({
    stripe_api_key    = "sk_test_${random_password.api_key.result}"
    sendgrid_api_key  = "SG.${random_password.api_key.result}"
    github_token      = "ghp_${random_password.api_key.result}"
  })
}
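
Since the version above stores a jsonencode(...) payload, consumers get a single string back and parse it into individual keys. A minimal sketch with a stand-in payload (in practice the string comes from gcloud or the client library):

```python
import json

# Stand-in for what `gcloud secrets versions access latest
# --secret=application-api-keys` would return.
payload = '{"stripe_api_key": "sk_test_abc", "sendgrid_api_key": "SG.abc", "github_token": "ghp_abc"}'

api_keys = json.loads(payload)     # one secret holding several related keys
print(sorted(api_keys))            # -> ['github_token', 'sendgrid_api_key', 'stripe_api_key']
print(api_keys["stripe_api_key"])  # -> sk_test_abc
```

Bundling related keys into one JSON secret keeps the secret count (and IAM bindings) down, at the cost of rotating them together.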

Setting up IAM permissions for secrets

This is crucial – you need to control who can access your secrets.

# Allow our GitHub Actions service account to access secrets
resource "google_secret_manager_secret_iam_member" "github_access_artifact_registry" {
  project   = var.project_id
  secret_id = google_secret_manager_secret.artifact_registry_secret.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.github_actions_sa.email}"
}

resource "google_secret_manager_secret_iam_member" "github_access_database" {
  project   = var.project_id
  secret_id = google_secret_manager_secret.database_password.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.github_actions_sa.email}"
}

resource "google_secret_manager_secret_iam_member" "github_access_api_keys" {
  project   = var.project_id
  secret_id = google_secret_manager_secret.app_api_keys.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.github_actions_sa.email}"
}

# Allow our VMs to access specific secrets
resource "google_secret_manager_secret_iam_member" "vm_access_database" {
  project   = var.project_id
  secret_id = google_secret_manager_secret.database_password.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.vm_service_account.email}"
}

resource "google_secret_manager_secret_iam_member" "vm_access_api_keys" {
  project   = var.project_id
  secret_id = google_secret_manager_secret.app_api_keys.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.vm_service_account.email}"
}

# Create a custom role for secret management
resource "google_project_iam_custom_role" "secret_manager" {
  role_id     = "secretManager"
  title       = "Secret Manager"
  description = "Custom role for managing secrets"
  permissions = [
    "secretmanager.secrets.create",
    "secretmanager.secrets.delete",
    "secretmanager.secrets.get",
    "secretmanager.secrets.list",
    "secretmanager.versions.add",
    "secretmanager.versions.access",
    "secretmanager.versions.destroy",
    "secretmanager.versions.list"
  ]
}

Add variables to variables.tf

Add these new variables to your variables.tf:

variable "kms_key_id" {
  type        = string
  description = "KMS key ID for encrypting secrets (optional)"
  default     = ""
}

variable "secret_rotation_days" {
  type        = number
  description = "Number of days after which secrets should be rotated"
  default     = 90
  validation {
    condition     = var.secret_rotation_days >= 30 && var.secret_rotation_days <= 365
    error_message = "Secret rotation days must be between 30 and 365."
  }
}

variable "enable_secret_notifications" {
  type        = bool
  description = "Enable notifications for secret access"
  default     = true
}

State management improvements

This blog is the fourth post in my series on Terraform on Google Cloud. I will reuse the Terraform configuration for variables, outputs, and providers from my previous posts:

  • Terraform on GCP V1.0 — Getting started
  • Terraform on GCP V1.1 — Deploying VM with GitHub actions
  • Terraform on GCP V1.2 — Deploying PostgreSQL with Github Actions

I suggest you go through them as well.

In the first blog in the series, I used local state for Terraform. In this blog, I will migrate the state to a remote GCS bucket, which supports state locking by default. I will use the Cloud Storage bucket I created in that blog.

Migrate to remote state

Go to the backend.tf file from the Getting Started blog and replace the local state code with the code below for the Terraform remote state:

terraform {
  backend "gcs" {
    bucket = "your-project-id-dev-terraform-state" # Replace with your bucket name
    prefix = "terraform/state"
    # Note: the gcs backend has no project argument; the bucket name alone
    # identifies where the state lives.
  }
}

Run the command below to migrate the state to the remote backend. Answer “yes” if prompted:

terraform init -migrate-state

Now we have successfully migrated our Terraform state to a remote backend using a Google Cloud Storage bucket, which supports state locking by default.

Note: An AWS S3 bucket can also serve as a remote state backend. On its own it historically provided no state locking, so teams configured a DynamoDB table for locks; recent Terraform releases (1.10+) also support native S3 locking via the use_lockfile setting.

What is State Locking in Terraform?

State locking in Terraform is a mechanism designed to prevent concurrent modifications and conflicts when working with shared infrastructure states in a team or collaborative environment.

State locking works by acquiring a lock on the state file, which is typically stored in a remote backend. When a user or process wants to make changes to the infrastructure defined in the Terraform configuration, it first requests and acquires a lock on the state file.

Once the lock is acquired, Terraform allows the user or process to proceed with the changes, ensuring exclusive access to the state.
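
The guarantee boils down to an atomic create-if-absent operation: the lock is written only if it does not already exist. Here is a toy Python sketch of those semantics (in-memory stand-in; the GCS backend achieves the same effect with a .tflock object and a generation-match precondition):

```python
class ToyStateLock:
    """In-memory sketch of create-if-absent state locking (illustration only)."""
    def __init__(self):
        self._locks = {}  # lock path -> lock ID of the holder

    def acquire(self, path, lock_id):
        if path in self._locks:  # someone else already holds the lock
            raise RuntimeError(f"state locked by {self._locks[path]}")
        self._locks[path] = lock_id  # atomic create-if-absent in the real backend
        return lock_id

    def release(self, path, lock_id):
        if self._locks.get(path) != lock_id:  # only the holder may unlock
            raise RuntimeError("lock ID mismatch")
        del self._locks[path]

lock = ToyStateLock()
lock.acquire("terraform/state/default.tflock", "run-1")
try:
    lock.acquire("terraform/state/default.tflock", "run-2")  # concurrent apply fails
except RuntimeError as e:
    print(e)  # -> state locked by run-1
lock.release("terraform/state/default.tflock", "run-1")
```

This is also why force-unlock asks for a lock ID: it is the release path above, bypassing the "only the holder" check deliberately.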

To force-unlock the state (for example, after a crashed run leaves a stale lock), use the command below:

terraform force-unlock LOCK_ID

Adding comprehensive outputs

Add these outputs to your outputs.tf to make the secrets accessible:

# Secret Manager outputs
output "artifact_registry_secret_name" {
  value       = google_secret_manager_secret.artifact_registry_secret.name
  description = "Name of the Artifact Registry service account secret"
}

output "artifact_registry_secret_id" {
  value       = google_secret_manager_secret.artifact_registry_secret.secret_id
  description = "Secret ID for Artifact Registry service account"
}

output "database_secret_name" {
  value       = google_secret_manager_secret.database_password.name
  description = "Name of the database password secret"
}

output "api_keys_secret_name" {
  value       = google_secret_manager_secret.app_api_keys.name
  description = "Name of the API keys secret"
}

# Service account outputs
output "artifact_registry_sa_email" {
  value       = google_service_account.artifact_registry_sa.email
  description = "Email of the Artifact Registry service account"
}

# Instructions for accessing secrets
output "secret_access_commands" {
  value = {
    artifact_registry = "gcloud secrets versions access latest --secret=${google_secret_manager_secret.artifact_registry_secret.secret_id}"
    database_password = "gcloud secrets versions access latest --secret=${google_secret_manager_secret.database_password.secret_id}"
    api_keys         = "gcloud secrets versions access latest --secret=${google_secret_manager_secret.app_api_keys.secret_id}"
  }
  description = "Commands to access secrets using gcloud CLI"
}

Using secrets in your applications

From GitHub Actions

Update your GitHub Actions workflow to use secrets from Secret Manager:

# Add this step to your existing workflow in .github/workflows/deploy.yml
- name: Get secrets from Secret Manager
  id: secrets
  run: |
    # Get database password
    DB_PASSWORD=$(gcloud secrets versions access latest --secret="postgres-db-password")
    echo "::add-mask::$DB_PASSWORD"
    echo "db_password=$DB_PASSWORD" >> $GITHUB_OUTPUT
    
    # Get API keys
    API_KEYS=$(gcloud secrets versions access latest --secret="application-api-keys")
    echo "::add-mask::$API_KEYS"
    echo "api_keys=$API_KEYS" >> $GITHUB_OUTPUT

- name: Use secrets in deployment
  run: |
    echo "Database password retrieved successfully"
    echo "API keys retrieved successfully"
    # Use the secrets in your deployment process
    # For example, set them as environment variables for your application

From your VM instances

Create a simple script to fetch secrets on your VMs. SSH into your VM and create this script:

# SSH into your VM
gcloud compute ssh demo-dev-vm --zone=us-central1-b

# Create a script to fetch secrets
sudo tee /usr/local/bin/get-secrets.sh > /dev/null <<'EOF'
#!/bin/bash

# Function to get secret value
get_secret() {
    local secret_name=$1
    gcloud secrets versions access latest --secret="$secret_name" 2>/dev/null
}

# Get database password
DB_PASSWORD=$(get_secret "postgres-db-password")
if [ $? -eq 0 ]; then
    echo "Database password retrieved successfully"
    export DB_PASSWORD
else
    echo "Failed to retrieve database password"
    exit 1
fi

# Get API keys (returns JSON)
API_KEYS=$(get_secret "application-api-keys")
if [ $? -eq 0 ]; then
    echo "API keys retrieved successfully"
    # Parse JSON and export individual keys
    export STRIPE_API_KEY=$(echo "$API_KEYS" | jq -r '.stripe_api_key')
    export SENDGRID_API_KEY=$(echo "$API_KEYS" | jq -r '.sendgrid_api_key')
    export GITHUB_TOKEN=$(echo "$API_KEYS" | jq -r '.github_token')
else
    echo "Failed to retrieve API keys"
    exit 1
fi

echo "All secrets loaded successfully!"
EOF

# Make script executable
sudo chmod +x /usr/local/bin/get-secrets.sh

# Install jq for JSON parsing
sudo apt-get update && sudo apt-get install -y jq

# Test the script (use source so the exported variables persist in your shell)
source /usr/local/bin/get-secrets.sh

From your applications

Here’s how you’d use secrets in a Python application:

# Example Python code for accessing secrets
from google.cloud import secretmanager
import json
import os

class SecretManager:
    def __init__(self, project_id):
        self.project_id = project_id
        self.client = secretmanager.SecretManagerServiceClient()
    
    def get_secret(self, secret_id, version_id="latest"):
        """Retrieve a secret from Secret Manager"""
        try:
            name = f"projects/{self.project_id}/secrets/{secret_id}/versions/{version_id}"
            response = self.client.access_secret_version(request={"name": name})
            return response.payload.data.decode("UTF-8")
        except Exception as e:
            print(f"Error retrieving secret {secret_id}: {e}")
            return None
    
    def get_database_config(self):
        """Get database configuration"""
        password = self.get_secret("postgres-db-password")
        if password:
            return {
                "host": "your-db-private-ip",
                "port": 5432,
                "database": "appdb",
                "username": "postgres",
                "password": password
            }
        return None
    
    def get_api_keys(self):
        """Get API keys as a dictionary"""
        api_keys_json = self.get_secret("application-api-keys")
        if api_keys_json:
            return json.loads(api_keys_json)
        return None

# Usage example
if __name__ == "__main__":
    project_id = os.getenv("GOOGLE_CLOUD_PROJECT", "your-project-id")
    sm = SecretManager(project_id)
    
    # Get database config
    db_config = sm.get_database_config()
    if db_config:
        print("Database configuration loaded successfully")
    
    # Get API keys
    api_keys = sm.get_api_keys()
    if api_keys:
        print(f"Loaded {len(api_keys)} API keys")
        stripe_key = api_keys.get("stripe_api_key")
        sendgrid_key = api_keys.get("sendgrid_api_key")

Testing your Secret Manager setup

Deploy and test

Deploy everything by running the Terraform commands locally or through your CI workflow:

# Format, validate, and apply
terraform fmt -recursive
terraform validate
terraform plan -out=tfplan
terraform apply tfplan

Verify secrets are created

# List all secrets
gcloud secrets list

# Get specific secret metadata
gcloud secrets describe artifact-registry-sa-key
gcloud secrets describe postgres-db-password
gcloud secrets describe application-api-keys

# Test secret access (be careful with sensitive data)
gcloud secrets versions access latest --secret="postgres-db-password"

Test IAM permissions

# Test access with different service accounts

# Should work - the GitHub Actions service account has secret access
gcloud auth activate-service-account github-actions-sa@your-project-id.iam.gserviceaccount.com --key-file=path-to-key.json
gcloud secrets versions access latest --secret="artifact-registry-sa-key"

# Should also work - the VM service account can read the database secret
gcloud auth activate-service-account demo-dev-vm-sa@your-project-id.iam.gserviceaccount.com --key-file=path-to-key.json
gcloud secrets versions access latest --secret="postgres-db-password"

Monitoring and auditing secrets

Set up logging and monitoring

Add this to your secrets.tf to enable audit logging:

# Enable audit logging for Secret Manager
resource "google_logging_project_sink" "secret_manager_audit" {
  name        = "secret-manager-audit-sink"
  destination = "storage.googleapis.com/${google_storage_bucket.app_storage.name}"
  
  filter = <<EOF
protoPayload.serviceName="secretmanager.googleapis.com"
AND (
  protoPayload.methodName="google.cloud.secretmanager.v1.SecretManagerService.AccessSecretVersion"
  OR protoPayload.methodName="google.cloud.secretmanager.v1.SecretManagerService.CreateSecret"
  OR protoPayload.methodName="google.cloud.secretmanager.v1.SecretManagerService.DeleteSecret"
)
EOF

  unique_writer_identity = true
}

# Grant the log sink writer access to the storage bucket
resource "google_storage_bucket_iam_member" "audit_log_writer" {
  bucket = google_storage_bucket.app_storage.name
  role   = "roles/storage.objectCreator"
  member = google_logging_project_sink.secret_manager_audit.writer_identity
}

Create monitoring alerts

# Alert for unauthorized secret access attempts
resource "google_monitoring_alert_policy" "secret_access_alert" {
  display_name          = "Unauthorized Secret Access"
  combiner              = "OR"
  enabled               = true
  notification_channels = [] # Add your notification channels here
  
  conditions {
    display_name = "Secret access from unauthorized source"
    
    condition_threshold {
      filter          = "resource.type=\"gce_instance\" AND log_name=\"projects/${var.project_id}/logs/cloudaudit.googleapis.com%2Fdata_access\" AND protoPayload.serviceName=\"secretmanager.googleapis.com\""
      duration        = "300s"
      comparison      = "COMPARISON_GREATER_THAN"
      threshold_value = 5
      
      aggregations {
        alignment_period   = "300s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }
  
  depends_on = [google_project_service.secretmanager_api]
}

Best practices for Secret Manager

Security considerations

  1. Principle of least privilege – only grant access to secrets that are actually needed
  2. Use separate secrets for different environments (dev, staging, prod)
  3. Enable audit logging to track who accesses what secrets when
  4. Rotate secrets regularly – we’ll automate this in the next blog
  5. Use customer-managed encryption keys for highly sensitive data

Operational considerations

  1. Label your secrets consistently for better organization
  2. Use descriptive secret names that make their purpose clear
  3. Document your secrets and who has access to them
  4. Set up monitoring and alerting for secret access patterns
  5. Have a recovery plan for when secrets are compromised

Cost optimization

  1. Clean up old secret versions that are no longer needed
  2. Use regional replication only when necessary
  3. Monitor secret access patterns to identify unused secrets
  4. Implement proper lifecycle management for secrets
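
For the first point, the selection logic is easy to keep separate from the API calls: retain the newest N versions and destroy the rest. A hedged Python sketch (the pure selection function runs as-is; the destroy call is left as a comment because it needs credentials and the google-cloud-secret-manager client, and the function assumes its input is ordered newest-first):

```python
def versions_to_prune(version_names, keep=3):
    """Given version resource names ordered newest-first, return the ones
    beyond the newest `keep` (candidates for destruction)."""
    return version_names[keep:]

# Example with made-up resource names, newest-first.
versions = [f"projects/p/secrets/s/versions/{n}" for n in (9, 8, 7, 6, 5)]
for name in versions_to_prune(versions, keep=3):
    print("would destroy", name)
    # With the real client (assumption: google-cloud-secret-manager installed
    # and authenticated), the call would be roughly:
    # client.destroy_secret_version(request={"name": name})
```

Keeping a couple of recent versions gives you a rollback path if a freshly rotated credential turns out to be broken.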

What we accomplished

In this blog, we successfully:

  • Set up Secret Manager with proper encryption and replication
  • Created multiple types of secrets for different use cases
  • Configured proper IAM permissions with least-privilege access
  • Migrated to remote state with state locking
  • Added comprehensive monitoring and audit logging
  • Provided practical examples for accessing secrets from applications
  • Implemented security best practices from the ground up

In the next blog

In the next blog, I will use Cloud Functions to rotate the service account key stored in Secret Manager and automate the entire secret lifecycle management process. We’ll also set up Cloud Scheduler to trigger the rotation automatically.

Thank you for reading, I hope this post has added some value to you.

Connect with me:

  • LinkedIn for more content on Google Cloud, Terraform, Python and other DevOps tools

Tags: #GoogleCloud #Terraform #SecretManager #Security #DevOps #Infrastructure #CloudSecurity #SecretManagement

Akhilesh Mishra

I am Akhilesh Mishra, a self-taught DevOps engineer with 11+ years of experience working on private and public cloud (GCP & AWS) technologies.

I also mentor DevOps aspirants on their journey by providing guided learning and mentorship.

Topmate: https://topmate.io/akhilesh_mishra/