Automating Machine Image Creation with HashiCorp Packer

In today’s DevOps world, consistency and automation are paramount. One tool that has revolutionized how we create and manage machine images is HashiCorp Packer. Whether you’re building AMIs for AWS, Docker images for containerized applications, or VM templates for on-premises infrastructure, Packer provides a unified approach to image creation that’s both powerful and elegant.

What is HashiCorp Packer?

HashiCorp Packer is an open-source tool that automates the creation of machine images for multiple platforms from a single source configuration. Think of it as “Infrastructure as Code” for machine images. Instead of manually spinning up instances, installing software, configuring services, and then creating images, Packer codifies this entire process into repeatable, version-controlled templates.

Packer follows a simple yet powerful workflow:

  1. Provision temporary infrastructure (EC2 instance, Docker container, VM)
  2. Configure the system using provisioners (shell scripts, Ansible, Chef)
  3. Create an image from the configured system
  4. Clean up temporary resources automatically
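
On the command line, that workflow typically looks like this (assuming your template files are in the current directory):

```shell
# Install the plugins the template requires
packer init .

# Check formatting and syntax before building
packer fmt .
packer validate .

# Run the build: Packer provisions, images, and cleans up
packer build .
```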

Core Components of Packer

Templates and Configurations

Packer uses JSON or HCL (HashiCorp Configuration Language) files to define how images should be built. These templates contain three main sections:

  • Sources/Builders: Define the platform and base image
  • Provisioners: Configure and customize the system
  • Post-processors: Handle final image operations
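
A minimal HCL template showing all three sections together (the docker source and image name here are just stand-ins):

```hcl
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = "~> 1"
    }
  }
}

# Source/builder: the platform and base image
source "docker" "example" {
  image  = "alpine:3.19"
  commit = true
}

build {
  sources = ["source.docker.example"]

  # Provisioner: customize the running instance
  provisioner "shell" {
    inline = ["echo hello > /etc/motd"]
  }

  # Post-processor: act on the finished image
  post-processor "docker-tag" {
    repository = "example/minimal"
    tags       = ["latest"]
  }
}
```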

Builders

Builders are platform-specific plugins that know how to create images for different environments:

  • amazon-ebs for AWS AMIs
  • docker for Docker images
  • azure-arm for Azure managed images
  • vmware-iso for VMware templates
  • googlecompute for Google Cloud images

Provisioners

Provisioners install and configure software on temporary instances:

  • Shell: Execute bash/PowerShell scripts
  • File: Upload files and directories
  • Ansible: Run Ansible playbooks
  • Breakpoint: Pause for manual intervention during development

Key Use Cases for Packer

1. Immutable Infrastructure

Create golden images with all dependencies pre-installed, reducing deployment time and configuration drift.

2. Multi-Cloud Deployment

Build identical images across different cloud providers from a single configuration.

3. Compliance and Security

Embed security configurations, patches, and compliance requirements directly into base images.

4. Faster Auto-Scaling

Pre-configured images allow auto-scaling groups to launch instances much faster than bootstrapping at runtime.

5. Development Environment Standardization

Ensure development, staging, and production environments use identical base configurations.

Building AWS AMIs with Packer

Let’s dive into a practical example of creating a web server AMI for AWS.

Prerequisites

  • AWS CLI configured with appropriate permissions
  • Packer installed on your local machine
  • Basic understanding of AWS EC2
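
Before building, it is worth confirming both prerequisites from the shell:

```shell
packer version               # Packer is installed and on PATH
aws sts get-caller-identity  # AWS CLI has valid credentials configured
```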

Example 1: Basic Web Server AMI

First, let’s create a simple web server AMI with Nginx pre-installed:

# webserver.pkr.hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1"
    }
  }
}

# Variables for reusability
variable "region" {
  type    = string
  default = "us-west-2"
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

variable "ami_name_prefix" {
  type    = string
  default = "webserver"
}

# Data source to get the latest Ubuntu AMI
data "amazon-ami" "ubuntu" {
  filters = {
    name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    root-device-type    = "ebs"
    virtualization-type = "hvm"
  }
  most_recent = true
  owners      = ["099720109477"] # Canonical
  region      = var.region
}

# Build-time locals; the legacy {{timestamp}} function is not interpolated
# in HCL2 templates, so derive a clean timestamp with HCL functions instead
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

# Source configuration
source "amazon-ebs" "webserver" {
  ami_name        = "${var.ami_name_prefix}-${local.timestamp}"
  ami_description = "Web server with Nginx and custom configuration"
  instance_type   = var.instance_type
  region          = var.region
  source_ami      = data.amazon-ami.ubuntu.id
  ssh_username    = "ubuntu"

  # Additional EBS configuration
  ebs_optimized = true

  # Security group allowing SSH access (restrict this CIDR in production)
  temporary_security_group_source_cidrs = ["0.0.0.0/0"]

  # Tags for the AMI
  tags = {
    Name        = "${var.ami_name_prefix}-${local.timestamp}"
    Environment = "production"
    OS          = "Ubuntu"
    Base        = "{{ .SourceAMI }}"
    CreatedBy   = "Packer"
  }
}

# Build configuration
build {
  name = "webserver-build"
  sources = ["source.amazon-ebs.webserver"]

  # Update system packages
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get upgrade -y"
    ]
  }

  # Install and configure Nginx
  provisioner "shell" {
    inline = [
      "sudo apt-get install -y nginx",
      "sudo systemctl enable nginx",
      "sudo systemctl start nginx"
    ]
  }

  # Upload custom Nginx configuration
  provisioner "file" {
    source      = "files/nginx.conf"
    destination = "/tmp/nginx.conf"
  }

  # Upload website files
  provisioner "file" {
    source      = "files/website/"
    destination = "/tmp/website/"
  }

  # Configure Nginx and deploy website
  provisioner "shell" {
    inline = [
      "sudo cp /tmp/nginx.conf /etc/nginx/nginx.conf",
      "sudo cp -r /tmp/website/* /var/www/html/",
      "sudo chown -R www-data:www-data /var/www/html",
      "sudo nginx -t",
      "sudo systemctl restart nginx"
    ]
  }

  # Install monitoring agent
  provisioner "shell" {
    script = "scripts/install-monitoring.sh"
  }

  # Security hardening
  provisioner "shell" {
    scripts = [
      "scripts/security-hardening.sh",
      "scripts/cleanup.sh"
    ]
  }
}
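
Rather than editing the defaults, variables can be overridden per build with a `.pkrvars.hcl` file (the filename below is just an example):

```hcl
# webserver.us-east.pkrvars.hcl
region          = "us-east-1"
instance_type   = "t3.small"
ami_name_prefix = "webserver-east"
```

Pass it with `packer build -var-file=webserver.us-east.pkrvars.hcl webserver.pkr.hcl`, or name the file `*.auto.pkrvars.hcl` and Packer loads it automatically.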

Example 2: Application Server AMI with Secrets

For more complex applications that require secrets and environment-specific configurations:

# app-server.pkr.hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1"
    }
  }
}

variable "app_version" {
  type        = string
  description = "Application version to build"
}

variable "environment" {
  type    = string
  default = "staging"
  validation {
    condition = contains(["development", "staging", "production"], var.environment)
    error_message = "Environment must be development, staging, or production."
  }
}

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "app-server" {
  ami_name      = "app-server-${var.environment}-${var.app_version}-${local.timestamp}"
  instance_type = "t3.small"
  region        = "us-west-2"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
  ssh_username = "ubuntu"
  
  # Use IAM instance profile for AWS API access
  iam_instance_profile = "packer-builder-role"
  
  # Encrypt the root volume
  encrypt_boot = true
  kms_key_id   = "alias/ami-encryption-key"
}

build {
  sources = ["source.amazon-ebs.app-server"]

  # Install Docker and Docker Compose
  provisioner "shell" {
    script = "scripts/install-docker.sh"
  }

  # Install AWS CLI and configure
  provisioner "shell" {
    inline = [
      "curl 'https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip' -o 'awscliv2.zip'",
      "sudo apt-get install -y unzip",
      "unzip awscliv2.zip",
      "sudo ./aws/install"
    ]
  }

  # Download application artifacts from S3
  provisioner "shell" {
    environment_vars = [
      "APP_VERSION=${var.app_version}",
      "ENVIRONMENT=${var.environment}"
    ]
    script = "scripts/download-app.sh"
  }

  # Configure application
  provisioner "file" {
    source      = "configs/${var.environment}/"
    destination = "/tmp/app-config/"
  }

  provisioner "shell" {
    inline = [
      "sudo mkdir -p /opt/myapp",
      "sudo cp -r /tmp/app-config/* /opt/myapp/",
      "sudo chown -R ubuntu:ubuntu /opt/myapp"
    ]
  }

  # Install and configure systemd service
  provisioner "file" {
    source      = "services/myapp.service"
    destination = "/tmp/myapp.service"
  }

  provisioner "shell" {
    inline = [
      "sudo cp /tmp/myapp.service /etc/systemd/system/",
      "sudo systemctl daemon-reload",
      "sudo systemctl enable myapp"
    ]
  }
}
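
The build uploads a `services/myapp.service` file that isn't shown above. A minimal sketch of what such a unit might look like (the ExecStart path and user are assumptions, not from the original):

```ini
[Unit]
Description=My application
After=network-online.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```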

Building Docker Images with Packer

Docker images built with Packer can incorporate complex provisioning logic that might be cumbersome in traditional Dockerfiles.

Example 1: Multi-Stage Application Image

# docker-app.pkr.hcl
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = "~> 1"
    }
  }
}

variable "app_version" {
  type = string
}

variable "base_image" {
  type    = string
  default = "ubuntu:20.04"
}

# In HCL2, env() may only be used in a variable default, so read the
# registry password from the environment here rather than inline
variable "docker_password" {
  type      = string
  default   = env("DOCKER_PASSWORD")
  sensitive = true
}

source "docker" "app-base" {
  image  = var.base_image
  commit = true
  changes = [
    "EXPOSE 8080",
    "ENV APP_VERSION=${var.app_version}",
    "ENV NODE_ENV=production",
    "WORKDIR /app",
    "CMD [\"/app/start.sh\"]"
  ]
}

build {
  sources = ["source.docker.app-base"]

  # Install system dependencies
  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y curl wget gnupg2 software-properties-common",
      "curl -fsSL https://deb.nodesource.com/setup_16.x | bash -",
      "apt-get install -y nodejs",
      "npm install -g pm2"
    ]
  }

  # Create application user
  provisioner "shell" {
    inline = [
      "useradd -m -s /bin/bash appuser",
      "mkdir -p /app",
      "chown appuser:appuser /app"
    ]
  }

  # Copy application files
  provisioner "file" {
    source      = "src/"
    destination = "/app/"
  }

  # Install application dependencies
  provisioner "shell" {
    inline = [
      "cd /app && npm ci --omit=dev",
      "chown -R appuser:appuser /app"
    ]
  }

  # Configure startup script
  provisioner "file" {
    source      = "scripts/start.sh"
    destination = "/app/start.sh"
  }

  provisioner "shell" {
    inline = [
      "chmod +x /app/start.sh",
      "chown appuser:appuser /app/start.sh"
    ]
  }

  # Security and cleanup
  provisioner "shell" {
    inline = [
      "apt-get clean",
      "rm -rf /var/lib/apt/lists/*",
      "rm -rf /tmp/*",
      "rm -f /root/.bash_history"
    ]
  }

  # Tag and push to registry
  post-processor "docker-tag" {
    repository = "mycompany/myapp"
    tags       = [var.app_version, "latest"]
  }

  post-processor "docker-push" {
    login          = true
    login_username = "mycompany"
    login_password = var.docker_password
  }
}
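
Once pushed, the image runs like any other; the published port matches the EXPOSE 8080 set in the source block:

```shell
docker run --rm -p 8080:8080 mycompany/myapp:latest
```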

Example 2: Development Environment Image

# dev-environment.pkr.hcl
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = "~> 1"
    }
  }
}

source "docker" "dev-base" {
  image  = "ubuntu:20.04"
  commit = true
  changes = [
    "ENV TERM=xterm-256color",
    "ENV SHELL=/bin/zsh",
    "WORKDIR /workspace",
    "EXPOSE 3000 8080 9000"
  ]
}

build {
  sources = ["source.docker.dev-base"]

  # Install development tools
  provisioner "shell" {
    # Each inline entry runs as its own command, so keep the package
    # list in a single install invocation
    inline = [
      "apt-get update",
      "DEBIAN_FRONTEND=noninteractive apt-get install -y curl wget git vim neovim zsh build-essential python3 python3-pip nodejs npm docker.io docker-compose postgresql-client redis-tools jq tree htop tmux"
    ]
  }

  # Install Oh My Zsh
  provisioner "shell" {
    inline = [
      "sh -c \"$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\" \"\" --unattended || true",
      "chsh -s $(which zsh)"
    ]
  }

  # Install development language versions
  provisioner "shell" {
    script = "scripts/install-languages.sh"
  }

  # Configure development environment
  provisioner "file" {
    source      = "dotfiles/"
    destination = "/root/"
  }

  # Install VS Code extensions list
  provisioner "file" {
    source      = "configs/vscode-extensions.txt"
    destination = "/workspace/vscode-extensions.txt"
  }

  # Final setup
  provisioner "shell" {
    inline = [
      "npm install -g nodemon typescript ts-node",
      "pip3 install black flake8 pytest",
      "git config --global init.defaultBranch main"
    ]
  }
}
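
As written, this build commits an unnamed image. Adding a docker-tag post-processor inside the build block gives it a usable name (the repository name below is an assumption):

```hcl
  post-processor "docker-tag" {
    repository = "mycompany/dev-environment"
    tags       = ["latest"]
  }
```

The tagged image can then be started with the workspace mounted in, e.g. `docker run -it -v "$(pwd):/workspace" mycompany/dev-environment:latest zsh`.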

Best Practices for Packer

1. Use Variables for Flexibility

variable "environment" {
  type = string
  validation {
    condition = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

2. Implement Proper Error Handling

provisioner "shell" {
  inline = [
    "set -e",  # Exit on any error
    "sudo apt-get update || (sleep 5 && sudo apt-get update)"
  ]
  pause_before = "10s"  # Wait before execution
}
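
The retry-on-failure pattern above can be factored into a small POSIX-sh helper that provisioning scripts source (a sketch, not a Packer feature):

```shell
#!/bin/sh
# retry N CMD... : run CMD up to N times, sleeping 1s between attempts.
# Useful for riding out transient apt/network failures during provisioning.
retry() {
  attempts=$1
  shift
  i=1
  while true; do
    "$@" && return 0                      # success: stop retrying
    [ "$i" -ge "$attempts" ] && return 1  # out of attempts: give up
    i=$((i + 1))
    sleep 1
  done
}

# Example: retry apt-get update up to 3 times
# retry 3 sudo apt-get update
```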

3. Use Data Sources for Dynamic AMI Selection

data "amazon-ami" "latest-ubuntu" {
  filters = {
    name = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
  }
  most_recent = true
  owners      = ["099720109477"]
}

4. Organize Scripts and Files

project/
├── builds/
│   ├── webserver.pkr.hcl
│   └── database.pkr.hcl
├── scripts/
│   ├── install-docker.sh
│   └── security-hardening.sh
├── files/
│   ├── nginx.conf
│   └── app.conf
└── configs/
    ├── dev/
    └── prod/

5. Use Breakpoints for Debugging

provisioner "breakpoint" {
  disable = false
  note    = "Check if nginx is configured correctly"
}

6. Implement Proper Cleanup

provisioner "shell" {
  inline = [
    "sudo apt-get clean",
    "sudo rm -rf /var/lib/apt/lists/*",
    "sudo rm -rf /tmp/*",
    "rm -f ~/.bash_history"
  ]
}

Advanced Packer Features

Parallel Builds

Build multiple images simultaneously:

build {
  sources = [
    "source.amazon-ebs.web-server",
    "source.docker.web-app"
  ]
  
  # Shared provisioners
  provisioner "shell" {
    inline = ["echo 'Common setup'"]
  }
}
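
Individual sources in a parallel build can be selected or skipped on the command line by their type.name path:

```shell
# Build only the Docker image
packer build -only='docker.web-app' .

# Build everything except the AMI
packer build -except='amazon-ebs.web-server' .
```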

Build Matrices

Create multiple variations of the same image:

packer build -var 'environment=dev' -var 'region=us-west-2' webserver.pkr.hcl
packer build -var 'environment=prod' -var 'region=us-east-1' webserver.pkr.hcl

Integration with CI/CD

Example GitHub Actions workflow:

name: Build AMI
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Packer
        uses: hashicorp/setup-packer@main
      - name: Build AMI
        run: |
          packer init .
          packer build -var "app_version=${{ github.sha }}" webserver.pkr.hcl
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Troubleshooting Common Issues

Permission Problems

Ensure your AWS credentials have sufficient permissions. The policy below is deliberately broad for illustration; in production, scope "ec2:*" down to the specific actions Packer needs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}

Build Failures

Use the -debug flag to pause between each step, or set the PACKER_LOG environment variable for verbose logging:

packer build -debug webserver.pkr.hcl
PACKER_LOG=1 packer build webserver.pkr.hcl

SSH Connection Issues

Configure security groups properly:

source "amazon-ebs" "example" {
  # ... other config ...
  security_group_id = "sg-12345678"
  # OR
  temporary_security_group_source_cidrs = ["0.0.0.0/0"]
}

Conclusion

HashiCorp Packer represents a paradigm shift from manual, error-prone image creation to automated, repeatable, and version-controlled processes. By treating machine images as code, teams can achieve greater consistency, faster deployments, and improved reliability across their infrastructure.

Whether you’re building AMIs for auto-scaling groups, Docker images for containerized applications, or VM templates for hybrid clouds, Packer provides the flexibility and power needed for modern infrastructure automation. The examples provided in this post should give you a solid foundation to start implementing Packer in your own infrastructure workflows.

Remember to start simple, use variables for flexibility, implement proper error handling, and gradually build up to more complex scenarios. With Packer in your toolkit, you’ll be well-equipped to handle the image management challenges of modern cloud-native applications.

Akhilesh Mishra

I am Akhilesh Mishra, a self-taught DevOps engineer with 11+ years of experience working with private and public cloud (GCP & AWS) technologies.

I also mentor DevOps aspirants on their journey, providing guided learning and mentorship.

Topmate: https://topmate.io/akhilesh_mishra/