Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or the user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Note that this does not affect existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible once the AMI has been unpublished.
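    If you pin autoscaling groups to a specific AMI, it can be worth checking periodically whether that AMI is still published. The sketch below uses the AWS CLI; the image ID is a placeholder, and the `aws` shell function is a stub that only prints the command, so the snippet can be read and run without credentials:

```shell
# Stub: print the command instead of executing it.
# Remove this line to run the real query (requires AWS credentials).
aws() { echo "would run: aws $*"; }

# Placeholder image ID; substitute the AMI your autoscaling group pins.
aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query 'Images[].CreationDate'
```

    An empty result from the real query means the AMI has been unpublished and new instances can no longer be launched from it.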

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4459.2.3.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-05d29da533997a34d Launch Stack
    HVM (arm64) ami-0df7236f58a67e79f Launch Stack
    ap-east-1 HVM (amd64) ami-08eed9309afe776ed Launch Stack
    HVM (arm64) ami-03c25a87ead2ecfcf Launch Stack
    ap-northeast-1 HVM (amd64) ami-0ec9759867cde327f Launch Stack
    HVM (arm64) ami-058a4fc36d509cd0f Launch Stack
    ap-northeast-2 HVM (amd64) ami-0a9bbabf99b91b527 Launch Stack
    HVM (arm64) ami-06ca8492b0bd2dd26 Launch Stack
    ap-south-1 HVM (amd64) ami-0e291b18220722395 Launch Stack
    HVM (arm64) ami-069733583c0f226df Launch Stack
    ap-southeast-1 HVM (amd64) ami-0c10cf21277054643 Launch Stack
    HVM (arm64) ami-09068ed11b8566e7a Launch Stack
    ap-southeast-2 HVM (amd64) ami-0550d5a0af7122010 Launch Stack
    HVM (arm64) ami-0082debfc7bf4ba8d Launch Stack
    ap-southeast-3 HVM (amd64) ami-0534c6efa38e0a563 Launch Stack
    HVM (arm64) ami-07d96837f35f40fe3 Launch Stack
    ca-central-1 HVM (amd64) ami-097e385b5c9e53d71 Launch Stack
    HVM (arm64) ami-07edda3cf739693ac Launch Stack
    eu-central-1 HVM (amd64) ami-02bcbcae9d50f5d12 Launch Stack
    HVM (arm64) ami-043a50a21f18eaff5 Launch Stack
    eu-north-1 HVM (amd64) ami-0776695c4756a300a Launch Stack
    HVM (arm64) ami-0c75f1570744ef8f5 Launch Stack
    eu-south-1 HVM (amd64) ami-0ae15a78d1ecc9433 Launch Stack
    HVM (arm64) ami-078379290a1441d0a Launch Stack
    eu-west-1 HVM (amd64) ami-0f184e7b7dd5a7e4f Launch Stack
    HVM (arm64) ami-02749ded5503d7f8a Launch Stack
    eu-west-2 HVM (amd64) ami-0258bd11b9680000f Launch Stack
    HVM (arm64) ami-065a59bceeb8c457a Launch Stack
    eu-west-3 HVM (amd64) ami-07c6c94a9af0c9716 Launch Stack
    HVM (arm64) ami-0604b1e6cc5272729 Launch Stack
    me-south-1 HVM (amd64) ami-03286c1deb1d0a230 Launch Stack
    HVM (arm64) ami-058183e63dcf3c2ad Launch Stack
    sa-east-1 HVM (amd64) ami-020471e17c597bb5e Launch Stack
    HVM (arm64) ami-03f3a32abd8243a9a Launch Stack
    us-east-1 HVM (amd64) ami-0ff03fd3dce9ab2f4 Launch Stack
    HVM (arm64) ami-0f824188b8b0c87f5 Launch Stack
    us-east-2 HVM (amd64) ami-06c1c22d5c6f9b3b6 Launch Stack
    HVM (arm64) ami-0dbc451f91f323650 Launch Stack
    us-west-1 HVM (amd64) ami-001a087f49f8f4b36 Launch Stack
    HVM (arm64) ami-02001738ef8c5402c Launch Stack
    us-west-2 HVM (amd64) ami-0bc0cbbe73aa1e523 Launch Stack
    HVM (arm64) ami-0f194eee0f96a513f Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4547.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0282c6edc40a2062f Launch Stack
    HVM (arm64) ami-0ea04bda48d43fe07 Launch Stack
    ap-east-1 HVM (amd64) ami-0ecddb868b44f4018 Launch Stack
    HVM (arm64) ami-00740e85dd256d4b8 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0b612eefd058be536 Launch Stack
    HVM (arm64) ami-0b38e1f1572745ebc Launch Stack
    ap-northeast-2 HVM (amd64) ami-0a5cc3d5571610baf Launch Stack
    HVM (arm64) ami-0848dd933d9a1556f Launch Stack
    ap-south-1 HVM (amd64) ami-0b75d78b603a384ab Launch Stack
    HVM (arm64) ami-0e06a9142c6b44577 Launch Stack
    ap-southeast-1 HVM (amd64) ami-02cc1ee54f25631d8 Launch Stack
    HVM (arm64) ami-0b88478cc9a99fcae Launch Stack
    ap-southeast-2 HVM (amd64) ami-0248119136c9f097c Launch Stack
    HVM (arm64) ami-0a36b0a29138ba9bb Launch Stack
    ap-southeast-3 HVM (amd64) ami-0a4ffdf7758d5fbab Launch Stack
    HVM (arm64) ami-0f9e3bbab6328094d Launch Stack
    ca-central-1 HVM (amd64) ami-0aab3fdfb9d66efa9 Launch Stack
    HVM (arm64) ami-01bdb04fd0fa0ca98 Launch Stack
    eu-central-1 HVM (amd64) ami-01c07a9241a29fc4f Launch Stack
    HVM (arm64) ami-0af851d8e5b77ab64 Launch Stack
    eu-north-1 HVM (amd64) ami-0a26bdf1f68700c9a Launch Stack
    HVM (arm64) ami-0c4f4955fa625904d Launch Stack
    eu-south-1 HVM (amd64) ami-07d8e5755d658fcf8 Launch Stack
    HVM (arm64) ami-005cf4ae5291a373b Launch Stack
    eu-west-1 HVM (amd64) ami-0c624f517a097178d Launch Stack
    HVM (arm64) ami-0f175ebc918abfbcd Launch Stack
    eu-west-2 HVM (amd64) ami-0dee5ba6fba008063 Launch Stack
    HVM (arm64) ami-0958816e9ec819981 Launch Stack
    eu-west-3 HVM (amd64) ami-0205da804ea11e49e Launch Stack
    HVM (arm64) ami-0d75c207d2bbd751e Launch Stack
    me-south-1 HVM (amd64) ami-0f1b433e00b60e6a7 Launch Stack
    HVM (arm64) ami-0068d7ca3f937fef9 Launch Stack
    sa-east-1 HVM (amd64) ami-0d0668ad5aebeee02 Launch Stack
    HVM (arm64) ami-08cabcde46f4e4b09 Launch Stack
    us-east-1 HVM (amd64) ami-09eaf786bfe86d5f7 Launch Stack
    HVM (arm64) ami-073248398286e888b Launch Stack
    us-east-2 HVM (amd64) ami-089cc5301c1f5516e Launch Stack
    HVM (arm64) ami-0335458f0e8de6fef Launch Stack
    us-west-1 HVM (amd64) ami-0b0bb8743353a4b75 Launch Stack
    HVM (arm64) ami-0ded650e714d5551d Launch Stack
    us-west-2 HVM (amd64) ami-07344fdea28051c97 Launch Stack
    HVM (arm64) ami-0c314f432a2315bf3 Launch Stack

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4593.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-02aad01d0887dc11b Launch Stack
    HVM (arm64) ami-08f37b7b5754f7109 Launch Stack
    ap-east-1 HVM (amd64) ami-008b3aeac59a6f961 Launch Stack
    HVM (arm64) ami-0f04d52fd0550e37e Launch Stack
    ap-northeast-1 HVM (amd64) ami-036c13af7f7410a3a Launch Stack
    HVM (arm64) ami-046901ee222a16f47 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0fb05015f0926f969 Launch Stack
    HVM (arm64) ami-0a112c7a863630845 Launch Stack
    ap-south-1 HVM (amd64) ami-069748ae0901e7399 Launch Stack
    HVM (arm64) ami-0ab81d7e77b7d4a50 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0a3579461d14d464b Launch Stack
    HVM (arm64) ami-087b370a6a07e148f Launch Stack
    ap-southeast-2 HVM (amd64) ami-067978917bc618ec3 Launch Stack
    HVM (arm64) ami-0ae446b84cdb587b6 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0b99cfcb4f90d76e5 Launch Stack
    HVM (arm64) ami-0188b8cd3109b57b1 Launch Stack
    ca-central-1 HVM (amd64) ami-0b1da6b638e5add47 Launch Stack
    HVM (arm64) ami-08445bb670a04a25d Launch Stack
    eu-central-1 HVM (amd64) ami-022c37fa04382132c Launch Stack
    HVM (arm64) ami-0f7cf05f9686ab041 Launch Stack
    eu-north-1 HVM (amd64) ami-081f73ac1b071bbb1 Launch Stack
    HVM (arm64) ami-04931e9da1519ffc2 Launch Stack
    eu-south-1 HVM (amd64) ami-002d8f09eb3f43af9 Launch Stack
    HVM (arm64) ami-04a15bfaaac84bb59 Launch Stack
    eu-west-1 HVM (amd64) ami-0031c1b101c7e119f Launch Stack
    HVM (arm64) ami-00939602ebc3f5f4b Launch Stack
    eu-west-2 HVM (amd64) ami-0100d9acfc8e11306 Launch Stack
    HVM (arm64) ami-0fae3babc89f5a69a Launch Stack
    eu-west-3 HVM (amd64) ami-026c2e937b9b971f0 Launch Stack
    HVM (arm64) ami-0ca20dfc740132564 Launch Stack
    me-south-1 HVM (amd64) ami-023f4e8d5ae034e19 Launch Stack
    HVM (arm64) ami-0c8dbd2d36013a92c Launch Stack
    sa-east-1 HVM (amd64) ami-05cdcb0cec0fc6d08 Launch Stack
    HVM (arm64) ami-07e6b2eb722c91713 Launch Stack
    us-east-1 HVM (amd64) ami-05df530eeef5bd705 Launch Stack
    HVM (arm64) ami-093d89386aa54985e Launch Stack
    us-east-2 HVM (amd64) ami-0c315a433ae9f88d6 Launch Stack
    HVM (arm64) ami-0f05927705e505dad Launch Stack
    us-west-1 HVM (amd64) ami-029ec17015fa79aa3 Launch Stack
    HVM (arm64) ami-07cf2b92c79cf2c50 Launch Stack
    us-west-2 HVM (amd64) ami-0d4bbc899b1d2ee8c Launch Stack
    HVM (arm64) ami-07f1f6782d6b825eb Launch Stack

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.6.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-065d742b53d039f10 Launch Stack
    HVM (arm64) ami-031e6aa017e3d66a4 Launch Stack
    ap-east-1 HVM (amd64) ami-05d861bfa50523be9 Launch Stack
    HVM (arm64) ami-00376960872d79ace Launch Stack
    ap-northeast-1 HVM (amd64) ami-05dd5c8176aae392e Launch Stack
    HVM (arm64) ami-0d187650ed489eb63 Launch Stack
    ap-northeast-2 HVM (amd64) ami-082997538fee72535 Launch Stack
    HVM (arm64) ami-03cc0c6cbfd15b96b Launch Stack
    ap-south-1 HVM (amd64) ami-05a8e27ad68c7c095 Launch Stack
    HVM (arm64) ami-0b2d1b5a81d288101 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0bbc11922d35e88f7 Launch Stack
    HVM (arm64) ami-019dbbc6398ee063e Launch Stack
    ap-southeast-2 HVM (amd64) ami-0453f031a5311e96c Launch Stack
    HVM (arm64) ami-09d8d953473bdd4bb Launch Stack
    ap-southeast-3 HVM (amd64) ami-06a63dc511c9781f3 Launch Stack
    HVM (arm64) ami-074bb47a98f1747b4 Launch Stack
    ca-central-1 HVM (amd64) ami-080a9e8c39c377a17 Launch Stack
    HVM (arm64) ami-05895f696017a8301 Launch Stack
    eu-central-1 HVM (amd64) ami-0099e069036c934fa Launch Stack
    HVM (arm64) ami-0c6adc94939c2f348 Launch Stack
    eu-north-1 HVM (amd64) ami-0eb12fd4cf77da266 Launch Stack
    HVM (arm64) ami-00c4b52eb4c77f737 Launch Stack
    eu-south-1 HVM (amd64) ami-06548dff7a06688c4 Launch Stack
    HVM (arm64) ami-00c72fd113bab908e Launch Stack
    eu-west-1 HVM (amd64) ami-01b7787bc0f8621e5 Launch Stack
    HVM (arm64) ami-03448c137612fac2a Launch Stack
    eu-west-2 HVM (amd64) ami-0061694a1f70ac69b Launch Stack
    HVM (arm64) ami-0e6da03e8bfc266bd Launch Stack
    eu-west-3 HVM (amd64) ami-028ac53f4abd50a0a Launch Stack
    HVM (arm64) ami-08ff956abf5f1b861 Launch Stack
    me-south-1 HVM (amd64) ami-0597951317c148292 Launch Stack
    HVM (arm64) ami-09584968f1259e17c Launch Stack
    sa-east-1 HVM (amd64) ami-0e79099b46011b2a7 Launch Stack
    HVM (arm64) ami-0a3e84660861b4e0f Launch Stack
    us-east-1 HVM (amd64) ami-08f4bc25055494068 Launch Stack
    HVM (arm64) ami-086c5cca4129f4102 Launch Stack
    us-east-2 HVM (amd64) ami-0da2ef08fd5010737 Launch Stack
    HVM (arm64) ami-02da50159337b6b16 Launch Stack
    us-west-1 HVM (amd64) ami-08befc8df1e62f5a9 Launch Stack
    HVM (arm64) ami-08292a8b7fd99dd25 Launch Stack
    us-west-2 HVM (amd64) ami-033de58d5bfead60e Launch Stack
    HVM (arm64) ami-008bca8970ab8471d Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
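    Note that EC2 user data is limited to 16 KiB, which large Ignition configs can exceed. A quick size check before launching (the one-line Ignition file here is only an illustration; normally this file is the Butane output):

```shell
# Minimal illustrative Ignition config; normally produced by Butane.
printf '{"ignition":{"version":"3.3.0"}}' > ignition.json

# EC2 rejects user data larger than 16 KiB (16384 bytes).
size=$(wc -c < ignition.json)
test "$size" -le 16384 && echo "user data fits"
```

    With the AWS CLI, the file can then be passed as `--user-data file://ignition.json` to `aws ec2 run-instances`.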

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
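    Since device naming varies by instance type (Nitro-based instances expose NVMe names such as /dev/nvme1n1 instead of /dev/xvdb), it is worth confirming the actual device name on a running instance before writing the mount unit:

```shell
# List block devices with size, type, and current mount point.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```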

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add an SSH key (or keys) via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-05df530eeef5bd705 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    Ports 2379, 2380, 4001, and 7001 need to be open between servers in the etcd cluster. Step-by-step instructions follow.

    Note: this step is only needed once.

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
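    The console steps above can also be scripted with the AWS CLI. The sketch below assumes a placeholder security group ID; the `aws` shell function stubs the CLI so the commands only print, letting you review them before running for real:

```shell
# Stub: print instead of executing; remove this line to run against AWS.
aws() { echo "would run: aws $*"; }

SG_ID="sg-0123456789abcdef0"  # placeholder; use your flatcar-testing group ID

# SSH from anywhere
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# etcd client and peer traffic between members of the same group
for port in 2379 2380 4001 7001; do
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port "$port" --source-group "$SG_ID"
done
```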

    Launching a test cluster

    We will launch three instances, pass a few parameters in the User Data, and select our security group.

    • Open the quick launch wizard to boot: Alpha ami-05df530eeef5bd705 (amd64), Beta ami-09eaf786bfe86d5f7 (amd64), or Stable ami-0ff03fd3dce9ab2f4 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys set in your config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
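    The same launch can be sketched with the AWS CLI. The image ID is the Stable us-east-1 amd64 AMI from the table above; the key name and security group ID are placeholders, and a stub again makes the command print without credentials:

```shell
aws() { echo "would run: aws $*"; }  # stub; remove this line to actually launch

aws ec2 run-instances \
  --image-id ami-0ff03fd3dce9ab2f4 \
  --count 3 \
  --instance-type t3.medium \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0 \
  --user-data file://ignition.json
```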

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file is at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, go to “Snapshots” in the EC2 dashboard and create an AMI from it. For it to work, use /dev/sda2 as the “Root device name”, and you will probably want “Hardware-assisted virtualization” as the “Virtualization type”.
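    The import and AMI registration can also be sketched with the AWS CLI. The bucket, key, AMI name, and snapshot ID below are placeholders, and the stub prints the commands instead of executing them:

```shell
aws() { echo "would run: aws $*"; }  # stub; remove this line to execute

# Import the uploaded (uncompressed) VMDK from S3 as a snapshot.
aws ec2 import-snapshot \
  --description "flatcar-vmdk" \
  --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"

# Once the import task completes, register an AMI from the snapshot,
# using /dev/sda2 as the root device as described above.
aws ec2 register-image \
  --name "flatcar-from-vmdk" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/sda2 \
  --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"
```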

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It also takes care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    
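    Note the escaping in the last line of the template: Terraform substitutes ${name} when rendering, while $${hostname} is emitted as a literal ${hostname} for the shell to expand at runtime. The rendered script line therefore behaves like this ("mynode" per the terraform.tfvars example above):

```shell
# What the rendered script effectively runs: name was already substituted
# by Terraform, hostname is expanded by the shell on the machine.
name="mynode"
hostname="$(hostname)"
echo "My name is ${name} and the hostname is ${hostname}"
```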

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.