Configure AWS IAM for kOps

On the first start of our devcontainer or GitHub Codespace, the postCreateCommand.sh file from the .devcontainer folder runs. In it, we create a new IAM group and attach several policies to it.

Among other scripts, the file executes the following:

aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEventBridgeFullAccess --group-name kops
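
As an optional check (not part of postCreateCommand.sh), you can verify the attachments by listing the managed policies on the group:

aws iam list-attached-group-policies --group-name kops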

Now, create the kOps IAM user and add the user to the kOps group:

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops

To use the newly created IAM user for kOps, set the AWS access key ID and secret access key as environment variables:

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile=kops)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile=kops)
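
The --profile=kops lookups above assume the key pair printed by create-access-key was saved into a kops AWS CLI profile. If you haven't done that yet, one way to do it (a sketch, with placeholder values taken from the create-access-key output) is:

aws configure set aws_access_key_id <AccessKeyId> --profile kops
aws configure set aws_secret_access_key <SecretAccessKey> --profile kops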

Setting up Environment Variables

Some environment variables are already set for us. If you want to double-check, you can print them with:

env | grep AWS

A cluster domain and a cluster name are also needed before we can proceed. Let’s export some variables in our terminal. In this case, we’ll combine the region and the domain to form the full cluster name.

export CLUSTER_DOMAIN=training.dx-book.com
export CLUSTER_NAME=$AWS_DEFAULT_REGION.$CLUSTER_DOMAIN
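
As an example, assuming AWS_DEFAULT_REGION is eu-west-1 (your region may differ), the resulting values would be:

echo $CLUSTER_DOMAIN   # training.dx-book.com
echo $CLUSTER_NAME     # eu-west-1.training.dx-book.com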

Let’s update some GitHub Codespace environment variables with our cluster name and domain to make sure they will be available later on. You might need to reload the Codespace, as prompted, to apply the values.

gh secret set "CLUSTER_DOMAIN" -r "$GITHUB_USER/workspace" --user --body "${CLUSTER_DOMAIN}"
gh secret set "CLUSTER_NAME" -r "$GITHUB_USER/workspace" --user --body "${CLUSTER_NAME}"

Create a Route53 Hosted Zone

A Route53 hosted zone is required for kOps to create and manage the DNS records for your cluster.

To look up the hosted zone ID for the existing parent domain:

aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="dx-book.com.") | .Id'

To create a subdomain hosted zone:

aws route53 create-hosted-zone --name $CLUSTER_DOMAIN --caller-reference $(uuidgen) > subdomain.json 
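
The response saved in subdomain.json contains the delegation set for the new hosted zone. If you want to see the assigned name servers before building the change request, you can print them:

jq -r '.DelegationSet.NameServers[]' subdomain.json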

Now let’s get the NS records for the subdomain and create a file to request a resource record change in the parent domain:

cat <<EOT > modify-parent.json
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "$CLUSTER_DOMAIN",
        "Type": "NS",
        "TTL": 60,
        "ResourceRecords": [
          {
            "Value": $(cat subdomain.json | jq '.DelegationSet.NameServers[0]')
          },
          {
            "Value": $(cat subdomain.json | jq '.DelegationSet.NameServers[1]')
          },
          {
            "Value": $(cat subdomain.json | jq '.DelegationSet.NameServers[2]')
          },
          {
            "Value": $(cat subdomain.json | jq '.DelegationSet.NameServers[3]')
          }
        ]
      }
    }
  ]
}
EOT
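
Since the name server values are interpolated by the shell, it’s worth checking that the generated file is valid JSON before submitting it:

jq . modify-parent.json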

To create the new subdomain (training.dx-book.com), change the resource record sets on the parent domain (dx-book.com):

PARENT_ZONE=$(aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="dx-book.com.") | .Id | split("/")[2]' | tr -d '"')
aws route53 change-resource-record-sets \
    --hosted-zone-id $PARENT_ZONE \
    --change-batch file://modify-parent.json
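
The command returns a ChangeInfo block with an Id and a PENDING status. If you want to check whether the change has propagated, you can query it with get-change (the Id below is a placeholder):

aws route53 get-change --id <ChangeInfo.Id from the previous output>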

You should now be able to dig your domain (or subdomain) and see the AWS Name Servers on the other end.

dig ns $CLUSTER_DOMAIN

This should return something similar to:

;; ANSWER SECTION:
training.dx-book.com.        172800  IN  NS  ns-1.<example-aws-dns>-1.net.
training.dx-book.com.        172800  IN  NS  ns-2.<example-aws-dns>-2.org.
training.dx-book.com.        172800  IN  NS  ns-3.<example-aws-dns>-3.com.
training.dx-book.com.        172800  IN  NS  ns-4.<example-aws-dns>-4.co.uk.

Please DO NOT MOVE ON until you have validated your NS records! This is required.

Step 4: Create S3 Buckets for kOps State Store and OIDC

To store the state and representation of your cluster, we need to create a dedicated S3 bucket for kOps to use. This bucket will become the source of truth for our cluster configuration.

Define environment variables for the state store as well:

export KOPS_STATE_PREFIX="${AWS_DEFAULT_REGION}-${CLUSTER_DOMAIN//./-}-kops-state"
export KOPS_STATE_STORE="s3://${AWS_DEFAULT_REGION}-${CLUSTER_DOMAIN//./-}-kops-state"
export KOPS_OIDC_STORE="${AWS_DEFAULT_REGION}-${CLUSTER_DOMAIN//./-}-oidc"
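
The ${CLUSTER_DOMAIN//./-} expansion replaces the dots in the domain with dashes so the result is a valid bucket name. For example, with AWS_DEFAULT_REGION set to eu-west-1 (an assumed value), you would get:

echo $KOPS_STATE_STORE   # s3://eu-west-1-training-dx-book-com-kops-state
echo $KOPS_OIDC_STORE    # eu-west-1-training-dx-book-com-oidc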

Let’s also make sure the environment variables above are available inside our Codespace:

gh secret set "KOPS_STATE_PREFIX" -r "$GITHUB_USER/workspace" --user --body "${KOPS_STATE_PREFIX}"
gh secret set "KOPS_STATE_STORE" -r "$GITHUB_USER/workspace" --user --body "${KOPS_STATE_STORE}"
gh secret set "KOPS_OIDC_STORE" -r "$GITHUB_USER/workspace" --user --body "${KOPS_OIDC_STORE}"

To create an S3 bucket for the kOps state store:

aws s3api create-bucket \
    --bucket ${KOPS_STATE_PREFIX} \
    --region ${AWS_DEFAULT_REGION} \
    --create-bucket-configuration LocationConstraint=${AWS_DEFAULT_REGION}

Note: We STRONGLY recommend enabling versioning on your S3 bucket in case you ever need to revert to or recover a previous version of the state store.

aws s3api put-bucket-versioning --bucket ${KOPS_STATE_PREFIX} --versioning-configuration Status=Enabled --region ${AWS_DEFAULT_REGION}
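
You can confirm that versioning is active by reading the bucket’s versioning configuration (an optional check):

aws s3api get-bucket-versioning --bucket ${KOPS_STATE_PREFIX}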

In order for ServiceAccounts to use external permissions (aka IAM Roles for ServiceAccounts), you also need a bucket for hosting the OIDC documents. While you can reuse the bucket above if you grant it a public ACL, we do recommend a separate bucket for these files.

The ACL must be public so that the AWS STS service can access the OIDC documents.

aws s3api create-bucket \
    --bucket ${KOPS_OIDC_STORE} \
    --region ${AWS_DEFAULT_REGION} \
    --object-ownership BucketOwnerPreferred \
    --create-bucket-configuration LocationConstraint=${AWS_DEFAULT_REGION}

aws s3api put-public-access-block \
    --bucket ${KOPS_OIDC_STORE} \
    --region ${AWS_DEFAULT_REGION} \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

aws s3api put-bucket-acl \
    --bucket ${KOPS_OIDC_STORE} \
    --region ${AWS_DEFAULT_REGION} \
    --acl public-read
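
As an optional check, you can read back the bucket ACL and confirm the AllUsers group has READ access:

aws s3api get-bucket-acl --bucket ${KOPS_OIDC_STORE}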

The cluster state store location must be provided whenever you use the kOps CLI. We’ll cover that soon.