Deploying Kubernetes on CoreOS to AWS

Linux containers are a relatively new abstraction framework with exciting implications for Continuous Integration and Continuous Delivery patterns. They allow appropriately designed applications to be tested, validated, and deployed in an immutable fashion at much greater speed than with traditional virtual machines. When it comes to production use, however, an orchestration framework is desirable to maintain a minimum number of container workers, balance load between them, schedule jobs, and the like. An extremely popular way of doing this is to use AWS EC2 Container Service (ECS) with the Amazon Linux distribution; however, if you find yourself making the “how do we run containers” decision, it pays to explore other technology stacks as well.

In this demo, we’ll launch a Kubernetes cluster of CoreOS on AWS. Kubernetes is a container orchestration platform that utilizes Docker to provision, run, and monitor containers on Linux. It is developed primarily by Google, and is similar to container orchestration projects they run internally. CoreOS is a lightweight Linux distribution optimized for container hosting. Related projects are Docker Inc’s “Docker Datacenter,” RedHat’s Atomic, and RancherOS.

Steps
1) Download the kubernetes package. Releases are at https://github.com/kubernetes/kubernetes/releases . This demo assumes version 1.1.8.

2) Download the coreos-kubernetes package. Releases are at https://github.com/coreos/coreos-kubernetes/releases . This demo assumes version 0.4.1.

3) Extract both. coreos-kubernetes provides a kube-aws binary used for provisioning a Kubernetes cluster in AWS (using CloudFormation), while the kubernetes package is used for its kubectl binary.

tar xzf kubernetes.tar.gz
tar xzf kube-aws-PLATFORM-ARCH.tar.gz (kube-aws-darwin-amd64.tar.gz for Macs, for instance)

4) Set up your AWS credentials and profiles.
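If you have not configured credentials yet, the AWS CLI stores them in named profiles. A minimal sketch follows; the profile name kube-demo and the key values are placeholders, not real credentials:

```ini
# ~/.aws/credentials -- "kube-demo" is a hypothetical profile name
[kube-demo]
aws_access_key_id     = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = EXAMPLESECRETKEYEXAMPLESECRETKEY

# ~/.aws/config
[profile kube-demo]
region = us-west-1
```

Export AWS_PROFILE=kube-demo (or pass --profile on each command) so the aws invocations below pick the profile up; kube-aws reads the standard AWS credential environment as well.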

5) Generate a KMS key to use for the cluster (change region from us-west-1 if desired, but you will need to change it everywhere). Make a note of the ARN of the generated key; it will be used in the cluster.yaml later.

aws kms --region=us-west-1 create-key --description="kube-aws assets"
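Rather than copying the ARN out of the JSON response by hand, you can let the CLI extract it with its --query/--output flags. The first command below is the real call (commented out so the snippet runs without AWS credentials); the second demonstrates the same extraction against a canned response with a made-up account ID:

```shell
# Real call, commented out (requires configured AWS credentials):
# KMS_ARN=$(aws kms create-key --region us-west-1 \
#   --description "kube-aws assets" \
#   --query KeyMetadata.Arn --output text)

# The same extraction shown on a canned response so the snippet is runnable;
# the account ID and key ID are placeholders:
SAMPLE='{"KeyMetadata": {"Arn": "arn:aws:kms:us-west-1:111122223333:key/EXAMPLE"}}'
KMS_ARN=$(printf '%s' "$SAMPLE" | sed -n 's/.*"Arn": "\([^"]*\)".*/\1/p')
echo "$KMS_ARN"
```

With the ARN in a variable, you can paste it straight into the kube-aws init command in the next step.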

6) Get a sample cluster.yaml. This is a configuration file later used for generating the AWS CloudFormation scripts and associated resources used to launch the cluster.

mkdir my-cluster; cd my-cluster
~/darwin-amd64/kube-aws init --cluster-name=YOURCLUSTERNAME \
 --external-dns-name=FQDNFORCLUSTER --region=us-west-1 \
 --availability-zone=us-west-1c --key-name=VALIDEC2KEYNAME \
 --kms-key-arn="arn:aws:kms:us-west-1:ARN:OF:PREVIOUSLY:GENERATED:KMS:KEY"

7) Modify the cluster.yaml with appropriate settings. “externalDNSName” expects an FQDN that will either be configured automatically (if you provide a Route53 zone ID for “hostedZoneId”) or that you will configure yourself AFTER provisioning has completed. This becomes the kube controller endpoint used by the Kubernetes control tooling.

Note that a new VPC is created for the Kubernetes cluster unless you configure it to use an existing VPC. You specify a region in the cluster.yaml, and if you don’t specify an Availability Zone then the “A” AZ will be used by default.
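The handful of keys touched in this demo look roughly like the sketch below. This is not the full file, the names and IDs are placeholders, and the exact key set varies by kube-aws version:

```yaml
clusterName: YOURCLUSTERNAME
externalDNSName: kube.example.com     # placeholder FQDN
keyName: VALIDEC2KEYNAME
region: us-west-1
availabilityZone: us-west-1c
kmsKeyArn: "arn:aws:kms:us-west-1:ARN:OF:PREVIOUSLY:GENERATED:KMS:KEY"
# hostedZoneId: ZEXAMPLE              # uncomment to let kube-aws manage the Route53 record
# workerCount: 2                      # worker node count, if the default isn't what you want
```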

8) Render the CloudFormation templates, validate them, then launch the cluster.

~/darwin-amd64/kube-aws render
~/darwin-amd64/kube-aws validate
~/darwin-amd64/kube-aws up

This will set up a short-term Certificate Authority (valid 365 days) and SSL certs (valid 90 days) for cluster communication, then launch the cluster via CloudFormation. It will also store data about the cluster locally for use with kubectl.
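If you want to watch the launch progress outside of kube-aws, you can poll the CloudFormation stack directly. The stack name should match the cluster name you passed to kube-aws init (an assumption worth verifying in the CloudFormation console); the aws call is commented out so the snippet runs without credentials:

```shell
# CLUSTER_NAME is whatever you passed to kube-aws init (placeholder here):
CLUSTER_NAME=YOURCLUSTERNAME

# aws cloudformation describe-stacks --region us-west-1 \
#   --stack-name "$CLUSTER_NAME" \
#   --query 'Stacks[0].StackStatus' --output text
# Expect CREATE_COMPLETE once "kube-aws up" has finished.

echo "stack to check: $CLUSTER_NAME"
```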

9) After the cluster has come up, an EIP will be output. Assign this EIP to the FQDN you used for externalDNSName in cluster.yaml if you did not allow kube-aws to configure this automatically via Route53. This is important, as it is how the tooling will try to reach the cluster.
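If your zone lives in Route53 but you skipped hostedZoneId, one way to point the FQDN at the EIP is a change-resource-record-sets call. A sketch, with kube.example.com, Z1EXAMPLE, and 52.0.0.10 standing in for your FQDN, hosted zone ID, and the EIP kube-aws printed; the aws call itself is commented out since it modifies live DNS:

```shell
# Write the change batch; all values below are placeholders.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "kube.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "52.0.0.10" }]
    }
  }]
}
EOF

# aws route53 change-resource-record-sets \
#   --hosted-zone-id Z1EXAMPLE --change-batch file://change-batch.json

echo "wrote change-batch.json"
```

Afterward, dig +short against the FQDN should return the EIP once the record propagates.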

10) You can then start playing with the cluster. My sample session:

# Display active Kubernetes nodes
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig get nodes
NAME                                     STATUS                     AGE
ip-10-0-x-x.us-west-1.compute.internal   Ready,SchedulingDisabled   19m
ip-10-0-x-x.us-west-1.compute.internal   Ready                      19m
ip-10-0-x-x.us-west-1.compute.internal   Ready                      19m

# Display name and EIP of the cluster
~/darwin-amd64/kube-aws status
Cluster Name:    CLUSTERNAME
Controller IP:   a.b.c.d

# Launch the "nginx" Docker image as container instance "my-nginx"
# 2 replicas, wire port 80
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig run my-nginx --image=nginx --replicas=2 --port=80
deployment "my-nginx" created

# Show process list
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig get po
NAME                        READY     STATUS    RESTARTS   AGE
my-nginx-2494149703-2dhrr   1/1       Running   0          2m
my-nginx-2494149703-joqb5   1/1       Running   0          2m

# Expose port 80 on the my-nginx instances via an Elastic Load Balancer
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig expose deployment my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed

# Show result for the service
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig get svc my-nginx -o wide
NAME       CLUSTER-IP   EXTERNAL-IP                                   PORT(S)   AGE   SELECTOR
my-nginx   10.x.0.y     LONGELBHOSTNAME.us-west-1.elb.amazonaws.com   80/TCP    3m    run=my-nginx

# Describe the my-nginx service. This will show the CNAME of the ELB that
# was created and which exposes port 80
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig describe service my-nginx
Name:                   my-nginx
Namespace:              default
Labels:                 run=my-nginx
Selector:               run=my-nginx
Type:                   LoadBalancer
IP:                     10.x.0.y
LoadBalancer Ingress:   LONGELBHOSTNAME.us-west-1.elb.amazonaws.com
Port:                   <unset> 80/TCP
NodePort:               <unset> 31414/TCP
Endpoints:              10.a.b.c:80,10.d.e.f:80
Session Affinity:       None
Events:
  FirstSeen   LastSeen   Count   From                    SubobjectPath   Type     Reason                 Message
  ---------   --------   -----   ----                    -------------   ------   ------                 -------
  4m          4m         1       {service-controller }                   Normal   CreatingLoadBalancer   Creating load balancer
  4m          4m         1       {service-controller }                   Normal   CreatedLoadBalancer    Created load balancer

Thus we have created a three-node Kubernetes cluster (one controller, two workers) running two copies of the nginx container, and set up an ELB to balance traffic between the instances.
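When you are done experimenting, tear things down in reverse order: the service first (so Kubernetes deletes the ELB it provisioned), then the deployment, then the whole stack. These reuse the binaries from the steps above and are commented out here because they act on live AWS resources; kube-aws destroy is the cleanup subcommand in the coreos-kubernetes tooling, but confirm your version supports it before relying on it:

```shell
# Delete the service first so Kubernetes removes the ELB it created:
# ~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig delete service my-nginx
# ~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig delete deployment my-nginx

# Then destroy the CloudFormation stack and everything kube-aws created:
# ~/darwin-amd64/kube-aws destroy

CLEANUP_NOTE="cleanup commands listed above"
echo "$CLEANUP_NOTE"
```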

Kubernetes certainly has operational complexity to trade off against features and robustness. There are a lot of moving parts to maintain, and at Stelligent we tend to recommend AWS ECS for situations in which it will suffice.


Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.
