Kubernetes on AWS with kops

I evaluated ways to deploy a Kubernetes cluster on AWS and settled on kops, since it is well supported by the Kubernetes community and can set up HA clusters.


Make sure your AWS CLI is set up properly and the kops binary is on your path. You’ll also need kubectl, the Kubernetes CLI (on macOS you can install it via brew install kubernetes-cli).

First, create a S3 bucket for kops to keep the cluster state:

$ aws s3 mb s3://sbrosinski-k8s
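The kops docs recommend enabling versioning on the state bucket, so you can recover an earlier cluster spec if something gets mangled (bucket name as above):

```shell
$ aws s3api put-bucket-versioning \
    --bucket sbrosinski-k8s \
    --versioning-configuration Status=Enabled
```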

Decide on a Route53 hosted zone to use for all Kubernetes hostnames (I used b7i.de) and on a cluster name (k8s.b7i.de). Once the Kubernetes instances boot up, they will create additional entries in this hosted zone. In our case, it’s going to look like this once the cluster is up:

(Screenshot: Route53 DNS entries created for k8s.b7i.de)
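If you want to double-check the zone before creating the cluster, you can confirm the hosted zone exists and that its NS records resolve publicly (zone name as above; your output will differ):

```shell
$ aws route53 list-hosted-zones-by-name --dns-name b7i.de
$ dig ns b7i.de +short
```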

Then set these two environment variables:

$ export KOPS_STATE_STORE=s3://sbrosinski-k8s
$ export CLUSTER_NAME=k8s.b7i.de

Now you’re ready to create a cluster. For my simple test I’m just using a single AZ and I want to reuse my existing VPC (kops will create a new subnet in this VPC):

$ kops create cluster \
    --cloud=aws \
    --zones=eu-west-1a \
    --name=k8s.b7i.de \
    --vpc=vpc-9a9be8fe \
    --network-cidr=10.0.0.0/16 \
    --dns-zone=b7i.de

You can then edit the cluster settings with one of these commands:

  • List clusters with: kops get cluster
  • Edit this cluster with: kops edit cluster k8s.b7i.de
  • Edit your node instance group: kops edit ig --name=k8s.b7i.de nodes
  • Edit your master instance group: kops edit ig --name=k8s.b7i.de master-eu-west-1a

Once you’re happy, actually create the cluster on AWS with:

$ kops update cluster k8s.b7i.de --yes

Then wait; it takes quite some time for the instances to boot and for the DNS entries to appear in the zone. Once everything is up you should be able to list the Kubernetes nodes:

$ kubectl get nodes

NAME                                        STATUS    AGE
ip-10-0-32-243.eu-west-1.compute.internal   Ready     1m
ip-10-0-32-244.eu-west-1.compute.internal   Ready     55s
ip-10-0-54-213.eu-west-1.compute.internal   Ready     3m
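kops also ships a validate command that checks that the masters and nodes are up and that the cluster API is reachable; it’s handy while waiting for the boot to finish (assuming KOPS_STATE_STORE is still exported as above):

```shell
$ kops validate cluster --name=k8s.b7i.de
```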

To enable the Kubernetes UI you need to install the UI service:

$ kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
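The dashboard runs as a pod in the kube-system namespace; you can check that it came up before starting the proxy (the exact pod name depends on the dashboard version):

```shell
$ kubectl get pods --namespace=kube-system
```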

Then you can use the kubectl proxy to access the UI from your machine:

$ kubectl proxy --port=8080 &

The UI should now be available at http://localhost:8080
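A quick way to check the proxy is working is to hit the API root through it, which should return a JSON list of API versions (the dashboard itself is typically reachable via the /ui redirect, though paths vary between dashboard versions):

```shell
$ curl -s http://localhost:8080/api/
```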

To test our new Kubernetes cluster, we could deploy a simple service made up of some nginx containers:

Create an nginx deployment:

$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
$ kubectl get pods

NAME                       READY     STATUS    RESTARTS   AGE
my-nginx-379829228-xb9y3   1/1       Running   0          10s
my-nginx-379829228-yhd25   1/1       Running   0          10s

$ kubectl get deployments

NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-nginx   2         2         2            2           29s

Expose the deployment as a service. This will create an ELB in front of those 2 containers and allow us to access them publicly:

$ kubectl expose deployment my-nginx --port=80 --type=LoadBalancer

$ kubectl get services -o wide

NAME         CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)   AGE       SELECTOR
kubernetes   100.64.0.1      <none>                                                                   443/TCP   25m       <none>
my-nginx     100.70.129.69   a2f17a8c6aa9c11e697ef02f5623d251-981392638.eu-west-1.elb.amazonaws.com   80/TCP    19m       run=my-nginx

There is an ELB running on a2f17a8c6aa9c11e697ef02f5623d251-981392638.eu-west-1.elb.amazonaws.com with our nginx containers behind it:
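Once the ELB’s DNS name resolves (which can take a minute or two), you can verify from the command line that nginx is answering; grepping for the title should show the nginx welcome page:

```shell
$ curl -s http://a2f17a8c6aa9c11e697ef02f5623d251-981392638.eu-west-1.elb.amazonaws.com | grep title
```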

(Screenshot: nginx welcome page served through the ELB)

Now, to get rid of the cluster we can completely remove all AWS resources with:

$ kops delete cluster --name=k8s.b7i.de --yes
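If you’re done with kops entirely, you can also remove the state bucket. Note that --force only deletes current objects; if you enabled versioning on the bucket, you may need to delete the object versions first (e.g. via the console):

```shell
$ aws s3 rb s3://sbrosinski-k8s --force
```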
About Stephan
Senior Dev at Smaato Inc. - Adtech, Data Engineering, Stream Processing with Scala, Java, Python, Go, Spark & Kafka - Automation Fanatic - Father of 2