Migrating My Site To Kubernetes :kubernetes:
March 4th, 2018
Previously, when I brought my site back online, I briefly mentioned the simple setup I threw together: Caddy running on a tiny GCE VM with a few scripts. Since then I’ve had plenty of time to experience the awesomeness that is managing services with Kubernetes at work while developing Kubernetes’s testing infrastructure (which we run on GKE).
So I decided, of course, that it was only natural to migrate my own service(s) to Kubernetes for maximum dog-fooding. :kubernetes: ↔ :dog:
This turned out to be even easier than expected and I was quickly up and running on a toy single-node cluster running on a spare linux box at home with the help of the excellent official docs for setting up a cluster with kubeadm. After that I set up ingress-nginx to handle ingress to my service(s) and kube-lego to manage letsencrypt certificates. I then replaced Caddy with my own minimal containerized Go service to continue having GitHub webhooks trigger site updates. :go_gopher:
I did run into the following hiccups:
1) To get in-cluster DNS resolution of external services I needed to configure kube-dns with
`kubectl apply -f ./k8s/kube-dns-configmap.yaml`, where my `kube-dns-configmap.yaml` contains:

```yaml
# Use Google's public DNS to resolve external services
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```
2) I also needed to configure RBAC for
kube-lego, which doesn’t currently ship with RBAC configured out of the box. Again, this just involved applying a config update based on the comments at jetstack/kube-lego#99 with
`kubectl apply -f k8s/kube-lego.yaml`. The config below is probably giving
kube-lego a lot more access than it needs, but I wasn’t particularly concerned about this since this is a toy “cluster” for my personal site and the service is already managing my TLS certificates. :shrug:
```yaml
# Complete setup for kube-lego.
# The only thing specific to my cluster here is the lego.email setting,
# the rest is just kube-lego with RBAC.
# Thanks to comments at: https://github.com/jetstack/kube-lego/issues/99
apiVersion: v1
kind: Namespace
metadata:
  name: kube-lego
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # modify this to specify your address
  lego.email: "firstname.lastname@example.org"
  # configure for letsencrypt's production api
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: lego
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  - pods
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
- apiGroups:
  - "extensions"
  - ""
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - get
  - update
  - create
  - list
  - patch
  - delete
  - watch
- apiGroups:
  - "*"
  - ""
  resources:
  - events
  - certificates
  - secrets
  verbs:
  - create
  - list
  - update
  - get
  - patch
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lego
subjects:
- kind: ServiceAccount
  name: lego
  namespace: kube-lego
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lego
  namespace: kube-lego
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      serviceAccountName: lego
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
---
```
After applying these two changes the rest of my very simple config to deploy the Go service
behind automatic TLS termination worked flawlessly. Since then managing the
site has been an excellent experience with the power of
kubectl, the Kubernetes
“Swiss Army knife”.
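For reference, the TLS termination half of that hinges on one annotation: kube-lego watches for Ingresses annotated with `kubernetes.io/tls-acme: "true"` and provisions certificates into the referenced secret. A hedged sketch following kube-lego’s documented pattern (the names and hostname here are placeholders, not my actual config):

```yaml
# Hypothetical Ingress following kube-lego's documented pattern;
# "site" and "example.org" are placeholders, not my actual config.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site
  annotations:
    # tells ingress-nginx to handle this Ingress
    kubernetes.io/ingress.class: "nginx"
    # tells kube-lego to request and renew a certificate for it
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - example.org
    # kube-lego stores the letsencrypt certificate in this secret
    secretName: site-tls
  rules:
  - host: example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: site
          servicePort: 80
```

With this in place ingress-nginx terminates TLS using the certificate from `site-tls` and proxies plain HTTP to the backing service.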
If you haven’t given Kubernetes a try but you are already comfortable with Docker, you should. Kubernetes makes managing services easy and portable. The same
`kubectl` commands I use to debug our services for the Kubernetes project’s infrastructure on GKE work just as well on my toy cluster at home. :smile:
If you want to try hosting on Kubernetes with much less effort, Google Cloud offers a 12-month, $300 free trial credit and an always-free tier, both of which include Google Kubernetes Engine.
We use GKE heavily for the project infrastructure and I can speak highly of its ease of use and the freedom it gives you to focus on your services without worrying about setting up and maintaining all of the pluggable Kubernetes bits such as logging, master upgrades, node auto-repair, IAM, cluster networking, etc.
If my site were a serious production service instead of a toy learning experience I would seriously look towards GKE instead of a one-node “cluster” running on a DIY “server” sitting by my desk at home, but setting up a toy cluster with
kubeadm was a great experience for experimenting with Kubernetes. I can recommend kubeadm for similar experiments; it’s quite simple to use once you have all the prerequisites installed and configured, and the docs are quite good. However, it won’t solve many of the things you’ll want for a production cluster.
You may also want to look through the list of the many CNCF-certified Kubernetes conformant products for other options if for some reason neither of these sounds appealing to you.
If you really just want to play with it first (and not host anything), check out minikube.
I also used Calico for my overlay network, but I haven’t exercised it much yet so I can’t really comment on it.
Kubernetes secrets are awesome. My simple Go service can just read in the GitHub webhook secret as an environment variable injected into the container without worrying about how the secret is loaded and stored.
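A sketch of that pattern (the names below are hypothetical placeholders, not my actual manifests): define a Secret, then reference it from the container’s env via `secretKeyRef` so the service only ever reads an environment variable:

```yaml
# Hypothetical example of injecting a webhook secret as an env var;
# names, image, and the HOOK_SECRET variable are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: github-webhook
type: Opaque
stringData:
  hook-secret: "replace-with-your-shared-webhook-secret"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: site
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: site
    spec:
      containers:
      - name: site
        image: example/site:latest
        env:
        # the service just reads HOOK_SECRET from its environment
        - name: HOOK_SECRET
          valueFrom:
            secretKeyRef:
              name: github-webhook
              key: hook-secret
```

Rotating the secret is then just updating the Secret object and restarting the pod; the service code never changes.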
To get a one-node cluster working you need to remove the master taint (`kubectl taint nodes --all node-role.kubernetes.io/master-`). This is a terrible idea for a production cluster but great for tinkering and effectively using the kubelet as your PID 1.
UPDATE: My site is on Netlify now, but I still run my own Kubernetes cluster to host other small projects. Hosting the site on a toy Kubernetes cluster worked well, except when the power went out at my apartment … I’d like my site to be online even then, hence Netlify :upside_down_face: