
Traefik as Ingress for a Raspberry Pi K3S Cluster @ Home w/kube-vip

I recently learned that Raspberry Pis can be netbooted by following this workshop by Alex Ellis. It’s amazing, you should check it out. Seriously! Invest in yourself.

After setting up the Pi cluster, I put it to work by installing kube-vip and then K3S (using k3sup).

I want my development cluster to be accessible from the Internet, and to accomplish that for pennies. I decided to use port forwarding through my router to get Ingress working with the public Internet. Here is a diagram showing my setup.

My K3S cluster consists of four Raspberry Pis, each running a kube-vip pod. The pods elect a leader, and the VIP (virtual IP address) resolves to the leader node’s MAC address. If the leader node goes offline, a new leader is elected on another node, and I can keep using the same VIP for my port forwarding. Without a VIP, I’d be forced to forward to a single node, and if that node went offline, I’d lose access to my cluster.
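
If you’re curious which node currently holds the VIP, a quick check from another machine on the LAN looks something like this. The address is the VIP used later in this post (192.168.2.200), and the namespace assumes kube-vip was installed into kube-system per its K3S instructions:

# ping the VIP, then see which MAC address answered for it
ping -c 1 192.168.2.200
arp -n 192.168.2.200

# or list the kube-vip pods and check the leader's logs
kubectl get pods -n kube-system -o wide | grep kube-vip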

Port forwarding to one cluster is a short-term solution that lets me develop without extra cost. My needs are currently simple, I can manage the router and the associated risk, and I will adjust as things change.

Long term, I plan to use tunneling via the Inlets Operator (check this out for a primer). There are limitations to my short-term plan:

  1. I am relying on port forwarding, and I can only forward a given port from my router once. What if I had many clusters at home, each with its own Ingress?
  2. I use a cron job to keep my Ingress host’s DNS entry pointed at my public IP address. What if my IP changes? I’ll have to wait for the job to run. (A sketch of that job follows this list.)
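
For reference, the cron job is conceptually just “look up my public IP and push it to my DNS provider.” Below is a rough sketch rather than my exact script: the update URL (api.example-dns.com), record name, and token are placeholders, since every DNS provider exposes a different API.

#!/bin/sh
# hypothetical dynamic-DNS updater, run from cron every few minutes
# look up the current public IP
PUBLIC_IP=$(curl -s https://checkip.amazonaws.com)
# push it to the DNS provider (placeholder URL, record, and token)
curl -s -X PUT "https://api.example-dns.com/records/dev.example.com" \
  -H "Authorization: Bearer $DNS_API_TOKEN" \
  -d "{\"type\":\"A\",\"value\":\"$PUBLIC_IP\"}"

Wired into crontab with something like */5 * * * * /usr/local/bin/update-dns.sh.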

Why am I sharing this post? I was really impressed with kube-vip, and that resulted in a Twitter thread. The knowledge and software in the CNCF ecosystem is powerful. Check it out!

To set up Traefik to use the VIP provided by kube-vip, install it like so:

# Set service.externalIPs to your kube-vip $VIP
# For example, my $VIP is 192.168.2.200
# (assumes the Traefik chart repo has already been added under the alias "traefik")
helm upgrade --install traefik traefik/traefik \
  --namespace kube-system \
  --set additional.checkNewVersion=false \
  --set additional.sendAnonymousUsage=false \
  --set dashboard.ingressRoute=false \
  --set service.externalIPs=$VIP
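
Once the chart is installed, the Traefik service should advertise the VIP as an external IP:

# the EXTERNAL-IP column should include your $VIP (192.168.2.200 in my case)
kubectl get svc traefik -n kube-system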

References:

workshop for netbooting Raspberry Pis

kube-vip instructions for K3S

k3sup

traefik helm chart


Using NATS and STAN in Kubernetes

In this article we’re going to:

  1. Install NATS and STAN using Helm into Kubernetes
  2. Follow along with the NATS minimal setup tutorial
    1. Create an administrative pod for NATS and STAN
    2. Confirm NATS sub/pub is working
    3. Confirm STAN sub/pub is working

This installs NATS so that clients can use it without authenticating!

At the time of this writing, I’m not aware of a good way to install STAN with client authentication enabled, for example by referencing a ConfigMap for the account and a Secret for the password. So I opted out of requiring client authentication for NATS so that I can do minimal testing.

I will follow up on GitHub and in the NATS documentation, and circle back here to add the missing authentication settings.

# install NATS
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install nats bitnami/nats --set replicaCount=3 \
  --set auth.enabled=false \
  --set clusterAuth.enabled=true,clusterAuth.user=admin,clusterAuth.password=$password

# install STAN (notice that no client creds are needed!)
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install stan nats/stan \
  --set store.type=file --set store.file.storageSize=1Gi \
  --set stan.nats.url=nats://nats-client.default.svc.cluster.local:4222 \
  --set store.cluster.enabled=true \
  --set store.cluster.logPath=/data/stan/store/log
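
Before moving on, it’s worth confirming the releases came up. The nats-client service name matches the URL used in the STAN install above; exact pod names depend on the charts:

# all of the nats and stan pods should reach Running
kubectl get pods

# the client service used by the examples below
kubectl get svc nats-client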

Now let’s follow along in the tutorial.

# create an administrative pod with the NATS/STAN tooling and get a shell in it
kubectl run -i --rm --tty nats-box --image=synadia/nats-box --restart=Never

# inside nats-box: subscribe in the background, then publish
nats-sub -s nats-client.default.svc.cluster.local -t test &
nats-pub -s nats-client.default.svc.cluster.local -t test "this is a test"

Which should yield:

2020/05/03 23:20:11 [#1] Received on [test]: 'this is a test'

# still inside nats-box: repeat the test against STAN
stan-sub -c stan -s nats-client.default.svc.cluster.local test &
stan-pub -c stan -s nats-client.default.svc.cluster.local test "this is a test"

Which should yield:

[#1] Received: sequence:1 subject:"test" data:"this is a test" timestamp:1588550818187067125


HMAC auth with Kong in Kubernetes

This may help you set up hmac-auth with Kong in Kubernetes.

There is no guide for setting up HMAC, but the JWT guide is great. It helped me understand that Kong plugins have specific expectations for Kubernetes Secrets.

In a nutshell, Kong plugins expect Kubernetes Secrets to (1) reference the plugin type in the kongCredType field and (2) contain literals that match the Kong credential type supported by your intended Kong plugin.

kubectl create secret \
  generic a-hmac-secret -n test-namespace \
  --from-literal=kongCredType=hmac-auth \
  --from-literal=username=hook-user \
  --from-literal=secret=$hmackey

To set up the HMAC plugin, your Kubernetes Secret must define kongCredType, username, and secret. The kongCredType for HMAC is hmac-auth; username is a string that your client sends in the Authorization header; and secret is a string known to both client and server so each side can compute and verify the HMAC signature. Look at the properties on the credential object here for supporting detail.
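
To double-check what Kong will read, you can inspect the Secret created above (values come back base64-encoded):

kubectl get secret a-hmac-secret -n test-namespace -o yaml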

The below YAML should help you get HMAC authorization working for your Ingress using Kong plugins. Here are some tips:

  1. This assumes you already have Kong installed in your Kubernetes cluster and that proxy.mydomain.com resolves to the IP address of the Kong proxy.
  2. We intend to protect backend-service, which lives in the test-namespace, with HMAC auth.
  3. The Kubernetes objects we create to support HMAC auth for this service are created in the same namespace, test-namespace.
  4. Environment variables (things beginning with $) are known and set prior to running scripts where applicable.
  5. Watch the logs from your Kong deployment to see whether the Ingress/plugin setup has errors: kubectl logs deploy/kong-kong --follow --all-containers
  6. My Ingress (below) has additional configuration to redirect HTTP to HTTPS and to provision TLS certificates automatically (related links are included in comments).
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: webhook-hmac
  namespace: test-namespace
plugin: hmac-auth
config:
  hide_credentials: true
  enforce_headers: 
  - date 
  - host
  - request-line
  algorithms: 
  - hmac-sha256
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: hook-user
  namespace: test-namespace
username: hook-user
credentials:
- a-hmac-secret
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webhook-frontend
  namespace: test-namespace
  annotations:
    konghq.com/plugins: webhook-hmac
    # https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/cert-manager.md
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/configuring-https-redirect.md
    konghq.com/override: https-only
spec:
  # https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/cert-manager.md
  tls:
  - secretName: proxy-mydomain-com
    hosts:
    - proxy.mydomain.com
  rules:
  - host: proxy.mydomain.com
    http:
      paths:
      - path: /my/protected/path
        backend:
          serviceName: backend-service
          servicePort: 8080 
---

Now, requests sent to https://proxy.mydomain.com/my/protected/path will require an HMAC Authorization header as configured above and, if the signature is valid, will be forwarded to backend-service.
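
If you need to sanity-check the client side, here is a rough sketch of producing that Authorization header with openssl. Treat it as illustrative only: the exact signing-string rules (header order, request-line format) are defined by the Kong hmac-auth plugin documentation, and hook-user/$hmackey match the Secret created earlier.

# illustrative client-side signing sketch; verify details against the hmac-auth docs
DATE=$(date -u +"%a, %d %b %Y %H:%M:%S GMT")
SIGNING_STRING="date: $DATE
host: proxy.mydomain.com
GET /my/protected/path HTTP/1.1"
SIGNATURE=$(printf '%s' "$SIGNING_STRING" | openssl dgst -sha256 -hmac "$hmackey" -binary | base64)

curl https://proxy.mydomain.com/my/protected/path \
  -H "Date: $DATE" \
  -H "Authorization: hmac username=\"hook-user\", algorithm=\"hmac-sha256\", headers=\"date host request-line\", signature=\"$SIGNATURE\""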


Setting $KUBECONFIG

This should be simple. It’s an environment variable.

But, as with all things, it depends! What if you’re working with multiple clusters, or you set up and tear down clusters often, or you work with different tools that have their own expectations?

I’ve set it a few different ways:

Save cluster config file as $HOME/.kube/config

This works. But it feels messy!

I don’t like having to constantly overwrite the configuration file for my cluster’s source-of-truth.

Plus, kubectl (and maybe other programs?) overwrite this file when connections are made to clusters.

Set $KUBECONFIG path in ~/.profile

This works, too. It doesn’t scale well when you change clusters often or use a variety of tooling (due to tooling behaviors and expectations).

# Get the config file from the current folder or from home
export KUBECONFIG=kubeconfig:$HOME/.kube/config

If kubeconfig doesn’t exist in the current folder (maybe because the tool you’re using changed folders), $KUBECONFIG will resolve to $HOME/.kube/config.

The problem is that $HOME/.kube/config isn’t necessarily my desired config…it represents what was last used.
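
When in doubt about which config actually won, kubectl can tell you what it resolved:

# show the context kubectl resolved from $KUBECONFIG
kubectl config current-context

# show the merged view of everything on the $KUBECONFIG path
kubectl config view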

Set $KUBECONFIG path in this session

I was using a script like this to initialize my k8s connection in a terminal. It helped ensure predictability (my machine’s state isn’t mutated), since I had to explicitly set the context by sourcing a script in that project folder.

#!/bin/sh
# call like this: source ./start.sh
#
# Do stuff we need done before beginning a development session
# Doesn't modify the machine long-term
#
# read in the absolute path of the kubeconfig in my current directory
export KUBECONFIG=$(readlink -f kubeconfig)

Keep it simple, use kubectx

Now, I just use kubectx. Get it using arkade, along with other handy tools and k8s applications.
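
Day to day it boils down to two commands (the context name here is made up):

# list the contexts kubectl knows about
kubectx

# switch to the cluster I care about right now
kubectx my-dev-cluster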


Why move from 2 to 4 GB nodes in k8s?

I upgraded to a higher-performing set of nodes in my k8s cluster. It was easy to upgrade, and I freed up 1GB of memory.

Now my applications can use that memory or I can rely on it for bursting, w/o needing another node for memory! Below you’ll find the steps I took to upgrade and some quick math.

Upgrade steps:

  1. Create a new pool for bigger nodes
    1. Members in a pool all have the same specs
  2. Add nodes to the new pool (I used 4GB nodes)
  3. As nodes come online in the new pool, scale down nodes in the old pool (my old pool had 2GB nodes; see the drain sketch after this list)
  4. Once all the new nodes were online in the new pool, I deleted the old pool
  5. k8s took care of moving my workloads around! I love k8s.
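
Depending on your provider, scaling the old pool down may drain the nodes for you; if not, a manual drain before removing them looks roughly like this (the node name is made up, and exact flags vary by kubectl version):

# stop new pods from landing on the old node
kubectl cordon old-pool-node-1

# evict existing pods so they reschedule onto the new pool
kubectl drain old-pool-node-1 --ignore-daemonsets --delete-local-data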

Why did I do this?

The fact that this cost info was readily available at my fingertips is what prompted me to take action. Not because I’m paying the bill, but because it’s easy for me (as a developer) to see what my system is costing. And I don’t want it to run inefficiently.

The small nodes cost $10 each and came with 2GB of memory. I had three nodes, totaling 6GB of memory. On average, with my application running, this consumed 65% of the available memory, or about 3.9GB, leaving roughly 2GB free.

The bigger nodes cost $20 each and come with 4GB of memory. I currently have two nodes, totaling 8GB of memory. On average, with my application running, this consumes 3.04GB.

That’s a savings of almost 1GB! In the container world, that’s power. Why? Each node runs system processes in the kube-system namespace, and the total memory used by those processes across three small nodes is greater than the total used across two bigger nodes.
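
If you want to watch the same numbers on your own cluster and have metrics-server installed, per-node usage is one command away:

# shows CPU and memory usage per node (requires metrics-server)
kubectl top nodes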


Use a private Docker Hub repo with OpenFaaS

My function’s pods would not start! They were failing when trying to pull the image from my private repository in Docker Hub.

I had followed this article to create a docker-registry secret in my Kubernetes cluster…but when I ran kubectl describe against my pod, I saw I was still getting an authorization error.

I fixed two things, am much better off now, and wanted to make sure other folks don’t run into similar trouble. The fix was:

  1. Add the docker-registry secret to my openfaas-fn namespace
  2. Reference the secret by name in my function’s YAML (a sketch of both steps follows)
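
Here is roughly what step 1 looked like for me; the secret name and credentials below are placeholders. Step 2 is just a matter of naming that secret in the function’s YAML, and the exact field depends on your OpenFaaS version, so check the OpenFaaS docs for private registries.

# 1) create the docker-registry secret in the namespace where functions run
kubectl create secret docker-registry dockerhub-creds \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --namespace openfaas-fn

# 2) then reference dockerhub-creds by name in the function's YAML,
#    per the OpenFaaS documentation for private registries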