Categories
Software development

Traefik as Ingress for a Raspberry Pi K3S Cluster @ Home w/kube-vip

I recently learned Raspberry Pi boards can be netbooted by following this workshop by Alex Ellis. It’s amazing, you should check it out. Seriously! Invest in yourself.

After setting up the Pi cluster, I put it to work by installing kube-vip and then K3S (using k3sup).

I want my development cluster to be accessible from the Internet, and to accomplish that for pennies. I decided to use port-forwarding through my router to get Ingress working with the public Internet. Here is a diagram showing my setup.

My K3S cluster consists of four Raspberry Pis, each running a kube-vip pod. The pods elect a leader, and the leader node answers for the VIP (virtual IP address) with its own MAC address. If the leader node goes offline, a new leader is elected on another node, and I can keep using the same VIP for my port forwarding. Without a VIP, I’d be forced to forward to a single node, and if that node went offline, I’d lose access to my cluster.

Port forwarding to one cluster is a short-term solution that lets me develop without extra cost. My needs are currently simple, I can manage the router and the associated risk, and as things change I will adjust accordingly.

Long term, I plan to use tunneling via the Inlets Operator (check this out for a primer). My short-term plan has some limitations:

  1. I am relying on port-forwarding…and I can only forward a given port from my router once. What if I had many clusters at home, each with its own ingress?
  2. I use a cronjob to keep the public IP address behind my Ingress’s host DNS entry up to date. What if my IP changes? I’ll have to wait for the job to run.
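The cronjob’s core logic can be sketched in a few lines of shell. This is a hypothetical sketch: the cache path and the DNS-update command are placeholders for whatever your DNS provider’s API actually expects.

```shell
#!/bin/sh
# Cache the last-seen public IP; succeed (and update the cache) only when
# the IP changed, so the caller knows a DNS update is needed.
update_if_changed() {
    current_ip=$1
    cache_file=$2
    last_ip=$(cat "$cache_file" 2>/dev/null)
    if [ "$current_ip" != "$last_ip" ]; then
        echo "$current_ip" > "$cache_file"
        return 0    # changed: push the new IP to DNS
    fi
    return 1        # unchanged: nothing to do
}

# In the cronjob itself, something like (update-dns-record is a stand-in
# for your provider's CLI or API call):
#   ip=$(curl -s https://checkip.amazonaws.com)
#   update_if_changed "$ip" "$HOME/.last_public_ip" && update-dns-record "$ip"
```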

Why am I sharing this post? I was really impressed with kube-vip, and that resulted in a Twitter thread. The knowledge and software in the CNCF ecosystem is powerful. Check it out!

To set up Traefik to use the VIP provided by kube-vip, install it like so:

# Set service.externalIPs to your kube-vip $VIP
# For example, my $VIP is 192.168.2.200
helm upgrade --install traefik traefik/traefik \
  --namespace kube-system \
  --set additional.checkNewVersion=false \
  --set additional.sendAnonymousUsage=false \
  --set dashboard.ingressRoute=false \
  --set service.externalIPs=$VIP

References:

workshop for netbooting Raspberry Pis

kube-vip instructions for K3S

k3sup

traefik helm chart


Architecture with Auth0, an Angular SPA, and OpenFaaS on Kubernetes

I’ve had spare time lately due to the pandemic and opted to learn how to build a cloud native system. It’s been a lot of fun and I thought I’d share the architecture. Hopefully it helps you – let me know what you think?

I’ll probably end up moving the Azure and Digital Ocean components to AWS. I don’t want to give Amazon my money…but my workplace uses AWS. In other words, if I use AWS for this personal project, then I can leverage that experience in the workplace.

Also, go give this guy a follow and check out his new ebook. His software and variety of content have helped me a ton!


DRY Angular Storage

The code snippets below work in Angular 9. They ensure the implementation of the storage service is the same, regardless of whether we are persisting to Local or Session storage.

// local-storage.token.ts
import { InjectionToken } from '@angular/core';

export const LOCAL_STORAGE = new InjectionToken<Storage>('Browser Storage', {
    providedIn: 'root',
    factory: () => localStorage
});

// session-storage.token.ts
import { InjectionToken } from '@angular/core';

export const SESSION_STORAGE = new InjectionToken<Storage>('Browser Storage', {
    providedIn: 'root',
    factory: () => sessionStorage
});

// istore.ts
export interface IStore {
    get(key: string): any;
    set(key: string, value: string): void;
    remove(key: string): void;
    clear(): void;
}
// storage-service.ts
import { IStore } from './istore';
export class StorageService implements IStore {
    // it is expected that implementors of this class will inject Storage
    constructor(public storage: Storage) { }
    get(key: string) {
        return this.storage.getItem(key);
    }

    set(key: string, value: string) {
        this.storage.setItem(key, value);
    }

    remove(key: string) {
        this.storage.removeItem(key);
    }

    clear() {
        this.storage.clear();
    }
}
// session-storage.service.ts
import { Injectable, Inject } from '@angular/core';
import { SESSION_STORAGE } from './session-storage.token';
import { StorageService } from './storage-service';

/**
 * A storage service that persists to Session storage.
 *
 * @export
 * @class SessionStorageService
 */
@Injectable({
  providedIn: 'root'
})
export class SessionStorageService extends StorageService {

  constructor(@Inject(SESSION_STORAGE) public storage: Storage) {
    // inject our storage instance into the base class, in this case, session storage
    super(storage);
  }

}
// local-storage.service.ts
import { Injectable, Inject } from '@angular/core';
import { LOCAL_STORAGE } from './local-storage.token';
import { StorageService } from './storage-service';

@Injectable({
  providedIn: 'root'
})
export class LocalStorageService extends StorageService {

  constructor(@Inject(LOCAL_STORAGE) public storage: Storage) {
    // inject our storage instance into the base class, in this case, local storage
    super(storage);
  }

}
import { Injectable } from '@angular/core';
import { SessionStorageService } from './session-storage.service';

@Injectable({
  providedIn: 'root'
})
export class PreventCSRFService {

  private _key : string = 'nonce';
  public get key() : string {
    return this._key;
  }

  constructor(
    // use Session Storage for persistence
    private storage: SessionStorageService
    ) { 
    const state = this.storage.get(this.key);
    if (state === null || state === undefined) {
      const nonce = this.randomString(16);
      this.storage.set(this.key, nonce);
    }   
  }  

  /**
   * Generate a cryptographically secure random string
   * https://auth0.com/docs/api-auth/tutorials/nonce
   *
   * @returns {string}
   * @memberof PreventCSRFService
   */
  public randomString(length: number): string {
    const charset = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-._';
    let result: string = '';

    while (length > 0) {
        const bytes = new Uint8Array(16);
        let random = window.crypto.getRandomValues(bytes);

        random.forEach(function(c) {
            if (length === 0) {
                return; // acts as "continue" inside forEach
            }
            if (c < charset.length) {
                result += charset[c];
                length--;
            }
        });
    }
    return result;
  }  

  /**
   * Check if the nonce for this session matches or not
   *
   * @param {string} actual What we're comparing against
   * @returns {boolean} Whether what we got matches what we stored
   * @memberof PreventCSRFService
   */
  public nonceMatches(actual: string): boolean {
    const expected = this.storage.get(this.key);

    if (expected && actual) {
      return expected === actual;
    }

    return false;
  }


  /**
   * Get the nonce for this session
   *
   * @returns {string} A cryptographically strong random string, which is persisted in session storage
   * @memberof PreventCSRFService
   */
  public getNonce(): string {
    return this.storage.get(this.key);
  }
}

Content (Delivery Network) is King

It’s all about the user experience. As a user, when I browse to either of the following domains, I should see the same content:

  1. www.mydomain.com (www, in this case, is a sub-domain)
  2. mydomain.com (when you refer to a domain w/o a sub-domain, it is generally called the apex or root)

For example, end-users may omit the “www”! Skipping it saves precious keystrokes and time…

In addition to expecting the same content, it should (arguably must) be served over HTTPS. For example, if a request comes in over HTTP, redirect it to HTTPS.

This weekend I purchased a domain from GoDaddy for a side project. Initially I set up the domain using GoDaddy DNS to serve a SPA from a Standard Akamai CDN using Microsoft Azure. References:

  1. Setup a static website (front-end assets get pushed here)
  2. Link the site to a CDN (moving the assets closer to users)
  3. Use my custom domains for the CDN (including HTTPS)

Here is an example of the GoDaddy DNS configuration used to serve content from an Azure CDN:

Type    Key        Value
CNAME   www        domain.azureedge.net
CNAME   cdnverify  cdnverify.domain.azureedge.net

GoDaddy doesn’t let you create a CNAME with a key of @…

But I ran into a couple problems!

  1. Azure CDN does not assign SSL certs to apex/root domains. I could have manually assigned a certificate, but I’m trying to avoid having to do manual things these days.
  2. Requests to http://mydomain.com would not serve content! The name would not resolve. GoDaddy (my DNS provider at the time) does not allow creating a CNAME record for an apex/root domain.

I could have purchased a public static IPv4 address from Azure, linked it to my CDN, and set up an A record in GoDaddy pointing at the public IP. But, considering both problems above, I decided to take the following action:

  1. Delegate my domain’s DNS to an Azure DNS Zone
  2. Set up my local network (OpenWRT) to use Cloudflare for DNS resolution (my ISP was taking too long to resolve names while I tested these changes).
    1. This is a nice alternative to Google’s 8.8.8.8/8.8.4.4…why give one company all your data?!
  3. Delete the Standard Akamai CDN and set up a Standard Microsoft CDN, which has a comprehensive rules engine and other goodies.
  4. Use the Rules Engine to enforce a consistent experience…
    1. Redirect HTTP requests for my apex/root via a 301 response to https://www.mydomain.com
    2. Redirect HTTPS requests for my apex/root via a 301 response to https://www.mydomain.com
    3. Redirect all other HTTP requests via a 301 to HTTPS

Now…all your base are belong to us! I mean, requests for http://mydomain.com (insecure root), https://mydomain.com (secure root), and http://www.mydomain.com (insecure www sub-domain) will redirect to https://www.mydomain.com.
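Those redirect rules can be spot-checked from a shell. A small sketch (mydomain.com stands in for the real domain):

```shell
# Print the HTTP status code and the redirect target for a URL.
check_redirect() {
    curl -s -o /dev/null -w '%{http_code} -> %{redirect_url}\n' "$1"
}

# Each of these should report a 301 pointing at https://www.mydomain.com:
#   check_redirect http://mydomain.com
#   check_redirect https://mydomain.com
#   check_redirect http://www.mydomain.com
```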

So, I’m using the cloud for DNS, and everything has a cost. For example, two million queries against one zone will cost $1.30. Sounds like a good problem to have!


Using NATS and STAN in Kubernetes

In this article we’re going to:

  1. Install NATS and STAN using Helm into Kubernetes
  2. Follow along in a NATS minimal setup tutorial
    1. Create an administrative pod for NATS and STAN
    2. Confirm NATS sub/pub is working
    3. Confirm STAN sub/pub is working

This installs NATS so that clients can use it without authenticating!

At the time of this writing I’m not aware of a good way to install STAN with client authentication, e.g. referencing a ConfigMap for an account and a Secret for a password. So I opted out of requiring client authentication for NATS in order to do minimal testing.

I will follow up in GitHub and in the NATS documentation, and circle back here to add the missing authentication settings.

# install NATS
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install nats bitnami/nats --set replicaCount=3 \
  --set auth.enabled=false \
  --set clusterAuth.enabled=true,clusterAuth.user=admin,clusterAuth.password=$password

# install STAN (notice that no client creds are needed!)
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install stan nats/stan \
  --set store.type=file --set store.file.storageSize=1Gi \
  --set stan.nats.url=nats://nats-client.default.svc.cluster.local:4222 \
  --set store.cluster.enabled=true \
  --set store.cluster.logPath=/data/stan/store/log

Now let’s follow along in the tutorial.

kubectl run -i --rm --tty nats-box --image=synadia/nats-box --restart=Never
nats-sub -s nats-client.default.svc.cluster.local -t test &
nats-pub -s nats-client.default.svc.cluster.local -t test "this is a test"

Which should yield:

2020/05/03 23:20:11 [#1] Received on [test]: 'this is a test'

stan-sub -c stan -s nats-client.default.svc.cluster.local test &
stan-pub -c stan -s nats-client.default.svc.cluster.local test "this is a test"

Which should yield:

[#1] Received: sequence:1 subject:"test" data:"this is a test" timestamp:1588550818187067125


HMAC auth with Kong in Kubernetes

This may help you setup hmac-auth with Kong in Kubernetes.

There is no guide for setting up HMAC, but the JWT guide is great. It helped me understand that Kong Plugins have specific expectations for Kubernetes Secrets.

In a nutshell, Kong Plugins expect Kubernetes Secrets to (1) reference the plugin type in the kongCredType field and (2) contain literals that match the Kong Credential type supported by your intended Kong Plugin.

kubectl create secret \
  generic a-hmac-secret -n test-namespace \
  --from-literal=kongCredType=hmac-auth \
  --from-literal=username=hook-user \
  --from-literal=secret=$hmackey

To set up the HMAC plugin, your Kubernetes Secret must define kongCredType, username, and secret. The kongCredType for HMAC is hmac-auth; username is a string that your client sends in the Authorization header; and secret is a string that must be known by both the client and the server, used to create and verify the request signature. Look at the properties on the credential object here for supporting detail.
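To make the client side concrete, here is a sketch of producing an hmac-sha256 signature over the headers enforced below (date, host, request-line). The signing-string layout follows the hmac-auth convention of "header-name: value" lines plus the raw request line; verify the exact format against the Kong docs for your version.

```shell
# HMAC the signing string with the shared secret and base64-encode it.
sign_request() {
    secret=$1; date_header=$2; host=$3; request_line=$4
    printf 'date: %s\nhost: %s\n%s' "$date_header" "$host" "$request_line" \
        | openssl dgst -sha256 -hmac "$secret" -binary | base64
}

date_header=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
# fall back to a dummy secret if $hmackey isn't set in this shell
signature=$(sign_request "${hmackey:-example-secret}" "$date_header" \
    "proxy.mydomain.com" "GET /my/protected/path HTTP/1.1")

# The client then sends the matching Date header plus:
# Authorization: hmac username="hook-user", algorithm="hmac-sha256",
#   headers="date host request-line", signature="$signature"
```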

The below YAML should help you get HMAC authorization working for your Ingress using Kong Plugins. Here are some tips:

  1. This assumes you already have Kong installed in your Kubernetes cluster, and that the IP address of the Kong proxy ingress is associated with proxy.mydomain.com.
  2. We intend to protect backend-service with HMAC auth, which exists in the test-namespace.
  3. Kubernetes objects we create to support HMAC auth for this service will be created in the same namespace, test-namespace.
  4. Environment variables (things beginning with $) are known and set prior to running scripts where applicable.
  5. Watch logs from your Kong deployment to see whether the Ingress/Plugin setup has errors: kubectl logs deploy/kong-kong --follow --all-containers
  6. My Ingress (below) has additional configuration to redirect HTTP to HTTPS and to provision SSL automatically (related links are included in comments).
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: webhook-hmac
  namespace: test-namespace
plugin: hmac-auth
config:
  hide_credentials: true
  enforce_headers: 
  - date 
  - host
  - request-line
  algorithms: 
  - hmac-sha256
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: hook-user
  namespace: test-namespace
username: hook-user
credentials:
- a-hmac-secret
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webhook-frontend
  namespace: test-namespace
  annotations:
    konghq.com/plugins: webhook-hmac
    # https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/cert-manager.md
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/configuring-https-redirect.md
    konghq.com/override: https-only
spec:
  # https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/guides/cert-manager.md
  tls:
  - secretName: proxy-mydomain-com
    hosts:
    - proxy.mydomain.com
  rules:
  - host: proxy.mydomain.com
    http:
      paths:
      - path: /my/protected/path
        backend:
          serviceName: backend-service
          servicePort: 8080 
---

Now, requests sent to https://proxy.mydomain.com/my/protected/path will require an Authorization header supporting HMAC as configured above, and if valid, be sent to the backend-service.


Setting $KUBECONFIG

This should be simple. It’s an environment variable.

But, as with all things, it depends! What if you’re working with multiple clusters, or you setup and tear-down clusters often, or work with different tools that have certain expectations?

I’ve set it a few different ways:

Save cluster config file as $HOME/.kube/config

This works. But it feels messy!

I don’t like having to constantly overwrite the configuration file for my cluster’s source-of-truth.

Plus, kubectl (and maybe other programs?) overwrites this file when connections are made to clusters.

Set $KUBECONFIG path in ~/.profile

This works, too. It doesn’t scale well when you change clusters often or use a variety of tooling (due to tooling behaviors and expectations).

# Get the config file from the current folder or from home
export KUBECONFIG=kubeconfig:$HOME/.kube/config

If kubeconfig doesn’t exist in the current folder (maybe because the tool you’re using changed folders), $KUBECONFIG will resolve to $HOME/.kube/config.

The problem is that $HOME/.kube/config isn’t necessarily my desired config…it represents what was last used.

Set $KUBECONFIG path in this session

I was using a script like this to initialize my k8s connection in a terminal. It helped ensure predictability (my machine’s state isn’t mutated), since I had to explicitly set the context via a script in that project folder.

#!/bin/sh
# call like this: source ./start.sh
#
# Do stuff we need done before beginning a development session
# Doesn't modify the machine long-term
#
# read in the absolute path of the kubeconfig in my current directory
export KUBECONFIG=$(readlink -f kubeconfig)

Keep it simple, use kubectx

Now, I just use kubectx. Get it using arkade, along with other handy tools and k8s applications.

Categories
Hardware

Raspberry Pi Print Server

Amazing!

I snagged this bundle and set up my Pi using an existing 4GB micro SD card, following this article.

But I ran into some trouble: I kept getting an error saying this file could not be found when using the Linux PPD driver for my printer (a Samsung ML-1740): /usr/lib/cups/filter/rastertospl

I tried the universal print driver from here, but no dice! Frustrating… I hate having to turn my desktop on just to print. And, I’ll be damned if I’m going to buy something like this!

Fortunately my Google-foo skillz led me to this answer on StackExchange, and it saved my bacon. Thanks, Intarwebs.

After installing Splix I was able to select Samsung ML-1740, 2.0.0 as a print driver and it worked like a champ. Happy computing!


Why move from 2 to 4 GB nodes in k8s?

I upgraded to a higher performing set of nodes in my k8s cluster. It was easy to upgrade, and I freed up 1GB of memory.

Now my applications can use that memory or I can rely on it for bursting, w/o needing another node for memory! Below you’ll find the steps I took to upgrade and some quick math.

Upgrade steps:

  1. Create a new pool for bigger nodes
    1. Members in a pool all have the same specs
  2. Add nodes to the new pool (I used 4GB nodes)
  3. As nodes come online in the new pool, scale down nodes in the old pool (my old pool had 2GB nodes)
  4. Once all new nodes were online in the new pool, I deleted the old pool
  5. k8s took care of moving my workloads around! I love k8s.
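Steps 3 and 4 amount to cordoning and draining each old node so k8s reschedules the workloads onto the new pool. A sketch (the node name is an example; in newer kubectl the --delete-local-data flag has been renamed --delete-emptydir-data):

```shell
# pick an old-pool node; list real names with: kubectl get nodes
node="old-pool-node-1"
kubectl cordon "$node"        # stop scheduling new pods onto it
kubectl drain "$node" \
    --ignore-daemonsets \
    --delete-local-data       # evict its workloads gracefully
# repeat per old node, then delete the old pool via your provider
```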

Why did I do this?

The fact that this cost info was readily available at my fingertips is what prompted me to take action. Not because I’m paying the bill, but because it’s easy for me (as a developer) to see what my system costs. And I don’t want it to run inefficiently.

The small nodes cost $10, and each came with 2GB of memory. I had three nodes, totaling 6GB of memory. On average, with my application running, this consumed 65% of the available memory, or 3.9GB, leaving roughly 2GB.

The bigger nodes cost $20, and each comes with 4GB of memory. I currently have two nodes, totaling 8GB of memory. On average, with my application running, this consumes 3.04GB.

That’s a savings of almost 1GB! In the container world, that’s power. Why? Each node runs system processes in the kube-system namespace, and the sum of memory used by those processes on three small nodes is greater than the sum used on two bigger nodes.
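Spelled out, the memory math from above (awk used as a calculator):

```shell
# old pool: three 2GB nodes; usage averaged 65%
awk 'BEGIN { printf "old capacity: %d GB\n", 3 * 2 }'
awk 'BEGIN { printf "old usage:    %.2f GB\n", 6 * 0.65 }'
# new pool usage averaged 3.04GB, so the difference is:
awk 'BEGIN { printf "savings:      %.2f GB\n", 3.90 - 3.04 }'
```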


Use a private Docker Hub repo with OpenFaaS

My function’s pods would not start! They were failing when trying to pull the image from my private repository in Docker Hub.

I had followed this article to create a docker-registry secret in my Kubernetes cluster…but when I “kubectl describe”d my pod, I saw I was still getting an authorization error.

I fixed two things, am much better off, and wanted to make sure other folks don’t run into similar trouble. The fix was to:

  1. Add the docker-registry secret to my openfaas-fn namespace
  2. Reference the secret by name in my function’s YAML
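Fix 1 looked roughly like this; the secret name and credential variables here are examples:

```shell
# The docker-registry secret must live in the namespace where the
# function pods actually run: openfaas-fn.
ns="openfaas-fn"
kubectl create secret docker-registry my-private-repo \
    --namespace "$ns" \
    --docker-username="$DOCKER_USERNAME" \
    --docker-password="$DOCKER_PASSWORD"
```

For fix 2, the function's YAML then references that secret by name (my-private-repo in this sketch) so the image pull can authenticate.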