Our K8s Helm chart for full-stack Node.js apps
2019-05-17 · by Hannes

  •  Howto
  •  Tech

In this article I want to explain how I finally got something deployed to Kubernetes and why I started to like it, despite initially thinking I never would.

This article will show you how to...

  • deploy a helm chart to K8s
  • use helm chart dependencies
  • deploy a Node.js application with frontend and backend

Getting started with K8s is hard

Getting started with Kubernetes is hard; this cannot be said enough. I'm not the only one feeling this way, right? 😭 For quite some time I was staring at a bunch of YAML files that seemed overcomplicated compared to Docker Compose.

So I took a dive into Helm to make things easier... BUT it was the same thing, only harder, because of the crazy Go templating stuff. This is the future? Why? Please not 🙏

It was frustrating, although all the concepts so far seemed promising and I was sure there must be some truth to the hype.

Kubernetes and Helm promised to solve a lot of hard problems for developers like...

  • CI process, rolling updates and rollbacks
  • routing and load balancing
  • auto scaling
  • uptime monitoring and HA
  • cron jobs
  • backups

...and many more in a generic way. If Kubernetes is the abstraction layer everybody can agree on BE MY GUEST!

It took me some time to warm up to the concepts and I fiddled with it for a few months, but when we started to work on a new project I felt that I needed to get serious.

This is what worked

In this case starting from scratch didn't seem to work. The ultimate goal here was to come up with a lightweight CI solution to deploy GitLab repositories to Kubernetes. Since the guys at GitLab have built a fantastic product that has served us well for years, I took a look at what they are doing. It turns out their whole Auto DevOps feature is built on a bunch of bash scripts and a Helm chart.

BINGO ✅

At this point I had already set up an internal Kubernetes cluster using MicroK8s and could start with a Node.js example application. There was a lot of black magic involved, but after a few hours of figuring out the details Express was greeting me with "Welcome to Express"... OK, this was cool, and a separate instance with a unique URL was automatically deployed for every new git branch.

MAGIC 🔮

So I took a dive into the Auto DevOps Helm chart to see what was going on. First you have to understand what Helm actually does for you. Remember the crazy YAML files? Helm can create them in a much saner way, with a lot of best practices already applied. In fact, the best way to create your own Helm chart is to use helm create <chart-name>.
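
For example, scaffolding a new chart (here called yourchart, to match the structure below) is a single command:

helm create yourchart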

This is the basic structure of a helm chart:

yourchart
|-- Chart.yaml
|-- charts
|-- requirements.yaml
|-- templates
|   |-- NOTES.txt
|   |-- _helpers.tpl
|   |-- deployment.yaml
|   |-- ingress.yaml
|   `-- service.yaml
`-- values.yaml

Let me tell you about the values.yaml file in the root of the chart. It holds all the default configuration of the chart, and you can basically configure the whole chart through this file. Furthermore, Helm lets you override everything in this file when you install the chart, either directly on the command line or by providing an additional YAML file containing the values you want to override.
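
As a rough sketch (the release name, values file and tag are just placeholders), both override styles look like this:

# override a single value directly on the command line
helm upgrade --install my-release . --set image.tag=1.2.3

# or provide a whole file of overrides
helm upgrade --install my-release . -f my-values.yaml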

So far so good, but until now I only had a single Node.js backend container running. That doesn't have much to do with real-world applications, which consist of at least a frontend, a backend and a database. The database part was already solved by the GitLab implementation through chart dependencies. Read on to see how that works.

Composing helm charts

A Helm chart can itself have other charts as dependencies. I wanted to have a PostgreSQL database deployed along with my chart. Your chart's dependencies live in the requirements.yaml file in the root of the chart, which looks like this:

dependencies:
  - name: postgresql
    version: '0.7.1'
    repository: 'https://kubernetes-charts.storage.googleapis.com/'
    condition: postgresql.enabled

The dependency can be controlled with the condition: postgresql.enabled flag. Where do you think this comes from? EXACTLY, the values.yaml file has an entry like this:

# enable or disable the postgresql dependency
postgresql:
  enabled: true

So we can switch our database deployment on or off here, but it gets even better. Since postgresql is the name of the dependency, which is a Helm chart itself, you can also reconfigure everything that chart exposes in its own values.yaml file. For example, this will override the default username that is used to connect to the database:

postgresql:
  enabled: true
  # override database username
  postgresqlUsername: MyCustomUsername

This way you can change the configuration to make the dependency play nicely with your own chart's logic. A huge time saver is that Helm already has a big repository of predefined charts for all kinds of technologies. You just have to integrate them into your own chart and you get a well-maintained setup, often created by the core developers themselves.
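
If you want to try this yourself, a rough sketch with Helm 2-era commands (the chart path is a placeholder) is to pull the declared dependencies into the charts/ directory and browse the public repository:

# fetch the charts listed in requirements.yaml into charts/
helm dependency update ./yourchart

# look for existing charts in the configured repositories
helm search postgresql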

Adding a frontend deployment

So far all my findings are explained in one form or another all over the internet, but it is a steep learning curve and I hope my summary is helpful to someone going through the same process.

One thing I really struggled to find information about was best practices for deploying an application with frontend and backend logic, or let's say a more complex one... A common setup for a React / Node.js application looks like this:

graph LR
  request(Request) --> nginx(Nginx)
  nginx -- "serve /*" --> frontend("Frontend / Static Assets")
  nginx -- "proxy /api" --> backend("Backend / Node.js")

This could look a little different based on the technologies you are using, but in our JavaScript / Node.js world we normally have a statically compiled JavaScript frontend and a Node.js backend hidden behind a mature web server like Nginx. So how do we achieve this in K8s? Obviously we could package everything into a single Docker image, and a lot of projects seem to do exactly that.

But I don't like this approach because it has some ugly disadvantages 👎:

  1. We cannot use official images like nginx:stable-alpine (frontend) and node:lts-alpine (backend). Instead we have to maintain the setup process of Node.js and/or Nginx ourselves.

  2. The image bootstrap logic gets more complicated and we cannot install only the bare minimum that each particular component needs to run.

  3. We cannot build our frontend and backend images in parallel.

  4. We cannot scale our frontend and backend deployment independently.

  5. The final image has to run multiple processes (Nginx, Node.js) at the same time and we need to set up an additional process manager.

  6. It's much harder to analyze the performance characteristics and request runtimes of your deployment.

  7. It just feels wrong and not modular 🤓

In my opinion, extending our Helm chart with an additional frontend deployment is the better approach. Let me show you that this is not so hard.

1. Create a frontend Docker image

First we want to build a separate Docker image for our frontend deployment. Docker multi-stage builds come in handy here: we first compile the frontend with a Node.js base image and then copy the static assets into a stable Nginx image:

# Stage 1: install dependencies and build the static frontend assets
FROM node:lts-alpine as builder
WORKDIR /app
ADD ./package.json ./yarn.lock /app/
RUN yarn install --no-cache --frozen-lockfile
COPY . /app
RUN yarn build:frontend

# Stage 2: serve the compiled assets with a plain Nginx image
FROM nginx:stable-alpine
COPY --from=builder /app/packages/frontend/build /usr/share/nginx/html

In our CI pipeline we build the frontend and backend images in parallel. When finished we push the frontend image to our container registry tagged as <build-version>-frontend.
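
As a rough sketch of that step (the Dockerfile name, registry and version are made-up placeholders):

# build the frontend image from the Dockerfile above
docker build -f Dockerfile.frontend -t registry.example.com/my-app:1.0.0-frontend .

# push it to the container registry with the -frontend suffix
docker push registry.example.com/my-app:1.0.0-frontend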

2. Helm: Add a frontend deployment

Now we have to extend our helm chart to add the deployment logic for the frontend. First we add an enabled switch to control the behavior of the chart:

# values.yaml

frontend:
  enabled: false

Now we add the actual frontend deployment. Note that we use the frontend image we pushed to the registry before:

# templates/frontend-deployment.yaml

{{- if .Values.frontend.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}-frontend
  labels:
    app: {{ template "appname" . }}-frontend
spec:
  ...
  selector:
    matchLabels:
      app: {{ template "appname" . }}-frontend
  template:
    metadata:
      labels:
        app: {{ template "appname" . }}-frontend
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}-frontend
        # we use the frontend image here
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}-frontend"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: 80
{{- end -}}

The frontend will only be deployed if we set the frontend switch to enabled: true, which is not the case by default.

3. Helm: Add a frontend service and modify the ingress controller

Now we have to add a service for our frontend deployment because we want to modify the ingress to route all requests except the ones to /api/* to the frontend service. The service looks like this:

# templates/frontend-service.yaml

{{- if .Values.frontend.enabled -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}-frontend
spec:
  selector:
    app: {{ template "appname" . }}-frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
{{- end -}}

Here comes the fun part. Depending on whether the frontend is enabled or not, we change the routing logic in the ingress.

If the frontend is enabled we route all requests for /api to the backend service and everything else to the frontend service. If the frontend is disabled everything is routed to the backend service:

# templates/ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  ...
spec:
  rules:
  - host: {{ template "hostname" .Values.service.url }}
    http:
      paths:
{{- if .Values.frontend.enabled }}
      - path: /
        backend:
          serviceName: {{ template "fullname" . }}-frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: {{ template "fullname" . }}
          servicePort: {{ .Values.service.externalPort }}
{{- else }}
      - path: /
        backend:
          serviceName: {{ template "fullname" . }}
          servicePort: {{ .Values.service.externalPort }}
{{- end }}
...

When the frontend deployment is enabled we get something like this:

graph LR
  request(Request) --> ingress(Ingress)
  ingress -- "route /*" --> frontendService("Frontend Service")
  frontendService --> frontendPods("Frontend Pods / Nginx")
  ingress -- "route /api" --> backendService("Backend Service")
  backendService --> backendPods("Backend Pods / Node.js")

4. Deploy the frontend

Now that our chart is prepared we can finally deploy it to our Kubernetes cluster using Helm. Here we can also override the defaults defined in the values.yaml file by using --set in the upgrade call:

helm upgrade --install \
  --wait \
  ...
  --set frontend.enabled=true \
  --namespace=<K8S-NAMESPACE> \
  <RELEASE-NAME> .

You should now see that a frontend deployment and service get deployed and your ingress should behave as expected. HOORAY!
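
A quick way to check (same namespace placeholder as above):

kubectl get deployments,services,ingress -n <K8S-NAMESPACE>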

Final Thoughts

I hope this overview of my K8s journey was interesting and helps you deploy real-world apps to Kubernetes in the future. As said before, the learning curve can be a bit overwhelming, but since Kubernetes at this point seems to be the dominant container platform I think it's worth it. The ecosystem is definitely evolving rapidly.

Here and there in this article I left out some implementation details, since I only wanted to give you a brief overview of my findings. If you have questions, feel free to reach out.

Hope it helps and feedback is welcome!

Related Articles:

Hosting a Helm repository on GitLab Pages

Gitlab Auto DevOps Documentation

How To Create Your First Helm Chart
