Configuration as Data - Environment Variables, Secrets and ConfigMaps

Overview

As we saw, Kubernetes is an orchestrator for containerized applications. The main rule of containerized applications is that your image is immutable: if you want to make any change to your image, you need to build a new one.

The question we need to answer is: how can we deploy the same image (say, a Docker image) to different environments without being obliged to build a new image for each environment?

The answer is easy: we need to inject the configuration at start time. When you start your Docker image (create a container), you inject the configuration, so there is no need to ship your config file inside the image.

Docker already supports injecting configuration at start time, by using environment variables or by sharing files or directories between the container and the host.
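For example, the same two techniques look like this with plain Docker (the image name and host path here are placeholders, not from this article):

```
# Inject configuration through an environment variable
docker run -e AZURE_ACCESS_KEY_ID=AZ1key1234rfdf my-image:1.0

# Share a host directory with the container as a read-only config directory
docker run -v /host/config:/config:ro my-image:1.0
```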

Kubernetes allows the same thing, either with environment variables in the Pod spec definition or with dedicated API objects: ConfigMap and Secret.

First of all, let's create a custom Docker image. The image will display configuration coming from different sources: environment variables and config files.

Here is the code for a Go console application that logs the configuration parameters to the console:

package main

import (
   "bufio"
   "fmt"
   "os"
   "strings"
   "time"
)

func main() {

   fmt.Println("---------Values from Env Variable------------")
   displayConfig("AZURE_ACCESS_KEY_ID", os.Getenv("AZURE_ACCESS_KEY_ID"))
   displayConfig("AZURE_SECRET_ACCESS_KEY", os.Getenv("AZURE_SECRET_ACCESS_KEY"))

   configFile := "./config/config.conf"
   secretFile := "./secret/secret.conf"

   if fileExists(configFile) {

   	fmt.Println("file config.conf found")
   	configFromFile := ReadConfig(configFile)

   	fmt.Println("---------Values from config file------------")
   	for key, value := range configFromFile {
   		displayConfig(key, value)
   	}
   }

   if fileExists(secretFile) {
   	fmt.Println("file secret.conf found")
   	configFromFile := ReadConfig(secretFile)

   	fmt.Println("---------Values from secret file------------")
   	for key, value := range configFromFile {
   		displayConfig(key, value)
   	}
   }
   fmt.Println("Starting image loop")
   for {
   	fmt.Println(time.Now().String())
   	time.Sleep(time.Minute)
   }

}

func ReadConfig(configFile string) map[string]string {

   configFromFileMap := make(map[string]string)

   file, err := os.Open(configFile)
   if err != nil {
   	fmt.Fprintf(os.Stderr, "Failed to open file: %s", err)
   	os.Exit(1)
   }
   defer file.Close()

   scanner := bufio.NewScanner(file)
   for scanner.Scan() {
   	line := strings.TrimSpace(scanner.Text())

   	fmt.Println(line)

   	if !strings.HasPrefix(line, "#") && len(line) != 0 {
   		// SplitN keeps any '=' characters inside the value intact
   		kv := strings.SplitN(line, "=", 2)
   		if len(kv) != 2 {
   			continue
   		}
   		parameter := strings.TrimSpace(kv[0])
   		value := strings.TrimSpace(kv[1])
   		configFromFileMap[parameter] = value
   	}
   }

   if err := scanner.Err(); err != nil {
   	fmt.Fprintf(os.Stderr, "Failed to read file: %s", err)
   	os.Exit(1)
   }

   return configFromFileMap

}

func fileExists(filename string) bool {
   info, err := os.Stat(filename)
   if err != nil {
   	// Covers "not exist" and any other stat error, avoiding a nil dereference on info
   	return false
   }
   return !info.IsDir()
}

func displayConfig(key string, value string) {
   fmt.Println("Key:", key, "=>", "Value:", value)
}

Then we create a Dockerfile to build and package our custom image. I'm using a multi-stage build: the first stage uses an image that contains the Go SDK to compile the source code, and the second stage uses a Debian 10 distroless image to deploy the binary produced by the first stage.

# syntax=docker/dockerfile:1
## Build
FROM golang:1.17.3-buster AS build
WORKDIR /app

COPY *.go ./
RUN go mod init display-config
RUN go build -o /display-config

## Deploy
FROM gcr.io/distroless/base-debian10
WORKDIR /

COPY --from=build /display-config /display-config
USER nonroot:nonroot
ENTRYPOINT ["/display-config"]

To build and publish the image on Docker Hub, you need to create a free account on Docker Hub, then sign in from Docker Desktop or use the docker login command.

docker build -t ylasmak/dispaly-config:1.0 .
docker push ylasmak/dispaly-config:1.0

NB: ylasmak is the name of my repository on Docker Hub; replace it with your own repository's name.

Environment variables

The first way to inject configuration into your container is to add environment variables to the container definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: display-config-deployment
  labels:
    app: display-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: display-config
  template:
    metadata:
      labels:
        app: display-config
    spec:
      containers:
      - name: display-config
        image: ylasmak/dispaly-config:1.0
        env:
        - name: AZURE_ACCESS_KEY_ID
          value: "AZ1key1234rfdf"
        - name: AZURE_SECRET_ACCESS_KEY
          value: "AZSecretfdsdffsd98sd09f80s8f9sf7"

Deploy the configuration using the kubectl command, then check the pod logs with:
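Assuming the manifest above is saved in a file called deployment.yaml (the filename is arbitrary), it can be deployed with:

```
kubectl apply -f deployment.yaml
```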

kubectl logs deploy/display-config-deployment

You should get the configuration injected as environment variables:

Key: AZURE_ACCESS_KEY_ID => Value: AZ1key1234rfdf
Key: AZURE_SECRET_ACCESS_KEY => Value: AZSecretfdsdffsd98sd09f80s8f9sf7
Starting image loop
2021-11-21 01:28:55.5968981 +0000 UTC m=+0.000173701

I know what you are wondering: are we really going to store the secret access key in clear text in the deployment manifest? Of course not :) keep reading…

ConfigMap API Object

A ConfigMap is a set of key-value pairs exposed to a Pod and used by the application as configuration settings. It decouples the configuration from the application and the Pod definition, and maximizes our container image's portability.

apiVersion: v1
kind: ConfigMap
metadata:
  name: display-config
data:
  AZURE_ACCESS_KEY_ID: "AZ1key1234rfdf"
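The same ConfigMap can also be created imperatively, without writing a manifest (this is equivalent to applying the YAML above):

```
kubectl create configmap display-config --from-literal=AZURE_ACCESS_KEY_ID=AZ1key1234rfdf
```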

Secret API Object

A Secret object is used to store sensitive information such as passwords, API tokens and certificates. Secrets are namespaced and can only be referenced by Pods in the same namespace, and access to them can be restricted with RBAC (role-based access control). Please note that values are base64 encoded, not encrypted.


apiVersion: v1
kind: Secret
metadata:
  name: display-config
data:
  AZURE_SECRET_ACCESS_KEY: Y1dFUm9QUU53U1VwbnptUUl5MC81MHJESVJpMkxEREI5VDAvcGkK
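The encoded value above is simply the plain-text secret piped through base64. Note that plain echo appends a trailing newline, which gets encoded too (that's the final "Cg"/"K" pattern you often see); use echo -n if you want to encode the value alone:

```shell
echo 'cWERoPQNwSUpnzmQIy0/50rDIRi2LDDB9T0/pi' | base64
```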

Let’s change our deployment manifest to read the configuration values from the ConfigMap and the Secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: display-config-deployment
  labels:
    app: display-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: display-config
  template:
    metadata:
      labels:
        app: display-config
    spec:
      containers:
      - name: display-config
        image: ylasmak/dispaly-config:1.0
        env:
        - name: AZURE_ACCESS_KEY_ID
          valueFrom:
            configMapKeyRef:
              name: display-config
              key: AZURE_ACCESS_KEY_ID 
        - name: AZURE_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: display-config
              key: AZURE_SECRET_ACCESS_KEY 

Let’s check the pod logs with the command:

kubectl logs deploy/display-config-deployment

Log:

---------Values from Env Variable------------
Key: AZURE_ACCESS_KEY_ID => Value: AZ1key1234rfdf
Key: AZURE_SECRET_ACCESS_KEY => Value: cWERoPQNwSUpnzmQIy0/50rDIRi2LDDB9T0/pi

Starting image loop
2021-11-21 03:54:41.7462354 +0000 UTC m=+0.000121401
2021-11-21 03:55:41.7235125 +0000 UTC m=+60.046281701

It is also possible to expose a Secret or a ConfigMap as a volume; the container then mounts the volume into its file system.

If you make a change to your ConfigMap or Secret, the updated values are eventually propagated to the mounted files without restarting your pod, which is great if you want to make changes at runtime. Note that this applies only to volume mounts, not to environment variables, and your application must re-read the files to pick up the change.

Let’s create two files on your working machine, config.conf and secret.conf:

config.conf

AWS_ACCESS_KEY_ID = AKIAUSQ7TEDTOE4YQTAS

secret.conf

AWS_SECRET_ACCESS_KEY = cWERoPQNwSUpnzmQIy0/50rDIRi2LDDB9T$+h/pi

Then create the ConfigMap and Secret objects from these files using the imperative mode:

kubectl create configmap aws-key --from-file=config.conf
kubectl create secret generic aws-secret --from-file=secret.conf
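You can check that both objects were created and inspect their content (remember that the Secret's data will be displayed base64 encoded):

```
kubectl describe configmap aws-key
kubectl get secret aws-secret -o yaml
```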

Now we have to change the deployment manifest to mount the configuration and secret files into the container's file system:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: display-config-deployment
  labels:
    app: display-config
spec:
  replicas: 2
  selector:
    matchLabels:
      app: display-config
  template:
    metadata:
      labels:
        app: display-config
    spec:
      volumes:
        - name: configvolume
          configMap:
            name: aws-key
        - name: secretvolume
          secret:
            secretName: aws-secret
      containers:
      - name: display-config
        image: ylasmak/dispaly-config:1.0
        env:
        - name: AZURE_ACCESS_KEY_ID
          valueFrom:
            configMapKeyRef:
              name: display-config
              key: AZURE_ACCESS_KEY_ID 
        - name: AZURE_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: display-config
              key: AZURE_SECRET_ACCESS_KEY 
        volumeMounts:
          - name: configvolume
            mountPath: "/config"
          - name: secretvolume
            mountPath: "/secret"

Let’s check the first pod output logs.

kubectl logs $(kubectl get pods | grep display-config-deployment | awk '{print $1}' | head -n 1)

You will get output like this:

Key: AZURE_ACCESS_KEY_ID => Value: AZ1key1234rfdf
Key: AZURE_SECRET_ACCESS_KEY => Value: cWERoPQNwSUpnzmQIy0/50rDIRi2LDDB9T0/pi

file config.conf found
AWS_ACCESS_KEY_ID = AKIAUSQ7TEDTOE4YQTAS
---------Values from config file------------
Key: AWS_ACCESS_KEY_ID => Value: AKIAUSQ7TEDTOE4YQTAS
file secret.conf found
AWS_SECRET_ACCESS_KEY = cWERoPQNwSUpnzmQIy0/50rDIRi2LDDB9T$+h/pi
---------Values from secret file------------
Key: AWS_SECRET_ACCESS_KEY => Value: cWERoPQNwSUpnzmQIy0/50rDIRi2LDDB9T$+h/pi
Starting image loop
2021-11-21 23:33:08.816686463 +0000 UTC m=+0.000138831

You can get the same log from the second replica with this command:

kubectl logs $(kubectl get pods | grep display-config-deployment | awk '{print $1}' | head -n 2 | tail -1)
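As an alternative to the grep/awk pipelines, kubectl can select the pods directly using the label we set on the Pod template (app=display-config) and print the logs of every matching replica:

```
kubectl logs -l app=display-config
```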