Kubernetes Core Concepts Explained with a Golang Example
This article breaks down Kubernetes fundamentals through a hands-on approach. We'll explore key concepts while deploying a Go application using Kind (Kubernetes in Docker).
Why Use Kind?
Kind is an excellent tool for setting up a local Kubernetes environment. It offers:
Kubernetes clusters that run entirely inside Docker containers.
Quick, simple setup for local development.
A great fit for both beginners and experienced developers experimenting with Kubernetes features.
Who Is This Article For?
Kubernetes Beginners: Developers getting started with Kubernetes who want a practical, hands-on introduction.
Experienced Developers: Those who prefer a “deploy first” approach—setting up containers and Kubernetes clusters locally before moving to cloud infrastructure.
What You’ll Learn
This article breaks down Kubernetes core concepts step by step. In each section, we’ll dive deeper into the fundamentals, explaining key Kubernetes components and how they interact with each other through practical examples.
Base Project
To keep things practical and focused on Kubernetes concepts, we’ll use a simple Go application. You can find the complete code here: gst-app
Containerization and Kind: Building and Managing Our Kubernetes Environment
Containerization has transformed how applications are built, shipped, and deployed. By isolating applications and their dependencies into lightweight, self-contained packages, containers ensure consistent behavior across different environments—from development to production. Let’s explore containerization fundamentals and how we use Kind (Kubernetes in Docker) to set up our Kubernetes environment.
Containerization: A Modern Approach to Application Deployment
What is Containerization?
Containerization involves packaging an application and its dependencies into a “container”—a lightweight, portable, self-sufficient environment that runs consistently across various infrastructures.
Key Benefits:
Isolation: Containers provide isolated environments, preventing conflicts between applications running on the same host.
Portability: Containers run on any system supporting the container runtime, ensuring consistent deployment across development, testing, and production.
Scalability: Containers can easily scale up or down, making them ideal for dynamic, cloud-native applications.
Example: Dockerfile Configuration
In our project, dockerfile.todo defines the Docker image for the todo-api service:
FROM golang:1.23.0 AS build_todo-api
ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o todo-api ./main.go
FROM alpine:3.18
RUN apk --no-cache add postgresql-client
RUN addgroup -g 1000 -S todo && \
    adduser -u 1000 -h /app -G todo -S todo
WORKDIR /app
COPY --from=build_todo-api --chown=todo:todo /app/todo-api /app/todo-api
USER todo
EXPOSE 8000
CMD ["./todo-api"]
LABEL org.opencontainers.image.title="todo-api" \
      org.opencontainers.image.authors="Diêgo <diegomagalhaes.contact@gmail.com>" \
      org.opencontainers.image.source="https://github.com/diegom7s-dev/gst-app" \
      org.opencontainers.image.version="1.0.0"

This Dockerfile uses a two-stage build:
Build Stage (golang:1.23.0 AS build_todo-api): Compiles the application in a clean Go environment, ensuring the final image contains only the necessary binaries.
Runtime Stage (FROM alpine:3.18): Copies the compiled binary into a minimal Alpine Linux image, providing a lightweight runtime environment with only essential dependencies like postgresql-client.
By separating build and runtime stages, we optimize the image for both size and security—following containerization best practices.
Kind: Simulating a Kubernetes Cluster in Docker
What is Kind?
Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker containers as nodes. It’s excellent for local development and testing, allowing developers to create multi-node clusters without needing multiple physical or virtual machines.
Example: Kind Configuration
The kind.config.yaml file defines our Kind cluster configuration:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      # Todo-Api
      - containerPort: 8000
        hostPort: 8000
      # Postgres
      - containerPort: 5432
        hostPort: 5432

This configuration creates a single-node Kind cluster with a control-plane role. It maps ports 8000 and 5432 from the host to the container, allowing us to access services running inside the cluster (the todo-api on port 8000 and PostgreSQL on port 5432) directly from our local machine.
Integrating Containerization with Kind: Building and Running the Service
Makefile Setup for Automation
Our project’s Makefile automates various tasks related to building and deploying the todo-api service using Docker and Kind:
# Define dependencies
GOLANG := golang:1.22.2
ALPINE := alpine:3.18
KIND := kindest/node:v1.27.3
POSTGRES := postgres:15.4
# Building containers
service:
	docker build \
		-f infra/docker/dockerfile.todo \
		-t $(SERVICE_IMAGE) \
		--build-arg BUILD_REF=$(VERSION) \
		--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
		.

# Running from within k8s/kind
dev-up:
	kind create cluster \
		--image $(KIND) \
		--name $(KIND_CLUSTER) \
		--config infra/k8s/dev/kind/kind.config.yaml
	kubectl config use-context kind-$(KIND_CLUSTER)
	kubectl wait --timeout=120s --namespace=local-path-storage --for=condition=Available deployment/local-path-provisioner
	kind load docker-image $(POSTGRES) --name $(KIND_CLUSTER)

Key Makefile Targets:
service: Builds the Docker image for the todo-api service using the Dockerfile at infra/docker/dockerfile.todo.
dev-up: Creates a Kind cluster using the specified configuration file and loads the necessary Docker images into the cluster.
By leveraging Docker and Kind, our setup ensures a streamlined development workflow that mirrors a production environment (within limitations). This allows us to build, deploy, and test our Go application in a local Kubernetes cluster, providing a high-fidelity environment for development and testing.
Essential Kubernetes Components: What They Are and How to Use Them
Understanding Kubernetes core components is fundamental to effectively deploying and managing applications. Let’s explore the key components that form the foundation of Kubernetes, using our Go application as a practical example.
1. Nodes: The Worker Machines in Kubernetes Clusters
What are Nodes?
Nodes are the worker machines in Kubernetes clusters. They’re responsible for running containerized applications and providing the computational resources needed to keep your applications running smoothly. Nodes can be physical servers or virtual machines, depending on your cluster configuration.
Architectural Role:
Runtime Environment: Nodes serve as the execution environment for your Pods. Each node runs at least a kubelet (an agent responsible for communicating with the Kubernetes control plane), a container runtime (like Docker or containerd), and kube-proxy (which maintains network rules on nodes).
Resource Management: Nodes provide CPU, memory, storage, and network resources for running containers. Kubernetes manages these resources efficiently, ensuring each Pod receives the necessary resources as specified in its configuration.
Node Components:
Kubelet: An agent running on each node that ensures containers are running in a Pod. It continuously monitors Pod status and communicates with the Kubernetes API server to maintain the desired state.
Container Runtime: The software responsible for running containers. Popular runtimes include Docker, containerd, and CRI-O. Kubernetes supports any runtime implementing the Kubernetes Container Runtime Interface (CRI).
Kube-proxy: A network proxy running on each node that manages network communication between Pods across different nodes. It implements Kubernetes networking services on each node, ensuring Pods can communicate with each other and external services.
Example in Our Project:
In our project, nodes are represented by Docker containers running Kubernetes when using Kind. Each node in a Kind cluster is a Docker container, allowing us to simulate a multi-node Kubernetes cluster locally.
While we don’t have a specific YAML manifest to define nodes (since nodes are managed by the control plane), we rely on them to provide the necessary environment for our Pods and services. For example, when deploying the PostgreSQL database or the todo-api application, Kubernetes schedules these Pods on available nodes, utilizing their computational resources.
Key Concepts Related to Nodes:
Node Affinity and Anti-Affinity: Kubernetes provides mechanisms to control how Pods are scheduled on nodes. Node affinity allows you to define rules that attract Pods to certain nodes, while anti-affinity ensures Pods are distributed across nodes for improved fault tolerance.
Taints and Tolerations: Used to prevent certain Pods from being scheduled on specific nodes. For example, a node can be tainted to allow only specific workloads, like those requiring GPUs, ensuring only compatible Pods are scheduled there.
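These scheduling controls can be sketched in a Pod spec. The following fragment is illustrative only and is not part of the gst-app manifests; the disktype label and gpu taint are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker                 # hypothetical Pod, not part of gst-app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype      # assumes nodes carry this label
                operator: In
                values: ["ssd"]
  tolerations:
    - key: "gpu"                   # matches a hypothetical node taint
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
```

Affinity attracts the Pod to labeled nodes, while the toleration merely permits scheduling onto tainted nodes; combining both gives fine-grained placement control.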
Understanding Node Management:
Node Status: Each node maintains a status providing essential information like node health, capacity (CPU, memory, etc.), and conditions (e.g., Ready, DiskPressure, MemoryPressure).
Node Maintenance: Nodes can be marked as unschedulable when needing maintenance, preventing new Pods from being scheduled while allowing existing Pods to continue running or be rescheduled.
2. Pods: The Fundamental Building Block of Kubernetes
What are Pods?
Pods are the smallest deployable units in Kubernetes, representing a single instance of a running process. A Pod can encapsulate one or more containers that share the same network namespace and storage. Containers within a Pod can communicate using localhost and share storage volumes.
Architectural Role:
Ephemeral Nature: Pods are designed to be ephemeral. When a Pod fails, Kubernetes automatically creates a new Pod to replace it rather than repairing the existing one.
Container Co-location: Containers that need to share resources (like storage or networking) or must always be deployed together are grouped in a single Pod.
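As an illustration of co-location, the sketch below places a hypothetical log-tailing sidecar next to the API container, sharing an emptyDir volume. This Pod is not part of the gst-app manifests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: todo-with-sidecar          # hypothetical example Pod
  namespace: simple-go-todo
spec:
  containers:
    - name: todo-api
      image: service-image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/todo
    - name: log-tailer             # sidecar reading the same volume
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/todo/app.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/todo
  volumes:
    - name: shared-logs
      emptyDir: {}                 # lives and dies with the Pod
```

Both containers share the Pod's network namespace and the shared-logs volume, which is exactly the coupling that justifies putting them in one Pod.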
Example in Our Project:
In our project, the StatefulSet configuration in dev-database.yaml defines a Pod template for running a PostgreSQL container:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
  namespace: simple-go-todo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: postgres
          image: 'postgres:15.4'
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data

This configuration ensures a Pod running a PostgreSQL database is created and maintained, with persistent storage mounted at /var/lib/postgresql/data.
3. Deployments: Managing Your Application’s Desired State
What are Deployments?
Deployments are abstractions that manage Pods and ReplicaSets. They provide declarative updates, ensuring the specified number of Pods is always running, and handle tasks like scaling, rolling updates, and rollbacks.
Architectural Role:
Scalability and Resilience: Deployments enable horizontal scaling of applications (increasing the number of replicas) to handle increased traffic or workload.
Rolling Updates and Rollbacks: Support zero-downtime updates by incrementally updating Pods with new application versions and can roll back to a previous version if needed.
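Rolling-update behavior is tuned through the Deployment's strategy field. The fragment below is an illustrative sketch; the values are not taken from gst-app:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during an update
      maxUnavailable: 0    # never drop below the desired replica count
```

With these (illustrative) values, Kubernetes brings up one new Pod at a time and only terminates an old one once its replacement is Ready, which is what makes zero-downtime updates possible.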
Example in Our Project:
The base-service.yaml file specifies a Deployment for our todo application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo
  namespace: simple-go-todo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo
  template:
    metadata:
      labels:
        app: todo
    spec:
      containers:
        - name: todo-api
          image: service-image

This Deployment manages the lifecycle of the todo-api Pod, ensuring one instance is always running and can be scaled as needed.
4. Services: Stable Networking for Your Pods
What are Services?
Services provide stable network endpoints for accessing Pods within a Kubernetes cluster. They abstract network access to Pods, enabling communication within the cluster and with external clients.
Service Types:
ClusterIP: Exposes the Service on an internal cluster IP, accessible only within the cluster
NodePort: Exposes the Service on a static port on each node’s IP
LoadBalancer: Provisions an external IP to load balance traffic across nodes
ExternalName: Maps a Service to an external DNS name
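To make the difference between the types concrete, here is a hedged sketch of what a NodePort variant of the todo-api Service could look like. This manifest is illustrative only and is not part of the gst-app repository:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: todo-api-nodeport    # hypothetical alternative, not in gst-app
  namespace: simple-go-todo
spec:
  type: NodePort
  selector:
    app: todo
  ports:
    - port: 8000             # cluster-internal port
      targetPort: 8000       # container port
      nodePort: 30080        # static port opened on every node (30000-32767)
```

A NodePort makes the service reachable at any node's IP on port 30080, whereas the ClusterIP Service used in our project (shown below in this section) stays internal to the cluster.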
Architectural Role:
Decoupling: Services decouple clients from the underlying Pod IP addresses, which can change if Pods are recreated or rescheduled
Service Discovery: They provide a consistent interface for service discovery, allowing other applications to reliably discover and communicate with Pods
Example in Our Project:
The dev-todo-patch-service.yaml creates a Service for the todo-api:
apiVersion: v1
kind: Service
metadata:
  name: todo-api
  namespace: simple-go-todo
spec:
  type: ClusterIP
  ports:
    - name: todo-api
      port: 8000
      targetPort: todo-api

This Service enables internal cluster communication to access the todo-api on a stable IP and port.
5. ConfigMaps and Secrets: Managing Configuration and Sensitive Data
What are ConfigMaps and Secrets?
ConfigMaps: Store non-sensitive configuration data in key-value pairs
Secrets: Store sensitive data like passwords, OAuth tokens, and SSH keys, base64-encoded
Architectural Role:
Separation of Configuration and Code: Enable separating configuration from application code, making applications portable and easier to manage
Secure and Flexible Management: Secrets ensure sensitive data is managed securely, while ConfigMaps provide a flexible way to manage configurations without hardcoding values
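As an illustration, a Secret holding database credentials might look like the sketch below. The name and values are hypothetical (gst-app's actual secrets may differ), and note that base64 is an encoding, not encryption:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical, not part of gst-app
  namespace: simple-go-todo
type: Opaque
data:
  # Values are base64-encoded, e.g. `echo -n 'postgres' | base64`
  POSTGRES_USER: cG9zdGdyZXM=
  POSTGRES_PASSWORD: cG9zdGdyZXM=
```

A Pod can then consume these keys via envFrom or individual secretKeyRef entries, keeping credentials out of both the image and the Deployment manifest.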
Example in Our Project:
We use a ConfigMap to configure PostgreSQL settings in dev-database.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pghbaconf
  namespace: simple-go-todo
data:
  pg_hba.conf: |
    local all all trust
    # IPv4 local connections:
    host all all 0.0.0.0/0 trust
    # IPv6 local connections:
    host all all ::1/128 trust
    # Allow replication connections from localhost, by a user with the
    # replication privilege.
    local replication all trust
    host replication all 0.0.0.0/0 trust
    host replication all ::1/128 trust

This ConfigMap stores PostgreSQL's access control configuration, mounted as a file in the Pod.
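Mounting that ConfigMap into the PostgreSQL Pod typically looks like the following fragment of a Pod template. This is a sketch of the usual pattern; the exact structure and mount path in dev-database.yaml may differ:

```yaml
spec:
  containers:
    - name: postgres
      image: 'postgres:15.4'
      volumeMounts:
        - name: pghbaconf
          mountPath: /etc/pg-config   # hypothetical mount path
  volumes:
    - name: pghbaconf
      configMap:
        name: pghbaconf               # references the ConfigMap above
```

Each key in the ConfigMap (here, pg_hba.conf) appears as a file under the mount path, so PostgreSQL can read the access rules without baking them into the image.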
6. StatefulSets: Managing Stateful Applications
What are StatefulSets?
StatefulSets manage the deployment and scaling of a set of Pods, providing guarantees about the ordering and uniqueness of these Pods.
Architectural Role:
Stateful Application Management: Ideal for managing stateful applications where each Pod must have a unique identity and stable persistent storage
Stable Network Identity and Storage: Ensures each Pod has a unique, stable network identity and can maintain persistent storage across restarts
Example in Our Project:
The dev-database.yaml file uses a StatefulSet to deploy a PostgreSQL instance:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
  namespace: simple-go-todo
spec:
  selector:
    matchLabels:
      app: database
  replicas: 1
  template:
    metadata:
      labels:
        app: database

This configuration provides stable identity and persistent storage for our PostgreSQL database.
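A StatefulSet typically pairs this with volumeClaimTemplates, which give each replica its own PersistentVolumeClaim (here matching the data volume mounted at /var/lib/postgresql/data). The following fragment is a sketch; the storage size and class are illustrative, not taken from gst-app:

```yaml
  volumeClaimTemplates:
    - metadata:
        name: data               # matched by the Pod's volumeMounts
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi         # illustrative size
```

Because the claim is templated per replica, each Pod keeps the same volume across restarts and rescheduling, which is the storage guarantee that distinguishes a StatefulSet from a Deployment.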
Kubernetes’ Declarative Model
Kubernetes uses a declarative model where you define your application’s desired state, and Kubernetes continuously works to maintain that state. By defining your application components as YAML manifests, you can easily manage and scale your applications in a Kubernetes cluster. This approach contrasts with imperative models where each step is executed manually, offering a more scalable and resilient way to manage applications.
Understanding these Kubernetes objects and their architecture is crucial to leveraging Kubernetes’ full potential. In our project, we apply these concepts to deploy a Go application, providing a practical example of how each component fits into the overall architecture.
Putting It Into Practice: Running the Project with Makefile Commands
In this final section, we’ll walk through the step-by-step process of building, deploying, and running our Go application using the commands defined in the Makefile. This will provide a comprehensive understanding of how each command contributes to the overall deployment process and ensure everything works correctly in your Kubernetes environment.
The Makefile simplifies the workflow by automating repetitive tasks. Let’s break down the main commands and what happens when you execute each one.
1. Installing Dependencies
The first step in setting up our environment is installing all necessary dependencies. The Makefile provides a target to install these dependencies using Homebrew (feel free to adapt this for your preferred package manager):
dev-brew:
	brew update
	brew list kind || brew install kind
	brew list kubectl || brew install kubectl
	brew list kustomize || brew install kustomize
	brew list pgcli || brew install pgcli

This step ensures all the necessary tools are available on your machine to interact with the Kubernetes cluster and manage configurations.
2. Pulling Docker Images
Before building our custom Docker image, we need to ensure we have the necessary base images:
dev-docker:
	docker pull $(GOLANG)
	docker pull $(ALPINE)
	docker pull $(KIND)
	docker pull $(POSTGRES)

This target pulls the specified Docker images for Go, Alpine, the Kind node, and PostgreSQL. These images are the foundation for building our custom application image and running our local Kubernetes cluster.
3. Building the Docker Image
The Makefile includes a command to build the Docker image for our todo-api service:
service:
	docker build \
		-f infra/docker/dockerfile.todo \
		-t $(SERVICE_IMAGE) \
		--build-arg BUILD_REF=$(VERSION) \
		--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
		.

Docker Image Build: This command builds the Docker image using the Dockerfile at infra/docker/dockerfile.todo. It tags the image with the version specified in the VERSION variable (like todo-api:0.0.1).
Build Arguments: BUILD_REF and BUILD_DATE are passed as build arguments to incorporate versioning information and build metadata into the image.
The resulting Docker image contains the compiled Go application, ready to be deployed to our Kubernetes cluster.
4. Creating the Kind Cluster
To simulate a Kubernetes environment locally, we use Kind to create a new cluster:
dev-up:
	kind create cluster \
		--image $(KIND) \
		--name $(KIND_CLUSTER) \
		--config infra/k8s/dev/kind/kind.config.yaml
	kubectl config use-context kind-$(KIND_CLUSTER)
	kubectl wait --timeout=120s --namespace=local-path-storage --for=condition=Available deployment/local-path-provisioner
	kind load docker-image $(POSTGRES) --name $(KIND_CLUSTER)

Creating the Kind Cluster: The kind create cluster command creates a new Kubernetes cluster named sgt-kind-cluster using the specified Kind node image (kindest/node:v1.27.3) and configuration file (kind.config.yaml).
Setting Kubernetes Context: kubectl config use-context switches the current Kubernetes context to the new Kind cluster, allowing subsequent kubectl commands to interact with it.
Waiting for Storage Provisioner: The kubectl wait command waits until the local-path-provisioner deployment is available, ensuring the cluster is ready to provision storage volumes.
Loading Docker Image into Cluster: kind load docker-image loads the PostgreSQL Docker image into the Kind cluster, making it available for our application.
5. Deploying the Application to Kubernetes
With the cluster configured and images loaded, we can now deploy our application and its dependencies.
Using Kustomize to Manage Kubernetes Configurations
What is Kustomize?
Kustomize is a Kubernetes-native tool that allows you to customize Kubernetes resource configurations without modifying the original YAML files. It’s especially useful for managing different environments (like development, testing, and production) from a common base of configuration files. Using Kustomize, we can automatically generate customized manifests for our cluster by applying specific overlays that adjust configurations as needed.
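As an illustration, a kustomization.yaml for the service overlay might look like the following sketch. The paths and image names here are assumptions about a typical layout, not the actual files in gst-app:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/service        # hypothetical path to the base manifests
patches:
  - path: dev-todo-patch-service.yaml
images:
  - name: service-image       # placeholder name used in the Deployment
    newName: todo-api
    newTag: 0.0.1
```

The images block is what lets the Deployment reference a placeholder (service-image) while each environment substitutes the concrete tag at build time.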
dev-apply:
	kustomize build infra/k8s/dev/database | kubectl apply -f -
	kubectl rollout status --namespace=$(NAMESPACE) --watch --timeout=120s sts/database
	kustomize build infra/k8s/dev/service | kubectl apply -f -
	kubectl wait pods --namespace=$(NAMESPACE) --selector app=$(APP) --timeout=120s --for=condition=Ready

Apply Database Configuration: kustomize build generates Kubernetes manifests for the database configuration from base YAML files, and kubectl apply -f - applies them to the cluster, creating the necessary resources (e.g., the StatefulSet for PostgreSQL).
Wait for Database Deployment: The kubectl rollout status command waits for the PostgreSQL StatefulSet to be fully deployed and running before proceeding.
Apply Service Configuration: The process repeats for the todo-api service, ensuring the service and its dependencies are deployed to the cluster.
Wait for Pods to Be Ready: kubectl wait pods ensures all Pods associated with the todo-api application are running and ready before completing the deployment process.
6. Testing Application Endpoints
Finally, we can use the Makefile to test our REST API endpoints and verify everything is working as expected:
test_all: create get_all get_one update delete

By following these steps, you can successfully build, deploy, and test your Go application in a local Kubernetes cluster using Docker and Kind. The Makefile automates much of this process, making it easier to manage and reducing the risk of errors.
Conclusion
By combining containerization, Kubernetes, and Kind, we can create a powerful and flexible local development environment that closely resembles a production setup (within limitations). This approach enables efficient development, testing, and iteration, ensuring your applications are robust, scalable, and ready for deployment in real-world environments.

