📍 Introduction
Kubernetes has emerged as the de facto standard for container orchestration, allowing developers to deploy and manage containerized applications with ease. However, as applications grow more complex, managing them on Kubernetes becomes more challenging. Kubernetes workloads, Jobs, and CronJobs are powerful tools that help streamline application management and keep your applications running smoothly. In this blog, we will explore these tools and how they can be used to manage containerized applications on Kubernetes.
🔹Deployments
Deployments are the most common type of workload in Kubernetes. They manage stateless applications that can be scaled horizontally with ease. A Deployment defines the desired state of an application, and Kubernetes automatically creates and manages the resources needed to reach that state. For example, a Deployment can manage a web server that needs to scale up or down based on the traffic it receives.
A Deployment is described in a YAML file, also called a manifest file. For example, the YAML file below defines how to deploy a sample application on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
  labels:
    app: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: sample:v1
        ports:
        - containerPort: 80
        env:
        - name: CONFIG_SERVER
          value: "https://server:380231"
kind: specifies what type of resource this manifest defines.
metadata: defines the name and labels of the resource.
replicas: defines the number of pod replicas that should run our application.
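Deployments also control how pods are replaced when you roll out a new image. As a sketch, the spec section above could be extended with a RollingUpdate strategy (the maxSurge and maxUnavailable values here are illustrative, not from the original manifest):

```yaml
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during an update
      maxUnavailable: 0    # never take a pod down before its replacement is ready
```

With these values, Kubernetes brings up one new pod, waits for it to become ready, and only then terminates an old one, so capacity never drops below the desired replica count.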
🔹StatefulSets
A StatefulSet lets you mount a volume to a pod and persist data in that volume across pod restarts. You can also run stateful apps with the Deployment object: create a PersistentVolumeClaim, which is later consumed by a pod, as in the example below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.5.9
        ports:
        - containerPort: 2181
          name: client
        volumeMounts:
        - mountPath: /data
          name: zookeeper-data
      restartPolicy: Always
      volumes:
      - name: zookeeper-data
        persistentVolumeClaim:
          claimName: zookeeper
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  namespace: default
spec:
  storageClassName: gp2
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
A Kubernetes StatefulSet has several unique features that help run stateful applications. The first is stable, persistent storage; stable in this context means persistence across pod (re)scheduling. A StatefulSet mounts a volume to each pod and persists data in that volume across pod restarts. If you have a default storage class, or specify a storage class when creating the PVC, a PersistentVolume will be created automatically.
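For comparison, the same ZooKeeper workload could be written as a true StatefulSet, which provisions a PVC for each pod automatically via volumeClaimTemplates. This is a hedged sketch: the names and the gp2 storage class simply mirror the Deployment example above, and the headless Service it references is assumed to exist.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper          # headless Service giving each pod a stable network identity
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.5.9
        ports:
        - containerPort: 2181
          name: client
        volumeMounts:
        - mountPath: /data
          name: zookeeper-data
  volumeClaimTemplates:           # one PVC is created per pod (e.g. zookeeper-data-zookeeper-0)
  - metadata:
      name: zookeeper-data
    spec:
      storageClassName: gp2
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```

Unlike the Deployment version, each replica gets its own PVC and keeps it across rescheduling, which is why StatefulSets are the preferred object for clustered stateful systems.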
🔹DaemonSet
A DaemonSet is similar to a Deployment in that it creates pods within the Kubernetes cluster, but with one key difference: it creates one pod on each worker node, so the number of pods is determined by the number of worker nodes in the cluster. When you create a DaemonSet, Kubernetes automatically creates a pod on each node that matches the specified selector. If a new node is added to the cluster, Kubernetes automatically creates a new pod on that node; if a node is removed from the cluster, Kubernetes terminates the pod running on that node. The example below deploys a fluentd logging agent on every node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.12-debian-1
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
This DaemonSet ensures that a pod running the fluentd container is created on every node in the cluster. The fluentd container collects logs from all other containers running on the node and forwards them to a central logging system. The spec section of the DaemonSet configuration specifies the template for the pod that should be created on each node, including the container image to use, resource limits, and volume mounts.
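A DaemonSet does not have to cover every node: the pod spec can restrict placement. As an illustrative sketch, adding a nodeSelector (the disktype label here is hypothetical) and a toleration to the pod spec above would control where the fluentd pods run:

```yaml
    spec:
      nodeSelector:
        disktype: ssd             # hypothetical label; only nodes carrying it get a fluentd pod
      tolerations:                # also allow the pod onto tainted control-plane nodes
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```

This pattern is common for log and metrics agents that must also run on control-plane nodes, which are tainted by default.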
🔹Jobs and cronjobs
In Kubernetes, Jobs and CronJobs are resources used to manage and schedule batch jobs.
A Job is a Kubernetes resource that creates one or more Pods and ensures that they complete successfully. The Job resource will create one or more Pods to run the specified workload, and it will continue to create new Pods until the desired number of successful completions is reached.
Jobs are useful for running a one-off batch process or running a process that needs to be run to completion once. They can be used to run data migrations, generate reports, or run other batch processes.
Here is an example of a Job configuration that runs a simple batch process:
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["echo", "Hello, World!"]
      restartPolicy: OnFailure
  backoffLimit: 2
This Job creates a single pod that runs the busybox container and executes the echo "Hello, World!" command. The restartPolicy of OnFailure indicates that the pod will be restarted if it fails to complete successfully. The backoffLimit of 2 indicates that the Job will be retried up to two times before it is marked as failed.
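A Job can also run its workload several times, or run several pods in parallel, which illustrates the "desired number of successful completions" described above. A hedged sketch extending the Job spec (the completions and parallelism values are illustrative):

```yaml
spec:
  completions: 5     # the Job succeeds once 5 pods have completed successfully
  parallelism: 2     # at most 2 pods run at the same time
  backoffLimit: 2
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["echo", "Hello, World!"]
      restartPolicy: OnFailure
```

Kubernetes keeps creating pods, two at a time, until five of them have exited successfully or the backoff limit is exhausted.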
A CronJob, on the other hand, is a Kubernetes resource used to schedule recurring jobs. CronJobs are useful for running tasks on a regular schedule, such as backups, cleanup tasks, or regular data processing tasks.
Here is an example of a CronJob configuration that runs a batch process every day at 3:30 AM:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "30 3 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: example
            image: busybox
            command: ["echo", "Hello, World!"]
          restartPolicy: OnFailure
This CronJob runs a Job every day at 3:30 AM, as specified by the schedule field in the spec. The jobTemplate section of the configuration specifies the Job that should be run on that schedule, and the concurrencyPolicy of Forbid ensures that only one Job runs at a time.
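The schedule field uses standard cron syntax: five space-separated fields for minute, hour, day of month, month, and day of week. A few illustrative schedules (only the first appears in the example above; the alternatives are shown commented out):

```yaml
# fields: minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-6, Sunday=0)
schedule: "30 3 * * *"        # every day at 3:30 AM (the example above)
# schedule: "*/15 * * * *"    # every 15 minutes
# schedule: "0 0 * * 0"       # every Sunday at midnight
```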
📍 Conclusion
In this blog, we explored how Kubernetes helps manage containerized applications and the challenges that come with increasing application complexity. We covered Kubernetes workloads such as Deployments, StatefulSets, and DaemonSets, along with Jobs and CronJobs, and saw how these tools streamline application management and keep applications running smoothly.