Workloads within Kubernetes are higher-level objects that manage Pods or other higher-level objects.
In ALL CASES a Pod Template is included and acts as the base tier of management.
Note:
- If you are coming directly from the previous tutorials (core), you may still be configured to use the `kind-dev` context. Switch to the `kind-kind` context before proceeding with the rest of the tutorials.
- Unlike some of the other tutorials, the workload exercises should be cleaned up before moving on to the next workload type. The clean-up commands are included after the Summary section of each exercise.
ReplicaSets are the primary method of managing Pod replicas and their lifecycle. This includes their scheduling, scaling, and deletion. Their job is simple: always ensure the desired number of Pods matching the selector are running.
Objective: Create and scale a ReplicaSet. Explore and gain an understanding of how the Pods are generated from the Pod template, and how they are targeted with selectors.
- Begin by creating a ReplicaSet called `rs-example` with `3` replicas, using the `nginx:stable-alpine` image, and configure the labels and selectors to target `app=nginx` and `env=prod`. The yaml block below or the manifest `manifests/rs-example.yaml` may be used.
manifests/rs-example.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    metadata:
      labels:
        app: nginx
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
Command
$ kubectl create -f manifests/rs-example.yaml
- Watch as the newly created ReplicaSet provisions the Pods based off the Pod Template.
$ kubectl get pods --watch --show-labels
Note that the newly provisioned Pods are given a name based off the ReplicaSet name, appended with a 5-character random string, and are labeled with the labels specified in the manifest.
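If desired, the same labels the ReplicaSet selects on can be passed to kubectl to list only the Pods it targets (an optional check, using the selector from the manifest above):
$ kubectl get pods -l app=nginx,env=prod --show-labels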
- Scale ReplicaSet `rs-example` up to `5` replicas with the below command.
$ kubectl scale replicaset rs-example --replicas=5
Tip: `replicaset` can be substituted with `rs` when using `kubectl`.
- Describe `rs-example` and take note of the `Replicas` and `Pods Status` fields in addition to the `Events`.
$ kubectl describe rs rs-example
- Now, using the `scale` command, bring the replicas back down to `3`.
$ kubectl scale rs rs-example --replicas=3
- Watch as the ReplicaSet Controller terminates 2 of the Pods to bring the cluster back into its desired state of 3 replicas.
$ kubectl get pods --show-labels --watch
- Once `rs-example` is back down to 3 Pods, create an independent Pod manually, with the same labels as those targeted by `rs-example`, from the manifest `manifests/pod-rs-example.yaml`.
manifests/pod-rs-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
    env: prod
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
Command
$ kubectl create -f manifests/pod-rs-example.yaml
- Immediately watch the Pods.
$ kubectl get pods --show-labels --watch
Note that the Pod is created and immediately terminated.
- Describe `rs-example` and look at the `Events`.
$ kubectl describe rs rs-example
There will be an entry with `Deleted pod: pod-example`. This is because a ReplicaSet targets ALL Pods matching the labels supplied in its selector.
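Conversely, a Pod can be removed from a ReplicaSet's control by changing its labels so that they no longer match the selector; the ReplicaSet will then provision a replacement. A hypothetical example (substitute a real Pod name from the watch output):
$ kubectl label pod rs-example-<random> env=debug --overwrite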
Summary: ReplicaSets ensure a desired number of replicas matching the selector are present. They manage the lifecycle of ALL matching Pods. If the desired number of replicas matching the selector already exist when the ReplicaSet is created, no new Pods will be created. If they are missing, the ReplicaSet Controller will create new Pods based off the Pod Template until the desired number of replicas are present.
Clean Up Command
kubectl delete rs rs-example
Deployments are a declarative method of managing Pods via ReplicaSets. They provide rollback functionality in addition to more granular update control mechanisms.
Objective: Create, update and scale a Deployment as well as explore the relationship of Deployment, ReplicaSet and Pod.
- Create a Deployment `deploy-example`. Configure it using the example yaml block below or use the manifest `manifests/deploy-example.yaml`. Additionally, pass the `--record` flag to `kubectl` when you create the Deployment. The `--record` flag saves the command as an annotation, and can be thought of as similar to a git commit message; a quick way to view it is shown after the create command below.
manifests/deploy-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
Command
$ kubectl create -f manifests/deploy-example.yaml --record
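The recorded command is stored in the kubernetes.io/change-cause annotation on the Deployment. One quick way to confirm it was saved (a simple grep of the object's yaml):
$ kubectl get deployment deploy-example -o yaml | grep change-cause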
- Check the status of the Deployment.
$ kubectl get deployments
- Once the Deployment is ready, view the current ReplicaSets and be sure to show the labels.
$ kubectl get rs --show-labels
Note the name and `pod-template-hash` label of the newly created ReplicaSet. The created ReplicaSet's name includes the `pod-template-hash`.
- Describe the generated ReplicaSet.
$ kubectl describe rs deploy-example-<pod-template-hash>
Look at both the `Labels` and the `Selector` fields. The `pod-template-hash` value has automatically been added to both the Labels and Selector of the ReplicaSet. Then take note of the `Controlled By` field. This references the direct parent object, in this case the original `deploy-example` Deployment.
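Because the hash is attached as a label, it can also be used to select only the objects belonging to that revision (replace the <pod-template-hash> placeholder with the actual hash from the previous step):
$ kubectl get pods -l app=nginx,pod-template-hash=<pod-template-hash>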
- Now, get the Pods and pass the `--show-labels` flag.
$ kubectl get pods --show-labels
Just as with the ReplicaSet, the Pods' names and labels include the `pod-template-hash`.
- Describe one of the Pods.
$ kubectl describe pod deploy-example-<pod-template-hash>-<random>
Look at the `Controlled By` field. It will contain a reference to the parent ReplicaSet, but not the parent Deployment.
Now that the relationship from Deployment to ReplicaSet to Pod is understood, it is time to update `deploy-example` and see an update in action.
- Update the `deploy-example` manifest and add a few additional labels to the Pod Template. Once done, apply the change with the `--record` flag.
$ kubectl apply -f manifests/deploy-example.yaml --record
< or >
$ kubectl edit deploy deploy-example --record
Tip: `deploy` can be substituted for `deployment` when using `kubectl`.
- Immediately watch the Pods.
$ kubectl get pods --show-labels --watch
The old version of the Pods will be phased out one at a time, and instances of the new version will take their place. The way in which this is controlled is through the `strategy` stanza. For specific documentation on this feature, see the Deployment Strategy Documentation.
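As a worked example of the strategy in the manifest above: with replicas: 3, maxSurge: 1, and maxUnavailable: 0, the controller surges to at most 4 Pods and never lets the number of available Pods drop below 3; each new Pod must become Ready before an old Pod is terminated. The progress of a rollout can also be followed directly:
$ kubectl rollout status deployment deploy-example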
- Now view the ReplicaSets.
$ kubectl get rs --show-labels
There will now be two ReplicaSets, with the one for the previous version of the Deployment scaled down to 0 replicas.
- Now, scale the Deployment up as you would a ReplicaSet, and set `replicas=5`.
$ kubectl scale deploy deploy-example --replicas=5
- List the ReplicaSets.
$ kubectl get rs --show-labels
Note that there is NO new ReplicaSet generated. Scaling actions do NOT trigger a change in the Pod Template.
- Just as before, describe the Deployment, ReplicaSet, and one of the Pods. Note the `Events` and `Controlled By` fields. They should present a clear picture of the relationship between the objects during an update of a Deployment.
$ kubectl describe deploy deploy-example
$ kubectl describe rs deploy-example-<pod-template-hash>
$ kubectl describe pod deploy-example-<pod-template-hash>-<random>
Summary: Deployments are the main method of managing applications deployed within Kubernetes. They create and supervise targeted ReplicaSets by generating a unique hash called the `pod-template-hash`, attaching it to child objects as a Label, and automatically including it in their Selector. This method of managing rollouts, along with being able to define the methods and tolerances in the update strategy, permits safe and seamless in-place updates of an application.
Objective: Learn how to view the history of a Deployment and rollback to older revisions.
Note: This exercise builds off the previous exercise, Using Deployments. If you have not completed it, do so before continuing.
- Use the `rollout` command to view the `history` of the Deployment `deploy-example`.
$ kubectl rollout history deployment deploy-example
There should be two revisions: one from when the Deployment was first created, and another from when the additional Labels were added. The number of revisions saved is based off the `revisionHistoryLimit` attribute in the Deployment spec.
- Look at the details of a specific revision by passing the `--revision=<revision number>` flag.
$ kubectl rollout history deployment deploy-example --revision=1
$ kubectl rollout history deployment deploy-example --revision=2
Viewing the specific revision will display a summary of the Pod Template.
- Go back to revision `1` by using the `rollout undo` command.
$ kubectl rollout undo deployment deploy-example --to-revision=1
Tip: The `--to-revision` flag can be omitted if you wish to just go back to the previous configuration.
- Immediately watch the Pods.
$ kubectl get pods --show-labels --watch
They will cycle through rolling back to the previous revision.
- Describe the Deployment `deploy-example`.
$ kubectl describe deployment deploy-example
The events will describe the scaling down of the current ReplicaSet and the switch over to the desired revision.
Summary: Understanding how to use the `rollout` command, both to inspect the differences between revisions and to roll back to a previously known-good configuration, is an essential aspect of working with Deployments.
Clean Up Command
kubectl delete deploy deploy-example
DaemonSets ensure that all nodes matching certain criteria will run an instance of the supplied Pod.
They bypass default scheduling mechanisms and restrictions, and are ideal for cluster-wide services such as log forwarding or health monitoring.
Objective: Experience creating, updating, and rolling back a DaemonSet. Additionally delve into the process of how they are scheduled and how an update occurs.
- Create DaemonSet `ds-example` and pass the `--record` flag. Use the example yaml block below as a base, or use the manifest `manifests/ds-example.yaml` directly.
manifests/ds-example.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-example
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        nodeType: edge
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
Command
$ kubectl create -f manifests/ds-example.yaml --record
- View the current DaemonSets.
$ kubectl get daemonset
As there are no matching nodes, no Pods should be scheduled.
- Label the `kind-control-plane` node with `nodeType=edge`.
$ kubectl label node kind-control-plane nodeType=edge
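The label can be verified before moving on (an optional check):
$ kubectl get nodes -l nodeType=edge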
- View the current DaemonSets once again.
$ kubectl get daemonsets
There should now be a single instance of the DaemonSet `ds-example` deployed.
- View the current Pods and display their labels with `--show-labels`.
$ kubectl get pods --show-labels
Note that the deployed Pod has a `controller-revision-hash` label. This is used like the `pod-template-hash` in a Deployment to track revisions and allow for rollback functionality.
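The revisions themselves are stored as ControllerRevision objects, and can be listed directly if desired (a side exploration, not required for the exercise):
$ kubectl get controllerrevisions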
- Describing the DaemonSet will provide you with status information regarding its deployment cluster-wide.
$ kubectl describe ds ds-example
Tip: `ds` can be substituted for `daemonset` when using `kubectl`.
- Update the DaemonSet by adding a few additional labels to the Pod Template, and use the `--record` flag.
$ kubectl apply -f manifests/ds-example.yaml --record
< or >
$ kubectl edit ds ds-example --record
- Watch the Pods and be sure to show the labels.
$ kubectl get pods --show-labels --watch
The old version of the DaemonSet's Pods will be phased out one at a time, and instances of the new version will take their place. Similar to Deployments, DaemonSets have their own equivalent of a Deployment's `strategy` in the form of `updateStrategy`. The defaults are generally suitable, but other tuning options may be set. For reference, see the Updating DaemonSet Documentation.
Summary: DaemonSets are usually used for important cluster-wide support services such as Pod Networking, Logging, or Monitoring. They differ from other workloads in that their scheduling bypasses normal mechanisms and is centered around node placement. Like Deployments, they have their own equivalent of the `pod-template-hash` in the form of the `controller-revision-hash`, used for keeping track of Pod Template revisions and enabling rollback functionality.
Objective: Explore using the `rollout` command to roll back to a specific version of a DaemonSet.
Note: This exercise is functionally identical to the exercise Rolling Back a Deployment. If you have completed that exercise, this one may be considered optional. Additionally, this exercise builds off the previous exercise, Managing DaemonSets, which must be completed before continuing.
- Use the `rollout` command to view the `history` of the DaemonSet `ds-example`.
$ kubectl rollout history ds ds-example
There should be two revisions: one from when the DaemonSet was first created, and another from when the additional Labels were added. The number of revisions saved is based off the `revisionHistoryLimit` attribute in the DaemonSet spec.
- Look at the details of a specific revision by passing the `--revision=<revision number>` flag.
$ kubectl rollout history ds ds-example --revision=1
$ kubectl rollout history ds ds-example --revision=2
Viewing the specific revision will display the Pod Template.
- Go back to revision `1` by using the `rollout undo` command.
$ kubectl rollout undo ds ds-example --to-revision=1
Tip: The `--to-revision` flag can be omitted if you wish to just go back to the previous configuration.
- Immediately watch the Pods.
$ kubectl get pods --show-labels --watch
They will cycle through rolling back to the previous revision.
- Describe the DaemonSet `ds-example`.
$ kubectl describe ds ds-example
The events will be sparse with a single host; in a real multi-node cluster they would describe the status of updating the DaemonSet cluster-wide, cycling through the hosts one-by-one.
Summary: Being able to use the `rollout` command with DaemonSets is important in scenarios where one may have to quickly go back to a previously known-good version. This becomes even more critical for 'infrastructure-like' services such as Pod Networking.
Clean Up Command
kubectl delete ds ds-example
The StatefulSet controller is tailored to managing Pods that must persist or maintain state. Pod identity including hostname, network, and storage can be considered persistent.
They ensure persistence by making use of three things:
- The StatefulSet controller enforcing predictable naming, and ordered provisioning/updating/deletion.
- A headless service to provide a unique network identity.
- A volume template to ensure stable per-instance storage.
Objective: Create, update, and delete a StatefulSet to gain an understanding of how the StatefulSet lifecycle differs from other workloads with regards to updating, deleting, and the provisioning of storage.
- Create StatefulSet `sts-example` using the yaml block below or the manifest `manifests/sts-example.yaml`.
manifests/sts-example.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: stateful
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Command
$ kubectl create -f manifests/sts-example.yaml
- Immediately watch the Pods being created.
$ kubectl get pods --show-labels --watch
Unlike Deployments or DaemonSets, the Pods of a StatefulSet are created one-by-one, going by their ordinal index. Meaning, `sts-example-0` will be fully provisioned before `sts-example-1` starts up. Additionally, take notice of the `controller-revision-hash` label. This serves the same purpose as the `controller-revision-hash` label in a DaemonSet or the `pod-template-hash` in a Deployment: it provides a means of tracking the revision of the Pod Template and enables rollback functionality.
- More information about the state of the StatefulSet can be gleaned by describing it.
$ kubectl describe statefulset sts-example
Within the events, notice that it is creating claims for volumes before each Pod is created.
- View the current Persistent Volume Claims.
$ kubectl get pvc
The StatefulSet controller creates a volume for each instance based off the `volumeClaimTemplate`. It prepends the volume name to the Pod name, e.g. `www-sts-example-0`.
- Update the StatefulSet's Pod Template and add a few additional labels.
$ kubectl apply -f manifests/sts-example.yaml --record
< or >
$ kubectl edit statefulset sts-example --record
- Return to watching the Pods.
$ kubectl get pods --show-labels
None of the Pods are updated to the new version of the Pod Template.
- Delete the `sts-example-2` Pod.
$ kubectl delete pod sts-example-2
- Immediately get the Pods.
$ kubectl get pods --show-labels --watch
The new `sts-example-2` Pod should be created with the new additional labels. The `OnDelete` Update Strategy will not spawn a new iteration of a Pod until the previous one has been deleted. This allows for manually gating the update process for the StatefulSet.
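For staged rollouts without deleting Pods by hand, the RollingUpdate strategy for StatefulSets also supports a partition field: only Pods with an ordinal greater than or equal to the partition value are updated. A sketch of the relevant stanza (not used in this exercise):
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    partition: 2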
- Update the StatefulSet and change the Update Strategy Type to `RollingUpdate`.
$ kubectl apply -f manifests/sts-example.yaml --record
< or >
$ kubectl edit statefulset sts-example --record
- Immediately watch the Pods once again.
$ kubectl get pods --show-labels --watch
Note that the Pods are sequentially updated in descending order, or largest to smallest, based on the Pod's ordinal index. This means that if `sts-example-2` was not already updated, it would be updated first, then `sts-example-1`, and finally `sts-example-0`.
- Delete the StatefulSet `sts-example`.
$ kubectl delete statefulset sts-example
- View the Persistent Volume Claims.
$ kubectl get pvc
Created PVCs are NOT garbage collected automatically when a StatefulSet is deleted. They must be reclaimed independently of the StatefulSet itself.
- Recreate the StatefulSet using the same manifest.
$ kubectl create -f manifests/sts-example.yaml --record
- View the Persistent Volume Claims again.
$ kubectl get pvc
Note that new PVCs were NOT provisioned. The StatefulSet controller assumes that if a PVC with a matching name is present, it is intended to be used for the associated Pod.
Summary: Like many applications where state must be taken into account, the planning and usage of StatefulSets requires forethought. The consistency brought by standard naming, ordered updates/deletes and templated storage does however make this task easier.
Objective: Create a "headless service", or a Service without a `ClusterIP` (`clusterIP: None`), for use with the StatefulSet `sts-example`, then explore how this enables consistent service discovery.
- Create the headless service `app` using the `app=stateful` selector, from the yaml below or the manifest `manifests/service-sts-example.yaml`.
manifests/service-sts-example.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  clusterIP: None
  selector:
    app: stateful
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Command
$ kubectl create -f manifests/service-sts-example.yaml
- Describe the newly created Service.
$ kubectl describe svc app
Notice that it does not have a `clusterIP`, but does have the Pod Endpoints listed. Headless services are unique in this behavior.
- Query the DNS entry for the `app` service.
$ kubectl exec sts-example-0 -- nslookup app.default.svc.cluster.local
An A record will have been returned for each instance of the StatefulSet. Querying the service directly will do simple DNS round-robin load-balancing.
- Finally, query one of the instances directly.
$ kubectl exec sts-example-0 -- nslookup sts-example-1.app.default.svc.cluster.local
This is a feature unique to StatefulSets. It allows services to directly interact with a specific instance of a Pod. If the Pod is updated and obtains a new IP, the DNS record will immediately point to it, enabling consistent service discovery.
Summary: StatefulSet service discovery is unique within Kubernetes in that it augments a headless service (a Service without a unique `ClusterIP`) to provide a consistent mapping to the individual Pods. These mappings take the form of an A record in the format `<StatefulSet Name>-<ordinal>.<service name>.<namespace>.svc.cluster.local` and can be used consistently throughout other Workloads.
Clean Up Command
kubectl delete svc app
kubectl delete statefulset sts-example
kubectl delete pvc www-sts-example-0 www-sts-example-1 www-sts-example-2
The Job Controller ensures that one or more Pods are executed and successfully terminate. Essentially, it is a task executor that can run Pods in parallel.
CronJobs are an extension of the Job Controller, and enable Jobs to be run on a schedule.
Objective: Create a Kubernetes Job and work to understand how the Pods are managed with the `completions` and `parallelism` directives.
- Create Job `job-example` using the yaml below, or the manifest located at `manifests/job-example.yaml`.
manifests/job-example.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-example
spec:
  backoffLimit: 4
  completions: 4
  parallelism: 2
  template:
    spec:
      containers:
      - name: hello
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo hello from $HOSTNAME!"]
      restartPolicy: Never
Command
$ kubectl create -f manifests/job-example.yaml
- Watch the Pods as they are being created.
$ kubectl get pods --show-labels --watch
Only two Pods are provisioned at a time, adhering to the `parallelism` attribute. This is done until the total number of `completions` is satisfied. Additionally, the Pods are labeled with `controller-uid`, which acts as a unique ID for that specific Job.
When done, the Pods persist in a `Completed` state. They are not deleted after the Job completes or fails. This is intentional, to better support troubleshooting.
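Their output remains available as well. For example, logs can be pulled through the Job itself, in which case kubectl picks one of the Job's Pods:
$ kubectl logs job/job-example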
- A summary of these events can be seen by describing the Job itself.
$ kubectl describe job job-example
- Delete the job.
$ kubectl delete job job-example
- View the Pods once more.
$ kubectl get pods
The Pods will now be deleted. They are cleaned up when the Job itself is removed.
Summary: Jobs are useful for fire-and-forget, one-off tasks, batch processing, or acting as an executor for a workflow engine. They "run to completion" or terminate gracefully, adhering to the `completions` and `parallelism` directives.
Objective: Create a CronJob based off a Job Template. Understand how the Jobs are generated and how to suspend a job in the event of a problem.
- Create CronJob `cronjob-example` based off the yaml below, or use the manifest `manifests/cronjob-example.yaml`. It is configured to run the Job from the earlier example every minute, using the cron schedule `"*/1 * * * *"` (a field-by-field breakdown is shown after the create command below). This schedule is UTC ONLY.
manifests/cronjob-example.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      completions: 4
      parallelism: 2
      template:
        spec:
          containers:
          - name: hello
            image: alpine:latest
            command: ["/bin/sh", "-c"]
            args: ["echo hello from $HOSTNAME!"]
          restartPolicy: Never
Command
$ kubectl create -f manifests/cronjob-example.yaml
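For reference, the five fields of the schedule used above break down as follows (standard cron syntax):
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
# */1 * * * *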
- Give it some time to run, and then list the Jobs.
$ kubectl get jobs
There should be at least one Job named in the format `<cronjob-name>-<unix timestamp>`. Note the timestamp of the oldest Job.
- Give it a few minutes and list the Jobs once again.
$ kubectl get jobs
The oldest Job should have been removed. The CronJob controller will purge Jobs according to the `successfulJobsHistoryLimit` and `failedJobsHistoryLimit` attributes. In this case, it retains strictly the last 2 successful Jobs.
- Describe the CronJob `cronjob-example`.
$ kubectl describe CronJob cronjob-example
The events will show the records of the creation and deletion of the Jobs.
- Edit the CronJob `cronjob-example` and locate the `suspend` field. Then set it to `true`.
$ kubectl edit CronJob cronjob-example
This will prevent the CronJob from firing off any future events, and is useful when initially troubleshooting an issue without having to delete the CronJob directly.
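The same toggle can be flipped non-interactively with a patch, which is equivalent to the edit above:
$ kubectl patch cronjob cronjob-example -p '{"spec":{"suspend":true}}'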
- Delete the CronJob
$ kubectl delete cronjob cronjob-example
Deleting the CronJob WILL delete all child Jobs. Use `suspend` to 'stop' the CronJob temporarily while troubleshooting.
Summary: CronJobs are a useful extension of Jobs. They are great for backup or other day-to-day tasks, with the only caveat being they adhere to a UTC ONLY schedule.
Clean Up Commands
kubectl delete CronJob cronjob-example