Kubernetes Secrets, ConfigMaps, RBAC, PVs and PVCs
Create the KinD Cluster
kind create cluster --name tambootcamp
Make sure the nodes are in the Ready state. If not, wait for them. You can check the status by running the below command:
kubectl get nodes
Apply the sidecar pod yaml manifest using
kubectl apply -f sidecar.yaml
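If you do not have the repo handy, here is a minimal sketch of what such a manifest could look like. Treat it as an assumption rather than the repo's exact file: an nginx main container serving a shared emptyDir volume that a busybox sidecar appends a line to every few seconds. The pod and main container names match the ones used in the exec command below; everything else is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-container-demo
spec:
  volumes:
    - name: shared-logs              # emptyDir shared by both containers
      emptyDir: {}
  containers:
    - name: main-container           # serves whatever the sidecar writes
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /usr/share/nginx/html
    - name: sidecar-container        # appends a timestamped line every 5 seconds
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - |
          while true; do
            echo "$(date) Hi I am from Sidecar container" >> /usr/share/nginx/html/index.html;
            sleep 5;
          done
      volumeMounts:
        - name: shared-logs
          mountPath: /usr/share/nginx/html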
Get the pods running using
kubectl get pods
Now exec into the pod
kubectl exec -it sidecar-container-demo -c main-container -- /bin/sh
apt-get update && apt-get install -y curl
curl localhost
You will see output similar to the below, with a new line appended every few seconds:
echo Mon May 10 00:05:07 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:11 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:16 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:21 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:26 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:31 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:36 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:41 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:46 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:51 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:05:56 UTC 2021 Hi I am from Sidecar container
echo Mon May 10 00:06:01 UTC 2021 Hi I am from Sidecar container
So here you can see that even though you ran curl against the main container, the content it returns is generated by the Sidecar container.
Type exit to exit from the container.
exit
Now let's create a Secret
kubectl create secret generic credentials --from-literal=username=arathod --from-literal=password=VMware
Have a look at the secret using the describe and get -o yaml
kubectl describe secret credentials
kubectl get secret credentials -o yaml
You will notice that the username and password look different from what you created. That is because the data is stored in a base64-encoded format.
You might have guessed it: even though we call it a Secret, it's not exactly a super-secret thing, since the data can be decoded just as easily as it was encoded using the command
echo YXJhdGhvZA== | base64 --decode
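For comparison, the value was encoded the same way in the first place (the -n flag keeps a trailing newline out of the encoded value):
echo -n 'VMware' | base64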
Now you could use the secret-test-pod.yaml file to create a pod which will reference the secret. Note that we are mounting the secret as a Volume at /etc/secret-volume within the container.
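In case you want to compare against your copy, a minimal sketch of such a pod could look like the following. The container name and image are assumptions for illustration; the Secret name and mount path come from the steps above.
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: credentials      # the Secret created earlier
  containers:
    - name: test-container           # assumed name/image for illustration
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true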
So let's apply the secret-test-pod.yaml file using
kubectl apply -f secret-test-pod.yaml
Let's exec into the pod using
kubectl exec -it secret-test-pod -- /bin/bash
You should now be in the container's shell. Navigate to /etc/secret-volume using
cd /etc/secret-volume
You will notice that there are two files, username and password, and if you cat them, you will see the values in decoded form.
Now let's create a ConfigMap (cm)
kubectl create configmap my-config --from-literal=fqdn=mysite.com
As you may have noticed, ConfigMaps are very similar to Secrets. They too get mounted as a volume within the container. The difference is that you wouldn't store anything confidential or secretive in a ConfigMap.
Here I am creating a ConfigMap to pass the configuration data "fqdn=mysite.com" so that my application gets the configuration it needs to do its job.
Have a look at the ConfigMap using the describe and get -o yaml
kubectl describe cm my-config
kubectl get cm my-config -o yaml
Just like Secrets, let's mount this into a container in a pod and see if the configuration exists there. For that I have the pod YAML cm-test-pod.yaml; let's apply that.
kubectl apply -f cm-test-pod.yaml
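For reference, here is a minimal sketch of what cm-test-pod.yaml could contain. The container name and image are assumptions; the ConfigMap name and mount path follow the steps above.
apiVersion: v1
kind: Pod
metadata:
  name: cm-test-pod
spec:
  volumes:
    - name: config-volume
      configMap:
        name: my-config              # the ConfigMap created earlier
  containers:
    - name: test-container           # assumed name/image for illustration
      image: nginx
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config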
Exec into the pod as before. Once you are in the container's shell, navigate to /etc/config and read the value using
cd /etc/config
cat fqdn
Now let's see RBAC - Roles, ClusterRoles, RoleBindings and ClusterRoleBindings
To create a Role named "pod-reader" that allows a user to perform "get", "watch" and "list" on pods:
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
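The declarative equivalent looks roughly like the sketch below (you can generate it yourself by appending --dry-run=client -o yaml to the command above):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]                  # "" is the core API group, where pods live
    resources: ["pods"]
    verbs: ["get", "watch", "list"]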
Now we bind this Role, which has permission to get, watch and list pods, to a specific user/group using a RoleBinding or a ClusterRoleBinding with the command
kubectl create rolebinding admin --role=pod-reader --user=user1
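Declaratively, that RoleBinding is roughly:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin
subjects:
  - kind: User                       # a plain user name, not a ServiceAccount
    name: user1
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io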
You can test if the user can create pods using the command:
kubectl auth can-i create pod --as=user1
Feel free to try with get instead of create in the above command and see the difference.
The same logic applies for ClusterRoles and ClusterRoleBindings with the only difference being the scope for them is cluster-wide.
Check out the existing ClusterRoles and ClusterRoleBindings using the command
kubectl get clusterroles
kubectl describe clusterroles admin
kubectl describe clusterroles cluster-admin
kubectl get clusterrolebinding
kubectl describe clusterrolebinding system:basic-user
kubectl describe clusterrolebinding cluster-admin
The user referenced when creating the RoleBinding is different from a ServiceAccount; take a look at https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
Now let's switch gears to PersistentVolumes
Since we are using KinD, before we learn about PersistentVolumes, you need to know about StorageClasses.
Let's get the StorageClasses in the current cluster using
kubectl get sc
A StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned. You will notice a standard StorageClass in there. Make a note of the VOLUMEBINDINGMODE, which states WaitForFirstConsumer. This means the binding and provisioning of a PersistentVolume is delayed until a Pod using the PersistentVolumeClaim is created.
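For reference, a StorageClass manifest with this binding mode looks roughly like the sketch below. The provisioner shown is the local-path provisioner that KinD typically ships with, so treat the exact values as an assumption and compare with kubectl get sc standard -o yaml.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: rancher.io/local-path   # KinD's bundled local-path provisioner (assumed)
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete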
There is no imperative way that I know of to create a PersistentVolume, so we will have to create a YAML for this. In the repo you will find a file called pv.yaml. Let's take a look at it first.
cat pv.yaml
Notice the storageClassName and the storage portions.
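If you don't have the file in front of you, a hostPath-backed PersistentVolume of this shape is a reasonable mental model; the name, size and path here are assumptions, not necessarily the repo's values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv                        # assumed name for illustration
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data                  # assumed path on the node
Now apply the repo's file: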
kubectl apply -f pv.yaml
Now let's take a look at the PVs (note the output in terms of STATUS and CLAIM):
kubectl get pv
Now let's create a PersistentVolumeClaim for this PersistentVolume using the pvc.yaml file.
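A PersistentVolumeClaim for it could look like this sketch (again, the name and size are assumptions; the claim must request a class and size compatible with the PV):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                       # assumed name for illustration
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Apply the repo's pvc.yaml: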
kubectl apply -f pvc.yaml
List the PV and PVC again:
kubectl get pv,pvc
Note the STATUS, CLAIM and ACCESS MODES columns. Even though we created a PVC for this PV, the STATUS and CLAIM haven't changed. This is because the volumeBindingMode is set to WaitForFirstConsumer in the StorageClass: there has to be a consumer before the PersistentVolume becomes bound.
So let's get our first consumer to use the PersistentVolume. Let's use the pod_storage.yaml file, a pod definition which specifies the claim name, gives the volume a name in the volumes section under spec, and then references that volume name under volumeMounts in the container section of the YAML file.
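Structurally, that wiring looks like the sketch below: the claimName under volumes must match the PVC, and the volumeMounts entry references the volume by the name you gave it (all names here are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod                  # assumed name for illustration
spec:
  volumes:
    - name: my-volume                # name given to the volume in the Pod spec
      persistentVolumeClaim:
        claimName: my-pvc            # must match the PVC's name
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: my-volume            # same volume name referenced here
          mountPath: /usr/share/nginx/html
Apply the repo's file: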
kubectl apply -f pod_storage.yaml
One last thing from a storage perspective is dynamic provisioning using volumeClaimTemplates. For this we will be deploying a StatefulSet (mongodb.yaml).
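The interesting part of such a StatefulSet is the volumeClaimTemplates section, which tells Kubernetes to create a fresh PVC per replica and let the StorageClass provision a PV for it. A stripped-down sketch of the idea (the repo's mongodb.yaml will differ in details):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          volumeMounts:
            - name: data             # mounts the per-replica PVC
              mountPath: /data/db
  volumeClaimTemplates:              # one PVC is created per replica
    - metadata:
        name: data
      spec:
        storageClassName: standard
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
Apply the repo's file: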
kubectl apply -f mongodb.yaml
Run the command below to list the PVs and PVCs again.
kubectl get pv,pvc
Yes, you can separate object types with a ","!
Finally, let's take a look at CRDs (CustomResourceDefinitions). These define object types that Kubernetes does not understand out of the box and needs to be told about. Here we will create a CRD called route. This is the OpenShift Route CRD.
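A CRD manifest essentially registers a new API group, kind and schema with the API server. The real OpenShift Route CRD in route_crd.yaml is far more detailed, but a heavily simplified sketch of the same idea looks like this:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: routes.route.openshift.io    # must be <plural>.<group>
spec:
  group: route.openshift.io
  scope: Namespaced
  names:
    plural: routes
    singular: route
    kind: Route
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
Apply the repo's file: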
kubectl apply -f route_crd.yaml
Once applied, you can run commands like
kubectl explain route
kubectl get routes
You can also see this now in the list of api-resources
kubectl api-resources
For the node operations, you will have to set up a cluster with more than one node. The cluster you have been using so far is made of a single node. We will set up a three-node cluster using the kind-config.yaml file in the repo.
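That config file is typically just a list of node roles. Assuming one control-plane node and two workers, it would look like this (the repo's kind-config.yaml may use a different mix):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker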
Before you create another K8S cluster, you can delete the current cluster using the below command.
kind delete cluster --name tambootcamp
Now you can create the three node cluster using the command below:
kind create cluster --name tamlab --config kind-config.yaml
Allow it some time, it's a three node cluster!
Once the cluster is provisioned, it's important to see if the nodes are in the ready state using
kubectl get nodes
You can describe the node using the command
kubectl describe node INSERT_NODE_NAME_HERE
Suppose we want to perform maintenance on one of the nodes. First, cordon it so that no new Pods get scheduled on it:
kubectl cordon INSERT_NODE_NAME_HERE
Now drain it to evict the Pods running on it. You will most likely need --ignore-daemonsets, since DaemonSet-managed Pods (such as kube-proxy) cannot be evicted:
kubectl drain INSERT_NODE_NAME_HERE --ignore-daemonsets
Once the maintenance of the node is done, you can uncordon it using
kubectl uncordon INSERT_NODE_NAME_HERE