Tuesday, October 15, 2019

kubernetes "change-cause" to describe a deployment

$ kubectl run nginx --image=nginx --replicas=4

$ kubectl annotate deployment/nginx kubernetes.io/change-cause='initial deployment'
deployment.extensions/nginx annotated

$ kubectl set image deploy nginx nginx=nginx:1.7.9

$ kubectl annotate deployment/nginx kubernetes.io/change-cause='nginx:1.7.9'
deployment.extensions/nginx annotated

$ kubectl set image deploy nginx nginx=nginx:1.9.1

$ kubectl annotate deployment/nginx kubernetes.io/change-cause='nginx:1.9.1'
deployment.extensions/nginx annotated


$ kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
5         initial deployment
6         nginx:1.7.9
7         nginx:1.9.1



This seems to me a very good practice: it lets you trace every change that reaches PROD.

You can always inspect what changed in a given revision:

kubectl rollout history deploy nginx --revision=6

deployment.extensions/nginx with revision #6
Pod Template:
  Labels:       pod-template-hash=7b74859c78
        run=nginx
  Containers:
   nginx:
    Image:      nginx:1.7.9
    Port:       <none>
    Host Port:  <none>
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
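

Once you have identified, via its change-cause, the revision you want, you can also roll back to it. This was not part of the session above, just a quick sketch of the matching command:

kubectl rollout undo deploy nginx --to-revision=6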






joy of OpenShift SCC

If you run

oc describe project

you will see two annotations:

openshift.io/sa.scc.supplemental-groups=1000800000/10000
openshift.io/sa.scc.uid-range=1000800000/10000


Even if you specify "USER 10001" in your Dockerfile, your actual uid will be remapped within the range specified by those two annotations (the second parameter, "/10000", is the block length, meaning you can have 10000 different uids starting from uid 1000800000):

sh-4.2$ id
uid=1000800000(root) gid=0(root) groups=0(root),1000800000
sh-4.2$ id root
uid=0(root) gid=0(root) groups=0(root)


In order for this new user to be a first-class citizen inside the container, you must run a uid_entrypoint script that appends it to /etc/passwd.
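
A minimal sketch of such an entrypoint, along the lines of the pattern shown in the OpenShift image creation guidelines linked below (USER_NAME and HOME are assumed to be set in the Dockerfile):

#!/bin/sh
# if the current uid has no entry in /etc/passwd, add one on the fly
if ! whoami > /dev/null 2>&1; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
# then hand over to the real command
exec "$@"

For this to work, /etc/passwd must be group-writable (the guidelines suggest something like "RUN chmod g=u /etc/passwd" in the Dockerfile), since the arbitrary uid always belongs to the root group (gid 0).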

for more details:

https://docs.openshift.com/enterprise/3.1/architecture/additional_concepts/authorization.html

https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines




Monday, October 7, 2019

kubernetes mount file on an existing folder

With ConfigMap and Secret you can "populate" a volume with files and "mount" that volume to a container, so that the application can access those files.


echo "one=1" > file1.properties
echo "two=2" > file2.properties
kubectl create configmap myconfig --from-file file1.properties --from-file file2.properties
kubectl describe configmaps myconfig


Name:         myconfig
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
file1.properties:
----
one=1

file2.properties:
----
two=2

Events:  <none>


Now I can mount the ConfigMap into a Pod:


cat mypod.yml

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: myconfig
        items:
          - key: file1.properties
            path: myfile1.properties


kubectl create -f mypod.yml

kubectl exec -ti configmap-pod bash

cat /etc/config/myfile1.properties
one=1




Now I change the image to vernetto/mynginx, which already contains a /etc/config/file0.properties.
The existing folder /etc/config/ is completely replaced by the volumeMount, so file0.properties disappears!
Only /etc/config/myfile1.properties is there.

It is claimed that, using subPath, one can selectively mount only a single file from the volume and leave the original files of the base image in place (https://stackoverflow.com/questions/33415913/whats-the-best-way-to-share-mount-one-file-into-a-pod/43404857#43404857), but it is definitely not working for me.
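
For reference, this is roughly what the subPath approach from that answer would look like, adapted to the example above as a sketch rather than something that worked for me: the single key is mounted directly onto a file path, so the rest of /etc/config coming from the image should stay in place.

apiVersion: v1
kind: Pod
metadata:
  name: configmap-subpath-pod
spec:
  containers:
    - name: test
      image: vernetto/mynginx
      volumeMounts:
        - name: config-vol
          # mount only this one file instead of shadowing the whole /etc/config folder
          mountPath: /etc/config/myfile1.properties
          subPath: myfile1.properties
  volumes:
    - name: config-vol
      configMap:
        name: myconfig
        items:
          - key: file1.properties
            path: myfile1.properties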