Linux Foundation CKS Sample Questions
Question # 1
Use Trivy to scan the following images:
1. amazonlinux:1
2. k8s.gcr.io/kube-controller-manager:v1.18.6
Look for images with HIGH or CRITICAL severity vulnerabilities and store the output in /opt/trivy-vulnerable.txt.
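Assuming Trivy is installed on the CLI host, a minimal sketch of one way to do this (the `image` subcommand and `--severity` flag are standard Trivy CLI; on very old Trivy versions the subcommand is omitted):

```shell
# scan both images, appending HIGH/CRITICAL findings to the required file
trivy image --severity HIGH,CRITICAL amazonlinux:1 >> /opt/trivy-vulnerable.txt
trivy image --severity HIGH,CRITICAL \
  k8s.gcr.io/kube-controller-manager:v1.18.6 >> /opt/trivy-vulnerable.txt
```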
Question # 2
Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes-logs.txt
2. log files are retained for 12 days
3. at maximum, 8 old audit log files are retained
4. the maximum size before rotation is set to 200MB
Edit and extend the basic policy to log:
1. namespaces changes at the RequestResponse level
2. the request body of secrets changes in the namespace kube-system
3. all other resources in core and extensions at the Request level
4. "pods/portforward" and "services/proxy" at the Metadata level
5. omit the RequestReceived stage
All other requests at the Metadata level.
Answer: See the explanation below.
Explanation:
Kubernetes auditing provides a security-relevant chronological set of records about a cluster. The kube-apiserver performs auditing: each request, at each stage of its execution, generates an event, which is pre-processed according to a certain policy and written to a backend. The policy determines what is recorded and the backends persist the records. You might want to configure the audit log as part of compliance with the CIS (Center for Internet Security) Kubernetes Benchmark controls.
The audit log can be enabled by default using the following configuration in cluster.yml:
services:
  kube-api:
    audit_log:
      enabled: true
When the audit log is enabled, you should be able to see the default values at /etc/kubernetes/audit-policy.yaml
The log backend writes audit events to a file in JSON lines format. You can configure the log audit backend using the following kube-apiserver flags:
--audit-log-path specifies the log file path that the log backend uses to write audit events. Not specifying this flag disables the log backend; "-" means standard out.
--audit-log-maxage defines the maximum number of days to retain old audit log files.
--audit-log-maxbackup defines the maximum number of audit log files to retain.
--audit-log-maxsize defines the maximum size in megabytes of the audit log file before it gets rotated.
If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the hostPath to the location of the policy file and log file, so that audit records are persisted. For example:
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--audit-log-path=/var/log/audit.log
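Applied to this task's specific values, the policy edits might look like the following sketch (paths and levels taken from the task statement; rule order matters, because the first matching rule wins):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"              # 5. omit the RequestReceived stage
rules:
  - level: RequestResponse         # 1. namespaces changes
    resources:
    - group: ""
      resources: ["namespaces"]
  - level: Request                 # 2. request body of secrets in kube-system
    resources:
    - group: ""
      resources: ["secrets"]
    namespaces: ["kube-system"]
  - level: Metadata                # 4. pods/portforward, services/proxy
    resources:
    - group: ""
      resources: ["pods/portforward", "services/proxy"]
  - level: Request                 # 3. all other core and extensions resources
    resources:
    - group: ""
    - group: "extensions"
  - level: Metadata                # catch-all: everything else
```

And the log-backend flags for the retention settings this task asks for:

```shell
--audit-log-path=/var/log/kubernetes-logs.txt
--audit-log-maxage=12
--audit-log-maxbackup=8
--audit-log-maxsize=200
```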
Question # 3
Create a RuntimeClass named gvisor-rc using the prepared runtime handler named runsc.
Create a Pod with image nginx in the namespace server that runs on the gVisor runtime class.
Explanation:
Install the RuntimeClass for gVisor:
{ # Step 1: Install a RuntimeClass
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor-rc
handler: runsc
EOF
}
Create a Pod with the gVisor RuntimeClass:
{ # Step 2: Create a pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
  namespace: server
spec:
  runtimeClassName: gvisor-rc
  containers:
  - name: nginx
    image: nginx
EOF
}
Verify that the Pod is running:
{ # Step 3: Get the pod
kubectl get pod nginx-gvisor -n server -o wide
}
Question # 4
You must complete this task on the following cluster/nodes:
Cluster: trace
Master node: master
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context trace
Given: You may use Sysdig or Falco documentation.
Task:
Use detection tools to detect anomalies, such as processes frequently spawning and executing unexpected things, in the single container belonging to the Pod tomcat.
Two tools are available to use:
1. falco
2. sysdig
Tools are pre-installed on the worker1 node only.
Analyse the container’s behaviour for at least 40 seconds, using filters that detect newly
spawning and executing processes.
Store an incident file at /home/cert_masters/report, in the following format:
[timestamp],[uid],[processName]
Note: Make sure to store the incident file on the cluster's worker node; don't move it to the master node.
Answer: See the explanation below.
Explanation:
[desk@cli] $ ssh worker1
[worker1@cli] $ vim /etc/falco/falco_rules.yaml
Search for "Container Drift Detected" and paste it into falco_rules.local.yaml:
[worker1@cli] $ vim /etc/falco/falco_rules.local.yaml
- rule: Container Drift Detected (open+create)
  desc: New executable created in a container due to open+create
  condition: >
    evt.type in (open,openat,creat) and
    evt.is_open_exec=true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    not user_known_container_drift_activities and
    evt.rawres>=0
  output: >
    %evt.time,%user.uid,%proc.name  # Add this / refer to the Falco documentation
  priority: ERROR
[worker1@cli] $ vim /etc/falco/falco.yaml
[worker1@cli] $ kill -1  # send SIGHUP to the falco process so it reloads the rules
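With the rule output format above in place, one hedged way to capture at least 40 seconds of activity and produce the report (the `-M <seconds>` flag makes falco exit after that duration in recent releases, and `-U` unbuffers output; verify against `falco --help` on worker1, since the exact alert-line prefix varies by version):

```shell
# run falco for 45 seconds and capture the alerts
falco -U -M 45 > /tmp/falco.out 2>/dev/null
# falco prefixes alerts with "time: PRIORITY "; keep only the csv payload
awk '{print $NF}' /tmp/falco.out | grep ',' > /home/cert_masters/report
```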
Question # 5
Create a user named john, create the CSR request, and fetch the user's certificate after approving it.
Create a Role named john-role to list secrets and pods in the namespace john.
Finally, create a RoleBinding named john-role-binding to attach the newly created role john-role to the user john in the namespace john.
To verify: use the kubectl auth CLI command to verify the permissions.
Answer: See the explanation below.
Explanation:
Use kubectl to create a CSR and approve it.
Get the list of CSRs:
kubectl get csr
Approve the CSR:
kubectl certificate approve myuser
Retrieve the certificate from the CSR:
kubectl get csr/myuser -o yaml
Here are a Role and RoleBinding that give john permission to create NEW_CRD resources:
kubectl apply -f roleBindingJohn.yaml --as=john
rolebinding.rbac.authorization.k8s.io/john_external-rosource-rb created
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: john_crd
  namespace: development-john
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: crd-creation
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crd-creation
rules:
- apiGroups: ["kubernetes-client.io/v1"]
  resources: ["NEW_CRD"]
  verbs: ["create", "list", "get"]
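The generic explanation above uses placeholder names (myuser, NEW_CRD); a hedged sketch using the names from this task (key/CSR file names are illustrative):

```shell
# 1. Key + CSR for user john (filenames illustrative)
openssl genrsa -out john.key 2048
openssl req -new -key john.key -out john.csr -subj "/CN=john"

# 2. Submit and approve the CSR, then fetch the signed certificate
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  request: $(base64 -w0 < john.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve john
kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt

# 3. Role + RoleBinding in namespace john
kubectl create role john-role --verb=list --resource=secrets,pods -n john
kubectl create rolebinding john-role-binding --role=john-role --user=john -n john

# 4. Verify
kubectl auth can-i list pods -n john --as=john   # expect: yes
```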
Question # 6
Use the kubesec docker image to scan the given YAML manifest, edit and apply the advised changes, and pass with a score of 4 points.
kubesec-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
Hint: docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
Answer: See the explanation below.
Explanation:
kubesec scan k8s-deployment.yaml
cat <<EOF > kubesec-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubesec scan kubesec-test.yaml
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
kubesec http 8080 &
[1] 12345
{"severity":"info","timestamp":"2019-05-12T11:58:34.662+0100","caller":"server/server.go:69","message":"Starting HTTP server on port 8080"}
curl -sSX POST --data-binary @test/asset/score-0-cap-sys-admin.yml http://localhost:8080/scan
[
  {
    "object": "Pod/security-context-demo.default",
    "valid": true,
    "message": "Failed with a score of -30 points",
    "score": -30,
    "scoring": {
      "critical": [
        {
          "selector": "containers[] .securityContext .capabilities .add == SYS_ADMIN",
          "reason": "CAP_SYS_ADMIN is the most privileged capability and should always be avoided"
        },
        {
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege"
        },
        // ...
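The sample output above comes from an unrelated test manifest; for this task's manifest, a hedged sketch of edits that typically raise the kubesec score (exact point values depend on the kubesec version, so rescan to confirm the score reaches 4):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    resources:
      requests:            # resource requests/limits each score points in kubesec
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 10001     # a high UID avoids clashing with host users
      capabilities:
        drop: ["ALL"]
```

Then rescan with the same command from the hint: docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml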
Question # 7
Analyze and edit the given Dockerfile
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-install nginx -y
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
USER ROOT
Fix the two instructions in the file that are prominent security best-practice issues.
Analyze and edit the deployment manifest file
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 0
      privileged: True
      allowPrivilegeEscalation: false
Fix the two fields in the file that are prominent security best-practice issues.
Don't add or remove configuration settings; only modify the existing configuration settings.
Whenever you need an unprivileged user for any of the tasks, use the user test-user with the user id 548.
Explanation:
FROM debian:latest
MAINTAINER [email protected]
# 1 - RUN
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq apt-utils
RUN DEBIAN_FRONTEND=noninteractive apt-get install -yq htop
RUN apt-get clean
# 2 - CMD
#CMD ["htop"]
#CMD ["ls", "-l"]
# 3 - WORKDIR and ENV
WORKDIR /root
ENV DZ version1
$ docker image build -t bogodevops/demo .
Sending build context to Docker daemon 3.072kB
Step 1/7 : FROM debian:latest
---> be2868bebaba
Step 2/7 : MAINTAINER [email protected]
---> Using cache
---> e2eef476b3fd
Step 3/7 : RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq apt-utils
---> Using cache
---> 32fd044c1356
Step 4/7 : RUN DEBIAN_FRONTEND=noninteractive apt-get install -yq htop
---> Using cache
---> 0a5b514a209e
Step 5/7 : RUN apt-get clean
---> Using cache
---> 5d1578a47c17
Step 6/7 : WORKDIR /root
---> Using cache
---> 6b1c70e87675
Step 7/7 : ENV DZ version1
---> Using cache
---> cd195168c5c7
Successfully built cd195168c5c7
Successfully tagged bogodevops/demo:latest
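The build transcript above exercises an unrelated demo image and never touches the Dockerfile actually given in the task. For the given files, the fixes are plausibly the following sketch (using user test-user with UID 548, as the task instructs; note the original's `apt-install` is also a typo for `apt-get install`):

```dockerfile
FROM ubuntu:20.04            # fix 1: pin a specific tag instead of :latest
RUN apt-get update -y
RUN apt-get install nginx -y
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
USER test-user               # fix 2: don't run as root
```

And for the Pod manifest, modifying only the two existing problem fields:

```yaml
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 548          # fix 1: the unprivileged user id from the task
      privileged: false       # fix 2: no privileged container
      allowPrivilegeEscalation: false
```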
Question # 8
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context stage
Context:
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task:
1. Create a new PodSecurityPolicy named deny-policy, which prevents the creation of privileged Pods.
2. Create a new ClusterRole named deny-access-role, which uses the newly created PodSecurityPolicy deny-policy.
3. Create a new ServiceAccount named psp-denial-sa in the existing namespace development.
Finally, create a new ClusterRoleBinding named restrict-access-bind, which binds the newly created ClusterRole deny-access-role to the newly created ServiceAccount psp-denial-sa.
Answer: See the explanation below.
Explanation:
Create a PSP to disallow privileged containers:
master1 $ vim psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: deny-policy
spec:
  privileged: false  # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
master1 $ vim cr1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deny-access-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - "deny-policy"
master1 $ k create sa psp-denial-sa -n development
master1 $ vim cb1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restrict-access-bind
roleRef:
  kind: ClusterRole
  name: deny-access-role
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
- kind: ServiceAccount
  name: psp-denial-sa
  namespace: development
Question # 9
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa
Context:
A pod fails to run because of an incorrectly specified ServiceAccount
Task:
Create a new service account named backend-qa in an existing namespace qa, which
must not have access to any secret.
Edit the frontend pod yaml to use backend-qa service account
Note: You can find the frontend pod yaml at /home/cert_masters/frontend-pod.yaml
Answer: See the explanation below
Explanation:
[desk@cli] $ k create sa backend-qa -n qa
serviceaccount/backend-qa created
[desk@cli] $ k get role,rolebinding -n qa
No resources found in qa namespace.
[desk@cli] $ k create role backend -n qa --resource pods,namespaces,configmaps --verb list  # No access to secrets
role.rbac.authorization.k8s.io/backend created
[desk@cli] $ k create rolebinding backend -n qa --role backend --serviceaccount qa:backend-qa
rolebinding.rbac.authorization.k8s.io/backend created
[desk@cli] $ vim /home/cert_masters/frontend-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  serviceAccountName: backend-qa  # Add this
  containers:
  - image: nginx
    name: frontend
[desk@cli] $ k apply -f /home/cert_masters/frontend-pod.yaml
pod/frontend created
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
Question # 10
Service is running on port 389 inside the system, find the process-id of the process, and
stores the names of all the open-files inside the /candidate/KH77539/files.txt, and also
delete the binary.
Answer: See explanation below.
Explanation:
root# netstat -ltnup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:17600 0.0.0.0:* LISTEN 1293/dropbox
tcp 0 0 127.0.0.1:17603 0.0.0.0:* LISTEN 1293/dropbox
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 575/sshd
tcp 0 0 127.0.0.1:9393 0.0.0.0:* LISTEN 900/perl
tcp 0 0 :::80 :::* LISTEN 9583/docker-proxy
tcp 0 0 :::443 :::* LISTEN 9571/docker-proxy
udp 0 0 0.0.0.0:68 0.0.0.0:* 8822/dhcpcd
root# netstat -ltnup | grep ':22'
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 575/sshd
The ss command is the replacement of the netstat command.
Now let’s see how to use the ss command to see which process is listening on port 22:
root# ss -ltnup 'sport = :22'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:("sshd",pid=575,fd=3))
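The netstat/ss walkthrough above uses port 22 as its example; for the actual task on port 389, a hedged sketch (lsof's `-t`, `-i`, and `-p` flags are standard, and on Linux /proc/<pid>/exe points at the running binary):

```shell
pid=$(lsof -t -i :389 | head -n 1)               # PID of the listener on port 389
lsof -p "$pid" > /candidate/KH77539/files.txt    # names of all its open files
rm -f "$(readlink /proc/$pid/exe)"               # delete the binary on disk
```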
Question # 11
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context test-account
Task: Enable audit logs in the cluster.
To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes/logs.txt
2. log files are retained for 5 days
3. at maximum, a number of 10 old audit log files are retained
A basic policy is provided at /etc/kubernetes/log-policy/audit-policy.yaml. It only specifies what not to log.
Note: The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
1. Nodes changes at RequestResponse level
2. The request body of persistentvolumes changes in the namespace frontend
3. ConfigMap and Secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level.
Note: Don't forget to apply the modified policy.
Answer: See the explanation below
Explanation:
[desk@cli] $ ssh master1
[master1@cli] $ vim /etc/kubernetes/log-policy/audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Add your changes below
  - level: RequestResponse
    resources:
    - group: "" # core API group
      resources: ["nodes"] # Block for node changes
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["persistentvolumes"] # Block for persistentvolumes
    namespaces: ["frontend"] # Block for persistentvolumes of frontend ns
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["configmaps", "secrets"] # Block for configmaps & secrets
  - level: Metadata # Block for everything else
[master1@cli] $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.5:6443
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.0.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --audit-policy-file=/etc/kubernetes/log-policy/audit-policy.yaml # Add this
    - --audit-log-path=/var/log/kubernetes/logs.txt # Add this
    - --audit-log-maxage=5 # Add this
    - --audit-log-maxbackup=10 # Add this
(output truncated)
Question # 12
Create a PSP that will prevent the creation of privileged pods in the namespace.
Create a new PodSecurityPolicy named prevent-privileged-policy which prevents the
creation of privileged pods.
Create a new ServiceAccount named psp-sa in the namespace default.
Create a new ClusterRole named prevent-role, which uses the newly created Pod Security
Policy prevent-privileged-policy.
Create a new ClusterRoleBinding named prevent-role-binding, which binds the created
ClusterRole prevent-role to the created SA psp-sa.
Also, check the configuration is working by trying to create a privileged pod; it should fail.
Answer: See the explanation below.
Explanation:
Create a PodSecurityPolicy that prevents the creation of privileged pods:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
And create it with kubectl:
kubectl-admin create -f example-psp.yaml
Now, as the unprivileged user, try to create a simple pod:
kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause
EOF
The output is similar to this:
Error from server (Forbidden): error when creating "STDIN": pods "pause" is forbidden: unable to validate against any pod security policy: []
Create a ClusterRole that grants use of the policy, and bind it to a ServiceAccount:
$ cat clusterrole-use-privileged.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-privileged-psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - default-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: privileged-role-bind
  namespace: psp-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-privileged-psp
subjects:
- kind: ServiceAccount
  name: privileged-sa
$ kubectl -n psp-test apply -f clusterrole-use-privileged.yaml
For reference, a RoleBinding and the Role it binds:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane  # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role  # this must be Role or ClusterRole
  name: pod-reader  # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]  # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
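None of the object names in the explanation above match this task's requirements; a hedged sketch using the required names (`kubectl create clusterrole` supports `--resource-name` for this purpose):

```shell
# PSP that disallows privileged pods
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-privileged-policy
spec:
  privileged: false          # the one restriction this task needs
  seLinux: { rule: RunAsAny }
  supplementalGroups: { rule: RunAsAny }
  runAsUser: { rule: RunAsAny }
  fsGroup: { rule: RunAsAny }
  volumes: ['*']
EOF
kubectl create sa psp-sa -n default
kubectl create clusterrole prevent-role --verb=use \
  --resource=podsecuritypolicies --resource-name=prevent-privileged-policy
kubectl create clusterrolebinding prevent-role-binding \
  --clusterrole=prevent-role --serviceaccount=default:psp-sa
# then attempt a privileged pod as that ServiceAccount; it should be rejected
```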
Question # 13
a. Retrieve the content of the existing secret named default-token-xxxxx in the testing
namespace.
Store the value of the token in token.txt.
b. Create a new secret named test-db-secret in the db namespace with the following content:
username: mysql
password: password@123
Create a Pod named test-db-pod of image nginx in the namespace db that can access test-db-secret via a volume at path /etc/mysql-credentials.
Answer: See the explanation below:
Explanation:
To add a Kubernetes cluster to your project, group, or instance:
Navigate to your:
Click Add Kubernetes cluster.
Click the Add existing cluster tab and fill in the details:
Get the API URL by running this command:
kubectl cluster-info | grep -E 'Kubernetes master|Kubernetes control plane' | awk '/http/ {print $NF}'
kubectl get secret -o jsonpath="{['data']['ca\.crt']}"
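The explanation above wanders into cluster-registration steps that don't address the task; a hedged sketch that does (the default token's real name differs per cluster, so the xxxxx suffix stays a placeholder to look up with `kubectl get secrets -n testing`):

```shell
# a. Save the existing token, base64-decoded
kubectl get secret default-token-xxxxx -n testing \
  -o jsonpath='{.data.token}' | base64 -d > token.txt

# b. Create the secret, then a pod that mounts it as a volume
kubectl create secret generic test-db-secret -n db \
  --from-literal=username=mysql --from-literal=password=password@123

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-db-pod
  namespace: db
spec:
  containers:
  - name: test-db-pod
    image: nginx
    volumeMounts:
    - name: creds
      mountPath: /etc/mysql-credentials
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: test-db-secret
EOF
```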
Question # 14
Create a new NetworkPolicy named deny-all in the namespace testing which denies all ingress and egress traffic.
Answer: See the explanation below:
Explanation:
You can create a "default" isolation policy for a namespace by creating a NetworkPolicy
that selects all pods but does not allow any ingress traffic to those pods.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
Default deny all ingress and all egress traffic: you can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
This ensures that even pods that aren't selected by any other NetworkPolicy will not be
allowed ingress or egress traffic.
Question # 15
Create a PSP that will only allow persistentvolumeclaim as the volume type in the namespace restricted.
Create a new PodSecurityPolicy named prevent-volume-policy which prevents pods from mounting any volume type other than persistentvolumeclaim.
Create a new ServiceAccount named psp-sa in the namespace restricted.
Create a new ClusterRole named psp-role, which uses the newly created Pod Security
Policy prevent-volume-policy
Create a new ClusterRoleBinding named psp-role-binding, which binds the created
ClusterRole psp-role to the created SA psp-sa.
Hint:
Also, check the configuration is working by trying to mount a Secret in the pod manifest; it should fail.
POD Manifest:
apiVersion: v1
kind: Pod
metadata:
  name:
spec:
  containers:
  - name:
    image:
    volumeMounts:
    - name:
      mountPath:
  volumes:
  - name:
    secret:
      secretName:
Answer: See the Explanation below:
Explanation:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
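The sample policy above allows several core volume types, but this task allows only persistentvolumeclaim; a hedged sketch of the policy with the required name and the volumes list reduced to the single permitted entry:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-volume-policy
spec:
  privileged: false
  seLinux: { rule: RunAsAny }
  supplementalGroups: { rule: RunAsAny }
  runAsUser: { rule: RunAsAny }
  fsGroup: { rule: RunAsAny }
  volumes:
  - 'persistentVolumeClaim'   # the only volume type the task allows
```

With this in place, a pod that mounts a Secret volume should be rejected, satisfying the hint.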
Question # 16
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task: Create a new default-deny NetworkPolicy named deny-network in the namespace
test for all traffic of type Ingress + Egress
The new NetworkPolicy must deny all Ingress + Egress traffic in the namespace test.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace
test.
You can find a skeleton manifests file at /home/cert_masters/network-policy.yaml
Answer: See the explanation below
Explanation:
controlplane $ k get pods -n test --show-labels
NAME READY STATUS RESTARTS AGE LABELS
test-pod 1/1 Running 0 34s role=test,run=test-pod
testing 1/1 Running 0 17d run=testing
master1 $ vim netpol1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-network
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Question # 17
Using the runtime detection tool Falco, analyse the container behaviour for at least 20 seconds, using filters that detect newly spawning and executing processes in a single container of Nginx.
Store the incident file at /opt/falco-incident.txt, containing the detected incidents, one per line, in the format
[timestamp],[uid],[processName]
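No answer section follows this question in the dump; a hedged sketch mirroring the Falco approach from the earlier tracing question (a rule whose output format is `%evt.time,%user.uid,%proc.name`, plus `-M` to bound the run; flag availability varies by Falco version, so check `falco --help`):

```shell
# run falco for 25 seconds (>= the required 20) and capture the alerts
falco -U -M 25 > /tmp/falco.out 2>/dev/null
# keep only the csv-formatted payload of each alert line
awk '{print $NF}' /tmp/falco.out | grep ',' > /opt/falco-incident.txt
```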
Question # 18
Fix all issues via configuration and restart the affected components to ensure the new
setting takes effect.
Fix all of the following violations that were found against the API server:-
a. Ensure that the RotateKubeletServerCertificate argument is set to true.
b. Ensure that the admission control plugin PodSecurityPolicy is set.
c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.
Fix all of the following violations that were found against the Kubelet:-
a. Ensure the --anonymous-auth argument is set to false.
b. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against the ETCD:-
a. Ensure that the --auto-tls argument is not set to true
b. Ensure that the --peer-auto-tls argument is not set to true
Hint: Make use of the tool kube-bench.
Answer: See the Explanation below.
Explanation:
Fix all of the following violations that were found against the API server:-
a. Ensure that the RotateKubeletServerCertificate argument is set to true.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kubelet
    tier: control-plane
  name: kubelet
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
+   - --feature-gates=RotateKubeletServerCertificate=true
    image: gcr.io/google_containers/kubelet-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kubelet
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
b. Ensure that the admission control plugin PodSecurityPolicy is set.
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
  test_items:
  - flag: "--enable-admission-plugins"
    compare:
      op: has
      value: "PodSecurityPolicy"
    set: true
remediation: |
  Follow the documentation and create Pod Security Policy objects as per your environment.
  Then, edit the API server pod specification file $apiserverconf
  on the master node and set the --enable-admission-plugins parameter to a
  value that includes PodSecurityPolicy:
  --enable-admission-plugins=...,PodSecurityPolicy,...
  Then restart the API Server.
scored: true
c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
  test_items:
  - flag: "--kubelet-certificate-authority"
    set: true
remediation: |
  Follow the Kubernetes documentation and set up the TLS connection between the
  apiserver and kubelets. Then, edit the API server pod specification file
  $apiserverconf on the master node and set the --kubelet-certificate-authority
  parameter to the path to the cert file for the certificate authority.
  --kubelet-certificate-authority=
scored: true
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true.
Edit the etcd pod specification file $etcdconf on the master node and either remove the --auto-tls parameter or set it to false: --auto-tls=false
b. Ensure that the --peer-auto-tls argument is not set to true.
Edit the etcd pod specification file $etcdconf on the master node and either remove the --peer-auto-tls parameter or set it to false: --peer-auto-tls=false
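After the edit, the relevant part of the etcd static pod manifest would look roughly like the sketch below. The path is the common kubeadm default; it may differ on your distribution, and only the two flags shown here matter for these checks:

```yaml
# Excerpt of the etcd static pod manifest
# (typically /etc/kubernetes/manifests/etcd.yaml on a kubeadm cluster)
spec:
  containers:
  - command:
    - etcd
    - --auto-tls=false        # or remove the flag entirely
    - --peer-auto-tls=false   # or remove the flag entirely
    # (other flags unchanged)
```

Because etcd runs as a static pod, the kubelet restarts it automatically when the manifest file changes.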
Question # 19
Create a network policy named allow-np, that allows pod in the namespace staging to
connect to port 80 of other pods in the same namespace.
Ensure that Network Policy:-
1. Does not allow access to pod not listening on port 80.
2. Does not allow access from Pods, not in namespace staging
Answer: See the explanation below:
Explanation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-np
  namespace: staging
spec:
  podSelector: {}            # selects all pods in the staging namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}        # only pods in the same namespace (staging)
    ports:
    - protocol: TCP
      port: 80               # ingress traffic is allowed only on port 80
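A quick way to sanity-check the policy once it is applied (the file name is an illustrative choice):

```shell
kubectl apply -f allow-np.yaml
kubectl describe networkpolicy allow-np -n staging

# From a pod in staging, a connection to port 80 of another staging pod
# should succeed; the same connection attempted from a pod in a different
# namespace, or to a port other than 80, should time out.
```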
Question # 20
Create a new ServiceAccount named backend-sa in the existing namespace default, which
has the capability to list the pods inside the namespace default.
Create a new Pod named backend-pod in the namespace default, mount the newly created
ServiceAccount backend-sa to the pod, and verify that the pod is able to list pods.
Ensure that the Pod is running.
Answer: See the Explanation below:
Explanation:
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are authenticated
by the apiserver as a particular User Account (currently this is usually admin, unless your
cluster administrator has customized your cluster). Processes in containers inside pods can
also contact the apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
When you create a pod, if you do not specify a service account, it is automatically assigned
the default service account in the same namespace. If you get the raw json or yaml for a
pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see
the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service
account depend on the authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by
setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
The pod spec takes precedence over the service account if both specify
a automountServiceAccountToken value.
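The explanation above covers ServiceAccount mechanics but not the task itself. A minimal sketch of one possible solution, assuming RBAC is enabled; the Role and RoleBinding names and the nginx image are illustrative choices, not mandated by the question:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-lister          # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-lister-binding  # illustrative name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-lister
subjects:
- kind: ServiceAccount
  name: backend-sa
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend
    image: nginx            # illustrative image
```

The grant can then be verified without exec-ing into the pod: `kubectl auth can-i list pods --as=system:serviceaccount:default:backend-sa -n default` should print `yes`.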
Question # 21
Create a Pod named Nginx-pod inside the namespace testing, and create a service for the
Nginx-pod named nginx-svc. Using the ingress controller of your choice, run the ingress
over TLS on the secure port.
Answer: See explanation below.
Explanation:
$ kubectl get ing -n
NAME           HOSTS      ADDRESS     PORTS   AGE
cafe-ingress   cafe.com   10.0.2.15   80      25s

$ kubectl describe ing -n
Name:             cafe-ingress
Namespace:        default
Address:          10.0.2.15
Default backend:  default-http-backend:80 (172.17.0.5:8080)
Rules:
  Host       Path     Backends
  ----       ----     --------
  cafe.com
             /tea     tea-svc:80 ()
             /coffee  coffee-svc:80 ()
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:
    {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default","selfLink":"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress"},"spec":{"rules":[{"host":"cafe.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}]},"status":{"loadBalancer":{"ingress":[{"ip":"169.48.142.110"}]}}}
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  1m   ingress-nginx-controller  Ingress default/cafe-ingress
  Normal  UPDATE  58s  ingress-nginx-controller  Ingress default/cafe-ingress

$ kubectl get pods -n
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-67956bf89d-fv58j   1/1     Running   0          1m

$ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.14.0
  Build:      git-734361d
  Repository: https://github.com/kubernetes/ingress-nginx
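The output above shows a plain HTTP ingress, but the question asks for TLS on the secure port. A sketch of what the TLS portion could look like; the secret name, host, and certificate file names are illustrative assumptions, and the certificate must exist before the secret is created:

```yaml
# First create the TLS secret from an existing certificate/key pair, e.g.:
#   kubectl create secret tls nginx-tls -n testing --cert=tls.crt --key=tls.key

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress       # illustrative name
  namespace: testing
spec:
  tls:
  - hosts:
    - example.com           # illustrative host
    secretName: nginx-tls   # must match the secret created above
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc # the service created for Nginx-pod
            port:
              number: 80
```

With this in place, the ingress controller terminates TLS on port 443 and forwards the traffic to nginx-svc.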