Question 1

Set configuration context $ kubectl config use-context k8s 

Monitor the logs of Pod foobar and

  1. Extract log lines corresponding to error file-not-found
  2. Write them to /opt/KULM00201/foobar

Question weight 5%

# kubectl logs  foobar |grep file-not-found >> /opt/KULM00201/foobar
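An optional sanity check that the file only holds the matching lines (not part of the graded task):

# grep -c file-not-found /opt/KULM00201/foobar
# cat /opt/KULM00201/foobar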

Question 2

Set configuration context $ kubectl config use-context k8s

List all PVs sorted by name, saving the full kubectl output to /opt/KUCC0010/my_volumes. Use kubectl’s own functionality for sorting the output, and do not manipulate it any further.

Question weight 3%

# kubectl get pv -A --sort-by=.metadata.name >/opt/KUCC0010/my_volumes
# cat /opt/KUCC0010/my_volumes
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nginx-1 5Gi RWX Recycle Available 5m1s
nginx-2 1Gi RWX Recycle Available 4m44s
nginx-3 10Gi RWX Recycle Available 4m44s
tomcat-1 5Gi RWX Recycle Available 3m42s
tomcat-2 1Gi RWX Recycle Available 3m42s
tomcat-3 10Gi RWX Recycle Available 3m42s

Sorting the PVs by capacity:

# kubectl get pv -A --sort-by=.spec.capacity.storage
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nginx-2 1Gi RWX Recycle Available 3m55s
tomcat-2 1Gi RWX Recycle Available 2m53s
nginx-1 5Gi RWX Recycle Available 4m12s
tomcat-1 5Gi RWX Recycle Available 2m53s
nginx-3 10Gi RWX Recycle Available 3m55s
tomcat-3 10Gi RWX Recycle Available 2m53s

Question 3

Set configuration context $ kubectl config use-context k8s

Ensure a single instance of Pod nginx is running on each node of the Kubernetes cluster, where nginx also represents the image name which has to be used. Do not override any taints currently in place.

Use a DaemonSet to complete this task, and use ds.kusc00201 as the DaemonSet name. Question weight 3%

https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

# kubectl create deployment ds.kusc00201 --image=nginx --dry-run -oyaml >ds.kusc00201.yaml
# vim ds.kusc00201.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: ds.kusc00201
  name: ds.kusc00201
  namespace: default
spec:
  selector:
    matchLabels:
      app: ds.kusc00201
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds.kusc00201
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
# kubectl apply -f ds.kusc00201.yaml
daemonset.apps/ds.kusc00201 created
# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds.kusc00201 2 2 2 2 2 <none> 2m33s
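To double-check that the DaemonSet placed exactly one Pod on each node (an optional verification; node names depend on the lab cluster):

# kubectl get po -l app=ds.kusc00201 -o wide
# kubectl get nodes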

Question 4

Set configuration context $ kubectl config use-context k8s 

Perform the following tasks

  1. Add an init container to the Pod lumpy--koala (spec file: /opt/kucc00100/pod-spec-KUCC00100.yaml)
  2. The init container should create an empty file named /workdir/calm.txt
  3. If /workdir/calm.txt is not detected, the Pod should exit
  4. Once the spec file has been updated with the init container definition, the Pod should be created.

Question weight 7%

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

# kubectl run  lumpy--koala --image=nginx  -oyaml --dry-run >/opt/kucc00100/pod-spec-KUCC00100.yaml
# vim /opt/kucc00100/pod-spec-KUCC00100.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lumpy--koala
  name: lumpy--koala
spec:
  containers:
  - image: busybox:1.28
    name: lumpy--koala
    command: ['sh', '-c', 'if [ ! -f /workdir/calm.txt ]; then exit 1; else sleep 300; fi']
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'touch /workdir/calm.txt']
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  volumes:
  - name: workdir
    emptyDir: {}
# kubectl apply -f /opt/kucc00100/pod-spec-KUCC00100.yaml
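An optional verification that the init container created the file and the main container therefore kept running (the main container only sleeps for 300 seconds):

# kubectl get po lumpy--koala
# kubectl exec lumpy--koala -- ls /workdir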

Question 5

Set configuration context $ kubectl config use-context k8s

Create a pod named kucc4 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul

Question weight: 4%

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

# kubectl run kucc4 --image=nginx  -oyaml --dry-run > kucc4.yaml
# vim kucc4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
# kubectl apply -f kucc4.yaml
pod/kucc4 created

Question 6

Set configuration context $ kubectl config use-context k8s

Schedule a Pod as follows:
Name: nginx-kusc00101
Image: nginx
Node selector: disk=ssd
Question weight: 2%

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

# kubectl run nginx-kusc00101 --image=nginx -oyaml --dry-run >nginx-kusc00101.yaml
# vim nginx-kusc00101.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00101
  name: nginx-kusc00101
spec:
  containers:
  - image: nginx
    name: nginx-kusc00101
  nodeSelector:
    disk: ssd
# kubectl apply -f nginx-kusc00101.yaml
pod/nginx-kusc00101 created
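If the Pod stays Pending, no node carries the disk=ssd label yet; in a practice cluster you can add it yourself (node01 is a placeholder node name) and then confirm scheduling:

# kubectl label node node01 disk=ssd
# kubectl get po nginx-kusc00101 -o wide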

Question 7

Set configuration context $ kubectl config use-context k8s

Create a deployment as follows
Name: nginx-app
Using container nginx with version 1.10.2-alpine
The deployment should contain 3 replicas
Next, deploy the app with new version 1.13.0-alpine by performing a rolling update and record that update.

Finally, rollback that update to the previous version 1.10.2-alpine

Question weight: 4%

https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources

Create the 1.10.2 version:
# kubectl create deploy nginx-app --image=nginx:1.10.2-alpine -oyaml --dry-run >nginx-app.yaml
# kubectl apply -f nginx-app.yaml
# kubectl scale deploy nginx-app --replicas=3
deployment.apps/nginx-app scaled

Update the image to 1.13.0:
# kubectl set image deployment nginx-app nginx=nginx:1.13.0-alpine --record
deployment.apps/nginx-app image updated

View the rollout history:
# kubectl rollout history deploy nginx-app
deployment.apps/nginx-app
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deploy nginx-app nginx=nginx:1.13.0-alpine --record=true

Roll back to the previous revision:
# kubectl rollout undo deploy nginx-app
deployment.apps/nginx-app rolled back
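To confirm the rollback landed back on 1.10.2-alpine (an optional check):

# kubectl rollout status deploy nginx-app
# kubectl describe deploy nginx-app | grep Image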

Question 8

Set configuration context $ kubectl config use-context k8s

Create and configure the service front-end-service so it’s accessible through NodePort and routes to the existing pod named front-end

Question weight: 4%

# kubectl expose pod front-end --type=NodePort --name=front-end-service --port=80
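A quick way to confirm the Service actually targets the existing pod is to check that it has an endpoint (assuming front-end listens on port 80):

# kubectl get svc front-end-service
# kubectl get ep front-end-service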

Question 9

Set configuration context $ kubectl config use-context k8s

Create a Pod as follows:
Name: jenkins
Using image: jenkins
In a new Kubernetes namespace named website-frontend
Question weight 3%

# kubectl create ns website-frontend
namespace/website-frontend created
# kubectl run jenkins --image=jenkins -nwebsite-frontend
pod/jenkins created
# kubectl get po -nwebsite-frontend
NAME READY STATUS RESTARTS AGE
jenkins 0/1 ContainerCreating 0 9s

Question 10

Set configuration context $ kubectl config use-context k8s

Create a deployment spec file that will:
Launch 7 replicas of the redis image with the label: app_env_stage=dev
Deployment name: kual00201
Save a copy of this spec file to /opt/KUAL00201/deploy_spec.yaml (or .json)

When you are done, clean up (delete) any new k8s API objects that you produced during this task

Question weight: 3%

# mkdir /opt/KUAL00201
# kubectl run kual00201 --image=redis --labels='app_env_stage=dev' --dry-run -o yaml >/opt/KUAL00201/deploy_spec.yaml
# vim /opt/KUAL00201/deploy_spec.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app_env_stage: dev
  name: kual00201
spec:
  replicas: 7
  selector:
    matchLabels:
      app_env_stage: dev
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app_env_stage: dev
    spec:
      containers:
      - image: redis
        name: kual00201
        resources: {}
status: {}
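The task also requires creating and then cleaning up the objects; a minimal sequence using the saved spec file:

# kubectl apply -f /opt/KUAL00201/deploy_spec.yaml
# kubectl get deploy kual00201
# kubectl delete -f /opt/KUAL00201/deploy_spec.yaml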

Question 11

Set configuration context $ kubectl config use-context k8s

Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement Service foo in Namespace production.

The format of the file should be one pod name per line.

Question weight: 3%

Since this Service does not exist locally, kube-dns is used instead.

# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 9d
# kubectl describe svc kube-dns -nkube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.5:53,10.244.2.30:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.5:53,10.244.2.30:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.244.0.5:9153,10.244.2.30:9153
Session Affinity: None
Events: <none>
# kubectl get po -nkube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-pwpgq 1/1 Running 2 9d
coredns-7ff77c879f-sbg6j 1/1 Running 3 9d
# mkdir /opt/KUCC00302/ -p
# kubectl get po -nkube-system -l k8s-app=kube-dns|grep -v 'NAME'|awk '{print $1}' >/opt/KUCC00302/kucc00302.txt
# cat /opt/KUCC00302/kucc00302.txt
coredns-7ff77c879f-pwpgq
coredns-7ff77c879f-sbg6j
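An alternative that relies only on kubectl’s own output formatting instead of grep/awk (same label selector, shown as a sketch):

# kubectl get po -n kube-system -l k8s-app=kube-dns -o custom-columns=NAME:.metadata.name --no-headers >/opt/KUCC00302/kucc00302.txt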

Question 12

Set configuration context $ kubectl config use-context k8s

Create a Kubernetes Secret as follows:
Name: super-secret
Credential: alice or username:bob
Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets

Create a second Pod named pod-secrets-via-env using the redis image, which exports credential as TOPSECRET

Question weight: 9%

https://kubernetes.io/docs/concepts/configuration/secret/

Mounting the Secret as files:

# kubectl create secret generic super-secret --from-literal=Credential=alice --from-literal=username=bob
secret/super-secret created
# kubectl run pod-secrets-via-file --image=redis -oyaml --dry-run >pod-secrets-via-file.yaml
# vim pod-secrets-via-file.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-secrets-via-file
  name: pod-secrets-via-file
spec:
  containers:
  - image: redis
    name: pod-secrets-via-file
    volumeMounts:
    - name: super-secret
      mountPath: "/secrets"
  volumes:
  - name: super-secret
    secret:
      secretName: super-secret

# kubectl apply -f pod-secrets-via-file.yaml
pod/pod-secrets-via-file created

# kubectl exec -it pod-secrets-via-file bash
root@pod-secrets-via-file:/data# cd /secrets/
root@pod-secrets-via-file:/secrets# ls
Credential username
root@pod-secrets-via-file:/secrets# cat Credential
alice
root@pod-secrets-via-file:/secrets# cat username
bob

Exposing the Secret via an environment variable:

# kubectl run pod-secrets-via-env --image=redis -oyaml --dry-run >pod-secrets-via-env.yaml
W0807 17:52:23.764346 104935 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
# vim pod-secrets-via-env.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod-secrets-via-env
  name: pod-secrets-via-env
spec:
  containers:
  - image: redis
    name: pod-secrets-via-env
    env:
    - name: TOPSECRET
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: Credential
# kubectl apply -f pod-secrets-via-env.yaml
pod/pod-secrets-via-env created

# kubectl exec -it pod-secrets-via-env bash
root@pod-secrets-via-env:/data# echo $TOPSECRET
alice
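An optional check on the Secret itself; stored values come back base64-encoded unless you decode them:

# kubectl get secret super-secret -o yaml
# kubectl get secret super-secret -o jsonpath='{.data.username}' | base64 -d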

Question 13

Set configuration context $ kubectl config use-context k8s

Create a pod as follows:
Name: non-persistent-redis
Container image: redis
Named-volume with name: cache-control
Mount path: /data/redis
It should launch in the pre-prod namespace and the volume MUST NOT be persistent.

Question weight: 4%

https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

# kubectl create ns pre-prod
namespace/pre-prod created
# kubectl run non-persistent-redis --image=redis -oyaml --dry-run > redis.yaml
# vim redis.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: non-persistent-redis
  name: non-persistent-redis
spec:
  containers:
  - image: redis
    name: non-persistent-redis
    volumeMounts:
    - mountPath: /data/redis
      name: cache-control
  volumes:
  - name: cache-control
    emptyDir: {}
# kubectl apply -f redis.yaml -n pre-prod
pod/non-persistent-redis created
# kubectl get po -npre-prod
NAME READY STATUS RESTARTS AGE
non-persistent-redis 1/1 Running 0 40s

Question 14

Set configuration context $ kubectl config use-context k8s

Scale the deployment webserver to 6 pods

Question weight: 1%

# kubectl scale deploy/webserver --replicas=6

Question 15

Set configuration context $ kubectl config use-context k8s

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum

Question weight: 2%

# for i in `kubectl get nodes|grep Ready|grep -v 'NAME'|awk '{print $1}'` 
do
kubectl describe node $i |grep Taints|grep -v 'NoSchedule'
done
Taints: <none>
Taints: <none>
# for i in `kubectl get nodes|grep Ready|grep -v 'NAME'|awk '{print $1}'` ;do kubectl describe node $i |grep Taints|grep -v 'NoSchedule';done|wc -l >/opt/nodenum
# cat /opt/nodenum
2
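The intermediate numbers can be double-checked by hand (a sketch; output depends on the cluster):

# kubectl get nodes --no-headers | grep -cw Ready      # all Ready nodes, including tainted ones
# kubectl describe nodes | grep Taints                 # inspect which of them carry a NoSchedule taint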

Question 16

Set configuration context $ kubectl config use-context k8s

From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/cpu.txt (which already exists)

Question weight: 2%

# for i in `kubectl get po -l k8s-app=kube-dns -nkube-system|grep -v 'NAME'|awk '{print $1}'` ;do kubectl top po $i -nkube-system ;done |sort -k 2r
NAME CPU(cores) MEMORY(bytes)
NAME CPU(cores) MEMORY(bytes)
coredns-7ff77c879f-sbg6j 3m 16Mi
coredns-7ff77c879f-pwpgq 3m 12Mi

# echo 'coredns-7ff77c879f-sbg6j' >/opt/cpu.txt
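On newer kubectl releases the loop can be avoided because kubectl top can sort its own output; with the label from the real task this would look roughly like the following (requires metrics-server):

# kubectl top pod -A -l name=cpu-utilizer --sort-by=cpu

The first data row is then the Pod consuming the most CPU; write its name to /opt/cpu.txt.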

Question 17

Set configuration context $ kubectl config use-context k8s

Create a deployment as follows

Name: nginx-dns
Exposed via a service: nginx-dns
Ensure that the service & pod are accessible via their respective DNS records
The container(s) within any Pod(s) running as a part of this deployment should use the nginx image
Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively.

Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup.

Question weight: 7%

Note: nslookup is already provided in the exam environment.

# kubectl create deploy nginx-ds --image=nginx --image=busybox:1.28 --dry-run -oyaml > nginx-ds.yaml
# vim nginx-ds.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-ds
  name: nginx-ds
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ds
  strategy: {}
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - image: nginx
        name: nginx
      - image: busybox:1.28
        name: busybox
        command: ['sh', '-c', 'sleep 3600']

# kubectl apply -f nginx-ds.yaml
deployment.apps/nginx-ds created
# kubectl expose deploy/nginx-ds --port=80

# kubectl exec -it deploy/nginx-ds -c busybox -- nslookup nginx-ds >/opt/service.dns
# cat /opt/service.dns
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: nginx-ds
Address 1: 10.98.149.137 nginx-ds.default.svc.cluster.local


# kubectl exec -it deploy/nginx-ds -c busybox -- nslookup 10.244.1.101 >/opt/pod.dns
# cat /opt/pod.dns
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: 10.244.1.101
Address 1: 10.244.1.101 nginx-ds-576c7d4d77-m47rt

Question 18

No configuration context change required for this item

Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db

The etcd instance is running etcd version 3.1.10

The following TLS certificates/key are supplied for connecting to the server with etcdctl

CA certificate: /opt/KUCM00302/ca.crt
Client certificate: /opt/KUCM00302/etcd-client.crt
Client key: /opt/KUCM00302/etcd-client.key

Question weight: 7%

https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/#%E5%A4%87%E4%BB%BD-etcd-%E9%9B%86%E7%BE%A4

export ETCDCTL_API=3
etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUCM00302/ca.crt \
  --cert=/opt/KUCM00302/etcd-client.crt \
  --key=/opt/KUCM00302/etcd-client.key \
  snapshot save /data/backup/etcd-snapshot.db
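Optionally confirm the snapshot file is valid (ETCDCTL_API=3 is still exported from the step above):

etcdctl snapshot status /data/backup/etcd-snapshot.db -w table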

Question 19

Set configuration context $ kubectl config use-context ek8s

Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.

Question weight: 4%

# kubectl label node node02 name=ek8s-node-1
node/node02 labeled
# kubectl get nodes -l name=ek8s-node-1
NAME STATUS ROLES AGE VERSION
node02 Ready node 14d v1.18.3
# kubectl drain node02 --delete-local-data --force --ignore-daemonsets
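After the drain, the node should show SchedulingDisabled and only DaemonSet pods should remain on it (an optional check; node02 is the local stand-in used above):

# kubectl get nodes
# kubectl get po -A -o wide | grep node02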

Question 20

Set configuration context $ kubectl config use-context wk8s

A Kubernetes worker node, labelled with name=wk8s-node-0 is in state NotReady . Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

Hints:

You can ssh to the failed node using $ ssh wk8s-node-0
You can assume elevated privileges on the node with the following command $ sudo -i

Question weight: 4%

$ ssh wk8s-node-0
$ sudo -i
# systemctl start kubelet ;systemctl enable kubelet
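If starting kubelet is not enough to bring the node back, the usual next steps on the node are the following (a debugging sketch, not captured exam output):

# systemctl status kubelet
# journalctl -u kubelet --no-pager | tail -n 50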

Question 21

Set configuration context $ kubectl config use-context wk8s

Configure the kubelet systemd managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.

Hints:

  1. You can ssh to the failed node using $ ssh wk8s-node-1
  2. You can assume elevated privileges on the node with the following command $ sudo -i 

Question weight: 4%

$ ssh wk8s-node-1
$ sudo -i
# vim /etc/kubernetes/manifests/myservice.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: myservice
  name: myservice
spec:
  containers:
  - image: nginx
    name: myservice
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# vim /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
# systemctl restart kubelet
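Static Pods appear in the API as <name>-<nodename>, so from the control plane you can confirm the manifest was picked up (the node name depends on the lab environment):

# kubectl get po -A -o wide | grep myservice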

Question 22 (not encountered)

Set configuration context $ kubectl config use-context ik8s

In this task, you will configure a new Node, ik8s-node-0, to join a Kubernetes cluster as follows:

Configure kubelet for automatic certificate rotation and ensure that both server and client CSRs are automatically approved and signed as appropriate via the use of RBAC.
Ensure that the appropriate cluster-info ConfigMap is created and configured appropriately in the correct namespace so that future Nodes can easily join the cluster
Your bootstrap kubeconfig should be created on the new Node at /etc/kubernetes/bootstrap-kubelet.conf (do not remove this file once your Node has successfully joined the cluster)
The appropriate cluster-wide CA certificate is located on the Node at /etc/kubernetes/pki/ca.crt . You should ensure that any automatically issued certificates are installed to the node at /var/lib/kubelet/pki and that the kubeconfig file for kubelet will be rendered at /etc/kubernetes/kubelet.conf upon successful bootstrapping
Use an additional group for bootstrapping Nodes attempting to join the cluster which should be called system:bootstrappers:cka:default-node-token
Solution should start automatically on boot, with the systemd service unit file for kubelet available at /etc/systemd/system/kubelet.service
To test your solution, create the appropriate resources from the spec file located at /opt/…./kube-flannel.yaml . This will create the necessary supporting resources as well as the kube-flannel-ds DaemonSet. You should ensure that this DaemonSet is correctly deployed to the single node in the cluster.

Hints:

kubelet is not configured or running on ik8s-master-0 for this task, and you should not attempt to configure it.
You will make use of TLS bootstrapping to complete this task.
You can obtain the IP address of the Kubernetes API server via the following command $ ssh ik8s-node-0 getent hosts ik8s-master-0
The API server is listening on the usual port, 6443/tcp, and will only serve TLS requests
The kubelet binary is already installed on ik8s-node-0 at /usr/bin/kubelet . You will not need to deploy kube-proxy to the cluster during this task.
You can ssh to the new worker node using $ ssh ik8s-node-0
You can ssh to the master node with the following command $ ssh ik8s-master-0
No further configuration of control plane services running on ik8s-master-0 is required
You can assume elevated privileges on both nodes with the following command $ sudo -i
Docker is already installed and running on ik8s-node-0

Question weight: 8%



Question 23

Set configuration context $ kubectl config use-context bk8s

Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node and the failing service, take actions to bring up the failed service, and restore the health of the cluster. Ensure that any changes are made permanent.

The worker node in this cluster is labelled with name=bk8s-node-0

Hints:

You can ssh to the relevant nodes using $ ssh $(NODE) where $(NODE) is one of bk8s-master-0 or bk8s-node-0
You can assume elevated privileges on any node in the cluster with the following command$ sudo -i

Question weight: 4%

This is again a static Pod manifest problem.
# cd /etc/kubernetes/manifests/
# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
The static manifests are still present, so check whether the staticPodPath in the kubelet config is correct.
# vim /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/DODKSIYF => /etc/kubernetes/manifests
# systemctl restart kubelet
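Once kubelet restarts with a valid staticPodPath, the control-plane static Pods should come back up; verify from the master (an optional check):

# kubectl get nodes
# kubectl get po -n kube-system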

Question 24

Set configuration context $ kubectl config use-context hk8s

Create a persistent volume with name app-config of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config

Question weight: 3%

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume

# vim app-config.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  storageClassName: app-config
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"


# kubectl apply -f app-config.yaml
persistentvolume/app-config created
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
app-config 1Gi RWO Retain Available app-config 4s

Question 25

SSH to the master node and create a cluster.

The cluster only needs to reach Ready; during initialization, pass --ignore-preflight-errors=xxx
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://v1-17.docs.kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

$ sudo apt-get install -y kubelet kubeadm kubectl
# kubeadm init --ignore-preflight-errors=xxx
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
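After kubeadm init succeeds, it prints the standard kubeconfig setup; following it and applying the CNI should leave the node Ready (reproduced here as a sketch of that printed output):

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes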