https://training.linuxfoundation.cn/certificate/details/1
Notes:
- Certificate name: Zhen Su
- The exam requires an identity check: a passport, or an ID card + VISA credit card, or an ID card + international driver's license (both cards must be signed).
- Book the exam 3 days in advance. If the booking succeeds but you do not show up on time, you lose both the exam and the retake eligibility. (Next available exam: 2088.)
- Log in to the exam page 20 minutes early and wait for the proctor in Google Chrome; this step may require closing the application and reopening it.
CKA Clusters
Cluster | Members | Practice-environment nodes |
---|---|---|
- | console | physical host |
k8s | 1 master, 2 workers | k8s-master, k8s-worker1, k8s-worker2 |
ek8s | 1 master, 2 workers | ek8s-master, ek8s-worker1, ek8s-worker2 |
The exam terminal provides:
- kubectl with the alias k and Bash auto-completion
- jq for YAML/JSON processing
- tmux for terminal multiplexing
- curl and wget for testing web services
- man and man pages for further documentation

During the exam, candidates may:
- review the exam content instructions that are presented in the command-line terminal
- review documents installed by the distribution (i.e. /usr/share and its subdirectories)
- use their Chrome and Chromium browser to open one additional tab in order to access assets: https://kubernetes.io/docs/, https://github.com/kubernetes/, https://kubernetes.io/blog/ and their subdomains. This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)
Download the bookmarks file and import it into the browser:
https://gitee.com/suzhen99/k8s/blob/master/Bookmarks/CKA-Bookmark.html
https://www.examslocal.com/linuxfoundation
No other tabs may be opened and no other sites may be navigated to (including https://discuss.kubernetes.io/).
The allowed sites above may contain links to external sites. It is the candidate's responsibility not to click on any links that cause them to navigate to a domain that is not allowed.
Root privileges can be obtained by running sudo -i.
Rebooting your server is permitted at any time.
Do not stop or tamper with the certerminal process, as this will end your exam.
Do not block incoming ports 8080/tcp, 4505/tcp and 4506/tcp. This includes firewall rules found within the distribution's default firewall configuration files as well as interactive firewall commands.
Use Ctrl+Alt+W instead of Ctrl+W.
5.1 Ctrl+W is a keyboard shortcut that closes the current tab in Google Chrome.
Your exam terminal does not support Ctrl+C and Ctrl+V. To copy and paste text, use:
6.1 Linux: select text to copy and middle-click to paste (if there is no middle button, press the left and right buttons at the same time).
6.2 Mac: ⌘+C to copy, ⌘+V to paste.
6.3 Windows: Ctrl+Insert to copy, Shift+Insert to paste.
6.4 In addition, you may find it helpful to manipulate text in the notepad (see the top menu under "Exam Controls") before pasting it into the command line.
Installation of the services and applications included in this exam may require modifying system security policies to complete successfully.
Only a single terminal console is available during the exam. Terminal multiplexers such as GNU Screen and tmux can be used to create virtual consoles.
The following are the expectations for an acceptable testing location:
For more information on exam policies, procedures and rules, please refer to the [Candidate Handbook].
If you need further assistance, log in to https://trainingsupport.linuxfoundation.org with your LF account and use the search bar to find answers to your question, or select your request type from the categories provided.
Configure shell completion
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
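In the real exam the k alias is already configured; on a fresh practice VM you may need to add it yourself. A minimal sketch, assuming Bash and the completion setup above:
$ echo 'alias k=kubectl' >> ~/.bashrc
$ echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc   # make completion work for the alias too
$ source ~/.bashrc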
Always check which cluster you are operating on.
Always check which node you are operating on.
Always check which namespace (ns) you are operating on.
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Context:
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Task:
- Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
  - Deployment
  - StatefulSet
  - DaemonSet
- Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
- Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
Answer:
*$ kubectl config use-context ck8s
$ kubectl create clusterrole --help
*$ kubectl create clusterrole deployment-clusterrole \
  --verb=create \
  --resource=Deployment,StatefulSet,DaemonSet
*$ kubectl --namespace app-team1 \
  create serviceaccount cicd-token
$ kubectl create rolebinding --help
*$ kubectl create rolebinding cicd-token-deployment-clusterrole \
  --clusterrole=deployment-clusterrole \
  --serviceaccount=app-team1:cicd-token \
  --namespace=app-team1
$ kubectl describe clusterrole deployment-clusterrole
Name: deployment-clusterrole
Labels:
Annotations:
PolicyRule:
  Resources            Non-Resource URLs  Resource Names  Verbs
  ---------            -----------------  --------------  -----
  `daemonsets.apps`    []                 []              [`create`]
  `deployments.apps`   []                 []              [`create`]
  `statefulsets.apps`  []                 []              [`create`]
$ kubectl -n app-team1 get serviceaccounts
NAME SECRETS AGE
`cicd-token` 1 16m
default 1 18m
$ kubectl -n app-team1 get rolebindings
NAME ROLE AGE
cicd-token-deployment-clusterrole ClusterRole/deployment-clusterrole 11m
$ kubectl -n app-team1 describe rolebindings cicd-token-deployment-clusterrole
Name: cicd-token-deployment-clusterrole
Labels:
Annotations:
Role:
  Kind:  `ClusterRole`
  Name:  `deployment-clusterrole`
Subjects:
  Kind            Name          Namespace
  ----            ----          ---------
  ServiceAccount  `cicd-token`  `app-team1`
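Optionally (not required by the task), kubectl auth can-i confirms the binding behaves as intended for the ServiceAccount:
$ kubectl auth can-i create deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect: yes
$ kubectl auth can-i delete deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect: no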
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
Set the node named
k8s-worker1
as unavailable and reschedule all the pods running on it
Answer:
*$ kubectl config use-context ck8s
$ kubectl get nodes
k8s-master Ready control-plane 9d v1.24.1
k8s-worker1 Ready 9d v1.24.1
k8s-worker2 Ready 9d v1.24.1
$ kubectl drain k8s-worker1
node/k8s-worker1 cordoned
error: unable to drain node "k8s-worker1" due to error:[cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56, cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt], continuing command...
There are pending nodes to be drained:
 k8s-worker1
cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56
cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt
*$ kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 84m v1.24.1
k8s-worker1 Ready,`SchedulingDisabled` 79m v1.24.1
k8s-worker2 Ready 76m v1.24.1
$ kubectl get pod -A -owide | grep worker1
kube-system `calico-node-j6r9s` 1/1 Running 1 (9d ago) 9d 192.168.147.129 k8s-worker1
kube-system `kube-proxy-psz2g` 1/1 Running 1 (9d ago) 9d 192.168.147.129 k8s-worker1
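An optional extra check: list whatever is still scheduled on the drained node; only DaemonSet-managed pods (calico, kube-proxy) should remain:
$ kubectl get pods -A -o wide --field-selector spec.nodeName=k8s-worker1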
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.

Task:
- Given an existing Kubernetes cluster running version 1.24.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.24.2.
- You are also expected to upgrade kubelet and kubectl on the master node.
Answer:
*$ kubectl config use-context ck8s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
`k8s-master` Ready control-plane,master 97m `v1.24.1`
k8s-worker1 Ready,SchedulingDisabled 92m v1.24.1
k8s-worker2 Ready 89m v1.24.1
*$ ssh root@k8s-master
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.24.2-00 && \
apt-mark hold kubeadm
kubeadm version
kubeadm upgrade plan
kubeadm upgrade apply v1.24.2 --etcd-upgrade=false
kubectl drain k8s-master --ignore-daemonsets
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.24.2-00 kubectl=1.24.2-00 && \
apt-mark hold kubelet kubectl
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8s-master
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 157m `v1.24.2`
k8s-worker1 Ready,SchedulingDisabled 152m v1.24.1
k8s-worker2 Ready 149m v1.24.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2",....
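An optional extra check that only the master's kubelet moved to 1.24.2 (reads the version straight from the node object):
$ kubectl get node k8s-master -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'
v1.24.2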
Task weight: 7%
No configuration context change is required for this item. Creating a snapshot of the given instance is expected to complete in seconds. If the operation seems to hang, something is likely wrong with your command. Use CTRL+C to cancel the operation and try again.

Task:
- First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/backup/etcd-snapshot.db.
  The following TLS certificates/key are supplied for connecting to the server with etcdctl:
- Next, restore an existing, previous snapshot located at /srv/data/etcd-snapshot-previous.db.
Answer:
$ ETCDCTL_API=3 etcdctl snapshot save --help
*$ ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save /srv/backup/etcd-snapshot.db
*$ sudo mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bk
*$ kubectl get pod -A
The connection to the server 192.168.147.128:6443 was refused - did you specify the right host or port?
*$ sudo mv /var/lib/etcd /var/lib/etcd.bk
*$ sudo chown $USER /srv/data/etcd-snapshot-previous.db
*$ sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  --data-dir /var/lib/etcd \
  snapshot restore /srv/data/etcd-snapshot-previous.db
*$ sudo mv /etc/kubernetes/manifests.bk /etc/kubernetes/manifests
$ ETCDCTL_API=3 etcdctl snapshot status /srv/backup/etcd-snapshot.db
89703627, 14521, 1929, 4.3 MB
$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
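To avoid retyping the TLS flags, etcdctl also reads them from environment variables; a sketch assuming the same certificate paths as above:
$ export ETCDCTL_API=3
$ export ETCDCTL_CACERT=/opt/KUIN00601/ca.crt
$ export ETCDCTL_CERT=/opt/KUIN00601/etcd-client.crt
$ export ETCDCTL_KEY=/opt/KUIN00601/etcd-client.key
$ etcdctl --endpoints=https://127.0.0.1:2379 endpoint health   # quick sanity check before/after the restore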
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 8080 of other Pods in the same namespace.
Ensure that the new NetworkPolicy:
- does not allow access to pods not listening on port 8080
- does not allow access from pods not in namespace internal
Hint
Answer:
*$ kubectl config use-context ck8s
$ kubectl get namespaces internal
NAME STATUS AGE
internal Active 41m
*$ kubectl get namespaces internal --show-labels
NAME STATUS AGE LABELS
internal Active 111s `kubernetes.io/metadata.name=internal`
*$ echo set number et ts=2 cuc > ~/.vimrc
$ vim 5.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
# name: test-network-policy
  name: allow-port-from-namespace
# namespace: default
  namespace: internal
spec:
# podSelector:
  podSelector: {}
#   matchLabels:
#     role: db
  policyTypes:
  - Ingress
# - Egress
  ingress:
  - from:
#   - ipBlock:
#       cidr: 172.17.0.0/16
#       except:
#       - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
#   - podSelector:
#       matchLabels:
#         role: frontend
    ports:
    - protocol: TCP
#     port: 6379
      port: 8080
# egress:
# - to:
#   - ipBlock:
#       cidr: 10.0.0.0/24
#   ports:
#   - protocol: TCP
#     port: 5978
*$ kubectl apply -f 5.yml
$ kubectl -n internal describe networkpolicies allow-port-from-namespace
Name: allow-port-from-namespace
Namespace: internal
Created on: YYYY-mm-dd 21:39:09 +0800 CST
Labels:
Annotations:
Spec:
  PodSelector:     (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      NamespaceSelector: kubernetes.io/metadata.name=internal
  Not affecting egress traffic
  Policy Types: Ingress
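An optional functional test from inside the namespace, assuming some pod in internal is listening on 8080 (the target pod IP below is a placeholder):
$ kubectl -n internal get pod -o wide                 # note a target pod IP
$ kubectl -n internal run np-test --rm -it --image=busybox --restart=Never \
  -- wget -qO- -T 2 http://<TARGET-POD-IP>:8080       # should succeed from inside the namespace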
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.
- Create a new service named front-end-svc exposing the container port http.
- Configure the new service to also expose the individual pods via a NodePort on the nodes on which they are scheduled.
Answer:
*$ kubectl config use-context ck8s
$ kubectl get deployments front-end
NAME READY UP-TO-DATE AVAILABLE AGE
`front-end` 1/1 1 1 10m
$ kubectl explain --help
$ kubectl explain pod.spec.containers
$ kubectl explain pod.spec.containers.ports
$ kubectl explain deploy.spec.template.spec.containers.ports
*$ kubectl edit deployments front-end
...omitted...
  template:
    ...omitted...
    spec:
      containers:
      - image: nginx
        # add 3 lines
        ports:
        - name: http
          containerPort: 80
...omitted...
$ kubectl get deployments front-end
NAME READY UP-TO-DATE AVAILABLE AGE
front-end `1/1` 1 1 12m
*$ kubectl expose deployment front-end \
  --port=80 --target-port=http \
  --name=front-end-svc \
  --type=NodePort
*$ kubectl get deployments front-end --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
front-end 0/1 1 0 37s `app=front-end`
*$ vim 6.yml
apiVersion: v1
kind: Service
metadata:
# name: my-service
  name: front-end-svc
spec:
  # type required by the task (confirm against the wording)
  type: NodePort
  selector:
#   app: MyApp
    app: front-end
  ports:
  - port: 80
    targetPort: http
*$ kubectl apply -f 6.yml
$ kubectl get services front-end-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end-svc `NodePort` 10.106.46.251 <none> 80:`32067`/TCP 39s
$ curl k8s-worker1:32067
...output omitted...
Welcome to nginx!
...output omitted...
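Optionally confirm that the service actually selects the front-end pod and that the named port resolved to 80:
$ kubectl get endpoints front-end-svc
$ kubectl describe service front-end-svc | grep -E 'TargetPort|NodePort|Endpoints'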
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
The availability of service hello can be checked using the following command, which should return hi:
$ curl -kL /hi

Task:
- Create a new nginx Ingress resource as follows:
  - Name: ping
  - Namespace: ing-internal
  - Exposing service hi on path /hi using service port 5678
Answer:
*$ kubectl config use-context ck8s
Install the ingress class
🅰️ During the exam:
*$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
🅱️ In the practice environment:
Open https://github.com/kubernetes in the browser, navigate to ingress-nginx → deploy/static/provider/cloud/deploy.yaml, copy the file contents and paste them into d.yml.
*$ vim d.yml
2️⃣ Practice environment: if Task 2 has already been done, delete the hash value that follows each image (containerd):
...
image: registry.k8s.io/ingress-nginx/controller:v1.5.1
...
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
...
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
...
2️⃣ Practice environment: if Task 2 has not been done, the hash after the image does not need to be deleted (docker).
*$ kubectl apply -f d.yml
*$ kubectl get ingressclasses
NAME CONTROLLER PARAMETERS AGE
`nginx` k8s.io/ingress-nginx 11m
*$ kubectl get pod -A | grep ingress
ingress-nginx ingress-nginx-admission-create-w2h4k 0/1 Completed 0 92s
ingress-nginx ingress-nginx-admission-patch-k6pgk 0/1 Completed 1 92s
ingress-nginx `ingress-nginx-controller-58b94f55c8-gl7gk` 1/1 `Running` 0 92s
*$ vim 7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
# name: minimal-ingress
  name: ping
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  # add 1 line
  namespace: ing-internal
spec:
# ingressClassName: nginx-example
  ingressClassName: nginx
  rules:
  - http:
      paths:
#     - path: /testpath
      - path: /hi
        pathType: Prefix
        backend:
          service:
#           name: test
            name: hi
            port:
#             number: 80
              number: 5678
*$ kubectl apply -f 7.yml
$ kubectl -n ing-internal get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
`ping` nginx * 80 11m
*$ kubectl get pods -A -o wide | grep ingress
...output omitted...
ingress-nginx `ingress-nginx-controller`-769f969657-4zfjv 1/1 Running 0 12m `172.16.126.15` k8s-worker2
*$ curl 172.16.126.15/hi
hi
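If the controller pod IP is not convenient, the ingress can also be tested through the controller Service's node port (the exact service name and type depend on which deploy.yaml variant was applied; the port is whatever the service shows mapped to 80):
*$ kubectl -n ingress-nginx get service ingress-nginx-controller
*$ curl http://k8s-worker2:<NODEPORT>/hi   # should also return hi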
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Scale the deployment webserver to 6 pods.
Answer:
*$ kubectl config use-context ck8s
$ kubectl get deployments webserver
NAME READY UP-TO-DATE AVAILABLE AGE
webserver `1/1` 1 1 30s
*$ kubectl edit deployments webserver
...omitted...
spec:
  progressDeadlineSeconds: 600
# replicas: 1
  replicas: 6
...omitted...
🅱️ scale
$ kubectl scale deployment webserver --replicas 6
$ kubectl get deployments webserver -w
NAME READY UP-TO-DATE AVAILABLE AGE
webserver `6/6` 6 6 120s
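An alternative quick check that all six replicas are ready:
$ kubectl get deployment webserver -o jsonpath='{.status.readyReplicas}{"\n"}'
6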
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Schedule a pod as follows:
- Name:
nginx-kusc00401
- image:
nginx
- Node selector:
disk=spinning
Hint: search the official documentation for nodeSelector.
Answer:
*$ kubectl config use-context ck8s
*$ vim 9.yml
apiVersion: v1
kind: Pod
metadata:
# name: nginx
  name: nginx-kusc00401
  labels:
    env: test
spec:
  containers:
  - name: nginx
    # as required by the task
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
#   disktype: ssd
    disk: spinning
*$ kubectl apply -f 9.yml
$ kubectl get pod nginx-kusc00401 -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-kusc00401 1/1 Running 0 11s 172.16.126.30 `k8s-worker2`
$ kubectl get nodes -l disk=spinning
NAME STATUS ROLES AGE VERSION
`k8s-worker2` Ready 9d v1.24.1
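Instead of copying a docs example, the pod skeleton can also be generated imperatively and then edited to add the nodeSelector (same end result):
$ kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 9.yml
$ vim 9.yml    # add nodeSelector with disk: spinning under spec: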
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.
Hint
Answer:
*$ kubectl config use-context ck8s
$ kubectl get nodes
k8s-master Ready control-plane 9d v1.24.2
k8s-worker1 Ready,SchedulingDisabled 9d v1.24.1
k8s-worker2 Ready 9d v1.24.1
*$ kubectl describe nodes | grep -i taints
Taints: node-role.kubernetes.io/control-plane:`NoSchedule`
Taints: node.kubernetes.io/unschedulable:`NoSchedule`
Taints:
*$ echo 1 > /opt/KUSC00402/kusc00402.txt
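On a larger cluster the same number can be derived with a short pipeline instead of counting by eye; a sketch that matches this environment (it assumes one Taints: line per node in the describe output, so double-check the result manually):
$ READY=$(kubectl get nodes --no-headers | grep -wc Ready)
$ TAINTED=$(kubectl describe nodes | grep -i 'Taints:' | grep -c NoSchedule)
$ echo $((READY - TAINTED)) > /opt/KUSC00402/kusc00402.txt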
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Create a pod named kucc1 with a single app container for each of the following images running inside it (there may be between 1 and 4 images specified): nginx + redis + memcached + consul
Answer:
*$ kubectl config use-context ck8s
*$ vim 11.yml
apiVersion: v1
kind: Pod
metadata:
# name: myapp-pod
  name: kucc1
spec:
  containers:
# - name: myapp-container
  - name: nginx
#   image: busybox:1.28
    image: nginx
  # add the remaining containers
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
*$ kubectl apply -f 11.yml
$ kubectl get pod kucc1 -w
NAME READY STATUS RESTARTS AGE
kucc1 `4/4` Running 0 77s
Ctrl-C
Task weight: 4%
Set configuration context:
Task:
- Create a persistent volume with name app-data, of capacity 1Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-data.
Answer:
Search the docs for "pv":
Configure a Pod to Use a PersistentVolume for Storage | Kubernetes
*$ vim 12.yml
apiVersion: v1
kind: PersistentVolume
metadata:
# name: task-pv-volume
  name: app-data
spec:
# storageClassName: manual
  capacity:
#   storage: 10Gi
    storage: 1Gi
  accessModes:
# - ReadWriteOnce
  - ReadWriteMany
  hostPath:
#   path: "/mnt/data"
    path: "/srv/app-data"
    # add this line
    type: DirectoryOrCreate
*$ kubectl apply -f 12.yml
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
`app-data` `1Gi` `RWX` Retain Available 4s
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
Create a new PersistentVolumeClaim:
- Name: pv-volume
- Class: csi-hostpath-sc
- Capacity: 10Mi

Create a new pod which mounts the PersistentVolumeClaim as a volume:
- Name: web-server
- Image: nginx
- Mount path: /usr/share/nginx/html

Configure the new pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.
Answer:
*$ kubectl config use-context ck8s
Create the PVC
Search the docs for "pvc":
Configure a Pod to Use a PersistentVolume for Storage | Kubernetes
*$ vim 13pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# name: claim1
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
# storageClassName: fast
  storageClassName: csi-hostpath-sc
  resources:
    requests:
#     storage: 30Gi
      storage: 10Mi
*$ kubectl apply -f 13pvc.yml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-volume `Bound` pvc-89935613-3af9-4193-9a68-116067cf1a34 10Mi RWO csi-hostpath-sc 6s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
app-data 1Gi RWX Retain Available 72m
pvc-89935613-3af9-4193-9a68-116067cf1a34 10Mi RWO Delete `Bound` default/pv-volume csi-hostpath-sc 39s
*$ vim 13pod.yml
apiVersion: v1
kind: Pod
metadata:
# name: task-pv-pod
  name: web-server
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
#     claimName: task-pv-claim
      claimName: pv-volume
  containers:
# - name: task-pv-container
  - name: web-server
    image: nginx
#   ports:
#   - containerPort: 80
#     name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
*$ kubectl apply -f 13pod.yml
pod/web-server created
$ kubectl get pod web-server
NAME READY STATUS RESTARTS AGE
web-server 1/1 `Running` 0 9s
Allow volume expansion
Search the docs for "storageclass":
Storage Classes | Kubernetes
*$ kubectl edit storageclasses csi-hostpath-sc
...output omitted...
# add 1 line
allowVolumeExpansion: true
$ kubectl get storageclasses -A
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-hostpath-sc k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate `true` 5m51s
And record that change
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
*$ kubectl edit pvc pv-volume --record
...omitted...
spec:
...omitted...
#     storage: 10Mi
      storage: 70Mi
...omitted...
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-volume Bound pvc-9a5fb9b6-b127-4868-b936-cb4f17ef910e `70Mi` RWO csi-hostpath-sc 31m
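The same expansion can be done with kubectl patch instead of kubectl edit (the task allows either):
*$ kubectl patch pvc pv-volume --record \
  -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'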
Task weight: 5%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Monitor the logs of pod bar and:
  - Extract log lines corresponding to error unable-to-access-website
  - Write them to /opt/KUTR00101/bar
Answer:
*$ kubectl config use-context ck8s
*$ kubectl logs bar | grep unable-to-access-website > /opt/KUTR00101/bar
$ cat /opt/KUTR00101/bar
YYYY-mm-dd 07:13:03,618: ERROR `unable-to-access-website`
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Don't modify the existing containers.

Context:
Without changing its existing containers, an existing pod needs to be integrated into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task:
Add a busybox sidecar container named sidecar to the existing pod big-corp-app. The new sidecar container has to run the following command:
/bin/sh -c tail -f /var/log/legacy-app.log
Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.
Answer:
*$ kubectl config use-context ck8s
*$ kubectl get pod big-corp-app -o yaml > 15.yml
*$ vim 15.yml
...omitted...
spec:
  containers:
  ...omitted...
    volumeMounts:
    # existing container: add 2 lines
    - name: logs
      mountPath: /var/log
  # new container: add 5 lines
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  ...omitted...
  volumes:
  # add 2 lines
  - name: logs
    emptyDir: {}
...omitted...
Method A
*$ kubectl replace -f 15.yml --force
pod "big-corp-app" deleted
pod/big-corp-app replaced
Method B
*$ kubectl delete -f 15.yml
*$ kubectl apply -f 15.yml
$ kubectl get pod big-corp-app -w
NAME READY STATUS RESTARTS AGE
big-corp-app `2/2` Running 1 37s
$ kubectl logs -c sidecar big-corp-app
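Optionally verify that the sidecar really sees the shared log file:
$ kubectl exec big-corp-app -c sidecar -- ls -l /var/log/legacy-app.log
$ kubectl logs big-corp-app -c sidecar --tail=5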
Task weight: 5%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- From the pod label name=cpu-loader, find pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Answer:
*$ kubectl config use-context ck8s
$ kubectl top pod -h
*$ kubectl top pod -l name=cpu-loader -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default `bar` `1m` 5Mi
default cpu-loader-5b898f96cd-56jf5 0m 3Mi
default cpu-loader-5b898f96cd-9zlt5 0m 4Mi
default cpu-loader-5b898f96cd-bsvsb 0m 4Mi
*$ echo bar > /opt/KUTR00401/KUTR00401.txt
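kubectl top can also sort the output, which avoids reading the CPU column by eye:
$ kubectl top pod -l name=cpu-loader -A --sort-by=cpu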
Task weight: 13%
Set configuration context:
$ kubectl config use-context ck8s
You can ssh to the failed node using:
Task:
- A Kubernetes worker node named k8s-worker1 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
Answer:
*$ kubectl config use-context ck8s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 43d v1.24.1
k8s-worker1 `NotReady` 43d v1.24.1
*$ kubectl describe nodes k8s-worker1
...output omitted...
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason             Message
  ----                 ------    -----------------                 ------------------                ------             -------
  NetworkUnavailable   False     Tue, 31 May YYYY 11:25:06 +0000   Tue, 31 May YYYY 11:25:06 +0000   CalicoIsUp         Calico is running on this node
  MemoryPressure       Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000   `NodeStatusUnknown  Kubelet stopped posting node status.`
  DiskPressure         Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000   `NodeStatusUnknown  Kubelet stopped posting node status.`
  PIDPressure          Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000   `NodeStatusUnknown  Kubelet stopped posting node status.`
  Ready                Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000   `NodeStatusUnknown  Kubelet stopped posting node status.`
...output omitted...
*$ ssh k8s-worker1
*$ sudo -i
*# systemctl enable --now kubelet.service
# systemctl status kubelet
q       to quit the status pager
Ctrl-D  to exit sudo
Ctrl-D  to exit ssh
*$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 43d v1.24.1
k8s-worker1 `Ready`,SchedulingDisabled 43d v1.24.1
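If enabling kubelet alone does not bring the node back, these are the usual next places to look (an optional troubleshooting sketch, run as root on the failed node):
*# systemctl status kubelet containerd            # is the container runtime up as well?
*# journalctl -u kubelet --no-pager | tail -n 20  # recent kubelet errors
*# systemctl enable --now containerd              # only if the runtime itself was stopped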
[VMware/k8s-master]
- First restore the 1.24 snapshot
- Insert the *.iso and tick "Connected"
sudo mount -o uid=1000 /dev/sr0 /media
/media/cka-setup
kubectl get pod -A | grep -v Running
# Once everything is Running, take a powered-off snapshot == CKA
$ media/cka-grade
Spend Time: up 1 hours, 1 minutes Wed 01 Jun YYYY 04:58:06 PM UTC
================================================================================
PASS Task1. - RBAC
PASS Task2. - drain
PASS Task3. - upgrade
PASS Task4. - snapshot
PASS Task5. - network-policy
PASS Task6. - service
PASS Task7. - ingress-nginx
PASS Task8. - replicas
PASS Task9. - schedule
PASS Task10. - NoSchedule
PASS Task11. - multi_pods
PASS Task12. - pv
PASS Task13. - Dynamic-Volume
PASS Task14. - logs
PASS Task15. - Sidecar
PASS Task16. - Metric
PASS Task17. - Daemon (kubelet, containerd, docker)
================================================================================
The results of your CKA v1.24: `PASS` Your score: `100`
$ media/cka-grade 1
Spend Time: up 1 hours, 2 minutes Wed 01 Jun YYYY 04:58:14 PM UTC
================================================================================
`PASS` Task1. - RBAC
================================================================================
The results of your CKA v1.24: FAIL Your score: 4