Manually Deploying Kubernetes (12): Deploying the kubelet Component on Worker Nodes
The original tutorial comes from github/opsnull; this series builds on it and records the problems I ran into during my own setup.
Download and distribute the kubelet binaries
Refer to Manually Deploying Kubernetes (8): Deploying the Master Nodes.
Install dependencies
Refer to Manually Deploying Kubernetes (6): Deploying the flannel Network.
Create the kubelet bootstrap kubeconfig files
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in slave-34 slave-35 slave-36
do
echo ">>> ${node_name}"
# create a bootstrap token for this node
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:${node_name} \
  --kubeconfig ~/.kube/config)
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# use the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done
- Only the token is written into the kubeconfig; once bootstrapping finishes, kube-controller-manager creates the client and server certificates for the kubelet (see the check below).
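To double-check what was written, you can dump one of the generated files with kubectl (a quick check; slave-34 is just one of the example node names above, and adding --raw would also show the embedded CA and token):
# inspect the generated bootstrap kubeconfig: it should contain a cluster entry, the kubelet-bootstrap user, and the default context
kubectl config view --kubeconfig=kubelet-bootstrap-slave-34.kubeconfig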
Check the tokens kubeadm created for each node
[root@ _38_ /opt/k8s/work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
41in74.vbutfdhlhq17flhy 23h 2019-08-06T13:41:51+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:slave-34
4h412a.ezo1f6iafyko8yul 23h 2019-08-06T13:41:53+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:slave-36
y82yeg.8ucvez5eabnzizfi 23h 2019-08-06T13:41:53+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:slave-35
[root@ _39_ /opt/k8s/work]#
- A token is valid for 1 day; once it expires it can no longer be used to bootstrap a kubelet and will be cleaned up by kube-controller-manager's tokencleaner (see below for how to recreate one);
- After kube-apiserver accepts a kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers; a ClusterRoleBinding will be created for this group later.
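If a token has expired before its node gets bootstrapped, a new one can be created and written back into that node's bootstrap kubeconfig with the same commands used above (a sketch, using slave-34 as the example node):
# recreate a bootstrap token for one node and refresh its kubeconfig
node_name=slave-34
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:${node_name} \
  --kubeconfig ~/.kube/config)
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig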
Check the Secret associated with each token
[root@ _39_ /opt/k8s/work]# kubectl get secrets -n kube-system|grep bootstrap-token
bootstrap-token-41in74 bootstrap.kubernetes.io/token 7 116s
bootstrap-token-4h412a bootstrap.kubernetes.io/token 7 114s
bootstrap-token-y82yeg bootstrap.kubernetes.io/token 7 114s
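Each Secret holds the token-id and token-secret that together form the full token <token-id>.<token-secret>; they can be decoded if you ever need to reconstruct a token (a quick check, using the bootstrap-token-41in74 Secret listed above):
kubectl get secret bootstrap-token-41in74 -n kube-system -o jsonpath='{.data.token-id}' | base64 -d; echo
kubectl get secret bootstrap-token-41in74 -n kube-system -o jsonpath='{.data.token-secret}' | base64 -d; echo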
Distribute the bootstrap kubeconfig files to all worker nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in slave-34 slave-35 slave-36
do
echo ">>> ${node_name}"
scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
Create and distribute the kubelet configuration file
Starting with v1.10, some kubelet parameters must be set in a configuration file, and kubelet --help warns:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet configuration file template (for the available options, see the comments in the kubelet source code):
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
- address: the address the kubelet's secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
- readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
- authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
- authentication.x509.clientCAFile: the CA certificate that signs client certificates; enables HTTPS certificate authentication;
- authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
- requests (from kube-apiserver or any other client) that pass neither x509 certificate nor webhook authentication are rejected with Unauthorized;
- authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group is allowed to operate on a resource (RBAC);
- featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate lifetime is controlled by kube-controller-manager's --experimental-cluster-signing-duration flag;
- the kubelet must run as root;
Create and distribute the kubelet configuration file to each node
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.1.34 192.168.1.35 192.168.1.36
do
echo ">>> ${node_ip}"
sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
done
Create and distribute the kubelet systemd unit file
Create the kubelet systemd unit file template
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
- if --hostname-override is set, kube-proxy must be given the same option, otherwise the Node may not be found;
- --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in it to send a TLS bootstrapping request to kube-apiserver;
- after K8S approves the kubelet's CSR, the certificate and private key files are created in the --cert-dir directory and their paths are then written into the --kubeconfig file;
- --pod-infra-container-image: do not use Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes in containers.
Create and distribute the kubelet systemd unit file to each node
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
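Before starting the service it is worth confirming the per-node substitution and the unit file syntax (an optional check, reusing the NODE_IPS array from environment.sh):
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  # each node should print its own --hostname-override value
  ssh root@${node_ip} "grep -- '--hostname-override' /etc/systemd/system/kubelet.service"
  # systemd-analyze prints warnings if the unit file is malformed
  ssh root@${node_ip} "systemd-analyze verify /etc/systemd/system/kubelet.service || true"
done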
Bootstrap Token Auth and granting permissions
- When the kubelet starts, it checks whether the file specified by --kubeconfig exists; if it does not, the kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
- When kube-apiserver receives the CSR, it authenticates the token in the request; on success it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.
By default this user and group have no permission to create CSRs, so the kubelet fails to start with errors like the following:
$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 26 12:13:41 zhangjun-k8s01 kubelet[128468]: I0526 12:13:41.798230 128468 certificate_manager.go:366] Rotating certificates
May 26 12:13:41 zhangjun-k8s01 kubelet[128468]: E0526 12:13:41.801997 128468 certificate_manager.go:385] Failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
May 26 12:13:42 zhangjun-k8s01 kubelet[128468]: E0526 12:13:42.044828 128468 kubelet.go:2244] node "zhangjun-k8s01" not found
May 26 12:13:42 zhangjun-k8s01 kubelet[128468]: E0526 12:13:42.078658 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
May 26 12:13:42 zhangjun-k8s01 kubelet[128468]: E0526 12:13:42.079873 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
May 26 12:13:42 zhangjun-k8s01 kubelet[128468]: E0526 12:13:42.082683 128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
May 26 12:13:42 zhangjun-k8s01 kubelet[128468]: E0526 12:13:42.084473 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 26 12:13:42 zhangjun-k8s01 kubelet[128468]: E0526 12:13:42.088466 128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
The fix is to create a ClusterRoleBinding that binds the group system:bootstrappers to the ClusterRole system:node-bootstrapper:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
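You can confirm the binding exists (an optional check):
# subjects should list Group system:bootstrappers, and the role should be system:node-bootstrapper
kubectl describe clusterrolebinding kubelet-bootstrap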
Start the kubelet service
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
ssh root@${node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
- the working directory must be created before starting the service;
- swap must be turned off, otherwise the kubelet fails to start (see the status check below).
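After starting, confirm that the kubelet is actually running on every node before continuing (a quick check, reusing the NODE_IPS array from environment.sh):
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  # "active (running)" means the kubelet started; otherwise inspect the journal
  ssh root@${node_ip} "systemctl status kubelet | grep Active"
  ssh root@${node_ip} "journalctl -u kubelet --no-pager | tail -n 5"
done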
Changes I made
- On the worker nodes, change 127.0.0.1:8443 in /etc/kubernetes/kubelet-bootstrap.kubeconfig to a master node address (192.168.1.31, 192.168.1.32, or 192.168.1.33).
- On the master nodes, change the kube-nginx listen address from 127.0.0.1:8443 to listen 0.0.0.0:8443.
- Restart kube-nginx (see the one-liners below).
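For reference, both changes boil down to a couple of one-liners (a sketch; 192.168.1.31 is one of my master addresses, and the kube-nginx paths assume it was installed under /opt/k8s/kube-nginx as in the earlier posts):
# on each worker node: point the bootstrap kubeconfig at a reachable master
sed -i 's#https://127.0.0.1:8443#https://192.168.1.31:8443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
# on each master node: let kube-nginx listen on all interfaces, then restart it
sed -i 's#listen 127.0.0.1:8443#listen 0.0.0.0:8443#' /opt/k8s/kube-nginx/conf/kube-nginx.conf
systemctl restart kube-nginx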
Automatically approve CSR requests
Create three ClusterRoleBindings, used to automatically approve client certificates, renew client certificates, and renew server certificates respectively:
cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f csr-crb.yaml
- auto-approve-csrs-for-group: automatically approves a node's first CSR; note that on this first CSR the requesting group is system:bootstrappers;
- node-client-cert-renewal: automatically approves renewal of a node's expiring client certificate; the group of the automatically generated certificate is system:nodes;
- node-server-cert-renewal: automatically approves renewal of a node's expiring server certificate; the group of the automatically generated certificate is system:nodes (a quick check follows).
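A quick way to confirm the objects were created (an optional check):
kubectl get clusterrole approve-node-server-renewal-csr
kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal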
Check the kubelet status
After a short wait (1-10 minutes), the CSRs of all three nodes have been automatically approved:
[root@ _81_ /opt/k8s/work]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-27shb 47s system:node:slave-36 Pending
csr-2fl9z 4m8s system:bootstrap:41in74 Approved,Issued
csr-6xdh5 4m26s system:bootstrap:y82yeg Approved,Issued
csr-8hf5g 49s system:node:slave-35 Pending
csr-jk257 31s system:node:slave-34 Pending
csr-n4c99 6m47s system:bootstrap:41in74 Pending
csr-t442g 6m52s system:bootstrap:y82yeg Approved,Issued
csr-v4tph 4m23s system:bootstrap:4h412a Approved,Issued
csr-vbjsl 6m48s system:bootstrap:4h412a Pending
The Pending CSRs are the kubelet server certificate requests; they have to be approved manually, see the section below.
All nodes are Ready:
[root@ _82_ /opt/k8s/work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
slave-34 Ready <none> 58s v1.14.2
slave-35 Ready <none> 77s v1.14.2
slave-36 Ready <none> 74s v1.14.2
kube-controller-manager has generated a kubeconfig file and a key pair for each node:
[root@ _77_ /opt/k8s/work]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2313 8月 5 15:26 /etc/kubernetes/kubelet.kubeconfig
[root@ _78_ /opt/k8s/work]# ls -l /etc/kubernetes/cert/|grep kubelet
-rw------- 1 root root 1273 8月 5 15:30 kubelet-client-2019-08-05-15-30-11.pem
lrwxrwxrwx 1 root root 59 8月 5 15:30 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-08-05-15-30-11.pem
- the kubelet server certificate is not generated automatically.
Manually approve the server cert CSRs
For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually:
[root@ _84_ /opt/k8s/work]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-27shb 3m17s system:node:slave-36 Pending
csr-2fl9z 6m38s system:bootstrap:41in74 Approved,Issued
csr-6xdh5 6m56s system:bootstrap:y82yeg Approved,Issued
csr-8hf5g 3m19s system:node:slave-35 Pending
csr-jk257 3m1s system:node:slave-34 Pending
csr-n4c99 9m17s system:bootstrap:41in74 Approved,Issued
csr-t442g 9m22s system:bootstrap:y82yeg Approved,Issued
csr-v4tph 6m53s system:bootstrap:4h412a Approved,Issued
csr-vbjsl 9m18s system:bootstrap:4h412a Approved,Issued
[root@ _85_ /opt/k8s/work]# kubectl certificate approve csr-27shb
certificatesigningrequest.certificates.k8s.io/csr-27shb approved
[root@ _86_ /opt/k8s/work]# kubectl certificate approve csr-8hf5g
certificatesigningrequest.certificates.k8s.io/csr-8hf5g approved
[root@ _87_ /opt/k8s/work]# kubectl certificate approve csr-jk257
certificatesigningrequest.certificates.k8s.io/csr-jk257 approved
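With many nodes, the Pending server CSRs can also be approved in one go instead of one by one (a convenience one-liner; make sure the Pending entries really are the expected node server requests before running it):
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve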
[root@ _79_ /opt/k8s/work]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1273 8月 5 15:30 /etc/kubernetes/cert/kubelet-client-2019-08-05-15-30-11.pem
lrwxrwxrwx 1 root root 59 8月 5 15:30 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-08-05-15-30-11.pem
-rw------- 1 root root 1313 8月 5 15:34 /etc/kubernetes/cert/kubelet-server-2019-08-05-15-34-17.pem
lrwxrwxrwx 1 root root 59 8月 5 15:34 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2019-08-05-15-34-17.pem
APIs exposed by the kubelet
After starting, the kubelet listens on several ports to serve requests from kube-apiserver and other clients:
[root@ _80_ /opt/k8s/work]# netstat -lnpt|grep kubelet
tcp 0 0 192.168.1.34:10248 0.0.0.0:* LISTEN 19220/kubelet
tcp 0 0 192.168.1.34:10250 0.0.0.0:* LISTEN 19220/kubelet
tcp 0 0 127.0.0.1:35982 0.0.0.0:* LISTEN 19220/kubelet
- 10248: the healthz HTTP service (probed in the example after this list);
- 10250: the HTTPS service; requests to this port must be authenticated and authorized (even requests to /healthz);
- the read-only port 10255 is not enabled;
- since K8S v1.10 the --cadvisor-port flag (default port 4194) has been removed, so the cAdvisor UI & API are no longer served by the kubelet.
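The healthz port requires no authentication, so it can be probed directly (assuming node 192.168.1.34, matching the healthzBindAddress above):
curl http://192.168.1.34:10248/healthz
# expected output: ok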
For example, when you run kubectl exec -it nginx-ds-5rmws -- sh, kube-apiserver sends the following request to the kubelet:
POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1
The kubelet serves HTTPS requests on port 10250 for the following resources:
- /pods, /runningpods
- /metrics, /metrics/cadvisor, /metrics/probes
- /spec
- /stats, /stats/container
- /logs
- /run/, /exec/, /attach/, /portForward/, /containerLogs/
Since anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 must be authenticated and authorized.
The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs (the User of the kubernetes certificate used by kube-apiserver has been granted this permission):
[root@ _89_ /opt/k8s/work]# kubectl describe clusterrole system:kubelet-api-admin
Name: system:kubelet-api-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
nodes/log [] [] [*]
nodes/metrics [] [] [*]
nodes/proxy [] [] [*]
nodes/spec [] [] [*]
nodes/stats [] [] [*]
nodes [] [] [get list watch proxy]
kubelet API authentication and authorization
The kubelet is configured with the following authentication parameters:
- authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
- authentication.x509.clientCAFile: the CA certificate that signs client certificates; enables HTTPS certificate authentication;
- authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
and with the following authorization parameter:
- authorization.mode=Webhook: enables RBAC authorization.
When the kubelet receives a request, it authenticates the client certificate against clientCAFile, or checks whether the bearer token is valid. If neither succeeds, the request is rejected with Unauthorized:
[root@ _81_ /opt/k8s/work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.1.34:10250/metrics
Unauthorized
After a request is authenticated, the kubelet sends a SubjectAccessReview to kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the requested resource (RBAC), as shown below.
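That check can be reproduced by hand with a SubjectAccessReview; the user and resource attributes below mirror the kube-controller-manager example in the next section (a sketch, not something the deployment requires):
# status.allowed in the returned object shows whether the request would be authorized
kubectl create -o yaml -f - <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:kube-controller-manager
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: metrics
EOF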
Certificate authentication and authorization
# a certificate without sufficient permissions
[root@ _93_ /opt/k8s/work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.1.34:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
# the admin certificate with full permissions, created when the kubectl CLI was deployed
[root@ _94_ /opt/k8s/work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.1.34:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
- the values of --cacert, --cert, and --key must be file paths; for a certificate in the current directory, such as ./admin.pem, the leading ./ must not be omitted, otherwise the request returns 401 Unauthorized.
Bearer token authentication and authorization
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it is allowed to call the kubelet API:
[root@ _95_ /opt/k8s/work]# kubectl create sa kubelet-api-test
serviceaccount/kubelet-api-test created
[root@ _96_ /opt/k8s/work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created
[root@ _97_ /opt/k8s/work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@ _98_ /opt/k8s/work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@ _99_ /opt/k8s/work]# echo ${TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tbndiaDgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjhkZjQyZjg3LWI3NTUtMTFlOS04ODllLTAwMGMyOTNkMWRlNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.Hh4xl6e8e2GRIQOjZDVPwSM3sQLoDFsFqVX3Entg84dTrERpESe1ST0yCcE22QdB5mnm5b8LHY0j4Uu6W14a0Pzkip3OORshAwDlNqa3hpiBuMo84gdzR0VfkC4SrKMnnNzaC2GLD9eFiMVsX6ZbAsLYziH7Az_81DIz8OH7IH6wTyWJmtlJ70q3AbEGeKjlpbsgEeV7uj1XsrifnIGt_n_Lx4osbjDE6kvlQDEsZZBjQ0aAwusyfu7YjhGOKObvpoc58RF64uWb84x7S0C51eSZS2cxWUtxfcd9itffXvJSWf91nOUNYqkWirHqsOVmJEawBC3ELZaQj9qAemos_g
[root@ _100_ /opt/k8s/work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.1.34:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
cadvisor and metrics
cAdvisor is embedded in the kubelet binary; it collects resource usage statistics (CPU, memory, disk, network) for the containers on its node.
Visiting https://192.168.1.34:10250/metrics and https://192.168.1.34:10250/metrics/cadvisor in a browser returns the kubelet metrics and the cAdvisor metrics respectively.
kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
you therefore need to import a certificate into the browser manually; see A.浏览器访问kube-apiserver安全端口.md for how to create and import the certificates, then open the URLs above on port 10250.
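If you only need the data rather than a browser view, the same endpoints can be fetched with curl and the admin certificate used earlier:
curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.1.34:10250/metrics/cadvisor | head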
Get the kubelet configuration
Fetch each node's kubelet configuration from kube-apiserver:
# using the admin certificate with full permissions, created when the kubectl CLI was deployed
[root@ _104_ /opt/k8s/work]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem ${KUBE_APISERVER}/api/v1/nodes/slave-34/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "192.168.1.34",
  "port": 10250,
  "rotateCertificates": true,
  "serverTLSBootstrap": true,
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 0,
  "registryBurst": 20,
  "eventRecordQPS": 0,
  "eventBurst": 20,
  "enableDebuggingHandlers": true,
  "enableContentionProfiling": true,
  "healthzPort": 10248,
  "healthzBindAddress": "192.168.1.34",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local",
  "clusterDNS": [
    "10.254.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "nodeStatusReportFrequency": "1m0s",
  "nodeLeaseDurationSeconds": 40,
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "10m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 220,
  "podCIDR": "172.30.0.0/16",
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "cpuCFSQuotaPeriod": "100ms",
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 1000,
  "kubeAPIBurst": 2000,
  "serializeImagePulls": false,
  "evictionHard": {
    "memory.available": "100Mi"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "failSwapOn": true,
  "containerLogMaxSize": "20Mi",
  "containerLogMaxFiles": 10,
  "configMapAndSecretChangeDetectionStrategy": "Watch",
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}