Deploying a Kubernetes 1.13 Cluster

This guide deploys a three-node Kubernetes cluster with one master and two slave (worker) nodes, planned as follows:
kub1  master  192.168.5.10
kub2  slave   192.168.5.11
kub3  slave   192.168.5.12
Software versions on every node:
  • OS: CentOS 7.1 (1511)
  • Docker: 17.12.0
  • Kubernetes: 1.13.1
All nodes need the following components installed:
  • Docker: the container runtime
  • kubelet: runs on every node; starts containers and Pods
  • kubeadm: bootstraps and initializes the cluster
Preparation

  • Disable the firewall on all nodes:
systemctl disable firewalld.service
systemctl stop firewalld.service
  • Disable SELinux:
setenforce 0
vi /etc/selinux/config   # set SELINUX=disabled
  • Turn off swap on all nodes:
swapoff -a
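Note that swapoff -a only disables swap until the next reboot. A sketch of the common companion step, commenting out any swap entry in /etc/fstab so swap stays off permanently (skip it if your fstab is managed by other tooling):

```shell
# Comment out every uncommented swap line in /etc/fstab so that swap
# remains disabled after a reboot (swapoff -a alone does not persist).
if [ -f /etc/fstab ]; then
  sed -ri 's/^([^#].*\sswap\s.*)$/# \1/' /etc/fstab
fi
```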
  • Set the hostname on each node (run the matching command on the matching machine):
hostnamectl --static set-hostname kub1
hostnamectl --static set-hostname kub2
hostnamectl --static set-hostname kub3
  • Add hostname/IP mappings on all nodes by appending the following to /etc/hosts:
192.168.5.10 kub1
192.168.5.11 kub2
192.168.5.12 kub3
Component installation
Docker installation (all nodes)
wget https://download.docker.com/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker.repo
yum install -y docker-ce
Start Docker and enable it at boot:
systemctl start docker
systemctl enable docker
To pin a specific version (this guide uses Docker 17.12.0):
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-17.12.0.ce
kubelet, kubeadm, and kubectl installation (all nodes)
  • First set up the yum repo:
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
  • Then install and enable the components (the SELinux lines repeat the earlier step and are harmless if it was already done):
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
On all nodes: in addition to swapoff -a, the following kernel parameters must be 1, otherwise starting the services will fail:
[root@kub2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@kub2 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
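Writing these values under /proc only lasts until reboot. A sketch of making them persistent via a sysctl drop-in (the filename k8s.conf is my choice, not mandated; SYSCTL_DIR is a variable only so the snippet can be pointed at a scratch directory):

```shell
# Persist the two required kernel parameters across reboots.
SYSCTL_DIR="${SYSCTL_DIR:-/etc/sysctl.d}"
mkdir -p "$SYSCTL_DIR"
cat > "$SYSCTL_DIR/k8s.conf" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply immediately (the bridge key requires the br_netfilter module):
#   sysctl --system
```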
(The following runs on the master node.)
Image download
The images are pulled from mirrors because k8s.gcr.io is blocked in mainland China; the Aliyun mirrors are much faster there.
[root@kub1 ~]# docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
[root@kub1 ~]# docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
[root@kub1 ~]# docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
[root@kub1 ~]# docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
[root@kub1 ~]# docker pull mirrorgooglecontainers/pause:3.1
[root@kub1 ~]# docker pull mirrorgooglecontainers/etcd:3.2.24
[root@kub1 ~]# docker pull coredns/coredns:1.2.6
[root@kub1 ~]# docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
Note: after the slave nodes later joined the master, their containers failed to create; pulling these two images (pause and kube-proxy) on each slave node and retagging them fixed it.
Next, retag the images above with labels for my own Aliyun registry for later reuse. (The Aliyun retagging part is optional and can be skipped.)
[root@kub1 ~]# docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:kube-apiserver-v1.13.1
[root@kub1 ~]# docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:kube-controller-manager-v1.13.1
[root@kub1 ~]# docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:kube-scheduler-v1.13.1
[root@kub1 ~]# docker tag coredns/coredns:1.2.6 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:coredns-1.2.6
[root@kub1 ~]# docker tag mirrorgooglecontainers/etcd:3.2.24 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:etcd-3.2.24
[root@kub1 ~]# docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:flannel-v0.10.0-amd64
[root@kub1 ~]# docker tag mirrorgooglecontainers/pause:3.1 registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:pause-3.1
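The pull/tag/untag dance can also be scripted. A sketch that prints the equivalent docker commands for the k8s.gcr.io names kubeadm expects (pipe the output to sh to execute; the image list is the one used in this guide):

```shell
# Map a mirror image name to the k8s.gcr.io name kubeadm expects, e.g.
# mirrorgooglecontainers/kube-proxy:v1.13.1 -> k8s.gcr.io/kube-proxy:v1.13.1
gcr_name() {
  echo "k8s.gcr.io/${1##*/}"
}

images="mirrorgooglecontainers/kube-apiserver:v1.13.1
mirrorgooglecontainers/kube-controller-manager:v1.13.1
mirrorgooglecontainers/kube-scheduler:v1.13.1
mirrorgooglecontainers/kube-proxy:v1.13.1
mirrorgooglecontainers/pause:3.1
mirrorgooglecontainers/etcd:3.2.24
coredns/coredns:1.2.6"

# Print the pull/tag/cleanup commands for each image.
for img in $images; do
  echo "docker pull $img"
  echo "docker tag $img $(gcr_name "$img")"
  echo "docker rmi $img"
done
```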
Push them to the Aliyun repository for later reuse. When you later run cluster initialization, each image must carry the k8s.gcr.io tag that kubeadm expects, for example:
[root@kub1 ~]# docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
Delete the original images and keep only the retagged ones. Retag each one for your own use, e.g.:
[root@kub1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/momo/kubernetes1.13.1:kube-controller-manager-v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
Then delete the originals:
[root@kub1 ~]# docker rmi kube-proxy:v1.13.1 kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 coredns:1.2.6 etcd:3.2.24 flannel:v0.10.0-amd64 pause:3.1
Untagged: kube-proxy:v1.13.1
Untagged: kube-apiserver:v1.13.1
Untagged: kube-controller-manager:v1.13.1
Untagged: kube-scheduler:v1.13.1
Untagged: coredns:1.2.6
Untagged: etcd:3.2.24
Untagged: flannel:v0.10.0-amd64
Untagged: pause:3.1
[root@kub1 ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.13.1         fdb321fd30a0   3 months ago    80.2MB
k8s.gcr.io/kube-scheduler            v1.13.1         ab81d7360408   3 months ago    79.6MB
k8s.gcr.io/kube-apiserver            v1.13.1         40a63db91ef8   3 months ago    181MB
k8s.gcr.io/kube-controller-manager   v1.13.1         26e6f1db2a52   3 months ago    146MB
k8s.gcr.io/coredns                   1.2.6           f59dcacceff4   4 months ago    40MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   5 months ago    220MB
k8s.gcr.io/flannel                   v0.10.0-amd64   f0fad859c909   13 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   14 months ago   742kB
[root@kub1 ~]#
Deployment
Now run the following command on the master node to initialize the k8s cluster:
kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.5.10 --pod-network-cidr=10.244.0.0/16
  • --kubernetes-version: the Kubernetes version to deploy
  • --apiserver-advertise-address: which of the master's network interfaces to advertise; if omitted, kubeadm automatically picks the interface with the default gateway
  • --pod-network-cidr: the Pod network range; the required value depends on the network add-on you choose. This guide uses the classic flannel add-on, which expects 10.244.0.0/16.
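The same three flags can instead be captured in a kubeadm configuration file, which is easier to version-control. A sketch using the v1beta1 config API that shipped with kubeadm 1.13 (run it with `kubeadm init --config kubeadm.yaml`):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.5.10
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
```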
Running the command first failed a preflight check, because bridge-nf-call-iptables had reverted to 0. Fix:
[root@kub1 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
[root@kub1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
Then run the init command again:
[root@kub1 ~]# kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.5.10 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.0-ce. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kub1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kub1 localhost] and IPs [192.168.5.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kub1 localhost] and IPs [192.168.5.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.503698 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kub1" as an annotation
[mark-control-plane] Marking the node kub1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kub1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: cbj9gt.n06gpj9d6h9eimv9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.5.10:6443 --token cbj9gt.n06gpj9d6h9eimv9 --discovery-token-ca-cert-hash sha256:a52aea37ec5dd3f610b9d43f4c7a4bcd32b7026f89108263798b57f510412a9c
[root@kub1 ~]#
Configure kubectl
On the master, run the following as root to configure kubectl:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
Install the Pod network
A Pod network is required for Pods to communicate with each other. k8s supports many network add-ons; here we again pick the classic flannel.
  • First set the kernel parameter (already set to 1 earlier):
sysctl net.bridge.bridge-nf-call-iptables=1
  • Then run the following on the master node:
kubectl apply -f kube-flannel.yaml
The kube-flannel.yaml file is linked here (or in the kubernetes directory of the network drive).
[root@kub1 ~]# kubectl apply -f kube-flannel.yaml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Once the Pod network is installed, check whether the CoreDNS Pod is now running; as soon as it is, you can continue with the next steps:
kubectl get pods --all-namespaces -o wide
We can also see that the master node is now Ready: kubectl get nodes
[root@kub1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
kub1   Ready    master   39m   v1.13.4
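If you script the setup, a small helper can poll for this instead of eyeballing the output. A sketch (the node name kub1 is this guide's master; the helper itself just parses `kubectl get nodes` output):

```shell
# Succeed once the named node shows STATUS "Ready" in `kubectl get nodes`
# output read from stdin.
node_ready() {
  awk -v n="$1" '$1 == n && $2 == "Ready" { found = 1 } END { exit !found }'
}

# On a live cluster you would poll like this:
#   until kubectl get nodes | node_ready kub1; do sleep 5; done
```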
Add the slave nodes
(The following runs on the slave nodes.)
Run the following on each of the two slave nodes to join it to the cluster that is now ready on the master:
[root@kub2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@kub2 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@kub2 ~]# kubeadm join 192.168.5.10:6443 --token cbj9gt.n06gpj9d6h9eimv9 --discovery-token-ca-cert-hash sha256:a52aea37ec5dd3f610b9d43f4c7a4bcd32b7026f89108263798b57f510412a9c
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.0-ce. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.5.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.5.10:6443"
[discovery] Requesting info from "https://192.168.5.10:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.5.10:6443"
[discovery] Successfully established connection with API Server "192.168.5.10:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kub2" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Run the same commands on kub3.
Verification
On the master node, check the node status:
[root@kub1 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
kub1   Ready      master   46m     v1.13.4
kub2   NotReady   <none>   2m23s   v1.13.4
kub3   NotReady   <none>   23s     v1.13.4
  • Check the status of all Pods:
[root@kub1 ~]# kubectl get pods --all-namespaces -o wide
If the kube-proxy or pause containers fail to start on a slave, pull and retag the images on that slave node:
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
Done: the cluster is now up and running.
Tear down the cluster
First drain and remove each node:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Once the nodes are removed, reset the cluster state with:
kubeadm reset
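The drain/delete/reset sequence can be generated for all workers at once. A sketch that prints the commands rather than running them (node names are this guide's; pipe the output to sh on the master to execute):

```shell
# Print the teardown commands for the given worker nodes.
teardown_cmds() {
  for node in "$@"; do
    echo "kubectl drain $node --delete-local-data --force --ignore-daemonsets"
    echo "kubectl delete node $node"
  done
  echo "# then run 'kubeadm reset' on every node"
}

teardown_cmds kub2 kub3
```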
Install the dashboard
Just as you would give elasticsearch a visual management tool, it is convenient to give the k8s cluster one as well, to make cluster management easier.
We will therefore install kubernetes-dashboard v1.10.1 for visual cluster management.
  • First pull the image manually and retag it (on all nodes):
[root@kub1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1
v1.10.1: Pulling from kuberneters/kubernetes-dashboard-amd64
9518d8afb433: Pull complete
Digest: sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1
[root@kub1 ~]#
[root@kub1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
[root@kub1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1
Untagged: registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1
Untagged: registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
  • Install the dashboard:
First check that the directory /home/share/certs exists on every node; create it wherever it is missing, then run:
kubectl create -f dashboard.yaml
The dashboard.yaml file is linked here (Baidu net disk, kubernetes directory).
  • Check whether the dashboard Pod started normally; if it did, the installation succeeded:
kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-4rds2                1/1     Running   0          81m
coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          80m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
kube-proxy-crr7k                        1/1     Running   0          81m
kube-proxy-gk5vf                        1/1     Running   0          78m
kube-proxy-ktr27                        1/1     Running   0          77m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s
(The key and certificate steps below run only on the master node.)
  • Find the dashboard's externally exposed port:
kubectl get service --namespace=kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
  • Generate a private key and certificate signing request:
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr
(When prompted for input, just press Enter through every field.)
  • Generate the SSL certificate:
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  • Then place the generated dashboard.key and dashboard.crt under /home/share/certs; that path is configured in the dashboard-user-role.yaml file used next.
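The whole key/certificate sequence can also be run non-interactively, together with a sanity check that the certificate really matches the key (run in a scratch directory; the -subj value is an arbitrary placeholder):

```shell
# Generate key, CSR, and self-signed cert without any prompts, then
# verify the cert/key pair by comparing their RSA modulus fingerprints.
openssl genrsa -out dashboard.key 2048
openssl req -new -key dashboard.key -out dashboard.csr -subj "/CN=kubernetes-dashboard"
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# These two fingerprints must be identical:
openssl x509 -noout -modulus -in dashboard.crt | openssl md5
openssl rsa  -noout -modulus -in dashboard.key | openssl md5
```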
  • Create the dashboard user:
kubectl create -f dashboard-user-role.yaml
The dashboard-user-role.yaml file is linked here (or on the Baidu net disk).
  • Get the login token:
kubectl describe secret/$(kubectl get secret -n kube-system | grep admin | awk '{print $1}') -n kube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -n kube-system | grep admin | awk '{print $1}') -n kube-system
Name:         admin-token-9d4vl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw
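For scripting, you can extract just the token value instead of reading it out of the describe output. A sketch; the name filter assumes the service account is called admin, as above:

```shell
# Pick the first secret whose name starts with "admin-token" from
# `kubectl get secret` output read on stdin.
admin_secret() {
  awk '$1 ~ /^admin-token/ { print $1; exit }'
}

# On a live cluster:
#   S=$(kubectl get secret -n kube-system | admin_secret)
#   kubectl -n kube-system get secret "$S" -o jsonpath='{.data.token}' | base64 -d
```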
With the token generated, open a browser and enter the token to log in to the cluster management page.
Neither Chrome nor 360 Browser worked for the login here; it finally succeeded with Firefox.
Cluster overview
Common commands
View the status of all Pods across namespaces:
kubectl get pods --all-namespaces -o wide
Describe a specific Pod:
kubectl describe pod kubernetes-dashboard-5d686bd8cf-f5x22 --namespace=kube-system
Delete a Pod (its controller automatically starts a replacement):
kubectl delete pods kubernetes-dashboard-5d686bd8cf-f5x22 --namespace=kube-system

Published by: deelaaay