Deploying a Highly Available Kubernetes Cluster with kubeadm


There are two HA deployment topologies.
One option is to co-locate etcd with the master (control-plane) components on the same nodes (stacked etcd).
[Figure 1: stacked etcd topology]

The other option is to use an external etcd cluster that is not co-located with the master nodes.
[Figure 2: external etcd topology]

This article uses the first, stacked topology.

This guide builds a highly available Kubernetes cluster with kubeadm. kubeadm lets us stand up a cluster quickly; "highly available" here means the master (control-plane) components and the etcd store are redundant. The server IPs and roles used in this article are:

  1. 192.168.200.3 master1
  2. 192.168.200.4 master2
  3. 192.168.200.5 master3
  4. 192.168.200.6 node1
  5. 192.168.200.7 node2
  6. 192.168.200.8 node3
  7. 192.168.200.16 VIP

Remove stale host entries

In /etc/hosts, also delete this line: ::1     localhost       localhost.localdomain   localhost6      localhost6.localdomain6

1. Add the Docker yum repo on all nodes

# Step 1: install the required system utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker CE repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
sudo service docker start

# Note:
# The official repository only enables the latest stable packages by default. You can enable other
# channels (for example the test channel) by editing the repo file:
# vim /etc/yum.repos.d/docker-ce.repo
#   change enabled=0 to enabled=1 under [docker-ce-test]
#
# Installing a specific version of Docker CE:
# Step 1: list the available versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.0.ce.1-1.el7.centos from the list above)
# sudo yum -y install docker-ce-[VERSION]

2. Install Docker on all nodes and enable it at boot

yum -y install docker-ce
systemctl start docker && systemctl enable docker

3. Configure a registry mirror on all nodes and switch Docker to the systemd cgroup driver
Create daemon.json if it does not already exist:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

4. Restart the Docker service

systemctl daemon-reload
systemctl restart docker

5. Set the hostname on each node

hostnamectl set-hostname <hostname>
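
For reference, a minimal sketch of the per-node commands, following the host-to-IP mapping listed at the top of this article:

# run each command on the corresponding machine
hostnamectl set-hostname master1   # on 192.168.200.3
hostnamectl set-hostname master2   # on 192.168.200.4
hostnamectl set-hostname master3   # on 192.168.200.5
hostnamectl set-hostname node1     # on 192.168.200.6
hostnamectl set-hostname node2     # on 192.168.200.7
hostnamectl set-hostname node3     # on 192.168.200.8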

6. Configure /etc/hosts on all nodes

[root@master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.3 master1
192.168.200.4 master2
192.168.200.5 master3
192.168.200.6 node1
192.168.200.7 node2
192.168.200.8 node3

7. Stop and disable the firewall on all nodes

systemctl stop firewalld && systemctl disable firewalld

8. Disable SELinux on all nodes

[root@master1 ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled        # change this to disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

9. Disable swap on all nodes

[root@master1 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Wed Dec 30 15:01:07 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=7321cb15-9220-4cc2-be0c-a4875f6d8bbc /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0         # comment out this line

Turn swap off for the running system and comment it out of /etc/fstab:

swapoff -a && sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

10. Reboot the servers

reboot

11. Synchronize time on all nodes

timedatectl set-timezone Asia/Shanghai
chronyc -a makestep
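
To confirm the nodes are actually keeping time in sync (a quick check, assuming chronyd is the NTP client in use, as the chronyc command above implies):

# make sure chronyd is running and enabled at boot
systemctl enable --now chronyd
# list the NTP sources chrony is tracking
chronyc sources -v
# show the timezone and whether the clock is NTP-synchronized
timedatectl status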

12. Tune kernel parameters on all nodes so bridged IPv4 traffic is passed to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

Verify that the settings took effect:

sysctl -a | grep net.ipv4.ip_forward
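
The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so if sysctl complains about missing keys, load the module first and make it persistent (a small addition on top of the original steps):

# load the bridge netfilter module now
modprobe br_netfilter
# load it automatically at boot
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
# re-apply every sysctl configuration file
sysctl --system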

13. Configure the Kubernetes yum repo on all nodes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0

14. Install ipset and related packages on all nodes (skip this step if kube-proxy will use iptables mode)

yum -y install ipvsadm ipset sysstat conntrack libseccomp

15. Load the IPVS kernel modules on all nodes (skip this step if kube-proxy will use iptables mode)

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
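
On kernels 4.19 and later the nf_conntrack_ipv4 module no longer exists; it was merged into nf_conntrack. The stock CentOS 7 kernel is unaffected, but on a newer kernel the equivalent would be:

# kernels >= 4.19 ship nf_conntrack instead of nf_conntrack_ipv4
modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack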

16. Install haproxy and keepalived on all master nodes

yum -y install haproxy keepalived

17. Edit the keepalived configuration on master1

[root@master1 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL

# add the following two lines
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"         # path of the health-check script
    interval 3
    weight -2 
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER            # MASTER
    interface ens33         # local NIC name
    virtual_router_id 51
    priority 100             # priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16      # virtual IP
    }
    track_script {
        check_haproxy       # the vrrp_script defined above
    }
}

18. Edit the keepalived configuration on master2

[root@master2 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL

# add the following two lines
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"         # path of the health-check script
    interval 3
    weight -2 
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP            # BACKUP
    interface ens33         # local NIC name
    virtual_router_id 51
    priority 99             # priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16      # virtual IP
    }
    track_script {
        check_haproxy       # the vrrp_script defined above
    }
}

19. Edit the keepalived configuration on master3

[root@master3 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL

# add the following two lines
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"         # path of the health-check script
    interval 3
    weight -2 
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP            # BACKUP
    interface ens33         # local NIC name
    virtual_router_id 51
    priority 98            # priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16      # virtual IP
    }
    track_script {
        check_haproxy       # the vrrp_script defined above
    }
}

20. The haproxy configuration is identical on all three master nodes

[root@master1 ~]# cat /etc/haproxy/haproxy.cfg 
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                        tcp
    bind                        *:16443
    option                      tcplog
    default_backend             kubernetes-apiserver

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:awesomePassword
    stats refresh   5s
    stats realm     HAProxyStatistics
    stats uri       /admin?stats

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master1 192.168.200.3:6443 check
    server  master2 192.168.200.4:6443 check
    server  master3 192.168.200.5:6443 check
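
Before starting the service it is worth validating the file; haproxy can check a configuration without actually starting:

# -c: check mode, -f: configuration file to test
haproxy -c -f /etc/haproxy/haproxy.cfg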

21. Create the health-check script on every master node

[root@master1 ~]# cat /etc/keepalived/check_haproxy.sh 
#!/bin/sh
# if haproxy is down, try to restart it; if the restart fails, give up and alert
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ]
then
    systemctl start haproxy
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]
    then
        killall -9 haproxy
        echo "HAPROXY down" | mail -s "haproxy" root    # "root" is a placeholder recipient; adjust as needed
        sleep 3600
    fi
fi

22. Make the script executable

chmod +x /etc/keepalived/check_haproxy.sh

23. Start keepalived and haproxy and enable them at boot

systemctl start keepalived && systemctl enable keepalived
systemctl start haproxy && systemctl enable haproxy

24. Check the VIP

[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4e:4c:fe brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.3/24 brd 192.168.200.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.200.16/32 scope global ens33        # virtual IP
       valid_lft forever preferred_lft forever
    inet6 fe80::9047:3a26:97fd:4d07/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:dc:dd:f0:d7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
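
To verify that failover works, you can stop keepalived on master1, watch the VIP move to master2, then start it again (a safe test at this point, before the cluster exists):

# on master1: release the VIP
systemctl stop keepalived
# on master2: the VIP should now be bound to ens33
ip a show ens33 | grep 192.168.200.16
# on master1: rejoin as MASTER; the VIP moves back thanks to the higher priority
systemctl start keepalived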

25. Install kubeadm, kubelet and kubectl on every node. The kubeadm, kubectl and kubelet versions must match the Kubernetes version being installed. Enable kubelet at boot but do not start it by hand, otherwise it will keep logging errors; kubelet is started automatically once the cluster is initialized.

yum install -y --nogpgcheck kubelet kubeadm kubectl
systemctl enable kubelet && systemctl daemon-reload
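
The command above installs whatever the repo currently marks as newest, which may be newer than the v1.23.0 used in kubeadm-config.yaml below. To pin the matching version explicitly, something like the following should work (the exact package release suffix is an assumption based on the repo's usual naming):

# pin kubelet/kubeadm/kubectl to the Kubernetes version used later in this article
yum install -y --nogpgcheck kubelet-1.23.0-0 kubeadm-1.23.0-0 kubectl-1.23.0-0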

26. Generate the default init configuration

kubeadm config print init-defaults > kubeadm-config.yaml

27. Edit the init configuration

cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.3     # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master1  # this node's hostname
  taints: 
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.200.16:16443"    # VIP and haproxy port
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # change the image repository to suit your environment
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"  # added
  serviceSubnet: 10.96.0.0/12
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
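
Before pulling images or initializing, kubeadm can parse the file and show what it would do without touching the host; a quick pre-check:

# parse kubeadm-config.yaml and print the generated resources without applying anything
kubeadm init --config kubeadm-config.yaml --dry-run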

28. Pull the required images

[root@master3 ~]# kubeadm config images pull --config kubeadm-config.yaml
W0131 18:44:21.933608   25680 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "type"
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

29. Initialize the cluster on master1 (the sample output below was captured from an earlier v1.18.2 run; a v1.23.0 run looks essentially the same)

[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
W1231 14:11:50.231964  120564 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.3 192.168.200.16]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.200.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.200.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1231 14:11:53.776346  120564 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1231 14:11:53.777078  120564 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.013316 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.200.16:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0489748e3b77a9a29443dae2c4c0dfe6ff4bde0daf3ca8740dd9ab6a9693a78 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.16:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0489748e3b77a9a29443dae2c4c0dfe6ff4bde0daf3ca8740dd9ab6a9693a78

30. If initialization fails, reset the cluster and try again

kubeadm reset

31. Create the following directory on the other two master nodes

mkdir -p /etc/kubernetes/pki/etcd

32. Copy the certificates from master1 to the other master nodes

scp /etc/kubernetes/pki/ca.* root@192.168.200.4:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@192.168.200.4:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.200.4:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.200.4:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@192.168.200.4:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.* root@192.168.200.5:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@192.168.200.5:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.200.5:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.200.5:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@192.168.200.5:/etc/kubernetes/

Or do the same with a script (the host list uses the master2 and master3 names from /etc/hosts in step 6):

cat cert-main-master.sh
USER=root
CONTROL_PLANE_IPS="master2 master3"
for host in ${CONTROL_PLANE_IPS}; do
ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done


33. Copy admin.conf from master1 to the worker nodes

scp /etc/kubernetes/admin.conf root@192.168.200.6:/etc/kubernetes/

34. On the other master nodes, join the cluster with:

kubeadm join 192.168.200.16:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0489748e3b77a9a29443dae2c4c0dfe6ff4bde0daf3ca8740dd9ab6a9693a78 \
    --control-plane

35. On the worker nodes, join the cluster with:

kubeadm join 192.168.200.16:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0489748e3b77a9a29443dae2c4c0dfe6ff4bde0daf3ca8740dd9ab6a9693a78
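
The bootstrap token in kubeadm-config.yaml expires after 24 hours (ttl: 24h0m0s). If it has expired before a node joins, a fresh join command can be generated on master1; for extra control-plane nodes kubeadm can also re-upload the certificates and print the matching certificate key (both are standard kubeadm subcommands, shown here as a convenience):

# print a ready-to-use join command with a new token (for worker nodes)
kubeadm token create --print-join-command

# re-upload control-plane certificates and print the --certificate-key to append to a --control-plane join
kubeadm init phase upload-certs --upload-certs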

36. Set up kubectl access on all master nodes (optional on worker nodes)

As root:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

As a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

37. Check the status of all nodes

[root@master1 ~]# kubectl get nodes 
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   4m54s   v1.18.2
master2   NotReady   master   2m27s   v1.18.2
master3   NotReady   master   93s     v1.18.2
node1     NotReady   <none>   76s     v1.18.2

38. Install a network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or use the Calico CNI plugin instead:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
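
The nodes stay NotReady until the CNI pods are up and running; you can watch progress with:

# watch the flannel (or calico) and coredns pods until they reach Running
kubectl get pods -A -w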


39. Check the node status again

[root@master1 ~]# kubectl get nodes 
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   39m   v1.18.2
master2   Ready    master   37m   v1.18.2
master3   Ready    master   36m   v1.18.2
node1     Ready    <none>   35m   v1.18.2

Give the worker nodes a role label (use your actual node names; in this cluster they are node1, node2 and node3):

kubectl label no node1 kubernetes.io/role=worker
kubectl label no node2 kubernetes.io/role=worker
kubectl label no node3 kubernetes.io/role=worker

40. Download the etcdctl command-line client

wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz

41. Extract it and put it on the PATH

tar -zxf etcd-v3.4.14-linux-amd64.tar.gz
mv etcd-v3.4.14-linux-amd64/etcdctl /usr/local/bin
chmod +x /usr/local/bin/etcdctl
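
To avoid repeating the TLS flags in every etcdctl call, the client also reads them from environment variables; a convenient setup, assuming the kubeadm-generated certificate paths used in steps 43-45:

cat >> ~/.bash_profile << 'EOF'
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key
EOF
source ~/.bash_profile

# the health check from step 43 then shortens to:
etcdctl --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 endpoint health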

42. Verify that etcdctl works; output like the following means it is ready to use

[root@master1 ~]# etcdctl 
NAME:
etcdctl - A simple command line client for etcd3.

USAGE:
etcdctl [flags]

VERSION:
3.4.14

API VERSION:
3.4


COMMANDS:
alarm disarm Disarms all alarms
alarm list Lists all alarms
auth disable Disables authentication
auth enable Enables authentication
check datascale Check the memory usage of holding data for different workloads on a given server endpoint.
check perf Check the performance of the etcd cluster
compaction Compacts the event history in etcd
defrag Defragments the storage of the etcd members with given endpoints
del Removes the specified key or range of keys [key, range_end)
elect Observes and participates in leader election
endpoint hashkv Prints the KV history hash for each endpoint in --endpoints
endpoint health Checks the healthiness of endpoints specified in `--endpoints` flag
endpoint status Prints out the status of endpoints specified in `--endpoints` flag
get Gets the key or a range of keys
help Help about any command
lease grant Creates leases
lease keep-alive Keeps leases alive (renew)
lease list List all active leases
lease revoke Revokes leases
lease timetolive Get lease information
lock Acquires a named lock
make-mirror Makes a mirror at the destination etcd cluster
member add Adds a member into the cluster
member list Lists all members in the cluster
member promote Promotes a non-voting member in the cluster
member remove Removes a member from the cluster
member update Updates a member in the cluster
migrate Migrates keys in a v2 store to a mvcc store
move-leader Transfers leadership to another etcd cluster member.
put Puts the given key into the store
role add Adds a new role
role delete Deletes a role
role get Gets detailed information of a role
role grant-permission Grants a key to a role
role list Lists all roles
role revoke-permission Revokes a key from a role
snapshot restore Restores an etcd member snapshot to an etcd directory
snapshot save Stores an etcd node backend snapshot to a given file
snapshot status Gets backend snapshot status of a given file
txn Txn processes all the requests in one transaction
user add Adds a new user
user delete Deletes a user
user get Gets detailed information of a user
user grant-role Grants a role to a user
user list Lists all users
user passwd Changes password of user
user revoke-role Revokes a role from a user
version Prints the version of etcdctl
watch Watches events stream on keys or prefixes

OPTIONS:
      --cacert="" verify certificates of TLS-enabled secure servers using this CA bundle
      --cert="" identify secure client using this TLS certificate file
      --command-timeout=5s timeout for short running command (excluding dial timeout)
      --debug[=false] enable client-side debug logging
      --dial-timeout=2s dial timeout for client connections
  -d, --discovery-srv="" domain name to query for SRV records describing cluster endpoints
      --discovery-srv-name="" service name to query when using DNS discovery
      --endpoints=[127.0.0.1:2379] gRPC endpoints
  -h, --help[=false] help for etcdctl
      --hex[=false] print byte strings as hex encoded strings
      --insecure-discovery[=true] accept insecure SRV records describing cluster endpoints
      --insecure-skip-tls-verify[=false] skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
      --insecure-transport[=true] disable transport security for client connections
      --keepalive-time=2s keepalive time for client connections
      --keepalive-timeout=6s keepalive timeout for client connections
      --key="" identify secure client using this TLS key file
      --password="" password for authentication (if this option is used, --user option shouldn't include password)
      --user="" username[:password] for authentication (prompt if password is not supplied)
  -w, --write-out="simple" set the output format (fields, json, protobuf, simple, table)

43. Check the health of the etcd cluster

[root@master1 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 endpoint health
+--------------------+--------+-------------+-------+
|      ENDPOINT      | HEALTH |    TOOK     | ERROR |
+--------------------+--------+-------------+-------+
| 192.168.200.3:2379 |   true | 60.655523ms |       |
| 192.168.200.4:2379 |   true |  60.79081ms |       |
| 192.168.200.5:2379 |   true | 63.585221ms |       |
+--------------------+--------+-------------+-------+

44. List the etcd cluster members

[root@master1 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 member list
+------------------+---------+---------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |  NAME   |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+---------+----------------------------+----------------------------+------------+
| 4a8537d90d14a19b | started | master1 | https://192.168.200.3:2380 | https://192.168.200.3:2379 |      false |
| 4f48f36de1949337 | started | master2 | https://192.168.200.4:2380 | https://192.168.200.4:2379 |      false |
| 88fb5c8676da6ea1 | started | master3 | https://192.168.200.5:2380 | https://192.168.200.5:2379 |      false |
+------------------+---------+---------+----------------------------+----------------------------+------------+

45. Find the etcd cluster leader

[root@master1 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.200.3:2379,192.168.200.4:2379,192.168.200.5:2379 endpoint status
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.200.3:2379 | 4a8537d90d14a19b |   3.4.3 |  2.8 MB |      true |      false |         7 |       2833 |               2833 |        |
| 192.168.200.4:2379 | 4f48f36de1949337 |   3.4.3 |  2.7 MB |     false |      false |         7 |       2833 |               2833 |        |
| 192.168.200.5:2379 | 88fb5c8676da6ea1 |   3.4.3 |  2.7 MB |     false |      false |         7 |       2833 |               2833 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

46. Deploy the Kubernetes dashboard

1.1 Download recommended.yaml

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

1.2 Edit recommended.yaml

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000 # added
  selector:
    k8s-app: kubernetes-dashboard
---

1.3 Create certificates

mkdir dashboard-certs

cd dashboard-certs/

# create the namespace
kubectl create namespace kubernetes-dashboard

# generate the key
openssl genrsa -out dashboard.key 2048

# certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

# self-signed certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# create the kubernetes-dashboard-certs secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

1.4 Install the dashboard (if you see Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists, it can be ignored; it does no harm.)

kubectl apply -f recommended.yaml

1.5 Check the result

[root@master1 ~]# kubectl get pods -A  -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-b97kc                     1/1     Running   0          16h   10.244.0.2      master1   <none>           <none>
kube-system            coredns-66bff467f8-w2bbp                     1/1     Running   0          16h   10.244.1.2      master2   <none>           <none>
kube-system            etcd-master1                                 1/1     Running   1          16h   192.168.200.3   master1   <none>           <none>
kube-system            etcd-master2                                 1/1     Running   2          16h   192.168.200.4   master2   <none>           <none>
kube-system            etcd-master3                                 1/1     Running   1          16h   192.168.200.5   master3   <none>           <none>
kube-system            kube-apiserver-master1                       1/1     Running   2          16h   192.168.200.3   master1   <none>           <none>
kube-system            kube-apiserver-master2                       1/1     Running   2          16h   192.168.200.4   master2   <none>           <none>
kube-system            kube-apiserver-master3                       1/1     Running   2          16h   192.168.200.5   master3   <none>           <none>
kube-system            kube-controller-manager-master1              1/1     Running   3          16h   192.168.200.3   master1   <none>           <none>
kube-system            kube-controller-manager-master2              1/1     Running   2          16h   192.168.200.4   master2   <none>           <none>
kube-system            kube-controller-manager-master3              1/1     Running   1          16h   192.168.200.5   master3   <none>           <none>
kube-system            kube-flannel-ds-6wbrh                        1/1     Running   0          83m   192.168.200.5   master3   <none>           <none>
kube-system            kube-flannel-ds-gn2md                        1/1     Running   1          16h   192.168.200.4   master2   <none>           <none>
kube-system            kube-flannel-ds-rft78                        1/1     Running   1          16h   192.168.200.3   master1   <none>           <none>
kube-system            kube-flannel-ds-vkxfw                        1/1     Running   0          54m   192.168.200.6   node1     <none>           <none>
kube-system            kube-proxy-7p72p                             1/1     Running   2          16h   192.168.200.4   master2   <none>           <none>
kube-system            kube-proxy-g44fx                             1/1     Running   1          16h   192.168.200.3   master1   <none>           <none>
kube-system            kube-proxy-nwnzf                             1/1     Running   1          16h   192.168.200.6   node1     <none>           <none>
kube-system            kube-proxy-xxmgl                             1/1     Running   1          16h   192.168.200.5   master3   <none>           <none>
kube-system            kube-scheduler-master1                       1/1     Running   3          16h   192.168.200.3   master1   <none>           <none>
kube-system            kube-scheduler-master2                       1/1     Running   2          16h   192.168.200.4   master2   <none>           <none>
kube-system            kube-scheduler-master3                       1/1     Running   1          16h   192.168.200.5   master3   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-sn2rd   1/1     Running   0          14m   10.244.3.5      node1     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-mxfp2        1/1     Running   0          14m   10.244.3.4      node1     <none>           <none>


[root@master1 ~]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.97.196.112   <none>        8000/TCP        14m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.100.10.34    <none>        443:30000/TCP   14m   k8s-app=kubernetes-dashboard

1.6 Create a dashboard admin ServiceAccount

vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

1.7 Apply dashboard-admin.yaml

kubectl apply -f dashboard-admin.yaml

1.8 Grant the account cluster-wide permissions

vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

1.9 Apply dashboard-admin-bind-cluster-role.yaml

kubectl apply -f dashboard-admin-bind-cluster-role.yaml

2.0 Retrieve and copy the user token

[root@master1 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-f5ljr
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 95741919-e296-498e-8e10-233c4a34b07a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Iko5TVI0VVQ2TndBSlBLc2Rxby1CWGxSNHlxYXREWGdVOENUTFVKUmFGakEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZjVsanIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTU3NDE5MTktZTI5Ni00OThlLThlMTAtMjMzYzRhMzRiMDdhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.tP3EFBeOIJg7_cgJ-M9SDqTtPcfmoJU0nTTGyb8Sxag6Zq4K-g1lCiDqIFbVgrd-4nM7cOTMBfMwyKgdf_Xz573omNNrPDIJCTYkNx2qFN0qfj5qp8Txia3JV8FKRdrmqsap11ItbGD9a7uniIrauc6JKPgksK_WvoXZbKglEUla98ZU9PDm5YXXq8STyUQ6egi35vn5EYCPa-qkUdecE-0N06ZbTFetIYsHEnpswSu8LZZP_Zw7LEfnX9IVdl1147i4OpF4ET9zBDfcJTSr-YE7ILuv1FDYvvo1KAtKawUbGu9dJxsObLeTh5fHx_JWyqg9cX0LB3Gd1ZFm5z5s4g

2.1 Log in to the dashboard with the token
Browse to https://<any node IP>:30000 and paste the token.

[Figures 3-5: dashboard screenshots]

Notice:
The HA Kubernetes cluster is now installed. For a dashboard you can deploy the native dashboard from this article, or deploy the open-source KubeSphere instead; see: https://www.cnblogs.com/lfl17718347843/p/14131111.html

KubeSphere on GitHub: https://github.com/kubesphere/kubesphere

KubeSphere Chinese site: https://kubesphere.com.cn/

The CPU and memory usage shown in the dashboard screenshots above only appears once Metrics Server is deployed; see: https://www.cnblogs.com/lfl17718347843/p/14283796.html

By default the Kubernetes master nodes do not run workload pods; run the following to remove that restriction:

kubectl taint nodes --all node-role.kubernetes.io/master-




