      Deploying Kubernetes 1.22 with kubeadm

      kubeadm is the tool the Kubernetes project provides for quickly installing and deploying a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of kubeadm's cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest best practices the Kubernetes project recommends for cluster configuration.

      1. Preparation

      1.1 System configuration

      Before installing, complete the following preparation. The three CentOS 7.9 hosts are:

      cat /etc/hosts
      192.168.96.151    node1
      192.168.96.152    node2
      192.168.96.153    node3

      Complete the following system configuration on every host.

      If a firewall is enabled on any of the hosts, either open the ports required by the Kubernetes components (see the "Check required ports" section of Installing kubeadm) or disable the host firewall.
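
      For a lab environment, a minimal sketch is simply to stop the firewall (this assumes firewalld is the active firewall, as on a default CentOS 7 install; in production open the specific ports instead):

      # Stop and disable firewalld on each host (lab shortcut, not a production recommendation)
      systemctl stop firewalld
      systemctl disable firewalld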

      Disable SELinux:

      setenforce 0
      vi /etc/selinux/config
      SELINUX=disabled

      Create the /etc/modules-load.d/containerd.conf configuration file:

      cat << EOF > /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF

      Run the following commands to apply the configuration:

      modprobe overlay
      modprobe br_netfilter

      Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

      cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      user.max_user_namespaces=28633
      EOF

      Run the following command to apply the configuration:

      sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

      1.2 Prerequisites for enabling IPVS

      Since IPVS has already been merged into the mainline kernel, enabling IPVS mode for kube-proxy requires loading the following kernel modules:

      ip_vs
      ip_vs_rr
      ip_vs_wrr
      ip_vs_sh
      nf_conntrack_ipv4

      Run the following script on every server node:

      cat > /etc/sysconfig/modules/ipvs.modules <<EOF
      #!/bin/bash
      modprobe -- ip_vs
      modprobe -- ip_vs_rr
      modprobe -- ip_vs_wrr
      modprobe -- ip_vs_sh
      modprobe -- nf_conntrack_ipv4
      EOF
      chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

      The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules have been loaded correctly.

      Next, make sure the ipset package is installed on every node. To make it easier to inspect the IPVS proxy rules, it is also a good idea to install the management tool ipvsadm.

      yum install -y ipset ipvsadm

      If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS mode.

      1.3 Deploy the container runtime containerd

      Install the containerd container runtime on every server node.

      Download the containerd binary package:

      wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz

      The cri-containerd-cni-1.5.5-linux-amd64.tar.gz archive is already laid out according to the directory structure recommended for the official binary deployment. It contains the systemd unit file as well as the containerd and CNI deployment files. Extract it into the system root directory /:

      tar -zxvf cri-containerd-cni-1.5.5-linux-amd64.tar.gz -C /
      etc/
      etc/systemd/
      etc/systemd/system/
      etc/systemd/system/containerd.service
      etc/crictl.yaml
      etc/cni/
      etc/cni/net.d/
      etc/cni/net.d/10-containerd-net.conflist
      usr/
      usr/local/
      usr/local/sbin/
      usr/local/sbin/runc
      usr/local/bin/
      usr/local/bin/critest
      usr/local/bin/containerd-shim
      usr/local/bin/containerd-shim-runc-v1
      usr/local/bin/ctd-decoder
      usr/local/bin/containerd
      usr/local/bin/containerd-shim-runc-v2
      usr/local/bin/containerd-stress
      usr/local/bin/ctr
      usr/local/bin/crictl
      ......
      opt/cni/
      opt/cni/bin/
      opt/cni/bin/bridge
      ......

      Next, generate the containerd configuration file:

      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml

      According to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps the node more stable under resource pressure, so here containerd's cgroup driver is set to systemd on every node.

      Edit the configuration file /etc/containerd/config.toml generated above:

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        ...
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true

      Enable containerd to start on boot, and start it now:

      systemctl enable containerd --now

      Test with crictl to make sure the version information is printed and no errors are reported:

      crictl version
      Version:  0.1.0
      RuntimeName:  containerd
      RuntimeVersion:  v1.5.5
      RuntimeApiVersion:  v1alpha2

      2. Deploying Kubernetes with kubeadm

      2.1 Install kubeadm and kubelet

      Install kubeadm and kubelet on every node:

      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
              http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum makecache fast
      yum install kubelet kubeadm kubectl

      Running kubelet --help shows that most of kubelet's original command-line flags are now DEPRECATED. The recommended approach is to pass --config pointing at a configuration file and set the values previously supplied by those flags there; see Set Kubelet parameters via a config file. Kubernetes made this change to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.

      The kubelet configuration file must be in JSON or YAML format; see the documentation above for details.
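
      For illustration only, a minimal file of the kind --config expects might look like this (the field values are placeholders matching what this walkthrough configures later, not recommendations):

      # /var/lib/kubelet/config.yaml (illustrative example)
      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      cgroupDriver: systemd
      failSwapOn: false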

      Starting with Kubernetes 1.8, the system swap must be turned off; otherwise, with the default configuration, kubelet will fail to start. Turn swap off as follows:

      swapoff -a

      Edit /etc/fstab and comment out the swap entry so it is not mounted automatically, then confirm with free -m that swap is off (a sketch of this edit follows below). To adjust the swappiness parameter, edit /etc/sysctl.d/99-kubernetes-cri.conf and add the following line:

      vm.swappiness=0

      Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.
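
      A minimal sketch of the /etc/fstab edit mentioned above (the exact swap entry differs per host, and editing the file by hand works just as well as sed):

      # Comment out any fstab line that mentions swap; harmless if the line is already commented
      sed -i '/swap/ s/^/#/' /etc/fstab
      # Verify: the Swap row should show 0 total after swapoff -a and a reboot
      free -m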

      Because the three test hosts used here also run other services, turning off swap could affect them, so instead the kubelet configuration is changed to drop this restriction. Use the kubelet startup flag --fail-swap-on=false to remove the requirement that swap be off; edit /etc/sysconfig/kubelet and add:

      KUBELET_EXTRA_ARGS=--fail-swap-on=false

      2.2 Initialize the cluster with kubeadm init

      Enable the kubelet service to start on boot on every node:

      systemctl enable kubelet.service

      Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization:

      apiVersion: kubeadm.k8s.io/v1beta2
      bootstrapTokens:
      - groups:
        - system:bootstrappers:kubeadm:default-node-token
        token: abcdef.0123456789abcdef
        ttl: 24h0m0s
        usages:
        - signing
        - authentication
      kind: InitConfiguration
      localAPIEndpoint:
        advertiseAddress: 1.2.3.4
        bindPort: 6443
      nodeRegistration:
        criSocket: /var/run/dockershim.sock
        name: node
        taints: null
      ---
      apiServer:
        timeoutForControlPlane: 4m0s
      apiVersion: kubeadm.k8s.io/v1beta2
      certificatesDir: /etc/kubernetes/pki
      clusterName: kubernetes
      controllerManager: {}
      dns:
        type: CoreDNS
      etcd:
        local:
          dataDir: /var/lib/etcd
      imageRepository: k8s.gcr.io
      kind: ClusterConfiguration
      kubernetesVersion: 1.21.0
      networking:
        dnsDomain: cluster.local
        serviceSubnet: 10.96.0.0/12
      scheduler: {}
      ---
      apiVersion: kubelet.config.k8s.io/v1beta1
      authentication:
        anonymous:
          enabled: false
        webhook:
          cacheTTL: 0s
          enabled: true
        x509:
          clientCAFile: /etc/kubernetes/pki/ca.crt
      authorization:
        mode: Webhook
        webhook:
          cacheAuthorizedTTL: 0s
          cacheUnauthorizedTTL: 0s
      clusterDNS:
      - 10.96.0.10
      clusterDomain: cluster.local
      cpuManagerReconcilePeriod: 0s
      evictionPressureTransitionPeriod: 0s
      fileCheckFrequency: 0s
      healthzBindAddress: 127.0.0.1
      healthzPort: 10248
      httpCheckFrequency: 0s
      imageMinimumGCAge: 0s
      kind: KubeletConfiguration
      logging: {}
      nodeStatusReportFrequency: 0s
      nodeStatusUpdateFrequency: 0s
      rotateCertificates: true
      runtimeRequestTimeout: 0s
      shutdownGracePeriod: 0s
      shutdownGracePeriodCriticalPods: 0s
      staticPodPath: /etc/kubernetes/manifests
      streamingConnectionIdleTimeout: 0s
      syncFrequency: 0s
      volumeStatsAggPeriod: 0s

      The default configuration shows that imageRepository can be used to customize where the images required by Kubernetes are pulled from during cluster initialization. Based on the defaults, the kubeadm.yaml configuration file used for this cluster initialization is:

      apiVersion: kubeadm.k8s.io/v1beta2
      kind: InitConfiguration
      localAPIEndpoint:
        advertiseAddress: 192.168.96.151
        bindPort: 6443
      nodeRegistration:
        criSocket: /run/containerd/containerd.sock
        taints:
        - effect: PreferNoSchedule
          key: node-role.kubernetes.io/master
      ---
      apiVersion: kubeadm.k8s.io/v1beta2
      kind: ClusterConfiguration
      kubernetesVersion: v1.22.0
      imageRepository: registry.aliyuncs.com/google_containers
      networking:
        podSubnet: 10.244.0.0/16
      ---
      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      cgroupDriver: systemd
      failSwapOn: false
      ---
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      kind: KubeProxyConfiguration
      mode: ipvs

      Here imageRepository is set to Alibaba Cloud's registry, because gcr.io is blocked and images cannot be pulled from it directly. The kubelet cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs.

      Before starting the cluster initialization, kubeadm config images pull --config kubeadm.yaml can be used to pre-pull the container images Kubernetes needs on every server node.

      kubeadm config images pull --config kubeadm.yaml
      [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.0
      [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.0
      [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.0
      [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.22.0
      [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.5
      [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.0-0
      failed to pull image "registry.aliyuncs.com/google_containers/coredns:v1.8.4"

      The command above failed when pulling registry.aliyuncs.com/google_containers/coredns:v1.8.4, so the Alibaba Cloud mirror is apparently incomplete. Pull and tag the coredns image manually:

      crictl pull docker.io/coredns/coredns:1.8.4
      ctr -n k8s.io i tag docker.io/coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4

      Next, initialize the cluster with kubeadm. node1 is chosen as the master node; run the following command on node1:

      kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
      [init] Using Kubernetes version: v1.22.0
      [preflight] Running pre-flight checks
      [WARNING Swap]: running with swap on is not supported. Please disable swap
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.96.151]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [kubelet-check] Initial timeout of 40s passed.
      [apiclient] All control plane components are healthy after 41.504708 seconds
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
      [upload-certs] Skipping phase. Please see --upload-certs
      [mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
      [mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
      [bootstrap-token] Using token: wshiiw.o7qsemz81ikc1sfs
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy

      Your Kubernetes control-plane has initialized successfully!

      To start using your cluster, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

      Alternatively, if you are the root user, you can run:

        export KUBECONFIG=/etc/kubernetes/admin.conf

      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/

      Then you can join any number of worker nodes by running the following on each as root:

      kubeadm join 192.168.96.151:6443 --token wshiiw.o7qsemz81ikc1sfs \
      --discovery-token-ca-cert-hash sha256:dfaf4614301264755955fe577c403aa44017a8425b0c3a234a4991ff4a2f4b59

      The complete initialization output is recorded above; from it you can see the key steps that would be needed to install a Kubernetes cluster by hand. The key items are:

    • [certs] generates the various certificates
    • [kubeconfig] generates the kubeconfig files
    • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
    • [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the YAML files in the /etc/kubernetes/manifests directory
    • [bootstrap-token] generates the token; record it, as it will be needed later when adding nodes to the cluster with kubeadm join
    • The following commands configure how a regular user uses kubectl to access the cluster:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    • Finally, it prints the command for joining worker nodes to the cluster: kubeadm join 192.168.96.151:6443 --token wshiiw.o7qsemz81ikc1sfs \ --discovery-token-ca-cert-hash sha256:dfaf4614301264755955fe577c403aa44017a8425b0c3a234a4991ff4a2f4b59

      Check the cluster status to confirm all components are healthy; instead, an error shows up:

      kubectl get cs
      Warning: v1 ComponentStatus is deprecated in v1.19+
      NAME                 STATUS      MESSAGE                                                                                       ERROR
      controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
      scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
      etcd-0               Healthy     {"health":"true"}

      controller-manager and scheduler are reported as unhealthy. Edit the static pod manifests kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/, delete the - --port=0 line from the command options in both files, and restart kubelet (a sketch of this edit is shown below); checking again, everything is then healthy.
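
      A minimal sketch of that edit (sed is only one way to do it; editing the manifests by hand works just as well, and kubelet re-reads them automatically, so the restart is mostly for good measure):

      sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
      sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
      systemctl restart kubelet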

      kubectl get cs
      Warning: v1 ComponentStatus is deprecated in v1.19+
      NAME                 STATUS    MESSAGE             ERROR
      controller-manager   Healthy   ok
      scheduler            Healthy   ok
      etcd-0               Healthy   {"health":"true"}

      If the cluster initialization runs into problems, kubeadm reset can be used to clean up:
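
      For example (run on the affected node; a plain reset is usually enough, additional flags depend on the situation):

      kubeadm reset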

      2.3 Install the package manager Helm 3

      Helm is the package manager for Kubernetes, and the following steps will also use Helm to install common Kubernetes components. First install Helm on the master node node1.

      wget https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
      tar -zxvf helm-v3.6.0-linux-amd64.tar.gz
      mv linux-amd64/helm  /usr/local/bin/

      Run helm list and confirm there is no error output.

      2.4 Deploy the pod network component Calico

      Calico is chosen as the Kubernetes pod network component; below, Helm is used to install Calico into the cluster.

      Download the tigera-operator helm chart:

      wget https://github.com/projectcalico/calico/releases/download/v3.20.0/tigera-operator-v3.20.0-1.tgz

      View the configurable values of this chart:

      helm show values tigera-operator-v3.20.0-1.tgz
      imagePullSecrets: {}
      installation:
        enabled: true
        kubernetesProvider: ""
      apiServer:
        enabled: true
      certs:
        node:
          key:
          cert:
          commonName:
        typha:
          key:
          cert:
          commonName:
          caBundle:
      # Configuration for the tigera operator
      tigeraOperator:
        image: tigera/operator
        version: v1.20.0
        registry: quay.io
      calicoctl:
        image: quay.io/docker.io/calico/ctl
        tag: v3.20.0

      The customized values.yaml is as follows:

      # The values above can be customized as needed; customization is skipped in this walkthrough

      Install Calico with Helm:

      helm install calico tigera-operator-v3.20.0-1.tgz -f values.yaml

      Wait for all pods to reach the Running state and confirm:

      watch kubectl get pods -n calico-system
      NAME                                       READY   STATUS    RESTARTS   AGE
      calico-kube-controllers-7f58dbcbbd-kdnlg   1/1     Running   0          2m34s
      calico-node-nv794                          1/1     Running   0          2m34s
      calico-typha-65f579bc5d-4pbfz              1/1     Running   0          2m34s

      Take a look at the API resources Calico adds to Kubernetes:

      kubectl api-resources | grep calico
      bgpconfigurations                              crd.projectcalico.org/v1               false        BGPConfiguration
      bgppeers                                       crd.projectcalico.org/v1               false        BGPPeer
      blockaffinities                                crd.projectcalico.org/v1               false        BlockAffinity
      clusterinformations                            crd.projectcalico.org/v1               false        ClusterInformation
      felixconfigurations                            crd.projectcalico.org/v1               false        FelixConfiguration
      globalnetworkpolicies                          crd.projectcalico.org/v1               false        GlobalNetworkPolicy
      globalnetworksets                              crd.projectcalico.org/v1               false        GlobalNetworkSet
      hostendpoints                                  crd.projectcalico.org/v1               false        HostEndpoint
      ipamblocks                                     crd.projectcalico.org/v1               false        IPAMBlock
      ipamconfigs                                    crd.projectcalico.org/v1               false        IPAMConfig
      ipamhandles                                    crd.projectcalico.org/v1               false        IPAMHandle
      ippools                                        crd.projectcalico.org/v1               false        IPPool
      kubecontrollersconfigurations                  crd.projectcalico.org/v1               false        KubeControllersConfiguration
      networkpolicies                                crd.projectcalico.org/v1               true         NetworkPolicy
      networksets                                    crd.projectcalico.org/v1               true         NetworkSet

      These API resources belong to Calico, so managing them with kubectl is not recommended; calicoctl is the recommended tool for managing them. Install calicoctl as a kubectl plugin:

      cd /usr/local/bin
      curl -o kubectl-calico -O -L "https://github.com/projectcalico/calicoctl/releases/download/v3.20.0/calicoctl"
      chmod +x kubectl-calico

      Verify that the plugin works:

      kubectl calico -h

      2.5 Verify that cluster DNS works

      kubectl run curl --image=radial/busyboxplus:curl -it
      If you don't see a command prompt, try pressing enter.
      [ root@curl:/ ]$

      Once inside, run nslookup kubernetes.default and confirm that resolution works:

      nslookup kubernetes.default
      Server:    10.96.0.10
      Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

      Name:      kubernetes.default
      Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

      2.6 Add nodes to the Kubernetes cluster

      Next, add node2 and node3 to the Kubernetes cluster; run the following on node2 and node3 respectively:

      kubeadm join 192.168.96.151:6443 --token wshiiw.o7qsemz81ikc1sfs \
        --discovery-token-ca-cert-hash sha256:dfaf4614301264755955fe577c403aa44017a8425b0c3a234a4991ff4a2f4b59 \
        --ignore-preflight-errors=Swap

      node2 and node3 join the cluster without problems; on the master node, list the cluster nodes:

      kubectl get node
      NAME    STATUS   ROLES                  AGE   VERSION
      node1   Ready    control-plane,master   15m   v1.22.0
      node2   Ready    <none>                 48s   v1.22.0
      node3   Ready    <none>                 32s   v1.22.0
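
      Since kube-proxy was configured with mode: ipvs in kubeadm.yaml, it can also be worth a quick check on any node that IPVS virtual servers were actually created (a sketch; this uses the ipvsadm tool installed in section 1.2):

      # List IPVS virtual servers; entries for the kubernetes service (10.96.0.1:443)
      # and other cluster services indicate kube-proxy really is running in IPVS mode.
      ipvsadm -Ln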

      3. Deploying common Kubernetes components

      3.1 Deploy ingress-nginx with Helm


      To make it easy to expose services inside the cluster to the outside, Ingress is needed. Next, use Helm to deploy ingress-nginx onto Kubernetes. The Nginx Ingress Controller is deployed on the cluster's edge node.

      Here node1 (192.168.96.151) is used as the edge node; label it:

      kubectl label node node1 node-role.kubernetes.io/edge=

      Download the ingress-nginx helm chart:

      wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.0.0/ingress-nginx-4.0.0.tgz

      View the configurable values of the ingress-nginx-4.0.0.tgz chart:

      helm show values ingress-nginx-4.0.0.tgz

      Customize values.yaml as follows:

      controller:
        ingressClassResource:
          name: nginx
          enabled: true
          default: true
          controllerValue: "k8s.io/ingress-nginx"
        admissionWebhooks:
          enabled: false
        replicaCount: 1
        image:
          # registry: k8s.gcr.io
          # image: ingress-nginx/controller
          # tag: "v0.48.1"
          registry: docker.io
          image: unreachableg/k8s.gcr.io_ingress-nginx_controller
          tag: "v1.0.0-beta.1"
          digest: sha256:a8ef07fb3fd569dfc7c4c82cb1ac14275925417caed5aa19c0e4e16a9e76e681
        hostNetwork: true
        nodeSelector:
          node-role.kubernetes.io/edge: ''
        affinity:
          podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - nginx-ingress
                  - key: component
                    operator: In
                    values:
                    - controller
                topologyKey: kubernetes.io/hostname
        tolerations:
            - key: node-role.kubernetes.io/master
              operator: Exists
              effect: NoSchedule
            - key: node-role.kubernetes.io/master
              operator: Exists
              effect: PreferNoSchedule

      The nginx ingress controller's replicaCount is 1, and it will be scheduled onto the edge node node1. No externalIPs are specified for the nginx ingress controller service; instead, hostNetwork: true makes the controller use the host network. Because k8s.gcr.io is blocked, the image is replaced here with the docker.io mirror unreachableg/k8s.gcr.io_ingress-nginx_controller; pull the image in advance:

      crictl pull unreachableg/k8s.gcr.io_ingress-nginx_controller:v1.0.0-beta.1
      helm install ingress-nginx ingress-nginx-4.0.0.tgz --create-namespace -n ingress-nginx -f values.yaml
      kubectl get pod -n ingress-nginx
      NAME                                        READY   STATUS    RESTARTS   AGE
      ingress-nginx-controller-7f574989bc-xwbf4   1/1     Running   0          117s

      Test by visiting http://192.168.96.151; if it returns nginx's default 404 page, the deployment is complete.
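
      For example, from any machine that can reach node1 (the exact response headers will vary):

      # Expect an HTTP 404 response served by ingress-nginx's default backend
      curl -i http://192.168.96.151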

      3.2 Deploy the dashboard with Helm

      First deploy metrics-server:

      wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml

      In components.yaml, change the image to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.0. Also modify the container's startup arguments in components.yaml and add --kubelet-insecure-tls (a sketch of the edited fragment follows below).
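
      A minimal sketch of the relevant fragment after the edit (the surrounding Deployment fields are abbreviated; the image change and the appended flag are the only actual modifications):

          spec:
            containers:
            - name: metrics-server
              image: docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.0
              args:
              # ...keep the args already present in the upstream manifest, and append:
              - --kubelet-insecure-tls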

      kubectl apply -f components.yaml

      Once the metrics-server pod has started normally, wait a while and then kubectl top can show metrics for the nodes and pods:

      kubectl top node --use-protocol-buffers=true
      NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
      node1   219m         5%     3013Mi          39%
      node2   102m         2%     1576Mi          20%
      node3   110m         2%     1696Mi          21%

      kubectl top pod -n kube-system --use-protocol-buffers=true
      NAME                                    CPU(cores)   MEMORY(bytes)
      coredns-59d64cd4d4-9mclj                4m           17Mi
      coredns-59d64cd4d4-fj7xr                4m           17Mi
      etcd-node1                              25m          154Mi
      kube-apiserver-node1                    80m          465Mi
      kube-controller-manager-node1           17m          61Mi
      kube-proxy-hhlhc                        1m           21Mi
      kube-proxy-nrhq7                        1m           19Mi
      kube-proxy-phmrw                        1m           17Mi
      kube-scheduler-node1                    4m           24Mi
      kubernetes-dashboard-5cb95fd47f-6lfnm   3m           36Mi
      metrics-server-9ddcc8ddf-jvlzs          5m           21Mi

      Next, deploy the Kubernetes dashboard with Helm; add the chart repo:

      helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
      helm repo update

      View the chart's configurable values:

      helm show values kubernetes-dashboard/kubernetes-dashboard

      Customize values.yaml as follows:

      image:
        repository: kubernetesui/dashboard
        tag: v2.3.1
      ingress:
        enabled: true
        annotations:
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        hosts:
        - k8s.example.com
        tls:
          - secretName: example-com-tls-secret
            hosts:
            - k8s.example.com
      metricsScraper:
        enabled: true

      First create the secret that holds the TLS certificate for k8s.example.com:

      kubectl create secret tls example-com-tls-secret \
        --cert=cert.pem \
        --key=key.pem \
        -n kube-system
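
      If no real certificate for k8s.example.com is at hand, the cert.pem and key.pem used above can be generated as a self-signed pair for testing, for example:

      # Self-signed certificate for testing only; file names match the command above
      openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout key.pem -out cert.pem \
        -subj "/CN=k8s.example.com"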

      Deploy the dashboard with Helm:

      helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
      -n kube-system \
      -f values.yaml

      The deployment above fails with an error:

      Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"

      This is because the networking.k8s.io/v1beta1 Ingress API has been removed in Kubernetes 1.22, while the helm chart at https://kubernetes.github.io/dashboard/ has not been updated yet and still uses the old API. So modify values.yaml again and, for now, skip creating the dashboard's Ingress through Helm:

      image:
        repository: kubernetesui/dashboard
        tag: v2.3.1
      ingress:
        enabled: false
        annotations:
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        hosts:
        - k8s.example.com
        tls:
          - secretName: example-com-tls-secret
            hosts:
            - k8s.example.com
      metricsScraper:
        enabled: true

      Deploy the dashboard with Helm again:

      helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
      -n kube-system \
      -f values.yaml

      This time the deployment succeeds. Next, write a YAML manifest by hand to create the dashboard's Ingress:

      kubectl apply -f - <<EOF
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: kubernetes-dashboard
        namespace: kube-system
        annotations:
          nginx.ingress.kubernetes.io/ssl-redirect: "false"
          nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      spec:
        ingressClassName: nginx
        tls:
        - hosts:
          - k8s.example.com
          secretName: example-com-tls-secret
        rules:
        - host: k8s.example.com
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: kubernetes-dashboard
                  port:
                    number: 443
      EOF

      Create an administrator service account:

      kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system
      kubectl create clusterrolebinding kube-dashboard-admin-sa \
      --clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa


      Get the token the cluster administrator needs to log in to the dashboard:

      kubectl -n kube-system get secret | grep kube-dashboard-admin-sa-token
      kube-dashboard-admin-sa-token-rcwlb              kubernetes.io/service-account-token   3      68s

      kubectl describe -n kube-system secret/kube-dashboard-admin-sa-token-rcwlb
      Name:         kube-dashboard-admin-sa-token-rcwlb
      Namespace:    kube-system
      Labels:       <none>
      Annotations:  kubernetes.io/service-account.name: kube-dashboard-admin-sa
                    kubernetes.io/service-account.uid: fcdf27f6-f6f9-4f76-b64e-edc91fb1479b

      Type:  kubernetes.io/service-account-token

      Data
      ====
      namespace:  11 bytes
      token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkYxWTd5aDdzYWsyeWJVMFliUUhJMXI4YWtMZFd4dGFDT1N4eEZoam9HLUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYS10b2tlbi1yY3dsYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZjZGYyN2Y2LWY2ZjktNGY3Ni1iNjRlLWVkYzkxZmIxNDc5YiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlLWRhc2hib2FyZC1hZG1pbi1zYSJ9.R3l19_Nal4B2EktKFSJ7CgOqAngG_MTgzHRRjWdREN7dLALyfiRXYIgZQ90hxM-a9z2sPXBzfJno4OGP4fPX33D8h_4fgxfpVLjKqjdlZ_HAks_6sV9PBzDNXb_loNW8ECfsleDgn6CZin8Vx1w7sgkoEIKq0H-iZ8V9pRV0fTuOZcB-70pV_JX6H6WBEOgRIAZswhAoyUMvH1qNl47J5xBNwKRgcqP57NCIODo6FiClxfY3MWo2vz44R5wYCuBJJ70p6aBWixjDSxnp5u9mUP0zMF_igICl_OfgKuPyaeuIL83U8dS5ovEwPPGzX5mHUgaPH7JLZmKRNXJqLhTweA
      ca.crt:     1066 bytes

      Use the token above to log in to the Kubernetes dashboard.



      References

    • Installing kubeadm
    • Creating a cluster with kubeadm
    • https://github.com/containerd/containerd
    • https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
    • https://docs.projectcalico.org/