
Kubernetes 1.18 Cluster Installation

7,940 words · about a 16-minute read

2021-04-13 16:24

Preparing the environment

        本次實(shí)驗(yàn)以k8s 1.8版本為例:

• kubelet-1.18.0-0: runs on every node in the cluster; starts Pods and containers

• kubeadm-1.18.0-0: the tool used to bootstrap the cluster (kubeadm init / kubeadm join)

• kubectl-1.18.0-0: the command-line tool for talking to the cluster

• docker-19.03.13-ce: k8s uses docker to pull images and start services

Prepare two Linux servers (CentOS):

• Control node: 10.0.0.1, with docker, kubelet, kubectl, and kubeadm installed

• Worker node: 10.0.0.2, with docker, kubelet, and kubeadm installed
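The article refers to the two machines by bare IPs; if you prefer names, an /etc/hosts entry on each machine does the job. A minimal sketch — the hostnames "master" and "node1" are assumptions, chosen only to match the node names that appear in the `kubectl get nodes` output later in this article:

```shell
# Hostnames "master" and "node1" are illustrative assumptions. On a real node
# you would tee these lines into /etc/hosts with sudo; a scratch file is used
# here so the sketch is safe to run anywhere.
hosts_file=/tmp/hosts.k8s        # stand-in for /etc/hosts
cat >> "$hosts_file" <<'EOF'
10.0.0.1 master
10.0.0.2 node1
EOF
grep -E 'master|node1' "$hosts_file"
```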


Steps on the control node

Install docker

sudo yum install docker
sudo systemctl start docker

Configure the Kubernetes package repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://packages.cloud.google.com/yum/doc/yum-key.gpg http://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Disable SELinux so that containers can access the host filesystem; this keeps the Pod network working correctly later on

setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Since Kubernetes 1.8, swap must be disabled on each node; with the default configuration, kubelet will not start otherwise

        swapoff -a
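Note that `swapoff -a` only disables swap until the next reboot. A common companion step (not in the original article) is to comment the swap entry out of /etc/fstab so the setting survives reboots. A minimal sketch, run against a sample copy for safety; on a real node, point sed at /etc/fstab itself:

```shell
# Sample fstab for illustration — on a real node operate on /etc/fstab (with sudo).
printf '%s\n' '/dev/sda1 /    ext4 defaults 0 1' \
              '/dev/sda2 swap swap defaults 0 0' > /tmp/fstab.sample
# Comment out any active swap entry so swap stays off after a reboot.
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' /tmp/fstab.sample
cat /tmp/fstab.sample
```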

Install kubeadm, kubelet, and kubectl

        sudo yum install -y kubeadm-1.18.0-0 kubelet-1.18.0-0 kubectl-1.18.0-0  --disableexcludes=kubernetes

Check whether kubeadm was installed successfully

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Start kubelet via systemctl

# start kubelet
sudo systemctl start kubelet
# this may log: Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
# reload all changed unit files
sudo systemctl daemon-reload
# enable kubelet at boot
sudo systemctl enable kubelet

kubelet must be installed successfully — the following command has to run without errors

$ kubelet --version

Initialize the cluster with kubeadm

# reset any previous kubeadm state
sudo kubeadm reset
# --pod-network-cidr: the Pod network range; it must match the network used by the
#   flannel add-on deployed later (flannel defaults to 10.244.0.0/16)
# --service-cidr: the virtual IP range assigned to Services
# --feature-gates: not needed here — as of Kubernetes 1.18, installing kube-dns
#   through kubeadm has been deprecated
sudo kubeadm init --kubernetes-version=v1.18.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16

After initialization, the log shows the command that the other machines must run to join the cluster

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.1:6443 --token wzghpa.mwvdt4ho0fn936dg \
    --discovery-token-ca-cert-hash sha256:aa101787f7398ac95755b1e61aa56c69cbf7205d5035184622ba8cad57abf3e1

On the control node, set up the kube config files and export KUBECONFIG

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config

After running the commands above, verify the install with kubectl version. If it prints both Client Version and Server Version information, the cluster install succeeded — although for now the cluster contains only the control node

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Inspect the cluster nodes

# list the cluster nodes
$ kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   14h   v1.18.0

# list the namespaces
$ kubectl get ns
NAME              STATUS   AGE
default           Active   15h
kube-node-lease   Active   15h
kube-public       Active   15h
kube-system       Active   15h

Once kubectl is working, deploy the flannel network add-on to the cluster; when the apply finishes you can see the objects it created

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check in the kube-system namespace whether the kube-flannel pod started successfully

$ kubectl get pod -n kube-system | grep kube-flannel
kube-flannel-ds-hgkk6                 0/1     Init:ImagePullBackOff   0          15m

kube-flannel did not start because its image could not be pulled; the matching kube-flannel image version has to be downloaded manually

        $ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

With kube-flannel.yml downloaded to the server, check which flannel version the manifest uses

$ cat kube-flannel.yml
containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.13.1-rc2

The manifest shows that kube-flannel uses the image quay.io/coreos/flannel:v0.13.1-rc2; pull it with docker

$ sudo docker pull quay.io/coreos/flannel:v0.13.1-rc2
$ sudo docker image ls | grep flannel
quay.io/coreos/flannel    v0.13.1-rc2         dee1cac4dd20        7 weeks ago         64.3MB
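The manual pull above handles the one flannel image. A small sketch generalizes the idea — list every image a manifest references and pull each one — assuming only that the manifest uses the usual `image: <name>` layout (the excerpt below just reproduces the snippet shown earlier):

```shell
# Re-create the manifest excerpt from above, purely for illustration.
cat > /tmp/kube-flannel-excerpt.yml <<'EOF'
containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.13.1-rc2
EOF
# Extract every unique image reference from the manifest...
images=$(grep -oE 'image:[[:space:]]*[^[:space:]]+' /tmp/kube-flannel-excerpt.yml \
           | awk '{print $2}' | sort -u)
echo "$images"
# ...then pull each one (commented out: needs docker and network access on the node)
# for img in $images; do sudo docker pull "$img"; done
```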

Once the flannel image has been pulled, the kube-flannel pod starts successfully on its own

$ kubectl get pod -n kube-system | grep kube-flannel
kube-flannel-ds-hgkk6                 1/1     Running   0          6m

Adding the worker node

Install docker

sudo yum install docker
sudo systemctl start docker

Configure the Kubernetes package repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://packages.cloud.google.com/yum/doc/yum-key.gpg http://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Disable SELinux so that containers can access the host filesystem; this keeps the Pod network working correctly later on

setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Since Kubernetes 1.8, swap must be disabled on each node; with the default configuration, kubelet will not start otherwise

        swapoff -a

Install kubeadm and kubelet

        sudo yum install -y kubeadm-1.18.0-0 kubelet-1.18.0-0 --disableexcludes=kubernetes

Start kubelet via systemctl

# start kubelet
sudo systemctl start kubelet
# this may log: Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
# reload all changed unit files
sudo systemctl daemon-reload
# enable kubelet at boot
sudo systemctl enable kubelet

Next, join the worker server to the k8s cluster by running this command on the worker node

$ sudo kubeadm join 10.0.0.1:6443 --token wzghpa.mwvdt4ho0fn936dg \
    --discovery-token-ca-cert-hash sha256:aa101787f7398ac95755b1e61aa56c69cbf7205d5035184622ba8cad57abf3e1

If the command fails with the following error

[preflight] Some fatal errors occurred:
    /var/lib/kubelet is not empty
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

then reset kubeadm; the reset deletes the leftover files under /var/lib/kubelet

        $ sudo kubeadm reset

After resetting, rerun the kubeadm join command to add the worker node to the cluster

$ sudo kubeadm join 10.0.0.1:6443 --token wzghpa.mwvdt4ho0fn936dg \
    --discovery-token-ca-cert-hash sha256:aa101787f7398ac95755b1e61aa56c69cbf7205d5035184622ba8cad57abf3e1
# log output
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.0.0.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.1:6443"
[discovery] Requesting info from "https://10.0.0.1:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.1:6443"
[discovery] Successfully established connection with API Server "10.0.0.1:6443"
[bootstrap] Detected server version: v1.18.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join

At this point the worker node has joined the cluster. Finally, run this on the control node to see the cluster's nodes

$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node1    Ready    <none>   17s   v1.18.0
master   Ready    master   19h   v1.18.0

If you need to add more servers to the cluster later, just repeat the worker-node steps. Note that the bootstrap token printed by kubeadm init expires after 24 hours by default; run `kubeadm token create --print-join-command` on the control node to get a fresh join command.
