One-Click Deployment of a Kubernetes High-Availability Cluster

The Kubernetes version installed here is 1.5.1 and the Docker version is 1.12.3. The three master nodes are made highly available through keepalived; the topology follows the official documentation: three masters (master01, master02, master03) and four nodes (node01 to node04), all running CentOS 7.

Here is the content:

# vim k8s-deploy.sh
#!/bin/bash

set -x
set -e

HTTP_SERVER=3:8000
KUBE_HA=true

KUBE_REPO_PREFIX=gcr.io/google_containers
KUBE_ETCD_IMAGE=quay.io/coreos/etcd:v3.0.15

root=$(id -u)
if [ "$root" -ne 0 ]; then
    echo "must run as root"
    exit 1
fi

kube::install_docker()
{
    set +e
    docker info > /dev/null 2>&1
    i=$?
    set -e

    if [ $i -ne 0 ]; then
        curl -L http://$HTTP_SERVER/rpms/docker.tar.gz > /tmp/docker.tar.gz
        tar zxf /tmp/docker.tar.gz -C /tmp
        yum localinstall -y /tmp/docker/*.rpm
        systemctl enable docker.service && systemctl start docker.service
        kube::config_docker
    fi
    echo "docker has been installed"
    rm -rf /tmp/docker /tmp/docker.tar.gz
}

kube::config_docker()
{
    setenforce 0 > /dev/null 2>&1 && sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

    sysctl -w net.bridge.bridge-nf-call-iptables=1
    sysctl -w net.bridge.bridge-nf-call-ip6tables=1
    cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

    mkdir -p /etc/systemd/system/docker.service.d
    cat <<EOF > /etc/systemd/system/docker.service.d/10-docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -s overlay --selinux-enabled=false
EOF

    systemctl daemon-reload && systemctl restart docker.service
}

kube::load_images()
{
    mkdir -p /tmp/k8s

    images=(
        kube-apiserver-amd64_v1.5.1
        kube-controller-manager-amd64_v1.5.1
        kube-scheduler-amd64_v1.5.1
        kube-proxy-amd64_v1.5.1
        pause-amd64_3.0
        kube-discovery-amd64_1.0
        kubedns-amd64_1.9
        exechealthz-amd64_1.2
        kube-dnsmasq-amd64_1.4
        dnsmasq-metrics-amd64_1.0
        etcd_v3.0.15
        flannel-amd64_v0.7.0
    )

    for i in "${!images[@]}"; do
        ret=$(docker images | awk 'NR!=1{print $1"_"$2}' | grep "$KUBE_REPO_PREFIX/${images[$i]}" | wc -l)
        if [ $ret -lt 1 ]; then
            curl -L http://$HTTP_SERVER/images/${images[$i]}.tar > /tmp/k8s/${images[$i]}.tar
            docker load -i /tmp/k8s/${images[$i]}.tar
        fi
    done
    rm /tmp/k8s* -rf
}

kube::install_bin()
{
    set +e
    which kubeadm > /dev/null 2>&1
    i=$?
    set -e

    if [ $i -ne 0 ]; then
        curl -L http://$HTTP_SERVER/rpms/k8s.tar.gz > /tmp/k8s.tar.gz
        tar zxf /tmp/k8s.tar.gz -C /tmp
        yum localinstall -y /tmp/k8s/*.rpm
        rm -rf /tmp/k8s*
        systemctl enable kubelet.service && systemctl start kubelet.service && rm -rf /etc/kubernetes
    fi
}

kube::wait_apiserver()
{
    until curl; do sleep 1; done
}

kube::disable_static_pod()
{
    # remove the warning log in kubelet
    sed -i 's| --pod-manifest-path=/etc/kubernetes/manifests||g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl daemon-reload && systemctl restart kubelet.service
}
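The flag-stripping sed above can be rehearsed on a scratch file before it is pointed at the real systemd drop-in. A minimal sketch: the drop-in path and flag text come from the script, but the sample ExecStart line is invented for illustration.

```shell
# Rehearse the sed from kube::disable_static_pod on a scratch file instead of
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
tmp=$(mktemp)
# Invented sample ExecStart line carrying the flag the script removes.
cat <<'EOF' > "$tmp"
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
EOF

# Same substitution, written with | as the sed delimiter so the slashes in
# the path need no escaping.
sed -i 's| --pod-manifest-path=/etc/kubernetes/manifests||g' "$tmp"

grep -q 'pod-manifest-path' "$tmp" || echo "flag removed"
rm -f "$tmp"
```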
kube::get_env()
{
    HA_STATE=$1
    [ $HA_STATE = "MASTER" ] && HA_PRIORITY=200 || HA_PRIORITY=`expr 200 - $RANDOM / 1000 + 1`
    KUBE_VIP=$(echo $2 | awk -F= '{print $2}')
    VIP_PREFIX=$(echo $KUBE_VIP | cut -d . -f 1,2,3)
    # dhcp and static addresses are picked up differently
    VIP_INTERFACE=$(ip addr show | grep $VIP_PREFIX | awk -F 'dynamic' '{print $2}' | head -1)
    [ -z "$VIP_INTERFACE" ] && VIP_INTERFACE=$(ip addr show | grep $VIP_PREFIX | awk -F 'global' '{print $2}' | head -1)

    LOCAL_IP=$(ip addr show | grep $VIP_PREFIX | awk -F / '{print $1}' | awk -F ' ' '{print $2}' | head -1)
    MASTER_NODES=$(echo $3 | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}')
    MASTER_NODES_NO_LOCAL_IP=$(echo "$MASTER_NODES" | sed -e "s/$LOCAL_IP//g")
}
kube::install_keepalived()
{
    kube::get_env $@
    set +e
    which keepalived > /dev/null 2>&1
    i=$?
    set -e

    if [ $i -ne 0 ]; then
        ip addr add $KUBE_VIP/32 dev $VIP_INTERFACE
        curl -L http://$HTTP_SERVER/rpms/keepalived.tar.gz > /tmp/keepalived.tar.gz
        tar zxf /tmp/keepalived.tar.gz -C /tmp
        yum localinstall -y /tmp/keepalived/*.rpm
        rm -rf /tmp/keepalived*
        systemctl enable keepalived.service && systemctl start keepalived.service
        kube::config_keepalived
    fi
}

kube::config_keepalived()
{
    echo "gen keepalived configuration"
    cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state $HA_STATE
    interface $VIP_INTERFACE
    virtual_router_id 61
    priority $HA_PRIORITY
    advert_int 1
    mcast_src_ip $LOCAL_IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 378378
    }
    unicast_peer {
        $MASTER_NODES_NO_LOCAL_IP
    }
    virtual_ipaddress {
        $KUBE_VIP
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

    modprobe ip_vs
    systemctl daemon-reload && systemctl restart keepalived.service
}

kube::save_master_ip()
{
    set +e
    # the etcd endpoints should really be taken from the --external-etcd-endpoints value in $2; the default :2379 is used here
    [ $KUBE_HA = true ] && etcdctl mk ha_master $LOCAL_IP
    set -e
}

kube::copy_master_config()
{
    local master_ip=$(etcdctl get ha_master)
    mkdir -p /etc/kubernetes
    scp -r root@$master_ip:/etc/kubernetes/* /etc/kubernetes/
    systemctl start kubelet
}

kube::set_label()
{
    until kubectl get no | grep `hostname`; do sleep 1; done
    kubectl label node `hostname` kubeadm.alpha.kubernetes.io/role=master
}

kube::master_up()
{
    shift
    kube::install_docker
    kube::load_images
    kube::install_bin

    [ $KUBE_HA = true ] && kube::install_keepalived "MASTER" $@
    # save master_ip; master02 and master03 use it to copy the configuration
    kube::save_master_ip

    # the --pod-network-cidr flag must be passed here, otherwise the flannel network breaks later
    kubeadm init --use-kubernetes-version=v1.5.1 --pod-network-cidr=/16 $@

    # make the master node schedulable
    # kubectl taint nodes --all dedicated-

    echo -e "\033[32m note down the token, nodes need it to join the cluster! \033[0m"

    # install flannel network
    kubectl apply -f http://$HTTP_SERVER/network/kube-flannel.yaml --namespace=kube-system

    # show pods
    kubectl get pod --all-namespaces
}

kube::replica_up()
{
    shift
    kube::install_docker
    kube::load_images
    kube::install_bin
    kube::install_keepalived "BACKUP" $@
    kube::copy_master_config
    kube::set_label
}

kube::node_up()
{
    kube::install_docker
    kube::load_images
    kube::install_bin
    kube::disable_static_pod
    kubeadm join $@
}

kube::tear_down()
{
    systemctl stop kubelet.service
    docker ps -aq | xargs -I '{}' docker stop '{}'
    docker ps -aq | xargs -I '{}' docker rm '{}'
    df | grep /var/lib/kubelet | awk '{print $6}' | xargs -I '{}' umount '{}'
    rm -rf /var/lib/kubelet && rm -rf /etc/kubernetes && rm -rf /var/lib/etcd
    yum remove -y kubectl kubeadm kubelet
    if [ $KUBE_HA = true ]; then
        yum remove -y keepalived
        rm -rf /etc/keepalived/keepalived.conf
    fi
    rm -rf
    ip link del cni0
}

main()
{
    case $1 in
    "m" | "master" )
        kube::master_up $@
        ;;
    "r" | "replica" )
        kube::replica_up $@
        ;;
    "j" | "join" )
        shift
        kube::node_up $@
        ;;
    "d" | "down" )
        kube::tear_down
        ;;
    *)
        echo "usage: $0 m[master] | r[replica] | j[join] token | d[down]"
        echo "$0 master  to setup master"
        echo "$0 replica to setup replica master"
        echo "$0 join    to join master with token"
        echo "be careful: $0 down tears everything down, including all data!"
        echo "unknown command $0 $@"
        ;;
    esac
}

main $@

Usage

1. On a separate server, start an http-server to hold the images, rpm packages and other files; the deployment pulls everything from there.

# nohup python -m SimpleHTTPServer &
Serving HTTP on port 8000.
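As an aside, the already-loaded check inside kube::load_images (docker images | awk | grep | wc -l) can be tried without a Docker daemon by substituting canned output. The fake_docker_images function and its sample rows are invented for illustration:

```shell
# Stand-in for `docker images`: a header row plus REPOSITORY and TAG columns.
fake_docker_images() {
    cat <<'EOF'
REPOSITORY                                  TAG       IMAGE ID   CREATED       SIZE
gcr.io/google_containers/kube-proxy-amd64   v1.5.1    0000000    6 weeks ago   173 MB
quay.io/coreos/etcd                         v3.0.15   1111111    8 weeks ago   40 MB
EOF
}

KUBE_REPO_PREFIX=gcr.io/google_containers

# Same pipeline as the script: drop the header, glue repository and tag
# together with "_", then count lines matching the wanted image name.
ret=$(fake_docker_images | awk 'NR!=1{print $1"_"$2}' | grep "$KUBE_REPO_PREFIX/kube-proxy-amd64_v1.5.1" | wc -l)
echo "$ret"   # 1 -> already loaded, so the curl/docker-load step is skipped
```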
This is the directory layout on the http-server:

# tree .
.
├── etcd
│   ├── deploy-etcd.sh
│   └── temp-etcd
│       ├── etcd
│       └── etcdctl
├── images
│   ├── dnsmasq-metrics-amd64_1.0.tar
│   ├── etcd_v3.0.15.tar
│   ├── exechealthz-amd64_1.2.tar
│   ├── flannel-git_0.7.0.tar
│   ├── kube-apiserver-amd64_v1.5.1.tar
│   ├── kube-controller-manager-amd64_v1.5.1.tar
│   ├── kube-discovery-amd64_1.0.tar
│   ├── kubedns-amd64_1.9.tar
│   ├── kube-dnsmasq-amd64_1.4.tar
│   ├── kube-proxy-amd64_v1.5.1.tar
│   ├── kubernetes-dashboard-amd64.tar
│   ├── kube-scheduler-amd64_v1.5.1.tar
│   └── pause-amd64_3.0.tar
├── k8s-deploy.sh
├── network
│   └── kube-flannel.yaml
├── nohup.out
├── README.md
└── rpms
    ├── docker.tar.gz
    ├── haproxy.tar.gz
    ├── k8s.tar.gz
    └── keepalived.tar.gz

2. Deploy master01: run the following on master01.

# curl -L ... | bash -s master --api-advertise-addresses=... --external-etcd-endpoints=...:2379,...
Here 3:8000 is the http-server address, --api-advertise-addresses is the VIP address, and --external-etcd-endpoints lists the etcd cluster endpoints. Note down the token printed in the output; nodes need this token to join the cluster.

3. Deploy master02 and master03. First set up ssh trust between each of these two nodes and master01, then run the following on master02 and master03 respectively. Once finished, they automatically form a redundant group with master01.

# curl -L ... | bash -s replica --api-advertise-addresses=... --external-etcd-endpoints=...:2379,...
After the steps above, the master nodes are highly available.

4. Deploy the nodes: simply run the following on each node.

# curl -L ... | bash -s join --token=3635d0.6d0caa140b219bc0 ...

The token here is the one printed when the deployment of master01 finished. While joining the cluster a "refused" error may occur; scaling kube-discovery to three replicas fixes it.

# kubectl scale deployment --replicas=3 kube-discovery -n kube-system
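The token printed by kubeadm 1.5 is two dot-separated hex groups, as in the example above; a quick format check before running the join can save a round trip. The regex below is my own sketch, not something the script performs:

```shell
# Sanity-check a kubeadm token before joining: six hex characters, a dot,
# then sixteen hex characters (matching the token shown above).
TOKEN=3635d0.6d0caa140b219bc0

if echo "$TOKEN" | grep -Eq '^[a-f0-9]{6}\.[a-f0-9]{16}$'; then
    echo "token looks well-formed"
else
    echo "token is malformed" >&2
    exit 1
fi
```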
5. When this is done, you have a complete highly available cluster.

# kubectl get node
NAME             STATUS          AGE
kube-node02      Ready           22h
kuber-master01   Ready,master    23h
kuber-master02   Ready,master    23h
kuber-master03   Ready,master    23h
kuber-node01     Ready           23h
kuber-node03     Ready           23h
kuber-node04     Ready           23h

# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP    NODE
kube-system   dummy-2088944543-191tw                    1/1     Running   0          1d    ...   kuber-master01
kube-system   kube-apiserver-kuber-master01             1/1     Running   0          1d    ...   kuber-master01
kube-system   kube-apiserver-kuber-master02             1/1     Running   0          23h   ...   kuber-master02
kube-system   kube-apiserver-kuber-master03             1/1     Running   0          23h   ...   kuber-master03
kube-system   kube-controller-manager-kuber-master01    1/1     Running   0          1d    ...   kuber-master01
kube-system   kube-controller-manager-kuber-master02    1/1     Running   0          23h   ...   kuber-master02
kube-system   kube-controller-manager-kuber-master03    1/1     Running   0          23h   ...   kuber-master03
kube-system   kube-discovery-1769846148-53vs5           1/1     Running   0          1d    ...   kuber-master01
kube-system   kube-discovery-1769846148-m18d0           1/1     Running   0          23h   ...   kuber-master03
kube-system   kube-discovery-1769846148-tf0m9           1/1     Running   0          23h   ...   kuber-master02
kube-system   kube-dns-2924299975-80fnn                 4/4     Running   0          1d    ...   kuber-master01
kube-system   kube-flannel-ds-51db4                     2/2     Running   0          23h   ...   kuber-master01
kube-system   kube-flannel-ds-gsn3m                     2/2     Running   4          23h   ...   kuber-node01
kube-system   kube-flannel-ds-httmj                     2/2     Running   0          23h   ...   kuber-master02
...
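The text processing inside kube::get_env can be replayed offline to see what each variable ends up holding. All addresses below are invented examples:

```shell
# Replay the pure text-processing steps of kube::get_env with sample values.
# The addresses are illustrative only.
KUBE_VIP=$(echo "--api-advertise-addresses=192.168.1.100" | awk -F= '{print $2}')
VIP_PREFIX=$(echo "$KUBE_VIP" | cut -d . -f 1,2,3)

# Extract every IPv4 address from the --external-etcd-endpoints argument.
ENDPOINTS="--external-etcd-endpoints=http://192.168.1.101:2379,http://192.168.1.102:2379"
MASTER_NODES=$(echo "$ENDPOINTS" | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}')

# Remove this node's own address to form the keepalived unicast peer list.
LOCAL_IP=192.168.1.101
MASTER_NODES_NO_LOCAL_IP=$(echo "$MASTER_NODES" | sed -e "s/$LOCAL_IP//g")

# BACKUP priority as the script computes it: expr yields a value in 169..201,
# since $RANDOM is 0..32767 and division binds tighter than subtraction.
HA_PRIORITY=$(expr 200 - ${RANDOM:-0} / 1000 + 1)

echo "$KUBE_VIP"      # 192.168.1.100
echo "$VIP_PREFIX"    # 192.168.1
echo "$HA_PRIORITY"
```

Because the BACKUP priority can reach 201 when $RANDOM is small, the nopreempt setting in the generated keepalived.conf is what actually keeps a recovered node from stealing the VIP back.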