部署一套k8s高可用集群二进制_第1页
部署一套k8s高可用集群二进制_第2页
部署一套k8s高可用集群二进制_第3页
部署一套k8s高可用集群二进制_第4页
部署一套k8s高可用集群二进制_第5页
已阅读5页,还剩44页未读 继续免费阅读

下载本文档

版权说明:本文档由用户提供并上传,收益归属内容提供方,若内容存在侵权,请进行举报或认领

文档简介

《部署一套完整的企业级K8s集群v1.20,二进制方作者信( DevOps实战学说该文档有导航窗格,方便阅读,如果左侧没有显示,请检查word是否启用请注明作者,不道德行为最后更新时2021-04-一、前置知识生产环境部署K8s集群的两种Kubeadm是一个K8s部署工具,提供kubeadminitkubeadmjoin,用于快速部署Kubernetes群。二进制从版的二进制包,手动部署每个组件,组成Kubernetes集群小结:Kubeadm低部署门槛,但了很多细节,遇到问题很难排查。如果想更容易可控,推荐使用二进制包部署Kubernetes集群,虽然手动部署麻烦点,期间可以学习很多工作原理,也利于后期。建议最小硬件配置:2核CPU、2G内存、30G硬需要提前对应镜像并导入节点软件环境软版操作系CentOS7.x_x64容器引DockerCEKubernetes服务器整体规划角组k8s-kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,k8s-kube-apiserver,kube-controller-managerkube-scheduler,kubelet,kube-proxyk8s-kubelet,kube-k8s-kubelet,kube-负载均衡须知:考虑到有些朋友电脑配置较低,开四台机器会跑不动,所以搭建这套K8s可用集群分两部分实施,先部署一套单Master构(3),再扩容为多Master构(46),顺便再熟悉下Master容流程。Master架构图Master服务器规划角组k8s-kube-apiserver,kube-controller-managerkube-k8s-kubelet,kube-k8s-kubelet,kube-操作系统初始化配systemctlstopfirewalldsystemctldisablefirewalldsed-i's/enforcing/disabled/' setenforce swapoff- sed-ri's/.*swap.*/#&/'hostnamectlset-hostnamemasterhostscat/etc/hostsEOFk8s-k8s-IPv4iptablescat>/etc/sysctl.d/k8s.conf<<EOFnet.bridge.bridge-nf-call-ip6tables1net.bridge.bridge-nf-call-iptables=1sysctl-- yuminstallntpdate-y二、部Etcd集Etcd是一个分布式键值系统,Kubernetes使用Etcd进行数据,所以先准备一个Etcd数据库,为解决Etcd单点故障,应采用集群方式部署,这里使用3台组建集群,可1台机器故障,当然,你也可以使用5台组建集群,可2节点名etcd-etcd-etcd-注:为了节省机器,这里与K8s节点机器复用。也可以独立于k8s集群之外部署,只要apiserver连接到就行。准备cfssl生成工找任意一台服务器操作,这里用Master节点od+xcfssl_linux-amd64cfssljson_linux-amd64cfssl-certinfo_linux-amd64mvcfssl_linux-amd64/usr/local/bin/cfsslmvcfssljson_linux-amd64mvcfssl-certinfo_linux-amd64/usr/bin/cfssl- mkdir-pmkdir-pcd自签cat>ca-config.json<<{"signing":"default":{"expiry":"87600h""profiles":"www":"expiry":"87600h","usages":["keyencipherment","serverauth","clientauth"]}}}}cat>ca-csr.json<<{"CN":"etcd"key":"algo":"rsa","size":2048"names":{"C":"L":"ST":}]]生成cfsslgencert-initcaca-csr.json|cfssljson-barecacfsslgencert-initcaca-csr.json|cfssljson-bareca会生成ca.pemca-key.pem文件使用自签CAEtcdHTTPS创建申请文件cat>server-csr.json<<cat>server-csr.json<<{"CN":"etcd","hosts":["key":"algo":"rsa","size":2048"names":{"C":"L":"ST":}]注:上述文件hosts段中IP所有etcd点的集群内部通信IP,一个都不能少!为了方便后期扩容可以多写几个预留的IP。生成cfsslcfsslgencert-ca=ca.pem-ca-key=ca-key.pem-config=ca-config.json-profile=wwwserver-csr.json|cfssljson-bareserver会生成server.pem和server-key.pem文件从二进制文 Etcd集以下在节点1上操作,为简化操作,待会将节点1生成的所有文件拷贝到节点2节点创建工 并解压二进制mkdir/opt/etcd/{bin,cfg,ssl}-mkdir/opt/etcd/{bin,cfg,ssl}-tarzxvfetcd-v3.4.9-linux-mvetcd-v3.4.9-linux-amd64/{etcd,etcdctl}etcd配置文cat>/opt/etcd/cfg/etcd.conf<<cat>/opt/etcd/cfg/etcd.conf<<"""""ETCD_NAME:节点名称,集群中唯ETCD_DATA_DIR:数ETCD_LISTEN_PEER_URLS:集群通信地ETCD_LISTEN_CLIENT_URLS:客户端地ETCD_INITIAL_ADVERTISE_PEERURLS:集群通告地ETCD_ADVERTISE_CLIENT_URLS:客户端通告地ETCD_INITIAL_CLUSTER:集群节点地ETCD_INITIALCLUSTER_TOKEN:集群ETCD_INITIALCLUSTER_STATE:加入集群的当前状态,new是新集群,existingsystemdcatcat>/usr/lib/systemd/system/etcd.service<<EOFDescription=EtcdServerWants=network-ExecStart=/opt/etcd/bin/etcd\--cert-file=/opt/etcd/ssl/server.pem--key-file=/opt/etcd/ssl/server-key.pem--peer-cert-file=/opt/etcd/ssl/server.pem--peer-key-file=/opt/etcd/ssl/server-key.pem--trusted-ca-file=/opt/etcd/ssl/ca.pem--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem拷贝刚才生把刚才生成的拷贝到配置文件中的路径cp~/TLS/etcd/ca*pem~/TLS/etcd/server*pemcp~/TLS/etcd/ca*pem~/TLS/etcd/server*pem启动并设置开机启systemctlsystemctldaemon-reloadsystemctlstartetcdsystemctlenableetcd将上面节点1所有生成的文件拷贝到节点2和节scp-r/opt/etcd/scp-r/opt/etcd/scp/usr/lib/systemd/system/etcd.serviceroot@2:/usr/lib/systemd/system/scp-r/opt/etcd/root@3:/opt/scp/usr/lib/systemd/system/etcd.service然后在节点2和节点3分别修改etcd.conf置文件中的节点名称和当前服务vivi 2etcd-2,3etcd-3 ETCD_INITIAL_CLUSTER_TOKEN="etcd-"最后启动etcd并设置开机启动,同上查看集群状ETCDCTL_API=3/opt/etcd/bin/etcdctl--cacert=/opt/etcd/ssl/ca.pem-ETCDCTL_API=3/opt/etcd/bin/etcdctl--cacert=/opt/etcd/ssl/ca.pem-cert=/opt/etcd/ssl/server.pem--key=/opt/etcd/ssl/server-key.pem--73:2379"endpointhealth--write-.+|+|||++|HEALTH++++|ERROR+++|||+true|10.301506mstrue|12.87467mstrue|13.225954ms|||+++如果输出上面信息,就说明集群部署成功如果有问题第一步先看日志:/var/log/messagejournalctlu三、安这里使用Docker作为容器引擎,也可以换成别的,例如 以下在所有节点操作。这里采用二进制安装,用yum安装也一样解压二进制tartarzxvfdocker-19.03.9.tgzmvdocker/*/usr/binsystemdcatcat>/usr/lib/systemd/system/docker.service<<EOFDescription=DockerApplicationContainerEngineAfter=network-online.targetfirewalld.serviceExecReload=/bin/kill-sHUP$MAINPID创建配置文mkdirmkdircat>/etc/docker/daemon.json<<{"registry-mirrors":}启动并设置开机启systemctlsystemctldaemon-reloadsystemctlstartdockersystemctlenabledocker四、部Master如果你在学习中遇到问题或者文档有误可联系 :kube-apiserver11书颁发机构cdcat>ca-config.json<<{"signing":"default":{"expiry":"87600h""profiles":{"kubernetes":{"expiry":"87600h","usages":["keyencipherment","serverauth","clientauth"]}}}}cat>ca-csr.json<<{"CN":"kubernetes","key":{"algo":"rsa","size":2048"names":{]

"C":"L":"ST":"O":"OU":}生成cfsslgencert-initcaca-csr.json|cfssljson-barecacfsslgencert-initcaca-csr.json|cfssljson-bareca会生成ca.pemca-key.pem文件2.使用自签CAkube-apiserverHTTPS创建申请文件cat>server-csr.json<<{"CN":"kubernetes","hosts":["key":"algo":"rsa","size":2048"names":{]

"C":"L":"ST":"O":"OU":}注:上述文件hosts段中IP所有Master/LB/VIPIP,一个都不能少!为了方便后期扩容可以多写几个预留的IP。生成cfsslcfsslgencert-ca=ca.pem-ca-key=ca-key.pem-config=ca-config.json-profile=kubernetesserver-csr.json|cfssljson-bareserver会生成server.pem和server-key.pem文件从二进制文件-注:打开你会发现里面有很多包,一个server包就够了,包含了WorkerNode进制文件解压二进制mkdir-p/opt/kubernetes/{bin,cfg,ssl,logs}tarmkdir-p/opt/kubernetes/{bin,cfg,ssl,logs}tarzxvfkubernetes-server-linux-amd64.tar.gzcdkubernetes/server/bincpkube-apiserverkube-schedulerkube-controller-manager/opt/kubernetes/bincpkubectl/usr/bin/kube-创建配置文cat>/opt/kubernetes/cfg/kube-apiserver.conf<<cat>/opt/kubernetes/cfg/kube-apiserver.conf<<KUBE_APISERVER_OPTS="--logtostderr=false--v=2--log-dir=/opt/kubernetes/logs2379\\--bind-address=1--secure-port=6443--advertise-address=1--allow-privileged=true:--service-cluster-ip-range=/24--authorization-mode=RBAC,Node--enable-bootstrap-token-auth=true--token-auth-file=/opt/kubernetes/cfg/token.csv--service-node-port-range=30000-32767--kubelet- =/opt/kubernetes/ssl/server.pem--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem--tls-cert-file=/opt/kubernetes/ssl/server.pem--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem--client-ca-file=/opt/kubernetes/ssl/ca.pem--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem--service-account-issuer=api--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem--etcd-cafile=/opt/etcd/ssl/ca.pem--etcd-certfile=/opt/etcd/ssl/server.pem--etcd-keyfile=/opt/etcd/ssl/server-key.pem--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem--requestheader-allowed-names=kubernetes--requestheader-extra-headers-prefix=X-Remote-Extra---requestheader-group-headers=X-Remote-Group--requestheader-username-headers=X-Remote-User--enable-aggregator-routing=true--audit-log-maxage=30--audit-log-maxbackup=3--audit-log-maxsize=100注:上面两个\\第一个是转义符,第二个是换行符,使用转义符是为了使用保留换行符--logtostderr:启用日---v:日志等--log-dir:日--etcd-servers:etcd集群地--bind-address:地--secure-port:https安全端--advertise-address:集群通告地--allow-privileged:启--service-cluster-ip-range:Service虚拟IP地址--enable-admission-plugins:准入控制模--authorization-mode:认证,启用RBAC和节点自管--enable-bootstrap-token-auth:启用TLSbootstrap机--token-auth-file:bootstraptoken文--service-node-port-range:Servicenodeport类型默认分配端口范--kubelet-client-xxx:apiserverkubelet客户--tls-xxx-file:apiserverhttps--etcd-xxxfile:连接Etcd集--audit-log-xxx:审计日cert-file,--proxy-client-key-file,--requestheader-allowed--requestheader-extra-headers-prefix,--requestheader-group---requestheader-username-headers,--enable-aggregator-拷贝刚才生把刚才生成的拷贝到配置文件中的路径cp~/TLS/k8s/ca*pem~/TLS/k8s/server*pemcp~/TLS/k8s/ca*pem~/TLS/k8s/server*pem启用TLSBootstrap机TLSBootstra:Masterapiserver启用TLS认证后,Node节点kubelet和kube-proxy要与kube-apiserver进行通信,必须使用CA签发的有效才可以,当Node节点很多时,这种客户端颁发需要大量工作,同样也会增加集群扩展复杂度。为了简化流程,Kubernetes引入了TLSbootstra机制来自动颁发客户端,kubelet会以一个低权限用户自apiserver申请,kubelet的由apiserver动态签署。所以强烈建议在Node上使用这种方式,目前主要用于kubelet,kube-proxy还是由我们统一颁发一个。TLSbootstra工作流程创建上述配置文件中token文件cat>/opt/kubernetes/cfg/token.csv<<cat>/opt/kubernetes/cfg/token.csv<<格式:token,用户名,UID,用户token也可自行生成替换head-c16/dev/urandom|od-An-tx|tr-d'head-c16/dev/urandom|od-An-tx|tr-d'systemdcatcat>/usr/lib/systemd/system/kube-apiserver.service<<EOFDescription=KubernetesAPI ExecStart=/opt/kubernetes/bin/kube-apiserver\$KUBE_APISERVER_OPTS启动并设置开机启systemctldaemon-reloadsystemctlstartkube-apiserversystemctlsystemctldaemon-reloadsystemctlstartkube-apiserversystemctlenablekube-apiserver部署kube-controller-创建配置文catcat>/opt/kubernetes/cfg/kube-controller-manager.conf<<EOFKUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false\\--v=2--log-dir=/opt/kubernetes/logs--leader-elect=true--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig--bind-address=--allocate-node-cidrs=true--cluster-cidr=/16--service-cluster-ip-range=/24--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem--cluster-signing-key-file=/opt/kubernetes/ssl/ca- --root-ca-file=/opt/kubernetes/ssl/ca.pem--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem--kubeconfig:连接apiserver配置文--leader-elect:当该组件启动多个时,自动kubelet颁发的CA,与apiserver保持一致kubeconfig文生成kube-controller-managercdcat>kube-controller-manager-csr.json<<{"CN":"system:kube-controller-manager","hosts":[],"key":"algo":"rsa","size":2048"names":{"C":"L":"ST":"O":"OU":}}]cfsslgencert-ca=ca.pem-ca-key=ca-key.pem-config=ca-config.json-profile=kuberneteskube-controller-manager-csr.json|cfssljson-barekube-生成kubeconfig文件(以下是s命令,直接在终端执行 kubectlconfigset-clusterkubernetes- -authority=/opt/kubernetes/ssl/ca.pem--embed-certs=true--server=${KUBE_APISERVER}--kubectlconfigset-credentialskube-controller-manager-- =./kube-controller-manager.pem--client-key=./kube-controller-manager-key.pem--embed-certs=truekubectlconfigset-contextdefault\--cluster=kubernetes--user=kube-controller-manager--kubectlconfiguse-contextdefault--systemd管理controller-catcat>/usr/lib/systemd/system/kube-controller-manager.service<<EOFDescription=KubernetesController ExecStart=/opt/kubernetes/bin/kube-controller-manager\$KUBE_CONTROLLER_MANAGER_OPTSRestart=on-Restart=on-启动并设置开机启systemctldaemon-systemctldaemon-systemctlstartkube-controller-managersystemctlenablekube-controller-managerkube-创建配置文catcat>/opt/kubernetes/cfg/kube-scheduler.conf<<EOFKUBE_SCHEDULER_OPTS="--logtostderr=false\\--v=2--log-dir=/opt/kubernetes/logs--leader-elect--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig--kubeconfig:连接apiserver配置文--leader-elect:当该组件启动多个时,自动kubeconfig件生成kube-scheduler:cdcat>kube-scheduler-csr.json<<{"CN":"system:kube-scheduler","hosts":[],"key":"algo":"algo":"rsa","size":2048"names":{"C":"L":"ST":"O":"OU":}]cfsslgencert-ca=ca.pem-ca-key=ca-key.pem-config=ca-config.json-profile=kuberneteskube-scheduler-csr.json|cfssljson-barekube-scheduler生成kubeconfig文件(以下是s命令,直接在终端执行 kubectlconfigset-clusterkubernetes- -authority=/opt/kubernetes/ssl/ca.pem--embed-certs=true--server=${KUBE_APISERVER}--kubectlconfigset-credentialskube-scheduler-- =./kube-scheduler.pem--client-key=./kube-scheduler-key.pem--embed-certs=truekubectlconfigset-contextdefault\--cluster=kubernetes--user=kube-scheduler--kubectlconfiguse-contextdefault--systemdcatcat>/usr/lib/systemd/system/kube-scheduler.service<<EOFDescription=Kubernetes ExecStart=/opt/kubernetes/bin/kube-scheduler\$KUBE_SCHEDULER_OPTS启动并设置开机启systemctldaemon-reloadsystemctlstartkube-schedulersystemctlsystemctldaemon-reloadsystemctlstartkube-schedulersystemctlenablekube-scheduler查看集群状生成kubectl连接集群 cat>admin-csr.jsoncat>admin-csr.json{"CN":"hosts":[],"key":{"algo":"rsa","size":2048"names":{"C":"L":"ST":"O":"OU":}}]cfsslgencert-ca=ca.pem-ca-key=ca-key.pem-config=ca-config.json-profile=kubernetesadmin-csr.json|cfssljson-bareadmin生成kubeconfig文件mkdirmkdir"kubectlconfigset-clusterkubernetes- -authority=/opt/kubernetes/ssl/ca.pem--embed-certs=true--server=${KUBE_APISERVER}--kubectlconfigset-credentialscluster-admin-- =./admin.pem--client-key=./admin-key.pem--embed-certs=truekubectlconfigset-contextdefault\--cluster=kubernetes--user=cluster-admin--kubectlconfiguse-contextdefault--通过kubectl工具查看当前集群组件状态kubectlgetkubectlgetetcd-etcd- 如上输出说明Master节点组件运行正常kubelet-bootstrap用户允许请求kubectlcreateclusterrolebindingkubelet-bootstrapkubectlcreateclusterrolebindingkubelet-bootstrap--clusterrole=system:node-bootstrapper--user=kubelet-五、部Worker如果你在学习中遇到问题或者文档有误可联系 :下面还MasterNode操作,即同时作为Worker 在所有workernode创建工 mkdir-pmkdir-pmaster节点拷贝cdcdcpkubeletkube-proxy 创建配置文catcat>/opt/kubernetes/cfg/kubelet.conf<<EOFKUBELET_OPTS="--logtostderr=false\\--v=2--log-dir=/opt/kubernetes/logs--hostname-override=k8s-master1- i--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig--config=/opt/kubernetes/cfg/kubelet-config.yml--cert-dir=/opt/kubernetes/ssl--hostname-override:显示名称,集群中唯--network-plugin:启用--kubeconfig:空路径,会自动生成,后面用于连接--bootstrap-kubeconfig:首次启apiserver申--config:配置参数文--cert-dir:kubelet生--pod-infra-container-image:管理Pod网络容器的镜配置参数文catcat>/opt/kubernetes/cfg/kubelet-config.yml<<EOFkind:KubeletConfigurationapiVersion:kubelet.config.k8s.io/v1beta1address:port:readOnlyPort:10255cgroupDriver:cgroupfs- :cluster.localfailSwapOn:falseenabled:falsecacheTTL:2m0senabled:trueclientCAFile:/opt/kubernetes/ssl/ca.pemmode:WebhookcacheAuthorizedTTL:5m0scacheUnauthorizedTTL:30simagefs.available:15%memory.available:100Minodefs.available:10%nodefs.inodesFree:nodefs.inodesFree:5%maxPods:110kubelet初次加入集群引导kubeconfig文 "#apiserverIP:PORTTOKEN="c47ffb939f5ca36231d9e3121a252940"token.csvkubeletbootstrapkubeconfigkubectlconfigset-clusterkubernetes- -authority=/opt/kubernetes/ssl/ca.pem--embed-certs=true--server=${KUBE_APISERVER}--kubectlconfigset-credentials"kubelet-bootstrap"--token=${TOKEN}kubectlconfigset-contextdefault\--cluster=kubernetes--user="kubelet-bootstrap"--kubectlconfiguse-contextdefault--systemdcatcat>/usr/lib/systemd/system/kubelet.service<<EOFDescription=KubernetesKubeletExecStart=/opt/kubernetes/bin/kubelet\$KUBELET_OPTS启动并设置开机启systemctldaemon-reloadsystemctlstartkubeletsystemctlenablesystemctldaemon-reloadsystemctlstartkubeletsystemctlenablekubectl k8s- 5.3申请并加入kubectlgetcsrnode-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ-- 5.3申请并加入kubectlgetcsrnode-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ-- apiserver-client- kubelet- approvenode-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--kube-创建配置文catcat>/opt/kubernetes/cfg/kube-proxy.conf<<EOFKUBE_PROXY_OPTS="--logtostderr=false\\--v=2--log-dir=/opt/kubernetes/logs配置参数文catcat>/opt/kubernetes/cfg/kube-proxy-config.yml<<EOFkind:KubeProxyConfigurationapiVersion:kubeproxy.config.k8s.io/v1alpha1bindAddress:metricsBindAddress::10249kubeconfig:/opt/kubernetes/cfg/kube-proxy.kubeconfighostnameOverride:k8s-master1clusterCIDR:/24生成kube-proxy.kubeconfig文cdcat>kube-proxy-csr.json<<{"CN":"system:kube-"hosts":[],"key":{"algo":"rsa","size":2048"names":{"C":"L":"ST":"O":"OU":}]cfsslgencert-ca=ca.pem-ca-key=ca-key.pem-config=ca-config.json-profile=kuberneteskube-proxy-csr.json|cfssljson-barekube-proxy生成kubeconfig生成kubeconfig文件: kubectlconfigset-clusterkubernetes- -authority=/opt/kubernetes/ssl/ca.pem--embed-certs=true--server=${KUBE_APISERVER}--kubectlconfigset-credentialskube-proxy-- =./kube-proxy.pem--client-key=./kube-proxy-key.pem--embed-certs=truekubectlconfigset-contextdefault\--cluster=kubernetes--user=kube-proxy--kubectlconfiguse-contextdefault--systemdkube-catcat>/usr/lib/systemd/system/kube-proxy.service<<EOFDescription=KubernetesProxyExecStart=/opt/kubernetes/bin/kube-proxy\$KUBE_PROXY_OPTS启动并设置开机启systemctldaemon-reloadsystemctlstartkube-proxysystemctlsystemctldaemon-reloadsystemctlstartkube-proxysystemctlenablekube-proxy部署网络组Calico是一个纯三层的数据中心网络方案,是目前Kubernetes主流的网络方案。部署Calico:kubectlapply-fcalico.yamlkubectlkubectlapply-fcalico.yamlkubectlgetpods-nkube-systemCalicoPodRunning,节点也会准备就绪kubectlk8s-apiserverkubelet应用场景:例如kubectlcatcat>apiserver-to-kubelet-rbac.yaml<<EOFapiVersion:rbac.authorization.k8s.io/v1kind:ClusterRolerbac.authorization.kubernetes.io/autoupdate:"true" :rbac-defaultsname:system:kube-apiserver-to-kubelet---apiVersion:rbac.authorization.k8s.io/v1kind:ClusterRoleBindingname:system:kube-apiservernamespace:""apiGroup:rbac.authorization.k8s.iokind:ClusterRolename:system:kube-apiserver-to-kubelet-apiGroup:rbac.authorization.k8s.iokind:Username:kubectlapply-fapiserver-to-kubelet-新增Worker拷贝已部署好的Node相关文件到新节Master节点将WorkerNode涉及文件拷贝到新节点scp-r/opt/kubernetesscp-r/opt/kubernetesscp-r/usr/lib/systemd/system/{kubelet,kube-proxy}.servicescp/opt/kubernetes/ssl/ca.pem删除kubelet和kubeconfig文rmrm-f/opt/kubernetes/cfg/kubelet.kubeconfigrm-f/opt/kubernetes/ssl/kubelet*注:这几个文件是申请后自动生成的,每个Node不同,必须删修改主机vivi--hostname-override=k8s-vi/opt/kubernetes/cfg/kube-proxy-config.ymlhostnameOverride:k8s-node1启动并设置开机启systemctldaemon-systemctldaemon-systemctlstartkubeletkube-proxysystemctlenablekubeletkube-proxykubectlget node-csr-apiserver-client- Kei- kubelet-#approvenode-csr-Kei-5Master批准Node申Node状kubectlgetkubectlget k8s- k8s- Node2(3)节点同上。记得修改主机名!六、部DashboardCoreDNSkubectlkubectlapplyfkubernetes-dashboard.yaml#查看部署kubectlgetpods,svc-nkubernetes-kubectlgetpods,svc-nkubernetes-地址:创建serviceaccount并绑定默认cluster-admin管理员集群角色kubectlcreateserviceaccountdashboard-admin-nkube-kubectlcreateserviceaccountdashboard-admin-nkube-kubectlcreateclusterrolebindingdashboard-admin--clusterrole=cluster-admin--kubectldescribesecrets-nkube-system$(kubectl-nkube-systemgetsecret|awk'/dashboard-admin/{print$1}')使用输出的token登录DashboardCoreDNS用于集群内部Service名称解析apply-get-nkube- coredns-5ffbfd976d-0DNS解析测试kubectlrun-it--rmdns-test--image=busybox:1.28.4kubectlrun-it--rmdns-test--image=busybox:1.28.4Ifyoudon'tseeacommandprompt,trypressing/#nslookupkubernetes Address1:kube-dns.kube-Address1:解析没问题至此一个单Master集群就搭建完成了!这个环境就足以满足学习实验了,如果你的服务器配置较高,可继续扩容多Master群!七、扩容多Master(高可用架构Kubernetes作为容器集群系统,通过健康检查+重启策略实现了Pod故障自我修复能力,通过调度算法实现将Pod分布式部署,并保持预期副本数,根据Node失效状态自动在其他Node拉起Pod,实现了应用层的高可用性。针对Kubernees集群,高可用性还应包含以下两个层面的考虑:Etcd数据库的高可用性和KuberneesMastr组件的高可用性。而Etcd我们已经采用3个节点组建集群实现高可用,本节将对Master节点高可用进行说明和实施。Master节点扮演着总控中心的角色,通过不断与工作节点上的Kubelet和kube-proxy进行通信来整个集群的健康工作状态。如果Master节点故障,将无法使用kubectl具或者API任何集群管理。Master点主要有三个服务kube-apiserver、kube-controller-managerkube-scheduler,其中kube-controller-managerkube-scheduler件自身通过选择组件是以HTTPAPI供服务,因此对他高可用与Web务器类似,增加负载均衡Master架构图Master2现在需要再增加一台新服务器,作为Master2Node,IP是4为了节省资源你也可以将之前部署好的WorkerNode1复用为Master2Node角(即部署Master组件Master2已部署的Master1有操作一致。所以我们只需将Master1K8s件拷贝过来,再修改下服务器IP主机名启动即可。scpscp/usr/bin/docker*root@4:/usr/binscp/usr/bin/runcroot@4:/usr/binscp/usr/bin/containerd*root@4:/usr/binscp/usr/lib/systemd/system/docker.servicescp-r/etc/dockersystemctldaemon-reloadsystemctlstartdockersystemctlenabledockeretcd在Master2创建etcdmkdir-pmkdir-p拷贝文件(Master1操作拷贝Master1上所有K8s文件和etcd到scp-r/opt/kubernetesroot@4:/optscpscp-r/opt/kubernetesroot@4:/optscp-r/opt/etcd/sslroot@4:/opt/etcdscp/usr/lib/systemd/system/kube*root@4:/usr/lib/systemd/systemscp/usr/bin/kubectl scp-r~/.kube删除文删除kubelet和kubeconfig文件rmrm-f/opt/kubernetes/cfg/kubelet.kubeconfigrm-f/opt/kubernetes/ssl/kubelet*修改配置文件IP和主机修改apiserver、kubelet和kube-proxy配置文件为本地vi/opt/kubernetes/cfg/kube-vi/opt/kubernetes/cfg/kube---bind-address=4--advertise-address=4vivi/opt/kubernetes/cfg/kube-controller-manager.kubeconfigvi/opt/kubernetes/cfg/kube-scheduler.kubeconfigvi--hostname-override=k8s-vi/opt/kubernetes/cfg/kube-proxy-config.ymlhostnameOverride:k8s-master2vi启动设置开机启systemctldaemon-systemctldaemon-systemctlstartkube-apiserverkube-controller-managerkube-schedulerkubeletkube-systemctlenablekube-apiserverkube-controller-managerkube-schedulerkubeletkube-查看集群状kubectlgetkubectlgetetcd-etcd- kubectlget8申 approvenode-csr-JYNknakEa_YpHz797oKaN-#查看Nodekubectlgetnode 如果你在学习中遇到问题或者文档有误可联系 :部署Nginx+Keepalived高可用负载均衡kube-apiserver高可用架构图Nginx是一个主流Web服务和反向服务器,这里用四层实现对实现负载均衡Keepalived是一个主流高可用软件,基于VIP绑定实现服务器双机热备,在上述拓扑中,Keepalived要根据Nginx行状态判断是否需要故障转移(漂移VIP),例如当Nginx主节点挂掉,VIP会自动绑定在Nginx备节点,从而保VIP一直可用,实现Nginx高可用1:为了节省机器,这里与K8sMaster点机器复用。也可以独立于k8s群之外部署,只要nginxapiserver通信就行。注2:如果你是在公有云上,一般都不支持keepalived,那么你可以直接用它们的负载均衡器产品,直接负载均衡多台Masterkube-apiserver,架构与上面一样。在两台Master节点操作安装软件包(主/备yuminstallepel-release-yuminstallepel-release-yuminstallnginxkeepalived-Nginx配置文件(主/备一样catcat>/etc/nginx/nginx.conf<<"EOF"usernginx;worker_processeserror_log/var/log/nginx/error.log;pid/run/nginx.pid;include/usr/share/nginx/modules/*.conf;events{worker_connections}stream '$remote_addr$upstream_addr-[$time_local] upstreamk8s-apiserver{server1:6443; #Master1APISERVERIP:PORTserver #Master2APISERVER}serverlisten16443;#由于nginx与master节点复用,这 端口不能是6443,否则proxy_passk8s-}}http '$remote_addr-$remote_user[$time_local]"$request"''$status$body_bytes_sent"$http_referer"''"$http_user_agent""$http_x_forwarded_for"'; types_hash_max_size server 80default_server; }

location/}keepalived置文件(Nginxcatcat>/etc/keepalived/keepalived.conf<<EOFglobal_defs{ }_fromsmtp_serversmtp_connect_timeout30router_idNGINX_MASTER}vrrp_scriptcheck_nginxscript}vrrp_instanceVI_1{stateMASTERinterface virtual_router_id51#VRRP路由IDpriority}_fromsmtp_serversmtp_connect_timeout30router_idNGINX_MASTER}vrrp_scriptcheck_nginxscript}vrrp_instanceVI_1{stateMASTERinterface virtual_router_id51#VRRP路由IDpriority #优先级,备服务器设置advert_int VRRP1authentication{auth_typePASSauth_pass}IPvirtual_ipaddress}track_script{}}virtual_ipaddres

温馨提示

  • 1. 本站所有资源如无特殊说明,都需要本地电脑安装OFFICE2007和PDF阅读器。图纸软件为CAD,CAXA,PROE,UG,SolidWorks等.压缩文件请下载最新的WinRAR软件解压。
  • 2. 本站的文档不包含任何第三方提供的附件图纸等,如果需要附件,请联系上传者。文件的所有权益归上传用户所有。
  • 3. 本站RAR压缩包中若带图纸,网页内容里面会有图纸预览,若没有图纸预览就没有图纸。
  • 4. 未经权益所有人同意不得将文件中的内容挪作商业或盈利用途。
  • 5. 人人文库网仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对用户上传分享的文档内容本身不做任何修改或编辑,并不能对任何下载内容负责。
  • 6. 下载文件中如有侵权或不适当内容,请与我们联系,我们立即纠正。
  • 7. 本站不保证下载资源的准确性、安全性和完整性, 同时也不承担用户因使用这些下载资源对自己和他人造成任何形式的伤害或损失。

评论

0/150

提交评论