OpenStack High Availability Deployment Guide

Contents

0. Introduction
1. Requirements
   1.1 Four node roles
   1.2 Overall architecture
   1.3 IP address plan
2. Network Nodes
   2.1 Preparation
   2.2 Network plan
   2.3 OpenvSwitch (part 1)
   2.4 Quantum
   2.5 OpenVSwitch (part 2)
   2.6 HAProxy
   2.7 Corosync and Pacemaker
   2.8 Ceph (on R610-5 node only)
3. Controller Nodes
   3.1 Preparation
   3.2 Networking
   3.3 MySQL
   3.4 RabbitMQ
   3.5 DRBD
   3.6 Pacemaker and Corosync
   3.7 Create Databases
   3.8 Ceph
   3.9 Keystone
   3.10 Glance
   3.11 Quantum
   3.12 Nova
   3.13 Cinder
   3.14 Horizon
4. Compute Nodes
   4.1 Preparing the Node
   4.2 Networking
   4.3 KVM
   4.4 OpenVSwitch
   4.5 Quantum
   4.6 Ceph
   4.7 Nova
5. Swift Node
   5.1 Preparing the Node
   5.2 Networking
   5.3 Swift Storage
   5.4 Swift Proxy

0. Introduction

This guide walks through building a multi-node, highly available (HA) OpenStack cloud platform step by step. The platform uses Ceph as the backend storage for Glance and Cinder, Swift as the object store, and Open vSwitch as the Quantum plugin.

1. Requirements

1.1 Four node roles: Controller, Network, Compute and Swift

1.2 Overall architecture

1.3 IP address plan

Hostname      HW model  Role            eth0 (external)  eth1 (mgmt)  eth2 (VM traffic)  eth3 (storage)
R710-1        R710      Swift
R710-2        R710      Controller_bak
R710-3        R710      Controller
R710-4        R710      Network
R710-5        R710      Network_bak
R710-7        R710      Compute
R710-8        R710      Compute
VIP-API
VIP-Mysql
VIP-Rabbitmq

(The concrete addresses are elided in the source; wherever they are needed below, angle-bracket placeholders such as <VIP-API>, <VIP-Mysql>, <mgmt-IP> stand in for them. The few addresses that survive in the text, e.g. 192.168.1.43 for the external VIP, are used as-is.)

2. Network Nodes

2.1 Preparation

·Install Ubuntu 13.04
·Add the ceph node entries to the /etc/hosts file:

<mgmt-IP> R710-3
<mgmt-IP> R710-    (second hostname truncated in the source)
<mgmt-IP> R610-5

·Update the system
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

·Install the NTP service
apt-get install -y ntp
Add the controller nodes as NTP servers, then restart the service (the server addresses are elided in the source):
echo "server <controller-IP>" >> /etc/ntp.conf
echo "server <controller_bak-IP>" >> /etc/ntp.conf
service ntp restart

·Install other services
apt-get install -y vlan bridge-utils

·Enable IP forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
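To confirm that the node is actually syncing from the controllers, a quick check such as the following can be used (not part of the original steps; output formatting varies by ntp version):

# Query the NTP peers this node uses; the controller addresses should
# appear in the list, one of them marked "*" once it has been selected.
ntpq -p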

2.2 Network plan

Edit /etc/network/interfaces. The example below is for the R610-5 node; adjust it for the R610-4 node as appropriate. Note that R610-4 does not need the eth3 storage IP, since no Ceph component runs on it. (All addresses are elided in the source and shown as placeholders.)

auto eth0
iface eth0 inet static
address <external-IP>
netmask 255.255.255.0
dns-nameservers <dns-IP>

#Openstack management
auto eth1
iface eth1 inet static
address <mgmt-IP>
netmask 255.255.255.0

#VM traffic
auto eth2
iface eth2 inet static
address <vm-traffic-IP>
netmask 255.255.255.0

#Storage network for ceph
auto eth3
iface eth3 inet static
address <storage-IP>
netmask 255.255.255.0

Restart networking:
service networking restart

2.3 OpenvSwitch (part 1)

Install Open vSwitch:
apt-get install -y openvswitch-switch openvswitch-datapath-dkms

Create the bridges:
#br-int will be used for VM integration
ovs-vsctl add-br br-int
#br-ex is used to make VMs accessible from the external network
ovs-vsctl add-br br-ex
#br-eth2 is used to carry VM internal traffic
ovs-vsctl add-br br-eth2
ovs-vsctl add-port br-eth2 eth2
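At this point all three bridges should exist; a quick sanity check (not in the original text) is:

# List the bridge layout; br-int, br-ex and br-eth2 should all appear,
# with eth2 attached as a port of br-eth2.
ovs-vsctl show
ovs-vsctl list-ports br-eth2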

2.4 Quantum

Install the Quantum openvswitch agent, l3 agent, dhcp agent and metadata agent:
apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent

Edit /etc/quantum/quantum.conf:
[DEFAULT]
auth_strategy = keystone
rabbit_host = <VIP-Rabbitmq>
rabbit_password = yourpassword
[keystone_authtoken]
auth_host = <VIP-API>
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
signing_dir = /var/lib/quantum/keystone-signing

Edit /etc/quantum/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = <VIP-API>
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword

Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
[DATABASE]
sql_connection = mysql://quantum:yourpassword@<VIP-Mysql>/quantum
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:1100
integration_bridge = br-int
bridge_mappings = physnet1:br-eth2
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Update /etc/quantum/metadata_agent.ini with:
[DEFAULT]
auth_url = http://<VIP-API>:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = yourpassword
nova_metadata_ip = <VIP-API>
nova_metadata_port = 8775
metadata_proxy_shared_secret = demo

Edit /etc/sudoers to give the quantum user full access, like:
Defaults:quantum !requiretty
quantum ALL=NOPASSWD: ALL

Restart all services:
cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
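To verify the agents came back up after the restart loop, their status can be checked individually (an ad-hoc check, not from the original):

for i in quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent; do
  service $i status   # each agent should report start/running
done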

2.5 OpenVSwitch (part 2)

Edit /etc/network/interfaces so that eth0 becomes:
auto eth0
iface eth0 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

Add eth0 to the br-ex bridge:
#After this step the node loses internet connectivity, but OpenStack keeps working
ovs-vsctl add-port br-ex eth0

Add the external IP to br-ex to restore internet connectivity, by adding the following to /etc/network/interfaces (addresses elided in the source):
auto br-ex
iface br-ex inet static
address <external-IP>
netmask <netmask>
gateway <gateway-IP>
dns-nameservers <dns-IP>

Restart networking and the quantum services:
service networking restart
cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
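A quick way to confirm that br-ex has taken over the external address (an ad-hoc check, not from the original; the gateway placeholder matches the one above):

ip addr show br-ex       # the external IP should now sit on br-ex, not on eth0
ping -c 3 <gateway-IP>   # internet connectivity should be restored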

2.6 HAProxy

Install the package on both network nodes:
apt-get install -y haproxy

Disable auto-start by editing /etc/default/haproxy:
ENABLED=0

Edit /etc/haproxy/haproxy.cfg on both nodes; the contents are identical. In the listen blocks below, <VIP-API> is the management VIP and <R710-2-IP> the management address of R710-2 (both elided in the source); 192.168.1.43 is the external VIP and 10.10.10.3 the management address of R710-3 (both survive in the source text):

global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  #log loghost local0 info
  maxconn 4096
  #chroot /usr/share/haproxy
  user haproxy
  group haproxy
  daemon
  #debug
  #quiet

defaults
  log global
  maxconn 8000
  option redispatch
  retries 3
  timeout http-request 10s
  timeout queue 1m
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  timeout check 10s

listen dashboard_cluster
  bind <VIP-API>:80
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:80 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:80 check inter 2000 rise 2 fall 5

listen dashboard_cluster_internet
  bind 192.168.1.43:80
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:80 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:80 check inter 2000 rise 2 fall 5

listen glance_api_cluster
  bind <VIP-API>:9292
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:9292 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:9292 check inter 2000 rise 2 fall 5

listen glance_api_internet_cluster
  bind 192.168.1.43:9292
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:9292 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:9292 check inter 2000 rise 2 fall 5

listen glance_registry_cluster
  bind <VIP-API>:9191
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:9191 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:9191 check inter 2000 rise 2 fall 5

listen glance_registry_internet_cluster
  bind 192.168.1.43:9191
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:9191 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:9191 check inter 2000 rise 2 fall 5

listen keystone_admin_cluster
  bind <VIP-API>:35357
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:35357 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:35357 check inter 2000 rise 2 fall 5
  server control03 <control03-IP>:35357 check inter 2000 rise 2 fall 5

listen keystone_internal_cluster
  bind <VIP-API>:5000
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:5000 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:5000 check inter 2000 rise 2 fall 5

listen keystone_public_cluster
  bind 192.168.1.43:5000
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:5000 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:5000 check inter 2000 rise 2 fall 5

listen memcached_cluster
  bind <VIP-API>:11211
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:11211 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:11211 check inter 2000 rise 2 fall 5

listen nova_compute_api1_cluster
  bind <VIP-API>:8773
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:8773 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8773 check inter 2000 rise 2 fall 5

listen nova_compute_api1_internet_cluster
  bind 192.168.1.43:8773
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:8773 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8773 check inter 2000 rise 2 fall 5

listen nova_compute_api2_cluster
  bind <VIP-API>:8774
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:8774 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8774 check inter 2000 rise 2 fall 5

listen nova_compute_api2_internet_cluster
  bind 192.168.1.43:8774
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:8774 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8774 check inter 2000 rise 2 fall 5

listen nova_compute_api3_cluster
  bind <VIP-API>:8775
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:8775 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8775 check inter 2000 rise 2 fall 5

listen nova_compute_api3_internet_cluster
  bind 192.168.1.43:8775
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:8775 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8775 check inter 2000 rise 2 fall 5

listen cinder_api_cluster
  bind <VIP-API>:8776
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:8776 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8776 check inter 2000 rise 2 fall 5

listen cinder_api_internet_cluster
  bind 192.168.1.43:8776
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:8776 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:8776 check inter 2000 rise 2 fall 5

listen novnc_cluster
  bind <VIP-API>:6080
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:6080 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:6080 check inter 2000 rise 2 fall 5

listen novnc_internet_cluster
  bind 192.168.1.43:6080
  balance source
  option tcpka
  option tcplog
  server R710-3 10.10.10.3:6080 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:6080 check inter 2000 rise 2 fall 5

listen quantum_api_cluster
  bind <VIP-API>:9696
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:9696 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:9696 check inter 2000 rise 2 fall 5

listen quantum_api_internet_cluster
  bind 192.168.1.43:9696
  balance source
  option tcpka
  option httpchk
  option tcplog
  server R710-3 10.10.10.3:9696 check inter 2000 rise 2 fall 5
  server R710-2 <R710-2-IP>:9696 check inter 2000 rise 2 fall 5

If haproxy is already running, stop it; Pacemaker will take it over later:
service haproxy stop
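Before handing the service over to Pacemaker, the configuration can be syntax-checked (a sanity step not in the original):

# -c runs haproxy in check mode; expect "Configuration file is valid"
haproxy -c -f /etc/haproxy/haproxy.cfg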

2.7 Corosync and Pacemaker

Install the packages:
apt-get install pacemaker corosync

Generate the corosync key file on one node (R610-5):
corosync-keygen
#Copy it to the other node
scp /etc/corosync/authkey R610-4:/etc/corosync/authkey

Edit /etc/corosync/corosync.conf on both nodes, substituting "bindnetaddr" with the network addresses of the node's eth1 and eth3 interfaces (the values are elided in the source):
rrp_mode: active
interface {
    # The following values need to be set based on your environment
    ringnumber: 0
    bindnetaddr: <eth1-network-address>
    mcastport: 5405
}
interface {
    # The following values need to be set based on your environment
    ringnumber: 1
    bindnetaddr: <eth3-network-address>
    mcastport: 5405
}

Enable Corosync auto-start:
#Edit /etc/default/corosync
START=yes

·Start the corosync service
service corosync start

Check the corosync status; both nodes should show as online:
crm status

Download the HAProxy OCF agent script (the URL is elided in the source):
cd /usr/lib/ocf/resource.d/heartbeat
wget <haproxy-ocf-agent-url>
chmod 755 haproxy

Configure the cluster resources:
crm configure
property stonith-enabled=false
property no-quorum-policy=ignore
rsc_defaults resource-stickiness=100
rsc_defaults failure-timeout=0
rsc_defaults migration-threshold=10
property pe-warn-series-max="1000"
property pe-input-series-max="1000"
property pe-error-series-max="1000"
property cluster-recheck-interval="5min"
primitive vip-mgmt ocf:heartbeat:IPaddr2 params ip=<VIP-API> cidr_netmask=24 op monitor interval=5s
primitive vip-internet ocf:heartbeat:IPaddr2 params ip=192.168.1.43 cidr_netmask=24 op monitor interval=5s
primitive haproxy ocf:heartbeat:haproxy params conffile="/etc/haproxy/haproxy.cfg" op monitor interval="5s"
colocation haproxy-with-vips INFINITY: haproxy vip-mgmt vip-internet
order haproxy-after-IP mandatory: vip-mgmt vip-internet haproxy
verify
commit
#Check that the pacemaker resources are running correctly
crm status
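A simple failover test (not part of the original text, assuming the crm shell shipped with this pacemaker version) is to put the active node into standby and watch the VIPs and haproxy migrate:

crm node standby R610-5   # resources should move to R610-4; watch with crm_mon
crm node online R610-5    # bring the node back into the cluster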

2.8 Ceph (on R610-5 node only)

We use R610-5 as the third Ceph monitor node.

Install the Ceph repository and packages (the key and repository URLs are elided in the source):
wget -q -O- '<ceph-release-key-url>' | sudo apt-key add -
echo "deb <ceph-repository-url>" >> /etc/apt/sources.list
apt-get update -y
apt-get install ceph

Create the ceph-c monitor directory:
#R610-5
mkdir /var/lib/ceph/mon/ceph-c

3. Controller Nodes

3.1 Preparation

·Install Ubuntu 13.04
When partitioning during installation, remember to reserve some space for MySQL and RabbitMQ to use with DRBD. Here two separate disks are used; different partitions of the same disk also work.

·Add the ceph node entries to the /etc/hosts file:
<mgmt-IP> R710-3
<mgmt-IP> R710-    (second hostname truncated in the source)
<mgmt-IP> R610-5

·Update the system
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

·Set up the NTP service
apt-get install -y ntp
Add the other controller node as the NTP server, then restart the ntp service (the address is elided in the source):
#use for controller_bak node
echo "server <controller-IP>" >> /etc/ntp.conf
service ntp restart

·Install other services
apt-get install -y vlan bridge-utils

·Enable IP forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p

3.2 Networking

Edit /etc/network/interfaces. The example below is for the R710-3 node; adjust it for the R710-2 node as appropriate (addresses elided in the source):

auto eth0
iface eth0 inet static
address <external-IP>
netmask <netmask>

#Openstack management
auto eth1
iface eth1 inet static
address <mgmt-IP>
netmask <netmask>

#Storage network for ceph
auto eth2
iface eth2 inet static
address <storage-IP>
netmask <netmask>

Restart networking:
service networking restart

3.3 MySQL

Install MySQL:
apt-get install -y mysql-server python-mysqldb

Make sure mysql has the same UID and GID on both nodes; if they differ, change them to match (see the sketch after this step).

Configure mysql to accept requests on all addresses by pointing bind-address in /etc/mysql/my.cnf at all interfaces (the exact sed pattern is elided in the source):
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
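A sketch of aligning the mysql UID/GID across the two nodes; the numeric IDs here are illustrative, not from the original:

# On each node, inspect the current IDs:
id mysql
# If they differ, pick one pair (e.g. 105/113) and apply it on the other node:
service mysql stop
usermod -u 105 mysql
groupmod -g 113 mysql
chown -R mysql:mysql /var/lib/mysql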

Also note that /etc/mysql/my.cnf needs a further change: point datadir at the directory where drbd0 will be mounted, /mnt/mysql in my case. At the same time, edit /etc/apparmor.d/usr.sbin.mysqld: comment out both /var/lib/mysql lines and add the following two lines, then save and exit:
/mnt/mysql/ r,
/mnt/mysql/* rwk,

Edit /etc/init/mysql.conf and comment out this line:
#start on runlevel [2345]

#Tested in practice: mysql should not be stopped before this point, or pacemaker's mysql resource will fail to start
Stop mysql so that pacemaker can manage it:
service mysql stop

#When first setting up the mysql and rabbitmq HA configuration, if pacemaker reports the mysql and rabbitmq resources as stopped, start the mysql and rabbitmq processes locally first (their data directories must point at the DRBD shared directories), and finally use "crm resource cleanup <resourcename>" to restart the given resource or resource group. A "stopped" error on a resource is sometimes really caused by a related resource not having started: resources depend on each other and therefore start in a fixed order. The normal order here is ms-drbd, fs, ip, then the service itself, so keep that in mind when troubleshooting.
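As a concrete illustration of that recovery procedure (the resource name p_mysql is a placeholder for whatever the cluster configuration in section 3.6 defines):

# Clear the fail counters and re-probe a resource pacemaker shows as stopped:
crm resource cleanup p_mysql
# Then confirm the whole chain (ms-drbd -> fs -> ip -> service) came up:
crm status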

3.4 RabbitMQ

Install RabbitMQ:
apt-get install rabbitmq-server -y

Make sure rabbitmq has the same UID and GID on both nodes; if they differ, change them to match.

Disable rabbitmq-server auto-start at boot:
update-rc.d -f rabbitmq-server remove

Stop rabbitmq-server so that pacemaker can manage it:
service rabbitmq-server stop

3.5 DRBD

Install the packages:
apt-get install drbd8-utils xfsprogs -y

Disable DRBD auto-start at boot:
update-rc.d -f drbd remove

Before this step, volume groups named R710-3-vg and R710-2-vg must already have been created on the two controller nodes with pvcreate and vgcreate (see the sketch below); only then can the following commands run:
#Note: on the R710-2 node, change the VG name to R710-2-vg
lvcreate R710-3-vg -n drbd0 -L 10G
lvcreate R710-3-vg -n drbd1 -L 10G

Load the DRBD module:
modprobe drbd
#Add drbd to the /etc/modules file
echo "drbd" >> /etc/modules
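The volume-group preparation mentioned above might look like this; the backing device /dev/sdb is an assumption, use whatever disk or partition was reserved during installation:

# On R710-3 (use R710-2-vg on the other controller):
pvcreate /dev/sdb
vgcreate R710-3-vg /dev/sdb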

Create the mysql DRBD resource file /etc/drbd.d/mysql.res (peer addresses elided in the source):
resource drbd-mysql {
  device /dev/drbd0;
  meta-disk internal;
  on R710-2 {
    address <R710-2-IP>:7788;
    disk /dev/mapper/R710-2-vg-drbd0;
  }
  on R710-3 {
    address <R710-3-IP>:7788;
    disk /dev/mapper/R710-3-vg-drbd0;
  }
  syncer {
    rate 40M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
}

Create the rabbitmq DRBD resource file /etc/drbd.d/rabbitmq.res:
resource drbd-rabbitmq {
  device /dev/drbd1;
  meta-disk internal;
  on R710-2 {
    address <R710-2-IP>:7789;
    disk /dev/mapper/R710-2-vg-drbd1;
  }
  on R710-3 {
    address <R710-3-IP>:7789;
    disk /dev/mapper/R710-3-vg-drbd1;
  }
  syncer {
    rate 40M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
}

After making the configuration above on both nodes, bring up the DRBD resources.
#Both nodes
drbdadm dump drbd-mysql
drbdadm dump drbd-rabbitmq

#Both nodes: create the metadata as follows
drbdadm create-md drbd-mysql
drbdadm create-md drbd-rabbitmq

#Both nodes: bring the resources up
drbdadm up drbd-mysql
drbdadm up drbd-rabbitmq

#Both nodes: check that the two DRBD nodes are communicating. The data will show as inconsistent, since no initial synchronization has been made yet:
drbd-overview
#And the result will be similar to:
# 0:drbd-mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r-

Initial DRBD synchronization
#Do this on the 1st node only:
drbdadm -- --overwrite-data-of-peer primary drbd-mysql
drbdadm -- --overwrite-data-of-peer primary drbd-rabbitmq
#And this should show something similar to:
# 0:drbd-mysql Connected Secondary/Secondary UpToDate/UpToDate C r-

Create the filesystems
#Do this on the 1st node only:
mkfs -t xfs /dev/drbd0
mkfs -t xfs /dev/drbd1

Move/copy the mysql and rabbitmq files to the DRBD resources
#Do the following on the 1st node only
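The excerpt is cut off at this point; what follows is a minimal sketch of the move/copy step, assuming the /mnt/mysql datadir chosen in section 3.3 and an analogous /mnt/rabbitmq mount point (the directory names and the rsync usage are assumptions, not from the original):

#1st node only: make the resources primary and mount them
drbdadm primary drbd-mysql
drbdadm primary drbd-rabbitmq
mkdir -p /mnt/mysql /mnt/rabbitmq
mount /dev/drbd0 /mnt/mysql
mount /dev/drbd1 /mnt/rabbitmq
#Copy the existing data onto the DRBD-backed filesystems
rsync -a /var/lib/mysql/ /mnt/mysql/
rsync -a /var/lib/rabbitmq/ /mnt/rabbitmq/
chown -R mysql:mysql /mnt/mysql
chown -R rabbitmq:rabbitmq /mnt/rabbitmq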
