Hands-on case study: Building a Ceph distributed storage system

Learning objectives: understand how the Ceph file system is structured, build a Ceph distributed cluster on three cloud hosts, and use the Ceph cluster for distributed storage.

02 Preparing the environment

Three CentOS 7 cloud hosts are used for the cluster. On each host, set the hostname (shown here for ceph-node1; use ceph-node2 and ceph-node3 on the other two) and verify it:

[root@ceph-node1 ~]# hostnamectl set-hostname ceph-node1
[root@ceph-node1 ~]# hostnamectl
   Static hostname: ceph-node1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
           Boot ID: b5805a7f0cce470bac5aeacf294edfc0
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-229.el7.x86_64

Each host has two virtio disks: vda (40 GB) holds the operating system and vdb (50 GB) is left empty for Ceph. Confirm this with lsblk:

[root@ceph-node1 ~]# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk
└─vda1 253:1    0  40G  0 part /
vdb    253:16   0  50G  0 disk

Add all three nodes to /etc/hosts on every host:

192.168.1.101 ceph-node1
192.168.1.102 ceph-node2
192.168.1.103 ceph-node3

Attach the CentOS-7-x86_64-DVD-1511.iso and XianDian v2.2 images to each host, create mount directories for them under /opt with mkdir, and mount the two images onto those directories.

Then replace the default yum repositories with a local repository that points at the mounted images: move the existing files out of /etc/yum.repos.d/, create local.repo (a sketch of its contents follows below), and check that yum can list packages from it:

[root@ceph-node1 yum.repos.d]# mv * /media/
[root@ceph-node1 yum.repos.d]# vi local.repo
[root@ceph-node1 yum.repos.d]# cat local.repo
[root@ceph-node1 yum.repos.d]# yum list
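A minimal sketch of what local.repo might contain, assuming the CentOS image is mounted at /opt/centos and the XianDian image at /opt/iaas (both mount points, and the repository layout inside the images, are hypothetical; point each baseurl at a directory that actually contains repodata):

[root@ceph-node1 yum.repos.d]# cat > local.repo <<'EOF'
# packages from the mounted CentOS 7 DVD (hypothetical mount point /opt/centos)
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
# Ceph 0.94.x packages from the mounted XianDian image (hypothetical mount point /opt/iaas)
[ceph]
name=ceph
baseurl=file:///opt/iaas
gpgcheck=0
enabled=1
EOF
[root@ceph-node1 yum.repos.d]# yum clean all
[root@ceph-node1 yum.repos.d]# yum list | grep ceph

The same repository file is needed on ceph-node2 and ceph-node3 as well.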

03 Installing and configuring Ceph

On ceph-node1, initialize the cluster with ceph-deploy (a sketch of the full ceph-deploy command sequence is given below, after this walkthrough). The cluster files are generated under /etc/ceph: the cluster configuration, the ceph-deploy log and the monitor keyring.

[root@ceph-node1 ~]# cd /etc/ceph
[root@ceph-node1 ceph]# ll
total 12
-rw-r--r-- 1 root root  229 Sep 20 16:20 ceph.conf
-rw-r--r-- 1 root root 2960 Sep 20 16:20 ceph-deploy-ceph.log
-rw------- 1 root root   73 Sep 20 16:20 ceph.mon.keyring

All three nodes report the same release when checked with ceph --version: ceph version 0.94.5.

Create the first Ceph monitor on ceph-node1 and check the cluster status. With a monitor but no OSDs the cluster is in an error state:

[root@ceph-node1 ceph]# ceph status
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
            election epoch 2, quorum 0 ceph-node1
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

ceph-deploy can also list the disks it sees on a node; only the system disk is in use so far:

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] /dev/vda1 other, xfs, mounted on /

Next, create the OSD back-end storage; this must be done on all three nodes. First create a single partition spanning the spare disk with parted:

[root@ceph-node1 ceph]# parted /dev/vdb
GNU Parted 3.1
Using /dev/vdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
Warning: The existing disk label on /dev/vdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) p
Model: Virtio Block Device
Disk /dev/vdb: 53.7GB
Partition Table: gpt
Number  Start  End  Size  File system  Name  Flags
(parted) mkpart
Partition name?  []?
File system type?  [ext2]?
Start? 0%
End? 100%
(parted) p
Disk /dev/vdb: 53.7GB
Partition Table: gpt
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  53.7GB  53.7GB
(parted) quit
Information: You may need to update /etc/fstab.

Format the new partition with XFS, mount it on a dedicated directory, and open up its permissions. ceph-node1 uses /opt/osd1; ceph-node2 and ceph-node3 use /opt/osd2 and /opt/osd3 in the same way:

[root@ceph-node1 ceph]# mkfs.xfs /dev/vdb1
[root@ceph-node1 ceph]# mkdir /opt/osd1
[root@ceph-node1 ceph]# mount /dev/vdb1 /opt/osd1
[root@ceph-node1 ceph]# chmod 777 /opt/osd1
[root@ceph-node1 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  1.1G   39G   3% /
/dev/vdb1        50G   33M   50G   1% /opt/osd1
[root@ceph-node1 ceph]# ll /opt/
total 0
drwxrwxrwx 2 root root 6 Sep 20 17:40 osd1
[root@ceph-node1 ceph]# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk
└─vda1 253:1    0  40G  0 part /
vdb    253:16   0  50G  0 disk
└─vdb1 253:17   0  50G  0 part /opt/osd1

Back on ceph-node1, prepare the three directories as OSDs with ceph-deploy osd prepare (see the command sketch below). After the prepare step each OSD directory contains the files written by Ceph; the course opens up their permissions as well:

[root@ceph-node1 ceph]# cd /opt/osd1/
[root@ceph-node1 osd1]# ll
total 12
-rw-r--r-- 1 root root 37 Sep 20 17:52 ceph_fsid
-rw-r--r-- 1 root root 37 Sep 20 17:52 fsid
-rw-r--r-- 1 root root 21 Sep 20 17:52 magic
[root@ceph-node1 osd1]# chmod 777 *
[root@ceph-node1 osd1]# ll
total 12
-rwxrwxrwx 1 root root 37 Sep 20 17:52 ceph_fsid
-rwxrwxrwx 1 root root 37 Sep 20 17:52 fsid
-rwxrwxrwx 1 root root 21 Sep 20 17:52 magic

Then activate the three OSDs with ceph-deploy osd activate; the activation log starts with lines such as:

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

With all three OSDs up and in, the cluster reaches HEALTH_OK:

[root@ceph-node1 ceph]# ceph status
     health HEALTH_OK
            election epoch 2, quorum 0 ceph-node1
     osdmap e14: 3 osds: 3 up, 3 in
      pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15459 MB used, 134 GB / 149 GB avail
                  64 active+clean

Finally, push the cluster configuration and the admin keyring to every node and make the keyring readable, so that the ceph command works on all of them:

[root@ceph-node1 ceph]# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node1
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node2
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
[ceph-node3][DEBUG ] connected to host: ceph-node3

[root@ceph-node1 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring
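For reference, a sketch of the usual ceph-deploy 1.5.x command sequence for a deployment like this one, assuming ceph-node1 is the admin node, the packages come from the local repository, and the OSDs are backed by the /opt/osdN directories prepared above (the exact options used in the lab may differ):

[root@ceph-node1 ceph]# ceph-deploy new ceph-node1                                  # write ceph.conf and ceph.mon.keyring
[root@ceph-node1 ceph]# ceph-deploy install --no-adjust-repos ceph-node1 ceph-node2 ceph-node3   # install Ceph, keeping the local repo untouched
[root@ceph-node1 ceph]# ceph-deploy mon create-initial                              # create the first monitor and gather the bootstrap keys
[root@ceph-node1 ceph]# ceph-deploy osd prepare ceph-node1:/opt/osd1 ceph-node2:/opt/osd2 ceph-node3:/opt/osd3
[root@ceph-node1 ceph]# ceph-deploy osd activate ceph-node1:/opt/osd1 ceph-node2:/opt/osd2 ceph-node3:/opt/osd3
[root@ceph-node1 ceph]# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3          # push ceph.conf and the admin keyring to every node

Alternatively, since the packages are available locally, they can simply be installed with yum install ceph ceph-deploy on each node before running ceph-deploy new.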
With the cluster deployed, its state can be checked from any node that holds the admin keyring:

Check the overall cluster status: # ceph status
Watch cluster health in real time: # ceph -w
Check the status of the monitors, OSDs and PGs (placement groups): # ceph mon stat, # ceph osd stat, # ceph pg stat
List the placement groups: # ceph pg dump
List the pools: # ceph osd lspools
View the CRUSH map of the OSDs: # ceph osd tree
List the cluster authentication keys: # ceph auth list

Next, the cluster is used as a block storage back end from a separate client host. Create one more CentOS 7 cloud host, set its hostname with hostnamectl set-hostname and verify it with hostnamectl (it runs CentOS Linux 7 (Core) with kernel 3.10.0-229.el7.x86_64 on x86_64, like the cluster nodes). Keep the default 127.0.0.1 localhost entries in its /etc/hosts, list the cluster nodes there, and make sure all hosts can resolve each other.

Back on ceph-node1, install Ceph onto the new host with ceph-deploy. The installation log ends with the dependency updates (for example cryptsetup-libs.x86_64 0:1.6.7-1.el7) and a final version check, Running command: ceph --version, which reports ceph version 0.94.5. Then push the cluster configuration and the admin keyring to the new host with ceph-deploy admin (the log again shows Pushing admin keys and conf, connected to host, write cluster configuration) and make the keyring readable there:

# chmod +r /etc/ceph/ceph.client.admin.keyring

On the new host, create a 4 GB RADOS Block Device image named foo, map it, and format the resulting device with ext4. lsblk then shows the mapped image as a 4 GB rbd0 device alongside vda:

# rbd create foo --size 4096 -m 10.0.1.11 -k /etc/ceph/ceph.client.admin.keyring
# rbd map foo --name client.admin -m 10.0.1.11 -k /etc/ceph/ceph.client.admin.keyring
# mkfs.ext4 /dev/rbd0
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
262144 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
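The formatted image can then be mounted and used like any local disk. A short sketch, with /mnt/ceph-disk1 as a hypothetical mount point and a small test write to confirm that data lands in the cluster:

# rbd showmapped                                   # confirm that foo is mapped to /dev/rbd0
# mkdir /mnt/ceph-disk1                            # hypothetical mount point
# mount /dev/rbd0 /mnt/ceph-disk1
# df -h /mnt/ceph-disk1                            # a ~4 GB ext4 filesystem backed by the Ceph cluster
# dd if=/dev/zero of=/mnt/ceph-disk1/testfile bs=1M count=100
# ceph df                                          # pool usage on the cluster grows as data is written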
