Hadoop Cluster Environment Setup
1. Prepare the environment and packages

1) Prepare 4 PCs.
2) Install and configure the Linux system: CentOS-7.0-1406-x86_64-DVD.iso
3) Install and configure the Java environment: jdk-8u121-linux-x64.gz
4) Install and configure Hadoop: hadoop-2.7.4-x64.tar.gz
5) Install and configure HBase: hbase-1.2.1-bin.tar.gz

Network configuration

Hostname    IP
master      02
slave1      03
slave2      04
slave3      05

Common commands

systemctl start foo.service       # start a service
systemctl stop foo.service        # stop a service
systemctl restart foo.service     # restart a service
systemctl status foo.service      # show a service's status (whether running or not)
systemctl enable foo.service      # enable a service at boot
systemctl disable foo.service     # disable a service at boot
systemctl is-enabled iptables.service   # check whether a service starts at boot
reboot                            # reboot the host
shutdown -h now                   # shut down immediately
source /etc/profile               # apply profile changes immediately
yum install net-tools

2. Install and configure CentOS

Install CentOS
1) Boot from CentOS-7.0-1406-x86_64-DVD.iso and start the installation.
2) Select "Install CentOS 7" and press Enter to continue.
3) Choose the language. The default is English; Chinese is fine for learning, but use English in a production environment.
4) Configure the network and hostname: hostname master, network enabled, with a manually configured IPv4 address.
5) Choose the installation destination. Select manual partitioning with standard partitions, click "Click here to create them automatically", click Done, and accept the changes.
6) Set the root password: Jit123.
7) Reboot; the installation is complete.

Configure the IP address

Check the IP:
ip addr   or   ip link

Configure the IP and gateway:
# cd /etc/sysconfig/network-scripts   # enter the network configuration directory
find ifcfg-em*                        # locate the NIC configuration file, e.g. ifcfg-em1
vi ifcfg-em1                          # edit the NIC configuration file
or
vi /etc/sysconfig/network-scripts/ifcfg-em1   # edit the NIC configuration file

Settings:
BOOTPROTO=static   # static for a fixed IP, dhcp for a dynamic one
ONBOOT=yes         # bring the interface up at boot
IPADDR=02          # IP address
NETMASK=           # subnet mask
GATEWAY=
DNS1=5

systemctl restart network.service   # restart the network

Configure hosts
# vi /etc/hosts
Contents:
02 master
03 slave1
04 slave2
05 slave3

Firewall:
systemctl status firewalld.service    # check the firewall status
systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # keep the firewall from starting at boot

Time synchronization:
yum install -y ntp   # install the ntp service
ntpdate              # synchronize the time from the network

Install and configure the JDK

Remove the bundled JDK
A freshly installed CentOS ships with OpenJDK. Running java -version prints something like:
java version "1.6.0"
OpenJDK Runtime Environment (build 1.6.0-b09)
OpenJDK 64-Bit Server VM (build 1.6.0-b09, mixed mode)

It is best to remove OpenJDK first and then install the Sun (Oracle) JDK. List the installed packages:
rpm -qa | grep java
which shows something like:
java-1.4.2-gcj-compat--40jpp.115
java-1.6.0-openjdk--1.7.b09.el5

Remove them:
rpm -e --nodeps java-1.4.2-gcj-compat--40jpp.115
rpm -e --nodeps java-1.6.0-openjdk--1.7.b09.el5

Other useful queries:
rpm -qa | grep gcj
rpm -qa | grep jdk

If the OpenJDK packages cannot be found, remove them with yum instead:
yum -y remove java java-1.4.2-gcj-compat--40jpp.115
yum -y remove java java-1.6.0-openjdk--1.7.b09.el5

Install the JDK
Upload the jdk-8u121-linux-x64.gz package to the root home directory, then:
mkdir /home
tar -zxvf jdk-8u121-linux-x64.gz -C /home/
rm -rf jdk-8u121-linux-x64.gz

Copy the JDK between the hosts:
scp -r /home root@slave1:/home/hadoop
scp -r /home root@slave2:/home/hadoop
scp -r /home root@slave3:/home/hadoop

Configure the JDK environment variables on every host:
vi /etc/profile
Add:
export JAVA_HOME=/home/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile   # apply the changes
java -version         # check the Java version

Create the hadoop user (run on every host)
[root@Master1 ~]# groupadd hadoop            // create the user group
[root@Master1 ~]# useradd -g hadoop hadoop   // create the hadoop user and add it to the hadoop group
[root@Master1 ~]# passwd hadoop              // set its password
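Since the user must exist on all four machines, the three commands can also be run on the slaves in one pass from master. This is a minimal sketch, assuming root can still ssh to each host with a password (key-based login is only configured in the next step) and using the Jit123 password from earlier in this guide:

for h in slave1 slave2 slave3; do
  # passwd --stdin is CentOS/RHEL-specific; each ssh call will prompt for the root password
  ssh root@$h 'groupadd hadoop; useradd -g hadoop hadoop; echo Jit123 | passwd --stdin hadoop'
done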
Configure passwordless ssh access

Check the ssh service on each host:
systemctl status sshd.service                 # check the ssh service status
yum install openssh-server openssh-clients    # install ssh; skip this step if it is already installed
systemctl start sshd.service                  # start ssh; skip this step if it is already running

Generate a key pair on each host (run on every machine):
su - hadoop              # switch to the hadoop user
ssh-keygen -t rsa -P ''  # generate the key pair (press Enter to accept the defaults), as shown below

[Figure: sample ssh-keygen output showing the key pair saved to /home/hadoop/.ssh/id_rsa and its fingerprint]

On slave1:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave1.id_rsa.pub
scp ~/.ssh/slave1.id_rsa.pub hadoop@master:~/.ssh

On slave2:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave2.id_rsa.pub
scp ~/.ssh/slave2.id_rsa.pub hadoop@master:~/.ssh

On slave3:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave3.id_rsa.pub
scp ~/.ssh/slave3.id_rsa.pub hadoop@master:~/.ssh

On master:
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat slave1.id_rsa.pub >> authorized_keys
cat slave2.id_rsa.pub >> authorized_keys
cat slave3.id_rsa.pub >> authorized_keys
scp authorized_keys hadoop@slave1:~/.ssh
scp authorized_keys hadoop@slave2:~/.ssh
scp authorized_keys hadoop@slave3:~/.ssh

On every host, set the permissions:
su - hadoop
chmod 600 ~/.ssh/authorized_keys

Test the passwordless login:
ssh slave1   # the first login asks you to type yes; if no password prompt follows, the setup succeeded

3. Install and configure Hadoop

Install Hadoop
Upload the hadoop-2.7.4.tar.gz package to the root home directory, then:
tar -zxvf hadoop-2.7.4.tar.gz -C /home/hadoop
rm -rf hadoop-2.7.4.tar.gz
mkdir /home/hadoop/hadoop-2.7.4/tmp
mkdir /home/hadoop/hadoop-2.7.4/logs
mkdir /home/hadoop/hadoop-2.7.4/hdf
mkdir /home/hadoop/hadoop-2.7.4/hdf/data
mkdir /home/hadoop/hadoop-2.7.4/hdf/name

Configure hadoop-env.sh
Edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/home/jdk1.8.0_121

Configure yarn-env.sh
# export JAVA_HOME=/home/y/libexec/jdk1.7.0/
export JAVA_HOME=/home/jdk1.8.0_121

Configure slaves
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/slaves
Delete: localhost
Add:
slave1
slave2
slave3

Configure core-site.xml
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/core-site.xml
Contents:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-2.7.4/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Read/write buffer size in bytes; 131072 bytes is 128 KB</description>
  </property>
</configuration>

Configure hdfs-site.xml
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/hdfs-site.xml
Contents:
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.7.4/hdf/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.7.4/hdf/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Number of replicas; 1 is enough for a pseudo-distributed setup</description>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Configure mapred-site.xml
cp /home/hadoop/hadoop-2.7.4/etc/hadoop/mapred-site.xml.template /home/hadoop/hadoop-2.7.4/etc/hadoop/mapred-site.xml
vi /home/hadoop/hadoop-2.7.4/etc/hadoop/mapred-site.xml
Contents:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

Configure yarn-site.xml
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/yarn-site.xml
Contents:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
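A malformed site file makes the daemons fail at startup with a parse error, so it can be worth sanity-checking the four XML files before copying them out. A minimal sketch using xmllint (from the libxml2 package, which may need to be installed first):

cd /home/hadoop/hadoop-2.7.4/etc/hadoop
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  # prints nothing for well-formed XML; otherwise reports the error and line number
  xmllint --noout "$f" && echo "$f: OK"
done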
Copy Hadoop between the hosts:
scp -r /home/hadoop/hadoop-2.7.4 hadoop@slave1:/home/hadoop
scp -r /home/hadoop/hadoop-2.7.4 hadoop@slave2:/home/hadoop
scp -r /home/hadoop/hadoop-2.7.4 hadoop@slave3:/home/hadoop

Configure the Hadoop environment variables on every host:
su - root
vi /etc/profile
Add:
export HADOOP_HOME=/home/hadoop/hadoop-2.7.4
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_LOG_DIR=/home/hadoop/hadoop-2.7.4/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR

source /etc/profile   # apply the changes

Format the namenode:
cd /home/hadoop/hadoop-2.7.4/sbin
hdfs namenode -format

3.5 Start Hadoop

Start HDFS:
cd /home/hadoop/hadoop-2.7.4/sbin
start-all.sh

Check that Hadoop started by opening the namenode web UI on port 50070 (02:50070), as shown below.

[Figures: the namenode web UI Overview page showing the cluster as active with Hadoop version 2.7.4, the Datanode Information page listing the three datanodes in service, and the YARN All Applications page]
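Beyond the web UI, a quick functional smoke test is to write a small file into HDFS and run one of the bundled example jobs. A sketch, run as the hadoop user on master (the example jar path below is the usual location in a 2.7.4 tree, but verify it on your install):

hdfs dfs -mkdir -p /user/hadoop           # create a working directory in HDFS
hdfs dfs -put /etc/hosts /user/hadoop/    # write a small local file into HDFS
hdfs dfs -ls /user/hadoop                 # confirm the file is listed
# run the bundled pi estimator on YARN: 4 map tasks, 100 samples each
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar pi 4 100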

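The per-node process checks with jps described next can also be done from master in one pass, using the passwordless ssh configured earlier. A minimal sketch, assuming the JDK path from this guide:

for h in master slave1 slave2 slave3; do
  echo "== $h =="
  # use the full path, since a non-login shell may not have jps on its PATH
  ssh hadoop@$h /home/jdk1.8.0_121/bin/jps
done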
Check the processes with jps. If the master host shows ResourceManager, SecondaryNameNode, and NameNode, the start-up succeeded, for example:
2212 ResourceManager
2484 Jps
1917 NameNode
2078 SecondaryNameNode

If each slave host shows DataNode and NodeManager, it started successfully as well, for example:
17153 DataNode
17334 Jps
17241 NodeManager

Stop Hadoop with:
# stop-all.sh

4. Install and configure ZooKeeper

4.1 Configure the ZooKeeper environment variables
vi /etc/profile
export ZOOKEEPER_HOME=/home/hadoop/zookeeper-3.4.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source /etc/profile

4.2 Configure ZooKeeper
1. Download ZooKeeper from the official site: /apache/zookeeper/zookeeper-3.4.6/
2. Set up ZooKeeper on slave1, slave2, and slave3, for example:
slave1  03
slave2  04
slave3  05
3. Upload zookeeper-3.4.6.tar.gz to the root directory of any one of the servers and unpack it:
tar zxvf zookeeper-3.4.6.tar.gz -C /home/hadoop
4. Create a zookeeper-data directory under the zookeeper directory, and copy conf/zoo_sample.cfg to zoo.cfg:
cp /home/hadoop/zookeeper-3.4.6/conf/zoo_sample.cfg zoo.cfg
5. Edit zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial synchronization phase can take
initLimit=10
# The number of ticks that can pass between sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored. Do not use /tmp for storage; /tmp here is just for example's sake.
dataDir=/home/hadoop/zookeeper-3.4.6/zookeeper-data
# The port at which the clients will connect
clientPort=2181
# The maximum number of client connections. Increase this if you need to handle more clients.
#maxClientCnxns=60
# Be sure to read the maintenance section of the administrator guide before turning on autopurge.
# /doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours. Set to "0" to disable the autopurge feature.
#autopurge.purgeInterval=1
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
6. Copy the zookeeper directory to the other two servers:
scp -r /home/hadoop/zookeeper-3.4.6 slave2:/home/hadoop
scp -r /home/hadoop/zookeeper-3.4.6 slave3:/home/hadoop
Create a myid file in the zookeeper-data directory on each server, matching its server.* entry:
the myid on server.1 contains 1
the myid on server.2 contains 2
the myid on server.3 contains 3
7. Start the ZooKeeper service on every node of the cluster:
cd /home/hadoop/zookeeper-3.4.6
bin/zkServer.sh start
8. Check the state of the ZooKeeper cluster to make sure it started without problems. On each server, run:
bin/zkServer.sh status
which shows which nodes are followers and which one is the leader, e.g.:
zkServer.sh status

5. Install and configure HBase

Install HBase
Upload the hbase-1.2.1-bin.tar.gz package to the root home directory, then:
tar -zxvf hbase-1.2.1-bin.tar.gz -C /home/hadoop
mkdir /home/hadoop/hbase-1.2.1/logs

Configure the HBase environment variables
vi /etc/profile
export HBASE_HOME=/home/hadoop/hbase-1.2.1
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile

Configure hbase-env.sh
# vi /home/hadoop/hbase-1.2.1/conf/hbase-env.sh
Contents:
export JAVA_HOME=/home/jdk1.8.0_121
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_MANAGES_ZK=false

Configure regionservers
# vi /home/hadoop/hbase-1.2.1/conf/regionservers
Contents:
Delete: localhost
Add:
slave1
slave2
slave3
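With HBASE_MANAGES_ZK=false, HBase depends on the external ZooKeeper ensemble from section 4, so it is worth confirming the quorum is healthy before starting HBase. A minimal sketch run from master, using the paths configured above:

for h in slave1 slave2 slave3; do
  echo "== $h =="
  # expect Mode: leader on exactly one node and Mode: follower on the other two;
  # /etc/profile is sourced so zkServer.sh is on the PATH in a non-login shell
  ssh hadoop@$h "source /etc/profile && zkServer.sh status"
done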
