1 Software Environment Overview
JDK       V           V           V           V
Hadoop    V (Master)  V (Slave)   V (Slave)   V (Slave)
Hive      V
Scala     V           V           V           V
Spark     V (Master)  V (Worker)  V (Worker)  V (Worker)
2 Download Paths for the Installation Packages
System    Package name    Download path
Spark open-source software    /
.tar.gz
.tar.gz
3 Hadoop 2.2 Installation and Configuration
3.1 Cluster Network Environment
The node IP addresses, hostnames and users are distributed as follows:
IP              HostName       User
172.16.158.24   DashDB01.yun   vod
172.16.158.25   spark01.yun    vod
172.16.158.26   spark02.yun    vod
172.16.158.27   spark03.yun    vod
3.2 Environment Setup (perform on every machine)
3.2.1 Change the HostName (optional)
vim /etc/sysconfig/network
Set HOSTNAME to the desired name.
Reboot the server for the change to take effect:
reboot
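A reboot is not strictly required just to rename the host; a minimal sketch that applies the name to the current session as well (CentOS 6 style, matching the init scripts used throughout this guide; "spark01.yun" below stands in for whatever name was chosen):
# apply the new name immediately, without waiting for the reboot
hostname spark01.yun
# confirm the active hostname
hostname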
3.2.2 Set Up the Hosts Mapping File
1. As root, edit the /etc/hosts mapping file and set the IP-address-to-hostname mappings, so that the file reads as follows:
vim /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4 centos centos.yun
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.158.24   DashDB01.yun
172.16.158.25   spark01.yun
172.16.158.26   spark02.yun
172.16.158.27   spark03.yun
2. Restart the network service with the following command:
/etc/init.d/network restart
[root@DashDB01 common]# /etc/init.d/network restart
Shutting down interface eth0:                              [ OK ]
Shutting down loopback interface:                          [ OK ]
Bringing up loopback interface:                            [ OK ]
Bringing up interface eth0:  Determining if ip address 172.16.158.24 is already in use for device eth0...
                                                           [ OK ]
[root@DashDB01 common]#
3. Verify that the mapping works:
[vod@DashDB01 ~]$ ping spark01.yun
PING spark01.yun (172.16.158.25) 56(84) bytes of data.
64 bytes from spark01.yun (172.16.158.25): icmp_seq=1 ttl=64 time=2.07 ms
64 bytes from spark01.yun (172.16.158.25): icmp_seq=2 ttl=64 time=0.299 ms
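Pinging each node one by one gets tedious on four machines; a small loop covers them all in one pass (plain bash, using the hostnames configured above):
# ping every node once to confirm the /etc/hosts mappings resolve
for host in DashDB01.yun spark01.yun spark02.yun spark03.yun; do
    ping -c 1 "$host" > /dev/null && echo "$host: OK" || echo "$host: FAILED"
done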
3.2.3 Configure the Operating System
3.2.3.1 Disable the Firewall
The firewall and SELinux must be disabled during the Hadoop installation, otherwise errors will occur.
1. Check the firewall status with service iptables status; output like the following means iptables is running:
[root@hadoop1 hadoop]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target   prot opt source      destination
1    ACCEPT   all  --  0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
2    ACCEPT   icmp --  0.0.0.0/0   0.0.0.0/0
3    ACCEPT   all  --  0.0.0.0/0   0.0.0.0/0
4    ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:22
5    REJECT   all  --  0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target   prot opt source      destination
1    REJECT   all  --  0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target   prot opt source      destination
2. As root, disable iptables at boot with the following command:
chkconfig iptables off
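Note that chkconfig only changes what happens at the next boot; to stop the running firewall immediately and double-check the boot setting, something like the following should work (again assuming CentOS 6 style init scripts):
# stop the firewall for the current session
service iptables stop
# every runlevel should now report "off"
chkconfig --list iptables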
3.2.3.2 Disable SELinux
1. Use the getenforce command to check whether SELinux is enabled:
[root@hadoop1 hadoop]# getenforce
Enforcing
2. Modify the /etc/selinux/config file:
Change SELINUX=enforcing to SELINUX=disabled, then reboot the machine for the change to take effect.
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#   targeted - Targeted processes are protected,
#   mls - Multi Level Security protection.
SELINUXTYPE=targeted
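The same edit can be made non-interactively; a minimal sketch, assuming the file still contains the stock SELINUX=enforcing line:
# flip the config for the next boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# drop to permissive mode right away (fully "disabled" still needs a reboot)
setenforce 0
getenforce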
3.2.3.3 Install and Configure the JDK
Grant the vod user read/write access to the /usr/lib/java directory with the following command:
sudo chmod -R 777 /usr/lib/java
Upload the downloaded JDK package to the /usr/lib/java directory and unpack it with:
tar -zxvf jdk-7u55-linux-x64.tar.gz
After unpacking, the directory looks like this:
[vod@DashDB01 java]$ pwd
/usr/lib/java
[vod@DashDB01 java]$ ll
total 134988
drwxr-xr-x 8 vod admins      4096 Mar 18  2014 jdk1.7.0_55
-rw-r--r-- 1 vod admins 138220064 Oct 14 09:46 jdk-7u55-linux-x64.tar.gz
As root, configure /etc/profile; this setting applies to all users:
vim /etc/profile
Add the following:
export JAVA_HOME=/usr/lib/java/jdk1.7.0_55
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
When done, apply the change with source /etc/profile and verify the installation:
[vod@DashDB01 java]$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
[vod@DashDB01 java]$ echo $JAVA_HOME
/usr/lib/java/jdk1.7.0_55
3.2.3.4 Update OpenSSL
yum update openssl
3.2.3.5 Passwordless SSH Configuration
1. As root, open the sshd configuration file with vim /etc/ssh/sshd_config and enable the following four settings:
RSAAuthentication yes
PubkeyAuthentication yes
StrictModes no
AuthorizedKeysFile .ssh/authorized_keys
2. Restart the service after the change:
service sshd restart
[root@hadoop1 hadoop]# service sshd restart
Stopping sshd:                                             [ OK ]
Starting sshd:                                             [ OK ]
3. As root, log in to each of the 4 nodes and create the .ssh directory under /home/common:
mkdir .ssh
4. As the vod user, generate the private and public keys on each of the 4 nodes:
sudo chown -R vod .ssh
ssh-keygen -t rsa
[vod@DashDB01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/common/.ssh/id_rsa):
/home/common/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/common/.ssh/id_rsa.
Your public key has been saved in /home/common/.ssh/id_rsa.pub.
The key fingerprint is:
6d:a3:33:8c:14:1e:9b:ff:26:44:3d:9b:ac:eb:d6:12 vod@DashDB01.yun
The key's randomart image is:
+--[ RSA 2048]----+
|   (randomart)   |
+-----------------+
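To avoid the interactive prompts when repeating this on all four nodes, the key pair can also be generated in one shot; a minimal sketch (-N "" gives an empty passphrase, matching the choices made above):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa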
5. On each of the 4 nodes, go to the /home/common/.ssh directory and copy the public key to a file named after the node, using a command of the form cp id_rsa.pub authorized_keys_DashDB01.yun, so that the four copies are named:
authorized_keys_DashDB01.yun
authorized_keys_spark01.yun
authorized_keys_spark02.yun
authorized_keys_spark03.yun
[vod@spark03 ~]$ cd .ssh
[vod@spark03 .ssh]$ ll
total 8
-rw------- 1 vod admins 1675 Oct 13 17:55 id_rsa
-rw-r--r-- 1 vod admins  397 Oct 13 17:55 id_rsa.pub
[vod@spark03 .ssh]$ cp id_rsa.pub authorized_keys_spark03
6. Use scp to send the public keys of the 3 slave nodes (spark01, spark02, spark03) to the /home/common/.ssh folder on the DashDB01.yun node:
scp authorized_keys_spark01 vod@DashDB01.yun:/home/common/.ssh
[vod@spark01 .ssh]$ scp authorized_keys_spark01 vod@DashDB01.yun:/home/common/.ssh
The authenticity of host 'dashdb01.yun (172.16.158.24)' can't be established.
RSA key fingerprint is 76:98:61:09:6a:6b:b6:f3:2e:95:98:b7:08:5c:26:78.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dashdb01.yun,172.16.158.24' (RSA) to the list of known hosts.
vod@dashdb01.yun's password:
authorized_keys_spark01                        100%  397     0.4KB/s   00:00
[vod@spark01 .ssh]$
The files on the DashDB01.yun node finally look like this:
[vod@DashDB01 .ssh]$ ll
total 24
-rw-r--r-- 1 vod admins  398 Oct 13 17:56 authorized_keys_master
-rw-r--r-- 1 vod admins  397 Oct 13 17:59 authorized_keys_spark01
-rw-r--r-- 1 vod admins  397 Oct 13 18:02 authorized_keys_spark02
-rw-r--r-- 1 vod admins  397 Oct 13 18:01 authorized_keys_spark03
-rw------- 1 vod admins 1675 Oct 13 17:52 id_rsa
-rw-r--r-- 1 vod admins  398 Oct 13 17:52 id_rsa.pub
[vod@DashDB01 .ssh]$
7. Append the public keys of all 4 nodes to the authorized_keys file, using commands of the form cat authorized_keys_DashDB01.yun >> authorized_keys:
[vod@DashDB01 .ssh]$ cat authorized_keys_master >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys_spark01 >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys_spark03 >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys_spark02 >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys
(the output lists the four ssh-rsa public keys, one per node: vod@DashDB01.yun, vod@spark01.yun, vod@spark03.yun and vod@spark02.yun)
[vod@DashDB01 .ssh]$
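As an aside, steps 4 through 7 can be compressed with ssh-copy-id, which ships with OpenSSH and appends a node's public key to the target's authorized_keys in one step; a sketch of the alternative, run once on each slave (it prompts for the vod password until the keys are in place):
ssh-copy-id -i ~/.ssh/id_rsa.pub vod@DashDB01.yun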
8. Distribute the file to the three slave nodes with scp authorized_keys vod@spark01.yun:/home/common/.ssh (and likewise for spark02.yun and spark03.yun):
[vod@DashDB01 .ssh]$ scp authorized_keys vod@spark01.yun:/home/common/.ssh
The authenticity of host 'spark01.yun (172.16.158.25)' can't be established.
RSA key fingerprint is f7:de:20:93:44:33:76:2e:bd:0c:29:3d:b0:6f:37:cc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'spark01.yun,172.16.158.25' (RSA) to the list of known hosts.
vod@spark01.yun's password:
authorized_keys                                100% 1589     1.6KB/s   00:00
[vod@DashDB01 .ssh]$ scp authorized_keys vod@spark02.yun:/home/common/.ssh
The authenticity of host 'spark02.yun (172.16.158.26)' can't be established.
RSA key fingerprint is 45:46:e2:ba:a2:7f:08:1d:8b:ba:ed:11:4c:27:ab:0e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'spark02.yun,172.16.158.26' (RSA) to the list of known hosts.
vod@spark02.yun's password:
authorized_keys                                100% 1589     1.6KB/s   00:00
[vod@DashDB01 .ssh]$ scp authorized_keys vod@spark03.yun:/home/common/.ssh
The authenticity of host 'spark03.yun (172.16.158.27)' can't be established.
RSA key fingerprint is 6a:d3:e4:a4:21:52:7b:f7:84:a1:61:f0:3b:0c:89:8b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'spark03.yun,172.16.158.27' (RSA) to the list of known hosts.
vod@spark03.yun's password:
authorized_keys                                100% 1589     1.6KB/s   00:00
[vod@DashDB01 .ssh]$
The .ssh directory on each of the other three machines then contains the following:
[vod@spark01 .ssh]$ ll
total 20
-rw-r--r-- 1 vod admins 1589 Oct 13 18:04 authorized_keys
-rw-r--r-- 1 vod admins  397 Oct 13 17:56 authorized_keys_spark01
-rw------- 1 vod admins 1675 Oct 13 17:54 id_rsa
-rw-r--r-- 1 vod admins  397 Oct 13 17:54 id_rsa.pub
-rw-r--r-- 1 vod admins  408 Oct 13 17:59 known_hosts
9. On all 4 machines, set the permissions of authorized_keys:
chmod 775 authorized_keys
10. Test that passwordless ssh login works:
[vod@DashDB01 .ssh]$ ssh spark01.yun
Last login: Tue Oct 13 18:06:43 2015 from dashdb01.yun
[vod@spark01 ~]$ exit
logout
Connection to spark01.yun closed.
[vod@DashDB01 .ssh]$ ssh spark02.yun
Last login: Tue Oct 13 18:06:49 2015 from dashdb01.yun
[vod@spark02 ~]$ exit
logout
Connection to spark02.yun closed.
[vod@DashDB01 .ssh]$ ssh spark03.yun
Last login: Tue Oct 13 17:10:23 2015
[vod@spark03 ~]$ exit
logout
Connection to spark03.yun closed.
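A quick loop from the master confirms all three logins at once; each iteration should print the remote hostname without asking for a password:
for host in spark01.yun spark02.yun spark03.yun; do
    ssh "$host" hostname
done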
3.3 Configure Hadoop
3.3.1 Prepare the Hadoop Files
1. Move the hadoop-2.2.0 directory to /usr/local:
cd /home/hadoop/Downloads/
sudo cp -r hadoop-2.2.0 /usr/local
2. Use chown to recursively change the owner of the directory to vod:
sudo chown -R vod /usr/local/hadoop-2.2.0
3.3.2 Create Subdirectories under the Hadoop Directory
As the vod user, create the tmp, name and data directories under /usr/local/hadoop-2.2.0, making sure they are owned by vod:
cd /usr/local/hadoop-2.2.0
mkdir tmp
mkdir name
mkdir data
ls
[vod@DashDB01 hadoop-2.2.0]$ ll
total 40
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 bin
drwxr-xr-x 2 vod admins 4096 Oct 14 09:34 data
drwxr-xr-x 3 vod admins 4096 Oct 14 09:31 etc
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 include
drwxr-xr-x 3 vod admins 4096 Oct 14 09:31 lib
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 libexec
drwxr-xr-x 2 vod admins 4096 Oct 14 09:34 name
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 sbin
drwxr-xr-x 4 vod admins 4096 Oct 14 09:31 share
drwxr-xr-x 2 vod admins 4096 Oct 14 09:34 tmp
Configure /etc/profile:
vim /etc/profile
Add the following:
export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
Apply it with:
source /etc/profile
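A quick sanity check that the new variables resolve (assuming HADOOP_HOME points at the unpacked 2.2.0 tree as set above):
hadoop version
echo $HADOOP_CONF_DIR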
3.3.3 Configure hadoop-env.sh
1. Open the hadoop-env.sh configuration file:
cd /usr/local/hadoop-2.2.0/etc/hadoop
sudo vim hadoop-env.sh
2. Add the following, which sets the JDK path and the hadoop/bin path:
export JAVA_HOME=/usr/lib/java/jdk1.7.0_55
export PATH=$PATH:/usr/local/hadoop-2.2.0/bin
3. Apply the configuration file and confirm it takes effect:
source hadoop-env.sh
3.3.4 Configure yarn-env.sh
1. Open the yarn-env.sh configuration file:
sudo vim yarn-env.sh
2. Add the JDK path, as in hadoop-env.sh:
export JAVA_HOME=/usr/lib/java/jdk1.7.0_55
3. Apply the configuration file and confirm it takes effect:
source yarn-env.sh
3.3.5 Configure core-site.xml
1. Open the core-site.xml configuration file with the following command:
sudo vim core-site.xml
2. Configure the file as follows:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://172.16.158.24:9000</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://172.16.158.24:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-2.2.0/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.hduser.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hduser.groups</name>
<value>*</value>
</property>
</configuration>
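Once the file is saved, the effective value can be read back with the hdfs getconf tool that ships with Hadoop 2.x; it should echo the fs.defaultFS URI configured above:
hdfs getconf -confKey fs.defaultFS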
3.3.6 Configure hdfs-site.xml
1. Open the hdfs-site.xml configuration file with the following command:
sudo vim hdfs-site.xml
2. Configure the file as follows:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>172.16.158.24:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop-2.2.0/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop-2.2.0/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
3.3.7 Configure mapred-site.xml
1. mapred-site.xml does not exist by default; copy it from the template:
cp mapred-site.xml.template mapred-site.xml
2. Open the mapred-site.xml configuration file with the following command:
sudo vim mapred-site.xml
3. Configure the file as follows:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>172.16.158.24:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>172.16.158.24:19888</value>
</property>
</configuration>
3.3.8 Configure yarn-site.xml
1. Open the yarn-site.xml configuration file with the following command:
sudo vim yarn-site.xml
2. Configure the file as follows:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>172.16.158.24:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>172.16.158.24:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>172.16.158.24:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>172.16.158.24:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>172.16.158.24:8088</value>
</property>
</configuration>
3.3.9 Configure the slaves File
1. Set the slave nodes:
sudo vim slaves
Change the contents to:
spark01.yun
spark02.yun
spark03.yun
3.3.10 Distribute the Hadoop Program to Each Node
1. On the spark01.yun, spark02.yun and spark03.yun machines, create the target directory and change its ownership:
sudo mkdir /usr/local/hadoop-2.2.0
sudo chown -R vod /usr/local/hadoop-2.2.0
2. On the DashDB01.yun machine, enter the /usr/local directory and copy the hadoop folder to the other 3 machines (a loop that covers all three slaves is sketched after step 4):
scp -r /usr/local/hadoop-2.2.0 vod@spark01.yun:/usr/local
3. On each slave node, check that the copy succeeded:
ls /usr/local/hadoop-2.2.0
4. Configure /etc/profile on every node:
vim /etc/profile
Add the following:
export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
Apply it with:
source /etc/profile
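The copy in step 2 has to be repeated for every slave; a small loop run from DashDB01.yun covers all three (it assumes the passwordless ssh set up in 3.2.3.5, so there are no password prompts):
for host in spark01.yun spark02.yun spark03.yun; do
    scp -r /usr/local/hadoop-2.2.0 vod@"$host":/usr/local/
done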
3.4 Start Hadoop
3.4.1 Format the NameNode
./bin/hdfs namenode -format
[hadoop@hadoop1 /usr/local/hadoop-2.2.0]$ ls
bin  data  etc  include  lib  libexec  LICENSE.txt  name  NOTICE.txt  README.txt  sbin  share  tmp
[hadoop@hadoop1 /usr/local/hadoop-2.2.0]$ ./bin/hdfs namenode -format
14/09/24 10:12:00 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
...
14/09/24 10:12:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/09/24 10:12:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/09/24 10:12:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/09/24 10:12:18 INFO common.Storage: Storage directory /usr/local/hadoop-2.2.0/name has been successfully formatted.
14/09/24 10:12:18 INFO namenode.FSImage: Saving image file /usr/local/hadoop-2.2.0/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/09/24 10:12:18 INFO namenode.FSImage: Image file /usr/local/hadoop-2.2.0/name/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
14/09/24 10:12:18 INFO util.ExitUtil: Exiting with status 0
14/09/24 10:12:18 INFO namenode.NameNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down NameNode at hadoop1
The line "Storage directory ... has been successfully formatted" indicates the format succeeded.
3.4.2 Start Hadoop
cd /usr/local/hadoop-2.2.0/sbin
./start-all.sh
3.4.3 Verify the Running Processes
Run the jps command on each node.
On DashDB01.yun the running processes are: NameNode, SecondaryNameNode and ResourceManager.
On spark01.yun, spark02.yun and spark03.yun the running processes are: DataNode and NodeManager.
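The check can be run for all four nodes from the master in one go; a sketch (if jps is not found over a non-interactive ssh session, invoke it by its full path under $JAVA_HOME/bin):
for host in DashDB01.yun spark01.yun spark02.yun spark03.yun; do
    echo "== $host =="
    ssh "$host" jps
done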
4 Hive Installation and Configuration
4.1 Copy the Project
Change the ownership and permissions of the folder:
sudo chown -R vod /usr/local/hive
sudo chmod 775 -R /usr/local/hive
Configure /etc/profile:
vim /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
export HIVE_CONF_DIR=$HIVE_HOME/conf
source /etc/profile
4.2 Configure Hive (with a MySQL Metastore)
Prerequisite: create a hive user in the MySQL database and grant it the required privileges:
mysql> CREATE USER 'hive' IDENTIFIED BY 'mysql';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql> flush privileges;
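A quick check that the grant landed as intended (standard MySQL, nothing specific to this guide):
mysql> SHOW GRANTS FOR 'hive';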
cd $HIVE_CONF_DIR/
cp hive-default.xml.template hive-site.xml
vim hive-site.xml
Modify the following parameters:
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://50:3306/hive?createDatabaseIfNotExist=true</value>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
Then run:
chmod 775 -R /usr/local/hive
4.3 Start HiveServer2 (in the Background)
cd $HIVE_HOME/bin
nohup hive --service hiveserver2 &
Test with netstat -an | grep 10000, or with a JDBC connection.
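For the JDBC-style test, Beeline (shipped with Hive) can connect to the HiveServer2 port directly; a sketch, assuming the default NONE authentication so no password is needed:
beeline -u jdbc:hive2://localhost:10000 -n vod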
4.4 Test
Enter the hive command to start Hive, then run a query:
hive> show tables;
OK
Time taken: ...