Hadoop Hands-On Manual: Cluster Deployment Primer (思数科技)
Revision record: added the EasyHive and EasyHadoop cluster deployment documents.

Contents
- Document overview
- Background
- Server layout
  # Deployment layout of the Hadoop test cluster
- RedHat Linux base environment setup
  # Linux installation (VMware virtual machine)
  # Configure the cluster hosts list
  # Download and install the Java JDK
  # Create the user account and the Hadoop deployment and data directories
- Hadoop single-node installation and configuration
  # Download and unpack the Hadoop release
  # Configure environment variables in hadoop-env.sh
  # Hadoop Common configuration: core-site.xml
  # HDFS NameNode and DataNode configuration: hdfs-site.xml
  # Single-node startup, execution and troubleshooting
  # Run the Hadoop pi example to check the installation
  # Common installation and deployment errors
- Hadoop cluster installation and configuration
  # Check the Linux base environment on every node (see "RedHat Linux base environment setup")
  # Configure passwordless key login from the master to the nodes
  # Verify key login from the master to every node as the hadoop user
  # Configure the server lists that stop-all.sh and start-all.sh use
  # Run the Hadoop pi example to check the cluster
- Hive warehouse cluster deployment
  - What Hive does and how it works
    # Hive warehouse workflow
    # Hive internal structure
  - Hive deployment and installation
    # Install the Hadoop cluster (see the EasyHadoop installation document)
    # Unpack the Hive package and configure the JDBC connection
    # Start the Hive thrift server
    # Start the built-in Hive UI
  - Basic usage of the Hive CLI
  - Basic HQL syntax (create, load, query and drop tables)
  - Building a simple data mart with MySQL
    # The two MySQL storage engines
    # Create a table and analyse it with the Hive CLI
    # Write HQL scripts, export the results with the Hive CLI and load them into MySQL
    # Add a daily crontab job
  - Presenting the data with FineReport
    # Problems and limitations of FineReport

Document overview

This is a Hadoop deployment document. It gives the methods and steps for both single-node and cluster installation, with the goal of making Hadoop deployment easy. Supported systems: CentOS 5 and RedHat 5.2 (32-bit and 64-bit), and Ubuntu.

Background

Hadoop is the base framework for distributed file storage and computation; it comprises the Hadoop programs, HDFS and related components. It is Apache's open-source distributed framework.

- NameNode: the HDFS metadata master server; it holds the file metadata for the DataNodes.
- JobTracker: the Map/Reduce scheduler; it talks to the TaskTrackers, assigns compute tasks and tracks their progress.
- DataNode: a Hadoop data node; it stores the data blocks.
- TaskTracker: the per-node scheduler; it launches and runs the individual Map and Reduce tasks.
- FUSE: a multi-filesystem kernel facility that lets other file systems be mounted read/write under Linux.

Server layout

# Deployment layout of the Hadoop test cluster
Deployment path: /opt/modules/hadoop/hadoop-1.0.3
(The source shows several figures here: the test-cluster layout, the hadoop fs + Hadoop network topology, the VM network structure (NAT or bridged), the SSH and Hadoop component dependencies, and the production layout with an HDFS/JobTracker master in front of node1 … node(n) data and compute nodes.)

RedHat Linux base environment setup

# Linux installation (VMware virtual machine, NAT networking, working as root)

# Configure time synchronisation
crontab -e
0 1 * * * /usr/sbin/ntpdate <ntp server>

# Configure the machine network
# set the hostname, here master:
hostname master
# make it permanent (HOSTNAME=master):
vi /etc/sysconfig/network
# run setup (or edit the ifcfg files) to give the machine a static IP;
# the VMware adapter appears as "Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]":
/sbin/service network restart

# Disable the firewall (the source is truncated here). If this step is
# skipped, the daemons fail at startup with errors such as:
2012-07-18 02:47:26,331 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-07-18 02:47:26,… INFO …: MetricsSystem,sub=Stats registered.
2012-07-18 02:47:26,533 ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using …

# Configure the cluster hosts list
vi /etc/hosts
# add one line per machine in the form <ip> <hostname>, covering the master
# and every node (the concrete addresses were lost in the source)

# Download and install the Java JDK
wget …                         # JDK download URL (lost in the source)
chmod +x jdk-*.bin             # make the installer executable, then run it
vi /etc/profile
# paste the following into /etc/profile:
export JAVA_HOME=…             # the JDK install path
export HADOOP_HOME=/opt/modules/hadoop/hadoop-1.0.3
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

# Generate login keys
# as the hadoop user:
su hadoop
cd
ssh-keygen -q -t rsa -N "" -f /home/hadoop/.ssh/id_rsa
cd .ssh
cat id_rsa.pub > authorized_keys
chmod go-rwx authorized_keys
# single machine: append the contents of id_rsa.pub to authorized_keys
# cluster: copy id_rsa.pub to every node and append it to authorized_keys there

# Create the user account and the Hadoop deployment and data directories
/usr/sbin/groupadd hadoop
/usr/sbin/useradd hadoop -g hadoop
# create the hadoop code directory:
mkdir -p /opt/modules/hadoop/
# create the hadoop data directory:
mkdir -p /opt/data/hadoop/
# hand both trees to hadoop:hadoop:
chown -R hadoop:hadoop /opt/modules/hadoop/
chown -R hadoop:hadoop /opt/data/hadoop/

# Check the base environment
[hadoop@master root]$ ifconfig
eth0   Link encap:Ethernet  HWaddr …
       inet addr:…  Bcast:…  Mask:…
       inet6 addr: fe80::20c:29ff:fe7a:de12/64 Scope:Link
       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
       RX packets:14 errors:0 dropped:0 overruns:0 frame:0
       TX packets:821 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:1591 (1.5 KiB)  TX bytes:81925 (80.0 KiB)
       Interrupt:67 Base address:…
ssh master
echo $JAVA_HOME                # the two echo checks are truncated in the source;
echo $HADOOP_HOME              # verifying the profile variables is the evident intent

Hadoop single-node installation and configuration

# Download and unpack the Hadoop release
# as the hadoop user:
cd /opt/modules/hadoop/
# download from a Hadoop mirror:
wget …                         # URL lost in the source
# or, if already downloaded, copy the archive into the install directory:
cp hadoop-1.0.3.tar.gz /opt/modules/hadoop/
# unpack the Hadoop archive:
cd /opt/modules/hadoop/
tar -xzvf hadoop-1.0.3.tar.gz

# Configure environment variables in hadoop-env.sh
# HADOOP_HEAPSIZE is the daemon heap size; the default is 1000, configured
# smaller (512) here because the test machines are small.
export HADOOP_HEAPSIZE=512
# configure the compression library path:
export …                       # truncated in the source

# Hadoop Common configuration: core-site.xml
(values lost in the source are shown as … here and below)

<configuration>
  <!-- hadoop namenode server address and port, in hdfs://hostname:port form -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:…</value>
  </property>
  <!-- directory where the secondary namenode keeps its editlog/fsimage checkpoints -->
  <property>
    <name>fs.checkpoint.dir</name>
    <value>…</value>
  </property>
  <!-- merge the editlog every 30 minutes -->
  <property>
    <name>fs.checkpoint.period</name>
    <value>1800</value>
  </property>
  <!-- also trigger a merge once the editlog reaches 32m -->
  <property>
    <name>fs.checkpoint.size</name>
    <value>33554432</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
    <description>Hadoop file trash; automatic cleanup interval in minutes, here one day.</description>
  </property>
</configuration>

# HDFS NameNode and DataNode configuration: hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- where the HDFS namenode saves the image file -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/hdfs/name</value>
  </property>
  <!-- HDFS data file path; several directories on different partitions and
       disks may be given, separated by commas -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/hdfs/data</value>
  </property>
  <!-- host and port of the NameNode web interface; master is the host configured above -->
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <!-- secondary HDFS web address -->
  <property>
    <name>dfs.secondary.http.address</name>
    <value>…</value>
  </property>
  <!-- number of block replicas, default 3 -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- space the datanode keeps in reserve rather than writing the disk full; 1G, in bytes -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
  </property>
</configuration>

# MapReduce configuration: mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- intermediate MapReduce data; one directory per disk may be listed -->
  <property>
    <name>mapred.local.dir</name>
    <value>/data/hadoop/mapred/mrlocal</value>
  </property>
  <!-- MapReduce system control files -->
  <property>
    <name>mapred.system.dir</name>
    <value>/data/hadoop/mapred/mrsystem</value>
  </property>
  <!-- maximum number of map slots, 3 here -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>3</value>
  </property>
  <!-- maximum number of reduce slots -->
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>…</value>
  </property>
  <!-- memory for the reduce-side sort, 100M; must stay below the task JVM
       size set in mapred.child.java.opts -->
  <property>
    <name>io.sort.mb</name>
    <value>100</value>
  </property>
</configuration>

Sizing rule of thumb: the task JVMs a node spawns alongside its DataNode and
TaskTracker (map slots + reduce slots ≈ CPU cores × 2), multiplied by the
per-task JVM size and added to the daemon heaps, must fit into system memory
(the worked example in the source is cut off at "16*…").
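That sizing rule lends itself to a quick calculator. Below is a minimal sketch, not part of the original handbook: the 2:1 map-to-reduce split and the 2 GB reserve for the OS and the DataNode/TaskTracker JVMs are assumptions; adjust them to the real machines.

# slot_sizing.py - suggest mapred-site.xml slot counts for one node
def suggest_slots(cores, ram_mb, child_heap_mb=200, reserved_mb=2048):
    """cores/ram_mb describe the node; child_heap_mb is the -Xmx of each
    task JVM (mapred.child.java.opts, default -Xmx200m)."""
    total = cores * 2                              # MAP + RED = CPU * 2 rule
    # never hand out more slots than the leftover RAM can back with task JVMs:
    total = min(total, max(1, (ram_mb - reserved_mb) // child_heap_mb))
    map_slots = max(1, round(total * 2 / 3))       # assumed 2:1 map:reduce split
    return map_slots, max(1, total - map_slots)

if __name__ == "__main__":
    m, r = suggest_slots(cores=8, ram_mb=16 * 1024)
    print("mapred.tasktracker.map.tasks.maximum    =", m)
    print("mapred.tasktracker.reduce.tasks.maximum =", r)

For an 8-core, 16 GB node this prints 11 map and 5 reduce slots; compare that with the deliberately conservative 3 map slots used for the small test VMs above.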
# Single-node startup, execution and troubleshooting

# Create the Hadoop mapred and hdfs directories for the namenode and datanode.
# as root:
mkdir -p /data/hadoop/
chown -R hadoop:hadoop /data/hadoop/
# as the hadoop user:
su hadoop
mkdir -p /data/hadoop/mapred/mrlocal
mkdir -p /data/hadoop/mapred/mrsystem
mkdir -p /data/hadoop/hdfs/name
mkdir -p /data/hadoop/hdfs/data

# Format the HDFS namenode, as the hadoop user:
su hadoop
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop namenode -format

# Start the master-node daemons (replace start with stop to shut them down):
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start namenode
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start secondarynamenode
# start the DataNode && TaskTracker:
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start datanode
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start tasktracker
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start jobtracker
# watch the logs while starting:
tail -f /opt/modules/hadoop/hadoop-1.0.3/logs/*

# Check the deployment through the web interface
# namenode UI (port 50070): 1 of 1 datanodes live
# jobtracker UI (port 50030): 1 of 1 tasktrackers live

# Run the Hadoop pi example to check that the installation works:
hadoop jar hadoop-examples-1.0.3.jar pi 10 100

12/07/15 10:50:48 INFO mapred.FileInputFormat: Total input paths to process : 10
12/07/15 10:50:48 INFO mapred.JobClient: Running job: job_…
12/07/15 10:50:49 INFO mapred.JobClient:  map 0% reduce 0%
12/07/15 10:51:42 INFO mapred.JobClient:  map 40% reduce …
12/07/15 10:52:07 INFO mapred.JobClient:  map 70% reduce …
12/07/15 10:52:10 INFO mapred.JobClient:  map 80% reduce …
12/07/15 10:52:11 INFO mapred.JobClient:  map 90% reduce …
12/07/15 10:52:22 INFO mapred.JobClient:  map 100% reduce …
12/07/15 10:52:28 INFO mapred.JobClient: Job complete: …
12/07/15 10:52:28 INFO mapred.JobClient:   Virtual memory (bytes) snapshot=…
12/07/15 10:52:28 INFO mapred.JobClient:   Map output records=…
Job Finished in 100.608 seconds
Estimated value of Pi is …

# Debugging step by step
# start the daemons one at a time (namenode, datanode, jobtracker, tasktracker):
hadoop-daemon.sh start xxxx
# check HDFS:
hadoop fs -ls /
hadoop fs -mkdir /data/
hadoop fs -put xxx.log /data/
# check MapReduce:
hadoop jar hadoop-examples-1.0.3.jar pi 10 100

# Common installation and deployment errors
- Wrong IP entries in the /etc/hosts file.
- Mistakes in mapred-site.xml (the detail is cut off in the source).
- Permission problems are the most common cause of services failing to start:
  the directories must belong to hadoop:hadoop and the daemons must be started
  as the hadoop user (su hadoop).
- If a service will not start, check the matching log:
tail -n 100 $HADOOP_HOME/logs/*namenode*     # namenode service log
tail -n 100 $HADOOP_HOME/logs/*datanode*     # datanode service log
tail -n 100 $HADOOP_HOME/logs/*jobtracker*   # jobtracker service log

Hadoop cluster installation and configuration

# Check that the Linux base environment on each node is sound; see the
# "RedHat Linux base environment setup" section.

# Configure passwordless key login from the master to the nodes
# as the hadoop user:
su hadoop
cd
ssh-keygen -q -t rsa -N "" -f /home/hadoop/.ssh/id_rsa
cd /home/hadoop/.ssh
[hadoop@master .ssh]$ cat id_rsa.pub
ssh-rsa …yko/TtGNWVOtESBT8/Ya3wBzZd+Ef2ppsWuBbMOhvwB++gqlIfmM5UtYJkfYuUMr6SuQAJ1W6n+gA3VHRWIS2stlEVQ+F… hadoop@master
# append the public key id_rsa.pub to authorized_keys:
cat id_rsa.pub > authorized_keys
# restrict the key file permissions on the master:
chmod go-rwx authorized_keys
# copy the master's authorized_keys to node1 (type the hadoop password once):
scp authorized_keys node1:/home/hadoop/.ssh/
# on node1, restrict the permissions likewise:
chmod go-rwx /home/hadoop/.ssh/authorized_keys
# verify login to the local machine; no password prompt means success:
ssh master
exit
# verify login to node1; no password prompt means success:
ssh node1
exit

# Verify key login from the master to every node as the hadoop user
su hadoop
# master to master:
ssh master
exit
# master to node1:
ssh node1
exit

# Configure the server lists that stop-all.sh and start-all.sh use
# conf/masters holds the hostname of the secondary namenode
# conf/slaves holds the hostnames of the datanode/tasktracker nodes
# copy the configured hadoop tree from the master to the node1 and node2
# servers, as the hadoop user:
su hadoop
scp -r /opt/modules/hadoop/hadoop-1.0.3/ node1:/opt/modules/hadoop/
# on each node, create the data directories and fix their modes:
ssh node1
mkdir -p /data/hadoop/mapred/mrsystem
mkdir -p /data/hadoop/hdfs/name
mkdir -p /data/hadoop/hdfs/data
chmod go-w /data/hadoop/hdfs/data

# Start and stop the whole cluster in one step:
start-all.sh
stop-all.sh

# Check the deployment through the web interface
# namenode UI: all datanodes listed; jobtracker UI: all tasktrackers listed
hadoop fs -ls /
hadoop fs -mkdir …

# Run the Hadoop pi example again to check the cluster; the output matches
# the single-node run shown earlier (Job Finished in 100.608 seconds).
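Checking key login by hand (ssh node1 / exit, node by node) gets tedious as the node list grows. Here is a scripted version of the same check; a sketch only, assuming the hadoop account and the master/node1/node2 host names used in this handbook.

# check_ssh.py - verify passwordless ssh from the master to every node
import subprocess

NODES = ["master", "node1", "node2"]      # hostnames from /etc/hosts

def can_login(host, user="hadoop"):
    # BatchMode=yes makes ssh fail instead of prompting for a password,
    # so exit code 0 proves that key-based login works.
    cmd = ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
           user + "@" + host, "true"]
    return subprocess.call(cmd) == 0

if __name__ == "__main__":
    for node in NODES:
        state = "ok" if can_login(node) else "FAILED (password still required?)"
        print(node, state)

Run it as the hadoop user on the master; every line should say ok before start-all.sh is attempted.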
Automated installation

To speed up installation across a fleet of servers, an automated install
script is used; a sample follows. Adapt the marked values (hosts, IPs,
package names) to the local environment. The packages referenced here are
for 64-bit servers; 32-bit installs need different packages.

# master auto-install script; save it to a file and execute it:
yum -y install lrzsz gcc gcc-c++ libstdc++-devel ntp    # base build environment
echo "0 1 * * * root /usr/sbin/ntpdate <ntp server>" >> /etc/crontab   # periodic time sync
/usr/sbin/ntpdate <ntp server>                          # one manual sync now
/usr/sbin/groupadd hadoop                               # add the hadoop group
/usr/sbin/useradd hadoop -g hadoop                      # add the hadoop user to it
# create the deployment and data directories for the hadoop user:
mkdir -p /opt/modules/hadoop/
mkdir -p /opt/data/hadoop/
# write the ip/hostname mapping into /etc/hosts (rack 1):
echo -e "127.0.0.1\tlocalhost.localdomain localhost\n::1\tlocalhost6.localdomain6 localhost6\n# rack 1" > /etc/hosts
# read this server's IP off eth0 and substitute it into the "collect" host entry:
IP=`/sbin/ifconfig eth0 | grep "inet addr" | awk -F":" '{print $2}' | awk '{print $1}'`
sed -i "s/^\tcollect/${IP}\tcollect/g" /etc/hosts
echo "---------------- env init finish and prepare su hadoop ----------------"
cd
# generate the login keys for the hadoop user:
sudo -u hadoop mkdir -p $HADOOP/.ssh
sudo -u hadoop ssh-keygen -q -t rsa -N "" -f $HADOOP/.ssh/id_rsa
cd $HADOOP/.ssh && echo "<paste the master id_rsa.pub here>" > authorized_keys
chmod go-rwx $HADOOP/.ssh/authorized_keys     # fix the file permissions
cd
# unpack a hadoop tree that already carries the finished configuration:
cd $HADOOP/hadoop
tar zxvf hadoop_gz.tar.gz
rpm -ivh jdk-6u21-linux-amd64.rpm
rpm -ivh lrzsz-0.12.20-19.x86_64.rpm
rpm -ivh hadoop-gpl-packaging-0.2.8-…        # LZO packaging; name truncated in the source
tar xzvf lzo-2.06.tar.gz
cd lzo-2.06 && ./configure --enable-shared && make && make install
cp /usr/local/lib/liblzo2.* /usr/lib/
cd ..
tar xzvf lzop-1.03.tar.gz
cd lzop-1.03
./configure && make && make install && cd ..
cp hadoop-node-….tar.gz …                     # node bundle; truncated in the source
chown -R hadoop:hadoop /opt/modules/hadoop/
chown -R hadoop:hadoop /home/hadoop

Enabling cluster LZO compression

# fetch the LZO sources and tools (the five wget URLs are truncated in the source):
wget …
# build lzo and lzop:
cd lzo-2.06
./configure && make && make install
cd ../lzop-1.03
./configure && make && make install

# point the task JVMs at the native libraries in mapred-site.xml:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Djava.library.path=/opt/hadoopgpl/native/Linux-amd64-64:/opt/modules/hadoop/hadoop-1.0.3/lib/native/Linux-amd64-64</value>
</property>

Enabling the task scheduler

# switch mapred-site.xml to the capacity scheduler (the property block is cut
# off in the source) and add the hive and streaming queues to
# capacity-scheduler.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- This is the configuration file for the resource manager in Hadoop. -->
<!-- You can configure various scheduling parameters related to queues. -->
<!-- The properties for a queue follow a naming convention, such as, -->
<!-- mapred.capacity-scheduler.queue.<queue-name>.property-name. -->
<configuration>
  <property>
    <name>mapred.capacity-scheduler.maximum-system-jobs</name>
    <value>…</value>
    <description>Maximum number of jobs in the system which can be initialized concurrently by the scheduler.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.capacity</name>
    <value>…</value>
    <description>Percentage of the number of slots in the cluster that are to be available for jobs in this queue.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.maximum-capacity</name>
    <value>-1</value>
    <description>maximum-capacity defines a limit beyond which a queue cannot use the capacity of the cluster. This provides a means to limit how much excess capacity a queue can use. By default there is no limit. The maximum-capacity of a queue can only be greater than or equal to its minimum capacity; the default value of -1 means a queue can use the complete capacity of the cluster. This property can be used to stop long-running jobs from occupying more than a certain percentage of the cluster, which, in the absence of pre-emption, could affect the capacity guarantees of other queues. Note that maximum-capacity is a percentage, so the absolute maximum grows as nodes or racks are added to the cluster.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.supports-priority</name>
    <value>…</value>
    <description>If true, priorities of jobs will be taken into account in scheduling decisions.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.minimum-user-limit-percent</name>
    <value>…</value>
    <description>Each queue enforces a limit on the percentage of resources allocated to a user at any given time, if there is competition for them. This user limit can vary between a minimum and maximum value. The former depends on the number of users who have submitted jobs, and the latter is set to this property value. For example, suppose the value of this property is 25. If two users have submitted jobs to a queue, no single user can use more than 50% of the queue resources. If a third user submits a job, no single user can use more than 33% of the queue resources. With 4 or more users, no user can use more than 25% of the queue's resources. A value of 100 implies no user limits are imposed.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.user-limit-factor</name>
    <value>…</value>
    <description>The multiple of the queue capacity which can be configured to allow a single user to acquire more slots.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks</name>
    <value>…</value>
    <description>The maximum number of tasks, across all jobs in the queue, which can be initialized concurrently. Once the queue's jobs exceed this limit they will be queued on disk.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks-per-user</name>
    <value>…</value>
    <description>The maximum number of tasks per user, across all of the user's jobs in the queue, which can be initialized concurrently. Once the user's jobs exceed this limit they will be queued on disk.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.default.init-accept-jobs-factor</name>
    <value>…</value>
    <description>The multiple of (maximum-system-jobs * queue-capacity) used to determine the number of jobs which are accepted by the scheduler.</description>
  </property>

  <!-- The default configuration settings for the capacity task scheduler -->
  <!-- The default values would be applied to all the queues which don't have -->
  <!-- the appropriate property for the particular queue -->
  <property>
    <name>mapred.capacity-scheduler.default-supports-priority</name>
    <value>…</value>
    <description>If true, priorities of jobs will be taken into account in scheduling decisions by default in a job queue.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.default-minimum-user-limit-percent</name>
    <value>…</value>
    <description>The percentage of the resources limited to a particular user for the job queue at any given point of time by default.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.default-user-limit-factor</name>
    <value>…</value>
    <description>The default multiple of queue-capacity which is used to determine the amount of slots a single user can consume concurrently.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.default-maximum-active-tasks-per-queue</name>
    <value>…</value>
    <description>The default maximum number of tasks, across all jobs in the queue, which can be initialized concurrently. Once the queue's jobs exceed this limit they will be queued on disk.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.default-maximum-active-tasks-per-user</name>
    <value>…</value>
    <description>The default maximum number of tasks per user, across all of the user's jobs in the queue, which can be initialized concurrently. Once the user's jobs exceed this limit they will be queued on disk.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.default-init-accept-jobs-factor</name>
    <value>…</value>
    <description>The default multiple of (maximum-system-jobs * queue-capacity) used to determine the number of jobs which are accepted by the scheduler.</description>
  </property>

  <!-- Capacity scheduler Job Initialization configuration parameters -->
  <property>
    <name>mapred.capacity-scheduler.init-poll-interval</name>
    <value>…</value>
    <description>The amount of time in milliseconds which is used to poll the job queues for jobs to initialize.</description>
  </property>
  <property>
    <name>mapred.capacity-scheduler.init-worker-threads</name>
    <value>…</value>
    <description>Number of worker threads which would be used by the initialization poller to initialize jobs in a set of queues. If the number mentioned in the property equals the number of job queues, a single thread initializes jobs in one queue; if smaller, a thread gets a set of queues assigned; if greater, the number of threads equals the number of job queues.</description>
  </property>

  <!-- hive queue -->
  <property><name>mapred.capacity-scheduler.queue.hive.capacity</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.maximum-capacity</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.supports-priority</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.minimum-user-limit-percent</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.user-limit-factor</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.maximum-initialized-active-tasks</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.maximum-initialized-active-tasks-per-user</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.hive.init-accept-jobs-factor</name><value>…</value></property>

  <!-- streaming queue -->
  <property><name>mapred.capacity-scheduler.queue.streaming.capacity</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.maximum-capacity</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.supports-priority</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.minimum-user-limit-percent</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.user-limit-factor</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.maximum-initialized-active-tasks</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.maximum-initialized-active-tasks-per-user</name><value>…</value></property>
  <property><name>mapred.capacity-scheduler.queue.streaming.init-accept-jobs-factor</name><value>…</value></property>
</configuration>
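Once the hive and streaming queues are defined, it is worth confirming that the scheduler actually routes jobs into them. The handbook does not show this step; the sketch below simply submits the pi example to each queue, using the standard Hadoop 1.x property mapred.job.queue.name to pick the queue.

# queue_check.py - submit the pi example to every configured queue
import subprocess

HADOOP = "/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop"
JAR = "/opt/modules/hadoop/hadoop-1.0.3/hadoop-examples-1.0.3.jar"
QUEUES = ["default", "hive", "streaming"]

for queue in QUEUES:
    # -D options are understood here because the examples run via ToolRunner
    rc = subprocess.call([HADOOP, "jar", JAR, "pi",
                          "-Dmapred.job.queue.name=" + queue, "10", "100"])
    print("queue " + queue + " -> exit code " + str(rc))

A job aimed at an undefined queue fails immediately, so three zero exit codes confirm that the capacity-scheduler.xml above was picked up.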
Enabling rack awareness

# add the rack-awareness script to the hadoop core-site.xml configuration file:
<property>
  <name>topology.script.file.name</name>
  <value>/opt/modules/hadoop/hadoop-1.0.3/bin/RackAware.py</value>
</property>

# the RackAware.py script; it maps a host name (or IP) to a rack path:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import sys

# one entry per host; the full map is truncated in the source
rack = {"hadoopnode-101": "rack1",
        }

if __name__ == "__main__":
    # Hadoop calls the script with a host name or IP and reads the rack path
    # from stdout; unknown hosts fall back to a default rack (the fallback
    # name is an assumption, the source is cut off here)
    print "/" + rack.get(sys.argv[1], "default-rack")

Configuration reference

# Hadoop configuration files in detail
# core-site.xml holds the common settings; hdfs-site.xml and mapred-site.xml
# are loaded when HDFS and MapReduce start.

hadoop-env.sh
- JAVA_HOME: the path of the JDK.
- lib/native/Linux-amd64-64: native libraries for the Lzo, Snappy and gzip compression codecs.
- HADOOP_HEAPSIZE: the maximum daemon heap size, default 1000.

core-site.xml
- fs.default.name: the default file system; the default port is 8020.
- fs.checkpoint.dir: checkpoint directories of the secondary NameNode; several
  directories may be listed to hold redundant copies of the editlog and fsimage.
- merge trigger period for editlog and fsimage: 30 minutes.
- merge trigger size for the editlog: 32M.
- file trash cleanup period: 24 hours.
- compression codec: LZO.
- buffer size: the 4K default is too small; 64k (65536) or 128k (131072) is more common.
- the rack-awareness script (see above).

hdfs-site.xml
- dfs.name.dir: where the NameNode persists metadata and the transaction log.
  Several directories can be given; keeping one on NFS allows fast recovery if
  the host goes down.
- dfs.data.dir: where the DataNode stores its blocks; several directories can be given.
- the HDFS web interface address (port 50070).
- the secondary namenode http address, used for the checkpoint data.
- number of block replicas, default 3.
- reserved space per datanode disk.
- HDFS block size, default 64M.
- how many files a datanode may have open at once; the default of 256 is too small.
- permissions: default true, the permission system described earlier is on;
  set false to switch the checks off.
- file append support (mainly to support …; truncated in the source).

mapred-site.xml
- mapred.job.tracker: host name and port of the JobTracker RPC server.
- directory list for intermediate job data; cleared when the job ends.
- the shared directory used while jobs run; must live on HDFS.
- mapred.tasktracker.map.tasks.maximum: the most map tasks run on one tasktracker.
- mapred.tasktracker.reduce.tasks.maximum (4 in the source): the most reduce
  tasks per tasktracker; as a rule, MAP + RED = CPU × 2.
- mapred.child.java.opts: JVM options for the task processes, default -Xmx200m.
- compressing map output for the transfer shortens the transfer time; the Lzo
  library is used as the compression codec and its loading must be enabled.
- the capacity scheduler settings configure the scheduler queues.
- memory for the in-memory file system that merges map output in the reduce phase.
- memory ceiling for the reduce-side sort (default value truncated in the source).
- parallel copies: by default 5 copy threads fetch map output in the reduce phase.

masters and slaves files
- masters: the host name of the SecondaryNameNode.
- slaves: the list of DataNode/TaskTracker host names.

Reference documentation (the host part is truncated in the source; these are
the Apache default-configuration pages):
http://hadoop.apache.org/common/docs/r0.20.2/core-default.html
http://hadoop.apache.org/common/docs/r0.20.2/hdfs-default.html
http://hadoop.apache.org/common/docs/r0.20.0/mapred-default.html
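Since the same handful of values has to stay consistent across three files and many nodes, it can help to render the *-site.xml files from one script instead of editing them by hand. A minimal sketch, not from the handbook, using example values discussed above (only core-site.xml shown; the other files work the same way):

# gen_site_xml.py - render a Hadoop *-site.xml file from a plain dict
from xml.sax.saxutils import escape

def render(props):
    lines = ['<?xml version="1.0"?>',
             '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>',
             '<configuration>']
    for name, value in sorted(props.items()):
        lines += ['  <property>',
                  '    <name>%s</name>' % escape(name),
                  '    <value>%s</value>' % escape(str(value)),
                  '  </property>']
    lines.append('</configuration>')
    return '\n'.join(lines)

core_site = {
    "fs.default.name": "hdfs://master:9000",   # assumed port; set your own
    "fs.checkpoint.period": 1800,              # seconds: merge every 30 minutes
    "fs.trash.interval": 1440,                 # minutes: keep trash one day
    "io.file.buffer.size": 131072,             # 128k instead of the 4k default
}

if __name__ == "__main__":
    with open("core-site.xml", "w") as f:
        f.write(render(core_site))

Distribute the generated files with the same scp step used for the hadoop tree above.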
# Recommended machine configuration
(Much of this table is lost in the source; the recoverable cells are:)

No. | Machine type                          | Cores | Disk          | Network
----+---------------------------------------+-------+---------------+-------------------------
 1  | Switch                                |  –    | –             | Gigabit Ethernet switch
 2  | NameNode and management node, 1U      |  …    | RAID10        | Gigabit Ethernet
 3  | …                                     |  …    | RAID10        | Gigabit Ethernet
 4  | Load balancer (HA…)                   |  2    | …             | Gigabit Ethernet
 5  | Data and compute nodes (heavier load) |  N    | SATA, no RAID | Gigabit Ethernet
 6  | Nagios monitoring                     |  1    | …             | Gigabit Ethernet
 7  | Hive MetaStore DB server (MySQL)      |  2    | SATA          | Gigabit Ethernet

Hive warehouse cluster deployment

Components and data used in this chapter:
- Hive: an open-source data warehouse system; HQL statements operate on the data in the Hadoop cluster.
- MySQL: an open-source database system; Hive keeps its metadata in MySQL.
- FineReport: used for report presentation.
- Sample data: the Alexa top-sites list (an internationally known site-ranking service) and Sogou data.

What Hive does and how it works

# A Hadoop warehouse and a traditional data warehouse cooperate; compared
# with traditional warehouse technology, the two complement rather than
# replace each other.
(Figure in the source: web traffic collected through Nginx and HTTP load
processes feeds the Hadoop/Hive warehouse; the traditional warehouse, Excel
and mail/data-quality reports serve the business users on the other side.)

# Hive internal structure
(Figure in the source: clients, the Hive CLI and the JDBC driver, talk to the
Hive driver, which compiles and executes the HQL; the metastore is kept in
MySQL.)

Hive deployment and installation

# Install the Hadoop cluster; see the EasyHadoop cluster deployment chapter.

# Compile and install MySQL; make sure the gcc/g++ packages are present.
# configure flags used for the build (reassembled from the source):
./configure '--enable-assembler' '--enable-local-infile' '--with-fast-mutexes' \
  '--with-user=mysql' '--with-unix-socket-path=/var/lib/mysql/mysql.sock' \
  '--with-pic' '--prefix=/opt/modules/mysql/' '--with-extra-charsets=complex' \
  '--with-ssl' '--sysconfdir=/etc' '--datadir=/opt/data/modules/mysql/' \
  '--enable-thread-safe-client' '--with-readline' '--with-innodb' \
  '--with-plugin-innodb_plugin' '--without-ndbcluster' \
  '--with-archive-storage-engine' '--with-csv-storage-engine' \
  '--with-blackhole-storage-engine' '--with-federated-storage-engine' \
  '--without-…' '--enable-shared' 'CC=gcc' 'CFLAGS=-O2 -g -pipe' 'LDFLAGS=' \
  'CXX=gcc' 'CXXFLAGS=-O2 -g -pipe -felide-constructors' \
  '--with-comment=MySQL Community Server (GPL)' '--with-mysqld-ldflags=-static' \
  '--with-embedded-server'
make && make install

# or install the MySQL RPM packages (download URL truncated in the source):
rpm -ivh MySQL-server-…
rpm -ivh MySQL-client-…

# start the mysql service:
[root@master ~]# /sbin/service mysql restart
Shutting down MySQL. [ OK ]
Starting MySQL.      [ OK ]

# connect to the MySQL service:
mysql -uroot -p

# Add the Hive user name and password and grant it access. For example, with
# user hive and password hive:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'hive' WITH GRANT OPTION;
# or restrict the account to the 192.168.1.% network:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'192.168.1.%' IDENTIFIED BY 'hive' WITH GRANT OPTION;
# without a matching grant, connections are refused:
[root@master ~]# mysql -uhive -phive -h …
ERROR 1130 (HY000): Host '::ffff:…' is not allowed to connect to this MySQL server

# Unpack the Hive package and configure the JDBC connection
mkdir -p /opt/modules/hive/
cd /opt/modules/
tar -xzvf hive-0.9.0.tar.gz
# lower the heap for the Hive CLI (the default is larger than needed):
export HADOOP_HEAPSIZE=64

vi /opt/modules/hive/hive-0.9.0/conf/hive-site.xml
(the property names were lost in the source; those shown are the standard
metastore settings that match the surviving descriptions)
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://…/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

# create the Hive metastore database (log in to mysql first):
mysql> create database hive;

# Start the Hive thrift server on port 10001:
/opt/modules/hive/hive-0.9.0/bin/hive --service hiveserver 10001
[root@hadoop-231 bin]# netstat -nap | grep 10001
tcp   0   0 0.0.0.0:10001   0.0.0.0:*   LISTEN   …

# Start the built-in Hive UI (the detail is cut off in the source; hwi is
# Hive's built-in web interface service):
/opt/modules/hive/hive-0.9.0/bin/hive --service hwi

Basic usage of the Hive CLI

# log in and run a query:
[root@hadoop-231 bin]# ./hive
hive> show databases;
OK
default
Time taken: 3.103 seconds

# query from an HQL file:
vi test.hql
use test;
select * from test_text limit 10;

[root@hadoop-231 bin]# ./hive -f test.hql
…
Time taken: 3.306 seconds

# one-liner mode:
./hive -e "select * from test.test_text limit 30"

Basic HQL syntax (create, load, query and drop tables)

# a quick walk-through in the Hive CLI:
CREATE database alexa;                    -- create the database
use alexa;

-- create a partitioned table (the field delimiter is lost in the source;
-- '\t' is assumed):
CREATE TABLE uid (uid string)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY '\n'
STORED AS TEXTFILE;

-- load data from HDFS into the table:
LOAD DATA INPATH '/data/uid.txt' OVERWRITE INTO TABLE uid;

-- query:
SELECT * FROM alexa.top100w LIMIT …;
SELECT * FROM alexa.top100w WHERE …;

-- create an external table over files already in HDFS (the column list and
-- delimiter are partly lost in the source; the LOCATION clause is
-- reconstructed from the HDFS path below):
CREATE EXTERNAL TABLE top100w (id STRING, …)
COMMENT 'alexa top100w'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '…'
COLLECTION ITEMS TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/data/dw/alexa/top100w/';

# upload the data file with put (or copyFromLocal):
hadoop fs -mkdir /data/dw/alexa/top100w/
hadoop fs -put /root/top-1m.csv /data/dw/alexa/top100w/
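The table of contents promises a final step the preview cuts off: export query results with the Hive CLI, load them into MySQL, and schedule the job with crontab. Below is a minimal sketch of that flow, with placeholder table names, the hive/hive account created above, and two facts relied on: the Hive CLI prints query results tab-separated, and the MySQL build above enables local-infile.

# daily_export.py - run an HQL query, save it as TSV, bulk-load it into MySQL
import datetime
import subprocess

HIVE = "/opt/modules/hive/hive-0.9.0/bin/hive"
dt = datetime.date.today().strftime("%Y-%m-%d")
out_file = "/tmp/top100w_%s.tsv" % dt

# 1. run the query through the Hive CLI; stdout is tab-separated text
with open(out_file, "w") as out:
    subprocess.check_call([HIVE, "-e",
                           "select * from alexa.top100w limit 100;"],
                          stdout=out)

# 2. bulk-load the file into a MySQL reporting table (name is a placeholder)
load_sql = ("LOAD DATA LOCAL INFILE '%s' "
            "INTO TABLE report.top100w_daily;" % out_file)
subprocess.check_call(["mysql", "-uhive", "-phive", "--local-infile=1",
                       "-e", load_sql])
print("loaded " + out_file)

Scheduled daily, in the same crontab style the handbook uses for ntpdate:
0 2 * * * root /usr/bin/python /opt/scripts/daily_export.py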
