Kafka Installation, Configuration, and Usage Guide (Tieshu, 2018-08-08)
(Windows platform, 5 distributed nodes, message size adjustment, sample client programs)

Installation and configuration

Five servers are used as cluster nodes, with IP addresses XX.XX.0.12 through XX.XX.0.16. On each machine, JDK, ZooKeeper, and Kafka are installed and configured in that order: set up one machine completely, copy the installation to the other machines, then adjust the per-node configuration files.

JDK installation and configuration

JDK version: jdk1.7.0_51_x64 (archive edition), extracted into the C:\kafka directory.
Set the environment variables:
JAVA_HOME: C:\kafka\jdk1.7.0_51_x64
PATH: C:\kafka\jdk1.7.0_51_x64\bin

ZooKeeper installation and configuration

Extract and install
ZooKeeper version: zookeeper-3.4.12, extracted into the C:\kafka directory.

Create the ZooKeeper data and log directories
zkdata     # snapshots: C:\kafka\zookeeper-3.4.12\zkdata
zkdatalog  # transaction logs: C:\kafka\zookeeper-3.4.12\zkdatalog

Modify the configuration file
In the conf directory of the ZooKeeper installation, copy zoo_sample.cfg (the sample file shipped with ZooKeeper) and rename the copy to zoo.cfg (the file name ZooKeeper expects). The default content is that of zoo_sample.cfg; after modification the configuration file is:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial synchronization phase can take
initLimit=10
# The number of ticks that can pass between sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored; do not use /tmp for storage
dataDir=C:/kafka/zookeeper-3.4.12/zkdata
dataLogDir=C:/kafka/zookeeper-3.4.12/zkdatalog
# the port at which the clients will connect
clientPort=12181
server.1=XX.XX.0.12:12888:13888
server.2=XX.XX.0.13:12888:13888
server.3=XX.XX.0.14:12888:13888
server.4=XX.XX.0.15:12888:13888
server.5=XX.XX.0.16:12888:13888
# the maximum number of client connections;
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the administrator guide before turning on autopurge:
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=100
# Purge task interval in hours; set to "0" to disable the auto purge feature
autopurge.purgeInterval=24

Configuration notes:
# tickTime: the heartbeat interval, in milliseconds, used between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
# initLimit: how many tick intervals a follower (the "clients" here are the follower servers of the ensemble connecting to the leader, not application clients) may take for its initial connection to the leader. If the leader has received nothing back after 10 heartbeat intervals, the connection is considered failed; the total allowance is 10 * 2000 ms = 20 s.
# syncLimit: the maximum number of tick intervals allowed between a request and its acknowledgement when the leader and a follower exchange messages; the total allowance is 5 * 2000 ms = 10 s.
# dataDir: where snapshot files are stored.
# dataLogDir: where transaction logs are stored. If it is not set, transaction logs go to the dataDir directory, which seriously hurts ZooKeeper performance: under high throughput ZooKeeper produces a large number of transaction logs and snapshots.
# clientPort: the port ZooKeeper listens on for client connections; it is changed here from the default to the higher value 12181.
Periodic cleanup of old snapshots and transaction logs is enabled with the autopurge.snapRetainCount and autopurge.purgeInterval parameters, both set in zoo.cfg.

autopurge.purgeInterval: the purge frequency in hours; it must be set to 1 or a larger integer. The default is 0, which disables automatic purging.
autopurge.snapRetainCount: used together with the previous parameter; the number of snapshot files to retain. The default is 3.

Create the myid file
In the "C:\kafka\zookeeper-3.4.12\zkdata" directory, create a file named myid (no extension) whose content is the server number assigned to this machine's IP address; for server.1 the content is 1.
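Once the ZooKeeper nodes are running (see "Starting the services" below), the ensemble can be checked from Java before Kafka is brought up. The following is a minimal sketch, not part of the original document; it assumes the ZooKeeper 3.4.12 client jar (shipped in the Kafka libs directory) is on the classpath, and the class name and timeout values are illustrative.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkConnectivityCheck {
    public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // connect to one ensemble member on the clientPort configured above
        ZooKeeper zk = new ZooKeeper("XX.XX.0.12:12181", 5000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        if (connected.await(10, TimeUnit.SECONDS)) {
            // listing the root znode proves the session is usable
            System.out.println("Connected, root znodes: " + zk.getChildren("/", false));
        } else {
            System.out.println("Could not connect to ZooKeeper within 10 seconds");
        }
        zk.close();
    }
}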

Kafka installation and configuration

Extract and install
Kafka version: kafka_2.11-1.1.1, extracted into the C:\kafka directory.

Create the message directory
kafkalogs: C:\kafka\kafka_2.11-1.1.1\kafkalogs

Modify the configuration file
Open C:\kafka\kafka_2.11-1.1.1\config\server.properties. The actual changes are:

broker.id=1
listeners=PLAINTEXT://:19092
log.dirs=C:/kafka/kafka_2.11-1.1.1/kafkalogs
# add the following three lines below log.retention.hours=168 (maximum message size 1 GB)
message.max.bytes=1073741824
replica.fetch.max.bytes=1073741824
log.segment.bytes=1073741824
default.replication.factor=2
# ZooKeeper connection addresses and port
zookeeper.connect=XX.XX.0.12:12181,XX.XX.0.13:12181,XX.XX.0.14:12181,XX.XX.0.15:12181,XX.XX.0.16:12181

Configuration notes:
broker.id=0  # unique id of this machine within the cluster, same idea as ZooKeeper's myid
port=19092  # the port Kafka serves on; the default is 9092
host.name=XX.XX.0.12  # disabled by default; 0.8.1 had a DNS-resolution bug here that caused send failures
num.network.threads=3  # number of threads the broker uses for network processing
num.io.threads=8  # number of threads the broker uses for disk I/O
log.dirs=/opt/kafka/kafkalogs/  # where messages are stored; may be a comma-separated list of directories, and num.io.threads should be at least the number of directories; when several directories are configured, a newly created topic persists its messages in whichever directory currently holds the fewest partitions
socket.send.buffer.bytes=102400  # send buffer size; data is buffered until a certain amount has accumulated and is then sent, which improves performance
socket.receive.buffer.bytes=102400  # receive buffer size; data is written to disk once a certain amount has arrived
socket.request.max.bytes=104857600  # maximum size of a request sent to or fetched from Kafka; must not exceed the JVM heap size
num.partitions=1  # default number of partitions per topic
log.retention.hours=168  # default maximum message retention time: 168 hours, i.e. 7 days
message.max.bytes=5242880  # maximum message size the broker accepts, 5 MB
default.replication.factor=2  # number of replicas Kafka keeps of each message, so that if one replica fails another can still serve
replica.fetch.max.bytes=5242880  # maximum number of bytes fetched per replica request
log.segment.bytes=1073741824  # Kafka appends messages to segment files; when a segment exceeds this size a new file is started
log.retention.check.interval.ms=300000  # check every 300000 ms whether segments have exceeded the retention time (log.retention.hours=168) and delete any expired messages
log.cleaner.enable=false  # whether to enable log compaction; normally left off, enabling it can improve performance
zookeeper.connect  # the ZooKeeper connection addresses (set above to the five nodes on port 12181)

Configuring the other nodes
Copy the fully configured c:\kafka directory to the C: drive of each of the other nodes and change the following settings.
1. Java environment variables:
JAVA_HOME: C:\kafka\jdk1.7.0_51_x64
PATH: C:\kafka\jdk1.7.0_51_x64\bin
2. ZooKeeper myid: in C:\kafka\zookeeper-3.4.12\zkdata\myid, set the value for the node:
XX.XX.0.12: 1
XX.XX.0.13: 2
XX.XX.0.14: 3
XX.XX.0.15: 4
XX.XX.0.16: 5
3. Kafka configuration: in C:\kafka\kafka_2.11-1.1.1\config\server.properties, set broker.id for the node:
XX.XX.0.12: 1
XX.XX.0.13: 2
XX.XX.0.14: 3
XX.XX.0.15: 4
XX.XX.0.16: 5

Starting the services

Start ZooKeeper
C:\kafka\zookeeper-3.4.12\bin\zkServer.cmd
Double-click it on XX.XX.0.12 through XX.XX.0.16, one node after another.

Start Kafka
Open cmd, cd into the Kafka directory, and run:
C:\kafka\kafka_2.11-1.1.1> .\bin\windows\kafka-server-start.bat .\config\server.properties

Testing the services

Create a topic
Open cmd and go to C:\kafka\kafka_2.11-1.1.1\bin\windows
C:\kafka\kafka_2.11-1.1.1\bin\windows> kafka-topics.bat --create --zookeeper localhost:12181 --replication-factor 1 --partitions 1 --topic test001

Open a Producer
Open cmd and go to C:\kafka\kafka_2.11-1.1.1\bin\windows
C:\kafka\kafka_2.11-1.1.1\bin\windows> kafka-console-producer.bat --broker-list localhost:19092 --topic test001
The > prompt then waits for message input.

Open a Consumer
Open cmd and go to C:\kafka\kafka_2.11-1.1.1\bin\windows
C:\kafka\kafka_2.11-1.1.1\bin\windows> kafka-console-consumer.bat --zookeeper localhost:12181 --topic test001
Messages typed in the Producer window now appear in the Consumer window almost immediately.

List all topics
C:\Users\Develop> C:\kafka\kafka_2.11-1.1.1\bin\windows\kafka-topics.bat --list --zookeeper localhost:12181

Show topic partitions and replicas
C:\Users\Develop> C:\kafka\kafka_2.11-1.1.1\bin\windows\kafka-topics.bat --describe --zookeeper localhost:12181

Adjusting the message size

Kafka achieves its best throughput with messages of around 10 KB; the default configuration accepts messages of at most about 1 MB. To transfer larger messages, the broker's server.properties and the consumer and producer settings all have to be changed.

server.properties changes: open C:\kafka\kafka_2.11-1.1.1\config\server.properties (here sized for a 1 GB maximum)
message.max.bytes=1073741824
replica.fetch.max.bytes=1073741824
log.segment.bytes=1073741824

Consumer configuration:
max.partition.fetch.bytes=1073741824

Producer configuration (the first property name is garbled in the source; max.request.size is the producer-side request size limit):
max.request.size=1073741824
buffer.memory=1073741824  # default 33554432, i.e. 32 MB

If buffer.memory is too small the producer fails with errors such as: "The message is 36428062 bytes when serialized which is larger than the total memory buffer you have configured with the buffer.memory configuration."
Very large payloads can also cause out-of-memory errors and require tuning the timeout-related parameters.
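The same producer-side limits can also be set in code. The sketch below is not part of the original document; the class name and the 10 MB demo payload are illustrative, the property names are standard Kafka client settings, and the broker-side limits above must already be in place.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LargeMessageProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "XX.XX.0.12:19092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        // producer-side limits for large messages; the broker must allow them too
        // (message.max.bytes, replica.fetch.max.bytes in server.properties)
        props.put("max.request.size", "1073741824");  // largest request the producer will send
        props.put("buffer.memory", "1073741824");     // default is 33554432 (32 MB)
        props.put("request.timeout.ms", "300000");    // large transfers may also need longer timeouts
        Producer<String, byte[]> producer = new KafkaProducer<String, byte[]>(props);
        // send a 10 MB dummy payload to the test topic
        producer.send(new ProducerRecord<String, byte[]>("test001", "big-payload", new byte[10 * 1024 * 1024]));
        producer.flush();
        producer.close();
    }
}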
Java program examples

Producer examples

Sample properties file:

## producer
bootstrap.servers=XX.XX.0.12:19092,XX.XX.0.13:19092,XX.XX.0.14:19092,XX.XX.0.15:19092,XX.XX.0.16:19092
producer.type=sync
request.required.acks=1
## consumer
enable.auto.commit=true
# latest, earliest, none
auto.offset.reset=earliest

It is recommended to keep the shared parameters (such as the server addresses) in the properties file and to set the remaining parameters in code as each interface requires.

// Create the Producer
private Producer<Integer, String> createProducer() {
    Properties props = new Properties();
    // the properties file name is truncated to "...perties" in the source; point this at the file shown above
    String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
    try {
        FileInputStream fis = new FileInputStream(new File(path));
        props.load(fis);
        props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        fis.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return new KafkaProducer<Integer, String>(props);
}

Properties notes (with the new Java producer used in these examples, this setting is named acks):
# 0: the producer does not wait for an ack from the broker
# 1: the leader sends an ack once it has received the message
# all (-1): an ack is sent only after all followers have replicated the message
request.required.acks=0

Topic + VALUE

package kjsp.kafka.producer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicValue {
    // create the Producer
    private Producer<String, String> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<String, String>(props);
    }

    public static void main(String[] args) {
        // message topic
        String topicName = "test001";
        TopicValue topicValueProducer = new TopicValue();
        Producer<String, String> producer = topicValueProducer.createProducer();
        producer.send(new ProducerRecord<String, String>(topicName, "消息:TopicValue"));
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }
}

Topic + KEY + VALUE

(Integer, String)

package kjsp.kafka.producer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicIntegerString {
    // create the Producer
    private Producer<Integer, String> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<Integer, String>(props);
    }

    public static void main(String[] args) {
        // message topic
        String topicName = "test001";
        TopicIntegerString topicValueProducer = new TopicIntegerString();
        Producer<Integer, String> producer = topicValueProducer.createProducer();
        producer.send(new ProducerRecord<Integer, String>(topicName, 1, "消息:TopicIntegerString1"));
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }
}

(String, String)

package kjsp.kafka.producer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicStringString {
    // create the Producer
    private Producer<String, String> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<String, String>(props);
    }

    public static void main(String[] args) {
        // message topic
        String topicName = "test001";
        TopicStringString topicValueProducer = new TopicStringString();
        Producer<String, String> producer = topicValueProducer.createProducer();
        producer.send(new ProducerRecord<String, String>(topicName, "TopicStringString001", "消息:TopicStringString001"));
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }
}
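The examples above differ only in their key and value serializers; when a key is present, the default partitioner hashes it to pick the partition. As a side note, not in the original document, a record can also name its partition explicitly through the ProducerRecord(topic, partition, key, value) constructor; the class name below is illustrative and the topic must have been created with enough partitions.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicExplicitPartition {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "XX.XX.0.12:19092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // partition 0 is chosen explicitly instead of by hashing the key
        producer.send(new ProducerRecord<String, String>("test001", 0, "explicit-key", "消息:ExplicitPartition"));
        producer.flush();
        producer.close();
    }
}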
(String, byte[])

package kjsp.kafka.producer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicStringByte {
    // create the Producer (not private, so that SendMsgFileTest below can reuse it)
    Producer<String, byte[]> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<String, byte[]>(props);
    }

    public static void main(String[] args) {
        // message topic
        String topicName = "test001";
        TopicStringByte topicValueProducer = new TopicStringByte();
        Producer<String, byte[]> producer = topicValueProducer.createProducer();
        producer.send(new ProducerRecord<String, byte[]>(topicName, "TopicStringByte001", "消息:TopicStringByte001".getBytes()));
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }
}

(byte[], byte[])

package kjsp.kafka.producer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicByteByte {
    // create the Producer
    private Producer<byte[], byte[]> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<byte[], byte[]>(props);
    }

    public static void main(String[] args) {
        // message topic
        String topicName = "test001";
        TopicByteByte topicValueProducer = new TopicByteByte();
        Producer<byte[], byte[]> producer = topicValueProducer.createProducer();
        producer.send(new ProducerRecord<byte[], byte[]>(topicName, "TopicByteByte001".getBytes(), "消息:TopicByteByte001".getBytes()));
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }
}

Sending a file as a message

package kjsp.kafka.producer;

import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

/**
 * Message-sending class.
 * @author Develop
 */
public class SendMsgFileTest {
    // read a file into a byte array by its file name
    public static byte[] getFileBytes(String fileName) {
        byte[] buffer = null;
        FileInputStream fis = null;
        ByteArrayOutputStream bos = null;
        try {
            File file = new File(fileName);
            fis = new FileInputStream(file);
            long fileSize = file.length();
            if (fileSize > Integer.MAX_VALUE) {
                System.out.println("文件太大,无法处理!");
                return null;
            }
            bos = new ByteArrayOutputStream((int) fileSize);
            byte[] b = new byte[1024];
            int len = 0;
            while ((len = fis.read(b, 0, 1024)) != -1) {
                bos.write(b, 0, len);
            }
            buffer = bos.toByteArray();
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (bos != null)
                try {
                    bos.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            if (fis != null)
                try {
                    fis.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
        }
        return buffer;
    }

    public static void main(String[] args) {
        try {
            // message topic
            String topicName = "test001";
            TopicStringByte topicValueProducer = new TopicStringByte();
            Producer<String, byte[]> producer = topicValueProducer.createProducer();
            System.out.println("开始发送消息!");
            int count = 0;
            while (count < 1) {
                count++;
                String filePath = "C:/kafka/workspace/linecount3.7.rar";
                String key = count + "_" + filePath.substring(filePath.lastIndexOf("/") + 1);
                byte[] buffer = getFileBytes(filePath);
                ProducerRecord<String, byte[]> pr = new ProducerRecord<String, byte[]>(topicName, key, buffer);
                producer.send(pr);
                Thread.sleep(100);
            }
            producer.flush();
            producer.close();
            System.out.println("消息发送完成!");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
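getFileBytes above copies the file through an intermediate stream; on Java 7 and later the same bytes can be read in a single call. A small sketch, not part of the original document (the class name is illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FileBytes {
    // equivalent of getFileBytes(fileName) for files smaller than 2 GB
    public static byte[] getFileBytes(String fileName) throws IOException {
        return Files.readAllBytes(Paths.get(fileName));
    }
}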
Using a callback

package kjsp.kafka.producer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CallbackProducer {
    // create the Producer
    private Producer<String, byte[]> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<String, byte[]>(props);
    }

    public static void main(String[] args) {
        // message topic
        String topicName = "test001";
        CallbackProducer topicValueProducer = new CallbackProducer();
        Producer<String, byte[]> producer = topicValueProducer.createProducer();
        producer.send(new ProducerRecord<String, byte[]>(topicName, "TopicStringByte001", "消息:TopicStringByte001".getBytes()),
                new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception exception) {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println(metadata.toString());
                            System.out.println(metadata.offset());
                        }
                    }
                });
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }
}
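Besides taking a Callback, send() also returns a Future<RecordMetadata>; calling get() on it blocks until the broker acknowledges the record, which is the simplest way to send synchronously. A sketch, not part of the original document (the class name is illustrative; the addresses and topic follow the examples above):

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSendProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "XX.XX.0.12:19092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        Future<RecordMetadata> future =
                producer.send(new ProducerRecord<String, String>("test001", "sync-key", "消息:SyncSend"));
        RecordMetadata metadata = future.get();  // blocks until the send is acknowledged or fails
        System.out.println("partition=" + metadata.partition() + ", offset=" + metadata.offset());
        producer.close();
    }
}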
Sending from a separate thread

package kjsp.kafka.producer;

// import util.properties packages
import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.Callback;
// import simple producer packages
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
// import ProducerRecord packages
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// producer code
public class ProducerDemo extends Thread {
    // message topic
    private String topicName;

    public ProducerDemo(String topic) {
        super();
        this.topicName = topic;
    }

    // create the Producer
    private Producer<String, byte[]> createProducer() {
        Properties props = new Properties();
        String path = ProducerDemo.class.getResource("/").getFile().toString() + "perties";
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new KafkaProducer<String, byte[]>(props);
    }

    @Override
    public void run() {
        Producer<String, byte[]> producer = createProducer();
        int i = 0;
        while (i < 10) {
            producer.send(new ProducerRecord<String, byte[]>(topicName, String.valueOf(i), ("消息:" + i).getBytes()),
                    new Callback() {
                        @Override
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            if (exception != null) {
                                exception.printStackTrace();
                            } else {
                                System.out.println(metadata.toString());
                                System.out.println(metadata.offset());
                            }
                        }
                    });
            i++;
            try {
                TimeUnit.MILLISECONDS.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        producer.flush();
        producer.close();
        System.out.println("Message send successfully");
    }

    public static void main(String[] args) {
        // use the topic test001 already created on the Kafka cluster
        new ProducerDemo("test001").start();
    }
}

Consumer examples

Receiving text (byte) messages

package kjsp.kafka.consumer;

import java.io.File;
import java.io.FileInputStream;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// consumer example
public class ConsumerDemo {
    public void getMsg() {
        try {
            Properties props = new Properties();
            String path = ConsumerDemo.class.getResource("/").getFile().toString() + "perties";
            FileInputStream fis = new FileInputStream(new File(path));
            props.load(fis);
            props.put("group.id", "KJSP");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            fis.close();
            KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("test001"));
            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(10);
                for (ConsumerRecord<String, byte[]> record : records)
                    System.out.printf("offset=%d,key=%s,value=%s\n",
                            record.offset(), record.key(), new String(record.value(), "GB2312"));
            }
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }

    public static void main(String[] args) {
        System.out.println("开始接收消息!");
        try {
            ConsumerDemo consumer = new ConsumerDemo();
            consumer.getMsg();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}

Receiving file messages
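A minimal sketch of a consumer that receives file messages, not part of the original document: it mirrors ConsumerDemo above, keeps the byte[] deserializer, and writes each message value to a file named after its key. The output directory C:/kafka/recv and the class name are assumptions.

package kjsp.kafka.consumer;

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FileConsumerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "XX.XX.0.12:19092");
        props.put("group.id", "KJSP");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // allow fetching the large messages configured above
        props.put("max.partition.fetch.bytes", "1073741824");
        KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test001"));
        new File("C:/kafka/recv").mkdirs();  // assumed output directory
        while (true) {
            ConsumerRecords<String, byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<String, byte[]> record : records) {
                // the file-sending producer uses "<count>_<fileName>" as the key
                String outPath = "C:/kafka/recv/" + record.key();
                BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(outPath));
                bos.write(record.value());
                bos.close();
                System.out.println("Wrote " + record.value().length + " bytes to " + outPath);
            }
        }
    }
}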
