Spark Notes: Summary of Technical Points
Contents

Overview
Manual Cluster Setup
  Introduction
  Installing Scala
  Configuration Files
  Startup and Testing
Application Deployment
  Deployment Architecture
  Deploying Applications
Core Principles
  RDD Concept
  RDD Core Components
  RDD Dependencies
  DAG Graph
  RDD Fault Recovery
  Spark Architecture in Standalone Mode
  Spark Architecture in YARN Mode
  Application Resource Construction
API
  WordCount Example
  RDD Construction
  RDD Caching and Persistence
  RDD Partition Count
  Shared Variables
  RDD Operations
  RDD Operation Implicit Conversions
  RDD[T] Partition Operations
  RDD[T] Common Aggregation Operations
  Operations Between RDDs
  DoubleRDDFunctions Common Operations
  PairRDDFunctions Aggregation Operations
  PairRDDFunctions Operations Between RDDs
  OrderedRDDFunctions Common Operations
Case Study: Mobile-Device Internet Access Data Analysis
  Data Preparation
  Loading & Preprocessing
  Counting App Accesses
  Counting DAU
  Counting MAU
  Counting App Upload/Download Traffic
Overview

1. Advantages of Spark over MapReduce: a) support for iterative computation; b) intermediate results are kept in memory rather than on disk, which reduces latency.
2. Spark has become a lightweight, unified platform for fast big-data processing ("one stack to rule them all"): a single platform covers ad-hoc queries, batch processing, and stream processing.
3. Ways to set up a Spark cluster: a) with an integrated deployment tool such as Cloudera Manager; b) manually.
4. Ways to build Spark from source: a) SBT; b) Maven.

Manual Cluster Setup

Introduction

1. Environment:
Role     Hostname
Master   centos1
Slave    centos2, centos3
2. Standalone mode must be deployed on the Master and all Slave nodes; YARN mode only needs to be deployed on the machine that submits jobs.
3. It is assumed that the JDK and a Hadoop cluster are already installed.

Installing Scala
1. On the Master (Standalone mode) or the job-submission machine (YARN mode), install Scala under /opt/app:
tar zxvf scala-2.10.6.tgz -C /opt/app
2. On the Master (Standalone mode) or the job-submission machine (YARN mode), configure the environment variables:
vi /etc/profile
export SCALA_HOME=/opt/app/scala-2.10.6
export PATH=$SCALA_HOME/bin:$PATH
source /etc/profile # take effect
env | grep SCALA_HOME # verify

Configuration Files
3. On the Master (Standalone mode) or the job-submission machine (YARN mode), unpack Spark and edit its configuration files:
tar zxvf spark-1.6.3-bin-hadoop2.6.tgz -C /opt/app
cd /opt/app/spark-1.6.3-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export JAVA_HOME=/opt/app/jdk1.8.0_121
export SCALA_HOME=/opt/app/scala-2.10.6
export HADOOP_HOME=/opt/app/hadoop-2.6.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
# For standalone mode
export SPARK_WORKER_CORES=1
export SPARK_DAEMON_MEMORY=512m
cp spark-defaults.conf.template spark-defaults.conf
hadoop fs -mkdir /spark.eventLog.dir
vi spark-defaults.conf
spark.driver.extraClassPath /opt/app/apache-hive-1.2.2-bin/lib/mysql-connector-java-5.1.22-bin.jar
spark.eventLog.enabled true
spark.eventLog.dir hdfs://centos1:9000/spark.eventLog.dir
cp slaves.template slaves
vi slaves
centos2
centos3
ln -s /opt/app/apache-hive-1.2.2-bin/conf/hive-site.xml .

4. On the Master (Standalone mode), copy the Spark directory from the Master to each Slave. Note: only Standalone clusters need this step.
scp -r /opt/app/spark-1.6.3-bin-hadoop2.6 hadoop@centos2:/opt/app
scp -r /opt/app/spark-1.6.3-bin-hadoop2.6 hadoop@centos3:/opt/app

Startup and Testing

5. On the Master (Standalone mode) or the job-submission machine (YARN mode), configure the Spark environment variables:
export SPARK_HOME=/opt/app/spark-1.6.3-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin
6. On the Master (Standalone mode), start Spark and check the processes:
sbin/start-all.sh
jps
Master # process on the Master machine
Worker # process on each Slave machine
7. On the Master (Standalone mode) or the job-submission machine (YARN mode), run the test jobs:
bin/spark-submit --master spark://centos1:7077 --deploy-mode client --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 lib/spark-examples-1.6.3-hadoop2.6.0.jar # Standalone client mode
bin/spark-submit --master spark://centos1:7077 --deploy-mode cluster --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 lib/spark-examples-1.6.3-hadoop2.6.0.jar # Standalone cluster mode
bin/spark-submit --master yarn-client --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 lib/spark-examples-1.6.3-hadoop2.6.0.jar # YARN client mode
bin/spark-submit --master yarn-cluster --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 lib/spark-examples-1.6.3-hadoop2.6.0.jar # YARN cluster mode
bin/yarn application -list # list applications running on YARN
bin/yarn application -kill ApplicationID # kill an application running on YARN
bin/spark-shell --master spark://centos1:7077 --deploy-mode client --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 # Standalone client mode
bin/spark-shell --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --num-executors 1 --executor-cores 1 # YARN client mode
8. Monitoring pages:
http://centos1:8080 (Spark)
http://centos1:8088 (YARN)

Application Deployment

Deployment Architecture

1. Application: a Spark application, consisting of one Driver Program plus Executors on multiple Worker Nodes in the cluster; each Worker Node provides only one Executor per Application.
2. Driver Program: runs the Application's main function, and is usually also represented by its SparkContext. It is responsible for building the DAG, dividing Stages, managing and scheduling Tasks, and creating the SchedulerBackend used for Akka communication; its main components are the DAGScheduler, TaskScheduler, and SchedulerBackend.
3. Cluster Manager: the cluster manager, which abstracts over different resource managers such as Spark Standalone and YARN. The Driver Program acquires resources through the Cluster Manager and sends tasks to multiple Worker Nodes for execution.
4. Worker Node: a cluster node. While an application runs, its Tasks execute in Executors on Worker Nodes.
5. Executor: a process started on a Worker Node for an Application, responsible for executing Tasks.
6. Stage: an Application generally contains one or more Stages.
7. Task: the unit of computation that the Driver Program sends to an Executor. A Task usually processes one split (i.e. one partition), and each split is typically one Block in size. A Stage contains one or more Tasks, and parallelism is achieved by running those Tasks concurrently.
8. DAGScheduler: decomposes an Application into one or more Stages, where the number of Tasks in each Stage is determined by the RDD's partition count; it then builds the corresponding TaskSet and hands it to the TaskScheduler (see the sketch after this list).
9. Deploy Mode: the deployment mode of the Driver process, either cluster or client.
10. Notes: a) The Driver Program must be on the same network as the Spark cluster, because the SparkContext sends tasks to Executors on different Worker Nodes and receives their results. b) In production, the machine hosting the Driver Program should have good hardware, especially the CPU.

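As a concrete illustration of items 6-8, here is a minimal word-count sketch run from spark-shell (the HDFS path and the partition count of 4 are illustrative assumptions, not part of the original setup):

// The map side and the shuffled (reduceByKey) side fall into different Stages.
val words = sc.textFile("hdfs://centos1:9000/input/words.txt", 4)   // 4 partitions => 4 Tasks per Stage
  .flatMap(_.split(" "))
  .map(word => (word, 1))
val counts = words.reduceByKey(_ + _)    // shuffle dependency: the DAGScheduler cuts a Stage boundary here
println(counts.toDebugString)            // prints the lineage (ShuffledRDD <- MapPartitionsRDD <- ...) with partition counts
counts.collect()                         // the Action: the DAGScheduler submits 2 Stages, each containing 4 Tasks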
Deploying Applications

1. Two ways to run an application: a) spark-shell: interactive, used for development and debugging; the instances "val sc: SparkContext" and "val sqlContext: SQLContext" are already created (a short interactive example follows the option listing below); b) spark-submit: submits an application, used for production deployment.
2. spark-shell options:
bin/spark-shell --help
Usage: ./bin/spark-shell [options]
Options:
  --master MASTER_URL          spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE    Whether to launch the driver program locally ("client") or on one of the worker machines inside the cluster ("cluster") (Default: client).
  --class CLASS_NAME           Your application's main class (for Java / Scala apps).
  --name NAME                  A name of your application.
  --jars JARS                  Comma-separated list of local jars to include on the driver and executor classpaths.
  --packages                   Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.
  --exclude-packages           Comma-separated list of groupId:artifactId, to exclude while resolving the dependencies provided in --packages to avoid dependency conflicts.
  --repositories               Comma-separated list of additional remote repositories to search for the maven coordinates given with --packages.
  --py-files PY_FILES          Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
  --files FILES                Comma-separated list of files to be placed in the working directory of each executor.
  --conf PROP=VALUE            Arbitrary Spark configuration property.
  --properties-file FILE       Path to a file from which to load extra properties. If not specified, this will look for conf/spark-defaults.conf.
  --driver-memory MEM          Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options        Extra Java options to pass to the driver.
  --driver-library-path        Extra library path entries to pass to the driver.
  --driver-class-path          Extra class path entries to pass to the driver. Note that jars added with --jars are automatically included in the classpath.
  --executor-memory MEM        Memory per executor (e.g. 1000M, 2G) (Default: 1G).
  --proxy-user NAME            User to impersonate when submitting the application.
  --help, -h                   Show this help message and exit
  --verbose, -v                Print additional debug output
  --version,                   Print the version of current Spark

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM           Cores for driver (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                  If given, restarts the driver on failure.
  --kill SUBMISSION_ID         If given, kills the driver specified.
  --status SUBMISSION_ID       If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM   Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM         Number of cores per executor. (Default: 1 in YARN mode, or all available cores on the worker in standalone mode)

 YARN-only:
  --driver-cores NUM           Number of cores used by the driver, only in cluster mode (Default: 1).
  --queue QUEUE_NAME           The YARN queue to submit to (Default: "default").
  --num-executors NUM          Number of executors to launch (Default: 2).
  --archives ARCHIVES          Comma separated list of archives to be extracted into the working directory of each executor.
  --principal PRINCIPAL        Principal to be used to login to KDC, while running on secure HDFS.
  --keytab KEYTAB              The full path to the file that contains the keytab for the principal specified above. This keytab will be copied to the node running the Application Master via the Secure Distributed Cache, for renewing the login tickets and the delegation tokens periodically.

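For item 1a, a minimal interactive session might look like this (a sketch; the numbers are arbitrary and the output is indicative):

scala> sc.parallelize(1 to 1000, 4).map(_ * 2).sum()              // sc is pre-created; no SparkContext setup needed
res0: Double = 1001000.0
scala> sqlContext.isInstanceOf[org.apache.spark.sql.SQLContext]   // sqlContext is pre-created as well
res1: Boolean = true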
3. spark-submit options (apart from the Usage lines, the options are the same as for spark-shell):
bin/spark-submit --help
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Options: same as the spark-shell option listing above.
4. Default parameters: a) default application configuration file: conf/spark-defaults.conf; b) default JVM configuration file: conf/spark-env.sh; c) commonly used jar files can be supplied with the "--jars" option.
5. Parameter priority (from highest to lowest): a) parameters set explicitly on SparkConf; b) parameters passed to spark-submit; c) parameters in conf/spark-defaults.conf (see the sketch at the end of this list).
6. MASTER_URL formats:
MASTER_URL          Meaning
local               Run locally with a single thread (no parallelism at all).
local[K]            Run locally with K worker threads; setting K to the number of CPU cores is usually ideal.
local[*]            Same as local[K] with K = number of CPU cores.
spark://HOST:PORT   Connect to the Master of a Standalone cluster, i.e. the URL shown on the Spark monitoring page; the port defaults to 7077 (omitting it is not supported).
yarn-client         Connect to a YARN cluster in client mode; the cluster is located through the HADOOP_CONF_DIR environment variable.
yarn-cluster        Connect to a YARN cluster in cluster mode; the cluster is located through the HADOOP_CONF_DIR environment variable.
7. Notes: a) spark-shell uses port 4040 by default; when 4040 is occupied, the program logs a WARN message and keeps incrementing the port (4041, 4042, ...) until a free one is found. b) On each Executor node, every Driver Program's jar files and other files are copied into the working directory and can take up a lot of space. YARN clusters clean this up automatically; Standalone clusters must configure spark.worker.cleanup.appDataTtl to enable automatic cleanup.

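A minimal sketch of the priority order in item 5 (the property name is a real Spark setting; the app name and values are arbitrary): a value set directly on SparkConf overrides the same property passed to spark-submit, which in turn overrides conf/spark-defaults.conf.

import org.apache.spark.{SparkConf, SparkContext}

// Highest priority: set explicitly in code on SparkConf.
val conf = new SparkConf()
  .setAppName("PriorityDemo")
  .set("spark.executor.memory", "2g")              // wins over both sources below
val sc = new SparkContext(conf)

// Middle priority: passed on the command line, e.g.
//   bin/spark-submit --conf spark.executor.memory=1g ...
// Lowest priority: a line in conf/spark-defaults.conf, e.g.
//   spark.executor.memory 512m

println(sc.getConf.get("spark.executor.memory"))   // prints 2g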
8. Application template:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

object Test {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Test")
    val sc = new SparkContext(conf)
    // ...
  }
}
9. Submission example:
bin/spark-submit --master spark://ubuntu1:7077 --class org.apache.spark.examples.SparkPi lib/spark-examples-1.6.3-hadoop2.6.0.jar

Core Principles

RDD Concept

1. RDD: Resilient Distributed Dataset.
2. Significance: Spark's most fundamental abstraction; a fault-tolerant, memory-based approach to cluster computing.

RDD Core Components

1. Five core methods:
 a) getPartitions: the list of partitions (list of data blocks).
 b) compute: the function that computes the data of each partition.

 c) getDependencies: the list of dependencies on parent RDDs.
 d) partitioner: the partitioner for key-value RDDs.
 e) getPreferredLocations: the list of preferred locations for each partition (e.g. the block locations on HDFS).
2. Grouping these five methods by purpose: a) the first three describe the lineage between RDDs and are mandatory; b) the last two are used to optimize execution.
3. An RDD instance has type RDD[T], where T is the generic type of its elements.
4. Partitions: a) concept: a large collection of T instances is split into several smaller sub-collections of T instances; b) in the source code, a partition is essentially an Iterator[T]; c) storage: for example, stored as Blocks on HDFS.
5. Dependencies: a) dependency list: an RDD can have several parent dependencies, hence a list of parent-RDD dependencies; b) relation to partitions: dependencies are expressed between RDD partitions, so the dependency list together with getPartitions tells how each partition of an RDD depends on a set of parent-RDD partitions.
6. The compute method: a) it is lazy, and is only actually executed when an Action is triggered; b) its granularity is a partition, not an individual T element.
7. The partitioner method: only for RDDs whose T instances are key-value pairs.
8. Source of the RDD abstract class (excerpt from v1.6.3; a sketch of a custom RDD implementing these methods follows the excerpt):

package org.apache.spark.rdd

// ...

/**
 * A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable,
 * partitioned collection of elements that can be operated on in parallel. This class contains the
 * basic operations available on all RDDs, such as map, filter, and persist. In addition,
 * org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value
 * pairs, such as groupByKey and join;
 * org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of
 * Doubles; and
 * org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that
 * can be saved as SequenceFiles.
 * All operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)])
 * through implicit conversions.
 *
 * Internally, each RDD is characterized by five main properties:
 *
 *  - A list of partitions
 *  - A function for computing each split
 *  - A list of dependencies on other RDDs
 *  - Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
 *  - Optionally, a list of preferred locations to compute each split on (e.g. block locations for
 *    an HDFS file)
 *
 * All of the scheduling and execution in Spark is done based on these methods, allowing each RDD
 * to implement its own way of computing itself.

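To make the five core methods concrete, here is a minimal sketch of a custom RDD against the Spark 1.6 API (RangeRDD and RangePartition are illustrative names invented for this sketch, not part of Spark): getPartitions describes the partitions, and compute produces a partition's iterator only when an action runs.

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// One partition = one sub-range of the integers [0, n)
class RangePartition(override val index: Int, val start: Int, val end: Int) extends Partition

// A custom RDD only has to say what its partitions are (getPartitions)
// and how to compute each one (compute); Nil means no parent dependencies.
class RangeRDD(sc: SparkContext, n: Int, numSlices: Int) extends RDD[Int](sc, Nil) {

  override protected def getPartitions: Array[Partition] =
    (0 until numSlices).map { i =>
      new RangePartition(i, i * n / numSlices, (i + 1) * n / numSlices): Partition
    }.toArray

  override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
    val p = split.asInstanceOf[RangePartition]
    (p.start until p.end).iterator       // evaluated lazily, one partition (not one element) at a time
  }
}

// Usage: nothing is computed until an action is called.
// val rdd = new RangeRDD(sc, 1000000, 4)
// println(rdd.partitions.length)   // 4
// println(rdd.count())             // 1000000, triggers compute on each partition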