A Real-Time Data Quality Monitoring Platform on Kafka and Spark
Xing Guodong (邢国东), Senior Product Manager, Microsoft

A changing Microsoft
- The businesses we serve; the shared big-data team in AI&R
- What problem are we trying to solve?

Kafka as data bus
- Devices and services feed streaming processing and batch processing, which in turn feed applications
- Scalable pub/sub for NRT (near-real-time) data streams; interactive analytics

Data flow: fast-growing real-time data
- 1.3 million events per second ingress at peak
- 1 trillion events per day processed at peak
- 3.5 petabytes processed per day
- 100 thousand unique devices and machines
- 1,300 production Kafka brokers
- 1-second 99th-percentile latency

Data quality guarantees upstream and downstream of Kafka
- Many producers write through Kafka + HLC to multiple destinations
- 100K QPS, 300 GB per hour
- Data = money; lost data = lost money

How it works
Three audit granularities: file level, batch level, and record level.

Audit metadata:
  "Action": "Produced or Uploaded",
  "ActionTimeStamp": "action date and time (UTC)",
  "Environment": "environment (cluster) name",
  "Machine": "computer name",
  "StreamID": "type of data (sheeps, ducks, etc.)",
  "SourceID": "e.g. file name",
  "BatchID": "a hash of data in this batch",
  "NumBytes": "size in bytes",
  "NumRecords": "number of records in the batch",
  "DestinationID": "destination ID"

How it works: data and audit flow
- A producer in the data center writes File 1 through Kafka + HLC (the system under audit) to Destination 1
- The producer emits "Produced" audits: file 1, 3 records (24 bytes, 3 records, timestamp, "File 1", BatchID=abc123), then file 1, 5 records (40 bytes, 5 records, timestamp, "File 1", BatchID=def456)
- The destination emits "Uploaded" audits as data lands: file 1, 3 records (24 bytes, 3 records, timestamp, BatchID)
- All audits flow into the audit system, which compares the two sides

Kibana chart of data latency

Kibana chart of data completeness
- Three lines: how many records were produced, how many reached destination #1, and how many reached destination #2

Richer charts with Power BI
- Monitoring a four-stage real-time data processing pipeline

Code to send an audit
- Create a client object
- Prepare the audit object
- Lastly: client.SendBondObject(audit);

APIs for querying the statistics

Anomaly detection on audit data
Audit data is really metadata about the data itself, so it can drive all kinds of anomaly detection and monitoring on data flows.

Anomaly detection algorithm 1
- A Holt-Winters model is used for training and prediction
- Robustness improvements: Median Absolute Deviation (MAD) gives better estimates; missing points and noise are handled (e.g. by smoothing)
- Trend and seasonality are picked up automatically
- Users can label points and give feedback so that trend changes are handled better
- GLR (Generalized Likelihood Ratio) detects anomalies by comparing predicted against actual values
- Improvements include Floating Threshold GLR, which adjusts the model dynamically as new data arrives, and outlier removal for noisy data

Anomaly detection algorithm 2
Online anomaly detection on time series based on exchangeability martingales: has the distribution changed?
- Based on the history, define a "strangeness" for each new value
- At time t we receive a new value and add it to the history; for each item i in the history, s_i = strangeness(value_i, history)
- Let p_t = (#{i: s_i > s_t} + r * #{i: s_i = s_t}) / N, where r is uniform in (0, 1)
- The uniform r makes sure p is uniform

Anomaly detection algorithm 3

Design overview: goals of the data monitoring system
- Monitor the completeness and latency of streaming data
- Handle multi-producer, multi-stage, multi-destination data flows in the pipeline
- Work in near real time
- Provide diagnostics: which DC, machine, or event/file has the problem
- Be rock solid: 99.9% uptime, scale out
- The audit data itself must be trustworthy

System design
- Producers → Kafka + HLC → destinations; "Produced" and "Uploaded" audits are posted to a front-end web service and land in transient storage (Kafka)
- Kafka ensures high availability; we don't want ingress/egress to get stuck sending audit information
- An audit data processing pipeline (Spark) pre-aggregates the data into 1-minute chunks; it keeps track of duplicates and increments, handles late arrivals and out-of-order data, and is fault tolerant
- Aggregated data storage (ElasticSearch) holds the pre-aggregated data for reporting through DAS and allows NRT charts in Kibana
- A Data Analysis Web Service is the final reporting endpoint for consumers: does a destination have complete data for a given time? Which files are missing?

High reliability
- The whole chain (front-end web service, transient Kafka storage, Spark processing pipeline, ElasticSearch storage, Data Analysis Web Service) runs active-active across two data centers (DC1 and DC2) for disaster recovery
- Each key component is monitored

Trustworthy quality monitoring
- Audit for audit: the audit pipeline's own "Produced" and "Uploaded" audits are tracked in a separate ElasticSearch

Diagnosing problems
- When loss happens, rich diagnostic information is needed for ad hoc queries
- Raw audits are kept in a raw audit storage (Cosmos) for diagnostics
- Audit data is joined with traces for interactive diagnostics

Goals recap
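The "code to send an audit" slide names only three steps: create a client, prepare the audit object, and call client.SendBondObject(audit). The real client and its Bond serialization are not shown in the deck, so the following is a minimal Python sketch; AuditClient, make_audit, and the JSON serialization are hypothetical stand-ins, with field names taken from the audit-metadata slide.

```python
import datetime
import hashlib
import json

def make_audit(action, stream_id, source_id, batch, destination_id,
               environment="prod-cluster-1", machine="machine-01"):
    """Build an audit record with the fields from the metadata slide.
    `batch` is the list of raw records covered by this audit."""
    payload = "".join(batch).encode("utf-8")
    return {
        "Action": action,                      # "Produced" or "Uploaded"
        "ActionTimeStamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "Environment": environment,            # environment (cluster) name
        "Machine": machine,                    # computer name
        "StreamID": stream_id,                 # type of data
        "SourceID": source_id,                 # e.g. file name
        "BatchID": hashlib.sha256(payload).hexdigest()[:12],  # hash of the batch data
        "NumBytes": len(payload),
        "NumRecords": len(batch),
        "DestinationID": destination_id,
    }

class AuditClient:
    """Hypothetical stand-in for the real audit client: it serializes the
    audit (the real client uses Bond, not JSON) and hands it to a sender."""
    def __init__(self, send_fn):
        self.send_fn = send_fn

    def send_bond_object(self, audit):
        self.send_fn(json.dumps(audit))

# 1. Create a client object  2. Prepare the audit object  3. Send it.
client = AuditClient(send_fn=print)
audit = make_audit("Produced", "ducks", "file 1",
                   ["Record1", "Record2", "Record3"], "Destination 1")
client.send_bond_object(audit)
```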
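Algorithm 1 pairs Holt-Winters forecasts with MAD for robust deviation estimates. A full Holt-Winters implementation is beyond a slide-sized sketch, so the following assumes forecasts are already available and shows only the MAD part: flagging points whose residual sits many MADs away from the median residual. The threshold k=5 is an arbitrary illustration, not a value from the talk.

```python
import statistics

def mad(values):
    """Median Absolute Deviation: a robust spread estimate that, unlike
    the standard deviation, is barely moved by a few extreme outliers."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

def flag_anomalies(actual, forecast, k=5.0):
    """Flag points whose residual (actual - forecast) is more than
    k * MAD away from the median residual. In the platform the
    forecasts would come from the Holt-Winters model; here they
    are simply inputs."""
    residuals = [a - f for a, f in zip(actual, forecast)]
    med = statistics.median(residuals)
    spread = mad(residuals) or 1e-9   # avoid division by zero on flat data
    return [abs(r - med) / spread > k for r in residuals]
```

Because MAD is a median of deviations, one wild spike inflates the spread estimate far less than it would inflate a standard deviation, so the spike itself still stands out.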
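The p-value line in algorithm 2, reconstructed from the garbled slide, is p_t = (#{i: s_i > s_t} + r * #{i: s_i = s_t}) / N with r uniform in (0, 1): the standard randomized conformal p-value that exchangeability-martingale tests build on. A direct sketch, with the strangeness function itself left abstract as it is on the slide:

```python
import random

def conformal_p_value(strangeness, rng=random):
    """Compute p_t = (#{i: s_i > s_t} + r * #{i: s_i == s_t}) / N.
    `strangeness` is [s_1, ..., s_t], including the newest value s_t.
    The randomized tie-break r makes p uniform on (0, 1) when the data
    are exchangeable, which is exactly what the martingale test needs:
    a run of unusually small p-values signals a distribution change."""
    s_t = strangeness[-1]
    n = len(strangeness)
    greater = sum(1 for s in strangeness if s > s_t)
    equal = sum(1 for s in strangeness if s == s_t)
    return (greater + rng.random() * equal) / n
```

If the newest value is the strangest seen so far, `greater` is 0 and the p-value is at most 1/N; a very ordinary value yields a p-value near 1.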
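Per the system-design slides, the Spark pipeline pre-aggregates audits into 1-minute chunks while deduplicating and tolerating late, out-of-order arrival. The actual Spark job is not shown, so here is a plain-Python sketch of that logic, assuming for simplicity that timestamps arrive as epoch seconds (a hypothetical `EpochSeconds` field) and that (Action, BatchID) identifies a delivery:

```python
from collections import defaultdict

def aggregate_audits(audits):
    """Plain-Python sketch of the Spark pre-aggregation step: bucket
    audits into 1-minute chunks keyed by (action, stream, destination,
    minute), skipping duplicate deliveries so retried audits are not
    double counted. Out-of-order audits still land in the right bucket
    because bucketing uses the event timestamp, not arrival order."""
    seen = set()
    buckets = defaultdict(lambda: {"NumBytes": 0, "NumRecords": 0})
    for a in audits:
        # The same batch legitimately appears once as "Produced" and
        # once as "Uploaded", so dedup on (Action, BatchID).
        dedup_key = (a["Action"], a["BatchID"])
        if dedup_key in seen:
            continue                            # duplicate delivery: skip
        seen.add(dedup_key)
        minute = a["EpochSeconds"] // 60 * 60   # floor to the 1-minute chunk
        key = (a["Action"], a["StreamID"], a["DestinationID"], minute)
        buckets[key]["NumBytes"] += a["NumBytes"]
        buckets[key]["NumRecords"] += a["NumRecords"]
    return dict(buckets)
```

Comparing the "Produced" and "Uploaded" totals per bucket is then exactly the completeness question the Data Analysis Web Service answers: does the destination have all the data for that minute, and if not, which files are missing?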
