
Advanced Storage Data Protection Technology: Dynamic Disk Pools (DDP)

SANtricity RAID Protection
- Volume groups
  - RAID 0, 1, 10, 5, 6
  - Intermixed RAID levels
  - Various group sizes
- Dynamic disk pools
  - Minimum 11 SSDs, maximum 120 SSDs per pool
  - Up to 10 disk pools per system
[Diagram: volume groups and a disk pool, each built from SSDs and presenting volumes as host LUNs]

SANtricity RAID Levels
- RAID 0: striped
- RAID 1 (10): mirrored (and striped)
- RAID 5: data disks and rotating parity; block-level striping with distributed parity
- RAID 6 (P+Q): data disks and rotating dual parity
[Diagram: stripe layouts showing data, mirror, parity, and Q-parity segments for each level]
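To make the parity idea concrete: in RAID 5 the parity segment is the XOR of the data segments in a stripe, so any single lost segment can be recomputed from the survivors (RAID 6 adds a second, independently computed "Q" checksum to survive two failures). The following minimal Python sketch is illustrative only, not the SANtricity implementation:

    from functools import reduce

    def xor_blocks(blocks):
        # XOR equal-length byte blocks together, as RAID 5 parity does.
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # A hypothetical 3+1 stripe: three data segments and one parity segment.
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Lose data[1]: XOR-ing parity with the surviving segments recreates it,
    # which is exactly the computation a rebuild performs for every stripe.
    recovered = xor_blocks([parity, data[0], data[2]])
    assert recovered == data[1]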

Traditional RAID Volumes
- Disk drives are organized into RAID groups
- Volumes reside across the drives in a RAID group
- Performance is dictated by the number of spindles
- Hot spares sit idle until a drive fails, so spare capacity is "stranded"
[Example: a 24-drive system with two 10-drive groups (8+2) and 4 hot spares]

Traditional RAID: Drive Failure
- Data is reconstructed onto a hot spare
- A single drive is responsible for all writes (bottleneck)
- Reconstruction happens linearly (one stripe at a time)
- All volumes in that group are significantly impacted
[Example: the same 24-drive system with two 10-drive groups (8+2) and 4 hot spares]
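A back-of-the-envelope model shows why the single hot spare is the bottleneck: the rebuild can finish no faster than one drive can absorb sequential writes. The drive sizes and the 100 MB/s sustained rate below are assumptions for illustration; real rebuilds run longer because production I/O competes for the drives:

    ASSUMED_WRITE_MB_S = 100  # assumed sustained rewrite rate of one NL-SAS drive

    for size_tb in (1, 2, 3, 4):
        seconds = size_tb * 1e12 / (ASSUMED_WRITE_MB_S * 1e6)
        # Lower bound only: throttling and host I/O stretch this further.
        print(f"{size_tb} TB drive: >= {seconds / 3600:.1f} h to refill the spare")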

The Problem: The Large-Disk-Drive Challenge
- Staggering amounts of data to store, protect, and access; some sites have thousands of large-capacity drives
- Drive failures are continual, particularly with NL-SAS drives
- Production I/O is impacted during rebuilds: up to 40% in many cases
- As drive capacities continue to grow, traditional RAID protection is pushed to its limit: drive transfer rates have not kept up with capacities
- Larger drives mean longer rebuilds, anywhere from 10+ hours to several days for 4 TB+ drives

Dynamic Disk Pools: Maintain SLAs During a Drive Failure
- "Stay in the green": the performance drop is minimized following a drive failure
- Dynamic rebalance completes up to 8x faster than traditional RAID in random environments and up to 2x faster in sequential environments
- A large pool of spindles behind every volume reduces hot spots; each volume is spread across all drives in the pool
- Dynamic distribution/redistribution is a nondisruptive background operation
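The source of that speedup can be sketched with a toy model: in a pool, rebuild work is spread across all surviving drives instead of funneling into one spare, so the per-drive share shrinks with pool size. The numbers below are illustrative assumptions, and the model deliberately ignores read contention and rebuild throttling, which is why the measured gains on this slide (8x random, 2x sequential) are smaller than the idealized figure:

    DRIVE_TB, RATE_MB_S, POOL_DRIVES = 3, 100, 24  # assumed values

    hot_spare_h = DRIVE_TB * 1e12 / (RATE_MB_S * 1e6) / 3600
    # Idealized pooled rebuild: every surviving drive absorbs an equal share.
    pooled_h = hot_spare_h / (POOL_DRIVES - 1)
    print(f"hot spare: ~{hot_spare_h:.1f} h, idealized pooled rebuild: ~{pooled_h:.1f} h")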

Traditional RAID Technology vs. Innovative Dynamic Disk Pools
- Balanced: the algorithm randomly spreads data across all drives, balancing the workload and rebuilding when necessary
- Easy: no RAID sets or idle spares to manage; active spare capacity sits on all drives
- Combining effort: all drives in the pool sustain the workload, which suits virtual mixed workloads and speeds reconstruction when needed
- Flexible: add ANY* number of drives for additional capacity, and the system automatically rebalances data for optimal performance (* after the minimum of 11)
"With Dynamic Disk Pools, you can add or lose disk drives without impact, reconfiguration, or headaches."

Data Rebalancing in Minutes vs. Days
[Chart: rebuild time in hours for 300 GB, 900 GB, 2 TB, and 3 TB drives, RAID 6 vs. DDP. RAID 6 business impact grows with drive size, from about 1.3 days through 2.5 days to more than 4 days; DDP holds at an estimated 96 minutes, a 99% improvement in exposure. Typical rebalancing improvements are based on a 24-disk mixed workload.]
- Maintain business SLAs through a drive failure

RAID Level Comparison

Attribute            RAID-0        RAID-1 / 1+0   RAID-5        RAID-6
Min # of SSDs        1             2              3             5
Max # of SSDs        System max    System max     30            30
Usable capacity      100%          50%            67% to 97%    60% to 93%
(% of raw)
Application          IOPS | MB/s   IOPS           IOPS | MB/s   IOPS | MB/s

Description
- RAID-0: data is striped across multiple SSDs.
- RAID-1 / 1+0: RAID 1 mirrors data to two duplicate SSDs simultaneously; RAID 10 stripes data across a set of mirrored SSD pairs.
- RAID-5: SSDs operate independently; user data and redundant information (parity) are striped across the SSDs. The equivalent capacity of one SSD holds the redundant information.
- RAID-6: SSDs operate independently; user data and redundant information (dual parity) are striped across the SSDs. The equivalent capacity of two SSDs holds the redundant information.

Advantages
- RAID-0: performance, thanks to parallel operation of accesses.
- RAID-1 / 1+0: performance, as multiple requests can be fulfilled simultaneously; also offers the highest data availability.
- RAID-5: good for reads, small IOPS, many concurrent IOPS, and random I/O; parity consumes only a small portion of raw capacity.
- RAID-6: good for reads, small IOPS, many concurrent IOPS, and random I/O; parity consumes only a small portion of raw capacity.

Disadvantages
- RAID-0: no redundancy; if one drive fails, the data is lost.
- RAID-1 / 1+0: storage costs are doubled.
- RAID-5: writes are particularly demanding.
- RAID-6: writes are particularly demanding.
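The usable-capacity ranges in the table follow directly from the parity overhead: RAID 5 yields (n-1)/n of raw capacity and RAID 6 yields (n-2)/n, evaluated at the minimum and maximum group sizes. A quick Python check reproduces the table's figures:

    for name, parity_drives, n_min, n_max in (("RAID-5", 1, 3, 30),
                                              ("RAID-6", 2, 5, 30)):
        low = (n_min - parity_drives) / n_min
        high = (n_max - parity_drives) / n_max
        print(f"{name}: {low:.0%} to {high:.0%}")
    # Prints: RAID-5: 67% to 97%; RAID-6: 60% to 93%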

Dynamic Disk Pools Overview
- DDP dynamically distributes data, spare capacity, and parity information across a pool of SSDs
- All drives are active (no idle hot spares), and spare capacity is available to all volumes
- Data is dynamically recreated and redistributed whenever the pool grows or shrinks

DDP: Simplicity, Performance, Protection
- Simplified administration: no RAID sets or hot spares to manage; data is automatically balanced within the pool; flexible disk pool sizing optimizes capacity utilization
- Consistent performance: data is distributed throughout the pool (no hot spots); the performance drop is minimized during a drive rebuild; significantly faster return to an optimal state
- Relentless data protection: significantly faster rebuild times, as data is reconstructed throughout the disk pool; prioritized reconstruction minimizes exposure

DDP Insight: How It Works
- Each DDP volume is composed of some number of 4 GB "virtual stripes" called dynamic stripes (D-stripes)
- Each D-stripe resides on a pseudo-randomly selected set of 10 drives from within the pool
- D-stripes are allocated at volume creation time, sequentially on a per-volume basis
[Diagram: a 24-SSD pool]

DDP SSD Failure
- For each D-stripe that has data on the failed SSD: the segments on the other SSDs are read to recreate the data, and a new SSD is chosen to receive the segments from the failed SSD
- Rebuild operations run in parallel across all SSDs
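A toy model of the layout the two slides above describe: every 4 GB D-stripe picks a pseudo-random set of 10 drives, so when a drive fails, the affected stripes each rebuild onto different replacement drives in parallel. The pool size, stripe count, and selection logic here are hypothetical; the actual SANtricity placement algorithm is not public:

    import random

    POOL = set(range(24))      # a 24-SSD pool, drives numbered 0-23
    DRIVES_PER_STRIPE = 10     # each D-stripe spans 10 drives

    rng = random.Random(42)
    # Allocate 100 D-stripes, each on a pseudo-randomly chosen drive set.
    stripes = [set(rng.sample(sorted(POOL), DRIVES_PER_STRIPE)) for _ in range(100)]

    failed = 7
    affected = [s for s in stripes if failed in s]
    # Each affected stripe rebuilds its lost segment onto a drive it does not
    # already occupy, spreading rebuild writes across the pool.
    targets = [rng.choice(sorted(POOL - s - {failed})) for s in affected]
    print(f"{len(affected)} of {len(stripes)} D-stripes touched drive {failed}; "
          f"rebuild writes land on {len(set(targets))} different drives")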

DDP Multiple Disk Failure
- If two SSDs have failed, the system rebuilds critical segments first (the brown and light blue segments in the diagram)
- If additional SSDs fail, new critical segments are identified and rebuilt (the blue, orange, and pink segments)

DDP: Adding SSDs to the Pool
- Add a single SSD or add multiple SSDs simultaneously
- The pool immediately rebalances data to maintain equilibrium
- Segments are just moved
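The rebuild prioritization on the multiple-failure slide can be expressed as a simple ordering: a D-stripe that has lost two segments sits one failure from data loss, so it rebuilds before stripes that lost only one. A hypothetical sketch in the same vein as the allocation model above:

    import random

    rng = random.Random(0)
    # 100 hypothetical D-stripes, each on 10 of 24 drives (as in the earlier sketch).
    stripes = [set(rng.sample(range(24), 10)) for _ in range(100)]

    def rebuild_order(stripes, failed):
        # Stripes missing the most segments are the most exposed: rebuild first.
        hit = [s for s in stripes if s & failed]
        return sorted(hit, key=lambda s: -len(s & failed))

    failed = {7, 19}
    ordered = rebuild_order(stripes, failed)
    critical = sum(1 for s in ordered if len(s & failed) == 2)
    print(f"{critical} critical D-stripes (two lost segments) rebuild first")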
