Chapter 5 Solutions

5.1
5.1.1 4
5.1.2 I, J
5.1.3 A[I][J]
5.1.4 3596 = 8 × 800/4 + 2 × 8 × 8/4 + 8000/4
5.1.5 I, J
5.1.6 A(J, I)

5.2
5.2.1

Word Address | Binary Address | Tag | Index | Hit/Miss
3            | 0000 0011      | 0   | 3     | M
180          | 1011 0100      | 11  | 4     | M
43           | 0010 1011      | 2   | 11    | M
2            | 0000 0010      | 0   | 2     | M
191          | 1011 1111      | 11  | 15    | M
88           | 0101 1000      | 5   | 8     | M
190          | 1011 1110      | 11  | 14    | M
14           | 0000 1110      | 0   | 14    | M
181          | 1011 0101      | 11  | 5     | M
44           | 0010 1100      | 2   | 12    | M
186          | 1011 1010      | 11  | 10    | M
253          | 1111 1101      | 15  | 13    | M

5.2.2

Word Address | Binary Address | Tag | Index | Hit/Miss
3            | 0000 0011      | 0   | 1     | M
180          | 1011 0100      | 11  | 2     | M
43           | 0010 1011      | 2   | 5     | M
2            | 0000 0010      | 0   | 1     | H
191          | 1011 1111      | 11  | 7     | M
88           | 0101 1000      | 5   | 4     | M
190          | 1011 1110      | 11  | 7     | H
14           | 0000 1110      | 0   | 7     | M
181          | 1011 0101      | 11  | 2     | H
44           | 0010 1100      | 2   | 6     | M
186          | 1011 1010      | 11  | 5     | M
253          | 1111 1101      | 15  | 6     | M

5.2.3 (the Tag column lists the tag for Cache 1; Caches 2 and 3 use correspondingly shorter tags)

Word Address | Binary Address | Tag | Cache 1 index, hit/miss | Cache 2 | Cache 3
3            | 0000 0011      | 0   | 3, M                    | 1, M    | 0, M
180          | 1011 0100      | 22  | 4, M                    | 2, M    | 1, M
43           | 0010 1011      | 5   | 3, M                    | 1, M    | 0, M
2            | 0000 0010      | 0   | 2, M                    | 1, M    | 0, M
191          | 1011 1111      | 23  | 7, M                    | 3, M    | 1, M
88           | 0101 1000      | 11  | 0, M                    | 0, M    | 0, M
190          | 1011 1110      | 23  | 6, M                    | 3, H    | 1, H
14           | 0000 1110      | 1   | 6, M                    | 3, M    | 1, M
181          | 1011 0101      | 22  | 5, M                    | 2, H    | 1, M
44           | 0010 1100      | 5   | 4, M                    | 2, M    | 1, M
186          | 1011 1010      | 23  | 2, M                    | 1, M    | 0, M
253          | 1111 1101      | 31  | 5, M                    | 2, M    | 1, M

Cache 1 miss rate = 100%
Cache 1 total cycles = 12 × 25 + 12 × 2 = 324
Cache 2 miss rate = 10/12 = 83%
Cache 2 total cycles = 10 × 25 + 12 × 3 = 286
Cache 3 miss rate = 11/12 = 92%
Cache 3 total cycles = 11 × 25 + 12 × 5 = 335
Cache 2 provides the best performance.

5.2.4 First we must compute the number of cache blocks in the initial cache configuration. For this, we divide 32 KiB by 4 (the number of bytes per word) and again by 2 (the number of words per block). This gives us 4096 blocks and a resulting index field width of 12 bits. We also have a word offset size of 1 bit and a byte offset size of 2 bits. This gives us a tag field size of 32 − 15 = 17 bits. These tag bits, along with one valid bit per block, will require 18 × 4096 = 73728 bits or 9216 bytes. The total cache size is
thus 9216 + 32768 = 41984 bytes. The total cache size can be generalized to

totalsize = datasize + (validbitsize + tagsize) × blocks
totalsize = 41984
datasize = blocks × blocksize × wordsize
wordsize = 4
tagsize = 32 − log2(blocks) − log2(blocksize) − log2(wordsize)
validbitsize = 1

Increasing from 2-word blocks to 16-word blocks will reduce the tag size from 17 bits to 14 bits. In order to determine the number of blocks, we solve the inequality:

41984 ≥ 64 × blocks + 15 × blocks

Solving this inequality gives us 531 blocks, and rounding to the next power of two gives
us a 1024-block cache. The larger block size may require an increased hit time and an increased miss penalty compared to the original cache. The smaller number of blocks may cause a higher conflict miss rate than the original cache.

5.2.5 Associative caches are designed to reduce the rate of conflict misses. As such, a sequence of read requests with the same 12-bit index field but a different tag field will generate many misses. For the cache described above, the sequence 0, 32768, 0, 32768, 0, 32768, …, would miss on every access, while a 2-way set associative cache with LRU replacement, even one
with a significantly smaller overall capacity, would hit on every access after the first two.

5.2.6 Yes, it is possible to use this function to index the cache. However, information about the five bits is lost because the bits are XORed, so you must include more tag bits to identify the address
in the cache.

5.3
5.3.1 8
5.3.2 32
5.3.3 1 + (22/8/32) = 1.086
5.3.4 3
5.3.5 0.25
5.3.6 ⟨Index, tag, data⟩ (index and tag in binary):
⟨000001, 0001, mem[1024]⟩
⟨000001, 0011, mem[16]⟩
⟨001011, 0000, mem[176]⟩
⟨001000, 0010, mem[2176]⟩
⟨001110, 0000, mem[224]⟩
⟨001010, 0000, mem[160]⟩

5.4
5.4.1 The L1 cache has a low write miss penalty while the L2 cache has a high write miss penalty. A write buffer between the L1 and L2 cache would hide the write miss latency of the L2 cache. The L2 cache would benefit from write buffers when replacing a dirty block, since the new block would be read in before the dirty block is physically written to memory.

5.4.2 On an L1 write miss, the word is written directly to L2 without bringing its block into the L1 cache. If this results in an L2 miss, its block must be brought into the L2 cache, possibly replacing a dirty block which must first be written to
memory.

5.4.3 After an L1 write miss, the block will reside in L2 but not in L1. A subsequent read miss on the same block will require that the block in L2 be written back to memory, transferred to L1, and invalidated in L2.

5.4.4 One in four instructions is a data read, one in ten instructions is
a data write. For a CPI of 2, there are 0.5 instruction accesses per cycle, 12.5% of cycles will require a data read, and 5% of cycles will require a data write.
The instruction bandwidth is thus (0.0030 × 64) × 0.5 = 0.096 bytes/cycle. The data read bandwidth is thus 0.02 × (0.13 + 0.050) × 64 = 0.23 bytes/cycle. The total read bandwidth requirement is 0.33 bytes/cycle. The data write bandwidth requirement is 0.05 × 4 = 0.2 bytes/cycle.

5.4.5 The instruction and data read bandwidth requirement is the same as in 5.4.4. The data write bandwidth requirement becomes 0.02 × 0.30 × (0.13 + 0.050) × 64 = 0.069 bytes/cycle.

5.4.6 For CPI = 1.5 the instruction throughput becomes 1/1.5 = 0.67 instructions per cycle. The data read frequency becomes 0.25/1.5 = 0.17 and the write frequency becomes 0.10/1.5 = 0.067.
The instruction bandwidth is (0.0030 × 64) × 0.67 = 0.13 bytes/cycle.
For the write-through cache, the data read bandwidth is 0.02 × (0.17 + 0.067) × 64 = 0.22 bytes/cycle. The total read bandwidth is 0.35 bytes/cycle. The data write bandwidth is 0.067 × 4 = 0.27 bytes/cycle.
For the write-back cache, the data write bandwidth becomes 0.02 × 0.30 × (0.17 + 0.067) × 64 = 0.091 bytes/cycle.

Address  | 0 | 4 | 16 | 132 | 232 | 160 | 1024 | 30 | 140 | 3100 | 180 | 2180
Line ID  | 0 | 0 | 1  | 8   | 14  | 10  | 0    | 1  | 8   | 1    | 11  | 8
Hit/Miss | M | H | M  | M   | M   | M   | M    | H  | H   | M    | M   | M
Replace  | N | N | N  | N   | N   | N   | Y    | N  | N   | Y    | N   | Y

5.5
5.5.1 Assuming the addresses given are byte addresses, each group of 16 accesses will map to the same 32-byte block, so the cache will have a miss rate of 1/16. All misses are compulsory misses. The miss rate is not sensitive to the size of the cache or the size of the working set. It is, however, sensitive to the access pattern and block size.

5.5.2 The miss rates are 1/8, 1/32, and 1/64, respectively. The workload is exploiting temporal locality.

5.5.3 In this case the miss rate is 0.

5.5.4
AMAT for B = 8:   0.040 × (20 × 8)   = 6.40
AMAT for B = 16:  0.030 × (20 × 16)  = 9.60
AMAT for B = 32:  0.020 × (20 × 32)  = 12.80
AMAT for B = 64:  0.015 × (20 × 64)  = 19.20
AMAT for B = 128: 0.010 × (20 × 128) = 25.60
B = 8 is optimal.

5.5.5
AMAT for B = 8:   0.040 × (24 + 8)   = 1.28
AMAT for B = 16:  0.030 × (24 + 16)  = 1.20
AMAT for B = 32:  0.020 × (24 + 32)  = 1.12
AMAT for B = 64:  0.015 × (24 + 64)  = 1.32
AMAT for B = 128: 0.010 × (24 + 128) = 1.52
B = 32 is optimal.

5.5.6 B = 128

5.6
5.6.1 P1: 1.52 GHz; P2: 1.11 GHz
5.6.2 P1: 6.31 ns (9.56 cycles); P2: 5.11 ns (5.68 cycles)
5.6.3 P1: 12.64 CPI (8.34 ns per instruction); P2: 7.36 CPI (6.63 ns per instruction)
5.6.4 6.50 ns (9.85 cycles); worse
5.6.5 13.04
5.6.6
P1 AMAT = 0.66 ns + 0.08 × 70 ns = 6.26 ns
P2 AMAT = 0.90 ns + 0.06 × (5.62 ns + 0.95 × 70 ns) = 5.23 ns
For P1 to match P2's performance: 5.23 = 0.66 ns + MR × 70 ns
MR = 6.5%

5.7
5.7.1 The cache would have 24/3 = 8 blocks per way and thus an index field of 3 bits.

Word Address | Binary Address | Tag | Index | Hit/Miss | Entry added
3            | 0000 0011      | 0   | 1     | M        | T(1) = 0  (way 0)
180          | 1011 0100      | 11  | 2     | M        | T(2) = 11 (way 0)
43           | 0010 1011      | 2   | 5     | M        | T(5) = 2  (way 0)
2            | 0000 0010      | 0   | 1     | M        | T(1) = 0  (way 1)
191          | 1011 1111      | 11  | 7     | M        | T(7) = 11 (way 0)
88           | 0101 1000      | 5   | 4     | M        | T(4) = 5  (way 0)
190          | 1011 1110      | 11  | 7     | H        |
14           | 0000 1110      | 0   | 7     | M        | T(7) = 0  (way 1)
181          | 1011 0101      | 11  | 2     | H        |
44           | 0010 1100      | 2   | 6     | M        | T(6) = 2  (way 0)
186          | 1011 1010      | 11  | 5     | M        | T(5) = 11 (way 1)
253          | 1111 1101      | 15  | 6     | M        | T(6) = 15 (way 1)

The final contents are: way 0: T(1)=0, T(2)=11, T(4)=5, T(5)=2, T(6)=2, T(7)=11; way 1: T(1)=0, T(5)=11, T(6)=15, T(7)=0.

5.7.2 Since this cache is
fully associative and has one-word blocks, the word address is equivalent to the tag. The only possible way for there to be a hit is a repeated reference to the same word, which doesn't occur for this sequence.

Tag | Hit/Miss | Contents
3   | M        | 3
180 | M        | 3, 180
43  | M        | 3, 180, 43
2   | M        | 3, 180, 43, 2
191 | M        | 3, 180, 43, 2, 191
88  | M        | 3, 180, 43, 2, 191, 88
190 | M        | 3, 180, 43, 2, 191, 88, 190
14  | M        | 3, 180, 43, 2, 191, 88, 190, 14
181 | M        | 181, 180, 43, 2, 191, 88, 190, 14
44  | M        | 181, 44, 43, 2, 191, 88, 190, 14
186 | M        | 181, 44, 186, 2, 191, 88, 190, 14
253 | M        | 181, 44, 186, 253, 191, 88, 190, 14

5.7.3

Address | Tag | Hit/Miss | Contents
3       | 1   | M        | 1
180     | 90  | M        | 1, 90
43      | 21  | M        | 1, 90, 21
2       | 1   | H        | 1, 90, 21
191     | 95  | M        | 1, 90, 21, 95
88      | 44  | M        | 1, 90, 21, 95, 44
190     | 95  | H        | 1, 90, 21, 95, 44
14      | 7   | M        | 1, 90, 21, 95, 44, 7
181     | 90  | H        | 1, 90, 21, 95, 44, 7
44      | 22  | M        | 1, 90, 21, 95, 44, 7, 22
186     | 93  | M        | 1, 90, 21, 95, 44, 7, 22, 93
253     | 126 | M        | 1, 90, 126, 95, 44, 7, 22, 93

The final reference replaces tag 21 in the cache, since tags 1 and 90 had been re-used at time 3 and time 8 while 21 hadn't been used since time 2.
Miss rate = 9/12 = 75%
This is the best possible miss rate, since there were no misses on any block that had been previously evicted from the cache. In fact, the only eviction was for tag 21, which is only referenced once.

5.7.4
L1 only: 0.07 × 100 = 7 ns; CPI = 7 ns / 0.5 ns = 14
Direct-mapped L2: 0.07 × (12 + 0.035 × 100) = 1.1 ns; CPI = ceiling(1.1 ns / 0.5 ns) = 3
8-way set-associative L2: 0.07 × (28 + 0.015 × 100) = 2.1 ns; CPI = ceiling(2.1 ns / 0.5 ns) = 5
Doubled memory access time, L1 only: 0.07 × 200 = 14 ns; CPI = 14 ns / 0.5 ns = 28
Doubled memory access time, direct-mapped L2: 0.07 × (12 + 0.035 × 200) = 1.3 ns; CPI = ceiling(1.3 ns / 0.5 ns) = 3
Doubled memory access time, 8-way set-associative L2: 0.07 × (28 + 0.015 × 200) = 2.2 ns; CPI = ceiling(2.2 ns / 0.5 ns) = 5
Halved memory access time, L1 only: 0.07 × 50 = 3.5 ns; CPI = 3.5 ns / 0.5 ns = 7
Halved memory access time, direct-mapped L2: 0.07 × (12 + 0.035 × 50) = 1.0 ns; CPI = ceiling(1.0 ns / 0.5 ns) = 2
Halved memory access time, 8-way set-associative L2: 0.07 × (28 + 0.015 × 50) = 2.1 ns; CPI = ceiling(2.1 ns / 0.5 ns) = 5

5.7.5 0.07 × (12 + 0.035 × (50 + 0.013 × 100)) = 1.0 ns
Adding the L3 cache does reduce the overall memory access time, which is the main advantage of having an L3 cache. The disadvantage is that the L3 cache takes real estate away from other types of resources, such as functional units.

5.7.6 Even if the miss rate of the L2 cache were 0, a 50 ns access time gives AMAT = 0.07 × 50 = 3.5 ns, which is greater than the 1.1 ns and 2.1 ns given by the on-chip L2 caches. As such, no size will achieve the performance goal.

5.8
5.8.1 1096 days (26304 hours)
5.8.2 0.9990875912
5.8.3 Availability approaches 1.0. With the emergence of inexpensive drives, having a nearly 0 replacement time for hardware is quite feasible. However, replacing file systems and other data can take significant time. Although a drive manufacturer will not include this time in their statistics, it is certainly a part of replacing a disk.
5.8.4 MTTR becomes the dominant factor in determining availability. However, availability would be quite high if MTTF also grew measurably. If MTTF is 1000 times MTTR, the specific value of MTTR is not significant.

5.9
5.9.1 Need to find minimum p
such that 2^p ≥ p + d + 1, and then add one. Thus 9 total bits are needed for SEC/DED.

5.9.2 The (72,64) code described in the chapter requires an overhead of 8/64 = 12.5% additional bits to tolerate the loss of any single bit within 72 bits, providing a protection rate of 1.4%. The (137,128) code from part (a) requires an overhead of 9/128 = 7.0% additional bits to tolerate the loss of any single bit within 137 bits, providing a protection rate of 0.73%. The cost/performance of both codes is as follows:
(72,64) code: 12.5/1.4 = 8.9
(137,128) code: 7.0/0.73 = 9.6
The (72,64) code has a better
cost/performance ratio.

5.9.3 Using the bit numbering from Section 5.5, bit 8 is in error, so the value would be corrected to 0x365.

5.10 Instructors can change the disk latency, transfer rate, and optimal page size for more variants. Refer to Jim Gray's paper on the five-minute rule ten years later.
5.10.1 32 KB
5.10.2 Still 32 KB
5.10.3 64 KB. Because disk bandwidth grows much faster than seek latency, future paging cost will be closer to constant, thus favoring larger pages.
5.10.4 1987/1997/2007: 205/267/308 seconds (or roughly five minutes).
5.10.5 1987/1997/2007: 51/533/4935 seconds (or 10 times longer for every 10 years).
5.10.6 (1) The DRAM cost/MB scaling trend dramatically slows down; or (2) disk $/access/sec dramatically increases. (2) is more likely to happen due to the emerging flash technology.

5.11
5.11.1 Fully associative TLB with LRU replacement. Each row shows the TLB contents as (Valid, Tag, Physical Page) after the access:

Address 4669 (virtual page 1): TLB miss, PT hit, page fault.
  1, 11, 12 | 1, 7, 4 | 1, 3, 6 | 1 (last access 0), 1, 13
Address 2227 (virtual page 0): TLB miss, PT hit.
  1 (last access 1), 0, 5 | 1, 7, 4 | 1, 3, 6 | 1 (last access 0), 1, 13
Address 13916 (virtual page 3): TLB hit.
  1 (last access 1), 0, 5 | 1, 7, 4 | 1 (last access 2), 3, 6 | 1 (last access 0), 1, 13
Address 34587 (virtual page 8): TLB miss, PT hit, page fault.
  1 (last access 1), 0, 5 | 1 (last access 3), 8, 14 | 1 (last access 2), 3, 6 | 1 (last access 0), 1, 13
Address 48870 (virtual page 11): TLB miss, PT hit.
  1 (last access 1), 0, 5 | 1 (last access 3), 8, 14 | 1 (last access 2), 3, 6 | 1 (last access 4), 11, 12
Address 12608 (virtual page 3): TLB hit.
  1 (last access 1), 0, 5 | 1 (last access 3), 8, 14 | 1 (last access 5), 3, 6 | 1 (last access 4), 11, 12
Address 49225 (virtual page 12): TLB miss, PT miss.
  1 (last access 6), 12, 15 | 1 (last access 3), 8, 14 | 1 (last access 5), 3, 6 | 1 (last access 4), 11, 12

5.11.2 With 16 KiB pages:

Address 4669 (virtual page 0): TLB miss, PT hit.
  1, 11, 12 | 1, 7, 4 | 1, 3, 6 | 1 (last access 0), 0, 5
Address 2227 (virtual page 0): TLB hit.
  1, 11, 12 | 1, 7, 4 | 1, 3, 6 | 1 (last access 1), 0, 5
Address 13916 (virtual page 0): TLB hit.
  1, 11, 12 | 1, 7, 4 | 1, 3, 6 | 1 (last access 2), 0, 5
Address 34587 (virtual page 2): TLB miss, PT hit, page fault.
  1 (last access 3), 2, 13 | 1, 7, 4 | 1, 3, 6 | 1 (last access 2), 0, 5
Address 48870 (virtual page 2): TLB hit.
  1 (last access 4), 2, 13 | 1, 7, 4 | 1, 3, 6 | 1 (last access 2), 0, 5
Address 12608 (virtual page 0): TLB hit.
  1 (last access 4), 2, 13 | 1, 7, 4 | 1, 3, 6 | 1 (last access 5), 0, 5
Address 49225 (virtual page 3): TLB hit.
  1 (last access 4), 2, 13 | 1, 7, 4 | 1 (last access 6), 3, 6 | 1 (last access 5), 0, 5

A larger page size reduces the
TLB miss rate but can lead to higher fragmentation and lower utilization of the physical memory.

5.11.3 Two-way set associative TLB. Each row shows the TLB contents as (Valid, Tag, Physical Page, Index) after the access:

Address 4669 (virtual page 1, tag 0, index 1): TLB miss, PT hit, page fault.
  1, 11, 12, 0 | 1, 7, 4, 1 | 1, 3, 6, 0 | 1 (last access 0), 0, 13, 1
Address 2227 (virtual page 0, tag 0, index 0): TLB miss, PT hit.
  1 (last access 1), 0, 5, 0 | 1, 7, 4, 1 | 1, 3, 6, 0 | 1 (last access 0), 0, 13, 1
Address 13916 (virtual page 3, tag 1, index 1): TLB miss, PT hit.
  1 (last access 1), 0, 5, 0 | 1 (last access 2), 1, 6, 1 | 1, 3, 6, 0 | 1 (last access 0), 0, 13, 1
Address 34587 (virtual page 8, tag 4, index 0): TLB miss, PT hit, page fault.
  1 (last access 1), 0, 5, 0 | 1 (last access 2), 1, 6, 1 | 1 (last access 3), 4, 14, 0 | 1 (last access 0), 0, 13, 1
Address 48870 (virtual page 11, tag 5, index 1): TLB miss, PT hit.
  1 (last access 1), 0, 5, 0 | 1 (last access 2), 1, 6, 1 | 1 (last access 3), 4, 14, 0 | 1 (last access 4), 5, 12, 1
Address 12608 (virtual page 3, tag 1, index 1): TLB hit.
  1 (last access 1), 0, 5, 0 | 1 (last access 5), 1, 6, 1 | 1 (last access 3), 4, 14, 0 | 1 (last access 4), 5, 12, 1
Address 49225 (virtual page 12, tag 6, index 0): TLB miss, PT miss.
  1 (last access 6), 6, 15, 0 | 1 (last access 5), 1, 6, 1 | 1 (last access 3), 4, 14, 0 | 1 (last access 4), 5, 12, 1
Direct-mapped TLB:

Address 4669 (virtual page 1, tag 0, index 1): TLB miss, PT hit, page fault.
  1, 11, 12, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 0, 4, 9, 3
Address 2227 (virtual page 0, tag 0, index 0): TLB miss, PT hit.
  1, 0, 5, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 0, 4, 9, 3
Address 13916 (virtual page 3, tag 0, index 3): TLB miss, PT hit.
  1, 0, 5, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 1, 0, 6, 3
Address 34587 (virtual page 8, tag 2, index 0): TLB miss, PT hit, page fault.
  1, 2, 14, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 1, 0, 6, 3
Address 48870 (virtual page 11, tag 2, index 3): TLB miss, PT hit.
  1, 2, 14, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 1, 2, 12, 3
Address 12608 (virtual page 3, tag 0, index 3): TLB miss, PT hit.
  1, 2, 14, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 1, 0, 6, 3
Address 49225 (virtual page 12, tag 3, index 0): TLB miss, PT miss.
  1, 3, 15, 0 | 1, 0, 13, 1 | 1, 3, 6, 2 | 1, 0, 6, 3

All memory references must be cross-referenced against the page table, and the TLB allows this to be performed without accessing off-chip memory (in the common case). If there were no TLB, memory access time would increase significantly.

5.11.4 Assumption: "half the memory available" means half of the 32-bit virtual address space for each running application. The tag size is 32 − log2(8192) = 32 − 13 = 19 bits. All five page tables would require 5 × (2^19/2 × 4) bytes = 5 MB.

5.11.5 In the two-level approach, the 2^19 page table entries are divided into 256 segments that are allocated on demand. Each of the second-level tables contains 2^(19−8) = 2048 entries, requiring 2048 × 4 = 8 KB each and covering 2048 × 8 KB = 16 MB (2^24) of the virtual address space.
If we assume that "half the memory" means 2^31 bytes, then the minimum amount of memory required for the second-level tables would be 5 × (2^31 / 2^24) × 8 KB = 5 MB. The first-level tables would require an additional 5 × 128 × 6 bytes = 3840 bytes.
The maximum amount would be if all segments were activated, requiring the use of all 256 segments in each application. This would require 5 × 256 × 8 KB = 10 MB for the second-level tables and 7680 bytes for the first-level tables.

5.11.6 The page index consists of address bits 12 down to 0, so the LSB of the tag is address bit 13. A 16 KB direct-mapped cache with 2 words per block would have 8-byte blocks and thus 16 KB / 8 bytes = 2048 blocks, and its index field would span address bits 13 down to 3 (11 bits to index, 1 bit word offset, 2 bit byte offset). As such, the LSB of the cache tag is address bit 14. The designer would instead need to make the cache 2-way associative to increase its size to 16 KB.

5.12
5.12.1 The worst case is 2^(43−12) entries, requiring 2^(43−12) × 4 bytes = 2^33 = 8 GB.
5.12.2 With only two levels, the designer can select the size of each page table segment. In a multi-level scheme, reading a PTE requires an access to each level of the table.
5.12.3 In an inverted page table, the number of PTEs can be reduced to the size of the hash table plus the cost of collisions. In this case, serving a TLB miss requires an extra reference to compare the tag or tags stored in the hash table.
5.12.4 It would be invalid if it was paged out to disk.
5.12.5 A write to page 30 would generate a TLB miss. Software-managed TLBs are faster in cases where the software can pre-fetch TLB entries.
5.12.6 When an instruction writes to VA page 200, an