Figure 2: The structure of the cooling unit

Eight cooling plates cover the CPUs and ICCs and are linked to one another by piping. The piping terminates in two ports, one for the water supply line and one for the return line. From each port, two pipes branch off, delivering cooling water to two CPUs and two ICCs respectively.

In fact, for Fujitsu, mainframe water cooling was a technology on the verge of being forgotten. Reviving a technology whose development had long since stopped, and applying it to an unprecedented new computer, meant overcoming a whole series of challenges.
Water cooling back in the spotlight
"In the 1980s, mainframes were all water-cooled" (Masahiro Suzuki, general manager of the HPC Application Promotion Division, Fujitsu Advanced Technologies). Mainframes of that era were dominated by ECL (emitter-coupled logic) LSIs. ECL generates a great deal of heat, so high-performance water cooling was a necessity.

With the arrival of CMOS (complementary metal-oxide semiconductor) devices in the 1990s, however, mainframe cooling requirements changed. CMOS dissipates power only when switching, so "air cooling was good enough" (Suzuki). Air cooling is also cheaper to install, and water cooling gradually faded out of mainframes.

For the development of the K computer, though, Fujitsu turned back to water cooling, because the goal was a level of computing performance never achieved before, and the installation space was limited.
Research suggests that every 1°C drop in operating temperature improves a computer's performance by 0.2%. In other words, lowering the temperature by 50°C promises a gain of roughly 10%. Further cooling also reduces the CPU failure rate: as a rule of thumb, a 10°C drop in CPU temperature halves the failure rate. For the K computer, in which more than 80,000 CPUs operate in concert, CPU reliability is an indispensable condition for stable system operation.

Cooling could also be achieved by blowing air over heat-sink fins, but the space required differs enormously. For the same cooling performance, air-cooling fins need roughly ten times the space of a water-cooled cold plate.
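To make these rules of thumb concrete, here is a minimal Python sketch that turns them into numbers. The 0.2%-per-degree and halving-per-10°C figures come straight from the article; treating them as a linear gain and an exponential decay is an illustrative simplification:

```python
# Rules of thumb quoted above: +0.2% performance per 1 degree C of cooling,
# and the CPU failure rate halving for every 10 degrees C of cooling.
# Modeling them as linear/exponential curves is an illustrative assumption.

def performance_gain(delta_t: float, gain_per_deg: float = 0.002) -> float:
    """Fractional performance gain for a temperature drop of delta_t degrees C."""
    return delta_t * gain_per_deg

def relative_failure_rate(delta_t: float) -> float:
    """Failure rate relative to the baseline; halves per 10 degrees C of cooling."""
    return 0.5 ** (delta_t / 10.0)

if __name__ == "__main__":
    dt = 50.0  # the 50 degree C example used in the article
    print(f"performance gain:      {performance_gain(dt):.1%}")      # 10.0%
    print(f"relative failure rate: {relative_failure_rate(dt):.3f}") # 0.031
```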
The K computer's installation site was fixed while the system was still under development, so the cooling system had to fit into the available space. And given the sheer number of CPUs and ICCs required to reach the performance target, the packaging density is very high.

Specifically, 24 system boards, each carrying 4 CPUs, 4 ICCs, and 32 memory modules, have to fit into a rack less than 750-800 mm wide and deep and about 2 m tall. The rack must also house power supplies and other equipment.

To meet these constraints, the cooling system, as described above, branches step by step from the facility level down to the racks, the system boards, and finally the individual CPUs and ICCs. For Fujitsu, developing K's water-cooling system, comprising 70 km of copper piping, 23 km of rubber piping, and 500,000 joints in all, was an enormous challenge.
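Taking the article's numbers at face value, a quick back-of-the-envelope script shows what this packaging density implies for the machine as a whole. The 80,000-CPU total is the article's round "more than 80,000", so the resulting rack count is only an estimate:

```python
# Packaging math using the per-rack figures quoted above. The total CPU
# count is the article's round number, so the rack count is an estimate.

import math

boards_per_rack = 24
cpus_per_board = 4
iccs_per_board = 4

cpus_per_rack = boards_per_rack * cpus_per_board                             # 96
cooled_chips_per_rack = boards_per_rack * (cpus_per_board + iccs_per_board)  # 192

total_cpus = 80_000                            # "more than 80,000" per the article
racks = math.ceil(total_cpus / cpus_per_rack)  # ~834 racks

print(f"{cpus_per_rack} CPUs and {cooled_chips_per_rack} water-cooled chips per rack")
print(f"at least {racks} racks for {total_cpus:,} CPUs")
```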
Drawing on an old technology
"At the time, there were a few dozen engineers who had worked on mainframe water cooling. But as air cooling became the mainstream, they were gradually transferred to storage and other divisions and ended up scattered across the company" (Kyoichi Takada, manager of the First Development Department, Systems Development Division, Fujitsu's Next-Generation Technical Computing Unit). This is the situation in which development of K's water-cooling system began.
Documents summarizing this know-how, covering questions such as "how much can a given coolant lower the temperature" and "how do you evaluate the corrosiveness of piping materials and choose the right hoses", presumably still existed somewhere, "but nobody knew where they were" (Takada).
So the developers tracked down and consulted engineers who had worked on water cooling in the past. Of course, simply excavating the old technology was not enough to realize K's cooling system, because K's requirements for heat output and installation space were far stricter than anything before.
Take the cooling unit mentioned earlier, in which eight cooling plates are connected by piping with ports at the ends (Figure 2). The cooling plates sit on the top surfaces of the CPUs and ICCs, and internally they use a microchannel design, in which flow channels are formed in a metal plate by fine machining. Fujitsu first used this technology in the general-purpose "GS8900" computer launched in 1999*3.

*3 The GS8900 briefly revived water cooling in order to raise processing performance.
The exact microchannel flow paths have not been disclosed, but K's channels are said to be a substantial improvement over the GS8900's. Machining and thermal-fluid simulation technologies have also advanced greatly over the intervening decade, and by exploiting them the designers achieved flow paths with better cooling performance than ever before.
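As a rough illustration of why a compact cold plate can handle such chips at all, a steady-state heat balance shows how little coolant flow is needed to carry away a chip's heat with only a small temperature rise. The per-chip power and flow rate below are illustrative assumptions; the article gives neither figure:

```python
# Steady-state heat balance for one cold plate: P = m_dot * c_p * dT.
# Chip power and flow rate are assumed values for illustration only.

C_P_WATER = 4186.0    # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0    # density of water, kg/m^3

chip_power_w = 60.0   # assumed heat dissipation of one chip, W
flow_l_per_min = 1.0  # assumed water flow through one plate, L/min

m_dot = RHO_WATER * (flow_l_per_min / 1000.0) / 60.0  # mass flow, kg/s
delta_t = chip_power_w / (m_dot * C_P_WATER)          # coolant temperature rise, K

print(f"coolant warms by {delta_t:.2f} K across the plate")  # ~0.86 K
```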
Complex piping routes
The piping inside the cooling unit does not connect the cooling plates over the shortest possible distances. The partner company in charge of manufacturing even complained, "why does it have to twist and turn like this?", because building it required special jigs and methods (Figures 3 and 4).
Posted by: stupid  Time: 2012-7-3 14:27
For the first time since November 2009, a United States supercomputer sits atop the TOP500 list of the world's top supercomputers. Named Sequoia, the IBM BlueGene/Q system installed at the Department of Energy's Lawrence Livermore National Laboratory achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores.
Sequoia is also one of the most energy efficient systems on the list, which will be released Monday, June 18, at the 2012 International Supercomputing Conference in Hamburg, Germany. This will mark the 39th edition of the list, which is compiled twice each year.
Complete information on the trends indicated by the latest list, as well as the complete list, can be found on the TOP500 website.
On the latest list, Fujitsu's "K Computer" installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, is now the No. 2 system with 10.51 Pflop/s on the Linpack benchmark using 705,024 SPARC64 processing cores. The K Computer held the No. 1 spot on the previous two lists.
The new Mira supercomputer, an IBM BlueGene/Q system at Argonne National Laboratory in Illinois, debuted at No. 3, with 8.15 petaflop/s on the Linpack benchmark using 786,432 cores. The other U.S. system in the Top 10 is the upgraded Jaguar at Oak Ridge National Laboratory in Tennessee, which was the top U.S. system on the previous list and now clocks in at No. 6.
The newest list also marks a return of European systems in force. The most powerful system in Europe and No. 4 on the list is SuperMUC, an IBM iDataPlex system installed at Leibniz Rechenzentrum in Germany. Another German machine, the JuQUEEN BlueGene/Q at Forschungszentrum Juelich, is No. 8.
Italy makes its debut in the Top 10 with an IBM BlueGene/Q system installed at CINECA. The system is at No. 7 on the list with 1.72 Pflop/s performance. In all, four of the top 10 supercomputers are IBM BlueGene/Q systems. France occupies the No. 9 spot with a homegrown Bull supercomputer.
China, which briefly took the No. 1 and No. 3 spots in November 2010, has two systems in the Top 10, with Tianhe-1A at the National Supercomputing Center in Tianjin at No. 5 and Nebulae at the National Supercomputing Centre in Shenzhen at No. 10.

Total performance of all the systems on the list has increased considerably since November 2011, reaching 123.4 Pflop/s. The combined performance of the last list was 74.2 Pflop/s. In all, 20 of the supercomputers on the newest list reached performance levels of 1 Pflop/s or more. The No. 500 machine on the list notched a performance level of 60.8 teraflop/s, which was enough to reach No. 332 just seven months ago.

A look at processors

A total of 372 systems (74.4 percent) are now using Intel processors, down from 384 systems (76.8 percent) on the last list. Intel is now followed by the AMD Opteron family with 63 systems (12.6 percent), the same as in the previous list. The share of IBM Power processors has increased from 49 to 58 systems (11.6 percent).

58 systems use accelerators or co-processors (up from 39 six months ago); 53 of these use NVIDIA chips, two use Cell processors, two use ATI Radeon, and there is one new system with Intel MIC technology.

The top vendors
IBM kept its lead in systems and now has 213 systems (42.6 percent) compared to HP with 138 systems (27.6 percent). HP is slightly down from 141 systems (28.2 percent) seven months ago, compared to IBM with 223 systems (44.6 percent). In the system category, Cray, Appro, SGI and Bull follow with 5.4 percent, 3.6 percent, 3.2 percent, and 3.2 percent respectively.
IBM remains the clear leader in the TOP500 list in performance and considerably increased its share with 47.5 percent of installed total performance (up from 27.3 percent). HP is second with 10.2 percent, down from 13.1 percent. Due to the impressive performance of the K Computer, Fujitsu follows closely in the third spot with 9.9 percent, down from 14.7 percent. Cray follows in fourth place in this category with 8.9 percent, down from 14.3 percent.
Where are they now?

The U.S. is clearly the leading consumer of HPC systems with 253 of the 500 systems (down from 263). The European share (107 systems, up from 103) is still lower than the Asian share (121 systems, up from 118). The dominant countries in Asia are China with 68 systems (down from 74) and Japan with 34 systems (up from 30). In Europe, the UK, France, and Germany are almost equal with 25, 22, and 20 systems respectively.

Posted by: shark4685  Time: 2012-7-3 14:28
Swap the water for Freon and hook up a compressor outside; that would probably work even better!

Posted by: stupid  Time: 2012-7-3 14:31
IBM's water-cooling technology
The water cooling used in SuperMUC is the same technology as in IBM's System x iDataPlex. Strictly speaking, it is warm-water cooling: the inlet water is at 40°C, whereas conventional data-center water cooling uses inlet water at around 16°C.
' t* V" k0 i$ L o8 h8 W) o) K/ u
The reason for using warm water is to reuse it. In conventional data-center water cooling, the 16°C water leaves the servers at around 20°C, which is too cool to be put to any other use. With IBM's warm-water cooling, the water leaving the system can reach around 70°C, and that water can be reused, for example to heat buildings.
Posted by: stupid  Time: 2012-10-17 16:41

Energy Efficiency

SuperMUC uses a new, revolutionary form of warm water cooling developed by IBM. Active components like processors and memory are directly cooled with water that can have an inlet temperature of up to 40 degrees Celsius. This "High Temperature Liquid Cooling", together with very innovative system software, promises to cut the energy consumption of the system. In addition, all LRZ buildings will be heated by re-using this energy.
Why "warm" water cooling?

Typically, water used in data centers has an inlet temperature of approx. 16 degrees Celsius and, after leaving the system, an outlet temperature of approx. 20 degrees Celsius. Making 16-degree water requires complex and energy-hungry cooling equipment, and at the same time there is hardly any use for the warmed-up water, as it is too cold to be used in any technical process.
SuperMUC allows an increased inlet temperature. It is easily possible to provide water at up to 40 degrees Celsius using simple "free-cooling" equipment, as outside temperatures in Germany hardly ever exceed 35 degrees Celsius. At the same time, the outlet water can be made quite hot (up to 70 degrees Celsius) and re-used in other technical processes, for example to heat buildings. By reducing the number of cooling components and using free cooling, LRZ expects to save several million euros in cooling costs over the five-year lifetime of the system.
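The inlet and outlet temperatures quoted above are enough to estimate how much reusable heat the warm-water loop delivers; only a flow rate needs to be assumed. A minimal sketch, using a hypothetical flow rate (the text gives only the temperatures):

```python
# Reusable heat carried by the warm-water loop: Q = m_dot * c_p * (T_out - T_in).
# The 40 and 70 degree C figures come from the text; the flow rate is hypothetical.

C_P_WATER = 4186.0   # specific heat of water, J/(kg*K)

t_in = 40.0          # inlet temperature, degrees C (from the text)
t_out = 70.0         # upper-end outlet temperature, degrees C (from the text)
m_dot = 8.0          # assumed loop mass flow, kg/s (hypothetical)

q_mw = m_dot * C_P_WATER * (t_out - t_in) / 1e6
print(f"about {q_mw:.2f} MW of heat available for building heating")  # ~1.00 MW
```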