Instead, this hunk of metal allows Cray to achieve something almost unimaginable by 2019.
Both at roughly the same TDP. It’s an entirely new design created to address today’s diversifying needs.
Imagine having a suite of software solutions that allows you to conduct an entire AI workflow on one system. The Frontier system will be composed of more than 100 Cray Shasta cabinets with high-density compute blades powered by HPC- and AI-optimized AMD EPYC™ processors and Radeon Instinct™ GPU accelerators purpose-built for the needs of exascale computing.
With these needs driving scientific discovery and technology, the next generation of supercomputing will be characterized by the fastest exascale performance, data-centric workloads and diversification of processor architectures.
The Cray Shasta supercomputing system is our answer. It allows for multiple processor and accelerator architectures and a choice of system interconnect technologies, including Slingshot, the interconnect designed and developed by Cray, a Hewlett Packard Enterprise company. Today’s scientific research, technology and big data questions are bigger, more complex and more urgent than ever.
A rack, or string of racks, would have connections to in-building water pipes and a heat exchanger.
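The appeal of plumbing racks into facility water comes down to simple thermodynamics: the heat a loop carries away is Q = ṁ·c_p·ΔT. A minimal sketch of that arithmetic follows; the flow rate and temperature rise are illustrative assumptions, not Cray or CoolIT specifications.

```python
# Why direct liquid cooling scales: heat carried by a water loop is
# Q = m_dot * c_p * delta_T. All numbers here are illustrative
# assumptions, not vendor specifications.

WATER_CP = 4186.0  # specific heat of water, J/(kg*K)

def heat_removed_kw(flow_lpm: float, delta_t_c: float) -> float:
    """Heat carried away (kW) by a water loop at a given flow rate
    (liters per minute) and inlet-to-outlet temperature rise (deg C)."""
    mass_flow_kg_s = flow_lpm / 60.0  # ~1 kg per liter of water
    return mass_flow_kg_s * WATER_CP * delta_t_c / 1000.0

# Example: a hypothetical 60 L/min loop with a 15 C rise moves ~63 kW,
# far more than rack-level air cooling typically handles.
print(round(heat_removed_kw(60, 15), 1))  # -> 62.8
```

Water’s high heat capacity is what makes rack-scale liquid loops attractive: modest flow rates move heat loads that would require enormous airflow.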
Using liquid cooling by CoolIT Systems, Cray is able to build systems with up to 1024 threads per U in 2019, with room to increase that number as processors evolve.
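The 1024-threads-per-U figure is easy to sanity-check with back-of-the-envelope arithmetic. The node configuration below is an assumption for illustration (dual-socket, 64-core, SMT2 EPYC nodes at four nodes per U), not a published Shasta blade specification.

```python
# Back-of-the-envelope check on the "1024 threads per U" density claim.
# The configuration is a hypothetical example, not a Shasta blade spec.

def threads_per_u(nodes_per_u: int, sockets: int, cores: int, smt: int) -> int:
    """Hardware threads packed into one rack unit for a given node layout."""
    return nodes_per_u * sockets * cores * smt

# 4 nodes/U * 2 sockets * 64 cores * 2 hardware threads = 1024
print(threads_per_u(nodes_per_u=4, sockets=2, cores=64, smt=2))  # -> 1024
```

Densities like this are only reachable because liquid cooling removes the airflow ceiling; the same arithmetic with air-cooled blades would force fewer nodes per U.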
Shasta systems are designed to support multiple exascale workloads simultaneously, providing the power you need for advanced simulation, modeling, AI and analytics.
This is for good reason.
(I wonder if the weight of the liquid is also a factor in data center design.) Anyway, I have often wondered whether CPU and rack density costs were ever a problem. But you need them to work together.
Shasta continues this leadership into much larger compute capabilities, up to exascale and beyond, supporting all fields of extreme-scale science, innovation and discovery. Fully submerged racks with greatly simplified systems looked appealing. On the Supercomputing 2018 floor, we saw a hunk of metal that seemed inert at first.
It has to be marketed on TCO. Quality-of-service and novel congestion management features limit the impact on critical workloads from other applications, system services, I/O traffic or co-tenant workloads. Answering them demands an entirely new approach to scientific computing. It took the data center industry a long time to find out that it could reduce costs by going the liquid route.
Cray supercomputer systems consistently lead in performance and efficient scaling.