THE CURRENT STATE OF HIGH-PERFORMANCE COMPUTING
Encompassing performance-intensive applications such as weather forecasting and large-scale simulations, high-performance computing (HPC) is not your father's computing. Multi-core, multi-threaded supercomputer systems often process highly complex mathematical calculations 24 hours a day, 7 days a week.
Because of the constant loads placed on HPC systems, absolute performance has historically mattered more than energy efficiency. Recently, however, network latency has begun to supplant raw performance as the most important metric.
Data growth is exploding, and Big Data platforms are entering the mainstream. As the number of cores increases, more threads contend for memory capacity and bandwidth. Today's applications, such as in-memory databases, require more memory capacity as well as higher bandwidth. Without an increase in memory-subsystem performance, cores are starved for data, degrading overall system performance. Samsung Green Memory is specifically designed to meet HPC's ever-increasing memory capacity and bandwidth requirements while still adding energy efficiency to the equation.
- Data is exploding
- "New" data types are breaking legacy data platforms
- Big Data platforms are becoming mainstream
- "Native" Big Data applications and services will quickly emerge
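The capacity-and-bandwidth squeeze described above comes down to simple arithmetic: aggregate channel bandwidth is fixed while core counts keep rising. A minimal sketch, using the real DDR3-1600 channel rate but otherwise illustrative core and channel counts (not tied to any specific Samsung platform):

```python
# Bandwidth-per-core arithmetic illustrating why rising core counts
# strain the memory subsystem. One 64-bit DDR3-1600 channel delivers
# 1600 MT/s * 8 bytes = 12.8 GB/s; core and channel counts below are
# illustrative assumptions, not measured figures.
channel_bandwidth_gbs = 12.8
channels = 4
total_bw = channel_bandwidth_gbs * channels  # 51.2 GB/s aggregate

for cores in (8, 16, 32):
    # Doubling cores halves the bandwidth available to each one.
    print(f"{cores} cores -> {total_bw / cores:.1f} GB/s per core")
```

With a fixed 51.2 GB/s channel budget, going from 8 to 32 cores drops per-core bandwidth from 6.4 GB/s to 1.6 GB/s, which is the "starved for data" effect described above.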
WORLDWIDE ECONOMIC LANDSCAPE
According to IDC, 2011's HPC server revenue rebounded to the pre-recession high point of about $10 billion. IDC also forecasts that 2012 server revenue will reach a record level of $10.6 billion, on its way to $13.4 billion in 2015 (7.2% CAGR). The HPC market will continue to benefit from the global economic recovery.
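As a sanity check on the growth figures, the standard compound-annual-growth-rate formula can be applied to the revenue numbers quoted above (the inputs are IDC's; the small gap versus the quoted 7.2% CAGR likely reflects rounding in the revenue figures or a different base period):

```python
# CAGR between two revenue points. The formula is standard; the dollar
# figures are the IDC numbers quoted in the text.
def cagr(start, end, years):
    """Compound annual growth rate over `years` years."""
    return (end / start) ** (1 / years) - 1

# $10.0B (2011) -> $13.4B (2015), i.e. 4 years of growth:
print(f"CAGR: {cagr(10.0, 13.4, 4):.1%}")  # ~7.6%, close to IDC's 7.2%
```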
In addition, the worldwide high-end supercomputer race will accelerate. IDC reports show that China, France, Italy, Japan, India, Russia, and the US all have HPC vendors today. The largest supercomputers cost $100-400 million, compared to $25-30 million in the vector era, and the EU has announced a plan to double HPC investment.
Coupled with expanding data traffic, this increase in growth and investment will only amplify the inherent latency concerns of HPC systems, not to mention greatly adding to their energy usage.
SAMSUNG LRDIMM VS. HPC'S NEW PAIN POINT
As the industry's pain point shifts from core limitations to memory limitations, the solution now lies with Samsung's Load-Reduced DIMM (LRDIMM) technology. As channel speed and capacity, the industry's two main drivers, increase, Samsung LRDIMM eases four major constraints: cost, latency, power consumption, and form factor.
As IDC reports, performance, power consumption, and reliability are interdependent factors; architecting the optimal balance among them is extremely challenging. Server architectures with unprecedented scalability and density will lead the market and create new business opportunities, and power efficiency is moving from feel-good buzzword to absolute business necessity. A memory buffer is at the heart of Samsung's new LRDIMM technology. It solves the channel-speed and signal-integrity (SI) challenges by "reinforcing" ALL signals for command, address, control, and data on the memory-controller interface.
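The load-reduction idea can be illustrated with a toy model (a sketch of the general RDIMM-vs-LRDIMM distinction, not Samsung's specification): a registered DIMM buffers only command/address signals, so every DRAM rank still loads the data bus, whereas an LRDIMM's memory buffer re-drives the data lines as well, presenting a single load per DIMM regardless of rank count.

```python
# Illustrative model of electrical loading on a DDR3 data bus.
# RDIMM: the register buffers command/address only, so each DRAM rank
# still loads the data lines. LRDIMM: the memory buffer re-drives data
# too, so each DIMM presents one load regardless of rank count.
def data_bus_loads(dimms_per_channel, ranks_per_dimm, lrdimm):
    loads_per_dimm = 1 if lrdimm else ranks_per_dimm
    return dimms_per_channel * loads_per_dimm

# Three quad-rank DIMMs on one channel:
print(data_bus_loads(3, 4, lrdimm=False))  # RDIMM:  12 loads
print(data_bus_loads(3, 4, lrdimm=True))   # LRDIMM:  3 loads
```

Fewer electrical loads is what lets a fully populated LRDIMM channel run at higher speed, which is the capacity-plus-speed benefit described above.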
HEAT, LATENCY, LOCATION AND NEW CONCERNS OVER POWER USAGE
Because they generate large amounts of heat, high-performance computing centers are usually located in colder regions. While a remote location assists the cooling process, its distance from the areas being served has the drawback of increasing network latency.
To alleviate latency issues, HPC centers are now moving to cities close to their customers (such as the National Weather Service, the Ministry of National Defense, NASA, government offices, and schools). As a result, the costs of floor space and electrical power matter more than before, and efficiency is becoming a key concern.
To meet this efficiency need, industry leader Samsung is working to lower heat, noise, power consumption, and the number and size of machines. Logically, fewer machines use less floor space and less electrical power. Two of the most important HPC system-level elements are memory and storage, and Samsung's Green Memory products are more than up to the challenge of meeting these needs.
SAMSUNG GREEN DDR3 IN HPC ENVIRONMENTS
Key Performance Indicators (KPIs) in High-Performance Computing include performance, efficiency, and the power envelope. The right choice of memory contributes dramatically to improving these KPIs. What does this mean for you?
Samsung's Green DDR3 continues to shrink its die size, reducing power consumption in a predictable manner (in keeping with Moore's Law).
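Why a die shrink saves power can be seen from the standard dynamic-power relation P = C · V² · f. The sketch below uses the real DDR3 and DDR3L supply voltages (1.5 V and 1.35 V) but a hypothetical 20% capacitance reduction, purely for illustration; these are not Samsung's actual process numbers:

```python
# Back-of-the-envelope dynamic power model, P = C * V^2 * f, showing why
# a die shrink lowers DRAM power. The 20% capacitance cut is a
# hypothetical assumption; 1.5 V (DDR3) and 1.35 V (DDR3L) are the
# standard supply voltages.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage**2 * frequency

old = dynamic_power(1.0, 1.5, 1.0)    # baseline die at 1.5 V
new = dynamic_power(0.8, 1.35, 1.0)   # shrunk die at 1.35 V
print(f"power reduction: {1 - new / old:.0%}")  # prints "power reduction: 35%"
```

Because voltage enters squared, even the modest 1.5 V to 1.35 V drop compounds with the capacitance reduction, which is why process shrinks cut power faster than they cut area.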
The ever-decreasing form factor of Samsung Green DDR3 LRDIMM yields a clear advantage over RDIMM, as shown in the chart below from Hewlett-Packard Development Company (2010).
SAMSUNG GREEN SSD IN HPC ENVIRONMENTS
Samsung's flash-based Green SSDs help maintain 24 x 7 uptime for mission-critical servers, RAID systems, and network-attached storage (NAS). Because SSDs have no moving parts, they start up instantly and are not subject to the physical wear and tear that can gradually degrade the reliability of traditional hard disk drives. Design advantages of Samsung Green SSDs include:
- Heightened storage capacity based on advanced semiconductor technology.
- Minimized cooling costs achieved through built-in power management.
- Accelerated read and write operations with outstanding multitasking capabilities.
- Varied interfaces enabling connections to different types of host devices.
- Exceptional durability with high shock and vibration tolerance.
Samsung maintains outstanding quality control over the design and production of its SSDs, from NAND flash and DRAM components to controllers and firmware. Samsung's leadership in the SSD market is a direct result of unmatched expertise in semiconductor technology, along with its demonstrated ability to design and develop memory solutions in-house.