Modern supercomputers can achieve petaflop processing capacity.
According to a recent study by Virginia Tech researchers, HPC storage can be built in innovative ways that offer flexibility, scalability, and performance.
In the modern world, high-performance computing (HPC) systems, or supercomputers, play an important role. Researchers use them to tackle a wide range of problems, such as quantum mechanics, climate studies, and molecular modeling. Supercomputers can handle such workloads in very little time: modern machines perform nearly a quadrillion floating-point operations per second (FLOPS), a rate known as a petaflop.
However, the storage platforms underlying such HPC systems are typically rigid, forcing users to pick among fixed trade-offs, such as high availability versus customizability, rather than tailoring the system to their needs.
Now, researchers from Virginia Tech have developed a new framework called "BespoKV", aimed at exascale computing: supercomputers that can perform a billion billion calculations per second.
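To make these performance scales concrete, a quick back-of-the-envelope sketch (the constants are standard definitions, not from the article):

```python
# Orders of magnitude for supercomputer performance.
# FLOPS = floating-point operations per second.
PETAFLOP = 10**15  # about a quadrillion operations per second
EXAFLOP = 10**18   # a billion billion operations per second (exascale)

# An exascale machine is 1000x faster than a one-petaflop machine.
speedup = EXAFLOP // PETAFLOP
print(speedup)  # 1000
```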
It is based on the concept of key-value (KV) stores, which keep important data in fast memory rather than on slower disks.
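The basic idea of a key-value store can be sketched in a few lines. This is a minimal single-server illustration, assuming an in-memory dictionary as the backing store; the names are hypothetical and not BespoKV's actual API:

```python
# Minimal sketch of a single-server key-value (KV) store: data lives in
# fast memory (a Python dict) instead of on slower disks.
class InMemoryKVStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        """Store a value under the given key."""
        self._data[key] = value

    def get(self, key, default=None):
        """Fetch the value for a key, or a default if absent."""
        return self._data.get(key, default)

store = InMemoryKVStore()
store.put("temperature", 21.5)
print(store.get("temperature"))  # 21.5
```

Real KV stores add persistence, eviction, and concurrency control on top of this core lookup interface.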
According to the researchers, BespoKV's unique selling point is its ability to compose distributed KV stores: a developer supplies a "datalet" (a single-server KV store), and BespoKV immediately turns it into a ready-to-use distributed KV store.
The researchers point out that BespoKV frees developers from building and rebuilding a system from scratch for each specific task: a developer can simply drop a datalet into BespoKV, which handles the "messy plumbing" of distributed systems.
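The datalet-plus-plumbing idea can be illustrated with a toy sketch. All class and method names here are assumptions for illustration, not BespoKV's real interface, and the "plumbing" shown is naive synchronous replication rather than whatever protocols BespoKV actually provides:

```python
# Hypothetical sketch: a developer writes only the "datalet" (a
# single-server KV store); the framework wraps it with distributed-systems
# plumbing (here, naive replication across several nodes).
class Datalet:
    """Developer-supplied single-server KV store."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class ReplicatedKVStore:
    """Framework-provided plumbing: replicate every write to N datalets."""
    def __init__(self, datalet_factory, replicas=3):
        self.nodes = [datalet_factory() for _ in range(replicas)]

    def put(self, key, value):
        for node in self.nodes:  # write to every replica
            node.put(key, value)

    def get(self, key):
        return self.nodes[0].get(key)  # serve reads from any replica

kv = ReplicatedKVStore(Datalet, replicas=3)
kv.put("user:42", "alice")
print(kv.get("user:42"))  # alice
```

The point of the decoupling is that the same datalet could be wrapped with different plumbing (replication, sharding, caching) without rewriting its storage logic.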
This work is relevant to any field that handles large volumes of data, such as major credit card companies, movie-streaming services, and social media.
"Developers at big companies can readily use BespoKV to design innovative HPC storage systems," said Ali Butt, a professor of computer science.
"Data access is a major bottleneck in HPC storage systems, and supporting flexibility usually requires one-off solutions, which are extremely difficult to build given the required degrees of consistency, availability, and reliability."
The results of the research were presented today at the IEEE Supercomputing Conference (SC) in Dallas, Texas.