What is a Supercomputer?
A supercomputer is a computer designed to perform extremely complex calculations and data-processing tasks at very high speeds. Modern supercomputers can execute quadrillions of floating-point operations per second and can complete calculations that would take conventional computers an impractically long time.
Supercomputers are typically made up of thousands of individual processors, each of which is capable of executing instructions in parallel with the other processors. This allows supercomputers to break down large computational problems into smaller tasks that can be processed simultaneously, greatly reducing the amount of time required to complete the task.
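As a rough sketch of this divide-and-conquer idea (a toy example only, not code from any real supercomputer, which would typically use a framework such as MPI across many nodes), the Python snippet below splits one large summation into chunks that separate worker processes compute at the same time:

# Toy illustration of parallel decomposition: split a big job into chunks
# and let several worker processes handle them simultaneously.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the squares of integers in [start, stop) -- one worker's share."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 8
    step = n // workers
    # Split the full range into one chunk per worker.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as a serial loop, computed in parallel

The same principle scales up on a real machine: the problem is partitioned across thousands of processors, each works on its own piece, and the partial results are combined at the end.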
Supercomputers are used in a wide range of applications, including scientific research, weather forecasting, aerospace engineering, and financial modeling. They are used to simulate complex physical processes, analyze large data sets, and solve problems that demand massive amounts of computational power.
One of the most famous supercomputers is IBM's Summit, located at Oak Ridge National Laboratory in Tennessee, USA. Summit topped the TOP500 list of the world's fastest supercomputers from 2018 to 2020, with a peak performance of over 200 petaflops, meaning it can perform more than 200 quadrillion calculations per second.
Supercomputers are expensive to build and operate, and require specialized expertise to develop and maintain. However, they play a critical role in advancing scientific and engineering research, and are an essential tool for solving some of the most complex problems facing society today.
History
The supercomputer was first developed in the 1960s, with UNIVAC building the Livermore Atomic Research Computer (LARC) and IBM building the 7030 Stretch. The IBM 7030 was built for the Los Alamos National Laboratory and used transistors, magnetic core memory, and pioneering random access disk drives. The third pioneering supercomputer project of the early 1960s was the Atlas at the University of Manchester, which introduced time-sharing to supercomputing. The CDC 6600, designed by Seymour Cray and finished in 1964, marked the transition from germanium to silicon transistors and became the fastest computer in the world. Cray left CDC in 1972 to form his own company, Cray Research, which developed the successful Cray-1 and Cray-2 supercomputers in the following years. These developments defined the supercomputing market, with ever-increasing processing speeds and capabilities, and led to the use of supercomputers in scientific research, weather forecasting, and other fields.
Energy usage and heat management
The management of heat density is a key issue for centralized supercomputers, as the large amount of heat generated can reduce the lifetime of system components. Various approaches to heat management have been used, such as liquid cooling and hybrid liquid-air cooling systems. The cost of powering and cooling these systems can be significant. The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt." Low power processors have been used to deal with heat density in some systems. The ability of cooling systems to remove waste heat is a limiting factor, and designs for future supercomputers are power-limited.
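To make the FLOPS-per-watt metric concrete, here is a minimal worked example; the figures are hypothetical and chosen only for illustration, not measurements of any real machine:

# Hypothetical numbers: a system sustaining 150 petaflops while drawing
# 10 megawatts of power.
sustained_flops = 150e15   # floating-point operations per second
power_watts = 10e6         # total power draw in watts

efficiency = sustained_flops / power_watts
print(f"{efficiency / 1e9:.1f} gigaflops per watt")  # -> 15.0 gigaflops per watt

The higher this ratio, the more useful computation a system delivers for every watt it consumes, which is why efficiency rankings track it alongside raw performance.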