High Performance Computing

‘High Performance Computing’ (HPC) and ‘supercomputing’ refer to technologies that make the maximum available computing performance usable for a single technical or scientific task.

We have been active in this area since the founding of SCS (1993) and are in direct contact with the most important suppliers and users of supercomputers, such as CSCS. In line with this area's specific requirements, SCS has a great deal of experience in high-speed communication (high-bandwidth networks, low-latency communication, low-latency software stacks) and in scaling hardware and software architectures for reliable operation across hundreds or even thousands of computers. For example, using discrete event simulation, SCS has modelled the behaviour of very large parallel supercomputers under specific workloads and failure scenarios in order to identify design bottlenecks.
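To illustrate the idea of discrete event simulation mentioned above, here is a minimal, hypothetical sketch (not SCS's actual simulator): a single parallel job runs across many nodes, each node fails independently with exponentially distributed inter-failure times, and any failure before completion forces a full restart. Walking an event queue forward in time like this lets one estimate how failure rates erode effective throughput at scale.

```python
import heapq
import random

def simulate_job(num_nodes, job_hours, node_mtbf_hours, seed=0):
    """Toy discrete-event simulation: estimate the wall-clock finish
    time of one parallel job when any single node failure forces a
    full restart (no checkpointing). Illustrative model only."""
    rng = random.Random(seed)
    # Min-heap of pending events: (failure_time, node_id).
    events = []
    for node in range(num_nodes):
        heapq.heappush(events, (rng.expovariate(1.0 / node_mtbf_hours), node))
    finish = job_hours
    # Process failure events in time order until the job outruns them.
    while events and events[0][0] < finish:
        t, node = heapq.heappop(events)
        finish = t + job_hours  # restart the job from scratch
        # Schedule this node's next failure.
        heapq.heappush(events, (t + rng.expovariate(1.0 / node_mtbf_hours), node))
    return finish

# With more nodes (or a shorter MTBF), restarts pile up and the
# effective completion time grows well beyond the nominal job length.
```

In a real study the model would of course include checkpoint/restart costs, network contention, and measured failure distributions; the event-queue skeleton stays the same.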

SCS also supports clients in improving the performance of new or existing application code, as well as in porting it, with portable performance, from x86-architecture CPUs to accelerator hardware such as general-purpose GPUs:

  • Schnee von morgen (‘Tomorrow's Snow’)
  • A Stencil Library for the New Dynamic Core of COSMO (NVIDIA)
  • How MeteoSwiss is preparing for emerging/future HPC architectures (MeteoSchweiz)
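The stencil library mentioned above centres on a common HPC pattern: updating each grid point from its neighbours. As a purely illustrative sketch (not the COSMO stencil library's actual API), the five-point Laplacian below shows the kind of kernel such a library abstracts, so that the same user code can be dispatched to an x86 CPU backend or a GPU backend:

```python
def apply_laplacian(field):
    """Apply a five-point Laplacian stencil to the interior of a 2-D
    grid (list of lists of floats); boundary values are copied through.
    Toy example of the kernel shape a stencil library abstracts."""
    rows, cols = len(field), len(field[0])
    out = [row[:] for row in field]  # keep boundaries unchanged
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Each interior point is updated from its four neighbours.
            out[i][j] = (field[i - 1][j] + field[i + 1][j]
                         + field[i][j - 1] + field[i][j + 1]
                         - 4.0 * field[i][j])
    return out
```

A stencil library separates this numerical definition from the loop order, memory layout, and parallelisation strategy, which is precisely what makes performance portability between CPUs and GPUs tractable.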

High Performance Computing – Blog Postings

Graphics processors (GPUs) offer better performance at lower cost and energy demand than regular processors (CPUs). A consortium linked to the Swiss National Supercomputing Centre (CSCS) in Lugano, the Federal Office of ...
The weather forecasts of MeteoSwiss and many other European meteorological services are based on COSMO, a numerical weather prediction system. The ‘dynamic core’ of this model was reprogrammed by SCS. This was done in co-operation with the ...