Denver, CO, November 18, 2013

Developed under a PRACE implementation phase project, the easy-to-program large shared memory cluster has been successfully validated and is now ready for general use.

Numascale today announced the successful validation of the large shared memory NumaConnect cluster at the University of Oslo in Norway. The cluster was installed under a PRACE prototype study assessing emerging new technologies for European HPC.

The IBM/Numascale system, installed at the University of Oslo in 2012, consists of 72 IBM x3755 2U servers connected in a 3D torus with NumaConnect, housed in four cabinets of 18 servers apiece in a 3x6x4 topology. Each server has 24 cores and 64 GBytes of memory, providing a single system image spanning all 1,728 cores and 4.6 TBytes. The system was designed to meet user demand for "very large memory" hardware solutions running a standard single-image Linux OS on commodity x86-based servers.

“We focus on providing our users with flexible computing resources, including capabilities for handling very large data sets like those found in applications for next generation sequencing for life sciences,” says Dr. Ole W. Saastad, Senior Analyst and HPC expert at USIT, the University of Oslo’s central IT resource department. “Our new system with NumaConnect can be used as one single system or partitioned in smaller systems where each partition runs one instance of the OS. With proper Numa-awareness, applications with high bandwidth requirements will be able to utilize the combined bandwidth of all of the memory controllers and still be able to share data with low latency access through the coherent shared memory.

“Eliminating the difficulty of MPI coding for large data problems has increased the productivity of our scientists who are not trained in MPI programming,” Dr. Saastad continued. “Systems with NumaConnect now provide shared memory and MPI capabilities with the same cost structure as a cluster. This alternative represents a compelling solution for scientists who are used to working with their shared memory codes on x86 desktops and laptops, who can now scale up their data sets without any extra effort within a familiar, standard Linux OS environment.”

The PRACE system is a prototype used for theoretical studies and testing by users at both the USIT center and the PRACE partner entities. These include the Greek Research and Technology Network (GRNET), Greece; the Finnish IT Center for Science (CSC), Finland; Forschungszentrum Jülich (FZJ), Germany; the Computation-based Science and Technology Research Centre (CaSToRC) at The Cyprus Institute, Cyprus; and the Poznań Supercomputing and Networking Center (PSNC), Poland.

The single memory image cluster provides both shared memory programming — including threads and OpenMP — and MPI programming options. The scalable system takes advantage of low-cost commodity x86 hardware and NumaConnect to offer significant savings compared to conventional shared memory systems. In addition, system administration is identical to that of a single server, because there are no separate node images to maintain and distribute.

NumaConnect works with AMD Opteron-based servers and provides up to 256 TBytes of system-wide shared memory, using cache coherency logic with a directory-based protocol that scales to 4,096 nodes. The cache coherency logic is implemented in an ASIC together with interconnect fabric circuitry containing routing tables for multi-dimensional torus topologies. This type of fabric is highly scalable, and the same topology is used in many of the world's largest supercomputers.


Visit Numascale in booth 2505 at SC13 to see live product demos.



The mission of the PRACE (Partnership for Advanced Computing in Europe) Research Infrastructure (RI) is to enable high-impact scientific discovery and engineering research and development across all disciplines, enhancing European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world-class computing and data management resources and services through a peer review process. The PRACE Implementation Phase project receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements n° RI-261557 and n° RI-283493.



About Numascale

Numascale, with offices in Europe, Asia, and the USA, develops the groundbreaking NumaConnect interconnect technology, which enables scalable shared memory server systems to be built at cluster prices. NumaConnect allows high-volume manufactured server boards to be used as building blocks for systems with features otherwise found only in high-end enterprise servers. NumaConnect includes full support for virtualization of processing, memory, and I/O resources, and can be used with standard operating systems.

Numascale is supported by: Statoil, ProVenture, Investinor, Innovation Norway, Norges forskningsråd, and Eurostars.


About University of Oslo (UiO) and USIT

The Research Computing Services department (RCS) within USIT (the University Center for Information Technology) is a competence-focused resource for researchers both inside and outside the University of Oslo. The RCS provides HPC infrastructure along with high-level research and technical competence, in order to efficiently translate research problems into solutions with the aid of a wide range of IT tools.


The responsibilities of the RCS include representing UiO in national and international HPC and e-science projects. UiO is a partner in the Notur II project (2005-2014), which provides the Norwegian infrastructure for high-performance computing. Through the RCS, UiO participates in international research and grid activities such as the Worldwide LHC Computing Grid (WLCG), the Nordic distributed Tier-1 site, PRACE, and EGI.



Einar Rustad, CTO, Numascale, +47 92484510


Booth phone at SC13: 1 508 873 3174