Some members of the Association manage High Performance Computing (HPC) and High Throughput Computing (HTC) infrastructures at the national and international levels.
CINECA is the national supercomputing facility.
At the time of their installation, most CINECA supercomputer systems were ranked between the 7th and 12th positions of the TOP500 list, enabling and supporting national and European excellence in science and technology innovation. The high-end tier 0 systems have been continuously integrated into a complex environment that complements them with a high-quality-of-service tier 1 system, a large-scale data repository, and, from time to time, innovative prototypes. These prototypes keep pace with the cycles of innovation and serve to develop and test new architectures, with the objective of increasing the effectiveness and efficiency of the next-cycle highest-performing computing system.
The current computing facility hosted by CINECA integrates a tier 0 high-end computing system with a peak performance on the order of 20 petaflops, ranked number 12 in the TOP500 at the time of its installation; a tier 1 system for quality of service; an HPC cloud system; and a prototype production system for artificial intelligence and machine learning applications. In all, the facility comprises more than 9,000 server nodes. A large-scale data repository, with a full capacity in excess of 50 petabytes, completes the computing architecture.
The CINECA HPC facility enables a wide range of scientific research through open access granted by independent, international peer-review processes, by means of the PRACE association at the European level and ISCRA at the national level.
CINECA provides services to the worldwide EUROfusion community, being the contractor of the European tender for that specific service, which will run until the end of 2023, and also provides an operational computing service for weather forecasting for the National Civil Protection under the supervision of the Emilia-Romagna ARPA-SMR. CINECA is part of the European digital infrastructure for many ESFRI research infrastructure facilities and initiatives, among others EPOS (European Plate Observing System, led by INGV), ELIXIR (Infrastructure for Life Science), and HBP (the European Human Brain Project Flagship). Also with reference to those RI initiatives (but not only those), CINECA is a core partner of the European Open Science Cloud Hub infrastructure and the European Data Infrastructure.
At the national level, many joint development partnership agreements are in force with INFN, ENEA, INAF, SISSA and ICTP, and many R&D collaboration actions and agreements are in force with qualified national research institutes, universities, and public administrations. CINECA received recognition as a ‘Golden Digital Hub for Innovation’ from the Big Data Value Association, acts as a Competence Centre supporting innovation in industry, and has led many proofs of concept in collaboration with industries and private organisations. CINECA has entered formal partnerships for added-value services and R&D activities with ENI, and manages and operates the ENI corporate supercomputing facility, one of the world’s largest industrial supercomputing infrastructures.
The HTC facility is hosted at CNAF in Bologna, one of the INFN National Centres defined in the INFN charter. CNAF has been charged with the primary task of setting up and running the Tier 1 data centre for the Large Hadron Collider (LHC) experiments at CERN in Geneva. Nowadays it hosts computing not only for the LHC but also for many other experiments, ranging from high-energy physics to astroparticle physics and dark matter searches in underground laboratories. CNAF was also a primary contributor to the development of grid middleware and to the operation of the Italian grid infrastructure. The facility operates within the framework of a national INFN HTC infrastructure consisting of the CNAF Tier 1 and ten smaller facilities, called Tier 2 centres, distributed across Italy.
The CNAF data centre operates about 1,000 computing servers providing roughly 30,000 computing cores, allowing the concurrent execution of an equivalent number of general-purpose processing tasks. All computing resources are centrally managed by a single batch system (LSF by IBM) and dynamically allocated through a fair-share mechanism, which allows the available CPU capacity to be exploited with an efficiency of about 95%. Part of the CNAF computing power is currently hosted at CINECA and connected to the main site via dedicated high-speed network links traversing the city of Bologna, allowing the remote computing nodes to access the mass storage located at CNAF with maximum throughput and minimal latency, as if they were local.
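As an illustration of how such an LSF-managed cluster is used, a batch job is typically described by a submission script containing #BSUB directives; the fair-share scheduler then weighs the request against the submitting group's recent usage before dispatching it. This is only a generic sketch: the queue, job and executable names below are hypothetical, not CNAF's actual configuration.

```shell
#!/bin/bash
# Hypothetical LSF job script (illustrative only; not CNAF's real setup).
#BSUB -J analysis_task        # job name (hypothetical)
#BSUB -q long                 # batch queue (hypothetical)
#BSUB -n 8                    # number of cores requested
#BSUB -R "span[hosts=1]"      # place all requested cores on a single node
#BSUB -o %J.out               # stdout file; %J expands to the LSF job ID
#BSUB -e %J.err               # stderr file

# The payload runs once the fair-share scheduler assigns free slots.
./run_analysis --threads 8    # hypothetical experiment executable
```

Such a script would be submitted with `bsub < job.lsf`; the fair-share policy ensures that no single experiment monopolises the shared cores over time.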
CNAF operates a very large storage infrastructure based on industry standards, both for connectivity (about 100 disk servers and disk enclosures interconnected through a dedicated Storage Area Network) and for data access (data are hosted on parallel file systems, GPFS by IBM, typically one per major experiment). This solution allows the implementation of a completely redundant data-access system. In total, CNAF hosts about 40 PB of online disk space, with a total I/O bandwidth of about 1.5 Tb/s, and 90 PB of nearline tape space arranged in a robotic library served by 20 enterprise-level tape drives.
The ten additional Tier 2 centres distributed across Italy provide a similar aggregate amount of resources, both in terms of computing and disk storage, but no tape. They are connected to the CNAF Tier 1 through dedicated high-speed links, and their resources work in a coordinated way within the global INFN computing ecosystem.
The user community of the CNAF Tier 1 facility is primarily composed of research groups from INFN and most Italian universities (including the Universities of Bologna, Ferrara and Parma), working on nuclear, subnuclear, astroparticle and theoretical physics. In the coming years, the HTC computing resources of INFN CNAF will be increased by at least an order of magnitude to meet the computing and storage requirements of forthcoming experiments and upgrades.