Since early 2020, Extremadura has had the LUSITANIA III supercomputer, located in Cáceres. 

LUSITANIA III has been funded with ERDF funds managed by the Regional Ministry of Economy, Science, and Digital Agenda of the Regional Government of Extremadura through the General Secretariat for Science, Technology, Innovation, and Universities.

For COMPUTAEX, this represents an extraordinarily important milestone in the foundation's ten-year history, during which it has provided intensive computing services for projects developed by the scientific community and businesses across all sectors.

Specifically, LUSITANIA III significantly increases the computing resources offered by the COMPUTAEX Foundation, bringing total supercomputing capacity to 93 TFlops plus 120 TFlops of graphics computing over an InfiniBand network of up to 100Gbps. It provides a total of 3,696 cores and 40,960 CUDA cores, available to researchers, innovators, technologists, and any users who need these resources.
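
The headline totals can be reconciled with the node inventory detailed below. The short sketch that follows assumes four AC922 compute nodes in total and the 5,120 CUDA cores of an Nvidia Tesla V100; neither figure is stated explicitly in this section, but both are consistent with the published totals.

    # Recompute the headline totals from the per-node figures listed below.
    # Assumptions (not stated in the text): 4 AC922 compute nodes in total,
    # and 5,120 CUDA cores per Nvidia Tesla V100 GPU.
    nodes = {
        "AC922 compute":      {"count": 4,   "cores": 40, "gpus": 2},
        "RX4770 M2 fat node": {"count": 1,   "cores": 48, "gpus": 0},
        "CX2550 cluster":     {"count": 40,  "cores": 20, "gpus": 0},
        "dx360 M4 cluster":   {"count": 168, "cores": 16, "gpus": 0},
    }
    CUDA_CORES_PER_V100 = 5120

    total_cores = sum(n["count"] * n["cores"] for n in nodes.values())
    total_gpus = sum(n["count"] * n["gpus"] for n in nodes.values())
    print(total_cores)                       # 3696, as stated above
    print(total_gpus * CUDA_CORES_PER_V100)  # 40960, as stated above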

LUSITANIA III features the IBM Power Systems ACP architecture, which stands out for its high storage (IBM Elastic Storage) and computing capabilities, as well as for the software needed to facilitate its use and make the most of the available resources.

It also includes IBM's Watson Machine Learning Community Edition platform, which allows scientists to simulate the behavior of physical and chemical processes as they occur in real life.
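
Watson Machine Learning Community Edition bundles prebuilt deep-learning frameworks such as PyTorch and TensorFlow for the POWER9/V100 platform. As a minimal, illustrative sketch (not a prescribed LUSITANIA III workflow), a user could confirm that a node's Tesla V100 GPUs are visible before launching a run:

    # Illustrative smoke test with PyTorch (shipped with Watson ML CE):
    # list the visible GPUs and run a small computation on one of them.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(i, torch.cuda.get_device_name(i))  # e.g. a Tesla V100
        x = torch.randn(4096, 4096, device="cuda")
        print((x @ x).sum().item())                  # runs on the GPU
    else:
        print("No CUDA devices visible")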

Please visit the following link if you wish to request supercomputing resources and services.

The following are the characteristics of LUSITANIA III:

Computing nodes

  • IBM Power Systems Accelerated Compute Server (AC922) with 2 POWER9 processors with 20 cores each, at 2.4GHz (40 cores per node), 1TB RAM, and 2 Nvidia Tesla V100 GPUs with NVLink SXM2.
  • IBM Power Systems Accelerated Compute Server (AC922) with 2 POWER9 processors with 20 cores each, at 2.4GHz (40 cores per node), 128GB RAM, and 2 Nvidia Tesla V100 GPUs with NVLink SXM2.

High-memory node (fat node)

  • Fujitsu Primergy RX4770 M2 with 4 Intel Xeon E7-4830v3 processors with 12 cores each, at 2.1GHz, 30MB cache (48 cores in total), 1.5TB DDR4 RAM, 4 power supplies, and 300GB SAS disks.

Distributed memory cluster

  • 10 Fujitsu Primergy CX400 chassis with capacity for up to 4 servers each.
  • 40 Fujitsu Primergy CX2550 servers with 2 Intel Xeon E5-2660v3 processors, each with 10 cores, at 2.6GHz (20 cores per node, 800 cores in total) and 25MB cache, with 80GB RAM and 2 x 128GB SSD disks.
  • 168 IBM System x iDataPlex dx360 M4 servers with 2 Intel E5-2670 SandyBridge-EP processors, each with 8 cores, at 2.6GHz (16 cores per node, 2688 cores in total), 20MB cache and 32GB RAM.
  • 2 IBM iDPx racks with RDHX (water cooling), each with capacity to house 84 servers.

Hyperconverged cloud computing cluster

  • HX-5522 nodes with 2 Intel Xeon Gold 5220 (Cascade Lake) processors with 18 cores each at 2.2GHz. Each node has 512GB of RAM, 2 x 128GB M.2 SSDs, 2 x 1.92TB SSDs, and 4 x 8TB HDDs. In total, the cluster has 96TB of HDD, 11.52TB of SSD, and 768GB of M.2 SSD storage. In addition, these nodes have a 25GbE network interconnection.
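
The cluster-wide totals are consistent with three HX-5522 nodes; note that this node count is inferred from the arithmetic rather than stated above.

    # Reconcile per-node storage with the stated cluster-wide totals.
    # Assumption (inferred, not stated): the cluster has 3 HX-5522 nodes.
    NODES = 3
    hdd_tb = NODES * 4 * 8        # 4 x 8TB HDDs per node     -> 96 TB
    ssd_tb = NODES * 2 * 1.92     # 2 x 1.92TB SSDs per node  -> 11.52 TB
    m2_gb  = NODES * 2 * 128      # 2 x 128GB M.2 SSDs per node -> 768 GB
    print(hdd_tb, ssd_tb, m2_gb)  # 96 11.52 768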

Service nodes

  • IBM Power Systems Accelerated Compute Server (AC922) with 2 POWER9 processors with 16 cores each, at 2.7GHz (32 cores per node) and 128GB RAM.
  • Fujitsu Primergy RX2530 M1 servers, each with 2 Intel Xeon E5-2620v3 processors (6 cores at 2.4GHz and 15 MB cache); 32GB DDR4 RAM, 2 x 300GB SAS disks.
  • IBM System x x3550 M4 server with 1 Intel SandyBridge-EP processor (8 cores at 2.6GHz and 20MB cache); 16GB RAM, 2 x 300GB SAS disks.

Development nodes

  • Fujitsu Primergy RX2530 M1 servers with 2 Intel Xeon E5-2620v3 processors (6 cores at 2.4GHz and 15 MB cache); 64GB DDR4 RAM, 2 x 300GB SAS disks.

Storage

  • Elastic Storage Server GL1S with a storage capacity of 656TB RAW:
    • 1 enclosure with 84 slots: 82 x 8TB Enterprise HDD and 2 x 800GB SSD.
    • 2 Data Servers:
      • 2 x 10-core 3.42 GHz POWER8 Processor Card and 256GB RAM each.
    • 1 ESS Management Server:
      • 10-core 3.42 GHz POWER8 Processor Card and 64GB RAM.
    • IBM Spectrum Scale licensing.
  • Metadata Target (MDT) Eternus DX 200S3 disk array (15 x 900GB SAS disks) = 12 TB.
  • Fujitsu Primergy RX2530 M1 servers with 2 Intel Xeon E5-2620v3 processors (6 cores at 2.4GHz and 15 MB cache); 64GB DDR4 RAM and 2 x 300GB SAS disks for metadata management with Lustre.
  • Object Storage Target (OST) Eternus DX 200 disk array (41 x 2TB NL-SAS disks and 31 x 4TB NL-SAS disks) = 206 TB.
  • Fujitsu Primergy RX2530 M1 servers with 2 Intel Xeon E5-2620v3 processors (6 cores at 2.4GHz and 15 MB cache); 64GB DDR4 RAM, 2 x 300GB SAS disks for object management with Lustre.
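
The stated raw capacities follow directly from the disk counts above; a quick arithmetic check, using only the figures listed (the 12 TB quoted for the MDT appears to be usable rather than raw capacity):

    # Raw-capacity check for the storage systems listed above.
    ess_tb = 82 * 8                # GL1S enclosure HDDs: 656 TB raw
    ost_tb = 41 * 2 + 31 * 4       # Eternus DX 200 OST: 206 TB
    mdt_tb = 15 * 0.9              # Eternus DX 200S3 MDT: 13.5 TB raw
    print(ess_tb, ost_tb, mdt_tb)  # 656 206 13.5; the 12 TB quoted for
                                   # the MDT is presumably usable capacity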

Network topology

The supercomputer's connectivity with the outside world is provided by a connection of up to 10Gbps with the Extremadura Scientific and Technological Network, which connects the region's main cities and technology centers. It is also interconnected with RedIRIS and the European GÉANT network.

Internally, the service and computing infrastructure is structured around:

  • 1 Mellanox EDR TOR InfiniBand switch with 36 ports at 100 Gb/s (8828 Model G36).
  • 1 IBM Ethernet switch with 48 x 1Gb and 4 x 10Gb ports (8831 Model S52).
  • Fortinet Fortigate 1000C firewalls as a perimeter security system, firewall capability, VPN, antivirus, intrusion detection, and bandwidth management per connection, configured as a high-performance active-passive redundant cluster with high processing capacity.
  • 14 Mellanox SX6036 InfiniBand switches with 36 FDR 56Gbps ports for computing networks.
  • BNT G8052F switches with 48 ports and one BNT G8000 switch with 48 ports.
  • Brocade ICX6430 switches with 48 ports and one Brocade ICX6430 switch with 24 ports for communication and management networks.
  • Mellanox IS5030 InfiniBand switches with 36 QDR 40Gbps ports for computing networks.