Technologies
Our high-performance computing and data analytics platform (NEMO) is available to all Crick researchers.
Hardware:
- High-performance research data storage with full offsite backup and long-term archive services. Capacity stands at 22PB as of January 2025 and continues to grow in line with new data requirements.
- CPU Compute: 12,500 physical compute cores across standard nodes with 2TB RAM and high-memory nodes with 4TB RAM.
- GPU Compute:
  - Standard: 160 A100 (80GB SXM4) GPUs across 40 nodes.
  - Highest performance: 64 H100 (80GB SXM5) GPUs across 16 nodes.
  - Structural biology: 64 L40 GPUs across 16 nodes.
- Networking: a high-performance, low-latency InfiniBand network connects compute and storage nodes at speeds of up to 400Gbps.
Services:
- Slurm scheduler-based batch computing using CPU and GPU.
- Interactive nodes for CPU compute, and visualisation nodes for interactive GPU work.
- OnDemand – a web-based GUI front end for those who prefer not to use SSH or the command line. Includes MATLAB, Jupyter Notebook, RStudio Server, and image analysis tools, as well as a Linux-based general desktop.
- File transfer, including Globus, SFTP, and transfers to/from the major cloud providers.
- Support and provision of training.
- We also have a GPU VM service for edge cases not covered by other services, such as Windows-based software.
Together, these systems provide the high-performance computing capability essential for capturing, storing and analysing scientific data.
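Batch jobs on the Slurm scheduler are typically submitted as a shell script with resource directives at the top. The sketch below is illustrative only: the job name, resource values and the program being run are placeholders, not NEMO defaults.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch. All names and resource
# values here are illustrative placeholders, not NEMO settings.
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=02:00:00
#SBATCH --gres=gpu:1          # request one GPU; omit for CPU-only jobs
#SBATCH --output=%x-%j.out    # log file named after job name and job ID

srun ./my_analysis --input data/sample.h5   # hypothetical program
```

A script like this would be submitted with `sbatch job.sh`, and `squeue -u $USER` shows its queue status.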
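For file transfer, both SFTP and the Globus CLI follow familiar command-line patterns. In this sketch the hostname, endpoint IDs and paths are all hypothetical placeholders:

```shell
# Illustrative transfer commands only; hostname, endpoint IDs
# and paths below are placeholders, not real NEMO addresses.

# SFTP: copy a results archive to remote storage.
sftp user@hpc.example.org <<'EOF'
put results.tar.gz /remote/project/
EOF

# Globus CLI: transfer a file between two registered endpoints.
globus transfer SRC_ENDPOINT_ID:/data/results.tar.gz \
                DST_ENDPOINT_ID:/archive/results.tar.gz
```

Globus is generally preferred for large or long-running transfers, since it retries and verifies checksums automatically.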