Galaxy formation re-simulation
DiRAC is the integrated supercomputing facility for theoretical modelling and HPC-based research in particle physics, nuclear physics, astronomy and cosmology, areas in which the UK is world-leading. It was funded through an investment of £12.32 million from the Government's Large Facilities Capital Fund, together with investment from STFC and from universities. In 2012, the DiRAC facility was upgraded with a further £15 million capital investment from government (DiRAC II).
HPC-based modelling remains an essential tool for the exploitation of observational and experimental facilities in astronomy and particle physics. The investment in new hardware has provided UK particle physicists and astronomers with upgraded HPC technology to address some of the most challenging scientific problems and to test theories and run simulations from the data gathered in experiments.
The DiRAC facility provides a variety of computer architectures, matching machine architecture to the algorithm design and requirements of the research problems to be solved. The science facilitated spans particle physics, nuclear physics, astronomy and cosmology.
The continued pooling of complementary expertise within DiRAC ensures that the UK remains one of the world-leaders of theoretical modelling in particle physics, astronomy and cosmology.
DiRAC is both an academic-led and academically supervised Facility, with an active Project Management Board and Technical Working Group that ensure the science goals of the community are met by the most appropriate technical and algorithmic solutions. DiRAC is managed as a single Facility, with the DiRAC II funding providing five installations (see below) and the DiRAC I funding awarded to 8 consortia.
There are five installations:
Cambridge HPCS Service: Data Analytic Cluster - 10,000 cores, 1 PB parallel file store, high-performance I/O and interconnect, non-blocking switch architecture, 4 GB RAM per core. Further information can be obtained by emailing support.
Cambridge COSMOS Shared-Memory Service - 1,856 cores, 14.8 TB globally shared memory (8 GB RAM per core), 146 TB high-performance scratch storage (~5 GB/s sequential read/write), Intel Xeon Phi co-processor capability (coming in Q4 2012). Further information can be obtained by emailing cosmos help.
Leicester IT Services: Complexity Cluster - 4,352 cores, 0.8 PB parallel file store, high-performance I/O and interconnect, non-blocking switch architecture, 8 GB RAM per core. Further information can be obtained by emailing Leicester support.
Durham ICC Service: Data Centric Cluster - 6,500 cores, 2 PB parallel file store, high-performance I/O and interconnect, 2:1 blocking switch architecture, 8 GB RAM per core. Further information can be obtained by emailing cosma support.
Edinburgh 6,144-node BlueGene/Q - 65,000 cores, 5D torus interconnect, high-performance I/O and interconnect. Further information can be obtained by emailing DiRAC support.
For general enquiries about DiRAC II, please e-mail DiRAC support.
Both academic and non-academic users are welcome to use the DiRAC facilities.
Non-academic users should see the information on DiRAC e-Infrastructure Service for Industry and Public Sector.
Academic users can gain access by applying to the DiRAC Resource Allocation Committee (RAC). The RAC is responsible to the STFC Executive and provides reports to the STFC DiRAC Oversight Committee; its Terms of Reference and membership are provided for reference. The RAC holds two calls a year for projects to gain time on the DiRAC facility. The closing dates are in early March (short projects only) and early September (all projects), for allocations starting on 1 July and 1 January respectively. The call guidelines and application form can be found below. In addition, it is possible to apply for small amounts of time (up to 50,000 core hours) at any time via the seedcorn and discretionary routes; for details, please see the call guidelines.
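When sizing a seedcorn or discretionary request against the 50,000 core-hour ceiling, the usual accounting is cores multiplied by wall-clock hours. The following is a minimal sketch of that arithmetic; the helper name and all job parameters are illustrative assumptions, not part of any DiRAC application form.

```python
# Hypothetical helper: estimate total core-hours for a planned job campaign,
# to check a request against the 50,000 core-hour seedcorn limit mentioned
# in the call guidelines. All figures below are illustrative.

def core_hours(nodes, cores_per_node, walltime_hours, runs=1):
    """Total core-hours = nodes x cores-per-node x walltime x number of runs."""
    return nodes * cores_per_node * walltime_hours * runs

# Example: 4 nodes of 16 cores each, 24-hour runs, repeated 8 times.
total = core_hours(nodes=4, cores_per_node=16, walltime_hours=24, runs=8)
print(total)            # 12288
print(total <= 50_000)  # True: fits within the seedcorn ceiling
```

In practice, requested time should also include a margin for failed or repeated runs, which simply scales the `runs` factor.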