Resources

Currently, Science-IT operates the Triton cluster, a versatile system with:

  • ~5200 CPU cores
  • 27 servers with 2-8 GPU cards each, including P100 and V100 accelerators
  • 2 servers with 1 TB of RAM for memory-intensive research
  • Fast InfiniBand network
  • Specialized servers for Hadoop
  • 2 PB of fast Lustre storage over InfiniBand

Triton is connected to a DDN SFA12k Lustre storage filesystem with ~2 PB of capacity available to end users.  Our storage is also cross-mounted on certain department workstations for a seamless research experience.

Detailed information about the Triton cluster, including a full overview and usage guide, is available in the User Guide.

Usage model and joining

Science-IT operates on a community stakeholder model and is administered by the School of Science.  Users from member units of the community are allocated resources using a fair-share algorithm that guarantees each unit a level of resources at least proportional to its stake, without individual users having to go through separate application processes or billing.
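The exact scheduler configuration is not described here; as a minimal sketch of how a fair-share mechanism ties a funded stake to scheduling priority, the following assumes a Slurm-style fair-share factor, F = 2^(-usage/share), where a unit that has consumed less than its stake gets a higher factor.  The unit names, stakes, and usage figures are purely hypothetical.

    # Illustrative sketch only: Triton's actual scheduler settings are not
    # specified in this text. Assumes a Slurm-style fair-share factor,
    # F = 2 ** (-(normalized_usage / normalized_share)).

    def fairshare_factor(normalized_usage: float, normalized_share: float) -> float:
        """Return a fair-share factor in (0, 1]; higher means higher priority."""
        if normalized_share <= 0:
            return 0.0
        return 2 ** (-(normalized_usage / normalized_share))

    # Hypothetical stakes: fraction of the cluster funded by each unit.
    stakes = {"unit_a": 0.40, "unit_b": 0.10}
    # Hypothetical recent usage: fraction of total delivered compute time.
    usage = {"unit_a": 0.50, "unit_b": 0.05}

    for unit, share in stakes.items():
        f = fairshare_factor(usage[unit], share)
        print(f"{unit}: share={share:.2f} usage={usage[unit]:.2f} factor={f:.2f}")
    # unit_a has overused its stake -> lower factor; unit_b has underused -> higher.

In this toy example unit_a, having used more than its funded share, gets a lower priority factor (~0.42) than unit_b (~0.71), so unit_b's pending jobs tend to start sooner until usage balances out against the stakes.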

Each participating department or unit funds a fraction of the costs and receives an agreed share of the resources; these agreements are negotiated with the board of the Science-IT project.  Based on the agreed share, units cover the project's running expenses.  There is also direct Aalto funding, which gives the entire Aalto community access to a share of Triton for free.

However, computing is not just hardware: support and training are just as critical.  To provide support, each unit that is a full member of Science-IT is required to nominate a local support contact as the first point of contact.  Our staff tries to provide scientific computing support to units without a support contact on a best-effort basis (currently, that effort is good), but we must assume a basic level of knowledge and attendance at our training courses.

Interested parties may open discussions with Science-IT at any time.  Using our standing procurement contracts, parties can order hardware to be integrated into our cluster (or used standalone), taking advantage of our extensive software stack and management expertise.  Varying levels of dedicated access are available: a share of total compute time, partitions with priority access, private interactive nodes, and so on.  Please contact us for details.