Including e-Research Costs on Grants

Introduction to e-Research Costing

This page provides the information required to build a budget for e-Research staff time and resources (compute and storage) when writing grant applications. It should be sufficient for you to cost resources into a grant application independently, but if anything is unclear please do get in touch to discuss.

While this document is primarily aimed at grant writers, the same approach can be used by departments or facilities wishing to use internal budgets to secure more e-Research resources.

For infrastructure, when the funding is available with an associated activity code you should contact us to ask for the purchase to be executed. At this point the charge will be reflected as an internal budget transfer between your activity code and e-Research's. Each transfer of funds will have a clear identifier tying it back to your ticket to facilitate ease of audit.

All of our infrastructure charges are calculated using the standard King's facility costing process, which follows the TRAC methodology. These facilities are TRAC listed as of March 2024. You will see that some charges vary depending on the source of funds; this is because different funders contribute to different costs. In these cases we label costs UKRI (EPSRC, BBSRC, MRC, NIHR, etc.), Charity (Wellcome, CRUK, etc.) or EC (Horizon).

Research Software Engineering

If you would like to work with the e-Research Research Software Engineering group on one of your projects, please get in touch. We will meet with you to discuss the project, your proposed timeline and will let you know our availability.

Where a bid is being prepared for a new project, once provisional scheduling has been agreed, our Research Software Engineers (RSEs) can be included on bids in much the same way as research staff while assembling the costing on Worktribe. Depending on project requirements and staff availability, RSEs will be included at either professional services grade 6 or grade 7. This will usually be an existing, named member of staff, but for some larger projects it may be appropriate for us to recruit new staff. We are also able to offer assistance with bid writing for projects with a substantial software component.

For work on existing grants, for example where surplus staff budget has become available, there are several ways in which we may be able to reallocate staff costs. Please get in touch and we will coordinate with finance and post-award teams to select an appropriate mechanism.

For large projects which require leadership of a substantial research software component (e.g. Hub bids), we may be able to join you as a co-investigator. These projects represent a substantial commitment for us, so must be discussed well in advance of the funder deadline.

Storage

| Storage Type | Free tier | Annual cost | Description | Further information |
| --- | --- | --- | --- | --- |
| Research Data Storage (RDS) | 5TB | £50/TB | 2 copies in independent sites, nightly back up to tape, general purpose research data storage | RDS docs |
| High Performance Computing (HPC) scratch | 1TB | £50/TB | Large data processing area mounted on HPC compute nodes, not backed up | CREATE HPC docs |
| OpenStack HDD volume storage | 40GB | £50/TB | see CREATE Cloud Storage below | |
| OpenStack SSD volume storage | 0GB | £0.47/GB (UKRI/EC), £0.37/GB (charity) | see CREATE Cloud Storage below | |
  • Charges can be paid per annum or for multiple years upfront; where possible, paying for multiple years upfront is preferable
  • Allocations are generally made per research project, where projects are typically associated with a PI and funding body
  • A small amount of metadata will be requested as part of the project's registration process
  • Where the cost of storage types is the same (e.g. RDS and HPC scratch), quotas are transferable between the systems for the duration funded
  • e-Research storage capacity is typically available at the point of request; requests for larger amounts of storage (e.g. > 100TB) may require system expansion prior to fulfilment
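As a worked illustration of the charging model above, the sketch below estimates an annual storage budget line from the free tier and per-TB rate. The function and its defaults are illustrative (using the RDS figures of a 5TB free tier and £50/TB/annum); substitute the relevant row of the table for other storage types.

```python
def storage_cost(total_tb, free_tier_tb=5, rate_per_tb=50, years=1):
    """Cost of storage beyond the free tier, in pounds.

    Defaults are the RDS figures from the table above
    (5 TB free, £50/TB/annum); only capacity above the
    free tier is chargeable.
    """
    chargeable_tb = max(0, total_tb - free_tier_tb)
    return chargeable_tb * rate_per_tb * years

# e.g. 20 TB of RDS for a 3-year project:
# (20 - 5) TB * £50/TB/annum * 3 years = £2,250
print(storage_cost(20, years=3))
```

A request entirely within the free tier (e.g. 4 TB of RDS) costs nothing under this model.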

CREATE Cloud Storage

Storage for funded research projects using CREATE Cloud should be costed using:

  • The RDS rate: for project data storage
  • The SSD volume storage rate: for virtual machine (VM) root disks (i.e. where the operating system is installed)

Using RDS mounted to the VMs via SMB to store research project data will ensure that data is backed up and recoverable in the event of a disaster.

Some applications may require SSD volume storage in addition to that required for the VM root disks (e.g. SQL databases). Please get in touch if you wish to discuss your requirement in more detail.

Compute

There are currently three options for costing compute capacity into your research grants. You can purchase one of:

  1. Cloud quota
  2. GPU day rental
  3. Dedicated servers

Option 1 is more suitable for smaller requirements such as hosting for a typical web application or database. Option 2 is a way to get guaranteed access to a specific GPU or GPUs for a fixed number of days, which is particularly suitable for developing inference applications in CREATE Cloud. Option 3 is best suited to larger computational workloads, e.g. those requiring many CPU cores or GPUs for the entire duration of the research project.

Cloud Quota

Using the following rates you can cost CREATE Cloud (OpenStack) quota for your projects. These rates are generated using the standard King's Facility Costing Return to calculate a unit rate that is funder compliant. A single unit of CREATE Cloud quota translates to 1 vCPU and 2 GB of memory per annum. It is not possible to convert vCPU quota into memory or vice-versa. To calculate the required quota for your project, take the larger of (a) the total number of vCPUs and (b) half the total GB of memory, and purchase that many units of quota for each year of the project.

| Funder | Rate (per unit, per annum) |
| --- | --- |
| UKRI and EC | £69.30 |
| Charity (e.g. Wellcome, CRUK) | £60.53 |
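The unit calculation described above can be sketched as follows. This is a minimal illustration using the per-unit rates from the table; the function name and structure are our own, not part of any e-Research tooling.

```python
import math

# £ per unit per annum, from the rate table above
RATES = {"ukri_ec": 69.30, "charity": 60.53}

def cloud_quota_cost(vcpus, memory_gb, years, funder="ukri_ec"):
    """Units of CREATE Cloud quota and total cost for a project.

    One unit = 1 vCPU + 2 GB memory per annum, so the requirement
    is the larger of the vCPU count and half the memory in GB.
    """
    units = max(vcpus, math.ceil(memory_gb / 2))
    return units, round(units * RATES[funder] * years, 2)

# e.g. 8 vCPUs and 32 GB RAM for 3 years on a UKRI grant:
# max(8, 32/2) = 16 units -> 16 * £69.30 * 3 = £3,326.40
print(cloud_quota_cost(8, 32, 3))
```

Note that a memory-heavy workload (32 GB here) drives the unit count even though only 8 vCPUs are needed, because quota cannot be converted between vCPU and memory.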

CREATE Cloud Storage

Cloud Quota costs are exclusive of any storage requirements. See CREATE Cloud Storage above for details.

GPU day rental

GPU rental is under development

While we now have the costs for GPU day rental to include in grant budgets, the
mechanisms to allow GPUs to be dynamically dedicated to projects in either
CREATE Cloud or HPC are under development. In the meantime, if you need this
service please do include the costs in your funding applications; we are
able to manually dedicate GPU resources while this service matures.

| GPU | vCPU cores | VM memory | Day rate |
| --- | --- | --- | --- |
| A30 | 12 | 120GB | £14.72 |
| A40 | 12 | 60GB | £16.30 |
| A100 | 12 | 120GB | £39.27 |
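A GPU rental budget line is simply the day rate multiplied by the number of GPUs and days. The short sketch below, using the day rates from the table above, is illustrative only.

```python
# £ per GPU per day, from the rental table above
DAY_RATES = {"A30": 14.72, "A40": 16.30, "A100": 39.27}

def gpu_rental_cost(gpu_model, num_gpus, days):
    """Budget line for renting GPUs of one model for a number of days."""
    return round(DAY_RATES[gpu_model] * num_gpus * days, 2)

# e.g. 2x A100 for 30 days: 2 * 30 * £39.27 = £2,356.20
print(gpu_rental_cost("A100", 2, 30))
```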

Dedicated Servers

To select the appropriate server(s) for your project please review our Server Price List which is updated on a quarterly basis by our preferred supplier. If you are unsure of which server(s) fit your requirement or want something that isn't listed please get in touch to discuss.

In addition to the cost of the server and any warranty extension, you should include a research facility access charge for CREATE Server Hosting of £2,808.67 in the DI costs on your grant; this covers hosting, maintenance, monitoring and power for the lifetime of the server.

As with any other capital equipment cost, if submitting a grant application which covers less than 100% FEC, you will need to request the remaining share from the College Equipment Fund (CEF). Depending on the funder, it is common for grants to cover only 50% or 80% of the total equipment cost; notable exceptions are specific infrastructure calls which may cover 100% FEC for equipment. There are three pathways to request this contribution, depending on the amount being requested:

  • For contributions below £50k, use form CEF001
  • For contributions above £50k, but below £138k, use form CEF002
  • For contributions above £138k, contact corefacilities@kcl.ac.uk for discussion
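The pathway is determined by the size of the CEF contribution, i.e. the share of the equipment cost the funder does not cover. The sketch below illustrates the decision using the thresholds in the bullets above; the function itself is our own illustration, not part of the CEF process.

```python
def cef_request(equipment_cost, fec_fraction):
    """CEF contribution and pathway for a piece of equipment.

    `fec_fraction` is the fraction of the equipment cost the funder
    covers (e.g. 0.80 for an 80% FEC grant); CEF is asked for the rest.
    Thresholds follow the bullets above.
    """
    contribution = equipment_cost * (1 - fec_fraction)
    if contribution < 50_000:
        form = "CEF001"
    elif contribution < 138_000:
        form = "CEF002"
    else:
        form = "contact corefacilities@kcl.ac.uk"
    return round(contribution, 2), form

# e.g. a £100k server on an 80% FEC grant leaves a £20k shortfall -> CEF001
print(cef_request(100_000, 0.80))
```

For instance, the same 80% FEC grant buying £500k of equipment would leave a £100k shortfall and therefore require form CEF002.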

In the expandable sections below, we provide some standard responses which you may use in your CEF001 and CEF002 forms for server purchases. These responses describe how e-Research manages compute infrastructure on behalf of research projects and our policies aimed at ensuring maximum efficient usage. In addition to our standard text, you should describe how your project intends to use the server. For CEF002, you should also explain why this resource is important for your research and any additional benefit the resource would bring to your school or faculty.

Once you have completed your CEF form, it should be emailed to corefacilities@kcl.ac.uk two weeks in advance of the application submission deadline. After your request is approved, you will receive a signed Letter of Support which you should upload to the Documents tab within your Worktribe project alongside the completed CEF001 or CEF002 form.

Standard Response Text for CEF001

Support

The server will be added by the e-Research Scientific Computing Infrastructure team as a reserved node in KCL's existing cluster (CREATE). The infrastructure, processes and staff required to maintain this cluster are well established, meaning that the marginal overhead is minimal.

The design of CREATE allows unused capacity on reserved nodes to be made available for low-priority compute tasks from other researchers, so we expect to be able to maintain high utilisation rates even during periods where our project may not have immediate need of the server.

Space

The server will be hosted as part of the existing CREATE cluster managed by e-Research in one of the KCL datacenters (e.g. Virtus London4 or London7). These datacenters provide us with sufficient space, power, networking and physical security to continue to expand. No substantial changes are required and e-Research staff are experienced in managing infrastructure of this kind.

Data and Compute

This server forms part of the compute capability available to both our project and to KCL as a whole. As part of KCL’s established cluster, it will be connected via high-speed network to our existing data storage platforms and will be able to host services accessible securely via the public internet, internal services, or provide High Performance Computing capacity as needed.

Standard Response Text for CEF002

Strategic Case

The expected minimum useful lifetime of the equipment is five years as the KCL compute cluster continues to grow and older machines are gradually replaced. However, we may be able to keep equipment running longer than this if demand is high.

Management of the Equipment

The server will be added by the e-Research Scientific Computing Infrastructure team as a reserved node in KCL's existing cluster (CREATE). The infrastructure, processes and staff required to maintain this cluster are well established, meaning that the marginal overhead is minimal.

The design of CREATE allows unused capacity on reserved nodes to be made available for low-priority compute tasks from other researchers, so we expect to be able to maintain high utilisation rates even during periods where our project may not have immediate need of the server.

Underpinning Research and Environment

During times where some capacity remains available, we have the ability to contribute this back to the pool of resources available to all researchers at KCL.

Space

The server will be hosted as part of the existing CREATE cluster managed by e-Research in one of the KCL datacenters (e.g. Virtus London4 or London7). These datacenters provide us with sufficient space, power, networking and physical security to continue to expand. No substantial changes are required and e-Research staff are experienced in managing infrastructure of this kind.

Data and Compute

This server forms part of the compute capability available to both our project and to KCL as a whole. As part of KCL’s established cluster, it will be connected via high-speed network to our existing data storage platforms and will be able to host services accessible securely via the public internet, internal services, or provide High Performance Computing capacity as needed.

More information about the CEF process, including the CEF001 and CEF002 templates, is available on the Research Facilities SharePoint site.

Once purchased, servers will be hosted in the CREATE ecosystem in one of the following configurations:

| Configuration | Description | Use case |
| --- | --- | --- |
| HPC node | Installed in the CREATE HPC cluster within a private Slurm partition, available to other users when idle | Multi-user CPU or GPU based batch processing |
| Hypervisor | Installed in the CREATE Cloud as an OpenStack hypervisor | Long-lived, multiple virtual machine based deployments |
| Stand-alone | Installed within the CREATE Cloud as a stand-alone server | When Slurm and/or OpenStack are not suitable |

Include volume storage costs for hypervisors

Unless strictly required by the research use case, we prefer to deploy servers
as HPC compute nodes or OpenStack hypervisors. It is therefore important for
those ordering dedicated servers that will be deployed as hypervisors to
include volume storage costs (typically SSD) alongside their purchase.

GPU lead times

Nvidia H100 servers currently take months to arrive; we highly recommend
placing orders as soon as funding is secured.