Including e-Research Costs on Grants

Introduction to e-Research Costing

This page provides the information required to build a budget for e-Research staff time and resources (compute and storage) when writing grant applications. It should be sufficient for you to complete grant applications for resources independently, but if anything is unclear please do get in touch to discuss.

While this document is aimed primarily at grant writers, the same approach can be used by departments or facilities wishing to use internal budgets to secure more e-Research resources. Internal KCL budgets are charged at the Charity rate.

For infrastructure, once the funding is available with an associated activity code, you should contact us to ask for the purchase to be executed. The charge will then be reflected as an internal budget transfer between your activity code and e-Research's.

All of our infrastructure charges are calculated using the standard King's facility costing process, which follows the TRAC methodology. These facilities are no longer TRAC listed as of February 2026. Recharge rates vary depending on the source of funds, because different funders contribute to different costs. We label these costs UKRI (EPSRC, BBSRC, MRC, NIHR, etc.), Charity/Internal (Wellcome, CRUK, etc.) and EC (Horizon). Please get in touch for costing industry-funded projects.

e-Research Rate Card

Validity dates

For grant funding you should use rates applicable at the time your funding application was submitted.

VAT

If your project requires invoicing services externally because the budget is held elsewhere (an industry partner, another university, an NHS Trust, etc.), VAT is applied at 20%.

| Service | Validity dates | Unit | UKRI | Charity/Internal | EC |
|---|---|---|---|---|---|
| RDS | Feb 26 - Jan 27 | TB/annum | £16.36 | £60.00 | £60.00 |
| CREATE HDD | Feb 26 - Jan 27 | TB/annum | £36.28 | £60.00 | £60.00 |
| CREATE SSD | Feb 26 - Jan 27 | GB/annum | £0.08 | £0.17 | £0.26 |
| CREATE Cloud | Feb 26 - Jan 27 | 1 vCPU + 2GB mem / annum | £40.62 | £103.57 | £120.32 |
| CREATE server hosting | Feb 26 - Jan 27 | per server hosted | £2,194.50 | £3,275.30 | £3,275.30 |
| Tier 1 GPU | Feb 26 - Jan 27 | GPU hour | £0.9847 | £0.4189 | £0.9847 |
| Tier 2 GPU | Feb 26 - Jan 27 | GPU hour | £0.4374 | £0.1955 | £0.4374 |
| Tier 3 GPU | Feb 26 - Jan 27 | GPU hour | £0.2420 | £0.1204 | £0.2420 |

Previous years' rates

| Service | Validity dates | Unit | UKRI/EC | Charity/Internal |
|---|---|---|---|---|
| RDS | Feb 25 - Jan 26 | TB/annum | £60 | £60 |
| CREATE HDD | Feb 25 - Jan 26 | TB/annum | £60 | £60 |
| CREATE SSD | Feb 25 - Jan 26 | GB/annum | £0.64 | £0.55 |
| CREATE Cloud | Feb 25 - Jan 26 | 1 vCPU + 2GB mem / annum | £90.30 | £81.51 |
| CREATE server hosting | Feb 25 - Jan 26 | per server hosted | £3,129.46 | £3,129.46 |

Research Software Engineering

If you would like to work with the e-Research Research Software Engineering group on one of your projects, please get in touch. We will meet with you to discuss the project, your proposed timeline and will let you know our availability.

Where a bid is being prepared for a new project, once provisional scheduling has been agreed, our Research Software Engineers (RSEs) can be included on bids in much the same way as research staff while assembling the costing on Worktribe. Depending on project requirements and staff availability, RSEs will be included at either professional services grade 6 or grade 7. This will usually be an existing, named member of staff, but for some larger projects it may be appropriate for us to recruit new staff. We are also able to offer assistance with bid writing for projects with a substantial software component.

For work on existing grants, for example where surplus staff budget has become available, there are several ways in which we may be able to reallocate staff costs. Please get in touch and we will coordinate with finance and post-award teams to select an appropriate mechanism.

For large projects which require leadership of a substantial research software component (e.g. Hub bids), we may be able to join you as a co-investigator. These projects represent a substantial commitment for us, so must be discussed well in advance of the funder deadline.

Storage

| Storage type | Free tier | Rate card entry | Description |
|---|---|---|---|
| Research Data Storage (RDS) | 5TB | RDS | 2 copies in independent sites, nightly backup to tape; general-purpose research data storage |
| High Performance Computing (HPC) scratch | 1TB | CREATE HDD | Large data processing area mounted on HPC compute nodes; not backed up |
| OpenStack HDD volume storage | 40GB | CREATE HDD | See CREATE Cloud Storage below |
| OpenStack SSD volume storage | 0GB | CREATE SSD | See CREATE Cloud Storage below |
  • Charges can be paid per annum or for multiple years upfront; where possible, paying for multiple years upfront is preferable
  • Allocations are generally made per research project, where projects are typically associated with a PI and a funding body
  • A small amount of metadata will be requested as part of the project's registration process
  • Where the cost of storage types is the same for your funder (e.g. RDS and HPC scratch), quotas are transferable between the systems for the duration funded
  • e-Research storage capacity is typically available at the point of request; requests for larger amounts of storage (e.g. > 100TB) may require system expansion prior to fulfilment
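A storage line in a budget is simply the chargeable capacity (above the free tier) multiplied by the per-TB rate and the number of funded years. As a minimal sketch (the figures below are taken from the RDS row of the rate card; the function name is our own):

```python
def storage_cost(required_tb, free_tb, rate_per_tb_year, years):
    """Annualised storage cost: only capacity above the free tier is charged."""
    chargeable_tb = max(0.0, required_tb - free_tb)
    return chargeable_tb * rate_per_tb_year * years

# e.g. 30 TB of RDS for a 3-year project at the Feb 26 - Jan 27
# UKRI rate (£16.36/TB/annum), with the 5 TB free tier applied:
print(f"£{storage_cost(30, 5, 16.36, 3):.2f}")  # £1227.00
```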

CREATE Cloud Storage

Storage for funded research projects using CREATE Cloud should be costed using:

  • The RDS rate: for project data storage
  • The SSD volume storage rate: for virtual machine (VM) root disks (i.e. where the operating system is installed)

Using RDS mounted to the VMs via SMB to store research project data will ensure that data is backed up and recoverable in the event of disaster.

Some applications may require SSD volume storage in addition to that required for the VM root disks (e.g. SQL databases). Please get in touch if you wish to discuss your requirement in more detail.

Compute

There are currently three options for costing compute capacity into your research grants. You can purchase:

  1. Cloud quota
  2. GPU pay per use
  3. Dedicated servers

Option 1 is most suitable for smaller requirements, such as hosting a typical web application or database. Option 2 is a way to pay for access to one or more GPUs, either via a reservation or via prioritised access through a queue. Option 3 is best suited to projects expecting to require large amounts of CPU capacity throughout their lifetime.

Cloud Quota

Using the CREATE Cloud rates you can cost CREATE Cloud (OpenStack) quota for your projects. These rates are generated using the standard King's Facility Costing Return to calculate a unit rate that is funder compliant. A single unit of CREATE Cloud quota translates to 1 vCPU and 2 GB of memory per annum. It is not possible to convert vCPU quota into memory or vice versa. To calculate the required quota for your project, take the total number of vCPUs you need and half the total GB of memory you need; the higher of the two numbers is the number of units of quota to purchase for each year of the project.
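The quota calculation above can be sketched as follows (the function name is our own; the 1 vCPU : 2 GB ratio is from the rate card):

```python
import math

def cloud_quota_units(vcpus, memory_gb):
    """Units of CREATE Cloud quota needed per annum.

    1 unit = 1 vCPU + 2 GB memory; the two cannot be converted into each
    other, so enough units must be bought to cover the larger requirement.
    """
    return max(vcpus, math.ceil(memory_gb / 2))

# e.g. a deployment needing 8 vCPUs and 24 GB memory is memory-bound:
print(cloud_quota_units(8, 24))   # 12 units per annum
# while 16 vCPUs and 24 GB memory is CPU-bound:
print(cloud_quota_units(16, 24))  # 16 units per annum
```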

CREATE Cloud Storage

Cloud Quota costs are exclusive of any storage requirements. See CREATE Cloud Storage above for details.

GPU pay per use

You can pay for prioritised access to GPUs within CREATE HPC and TRE or (subject to availability) reserved access within CREATE Cloud or TRE. The following models are available in the GPU tiers listed on the rate card above.

| Tier | Models |
|---|---|
| 1 | B200, H200 |
| 2 | A100, L40S |
| 3 | A30, A40 |

Pay for GPU priority

This method is our default option for grant funded GPU access and works as follows:

  • Grant application budgets include the cost of a specific number of GPU hours, based on the rate card
  • On award, email support@er.kcl.ac.uk to arrange the transfer of funds for that amount
  • Your project's Slurm account is configured with this amount of credit
  • The project is given access to a Slurm partition which has a higher priority than the general access partitions
  • As jobs are run on GPUs the project's account balance is reduced based on the GPU tier's rate
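Budgeting for this model reduces to summing GPU hours per tier against the hourly rates. A minimal sketch, using the Feb 26 - Jan 27 UKRI rates from the rate card (the dictionary and function names are our own):

```python
# GPU-hour rates per tier, copied from the Feb 26 - Jan 27 UKRI column.
RATES_UKRI = {1: 0.9847, 2: 0.4374, 3: 0.2420}

def gpu_budget(hours_by_tier, rates=RATES_UKRI):
    """Total credit (in £) to include in the grant for the requested hours."""
    return sum(hours * rates[tier] for tier, hours in hours_by_tier.items())

# e.g. 5,000 hours on Tier 2 (A100/L40S) plus 20,000 hours on Tier 3 (A30/A40):
print(f"£{gpu_budget({2: 5000, 3: 20000}):.2f}")  # £7027.00
```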

Pay for GPU reservation

Pay for GPU priority is our preferred method of providing grant funded GPU access. However, some projects may require a permanent reservation of one or more GPUs for a fixed duration, e.g. a VM running an inference pipeline. Subject to GPU availability and assessment of project requirements, it is possible to reserve a GPU for use within CREATE Cloud or TRE. These reservations are charged as per the rate card.

GPU models currently available for reservation:

  • L40S 48GB
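If a reservation is billed for every hour it is held (our assumption here; confirm the billing basis with us before budgeting), the annual cost is the GPU-hour rate times the hours in a year:

```python
HOURS_PER_YEAR = 24 * 365  # assumption: a reservation accrues charges continuously

def reservation_cost(rate_per_gpu_hour, gpus=1, years=1):
    """Cost (in £) of holding a GPU reservation for the given duration."""
    return rate_per_gpu_hour * HOURS_PER_YEAR * gpus * years

# e.g. one L40S (Tier 2) reserved for a year at the UKRI rate of £0.4374/GPU hour:
print(f"£{reservation_cost(0.4374):.2f}")  # £3831.62
```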

Dedicated Servers

To select the appropriate server(s) for your project please review our Server Price List which is updated on a quarterly basis by our preferred supplier. If you are unsure of which server(s) fit your requirement or want something that isn't listed please get in touch to discuss.

In addition to the cost of the server and any warranty extension, you should include a research facility access charge for CREATE Server Hosting in the DI costs on your grant, as found on the rate card. This covers hosting, maintenance, monitoring and power for the lifetime of the server.

As with any other capital equipment cost, if submitting a grant application which covers less than 100% FEC, you will need to request the remaining share from the College Equipment Fund (CEF). Depending on the funder, it is common for grants to cover only 50% or 80% of the total equipment cost; notable exceptions are specific infrastructure calls which may cover 100% FEC for equipment. There are three pathways to request this contribution, depending on the amount being requested.

In the expandable sections below, we provide some standard responses which you may use in your CEF001 and CEF002 forms for server purchases. These responses describe how e-Research manages compute infrastructure on behalf of research projects and our policies aimed at ensuring maximum efficient usage. In addition to our standard text, you should describe how your project intends to use the server. For CEF002, you should also explain why this resource is important for your research and any additional benefit the resource would bring to your school or faculty.

Once you have completed your CEF form, it should be emailed to researchinfrastructure@kcl.ac.uk two weeks in advance of the application submission deadline. After your request is approved, you will receive a signed Letter of Support which you should upload to the Documents tab within your Worktribe project alongside the completed CEF001 or CEF002 form.

Standard Response Text for CEF001

Support

The server will be added by the e-Research Scientific Computing Infrastructure team as a reserved node in KCL's existing cluster (CREATE). The infrastructure, processes and staff required to maintain this cluster are well established, meaning that the marginal overhead is minimal.

The design of CREATE allows unused capacity on reserved nodes to be made available for low-priority compute tasks from other researchers, so we expect to be able to maintain high utilisation rates even during periods where our project may not have immediate need of the server.

Space

The server will be hosted as part of the existing CREATE cluster managed by e-Research in one of the KCL datacenters (e.g. Virtus London4 or London7). These datacenters provide us with sufficient space, power, networking and physical security to continue to expand. No substantial changes are required and e-Research staff are experienced in managing infrastructure of this kind.

Data and Compute

This server forms part of the compute capability available to both our project and to KCL as a whole. As part of KCL’s established cluster, it will be connected via high-speed network to our existing data storage platforms and will be able to host services accessible securely via the public internet, internal services, or provide High Performance Computing capacity as needed.

Standard Response Text for CEF002

Strategic Case

The expected minimum useful lifetime of the equipment is five years as the KCL compute cluster continues to grow and older machines are gradually replaced. However, we may be able to keep equipment running longer than this if demand is high.

Management of the Equipment

The server will be added by the e-Research Scientific Computing Infrastructure team as a reserved node in KCL's existing cluster (CREATE). The infrastructure, processes and staff required to maintain this cluster are well established, meaning that the marginal overhead is minimal.

The design of CREATE allows unused capacity on reserved nodes to be made available for low-priority compute tasks from other researchers, so we expect to be able to maintain high utilisation rates even during periods where our project may not have immediate need of the server.

Underpinning Research and Environment

During times where some capacity remains available, we have the ability to contribute this back to the pool of resources available to all researchers at KCL.

Space

The server will be hosted as part of the existing CREATE cluster managed by e-Research in one of the KCL datacenters (e.g. Virtus London4 or London7). These datacenters provide us with sufficient space, power, networking and physical security to continue to expand. No substantial changes are required and e-Research staff are experienced in managing infrastructure of this kind.

Data and Compute

This server forms part of the compute capability available to both our project and to KCL as a whole. As part of KCL’s established cluster, it will be connected via high-speed network to our existing data storage platforms and will be able to host services accessible securely via the public internet, internal services, or provide High Performance Computing capacity as needed.

More information about the CEF process, including the CEF001 and CEF002 templates, is available on the Research Facilities SharePoint site.

Once purchased, servers will be hosted in the CREATE ecosystem in one of the following configurations:

| Configuration | Description | Use case |
|---|---|---|
| HPC node | Installed in the CREATE HPC cluster in a private Slurm partition; available to other users when idle | Multi-user CPU or GPU based batch processing |
| Hypervisor | Installed in the CREATE Cloud as an OpenStack hypervisor | Long-lived deployments of multiple virtual machines |
| Stand-alone | Installed within the CREATE Cloud as a stand-alone server | When Slurm and/or OpenStack are not suitable |

Include volume storage costs for hypervisors

Unless strictly required by the research use case, we prefer to deploy servers as HPC compute nodes or OpenStack hypervisors. It is therefore important for those ordering dedicated servers that will be deployed as hypervisors to include volume storage costs alongside their purchase (typically SSD).

GPU lead times

Nvidia B- and H-series GPU servers are currently taking months to arrive; placing orders as soon as funding is secured is highly recommended.