AlphaFold

Overview

  • DeepMind: AlphaFold can accurately predict 3D models of protein structures and has the potential to accelerate research in every field of biology.

Documentation

  • AlphaFold Protein Structure Database: Developed by DeepMind and EMBL-EBI

  • AlphaFold Colab: “This Colab notebook allows you to easily predict the structure of a protein using a slightly simplified version of AlphaFold v2.1.0.”

  • General Articles:

  • ARCC are NOT domain experts on the science behind using AlphaFold. We can provide best-effort support for errors you come across, but not for the use of the flags and databases that AlphaFold requires.

    • Please share any feedback you have, and we will develop this page for the wider community.

Using

Use the module name alphafold to discover the available versions and to load the application.

Loading a particular alphafold module version will appropriately set the ALPHADB (database location) and ALPHABIN (Singularity image location) environment variables and load the associated singularity module version.
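For example, assuming an Lmod-style module system (the version number below is illustrative; use the first command to see what is actually installed):

module spider alphafold
module load alphafold/2.3.0
echo $ALPHADB $ALPHABIN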

Running Alphafold

AlphaFold is distributed as a Docker image, which has been converted to a Singularity image, so it must be run using Singularity.

Flag Help

As alphafold versions update, the available options will change. After loading the alphafold module, a full list of flags can be found by running:

singularity run -B .:/etc $ALPHABIN/alphafold220.sif --help
singularity run -B .:/etc $ALPHABIN/alphafold220.sif --helpfull

Data Files and Examples

Version 2.3.0

Data Tree:

├── bfd
│   ├── bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt_a3m.ffdata
│   ├── bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt_a3m.ffindex
│   ├── bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt_cs219.ffdata
│   ├── bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt_cs219.ffindex
│   ├── bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt_hhm.ffdata
│   └── bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt_hhm.ffindex
├── mgnify
│   └── mgy_clusters_2022_05.fa
├── params
│   ├── LICENSE
│   ├── params_model_1_multimer_v3.npz
│   ├── params_model_1.npz
│   ├── params_model_1_ptm.npz
│   ├── params_model_2_multimer_v3.npz
│   ├── params_model_2.npz
│   ├── params_model_2_ptm.npz
│   ├── params_model_3_multimer_v3.npz
│   ├── params_model_3.npz
│   ├── params_model_3_ptm.npz
│   ├── params_model_4_multimer_v3.npz
│   ├── params_model_4.npz
│   ├── params_model_4_ptm.npz
│   ├── params_model_5_multimer_v3.npz
│   ├── params_model_5.npz
│   └── params_model_5_ptm.npz
├── pdb70
│   ├── md5sum
│   ├── pdb70_a3m.ffdata
│   ├── pdb70_a3m.ffindex
│   ├── pdb70_clu.tsv
│   ├── pdb70_cs219.ffdata
│   ├── pdb70_cs219.ffindex
│   ├── pdb70_hhm.ffdata
│   ├── pdb70_hhm.ffindex
│   └── pdb_filter.dat
├── pdb_mmcif
│   ├── mmcif_files
│   └── obsolete.dat
├── pdb_seqres
│   └── pdb_seqres.txt
├── uniprot
│   └── uniprot.fasta
├── uniref30
│   ├── UniRef30_2021_03_a3m.ffdata
│   ├── UniRef30_2021_03_a3m.ffindex
│   ├── UniRef30_2021_03_cs219.ffdata
│   ├── UniRef30_2021_03_cs219.ffindex
│   ├── UniRef30_2021_03_hhm.ffdata
│   ├── UniRef30_2021_03_hhm.ffindex
│   └── UniRef30_2021_03.md5sums
└── uniref90
    └── uniref90.fasta

10 directories, 43 files

Example:

singularity run -B .:/etc --nv $ALPHABIN/alphafold.sif \
  --fasta_paths=T1050.fasta \
  --output_dir=./<output_folder> \
  --model_preset=monomer \
  --db_preset=full_dbs \
  --bfd_database_path=$ALPHADB/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
  --pdb70_database_path=$ALPHADB/pdb70/pdb70 \
  --uniref30_database_path=$ALPHADB/uniref30/UniRef30_2021_03 \
  --max_template_date=2020-05-14 \
  --use_gpu_relax=<False|True> \
  --data_dir=$ALPHADB \
  --uniref90_database_path=$ALPHADB/uniref90/uniref90.fasta \
  --mgnify_database_path=$ALPHADB/mgnify/mgy_clusters_2022_05.fa \
  --template_mmcif_dir=$ALPHADB/pdb_mmcif/mmcif_files \
  --obsolete_pdbs_path=$ALPHADB/pdb_mmcif/obsolete.dat

Version 2.2.0

Our test file T1050.fasta looks like this.

If you have alternative examples, please share.
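The contents of T1050.fasta are not reproduced here. Purely as an illustration of the single-sequence FASTA format expected by --fasta_paths with the monomer preset (the header and sequence below are made-up placeholders, not the real T1050 entry), an input file looks like:

>example_protein illustrative placeholder sequence
MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKRQTLGG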

TPU Warnings

TPUs are Google's specialized ASICs and are not available on our NVidia GPU nodes, so the following form of warnings can be ignored:

CPU Mode:

Slurm parameters and alphafold flag:

The mem value will depend on your data; please share your findings and observations.

Notice that neither GPUs nor TPUs are detected, so the job runs in CPU mode only. When running in CPU mode, your output will contain very slow compile messages.
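Purely as an illustration (the account, job name, and time limit are placeholders, the partition names follow the timing table below, and the core/memory values simply mirror the observations on this page rather than an official recommendation), a CPU-only submission might look like:

#!/bin/bash
#SBATCH --job-name=alphafold-cpu       # placeholder name
#SBATCH --account=<your-project>       # placeholder account
#SBATCH --partition=teton              # or teton-cascade; confirm with sinfo
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16             # 16 cores appeared fastest in our CPU tests
#SBATCH --mem=64G                      # our test case needed at least 64G
#SBATCH --time=24:00:00                # placeholder; our CPU tests ranged from roughly 11 to 20 hours

module load alphafold

singularity run -B .:/etc $ALPHABIN/alphafold.sif \
  --fasta_paths=T1050.fasta \
  --output_dir=./<output_folder> \
  --data_dir=$ALPHADB \
  --use_gpu_relax=False \
  <remaining database flags as in the 2.3.0 example above>

Note that --nv is omitted because no GPU is requested, and use_gpu_relax is set to False for the same reason.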

GPU Mode:

With: use_gpu_relax=True

Slurm parameters and alphafold flag:

The P100 GPUs do not have TPU capabilities, so expect to see the "Unable to initialize backend 'tpu'" message. The job will finish with a "Final timings" message that lists timings for each of the models detailed at the start of the run, so in this example 5.
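As a sketch of a GPU submission with use_gpu_relax=True (again, account, job name, and time limit are placeholders, and the --gres syntax is an assumption, so confirm how GPUs are requested on your cluster):

#!/bin/bash
#SBATCH --job-name=alphafold-gpu       # placeholder name
#SBATCH --account=<your-project>       # placeholder account
#SBATCH --partition=teton-gpu          # or beartooth-gpu for an A30
#SBATCH --gres=gpu:1                   # assumed syntax for requesting a single GPU
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G
#SBATCH --time=06:00:00                # placeholder; our GPU runs took roughly 3 hours

module load alphafold

singularity run -B .:/etc --nv $ALPHABIN/alphafold.sif \
  --fasta_paths=T1050.fasta \
  --output_dir=./<output_folder> \
  --data_dir=$ALPHADB \
  --use_gpu_relax=True \
  <remaining database flags as in the 2.3.0 example above>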

With: use_gpu_relax=False

At this stage, we haven’t noticed any observable difference with setting use_gpu_relax to True or False.

Looking at the flag help:

  • The test that we are running might not be performing the “final relaxation step on the predicted models”, which is why we’re not seeing any significant difference.

  • And/Or this might be because it uses the tensor core capabilities of the GPU, which we do not have on the P100s.

Current Recommendations:

  • Our test file appears to require at least 64G of memory; with less we have run into out-of-memory issues.

  • If using a GPU, we have only had success with the P100s and A30s.

  • The P100s do NOT have tensor cores.

    • Using 2 GPUs showed no speed increase over one.

  • Although our V100s do have tensor cores, we have not had any successful tests. We believe this is due to the current NVidia driver / CUDA versions; our testing of these is ongoing.

Run Times: Observations

The timings below all use the same dataset (see below), but each run has a different random seed, so the runs are not deterministic (i.e. they have an element of randomness) and we cannot expect the same resource allocation to run in the same time.

  • In CPU-only mode, 16 cores appears to run faster than 8 or 32.

  • The cascade nodes are generally faster than the teton nodes, which is to be expected as they have a newer chipset.

  • The P100s on teton-gpu are significantly faster. We only have 8 of these, so expect jobs to be queued.

These timings cover only a small subset of the resource dimensions that can be changed within a job submission, but they provide some basic insight into what to consider. Also consider that we have 180 teton nodes and 56 cascade nodes, so if you have time to run simulations (e.g. over the weekend), you can submit many more jobs with a better chance of not being queued, rather than requesting the P100s and potentially having your jobs queued for hours or days.

Partition            | cores:8  | cores:16 | cores:32
teton: cpu           | 20:28:1  | 18:54:09 | 13:15:21
                     | 19:50:55 |          |
teton-cascade: cpu   | 16:02:35 | 10:56:07 | 11:34:45
                     | 12:07:54 | 12:06:52 | 12:08:08
teton-gpu: 1 p100    | 02:38:10 | 02:49:23 | 03:04:28
                     | 02:43:51 | 02:39:39 | 02:53:48
                     | 02:53:40 | 02:58:28 |
beartooth-gpu: 1 a30 |          | 02:39:51 |