
Analyze a collection in memory

Here, we’ll analyze the growing collection by loading it into memory. This is only possible if it’s not too large. If your data is large, you’ll likely want to iterate over the collection to train a model, the topic of the next page (scrna5/6).

import lamindb as ln
import bionty as bt
💡 connected lamindb: testuser1/test-scrna
ln.settings.transform.stem_uid = "mfWKm8OtAzp8"
ln.settings.transform.version = "1"
💡 notebook imports: bionty==0.44.0 lamindb==0.74.0 scanpy==1.10.1
💡 saved: Transform(uid='mfWKm8OtAzp85zKv', version='1', name='Analyze a collection in memory', key='scrna4', type='notebook', created_by_id=1, updated_at='2024-06-19 23:18:29 UTC')
💡 saved: Run(uid='wVooPzhngIIxJgHaa6ye', transform_id=4, created_by_id=1)
Run(uid='wVooPzhngIIxJgHaa6ye', started_at='2024-06-19 23:18:29 UTC', is_consecutive=True, transform_id=4, created_by_id=1)
ln.Collection.filter(name="My versioned scRNA-seq collection").df()

| | uid | version | name | description | hash | reference | reference_type | visibility | transform_id | artifact_id | run_id | created_by_id | updated_at |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | FIxCBX5OLsfoWyhQUMd3 | 2 | My versioned scRNA-seq collection | None | Umjxg4HR1wkZqKROsyz1 | None | None | 1 | 2 | None | 2 | 1 | 2024-06-19 23:18:19.908259+00:00 |
| 1 | FIxCBX5OLsfoWyhQO2Hz | 1 | My versioned scRNA-seq collection | None | exJtsBYH53iiebYH-Qx0 | None | None | 1 | 1 | None | 1 | 1 | 2024-06-19 23:17:51.349635+00:00 |
collection = ln.Collection.filter(
    name="My versioned scRNA-seq collection", version="2"
).one()
collection.artifacts.df()

| | uid | version | description | key | suffix | type | accessor | size | hash | hash_type | n_objects | n_observations | visibility | key_is_virtual | storage_id | transform_id | run_id | created_by_id | updated_at |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | kSs9STRn8LxsjEBeSyAW | None | 10x reference adata | None | .h5ad | dataset | AnnData | 857752 | PnpU6XI5Fbzwc49XgrgdNg | md5 | None | 70 | 1 | True | 1 | 2 | 2 | 1 | 2024-06-19 23:18:17.030108+00:00 |
| 1 | ewuV08zrWRrpUyJEjE6N | None | Human immune cells from Conde22 | None | .h5ad | dataset | AnnData | 57612943 | 9sXda5E7BYiVoDOQkTC0KB | sha1-fl | None | 1648 | 1 | True | 1 | 1 | 1 | 1 | 2024-06-19 23:17:50.237700+00:00 |

If the collection isn’t too large, we can now load it into memory.

Under the hood, the AnnData objects are concatenated during loading.

How long this takes depends on the size of the collection, the storage backend, and your network connection.

If you load the collection often, consider storing a concatenated copy rather than re-concatenating the individual pieces every time.
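The caching idea can be sketched with plain pandas (the AnnData case is analogous): concatenate once, persist the result, and serve later loads from the cached file. The cache path and helper name below are hypothetical, not part of the lamindb API.

```python
from pathlib import Path

import pandas as pd

# Hypothetical on-disk cache for the concatenated dataset (illustrative name).
CACHE = Path("concatenated_cache.pkl")


def load_concatenated(pieces):
    """Concatenate the pieces once and cache the result to disk;
    later calls reload the cached file instead of re-concatenating."""
    if CACHE.exists():
        return pd.read_pickle(CACHE)
    df = pd.concat(pieces, join="outer")
    df.to_pickle(CACHE)
    return df
```

The first call pays the concatenation cost; every later call is a single file read.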

adata = collection.load()

The default is an outer join during concatenation, as in pandas:

AnnData object with n_obs × n_vars = 1718 × 36508
    obs: 'cell_type', 'n_genes', 'percent_mito', 'louvain', 'donor', 'tissue', 'assay', 'artifact_uid'
    obsm: 'X_pca', 'X_umap'
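The outer-join behavior mirrors `pandas.concat`: the result keeps the union of all variables, with missing values filled in. A toy pandas example (illustrative data, not the collection above):

```python
import pandas as pd

# Two toy expression tables over unequal gene sets (illustrative only).
a = pd.DataFrame({"GeneA": [1.0, 2.0], "GeneB": [3.0, 4.0]})
b = pd.DataFrame({"GeneB": [5.0], "GeneC": [6.0]})

# An outer join keeps the union of columns; entries a piece didn't
# measure become NaN, which is how n_vars grows in the merged object.
merged = pd.concat([a, b], join="outer", ignore_index=True)
print(merged.columns.tolist())  # ['GeneA', 'GeneB', 'GeneC']
```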

The AnnData references the individual artifacts through the artifact_uid column in .obs:
Index(['kSs9STRn8LxsjEBeSyAW', 'ewuV08zrWRrpUyJEjE6N'], dtype='object')
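Because every row of `.obs` carries the uid of its source artifact, you can split or count observations per artifact with ordinary pandas operations. A toy sketch (the uids are taken from the index above; the per-artifact counts follow the n_observations shown earlier and are illustrative):

```python
import pandas as pd

# Toy .obs-like frame: each cell remembers which artifact it came from.
obs = pd.DataFrame(
    {
        "artifact_uid": pd.Categorical(
            ["kSs9STRn8LxsjEBeSyAW"] * 70 + ["ewuV08zrWRrpUyJEjE6N"] * 1648
        )
    }
)

# Number of observations contributed by each artifact.
counts = obs["artifact_uid"].value_counts()
print(counts["ewuV08zrWRrpUyJEjE6N"])  # 1648
```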

We can easily obtain Ensembl IDs for gene symbols using the lookup object:

genes = bt.Gene.lookup(field="symbol")
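The lookup object exposes one auto-completable attribute per record, so `genes.itm2b` resolves to the full gene record. A minimal sketch of that pattern with `SimpleNamespace` (the records and field values here are hypothetical stand-ins, not pulled from bionty):

```python
from types import SimpleNamespace

# Hypothetical records mimicking what a gene registry might hold.
records = [
    {"symbol": "ITM2B", "ensembl_gene_id": "ENSG00000136156"},
    {"symbol": "TP53", "ensembl_gene_id": "ENSG00000141510"},
]

# Key the lookup by lower-cased symbol, so genes.itm2b auto-completes to a
# record object, analogous to bt.Gene.lookup(field="symbol").
genes_sketch = SimpleNamespace(
    **{r["symbol"].lower(): SimpleNamespace(**r) for r in records}
)

print(genes_sketch.itm2b.ensembl_gene_id)  # ENSG00000136156
```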

Let us create a plot:

import scanpy as sc

sc.pp.pca(adata, n_comps=2)
sc.pl.pca(
    adata,
    color=genes.itm2b.ensembl_gene_id,
    title=(
        f"{genes.itm2b.symbol} / {genes.itm2b.ensembl_gene_id} /"
        f" {genes.itm2b.description}"
    ),
    save="_itm2b",
)
WARNING: saving figure to file figures/pca_itm2b.pdf

We could save the plot as a PDF and then see it in the flow diagram:

artifact = ln.Artifact("./figures/pca_itm2b.pdf", description="My result on ITM2B")
artifact.save()

But since the image is part of the notebook, we can also rely on the report created when saving the notebook.


To see the current notebook, visit: