scrna4/6

Analyze a collection in memory

Here, we’ll analyze the growing collection by loading it into memory. This is only possible if it’s not too large. If your data is large, you’ll likely want to iterate over the collection to train a model, the topic of the next page (scrna5/6).

import lamindb as ln
import bionty as bt
💡 connected lamindb: testuser1/test-scrna
ln.settings.transform.stem_uid = "mfWKm8OtAzp8"
ln.settings.transform.version = "1"
💡 notebook imports: bionty==0.43.0 lamindb==0.72.0 scanpy==1.10.1
💡 saved: Transform(version='1', uid='mfWKm8OtAzp85zKv', name='Analyze a collection in memory', key='scrna4', type='notebook', updated_at=2024-05-20 13:15:09 UTC, created_by_id=1)
💡 saved: Run(uid='sW6XGIzylt5n4Vb9IGwm', transform_id=4, created_by_id=1)
version | created_at | created_by_id | updated_at | uid | name | description | hash | reference | reference_type | transform_id | run_id | artifact_id | visibility
2 | 2024-05-20 13:14:59 UTC | 1 | 2024-05-20 13:14:59 UTC | 3BTWl0pz9pZkuckKRV4Q | My versioned scRNA-seq collection | None | HNR3VFV60_yqRnUka11E | None | None | 2 | 2 | None | 1
1 | 2024-05-20 13:14:35 UTC | 1 | 2024-05-20 13:14:35 UTC | 3BTWl0pz9pZkuckKTRgo | My versioned scRNA-seq collection | None | exJtsBYH53iiebYH-Qx0 | None | None | 1 | 1 | None | 1
collection = ln.Collection.filter(
    name="My versioned scRNA-seq collection", version="2"
).one()
version | created_at | created_by_id | updated_at | uid | storage_id | key | suffix | accessor | description | size | hash | hash_type | n_objects | n_observations | transform_id | run_id | visibility | key_is_virtual
None | 2024-05-20 13:14:56 UTC | 1 | 2024-05-20 13:14:57 UTC | LPUXSz3UWuSE3RhVEwxK | 1 | None | .h5ad | AnnData | 10x reference adata | 857752 | 0Fozmib89XWbFoD6hSq5yA | md5 | None | 70 | 2 | 2 | 1 | True
None | 2024-05-20 13:14:30 UTC | 1 | 2024-05-20 13:14:34 UTC | m60bp7MXfPXt0v0vwY13 | 1 | None | .h5ad | AnnData | Human immune cells from Conde22 | 57612943 | 9sXda5E7BYiVoDOQkTC0KB | sha1-fl | None | 1648 | 1 | 1 | 1 | True

If the collection isn’t too large, we can now load it into memory.

Under the hood, the AnnData objects are concatenated during loading.

How long this takes depends on the size of the collection and where it is stored.

If you load the collection often, consider storing a concatenated version of it, rather than the individual pieces.

adata = collection.load()

The default is an outer join during concatenation as in pandas:

AnnData object with n_obs × n_vars = 1718 × 36508
    obs: 'cell_type', 'n_genes', 'percent_mito', 'louvain', 'donor', 'tissue', 'assay', 'artifact_uid'
    obsm: 'X_pca', 'X_umap'
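The outer-join behavior can be illustrated with plain pandas (a toy example with hypothetical gene columns, not the collection's actual data): the result keeps the union of columns and fills values missing in either piece with NaN.

```python
import pandas as pd

# two pieces with partially overlapping gene columns
a = pd.DataFrame({"g1": [1.0], "g2": [2.0]}, index=["cell1"])
b = pd.DataFrame({"g2": [3.0], "g3": [4.0]}, index=["cell2"])

# outer join: union of columns, missing entries become NaN
joined = pd.concat([a, b], join="outer")
print(joined.shape)  # (2, 3)
```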

The AnnData object references the individual artifacts in its .obs annotations:
Index(['LPUXSz3UWuSE3RhVEwxK', 'm60bp7MXfPXt0v0vwY13'], dtype='object')
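This provenance column makes it easy to trace each cell back to its source artifact. A minimal sketch with plain pandas (a miniature stand-in for the real `adata.obs`, reusing the two artifact uids above):

```python
import pandas as pd

# miniature stand-in for adata.obs: each cell carries its source artifact's uid
obs = pd.DataFrame(
    {
        "cell_type": ["T cell", "B cell", "T cell"],
        "artifact_uid": pd.Categorical(
            ["LPUXSz3UWuSE3RhVEwxK", "m60bp7MXfPXt0v0vwY13", "m60bp7MXfPXt0v0vwY13"]
        ),
    }
)

# the categories are exactly the uids of the concatenated artifacts
print(list(obs["artifact_uid"].cat.categories))

# subset the cells that originated from one artifact
per_artifact = obs[obs["artifact_uid"] == "m60bp7MXfPXt0v0vwY13"]
print(len(per_artifact))  # 2
```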

We can easily obtain Ensembl IDs for gene symbols using the lookup object:

genes = bt.Gene.lookup(field="symbol")

Let us create a plot:

import scanpy as sc

sc.pp.pca(adata, n_comps=2)
sc.pl.pca(
    adata,
    color=genes.itm2b.ensembl_gene_id,
    title=f"{genes.itm2b.symbol} / {genes.itm2b.ensembl_gene_id} / {genes.itm2b.description}",
    save="_itm2b",
)
WARNING: saving figure to file figures/pca_itm2b.pdf

We could save the plot as a PDF and then see it in the flow diagram:

artifact = ln.Artifact("./figures/pca_itm2b.pdf", description="My result on ITM2B")
artifact.save()

But since the image is part of the notebook, we can also rely on the report created when saving the notebook from the command line:

lamin save <notebook_path>

To see the current notebook, visit: