
Project flow#

LaminDB allows tracking data flow on the entire project level.

Here, we walk through example app uploads, pipelines & notebooks following Schmidt et al., 2022.

A CRISPR screen reading out a phenotypic endpoint on T cells is paired with scRNA-seq to generate insights into IFN-γ production.

These insights get linked back to the original data through the steps taken in the project to provide context for interpretation & future decision making.

More specifically: Why should I care about data flow?

Data flow tracks data sources & transformations to trace biological insights, verify experimental outcomes, meet regulatory standards, increase the robustness of research and optimize the feedback loop of team-wide learning iterations.

While tracking data flow is easier when it’s governed by deterministic pipelines, it becomes hard when it’s governed by interactive human-driven analyses.

LaminDB interfaces with workflow managers for the former and embraces the latter.

Setup#

Init a test instance:

!lamin init --storage ./mydata
✅ saved: User(id='DzTjkKse', handle='testuser1', email='testuser1@lamin.ai', name='Test User1', updated_at=2023-10-01 16:42:15)
✅ saved: Storage(id='ZduDN4Gs', root='/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata', type='local', updated_at=2023-10-01 16:42:15, created_by_id='DzTjkKse')
💡 loaded instance: testuser1/mydata
💡 did not register local instance on hub (if you want, call `lamin register`)

Import lamindb:

import lamindb as ln
from IPython.display import Image, display
💡 loaded instance: testuser1/mydata (lamindb 0.54.4)

Steps#

In the following, we walk through example steps covering different types of transforms (Transform).

Note

The full notebooks are in this repository.

App upload of phenotypic data #

Register data through app upload from wetlab by testuser1:

ln.setup.login("testuser1")
transform = ln.Transform(name="Upload GWS CRISPRa result", type="app")
ln.track(transform)
output_path = ln.dev.datasets.schmidt22_crispra_gws_IFNG(ln.settings.storage)
output_file = ln.File(output_path, description="Raw data of schmidt22 crispra GWS")
output_file.save()
💡 Transform(id='12fcGZ8vnoQxOc', name='Upload GWS CRISPRa result', type='app', updated_at=2023-10-01 16:42:17, created_by_id='DzTjkKse')
💡 Run(id='yNLhjuPEnTTdTPm7XDC0', run_at=2023-10-01 16:42:17, transform_id='12fcGZ8vnoQxOc', created_by_id='DzTjkKse')

Hit identification in notebook #

Access, transform & register data in drylab by testuser2:

ln.setup.login("testuser2")
transform = ln.Transform(name="GWS CRIPSRa analysis", type="notebook")
ln.track(transform)
# access
input_file = ln.File.filter(key="schmidt22-crispra-gws-IFNG.csv").one()
# identify hits
input_df = input_file.load().set_index("id")
output_df = input_df[input_df["pos|fdr"] < 0.01].copy()
# register hits in output file
ln.File(output_df, description="hits from schmidt22 crispra GWS").save()
💡 Transform(id='y3w8bCef41Q39B', name='GWS CRIPSRa analysis', type='notebook', updated_at=2023-10-01 16:42:19, created_by_id='bKeW4T6E')
💡 Run(id='WHYMsdidljoF28nbbABW', run_at=2023-10-01 16:42:19, transform_id='y3w8bCef41Q39B', created_by_id='bKeW4T6E')

Inspect data flow:

file = ln.File.filter(description="hits from schmidt22 crispra GWS").one()
file.view_flow()
(data flow graph: the app-uploaded raw data feeds the GWS CRISPRa analysis notebook, which produced the hits file)

Sequencer upload #

Upload files from sequencer:

ln.setup.login("testuser1")
ln.track(ln.Transform(name="Chromium 10x upload", type="pipeline"))
# register output files of upload
upload_dir = ln.dev.datasets.dir_scrnaseq_cellranger(
    "perturbseq", basedir=ln.settings.storage, output_only=False
)
ln.File(upload_dir.parent / "fastq/perturbseq_R1_001.fastq.gz").save()
ln.File(upload_dir.parent / "fastq/perturbseq_R2_001.fastq.gz").save()
ln.setup.login("testuser2")
💡 Transform(id='EXvTl5gOn2bima', name='Chromium 10x upload', type='pipeline', updated_at=2023-10-01 16:42:20, created_by_id='DzTjkKse')
💡 Run(id='WrsTDTGxKWDmnd41cXcR', run_at=2023-10-01 16:42:20, transform_id='EXvTl5gOn2bima', created_by_id='DzTjkKse')
❗ file has more than one suffix (path.suffixes), inferring: '.fastq.gz'
❗ file has more than one suffix (path.suffixes), inferring: '.fastq.gz'

scRNA-seq bioinformatics pipeline #

Process the uploaded files using a script or workflow manager (see Pipelines) and obtain 3 output files in a directory filtered_feature_bc_matrix/:

transform = ln.Transform(name="Cell Ranger", version="7.2.0", type="pipeline")
ln.track(transform)
# access uploaded files as inputs for the pipeline
input_files = ln.File.filter(key__startswith="fastq/perturbseq").all()
input_paths = [file.stage() for file in input_files]
# register output files
output_files = ln.File.from_dir("./mydata/perturbseq/filtered_feature_bc_matrix/")
ln.save(output_files)
💡 Transform(id='Dfc0BNiMfvc2Dq', name='Cell Ranger', version='7.2.0', type='pipeline', updated_at=2023-10-01 16:42:21, created_by_id='bKeW4T6E')
💡 Run(id='5oUjK0NiHPydPLst1PZP', run_at=2023-10-01 16:42:21, transform_id='Dfc0BNiMfvc2Dq', created_by_id='bKeW4T6E')
❗ file has more than one suffix (path.suffixes), inferring: '.tsv.gz'
❗ file has more than one suffix (path.suffixes), inferring: '.mtx.gz'
❗ file has more than one suffix (path.suffixes), inferring: '.tsv.gz'

Post-process these 3 files:

transform = ln.Transform(name="Postprocess Cell Ranger", version="2.0", type="pipeline")
ln.track(transform)
input_files = [f.stage() for f in output_files]
output_path = ln.dev.datasets.schmidt22_perturbseq(basedir=ln.settings.storage)
output_file = ln.File(output_path, description="perturbseq counts")
output_file.save()
❗ record with similar name exist! did you mean to load it?
                          id  __ratio__
name
Cell Ranger   Dfc0BNiMfvc2Dq       90.0
💡 Transform(id='WMZ9fOVdeZuO3V', name='Postprocess Cell Ranger', version='2.0', type='pipeline', updated_at=2023-10-01 16:42:21, created_by_id='bKeW4T6E')
💡 Run(id='LGvUqQsVXdEnULhxDMBz', run_at=2023-10-01 16:42:21, transform_id='WMZ9fOVdeZuO3V', created_by_id='bKeW4T6E')

Inspect data flow:

output_files[0].view_flow()
(data flow graph for one of the Cell Ranger output files, tracing back to the uploaded fastq files)

Integrate scRNA-seq & phenotypic data #

Integrate data in a notebook:

transform = ln.Transform(
    name="Perform single cell analysis, integrate with CRISPRa screen",
    type="notebook",
)
ln.track(transform)

file_ps = ln.File.filter(description__icontains="perturbseq").one()
adata = file_ps.load()
file_hits = ln.File.filter(description="hits from schmidt22 crispra GWS").one()
screen_hits = file_hits.load()

import scanpy as sc

sc.tl.score_genes(adata, adata.var_names.intersection(screen_hits.index).tolist())
filesuffix = "_fig1_score-wgs-hits.png"
sc.pl.umap(adata, color="score", show=False, save=filesuffix)
filepath = f"figures/umap{filesuffix}"
file = ln.File(filepath, key=filepath)
file.save()
filesuffix = "fig2_score-wgs-hits-per-cluster.png"
sc.pl.matrixplot(
    adata, groupby="cluster_name", var_names=["score"], show=False, save=filesuffix
)
filepath = f"figures/matrixplot_{filesuffix}"
file = ln.File(filepath, key=filepath)
file.save()
💡 Transform(id='AU8c57KcGj8G6K', name='Perform single cell analysis, integrate with CRISPRa screen', type='notebook', updated_at=2023-10-01 16:42:22, created_by_id='bKeW4T6E')
💡 Run(id='hQKSDfXO1prLSfXpgu9n', run_at=2023-10-01 16:42:22, transform_id='AU8c57KcGj8G6K', created_by_id='bKeW4T6E')
WARNING: saving figure to file figures/umap_fig1_score-wgs-hits.png
WARNING: saving figure to file figures/matrixplot_fig2_score-wgs-hits-per-cluster.png

Review results#

Let’s load one of the plots:

ln.track()
file = ln.File.filter(key__contains="figures/matrixplot").one()
file.stage()
💡 notebook imports: ipython==8.16.0 lamindb==0.54.4 scanpy==1.9.5
💡 Transform(id='1LCd8kco9lZUz8', name='Project flow', short_name='project-flow', version='0', type=notebook, updated_at=2023-10-01 16:42:24, created_by_id='bKeW4T6E')
💡 Run(id='nQMEVNxcDL1RSEatTSjy', run_at=2023-10-01 16:42:24, transform_id='1LCd8kco9lZUz8', created_by_id='bKeW4T6E')
PosixUPath('/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata/figures/matrixplot_fig2_score-wgs-hits-per-cluster.png')
display(Image(filename=file.path))
(figure: matrixplot of the GWS-hit score per cluster)

We see that the image file is tracked as an input of the current notebook. The input is highlighted; the notebook appears at the bottom:

file.view_flow()
(data flow graph: the figure file, highlighted as an input, feeds the current "Project flow" notebook at the bottom)

Alternatively, we can look at the sequence of transforms:

transform = ln.Transform.search("Bird's eye view", return_queryset=True).first()
transform.parents.df()
name short_name version type reference reference_type initial_version_id updated_at created_by_id
id
WMZ9fOVdeZuO3V Postprocess Cell Ranger None 2.0 pipeline None None None 2023-10-01 16:42:22 bKeW4T6E
y3w8bCef41Q39B GWS CRIPSRa analysis None None notebook None None None 2023-10-01 16:42:19 bKeW4T6E
transform.view_parents()
(graph of the transform's parent transforms rendered by view_parents())

Understand runs#

We tracked pipeline and notebook runs through run_context, which stores a Transform and a Run record as a global context.

File objects are the inputs and outputs of runs.
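
For example, here is a minimal sketch of how these relations can be inspected for the perturbseq counts file registered above (file.run and file.input_of mirror the run_id and input_of fields referenced in this guide; exact output depends on your instance):

# the file registered by the "Postprocess Cell Ranger" pipeline above
file = ln.File.filter(description="perturbseq counts").one()
# the run that produced this file, and the transform that run belongs to
file.run
file.run.transform
# runs that used this file as an input (populated by run-input auto-tracking)
file.input_of.all()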

What if I don’t want a global context?

Sometimes, we don’t want to create a global run context but rather pass a run manually when creating a file:

run = ln.Run(transform=transform)
ln.File(filepath, run=run)
When does a file appear as a run input?

When accessing a file via stage(), load() or backed(), two things happen (see the sketch after this list):

  1. The current run gets added to file.input_of

  2. The transform of that file gets added as a parent of the current transform
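
A minimal sketch of both effects, assuming the files registered above (the ln.dev.run_context attribute access is an assumption based on the global run context described in this section):

ln.track()  # set the global Transform & Run context for this notebook
file = ln.File.filter(description="hits from schmidt22 crispra GWS").one()
df = file.load()  # accessing the file adds the current run to file.input_of
file.input_of.all()  # 1. the current run now appears among the file's input runs
ln.dev.run_context.transform.parents.df()  # 2. the file's transform is now a parent (assumed attribute path)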

You can switch off auto-tracking of run inputs by setting ln.settings.track_run_inputs = False.

You can also track run inputs on a case-by-case basis by passing is_run_input=True, e.g.:

file.load(is_run_input=True)

Query by provenance#

We can query or search for the notebook that created the file:

transform = ln.Transform.search("GWS CRIPSRa analysis", return_queryset=True).first()

And then find all the files created by that notebook:

ln.File.filter(transform=transform).df()
storage_id key suffix accessor description version size hash hash_type transform_id run_id initial_version_id updated_at created_by_id
id
XM4MyeDa8mY3S5mev5W3 ZduDN4Gs None .parquet DataFrame hits from schmidt22 crispra GWS None 18368 TufBUAIQVzLPDJ4sCV_kTg md5 y3w8bCef41Q39B WHYMsdidljoF28nbbABW None 2023-10-01 16:42:19 bKeW4T6E

Which transform ingested a given file?

file = ln.File.filter().first()
file.transform
Transform(id='12fcGZ8vnoQxOc', name='Upload GWS CRISPRa result', type='app', updated_at=2023-10-01 16:42:18, created_by_id='DzTjkKse')

And which user?

file.created_by
User(id='DzTjkKse', handle='testuser1', email='testuser1@lamin.ai', name='Test User1', updated_at=2023-10-01 16:42:20)

Which transforms were created by a given user?

users = ln.User.lookup()
ln.Transform.filter(created_by=users.testuser2).df()
name short_name version type reference reference_type initial_version_id updated_at created_by_id
id
y3w8bCef41Q39B GWS CRIPSRa analysis None None notebook None None None 2023-10-01 16:42:19 bKeW4T6E
Dfc0BNiMfvc2Dq Cell Ranger None 7.2.0 pipeline None None None 2023-10-01 16:42:21 bKeW4T6E
WMZ9fOVdeZuO3V Postprocess Cell Ranger None 2.0 pipeline None None None 2023-10-01 16:42:22 bKeW4T6E
AU8c57KcGj8G6K Perform single cell analysis, integrate with C... None None notebook None None None 2023-10-01 16:42:24 bKeW4T6E
1LCd8kco9lZUz8 Project flow project-flow 0 notebook None None None 2023-10-01 16:42:24 bKeW4T6E

Which notebooks were created by a given user?

ln.Transform.filter(created_by=users.testuser2, type="notebook").df()
name short_name version type reference reference_type initial_version_id updated_at created_by_id
id
y3w8bCef41Q39B GWS CRIPSRa analysis None None notebook None None None 2023-10-01 16:42:19 bKeW4T6E
AU8c57KcGj8G6K Perform single cell analysis, integrate with C... None None notebook None None None 2023-10-01 16:42:24 bKeW4T6E
1LCd8kco9lZUz8 Project flow project-flow 0 notebook None None None 2023-10-01 16:42:24 bKeW4T6E

We can also view all recent additions to the entire database:

ln.view()
File
storage_id key suffix accessor description version size hash hash_type transform_id run_id initial_version_id updated_at created_by_id
id
g9Jfla1153Mijl5uxrew ZduDN4Gs figures/matrixplot_fig2_score-wgs-hits-per-clu... .png None None None 28814 H0Pxpa-fZOvigo74eXHZsQ md5 AU8c57KcGj8G6K hQKSDfXO1prLSfXpgu9n None 2023-10-01 16:42:24 bKeW4T6E
IreOZ10YvYx9n3edW6TO ZduDN4Gs figures/umap_fig1_score-wgs-hits.png .png None None None 118999 1-WtAvRL1d_SSjZvMMOMkg md5 AU8c57KcGj8G6K hQKSDfXO1prLSfXpgu9n None 2023-10-01 16:42:23 bKeW4T6E
TH8pQcTssHyUwrxzKA7e ZduDN4Gs schmidt22_perturbseq.h5ad .h5ad AnnData perturbseq counts None 20659936 la7EvqEUMDlug9-rpw-udA md5 WMZ9fOVdeZuO3V LGvUqQsVXdEnULhxDMBz None 2023-10-01 16:42:22 bKeW4T6E
uCIZ94Gk11EZ9rAIGLSM ZduDN4Gs perturbseq/filtered_feature_bc_matrix/barcodes... .tsv.gz None None None 6 rdjbqVzTyKzvmybQ5kRoqA md5 Dfc0BNiMfvc2Dq 5oUjK0NiHPydPLst1PZP None 2023-10-01 16:42:21 bKeW4T6E
FNmwEQS6T6NfF4YIe4jE ZduDN4Gs perturbseq/filtered_feature_bc_matrix/features... .tsv.gz None None None 6 3rLFl0mYyQRvNFohzb4f-w md5 Dfc0BNiMfvc2Dq 5oUjK0NiHPydPLst1PZP None 2023-10-01 16:42:21 bKeW4T6E
cndLvdgagbamMV0MOmdR ZduDN4Gs perturbseq/filtered_feature_bc_matrix/matrix.m... .mtx.gz None None None 6 SMXstcKHx_jfAZh2egk18w md5 Dfc0BNiMfvc2Dq 5oUjK0NiHPydPLst1PZP None 2023-10-01 16:42:21 bKeW4T6E
TpcYuMzTfrz3p6Dmzl79 ZduDN4Gs fastq/perturbseq_R2_001.fastq.gz .fastq.gz None None None 6 qo75nUoIKDil7HenXMkTFQ md5 EXvTl5gOn2bima WrsTDTGxKWDmnd41cXcR None 2023-10-01 16:42:20 DzTjkKse
Run
transform_id run_at created_by_id reference reference_type
id
yNLhjuPEnTTdTPm7XDC0 12fcGZ8vnoQxOc 2023-10-01 16:42:17 DzTjkKse None None
WHYMsdidljoF28nbbABW y3w8bCef41Q39B 2023-10-01 16:42:19 bKeW4T6E None None
WrsTDTGxKWDmnd41cXcR EXvTl5gOn2bima 2023-10-01 16:42:20 DzTjkKse None None
5oUjK0NiHPydPLst1PZP Dfc0BNiMfvc2Dq 2023-10-01 16:42:21 bKeW4T6E None None
LGvUqQsVXdEnULhxDMBz WMZ9fOVdeZuO3V 2023-10-01 16:42:21 bKeW4T6E None None
hQKSDfXO1prLSfXpgu9n AU8c57KcGj8G6K 2023-10-01 16:42:22 bKeW4T6E None None
nQMEVNxcDL1RSEatTSjy 1LCd8kco9lZUz8 2023-10-01 16:42:24 bKeW4T6E None None
Storage
root type region updated_at created_by_id
id
ZduDN4Gs /home/runner/work/lamin-usecases/lamin-usecase... local None 2023-10-01 16:42:15 DzTjkKse
Transform
name short_name version type reference reference_type initial_version_id updated_at created_by_id
id
1LCd8kco9lZUz8 Project flow project-flow 0 notebook None None None 2023-10-01 16:42:24 bKeW4T6E
AU8c57KcGj8G6K Perform single cell analysis, integrate with C... None None notebook None None None 2023-10-01 16:42:24 bKeW4T6E
WMZ9fOVdeZuO3V Postprocess Cell Ranger None 2.0 pipeline None None None 2023-10-01 16:42:22 bKeW4T6E
Dfc0BNiMfvc2Dq Cell Ranger None 7.2.0 pipeline None None None 2023-10-01 16:42:21 bKeW4T6E
EXvTl5gOn2bima Chromium 10x upload None None pipeline None None None 2023-10-01 16:42:20 DzTjkKse
y3w8bCef41Q39B GWS CRIPSRa analysis None None notebook None None None 2023-10-01 16:42:19 bKeW4T6E
12fcGZ8vnoQxOc Upload GWS CRISPRa result None None app None None None 2023-10-01 16:42:18 DzTjkKse
User
handle email name updated_at
id
bKeW4T6E testuser2 testuser2@lamin.ai Test User2 2023-10-01 16:42:21
DzTjkKse testuser1 testuser1@lamin.ai Test User1 2023-10-01 16:42:20
!lamin login testuser1
!lamin delete --force mydata
!rm -r ./mydata
✅ logged in with email testuser1@lamin.ai and id DzTjkKse
💡 deleting instance testuser1/mydata
✅     deleted instance settings file: /home/runner/.lamin/instance--testuser1--mydata.env
✅     instance cache deleted
✅     deleted '.lndb' sqlite file
❗     consider manually deleting your stored data: /home/runner/work/lamin-usecases/lamin-usecases/docs/mydata