gwl-devel

Re: Managing data files in workflows


From: zimoun
Subject: Re: Managing data files in workflows
Date: Fri, 26 Mar 2021 08:02:40 +0100

Hi Konrad,

This does not answer your concrete question but instead opens a new
one. :-)

Well, I never finished this draft; maybe it is worth discussing:

 1. how do we deal with data?
 2. on what changes does the workflow trigger a recomputation?


Cheers,
simon


-------------------- Start of forwarded message --------------------
Hi,

The recent features of the Guix Workflow Language [1] are really neat!
The end-to-end paper by Ludo [2] is also really cool!  For the online
Guix Day back in December, it would have been nice to be able to
distribute the videos via a channel.  Or to have all the talk
materials [3] in a channel.

But a package is not the right abstraction here.  First, a “data” item
can have multiple sources; second, data can be really large; and third,
data are not always readable as source and do not have an output; data
are a kind of fixed output.  (Code is data but data is not code. :-))

Note that data is already fetched via packages, see
’r-bsgenome-hsapiens-ucsc-hg19’ or ’r-bsgenome-hsapiens-ucsc-hg38’
(’guix import’ reports ~677.3 MiB and ’guix size’ reports ~748.0 MiB).  I
am not speaking about these.


If I might, let me take the example of Lars’s talk from the Guix Day:

  <https://www.psycharchives.org/handle/20.500.12034/3938>

There are two parts: the video itself and the slides.  Both belong to
the same item.  Another example is Konrad’s paper:

  <https://dx.doi.org/10.1063/1.5054887>

with the paper and the supplementary material (code+data).


With these two examples, ’package’ with some tweaks could be used.  But
for the data I deal with at work, the /gnu/store is not designed for
that.  To fix ideas, take a (large) genomics study: say 100 patients
with 0.5–10 GB of data each, plus the genomics references, which means
a couple of GB more.  At work, these days we do not have too many new
genomics projects; let us say 3 projects in parallel.  I let you
complete the calculus. ;-)


There are 3 levels:

 1- the methods for fetching: URL (http or ftp), Git, IPFS, Dat, etc.
 2- the record representing a “data”
 3- how to effectively locally store and deal with it
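To make #2 a bit more concrete, here is a rough sketch of what such a
“data” record could look like, using the ‘define-record-type*’ facility
from (guix records).  This is only an illustration, not an existing GWL
or Guix API: the record name and every field are hypothetical, chosen
to show how the description (#2) could stay separate from the fetch
methods (#1) and the storage backend (#3):

```scheme
;; Hypothetical sketch -- neither GWL nor Guix defines this record.
(use-modules (guix records))

(define-record-type* <data> data make-data
  data?
  (name     data-name)               ;string, e.g. "patient-042-reads"
  (sources  data-sources)            ;list of fetch methods (level #1)
  (hash     data-hash)               ;content hash: data as “fixed output”
  (size     data-size                ;optional expected size, to plan storage
            (default #f))
  (storage  data-storage             ;backend hint (level #3), e.g. 'git-annex
            (default #f)))
```

The point of the ‘hash’ field is that a recomputation would be
triggered by a change of content, not by a change of URL, since the
same data can have multiple sources.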

Whether it makes sense for a ’data’ to be an input of a ’package’, and
conversely, is an open question.

A long time ago, the GWL folks and I discussed “backends” such as
git-annex; from my understanding, such a backend would answer #3, and
the protocols git-annex accepts would answer #1.  That leaves #2.

In my projects, I would like to have 3 files: a manifest describing
which tools, a channels file describing at which versions, and a data
file describing how to fetch the data.  Then I have what I need to
work reproducibly: I can apply a workflow (GWL, my custom Python
script, etc.).
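For the first two of those three files, Guix already has concrete
formats; only the third is missing.  A minimal sketch, where the
package names are examples and the channel commit is a placeholder:

```scheme
;; manifest.scm -- which tools (real Guix format).
(specifications->manifest
 (list "gwl" "bwa" "samtools"))

;; channels.scm -- at which versions (real Guix format; the commit
;; below is a placeholder, to be pinned to an actual revision).
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "…")))

;; data.scm -- how to fetch the data: this third file does not exist
;; yet; it is exactly the missing piece #2 discussed above.
```

With the first two files, something like “guix time-machine -C
channels.scm -- environment -m manifest.scm” already reproduces the
computational environment; the data file would complete the picture.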


1: <https://guixwl.org/>
2: <https://hpc.guix.info/blog/2020/06/reproducible-research-articles-from-source-code-to-pdf/>
3: <https://git.savannah.gnu.org/cgit/guix/maintenance.git/tree/talks>


Cheers,
simon
-------------------- End of forwarded message --------------------


