
Re: [Dragora-users] Distribution of ARM rootfs tarballs

From: Matias Fonzo
Subject: Re: [Dragora-users] Distribution of ARM rootfs tarballs
Date: Wed, 12 Feb 2020 16:12:19 -0300
User-agent: Roundcube Webmail/1.3.8

On 2020-02-07 09:40, Kevin "The Nuclear" Bloom wrote:
Matias Fonzo writes:

On 2020-01-31 09:38, Kevin "The Nuclear" Bloom wrote:
Matias Fonzo writes:

On 2020-01-31 00:06, Kevin "The Nuclear" Bloom wrote:
Thanks for the quick reply, Matias. See my comments below:

On 2020-01-29 16:50, Kevin "The Nuclear" Bloom wrote:

Hello Kevin.  :-)

Those of us who have a C201 know that installation on this device is quite nontraditional. Instead of booting off of a USB stick and running an installer, one must do it manually by loading an SD card (or USB stick) with a special kernel partition and a special root partition. What this means is that creating an ISO for this machine is pointless. Because of that, most distros that support the machine provide a rootfs tarball that you unpack into the root partition; normally, inside /boot there is a linux.kpart or something that gets written to the kernel partition using `dd`.
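The `dd` step described above can be sketched like this. This is a safe-to-run sketch: a scratch file stands in for the real device, and the kernel-partition path named in the comment is an assumption to verify on your own hardware, not a confirmed value.

```shell
# Safe sketch of writing the signed kernel image with dd.  Here the target
# is a scratch file; on a real C201 the target would be the ChromeOS kernel
# partition device (e.g. /dev/mmcblk1p1 -- an ASSUMPTION; verify the actual
# device with `cgpt show` or `fdisk -l` before writing anything).
dd if=/dev/zero of=linux.kpart bs=1024 count=16 2>/dev/null  # stand-in image
dd if=linux.kpart of=kpart-target.img bs=4M conv=fsync 2>/dev/null
cmp -s linux.kpart kpart-target.img && echo "kpart written OK"
```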

Okay. Question: what format would be appropriate for creating the rootfs?

Arch-arm uses tar.gz, and we should probably stick to that because some people might be unpacking it from ChromeOS, which doesn't come with lzip installed. It can, however, unpack gzip.

Okay, tar+gzip then.
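The pack/unpack cycle agreed on above can be sketched as follows. This is a minimal, runnable illustration: the staged tree contents and the mount-point name are invented for the example, not Dragora's actual rootfs layout.

```shell
# Minimal sketch, assuming a staged root tree in ./rootfs: pack it as
# tar+gzip (unpackable from ChromeOS, which lacks lzip but ships gzip).
mkdir -p rootfs/boot rootfs/etc
echo 'dragora' > rootfs/etc/hostname          # placeholder content
tar -czf dragora-rootfs.tar.gz -C rootfs .

# Unpacking onto the prepared root partition; "mnt-root" stands in for
# the real mount point (an assumption for the example).
mkdir -p mnt-root
tar -xzf dragora-rootfs.tar.gz -C mnt-root
cat mnt-root/etc/hostname
```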

That being said, I'm curious as to how we wish to handle the
distribution of Dragora 3 rootfs tarballs for this machine. Most
distros' tarball is quite small and only contains the core system with simple network tools such as wpa-supplicant for connecting the machine
to the internet (there is no Ethernet port, so wpa will be
required). Once the core system is booted the user is expected to
install the rest of the system via their package manager. Since Dragora doesn't have a package repo that contains precompiled binaries (that I'm
aware of), I'm not sure how we want to do this.

Here we could say that Dragora's "kernel" includes everything needed to run the system, as well as the network part, including wpa_supplicant. As for the packages, we can say that the official packages are provided and distributed after each release[1]. In this sense, it is not a high priority (for me) to provide updates to pre-compiled packages like any other binary distribution, since the distribution has to be finished first, or at least reach the stable release.

[1] http://rsync.dragora.org/v3/packages/

I think that is a good idea. Would take the stress away from trying to keep every package up-to-date all the time. I'm still curious about how we should manage downloading the binaries and then installing them in
the correct order. Any ideas how to do this? (i.e. `wget -i
BINARY-LIST.txt | qi -i` or something)

Qi can read from standard input; for example, if the file contains the full (local) path of one or more packages, it can install them, e.g.: qi -i -

What you want is to read, download, and install. Currently Qi has the code to download and generate the .sha256 on the source side. As a pending issue, we could use or adapt this code (which declares the General Network Downloader) to tell Qi to download the packages when using the -i option and "http(s)://" is specified on the command line.

Of course, this has to be studied to make it as reliable as possible.
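The "read, download, install" pipeline being discussed could look roughly like this. `qi -i -` reading package paths on standard input comes from the note above; the one-URL-per-line list format is an assumption. It is shown as a dry run that only prints the local paths, so it is safe to execute; the real wget and qi invocations are left in comments.

```shell
# Sketch of the pipeline: a list file of package URLs (format is an
# ASSUMPTION), one per line.
cat > BINARY-LIST.txt <<'EOF'
http://rsync.dragora.org/v3/packages/util-linux-2.34-x86_64+1.tlz
EOF

# Dry run: print the local path each download would produce.
# Real run would be:  wget -q -P /tmp/packages "$url"
# and the loop's output would be piped into:  | qi -i -
while read -r url; do
  echo "/tmp/packages/$(basename "$url")"
done < BINARY-LIST.txt
```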

That would be quite handy! If this would be valuable to the other D3 archs, then I think it would be a great addition; otherwise, we may want to just have a shell script that does this using wget+qi or something.

In theory this was going to be part of 'jul'; unfortunately, Jul was not continued[1]. I would prefer a separate program to do the whole remote package handling.

[1] http://git.savannah.nongnu.org/cgit/dragora.git/tree/testing/jul

Yes, I agree that it should be separate. I created an emacs package
called `jul-mode` back when jul was still around - it only requires jul
for the actual downloading/installing part and parses the HTML to see
what packages were available. I'm sure I could write another program
that could do the same sort of thing but use `qi` to install. We could
come up with a better way to check for packages rather than parsing HTML
(which could be slow if we had a lot of stuff), maybe an sqlite db or
something. If there is a faster way that you know of, let me know. I would probably write it in Guile Scheme since I don't know anything about Tcl.


If the database is binary, we are lost; this breaks a principle of software development on Unix-like systems. Specifically, what would you use the database for? Is the SQLite database compatible with other system tools?
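The point about binary databases can be made concrete with a sketch: a line-based, plain-text package index (this format is hypothetical, invented for the example) remains queryable with standard Unix tools such as awk and grep, which a binary SQLite file would not be.

```shell
# Hypothetical plain-text index: name, version, series -- one package per
# line, whitespace-separated.  Any Unix tool can read it.
cat > packages.idx <<'EOF'
util-linux 2.34 essential
libxml2 2.9.10 networking
emacs 26.3 editors
EOF

# Query with awk: list "networking" packages as name-version.
awk '$3 == "networking" { print $1 "-" $2 }' packages.idx
```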

My idea is this: we do the same thing that other distros do, for the most part. Keep the tarball small and include just the core system with some networking programs. The kernel will be in /boot under a name like kernel.kpart or something. Inside the root home directory there will be a few different text files that contain URLs to pre-compiled binary packages. Each file will have a name that matches up with the .order files used when building D3: editors.txt, sound.txt, xorg.txt, etc. They will list all the programs in the order they need to be in to ensure a safe installation. Then the user runs a few commands to download and install each package (probably something with wget that passes the binary into a qi command). Once they've installed all the stuff they need, they'll be good to go!

What I see here is that it is possible that the kernel configuration needs to be adjusted[2], in addition to testing it (very important). I do not own such a computer, and if I did, I would not have enough time now to focus on this, considering all that needs to be done. I keep thinking about how the lists will facilitate the installation of the packages (how to produce them with Qi); for the moment you can compile the core[3] and produce the rootfs, then compile the rest to get the packages...


Yes, I just completed the core build with the current master branch. Everything went smoothly except for meson, which has always been a problem child on the C201. I will be creating the signed kernel and attempting to boot tomorrow, if time permits.

Let me know if this is a good idea or if it needs tweaking at all! This is quite a lot of work for only one machine, but it's the only way I can think of other than just having all that stuff in the tarball, which would make it very large.

I will try to assist you and provide you with what you need.

What I can think of is that we can create a new scenario for the process. This would be a minimal system to boot and log in to; from there you could install whatever you want, reusing the minimal system tools. This would allow you to:

- Check and test the kernel configuration.
- Save time instead of building the stage1, the whole core, etc.
- Access it via enter-chroot.
- Keep the rootfs small.
- Have it ready to boot.

For example, you would set the cross compiler in motion:

./bootstrap -s0 -a armv7_hf

Then you would produce the minimum system using the cross compiler for your machine:

./bootstrap -s201 -a armv7_hf

If you already have the cross-compiler in place, you would use the "201" scenario/stage as many times as necessary (related changes, kernel configuration, busybox, settings, etc.).

In time, the newly produced rootfs will be adjusted to what is "just and necessary".

... Cum on feel the noize! ;-)

This would be great! Would it be possible to just have another special .order file called `base.order` or `minimal.order`, which would just build the essentials and some network stuff? Using the bootstrap command you mention would work too!

What I can think of is that we can simplify the structure of the Dragora series. For example, we currently have the output sorted for packages as (default):


Instead of having <series_name>, we can try to have "hashtags" for the packages, so we would categorize the recipes as "essential", "networking", etc.

This will be reflected in the package name and the destination directory
($destdir), for example:

    Package name: util-linux-2.34-x86_64+1#essential.tlz
    It will be installed as: util-linux-2.34-x86_64+1#essential

This would make it easier to find the essential packages and other series to be
installed or removed.

The "essential" label would deal with the minimum or essential packages to run
the system.
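The naming scheme proposed above is easy for scripts to take apart: the "#series" suffix can be recovered with plain shell parameter expansion, with no extra database. A small sketch, using the package name from the example:

```shell
# Sketch: split the proposed name format <name-version-arch+build>#<series>.tlz
pkg='util-linux-2.34-x86_64+1#essential.tlz'
base=${pkg%.tlz}       # strip the extension
series=${base##*#}     # text after the '#'  -> essential
name=${base%#*}        # text before the '#' -> util-linux-2.34-x86_64+1
echo "$name -> $series"
```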

What do you think?

Hmm, how would it work if there was a lib that was required by another
program in another category? For example, you wish to build emacs and it requires XML support. `libxml` would probably be under a `#lib` or maybe
`#networking` but emacs would be in `#editors` or something. How would
we know that you need libxml before emacs?

The package "hashtag" should not be used to determine dependencies; instead, it can be a complement to the package name, version, and architecture.

(disclaimer: emacs doesn't _require_ xml, it is optional. Just for
argument's sake.)

Metadata must be generated; the metadata that Qi generates for the package is in text format, as you can see here:


Some dependencies are required before building a package, some are optional, and some are required at runtime. The simplest thing is to list what the package requires to work, that is, what it depends on; commonly, all dependencies are related to dynamic linking. The simplest approach would be to list these requirements (for example, in the metadata file) and have the new program analyze them, I guess.
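The "list requirements in the metadata, let a program analyze them" idea can be sketched with text tools. Note the "required:" field name and both file layouts are assumptions invented for the example, not Qi's actual metadata format.

```shell
# Hypothetical metadata file with a runtime-requirements field
# ("required:" is an ASSUMPTION, not Qi's real format).
cat > metadata <<'EOF'
pkgname: emacs
required: libxml2 gnutls
EOF

# Hypothetical list of already-installed packages.
printf 'libxml2\n' > installed.txt

# Report each requirement that is not yet installed.
for dep in $(awk -F': ' '/^required:/ { print $2 }' metadata); do
  grep -qx "$dep" installed.txt || echo "missing: $dep"
done
```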

What I saw years ago that some Slackware-based distributions do is that they add or list the packages that a package requires to work; then the program in charge of verifying this checks whether each one is installed and, if not, tries to download and install it.
