Re: [Dragora-users] Distribution of ARM rootfs tarballs
Thu, 06 Feb 2020 19:37:02 -0300
On 2020-01-31 09:38, Kevin "The Nuclear" Bloom wrote:
Matias Fonzo writes:
On 2020-01-31 00:06, Kevin "The Nuclear" Bloom wrote:
Thanks for the quick reply, Matias. See my comments below:
On 2020-01-29 16:50, Kevin "The Nuclear" Bloom wrote:
Hello Kevin. :-)
Those of us who have a C201 know that installation on this device is
quite nontraditional. Instead of booting off of a USB stick and running
an installer, one must do it manually by loading an SD card (or USB
stick) with a special kernel partition and a special root partition.
What this means is that creating an ISO for this machine is pointless.
Because of that, most distros that support the machine provide a
rootfs tarball that you unpack into the root partition and, inside of
/boot, there is a linux.kpart or something that gets written to the
kernel partition using `dd`.
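The process described above could be sketched roughly as follows. This is a hedged illustration, not official instructions: the device name, the two-partition layout, the tarball name, and the `linux.kpart` image name are assumptions drawn from the description above, and `DRY_RUN=1` lets the steps be reviewed before touching a real disk.

```shell
# Hedged sketch of the C201-style install (assumed device/partition names).
c201_install() {
    sd=$1; tarball=$2
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
    run mount "${sd}2" /mnt                      # partition 2: root filesystem
    run tar -xzpf "$tarball" -C /mnt             # unpack the rootfs tarball
    run dd if=/mnt/boot/linux.kpart of="${sd}1"  # partition 1: kernel image, via dd
    run sync
    run umount /mnt
}

# Review the commands first without running them:
DRY_RUN=1 c201_install /dev/sdX dragora-rootfs.tar.gz
```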
Okay. Question: what format would be appropriate for creating the rootfs tarball?
Arch-arm uses tar.gz, and we probably should stick to that because
people might be unpacking it from ChromeOS, which doesn't come with
lzip installed. It can, however, unpack gzip.
Okay, tar+gzip then.
That being said, I'm curious as to how we wish to handle the
distribution of Dragora 3 rootfs tarballs for this machine. Most
distros' tarball is quite small and only contains the core system plus
simple network tools such as wpa_supplicant for connecting the machine
to the internet (there is no Ethernet port, so wpa will be required).
Once the core system is booted, the user is expected to install the
rest of the system via their package manager. Since Dragora doesn't
have a package repo that contains precompiled binaries (that I'm aware
of), I'm not sure how we want to do this.
Here we could say that Dragora's "kernel" includes everything needed to
run the system, as well as the network part, including the
wpa_supplicant package. As for the packages, we can say that the
official packages are provided and distributed after each release. In
this sense, it is not a high priority (for me) to provide updates to
pre-compiled packages like any other binary distribution, since the
distribution has to be finished first, or at least the stable one.
I think that is a good idea. It would take the stress away from trying
to keep every package up-to-date all the time. I'm still curious about
how we should manage downloading the binaries and then installing them
in the correct order. Any ideas how to do this? (i.e., `wget -i
BINARY-LIST.txt | qi -i` or something)
Qi can read from standard input; for example, if the file contains the
full (local) path of one or more packages, it can install them, e.g.:

qi -i - < FILE

What you want is to read, download, and install. Currently, Qi has the
code to download and generate the .sha256 on the source side. As a
pending task, we could use or adapt this code (as it declares the
general network functions) to tell Qi to download the packages when
using the -i option and if a URL is specified on the command line.

Of course, this has to be studied to make it as reliable as possible.
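In the meantime, the two halves could be combined without waiting for network support in Qi itself: a small wrapper fetches a URL list with wget and feeds the resulting local paths to `qi -i -`. A sketch under assumptions only: the list-file format (one URL per line), the cache directory, and the helper name are made up for illustration.

```shell
# Hypothetical helper (not part of Qi): download each package URL in a
# list, then hand the downloaded paths to Qi on stdin, preserving order.
install_from_list() {
    list=$1                       # text file: one package URL per line
    dest=${2:-/var/tmp/packages}  # where the downloads are cached
    mkdir -p "$dest"
    wget -nc -P "$dest" -i "$list"   # -nc: don't re-download existing files
    while IFS= read -r url; do
        printf '%s/%s\n' "$dest" "${url##*/}"
    done < "$list" | qi -i -         # qi -i - reads package paths from stdin
}
```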
That would be quite handy! If this would be valuable to the other D3
archs, then I think it would be a great addition; otherwise, we may
want to just have a shell script that does this using wget+qi or the like.
In theory this was going to be part of 'jul'; unfortunately, Jul doesn't
continue. I prefer a separate program to do the whole remote package
management.
My idea is this: we do the same thing that other distros do, for the
most part. Keep the tarball small, with just the core system and
networking programs. The kernel will be in /boot under a name like
kernel.kpart or something. Inside of the root home directory there will
be a few different text files that contain URLs to pre-compiled
packages. Each file will have a name that matches up with the .order
files used when building D3: editors.txt, sound.txt, xorg.txt, etc.
They will list all the programs in the order they need to be in to
ensure a proper installation. Then, the user runs a few commands to
download and install each package (probably something with wget that
passes the binary to the qi command). Once they've installed all the
stuff they need, they're good to go!
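The per-series lists described above could be driven by a loop like the following. A sketch only: the list names follow the examples given (editors.txt, xorg.txt, ...), while the directories and the exact wget/qi invocations are illustrative assumptions, not an official installer.

```shell
# Hypothetical installer loop over per-series URL lists (editors.txt, etc.).
install_series() {
    listdir=$1   # directory holding the series lists, e.g. /root
    cache=$2     # where downloaded packages are stored
    shift 2
    mkdir -p "$cache"
    for series in "$@"; do                        # e.g. editors sound xorg
        wget -nc -P "$cache" -i "$listdir/$series.txt"
        # Install strictly in list order, so dependencies come first.
        while IFS= read -r url; do
            qi -i "$cache/${url##*/}"
        done < "$listdir/$series.txt"
    done
}

# Example: install_series /root /root/pkg editors sound xorg
```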
What I see here is that the kernel configuration may possibly need to
be adjusted, in addition to testing it (very important). I do not own
such a computer, and if I did, I would not have enough time now to
focus on this, considering all that needs to be done. I keep thinking
about how the lists will facilitate the installation of the packages
(how to handle this in Qi); for the moment, you can compile the core,
produce the rootfs, and compile the rest to get the packages...
Yes, I just completed the core build with the current master
branch. Everything went smoothly except for meson, which has always
been a problem child on the C201. I will be creating the signed kernel
and attempting to boot tomorrow, if time permits.
Let me know if this is a good idea or if it needs tweaking at all! It's
quite a lot of work for only one machine, but it's the only way I can
think of, other than just putting all that stuff in the tarball, which
would make it very large.
I will try to assist you and provide you with what you need.

What I can think of is that we can create a new scenario for the
bootstrap process. This would be a minimal system to boot and log in
to, from which you could install whatever you want, reusing the minimal
system tools. It would:

- Check and test the kernel configuration.
- Save time, instead of building the stage 1, the whole core, etc.
- Be accessible via enter-chroot.
- Keep the rootfs small.
- Be ready to boot.
For example, you would set the cross compiler in motion:

./bootstrap -s0 -a armv7_hf

Then you would produce the minimal system using the cross compiler for
the target:

./bootstrap -s201 -a armv7_hf

If you already have the cross compiler in place, you would reuse this
scenario/stage as many times as necessary (related changes, kernel
configuration, busybox, settings, etc.).

In time, the newly produced rootfs will be adjusted to what is "just
and necessary"... Cum on feel the noize! ;-)
This would be great! Would it be possible to just have another special
.order file called `base.order` or `minimal.order`, which would just
build the essentials and some network stuff? Using the bootstrap
scenario you mention would work too!
What I can think of is that we can simplify the structure of the Dragora
series. For example, we currently have the output sorted for packages
by series. Instead of having <series_name>, we can try to have
"hashtags" for the packages, so we would categorize them in the
recipes: "essential", and so on. This will be reflected in the package
name and the destination directory ($destdir), for example:

Package name: util-linux-2.34-x86_64+1#essential.tlz
It will be installed as: util-linux-2.34-x86_64+1#essential

This would make it easier to find the essential packages and the other
series to be installed or removed.

The "essential" label would cover the minimum packages required to run
the system.
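As an illustration of the benefit, with names like util-linux-2.34-x86_64+1#essential a whole series becomes selectable with a plain name filter. A sketch only; treating the installed packages as entries in a single directory is an assumption about how Qi would record them.

```shell
# Hypothetical: list every installed package carrying a given hashtag,
# e.g. list_series /usr/pkg essential -> util-linux-2.34-x86_64+1#essential
list_series() {
    pkgdir=$1   # directory of installed package entries (assumed layout)
    tag=$2      # series hashtag, without the '#'
    ls "$pkgdir" | grep "#${tag}\$"
}
```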
What do you think?