Re: Discuss-gnustep Digest, Vol 2, Issue 12

From: Richard Frith-Macdonald
Subject: Re: Discuss-gnustep Digest, Vol 2, Issue 12
Date: Mon, 6 Jan 2003 08:51:08 +0000
On Sunday, January 5, 2003, at 12:44 pm, Jonathan Gapen wrote:
> On Sun, 5 Jan 2003, Richard Frith-Macdonald wrote:
>
>> I'm not sure if I'm just curious, or if I'm writing in defence of
>> unix ... but what real or theoretical alternative are you using as a
>> basis for this judgment?
>
> For the most part, the AmigaOS system. It works like this:
>
> The programmer (or built-in compiler support) would use the
> OpenLibrary() call to open a shared library, specifying a minimum
> version number. That function returned a memory address, called the
> library base, that the programmer assigned to a global variable. The
> library has an array of pointers to its functions immediately after
> the library base in memory -- a jump table. The compiler would simply
> generate code to load an address at a certain offset from the library
> base to call a function.
Sounds like a crude version of a shared or dynamic library (not clear
which), and, from the description, I can't tell in what way it differs
from unix libraries in practice. If it's really as simple as it sounds
(just a jump table at a base address) then I'd expect it to be
relatively fast, but likely to break with each new library revision, as
the table layout would change when functions are added/removed
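The mechanism, and the fragility, can be sketched in a few lines. This
is a Python analogy, not real Amiga code: list indices stand in for the
byte offsets a compiler would bake into the caller, and the function
names are made up for illustration.

```python
# Jump-table dispatch: callers know only the library base and a fixed
# offset into a table of function pointers.

def _open(name):
    return "opened:" + name

def _close(handle):
    return "closed:" + handle

# The "library": a table of function pointers at fixed offsets.
jump_table_v1 = [_open, _close]

OFFSET_OPEN = 0    # offsets baked into already-compiled callers
OFFSET_CLOSE = 1

def call(table, offset, *args):
    # What the compiler-generated stub does: an indirect call through
    # the table at a fixed offset from the library base.
    return table[offset](*args)

print(call(jump_table_v1, OFFSET_OPEN, "foo"))   # opened:foo

# A later release that inserts a function mid-table shifts every later
# offset, so an old caller silently reaches the wrong function:
def _read(handle):
    return "read:" + handle

jump_table_v2 = [_open, _read, _close]           # _close moved to index 2

print(call(jump_table_v2, OFFSET_CLOSE, "foo"))  # read:foo -- wrong function!
```

Appending new entries at the end of the table is safe for old callers;
it is insertion or removal mid-table that breaks them, which is exactly
the layout change worried about above.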
>> As I understand it, the point of the major version number in a
>> shared library is precisely to indicate whether versions are
>> compatible or not [...]
>> So if you link with libxml2.so.4 then your app will work with
>> libxml2.so.4.0.1 and libxml2.so.4.2.3 and libxml2.so.4.5.0 etc etc.
> That would work on systems that have major and minor numbers. Of the
> systems I have installed, NetBSD has them. FreeBSD 4.7 and Solaris
> 2.6 do not. Also, I think it's a strange convention to break
> backwards compatibility with each major revision number -- especially
> since it's often *not* broken with Unix libraries, but just assumed
> so because that's the convention. (E.g. I have symlinked libintl.so.3
> to libintl.so.4 for a few binary-only programs that need it.)
It's a convention used by package managers to determine dependencies,
and a recognition that APIs do change ... If you don't have versioning,
the libraries still change, but the only way of finding out is when the
program using the library fails.
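The rule itself is simple enough to state as code. This is a
hypothetical helper, written only to make the convention concrete: an
installed library satisfies a dependency when the base name and major
number match, on the assumption that higher minor numbers only add
functionality.

```python
# Illustrating the soname convention discussed above: "libxml2.so.4"
# is satisfied by any libxml2.so.4.x.y, but not by libxml2.so.3
# or libxml2.so.5.

def soname_parts(filename):
    # "libxml2.so.4.2.3" -> ("libxml2", (4, 2, 3))
    name, _, version = filename.partition(".so.")
    numbers = tuple(int(n) for n in version.split(".")) if version else ()
    return name, numbers

def satisfies(installed, required):
    # Same base name and same major number => assumed compatible.
    iname, iver = soname_parts(installed)
    rname, rver = soname_parts(required)
    return iname == rname and iver[:1] == rver[:1]

print(satisfies("libxml2.so.4.2.3", "libxml2.so.4"))  # True
print(satisfies("libxml2.so.3.0.0", "libxml2.so.4"))  # False
```

This is also why the libintl symlink trick mentioned above works in
practice: the check is purely a naming convention, and nothing stops a
genuinely compatible library from wearing a different major number.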
>>> Loading your executable fails if libxml2.so.4 isn't available--
>>
>> Yes ... I view that as analogous to saying that your car doesn't run
>> if you don't put fuel in it. Package/dependency management software
>> is ... Of course, you do have the option of using static rather than
>> shared libraries if you view this as a big problem.
> That would be the case if we're talking about, say, libc! But there
> are lots of cases where a program might want to support an optional
> feature if the library is available. A car still runs if it doesn't
> have a leather interior and an in-dash GPS.
I think you are wanting dynamic libraries/bundles here ... not the same
sort of thing as linking with shared libraries at all.
With a shared library, the library is loaded at program startup without
the developer having to do any special coding, but if the library is
not available the program won't start. With a dynamic library, the
developer writes code to load the library while the program is running,
and takes some action to deal with the case where the library is not
available.
Unix libraries support both options (as well as static linkage of
course).
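On unix the dynamic option is dlopen()/dlsym(); the sketch below uses
Python's ctypes wrapper around the same mechanism, since that keeps the
example short. The library names tried are platform guesses (assuming a
Unix-ish system), not a fixed API.

```python
# Runtime loading with a fallback -- the "dynamic library" pattern
# described above, via ctypes (a wrapper over dlopen()/dlsym()).
import ctypes
import ctypes.util
import math

def load_cos():
    # Step 1: try to load the C math library while running ...
    for name in (ctypes.util.find_library("m"), "libm.so.6", "libm.dylib"):
        if not name:
            continue
        try:
            libm = ctypes.CDLL(name)      # dlopen() under the hood
        except OSError:
            continue                      # not available; keep trying
        cos = libm.cos                    # dlsym() under the hood
        cos.restype = ctypes.c_double
        cos.argtypes = [ctypes.c_double]
        return cos
    # Step 2: take some action when it isn't there -- degrade
    # gracefully rather than refusing to start, which is what
    # load-time linking against a shared library would do.
    return math.cos

cos = load_cos()
print(cos(0.0))   # 1.0 either way
```

The point is in step 2: the program still runs without the library,
which load-time shared linking cannot offer.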
> For instance, some programs will use X11 or run as a command-line
> utility. Or look at GNUstep's optional libraries from the package
> manager's point of view. Either the package maintainer imposes a
> certain set of supported libraries, or provides:
>
> gnustep-base.rpm
> gnustep-base+iconv.rpm
> gnustep-base+xml2.rpm
> gnustep-base+iconv+xml2.rpm
> ...ad nauseam if there are more options...
I don't know anyone who would do the latter ... iconv and xml are not
optional in the sense of the system working without them; they are only
optional in that it's possible to have a cut-down system that doesn't
have them and therefore lacks some standard functionality. You normally
only expect packages to contain the standard system, and let developers
build odd variants themselves.
In the case of code that a developer really wants to be an optional
extra, dynamic libraries (bundles) are ideal. eg. The SSL support in
GNUstep-base works like this ... if it's there, NSURL supports https,
if it's not, it doesn't.
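The same if-it's-there-use-it shape, sketched with Python's optional
module loading rather than GNUstep's actual bundle machinery (the
module probed for here is just an illustration):

```python
# Optional-extra pattern analogous to the SSL bundle described above:
# probe for the optional component at runtime and enable the feature
# only if the probe succeeded.
import importlib

def load_optional(module_name):
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None   # absent: the feature simply stays disabled

ssl = load_optional("ssl")
supported_schemes = ["http"] + (["https"] if ssl else [])
print(supported_schemes)   # ['http', 'https'] when ssl is present
```

Nothing in the main program fails when the optional piece is missing;
a capability quietly disappears instead, which is the bundle behaviour
described above.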
>> You can 'upgrade', in which case the new package replaces the old
>> one, and if the dynamic libraries in the new package have a
>> different major version number then executables linked to the old
>> one will need to be upgraded.
>
> I guess this is where our opinions vary. I don't see why a higher
> major revision must break backwards compatibility.
Ideally they wouldn't ... it's actually just an acknowledgment of
reality ... people *do* make changes that are not backward compatible,
so it's good to have a convention to minimise the impact of that ... a
good package manager can use the conventions to keep a system working -
as long as the package developers use them too of course :-)
> As far as package managers, I haven't used a lot of them. However,
> in FreeBSD, you can't upgrade a package if others depend on it.
> Sure, you can force a deinstall and install the new one, but that
> leaves the package database inconsistent. You can install a
> third-party package which does the upgrade and fixes up the package
> database for you, though. When I last used RPM, it was the same way.
I think most people have complaints about package managers (I was just
making the point that poor package management is not the same as there
being anything wrong with shared libraries).
>> Or you can 'install' the new package, leaving the old one in place,
>> and other packages won't need upgrading.
> You can't leave the old package in place -- files from the new one
> will overwrite the old package's files and the package database is
> again inconsistent. Then if you delete the old package, the new one
> often ceases to work as its files are gone. If you remove the old
> package, but keep the old shared objects in place, then the package
> database is incomplete. I suppose you could add the old objects into
> the database as part of the new package, but you don't know what
> external files the shared library needed, and that'll break if the
> user re-installs that package. Welcome to package manager hell. :-)
Sounds like a lousy package manager ... in the context of shared
libraries, I would expect a package manager to install the new library,
adjust symbolic links to make it the default, and leave the old library
in place ... that's what the package managers I've seen do.
The best package manager I've used is the debian one ... primarily
because it can sort out dependencies and automatically download
everything you need to upgrade in a single operation ... so you *don't*
need to install new libraries while keeping the old ones around for
older executables.
>> My feeling is that unix shared libraries are good for backward
>> compatibility, and flexible. I don't really argue with slow;
>> perhaps implementations could be faster of course, but startup time
>> is fundamentally always going to be slower than static linked
>> software.
>
> Well, can we just agree that there's room for improvement?
Of course ... but I suspect most (all?) of your complaints are not
actually problems with shared libraries ... rather they might be with
package managers.