
From: Tom Lord
Subject: Re: [Gnu-arch-users] the state of the union
Date: Thu, 26 Aug 2004 12:11:09 -0700 (PDT)

    > From: address@hidden (James Blackwell)



    > I don't agree. I think that what we're seeing is a parallel of
    > exactly how science was performed prior to the 20th
    > century. Sure, these days, science is supported by government
    > and educational institutions, but back in the
    > 'pillars-of-science' days, science was either performed or
    > sponsored by the well-to-do.

It's the money (and increased understanding of money) that changed
things.

It looks to me like there are many creative and interested hackers in
the world covering a wide spectrum of interests and competences.  Yet
there are next to no substantial programs in existence for incubating
promising work from this crowd to help it become something of broader
economic value.  It's a waste and a big missed opportunity.

Without economic incentive to become professional innovators, what
happens to people?  Generally, a large subset of them get drawn into
various parts of industry where there's plenty of day-to-day work to
do but precious little opportunity to work on early-stage projects.
The danger is that we will wind up completely stagnating, poised to be
leapfrogged by non-free technology that discovers some fundamentally
better approach to computing.



    > s-exps suck, Suck, SUCK. I mean really SUCK. Suck so badly, that earlier
    > today I told someone that if I had enough tanks and guns, I would round
    > up all of you s-exp lovers into concentration camps for proper disposal.
    > :)

    > 90% of the argument would go right out the window if you had a different
    > syntax. 


Calling these things s-exps is a bit of a stretch.   I'll respond
about syntax in a moment.

Traditional lisp s-exps are built out of cons pairs, 1d arrays, and
some high-level atomic types (e.g., numbers, characters).   They are
mutable (programs can modify them) and may have any graph structure
(e.g., you can have circular s-exps).

The s-exp-ish thing in libxl is simpler than lisp s-exps.   Values are
built out of atomic types vaguely close to the machine but
mathematically precise (e.g., integers of a certain bit size) plus
possibly nested array-like sequences.   There is no separate, special
cons-pair type.

Every value in the libxl data structure world has a "type tag" which
is represented as an "atom" (symbol-like thing with a unique name).
Users can create array-like values with any tag they choose.
Integers, characters, strings and the like have fixed type tags.

The libxl data structures are immutable: once one is constructed, it
never changes.   This is strictly enforced.   New libxl values can
only be created from previously created values; thus, libxl data
structures contain no circular references.
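
To make the shape of this concrete, here's a minimal sketch of that
value world in OCaml (picked just for brevity -- the type and
constructor names are mine, not libxl's):

    (* A hypothetical model of the libxl values described above:
       every value carries a type tag; the tags of composite values
       are atoms (symbol-like unique names) chosen by the user;
       composite values are immutable sequences of other values. *)

    type atom = string                  (* stands in for a unique name *)

    type value =
      | Int of int                      (* fixed tag: integer of some bit size *)
      | Char of char                    (* fixed tag: character *)
      | Str of string                   (* fixed tag: string *)
      | Node of atom * value array      (* user-chosen tag + nested sequence *)

    (* New values are built only from already-built values, and nothing
       is mutated afterward, so no value can contain a cycle. *)
    let example : value = Node ("thing", [| Str "some data" |])

Note that the variant constructors play exactly the union-discriminator
role of (B) below, and the field array the struct role.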

The libxl data structure usefully resembles many disparate things:

A) It resembles what we already have in src/tla/libawk: it
   is roughly the same idea but with a richer type system.

B) It resembles most systems of sum and product types (aka `struct
   and union'-style type systems).   The tag of a value tells you what
   is stored there (it is a union discriminator).   Arrays are product
   values (aka struct values).

C) It resembles the core of XML but with different primitive types 
   and without attributes.   What in XML might be:

        <thing>some data</thing>

   might in libxl be:

        #thing{some data}
        
   There is no native translation in libxl, though, for attributes:

        <p align="center">tra la la</p>

   has no built-in libxl equivalent.    (There is a reason for that 
   omission but it's not important right now, I hope).

   Also like XML, all libxl data is printable and readable -- it is
   always portable between any two environments which can communicate.

D) The tree-like nature of libxl data resembles the way that data
   is stored in memory hierarchies.   A "locality" metric can be
   defined for the subtrees of a libxl value -- a function that
   tells you how "far away" each part of the value is from a
   reference to the root of the value.   That locality metric
   approximates the real-world performance characteristics of libxl
   data structures (there is a sketch of it after this list).

   If the complete memory hierarchy of a virtual machine is modeled
   as a small number of (arbitrarily large) libxl values (the values
   stored in registers), then there is some constant K such that all
   the parts of the values at distance less than K from the roots of
   the values are "in cpu registers" and the other parts are "in main
   memory".   If the VM CPU is a finite state machine, its
   transitions are taken on the basis of the <K-distant parts of the
   values in registers;  as side effects, the CPU performs reads and
   writes that describe the changes to the frontier between the
   <K-distant values and the >=K-distant values.

   An interesting question then arises for the programming language
   designer: what if we make a language in which programmers specify
   such virtual CPUs (finite state machines operating on <K-distant
   parts of a few registers) and then specify how to run them and ways
   to compose multiple machines to form larger machines?  Might that
   be an interesting alternative to more traditional procedural
   abstraction?  What high-level structures are easy or difficult to
   construct with this approach?  What are the most important
   compositional operators and how can they be reduced to a tiny basis
   set?
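
Here, as promised, is the locality metric of (D) sketched over the
hypothetical `value' type above (`depth_cut' is my name for it, not
anything in libxl).  It keeps the parts of a value at distance less
than k from the root and marks everything farther away as lying
beyond the register/memory frontier:

    (* Keep the parts of a value at depth < k; replace each subtree
       at the frontier with a placeholder node.  This models the split
       between "in cpu registers" (< k) and "in main memory" (>= k). *)
    let rec depth_cut (k : int) (v : value) : value =
      if k <= 0 then Node ("far", [||])    (* beyond the frontier *)
      else match v with
        | Node (tag, kids) ->
            Node (tag, Array.map (depth_cut (k - 1)) kids)
        | leaf -> leaf                     (* atomic values carry no pointers *)

For instance, `depth_cut 1 example' gives
`Node ("thing", [| Node ("far", [||]) |])': the tag is near the root;
the string is out on the frontier.  A VM CPU in the style of (D) is
then a transition function that may inspect only `depth_cut K' of
each register.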


For arch's immediate needs (and, I think, for a while to come), we
are interested entirely in properties (A) and (B) from the list
above.  The others are very interesting and will (I'm certain) prove
to be quite useful -- but they are not an immediate priority.

Summing up: at least in the abstractions involved, calling these
things s-exps is a bit of a stretch.   They are typed trees of values
over some simple atomic types and that's about it.   They are barely
different from libawk.

Now, syntax.  Basically: "Oh, come on.   Do you really think we can't
get around to making really nice syntaxes even if initially we use
crude ones?   That's half the point of getting serious about the `type
system' we use where libawk is now: so that these files don't become
syntax-bound -- so that we have the flexibility to make incremental
changes to improve syntaxes in compatible, disciplined ways.   Geeze."

Y'know? 
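
To back that up: since the value model is independent of any surface
syntax, a crude printer is a few lines, and swapping in a nicer one
later touches nothing else.  A sketch over the hypothetical `value'
type, with the #tag{...} rendering and its (absent) escaping rules
entirely invented:

    (* Print a value in the crude #tag{...} notation shown earlier;
       a real version would need quoting/escaping rules. *)
    let rec print (v : value) : string =
      match v with
      | Int i -> string_of_int i
      | Char c -> String.make 1 c
      | Str s -> s
      | Node (tag, kids) ->
          "#" ^ tag ^ "{"
          ^ String.concat " " (Array.to_list (Array.map print kids))
          ^ "}"

Thus `print example' yields "#thing{some data}".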


    > >   That's a "faster, cheaper, better" approach to incubating new
    > >   projects.   Companies (IBM, RH, Novell, HP, Sun) could certainly
    > >   afford to pony up a few million bucks over a couple of years to
    > >   experiment with this approach.

    > Companies don't make that sort of investment. That opens them up to
    > investor lawsuits for throwing money away if you don't succeed in
    > turning that money into more money. (Remember? Companies are owned by
    > investors, and it's not the company's money -- it's the investors' money)

    > IBM is making money, and they're dumping a *lot* of money into free
    > software development. You're just not one of the people they picked.

Many people on this list are not the people they picked -- nor the
people they would be likely to pick under almost any circumstances.

Those big teams that IBM puts together are an expensive tactical
necessity.  You are quite correct that that is funded innovation: at
rates of between $100K and $200K per hacker per year, plus
non-trivial costs whenever a hacker joins or leaves.  Qualifying
hackers for those positions is time-consuming and difficult, and
mistakes can be costly.  It is an approach to take only for the most
tactically important issues: such as Java and the Linux Kernel, in
IBM's case.

In return for all of that spending on select areas, IBM gets a
competitive level of in-house expertise on key software systems *and*
gets to be a widely felt presence in the public communities that
develop those systems.

IBM didn't ramp up those large teams when they first saw the new
technologies appear.  It wasn't tactically necessary until it was
clear that these technologies would be very popular.  Examples like
this have changed the way the big companies think about software.
General Public licensing is much better understood, as is the
intrinsic value of having a shared commons of software around which
each company can build differentiated product and service offerings.

Given that having such software around is the source of new business
opportunities, these companies have an interest in helping to re-seed
the commons.    The big team approach used for things like Java and
the Kernel is horribly expensive:  I'm proposing a much less expensive
program of microgrants.


    > RH is barely over breaking even, but they too donate a lot of code into
    > free software. 

The big companies are all inevitably going to contribute the code
that they immediately need to sustain/improve their individual
product/service offerings.   OK, let's call that innovation, but then
let's distinguish between *new* innovation and *continuing* innovation.

Generally, those big companies will pick a long-term (say, 3-5 year)
course and then ramp up to work on that.   The spending during that
period is the continuation of -- the playing out of -- the initial
innovative idea.   That's fine, but it's not agile.   It can't
efficiently make small innovations.   It's unlikely to take much risk
pursuing an interesting but unproven idea.

Creating a market for small-scale innovation has been tried (probably
still is being tried) in a few different ways.   Generally, the
funding opportunities of these are too small (or too precious);
often the mechanisms for making and judging proposals are awkward or
downright inappropriate;  the contractual forms of some of these
programs are needlessly onerous....

It's as if the industry as a whole has the right idea but hasn't quite
got it together yet and, meanwhile, is lowballing (underfunding) the
effort just a little too much.


    > Novell is in huge trouble. Their main product is dead, and they haven't
    > found new products to sell.

    > HP is still choking on the Compaq merger, and I believe they're having
    > money problems as well.

    > Sun is also in a huge amount of trouble because of lintel. I'm sure
    > they're well aware that they need to either change or die, but you won't
    > see them admit that in public.

Sure.  In some sense, the micro-grant idea competes against the R&D
spending they make in-house.

    > The way I see it, you're approaching the wrong companies. Don't approach
    > the companies that are in the business of (in)?directly profiting from
    > proprietary software. Instead, approach the companies that are hurt by
    > proprietary software, and convince them "hey, if you give me $300 a
    > month along with all these other guys, you won't have to pay $xxx,xxx a
    > year to these companies that are selling you crap you hate, and then not
    > helping you when the crap breaks"

I agree that opening the market to smaller customers is important.

I'm suggesting seeding the market with many small purchases (plus
logistical participation) from a few companies that have plenty of
cash around, even if times are rough in some ways.


-t



