> The motivation for using constructors generously, in my understanding,
> is that in principle, it offers an additional security property: when a
> process can use some resources not available to the invoker (the
> apparent parent), those resources are protected even when the invoker
> is compromised...
This is not correct.
The constructor is a general-purpose fabrication mechanism. It can build either confined or non-confined processes. If (and only if) its yield is confined, it will certify that this is true. Construction is the universal pattern for process fabrication. The ability to isolate sensitive resources is a useful side property, but it turns out to be very rarely used.
The fact is that the overwhelming majority of processes perform services solely for a single client, and have no need to be unconfined. Unconfined processes are a very, very rare case.
Further: it's not that unconfined processes are unconfined broadly. The fabricating client can say which "holes" are permitted. For example: the display subsystem needs [mediated] access to the display, and this is normal. The access must be mediated because the display is a shared resource: when someone else logs in, your logical display is not lost; it is merely severed from the physical display.
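To make the confinement test concrete, here is a toy model in plain C. Every name in it (cap_t, yield_is_confined, and so on) is invented for illustration rather than taken from the EROS/Coyotos API, but it captures the rule: the yield is certified confined exactly when every capability it holds either is known to leak nothing or is one of the holes the requester explicitly authorized.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical capability record: an identity plus a flag for
     * capabilities the system knows cannot leak anything. */
    typedef struct {
        int id;
        bool inherently_safe;
    } cap_t;

    /* The confinement rule: the yield is confined iff every capability
     * it holds is either inherently safe or one of the "holes" the
     * requester explicitly authorized (e.g. mediated display access). */
    static bool yield_is_confined(const cap_t *caps, size_t ncaps,
                                  const int *holes, size_t nholes)
    {
        for (size_t i = 0; i < ncaps; i++) {
            if (caps[i].inherently_safe)
                continue;
            bool authorized = false;
            for (size_t j = 0; j < nholes; j++) {
                if (caps[i].id == holes[j]) {
                    authorized = true;
                    break;
                }
            }
            if (!authorized)
                return false;  /* an unauthorized channel to the outside */
        }
        return true;
    }

    int main(void)
    {
        cap_t caps[]  = { { 1, true }, { 42, false } }; /* 42: display cap */
        int   holes[] = { 42 };  /* requester permits the display hole */

        printf("yield is confined: %s\n",
               yield_is_confined(caps, 2, holes, 1) ? "yes" : "no");
        return 0;
    }

The certificate is the result of running a check of this kind over the yield's initial capabilities; confinement is verified, not promised.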
> The issue with that is that unlike in a hierarchical system, the
> apparent parent can't inspect the process even for perfectly valid
> reasons...
This is correct, and it is essential in robust systems. If I have accountability for implementing a service, I cannot certify that the service will operate correctly if it can be tampered with (or in some cases even inspected) by an outside party. The term "inspect" here is misleading. It suggests that the intention is to examine the state of the service without being able to modify anything. But this is not the case. The ability to inspect a process necessarily includes the ability to extract its capabilities. At that point all encapsulation is lost.
It would be possible to remove this concern by offering a weak capability for inspection purposes.
But consider that it is not normally the case that a service is inspectable. Can a typical client inspect the running state of a web server or a crypto service? Should it? Inspection of a crypto service would entirely defeat effective crypto.
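To illustrate what the weak capability mentioned above would mean, here is a toy model in C. The names are invented; in KeyKOS/EROS the weak attribute is enforced by the kernel on every capability fetch, not by library code like this. Weakening strips write authority, and the restriction is transitive, so any capability fetched through a weak capability comes back weak as well.

    #include <stdio.h>

    enum { R_READ = 1, R_WRITE = 2 };

    typedef struct object object;
    typedef struct {
        object *target;
        int rights;
    } cap;

    struct object {
        int data;
        cap slot;   /* a capability stored inside the object */
    };

    /* Weakening strips write authority. */
    static cap weaken(cap c) { c.rights &= ~R_WRITE; return c; }

    /* Fetching a capability through a weak capability yields a weak
     * result, so the restriction propagates through the whole
     * reachable graph; nothing writable can be extracted. */
    static cap fetch_slot(cap c)
    {
        cap out = c.target->slot;
        if (!(c.rights & R_WRITE))
            out.rights &= ~R_WRITE;
        return out;
    }

    int main(void)
    {
        object inner = { 42, { 0 } };
        object outer = { 7, { &inner, R_READ | R_WRITE } };
        cap strong = { &outer, R_READ | R_WRITE };
        cap weak = weaken(strong);

        cap via_strong = fetch_slot(strong);
        cap via_weak = fetch_slot(weak);
        printf("via strong cap: write=%d; via weak cap: write=%d\n",
               (via_strong.rights & R_WRITE) != 0,
               (via_weak.rights & R_WRITE) != 0);
        return 0;
    }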
The distinction between a client requesting fabrication and a "parent" is a very, very big one. Part of the error in your thinking is that you are considering only hierarchical cases in a system whose structure is fundamentally not limited to hierarchy. Whatever solution you adopt, it has to work for general system structures, not just hierarchical ones.
> As long as the user session has access to the resources in question,
> this is not strictly limiting user freedom: since presumably, the user
> still has access to full control capabilities for the child activity,
> which they can use to explicitly launch a debug session in order to
> inspect the child process in question.
This is typically not the case with constructed processes. In fact, the constructor goes to some lengths to ensure that the yield is constructed from non-controlled resources. The requester supplies a space bank, and can revoke the space, but they cannot inspect the space that is allocated from that space bank.
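As a toy model of that division of authority (the names are invented, and the real space bank protocol is considerably richer), note the asymmetry in the interface: revocation exists, inspection deliberately does not.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_ALLOCS 64

    typedef struct {
        void *allocs[MAX_ALLOCS];
        int n;
    } space_bank;

    /* Called by the yield to obtain storage. */
    static void *bank_alloc(space_bank *b, size_t nbytes)
    {
        if (b->n == MAX_ALLOCS)
            return NULL;
        void *p = calloc(1, nbytes);
        if (p)
            b->allocs[b->n++] = p;
        return p;
    }

    /* The requester's entire authority over the bank: destroy
     * everything. There is deliberately no bank_read() or
     * bank_enumerate(); revocation does not imply inspection. */
    static void bank_revoke(space_bank *b)
    {
        for (int i = 0; i < b->n; i++)
            free(b->allocs[i]);
        b->n = 0;
    }

    int main(void)
    {
        space_bank bank = { { 0 }, 0 };
        char *state = bank_alloc(&bank, 32);   /* yield-side allocation */
        if (state)
            snprintf(state, 32, "clear-text state");
        bank_revoke(&bank);                    /* requester-side revocation */
        puts("storage reclaimed; contents were never readable by the requester");
        return 0;
    }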
The decision about whether the yield should be debuggable is up to the yield: if it wants you to be able to debug it, it will have a protocol that gives you a debugging capability. This, again, is fundamental to encapsulation.
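Sketched as a hypothetical protocol, the point is simply that the grant originates with the yield, not with the client:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { const char *name; } debug_cap;

    typedef struct {
        int debuggable;   /* policy chosen by the yield, not the client */
        debug_cap dbg;
    } service;

    /* Part of the service's own protocol: returns NULL unless the
     * service chooses to be debuggable. The client cannot take what
     * isn't offered. */
    static const debug_cap *request_debug_cap(service *s)
    {
        return s->debuggable ? &s->dbg : NULL;
    }

    int main(void)
    {
        service open_svc  = { 1, { "debug:open_svc" } };
        service vault_svc = { 0, { "debug:vault" } };
        printf("open service grants debug cap: %s\n",
               request_debug_cap(&open_svc) ? "yes" : "no");
        printf("password vault grants debug cap: %s\n",
               request_debug_cap(&vault_svc) ? "yes" : "no");
        return 0;
    }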
> However, that's less obvious; more cumbersome; the resulting
> relationships are less straightforward; and arguably it offers less
> flexibility in organising nested activities.
That certainly was not our experience. If anything, service relationships become much more flexible because they are not restricted to hierarchical arrangements.
> While inspecting a child is natural in a hierarchical system, it becomes
> more indirect in one involving constructors -- discouraging exploration.
> Up to this point though I can understand why some people would consider
> it a worthwhile trade-off, even if personally I don't like it. Where it
> becomes really problematic however is when a program uses resources the
> user has no access to, and thus the user can't get control over it. This
> too seems to be considered a desirable property in systems like EROS...
I think this is getting to the heart of your objection. It is a philosophical objection rather than a technical one: you want everything, at run time, to be inspectable. This goes very far beyond open source. Open source allows you to see what a program will do by inspecting its code. It does not go as far as proactive runtime voyeurism. It cannot, because functioning systems are not possible if that requirement is imposed. Apache is (and needs to be) privileged, and it is correct that the Linux kernel will not allow you to inspect its state! Both technical merit and functional requirements reveal this goal to be doomed (and highly undesirable).
> ...with the argument that a password vault for example shouldn't be
> accessible, even when the user session is compromised...
Absolutely it must not be accessible. What is required is to be able to change the password and revoke the active session. Examining the password vault is not required to do either of these things.
Since you are arguing for "naked" processes, you are not actually asking for access to the vault (the password data store). You are asking for access to the runtime state of the password engine, which would provide universal access to clear-text passwords. This would not be a good thing.
Knowing what the password subsystem does can be determined by examining its source code. As with Linux, you are free to replace it. Our intended license policy for Coyotos required that a minimum set of services be unchanged in order to call it Coyotos (similar to Android). The purpose was to ensure that a customer who received a Coyotos system could know, with confidence, what guarantees were and were not provided. This would not have stopped you from doing something else. You just couldn't call that result Coyotos.
The big issue we were concerned about was undetectable back doors introduced by developers. There is a (very) short list of services that need to be present and unmodified in order to detect these reliably. That is the set we had planned to restrict.
I do think it would be very cool to be able to attest that a binary you are running was obtained by compiling the source code you examined. This, of course, would require DRM, because we would not want you to be able to extract the cryptographic signing key from the compiler...
> Now the user is dealing with a piece of software they don't control:
> that they can't inspect or modify; that requires trusting third parties
> to follow the desires of the user -- supposedly for their own good...
Since the source code is available for inspection, this is nonsense.
The "obverse" of this is permitting promiscuous tampering. In a classroom that is a good thing. In production it means that nobody can ever know whether the services they rely on are actually meeting their contract.
> Software that they can't fix for bugs, annoying behaviours, missing
> functionality etc.; and that could also have any number of intentional
> anti-features: be it backdoors, cryptominers or other trojan horses;
> adware; or digital restrictions management. Literally everything that
> Free Software is supposed to protect users from.
This is also nonsense, because you are free to build a system variant. So this is exactly like saying "I don't approve of something the standard Apache web server does, so Linux violates everything Free Software is supposed to protect users from." Which could be true, but it isn't Apache's fault. :-)
Backdoors, adware, and cryptominers are impossible to make effective in confined systems, because all of them require independent access to unconfined resources. You can't remove the code, but you can definitely stop it from acting usefully. Introducing trojan horses in conventional systems is only possible because applications cannot defend themselves. When the only way to talk to an application is through its defined protocol, a line of defense actually exists. When you can reach in and tamper with a program, it does not.
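A sketch of that line of defense (the opcodes are invented): the server's entire attack surface is the set of messages it chooses to interpret, and anything outside the defined protocol is refused rather than executed.

    #include <stdio.h>

    enum { OP_GET = 1, OP_PUT = 2 };

    typedef struct { int op; int key; int value; } msg;

    #define TABLE_SIZE 16

    /* Everything outside the defined protocol is refused, not executed.
     * There is no message that reads or rewrites the server's code or
     * private state wholesale. */
    static int dispatch(int table[TABLE_SIZE], const msg *m)
    {
        if (m->key < 0 || m->key >= TABLE_SIZE)
            return -1;
        switch (m->op) {
        case OP_GET:
            return table[m->key];
        case OP_PUT:
            table[m->key] = m->value;
            return 0;
        default:
            return -1;   /* unknown request: rejected */
        }
    }

    int main(void)
    {
        int table[TABLE_SIZE] = { 0 };
        msg put    = { OP_PUT, 3, 99 };
        msg get    = { OP_GET, 3, 0 };
        msg attack = { 0xdead, 3, 0 };  /* not part of the protocol */

        dispatch(table, &put);
        printf("get -> %d\n", dispatch(table, &get));
        printf("unknown op -> %d\n", dispatch(table, &attack));
        return 0;
    }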
The Free Software argument about DRM is silly, because it does not distinguish between good cases and bad cases. I definitely think there are bad cases! But it must be acknowledged that a password vault only works because it implements a form of DRM! A more nuanced claim (and goal) regarding DRM is required before we can talk about this issue rationally.
It is a mistake to assume that all problems must or can be solved in software. Some are questions of law. For better or worse, DRM has become one of those.
> Since in such a design, the admin is also not supposed to have access to
> a password vault, this of course implies not having an almighty admin --
> meaning that even if there is a "friendly" admin, or the user is the
> admin, they are still at the mercy of third parties....
The admin is precisely the third party that users have always been at the mercy of -- and overwhelmingly the greatest source of system instability and configuration errors. More than 50 years of accumulated evidence confirms that removing the need for an admin is the single best thing you can do to improve system reliability.
Yes, you remain dependent on the decisions of the group that assembles your distribution (assuming you do not change it). The reality is that you are dependent on that group regardless. The question is how many other random would-be kings you are also subject to.
> This is exactly the
> same scenario as Apple and Google jailing users "for their own good". I
> hope we can all agree that this is *not* a desirable situation?...
It depends. If you mean that you hate iMessage because it does everything it can to break SMS and convince you to buy more Apple products, then I agree this is bad. But nothing requires you to use iMessage! It's just an application. Your real objection is that it is a successful, good, and useful application whose authors do not agree with your priorities.
But if you mean a system in which developers can be held accountable for the behavior of their applications, then no. It isn't the users who are being jailed. It's the hostile developers. I understand that you would prefer to operate in a libertine (not to be confused with "free") system that is maximally virus and trojan horse compatible. Most humans do not prefer that. And I will say, without hesitation, that it is a more important objective to me to construct a functional and safe environment for the majority of users in the world than for the small number who prioritize indefensibility over common sense.
I also believe that you should be free to build that indefensible system if you choose to do so. Note that "side loading" is possible and supported on all of these systems.
When your indefensible system starts attacking my systems with a DDOS attack because you made it virus and trojan compatible, I believe that you should be legally liable for the consequences of deploying that system on an open network. Your right to be indefensible ends at your network boundary.
> This, I believe, is why Marcus concluded -- and why I am concluding --
> that a system following GNU ideals MUST NOT come with this sort of
> functionality. This is not really a matter of opinion: it's an
> inevitable conclusion. Such a system is *not* facilitating user freedom
> -- at least not in the GNU sense. There is really no arguing that.
I will not attempt to speak for Marcus. He's more than able to do that for himself!
But the key words here are "in the GNU sense". It has never been a goal of EROS or Coyotos to support user freedom in such a fundamentally flawed way.
I don't wish to be confrontational, but I do wish to be direct: I do not believe that the "GNU Ideals" are the best way to build a safe, viable, and open ecosystem. There are many good ideas in the GNU Ideals, and part of their success has been an unwillingness to compromise on them. The movement has made some amazing inroads, and it has changed the face of software. I have personally contributed a lot of effort to that process.
But I also think that any system or ideology that cannot balance competing ideals and requirements is in principle doomed. GNU is not an exception to this rule.
> > What a constructor enables is local secure collaboration. If we use
> > the same machine, then I can run a program that you've shared with me,
> > and if you have provided it as a constructor then I can check that it
> > cannot leak any data or capabilities I provide it back to you (or
> > anyone else).
The "local collaboration" case is extremely niche, to the point that
it's not worth considering IMHO. It's one failed promise of the original
Hurd design that I really don't see as a loss...
In a system where processes are first class and persistent, the local collaboration case is the universal case, so in such a system it is not niche. It is incredibly powerful, but it is very difficult to see the power if you are starting from experience with systems where processes are non-persistent and second (or perhaps third) class.
> > The process is "mine" in the sense that I have the authority to
> > reclaim its resources.
> That doesn't make it yours. You are only providing a runtime resource
> lease. You can revoke the lease: but otherwise, you have no control over
> the leased resources -- and certainly do not own the activities
> performed with these resources.
Yes. This is true in exactly the same way that your landlord does not control what you do in your apartment. Nor should they.
> I don't see how that requires privileged constructors?...
Constructors are not privileged. In fact, the entire notion of "privileged" processes as it is understood in UNIX/Windows does not exist in capability systems.