From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH] linux-user: Use *at functions instead of caching interp_prefix contents
Date: Thu, 12 Jan 2017 08:21:26 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.6.0

On 01/12/2017 02:35 AM, Peter Maydell wrote:
> On 12 January 2017 at 04:05, Richard Henderson <address@hidden> wrote:
>> If the interp_prefix is a complete chroot, it may have a *lot* of files.
>> Setting up the cache for this is quite expensive.  Instead, use the *at
>> versions of various syscalls to attempt the operation in the prefix.


> Presumably this also fixes the completely broken handling of
> symlinks in the interp_prefix tree?

Presumably.

> Unfortunately the patch will break bsd-user as it stands,
> because bsd-user also calls init_paths. There's also a
> usage in tests/tcg/.

Oops, yes.

> Awkward problem: I suspect you'll find this breaks a few
> guest programs which don't like the fact that there's now
> an open fd which doesn't belong to the guest. (For instance
> IIRC some LTP test programs do "check that the rlimit on the
> number of open files is hit after opening exactly N files".)

Hum.  Probably.  I don't see how I can avoid that.

> Incidentally it's a shame C doesn't make it easier to
> abstract out the repeated pattern
>
>         switch (PATHNAME[0]) {
>         case '/':
>             ret = SOMEFN(interp_dirfd, PATHNAME + 1, STUFF);
>             if (ret == 0 || errno != ENOENT) {
>                 break;
>             }
>             /* fallthru */
>         default:
>             ret = SOMEFN(FD, PATHNAME, STUFF);
>             break;
>         }
>
> (and the variant which uses a 2nd function).
>
> A macro would get awkwardly heavily parameterised, I think.

I did briefly play with such a macro, and I found it to be uglier than just replicating the pattern.
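
For illustration only, a minimal sketch of roughly what such a macro might look like. The name TRY_INTERP_AT, its parameter list, and the assumption that interp_dirfd is a global fd open on the interp prefix are hypothetical, not taken from the patch; it also relies on the GNU ##__VA_ARGS__ extension:

    #include <errno.h>

    /*
     * Try the operation relative to the interp prefix first; on ENOENT,
     * fall back to the original path.  This mirrors the ret == 0 test
     * from the pattern quoted above; an fd-returning call such as
     * openat would want ret >= 0 instead.
     */
    #define TRY_INTERP_AT(ret, fn, fd, pathname, ...)                     \
        do {                                                              \
            if ((pathname)[0] == '/') {                                   \
                (ret) = fn(interp_dirfd, (pathname) + 1, ##__VA_ARGS__);  \
                if ((ret) == 0 || errno != ENOENT) {                      \
                    break;  /* found in the prefix, or a real error */    \
                }                                                         \
            }                                                             \
            (ret) = fn((fd), (pathname), ##__VA_ARGS__);                  \
        } while (0)

    /* hypothetical use:  TRY_INTERP_AT(ret, fstatat, dirfd, p, &st, flags);  */

Each call site does become a one-liner, but every variation (fd-returning calls, the two-function fallback mentioned above) adds another parameter or another macro, which is the awkward parameterisation in question.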


r~


