
Re: [PATCH] - gnumach: Implement offset in device map


From: Samuel Thibault
Subject: Re: [PATCH] - gnumach: Implement offset in device map
Date: Sun, 22 Aug 2021 20:25:24 +0200
User-agent: NeoMutt/20170609 (1.8.3)

Hello,

Joan Lledó, on Sun, 22 Aug 2021 12:51:09 +0200, wrote:
> I made the changes in the device mapper to implement the offset,

Thanks!

> limited the number of pagers per device to 1, so once the system got a
> pager for "mem" at offset 0, I couldn't give it another offset without
> that affecting all other memory mappings for that device.

Ah, right, ok.

> (BTW, why is the hash table limited to 127 slots?)

Well, we don't have too many devices in a system?
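
It also helps that 127 is prime: the modulo then spreads the (often
aligned) key values evenly over the slots.  The hash macro is roughly
of this shape (names illustrative, not the exact source):

#define DEV_HASH_COUNT  127
#define dev_hash(key)   ((vm_offset_t) (key) % DEV_HASH_COUNT)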

> On the other hand, I couldn't think of any other way to get rid of the struct 
> pci_user_data to get the pager from the arbiter. Any ideas? I'll work on the 
> new libpciaccess interface for the Hurd to get the pager unless somebody has 
> a better idea.

That looks like a nice interface to me. The question is rather whether
upstream will be fine with it, i.e. an API with one function that is
OS-specific. Possibly they'd rather see what it looks like first, so an
MR + a mail on the proper mailing list would be what gets us an answer.

That being said, as mentioned previously, adding to GNU Mach a function
that returns a memory proxy for a given range of memory would also make
a lot of sense, and then you'd be able to just use it.
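
To make that concrete, it could be an RPC of roughly this shape
(hypothetical sketch only, neither the name nor the signature exists in
gnumach today):

/* Hypothetical: return a memory proxy port restricted to
   [offset, offset + size) of the device's memory; the arbiter could
   then hand that port to its clients, which vm_map() it as usual. */
kern_return_t
device_map_proxy (device_t device, vm_prot_t protection,
                  vm_offset_t offset, vm_size_t size,
                  ipc_port_t *proxy);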


Just a nitpick (don't bother fixing it here): avoid mixing code
behavior changes and variable/macro renames; keep such changes in
separate patches.  I ended up reverting the renames to make the patch
much easier to review.


Joan Lledó, on Sun, 22 Aug 2021 12:51:10 +0200, wrote:
> -queue_head_t dev_pager_hashtable[DEV_PAGER_HASH_COUNT];
> +queue_head_t dev_device_hashtable[DEV_HASH_COUNT];
>  struct kmem_cache    dev_pager_hash_cache;
> +mach_device_t        dev_pager_hashtable[DEV_HASH_COUNT];

Please leave a comment that makes explicit what is stored in each
hashtable, and how each is indexed.
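
For instance something like this (only a suggestion, assuming the
two-table layout discussed at the end of this mail):

/*
 * dev_pager_hashtable[]:  entry chains indexed by
 *                         dev_hash(pager name port); maps a pager port
 *                         to its pager record.
 * dev_device_hashtable[]: entry chains indexed by
 *                         dev_hash(device + offset); maps a device and
 *                         offset to the pager record.
 */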

Actually I don't understand how they are indexed: it seems from

> device = dev_pager_hashtable[dev_hash(name_port)];

that you are assuming the dev_pager_hashtable[] hash table will never
have a collision, but that's not true: several port names may have the
same hash key.  And indeed, in a hash table the hash function does not
have to strictly separate out all elements; that's why
dev_pager_hashtable was a queue: all elements with the same hash key
end up in the same list, which one can quickly walk to find the proper
element.

See for instance the original dev_pager_hash_lookup: it hashes the
name, but since several names may have the same hash, it goes through
the list and checks the name.  That is the way things work.
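
To spell the chaining out, the pattern is roughly this (simplified from
dev_pager.c, locking dropped, field names approximate):

dev_pager_t
dev_pager_hash_lookup(ipc_port_t name_port)
{
        queue_t                 bucket;
        dev_pager_entry_t       entry;

        bucket = &dev_pager_hashtable[dev_hash(name_port)];
        queue_iterate(bucket, entry, dev_pager_entry_t, links) {
                /* several names can hash to this bucket, so check
                   the key itself */
                if (entry->name == name_port)
                        return entry->pager_rec;
        }
        return DEV_PAGER_NULL;
}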

Your dev_device_hash_lookup (and dev_pager_hash_delete) have to do the
same: there can be several devices with the same hash, so while going
through the list you have to check not only the offset but also the
device.  Actually, to make hashing more efficient, you should probably
add the device and the offset together before calling dev_hash, so that
if we have a lot of dev+offset pairs for the same device, they will be
distributed over the 127 slots, and the lookup will thus stay very
efficient.
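
Concretely, something like this (hypothetical sketch: dev_device_entry_t
would be a small chaining struct holding the device, the offset and the
pager record; locking again dropped):

dev_pager_t
dev_device_hash_lookup(mach_device_t device, vm_offset_t offset)
{
        queue_t                 bucket;
        dev_device_entry_t      entry;

        /* fold the offset into the hash input so that pairs for one
           device get spread over all the slots */
        bucket = &dev_device_hashtable[dev_hash((vm_offset_t) device + offset)];
        queue_iterate(bucket, entry, dev_device_entry_t, links) {
                /* distinct dev+offset pairs can still collide, so
                   compare both fields */
                if (entry->device == device && entry->offset == offset)
                        return entry->pager_rec;
        }
        return DEV_PAGER_NULL;
}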

> -     dev_pager_hash_insert(d->pager, d);
> -     dev_pager_hash_insert((ipc_port_t)device, d);   /* HACK */
> +     dev_hash_insert(d->device, d->offset, d);

Mmm, I'd say it'd be simpler to keep it inserted in both hash tables.

AIUI you need two kinds of lookups:
- from pager name to struct dev_pager_entry
- from device name + offset to struct dev_pager_entry

so just make the hashtables that way:
- dev_device_hashtable indexed by dev_hash(device + offset), returns the pager
- dev_pager_hashtable indexed by dev_hash(pager_name), returns the pager

You won't even have to modify dev_pager_hash_insert,
dev_pager_hash_delete, and dev_pager_hash_lookup; they are already doing
it this way.  Yes, it's fine to be adding to two hash tables: the cost
is not a problem, since addition/removal is not a frequent operation.
The lookup, however, is frequent, so that is what we want to optimize
for, and thus have the two hashtables ready for it.
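
The device-side insertion would then mirror the existing pager-side one,
something like this (hypothetical sketch; dev_device_hash_cache would be
a cache for the larger entry, locking elided):

void
dev_device_hash_insert(mach_device_t device, vm_offset_t offset,
                       dev_pager_t rec)
{
        dev_device_entry_t new_entry;

        new_entry = (dev_device_entry_t)
                kmem_cache_alloc(&dev_device_hash_cache);
        new_entry->device = device;
        new_entry->offset = offset;
        new_entry->pager_rec = rec;
        queue_enter(&dev_device_hashtable[dev_hash((vm_offset_t) device + offset)],
                    new_entry, dev_device_entry_t, links);
}

and device_pager_setup just does both insertions:

        dev_pager_hash_insert(d->pager, d);             /* pager name -> d */
        dev_device_hash_insert(d->device, d->offset, d);/* dev+offset -> d */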

Samuel


