From: Thomas Huth
Subject: Re: [Qemu-devel] [PATCH v2 2/2] ppc/spapr_hcall: Implement H_RANDOM hypercall in QEMU
Date: Mon, 7 Sep 2015 17:05:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 01/09/15 02:47, David Gibson wrote:
> On Mon, Aug 31, 2015 at 08:46:02PM +0200, Thomas Huth wrote:
>> The PAPR interface provides a hypercall to pass high-quality
>> hardware-generated random numbers to guests. So let's provide
>> this call in QEMU as well, so that guests that do not support
>> virtio-rng yet can still get good random numbers.
>> Please note that this hypercall should provide "good" random data
>> instead of pseudo-random, so the function uses the RngBackend to
>> retrieve the values instead of using a "simple" library function
>> like rand() or g_random_int(). Since there are multiple RngBackends
>> available, the user must select an appropriate backend via the
>> "h-random" property of the machine state to enable it, e.g.:
>>
>>  qemu-system-ppc64 -M pseries,h-random=rng-random ...
...
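For context, this is roughly how a Linux guest ends up consuming the hypercall
(essentially what the pseries-rng hwrng driver does; plpar_hcall() and
PLPAR_HCALL_BUFSIZE are guest-kernel helpers and not part of this patch,
so treat this as an illustrative sketch only):

/* Guest side (sketch): H_RANDOM returns 64 bits of entropy in the first
 * return register, which QEMU fills via args[0] in the code below. */
static int guest_get_random(u64 *out)
{
    unsigned long retbuf[PLPAR_HCALL_BUFSIZE];

    if (plpar_hcall(H_RANDOM, retbuf) != H_SUCCESS) {
        return -EIO;
    }
    *out = retbuf[0];
    return 0;
}
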
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index 652ddf6..ff9d4fd 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -1,4 +1,8 @@
>>  #include "sysemu/sysemu.h"
>> +#include "sysemu/rng.h"
>> +#include "sysemu/rng-random.h"
>> +#include "qom/object_interfaces.h"
>> +#include "qemu/error-report.h"
>>  #include "cpu.h"
>>  #include "helper_regs.h"
>>  #include "hw/ppc/spapr.h"
>> @@ -929,6 +933,77 @@ static target_ulong h_client_architecture_support(PowerPCCPU *cpu_,
>>      return H_SUCCESS;
>>  }
>>  
>> +typedef struct HRandomData {
>> +    QemuSemaphore sem;
>> +    union {
>> +        uint64_t v64;
>> +        uint8_t v8[8];
>> +    } val;
>> +    int received;
>> +} HRandomData;
>> +
>> +static RndRandom *hrandom_rng;
> 
> Couldn't you avoid the new global by looking this up through the
> sPAPRMachineState?
> 
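A rough sketch of what that suggestion could look like (not part of this
patch; the hrandom_rng member of sPAPRMachineState is hypothetical here):

static target_ulong h_random(PowerPCCPU *cpu, sPAPRMachineState *spapr,
                             target_ulong opcode, target_ulong *args)
{
    /* hypothetical field, set while parsing the "h-random" machine property */
    RngBackend *rng = spapr->hrandom_rng;

    if (!rng) {
        return H_HARDWARE;
    }
    /* ... request entropy from "rng" as in the code below ... */
    return H_SUCCESS;
}
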
>> +
>> +static void random_recv(void *dest, const void *src, size_t size)
>> +{
>> +    HRandomData *hrcrdp = dest;
>> +
>> +    if (src && size > 0) {
>> +        memcpy(&hrcrdp->val.v8[hrcrdp->received], src, size);
> 
> I'd be happier with an assert() ensuring that size doesn't exceed the
> buffer space we have left.
> 
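Something like the following would do (sketch only, slotting into
random_recv() above; assert() could equally be g_assert()):

        /* The backend must never hand us more than we still have room for */
        assert(size <= sizeof(hrcrdp->val.v8) - hrcrdp->received);
        memcpy(&hrcrdp->val.v8[hrcrdp->received], src, size);
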
>> +        hrcrdp->received += size;
>> +    }
>> +    qemu_sem_post(&hrcrdp->sem);
> 
> Could you avoid a few wakeups by only posting the semaphore once the
> buffer is filled?

I tried that now, but calling rng_backend_request_entropy() from within
the callback function does not work (since entropy_available() in
rng-random.c clears the callback function variable after having called
the callback).

And since you normally seem to get all 8 bytes in the first shot anyway
when using a good random number source, I think it's best to simply
keep the logic as I've currently got it - at least that's easiest to
understand when reading the source code.
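
For reference, the variant that was tried looks roughly like this (sketch
only); it fails because rng-random.c, as of this series, clears its stored
callback pointer right after invoking it, so a request issued from inside
the callback is dropped:

static void random_recv(void *dest, const void *src, size_t size)
{
    HRandomData *hrcrdp = dest;

    if (src && size > 0) {
        memcpy(&hrcrdp->val.v8[hrcrdp->received], src, size);
        hrcrdp->received += size;
    }
    if (hrcrdp->received < 8) {
        /* Dropped: entropy_available() resets the callback after we return */
        rng_backend_request_entropy((RngBackend *)hrandom_rng,
                                    8 - hrcrdp->received, random_recv, hrcrdp);
        return;
    }
    /* Only wake up h_random() once all 8 bytes have arrived */
    qemu_sem_post(&hrcrdp->sem);
}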

>> +}
>> +
>> +static target_ulong h_random(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>> +                             target_ulong opcode, target_ulong *args)
>> +{
>> +    HRandomData hrcrd;
>> +
>> +    if (!hrandom_rng) {
>> +        return H_HARDWARE;
>> +    }
>> +
>> +    qemu_sem_init(&hrcrd.sem, 0);
>> +    hrcrd.val.v64 = 0;
>> +    hrcrd.received = 0;
>> +
>> +    qemu_mutex_unlock_iothread();
>> +    while (hrcrd.received < 8) {
>> +        rng_backend_request_entropy((RngBackend *)hrandom_rng,
>> +                                    8 - hrcrd.received, random_recv, &hrcrd);
>> +        qemu_sem_wait(&hrcrd.sem);
>> +    }
>> +    qemu_mutex_lock_iothread();
>> +
>> +    qemu_sem_destroy(&hrcrd.sem);
>> +    args[0] = hrcrd.val.v64;
>> +
>> +    return H_SUCCESS;
>> +}

I'll post a new version with the other changes soon.

 Thomas

