Re: [Qemu-devel] [PATCH v4] s390: diagnose 318 info reset and migration support


From: David Hildenbrand
Subject: Re: [Qemu-devel] [PATCH v4] s390: diagnose 318 info reset and migration support
Date: Tue, 14 May 2019 11:05:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 14.05.19 11:03, David Hildenbrand wrote:
> On 14.05.19 11:00, Cornelia Huck wrote:
>> On Tue, 14 May 2019 10:56:43 +0200
>> Christian Borntraeger <address@hidden> wrote:
>>
>>> On 14.05.19 10:50, David Hildenbrand wrote:
>>
>>>> Another idea for temporary handling: Simply only indicate 240 CPUs to
>>>> the guest if the response does not fit into a page. Once we have that
>>>> SCLP thingy, this will be fixed. Guest migration back and forth should
>>>> work, as the VCPUs are fully functional (and initially always stopped),
>>>> the guest will simply not be able to detect them via SCLP when booting
>>>> up, and therefore not use them.  
>>>
>>> Yes, that looks like a good temporary solution. In fact, if the guest
>>> relies on simple probing, it could even make use of the additional CPUs.
>>> It's just the SCLP response that is limited to 240 (or make it 247?).
>>
>> Where did the 240 come from - extra spare room? If so, 247 would
>> probably be all right?
>>
> 
> +++ b/include/hw/s390x/sclp.h
> @@ -133,6 +133,8 @@ typedef struct ReadInfo {
>      uint16_t highest_cpu;
>      uint8_t  _reserved5[124 - 122];     /* 122-123 */
>      uint32_t hmfai;
> +    uint8_t  _reserved7[134 - 128];     /* 128-133 */
> +    uint8_t  fac134;
>      struct CPUEntry entries[0];
>  } QEMU_PACKED ReadInfo;
> 
> 
> So we have "4096 - 135 + 1" bytes of memory. Each entry is 16 bytes wide.
> -> 246 CPUs fit.

(I meant 247 :( )


-- 

Thanks,

David / dhildenb


