
Re: [Qemu-devel] [RFC for 3.0] tests/tpm-emu: double the timeout


From: Alex Bennée
Subject: Re: [Qemu-devel] [RFC for 3.0] tests/tpm-emu: double the timeout
Date: Fri, 06 Jul 2018 12:07:14 +0100
User-agent: mu4e 1.1.0; emacs 26.1.50

Marc-André Lureau <address@hidden> writes:

> On Fri, Jul 6, 2018 at 12:19 PM, Alex Bennée <address@hidden> wrote:
>>
>> Marc-André Lureau <address@hidden> writes:
>>
>>> Hi
>>>
>>> On Fri, Jul 6, 2018 at 10:06 AM, Alex Bennée <address@hidden> wrote:
>>>> We see various failures on Travis, so let's just double the timeout and
>>>> see if that makes them go away.
>>>
>>> This is just waiting for the thread to start and open a socket. It
>>> shouldn't be a problem to wait longer, but do you have a Travis error
>>> log?
>>
>> For example:
>>
>> https://travis-ci.org/qemu/qemu/jobs/400436724#L8971
>>
>>   GTESTER check-qtest-i386
>> **
>> ERROR:tests/tpm-emu.c:27:tpm_emu_test_wait_cond: code should not be reached
>>
>
> Thanks. What about increasing the timeout to 30s? I am afraid 2x
> might not be enough for such overloaded systems.

Sure - the Travis servers are certainly hammered most of the time.
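For reference, a minimal sketch of what a 30s wait might look like in
tpm_emu_test_wait_cond() (tests/tpm-emu.c). This is an illustration of the
suggestion, not the applied patch, and it assumes the helper keeps the same
lock/wait/unlock structure and includes as the existing file:

void tpm_emu_test_wait_cond(TestState *s)
{
    /* give the emulator thread up to 30s to start and open its socket */
    gint64 end_time = g_get_monotonic_time() + 30 * G_TIME_SPAN_SECOND;

    g_mutex_lock(&s->data_mutex);
    if (!g_cond_wait_until(&s->data_cond, &s->data_mutex, end_time)) {
        /* this is the "code should not be reached" abort seen on Travis */
        g_assert_not_reached();
    }
    /* unlock assumed to follow the wait, as in the existing helper */
    g_mutex_unlock(&s->data_mutex);
}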

>
>>>>
>>>> Signed-off-by: Alex Bennée <address@hidden>
>>>> ---
>>>>  tests/tpm-emu.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/tests/tpm-emu.c b/tests/tpm-emu.c
>>>> index 8c2bd53cad..308f1884f6 100644
>>>> --- a/tests/tpm-emu.c
>>>> +++ b/tests/tpm-emu.c
>>>> @@ -20,7 +20,7 @@
>>>>
>>>>  void tpm_emu_test_wait_cond(TestState *s)
>>>>  {
>>>> -    gint64 end_time = g_get_monotonic_time() + 5 * G_TIME_SPAN_SECOND;
>>>> +    gint64 end_time = g_get_monotonic_time() + 10 * G_TIME_SPAN_SECOND;
>>>>
>>>>      g_mutex_lock(&s->data_mutex);
>>>>      if (!g_cond_wait_until(&s->data_cond, &s->data_mutex, end_time)) {
>>>> --
>>>> 2.17.1
>>>>
>>>>
>>
>>
>> --
>> Alex Bennée


--
Alex Bennée


