qemu-devel

From: Pavel Dovgalyuk
Subject: Re: [PATCH 08/11] replay: introduce a central report point for sync errors
Date: Thu, 7 Dec 2023 11:45:34 +0300
User-agent: Mozilla Thunderbird

On 06.12.2023 19:48, Richard Henderson wrote:
> On 12/6/23 03:35, Philippe Mathieu-Daudé wrote:
>> Hi Alex,
>>
>> On 5/12/23 21:41, Alex Bennée wrote:
>>> Figuring out why replay has failed is tricky at the best of times.
>>> Let's centralise the reporting of a replay sync error and add a little
>>> bit of extra information to help with debugging.
>>>
>>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>>> ---
>>>   replay/replay-internal.h | 12 ++++++++++++
>>>   replay/replay-char.c     |  6 ++----
>>>   replay/replay-internal.c |  1 +
>>>   replay/replay.c          |  9 +++++++++
>>>   4 files changed, 24 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/replay/replay-internal.h b/replay/replay-internal.h
>>> index 1bc8fd5086..709e2eb4cb 100644
>>> --- a/replay/replay-internal.h
>>> +++ b/replay/replay-internal.h
>>> @@ -74,6 +74,7 @@ enum ReplayEvents {
>>>    * @cached_clock: Cached clocks values
>>>    * @current_icount: number of processed instructions
>>>    * @instruction_count: number of instructions until next event
>>> + * @current_event: current event index
>>>    * @data_kind: current event
>>>    * @has_unread_data: true if event not yet processed
>>>    * @file_offset: offset into replay log at replay snapshot
>>> @@ -84,6 +85,7 @@ typedef struct ReplayState {
>>>       int64_t cached_clock[REPLAY_CLOCK_COUNT];
>>>       uint64_t current_icount;
>>>       int instruction_count;
>>> +    unsigned int current_event;
>>>       unsigned int data_kind;
>>>       bool has_unread_data;
>>>       uint64_t file_offset;
>> Shouldn't this field be migrated?

> No, it's for diagnostic use only.

It should be migrated, because RR may be started from a snapshot that references the middle of the replayed scenario.
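
For context, the patch's central report point itself is not quoted above. A rough sketch of what such a helper might look like, built only from the ReplayState fields documented in the diff (the function name, message format, and abort-on-error behaviour here are assumptions, not quoted from the patch):

#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "replay-internal.h"

/*
 * Central sync-error report point (sketch). Every caller funnels
 * through here so a replay mismatch always reports the same context:
 * how many instructions have executed, how many remain until the next
 * event, and which event (by index and kind) was being processed when
 * the log diverged.
 */
void replay_sync_error(const char *error)
{
    error_report("%s (insn total %"PRIu64"/%d left, event %u is %u)",
                 error, replay_state.current_icount,
                 replay_state.instruction_count,
                 replay_state.current_event, replay_state.data_kind);
    abort();
}

The one-line change to replay-internal.c in the diffstat is presumably where current_event is incremented as each event is fetched from the log, so the index printed above identifies the exact point of divergence.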
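
If current_event were migrated as suggested, the natural vehicle would be the replay VMStateDescription used for snapshots. The fragment below is a sketch with an illustrative field list and version numbers, not the real contents of QEMU's vmstate_replay; the two practical points are that the struct field would need a fixed-width type (uint32_t) to satisfy vmstate's type checks, and that version_id must be bumped so snapshots taken before the change still load:

#include "migration/vmstate.h"

static const VMStateDescription vmstate_replay = {
    .name = "replay",
    .version_id = 3,          /* bumped for current_event */
    .minimum_version_id = 2,  /* still accept pre-bump snapshots */
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(current_icount, ReplayState),
        VMSTATE_INT32(instruction_count, ReplayState),
        /* only read/written for snapshot versions >= 3 */
        VMSTATE_UINT32_V(current_event, ReplayState, 3),
        VMSTATE_UINT32(data_kind, ReplayState),
        VMSTATE_END_OF_LIST()
    }
};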


Pavel Dovgalyuk




