From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH V8 03/14] Add persistent state handling to TPM TIS frontend driver
Date: Sun, 11 Sep 2011 12:45:05 -0400
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.18) Gecko/20110621 Fedora/3.1.11-1.fc14 Lightning/1.0b3pre Thunderbird/3.1.11

On 09/09/2011 05:13 PM, Paul Moore wrote:
On Wednesday, August 31, 2011 10:35:54 AM Stefan Berger wrote:
Index: qemu-git/hw/tpm_tis.c
===================================================================
--- qemu-git.orig/hw/tpm_tis.c
+++ qemu-git/hw/tpm_tis.c
@@ -6,6 +6,8 @@
   * Author: Stefan Berger<address@hidden>
   *         David Safford<address@hidden>
   *
+ * Xen 4 support: Andrease Niederl<address@hidden>
+ *
   * This program is free software; you can redistribute it and/or
   * modify it under the terms of the GNU General Public License as
   * published by the Free Software Foundation, version 2 of the
@@ -839,3 +841,167 @@ static int tis_init(ISADevice *dev)
   err_exit:
      return -1;
  }
+
+/* persistent state handling */
+
+static void tis_pre_save(void *opaque)
+{
+    TPMState *s = opaque;
+    uint8_t locty = s->active_locty;
Is it safe to read s->active_locty without the state_lock?  I'm not sure at
this point but I saw it being protected by the lock elsewhere ...
It cannot change anymore since no vCPU is in the TPM TIS emulation layer at this point; all we are doing is waiting for the last outstanding command to be returned to us from the TPM thread. I don't mind moving this read into the critical section, though, just to keep it consistent.
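For illustration, the change would amount to something like this (a sketch based on the quoted hunk, not the final code):

    uint8_t locty;

    qemu_mutex_lock(&s->state_lock);
    locty = s->active_locty;    /* read while holding the state_lock */

    /* ... rest of tis_pre_save() as in the quoted hunk ... */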

If the state_lock does not protect all of the structure, it might be nice to
add some comments in the structure declaration explaining what fields are
protected by the state_lock and which are not.
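A minimal sketch of the kind of annotation being suggested, assuming roughly the current TPMState layout (fields and types not visible in the quoted hunks are guesses, and the buffer size is illustrative):

typedef struct TPMState {
    /* Fields shared with the TPM backend thread are protected by
     * state_lock: active_locty, loc[].state, and the read/write
     * buffers and their offsets. */
    uint8_t       active_locty;
    TPMLocality   loc[NUM_LOCALITIES];    /* name/type illustrative */

    QemuMutex     state_lock;
    QemuCond      from_tpm_cond;          /* signalled by the TPM thread
                                             when a response is ready */

    /* Only touched from the main thread in pre_save/post_load;
     * no locking required. */
    uint8_t       buf[4096];              /* size illustrative */
    uint32_t      offset;

    TPMBEDriver  *be_driver;              /* name/type illustrative */
} TPMState;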

+    qemu_mutex_lock(&s->state_lock);
+
+    /* wait for outstanding requests to complete */
+    if (IS_VALID_LOCTY(locty) && s->loc[locty].state == STATE_EXECUTION) {
+        if (!s->be_driver->ops->job_for_main_thread) {
+            qemu_cond_wait(&s->from_tpm_cond, &s->state_lock);
+        } else {
+            while (s->loc[locty].state == STATE_EXECUTION) {
+                qemu_mutex_unlock(&s->state_lock);
+
+                s->be_driver->ops->job_for_main_thread(NULL);
+                usleep(10000);
+
+                qemu_mutex_lock(&s->state_lock);
Hmm, this may be right, but it looks dangerous to me; can the active_locty
change while the state_lock is dropped?  What about loc[locty].state?
This is correct since at this time the VM is no longer executing, so no vCPU can be in the TPM TIS emulation code anymore; we are only waiting for the last outstanding TPM command to finish processing in the TPM thread (so that its response is 'caught' and stored as part of the TPM TIS state). The locking at this point is against the thread that may change the .state variable, although I don't think it would be necessary to hold the lock there at all, except for the case where the condition is being waited on in the other branch.
+            }
+        }
+    }
+
+#ifdef DEBUG_TIS_SR
+    fprintf(stderr,
+            "tpm_tis: suspend: locty 0 : r_offset = %d, w_offset = %d\n",
+            s->loc[0].r_offset, s->loc[0].w_offset);
+    if (s->loc[0].r_offset) {
+        tis_dump_state(opaque, 0);
+    }
+#endif
+
+    qemu_mutex_unlock(&s->state_lock);
+
+    /* copy current active read or write buffer into the buffer
+       written to disk */
+    if (IS_VALID_LOCTY(locty)) {
+        switch (s->loc[locty].state) {
More concerns about loc[locty].state without the state_lock.

The section you are quoting here is further down in the same function, which prepares the TPM TIS for state serialization before final migration/suspend. At this point we have caught the last outstanding response from the TPM thread, and that thread will not process any more commands (queuing of commands is not possible with TPM TIS; you strictly send it a single request, have it processed, and get that response back, so the thread will be idle). Also, since no vCPU is in the TPM TIS emulation layer anymore, the state cannot change. Again, I can extend the critical section over this area here as well.
+        case STATE_RECEPTION:
+            memcpy(s->buf,
+                   s->loc[locty].w_buffer.buffer,
+                   MIN(sizeof(s->buf),
+                       s->loc[locty].w_buffer.size));
+            s->offset = s->loc[locty].w_offset;
Same thing, just different fields ...

+        break;
+        case STATE_COMPLETION:
+            memcpy(s->buf,
+                   s->loc[locty].r_buffer.buffer,
+                   MIN(sizeof(s->buf),
+                       s->loc[locty].r_buffer.size));
+            s->offset = s->loc[locty].r_offset;
Again ...
Ok, I can move that single qemu_mutex_unlock(&s->state_lock) above to after the switch(), though I don't think it is necessary in this case due to the state the emulation is in. I agree, though, that the code 'looks' more correct that way; see the sketch after the quoted hunk below.
+        break;
+        default:
+            /* leak nothing */
+            memset(s->buf, 0x0, sizeof(s->buf));
Maybe?

What do you mean?
This memset just makes sure that no previous response still sitting in the TPM TIS buffer gets carried over into the TPM TIS state serialization.

Thanks for the review.

   Stefan
+        break;
+        }
+    }
+
+    s->be_driver->ops->save_volatile_data();
+}
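For reference, a minimal sketch of the variant discussed above, with active_locty read under the lock and the unlock moved to after the switch (abridged from the quoted hunk, not the final patch):

static void tis_pre_save(void *opaque)
{
    TPMState *s = opaque;
    uint8_t locty;

    qemu_mutex_lock(&s->state_lock);

    /* read the active locality while holding the lock */
    locty = s->active_locty;

    /* ... wait for outstanding requests to complete, exactly as in the
       quoted hunk above (qemu_cond_wait() or the job_for_main_thread()
       polling loop) ... */

    /* copy the currently active read or write buffer into the buffer
       that gets written to disk */
    if (IS_VALID_LOCTY(locty)) {
        switch (s->loc[locty].state) {
        case STATE_RECEPTION:
            memcpy(s->buf, s->loc[locty].w_buffer.buffer,
                   MIN(sizeof(s->buf), s->loc[locty].w_buffer.size));
            s->offset = s->loc[locty].w_offset;
            break;
        case STATE_COMPLETION:
            memcpy(s->buf, s->loc[locty].r_buffer.buffer,
                   MIN(sizeof(s->buf), s->loc[locty].r_buffer.size));
            s->offset = s->loc[locty].r_offset;
            break;
        default:
            /* leak nothing */
            memset(s->buf, 0x0, sizeof(s->buf));
            break;
        }
    }

    /* unlock only after the TIS state has been captured */
    qemu_mutex_unlock(&s->state_lock);

    s->be_driver->ops->save_volatile_data();
}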



