Re: [Qemu-devel] [PATCH 2/2] 9p-synth: use mutex on read-side


From: Harsh Bora
Subject: Re: [Qemu-devel] [PATCH 2/2] 9p-synth: use mutex on read-side
Date: Tue, 14 Aug 2012 00:43:00 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120717 Thunderbird/14.0

On 08/08/2012 05:25 PM, Paolo Bonzini wrote:
Even with the fix in the previous patch, the lockless handling of paths
in 9p-synth is wrong.  Paths can outlive rcu_read_unlock arbitrarily
via the V9fsPath objects that 9p-synth creates.  This would require
a reference counting mechanism that is not there and is quite hard to
retrofit into V9fsPath.

It seems to me that this was a premature optimization, so replace
everything with a simple mutex.
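For reference, this is roughly the retrofit being described: every synth node would need a reference count that a V9fsPath takes before rcu_read_unlock() and drops when the path is destroyed. A minimal sketch with hypothetical names, not existing QEMU code:

#include <stdlib.h>

/* Hypothetical sketch, not QEMU API: a refcount pinning the node for as long
 * as a V9fsPath holds a raw pointer to it. */
typedef struct RefCountedNode {
    int refcount;                 /* one reference per V9fsPath pointing here */
    /* ... name, attr, sibling links as in V9fsSynthNode ... */
} RefCountedNode;

/* name_to_path() would have to take a reference before rcu_read_unlock() ... */
static void node_ref(RefCountedNode *node)
{
    __atomic_fetch_add(&node->refcount, 1, __ATOMIC_SEQ_CST);
}

/* ... and every consumer of the resulting V9fsPath would have to drop it,
 * freeing the node only once the last reference is gone. */
static void node_unref(RefCountedNode *node)
{
    if (__atomic_fetch_sub(&node->refcount, 1, __ATOMIC_SEQ_CST) == 1) {
        free(node);
    }
}

Threading those ref/unref pairs through every V9fsPath user is the retrofit the commit message calls "quite hard", which is what taking the mutex avoids.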

Hi Paolo,

The rcu_read_[un]lock() macros were added as no-ops (based on your input on #qemu) as placeholders for RCU-based locking, which was suggested in place of reader-writer locks when the QemuRWLock API was proposed (see http://lists.gnu.org/archive/html/qemu-devel/2011-10/msg00192.html).

v9fs_synth_mutex was actually a pthread_rwlock_t earlier.
I am not sure whether a reader lock would be better than taking a plain mutex on the read side as well.
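For the record, the difference on the read side boils down to the following; an illustrative sketch only, not the actual 9p-synth code paths:

#include <pthread.h>

static pthread_rwlock_t synth_rwlock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t  synth_mutex  = PTHREAD_MUTEX_INITIALIZER;

static void lookup_with_rwlock(void)
{
    pthread_rwlock_rdlock(&synth_rwlock);   /* readers may run in parallel */
    /* ... walk the synth tree ... */
    pthread_rwlock_unlock(&synth_rwlock);
}

static void lookup_with_mutex(void)
{
    pthread_mutex_lock(&synth_mutex);       /* readers serialize with writers
                                               and with each other */
    /* ... walk the synth tree ... */
    pthread_mutex_unlock(&synth_mutex);
}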

Aneesh, any inputs?

Also, if we are going forward with this change, we may want to remove the definition of QLIST_INSERT_HEAD_RCU as well, since the code being removed below is the only consumer of that macro.
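For anyone not familiar with the macro, the _RCU insert differs from the plain one only in publishing the new element behind a write barrier so that lockless readers never see a half-initialized node. A simplified sketch (singly-linked, not the exact qemu-queue.h definitions):

#define SKETCH_INSERT_HEAD(head, elm, field) do {            \
        (elm)->field.next = (head)->first;                   \
        (head)->first = (elm);                               \
    } while (0)

#define SKETCH_INSERT_HEAD_RCU(head, elm, field) do {        \
        (elm)->field.next = (head)->first;                   \
        /* make elm fully visible before publishing it */    \
        __atomic_thread_fence(__ATOMIC_RELEASE);             \
        (head)->first = (elm);                               \
    } while (0)

Once every reader takes v9fs_synth_mutex, that ordering guarantee buys nothing, so the macro's definition can go away together with its last user.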

regards,
Harsh


Signed-off-by: Paolo Bonzini <address@hidden>
---
  hw/9pfs/virtio-9p-synth.c | 12 ++++++------
  qemu-thread.h             |  3 ---
  2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/hw/9pfs/virtio-9p-synth.c b/hw/9pfs/virtio-9p-synth.c
index a91ebe1..426605e 100644
--- a/hw/9pfs/virtio-9p-synth.c
+++ b/hw/9pfs/virtio-9p-synth.c
@@ -59,7 +59,7 @@ static V9fsSynthNode *v9fs_add_dir_node(V9fsSynthNode *parent, int mode,
      }
      node->private = node;
      strncpy(node->name, name, sizeof(node->name));
-    QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
+    QLIST_INSERT_HEAD(&parent->child, node, sibling);
      return node;
  }

@@ -133,7 +133,7 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
      node->attr->mode   = mode;
      node->private      = arg;
      strncpy(node->name, name, sizeof(node->name));
-    QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
+    QLIST_INSERT_HEAD(&parent->child, node, sibling);
      ret = 0;
  err_out:
      qemu_mutex_unlock(&v9fs_synth_mutex);
@@ -229,7 +229,7 @@ static int v9fs_synth_get_dentry(V9fsSynthNode *dir, struct dirent *entry,
      int i = 0;
      V9fsSynthNode *node;

-    rcu_read_lock();
+    qemu_mutex_lock(&v9fs_synth_mutex);
      QLIST_FOREACH(node, &dir->child, sibling) {
          /* This is the off child of the directory */
          if (i == off) {
@@ -245,7 +245,7 @@ static int v9fs_synth_get_dentry(V9fsSynthNode *dir, struct dirent *entry,
      v9fs_synth_direntry(node, entry, off);
      *result = entry;
  out:
-    rcu_read_unlock();
+    qemu_mutex_unlock(&v9fs_synth_mutex);
      return 0;
  }

@@ -476,7 +476,7 @@ static int v9fs_synth_name_to_path(FsContext *ctx, V9fsPath *dir_path,

      }

-    rcu_read_lock();
+    qemu_mutex_lock(&v9fs_synth_mutex);
      if (!dir_path) {
          dir_node = &v9fs_synth_root;
      } else {
@@ -504,7 +504,7 @@ static int v9fs_synth_name_to_path(FsContext *ctx, V9fsPath *dir_path,
      memcpy(target->data, &node, sizeof(void *));
      target->size = sizeof(void *);
  err_out:
-    rcu_read_unlock();
+    qemu_mutex_unlock(&v9fs_synth_mutex);
      return ret;
  }

diff --git a/qemu-thread.h b/qemu-thread.h
index 05fdaaf..3c9715e 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -23,9 +23,6 @@ void qemu_mutex_lock(QemuMutex *mutex);
  int qemu_mutex_trylock(QemuMutex *mutex);
  void qemu_mutex_unlock(QemuMutex *mutex);

-#define rcu_read_lock() do { } while (0)
-#define rcu_read_unlock() do { } while (0)
-
  void qemu_cond_init(QemuCond *cond);
  void qemu_cond_destroy(QemuCond *cond);




