Re: [Qemu-devel] [Qemu-block] [PATCH 1/2] block: vpc - prevent overflow if max_table_entries >= 0x40000000


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH 1/2] block: vpc - prevent overflow if max_table_entries >= 0x40000000
Date: Fri, 26 Jun 2015 10:57:05 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Jun 25, 2015 at 11:05:20AM -0400, Jeff Cody wrote:
> On Thu, Jun 25, 2015 at 03:28:35PM +0100, Stefan Hajnoczi wrote:
> > On Wed, Jun 24, 2015 at 03:54:27PM -0400, Jeff Cody wrote:
> > > @@ -269,7 +270,9 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
> > >              goto fail;
> > >          }
> > >  
> > > -        s->pagetable = qemu_try_blockalign(bs->file, s->max_table_entries * 4);
> > > +        pagetable_size = (size_t) s->max_table_entries * 4;
> > > +
> > > +        s->pagetable = qemu_try_blockalign(bs->file, pagetable_size);
> > 
> > On 32-bit hosts size_t is 32 bits, so the overflow hasn't been solved.
> > 
> > Does it make sense to impose a limit on pagetable_size?
> 
> Good point.  Yes, it does.
> 
> The VHD spec says that the "Max Table Entries" field should be equal
> to the disk size / block size.  I don't know whether there are images
> out there that treat it as ">= disk size / block size" rather than
> "==", however.  But if we assume a max size of 2TB for a VHD disk and
> a minimal block size of 512 bytes, that would give us a
> max_table_entries of 0x100000000, which exceeds 32 bits by itself.
> 
> For pagetable_size to fit in a 32-bit size_t, which it must to support
> 2TB on a 32-bit host with the current implementation, the minimum
> block size is 4096 bytes.
> 
> We could check during open / create that
> (disk_size / block_size) * 4 < SIZE_MAX, and refuse to open if this is
> not true (and also validate that max_table_entries fits within the
> same limit).

Sounds good.
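
For reference, here is a minimal sketch of the wrap Stefan describes,
assuming a 32-bit host where size_t is 32 bits (a standalone
illustration, not QEMU code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* VHD "Max Table Entries" value at the boundary Stefan mentions */
    uint32_t max_table_entries = 0x40000000;

    /* On a 32-bit host size_t is 32 bits, so the multiply wraps:
     * 0x40000000 * 4 == 0x100000000, which truncates to 0.  The
     * (size_t) cast in the hunk above does not prevent this. */
    size_t pagetable_size = (size_t)max_table_entries * 4;

    printf("pagetable_size = %zu\n", pagetable_size); /* 0 on 32-bit */
    return 0;
}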
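
And a minimal sketch of the open-time check proposed above; the helper
name vpc_pagetable_size_ok() is hypothetical, and the eventual fix in
block/vpc.c may look different:

#include <stdbool.h>
#include <stdint.h>

static bool vpc_pagetable_size_ok(uint64_t disk_size, uint32_t block_size,
                                  uint32_t max_table_entries)
{
    uint64_t entries;

    if (block_size == 0) {
        return false;
    }

    /* "Max Table Entries" should be disk size / block size, but
     * tolerate images that treat it as ">=" rather than "==". */
    entries = disk_size / block_size;
    if (max_table_entries > entries) {
        entries = max_table_entries;
    }

    /* Do the multiply in 64 bits so it cannot wrap, then make sure the
     * result is addressable as a size_t even on a 32-bit host. */
    return entries * 4 < SIZE_MAX;
}

With 512-byte blocks a 2TB image needs a 16 GiB page table, which this
rejects on a 32-bit host; with 4096-byte blocks the table is 2 GiB and
fits.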
