Re: [Qemu-discuss] Kernel panic when booting from SCSI disk

From: Mike Lovell
Subject: Re: [Qemu-discuss] Kernel panic when booting from SCSI disk
Date: Tue, 07 Aug 2012 01:17:49 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0

On 08/06/2012 12:28 PM, Mike Lovell wrote:
On 07/30/2012 12:12 AM, Thomas Oberhammer wrote:
I tried different things, but the situation is unchanged.

Conversion of the disks:

I started with the vmdk files that were contained in the ova.
I was not able to boot them, neither with if=ide, nor with if=scsi.
(The error message was: no bootable devices found)

After that, I converted the disks:
vmware-vdiskmanager -r disk1.vmdk -t 0 disk1_t0.vmdk

Now, qemu found a bootable device, but the error was the same as described in the original post.

The next step was to convert the disk to qcow2 format:
qemu-img convert disk1_t0.vmdk -O qcow2 disk1.qcow2

But also after this, the situation was still the same.

Boot loader parameters:

The current boot loader parameters are:

root (hd0,1)
kernel /vmlinuz root=LABEL=ROOT1
initrd /initrd.img

I tried root=/dev/... with sda, sda1, sda5, hda, hda1, hda5, and disk0p5,
in combination with if=ide and if=scsi.

The only combination that did not create a kernel panic was
if=ide and root=/dev/hda5

I also enabled the debugging option, which gives a little more insight into what actually caused the kernel panic
(as you said, it cannot find the root file system):

Setting up fail make_request
Freeing unused kernel memory: 200k freed
Mounted /proc filesystem
Mounting sysfs
Creating /dev
Starting udev
Waiting for root device
Waiting for ROOT filesystem to become available [ROOT1]
Waiting for ROOT filesystem to become available [ROOT1]
ERROR: Unable to find root device ROOT1
Creating root device
Mounting root filesystem
mount: Mounting /dev/root on /sysroot failed: No such file or directory
mount: Mounting /dev on /sysroot/dev failed: No such file or directory
mount: Mounting /proc on /sysroot/proc failed: No such file or directory
mount: Mounting /dev/root on /sysroot failed: Invalid argument
Switching to new root
switch_root: bad newroot /sysroot
Kernel panic - not syncing: Attempted to kill init!

I wonder what the right path would be to access a disk that is attached via
-drive file=diskname,if=scsi

sorry for the delay in a response. the email got lost in the inbox.

if you use 'if=scsi' on the command line, the disk should show up in the guest as /dev/sd*. if the system is still throwing a kernel panic during boot, it is possible that the guest doesn't have a driver for the emulated scsi controller. i don't remember what scsi controller vmware emulates, but i just checked qemu and the controller it emulates is an LSI/Symbios 53c895a. it looks like the linux 3.2 kernel driver for it is sym53c8xx. i don't know what distribution you're using or whether its 2.6.9 kernel builds this driver by default. checking that the guest can see this kind of controller would be the next place i would look.
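a quick way to do that check from inside the guest is something like the following. this is a sketch for a 2.6-era red hat style guest; the exact paths and output wording may differ on your system.

```shell
# is the lsi 53c895a controller visible on the guest's pci bus?
lspci | grep -i -e lsi -e symbios

# is the sym53c8xx module present on disk for the running kernel?
find /lib/modules/$(uname -r) -name 'sym53c8xx*'

# is it currently loaded?
lsmod | grep sym53c8xx
```

if the module exists on disk but isn't loaded at the point the root filesystem is mounted, that points at the initrd rather than the kernel package itself.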

i had some time this evening and did some experimentation myself. i'm assuming you are using some form of RHEL4 since you are still on a 2.6.9 kernel. i don't use rhel and friends myself, but i grabbed a dvd of centos 4.8 and installed it into a qemu 1.1 guest using an ide disk. then i shut the guest down and restarted it, changing only the drive if= to scsi and adding a -option-rom for the lsi rom. after changing the hdd interface, the system panicked with the same error you originally listed.

after getting the error, i started inspecting the system to see how things were set up. the kernel 2.6.9 rpm package from the distribution did include the sym53c8xx driver. the problem is that the driver was not included in the initrd, because the system had been installed to a drive on a different controller. this means that during the initrd phase of start-up, the kernel could not find the disks attached to the scsi controller and failed. i tried to rebuild the initrd with something like `mkinitrd -f --with=sym53c8xx /boot/initrd-<kernel version>.img <kernel version>`, but that still resulted in an unbootable system. the kernel output was slightly different because the kernel did detect the controller and drives during boot, but lvm still wasn't able to load the VG.
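if you want to verify what a given initrd actually carries before rebooting, a sketch like this works. rhel4-era initrds are typically gzipped ext2 images, while later styles are gzipped cpio archives, so both forms are shown; the initrd path is a placeholder.

```shell
INITRD=/boot/initrd-$(uname -r).img   # placeholder path, adjust to taste

# later style: gzipped cpio archive -- just list its contents
zcat "$INITRD" | cpio -it 2>/dev/null | grep sym53c8xx

# rhel4-era style: gzipped ext2 image -- loop-mount it and look inside
zcat "$INITRD" > /tmp/initrd.img
mount -o loop /tmp/initrd.img /mnt
ls /mnt/lib | grep sym53c8xx
umount /mnt
```

if neither form shows a sym53c8xx module, the initrd won't be able to find a root filesystem on the emulated lsi controller no matter what root= is set to.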

finally, what i did was boot using if=ide with a second disk attached that was configured to use if=scsi, something like '-drive file=test.qcow,if=ide -drive file=test2.qcow,if=scsi', and let the system boot normally that way. after the system booted, i did all the system updates through yum, which included a kernel update. while yum was installing the new kernel package, it generated a new initrd for the updated kernel, and this new initrd included the appropriate scsi driver since the scsi controller was present in the system. i never did anything with the second disk in the guest. after the new kernel package was installed, i shut down the vm and restarted it with just the first disk and the interface set to scsi. this finally worked. i guess the kernel installation runs mkinitrd differently from how i was running it.
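that two-phase workaround looks roughly like the following. the disk names are the placeholders from above, and the qemu binary name depends on your build; as noted earlier, a -option-rom for the lsi boot rom may also be needed in phase 2 (path omitted here, as in the thread).

```shell
# phase 1: boot from ide, with a dummy scsi disk attached so the lsi
# controller is present in the guest. then run `yum update` inside the
# guest so the new kernel's initrd picks up the sym53c8xx driver.
qemu-system-x86_64 \
    -drive file=test.qcow,if=ide \
    -drive file=test2.qcow,if=scsi

# phase 2: once the updated kernel (and its new initrd) is installed,
# boot from the original disk alone, now on the scsi interface.
qemu-system-x86_64 \
    -drive file=test.qcow,if=scsi
```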

i hope that all makes sense and gives you enough information to make progress if you are still stuck. this particular problem wouldn't be an issue on other distributions (like debian and derivatives) since the initrd they build includes most of the kernel's disk controller drivers by default, even if the system doesn't use them.

the tl;dr version: it's most likely that the initrd in the guest doesn't include the sym53c8xx driver, and the initrd needs to be rebuilt so that it does.

