QEMU-1.2.0 based TLMu #4
Comments
On Thu, Nov 01, 2012 at 03:06:29AM -0700, Yutetsu TAKATSUKASA wrote:
Hi, this looks like cool stuff! I'm going to give it a try, but am pretty busy. A few questions:
Thanks for working on this! Best regards,
Hi
Yes, but the output looks different. It appears as shown below on my computer; they look like they are running sequentially.
No. The whole environment can be downloaded from http://www.hdlab.co.jp/web/a050consulting/b009armcpumodel/
Not tested, but it should work. Let me summarize the memory access handling in my implementation.
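In rough terms, accesses that hit a tlm_mem region are trapped by QEMU's memory API and forwarded to the TLM side through MemoryRegionOps callbacks. The snippet below is only a minimal sketch of that idea; the tlm_bus_read()/tlm_bus_write() helpers are placeholder names for illustration, not the actual TLMu API.

```c
/* Sketch only: forwarding trapped tlm_mem accesses to the TLM side.
 * tlm_bus_read()/tlm_bus_write() are placeholder names for illustration. */
#include "memory.h"   /* QEMU 1.2-era memory API header */

static uint64_t tlm_mem_read(void *opaque, target_phys_addr_t addr,
                             unsigned size)
{
    /* Forward the read to the SystemC/TLM initiator and return the data. */
    return tlm_bus_read(opaque, addr, size);
}

static void tlm_mem_write(void *opaque, target_phys_addr_t addr,
                          uint64_t value, unsigned size)
{
    /* Forward the write to the SystemC/TLM initiator. */
    tlm_bus_write(opaque, addr, value, size);
}

static const MemoryRegionOps tlm_mem_ops = {
    .read = tlm_mem_read,
    .write = tlm_mem_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};
```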
Regards,
Hi Edgar,
I have rebased tlmu onto QEMU-1.2.0.
The git rebase command did not help because the QEMU core has changed quite a lot. Of course you know. :)
I had to rewrite the memory access hooks.
Now tlm_mem is treated as ramd, which is almost the same as QEMU's romd.
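For comparison, a romd region in the QEMU 1.2 memory API is set up roughly as sketched below: in romd mode, reads are served directly from the RAM backing while writes trap into the MemoryRegionOps callbacks, and the region can be flipped so that reads trap as well. The ramd handling for tlm_mem presumably mirrors this pattern; tlm_mem_ops (from the sketch above), the opaque pointer, and the addresses are placeholders.

```c
/* Sketch: creating a romd region with the QEMU 1.2 memory API.
 * Reads are served directly from the RAM backing while writes trap into
 * the callbacks; switching the region to non-readable makes reads trap
 * as well.  All names and addresses here are placeholders. */
#include "exec-memory.h"   /* get_system_memory(), QEMU 1.2-era header */

static void map_tlm_mem_as_romd(void *opaque, target_phys_addr_t base,
                                uint64_t size)
{
    MemoryRegion *mr = g_malloc0(sizeof(*mr));

    memory_region_init_rom_device(mr, &tlm_mem_ops, opaque, "tlm_mem", size);
    memory_region_add_subregion(get_system_memory(), base, mr);

    /* Fully trapped mode: every access now reaches the callbacks. */
    memory_region_rom_device_set_readable(mr, false);
}
```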
I have added another mem_map function, tlmu_map_ram_nosync().
Memories mapped with tlmu_map_ram_nosync() are accessed via DMI without any timing calculation.
The search walk over TLMRegisterRamEntry no longer happens.
Linux on ARM now boots in 3 seconds, which is 7 times faster than before.
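A rough usage sketch from the wrapper side is shown below; the tlmu_map_ram_nosync() signature is assumed to mirror the existing tlmu_map_ram(), and the region names, addresses, and sizes are placeholders.

```c
#include "tlmu.h"

/* Sketch only: choosing between the timed path and the fast DMI path.
 * The tlmu_map_ram_nosync() signature is assumed to mirror tlmu_map_ram();
 * region names, addresses, and sizes are placeholders. */
static void map_memories(struct tlmu *q)
{
    /* RAM that keeps the synchronized, timed access path. */
    tlmu_map_ram(q, "ram_timed", 0x80000000ULL, 16 * 1024 * 1024, 1 /* rw */);

    /* RAM accessed through DMI, with no per-access timing calculation. */
    tlmu_map_ram_nosync(q, "ram_fast", 0x00000000ULL, 128 * 1024 * 1024,
                        1 /* rw */);
}
```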
I have only tested the ARM functionality.
There are probably new bugs in the timing calculation and syncing.
I am planning to keep up with mainline QEMU releases.
If you are interested, please try tlmu-1.2.0 branch on our repository.
https://github.com/hdlab/tlmu/tree/tlmu-1.2.0
Best regards,
Yutetsu.