virsh qemu-monitor-command does not change cpu index #7

Closed
sathnaga opened this issue Nov 13, 2017 · 3 comments
sathnaga commented Nov 13, 2017

Mirrored with LTC bug https://bugzilla.linux.ibm.com/show_bug.cgi?id=161249

libvirt-3.6.0-3.rel.gitdd9401b.el7.centos.ppc64le

The cpu index does not get changed by the virsh qemu-monitor-command command:

# virsh qemu-monitor-command virt-tests-vm1 --cmd "info cpus" --hmp
* CPU #0: nip=0xc0000000000b715c thread_id=34297
  CPU #1: nip=0xc0000000000b715c thread_id=63839

# virsh qemu-monitor-command virt-tests-vm1 --cmd "cpu 1" --hmp

# echo $?
0

# virsh qemu-monitor-command virt-tests-vm1 --cmd "info cpus" --hmp
* CPU #0: nip=0xc0000000000b715c thread_id=34297
  CPU #1: nip=0xc0000000000b715c thread_id=63839-----------------------NOK

# virsh qemu-monitor-command virt-tests-vm1 --cmd "cpu 2" --hmp
invalid CPU index-----------OK


The same works fine with qemu-kvm run directly:
qemu-kvm -M pseries,max-cpu-compat=power9,accel=kvm -smp 2,cores=2,threads=1,sockets=1 -m 256 -nographic -enable-kvm /var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/hostos-3.0-ppc64le.qcow2 -monitor stdio -serial /dev/pts/5

(qemu) info cpus
* CPU #0: nip=0xc0000000000b190c thread_id=59666
  CPU #1: nip=0xc0000000000b190c thread_id=60775
(qemu) cpu 1
(qemu) info cpus
  CPU #0: nip=0xc0000000000b190c thread_id=59666
* CPU #1: nip=0xc0000000000b190c thread_id=60775-----------------OK

(qemu) cpu 2
invalid CPU index
sathnaga (Member Author) commented:

# virsh qemu-monitor-command virt-tests-vm1 --cmd "info cpus" --hmp
* CPU #0: nip=0xc0000000000b715c thread_id=34297
  CPU #1: nip=0xc0000000000b715c thread_id=63839

# virsh qemu-monitor-command virt-tests-vm1 --cmd '{ "execute": "cpu", "arguments": { "index": 1 } }'
{"return":{},"id":"libvirt-60"}

# virsh qemu-monitor-command virt-tests-vm1 --cmd '{ "execute": "query-cpus" }' --pretty
{
  "return": [
    {
      "arch": "ppc",
      "current": true,
      "props": {
        "core-id": 0,
        "node-id": 0
      },
      "CPU": 0,
      "nip": -4611686018426637988,
      "qom_path": "/machine/unattached/device[0]/thread[0]",
      "halted": false,
      "thread_id": 34297
    },
    {
      "arch": "ppc",
      "current": false,
      "props": {
        "core-id": 1,
        "node-id": 0
      },
      "CPU": 1,
      "nip": -4611686018426637988,
      "qom_path": "/machine/peripheral/vcpu1/thread[0]",
      "halted": false,
      "thread_id": 63839
    }
  ],
  "id": "libvirt-61"
}

# virsh qemu-monitor-command virt-tests-vm1 --cmd "info cpus" --hmp
* CPU #0: nip=0xc0000000000b715c thread_id=34297
  CPU #1: nip=0xc0000000000b715c thread_id=63839


------- Comment From [email protected] 2017-12-11 15:46:55 EDT-------
After investigating I'm tempted to close this as a NOTABUG, but first I'll ask for thoughts/opinions from the Libvirt team.

The issue here is what the 'cpu' command does and how "virsh qemu-monitor-command" executes it. The 'cpu' command sets a default CPU for the current monitor session. It is not VM state, but monitor state, so multiple monitors on the same VM can have different settings. In the example below I've started a VM with 2 HMP monitors, one on local port 4444 and the other on local port 5555. Note that setting the default CPU in one monitor does not change the 'info cpus' output from the other:
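(For reference, a VM with two HMP monitors can be started roughly as below; this is a sketch, the disk image and machine options are placeholders and only the two -monitor arguments matter:)

# qemu-system-x86_64 -enable-kvm -m 256 -smp 2 guest.qcow2 \
    -monitor tcp:127.0.0.1:4444,server,nowait \
    -monitor tcp:127.0.0.1:5555,server,nowait
# nc 127.0.0.1 4444    <- first monitor
# nc 127.0.0.1 5555    <- second monitor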

First monitor:

(qemu) cpu 1
cpu 1
(qemu)
(qemu) info cpus
info cpus
CPU #0: pc=0xffffffff900d3d66 (halted) thread_id=29478

Second monitor:

(qemu)
(qemu) info cpus
info cpus

Which brings us back to "virsh qemu-monitor-command". This virsh command makes Libvirt issue a QMP command called "human-monitor-command". This command opens up a new HMP monitor, executes the command and then exits, destroying the HMP monitor it created. No state is written to either the VM or the current QMP monitor that Libvirt uses to execute commands. So, this command:

virsh qemu-monitor-command virt-tests-vm1 --cmd "cpu 1" --hmp

  • executes QMP 'human-monitor-command' with 'cpu 1' as the command to be executed
  • QEMU creates a new HMP monitor and executes 'cpu 1' in it
  • QEMU destroys the HMP monitor

And this is why you'll always see the same output from both this and the "info cpus" command - they are executed in a fresh HMP monitor every time. In the end, this will never work as intended by the tester. This alone is enough to close this as NOTABUG.
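For illustration, the QMP exchange that the --hmp option drives looks roughly like this (a sketch; the empty return string and the id value are illustrative, not captured output):

# virsh qemu-monitor-command virt-tests-vm1 --cmd '{ "execute": "human-monitor-command", "arguments": { "command-line": "cpu 1" } }'
{"return":"","id":"libvirt-62"}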

Now, an alternative would be to use the QMP commands "cpu" and "query-cpus" to do the same thing, but then I realized that "cpu" does not have a QMP implementation and "query-cpus" will not return the intended values as it is. I'll send patches to try to get this implemented in QMP to offer an alternative for Libvirt.

Thoughts?


------- Comment From [email protected] 2017-12-16 13:06:56 EDT-------
In the QEMU mailing list, the QMP/HMP maintainer explained that QMP is supposed to be stateless, as opposed to HMP. It was also mentioned that any QMP command that might make use of the current monitor CPU should take it as a parameter. This means that the community does not want 'qmp_cpu' to be implemented. In fact, I'll send a patch to put the command on the deprecated list.

As far as this bug is concerned, if Libvirt wants to use a specific CPU index to execute a command, it should pass the CPU as a parameter instead of setting the active CPU. Given what I've already said in my last comment about how qemu-monitor-command --hmp works, I'm closing this bug as NOTABUG.
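As a sketch of that approach (assuming the optional cpu-index argument of the QMP human-monitor-command and a guest with at least two vCPUs), a per-CPU HMP command can be targeted without changing any monitor state:

# virsh qemu-monitor-command virt-tests-vm1 --cmd '{ "execute": "human-monitor-command", "arguments": { "command-line": "info registers", "cpu-index": 1 } }'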

Thanks,

Daniel

ShivaprasadGBhat pushed a commit that referenced this issue Jan 16, 2018
Direct leak of 104 byte(s) in 1 object(s) allocated from:
    #0 0x7f904bfbe12b  (/lib64/liblsan.so.0+0xe12b)
    #1 0x7f904ba0ad67 in virAlloc ../../src/util/viralloc.c:144
    #2 0x7f904bbc11a4 in virNetMessageNew ../../src/rpc/virnetmessage.c:42
    #3 0x7f904bbb8e77 in virNetServerClientNewInternal ../../src/rpc/virnetserverclient.c:392
    #4 0x7f904bbb9921 in virNetServerClientNew ../../src/rpc/virnetserverclient.c:440
    #5 0x402ce5 in testIdentity ../../tests/virnetserverclienttest.c:55
    #6 0x403bed in virTestRun ../../tests/testutils.c:180
    #7 0x402c1e in mymain ../../tests/virnetserverclienttest.c:146
    #8 0x404c80 in virTestMain ../../tests/testutils.c:1119
    #9 0x4030d5 in main ../../tests/virnetserverclienttest.c:152
    #10 0x7f9047f7f889 in __libc_start_main (/lib64/libc.so.6+0x20889)

Indirect leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x7f904bfbe12b  (/lib64/liblsan.so.0+0xe12b)
    #1 0x7f904ba0adc7 in virAllocN ../../src/util/viralloc.c:191
    #2 0x7f904bbb8ec7 in virNetServerClientNewInternal ../../src/rpc/virnetserverclient.c:395
    #3 0x7f904bbb9921 in virNetServerClientNew ../../src/rpc/virnetserverclient.c:440
    #4 0x402ce5 in testIdentity ../../tests/virnetserverclienttest.c:55
    #5 0x403bed in virTestRun ../../tests/testutils.c:180
    #6 0x402c1e in mymain ../../tests/virnetserverclienttest.c:146
    #7 0x404c80 in virTestMain ../../tests/testutils.c:1119
    #8 0x4030d5 in main ../../tests/virnetserverclienttest.c:152
    #9 0x7f9047f7f889 in __libc_start_main (/lib64/libc.so.6+0x20889)

SUMMARY: LeakSanitizer: 108 byte(s) leaked in 2 allocation(s).

Signed-off-by: Marc Hartmayer <[email protected]>
Reviewed-by: John Ferlan <[email protected]>
ShivaprasadGBhat pushed a commit that referenced this issue Apr 3, 2018
The fix for CVE-2018-6764 introduced a potential deadlock scenario
that gets triggered by the NSS module when virGetHostname() calls
getaddrinfo to resolve the hostname:

 #0  0x00007f6e714b57e7 in futex_wait
 #1  futex_wait_simple
 #2  __pthread_once_slow
 #3  0x00007f6e71d16e7d in virOnce
 #4  0x00007f6e71d0997c in virLogInitialize
 #5  0x00007f6e71d0a09a in virLogVMessage
 #6  0x00007f6e71d09ffd in virLogMessage
 #7  0x00007f6e71d0db22 in virObjectNew
 #8  0x00007f6e71d0dbf1 in virObjectLockableNew
 #9  0x00007f6e71d0d3e5 in virMacMapNew
 #10 0x00007f6e71cdc50a in findLease
 #11 0x00007f6e71cdcc56 in _nss_libvirt_gethostbyname4_r
 #12 0x00007f6e724631fc in gaih_inet
 #13 0x00007f6e72464697 in __GI_getaddrinfo
 #14 0x00007f6e71d19e81 in virGetHostnameImpl
 #15 0x00007f6e71d1a057 in virGetHostnameQuiet
 #16 0x00007f6e71d09936 in virLogOnceInit
 #17 0x00007f6e71d09952 in virLogOnce
 #18 0x00007f6e714b5829 in __pthread_once_slow
 #19 0x00007f6e71d16e7d in virOnce
 #20 0x00007f6e71d0997c in virLogInitialize
 #21 0x00007f6e71d0a09a in virLogVMessage
 #22 0x00007f6e71d09ffd in virLogMessage
 #23 0x00007f6e71d0db22 in virObjectNew
 #24 0x00007f6e71d0dbf1 in virObjectLockableNew
 #25 0x00007f6e71d0d3e5 in virMacMapNew
 #26 0x00007f6e71cdc50a in findLease
 #27 0x00007f6e71cdc839 in _nss_libvirt_gethostbyname3_r
 #28 0x00007f6e71cdc724 in _nss_libvirt_gethostbyname2_r
 #29 0x00007f6e7248f72f in __gethostbyname2_r
 #30 0x00007f6e7248f494 in gethostbyname2
 #31 0x000056348c30c36d in hosts_keys
 #32 0x000056348c30b7d2 in main

Fortunately the extra stuff virGetHostname does is totally irrelevant to
the needs of the logging code, so we can just inline a call to the
native hostname() syscall directly.

Signed-off-by: Daniel P. Berrangé <[email protected]>
ShivaprasadGBhat pushed a commit that referenced this issue Apr 3, 2018
Currently if the virNetServer instance is created with max_workers==0 to
request a non-threaded dispatch process, we deadlock during dispatch

  #0  0x00007fb845f6f42d in __lll_lock_wait () from /lib64/libpthread.so.0
  #1  0x00007fb845f681d3 in pthread_mutex_lock () from /lib64/libpthread.so.0
  #2  0x000055a6628bb305 in virMutexLock (m=<optimized out>) at util/virthread.c:89
  #3  0x000055a6628a984b in virObjectLock (anyobj=<optimized out>) at util/virobject.c:435
  #4  0x000055a66286fcde in virNetServerClientIsAuthenticated (client=client@entry=0x55a663a7b960)
      at rpc/virnetserverclient.c:1565
  #5  0x000055a66286cc17 in virNetServerProgramDispatchCall (msg=0x55a663a7bc50, client=0x55a663a7b960,
      server=0x55a663a77550, prog=0x55a663a78020) at rpc/virnetserverprogram.c:407
  #6  virNetServerProgramDispatch (prog=prog@entry=0x55a663a78020, server=server@entry=0x55a663a77550,
      client=client@entry=0x55a663a7b960, msg=msg@entry=0x55a663a7bc50) at rpc/virnetserverprogram.c:307
  #7  0x000055a662871d56 in virNetServerProcessMsg (msg=0x55a663a7bc50, prog=0x55a663a78020, client=0x55a663a7b960,
      srv=0x55a663a77550) at rpc/virnetserver.c:148
  #8  virNetServerDispatchNewMessage (client=0x55a663a7b960, msg=0x55a663a7bc50, opaque=0x55a663a77550)
      at rpc/virnetserver.c:227
  #9  0x000055a66286e4c0 in virNetServerClientDispatchRead (client=client@entry=0x55a663a7b960)
      at rpc/virnetserverclient.c:1322
  #10 0x000055a66286e813 in virNetServerClientDispatchEvent (sock=<optimized out>, events=1, opaque=0x55a663a7b960)
      at rpc/virnetserverclient.c:1507
  #11 0x000055a662899be0 in virEventPollDispatchHandles (fds=0x55a663a7bdc0, nfds=<optimized out>)
      at util/vireventpoll.c:508
  #12 virEventPollRunOnce () at util/vireventpoll.c:657
  #13 0x000055a6628982f1 in virEventRunDefaultImpl () at util/virevent.c:327
  #14 0x000055a6628716d5 in virNetDaemonRun (dmn=0x55a663a771b0) at rpc/virnetdaemon.c:858
  #15 0x000055a662864c1d in main (argc=<optimized out>, argv=0x7ffd105b4838) at logging/log_daemon.c:1235

Reviewed-by: John Ferlan <[email protected]>
Reviewed-by: Jim Fehlig <[email protected]>
Signed-off-by: Daniel P. Berrangé <[email protected]>
ShivaprasadGBhat pushed a commit that referenced this issue Jul 9, 2018
On start up of libvirtd the worker pool of the QEMU driver must be
initialized before trying to reconnect to all the running QEMU
instances. Otherwise segmentation faults can occur if there are QEMU
monitor events emitted.

 #0  __GI___pthread_mutex_lock
 #1  0x000003fffdba9e62 in virMutexLock
 #2  0x000003fffdbab2dc in virThreadPoolSendJob
 #3  0x000003ffd8343b70 in qemuProcessHandleSerialChanged
 #4  0x000003ffd836a776 in qemuMonitorEmitSerialChange
 #5  0x000003ffd8378e52 in qemuMonitorJSONHandleSerialChange
 #6  0x000003ffd8378930 in qemuMonitorJSONIOProcessEvent
 #7  0x000003ffd837edee in qemuMonitorJSONIOProcessLine
 #8  0x000003ffd837ef86 in qemuMonitorJSONIOProcess
 #9  0x000003ffd836757a in qemuMonitorIOProcess
 #10 0x000003ffd836863e in qemuMonitorIO
 #11 0x000003fffdb4033a in virEventPollDispatchHandles
 #12 0x000003fffdb4055e in virEventPollRunOnce
 #13 0x000003fffdb3e782 in virEventRunDefaultImpl
 #14 0x000003fffdc89400 in virNetDaemonRun
 #15 0x000000010002a816 in main

Signed-off-by: Marc Hartmayer <[email protected]>
Reviewed-by: Bjoern Walk <[email protected]>
Reviewed-by: Boris Fiuczynski <[email protected]>
Reviewed-by: Erik Skultety <[email protected]>