Bug 1997725
| Summary: | RFE: enable pulseaudio backend on QEMU | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Klaus Heinrich Kiwi <kkiwi> |
| Component: | qemu-kvm | Assignee: | Gerd Hoffmann <kraxel> |
| qemu-kvm sub component: | Audio | QA Contact: | Guo, Zhiyi <zhguo> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | ||
| Priority: | high | CC: | ashwsing, berrange, coli, drjones, dyuan, jinzhao, juzhang, kkiwi, kraxel, lizhu, mrezanin, virt-maint, xuzhang, ymankad, zhguo |
| Version: | 9.0 | Keywords: | FutureFeature, Triaged |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | qemu-kvm-6.1.0-2.el9 | Doc Type: | Enhancement |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2022-05-17 12:23:26 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description
Klaus Heinrich Kiwi
2021-08-25 17:27:56 UTC
Miroslav, this is probably a simple .spec file change, ideally before beta closes. If this is something you or Danilo can make without needing Gerd, feel free to reassign it to yourself.

Tested against qemu-kvm-6.1.0-2.el9.x86_64, below are the steps:
1. Install rhel 9 workstation on a Lenovo T480s with stereo speakers and a microphone.
2. Log in to the desktop as a normal user (I'm using user kvm-qe, a normal user who doesn't have sudo privileges)
3. Enable the audio service for the normal user (host-side sanity checks are sketched after step 7):
$systemctl --user enable --now pipewire-media-session.service
4. Install the VM via virt-install or virt-manager, but connect to the user session:
$virt-install \
--connect qemu:///session \
...
5. Boot the VM with the following libvirt XML:
...
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<sound model='ich9'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
</sound>
<audio id='1' type='pulseaudio' serverName="/run/user/1000/pulse/native"/>
...
Generated qemu command line:
...
-audiodev id=audio1,driver=pa,server=/run/user/1000/pulse/native \
-vnc 0.0.0.0:0,audiodev=audio1 \
-device virtio-vga,id=video0,max_outputs=1,bus=pcie.0,addr=0x1 \
-device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x1b \
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0,audiodev=audio1 \
...
6. Connect remote-viewer/virt-viewer to the VM and perform audio playback and voice recording inside the VM. I have tested a rhel 9 VM as well as a windows 10 x64 VM; audio playback and voice recording can now be redirected to the host speaker and microphone, and the quality of both seems good. Adjusting the playback volume inside the VM also works well. (A sample in-guest test sequence is sketched after step 7.)
7. Test save/restore VM during playback/recording:
$virsh save VM current.stat
$virsh restore current.stat
Save/restore VM with pulseaudio backend also works normally.
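For reference, a couple of host-side sanity checks that can accompany steps 3 and 5 above (a minimal sketch, assuming the PulseAudio-compatible socket on this host is provided by pipewire-pulse; run as the same unprivileged user):
$ systemctl --user status pipewire pipewire-pulse pipewire-media-session
$ ls -l /run/user/$(id -u)/pulse/native   # the socket passed as serverName in the XML above
$ pactl list short clients                # once the VM is up, the qemu process should appear here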
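A sample in-guest test sequence for step 6 (a sketch only, assuming a Linux guest with alsa-utils installed; the file name is arbitrary):
$ speaker-test -t wav -c 2 -l 1      # play a short test sound on both channels
$ arecord -f cd -d 5 /tmp/mic.wav    # record 5 seconds from the default microphone
$ aplay /tmp/mic.wav                 # play the recording back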
I have not tested VM migration between two hosts, as it seems a normal user cannot mount NFS storage without the root user's participation (recording the NFS info in /etc/fstab with the user option), and I'm not sure whether the user session has migration support...
Audio device info on my host:
Server String: /run/user/1000/pulse/native
Library Protocol Version: 35
Server Protocol Version: 35
Is Local: yes
Client Index: 53
Tile Size: 65472
...
Server Name: PulseAudio (on PipeWire 0.3.32)
Server Version: 14.0.0
Default Sample Specification: float32le 2ch 48000Hz
Default Channel Map: front-left,front-right
Default Sink: alsa_output.pci-0000_00_1f.3.analog-stereo
Default Source: alsa_input.pci-0000_00_1f.3.analog-stereo
Cookie: fe28:48d0
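The individual host devices behind the default sink and source above can be listed with standard pactl commands (shown here only as a reference sketch):
$ pactl list short sinks
$ pactl list short sources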
Migration under session mode with shared storage:
1. mount the nfs dir using root
2. change the images under the nfs mount point to the non-root user (a sketch of these root-side commands follows step 5)
# ll /mnt/nfs/lizhu/tpm.qcow2
-rw-r--r--. 1 lizhu lizhu 1212219392 Sep 14 2021 /mnt/nfs/lizhu/tpm.qcow2
3. start the guest
$ virsh start avocado-vt-vm1
Domain 'avocado-vt-vm1' started
4. migrate the guest to the target host
$ virsh migrate avocado-vt-vm1 qemu+ssh://$target_hostname/session --verbose --live
lizhu@$target_hostname's password:
Migration: [100 %]
5. check the guest on the target host
$ virsh list --all
Id Name State
--------------------------------
5 avocado-vt-vm1 running
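The root-side one-time setup in steps 1-2 might look like the following sketch (the NFS server name and export path are placeholders; only the mount point, image path and owner come from the listing above):
# mount -t nfs nfs-server.example.com:/export/images /mnt/nfs
# chown -R lizhu:lizhu /mnt/nfs/lizhu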
Migration under session mode without shared storage:
1. create a storage pool on both hosts (a sketch of creating an equivalent pool with virsh follows step 4)
$ virsh pool-dumpxml lizhu
<pool type='dir'>
<name>lizhu</name>
<uuid>aede9f23-1872-45d6-bfb8-500b50c40d47</uuid>
<capacity unit='bytes'>489433313280</capacity>
<allocation unit='bytes'>14186229760</allocation>
<available unit='bytes'>475247083520</available>
<source>
</source>
<target>
<path>/home/lizhu/.local/share/libvirt/images</path>
<permissions>
<mode>0775</mode>
<owner>1001</owner>
<group>1002</group>
<label>unconfined_u:object_r:svirt_home_t:s0</label>
</permissions>
</target>
</pool>
2. start the guest
$ virsh start avocado-vt-vm1
Domain 'avocado-vt-vm1' started
$ virsh domblklist avocado-vt-vm1
Target Source
-------------------------------------------------------------
vda /home/lizhu/.local/share/libvirt/images/tpm.qcow2
3. migrate the guest with the --copy-storage-all flag (see the note on pre-creating the destination image after step 4)
$ virsh migrate avocado-vt-vm1 qemu+ssh://$target_hostname/session --verbose --live --copy-storage-all
lizhu@$target_hostname's password:
Migration: [100 %]
4. check the guest on the target host
$ virsh list --all
Id Name State
--------------------------------
2 avocado-vt-vm1 running
$ virsh domblklist avocado-vt-vm1
Target Source
-------------------------------------------------------------
vda /home/lizhu/.local/share/libvirt/images/tpm.qcow2
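For step 1, one way an equivalent per-user directory pool could be created on each host (a minimal sketch; the pool name and target path are taken from the pool-dumpxml output above, the rest is assumed):
$ virsh -c qemu:///session pool-define-as lizhu dir --target /home/lizhu/.local/share/libvirt/images
$ virsh -c qemu:///session pool-build lizhu
$ virsh -c qemu:///session pool-start lizhu
$ virsh -c qemu:///session pool-autostart lizhu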
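A caveat on step 3, hedged because it reflects general libvirt behaviour rather than anything recorded in this bug: --copy-storage-all mirrors block data into an existing file, so the destination host usually needs a pre-created image of the same virtual size at the path used in the domain XML, e.g. on the target host:
$ qemu-img create -f qcow2 /home/lizhu/.local/share/libvirt/images/tpm.qcow2 20G   # 20G is a placeholder; match the source disk's virtual size (qemu-img info shows it)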
From the testing of comment #17 and comment #18, under session mode we can migrate a guest with shared storage, but this needs the root user's participation. Alternatively, we can migrate a guest with non-shared storage, which does not seem to need root's participation.

Hi, Daniel
It seems that migrating a guest under session mode with non-shared storage, without the root user's participation, can work from the libvirt side. Do we need to support migrating a guest with the audio backend? Or maybe there are other methods that can make it happen; please help to check. Thanks

(In reply to Lili Zhu from comment #19)
> From the testing of comment #17 and comment #18, under session mode, we can
> migrate guest with shared storage, but need root user's participation. Or we
> can migrate guest with non-shared storage, this seems not need root user's
> participation.

You're only using 'root' here to set up a shared NFS volume. Based on your comments, migration is working fine once that one-time setup is done, so I'm not seeing any problem here.

QE bot (pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

(In reply to Lili Zhu from comment #19)
> It seems migrating guest under session mode with non-shared storage without
> root user's participation can work from libvirt side. Do we need to support
> migrating guest with audio backend? Or maybe there are other methods can
> make it happen, please help to check. Thanks

Hi Daniel,

QE would like to know whether we need to test VM live migration when the VM has been configured with the pulseaudio backend. I must admit that I'm not familiar with the supported scenarios when running the VM under a normal user session, and whether VM live migration is a supported scenario in this situation.

Trying to investigate a bit, it seems offline migration with the pulseaudio backend indeed works. However, for multi-host migration, I'm not sure how this can be performed, as a normal user seems 1) not able to have an ssh server enabled and 2) not able to have shared storage without the help of root or a sudo user. Even if these two are not blockers, I'm wondering whether qemu can tell the difference for the pulseaudio backend with default libvirt audiodev options if src/dest have different host audio device configurations; for example, src has both input and output audio devices but dest only has an output device. If live migration with the pulseaudio backend is not a supported scenario for any reason, should we add some code to abort live migration if a user attempts to do this? Thanks!

Zhiyi

(In reply to Guo, Zhiyi from comment #24)
> QE would like to know should we need to test VM live migration when VM
> has been configured with pulseaudio backend?
> I must admit that I'm not familiar with the supported scenarios when
> running the VM under a normal user session and whether VM live migration is
> a supported scenario or not in this situation.

Historically live migration would not have been usable for session mode libvirt, as we never officially supported remote access for the session mode libvirt. i.e. qemu+ssh://host/session wouldn't generally work as it would not auto-start libvirtd on the remote host. TCP/TLS was also not supported for session mode.

Now we have the virt-ssh-helper, we can use qemu+ssh://host/session remote access and it will auto-start libvirtd. In theory this means live migration should also now be possible (assuming $HOME is on NFS, or the user has configured the VM storage on a separate NFS mount, or has told qemu to do storage copy).

> Try to investigate a bit, seems offline migration with pulseaudio backend
> indeed works. However, for multi hosts migration, I'm not sure how this can
> be performed as a normal user seems 1) not able to have ssh server enabled
> for a normal user and 2) not possible to have a shared storage for normal
> user without the help from root or sudo user. Even these two are not
> blockers, I'm wondering if qemu can tell the differences for pulseaudio
> backend with default libvirt audiodev options if src/des have different host
> audio devices configurations.

pulseaudio is just a backend, and generally backends should not impact live migration. The guest on the target just connects to a new backend on the target. I don't know if there's anything special about audio backends here, but naively I would expect it to just work. i.e. if you can cold-boot the same VM config on both hosts, I would have expected migration to be viable. I've not personally tried migration with pulseaudio though.
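A quick way to sanity-check that session-mode remote access via virt-ssh-helper is functional before attempting such a migration (a minimal sketch; kvm-qe and target-host are placeholder names, run as the unprivileged user on the source host):
$ virsh -c qemu+ssh://kvm-qe@target-host/session version
$ virsh -c qemu+ssh://kvm-qe@target-host/session list --all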
> Now we have the virt-ssh-helper, we can use qemu+ssh://host/session remote
> access and it will auto-start libvirtd. In theory this means live migration
> should also now be possible

Yeah, I have confirmed that live migration indeed works. The configuration that works from my side:
1) Both laptop A and B have a user called kvm-qe
2) User kvm-qe has sshd access enabled (add "AllowUsers kvm-qe root" to /etc/ssh/sshd_config as user root)
3) A shared nfs storage mounted at /home/kvm-qe/nfs on both laptops
nfs server configuration:
# cat /etc/exports
/home/nfs *(rw,all_squash,anonuid=1001,anongid=1001)
client configuration (added by user root):
# cat /etc/fstab
...
(my nfs server ip):/home/nfs /home/kvm-qe/nfs nfs rw,relatime,user,noauto 0 0
nfs mounted by command:
$ mount /home/kvm-qe/nfs
4) Enable the port needed for migration:
# firewall-cmd --add-port=49152/tcp --permanent
# firewall-cmd --reload
With the VM image placed on the nfs storage and the VM started from this image, I'm able to do ping-pong live migration between laptop A and B via:
$ virsh migrate rhel9 --live qemu+ssh://des_ip(src_ip)/session

> pulseaudio is just a backend, and generally backends should not impact live
> migration. The guest on the target just connects to a new backend on the
> target.

VM migration indeed works well with the audio backend configured. I'm able to do ping-pong migration between laptop A and B without interrupting audio playback and recording (by default both laptops have an audio input and an output). Changing laptop B to use a USB audio output with input disabled, and migrating between laptop A and B, also works fine: audio playback is simply switched to the different audio output, and audio recording is *turned off and on* automatically depending on whether there is an audio input managed by pulseaudio (pipewire). The audio backend option used in the VM XML is the same as in comment 16.

Given that the results of the functional tests look good, marking the bug verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (new packages: qemu-kvm), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2307