
Re: [vfio-users] R7 240 and RX 480 gpu passthrough



Hi,

Regarding updating the working VM: using only 1 core really helped. With more experimentation I managed to capture part of dmesg while running the working VM (with the R7 240) with the RX 480 instead.

Too bad I didn't manage to get anything written to disk.
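
For next time, netconsole can usually get the dmesg off the box even when nothing makes it to disk. A minimal sketch, with placeholder addresses/interface/MAC that need adjusting to the local network:

# on the host under test: stream kernel messages over UDP;
# 6665@192.168.1.10/eth0 (local) and 6666@192.168.1.20/aa:bb:cc:dd:ee:ff
# (remote) are placeholders for this sketch
modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/aa:bb:cc:dd:ee:ff

# on the receiving machine: log everything that arrives until the host dies
nc -u -l -p 6666 | tee crash-dmesg.log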

Any idea what I am doing wrong? It kills the host during boot, so obviously the already-present amd driver tries to init the RX 480 somehow.
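
If that is the case, the usual fix is to hand the card to vfio-pci before amdgpu/radeon can load. A minimal sketch of /etc/modprobe.d/vfio.conf — the 1002:67df/1002:aaf0 IDs are the common Ellesmere GPU/audio pair and are an assumption here, to be verified against lspci -nn:

# claim the RX 480 (GPU function + HDMI audio function) at boot;
# IDs are assumed Ellesmere defaults -- verify with: lspci -nn | grep -i amd
options vfio-pci ids=1002:67df,1002:aaf0
# make sure vfio-pci wins the probe race against the AMD drivers
softdep amdgpu pre: vfio-pci
softdep radeon pre: vfio-pci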

[173007.037801] DMAR: DRHD: handling fault status reg 700
[173007.038078] DMAR: DRHD: handling fault status reg 700
[173007.038351] DMAR: DRHD: handling fault status reg 700
[173007.038625] DMAR: DRHD: handling fault status reg 700
[173007.038897] DMAR: DRHD: handling fault status reg 700
[173007.039179] DMAR: DRHD: handling fault status reg 700
[173012.040536] dmar_fault: 48839 callbacks suppressed
[173012.040540] DMAR: DRHD: handling fault status reg 700
[173012.040915] DMAR: DRHD: handling fault status reg 700
[173012.041266] DMAR: DRHD: handling fault status reg 700
[173012.041618] DMAR: DRHD: handling fault status reg 700
[173012.041969] DMAR: DRHD: handling fault status reg 700
[173012.042315] DMAR: DRHD: handling fault status reg 700
[173012.042662] DMAR: DRHD: handling fault status reg 700
[173012.043010] DMAR: DRHD: handling fault status reg 700
[173012.043356] DMAR: DRHD: handling fault status reg 700
[173012.043702] DMAR: DRHD: handling fault status reg 700
[173017.044406] dmar_fault: 48832 callbacks suppressed
[173017.044408] DMAR: DRHD: handling fault status reg 700
[173017.044652] DMAR: DRHD: handling fault status reg 700
[173017.044869] DMAR: DRHD: handling fault status reg 700
[173017.045086] DMAR: DRHD: handling fault status reg 700
[173017.045301] DMAR: DRHD: handling fault status reg 700
[173017.045517] DMAR: DRHD: handling fault status reg 700
[173017.045731] DMAR: DRHD: handling fault status reg 700
[173017.045944] DMAR: DRHD: handling fault status reg 700
[173017.046157] DMAR: DRHD: handling fault status reg 700
[173017.046369] DMAR: DRHD: handling fault status reg 700
[173022.048277] dmar_fault: 48845 callbacks suppressed
[173022.048288] DMAR: DRHD: handling fault status reg 700
[173022.048525] DMAR: DRHD: handling fault status reg 700
[173022.048740] DMAR: DRHD: handling fault status reg 700
[173022.048955] DMAR: DRHD: handling fault status reg 700
[173022.049170] DMAR: DRHD: handling fault status reg 700
[173022.049385] DMAR: DRHD: handling fault status reg 700
[173022.049599] DMAR: DRHD: handling fault status reg 700
[173022.049812] DMAR: DRHD: handling fault status reg 700
[173022.050025] DMAR: DRHD: handling fault status reg 700
[173022.050239] DMAR: DRHD: handling fault status reg 700
[173027.052148] dmar_fault: 48845 callbacks suppressed
[173027.052150] DMAR: DRHD: handling fault status reg 700
[173027.052385] DMAR: DRHD: handling fault status reg 700
[173027.052599] DMAR: DRHD: handling fault status reg 700
[173027.052812] DMAR: DRHD: handling fault status reg 700
[173027.053026] DMAR: DRHD: handling fault status reg 700
[173027.053240] DMAR: DRHD: handling fault status reg 700
[173027.053450] DMAR: DRHD: handling fault status reg 700
[173027.053662] DMAR: DRHD: handling fault status reg 700
[173027.053874] DMAR: DRHD: handling fault status reg 700
[173027.054086] DMAR: DRHD: handling fault status reg 700
[173030.044201] INFO: task txg_sync:8043 blocked for more than 120 seconds.
[173030.044306]       Tainted: P           OE   4.9.0-0.bpo.2-amd64 #1
[173030.044390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[173030.044478] txg_sync        D    0  8043      2 0x00000000
[173030.044482]  ffff94435eda8800 0000000000000000 ffff944378ec2040 ffff945359620140
[173030.044485]  ffff94437fd58700 ffffbe6743b8bb50 ffffffffa09f668d ffffffffc0c35465
[173030.044487]  ffff94535917c080 00000000c1441ec0 ffff94436ddcc478 ffff945359620140
[173030.044489] Call Trace:
[173030.044496]  [<ffffffffa09f668d>] ? __schedule+0x23d/0x6d0
[173030.044506]  [<ffffffffc0c35465>] ? taskq_dispatch_ent+0xc5/0xf0 [spl]
[173030.044508]  [<ffffffffa09f6b52>] ? schedule+0x32/0x80
[173030.044511]  [<ffffffffa09fa089>] ? schedule_timeout+0x249/0x300
[173030.044552]  [<ffffffffc14412be>] ? zio_taskq_dispatch+0x8e/0xa0 [zfs]
[173030.044583]  [<ffffffffc14412de>] ? zio_issue_async+0xe/0x20 [zfs]
[173030.044613]  [<ffffffffc1444b87>] ? zio_nowait+0x77/0xf0 [zfs]
[173030.044638]  [<ffffffffc13b36f8>] ? dmu_objset_sync+0x298/0x340 [zfs]
[173030.044639]  [<ffffffffa09f63d4>] ? io_schedule_timeout+0xb4/0x130
[173030.044645]  [<ffffffffc0c386cf>] ? cv_wait_common+0xaf/0x120 [spl]
[173030.044648]  [<ffffffffa04bb7a0>] ? wake_up_atomic_t+0x30/0x30
[173030.044677]  [<ffffffffc144499d>] ? zio_wait+0xad/0x130 [zfs]
[173030.044705]  [<ffffffffc13d45de>] ? dsl_pool_sync+0x2be/0x450 [zfs]
[173030.044736]  [<ffffffffc13ebd70>] ? spa_sync+0x370/0xb20 [zfs]
[173030.044738]  [<ffffffffa04bb394>] ? __wake_up+0x34/0x50
[173030.044769]  [<ffffffffc13fdf66>] ? txg_sync_thread+0x3c6/0x620 [zfs]
[173030.044800]  [<ffffffffc13fdba0>] ? txg_sync_stop+0xd0/0xd0 [zfs]
[173030.044804]  [<ffffffffc0c33ce6>] ? thread_generic_wrapper+0x76/0x90 [spl]
[173030.044809]  [<ffffffffc0c33c70>] ? __thread_exit+0x20/0x20 [spl]
[173030.044811]  [<ffffffffa04974c0>] ? kthread+0xe0/0x100
[173030.044813]  [<ffffffffa042476b>] ? __switch_to+0x2bb/0x700
[173030.044816]  [<ffffffffa04973e0>] ? kthread_park+0x60/0x60
[173030.044818]  [<ffffffffa09fb675>] ? ret_from_fork+0x25/0x30
[173030.048079] mpt2sas_cm0: mpt3sas_scsih_issue_tm: timeout
[173030.048212] mf:
	01000009 00000100 00000000 00000000 00000000 00000000 00000000 00000000
	00000000 00000000 00000000 00000000 00000004

[173030.056082] sd 0:0:0:0: attempting task abort! scmd(ffff945372473640)
[173030.056087] sd 0:0:0:0: [sdd] tag#1 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[173030.056090] scsi target0:0:0: handle(0x000a), sas_address(0x4433221107000000), phy(7)
[173030.056092] scsi target0:0:0: enclosure_logical_id(0x500304801103c400), slot(4)
[173032.056019] dmar_fault: 48845 callbacks suppressed
[173032.056021] DMAR: DRHD: handling fault status reg 700
[173032.056374] DMAR: DRHD: handling fault status reg 700
[173032.056690] DMAR: DRHD: handling fault status reg 700
[173032.057006] DMAR: DRHD: handling fault status reg 700
[173032.057323] DMAR: DRHD: handling fault status reg 700
[173032.057640] DMAR: DRHD: handling fault status reg 700
[173032.057955] DMAR: DRHD: handling fault status reg 700
[173032.058273] DMAR: DRHD: handling fault status reg 700
[173032.058589] DMAR: DRHD: handling fault status reg 700
[173032.058904] DMAR: DRHD: handling fault status reg 700
[173033.979983] INFO: rcu_sched detected stalls on CPUs/tasks:
[173033.980334]     3-...: (0 ticks this GP) idle=737/140000000000000/0 softirq=3187486/3187486 fqs=14742
[173033.980673]     (detected by 22, t=36762 jiffies, g=5606332, c=5606331, q=277011)
[173033.981032] Task dump for CPU 3:
[173033.981034] qemu-system-x86 R  running task        0 17967      1 0x00000008
[173033.981038]  ffff9443759fedc8 ffff9443759ff4c8 0000000000000002 ffffffffa0859a66
[173033.981040]  0000020000000003 00000000e0001001 00000000e025b1ce ffffffffa085b6ea
[173033.981042]  00000000000e0000 ffff9443759feac0 0000000000000000 0000000000000004
[173033.981044] Call Trace:
[173033.981054]  [<ffffffffa0859a66>] ? qi_flush_dev_iotlb+0x86/0xc0
[173033.981056]  [<ffffffffa085b6ea>] ? iommu_flush_dev_iotlb.part.47+0x6a/0x90
[173033.981058]  [<ffffffffa085d417>] ? intel_iommu_unmap+0xf7/0x140
[173033.981061]  [<ffffffffa084cdfa>] ? iommu_unmap+0xba/0x190
[173033.981065]  [<ffffffffc09c6a9a>] ? vfio_remove_dma+0x10a/0x200 [vfio_iommu_type1]
[173033.981067]  [<ffffffffc09c71bd>] ? vfio_iommu_type1_ioctl+0x41d/0xa72 [vfio_iommu_type1]
[173033.981095]  [<ffffffffc0adbf80>] ? kvm_set_memory_region+0x30/0x40 [kvm]
[173033.981104]  [<ffffffffc0adc3dc>] ? kvm_vm_ioctl+0x44c/0x7e0 [kvm]
[173033.981108]  [<ffffffffc095a603>] ? vfio_fops_unl_ioctl+0x73/0x260 [vfio]
[173033.981112]  [<ffffffffa061753b>] ? do_vfs_ioctl+0x9b/0x600
[173033.981115]  [<ffffffffa04fb1e3>] ? SyS_futex+0x83/0x180
[173033.981116]  [<ffffffffa0617b16>] ? SyS_ioctl+0x76/0x90
[173033.981120]  [<ffffffffa09fb3fb>] ? system_call_fast_compare_end+0xc/0x9b
[173037.059994] dmar_fault: 48836 callbacks suppressed
[173037.059997] DMAR: DRHD: handling fault status reg 700
[173037.060423] DMAR: DRHD: handling fault status reg 700
[173037.060831] DMAR: DRHD: handling fault status reg 700
[173037.061248] DMAR: DRHD: handling fault status reg 700
[173037.061656] DMAR: DRHD: handling fault status reg 700
[173037.062062] DMAR: DRHD: handling fault status reg 700
[173037.062472] DMAR: DRHD: handling fault status reg 700
[173037.062879] DMAR: DRHD: handling fault status reg 700
[173037.063290] DMAR: DRHD: handling fault status reg 700
[173037.063699] DMAR: DRHD: handling fault status reg 700
[173040.283847] mpt2sas_cm0: sending diag reset !!
[173041.317872] mpt2sas_cm0: diag reset: SUCCESS
[173041.423459] mpt2sas_cm0: LSISAS2308: FWVersion(16.00.01.00), ChipRevision(0x05), BiosVersion(07.31.00.00)
[173041.423460] mpt2sas_cm0: Protocol=(Initiator), Capabilities=(Raid,TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[173041.423514] mpt2sas_cm0: sending port enable !!
[173042.063866] dmar_fault: 48826 callbacks suppressed
[173042.063868] DMAR: DRHD: handling fault status reg 700
[173042.064299] DMAR: DRHD: handling fault status reg 700
[173042.064710] DMAR: DRHD: handling fault status reg 700
[173042.065125] DMAR: DRHD: handling fault status reg 700
[173042.065539] DMAR: DRHD: handling fault status reg 700
[173042.065950] DMAR: DRHD: handling fault status reg 700
[173042.066361] DMAR: DRHD: handling fault status reg 700
[173042.066774] DMAR: DRHD: handling fault status reg 700
[173042.067185] DMAR: DRHD: handling fault status reg 700
[173042.067593] DMAR: DRHD: handling fault status reg 700
[173047.067738] dmar_fault: 48826 callbacks suppressed
[173047.067740] DMAR: DRHD: handling fault status reg 700
[173047.068178] DMAR: DRHD: handling fault status reg 700
[173047.068590] DMAR: DRHD: handling fault status reg 700
[173047.069000] DMAR: DRHD: handling fault status reg 700
[173047.069410] DMAR: DRHD: handling fault status reg 700
[173047.069820] DMAR: DRHD: handling fault status reg 700
[173047.070237] DMAR: DRHD: handling fault status reg 700
[173047.070651] DMAR: DRHD: handling fault status reg 700
[173047.071063] DMAR: DRHD: handling fault status reg 700
[173047.071472] DMAR: DRHD: handling fault status reg 700
[173049.050882] mpt2sas_cm0: port enable: SUCCESS
[173049.050889] mpt2sas_cm0: search for end-devices: start
[173049.051544] scsi target0:0:1: handle(0x0009), sas_addr(0x4433221104000000)
[173049.051545] scsi target0:0:1: enclosure logical id(0x500304801103c400), slot(7)
[173049.052320] scsi target0:0:2: handle(0x000a), sas_addr(0x4433221105000000)
[173049.052323] scsi target0:0:2: enclosure logical id(0x500304801103c400), slot(6)
[173049.052325]     handle changed from(0x000b)!!!
[173049.052548] scsi target0:0:3: handle(0x000b), sas_addr(0x4433221106000000)
[173049.052552] scsi target0:0:3: enclosure logical id(0x500304801103c400), slot(5)
[173049.052553]     handle changed from(0x000c)!!!
[173049.052804] scsi target0:0:0: handle(0x000c), sas_addr(0x4433221107000000)
[173049.052819] scsi target0:0:0: enclosure logical id(0x500304801103c400), slot(4)
[173049.052819]     handle changed from(0x000a)!!!
[173049.053058] mpt2sas_cm0: search for end-devices: complete
[173049.053061] mpt2sas_cm0: search for raid volumes: start
[173049.053062] mpt2sas_cm0: search for responding raid volumes: complete
[173049.053063] mpt2sas_cm0: search for expanders: start
[173049.053076] mpt2sas_cm0: search for expanders: complete
[173049.053090] sd 0:0:1:0: task abort: SUCCESS scmd(ffff945353b164c0)
[173049.053112] mpt2sas_cm0: removing unresponding devices: start
[173049.053113] sd 0:0:1:0: attempting task abort! scmd(ffff945353b16340)
[173049.053115] mpt2sas_cm0: removing unresponding devices: end-devices
[173049.053116] mpt2sas_cm0: removing unresponding devices: volumes
[173049.053116] mpt2sas_cm0: removing unresponding devices: expanders
[173049.053117] mpt2sas_cm0: removing unresponding devices: complete
[173049.053118] sd 0:0:1:0: [sde] tag#4 CDB: Write(16) 8a 00 00 00 00 01 5d 50 96 10 00 00 00 10 00 00
[173049.053121] scsi target0:0:1: handle(0x0009), sas_address(0x4433221104000000), phy(4)
[173049.053121] mpt2sas_cm0: scan devices: start
[173049.053123] scsi target0:0:1: enclosure_logical_id(0x500304801103c400), slot(7)
[173049.053149] sd 0:0:1:0: task abort: SUCCESS scmd(ffff945353b16340)
[173049.053154] sd 0:0:1:0: attempting task abort! scmd(ffff945353b16dc0)
[173049.053157] sd 0:0:1:0: [sde] tag#2 CDB: Write(16) 8a 00 00 00 00 01 5d 50 98 10 00 00 00 10 00 00
[173049.053158] scsi target0:0:1: handle(0x0009), sas_address(0x4433221104000000), phy(4)
[173049.053159] scsi target0:0:1: enclosure_logical_id(0x500304801103c400), slot(7)
[173049.053166] sd 0:0:1:0: task abort: SUCCESS scmd(ffff945353b16dc0)
[173049.053170] sd 0:0:1:0: attempting task abort! scmd(ffff94534caaa0c0)
[173049.053172] sd 0:0:1:0: [sde] tag#0 CDB: Write(16) 8a 00 00 00 00 00 8b c2 aa 90 00 00 00 38 00 00
[173049.053173] scsi target0:0:1: handle(0x0009), sas_address(0x4433221104000000), phy(4)
[173049.053174] scsi target0:0:1: enclosure_logical_id(0x500304801103c400), slot(7)
[173049.053180] sd 0:0:1:0: task abort: SUCCESS scmd(ffff94534caaa0c0)
[173049.053187] sd 0:0:1:0: [sde] tag#0 FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[173049.053188] sd 0:0:1:0: [sde] tag#0 CDB: Write(16) 8a 00 00 00 00 00 8b c2 aa 90 00 00 00 38 00 00
[173049.053190] blk_update_request: I/O error, dev sde, sector 2344790672
[173049.057371] mpt2sas_cm0:     scan devices: expanders start
[173049.057457] mpt2sas_cm0:     break from expander scan: ioc_status(0x0022), loginfo(0x310f0400)
[173049.057458] mpt2sas_cm0:     scan devices: expanders complete
[173049.057459] mpt2sas_cm0:     scan devices: phys disk start
[173049.057514] mpt2sas_cm0:     break from phys disk scan: ioc_status(0x0022), loginfo(0x00000000)
[173049.057515] mpt2sas_cm0:     scan devices: phys disk complete
[173049.057515] mpt2sas_cm0:     scan devices: volumes start
[173049.057565] mpt2sas_cm0:     break from volume scan: ioc_status(0x0022), loginfo(0x00000000)
[173049.057566] mpt2sas_cm0:     scan devices: volumes complete
[173049.057566] mpt2sas_cm0:     scan devices: end devices start
[173049.078973] mpt2sas_cm0:     break from end device scan: ioc_status(0x0022), loginfo(0x310f0400)
[173049.078974] mpt2sas_cm0:     scan devices: end devices complete
[173049.078974] mpt2sas_cm0: scan devices: complete
[173052.071619] dmar_fault: 48826 callbacks suppressed
[173052.071621] DMAR: DRHD: handling fault status reg 700
[173052.071989] DMAR: DRHD: handling fault status reg 700
[173052.072340] DMAR: DRHD: handling fault status reg 700
[173052.072689] DMAR: DRHD: handling fault status reg 700
[173052.073037] DMAR: DRHD: handling fault status reg 700
[173052.073387] DMAR: DRHD: handling fault status reg 700
[173052.073738] DMAR: DRHD: handling fault status reg 700
[173052.074087] DMAR: DRHD: handling fault status reg 700
[173052.074416] DMAR: DRHD: handling fault status reg 700
[173052.074732] DMAR: DRHD: handling fault status reg 700
[173057.075479] dmar_fault: 48833 callbacks suppressed
[173057.075481] DMAR: DRHD: handling fault status reg 700
[173057.075874] DMAR: DRHD: handling fault status reg 700
[173057.076221] DMAR: DRHD: handling fault status reg 700
[173057.076571] DMAR: DRHD: handling fault status reg 700
[173057.076919] DMAR: DRHD: handling fault status reg 700
[173057.077267] DMAR: DRHD: handling fault status reg 700
[173057.077616] DMAR: DRHD: handling fault status reg 700
[173057.077963] DMAR: DRHD: handling fault status reg 700
[173057.078310] DMAR: DRHD: handling fault status reg 700
[173057.078660] DMAR: DRHD: handling fault status reg 700
[173062.079350] dmar_fault: 48832 callbacks suppressed
[173062.079352] DMAR: DRHD: handling fault status reg 700
[173062.079722] DMAR: DRHD: handling fault status reg 700
[173062.080057] DMAR: DRHD: handling fault status reg 700
[173062.080368] DMAR: DRHD: handling fault status reg 700
[173062.080650] DMAR: DRHD: handling fault status reg 700
[173062.080911] DMAR: DRHD: handling fault status reg 700
[173062.081171] DMAR: DRHD: handling fault status reg 700
[173062.081430] DMAR: DRHD: handling fault status reg 700
[173062.081688] DMAR: DRHD: handling fault status reg 700
[173062.081950] DMAR: DRHD: handling fault status reg 700
[173067.083230] dmar_fault: 48839 callbacks suppressed
[173067.083235] DMAR: DRHD: handling fault status reg 700
[173067.083646] DMAR: DRHD: handling fault status reg 700
[173067.084019] DMAR: DRHD: handling fault status reg 700
[173067.084392] DMAR: DRHD: handling fault status reg 700
[173067.084761] DMAR: DRHD: handling fault status reg 700
[173067.085125] DMAR: DRHD: handling fault status reg 700
[173067.085491] DMAR: DRHD: handling fault status reg 700
[173067.085857] DMAR: DRHD: handling fault status reg 700
[173067.086223] DMAR: DRHD: handling fault status reg 700
[173067.086587] DMAR: DRHD: handling fault status reg 700
[173072.087195] dmar_fault: 48831 callbacks suppressed
[173072.087197] DMAR: DRHD: handling fault status reg 700
[173072.087482] DMAR: DRHD: handling fault status reg 700
[173072.087742] DMAR: DRHD: handling fault status reg 700
[173072.088002] DMAR: DRHD: handling fault status reg 700
[173072.088261] DMAR: DRHD: handling fault status reg 700
[173072.088518] DMAR: DRHD: handling fault status reg 700
[173072.088776] DMAR: DRHD: handling fault status reg 700
[173072.089034] DMAR: DRHD: handling fault status reg 700
[173072.089291] DMAR: DRHD: handling fault status reg 700
[173072.089548] DMAR: DRHD: handling fault status reg 700
[173077.091069] dmar_fault: 48841 callbacks suppressed
[173077.091071] DMAR: DRHD: handling fault status reg 700
[173077.091347] DMAR: DRHD: handling fault status reg 700
[173077.091604] DMAR: DRHD: handling fault status reg 700
[173077.091860] DMAR: DRHD: handling fault status reg 700
[173077.092118] DMAR: DRHD: handling fault status reg 700
[173077.092354] DMAR: DRHD: handling fault status reg 700
[173077.092569] DMAR: DRHD: handling fault status reg 700
[173077.092786] DMAR: DRHD: handling fault status reg 700
[173077.093002] DMAR: DRHD: handling fault status reg 700
[173077.093217] DMAR: DRHD: handling fault status reg 700
[173079.194956] mpt2sas_cm0: mpt3sas_scsih_issue_tm: timeout
[173079.195071] mf:
	0100000a 00000100 00000000 00000000 00000000 00000000 00000000 00000000
	00000000 00000000 00000000 00000000 00000001

[173079.195099] mpt2sas_cm0: sending diag reset !!
[173080.239937] mpt2sas_cm0: diag reset: SUCCESS
[173080.344033] mpt2sas_cm0: LSISAS2308: FWVersion(16.00.01.00), ChipRevision(0x05), BiosVersion(07.31.00.00)
[173080.344034] mpt2sas_cm0: Protocol=(Initiator), Capabilities=(Raid,TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[173080.344081] mpt2sas_cm0: sending port enable !!
[173082.094941] dmar_fault: 48843 callbacks suppressed
[173082.094944] DMAR: DRHD: handling fault status reg 700
[173082.095323] DMAR: DRHD: handling fault status reg 700
[173082.095680] DMAR: DRHD: handling fault status reg 700
[173082.096037] DMAR: DRHD: handling fault status reg 700
[173082.096392] DMAR: DRHD: handling fault status reg 700
[173082.096747] DMAR: DRHD: handling fault status reg 700
[173082.097099] DMAR: DRHD: handling fault status reg 700
[173082.097452] DMAR: DRHD: handling fault status reg 700
[173082.097804] DMAR: DRHD: handling fault status reg 700
[173082.098175] DMAR: DRHD: handling fault status reg 700
[173087.098811] dmar_fault: 48831 callbacks suppressed
[173087.098814] DMAR: DRHD: handling fault status reg 700
[173087.099089] DMAR: DRHD: handling fault status reg 700
[173087.099337] DMAR: DRHD: handling fault status reg 700
[173087.099585] DMAR: DRHD: handling fault status reg 700
[173087.099834] DMAR: DRHD: handling fault status reg 700
[173087.100077] DMAR: DRHD: handling fault status reg 700
[173087.100320] DMAR: DRHD: handling fault status reg 700
[173087.100562] DMAR: DRHD: handling fault status reg 700
[173087.100806] DMAR: DRHD: handling fault status reg 700
[173087.101049] DMAR: DRHD: handling fault status reg 700
[173088.083170] mpt2sas_cm0: port enable: SUCCESS
[173088.083176] mpt2sas_cm0: search for end-devices: start
[173088.084296] scsi target0:0:1: handle(0x0009), sas_addr(0x4433221104000000)
[173088.084298] scsi target0:0:1: enclosure logical id(0x500304801103c400), slot(7)
[173088.084363] scsi target0:0:3: handle(0x000a), sas_addr(0x4433221106000000)
[173088.084364] scsi target0:0:3: enclosure logical id(0x500304801103c400), slot(5)
[173088.084365]     handle changed from(0x000b)!!!
[173088.084431] scsi target0:0:2: handle(0x000b), sas_addr(0x4433221105000000)
[173088.084431] scsi target0:0:2: enclosure logical id(0x500304801103c400), slot(6)
[173088.084432]     handle changed from(0x000a)!!!
[173088.084497] scsi target0:0:0: handle(0x000c), sas_addr(0x4433221107000000)
[173088.084498] scsi target0:0:0: enclosure logical id(0x500304801103c400), slot(4)
[173088.084565] mpt2sas_cm0: search for end-devices: complete
[173088.084566] mpt2sas_cm0: search for raid volumes: start
[173088.084566] mpt2sas_cm0: search for responding raid volumes: complete
[173088.084567] mpt2sas_cm0: search for expanders: start
[173088.084567] mpt2sas_cm0: search for expanders: complete
[173088.084576] sd 0:0:0:0: task abort: SUCCESS scmd(ffff945372473640)
[173088.084603] mpt2sas_cm0: removing unresponding devices: start
[173088.084604] mpt2sas_cm0: removing unresponding devices: end-devices
[173088.084606] mpt2sas_cm0: removing unresponding devices: volumes
[173088.084606] mpt2sas_cm0: removing unresponding devices: expanders
[173088.084607] mpt2sas_cm0: removing unresponding devices: complete
[173088.084612] mpt2sas_cm0: scan devices: start
[173088.085172] mpt2sas_cm0:     scan devices: expanders start
[173088.085241] mpt2sas_cm0:     break from expander scan: ioc_status(0x0022), loginfo(0x310f0400)
[173088.085242] mpt2sas_cm0:     scan devices: expanders complete
[173088.085243] mpt2sas_cm0:     scan devices: phys disk start
[173088.085297] mpt2sas_cm0:     break from phys disk scan: ioc_status(0x0022), loginfo(0x00000000)
[173088.085298] mpt2sas_cm0:     scan devices: phys disk complete
[173088.085298] mpt2sas_cm0:     scan devices: volumes start
[173088.085349] mpt2sas_cm0:     break from volume scan: ioc_status(0x0022), loginfo(0x00000000)
[173088.085349] mpt2sas_cm0:     scan devices: volumes complete
[173088.085349] mpt2sas_cm0:     scan devices: end devices start
[173088.086197] mpt2sas_cm0:     break from end device scan: ioc_status(0x0022), loginfo(0x310f0400)
[173088.086198] mpt2sas_cm0:     scan devices: end devices complete
[173088.086198] mpt2sas_cm0: scan devices: complete
[173092.102681] dmar_fault: 48842 callbacks suppressed
[173092.102703] DMAR: DRHD: handling fault status reg 700
[173092.102948] DMAR: DRHD: handling fault status reg 700
[173092.103177] DMAR: DRHD: handling fault status reg 700
[173092.103406] DMAR: DRHD: handling fault status reg 700
[173092.103635] DMAR: DRHD: handling fault status reg 700
[173092.103863] DMAR: DRHD: handling fault status reg 700
[173092.104091] DMAR: DRHD: handling fault status reg 700
[173092.104319] DMAR: DRHD: handling fault status reg 700
[173092.104548] DMAR: DRHD: handling fault status reg 700
[173092.104777] DMAR: DRHD: handling fault status reg 700
[173096.998536] INFO: rcu_sched detected stalls on CPUs/tasks:
[173096.998816]     3-...: (0 ticks this GP) idle=737/140000000000000/0 softirq=3187486/3187486 fqs=20921
[173096.999091]     (detected by 22, t=52517 jiffies, g=5606332, c=5606331, q=395698)
[173096.999387] Task dump for CPU 3:
[173096.999388] qemu-system-x86 R  running task        0 17967      1 0x00000008
[173096.999392]  ffff9443759fedc8 ffff9443759ff4c8 0000000000000002 ffffffffa0859a66
[173096.999394]  0000020000000003 00000000e0001001 00000000e025b1ce ffffffffa085b6ea
[173096.999396]  00000000000e0000 ffff9443759feac0 0000000000000000 0000000000000004
[173096.999398] Call Trace:
[173096.999407]  [<ffffffffa0859a66>] ? qi_flush_dev_iotlb+0x86/0xc0
[173096.999409]  [<ffffffffa085b6ea>] ? iommu_flush_dev_iotlb.part.47+0x6a/0x90
[173096.999411]  [<ffffffffa085d417>] ? intel_iommu_unmap+0xf7/0x140
[173096.999414]  [<ffffffffa084cdfa>] ? iommu_unmap+0xba/0x190
[173096.999418]  [<ffffffffc09c6a9a>] ? vfio_remove_dma+0x10a/0x200 [vfio_iommu_type1]
[173096.999420]  [<ffffffffc09c71bd>] ? vfio_iommu_type1_ioctl+0x41d/0xa72 [vfio_iommu_type1]
[173096.999447]  [<ffffffffc0adbf80>] ? kvm_set_memory_region+0x30/0x40 [kvm]
[173096.999457]  [<ffffffffc0adc3dc>] ? kvm_vm_ioctl+0x44c/0x7e0 [kvm]
[173096.999461]  [<ffffffffc095a603>] ? vfio_fops_unl_ioctl+0x73/0x260 [vfio]
[173096.999464]  [<ffffffffa061753b>] ? do_vfs_ioctl+0x9b/0x600
[173096.999467]  [<ffffffffa04fb1e3>] ? SyS_futex+0x83/0x180
[173096.999468]  [<ffffffffa0617b16>] ? SyS_ioctl+0x76/0x90
[173096.999473]  [<ffffffffa09fb3fb>] ? system_call_fast_compare_end+0xc/0x9b
[173097.106554] dmar_fault: 48844 callbacks suppressed
[173097.106579] DMAR: DRHD: handling fault status reg 700
[173097.107044] DMAR: DRHD: handling fault status reg 700
[173097.107485] DMAR: DRHD: handling fault status reg 700
[173097.107923] DMAR: DRHD: handling fault status reg 700
[173097.108361] DMAR: DRHD: handling fault status reg 700
[173097.108798] DMAR: DRHD: handling fault status reg 700
[173097.109239] DMAR: DRHD: handling fault status reg 700
[173097.109676] DMAR: DRHD: handling fault status reg 700
[173097.110112] DMAR: DRHD: handling fault status reg 700
[173097.110611] DMAR: DRHD: handling fault status reg 700
[173102.110424] dmar_fault: 48822 callbacks suppressed
[173102.110427] DMAR: DRHD: handling fault status reg 700
[173102.110784] DMAR: DRHD: handling fault status reg 700
[173102.111121] DMAR: DRHD: handling fault status reg 700
[173102.111455] DMAR: DRHD: handling fault status reg 700
[173102.111790] DMAR: DRHD: handling fault status reg 700
[173102.112125] DMAR: DRHD: handling fault status reg 700
[173102.112460] DMAR: DRHD: handling fault status reg 700
[173102.112794] DMAR: DRHD: handling fault status reg 700
[173102.113128] DMAR: DRHD: handling fault status reg 700
[173102.113466] DMAR: DRHD: handling fault status reg 700
[173107.114398] dmar_fault: 48834 callbacks suppressed
[173107.114409] DMAR: DRHD: handling fault status reg 700
[173107.114764] DMAR: DRHD: handling fault status reg 700
[173107.115086] DMAR: DRHD: handling fault status reg 700
[173107.115389] DMAR: DRHD: handling fault status reg 700
[173107.115691] DMAR: DRHD: handling fault status reg 700
[173107.115996] DMAR: DRHD: handling fault status reg 700
[173107.116301] DMAR: DRHD: handling fault status reg 700
[173107.116607] DMAR: DRHD: handling fault status reg 700
[173107.116914] DMAR: DRHD: handling fault status reg 700
[173107.117221] DMAR: DRHD: handling fault status reg 700
[173112.118272] dmar_fault: 48836 callbacks suppressed
[173112.118275] DMAR: DRHD: handling fault status reg 700
[173112.118814] DMAR: DRHD: handling fault status reg 700
[173112.119296] DMAR: DRHD: handling fault status reg 700
[173112.119779] DMAR: DRHD: handling fault status reg 700
[173112.120261] DMAR: DRHD: handling fault status reg 700
[173112.120744] DMAR: DRHD: handling fault status reg 700
[173112.121226] DMAR: DRHD: handling fault status reg 700
[173112.121707] DMAR: DRHD: handling fault status reg 700
[173112.122257] DMAR: DRHD: handling fault status reg 700
[173112.122854] DMAR: DRHD: handling fault status reg 700
[173117.122140] dmar_fault: 48817 callbacks suppressed
[173117.122142] DMAR: DRHD: handling fault status reg 700
[173117.122415] DMAR: DRHD: handling fault status reg 700
[173117.122660] DMAR: DRHD: handling fault status reg 700
[173117.122904] DMAR: DRHD: handling fault status reg 700
[173117.123146] DMAR: DRHD: handling fault status reg 700
[173117.123389] DMAR: DRHD: handling fault status reg 700
[173117.123631] DMAR: DRHD: handling fault status reg 700
[173117.123874] DMAR: DRHD: handling fault status reg 700
[173117.124116] DMAR: DRHD: handling fault status reg 700
[173117.124359] DMAR: DRHD: handling fault status reg 700
[173118.374063] sd 0:0:0:0: attempting task abort! scmd(ffff945373d4c180)
[173118.374076] sd 0:0:0:0: [sdd] tag#0 CDB: Write(10) 2a 08 01 d3 a8 00 00 00 08 00
[173118.374091] scsi target0:0:0: handle(0x000c), sas_address(0x4433221107000000), phy(7)
[173118.374092] scsi target0:0:0: enclosure_logical_id(0x500304801103c400), slot(4)


Jiri 'Ghormoon' Novak wrote:
Hi,

Even with root.1 it kills the host without warning :(

Gh.

Stano Lano wrote:
Hi,

To update from Win10 1511 to Win10 1607 I had to set up my VM with only 1 core and 1 thread per core, or the update would fail. I had to do the same for the next big update; see the -smp line below.
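
In QEMU terms that just means cutting the -smp line down for the duration of the update, e.g.:

    -smp 1,sockets=1,cores=1,threads=1 \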

For the Windows RX 480 setup you need to put the GPU behind an ioh3420 root port or the driver will not work:

-device vfio-pci,host=02:00.0,id=hostdev6,bus=root.1,multifunction=on,addr=0x2,x-vga=on \
-device vfio-pci,host=02:00.1,id=hostdev7,bus=root.1,addr=0x2.0x1 \

You may also need to set the addr values to 0x0 and 0x0.0x1.
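
For completeness, root.1 in those lines is the ioh3420 root port, defined the same way as in the configs quoted below:

    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \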

I can't help with a Linux guest, but you can try the same.

Good luck

On Mon, Mar 20, 2017 at 8:09 AM, Jiří Novák <jiri novak actum cz> wrote:
Hi,
previously I had an R7 250x (borrowed) which worked for both Linux and
Windows guests. Now I'm trying to make the R7 240 work with a Linux guest
and the RX 480 with a Windows guest, and both are failing miserably.

The R7 240 in Linux doesn't boot at all. The BIOS says it is booting from
disk, but nothing happens: black screen, and I think the VM hangs (no
ping/ssh after some time). The Windows VM used with the R7 250x does work
correctly with a similar setup, except that the update from Win10 1511 to
Win10 1607 fails, which it did previously too.

This is the original Linux config:

LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
QEMU_AUDIO_DRV=pa /usr/bin/qemu-system-x86_64 \
    -name 10-debian \
    -machine pc-i440fx-2.4,accel=kvm,usb=off \
    -cpu SandyBridge,+invtsc,+osxsave,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
    -m 4096 \
    -realtime mlock=off \
    -smp 2,sockets=1,cores=2,threads=1 \
    -nographic -no-user-config -nodefaults \
    -rtc base=utc,driftfix=slew \
    -global kvm-pit.lost_tick_policy=discard \
    -no-hpet -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
    -boot menu=off,strict=on \
    -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x3 \
    -device pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0x4 \
    -device pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0x5 \
    -device pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0x6 \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -drive file=/dev/X-gzfs/backups/pools/C-nas/qemu/10-debian,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x1,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
    -netdev tap,id=netdev0,ifname=V0300t10,script=no,downscript=no \
    -device virtio-net-pci,netdev=netdev0,id=net0,mac=42:42:42:00:00:0a,bus=pci.2,addr=0x2 \
    -chardev pty,id=charserial0 \
    -device isa-serial,chardev=charserial0,id=serial0 \
    -device vfio-pci,host=07:00.0,id=hostdev2,bus=pci.4,addr=0x1 \
    -device vfio-pci,host=00:1a.0,id=hostdev3,bus=pci.2,addr=0x4 \
    -device vfio-pci,host=00:1d.0,id=hostdev5,bus=pci.2,addr=0x6 \
    -device vfio-pci,host=03:00.0,id=hostdev6,bus=pci.5,multifunction=on,addr=0x1,x-vga=on,romfile=/root/roms/R7.240.176679.rom \
    -device vfio-pci,host=03:00.1,id=hostdev7,bus=pci.5,addr=0x1.0x1 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x7 \
    -vga none \
    -soundhw hda \
    -chardev stdio,id=seabios \
    -device isa-debugcon,iobase=0x402,chardev=seabios \
    -msg timestamp=on \
    >>/var/log/kvm/10-debian.stdout 2>>/var/log/kvm/10-debian.stderr &

I've been recommended to change it to q35 and attach the Radeon directly
to a root port; that made no difference, though.

Changed config:

LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
QEMU_AUDIO_DRV=pa /usr/bin/taskset -c 2-3,18-19 /usr/bin/qemu-system-x86_64 \
    -name 10-debian \
    -machine pc-q35-2.4,accel=kvm,usb=off \
    -cpu SandyBridge,+invtsc,+osxsave,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,kvm=off \
    -object memory-backend-file,id=mem0,size=16G,mem-path=/dev/hugepages,share=off \
    -numa node,nodeid=0,memdev=mem0 \
    -m 16G \
    -realtime mlock=off \
    -smp sockets=1,cores=2,threads=2 \
    -nographic -no-user-config -nodefaults -no-hpet \
    -rtc base=localtime,driftfix=slew \
    -global kvm-pit.lost_tick_policy=discard \
    -boot menu=off,strict=on \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device piix3-usb-uhci,id=usb,bus=pcie.0,addr=0x7 \
    -drive file=/dev/X-gzfs/backups/pools/C-nas/qemu/10-debian,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
    -device virtio-scsi-pci,bus=pcie.0,addr=0x5 \
    -device scsi-hd,drive=drive-virtio-disk0 \
    -device vfio-pci,host=03:00.0,id=hostdev6,bus=pcie.0,multifunction=on,addr=0x2,x-vga=on,romfile=/root/roms/R7.240.176679.rom \
    -device vfio-pci,host=03:00.1,id=hostdev7,bus=pcie.0,addr=0x2.0x1 \
    -device vfio-pci,host=00:1a.0,id=hostdev3,bus=pcie.0,addr=0x3 \
    -device vfio-pci,host=00:1d.0,id=hostdev5,bus=pcie.0,addr=0x4 \
    -netdev tap,id=netdev0,ifname=V0300t12,script=no,downscript=no \
    -device virtio-net-pci,netdev=netdev0,id=net0,mac=42:42:42:00:00:0a,bus=pcie.0,addr=0x6 \
    -device virtio-balloon-pci,id=balloon0,bus=pcie.0,addr=0x8 \
    -vga none \
    -soundhw hda \
    -device virtio-rng-pci \
    -chardev stdio,id=seabios \
    -device isa-debugcon,iobase=0x402,chardev=seabios \
    -msg timestamp=on \
    >>/var/log/kvm/10-debian.stdout 2>>/var/log/kvm/10-debian.stderr &

For reference, the working Windows config:

LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
QEMU_AUDIO_DRV=pa /usr/bin/taskset -c 0-7 /usr/bin/qemu-system-x86_64 \
    -name 11-windows \
    -machine pc-i440fx-2.1,accel=kvm,usb=off \
    -cpu SandyBridge,+invtsc,+osxsave,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
    -object memory-backend-file,id=mem0,size=16G,mem-path=/dev/hugepages,share=off \
    -numa node,nodeid=0,memdev=mem0 \
    -m 16G \
    -realtime mlock=off \
    -smp 2,sockets=1,cores=2,threads=1 \
    -nographic -no-user-config -nodefaults -no-hpet \
    -rtc base=localtime,driftfix=slew \
    -global kvm-pit.lost_tick_policy=discard \
    -boot menu=off,strict=on \
    -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x3 \
    -device pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0x4 \
    -device pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0x5 \
    -device pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0x6 \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -drive file=/dev/Z-ssd/qemu/11b-win10,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
    -drive file=/dev/Z-ssd/data/games-blizzard,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native \
    -device virtio-scsi-pci,bus=pci.2,addr=0x1 \
    -device scsi-hd,drive=drive-virtio-disk0 \
    -device scsi-hd,drive=drive-virtio-disk1 \
    -netdev tap,id=netdev0,ifname=G42t11,script=no,downscript=no \
    -device virtio-net-pci,netdev=netdev0,id=net0,mac=42:42:42:00:00:0b,bus=pci.2,addr=0x2 \
    -device vfio-pci,host=00:1a.0,id=hostdev3,bus=pci.2,addr=0x4 \
    -device vfio-pci,host=00:1d.0,id=hostdev5,bus=pci.2,addr=0x6 \
    -device vfio-pci,host=03:00.0,id=hostdev6,bus=pci.5,multifunction=on,addr=0x1,x-vga=on,romfile=/root/roms/R7.240.176679.rom \
    -device vfio-pci,host=03:00.1,id=hostdev7,bus=pci.5,addr=0x1.0x1 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x7 \
    -vga none \
    -soundhw hda \
    -chardev stdio,id=seabios \
    -device isa-debugcon,iobase=0x402,chardev=seabios \
    -msg timestamp=on \
    >>/var/log/kvm/11b-win10.stdout 2>>/var/log/kvm/11b-win10.stderr &


The other GPU, the RX 480, I didn't manage to make work with anything.
Windows works until the point the driver kicks in, then it freezes the
host. If I don't install drivers for the GPU (or the network card, because
Windows will install something on its own), this one runs. Any ideas what
to try next?
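
One classic cause of a host freeze the moment the guest driver initializes
the card is the GPU sharing an IOMMU group with a device the host still
uses (the SAS HBA, for instance). Worth ruling out before the next attempt;
a quick shell loop to list the groups:

# print every PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done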

Config:

LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
QEMU_AUDIO_DRV=pa /usr/bin/taskset -c 4-7,20-23 /usr/bin/qemu-system-x86_64 \
    -name 12-wingame \
    -machine pc-q35-2.8,accel=kvm,usb=off \
    -cpu host \
    -object memory-backend-file,id=mem0,size=16G,mem-path=/dev/hugepages,share=off \
    -numa node,nodeid=0,memdev=mem0 \
    -m 16G \
    -realtime mlock=off \
    -smp sockets=1,cores=4,threads=2 \
    -nographic -no-user-config -nodefaults -no-hpet \
    -rtc base=localtime,driftfix=slew \
    -global kvm-pit.lost_tick_policy=discard \
    -boot d \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device piix3-usb-uhci,id=usb,bus=pcie.0,addr=0x7 \
    -drive file=/dev/Z-ssd/qemu/12-wingame,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
    -drive file=/dev/Z-ssd/data/games-blizzard,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native \
    -device virtio-scsi-pci,bus=pcie.0,addr=0x5 \
    -device scsi-hd,drive=drive-virtio-disk0 \
    -device scsi-hd,drive=drive-virtio-disk1 \
    -netdev tap,id=netdev0,ifname=V0300t12,script=no,downscript=no \
    -device virtio-net-pci,netdev=netdev0,id=net0,mac=42:42:42:00:00:0c,bus=pcie.0,addr=0x6 \
    -device vfio-pci,host=02:00.0,id=hostdev6,bus=pcie.0,multifunction=on,addr=0x2,x-vga=on \
    -device vfio-pci,host=02:00.1,id=hostdev7,bus=pcie.0,addr=0x2.0x1 \
    -device vfio-pci,host=07:00.0,id=hostdev2,bus=pcie.0,addr=0x3 \
    -device virtio-balloon-pci,id=balloon0,bus=pcie.0,addr=0x8 \
    -vga none \
    -soundhw hda \
    -device virtio-rng-pci \
    -chardev stdio,id=seabios \
    -device isa-debugcon,iobase=0x402,chardev=seabios \
    -msg timestamp=on \
    -drive file=/mnt/X-gzfs/backups/pools/C-nas/data/sysiso/win10n_1607.iso,index=0,media=cdrom \
    -drive file=/mnt/X-gzfs/backups/pools/C-nas/data/sysiso/virtio-win-0.1.118.iso,index=1,media=cdrom \
    >>/var/log/kvm/12-wingame.stdout 2>>/var/log/kvm/12-wingame.stderr &

Thanks in advance,
Gh.

Jiří Novák
Infrastructure Specialist

ACTUM / City Green Court
Hvězdova 1734/2c / 140 00 Praha 4 / Czech Republic
Mobile +420 737 910 508 / Reception +420 266 798 200
jiri novak actum cz / www.actum.cz




_______________________________________________
vfio-users mailing list
vfio-users redhat com
https://www.redhat.com/mailman/listinfo/vfio-users


