[vfio-users] vfio fails Guest FreeBSD9.3 host Fedora 23

Alex Williamson alex.williamson at redhat.com
Fri May 27 18:29:39 UTC 2016


On Fri, 27 May 2016 13:20:40 -0400
chintu hetam <rometoroam at gmail.com> wrote:

> I am using 1G hugepages and, since I have a two-NUMA-node configuration, I
> am ensuring that VM memory is pinned to NUMA node 1 (because my bus is in
> NUMA node 1).
> 
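For reference, host-node binding of hugepage-backed guest memory is usually
expressed on the QEMU command line along these lines (a sketch only; the ids,
sizes and paths below are illustrative, not taken from your actual command
line):

    # Sketch: 32G of guest RAM backed by hugepages from /dev/hugepages,
    # bound to host NUMA node 1.  Adjust ids, sizes and paths to the setup.
    qemu-system-x86_64 -m 32768 \
        -object memory-backend-file,id=mem0,size=32G,mem-path=/dev/hugepages,prealloc=on,policy=bind,host-nodes=1 \
        -numa node,memdev=mem0
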
> [root at localhost vcr]# cat /proc/meminfo
> MemTotal:       396231416 kB
> MemFree:        272286932 kB
> MemAvailable:   284401916 kB
> Buffers:            2508 kB
> Cached:         11876580 kB
> SwapCached:            0 kB
> Active:         11108860 kB
> Inactive:       10023144 kB
> Active(anon):    9261072 kB
> Inactive(anon):     9464 kB
> Active(file):    1847788 kB
> Inactive(file): 10013680 kB
> Unevictable:       16372 kB
> Mlocked:           16372 kB
> SwapTotal:       4194300 kB
> SwapFree:        4194300 kB
> Dirty:               256 kB
> Writeback:             0 kB
> AnonPages:       9277540 kB
> Mapped:           268960 kB
> Shmem:             11248 kB
> Slab:             677860 kB
> SReclaimable:     499240 kB
> SUnreclaim:       178620 kB
> KernelStack:       16368 kB
> PageTables:        46688 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:    151978360 kB
> Committed_AS:   11918316 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:           0 kB
> VmallocChunk:          0 kB
> HardwareCorrupted:     0 kB
> AnonHugePages:   8521728 kB
> CmaTotal:              0 kB
> CmaFree:               0 kB
> HugePages_Total:      96
> HugePages_Free:       94
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> [root at localhost vcr]# cat /sys/class/net/enp170s0f0/device/numa_node
> 1
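
Note that the global HugePages_Free counter in /proc/meminfo doesn't show
which node the free 1G pages live on; the per-node pools are visible through
the standard sysfs paths, e.g.:

    # per-node 1G hugepage pools ("1048576kB" is the 1G page-size directory)
    cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
    cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages
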
> JFYI,
> [root at localhost vcr]# lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                40
> On-line CPU(s) list:   0-39
> Thread(s) per core:    2
> Core(s) per socket:    10
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 62
> Model name:            Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
> Stepping:              4
> CPU MHz:               1267.109
> CPU max MHz:           3600.0000
> CPU min MHz:           1200.0000
> BogoMIPS:              5593.24
> Virtualization:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              256K
> L3 cache:              25600K
> NUMA node0 CPU(s):     0-9,20-29
> NUMA node1 CPU(s):     10-19,30-39
> Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
> nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
> ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
> popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb
> pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
> xsaveopt
> 
> Hugepagesize:    1048576 kB
> DirectMap4k:      335552 kB
> DirectMap2M:     5875712 kB
> DirectMap1G:    398458880 kB
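
If the vCPUs should sit on node 1 as well, the CPU list above (10-19,30-39)
can be used directly; with plain QEMU that is commonly done with taskset or
numactl (the command below is only a sketch, to be combined with the rest of
the options):

    # confine the QEMU process to the node 1 CPUs listed above
    taskset -c 10-19,30-39 qemu-system-x86_64 -m 32768   # plus the rest of the options
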
> 
> 
> On Fri, May 27, 2016 at 12:12 PM, Alex Williamson <
> alex.williamson at redhat.com> wrote:  
> 
> > On Fri, 27 May 2016 11:57:38 -0400
> > chintu hetam <rometoroam at gmail.com> wrote:
> >  
> > > Hi Alex,
> > >
> > > Thank you for the quick response.
> > >
> > > i have 370+G RAM in the system
> > >
> > > [root at localhost vcr]# cat /proc/meminfo
> > > MemTotal:       396231416 kB
> > > MemFree:        272301316 kB  
> >
> > The QEMU command line indicates you're using hugepages, how many
> > hugepages do you have available?  It should be at least 16384 for a 32G
> > guest with 2MB pages.  Try increasing or not using hugepages so
                                          ^^^^^^^^^^^^^^^^^^^^^^

> > we can attempt to isolate whether that's the problem.  Thanks,
> >
> > Alex
> >  
> 
> 
> 
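
For the arithmetic behind the 16384 figure, and one way to test a run without
hugepages (the QEMU fragment is illustrative, not the exact command line used
here):

    # hugepages needed to back 32G of guest RAM
    echo $(( 32 * 1024 / 2 ))       # 2MB pages  -> 16384
    echo $(( 32 * 1024 / 1024 ))    # 1GB pages  -> 32
    # to test without hugepages, back the guest with plain anonymous RAM, e.g.:
    #   -object memory-backend-ram,id=mem0,size=32G -numa node,memdev=mem0

With 1G pages and 94 shown free, the pool itself looks large enough for a 32G
guest, so a run without hugepage backing mainly helps rule that part in or out.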



