[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [rdo-list] Issue with assigning multiple VFs to VM instance



Which method are you using?

The Neutron SR-IOV plugin? Nova (via flavor definition)? Or Neutron without the SR-IOV plugin (which I think is best)?

Thanks,

 

From: rdo-list-bounces@redhat.com [mailto:rdo-list-bounces@redhat.com] On Behalf Of Chinmaya Dwibedy
Sent: Monday, June 20, 2016 14:53
To: rdo-list@redhat.com
Subject: Re: [rdo-list] Issue with assigning multiple VFs to VM instance

 

 

Hi ,

 

Can anyone please suggest how to assign multiple VF devices to a VM instance using the OpenStack Mitaka release? Thank you in advance for your time and support.

 

Regards,

Chinmaya

 

On Thu, Jun 16, 2016 at 5:42 PM, Chinmaya Dwibedy <ckdwibedy@gmail.com> wrote:

Hi All,

 

I have installed the OpenStack Mitaka release on a CentOS 7 system. It has two Intel QAT devices, with 32 VF devices available per QAT (DH895xCC) device.

 

[root@localhost nova(keystone_admin)]# lspci -nn | grep 0435

83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]

88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]

[root@localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs

32

[root@localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs

32

[root@localhost nova(keystone_admin)]#

 

I changed the Nova configuration (as shown below) to expose the VFs to instances via PCI passthrough.

 

pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-VF"}

pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}]
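As a quick sanity check (my own aside, not from the thread): both option values must be valid JSON before nova-compute will accept them, and the alias and whitelist must agree on vendor/product IDs so whitelisted devices can satisfy the alias requested by the flavor. A minimal sketch, assuming the values are copied verbatim from nova.conf:

```python
import json

# Option values as they should appear in nova.conf
pci_alias = '{"name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-VF"}'
pci_passthrough_whitelist = '[{"vendor_id": "8086", "product_id": "0443"}]'

alias = json.loads(pci_alias)                      # raises ValueError if malformed
whitelist = json.loads(pci_passthrough_whitelist)  # e.g. a stray brace would fail here

# The whitelist entries should match the alias, or the scheduler can
# never satisfy a flavor that requests this alias.
assert all(d["vendor_id"] == alias["vendor_id"] and
           d["product_id"] == alias["product_id"] for d in whitelist)
print("nova.conf PCI options parse and agree")
```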

Restarted the nova-compute, nova-api, and nova-scheduler services:

service openstack-nova-compute restart;service openstack-nova-api restart;systemctl restart openstack-nova-scheduler;

scheduler_available_filters=nova.scheduler.filters.all_filters

scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
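For intuition, here is a simplified sketch (my own illustration, not Nova's actual PciPassthroughFilter code) of what the filter checks: for each requested (spec, count) pair, some PCI pool on the host matching the spec must have enough free devices. The pools below are taken from the nova-compute.log output further down.

```python
# Simplified matcher: a host passes if every requested spec can be
# satisfied by pools whose fields match all keys of that spec.
def host_passes(pools, requests):
    for spec, count in requests:
        available = sum(p["count"] for p in pools
                        if all(p.get(k) == v for k, v in spec.items()))
        if available < count:
            return False
    return True

# Pools as reported by nova-compute's resource tracker below
pools = [
    {"vendor_id": "8086", "product_id": "10fb", "dev_type": "type-PF", "count": 0},
    {"vendor_id": "8086", "product_id": "0443", "dev_type": "type-VF", "count": 63},
]

# A request for two VFs matching the alias's vendor/product should pass:
print(host_passes(pools, [({"vendor_id": "8086", "product_id": "0443"}, 2)]))  # True

# Note: in this simplistic matcher, a spec keyed "device_type" would never
# match pools tagged "dev_type", so the host would be filtered out:
print(host_passes(pools, [({"vendor_id": "8086", "device_type": "type-VF"}, 2)]))  # False
```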

 

Thereafter, all 64 available VFs show up in the Nova database (select * from pci_devices). I set flavor 4 to pass two VFs to instances.

 

[root@localhost nova(keystone_admin)]# nova flavor-show 4

+----------------------------+------------------------------------------------------------+

| Property                   | Value                                                      |

+----------------------------+------------------------------------------------------------+

| OS-FLV-DISABLED:disabled   | False                                                      |

| OS-FLV-EXT-DATA:ephemeral  | 0                                                          |

| disk                       | 80                                                         |

| extra_specs                | {"pci_passthrough:alias": "QuickAssist:2"} |

| id                         | 4                                                          |

| name                       | m1.large                                                   |

| os-flavor-access:is_public | True                                                       |

| ram                        | 8192                                                       |

| rxtx_factor                | 1.0                                                        |

| swap                       |                                                            |

| vcpus                      | 4                                                          |

+----------------------------+------------------------------------------------------------+

[root@localhost nova(keystone_admin)]#

 

Also, when I launch an instance using this new flavor, it goes into an error state:

 

nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST

 

 

Here is the output of nova-conductor.log:

 

2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.

Traceback (most recent call last):

 

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner

    return func(*args, **kwargs)

 

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations

    dests = self.driver.select_destinations(ctxt, spec_obj)

 

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations

    raise exception.NoValidHost(reason=reason)

 

NoValidHost: No valid host was found. There are not enough hosts available.

 

Here is the output of nova-compute.log:

 

2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus: 36, total allocated vcpus: 16

2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view: name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB used_disk=320GB total_vcpus=36 used_vcpus=16 pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'), PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')]

2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record updated for localhost:localhost

 

Here is the output of nova-scheduler.log:

 

2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (-141 GB > -271 GB)

2016-06-16 07:55:34.637 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts

2016-06-16 07:55:34.638 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. Filter results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)']

2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced instances from host 'localhost'.

2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced instances from host 'localhost'.

 

Note that if I set the flavor key as (#nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1"), a single VF is assigned to the VM instance. I think multiple VFs can be assigned per VM. Can anyone please suggest where I am going wrong and how to solve this? Thank you in advance for your support and help.
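For reference, the "pci_passthrough:alias" extra spec value is a comma-separated list of alias:count pairs. A rough sketch of how "QuickAssist:2" decomposes (my own parsing for illustration, not Nova's code; a missing count is assumed to default to 1):

```python
# Decompose a "pci_passthrough:alias" extra-spec value into (name, count) pairs.
def parse_alias_spec(value):
    requests = []
    for part in value.split(","):
        name, _, count = part.partition(":")
        requests.append((name.strip(), int(count or 1)))
    return requests

print(parse_alias_spec("QuickAssist:2"))  # [('QuickAssist', 2)]
```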


Regards,

Chinmaya

 

