[Rdo-list] OS Liberty + Ceph Hammer: Block Device Mapping is Invalid.

Kevin rdo at dolphin-it.de
Tue Dec 15 23:20:39 UTC 2015



Hi David,

thanks for your reply! I just fixed some problems in my cinder.conf and nova.conf, and it is working now.

The solution to my problem was to create a new config section [ceph] in cinder.conf and set it as the default backend. My installation was still using an old LVM backend that I had commented out in the config.
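For anyone hitting the same problem, the relevant part of my cinder.conf now looks roughly like this (pool names, user and secret UUID are the example values from the Ceph HowTo, not necessarily yours):

```ini
# cinder.conf -- enable the Ceph RBD backend and make it the default
[DEFAULT]
enabled_backends = ceph
default_volume_type = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <your-libvirt-secret-uuid>
```

Note that default_volume_type only works if a matching volume type exists, e.g. created with "cinder type-create ceph" and "cinder type-key ceph set volume_backend_name=ceph"; if the scheduler still picks the dead LVM backend, volume creation fails and nova reports "Block Device Mapping is Invalid."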

Thanks for your help!

Kind regards,
Kevin



Re: [Rdo-list] OS Liberty + Ceph Hammer: Block Device Mapping is Invalid. (13-Dec-2015 17:01)
From: David Moreau Simard
To: Kevin
Cc: rdo-list


What format is the image you are creating a volume from? Are you able to reproduce the problem with both qcow2 and raw formats?
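You can check the recorded format, and reproduce the failure outside of nova, with something like this (the image ID is a placeholder):

```shell
# What disk format does glance think the image has?
glance image-show <image-id> | grep disk_format

# Create a bootable volume from the image directly with cinder.
# If this goes to "error", the problem is on the cinder side and
# nova's "Block Device Mapping is Invalid" is just the symptom.
cinder create --image-id <image-id> --display-name test-boot 5
cinder show test-boot | grep status
```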
David Moreau Simard
Senior Software Engineer | Openstack RDO
dmsimard = [irc, github, twitter]
On Dec 13, 2015 4:47 AM, "Kevin" <rdo at dolphin-it.de> wrote:



Can someone help me?
Help would be highly appreciated ;-)


Last message on OpenStack mailing list:

Dear OpenStack-users,

I just installed my first multi-node OpenStack setup with Ceph as the storage backend.
After configuring cinder, nova and glance as described in the Ceph HowTo (http://docs.ceph.com/docs/master/rbd/rbd-openstack/), one blocker remains for me:
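For reference, these are roughly the settings I applied from that HowTo (pool names, users and the secret UUID are the document's example values, not necessarily correct for every setup):

```ini
# nova.conf on the compute nodes -- run instance disks on RBD
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <your-libvirt-secret-uuid>

# glance-api.conf -- store images in RBD
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```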

When creating a new instance based on a bootable glance image (same ceph cluster), it fails with:

Dashboard:
> Block Device Mapping is Invalid.

nova-compute.log (http://pastebin.com/bKfEijDu):
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] Traceback (most recent call last):
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1738, in _prep_block_device
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     wait_func=self._await_block_device_map_created)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 476, in attach_block_devices
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     map(_log_and_attach, block_device_mapping)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 474, in _log_and_attach
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     bdm.attach(*attach_args, **attach_kwargs)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 385, in attach
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     self._call_wait_func(context, wait_func, volume_api, vol['id'])
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 344, in _call_wait_func
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     {'volume_id': volume_id, 'exc': exc})
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     six.reraise(self.type_, self.value, self.tb)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 335, in _call_wait_func
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     wait_func(context, volume_id)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1426, in _await_block_device_map_created
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     volume_status=volume_status)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] VolumeNotCreated: Volume eba9ed20-09b1-44fe-920e-de8b6044500d did not finish being created even after we waited 0 seconds or 1 attempts. And its status is error.
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [req-8a7e1c2c-09ea-4c10-acb3-2716e04fe214 051f7eb0c4df40dda84a69d40ee86a48 3c297aff8cb44e618fb88356a2dd836b - - -] [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] Build of instance 83677788-eafc-4d9c-9f38-3cad8030ecd3 aborted: Block Device Mapping is Invalid.
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] Traceback (most recent call last):
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1905, in _do_build_and_run_instance
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     filter_properties)
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2025, in _build_and_run_instance
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     'create.error', fault=e)
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     six.reraise(self.type_, self.value, self.tb)
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1996, in _build_and_run_instance
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     block_device_mapping) as resources:
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     return self.gen.next()
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2143, in _build_resources
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]     reason=e.format_message())
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] BuildAbortException: Build of instance 83677788-eafc-4d9c-9f38-3cad8030ecd3 aborted: Block Device Mapping is Invalid.
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]

Glance seems to work well; I was able to upload images.
Creating non-bootable volumes seems to work, but as soon as I try to make one bootable, it fails.

This seems to be related to known threads I found online, but the fix mentioned there was merged long before Liberty, so I am now stuck at this point.

How can I fix this problem?

Thanks.

Kind regards
Kevin


_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

