<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">I have deployed successfully on HP Blades and found this issue to be related to the number of interfaces that each blade presents through Ironic. In other words, Ironic may try to provision on a particular NIC that is different from the NIC the blade is booting from.<div class=""><br class=""></div><div class="">This was discussed here on the list. The approach is to run ironic introspection, then check that each node has only one NIC (the command name escapes me right now, but it was something with ironic), verify it is connected to the VLAN you want, and delete the ones that are not correct.</div><div class="">Once that is clear, you should be able to run the deployment command.</div><div class=""><br class=""></div><div class="">IB</div><div class=""><br class=""></div><div class=""><div apple-content-edited="true" class="">
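A sketch of that clean-up, assuming the python-ironicclient CLI of this era (the node and port UUIDs are placeholders):

```shell
# List the registered nodes, then the NICs (ports) Ironic holds for each:
ironic node-list
ironic node-port-list <node-uuid>

# If a node shows more than one port, keep the NIC on the provisioning
# VLAN and delete the rest, so Ironic cannot pick the wrong interface:
ironic port-delete <port-uuid>
```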
<div style="color: rgb(0, 0, 0); letter-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">__</div><div class="">Ignacio Bravo<br class="">LTG Federal, Inc</div><div class=""><a href="http://www.ltgfederal.com" class="">www.ltgfederal.com</a></div><div class=""><br class=""></div><div class=""><br class=""></div></div></div><div><blockquote type="cite" class=""><div class="">On Feb 18, 2016, at 11:02 AM, Charles Short <<a href="mailto:cems@ebi.ac.uk" class="">cems@ebi.ac.uk</a>> wrote:</div><br class="Apple-interchange-newline"><div class="">
  
    <meta content="text/html; charset=windows-1252" http-equiv="Content-Type" class="">
  
  <div bgcolor="#FFFFFF" text="#000000" class="">
    Hi,<br class="">
    <br class="">
    I have seen the same issues when deploying on HP Blades. I had
    chosen to deploy on a subset of blades to save time whilst testing.
    The error was caused by a rogue blade.<br class="">
    Previous attempts on a different set of blades in the same chassis
    had left one or more blades powered on, which were presenting
    duplicate IP addresses in the blade cluster and interfering with my
    new deployment.<br class="">
    Basically, check that all of your nodes are in the correct state:
    look in the iLO, cross-reference with Ironic, and check the power
    state.<br class="">
    <br class="">
    <br class="">
    HTH<br class="">
    <br class="">
    Charles<br class="">
    <br class="">
    <div class="moz-cite-prefix">On 14/10/2015 12:40, Udi Kalifon wrote:<br class="">
    </div>
    <blockquote cite="mid:CAMV_1to=v9FcDDYoapmqnbhC7uJ3mOAfM9XjMqQZc8cDVs2yrQ@mail.gmail.com" type="cite" class="">
      <div dir="ltr" class="">My overcloud deployment also hangs for 4 hours and
        then fails. This is what I got on the 1st run:<br class="">
        <div class=""><br class="">
          [stack@instack ~]$ openstack overcloud deploy --templates<br class="">
          Deploying templates in the directory
          /usr/share/openstack-tripleo-heat-templates<br class="">
          ERROR: Authentication failed. Please try again with option
          --include-password or export HEAT_INCLUDE_PASSWORD=1<br class="">
          Authentication required<br class="">
          <br class="">
          I am assuming the authentication error is due to the
          expiration of the token after 4 hours, and not because I
          forgot the rc file. I tried to run the deployment again and it
          failed after another 4 hours with a different error:<br class="">
          <br class="">
          [stack@instack ~]$ openstack overcloud deploy --templates<br class="">
          Deploying templates in the directory
          /usr/share/openstack-tripleo-heat-templates<br class="">
          Stack failed with status: resources.Controller: resources[0]:
          ResourceInError: resources.Controller: Went to status ERROR
          due to "Message: Exceeded maximum number of retries. Exceeded
          max scheduling attempts 3 for instance
          9eedda9e-f381-47d4-a883-0fe40db0eb5e. Last exception:
          [u'Traceback (most recent call last):\n', u'  File
          "/usr/lib/python2.7/site-packages/nova/compute/manager.py",
          line 1, Code: 500"<br class="">
          Heat Stack update failed.<br class="">
          <br class="">
        </div>
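          Worth noting that the first failure states its own workaround: re-run with the Heat password included, so the stack can re-authenticate after the initial token expires. A minimal sketch, assuming the usual stackrc location:

```shell
source ~/stackrc
export HEAT_INCLUDE_PASSWORD=1
openstack overcloud deploy --templates
```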
        <div class="">The failed resources are:<br class="">
        </div>
        <div class=""><br class="">
          heat resource-list -n 5 overcloud | egrep -v COMPLETE<br class="">
          | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name |<br class="">
          | Compute | aee2604f-2580-44c9-bc38-45046970fd63 | OS::Heat::ResourceGroup | UPDATE_FAILED | 2015-10-14T06:32:34 | overcloud |<br class="">
          | 0 | 2199c1c6-60ca-42a4-927c-8bf0fb8763b7 | OS::TripleO::Compute | UPDATE_FAILED | 2015-10-14T06:32:36 | overcloud-Compute-dq426vplp2nu |<br class="">
          | Controller | 2ae19a5f-f88c-4d8b-98ec-952657b70cd6 | OS::Heat::ResourceGroup | UPDATE_FAILED | 2015-10-14T06:32:36 | overcloud |<br class="">
          | 0 | 2fc3ed0c-da5c-45e4-a255-4b4a8ef58dd7 | OS::TripleO::Controller | UPDATE_FAILED | 2015-10-14T06:32:38 | overcloud-Controller-ktbqsolaqm4u |<br class="">
          | NovaCompute | 7938bbe0-ab97-499f-8859-15f903e7c09b | OS::Nova::Server | CREATE_FAILED | 2015-10-14T06:32:55 | overcloud-Compute-dq426vplp2nu-0-4acm6pstctor |<br class="">
          | Controller | c1cd6b72-ec0d-4c13-b21c-10d0f6c45788 | OS::Nova::Server | CREATE_FAILED | 2015-10-14T06:32:58 | overcloud-Controller-ktbqsolaqm4u-0-d76rtersrtyt |<br class="">
          <br class="">
          <br class="">
        </div>
        <div class="">I was unable to run resource-show or deployment-show on the
          failed resources, it kept complaining that those resources are
          not found.<br class="">
          <br class="">
        </div>
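        One likely cause of the "not found" errors: for resources inside a ResourceGroup, resource-show must be given the nested stack's name rather than <code>overcloud</code>. Using the names from the listing above, something like:

```shell
# The failed NovaCompute server lives in the innermost nested stack:
heat resource-show overcloud-Compute-dq426vplp2nu-0-4acm6pstctor NovaCompute

# Or walk down from the top stack and show only the failures:
heat resource-list -n 5 overcloud | grep FAILED
```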
        <div class="">Thanks,<br class="">
        </div>
        <div class="">Udi.<br class="">
        </div>
        <div class=""><br class="">
        </div>
      </div>
      <div class="gmail_extra"><br class="">
        <div class="gmail_quote">On Wed, Oct 14, 2015 at 11:16 AM, Tzach
          Shefi <span dir="ltr" class=""><<a moz-do-not-send="true" href="mailto:tshefi@redhat.com" target="_blank" class="">tshefi@redhat.com</a>></span>
          wrote:<br class="">
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr" class="">
              <div class="">
                <div class="">
                  <div class="">
                    <div class="">
                      <div class="">
                        <div class="">
                          <div class="">
                            <div class="">
                              <div class="">
                                <div class="">Hi  Sasha\Dan, <br class="">
                                  Yep that's my bug I opened yesterday
                                  about this.  <br class="">
                                </div>
                                <br class="">
                              </div>
                            </div>
                          </div>
                        </div>
                        sshd and the firewall rules look OK; I tested the
                        following:<br class="">
                      </div>
                      I can ssh into the virt host from my laptop as
                      root, checking the 10.X.X.X net<br class="">
                    </div>
                    I can also ssh from the instack VM to the virt host,
                    checking the 192.168.122.X net. <br class="">
                    <br class="">
                  </div>
                  Unless I should check ssh with another user; if so,
                  which one? <br class="">
                </div>
                I doubt the ssh user or firewall caused the problem, as the
                controller was installed successfully and it uses the
                same ssh virt power-on procedure. <br class="">
                <br class="">
                The deployment is still up and stuck; if anyone wants to
                take a look, contact me in private for access details. <br class="">
                <br class="">
              </div>
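              The failing power-on path from the nova-compute log quoted further down in the thread can be replayed by hand from the undercloud, which separates an ssh/credential problem from a libvirt one (the address and domain name are taken from that log):

```shell
# Can we reach libvirt on the virt host at all?
ssh root@192.168.122.1 "LC_ALL=C /usr/bin/virsh --connect qemu:///system list --all"

# Replay the exact command Ironic's pxe_ssh driver failed on:
ssh root@192.168.122.1 "LC_ALL=C /usr/bin/virsh --connect qemu:///system start baremetalbrbm_1"
```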
              <div class="">Will review/use  virt console, virt journal and
                timeout tips on next deployment.  <br class="">
                <br class="">
              </div>
              <div class="">Thanks<span class="HOEnZb"><font color="#888888" class=""><br class="">
                  </font></span></div>
              <span class="HOEnZb"><font color="#888888" class="">
                  <div class="">Tzach<br class="">
                  </div>
                  <div class="">
                    <div class="">
                      <div class=""><br class="">
                      </div>
                    </div>
                  </div>
                </font></span></div>
            <div class="HOEnZb">
              <div class="h5">
                <div class="gmail_extra"><br class="">
                  <div class="gmail_quote">On Wed, Oct 14, 2015 at 5:07
                    AM, Sasha Chuzhoy <span dir="ltr" class=""><<a moz-do-not-send="true" href="mailto:sasha@redhat.com" target="_blank" class=""></a><a class="moz-txt-link-abbreviated" href="mailto:sasha@redhat.com">sasha@redhat.com</a>></span>
                    wrote:<br class="">
                    <blockquote class="gmail_quote" style="margin:0 0 0
                      .8ex;border-left:1px #ccc solid;padding-left:1ex">I
                      hit the same (or similar) issue on my BM
                      environment, though I managed to complete the 1+1
                      deployment on VMs successfully.<br class="">
                      I see it's reported already: <a moz-do-not-send="true" href="https://bugzilla.redhat.com/show_bug.cgi?id=1271289" rel="noreferrer" target="_blank" class=""></a><a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1271289">https://bugzilla.redhat.com/show_bug.cgi?id=1271289</a><br class="">
                      <br class="">
                      Ran a deployment with:   openstack overcloud
                      deploy --templates --timeout 90 --compute-scale 3
                      --control-scale 1<br class="">
                      The deployment fails, and I see that all but
                      one of the overcloud nodes are still in BUILD status.<br class="">
                      <br class="">
                      [stack@undercloud ~]$ nova list<br class="">
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br class="">
                      | ID                                   | Name                    | Status | Task State | Power State | Networks            |<br class="">
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br class="">
                      | b15f499e-79ed-46b2-b990-878dbe6310b1 | overcloud-controller-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.23 |<br class="">
                      | 4877d14a-e34e-406b-8005-dad3d79f5bab | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |<br class="">
                      | 0fd1a7ed-367e-448e-8602-8564bf087e92 | overcloud-novacompute-1 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.21 |<br class="">
                      | 51630a7d-c140-47b9-a071-1f2fdb45f4b4 | overcloud-novacompute-2 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.22 |<br class="">
                      <br class="">
                      <br class="">
                      Will try to investigate further tomorrow.<br class="">
                      <br class="">
                      Best regards,<br class="">
                      Sasha Chuzhoy.<br class="">
                      <span class=""><br class="">
                        ----- Original Message -----<br class="">
                        > From: "Tzach Shefi" <<a moz-do-not-send="true" href="mailto:tshefi@redhat.com" target="_blank" class=""></a><a class="moz-txt-link-abbreviated" href="mailto:tshefi@redhat.com">tshefi@redhat.com</a>><br class="">
                        > To: "Dan Sneddon" <<a moz-do-not-send="true" href="mailto:dsneddon@redhat.com" target="_blank" class=""></a><a class="moz-txt-link-abbreviated" href="mailto:dsneddon@redhat.com">dsneddon@redhat.com</a>><br class="">
                        > Cc: <a moz-do-not-send="true" href="mailto:rdo-list@redhat.com" target="_blank" class="">rdo-list@redhat.com</a><br class="">
                      </span><span class="">> Sent: Tuesday, October 13, 2015
                        6:01:48 AM<br class="">
                        > Subject: Re: [Rdo-list] Overcloud deploy
                        stuck for a long time<br class="">
                        ><br class="">
                      </span>
                      <div class="">
                        <div class="">> So gave it a few more hours, on heat
                          resource nothing is failed only<br class="">
                          > create_complete and some init_complete.<br class="">
                          ><br class="">
                          > Nova show<br class="">
                          > | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 |<br class="">
                          > | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | BUILD | spawning | NOSTATE | ctlplane=192.0.2.9 |<br class="">
                          ><br class="">
                          ><br class="">
                          > nova show
                          7f9f4f52-3ee6-42d9-9275-ff88582dd6e7<br class="">
                          >
+--------------------------------------+----------------------------------------------------------+<br class="">
                          > | Property | Value |<br class="">
                          >
+--------------------------------------+----------------------------------------------------------+<br class="">
                          > | OS-DCF:diskConfig | MANUAL |<br class="">
                          > | OS-EXT-AZ:availability_zone | nova |<br class="">
                          > | OS-EXT-SRV-ATTR:host |
                          instack.localdomain |<br class="">
                          > | OS-EXT-SRV-ATTR:hypervisor_hostname |
                          4626bf90-7f95-4bd7-8bee-5f5b0a0981c6<br class="">
                          > | |<br class="">
                          > | OS-EXT-SRV-ATTR:instance_name |
                          instance-00000002 |<br class="">
                          > | OS-EXT-STS:power_state | 0 |<br class="">
                          > | OS-EXT-STS:task_state | spawning |<br class="">
                          > | OS-EXT-STS:vm_state | building |<br class="">
                          ><br class="">
                          > Checking nova log this is what I see:<br class="">
                          ><br class="">
                          > nova-compute.log:{"nodes":
                          [{"target_power_state": null, "links":
                          [{"href": "<br class="">
                          > <a moz-do-not-send="true" href="http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6" rel="noreferrer" target="_blank" class="">http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6</a>
                          ",<br class="">
                          > "rel": "self"}, {"href": "<br class="">
                          > <a moz-do-not-send="true" href="http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6" rel="noreferrer" target="_blank" class="">http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6</a>
                          ", "rel":<br class="">
                          > "bookmark"}], "extra": {}, "last_error":
                          " Failed to change power state to<br class="">
                          > 'power on'. Error: Failed to execute
                          command via SSH : LC_ALL=C<br class="">
                          > /usr/bin/virsh --connect <a href="qemu:///system" class="">qemu:///system</a>
                          start baremetalbrbm_1.",<br class="">
                          > "updated_at":
                          "2015-10-12T14:36:08+00:00",
                          "maintenance_reason": null,<br class="">
                          > "provision_state": "deploying",
                          "clean_step": {}, "uuid":<br class="">
                          > "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6",
                          "console_enabled": false,<br class="">
                          > "target_provision_state": "active",
                          "provision_updated_at":<br class="">
                          > "2015-10-12T14:35:18+00:00",
                          "power_state": "power off",<br class="">
                          > "inspection_started_at": null,
                          "inspection_finished_at": null,<br class="">
                          > "maintenance": false, "driver":
                          "pxe_ssh", "reservation": null,<br class="">
                          > "properties": {"memory_mb": "4096",
                          "cpu_arch": "x86_64", "local_gb": "40",<br class="">
                          > "cpus": "1", "capabilities":
                          "boot_option:local"}, "instance_uuid":<br class="">
                          > "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7",
                          "name": null, "driver_info":<br class="">
                          > {"ssh_username": "root", "deploy_kernel":<br class="">
                          > "94cc528d-d91f-4ca7-876e-2d8cbec66f1b",
                          "deploy_ramdisk":<br class="">
                          > "057d3b42-002a-4c24-bb3f-2032b8086108",
                          "ssh_key_contents": "-----BEGIN( I<br class="">
                          > removed key..)END RSA PRIVATE KEY-----",
                          "ssh_virt_type": "virsh",<br class="">
                          > "ssh_address": "192.168.122.1"},
                          "created_at": "2015-10-12T14:26:30+00:00",<br class="">
                          > "ports": [{"href": "<br class="">
                          > <a moz-do-not-send="true" href="http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports" rel="noreferrer" target="_blank" class="">http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports</a>
                          ",<br class="">
                          > "rel": "self"}, {"href": "<br class="">
                          > <a moz-do-not-send="true" href="http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports" rel="noreferrer" target="_blank" class="">http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports</a>
                          ",<br class="">
                          > "rel": "bookmark"}],
                          "driver_internal_info": {"clean_steps": null,<br class="">
                          > "root_uuid_or_disk_id":
                          "9ff90423-9d18-4dd1-ae96-a4466b52d9d9",<br class="">
                          > "is_whole_disk_image": false},
                          "instance_info": {"ramdisk":<br class="">
                          > "82639516-289d-4603-bf0e-8131fa75ec46",
                          "kernel":<br class="">
                          > "665ffcb0-2afe-4e04-8910-45b92826e328",
                          "root_gb": "40", "display_name":<br class="">
                          > "overcloud-novacompute-0",
                          "image_source":<br class="">
                          > "d99f460e-c6d9-4803-99e4-51347413f348",
                          "capabilities": "{\"boot_option\":<br class="">
                          > \"local\"}", "memory_mb": "4096",
                          "vcpus": "1", "deploy_key":<br class="">
                          > "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG",
                          "local_gb": "40", "configdrive":<br class="">
                          >
"H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma<br class="">
                          ><br class="">
                          ><br class="">
                          > Any ideas on how to resolve a stuck
                          spawning compute node? It hasn't<br class="">
                          > changed for a few hours now.<br class="">
                          ><br class="">
                          > Tzach<br class="">
                          ><br class="">
                          ><br class="">
                          > On Mon, Oct 12, 2015 at 11:25 PM, Dan
                          Sneddon < <a moz-do-not-send="true" href="mailto:dsneddon@redhat.com" target="_blank" class="">dsneddon@redhat.com</a> >
                          wrote:<br class="">
                          ><br class="">
                          ><br class="">
                          ><br class="">
                          > On 10/12/2015 08:10 AM, Tzach Shefi
                          wrote:<br class="">
                          > > Hi,<br class="">
                          > ><br class="">
                          > > Server running CentOS 7.1; the VM
                          running the undercloud got up to<br class="">
                          > > the overcloud deploy stage.<br class="">
                          > > It looks like it's stuck, with nothing
                          advancing for a while.<br class="">
                          > > Any ideas what to check?<br class="">
                          > ><br class="">
                          > > [stack@instack ~]$ openstack
                          overcloud deploy --templates<br class="">
                          > > Deploying templates in the directory<br class="">
                          > >
                          /usr/share/openstack-tripleo-heat-templates<br class="">
                          > > [91665.696658] device vnet2 entered
                          promiscuous mode<br class="">
                          > > [91665.781346] device vnet3 entered
                          promiscuous mode<br class="">
                          > > [91675.260324] kvm [71183]: vcpu0
                          disabled perfctr wrmsr: 0xc1 data 0xffff<br class="">
                          > > [91675.291232] kvm [71200]: vcpu0
                          disabled perfctr wrmsr: 0xc1 data 0xffff<br class="">
                          > > [91767.799404] kvm: zapping shadow
                          pages for mmio generation wraparound<br class="">
                          > > [91767.880480] kvm: zapping shadow
                          pages for mmio generation wraparound<br class="">
                          > > [91768.957761] device vnet2 left
                          promiscuous mode<br class="">
                          > > [91769.799446] device vnet3 left
                          promiscuous mode<br class="">
                          > > [91771.223273] device vnet3 entered
                          promiscuous mode<br class="">
                          > > [91771.232996] device vnet2 entered
                          promiscuous mode<br class="">
                          > > [91773.733967] kvm [72245]: vcpu0
                          disabled perfctr wrmsr: 0xc1 data 0xffff<br class="">
                          > > [91801.270510] device vnet2 left
                          promiscuous mode<br class="">
                          > ><br class="">
                          > ><br class="">
                          > > Thanks<br class="">
                          > > Tzach<br class="">
                          > ><br class="">
                          > ><br class="">
                          > >
                          _______________________________________________<br class="">
                          > > Rdo-list mailing list<br class="">
                          > > <a moz-do-not-send="true" href="mailto:Rdo-list@redhat.com" target="_blank" class="">Rdo-list@redhat.com</a><br class="">
                          > > <a moz-do-not-send="true" href="https://www.redhat.com/mailman/listinfo/rdo-list" rel="noreferrer" target="_blank" class="">https://www.redhat.com/mailman/listinfo/rdo-list</a><br class="">
                          > ><br class="">
                          > > To unsubscribe: <a moz-do-not-send="true" href="mailto:rdo-list-unsubscribe@redhat.com" target="_blank" class=""></a><a class="moz-txt-link-abbreviated" href="mailto:rdo-list-unsubscribe@redhat.com">rdo-list-unsubscribe@redhat.com</a><br class="">
                          > ><br class="">
                          ><br class="">
                          > You're going to need a more complete
                          command line than "openstack<br class="">
                          > overcloud deploy --templates". For
                          instance, if you are using VMs for<br class="">
                          > your overcloud nodes, you will need to
                          include "--libvirt-type qemu".<br class="">
                          > There are probably a couple of other
                          parameters that you will need.<br class="">
                          ><br class="">
                          > You can watch the deployment using this
                          command, which will show you<br class="">
                          > the progress:<br class="">
                          ><br class="">
                          > watch "heat resource-list -n 5 | grep -v
                          COMPLETE"<br class="">
                          ><br class="">
                          > You can also explore which resources have
                          failed:<br class="">
                          ><br class="">
                          > heat resource-list [-n 5]| grep FAILED<br class="">
                          ><br class="">
                          > And then look more closely at the failed
                          resources:<br class="">
                          ><br class="">
                          > heat resource-show overcloud
                          <resource><br class="">
                          ><br class="">
                          > There are some more complete
                          troubleshooting instructions here:<br class="">
                          ><br class="">
                          > <a moz-do-not-send="true" href="http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html" rel="noreferrer" target="_blank" class="">http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html</a><br class="">
                          ><br class="">
                          > --<br class="">
                          > Dan Sneddon | Principal OpenStack
                          Engineer<br class="">
                          > <a moz-do-not-send="true" href="mailto:dsneddon@redhat.com" target="_blank" class="">dsneddon@redhat.com</a> | <a moz-do-not-send="true" href="http://redhat.com/openstack" rel="noreferrer" target="_blank" class="">redhat.com/openstack</a><br class="">
                          > <a moz-do-not-send="true" href="tel:650.254.4025" value="+16502544025" target="_blank" class="">650.254.4025</a> |
                          dsneddon:irc @dxs:twitter<br class="">
                          ><br class="">
                          >
                          _______________________________________________<br class="">
                          > Rdo-list mailing list<br class="">
                          > <a moz-do-not-send="true" href="mailto:Rdo-list@redhat.com" target="_blank" class="">Rdo-list@redhat.com</a><br class="">
                          > <a moz-do-not-send="true" href="https://www.redhat.com/mailman/listinfo/rdo-list" rel="noreferrer" target="_blank" class="">https://www.redhat.com/mailman/listinfo/rdo-list</a><br class="">
                          ><br class="">
                          > To unsubscribe: <a moz-do-not-send="true" href="mailto:rdo-list-unsubscribe@redhat.com" target="_blank" class=""></a><a class="moz-txt-link-abbreviated" href="mailto:rdo-list-unsubscribe@redhat.com">rdo-list-unsubscribe@redhat.com</a><br class="">
                          ><br class="">
                          ><br class="">
                          ><br class="">
                          > --<br class="">
                          > Tzach Shefi<br class="">
                          > Quality Engineer, Redhat OSP<br class="">
                          > <a moz-do-not-send="true" href="tel:%2B972-54-4701080" value="+972544701080" target="_blank" class="">+972-54-4701080</a><br class="">
                          ><br class="">
                          ><br class="">
                        </div>
                      </div>
                    </blockquote>
                  </div>
                  <br class="">
                  <br clear="all" class="">
                  <br class="">
                  -- <br class="">
                  <div class="">
                    <div dir="ltr" class=""><font size="4" class=""><b class="">Tzach Shefi</b></font><br class="">
                      Quality Engineer, Redhat OSP<br class="">
                      <span class=""><a moz-do-not-send="true" href="callto:+972-52-4534729" target="_blank" class="">+972-54-4701080</a></span></div>
                  </div>
                </div>
              </div>
            </div>
            <br class="">
          </blockquote>
        </div>
        <br class="">
      </div>
      <br class="">
      <br class="">
    </blockquote>
    <br class="">
    <pre class="moz-signature" cols="72">-- 
Charles Short
Cloud Engineer
Virtualization and Cloud Team
European Bioinformatics Institute (EMBL-EBI)
Tel: +44 (0)1223 494205 </pre>
  </div>

</div></blockquote></div><br class=""></div></body></html>