I ran into an error today that I hadn’t seen before. My vSphere 5.0 cluster displayed the message “DRS invocation not completed” on the Summary tab and, I noticed, had stopped moving VMs around automatically too.


I tried changing some of the DRS settings and running DRS manually from the cluster’s DRS tab to see if that would get things going, but to no avail.

Interestingly, I couldn’t find any mention of the message on the VMware KB site, or anything useful via Google. I was tempted to turn DRS off completely and then re-enable it, but that would have removed my resource pools.

It was then that I noticed that some of the hosts weren’t reporting any memory or CPU utilisation, even though I knew they were hosting VMs.


As an experiment, I tried disconnecting and then reconnecting these hosts in turn. Once they were reconnected, I started seeing DRS-initiated vMotions occur to rebalance the cluster, and the message disappeared from the cluster’s Summary tab.

So, I’m not sure why it happened, but a simple, non-disruptive solution fixed it.
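For anyone who’d rather not disconnect hosts from vCenter, a couple of the comments below suggest restarting the host’s management agents over SSH instead. Here’s a rough sketch of that approach (assuming SSH is enabled on the host; the init-script paths apply to ESXi 5.x):

```shell
# Restart the ESXi management agents (hostd first, then vpxa) so the
# host re-reports its CPU/memory usage to vCenter. Run this over SSH
# on the affected ESXi host; the init scripts below exist on ESXi 5.x.
for svc in hostd vpxa; do
    if [ -x "/etc/init.d/$svc" ]; then
        "/etc/init.d/$svc" restart
    else
        echo "skipping $svc: /etc/init.d/$svc not found (not an ESXi host?)"
    fi
done
```

“services.sh restart”, mentioned further down the thread, is the heavier-handed equivalent that bounces all the management services at once.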

Just thought I’d share…

19 comments on “Fixing “DRS Invocation Not Completed””

  1. Pingback: How To Fix Drs Invocation Error in Windows

  2. Jimmy Fa-Si-Oen Reply

    Hi Michael, thanks for this post. I might have an alternative to the disconnect/reconnect.

    You could start an SSH session to the affected host and run:

    /etc/init.d/hostd restart

    and right after that:

    /etc/init.d/vpxa restart

    This will cause the agents to report the actual resources in use by the ESXi host.

    This is also a non-disruptive method.

  3. JFM Reply

    Had the same issue with 5.1.0 build 1312873… removing and re-adding fixed my issue… but when using dvSwitches, you must migrate off the dvSwitch, remove the host from inventory, re-add it to inventory, then migrate back to the dvSwitch.

    Cheers !

  4. bminor Reply

    You can also just SSH into the affected host and execute “services.sh restart” at the command prompt. It will restart all management agents and services for that host. This should resolve any connectivity issues the host may be experiencing, such as the Storage Views tab not populating or other resources not being visible.

  5. sms Reply

    We faced the same issue after our vCenter DB server crashed. Later we restarted the vCenter server and it fixed the issue.

  6. Katkaw Reply

    Happened to me yesterday; I fixed it by restarting the management agents. Still, I wonder what causes this. Does anyone know?

  7. Rapidparts, Inc. Reply

    Had this happen when vCenter’s C: drive filled up and the services crashed. A disconnect and reconnect worked. Thank you.

  8. Scott Larsen Reply

    I just had this symptom occur on my production ESXi 5.1.0a cluster (build 914609, with all updates through Dec 20, 2012 applied), managed by vCenter 5.1.0b… so it seems to be still happening, even with the latest code. Just about to try the above workaround….

  9. jeremyjbowman Reply

    And this is why I love blogging! You even read solutions to problems on your own blog. Go vSpecialist! :0) (BTW – I’ve seen this and fixed it with ESXi 5.1.0 build 799733.) Cheers, Jer.

  10. justpaul Reply

    We had the same issue affecting our DRS cluster, made up of ESXi 5.1.0 build 838463 servers. Removing and re-adding the hosts resolved it. Thanks.

  11. rhodenator Reply

    Are you running ESXi 5.0 GA (2011-08-24 / build 469512)? A couple of our clients had an almost identical issue with the base release (and even with the first patch released after 5.0 GA); once we took them to U1 the problem went away. The issue we saw was with some standalone hosts, so DRS wasn’t present. However, the CPU and RAM would show as empty or incorrect (less RAM than we knew was allocated). So, as a suggestion, if you aren’t already on ESXi 5.0 U2, try going to it.

    • vspecialist Reply

      Thanks for getting in touch. I wonder if the similar symptoms could be facets of the same issue?

      The environment in question isn’t quite up to U1 yet. I had planned to update it to U2 soon anyway but it’s now a little higher on the priority list 🙂

      • Paul Sheard Reply

        Michael, this issue has just exhibited itself on a production ESXi 5.1.0 build 914609 cluster. I ran “services.sh restart” to clear it up.

