
VMware ESX 4.0, Patch ESX400-201105201-UG: Updates the VMware ESX 4.0 Core and CIM components


Release date: May 05, 2011

Patch Classification
Build Information
For build information, see KB 1031732.
Also see KB 1012514.
Host Reboot Required
Virtual Machine Migration or Shutdown Required
PRs Fixed
454804, 469941, 471479, 472943, 474939, 478490, 488010, 492120, 504084, 510150, 515093, 515397, 517392, 519990, 530420, 531651, 532369, 532459, 532876, 533907, 534682, 535833, 537552, 547440, 547805, 548060, 551064, 551252, 554097, 556538, 557156, 558155, 560002, 564029, 564152, 565001, 567922, 568210, 572093, 572180, 575766, 578895, 585115, 593672, 596439, 598817, 599546, 599759, 605493, 605963, 606638, 606647, 610111, 612179, 614612, 614821, 615891, 616721, 620335, 622274, 627780, 629160, 630687, 634065, 634239, 635873, 636723, 637636, 671643
Affected Hardware
SGI InfiniteStorage 4000, SGI InfiniteStorage 4100, SGI InfiniteStorage 4600, VMXNET3 NIC
Affected Software
esxtop and resxtop, Snapshot Manager, wsman, Small-Footprint CIM Broker daemon, LSI storelib library, vmkiscsi-tool utility, VMW_SATP_ALUA plug-ins, SVGA driver, VMware Tools, Microsoft SQL Server, Network Driver Interface Specification
VIBs Included
  • vmware-esx-apps
  • vmware-esx-backuptools
  • vmware-esx-cim
  • vmware-esx-docs
  • vmware-esx-drivers-vmklinux-vmklinux
  • vmware-esx-esxcli
  • vmware-esx-esxupdate
  • vmware-esx-guest-install
  • vmware-esx-ima-qla4xxx
  • vmware-esx-iscsi
  • vmware-esx-lnxcfg
  • vmware-esx-lsi
  • vmware-esx-microcode
  • vmware-esx-nmp
  • vmware-esx-perftools
  • vmware-esx-scripts
  • vmware-esx-srvrmgmt
  • vmware-esx-tools
  • vmware-esx-uwlibs
  • vmware-esx-vmkctl
  • vmware-esx-vmkernel64
  • vmware-esx-vmnixmod
  • vmware-esx-vmx
  • vmware-hostd-esx
  • kernel
Related CVE numbers



Summaries and Symptoms

This patch resolves the following issues:
  • The Performance chart network transmit and receive statistics of a virtual machine connected to the Distributed Virtual Switch (DVS) are interchanged and incorrectly displayed.

  • If you configure the traffic shaping value on a vDS or vSwitch to greater than 4Gbps, the value is reset to below 4Gbps after you restart the ESX host. This issue causes the traffic shaper to shape traffic using much lower values, resulting in very low network bandwidth. For example, if you set the traffic shaping to a maximum bandwidth of 6Gbps, the value changes to about 1.9Gbps after you restart the ESX host.

  • By default, snapshots are created in the virtual machine's directory. However, if you specify a custom directory for snapshots, snapshot delta files might remain in that directory after you delete the snapshots. These redundant files eventually fill up disk space and must be deleted manually.

  • An ESX host might fail with a purple diagnostic screen and display error messages similar to the following:

    21:04:02:05.579 cpu10:4119)WARNING: Fil3: 10730: Found invalid object on 4a818bab-b4240ea4-5b2f-00237de12408 expected
    21:04:02:05.579 cpu10:4119)FSS: 662: Failed to get object f530 28 2 4a818bab b4240ea4 23005b2f 824e17d 4 1 0 0 0 0 0 :Not found
    21:04:02:05.579 cpu0:4096)VMNIX: VMKFS: 2521: status = -2 

    This issue occurs when a VMFS volume has a corrupt address in the file descriptor. 

  • A virtual machine command such as PowerOn that you issue through hostd immediately after a virtual machine powers off might fail with an error message similar to the following:

    A specified parameter was not correct

    An error message similar to the following might be written to the vCenter log:

    [2009-11-16 15:06:09.266 01756 error 'App'] [vm.powerOn] Received unexpected exception

  • After you install VMware Tools, VM Memory and VM Processor might not appear in the performance counters list in the Windows Performance Monitor (Perfmon). Performing an upgrade or repair of VMware Tools does not resolve this problem. After you install this patch, you can upgrade or repair VMware Tools to resolve the issue.

  • Resetting the storage processor of the HP MSA2012fc storage array causes the ESX/ESXi native multipath driver (NMP) module to send alerts or critical entries to vmkernel logs. These alert messages indicate that the physical media has changed for the device. However, these messages do not apply to all LUN types. They are only critical for data LUNs but do not apply to management LUNs.

  • On blade servers running with ESX, vCenter Server incorrectly reports the service tag of the blade chassis instead of the blade server's service tag. When a blade server is managed by vCenter Server, the service tag number is listed in the System Section in vCenter Server > Configuration tab > Processors. This issue is reported on Dell and IBM blade servers.
    This issue occurs due to the incorrect value of the Fixed CIM OMC_Chassis instance SerialNumber property.

  • In this patch release, the PVSCSI driver is updated for the following Windows guest operating systems: Windows XP (32/64-bit), Windows 2003 (32/64-bit), Windows Vista (32/64-bit), Windows 2008 RTM (32/64-bit), Windows 7 (32/64-bit), and Windows 2008 R2 (64-bit).

  • During storage rescan operations, some virtual machines stop responding when any LUN on the host is in an all-paths-down (APD) state. For more information, see KB 1016626. To work around the issue on an ESX host without this patch, you must manually set the advanced configuration option /VMFS3/FailVolumeOpenIfAPD to 1 before rescanning, and then reset it to 0 after completing the rescan. This issue is resolved by applying this patch: the workaround of setting and resetting the advanced configuration option is no longer needed, and virtual machines on non-APD volumes do not fail during a rescan operation, even if some LUNs are in an all-paths-down state.
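
The pre-patch workaround described above can be sketched as the following command sequence on the ESX service console. This applies only to hosts without this patch; the option path is taken from the text above.

```shell
# Workaround for pre-patch hosts only: allow volume opens to fail fast
# while some LUNs are all-paths-down, then restore the default afterwards.
esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD   # set before the rescan

# ... perform the storage rescan ...

esxcfg-advcfg -s 0 /VMFS3/FailVolumeOpenIfAPD   # reset after the rescan completes
```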

  • Target information for LUNs is sometimes not displayed in the vCenter Server UI. In releases earlier than ESX 4.0 Update 3, some iSCSI LUNs do not show the target information. To view this information in the Configuration tab, perform the following steps:
    • Click Storage Adapters under Hardware.
    • Click iSCSI Host Bus Adapter in the Storage Adapters pane.
    • Click Paths in the Details pane. 

  • ESX hosts might log messages similar to the following in the VMkernel log files for LUNs not mapped to ESX hosts:
    0:22:30:03.046 cpu8:4315)ScsiScan: 106: Path 'vmhba0:C0:T0:L0': Peripheral qualifier 0x1 not supported.
    Such messages are logged when ESX hosts start, when you initiate a rescan operation of the storage arrays from the vSphere Client, or every 5 minutes after ESX hosts start.

  • An ESX host does not consider the memory reservation while calculating the provisioned space of a powered-off virtual machine. As a result, the vSphere Client might display a discrepancy in the provisioned space values while the virtual machine is powered-on or powered-off. 

  • An ESX host cannot identify the esxconsole.vmdk file in which the service console resides. You see the following symptoms for this issue:
    • An ESX 4.0 host becomes unresponsive when it is put into maintenance mode.
    • Rebooting the host fails.
    • On the ESX console, you see the error: VSD mount /bin/sh: can't access TTY; job control turned off.
    • The ESX host does not boot and drops into Troubleshooting (busybox) mode. The last lines in the /var/log/messages log file are similar to: 
      sysboot: Getting '/boot/cosvdmk' parameter from esx.conf 
      sysboot: COS VMDK Specified in esx.conf: /vmfs/volumes/4b27ec62-93ec3816-0475-00215aaf882a/esxconsole-4b27e9e3-20ee-69d7-ae11-00215aaf882a/esxconsole.vmdk 
      sysboot: 66.vsd-mount returned critical failure 
      sysboot: Executing 'chvt 1'

  • When you create a virtual disk (.vmdk file) with a large size, for example, more than 1TB, on NFS storage, the creation process might fail with an error: A general system error occurred: Failed to create disk: Error creating disk. This issue occurs when the NFS client does not wait long enough for the NFS storage array to initialize the virtual disk after the RPC parameter of the NFS client times out. By default, the timeout value is 10 seconds. This patch provides a configuration option to tune the RPC timeout parameter by using the esxcfg-advcfg -s /NFS/SetAttrRPCTimeout command.
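
For example, the new option can be raised from its 10-second default on the service console. The 30-second value below is only an illustrative choice, not a recommendation from this patch.

```shell
# Raise the NFS SetAttr RPC timeout (default 10 seconds) so that large
# virtual-disk creation on slow NFS arrays has time to complete.
# The value 30 is an illustrative choice, not a documented recommendation.
esxcfg-advcfg -s 30 /NFS/SetAttrRPCTimeout

# Verify the current value:
esxcfg-advcfg -g /NFS/SetAttrRPCTimeout
```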

  • If you remove a virtual machine snapshot, the VMware host agent service might fail and display a backtrace similar to the following:
    [2010-02-23 09:26:36.463 F6525B90 error 'App']
    Exception: Assert Failed: "_config != __null && _view != __null" @ bora/vim/hostd/vmsvc/vmSnapshot.cpp:1494

    This issue occurs because the -aux.xml file located in the same directory as the virtual machine configuration file is empty. When a virtual machine is created or registered on a host, the contents of the -aux.xml file are read and the _view object is populated. If the XML file is empty, the _view object is not populated, which results in an error when consolidating the snapshot.

  • ESX hosts cannot revert to a previous snapshot after upgrading from ESX 3.5 Update 4 to ESX 4.0 Update 3, and the following message might be displayed in vCenter Server: The features supported by the processor(s) in this machine are different from the features supported by the processor(s) in the machine on which the checkpoint was saved. Please try to resume the snapshot on a machine where the processors have the same features. 
    This issue might occur when you create virtual machines on ESX 3.0 hosts, perform vMotion and suspend virtual machines on ESX 3.5 hosts, and resume them on ESX 4.x hosts. This issue is resolved by applying this patch. The error message does not appear. You can revert to snapshots created on ESX 3.5 hosts, and resume the virtual machines on ESX 4.x hosts. 

  • During storage LUN path failover, if you perform any virtual machine operation that causes delta disk meta data updates such as creating or deleting snapshots, the ESX host might fail with a purple diagnostic screen.

  • An issue in the Virtual Machine Interface (VMI) timer causes timer interrupts to be delivered to the guest operating system at an excessive rate. This issue might occur after a vMotion migration of a virtual machine that was up for a relatively long time, such as for one hundred days. 

  • Creation of quiesced snapshots might not work on non-English versions of Microsoft Windows guest operating systems
    The issue occurs when a Windows known folder path contains non-ASCII characters, for example, in the case of the application data folder in Czech Windows guest operating systems. This issue causes the snapshot operation to fail. This issue is resolved by applying this patch.

  • Virtual machines fail to power on in some cases even when service console swap space exists on ESX hosts. Powering on a virtual machine running on such ESX hosts fails with an Insufficient COS swap to power on error in /var/log/vmware/hostd.log, even though the service console has 800MB of swap enabled and running the free -m command on the service console shows more than 20MB free. This fix enables virtual machines to power on when service console swap space exists on ESX hosts.

  • When shutting down an ESX host, if a write-cache-enabled SAN LUN to which the ESX host has ever written becomes inaccessible, the ESX host fails to shut down and attempts perpetually to send the SYNC_CACHE command to the missing LUN.

  • If a memory allocation fails while allocating an Async_Token for handling I/O on a system that is under memory constraints, the ESX host fails with a purple screen and displays an error message similar to the following:
    Unhandled Async_Token ENOMEM Condition 
    This patch release minimizes the occurrence of this issue. 

  • Memory hot-add fails if the assigned virtual machine memory equals the size of its memory reservation. An error message similar to the following is displayed in the vSphere Client:

    Hot-add of memory failed. Failed to resume destination VM: Bad parameter. Hotplug operation failed

    Messages similar to the following are written to /var/log/vmkernel.log on the ESX host:

    WARNING: FSR: 2804: 1270734344 D: Received invalid swap bitmap lengths: source 0, destination 32768! Failing migration.
    WARNING: FSR: 3425: 1270734344 D: Failed to transfer swap state from source VM: Bad parameter
    WARNING: FSR: 4006: 1270734344 D: Failed to transfer the swap file from source VM to destination VM.
    WARNING: Migrate: 295: 1270734344 D: Failed: Bad parameter (0xbad0007) @0x41800847ae89
    WARNING: Migrate: 295: 1270734344 S: Failed: Bad parameter (0xbad0007) @0x4180084784ba 

    This issue occurs if the Fast Suspend Resume (FSR) fails during the hot-add because the source does not have a swap file but the destination does. This issue only applies to the FSR on memory hot-add. vMotion and Storage vMotion are not affected. This issue is resolved by applying this patch.

  • If an administrator opens a console through the vSphere Client to a Windows virtual machine on which multiple users are logged in through terminal sessions, their mouse movements might become synchronized with the mouse movements of the administrator. 

  • If you do not set up the Syslog settings on an ESX host, no alarm or configuration error message is generated. This issue is resolved by applying this patch. Now a warning message similar to the following appears in the Summary tab of the ESX host if you do not configure Syslog:
    Configuration Issues
    Issue detected on [host name] in : Warning: Syslog not configured. Please check Syslog options under Configuration.Software.AdvancedSettings in vSphere Client

  • An ESX host fails with a purple diagnostic screen due to a race condition in Dentrycache initialization.

  • An ESX host might fail with a purple diagnostic screen that displays an error message similar to the following when VMFS snapshot volumes are exposed to multiple hosts in a vCenter server cluster.
    WARNING: LVM: 8703: arrIdx (1024) out of bounds

  • When you perform a RevertSnapshot or RevertToCurrentSnapshot operation, the VMware Host Agent fails and vCenter Server displays the ESX host as disconnected.

  • Canceling a Storage vMotion task when relocating a powered-on virtual machine containing multiple disks on the same datastore to a different datastore on the same host might cause the ESX 4.0 host to fail with the following error:
    Exception: NOT_IMPLEMENTED bora/lib/pollDefault/pollDefault.c:2059

  • Virtual machines display increased memory usage in vmware-guestd and vmwareservice.exe. The memory footprint of the process continues to increase until the available memory is drained and the process cannot allocate any memory. This issue is more prominent when the guest operating system has a large number of IP addresses associated with it.

  • Reverting a snapshot for a virtual machine that has Changed Block Tracking (CBT) enabled to a snapshot older than its last incremental backup can cause inconsistencies in incremental backups of that virtual machine.

  • Stopping VMware Tools while taking a quiesced snapshot of a virtual machine causes hostd to fail. After applying this patch, the quiesced snapshot operation exits gracefully if you stop VMware Tools.

  • Windows guest operating systems installed with the VMware Windows XP display driver model (XPDM) driver might fail with a vmx_fb.dll error and display a blue screen.

  • When Fault Tolerance is enabled, you cannot hot-remove devices such as NICs and SCSI controllers from the vSphere Client. However, these appear as removable devices in the Windows system tray of the virtual machine and you can remove them from within the guest operating system. This issue is resolved by applying this patch. Now you cannot remove devices from the virtual machine's system tray when Fault Tolerance is enabled.

  • A networking issue might cause an ESX host to fail with a purple diagnostic screen that displays an error message similar to the following:
    Spin count exceeded (rentry) - possible deadlock with PCPU6 
    This issue occurs if the system is sending traffic and modifying the routing table at the same time. 

  • While performing snapshot operations, if you simultaneously perform another task such as browsing a datastore, the virtual machine might sometimes be abruptly powered off. Error messages similar to the following are written to vmware.log:
    vmx| [msg.disk.configureDiskError] Reason: Failed to lock the file
    vmx| Msg_Post: Error
    vmx| [msg.checkpoint.continuesync.fail] Error encountered while restarting virtual machine after taking snapshot. The virtual machine will be powered off.
    The issue occurs when a file required by the virtual machine for one operation is accessed by another process. 

  • Quiesced snapshots might fail on some non-English versions of Windows guest operating systems, such as French versions of Microsoft Windows Server 2008 R2 and Microsoft Windows 7 guest operating systems. This issue occurs because the VMware Snapshot Provider service does not get registered as a Windows service or as a COM+ application properly on some non-English versions of Microsoft Windows guest operating systems. This issue causes the whole snapshot operation to fail, and as a result, no snapshot is created. 

  • Virtual machines configured with CPU limits running on ESX/ESXi 4.x hosts experience performance degradation.

  • Guest software might use CPUID information to determine characteristics of underlying (virtual or physical) CPU hardware. In some instances, CPUID information returned by virtual hardware differs from physical hardware. Based upon these differences, certain components of guest software might malfunction. This issue is resolved by applying this patch. The fix causes certain CPUID responses to more closely match that which physical hardware would return. 

  • While installing VMware Tools on certain guest operating systems such as RHEL 3, you might see an error message similar to the following: 
    Symbol __stack_chk_fail from module /usr/X11R6/lib/modules/drivers/vmware_drv.o is unresolved!

  • ESX hosts might fail with a NOT_REACHED bora/modules/vmkernel/tcpip2/freebsd/sys/support/vmk_iscsi.c:648 message on a purple screen when you scan for LUNs from an iSCSI storage array by using the esxcfg-swiscsi command from the service console or through the vSphere Client (Inventory > Configuration > Storage Adapters > iSCSI Software Adapter). This issue might occur if the tcp.window.size parameter in /etc/vmware/vmkiscsid/iscsid.conf is modified manually. Applying this patch resolves this issue. Warning messages are now logged in /var/log/vmkiscsid.log for ESX if the tcp.window.size parameter is modified to a value lower than its default.

  • Rescan or add-storage operations that you run from the vSphere Client might take a long time to complete or fail with a timeout, and a log spew of messages similar to the following is written to /var/log/vmkernel:
    Jul 15 07:09:30 : 29:18:55:59.297 ScsiDeviceToken: 293: Sync IO 0x2a to device "naa.60060480000190101672533030334542" failed: I/O error H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
    Jul 15 07:09:30 [vmkernel name]: 29:18:55:59.298 cpu29:4356)NMP: nmp_CompleteCommandForPath: Command 0x2a (0x4100b20eb140) to NMP device "naa.60060480000190101672533030334542" failed on physical path "vmhba1:C0:T0:L100" H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
    Jul 15 07:09:30 [vmkernel_name]: 29:18:55:59.298 cpu29:4356)ScsiDeviceIO: 747: Command 0x2a to device "naa.60060480000190101672533030334542" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0. 

    VMFS continues trying to mount the volume even if the LUN is read-only. This issue is resolved by applying this patch. Now VMFS does not attempt to mount the volume when it receives the read-only status.

  • This patch release provides an updated version of the PVSCSI driver, which enables you to install the Windows XP guest operating system.

  • When you install new NetXen NICs on an ESX 4.0 host or when you upgrade from ESX 3.5 to ESX 4.0, you might see an error message similar to the following on the service console of the ESX 4.0 host: Out of Interrupt vectors. On ESX hosts, where NetXen 1G and NX2031 10G devices do not support NetQueue, the ESX host might run out of MSI-X interrupt vectors. ESX hosts might not start or other devices (such as storage devices) might become inaccessible because of this issue.

  • When running ESX 4.x with a software iSCSI initiator, the /var/log/vmkiscsid.log file might not get cleared, unlike other syslog-generated log files. As a result, this file grows to a very large size if the ESX host cannot communicate with the iSCSI storage. After you install this patch, if vmkiscsid.log exceeds 100k in size, the logrotate utility rotates it into a new file, keeping up to six files. After the sixth file, the first file is overwritten with new messages.
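
The rotation behavior described above corresponds to a logrotate policy along these lines. This is an illustrative sketch only, not the actual configuration shipped with the patch.

```
/var/log/vmkiscsid.log {
    size 100k      # rotate once the log exceeds 100k
    rotate 6       # keep up to six rotated files, then the oldest is overwritten
    missingok
    notifempty
}
```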

  • ESX host reboots, becomes unresponsive, or displays a purple diagnostic screen when logging into the service console
    This issue is resolved by applying this patch. Now if the user account that is accessing the service console is a member of more than 32 groups, a warning message similar to the following is written to the vmkernel log:
    Sep 9 02:16:47 osdc-cpdqa0077 kernel: [74509.293081] cannot set all groups in vmkernel
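
On an affected host, you can check whether an account exceeds the 32-group threshold with standard tools. The user name below is only an example.

```shell
# Count the supplementary groups for an account; membership in more
# than 32 groups triggers the vmkernel warning described above.
# "root" is only an example user name.
user=root
count=$(id -G "$user" | wc -w)
if [ "$count" -gt 32 ]; then
    echo "$user is in $count groups: expect the vmkernel warning"
else
    echo "$user is in $count groups: within the 32-group limit"
fi
```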

  • ESX hosts using software iSCSI initiators might fail with a purple diagnostic screen that displays iscsi_vmk messages similar to the following:

    @BlueScreen: #PF Exception(14) in world 4254:iscsi_trans_ ip 0x41800965fddb addr 0x8
    Code starts at 0x418009000000
    0x4100c04f7e50:[0x41800965fddb]iscsivmk_ConnShutdown+0x486 stack: 0x410000000000
    0x4100c04f7eb0:[0x418009665e93]iscsivmk_StopConnection+0x286 stack: 0x4100c04f7ef0
    0x4100c04f7ef0:[0x418009663e4c]iscsivmk_TransportStopConn+0x12b stack: 0x4100c04f7f6c
    0x4100c04f7fa0:[0x418009481654]iscsitrans_VmklinkTxWorld+0x36f stack: 0x1d
    0x4100c04f7ff0:[0x41800909870b]vmkWorldFunc+0x52 stack: 0x0
    0x4100c04f7ff8:[0x0]Unknown stack: 0x0
    This issue is known to occur due to I/O delays that cause I/O requests to time out and abort. 

  • The minimum and default recommended memory sizes in the virtual machine default settings for RHEL 32-bit and 64-bit guest operating systems are updated as follows:
    For RHEL 6 32-bit, minimum memory is updated from 1GB to 512MB, default recommended memory from 2GB to 1GB, maximum recommended memory from 64GB to 16GB, and hard disk size from 8GB to 16GB. For RHEL 6 64-bit, default recommended memory is updated from 2GB to 1GB, and hard disk size from 8GB to 16GB.

  • When you upgrade VMware Tools with HGFS installed from ESX 3.5 to ESX 4.0, the HGFS driver might not be uninstalled properly. As a result, the Windows virtual machine's network Provider Order tab at Network Connections > Advanced > Advanced Settings displays incorrect information and the virtual machine might lose network connectivity. This issue is resolved by applying this patch. Now the older version of the HGFS driver and all related registry entries are uninstalled properly during upgrade.

  • VMFS volumes might log misleading error messages similar to the following, indicating disk corruption instead of a benign uninitialized log buffer:
    Aug 4 21:45:43 esx18-m1f4 vmkernel: 114:02:53:33.345 cpu9:21627)FS3: 3833: FS3DiskLock for [type bb9c7cd0 offset 13516784692132593920 v 13514140778984636416, hb offset 16640
    Aug 4 21:45:43 esx18-m1f4 vmkernel: gen 0, mode 16640, owner 00000006-4cd3bbfe-fece-e61f133cdd37 mtime 35821792] failed at 60866560 on volume 'QC_DS6_R1

  • The minimum recommended memory in the virtual machine default settings for Ubuntu 32-bit and 64-bit guest operating systems is updated from 64MB to 256MB. 

  • If hardware acceleration is set to None on a Windows virtual machine, even if you configure it with the option Check and upgrade Tools before each power-on, VMware Tools is upgraded only after you log in and not immediately after you restart the virtual machine.

  • A vMotion operation might fail if the NSCD (Linux Name Service Cache Daemon) that runs in the service console cannot resolve the FQDN and LDAP. 

  • An ESX host connected to an NFS datastore might fail with a purple diagnostic screen that displays error messages similar to the following: 
    Saved backtrace from: pcpu 16 SpinLock spin out NMI
    0x4100c00875f8:[0x41801d228ac8]ProcessReply+0x223 stack: 0x4100c008761c
    0x4100c0087648:[0x41801d18163c]vmk_receive_rpc_callback+0x327 stack: 0x4100c0087678
    0x4100c0087678:[0x41801d228141]RPCReceiveCallback+0x60 stack: 0x4100a00ac940
    0x4100c00876b8:[0x41801d174b93]sowakeup+0x10e stack: 0x4100a004b510
    0x4100c00877d8:[0x41801d167be6]tcp_input+0x24b1 stack: 0x1
    0x4100c00878d8:[0x41801d16097d]ip_input+0xb24 stack: 0x4100a05b9e00
    0x4100c0087918:[0x41801d14bd56]ether_demux+0x25d stack: 0x4100a05b9e00
    0x4100c0087948:[0x41801d14c0e7]ether_input+0x2a6 stack: 0x2336
    0x4100c0087978:[0x41801d17df3d]recv_callback+0xe8 stack: 0x4100c0087a58
    0x4100c0087a08:[0x41801d141abc]TcpipRxDataCB+0x2d7 stack: 0x41000f03ae80
    0x4100c0087a28:[0x41801d13fcc1]TcpipRxDispatch+0x20 stack: 0x4100c0087a58
    This issue might occur due to a corrupted response received from the NFS server for any read operation that you perform on the NFS datastore.

  • An ESX host with the Broadcom bnx2x driver might exhibit the following symptoms: 
    • The ESX host might frequently disconnect from the network. 
    • The ESX host might stop responding with a purple diagnostic screen that displays messages similar to the following:

    [0x41802834f9c0]bnx2x_rx_int@esx:nover: 0x184f stack: 0x580067b28, 0x417f80067b97, 0x
    [0x418028361880]bnx2x_poll@esx:nover: 0x1cf stack: 0x417f80067c64, 0x4100bc410628, 0x
    [0x41802825013a]napi_poll@esx:nover: 0x10d stack: 0x417fe8686478, 0x41000eac2b90, 0x4

    • The ESX host might stop responding with a purple diagnostic screen that displays messages similar to the following:

    0:18:56:51.183 cu10:4106)0x417f80057838:[0x4180016e7793]PktContainerGetPkt@vmkernel:nover+0xde stack: 0x1
    0:18:56:51.184 pu10:4106)0x417f80057868:[0x4180016e78d2]Pkt_SlabAlloc@vmkernel:nover+0x81 stack: 0x417f800578d8
    0:18:56:51.184 cpu10:4106)0x417f80057888:[0x4180016e7acc]Pkt_AllocWithUseSizeNFlags@vmkernel:nover+0x17 stack: 0x417f800578b8
    0:18:56:51.185 cpu10:4106)0x417f800578b8:[0x41800175aa9d]vmk_PktAllocWithFlags@vmkernel:nover+0x6c stack: 0x1
    0:18:56:51.185 cpu10:4106)0x417f800578f8:[0x418001a63e45]vmklnx_dev_alloc_skb@esx:nover+0x9c stack: 0x4100aea1e988
    0:18:56:51.185 cpu10:4106)0x417f80057918:[0x418001a423da]__netdev_alloc_skb@esx:nover+0x1d stack: 0x417f800579a8
    0:18:56:51.186 cpu10:4106)0x417f80057b08:[0x418001b6c0cf]bnx2x_rx_int@esx:nover+0xf5e stack: 0x0
    0:18:56:51.186 cpu10:4106)0x417f80057b48:[0x418001b7e880]bnx2x_poll@esx:nover+0x1cf stack: 0x417f80057c64
    0:18:56:51.187 cpu10:4106)0x417f80057bc8:[0x418001a6513a]napi_poll@esx:nover+0x10d stack: 0x417fc1f0d078
    • The bnx2x driver or firmware sends panic messages and writes a backtrace with messages similar to the following in the /var/log/vmkernel log file:

    vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3379(vmnic0)]MC assert!
    vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3384(vmnic0)]driver assert
    vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_panic_dump:634(vmnic0)]begin crash dump 

  • If you configure the port group policies of NIC teaming for any of the following parameters such as load balancing, network failover detection, notify switches, or failback, and then restart the ESX host, the ESX host might send traffic only through one physical NIC. This issue is resolved by applying this patch.

  • The listProcessesInGuest vmrun command might fail to run on Ubuntu 10.04 guest operating systems. The guest operating system displays an error message similar to the following: 
    Error: Invalid user name or password for the guest OS

  • The VMware Control Panel UI button in the Windows Control Panel for performing a VMware Tools upgrade from a Windows guest operating system is disabled for non-administrator users. Also, the Shrink and Scripts options in the VMware Tools Control Panel are disabled for non-administrator users. This fix is only a UI change and does not block upgrades from custom applications. To block VMware Tools upgrades for all users, set the isolation.tools.autoinstall.disable="TRUE" parameter in the VMX file.
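
The VMX parameter mentioned above is a plain key-value entry in the virtual machine's configuration (.vmx) file, for example:

```
# .vmx configuration fragment: block VMware Tools upgrades for all users
isolation.tools.autoinstall.disable = "TRUE"
```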

  • If you perform operations that utilize the Logical Volume Manager (LVM), such as write operations, volume re-signature, volume span, or volume growth, the ESX host might fail with a purple diagnostic screen. Error messages similar to the following might be written to the logs:
    63:05:21:52.692 cpu1:4135)OC: 941: Could not get object from FS driver: Permission denied
    63:05:21:52.692 cpu1:4135)WARNING: Fil3: 1930: Failed to reserve volume f530 28 1 4be17337 9c7dae2 23004d45 22b547d 0 0 0 0 0 0 0
    63:05:21:52.692 cpu1:4135)FSS: 666: Failed to get object f530 28 2 4be17337 9c7dae2 23004d45 22b547d 4 1 0 0 0 0 0 :Permission denied
    63:05:21:52.706 cpu1:4135)WARNING: LVM: 2305: [naa.60060e80054402000000440200000908:1] Disk block size mismatch (actual 512 bytes, stored 0 bytes)

  • An ESX host might fail with a purple screen that displays an error message similar to the following when multiple threads try to use the same TTY and cause a race condition.

    ASSERT bora/vmkernel/user/userTeletype.c:969
    cr2=0xff9fcfec cr3=0xa114f000 cr4=0x128

  • You might not be able to view the CDP network location information by using the ESX command line or through the vSphere Client. 

  • After you install VMware Tools for Linux and restart the guest operating system, the device manager for the Linux kernel (udev) might report extraneous errors similar to the following: 
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{model}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'AT
    This issue is resolved by applying this patch. Now the VMware Tools Installer for Linux detects the device and only writes system-specific rules.

  • A Windows virtual machine with low kernel memory causes VMware Tools to fail intermittently. 

  • When you install or update VMware Tools on Linux virtual machines, the VMware Tools installer might overwrite entries in configuration files (such as the /etc/updatedb.conf file for Red Hat and Ubuntu, and /etc/sysconfig/locate for SuSE) made by third-party development tools. This might affect cron jobs running updatedb on these virtual machines. This issue is resolved by applying this patch.

  • If the NFS volume hosting a virtual machine encounters errors, the NVRAM file of the virtual machine might become corrupted and grow in size from the default 8K up to a few gigabytes. At this time, if you perform a vMotion or a suspend operation, the virtual machine fails with an error message similar to the following: 
    unrecoverable memory allocation failures at bora/lib/snapshot/snapshotUtil.c:856

  • Snapshots of VMFS3 volumes upgraded from VMFS2 with a block size greater than 1MB might fail to mount on ESX 4.x hosts. Running the esxcfg-volume -l command to list the detected VMFS snapshot volumes fails with the following error message: 
    ~ # esxcfg-volume -l
    Error: No filesystem on the device 

    Now you can mount or re-signature snapshots of VMFS3 volumes upgraded from VMFS2.
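
With the patch applied, such snapshot volumes can be listed and then mounted or resignatured from the service console. The volume label below is hypothetical.

```shell
# List detected VMFS snapshot volumes; after this patch the command also
# works for VMFS3 volumes upgraded from VMFS2 with block size > 1MB.
esxcfg-volume -l

# Mount a snapshot volume without changing its signature
# ("snap_label" is a hypothetical volume label):
esxcfg-volume -m snap_label

# Or resignature the volume so it can be mounted as a distinct datastore:
esxcfg-volume -r snap_label
```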

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Based on VMware KB 1031027
