In a software iSCSI environment, the iSCSI connection might go offline when multiple VMkernel NICs in the same subnet are used to access a target.
An ESXi host with a large number of I/O devices might stop responding to certain devices when running 32-bit virtual machines, leading to a nonresponsive I/O device. The exact symptoms depend on the affected device. This issue has been observed on a Cisco UCS B250 M1 Blade Server containing several network interface cards, but it can occur on any system with a large number of interrupt-generating devices.
ESXi 4.0 hosts containing LSI Logic MegaRAID controllers, for example SAS1064R or SAS1078R, might fail. This patch provides an updated version of the MegaRAID SAS driver. Symptom: ESXi 4.0 hosts might fail, and error messages similar to the following might be written to the VMkernel logs:
Jun 17 16:43:28 vsesx1 vmkernel: 0:15:06:05.974 cpu8:4279)<6>megasas_service_aen: aen received
Jun 17 16:43:28 vsesx1 vmkernel: 0:15:06:05.974 cpu1:4225)<6>megasas_hotplug_work: event code 0x010c
Jun 17 16:43:28 vsesx1 vmkernel: 0:15:06:05.974 cpu1:4225)<6>megasas_hotplug_work: aen registered
ESXi 4.0 hosts might stop responding due to a deadlock issue. This issue was reported on IBM BladeCenter LS41 systems. Symptom: ESXi 4.0 hosts might fail with the following error:
@BlueScreen: Spin count exceeded (allocLock) - possible deadlock
Code starts at 0x418002400000
0x4100c144fd90:[0x418002435231]Panic+0x9c stack: 0x4100b0e255a0
0x4100c144fdf0:[0x418002442718]SP_WaitLock+0x1ef stack: 0x417fc27d0540
0x4100c144ff20:[0x418002408e49]Alloc_COWSharePages+0x230 stack: 0x0
0x4100c144ffa0:[0x4180024316c1]VMKCall+0x2c0 stack: 0x4100c144fff8
0x4100c144fff0:[0x41800248d2f8]VMKVMMEnterVMKernel+0x11f stack: 0x0
0x2c28:[0x0] (vmm32)
Issues such as unreliable failover behavior and virtual machine failures might be seen on the HP Smart Array (HPSA) P700M controller.
ESXi 4.0 hosts that use an Emulex LPFC driver might fail. Symptom: ESXi 4.0 hosts might fail with the following backtrace on the purple screen:
0x4100c111f4b8:[0xff]Unknown stack: 0x417fe422cd88
0x4100c111f508:[0x4180237bc20e]__wake_up+0x6d stack: 0x1
0x4100c111f548:[0x4180238e75af]lpfc_sli_wake_iocb_wait+0x9e stack: 0x4100c111f568
0x4100c111f768:[0x4180238e29c9]lpfc_sli_handle_fast_ring_event+0x4ac stack: 0xe458b994
0x4100c111f7f8:[0x4180238e7b76]lpfc_intr_handler+0x145 stack: 0x418000000022
0x4100c111f838:[0x4180237b82e8]Linux_IRQHandler+0x77 stack: 0x4100c111f858
0x4100c111f8a8:[0x41802342e0e5]IDTDoInterrupt+0x310 stack: 0x4100c111f990
0x4100c111f8d8:[0x41802342e484]IDT_HandleInterrupt+0x8b stack: 0x417fe37965c0
0x4100c111f8f8:[0x41802342e9f2]IDT_IntrHandler+0x91 stack: 0x4018
Version 1.30 of the vm-support script fails to properly collect vmmstats on ESXi 4.0 hosts, resulting in insufficient diagnostic information for troubleshooting.