HP DL580 Gen9 BIOS Settings for VMware ESXi

These are my suggested BIOS settings for HP DL580 Gen9 servers running VMware ESXi.

If there are no requirements for serial ports then the following can be disabled, as per the VMware Performance Best Practices recommendation to disable any hardware not required: –

  • Under System Options \ Serial Port Options
    • Embedded Serial Port
    • Virtual Serial Port

There is a 1GB embedded user partition on non-volatile flash memory on the system board that ESXi can be installed onto. If you are going to install ESXi here then this partition will need to be enabled in the BIOS; it is disabled by default. If you are not going to use it then leave it disabled.

There is also an internal SD card slot that can be used to run ESXi from an SD card. By default this slot is enabled in the BIOS; however, if you are not going to use it then I would recommend disabling it, again as per the VMware Performance Best Practices advice to disable any hardware not required.

The above two options are under System Options \ USB Options.

Also under System Options \ USB Options are the following settings: –

  • USB Control – I normally leave this enabled so that I can use the external ports to connect a keyboard and mouse if ever required for local configuration and troubleshooting.
  • USB Boot Support – I also normally leave this enabled as it is required to boot from an ISO image attached via the iLO card, for doing things such as firmware upgrades and installing ESXi in the first place.
  • Virtual Install Disk – By default this is disabled and I normally leave it disabled. It contains drivers specific to the server that an OS can use during installation, but I don’t think it is of any use when installing ESXi.

Under System Options \ Processor Options make sure the following settings are set (these are all the default settings): –

  • Intel(R) Hyperthreading – Enabled – This is a VMware Performance Best Practice.
  • Processor Core Disabled – 0 – No processor cores will be disabled.
  • Processor x2APIC Support – Enabled – x2APIC support optimises interrupt distribution and has been supported by ESXi since version 5.1.
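Once ESXi is installed you can check from the host that the Hyper-Threading setting made it through from the BIOS with `esxcli hardware cpu global get`. A minimal sketch of parsing that output; the sample below is illustrative, not captured from a real DL580:

```shell
# Sample output in the shape produced by `esxcli hardware cpu global get`
# (the figures here are assumptions, not real DL580 values):
sample_output='   CPU Packages: 4
   CPU Cores: 72
   CPU Threads: 144
   Hyperthreading Active: true
   Hyperthreading Supported: true
   Hyperthreading Enabled: true'

# On a live host you would pipe the real command output instead of $sample_output.
if echo "$sample_output" | grep -q 'Hyperthreading Active: true'; then
  echo "Hyper-Threading is active"
else
  echo "Hyper-Threading is NOT active - check the BIOS setting"
fi
```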

Under System Options \ Virtualization Options ensure that all of the following are enabled, as per VMware Performance Best Practices (by default they are all enabled): –

  • Virtualization Technology
  • Intel(R) VT-d
  • SR-IOV

Under System Options \ Boot Time Optimizations I normally disable the Dynamic Power Capping Functionality: as I set the Power Profile to be controlled by the OS, there is no need to spend time performing this operation at boot. Also in this section, the Extended Memory Test is disabled by default and I leave it that way, as enabling it results in a significant increase in boot time when the host has a large amount of memory installed, as is the case with the majority of DL580 servers running ESXi.

The Advanced Memory Protection setting under System Options \ Memory Operations will depend on your memory requirements, e.g. whether or not you want mirrored memory. Normally I use Advanced ECC Support as this allows the full amount of installed memory to be utilised.

Under Boot Options I leave the Boot Mode as UEFI Mode, as this is supported for ESXi 6. The HP documentation states that UEFI Optimized Boot is required to boot VMware ESXi in UEFI Boot Mode, so I leave this enabled as it is by default, although I have tried disabling it and ESXi still booted in UEFI Boot Mode. I also leave the Boot Order Policy as the default of Retry Boot Order Indefinitely as I see no reason to change this setting. Your boot order will depend on where you are booting ESXi from, e.g. SD card, user partition, hard disk, PXE boot, etc. I normally set the boot order to be: –

  1. Generic USB Boot
  2. <Whatever device ESXi is installed on>

This way I can boot from an ISO image connected to the iLO, and if that (or another USB device) is not present then the server will boot ESXi from wherever I have installed it. I remove all other boot devices.

Under Network Options \ Network Boot Options I disable any network adapter ports that are not going to be used for PXE booting.

Under Power Management I set the Power Profile to Custom and then set the Power Regulator to OS Control Mode. VMware ESXi includes a full range of host power management capabilities in software that can save power when a host is not fully utilised, and OS Control Mode allows ESXi the most flexibility in using these features. Under the Advanced Power Options I change the Energy/Performance Bias to Maximum Performance, as this setting should allow the highest performance with the lowest latency in environments that are not sensitive to power consumption; if power consumption matters to you then you may not want to set this. I leave all of the other Power Management settings at their defaults.
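With the Power Regulator handed over to the OS, ESXi's own power policy takes effect; on the host it can be read via the `/Power/CpuPolicy` advanced setting (`esxcli system settings advanced list -o /Power/CpuPolicy`). A sketch of parsing that output; the values shown are assumptions:

```shell
# Output in the shape of `esxcli system settings advanced list -o /Power/CpuPolicy`,
# trimmed to the relevant fields (values are illustrative):
cpu_policy='   Path: /Power/CpuPolicy
   Type: string
   String Value: Balanced
   Default String Value: Balanced'

# Extract the active policy; on a live host pipe the real command output instead.
echo "$cpu_policy" | awk -F': ' '/^   String Value/ {print "Active ESXi power policy: " $2}'
```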

Under Performance Options I ensure the following are enabled (this is the default): –

  • Intel(R) Turbo Boost Technology

VMware best practices are to enable Intel(R) Turbo Boost Technology so that the processors can transition to a higher frequency than the processor’s rated speed.

I also leave the ACPI SLIT option, also under Performance Options, enabled. Operating systems that support the System Locality Information Table (SLIT) can use this information to improve performance by allocating resources and workloads more efficiently.

Under the Advanced Performance Tuning Options I leave the defaults as follows: –

  • Node Interleaving – Disabled – Enabling this disables NUMA. VMware recommend that in most cases you will get the best performance by disabling node interleaving.
  • Intel NIC DMA Channels (IOAT) – Enabled – This is a NIC acceleration option that runs only on Intel-based NICs.
  • HW Prefetcher – Enabled – Typically, leaving this enabled provides better performance.
  • Adjacent Sector Prefetch – Enabled – Typically, leaving this enabled provides better performance.
  • DCU Stream Prefetcher – Enabled – Typically, leaving this enabled provides better performance.
  • DCU IP Prefetcher – Enabled – In most cases, the default value of enabled provides optimal performance.
  • QPI Bandwidth Optimization (RTID) – Balanced – The Balanced option provides the best performance for most applications. The only other option, Optimized for I/O, can increase bandwidth for I/O devices such as GPUs that rely on direct access to system memory.
  • Memory Proximity Reporting for I/O – Enabled – When enabled, the System ROM reports the proximity relationship between I/O devices and system memory to the operating system. Most operating systems can use this information to efficiently assign memory resources for devices such as network controllers and storage devices.
  • I/O Non-posted Prefetching – Enabled – Disabling this can significantly improve performance for a small set of configurations that require a balanced mix of read/write I/O traffic (for example, InfiniBand) or multiple x16 devices that utilise the maximum bandwidth of the PCIe bus. Disabling this feature does, however, have a slight impact on 100% I/O read bandwidth.
  • NUMA Group Size Optimization – Clustered – The default setting of Clustered provides better performance due to the resulting Kgroups being optimised along NUMA boundaries.
  • Intel Performance Monitoring Support – Disabled – This option does not impact performance. When enabled, it exposes certain chipset devices that can be used with the Intel Performance Monitoring Toolkit.
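With Node Interleaving left disabled, ESXi should see one NUMA node per populated socket, which can be confirmed on the host with `esxcli hardware memory get`. A sketch assuming a four-socket box; the sample figures are illustrative:

```shell
# Output in the shape of `esxcli hardware memory get` (illustrative figures):
memory_info='   Physical Memory: 1098437881856 Bytes
   Reliable Memory: 0 Bytes
   NUMA Node Count: 4'

# On a live host, pipe the real command output instead of $memory_info.
nodes=$(echo "$memory_info" | awk -F': ' '/NUMA Node Count/ {print $2}')
if [ "$nodes" -gt 1 ]; then
  echo "ESXi sees $nodes NUMA nodes - node interleaving is off"
else
  echo "Only $nodes NUMA node visible - check that node interleaving is disabled"
fi
```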