VMware on AWS Update

I had a session with Adam Osterholt (@osterholta) from the VMware Cloud SET Team last week to update vExperts on the soon-to-be-released VMware on AWS service.

This service looks awesome; here are some notes I made from the session.

Clouds are becoming the new silos: we are starting to see different clouds, such as VMware on-premise private clouds, Microsoft Azure, Amazon Web Services, and Google Cloud Platform, being treated as silos because each has its own management tools and requires its own skills to look after it.

VMware are working on a Cross-Cloud Architecture that will sit above these different clouds to provide a single architecture with a common set of tools for: –

  • Management and Operations
  • Network and Security
  • Data Management and Governance

They already have VMware Cloud Foundation that sits above the following: –

  • On-Premise vSphere
  • VMware vCloud Air
  • vSphere running on the IBM Cloud

and soon vSphere running on AWS

This will be expanded to allow management of cloud-specific, non-VMware-based infrastructure such as: –

  • AWS
  • Google Cloud Platform
  • Microsoft Azure
  • IBM Cloud

Use cases for VMware on AWS include: –

  • Maintain and Expand
    • Regional Capacity
    • DR and Backup
  • Consolidation and Migrate
    • Data Centre Consolidation
    • Application Migration
  • Workload Flexibility
    • Dev, Test, Lab and Training
    • Cyclic Demand

VMware on AWS will bring operational consistency while leveraging customers' existing skill sets and tools across their IT environment.

It will initially be running: –

  • vSphere 6.5 on bare metal AWS hardware and not nested ESXi
  • All-flash vSAN, with full control to enable or disable features such as
    • Compression
    • Deduplication
  • NSX
  • vCenter

You will be able to use vCenter Linked Mode to link to your on-premise vCenter.

As VMware on AWS uses standard VMware technology, anything that works with vCenter will work with VMware on AWS, such as: –

  • 3rd party products
  • other VMware products such as vRealize Operations
  • your own developed scripts

You will be able to use all of the AWS Services with VMware on AWS such as: –

  • Amazon EC2
  • Amazon S3
  • Amazon RDS
  • AWS IoT
  • AWS Direct Connect
  • AWS IAM

As VMware on AWS will be a fully VMware-managed service, VMware will control when vSphere is upgraded. To accommodate this, they will introduce new functionality allowing vCenter Linked Mode to work across the current version and the previous version. For example, on day 1 you will be able to use Linked Mode between VMware on AWS running 6.5 and your on-premise 6.0 vSphere infrastructure. When vSphere vNext comes out and VMware on AWS is upgraded to it, Linked Mode will still work as long as you have already upgraded your on-premise infrastructure to 6.5. This gives you more control over when you upgrade your on-premise infrastructure, instead of being forced to upgrade whenever VMware on AWS is upgraded.
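The version window described above can be sketched as a simple check. This is purely illustrative, not a VMware API: the ordered release list and the one-release window are assumptions based on the behaviour described in the text.

```python
# Sketch of the N / N-1 Linked Mode rule. The release list and the
# one-release window are assumptions for illustration only.

VSPHERE_RELEASES = ["6.0", "6.5", "vNext"]  # ordered oldest to newest

def linked_mode_compatible(cloud: str, on_prem: str) -> bool:
    """VMware on AWS may run the same release as the on-premise
    vCenter, or at most one release ahead of it."""
    gap = VSPHERE_RELEASES.index(cloud) - VSPHERE_RELEASES.index(on_prem)
    return 0 <= gap <= 1

# Day 1: cloud on 6.5, on-premise still on 6.0 -> supported
print(linked_mode_compatible("6.5", "6.0"))    # True
# Cloud moves to vNext while on-premise is still on 6.0 -> unsupported
print(linked_mode_compatible("vNext", "6.0"))  # False
```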

For best functionality, it is recommended that your on-premise infrastructure runs the same version as VMware on AWS so that you can utilise the new functionality in the later version.

As new VMware features only need to be tested on one type of hardware for VMware on AWS, VMware will be able to introduce new functionality into the VMware on AWS service before it is generally available for the on-premise SDDC.

VMware are currently working with a few customers to develop the VMware on AWS service. A beta program will open in the spring, with general availability planned for mid-2017.

The service will go live in Virginia first, then EMEA (probably the UK or Ireland), and will eventually expand to all AWS regions.

With the initial offering, all hosts will have to be in the same Availability Zone (AZ), but at a later date you may be able to split across AZs for availability.

The VMware on AWS service will be sold on the number of hosts you require, not the number of VMs you want to run, so you can oversubscribe the hosts if you wish to run more VMs. The minimum number of hosts that can be purchased is likely to be 4, scaling up from there. Up to 16 hosts was mentioned on the session with Adam, but the final details have not been worked out yet, so it may be even more than that; he did show screenshots displaying options for 64 hosts.

The cost of the service will work out lower the longer you commit, e.g. if you purchase a 2-year service then the cost per month will be less than if you only commit for a single month.
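As a hypothetical illustration of how commitment-based pricing works: the per-host rate and discount tiers below are invented for the example; VMware had not published actual pricing at the time of writing.

```python
# Hypothetical commitment pricing: the base rate and discount tiers
# are invented for illustration only, not VMware's published pricing.

def monthly_cost(hosts: int, months_committed: int,
                 base_per_host: float = 5000.0) -> float:
    """Per-month cost for a given host count and commitment length.
    Longer commitments earn a bigger discount (tiers are assumptions)."""
    if months_committed >= 24:
        discount = 0.30
    elif months_committed >= 12:
        discount = 0.20
    else:
        discount = 0.0
    return hosts * base_per_host * (1 - discount)

print(monthly_cost(4, 1))   # 20000.0 (month-to-month)
print(monthly_cost(4, 24))  # 14000.0 (2-year commitment: cheaper per month)
```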

This will be a VMware service: delivered, operated, sold and supported by VMware. If you need support, you contact the same VMware Global Support that you contact for your on-premise VMware infrastructure. You never need to contact Amazon; if there is an issue on the AWS side, VMware will deal directly with Amazon, so you are never passed back and forth between the two with each pointing the finger at the other.

There will be a new web portal for requesting the service, with a full RESTful API from day 1 covering features such as provisioning, scaling and billing. The portal will be based on HTML5 for the best performance and response times. You will still be able to use the Flex client, but 90% of the functionality will be available from the HTML5 client, which should be enough for most people.

You will be able to right-click on your AWS vSphere cloud cluster, select resize, choose how many hosts you want, and within minutes, yes minutes, you get the new capacity, i.e. additional hosts. Imagine doing this with your on-premise cloud. How long does it take to expand a vSphere cluster on-premise from the day you decide it needs to be expanded? You need to produce a specification for the new hosts, get a quote from your hardware provider, order the hardware, install and configure it, install and configure ESXi, etc.; it can take weeks. Remember the days when you had to do all this every time you needed a new server, whereas now you just provision a new VM in minutes? With VMware on AWS, adding additional capacity will be like adding new VMs.

You will be able to utilise Elastic DRS, which can resize your VMware on AWS cloud by adding or removing hosts depending on workload resource requirements.
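The scale-out/scale-in behaviour can be sketched minimally as below. The thresholds, the 4-host minimum and the decision logic are all assumptions for illustration; this is not VMware's published algorithm.

```python
# A minimal sketch of the Elastic DRS idea: grow or shrink the cluster
# based on aggregate utilisation. Thresholds and minimum are assumed.

MIN_HOSTS = 4  # assumed minimum cluster size

def elastic_drs(hosts: int, cpu_util: float,
                scale_out_at: float = 0.80, scale_in_at: float = 0.40) -> int:
    """Return the recommended host count for the cluster."""
    if cpu_util > scale_out_at:
        return hosts + 1                       # hot: add a host
    if cpu_util < scale_in_at and hosts > MIN_HOSTS:
        return hosts - 1                       # spare capacity: remove a host
    return hosts                               # within band: no change

print(elastic_drs(4, 0.90))  # 5 - scale out
print(elastic_drs(5, 0.30))  # 4 - scale in
print(elastic_drs(4, 0.30))  # 4 - already at the assumed minimum
```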

There will NOT be a dedicated management cluster, so vCenter, NSX Manager, etc. will run on the same hosts as the VMs you want to run in this cloud, but in different resource pools.

If you do not currently have NSX then you will be able to deploy a no-charge NSX Edge appliance within your on-premise infrastructure to enable connectivity with VMware on AWS.

When VMware upgrade the VMware on AWS service, they will add an extra host into your cluster so that one host at a time can be taken out of service to be upgraded without you losing any capacity.

The specification of each host will be very similar to that of the Amazon EC2 I3 instance family.

You will be able to vMotion between your on-premise infrastructure and VMware on AWS.

For more updates on VMware on AWS you can follow @vmwarecloud on Twitter or search for the hashtag #VMWonAWS.


VMware on AWS

Coming soon: VMware on AWS.

Soon you will be able to run VMware’s Software Defined Data Center (SDDC) including vSphere, vSAN and NSX on the Amazon Web Services (AWS) Cloud. It is currently in Technology Preview but expect more details regarding general availability soon.

This will mean you will be able to run any application across vSphere-based on-premise private cloud, public cloud and hybrid cloud environments.

It will be sold and supported by VMware as an on-demand service, utilising Amazon's global, enterprise-grade, secure and highly scalable cloud resources.

Some of the customer scenarios this is targeted at are: –

  • Application Development
  • Testing
  • Disaster Recovery
  • Geographical Expansion
  • Burst Capacity
  • Data Centre Migration

You will be able to manage it from an existing vCenter Management Interface.

Utilising VMware on AWS should allow companies to free up time spent managing infrastructure and spend more time innovating.

Dr. Matt Wood, AWS GM – Product Strategy, and Mark Lohmeyer, VMware VP Products, demonstrate in the video below how a VMware SDDC can be spun up in a few clicks and a matter of minutes. It is simply a case of selecting a geographic location for the Data Centre you want to spin up, the size of the Data Centre, and how you want to pay for it. They also demonstrate how you can vMotion a Virtual Machine from an on-premise private VMware cloud to VMware on AWS, with no downtime, just as you would vMotion a VM between hosts within your own private vSphere environment.

Video: https://players.brightcove.net/1534342432001/Byh3doRJx_default/index.html?videoId=5179048452001

It looks like this will also include Elastic DRS, where a new host will be spun up on AWS when there is a shortage of resources to satisfy all of the active workloads, and DRS will rebalance the workloads across the infrastructure, including the new host(s).

For more information see https://www.vmware.com/cloud-services/vmware-cloud-aws.html


VMware Snapshot Performance Impact

A number of people have asked me recently about the performance impact of having VMware snapshots on a VM. So here is my understanding of the performance impact of having snapshot(s) on a VM.

The issue with having a snapshot on a VM is that when a block of data is changed, instead of the change being made in the main VMDK file, it is written to a delta file.

The delta files are not pre-allocated the way the main VMDK file is (assuming it is thick provisioned), so each time you write to the delta file, the file first has to be extended and then the data written. Well, not exactly every time: I think the delta files are extended in 16MB chunks, so an extension is only needed for every 16MB of newly changed blocks. If you change the same block of data multiple times after a snapshot is created, I believe the block is overwritten in the delta file, so the file does not need to be extended for blocks that change repeatedly. When the files are extended, the VMFS volume is locked (or it used to be; I am not sure if this is still the case with vSphere 6.x), so it is not just the VM with the snapshot that is impacted but also any other VM on the same datastore.
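A rough model of that growth behaviour: the delta file grows in fixed-size chunks (16MB here, as suggested above), and rewriting an already-changed block does not grow it further. The chunk size and the 4KB block size are assumptions for illustration.

```python
# Rough model of snapshot delta-file growth. Chunk and block sizes
# are assumptions based on the behaviour described in the text.

CHUNK = 16 * 1024 * 1024   # assumed extension granularity: 16 MB
BLOCK = 4 * 1024           # assumed block size: 4 KB

def delta_extensions(changed_blocks: set[int]) -> int:
    """How many times the delta file must be extended (each extension
    historically took a VMFS lock) to hold the unique changed blocks."""
    bytes_needed = len(changed_blocks) * BLOCK
    return -(-bytes_needed // CHUNK)   # ceiling division

# 10,000 distinct changed blocks (~40 MB of changes) -> 3 extensions,
# no matter how many times each block is rewritten
print(delta_extensions(set(range(10_000))))  # 3
```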

This is the reason a thick-provisioned VMDK provides slightly better performance than a thin-provisioned one: as the space is pre-allocated, the file does not need to be extended to write additional data. If performance of the VMDK is really important then you should create eager-zeroed thick-provisioned disks, as all blocks are zeroed in the VMDK at creation time. With the default of lazy-zeroed thick-provisioned disks, each block needs to be zeroed before data is written to it for the first time; don't ask me why it needs to be zeroed and then overwritten as I don't have a clue! Just know that eager zeroed is supposed to be better but takes longer to create the VMDK in the first place.

When you want to read a block of data from a VM with snapshot(s), the VMkernel needs to work out whether the block is in a delta file or in the base VMDK.

I have heard that the more snapshots you have on a VM, the bigger the impact on performance, and I have seen a VMware KB article (https://kb.vmware.com/kb/1025279) suggesting that you should not have more than 32 and, for better performance, only 2 or 3.

This extra processing has an impact on the VM's disk access times. I suspect not a great deal of impact, but it is not as quick as going directly to the base VMDK to read and write blocks of data. I think you will only ever notice it, if at all, with very disk-I/O-intensive workloads.

When I get a chance I will build a test environment and run some performance tests to see how much of an impact snapshots have, although it will probably depend on your disk subsystem.


VMware vSphere 6.5

VMware vSphere 6.5 was released earlier this week. It was announced on 18th Oct 2016 at VMworld Barcelona but was officially available for download from Tuesday this week (15th Nov 2016).

Some of the new features I got to look at as part of the beta program and thought were good additions to the product were: –

  • Platform Services Controller (PSC) HA – in vCenter 6.0 we could implement HA for the PSC but this required complex configuration and the use of a load balancer, with 6.5 no load balancer is required and the configuration is much simpler.
  • vCenter HA – in the past we had products such as VMware vCenter Heartbeat to provide HA for vCenter, but that was withdrawn some time ago. Now we have an easy-to-set-up vCenter HA solution.
  • VMware Update Manager (VUM) is now included in the vCenter Server Appliance, so you no longer require a Windows server for VUM.
  • vSphere HA Orchestrated Restarts – you can now configure dependencies between VMs, e.g. an App server will not restart until a SQL server it is dependent on has been restarted.
  • Additional HA Restart Priorities – up to vSphere 6.0 you could configure one of 3 restart priorities for a VM, other than disabled (high, medium, low). Now there are 2 more (highest and lowest), making a total of 5 – similar to Site Recovery Manager (SRM).
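The five restart priorities above can be sketched as a restart ordering. The priority names come from the feature description; the VM names and this simple sorting approach are illustrative only, not the HA internals.

```python
# Ordering VMs by the five HA restart priorities. VM names and the
# sorting approach are illustrative assumptions.

PRIORITY_ORDER = ["highest", "high", "medium", "low", "lowest"]

def restart_order(vms: dict[str, str]) -> list[str]:
    """Order VMs so that higher-priority VMs are restarted first."""
    return sorted(vms, key=lambda vm: PRIORITY_ORDER.index(vms[vm]))

vms = {"app01": "medium", "sql01": "highest", "dev01": "lowest", "web01": "high"}
print(restart_order(vms))  # ['sql01', 'web01', 'app01', 'dev01']
```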

There are loads more new features, but these are a few that I wanted to pick out; I will dig into them in greater detail in later articles.


vCenter 5.5 to 6.0 Migration

When vSphere 6 was released, the vCenter Server Appliance (VCSA) was brought in line with the traditional Windows-based vCenter Server, meaning the appliance version of vCenter became the recommended version to implement. But how do you transition your Windows vCenter 5.5 server to a vCenter 6 appliance? VMware have now released a migration tool from a Windows vCenter 5.5 server to a vCenter 6.0 appliance, known as vSphere 6.0 Update 2m.

It should be noted that it ONLY migrates from 5.5 to 6.0. Therefore if you are still running 5.0 or 5.1 then you will need to upgrade to 5.5 BEFORE being able to use this migration tool. It does, however, work with ANY update of 5.5.

The following items are migrated with this tool from 5.5 to 6.0: –

  • Configuration
  • Inventory
  • Alarm data
  • Historical and Performance data (optional)

The migration tool installs a VCSA, preserving the identity of the previous Windows vCenter, e.g. IP address, name, certificates and UUID.

One thing to consider is VMware Update Manager (VUM), as you still require a Windows server for this. If you already have it installed on a separate server from your vCenter 5.5 server then you are good to go. However, if VUM is installed on the SAME server as your vCenter 5.5 then you will want to move it to a separate server BEFORE you start the migration process. The same goes for other VMware solutions such as SRM, NSX, vROps, etc.: if they are installed on the same Windows server as the vCenter 5.5 server then they need to be moved off to a separate server to continue to work following the migration.

The data will be migrated to an embedded vPostgres database on the VCSA, even if the Windows vCenter 5.5 server was using an external database.

If your vCenter 5.5 SSO was running on the same server as your vCenter Server then the migration tool will implement an embedded Platform Services Controller (PSC) within the new VCSA. If your SSO was external to your vCenter Server then the migration tool will install an external PSC for the VCSA.

Plan for some downtime while you migrate your vCenter Server to a 6.0 appliance. For guidance on how much time you will require for this migration, see https://kb.vmware.com/kb/2146420.

More information on using the vCenter Server 5.5 to vCenter Server Appliance 6.0 U2m migration tool can be found in this FAQ: https://kb.vmware.com/kb/2146439.

If you are already using a Windows vCenter 6.0 server then look out for a migration tool, to be released in the future, allowing you to migrate from this to a vCenter Server Appliance.


HP DL580 Gen9 BIOS Settings for VMware ESXi

These are my suggested BIOS settings for an HP DL580 Gen9 server running VMware ESXi.

If there are no requirements for serial ports then the following can be disabled, as per the VMware Performance Best Practices recommendation to disable any hardware that is not required: –

  • Under System Options \ Serial Port Options
    • Embedded Serial Port
    • Virtual Serial Port

There is a 1GB embedded user partition on non-volatile flash memory on the system board that ESXi can be installed onto. If you are going to install ESXi here then this partition will need to be enabled in the BIOS; by default it is disabled. If you are not going to use it then leave it disabled.

There is also an internal SD card slot that can be used to install ESXi on an SD card. By default this slot is enabled in the BIOS; however, if you are not going to use it then I would recommend disabling it, again as per the VMware Performance Best Practices recommendation to disable any hardware that is not required.

The above two options are under System Options \ USB Options.

Also under System Options \ USB Options are the following settings: –

  • USB Control – I normally leave this enabled so that I can use the external ports to connect a keyboard and mouse if ever required for local configuration and troubleshooting.
  • USB Boot Support – I also normally leave this enabled as it is required to boot off an ISO image attached via the iLO card for doing things such as firmware upgrades and installing ESXi in the first place.
  • Virtual Install Disk – By default this is disabled and I normally leave it disabled. It contains drivers specific to the server that an OS can use during installation but I don’t think it is any use for installing ESXi.

Under System Options \ Processor Options make sure the following settings are set (these are all the default settings): –

  • Intel(R) Hyperthreading – Enabled – This is a VMware Performance Best Practice.
  • Processor Core Disabled – 0 – No processor cores will be disabled.
  • Processor x2APIC Support – Enabled – x2APIC support optimises interrupt distribution and has been supported by ESXi since version 5.1.

Under System Options \ Virtualisation Options ensure that all of the following are enabled as per VMware Performance Best Practices (by default they are all enabled): –

  • Virtualization Technology
  • Intel(R) VT-d
  • SR-IOV

Under System Options \ Boot Time Optimizations, I normally disable the Dynamic Power Capping Functionality: as I set the Power Profile to be controlled by the OS, there is no need to spend time performing this operation at boot. Also in this section, the Extended Memory Test is disabled by default and I leave it that way, as enabling it results in a significant increase in boot time when the host has a large amount of memory installed, as is the case with the majority of DL580 servers running ESXi.

The Advanced Memory Protection setting under System Options \ Memory Operations will depend on your requirements for the memory, e.g. whether or not you want to implement mirrored memory. Normally I use Advanced ECC Support, as this allows the full amount of installed memory to be utilised.
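The capacity trade-off is easy to illustrate: Advanced ECC exposes all installed memory, while mirroring keeps a second copy of everything, so only half is usable. The mode names below are labels for the example, not BIOS identifiers.

```python
# Usable memory under the two Advanced Memory Protection approaches
# discussed above. Mode names are example labels, not BIOS strings.

def usable_memory_gb(installed_gb: int, mode: str) -> float:
    if mode == "advanced_ecc":
        return float(installed_gb)     # full capacity available
    if mode == "mirrored":
        return installed_gb / 2        # the other half holds the mirror
    raise ValueError(f"unknown mode: {mode}")

print(usable_memory_gb(1024, "advanced_ecc"))  # 1024.0
print(usable_memory_gb(1024, "mirrored"))      # 512.0
```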

Under Boot Options, I leave the Boot Mode as UEFI Mode, as this is supported for ESXi 6. The HP documentation states that UEFI Optimized Boot is required to boot VMware ESXi in UEFI Boot Mode, so I leave this enabled, as it is by default (although I have tried disabling it and ESXi did still boot in UEFI Boot Mode). I also leave the Boot Order Policy at the default of Retry Boot Order Indefinitely, as I do not see any reason to change this. Your boot order will depend on where you are booting ESXi from, e.g. SD card, user partition, hard disk, PXE boot, etc. I normally set the boot order to be: –

  1. Generic USB Boot
  2. <Whatever device ESXi is installed on>

This way I can boot off an ISO image connected to the iLO (or another USB device), and if that is not present it will boot ESXi from wherever I have installed it. I remove all other boot devices.

Under Network Options \ Network Boot Options, I disable any network adapter ports that are not going to be used for PXE booting.

Under Power Management, I set the Power Profile to Custom and the Power Regulator to OS Control Mode, as VMware ESXi includes a full range of host power management capabilities in software that can save power when a host is not fully utilised; OS Control Mode allows ESXi the most flexibility in using these features. Under the Advanced Power Options, I change the Energy/Performance Bias to Maximum Performance, as this setting should allow the highest performance with the lowest latency for environments that are not sensitive to power consumption; if you are sensitive to power consumption then you may not want to set this. I leave all of the other Power Management settings at their defaults.

Under Performance Options I ensure the following are enabled (this is the default): –

  • Intel(R) Turbo Boost Technology
  • ACPI SLIT

VMware best practice is to enable Intel(R) Turbo Boost Technology so that the processors can transition to a higher frequency than the processor's rated speed.

Operating systems that support the System Locality Information Table (SLIT) can use this information to improve performance by allocating resources and workloads more efficiently.

Under the Advanced Performance Tuning Options I leave the defaults, as follows: –

  • Node Interleaving – Disabled – enabling this disables NUMA; VMware recommend that in most cases you will get the best performance with node interleaving disabled.
  • Intel NIC DMA Channels (IOAT) – Enabled – a NIC acceleration option that runs only on Intel-based NICs.
  • HW Prefetcher – Enabled – typically, enabling this option provides better performance.
  • Adjacent Sector Prefetch – Enabled – typically, enabling this option provides better performance.
  • DCU Stream Prefetcher – Enabled – typically, enabling this option provides better performance.
  • DCU IP Prefetcher – Enabled – in most cases, the default value of enabled provides optimal performance.
  • QPI Bandwidth Optimization (RTID) – Balanced – the Balanced option provides the best performance for most applications; the only other option, Optimized for I/O, can increase bandwidth for I/O devices such as GPUs that rely on direct access to system memory.
  • Memory Proximity Reporting for I/O – Enabled – when enabled, the System ROM reports the proximity relationship between I/O devices and system memory to the operating system; most operating systems can use this information to efficiently assign memory resources for devices such as network controllers and storage devices.
  • I/O Non-posted Prefetching – Enabled – disabling this can significantly improve performance for a small set of configurations that require a balanced mix of read/write I/O traffic (for example, InfiniBand) or multiple x16 devices that utilise the maximum bandwidth of the PCIe bus; disabling it does, however, have a slight impact on 100% I/O read bandwidth.
  • NUMA Group Size Optimization – Clustered – the default setting of Clustered provides better performance due to the resulting Kgroups being optimised along NUMA boundaries.
  • Intel Performance Monitoring Support – Disabled – this option does not impact performance; when enabled, it exposes certain chipset devices that can be used with the Intel Performance Monitoring Toolkit.

Demise of vSphere C# Client

We have been expecting this since vSphere 4.1: VMware announced on Wed 18th May 2016 that the current version of vSphere, 6.0, will be the last version to contain the C# Client, i.e. the full desktop client we have been using since vSphere 4 and before the Web Client was available.

The Web Client, available since vSphere 5, has had performance issues, with many people resisting the move to it and preferring the C# desktop client, even though the Web Client has improved over the various releases through 5.5 and 6.0. VMware stopped adding new features to the traditional C# client with 5.5, meaning we had to use the Web Client to utilise new features such as inventory tags, enhanced vMotion (no shared storage), vSphere Flash Read Cache and VMDKs over 2TB. That Web Client was based on Flash; a new web client based on HTML5 has recently been made available via a VMware Fling. A VMware Fling is an unsupported release which allows VMware to get features out to customers early for testing. It is this HTML5-based web client that will be included in the next version of vSphere.

The VMware Fling HTML5 Web Client is not a fully functional client at the moment, but I would encourage people using vSphere 6 to download it now and start using it, so they can get used to it before it becomes the main client available. VMware Flings also allow VMware to release updates much more quickly than officially supported code, so expect new features to be added to this client over the coming weeks and months.

The HTML5 Web Client VMware Fling can be downloaded from https://labs.vmware.com/flings/vsphere-html5-web-client

For more details on the announcement from VMware regarding the vSphere C# Client not being included in the next release of vSphere see http://blogs.vmware.com/vsphere/2016/05/goodbye-vsphere-client-for-windows-c-hello-html5.html


vCenter Server Storage Filters

vCenter has 4 Storage Filters: –

  • RDM Filter
  • VMFS Filter
  • Host Rescan Filter
  • Same Host and Transports Filter

These filters affect the actions vCenter takes when scanning storage, as follows: –

RDM Filter

When you attempt to add an RDM to a VM, the RDM Filter filters out any RDMs that have already been added to a VM, leaving only the LUNs that are not currently formatted as a datastore or attached to a VM as an RDM. By disabling this filter you can add the same RDM to multiple VMs.

VMFS Filter

When you use the Add Storage wizard to add a VMFS volume to an ESXi host, the VMFS Filter filters out LUNs that have already been formatted as a VMFS datastore.

Host Rescan Filter

When you add a VMFS datastore to one ESXi host, the Host Rescan Filter triggers all of the other ESXi hosts to rescan for the new volume. Disabling this filter prevents the other hosts from doing this. This may be helpful when adding a large number of VMFS datastores to a cluster, i.e. you can add all the new datastores to one ESXi host, perhaps via PowerCLI, and then, once complete, get each of the other ESXi hosts to perform a single rescan.

Same Host and Transports Filter

The Same Host and Transport Filter filters out LUNs that cannot be used to extend a VMFS datastore due to host or storage incompatibility, for example if the LUN is not presented to all hosts using the datastore.

By default all of the filters are enabled. To disable a filter you need to add the relevant advanced setting to vCenter (Administration > vCenter Server Settings > Advanced Settings) and set it to FALSE. These settings are not listed in the Advanced Settings by default, so to disable any of the filters you need to add the setting yourself. The relevant settings are: –

  • RDM Filter – config.vpxd.filter.rdmFilter
  • VMFS Filter – config.vpxd.filter.vmfsFilter
  • Host Rescan Filter – config.vpxd.filter.hostRescanFilter
  • Same Host and Transports Filter – config.vpxd.filter.SameHostAndTransportsFilter
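The settings above can be wrapped in a tiny helper that returns the key/value pair you would add to vCenter to disable a filter. Illustrative only: it builds the setting, it does not talk to vCenter (you would apply it via the client or an API library such as pyVmomi).

```python
# The vCenter storage filter advanced-setting keys from the list
# above; the short names and this helper are illustrative only.

FILTER_KEYS = {
    "rdm": "config.vpxd.filter.rdmFilter",
    "vmfs": "config.vpxd.filter.vmfsFilter",
    "host_rescan": "config.vpxd.filter.hostRescanFilter",
    "same_host_transports": "config.vpxd.filter.SameHostAndTransportsFilter",
}

def disable_filter_setting(name: str) -> tuple[str, str]:
    """All filters are enabled by default; adding the key with the
    value 'False' disables the corresponding filter."""
    return FILTER_KEYS[name], "False"

print(disable_filter_setting("host_rescan"))
# ('config.vpxd.filter.hostRescanFilter', 'False')
```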

VCAP5 Exams Retirement

Those of you who have been planning to take either of the VCAP5 exams, VCAP5-DCA or VCAP5-DCD, need to get in quick, as VMware have announced the retirement of these two exams as follows: –

  • VCAP5-DCA – Data Center Administrator will be retired on June 2nd 2016
  • VCAP5-DCD – Data Center Design will be retired on June 24th 2016

I took the VCAP5-DCD 23 months ago and had planned to take the VCAP5-DCA but never found the time to ensure I was 100% prepared for it. These exams are not cheap, last 3 hours, and the closest testing centre to me for the advanced exams is a 2.5-hour drive away. Therefore, I only wanted to book the DCA exam if I was 100% confident of passing. As my VCAP5-DCD exam was the last VMware exam I passed, my VCP certification expires in 1 month, so I have booked the VCAP5-DCA exam to give it a go before it is retired and to renew my VCP (assuming I pass).

The expected general availability (GA) of the VCAP6-DCV exams, VCAP6-DCV-Design and VCAP6-DCV-Deploy, is 30th May 2016. The beta period of these exams closed on 18th March 2016, for the Design exam, and 26th February 2016, for the Deploy exam.

UPDATE: Pleased to say I passed my VCAP5-DCA. VMware have also extended the life of the VCAP5 exams; currently there are no published retirement dates for them.

The following VMware exams are also being retired in June: –

  • VCIX-NV – VMware Certified Implementation Expert Network Virtualisation on June 2nd 2016
  • VCP-Cloud – VMware Certified Professional Cloud on June 24th 2016

vSphere Beta

There is a new vSphere Beta Program starting soon. I don't know the exact date, but I would suspect it will be in time for the upcoming release (v6.5 maybe) to be ready for VMworld at the end of August.

You can apply to be part of this vSphere Beta Program at http://info.vmware.com/content/35853_VMware-vSphere-Beta_Interest

This vSphere Beta is a private beta but is open to VMware customers who have deployed vSphere 5.5 or 6.0.

Beta participants are expected to do the following: –

  • Accept the Master Software Beta Test Agreement online before visiting the private Beta community
  • Install beta software within 3 days of receiving access to the beta product
  • Provide feedback within the first 4 weeks of the beta program
  • Submit Support Requests for bugs, issues and feature requests
  • Complete surveys and beta test assignments
  • Participate in the private beta discussion forum and conference calls

vSphere Beta Program Overview

This program enables participants to help define the direction of the most widely adopted industry-leading virtualization platform. People who want to participate in the program can now indicate their interest by filling out the simple form found at the link above. The vSphere team will grant access to the program to selected candidates in stages. This vSphere Beta Program leverages a private Beta community to download software and share information. VMware will provide discussion forums, webinars, and service requests to enable you to share your feedback with them.

You can expect to download, install, and test vSphere Beta software in your environment or get invited to try new features in a VMware hosted environment. All testing is free-form and VMware encourage you to use their software in ways that interest you. This will provide them with valuable insight into how you use vSphere in real-world conditions and with real-world test cases, enabling them to better align their product with your business needs.

Some of the many reasons to participate in this beta opportunity:

  • Receive early access to the vSphere Beta products
  • Interact with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
  • Provide direct input on product functionality, configurability, usability, and performance
  • Provide feedback influencing future products, training, documentation, and services
  • Collaborate with other participants, learn about their use cases, and share advice and learnings

 
 

 

Posted in VMware, vSphere | Leave a comment