A number of people have asked me recently about the performance impact of having VMware snapshots on a VM, so here is my understanding of it.
The issue with having a snapshot on a VM is that when a block of data is changed, instead of the change being written to the main vmdk file, it is written to a delta file.
The delta files are not pre-allocated as the main vmdk file is (assuming it is thick provisioned), so before data can be written to the delta file, the file may first have to be extended. This does not happen on every write: I think the delta files are extended in 16MB chunks, so an extension is only needed for every 16MB of newly changed blocks. If you change the same block of data multiple times after a snapshot is created, I believe the block is overwritten in place in the delta file, so blocks that change repeatedly do not force further extensions. When the file is extended, the VMFS volume is locked (or it used to be; I am not sure whether this is still the case with vSphere 6.x), so it is not just the VM with the snapshot that is affected but every other VM on the same datastore.
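To make the growth behaviour concrete, here is a toy Python model of a delta file that only extends when newly changed blocks spill past the space already allocated. This is purely illustrative: the class, the 4KB write size, and the exact 16MB extension size are my assumptions for the sketch, not VMware internals.

```python
BLOCK_SIZE = 4 * 1024          # assumed 4KB guest write, for illustration
CHUNK_SIZE = 16 * 1024 * 1024  # delta file extended in 16MB chunks (my recollection)

class DeltaFile:
    def __init__(self):
        self.blocks = {}       # guest block number -> data held in the delta file
        self.allocated = 0     # bytes currently allocated to the delta file
        self.extensions = 0    # how many times the file had to be extended

    def write(self, block_no, data):
        if block_no not in self.blocks:
            # New block: the delta file may need extending first, which
            # (historically) meant a lock on the VMFS volume.
            needed = (len(self.blocks) + 1) * BLOCK_SIZE
            while self.allocated < needed:
                self.allocated += CHUNK_SIZE
                self.extensions += 1
        # Rewriting an already-captured block just overwrites it in place.
        self.blocks[block_no] = data

delta = DeltaFile()
for i in range(10000):         # 10,000 distinct 4KB blocks, about 40MB of changes
    delta.write(i, b"x")
for i in range(10000):         # rewriting the same blocks needs no extension
    delta.write(i, b"y")
print(delta.extensions)        # -> 3 (ceil(40MB / 16MB) extensions in total)
```

Note that the second loop, which rewrites every block, adds no extensions at all, which is the point: only the footprint of *distinct* changed blocks grows the delta file.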
This is also the reason a thick provisioned vmdk performs slightly better than a thin provisioned one: the space is pre-allocated, so the file never needs to be extended to accept new data. If vmdk performance really matters, you should create eager zeroed thick provisioned disks, as every block is zeroed at the time of creation. With the default lazy zeroed thick provisioned disks, each block must be zeroed the first time it is written to, before the data lands in it; don't ask me why it needs to be zeroed and then overwritten, as I don't have a clue! Just know that eager zeroed is supposed to be faster in use, but takes longer to create the vmdk in the first place.
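The lazy-versus-eager trade-off can be sketched the same way: eager zeroing pays the whole zeroing cost at creation time, lazy zeroing defers it onto each first write. Again this is a toy model with made-up names and unit costs, not how ESXi accounts for it.

```python
class ThickDisk:
    def __init__(self, blocks, eager=False):
        # Eager zeroed: every block is zeroed up front, at creation time.
        self.zeroed = set(range(blocks)) if eager else set()
        self.create_cost = blocks if eager else 0   # zeroing work done at creation
        self.first_write_zeroes = 0                 # zeroing work deferred to I/O time

    def write(self, block_no, data):
        if block_no not in self.zeroed:
            # Lazy zeroed: block zeroed on first write, adding latency to it.
            self.first_write_zeroes += 1
            self.zeroed.add(block_no)
        # ...then the actual data is written.

lazy = ThickDisk(blocks=100, eager=False)
eager = ThickDisk(blocks=100, eager=True)
for b in range(100):
    lazy.write(b, b"data")
    eager.write(b, b"data")
print(lazy.first_write_zeroes, eager.first_write_zeroes)   # -> 100 0
```

Same total zeroing work either way; the difference is whether you pay it once up front or sprinkled across your first writes.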
When you want to read a block of data from a VM with snapshot(s) on it, the vmkernel needs to work out whether the block you want is in one of the delta files or in the base vmdk.
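The lookup can be pictured as walking a chain of layers, newest delta first, falling back to the base vmdk. This is a minimal sketch of the idea, assuming each layer simply records the blocks changed while it was current; the names and step counting are mine for illustration.

```python
base = {b: f"base-{b}" for b in range(8)}   # base vmdk holds blocks 0-7
snap1 = {2: "snap1-2", 5: "snap1-5"}        # blocks changed after snapshot 1
snap2 = {5: "snap2-5"}                      # blocks changed after snapshot 2

def read(block_no, chain):
    """chain is ordered newest delta first, base vmdk last."""
    steps = 0
    for layer in chain:
        steps += 1                          # one lookup per layer checked
        if block_no in layer:
            return layer[block_no], steps
    raise KeyError(block_no)

chain = [snap2, snap1, base]
print(read(5, chain))   # found in the newest delta: ('snap2-5', 1)
print(read(2, chain))   # found one level down:      ('snap1-2', 2)
print(read(0, chain))   # falls through to the base: ('base-0', 3)
```

The step count is why a longer snapshot chain can cost more per read: a block that has never changed since the first snapshot is only found after checking every delta above it.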
I have heard that the more snapshots you have on a VM, the bigger the impact on performance, and I have seen a VMware KB (https://kb.vmware.com/kb/1025279) suggesting that you should not have more than 32, and for better performance only 2 or 3.
This extra processing has an impact on the VM's disk access times. I suspect not a great deal of impact, but it is not as quick as going directly to the base vmdk to read and write blocks of data. I think you will only ever notice it, if at all, with very disk-I/O-intensive workloads.
When I get a chance I will build a test environment and run some performance tests to see how much of an impact snapshots have, although the result will probably depend on your disk subsystem.