Thick vs. Thin VMware Disk Provisioning: What's the Difference?


When it comes to configuring your virtual environment, it's very important that you select the right disk types for your needs, or you could encounter outages, wasted storage, or a lot of time spent on the back end reconfiguring your settings.


Thin Disk

This type of virtual disk allows you to allocate storage on demand, instead of deciding ahead of time how much space it's going to take up. This is a good option if you want to control costs and scale out your storage over time. However, you need to pay closer attention to your disk size so you don't overprovision and overcommit your storage to more than it can hold. Additionally, since it's allocating on the fly, you might take some performance hits on initial writes that you wouldn't encounter if you were to utilize one of its thick disk brethren options. This is because as new data space is allocated, the blocks have to first be zeroed to ensure the space is empty before the actual data is written.
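The on-demand allocation and zero-before-first-write behavior can be sketched as a toy model. This is purely illustrative (not a VMware API); the 1 MiB block granularity is an assumption for the example:

```python
# Toy model of thin provisioning: blocks are allocated from the datastore
# only on first write, and a newly allocated block is zeroed before the
# guest's data lands in it (the "initial write" performance hit).

BLOCK_SIZE = 1024 * 1024  # assumed 1 MiB allocation granularity

class ThinDisk:
    def __init__(self, provisioned_blocks):
        self.provisioned_blocks = provisioned_blocks  # logical (promised) size
        self.allocated = {}   # block index -> data; only touched blocks exist
        self.zero_ops = 0     # count of zero-before-write penalties paid

    def write(self, block, data):
        if block >= self.provisioned_blocks:
            raise ValueError("write past end of disk")
        if block not in self.allocated:
            self.allocated[block] = b"\x00" * BLOCK_SIZE  # zero it first
            self.zero_ops += 1                            # first-write penalty
        self.allocated[block] = data

    def datastore_usage(self):
        # Only blocks actually written consume datastore space
        return len(self.allocated) * BLOCK_SIZE

disk = ThinDisk(provisioned_blocks=100)  # a "100 MiB" logical disk
disk.write(0, b"a" * BLOCK_SIZE)         # first write: zeroed, then written
disk.write(0, b"b" * BLOCK_SIZE)         # overwrite: no zeroing needed
print(disk.zero_ops)                     # 1
print(disk.datastore_usage())            # 1 MiB consumed, not 100
```

Note how the overwrite of an already-allocated block skips the zeroing step, which is why the penalty only shows up on net-new writes.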

You might also encounter some fragmentation on the disk when using this method, which can add to your performance degradation. This varies between storage types and vendors, and how your array is set up.

Thin disks are a good option if you've got an application that "requires" much more storage than you know it will use, or if you're unable to accurately predict growth.

If you’re curious about how Hyper-V does this, there is a parallel option called dynamically expanding disks.

Thick Lazy Zeroed Disk

If you want to pre-allocate the space for your disk, one option is to make a thick lazy zeroed disk. It won't be subject to the aforementioned fragmentation problem, since it pre-allocates all the space so no other files can get in the middle (which is what causes fragmentation), and it's easier to track capacity utilization.


If you've got a clear picture of what space you'll be using, like during migrations, for example, you may want to go with the thick lazy zeroed disk option. Also, if your backend storage has a thin provisioning option, more often than not you'll want to go with this simply so you don't have to monitor your space utilization in two locations. Managing thin provisioning and oversubscription on both the back-end SAN and on the front-end virtual disks at the same time can cause a lot of unnecessary headaches, as either or both locations can run out of space.

Thick Eager Zeroed Disk

Generally, we don't utilize this option unless the vendor requires it. For example, Microsoft clusters and Oracle products often require it in order to qualify for support.

When provisioning a thick eager zeroed disk, VMware pre-allocates the space and then zeroes it all out ahead of time. In other words, this takes a while, just to increase the net-new write performance of your virtual disk. We don't frequently see the benefit in this, since you only enjoy this perk one time. It doesn't improve the speed of any of the innumerable subsequent overwrites. (In Hyper-V, by the way, this is called a fixed size disk.)
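The real difference between lazy and eager zeroing is *when* the zeroing cost is paid: at creation time, or spread across each block's first write. A minimal sketch of that trade-off (a toy model, not VMware behavior verbatim):

```python
# Toy model: eager zeroed pays all zeroing up front at creation time;
# lazy zeroed creates instantly but defers zeroing to each first write.

def provision(kind, blocks):
    """Return (blocks zeroed at creation, set of blocks still pending a
    first-write zero). kind is 'lazy' or 'eager'."""
    if kind == "eager":
        return blocks, set()          # slow creation, fast first writes
    if kind == "lazy":
        return 0, set(range(blocks))  # fast creation, penalty deferred
    raise ValueError(kind)

def first_write_pays_penalty(pending, block):
    """True if writing this block incurs the zero-before-write cost."""
    if block in pending:
        pending.remove(block)
        return True
    return False

zeroed_now, pending = provision("lazy", 4)
print(zeroed_now)                               # 0: creation was instant
print(first_write_pays_penalty(pending, 2))     # True: first touch pays
print(first_write_pays_penalty(pending, 2))     # False: overwrites are free
```

Either way the total zeroing work is the same; eager zeroing just front-loads it, which is why the benefit only shows up on net-new writes.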

The below diagram shows the difference between these very clearly. If you create the same size VMDK using the three different types, it will look roughly like this on the datastore:

[Diagram: datastore footprint of a newly created thin, thick lazy zeroed, and thick eager zeroed VMDK of the same size]

If you then write to the disks over time, and make a couple of other disks, it will look like this:

[Diagram: datastore footprint of the three disk types after writes over time, alongside additional disks]
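The footprint difference can also be sketched numerically. This is a simplified model (it ignores VMDK metadata, snapshots, and any array-side thin provisioning):

```python
# Simplified datastore consumption per provisioning type: thin grows with
# actual writes, while both thick types reserve the full size immediately.

def datastore_usage_gb(disk_type, provisioned_gb, written_gb):
    if disk_type == "thin":
        return min(written_gb, provisioned_gb)  # grows only as data lands
    if disk_type in ("thick-lazy", "thick-eager"):
        return provisioned_gb                   # full size reserved up front
    raise ValueError(disk_type)

for t in ("thin", "thick-lazy", "thick-eager"):
    print(t, datastore_usage_gb(t, provisioned_gb=100, written_gb=20))
# thin consumes 20 GB; both thick types consume the full 100 GB
```

This is also why overcommit only bites with thin disks: ten 100 GB thin disks fit on a 500 GB datastore until their combined writes exceed it.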

Physical Raw Device Mapping (RDM)

A Raw Device Mapping is when an entire volume from the storage array is given directly to a single VM, and the VM has full control over that volume. In other words, since the VM can talk directly to the SAN, it can leverage the SAN's functions that may not be accessible when you use a virtual disk format. You would want to leverage this for clustering, SAN-based snapshots, SAN-based replication, or the ability to migrate a disk from a physical server to virtual or vice versa.

There are two types of RDM: physical and virtual. The physical mode presents the volume exactly as it appears from the storage array, with no abstraction layer. Conversely, the virtual mode puts a "wrapper" around the RDM to make it appear as if it were a virtual disk. The point of that wrapper is that it allows you to take snapshots and do Storage vMotions on the volume itself. (Products like Veeam that leverage VMware-based snapshots only work in virtual mode.) Typically, we use virtual RDMs to migrate to VMDK, since a virtual RDM can be Storage vMotioned in order to be turned into a VMDK. When we configure physical RDMs, it's because vendors require it for things like clusters.
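The mode trade-off described above can be summarized as a small lookup. This only encodes the behavior stated in this article (it is not an exhaustive vSphere feature matrix):

```python
# Which hypervisor-level features each RDM mode supports, per the
# distinction above: virtual mode's VMDK-like wrapper enables VMware
# snapshots and Storage vMotion conversion; physical mode does not.

RDM_MODES = {
    "physical": {"vmware_snapshots": False, "storage_vmotion_to_vmdk": False},
    "virtual":  {"vmware_snapshots": True,  "storage_vmotion_to_vmdk": True},
}

def supports(mode, feature):
    return RDM_MODES[mode][feature]

print(supports("virtual", "vmware_snapshots"))    # True: snapshot-based
                                                  # backup tools work here
print(supports("physical", "vmware_snapshots"))   # False: pick physical only
                                                  # when a vendor requires it
```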


It's common for us to see people create RDMs under the misconception that they'll enjoy performance gains. In reality, the performance difference between an RDM and any of the VMDK types is negligible, and if you don't need it for a cluster or a vendor requirement, you're sacrificing flexibility and all the benefits that come along with virtualization. In the early VMware days there was a large performance disparity between RDMs and VMDKs. However, with technological advances, VMware has closed that gap.

If you have questions about your VMware disk types or your virtual environment in general, email us or give us a call at 502-240-0404!