*To prevent unsupported snapshot operations on the shared disks, the Disk Mode of all disks in the cluster must be set to Independent – Persistent. In a VMware Cloud on AWS SDDC, vSAN supports SCSI-3 Persistent Reservations on up to six application nodes per guest cluster with up to 64 shared disks.
*Native SCSI-3 PR support enables customers to create a new Windows Server Failover Cluster (WSFC), or move an existing one, with up to 6 nodes and 64 shared disks onto VMware vSAN-backed VMDKs.
In the early days of ESXi, VMware provided the ability to present a volume / LUN from a backend storage array to a virtual machine directly. This technology is called a Raw Device Mapping, also known as an RDM. While the introduction of RDMs provided several key benefits for end users, those topics are outside the scope of this document. For an in-depth review of RDMs, and their benefits, it is recommended you review the Raw Device Mapping documentation from VMware.
RDMs are, in many ways, becoming obsolete with the introduction of VMware vSphere Virtual Volumes (vVols). Many of the features that RDMs provide are also available with vVols. Pure Storage recommends the use of vVols wherever possible and suggests reading further on this topic in our Virtual Volume (vVol) documentation.
RDM Compatibility Modes
A typical clustering setup includes disks that are shared between nodes. A shared disk is required as a quorum disk. In a cluster of virtual machines across physical hosts, the shared disks must be on a Fibre Channel (FC) SAN, FCoE, or iSCSI. A quorum disk must use a homogenous set of disks, meaning that if the configuration is built on an FC SAN, all of the cluster disks should also be on FC SAN. Setup for Windows Server Failover Clustering describes the supported configurations for a WSFC with shared disk resources that you can implement using virtual machines with Failover Clustering for Windows Server 2008 and later releases, with step-by-step instructions for each configuration and a checklist of clustering requirements and recommendations.
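For reference, the sketch below uses PowerCLI to attach the same FlashArray LUN to two WSFC node VMs as a physical compatibility mode RDM (described in the next section) on a dedicated SCSI controller with physical bus sharing. The VM names, device path, and controller type are placeholders; the exact layout should follow the VMware WSFC guidelines linked at the end of this document.

# Assumes VMware PowerCLI and an existing Connect-VIServer session; all names are placeholders.
$device = "/vmfs/devices/disks/naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx"
foreach ($node in "wsfc-node1", "wsfc-node2") {
    # Attach the same LUN to each cluster node as a physical-mode (pass-through) RDM.
    $disk = New-HardDisk -VM (Get-VM $node) -DiskType RawPhysical -DeviceName $device
    # Place the shared disk on its own SCSI controller with physical bus sharing,
    # which is required for clustering virtual machines across ESXi hosts.
    New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical
}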
There are two compatibility modes for a Raw Device Mapping: physical and virtual. Which option you choose depends on what features are required within your environment.
Physical Compatibility Mode
An RDM used in physical compatibility mode, also known as a pass-through RDM or pRDM, exposes the physical properties of the underlying storage volume / LUN to the Guest Operating System (OS) within the virtual machine. This means that all SCSI commands issued by the Guest OS (with the exception of REPORT LUNS) are passed directly to the underlying volume / LUN, allowing the VM to take advantage of some of the lower-level storage functions that may be required.
Virtual Compatibility Mode
An RDM used in virtual compatibility mode, also known as a vRDM, virtualizes the physical properties of the underlying storage and as a result appears the same way a virtual disk file in a VMFS volume would appear. The only SCSI requests that are not virtualized are READ and WRITE commands, which are still sent directly to the underlying volume / LUN. vRDMs still allow for some of the same benefits as a VMDK on a VMFS datastore and are a little more flexible to move throughout the environment.
In order to determine which compatibility mode should be used within your environment, it is recommended you review the Difference between Physical compatibility RDMs and Virtual compatibility RDMs article from VMware.
Due to the various configurations that are required for each RDM mode, Pure Storage does not have a best practice for which to use. Both are equally supported.
Managing Raw Device Mappings
Connecting a volume for use as a Raw Device Mapping
The process of presenting a volume to a cluster of ESXi hosts to be used as a Raw Device Mapping is no different than presenting a volume that will be used as a datastore. The most important step for presenting a volume that will be used as an RDM is ensuring that the LUN ID is consistent across all hosts in the cluster. The easiest way to accomplish this task is by connecting the volume to a host group on the FlashArray instead of individually to each host. This process is outlined in the FlashArray Configuration section of the VMware Platform Guide.
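As a rough sketch of that workflow with the Pure Storage PowerShell SDK, the commands below create a volume and connect it to a host group rather than to individual hosts; the array address, volume name, size, and host group name are placeholders.

# Assumes the PureStoragePowerShellSDK module is installed; all names below are placeholders.
$flasharray = New-PfaArray -EndPoint flasharray-01.purestorage.com -Credentials (Get-Credential) -IgnoreCertificateError
# Create the volume that will back the RDM.
New-PfaVolume -Array $flasharray -VolumeName 'rdm-vol-01' -Size 1 -Unit T
# Connecting to the host group (not to each host) keeps the LUN ID consistent across the ESXi cluster.
New-PfaHostGroupVolumeConnection -Array $flasharray -VolumeName 'rdm-vol-01' -HostGroupName 'ESXi-Cluster'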
If the volume is not presented with the same LUN ID to all hosts in the ESXi cluster, VMware may incorrectly report that a volume is not in use when it is. The VMware Knowledge Base article Storage LUNs that are already in use as an RDM appear available in the Add Storage Window further explains this behavior.
Identifying the underlying volume for a Raw Device Mapping
There are times where you will need to determine which volume on the FlashArray is associated with a Raw Device Mapping.
1. Right click on the virtual machine and select ’Edit Settings’.
2. Locate the desired RDM you wish to inspect and expand the properties of the disk.
3. Under the disk properties locate the ’Physical LUN’ section and note the vml identifier.
4. Once you have the vml identifier we can then find the LUN ID and the volume identifier to match it to a FlashArray volume.
The vml identifier string can be broken down as follows:
*fa - hex value of the LUN ID (250 in decimal)
*624a9370 - Pure Storage identifier
*8a75393becad4e430004e270 - volume serial number on the FlashArray
5. Now that we know the identifier and LUN ID, we can look at the FlashArray to determine which volume is backing the RDM.
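If you prefer to script this lookup, the sketch below uses PowerCLI together with the Pure Storage PowerShell SDK to read the RDM's canonical name and match its serial to a FlashArray volume. The VM name is a placeholder, and the Get-PfaRDMVol cmdlet listed later in this document wraps a similar lookup.

# Assumes PowerCLI and an existing FlashArray connection in $flasharray (New-PfaArray).
$rdm = Get-HardDisk -VM (Get-VM SQLVM) | Where-Object { $_.DiskType -like 'Raw*' } | Select-Object -First 1
$rdm | Select-Object Name, DiskType, ScsiCanonicalName
# Pure Storage devices are named naa.624a9370<24-character volume serial>,
# so stripping the prefix leaves the FlashArray volume serial number.
$serial = $rdm.ScsiCanonicalName -replace '^naa\.624a9370', ''
Get-PfaVolumes -Array $flasharray | Where-Object { $_.serial -eq $serial }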
As seen above, we are able to confirm that this particular RDM is backed by the volume called ’space-reclamation-test’ on the FlashArray.
Removing a Raw Device Mapping from a virtual machine
The process for removing a Raw Device Mapping from a virtual machine is a little different than that of removing a virtual machine disk (VMDK).
1. Right click the virtual machine and select ’Edit Settings’.
2. Locate the desired RDM you wish to remove and click the ’x’.
3. Ensure the box ’Delete files from datastore’ is checked.
4. Click ’OK’.
5. If you no longer require the volume, you can safely disconnect it from the FlashArray and rescan the ESXi hosts.
Selecting ’Delete files from datastore’ does not delete the data on the volume. This step simply removes the mapping file created on the datastore that points to the underlying volume (raw device).
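For completeness, a scripted version of the removal might look like the sketch below, assuming PowerCLI and the Pure Storage PowerShell SDK; the VM, disk, volume, and host group names are placeholders.

# -DeletePermanently removes only the pointer (mapping) file, not the data on the underlying volume.
Get-HardDisk -VM (Get-VM SQLVM) -Name 'Hard disk 2' | Remove-HardDisk -DeletePermanently -Confirm:$false
# If the volume is no longer needed, disconnect it from the host group and rescan the hosts.
Remove-PfaHostGroupVolumeConnection -Array $flasharray -VolumeName 'rdm-vol-01' -HostGroupName 'ESXi-Cluster'
Get-VMHost | Get-VMHostStorage -RescanAllHba | Out-Null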
Resizing a Raw Device Mapping
Depending on which compatibility mode you have chosen for your RDM, the resize process will vary. The process outlined in Expanding the size of a Raw Device Mapping (RDM) provides an example for both physical and virtual RDMs.
Multipathing
A common question when using Raw Device Mappings (RDMs) is where the multipathing configuration should be completed. Because an RDM provides the ability for a virtual machine to access the underlying storage directly, it is often assumed that configuration within the VM itself is required. Fortunately, things are not that complicated, and the configuration is no different than that of a VMFS datastore: the VMware Native Multipathing Plugin (NMP) is responsible for RDM path management, not the virtual machine.
This means that no MPIO configuration is required within the virtual machines utilizing RDMs. All of the available paths are abstracted from the VM and the RDM is presented as a disk with a single path. All of the multipathing logic is handled in the lower levels of the ESXi host.
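To confirm this from the ESXi side, a quick PowerCLI check such as the sketch below lists the FlashArray devices along with their path selection policy and path count; the host name is a placeholder.

# Assumes PowerCLI and an existing vCenter connection; 'esxi-01' is a placeholder host name.
Get-VMHost esxi-01 | Get-ScsiLun -LunType disk |
    Where-Object { $_.CanonicalName -like 'naa.624a9370*' } |
    Select-Object CanonicalName, MultipathPolicy, @{ Name = 'Paths'; Expression = { ($_ | Get-ScsiLunPath).Count } }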
Within a Windows VM, a pRDM simply appears as a standard disk with a single path.
Multi-Writer
When utilizing RDMs, people often have questions regarding the ’Sharing’ option presented while adding the RDM to the virtual machine.
Since RDMs are most often used for situations like clustering, this raises the question of whether or not this value should be set. There is a fear that if it is left unspecified (which defaults to ’No Sharing’), corruption of some kind can happen on the disk. This is a good mindset to have, as protecting data should always be the number one goal.
The first important thing to note here is that this option is meant for VMDKs or virtual RDMs (vRDM) only. It is not for use with physical RDMs (pRDM) as they are not ’VMFS backed’. So if your environment is utilizing physical RDMs then you do not need to worry about this setting.
If you are utilizing virtual RDMs, then there is a possibility that setting this option is required, specifically if you are utilizing Oracle RAC on your virtual machines. As of this writing, this is the only scenario in which multi-writer is known to be required with virtual RDMs on a Pure Storage FlashArray. VMware has provided additional information on this in their Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag KB.
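If you need to check how the sharing option is currently set on a VM's disks, a read-only PowerCLI query along the lines of the sketch below can help; the VM name is a placeholder, and it assumes the Sharing property is exposed on the disk backing (vSphere 6.0 and later).

# Assumes PowerCLI; reports the disk type and sharing setting for each disk on the VM.
Get-HardDisk -VM (Get-VM oracle-rac-node1) |
    Select-Object Name, DiskType, @{ Name = 'Sharing'; Expression = { $_.ExtensionData.Backing.Sharing } }
# 'sharingNone' corresponds to the default 'No sharing'; 'sharingMultiWriter' means the multi-writer flag is set.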
If there are questions around this topic please open a case with Pure Storage Technical Support for additional information.
Do not set multi-writer on RDMs that are going to be used in a Windows Server Failover Cluster (WSFC) as this may cause accessibility issues to the disk(s). Windows manages access to the RDMs via SCSI-3 persistent reservations and enabling multi-writer is not required.
Queue Depth
An additional benefit of utilizing a Raw Device Mapping is that each RDM has its own queue depth limit, which may in some cases provide increased performance. Because the VM is sending I/O directly to the FlashArray, there is no shared queue depth on a datastore like there is with a VMDK.
Aside from the potentially shared queue depth on the virtual machine SCSI controller, each RDM has its own dedicated queue depth and works under the same rules as a datastore would. This means that if you have a Raw Device Mapping presented to a single virtual machine the queue depth for that RDM would be whatever the HBA queue depth was configured to. Alternatively, if you presented the RDM to multiple virtual machines then the device Number of Outstanding I/Os would be the queue limit for that particular RDM.
For additional information on queue depths, you can refer to Understanding VMware ESXi Queuing and the FlashArray.
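If you want to inspect the current limits for the device backing an RDM, one approach is to query esxcli through PowerCLI as sketched below; the host name and device identifier are placeholders.

# Assumes PowerCLI; the output includes fields such as 'Device Max Queue Depth' and
# 'No of outstanding IOs with competing worlds' for the specified device.
$esxcli = Get-EsxCli -VMHost (Get-VMHost esxi-01) -V2
$esxcli.storage.core.device.list.Invoke(@{ device = 'naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx' })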
Unless directed by Pure Storage or VMware Support, there typically is no reason to modify either of these values. The default queue depth is sufficient for most workloads.
UNMAP
One of the benefits of using RDMs is that SCSI UNMAP is a much less complicated process than it is with VMDKs. Depending on the version of ESXi you are using, there are different caveats for UNMAP to be successful with VMDKs. With an RDM, the only requirements are that the Guest OS supports SCSI UNMAP and that the capability is enabled.
Windows
If utilizing Windows, please refer to the Windows Space Reclamation KB for UNMAP requirements and how to verify this feature is enabled.
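As a quick in-guest check on Windows, something along the lines of the commands below (run inside the VM) reports whether delete notifications (UNMAP/TRIM) are enabled and triggers a manual retrim; the drive letter is a placeholder.

# Run inside the Windows guest. DisableDeleteNotify = 0 means delete (UNMAP/TRIM) notifications are enabled.
fsutil behavior query DisableDeleteNotify
# Manually reclaim space on a given volume; the drive letter is a placeholder.
Optimize-Volume -DriveLetter E -ReTrim -Verbose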
Linux
If utilizing Linux, please refer to the Reclaiming Space on Linux KB for UNMAP requirements and how to verify this feature is enabled.
Managing RDMs with PowerShell
Pure Storage offers a PowerShell Module called PureStorage.FlashArray.VMware to help with PowerShell management of Pure Storage and VMware environments.
To install this module, run:
PS C:\> Install-Module PureStorage.FlashArray.VMware
In this module there are a few cmdlets that assist specifically with RDMs.
PS C:\> Get-Command -Name *RDM* -Module PureStorage.FlashArray.VMware.RDM -CommandType Function
CommandType     Name                         Version    Source
-----------     ----                         -------    ------
Function        Convert-PfaRDMToVvol         1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Copy-PfaSnapshotToRDM        1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaConnectionFromRDM     1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaRDMSnapshot           1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaRDMVol                1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        New-PfaRDM                   1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        New-PfaRDMSnapshot           1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Remove-PfaRDM                1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Set-PfaRDMCapacity           1.1.0.2    PureStorage.FlashArray.VMware.RDM
For instance, to create a new RDM, run:
PS C:\> Connect-VIServer -Server vcenter-01.purestorage.com
PS C:\> $flasharray = New-PfaArray -EndPoint flasharray-01.purestorage.com -Credentials (Get-Credential)
PS C:\> $vm = Get-VM SQLVM
PS C:\> $vm | New-PfaRDM -SizeInTB 4 -Flasharray $flasharray
Replace vCenter FQDN, FlashArray FQDN, volume size and VM name with your own.
Helpful Links
Windows Server Failover Cluster (WSFC)
*About Setup for Windows Server Failover Clustering on VMware vSphere
*Microsoft Windows Server Failover Clustering (WSFC) with shared disks on VMware vSphere 6.x: Guidelines for supported configurations
*Raw Device Mapping with Virtual Machine Clusters
Oracle RAC
*“RAC” n “RAC” all night – Oracle RAC on vSphere 6.x
*Oracle Solutions with VMware
Converting RDMs to vVols