Adding New Node to Windows Failover Cluster Used for SQL Server VDBs May Fail During Failover (KBA5458)
KBA# 5458
Issue
Adding one or more new nodes to a Windows Failover Cluster used as a target for SQL Server VDBs can result in the following error when a failover is tested or actually occurs:
The action 'Move' did not complete. Error code 0x8007174b, "Clustered storage is not connected to the node."
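The error is typically raised when the clustered SQL Server role is moved to the newly added node, whether through Failover Cluster Manager or PowerShell. The following is a minimal sketch of such a move, assuming the FailoverClusters PowerShell module is installed; the role and node names are placeholders for your environment:

# Placeholder names: substitute your SQL Server role and new node names.
Import-Module FailoverClusters
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "NEWNODE01"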
Prerequisites
- The Delphix Engine is configured to use Windows Failover Cluster environments as Target Windows environments for SQL Server VDBs.
- You are configuring your environments according to our documentation section titled Adding a SQL Server failover cluster target environment.
Applicable Delphix Versions
Major Release: All Sub Releases
- 6.0: 6.0.0.0, 6.0.1.0, 6.0.1.1
- 5.3: 5.3.0.0, 5.3.0.1, 5.3.0.2, 5.3.0.3, 5.3.1.0, 5.3.1.1, 5.3.1.2, 5.3.2.0, 5.3.3.0, 5.3.3.1, 5.3.4.0, 5.3.5.0, 5.3.6.0, 5.3.7.0, 5.3.7.1, 5.3.8.0, 5.3.8.1, 5.3.9.0
- 5.2: 5.2.2.0, 5.2.2.1, 5.2.3.0, 5.2.4.0, 5.2.5.0, 5.2.5.1, 5.2.6.0, 5.2.6.1
- 5.1: 5.1.0.0, 5.1.1.0, 5.1.2.0, 5.1.3.0, 5.1.4.0, 5.1.5.0, 5.1.5.1, 5.1.6.0, 5.1.7.0, 5.1.8.0, 5.1.8.1, 5.1.9.0, 5.1.10.0
- 5.0: 5.0.1.0, 5.0.1.1, 5.0.2.0, 5.0.2.1, 5.0.2.2, 5.0.2.3, 5.0.3.0, 5.0.3.1, 5.0.4.0, 5.0.4.1, 5.0.5.0, 5.0.5.1, 5.0.5.2, 5.0.5.3, 5.0.5.4
- 4.3: 4.3.1.0, 4.3.2.0, 4.3.2.1, 4.3.3.0, 4.3.4.0, 4.3.4.1, 4.3.5.0
- 4.2: 4.2.0.0, 4.2.0.3, 4.2.1.0, 4.2.1.1, 4.2.2.0, 4.2.2.1, 4.2.3.0, 4.2.4.0, 4.2.5.0, 4.2.5.1
- 4.1: 4.1.0.0, 4.1.2.0, 4.1.3.0, 4.1.3.1, 4.1.3.2, 4.1.4.0, 4.1.5.0, 4.1.6.0
Resolution
To resolve the inability to fail over to the newly added node(s), verify that both the Delphix drives used for the VDB database file mounts and the non-Delphix clustered drives connected to the existing cluster nodes are also connected to the new node(s).
- After the node(s) are added to the Failover Cluster, complete the following on the Delphix Engine:
  - Discover each newly added node as a standalone target Windows environment.
  - Then refresh the Windows target cluster environment so that it picks up the newly added node(s).
- After these steps, either disable and re-enable all the VDBs on this Windows target cluster environment or refresh the VDBs. In either case, the Delphix LUNs/drives will be connected to any new nodes to ensure a smooth failover.
- In addition, any non-Delphix clustered drives must also be connected to any new nodes; a quick PowerShell check covering both is sketched below.
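As a sanity check after these steps, a Windows administrator can confirm from PowerShell that the new node has joined the cluster and review which node owns each clustered disk. This is a minimal sketch, assuming the FailoverClusters and Storage modules are available (as they are on current Windows Server versions):

Import-Module FailoverClusters

# Confirm the newly added node is listed and reports a state of "Up".
Get-ClusterNode

# List the clustered disk resources with their current owner node and state.
Get-ClusterResource | Format-Table Name, ResourceType, OwnerNode, State

# On the new node itself, list the disks it can currently see.
Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus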
Troubleshooting
On the existing nodes, you can check the drive list using the diskpart command from a Windows command prompt or PowerShell console.
When you see the DISKPART> prompt, enter "list disk" to print the list of disks to the console.
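If an interactive DISKPART session is inconvenient, the same read-only check can be scripted from an elevated PowerShell console; the following is a sketch using diskpart's script mode or, where the Storage module is available, Get-Disk (the temporary file path is just an example):

# Run "list disk" non-interactively by passing diskpart a script file.
"list disk" | Out-File -Encoding ascii C:\Temp\listdisk.txt
diskpart /s C:\Temp\listdisk.txt

# Alternatively, use the Storage module instead of diskpart.
Get-Disk | Sort-Object Number | Format-Table Number, FriendlyName, Size, OperationalStatus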
As an example, consider this node with one C drive, two clustered drives (one used by Delphix for the mount points of the VDB disks), and two Delphix drives (LUNs) used for the VDB database files:

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB  1024 KB
  Disk 1    Reserved       5120 MB      0 B
  Disk 2    Reserved       5120 MB      0 B
  Disk 3    Reserved       1024 TB   960 TB        *
  Disk 4    Reserved       1024 TB   960 TB        *
- Disk 0 is the C drive.
- Disks 1 and 2 are the clustered drives.
- Disks 3 and 4 are Delphix drives, listed in this case as 1024 TB in size. (Some older versions of Delphix might display these disks as 100 TB.)
- This is expected: the Delphix LUNs mounted as NTFS disks on Windows present themselves with these very large sizes.
After adding new nodes, the same command executed on the new nodes could look something like this:
DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB  1024 KB
Only the C drive is listed; it is the only drive the new node recognizes because the clustered drives (including the Delphix drives) are not connected to it yet.
After refreshing the two VDBs on the existing node from the Delphix GUI, the CLI, or another method (such as dx_toolkit), check the new nodes with the same diskpart command:
DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB  1024 KB
  Disk 1    Reserved       1024 TB   960 TB        *
  Disk 2    Reserved       1024 TB   960 TB        *
If failover is tested now, it will still fail because the two clustered disks are not yet connected. In this test, iSCSI clustered drives are used, one of which holds the mount points for the Delphix drives. A Windows administrator responsible for maintaining the Failover Cluster can run the appropriate commands to connect these shared clustered drives to any new nodes.
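The exact commands depend on how the shared storage is presented. For the iSCSI storage used in this test, the administrator might run something along these lines on each new node; this is a sketch only, the portal address is a placeholder, and it assumes the shared disks are already formatted and configured as cluster disk resources:

# Register the iSCSI portal that presents the shared clustered drives (placeholder address).
New-IscsiTargetPortal -TargetPortalAddress "192.0.2.50"

# Connect persistently to any targets that are not yet connected, so the sessions survive a reboot.
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
}

# Confirm the new node can now see the shared disks (compare with the diskpart output above).
Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus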
After this is accomplished, the new node lists all the expected disks:

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB  1024 KB
  Disk 1    Reserved       1024 TB   960 TB        *
  Disk 2    Reserved       1024 TB   960 TB        *
  Disk 3    Reserved       5120 MB      0 B
  Disk 4    Reserved       5120 MB      0 B
When the failover is tested again, the error no longer occurs and the failover to the new node succeeds.
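To run the test from PowerShell rather than Failover Cluster Manager, the same kind of move shown in the Issue section can be repeated against the new node and the owning node verified afterwards (the role and node names are placeholders again):

# Move the SQL Server role to the new node, then confirm it is online there.
Import-Module FailoverClusters
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "NEWNODE01"
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" | Format-Table Name, OwnerNode, State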
Related Articles
The following articles may provide more information related to this article:
- N/A