Channel: High Availability (Clustering) forum

3-node active/passive/passive multi site cluster


Configuration:

Primary Data Center:

2 of 3 nodes - One is active and one is passive.

HA/DR Data Center:

The 3rd node is located here in passive mode.

With this configuration, if both nodes in the Primary Data Center go down, failover to the 3rd node in the HA/DR site would have to be done manually, because the surviving node holds less than 50% of the quorum votes. Would it be possible to create two witness disks in a third physical location? If I could configure two witness disks, failover to the HA/DR site would occur automatically when the Primary Data Center is down.
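
For reference, a failover cluster supports only one witness at a time, so two witness disks are not an option; the usual approach for this layout is a single file share witness hosted in the third site, so the HA/DR node plus the witness can retain quorum. A minimal sketch, assuming hypothetical cluster and share names:

Import-Module FailoverClusters
# Point the cluster at a file share witness hosted in the third location
# (cluster name and share path are assumptions for illustration).
Set-ClusterQuorum -Cluster "PRODCLUSTER" -NodeAndFileShareMajority "\\witness-site3\ClusterWitness"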


Is it possible to create a Windows Server 2003 Enterprise 64-bit guest cluster on Windows Server 2012 R2 Hyper-V?


Scenario:

VM1 guest:              Windows Server 2003 Enterprise 64-bit

VM2 guest:              Windows Server 2003 Enterprise 64-bit

Physical host 1:        Windows Server 2012 R2 Hyper-V

Physical host 2:        Windows Server 2012 R2 Hyper-V

Shared volumes:         HP 3PAR system connected to the physical hosts through FC.

Is it possible to create a Windows Server 2003 Enterprise 64-bit guest cluster on Windows Server 2012 R2 Hyper-V?


We need to cluster an ERP System that is not compatible with newer versions of Windows.

Load Balancing VMs across a Cluster. No VMM server


Howdy,

We have a 3-node Hyper-V cluster with a bunch of VMs on it. None of the VMs are set up with preferred hosts, and the failback setting is set to Prevent Failback. Both of these are that way just because that's how they come by default and we've never changed them.

When we do patching every month, whichever host patches last ends up empty, since the VMs get migrated to the other ones. We're looking for the best way to set things up the way we want them and then have them go back to that same state when the patching is done.

We currently use Cluster-Aware Updating, which we kick off manually from a separate machine so we can watch it run in case something goes wrong. When we're done, we end up with machines all over the place, and the last node ends up mostly empty.

What is the best way to either make specific machines always go back to specific hosts, or have something do resource balancing after patching to spread things out?
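
For reference, a minimal sketch of how preferred owners and failback are typically set with the FailoverClusters PowerShell module (the VM group and node names below are hypothetical):

Import-Module FailoverClusters
# Pin a clustered VM role to a preferred node.
Set-ClusterOwnerNode -Group "VM01" -Owners "HV-NODE1"
# Allow the role to fail back to its preferred node instead of preventing failback.
(Get-ClusterGroup -Name "VM01").AutoFailbackType = 1   # 0 = prevent, 1 = allow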

Thanks!

AD-less Cluster Bootstrapping Doesn't Work


We have a Server 2012 R2 cluster running our production VMs, and we're looking at removing the need for our last physical DC. We have two other DCs running as VMs. However, testing whether the cluster would start without any DCs running failed completely.

I even tested this in a completely new test lab environment and had exactly the same result: the cluster wouldn't start unless there was a DC running.

So what's going on, if all of Microsoft's documentation seems to suggest that this is no longer a requirement due to the addition of AD-less cluster bootstrapping? Was it added in Server 2012 and then removed in R2? That doesn't make sense.

Andrew


Andrew France - http://andrewsprivatecloud.wordpress.com

Cluster-aware updating - Self updating not working


Hi,

I have a Windows Server 2012 failover cluster with 2 nodes, and I am having problems getting self-updating to work properly.

The Analyze CAU Readiness check does not report any issues, and I have been able to run a remote update with no problems. I don't get any errors or failure messages in the CAU client, only this message: "WARNING: The Updating Run has been triggered, but it has not yet started and might take a long time or might fail. You can use Get-CauRun to monitor an Updating Run in progress."

In the Event Viewer I see 2 errors and 1 warning for each run: events 1015, 1007 and 1022.

1015: Failed to acquire lock on node "node2". This could be due to a different instance of orchestrator that owns the lock on this node.

1007: Error Message:There was a failure in a Common Information Model (CIM) operation, that is, an operation performed by software that Cluster-Aware Updating depends on.

Does anyone have any idea what is causing this to fail?
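
For reference, a hedged sketch (the cluster name is hypothetical) of how to check whether an earlier Updating Run is still holding the orchestrator lock and how the self-updating clustered role is configured:

# Show any Updating Run currently in progress (and which orchestrator owns it).
Get-CauRun -ClusterName "CLUSTER01"
# Show the CAU clustered role configuration and state on the cluster.
Get-CauClusterRole -ClusterName "CLUSTER01"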

Thanks!

Why is the failover cluster not failing over when drives are offline? - Takes 10 minutes to fail over when a server reboots for updates


I have a 2008 R2 domain with two 2008 R2 nodes in a cluster. I have 2 questions:

1. If I "validate the cluster" and it is a cluster with several 10TB disk on it, etc. will it take a long time to validate?  Just want to make sure that step does not cause any issues on a running cluster in production

2. When a node reboots for Windows updates, the cluster keeps responding to ping, but the shares basically go offline for about 10 minutes. Is there something I can do to speed up the failover process? I created my shares manually, not through Failover Cluster Manager; I think that is fine since they still show up in Failover Cluster Manager.
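
On question 1, a minimal sketch (node names are hypothetical): the non-storage validation tests are generally safe to run on a live cluster, so one option is to exclude the storage tests, which are the ones that exercise the disks:

Import-Module FailoverClusters
# Run validation without the storage tests so disks in use are not touched.
Test-Cluster -Node "NODE1","NODE2" -Include "Inventory","Network","System Configuration"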

Thanks,


Dave

Creating Failover Cluster within HA VM's fails despite successful validation

Purpose:
To create a failover SQL cluster using two highly available virtual machines

Scenario:
- two HA VMs (W2K8 Enterprise Full) running on two bare-metal hosts (W2K8 Enterprise Core);
- iSCSI SAN: the VMs' VHDs are on LUNs exposed to the hosts;
- both VMs have access to a LUN made available using the iSCSI initiator within the guests (no pass-through);

Problem:
Validation of the configuration for the SQL cluster is successful (not even a warning);
Creating the cluster fails:

Cluster: SN-SQL-CLUSTER
Node: sn-sql01.amsterdam.schepnet.local
Node: sn-sql02.amsterdam.schepnet.local
IP Address: 10.10.11.1
Started: 7/25/2008 11:52:45 AM
Completed: 7/25/2008 11:52:46 AM

Beginning to configure the cluster SN-SQL-CLUSTER.
Initializing Cluster SN-SQL-CLUSTER.
Validating cluster state on node sn-sql01.amsterdam.schepnet.local.
Searching the domain for computer object 'SN-SQL-CLUSTER'.
Creating a new computer object for 'SN-SQL-CLUSTER' in the domain.
Configuring computer object 'SN-SQL-CLUSTER' as cluster name object.
Validating installation of the Network FT Driver on node sn-sql01.amsterdam.schepnet.local.
Validating installation of the Cluster Disk Driver on node sn-sql01.amsterdam.schepnet.local.
Configuring Cluster Service on node sn-sql01.amsterdam.schepnet.local.
Validating installation of the Network FT Driver on node sn-sql02.amsterdam.schepnet.local.
Unable to successfully cleanup.
To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.
An error occurred while creating the cluster. An error occurred creating cluster 'SN-SQL-CLUSTER'. The service has not been started.

Which service cannot be started?
What could be the reason?
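
A hedged sketch of what could be checked next, using the node name from the log above: whether the Cluster service and the NetFT driver are present and running on the second node (the commands below are a suggestion, not taken from the original post).

# Check the Cluster service on the node where configuration stopped.
Get-Service -ComputerName sn-sql02 -Name ClusSvc
# Check the Network Fault-Tolerant (NetFT) driver state on the same node.
Get-WmiObject -ComputerName sn-sql02 -Class Win32_SystemDriver -Filter "Name='NetFT'"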

cluster shared volume has entered a paused state because of (c000000e)


I am running a Windows Server 2012 R2 Hyper-V cluster with JBOD storage attached through DataCore (via iSCSI).

Situation: within the last 6 months this has occurred 3 times, on different Hyper-V hosts. For some reason the host loses its connection to the virtual disk and cannot see the other Hyper-V servers in the cluster. The failover cluster service is stopped on the host, and according to the event logs the Hyper-V host is removed from the failover cluster and later re-joined.

The virtual disk gets moved to another Hyper-V host, and all VMs are moved to other Hyper-V hosts and restarted.

When I look at the failover cluster, the Hyper-V host that lost the connection is already part of the cluster again. It is the owner of a virtual disk, and several VMs are running on it. So the failed Hyper-V host is fine again.

I am trying to find out what causes this behavior. All VMs on this host go through a crash restart, and that isn't something I want to see in my production environment.

There is no AV running on the Hyper-V hosts. The storage system/DataCore does not show any errors. No backups or snapshots are being done during that time frame. I checked with our network admin whether there was anything on the switches/routers/firewall during that time frame, but nothing was found.

The errors I see are:

In event log FailoverClustering - Diagnostic I see the first event

[NETFTAPI] Signaled NetftRemoteUnreachable event, local address x.x.x.x:3343 remote address x.x.x.x:3343

I see this event for all networks.

On System Event logs - at the same time:

The cluster Resource Hosting Subsystem (RHS) process was terminated and will be restarted. This is typically associated with cluster health detection and recovery of a resource. Refer to the System event log to determine which resource and resource DLL is causing the issue. (Error ID 1146).

The next error I see is:

Cluster Shared Volume 'XXX' has entered a paused state because of '(c000000e)'. All I/O will temporarily be queued until a path to the volume is reestablished. (Error ID 5120)

Afterwards I see additional errors ID 1146 and 1135 (about removing hyper-v hosts from cluster) and the error:

Cluster Shared Volume XXX has entered a paused state because of '(c000026e)'. All I/O will temporarily be queued until a path to the volume is reestablished. (Error ID 5120).

But this doesn't explain why RHS failed, or why the connection to the storage and the other Hyper-V hosts was lost.

Any ideas what I could do to determine the cause?
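
A hedged sketch of extra data that can be gathered right after the next occurrence (the destination path is hypothetical): the detailed cluster log for the window around the event, and the state each node reports for the CSVs.

Import-Module FailoverClusters
# Collect the cluster log from every node for the last 60 minutes.
Get-ClusterLog -UseLocalTime -TimeSpan 60 -Destination "C:\Temp\ClusterLogs"
# Show how each node currently sees the Cluster Shared Volumes (direct vs. redirected access, etc.).
Get-ClusterSharedVolumeState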



Can I monitor the progress of CAU?


Howdy,

We've always run CAU manually from a utility server and pointed it at our clusters to update them. This lets us watch the CAU program to see exactly where in the process it is, which server it's on, etc.

However, we are now testing the self-updating feature, where the cluster machines update themselves. If we do that, can we still monitor the progress of the updating somehow? I don't know whether I can still run CAU from the utility box and connect to the various clusters just to see how they're doing.
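
For reference, a minimal sketch (the cluster name is hypothetical): the CAU cmdlets can still be run from the utility box against a self-updating cluster to watch a run in progress and review past runs.

# Watch an Updating Run in progress from a remote management machine.
Get-CauRun -ClusterName "HVCLUSTER01"
# Review the most recent completed run in detail.
Get-CauReport -ClusterName "HVCLUSTER01" -Last -Detailed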

Thanks.

Do I need a 2012 forest to run a failover cluster?


Sorry, I have more dumb questions to ask.

Do I need a 2012 forest to run a failover cluster?

I just ran:

Install-WindowsFeature RSAT-AD-PowerShell

Enable-SmbDelegation -SmbServer FileServer1 (SOFS1) -SmbClient HV1

and got this error:

The SMB delegation cmdlets require the AD forest to be at the Windows Server 2012 forest functional level.
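
For reference, a hedged sketch of how the forest functional level can be checked, and raised to Windows Server 2012 once every domain controller runs Windows Server 2012 or later and the domain functional levels have been raised accordingly:

Import-Module ActiveDirectory
# Show the current forest functional level.
(Get-ADForest).ForestMode
# Raise it (only after all DCs are 2012 or newer); left commented out on purpose.
# Set-ADForestMode -Identity (Get-ADForest) -ForestMode Windows2012Forest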

thank you

more about the scenario:

"hyper v over smb3 access denied"

Failover Cluster SQL 2012 - Is a Second NIC Still needed in separate VLAN


Hi All:

I am trying to get clarification on best practices for building out a 2-node SQL Failover Cluster Instance. I have read it is best to have a secondary NIC in another subnet for redundancy (I have also read this is deprecated); however, when you validate the cluster in Windows, it looks like it wants that.

So I basically add another NIC within VMware, in another subnet/VLAN, and just assign it an IP and subnet mask. Below is a sample of how it would be configured. However, when I run the Validate Cluster wizard it tells me that "connectivity checks using UDP on port 3343" have failed. I am assuming this is because I do not have an L3 gateway defined on that NIC. I have also read this is normal (as it is just a warning). Is it best practice to have a secondary NIC in another subnet, and if so, can that warning be disregarded? If that primary NIC failed and the gateway was gone, would the cluster start to use the secondary NIC? I just want to make sure my design is sound.

So far
NODE 1
NIC 1
IP = 192.168.100.10
SN = 255.255.255.0
GW = 192.168.100.1

NIC 2
IP = 192.168.200.10
SN = 255.255.255.0
GW = NONE ASSIGNED
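
For what it's worth, a minimal sketch of how to confirm, after the cluster is built, what role each cluster network was given (3 = cluster and client, 1 = cluster communication only, 0 = not used by the cluster):

Import-Module FailoverClusters
# List the cluster networks with their role and subnet.
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask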

what is clustering, fail over clustering and high availability


What is clustering, failover clustering, and high availability? Can I get a simple example or scenario to differentiate between these terms?

 

hyper v over smb3 access denied


Hello y'all,

Wanted to get some advice; I'm getting an error.

Domain DCs are 2003 and 2008; the forest level is 2008.

failover cluster setup:

4 servers: 2 Hyper-V 2012 R2 and 2 SOFS (CIB) 2012 R2.

Hyper-v Setup

The other 2 are clustered Hyper-V 2012 R2 servers (HV1 & HV2). I'm able to perform most Hyper-V Manager functions just fine (create new VMs, mount drives, mount ISOs, run VMs, etc.). The problem comes up in this scenario.

  • In AD, I set up HV1, HV2 and also the SOFS to use Kerberos only for delegation (service types: cifs, Hyper-V replication, Microsoft Virtual Console Service, Microsoft Virtual System Migration Service). Note: I also tested it by changing to "use any authentication protocol".
  • On the SOFS I set up permissions (Administrator full control, Everyone full control). Note: I also tested using the computer objects; no good, same problem. I think this is my problem.
  • Share setup:
    • Create a share
    • Enable inheritance
    • Permissions
    • Share permissions on \\sofs.domain-name.com\share:
    • Everyone: Full Control
    • Domain Admins: Full Control
    • "Network Service": Full Control (service account object type)
    • HV1 and HV2 (computer object types)
  • Failover cluster setup: DNS for the failover cluster is set up. Role: Scale-Out File Server for application data (SMB protocol).
  • Log into HV1 and open Hyper-V Manager.
  • Now, if I log into HV1 or HV2, open Hyper-V Manager, and try to make a change (for example, move a VM), I get an Access Denied error.
  • How do I check for a "double hop" problem?
  • Note: In Failover Cluster Manager I can't see any guests (VMs) on the nodes (HV1 and HV2).
  • It looks like this:

Error:

Failed to load the virtual machine.

\\sofs\ShareV1\Vm-test: General access denied error.

The operation failed.

User "Domain\administrator" failed to create external configuration store at \\sofs.domain-name.com\share: General access denied error (0x80070005).

At this point I don't know what to do.

 
Enable-SmbDelegation -SmbServer SOFS01 -SmbClient HyperV01
Enable-SmbDelegation -SmbServer SOFS01 -SmbClient HyperV02

Because these cmdlets only work with the new resource-based delegation, the Active Directory forest must be in “Windows Server 2012” functional level. A functional level of Windows Server 2012 R2 is not required.
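
A hedged sketch (server name reused from the commands above) of how to confirm what resource-based delegation ends up configured once the forest is at the required level; this also needs the AD PowerShell module installed:

# List the resource-based SMB delegations configured for the file server.
Get-SmbDelegation -SmbServer SOFS01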

Ref:

JoseB blogged about this a while back.  See if this fits:

http://blogs.technet.com/b/josebda/archive/2008/06/27/using-constrained-delegation-to-remotely-manage-a-server-running-hyper-v-that-uses-cifs-smb-file-shares.aspx

Or TaylorB's here:

http://blogs.msdn.com/b/taylorb/archive/2012/03/20/enabling-hyper-v-remote-management-configuring-constrained-delegation-for-smb-and-highly-available-smb.aspx


iSCSI SAN SOFS Hyper-V Cluster w/ Replica to Standalone


I have been trying to nail down the proper solution and have posted a few times with similar ideas. I believe I may have found it, but I want some advice.

We have a client with the following:
2x HP P2000 iSCSI SANs with redundant controllers each
2x file servers cross-connected to each SAN through iSCSI, running DataCore and using DataCore's MPIO software (all DataCore is doing is mirroring the vDisk from one SAN to the other, providing transparent failover should an entire enclosure fail, and using 20 GB of RAM for caching)
2x Hyper-V servers in a cluster utilizing the DataCore storage, with an additional standalone Hyper-V server just hosting a couple of VMs on local storage.

We are looking to move them to the native Windows stack and remove DataCore from the equation while providing a similar level of redundancy. Since Windows Server doesn't support mirroring one CSV to another (I know 2016 may help in this area), and I can't use SAS JBODs (and even if I did, I wouldn't have 3 enclosures to create an enclosure-aware vDisk in Storage Spaces), my plan was to:

Rebuild the DataCore boxes with 2012 R2, create a file server cluster, install the SOFS role, present 1x P2000 SAN to the file server cluster through multiple iSCSI connections using native MPIO, create a CSV, and then create a continuously available SMB 3.0 share on top and serve it to the Hyper-V cluster (I would also use deduplication). This would host all the production VMs (this would be the fast-storage SAN).

I would then take the other SAN (with the slower storage), serve it directly to the standalone Hyper-V server (again multiple iSCSI connections with MPIO), and turn on Hyper-V Replica from the cluster to the standalone server, using it as a DR target. I'm hoping this eases their concerns about not having redundancy at the SAN level with transparent failover in the event the production SAN dies (though that's very unlikely: it's plugged into two separate UPSs, on generators, using redundant switches, essentially the works).
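
For reference, a hedged sketch of the Hyper-V Replica piece (server, path, and VM names are hypothetical); note that replicating to or from a failover cluster generally also requires adding the Hyper-V Replica Broker role on the cluster side:

# On the standalone DR host: allow it to receive replication (Kerberos, default HTTP port).
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\ReplicaVMs"
# From the production cluster: enable replication for a VM to the DR host.
Enable-VMReplication -VMName "PRODVM01" -ReplicaServerName "DR-HV01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos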

My questions are: (a) do you see any issues with this configuration, and (b) should I just connect my fast SAN directly to my Hyper-V cluster? Are there any drawbacks to not having that storage cluster in the middle (you mentioned CSV cache)? I think they would feel best about utilizing all their current hardware, even if the advantage of having that storage layer is only very small, as they have the hardware and would like to use it. Thanks for your help in advance; I know it's somewhat of a long post!

Windows Server 2008 R2 failover cluster storage issue


Apologies for the long-winded description, but I'm looking for help and advice with a storage issue I'm currently investigating on a 4-node Windows Server 2008 R2 SP1 failover cluster (HP servers, Emulex HBAs, EMC PowerPath, EMC CLARiiON CX4-960 disk array) that's proving both interesting and frustrating in equal measure.

With all four cluster nodes up, and the failover cluster fully operational, I am able to happily fail over services and applications between cluster nodes without the cluster or operating system reporting any issues.

When the cluster is in this state:

  • If I generate a Failover Cluster Validation Report and run all tests (including the storage tests), the report only contains a small number of non-critical warnings.
  • If I examine the details of any of the shared cluster disks on any of the nodes using DISKPART, this is what I see (this may not seem important now, but please bear with me):

PowerDevice by PowerPath
Disk ID : <ID> or {<GUID>}
Type : FIBRE
Status : Reserved
Path : 0
Target : 0
LUN ID : <LUN>
Location Path : UNAVAILABLE
Current Read-only State: Yes
Read-only : Yes
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : Yes

FYI - The cluster uses a mix of MBR and GPT-based cluster disks presented to all nodes via the shared EMC Clariion CX4-960 disk array, SAN fabric, and Emulex HBAs.

However, if I restart one of the cluster nodes, then the affected node begins endlessly cycling through startup -> BSOD -> restart.

The time between startup and BSOD is approximately 25 minutes.

If I generate and check the cluster log (Cluster.log) for the period 25 minutes or so prior to a restart, I see the same entries i.e.

2015/08/10-11:47:59.000 ERR   [RHS] RhsCall::DeadlockMonitor: Call OPENRESOURCE timed out for resource 'NODE_DATA'.
2015/08/10-11:47:59.000 ERR   [RHS] RhsCall::DeadlockMonitor: Call OPENRESOURCE timed out for resource 'NODE_FLASH'.
2015/08/10-11:47:59.000 INFO  [RHS] Enabling RHS termination watchdog with timeout 1200000 and recovery action 3.
2015/08/10-11:47:59.000 ERR   [RHS] Resource NODE_FLASH handling deadlock. Cleaning current operation and terminating RHS process.
2015/08/10-11:47:59.000 INFO  [RHS] Enabling RHS termination watchdog with timeout 1200000 and recovery action 3.
2015/08/10-11:47:59.000 ERR   [RHS] Resource NODE_DATA handling deadlock. Cleaning current operation and terminating RHS process.
2015/08/10-11:47:59.000 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'NODE_FLASH', gen(0) result 4.
2015/08/10-11:47:59.000 ERR   [RHS] About to send WER report.
2015/08/10-11:47:59.000 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'NODE_FLASH' consecutive failure count 1.
2015/08/10-11:47:59.000 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'NODE_DATA', gen(0) result 4.

2015/08/10-11:47:59.000 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'NODE_DATA' consecutive failure count 1.
2015/08/10-11:47:59.000 ERR   [RHS] About to send WER report.
2015/08/10-11:47:59.075 ERR   [RHS] WER report is submitted. Result : WerReportQueued.
2015/08/10-11:47:59.078 ERR   [RHS] WER report is submitted. Result : WerReportQueued.

And if I check the System Event Log for the same period, I see:

  • Several Event 118, elxstor, “The driver for the device \Device\RaidPort1 performed a bus reset upon request.” Warning messages
  • Immediately followed by an Event 1230, FailoverClustering, “Cluster resource ‘NODE_DATA’ (resource type “, DLL ‘clusres.dll’) either crashed or deadlocked. The Resource Hosting Subsystem (RHS) process will now attempt to terminate, and the resource will be marked to run in a separate monitor.” Error message
  • Immediately followed by an Event 1230, FailoverClustering, “Cluster resource ‘NODE_FLASH’ (resource type “, DLL ‘clusres.dll’) either crashed or deadlocked. The Resource Hosting Subsystem (RHS) process will now attempt to terminate, and the resource will be marked to run in a separate monitor.” Error message.

NOTE - NODE_DATA and NODE_FLASH are GPT-based cluster disks.

What is interesting is not what is happening, but why!

In terms of the “what”, I believe that, at the failover clustering level, the startup -> BSOD -> restart behaviour is a result of the following (the relevant timeout values are sketched in a check after the list):

  1. RHS calls an entry point to resources NODE_DATA and NODE_FLASH;
  2. RHS waits DeadlockTimeout (5 minutes) for the resources to respond;
  3. The resources do not respond, and so the Cluster Service (ClusSvc) terminates the RHS process to recover from unresponsive resource;
  4. The Cluster Service (ClusSvc) waits DeadlockTimeout x 4 (20 minutes) for the RHS process to terminate;
  5. Since the RHS process does not terminate, the Cluster Service (ClusSvc) calls NetFT to bugcheck the node to recover from the RHS termination failure;
  6. NetFT bugchecks the node with a STOP.
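
A hedged sketch of that check, confirming the timeout values the 25-minute figure is built on (resource names taken from the log above; DeadlockTimeout is in milliseconds, so 300000 = 5 minutes), assuming the timeouts are exposed as resource common properties:

Import-Module FailoverClusters
# Show the deadlock and pending timeouts for the two affected resources.
Get-ClusterResource "NODE_DATA","NODE_FLASH" | Format-Table Name, DeadlockTimeout, PendingTimeout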

But why aren’t the NODE_DATA and NODE_FLASH cluster resources responding to the OPENRESOURCE calls?

After a lot of digging around Windows Event Logs and cluster logs, I decided to check the status of the various cluster disks from the perspective of both the working cluster nodes and the failing cluster node.

What I saw when I examined the details of the shared cluster disks on each node using the DISKPART utility (also backed up by what I was seeing in the Disk Management MMC) was as follows:

Working Cluster Node (MBR-based disks):

PowerDevice by PowerPath
Disk ID : <ID>
Type : FIBRE
Status : Reserved
Path : 0
Target : 0
LUN ID : <LUN>
Location Path : UNAVAILABLE
Current Read-only State: Yes
Read-only : Yes
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : Yes

Working Cluster Node (GPT-based disks):

PowerDevice by PowerPath
Disk ID : {<GUID>}
Type : FIBRE
Status : Reserved
Path : 0
Target : 0
LUN ID : <LUN>
Location Path : UNAVAILABLE
Current Read-only State: Yes
Read-only : Yes
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : Yes

Failing Cluster Node (MBR-based disks):

PowerDevice by PowerPath
Disk ID : <ID>
Type : FIBRE
Status : Reserved
Path : 0
Target : 0
LUN ID : <LUN>
Location Path : UNAVAILABLE
Current Read-only State: Yes
Read-only : Yes
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : Yes

Failing Cluster Node (GPT-based disks):

PowerDevice by PowerPath
Disk ID : 00000000
Type : FIBRE
Status : Offline
Path : 0
Target : 0
LUN ID : <LUN>
Location Path : UNAVAILABLE
Current Read-only State: Yes
Read-only : Yes
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No

i.e. The failing node appears to not be recognising the GPT-based cluster disks as clustered disks (or even configured disks).

Once in this failed state, the following process seems to allow the failing cluster node to join the cluster (a PowerShell rendering is sketched after the list):

  1. Using cluster.exe or Failover Cluster Manager, take ALL GPT-based cluster disks offline (if ANY of the GPT-based cluster disks are online when the Cluster Service on the failing node is re-started in Step 4 below, the failing node returns to its cycle of startup -> BSOD -> restart);
  2. Set the startup type of the Cluster Service (ClusSvc) on the failing node to Manual, then wait for the failing node to restart;
  3. Rescan storage on the failing cluster node;
  4. Restart the Cluster Service (ClusSvc) on the failing node.
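
A hedged PowerShell rendering of that workaround (the node name is hypothetical; the disk names are the two from the log). Steps 1 and 2 can be run from a working node; steps 3 and 4 run on the failing node itself:

Import-Module FailoverClusters
# 1. Take the GPT-based cluster disks offline.
Stop-ClusterResource -Name "NODE_DATA"
Stop-ClusterResource -Name "NODE_FLASH"
# 2. Set the Cluster service on the failing node to Manual, then wait for the node to restart.
Set-Service -ComputerName FAILINGNODE -Name ClusSvc -StartupType Manual
# 3. On the failing node, rescan storage.
Set-Content -Path "C:\Temp\rescan.txt" -Value "rescan"
diskpart.exe /s C:\Temp\rescan.txt
# 4. On the failing node, start the Cluster service again.
Start-Service -Name ClusSvc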

I’ve tried to reproduce this issue on another Windows Server 2008 R2 SP1 failover cluster (same patch level as the issue cluster, with the same HBAs, MPIO software etc., and using a similar mix of MBR and GPT-based cluster disks presented to all nodes via the same shared EMC Clariion CX4-960 disk array and SAN fabric), but I just can’t get the new cluster to exhibit the same behaviour.

NOTES -

  • This issue only seems to occur after a node restarts (i.e. if I stop then re-start the Cluster Service on any particular node, then the issue does not appear).
  • This issue doesn’t appear to be node-specific (i.e. the same issue occurs irrespective of which node is restarted).

But what I’d like to know, is:

What is / could be preventing the failing node from recognising the GPT-based cluster disks as cluster disks?

Currently, my working assumption is that if I can answer this question and solve this issue, the blocker to the OPENRESOURCE call succeeding will be removed, and the Cluster Service on the failing node will be able to restart following a server crash / restart.

Any help, advice etc. appreciated.


NLB Issue - Primary node turns to "Converging" after power failure.


Hi Guys,

NLB is having an issue after a power trip on the Hyper-V server. The primary node's status turns to "Converging".
We tried removing and re-adding the node to the NLB cluster; it still looks the same as the screen below.
Troubleshooting done so far:
1. Added and removed the node.
2. Repaired the network.
3. Restarted the server and services.
4. Each node can ping every node's NLB and public IP.

We found that all the Outlook connections are established via UGROEXCH02, while UGROEXCHT03 is idle.
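
For reference, a hedged sketch (using the host names from the post) of checking node state with the NLB PowerShell module, which is available on 2008 R2 and later:

Import-Module NetworkLoadBalancingClusters
# Show the state of the NLB nodes as seen from each host.
Get-NlbClusterNode -HostName UGROEXCH02 | Format-Table Name, State
Get-NlbClusterNode -HostName UGROEXCHT03 | Format-Table Name, State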

Any idea?

Darren Lee

Create Cluster with workflow and inlinescript failed

what is difference between clustering and V motion


What is the difference between clustering and vMotion? My understanding is that a cluster is set up between physical servers or virtual servers, e.g. one ESXi host clusters with, or fails over to, another ESXi host. Is that correct? Also, do applications such as Exchange, SCCM 2012, or SCOM 2012 need to be installed on both ESXi boxes, or on the cluster box? I know these boxes share common storage, so will an installation on one box function as a cluster?

Is vMotion the same as a cluster?

Kindly let me know the difference between clustering and vMotion

Kindly let me know the difference between clustering and vMotion. E.g., I have a VMware/Hyper-V host (ESXi box); what is the difference between VM clustering and vMotion? Also, does vMotion happen within a host or to another host? If it goes to another host, does that mean clustering between 2 hosts? Am I right? Kindly help me understand the basic difference between clustering and vMotion, live migration, etc.

how to set up Hyper-V clustering between hosts

How do I set up Hyper-V clustering between hosts? E.g., I have 2 hosts with 50 VMs; if I want to do failover clustering, how do I do it? Also, is there any other method available for failover, like vMotion or live migration, etc.? If yes, how do I set it up?
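
For reference, a minimal sketch (host names, address, disk and VM names are hypothetical) of the basic steps for a two-node Hyper-V failover cluster; once the VMs are clustered roles, live migration between the hosts is what moves them between nodes without downtime:

# On each host: install the failover clustering feature.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
# Validate the configuration, then create the cluster.
Test-Cluster -Node "HOST1","HOST2"
New-Cluster -Name "HVCLUSTER" -Node "HOST1","HOST2" -StaticAddress "10.0.0.50"
# Turn a shared disk into a Cluster Shared Volume for the VM files.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
# Make an existing VM highly available as a clustered role.
Add-ClusterVirtualMachineRole -VMName "VM01"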