Saturday, February 14, 2015

Multisite DCI

I started this blog after finding no resources on how to implement multiple data center connectivity using anything other than OTV. I intend to share my experiences and learn from individuals who have implemented, or are in the process of implementing, multiple interconnected data center (DCI) environments using FabricPath, VXLAN, or even ACI. My specific need was to connect two existing facilities with a third, new site. OTV might be a viable option, especially if Nexus 7Ks or ASRs are already owned; however, due to the cost of the additional required F-series cards and the bandwidth limits of the ASRs, other products will be my focus. I will include the limited review done in considering the 7K and ASR platforms to show why they were not selected.

Robert


1 comment:

  1. There are two data centers that house virtual server technologies running Windows, Linux, and dedicated OSes; multiple AIX-based clusters; a 10,000-user wireless infrastructure with multiple controllers and firewalls; storage from multiple top-tier vendors; vendor-touch-only products; voice IVR systems; and, recently, even a mainframe.

    These facilities are located about 2 miles apart and are currently connected by three pairs of leased single-mode fiber that traverse a short path and three pairs that follow a longer path. This fiber connects Cisco Nexus 7000s with Cisco VSS 6500s, with both sides having Nexus 5548s and 2148- through 2232-model FEXes. It also provides Fibre Channel storage connectivity and redundant Internet connections between the data centers' DMZ and firewall perimeters. There are approximately 50 remote locations connected to two carriers, with capacities ranging from a T1 to 1Gb optical, and each facility is linked to both carriers.

    Stretched Layer 2 with GLBP/HSRP filtering, Windows/AIX clusters, vMotion, heartbeat links, internal and external load-balanced systems, BGP failures, Internet link capacities, third-party connectivity, multiple wired and wireless VoIP products, and seamless wireless roaming with a business partner who shares space on the campus hosting one of the data centers can make for an active environment on a normal day.
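    The GLBP/HSRP filtering mentioned above is the usual way to keep a first-hop gateway active at each site over stretched Layer 2: FHRP hellos are blocked on the DCI links so each data center elects its own local active gateway instead of forwarding routed traffic across the interconnect. A minimal NX-OS-style sketch follows; the interface and ACL names are hypothetical, and the exact syntax varies by platform and software release.

    ```
    ! Hypothetical ACL dropping FHRP hellos across the DCI trunk.
    ! HSRPv1 hellos: UDP 1985 to 224.0.0.2; HSRPv2 hellos: UDP 1985 to 224.0.0.102
    ! GLBP hellos:   UDP 3222 to 224.0.0.102
    ip access-list DCI-FHRP-FILTER
      10 deny udp any 224.0.0.2/32 eq 1985
      20 deny udp any 224.0.0.102/32 eq 1985
      30 deny udp any 224.0.0.102/32 eq 3222
      40 permit ip any any

    ! Applied inbound as a port ACL on the Layer 2 DCI interface
    interface Ethernet1/1
      description Dark-fiber DCI link to remote data center
      ip port access-group DCI-FHRP-FILTER in
    ```

    Filtering the hellos is typically paired with keeping the FHRP virtual MAC from being learned across the interconnect, so traffic destined for the gateway MAC never hairpins to the other site.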

    One data center is old and suffers from electrical and cooling capacity issues, physical layout and total space constraints, and environmental problems such as water sprinkler lines running above the racks, shortcomings that are not possible to fix in place. The second data center is in a building with limited expansion options due to zoning, and the space used by the data center and its connected offices looks attractive to the cube farmers.

    About 18 months ago, discussions were held to decide on building, buying, or renting new data center space, and the decision was made to rent space from one of a small number of hosting firms with underground facilities in the area. Along with this decision, the search began for a fiber carrier who either had, or would install, fiber between the two existing facilities and could also provide the new facility with diverse entry pathways. Being able to afford the carrier was an important consideration as well; prices varied quite a bit.

    A long-term lease of dark fiber from a carrier was deemed desirable so that whatever transport services were needed (4Gb, 8Gb, or 16Gb Fibre Channel; 10Gb or 40Gb data; 3G/4G cellular backhaul services; T1/POTS voice lines; DMZ/perimeter extensions) could be added by modifying equipment in the data centers rather than renegotiating and incurring additional costs. This also made it important to select a carrier with both large-scale transport experience and end-customer experience.
