Friday, January 22, 2016

I know everyone knows this ... but VLAN interfaces (SVIs) on Cisco Nexus 9K and 5K gear default to a bandwidth value of 1Gb.
To properly reflect real link costs you should adjust the bandwidth on point-to-point VLAN interfaces to the appropriate values ...
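
To make the math concrete, here is a quick Python sketch of my own (not anything from Cisco) showing how the routing cost falls out of that bandwidth value - it assumes the 40 Gb/s reference bandwidth that NX-OS OSPF defaults to, so substitute whatever your network actually uses:

    # Rough sketch of how OSPF turns interface bandwidth into a link cost.
    # Assumption: NX-OS default auto-cost reference bandwidth of 40 Gb/s.
    REFERENCE_BW_KBPS = 40_000_000      # 40 Gb/s expressed in kb/s

    def ospf_cost(interface_bw_kbps: int) -> int:
        # cost = reference bandwidth / interface bandwidth, never below 1
        return max(1, REFERENCE_BW_KBPS // interface_bw_kbps)

    print(ospf_cost(1_000_000))     # SVI left at the 1Gb default          -> 40
    print(ospf_cost(10_000_000))    # SVI adjusted to reflect a 10Gb link  -> 4

So a point-to-point SVI riding a 10Gb link but left at the 1Gb default advertises a cost ten times higher than the physical link deserves.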

Friday, May 29, 2015

So how do I integrate the new, empty FabricPath network at the bottom of the page - zero hosts - into the network at the top, which has 50 locations in multiple states, more than 35,000 ports, and operates around the clock?
Without, of course, any service interruptions that might drop applications.

Thursday, May 28, 2015

I have placed the Cisco and Packetlight equipment at all 3 locations, including one that could provide some add/drop functionality if wanted.  Packetlight provides a friendly management GUI and by default shows the topology when you first log in, as shown below.

Wednesday, April 1, 2015

Test environment

This is the test harness.
3 sites arranged from the bottom of the rack as follows:


FEX 101
FEX 101
Nexus 5672UP-1
Nexus 5672UP-2
Nexus 9396PX-1
Nexus 9396PX-2

They have been given hostnames reflecting their data center and role: DCx-FP1, DCx-FP2, DCx-Core1, DCx-Core2 ... for each of the 3 data centers.
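
Just to spell the convention out, a throwaway Python sketch of the naming scheme (assuming 'x' runs 1 through 3 for the three data centers):

    # Expands the DCx-<role> hostname convention described above.
    roles = ["FP1", "FP2", "Core1", "Core2"]
    hostnames = [f"DC{dc}-{role}" for dc in range(1, 4) for role in roles]
    print(hostnames)    # DC1-FP1, DC1-FP2, ... DC3-Core2 - 12 switches in total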

To the upper right you can see the Breaking Point device, which will be used to generate and record traffic as all of those cables - and more - are removed and power cords are pulled from equipment, in some cases from multiple switches.

Wednesday, February 18, 2015

Fabric choices and the impact on hardware decisions pt1

Initially the desire was to look at OTV as a DCI.  It is proven, and I have gone to Las Vegas and London Networkers over the last few years to see some of the advances in DC technologies - I still have my LISP-labeled sauerkraut (not going to open it).   It is kind of funny, as one of the LISP stickers said "The Overlay - it's the only way" .. not sure if foretelling the future was the intent .. but it is funny in hindsight.
One OTV oddity that I remembered .. it used to be - not sure if it still is - a design constraint that even-numbered vlans in an OTV environment were forwarded by one edge device and odd vlans by the other.

Great.  The data center vlans are ALL even.   Someone(s) thought they might need 500+ addresses on a vlan, so they were all /23s, and the vlan numbers are aligned with the address block - vlan 300 is x.y.0.0/23 ... vlan 310 is x.y.10.0/23 and so on up to vlan 398 at x.y.98.0/23.  There was no vlan 315 or 343 .... All the DCI traffic would end up on one OTV edge device.  If this has changed ... it is ok, not really a concern now.
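
To make the problem concrete, here is a toy Python sketch of my own - the 10.20 base is a made-up stand-in for x.y - of the vlan-to-subnet convention and a simplified odd/even split:

    # Toy illustration of the vlan numbering convention and the odd/even OTV split.
    # The base prefix is invented; substitute the real x.y.
    import ipaddress

    def vlan_subnet(vlan: int, base: str = "10.20.0.0") -> ipaddress.IPv4Network:
        a, b, _, _ = base.split(".")
        return ipaddress.ip_network(f"{a}.{b}.{vlan - 300}.0/23")

    print(vlan_subnet(310))               # 10.20.10.0/23 - vlan number tracks the block

    vlans = range(300, 399, 2)            # every data center vlan is even
    # Simplified odd/even split: even vlans -> edge-1, odd vlans -> edge-2
    aed = {v: "edge-1" if v % 2 == 0 else "edge-2" for v in vlans}
    print(set(aed.values()))              # {'edge-1'} - everything lands on one edge device

With every vlan even, the split degenerates to a single edge device carrying all of the stretched traffic.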

I put together 2 bills of materials: 4 7009s or 4 7004s with single or dual sups and the necessary fabric cards, plus 2 extra F3 modules to add OTV support to existing 7Ks.  Not a terribly good-looking price, but at least it is a starting point, so anything else might be better.

Let's look at ASR products instead.  Maybe lower OTV performance but higher WAN performance might be OK.
6 * ASR 1002-X or 6 * ASR 1006 with multiple 10G ports AND a few new switches for the new DC.
Not a terribly good-looking price either .. that bandwidth is expensive ...
All I want to do is take a frame in, look up the destination MAC, decide if it is local or remote ... encapsulate it and send it off.  The local traffic would not even hit the ASR .. just the remote traffic.
Wow, that's an expensive wrapper.
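
Conceptually - and this is just my sketch, not any vendor's implementation - the forwarding decision that expensive wrapper performs is about this simple:

    # Minimal sketch of the L2 DCI forwarding decision described above.
    # The MAC table contents and the remote-site mapping are invented for illustration.
    LOCAL_MACS = {"00:11:22:33:44:55"}                    # learned on local ports
    REMOTE_MACS = {"66:77:88:99:aa:bb": "DC2"}            # learned via the overlay

    def forward(dst_mac: str) -> str:
        if dst_mac in LOCAL_MACS:
            return "switch locally - never touches the DCI device"
        site = REMOTE_MACS.get(dst_mac)
        if site is not None:
            return f"encapsulate and send across the overlay to {site}"
        return "unknown unicast - flood or drop per overlay policy"

    print(forward("66:77:88:99:aa:bb"))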

Maybe OTV was not the preferred choice from a financial standpoint.  If you want the parts and list pricing used please contact me outside of this post.

I do like LISP though ....

Hmm,

ESXi is in use.  Multiple UCS deployments with lots of 10G.  This is where the vlans that would be part of the DCI actually live, so maybe ESXi/NSX might be useful for the vlans that need to span the data centers.
Darn .. wrong version of ESXi, no plans to upgrade any time soon, separation of the server teams from the networking and firewall teams ... it's just not going to be the solution.  Turns out ... not entirely free either :)


NSX is just VXLAN from VMware, so let's look at that ..
When the core routing protocol for this environment and the primary business partner was selected, one of the desires was to be able to interoperate with any vendor's or any corporation's network.  OSPF was selected.  There are pockets of EIGRP, but those hide some WAN routing oddities.

Hey .. it's as close to an open standard as possible ... everybody supports VXLAN; it is the OSPF of fabrics.  I am pretty sure in 5 years the lingua franca may change, but today it's VXLAN.
7Ks - not without those F3 modules ... OK /// NO 7Ks in the new solution.   Hmmmm ... this will come up later after I review the math on the electric / cooling / SmartNet costs of loaded 7Ks.

Cisco, Juniper, Arista and a few others all make VXLAN-capable boxes ... let's take a look ....
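
For reference, the encapsulation itself is thin - a Python sketch of the 8-byte VXLAN header from RFC 7348 (the VNI value here is arbitrary), which then rides inside a UDP/4789 datagram:

    # Sketch of the VXLAN header layout per RFC 7348; the VNI is a made-up example.
    import struct

    def vxlan_header(vni: int) -> bytes:
        # 8 bytes: flags byte with the I bit set, 24 reserved bits,
        # 24-bit VNI, and a final reserved byte.
        word1 = 0x08 << 24                 # 'I' flag marks the VNI field as valid
        word2 = (vni & 0xFFFFFF) << 8      # VNI in the upper 24 bits, low 8 reserved
        return struct.pack("!II", word1, word2)

    print(vxlan_header(30300).hex())       # 0800000000765c00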

Monday, February 16, 2015

Topology layout choices.

These were some quick whiteboard ideas on a setup to connect 3 or 4 sites together.  Multiple switches at each site are a given, but the vendor - Cisco, Arista, Plexxi, Juniper, Big Switch or Cumulus - has not really been examined yet, nor has the underlying connectivity been decided: OTV, stretched layer 2 with spanning tree blocking (which will work until it does not), VXLAN, QFabric, ACI (I was a software guy, I like the dynamic nature of ACI) or even OpenStack.


A desire that I am kicking around: if the data center fabric is being built to connect all of them, would it be possible to build it as a greenfield off to the side and migrate the existing dissimilar core components INTO the new architecture, rather than adding on a WAN transport and keeping the artifacts of a 10-year evolution from Sup2s and CatOS - routed top-of-rack switches, default gateways in closets, default gateways not in closets, a Sup720 here, a Sup32-10 there, ZR optics on WAN links, one side a XENPAK, one side an X2, LX4s in 6704s ... just not a lot of big-picture consistency due to the pattern of growth over time.
  
If I can make a greenfield migration/morphing that has equal or lower risk, is cost effective, is expandable and does not look like a CCIE lab endeavor, I think that is the direction to pursue.  Some older equipment will be retired (a number of 6500s are currently operating as single-mode to multimode converters or single-mode repeaters due to WAN distance and fiber quality - they will all go away), and some will be moved down the food chain; however, over the last few years the 6500- or 4500-based closet has been losing favor as the stackable products have become better.  A rack consisting of patch panels sandwiching 48-port switches with 6-inch patch cables is pretty hard to beat.

I like the direction of the ACI concept and the initial products - there is time, maybe 6 months until implementation, so a maturing of the product line is OK, and I start hunting down folks who can provide information on it.  Some of the Insieme/Cisco BU staff are folks I know, and I find someone who lives near where I used to in Indiana and who knows another Insieme/Cisco person, so I start trying to get more information about ACI, its ability to span data centers, and whether it can operate in a topology where not every leaf switch talks to every spine switch.  Twitter is not the best method for trying to contact people.

This proves to be difficult (understatement), but eventually I get some discussions with a few different people over the next month or two.  The 2-day Cisco ACI roadshow is even coming to my town, to my Cisco office ... but I can't go.  If at this time there were a more motivated ACI customer in the United States, I would be surprised ... I have to wait for some future customer event.  I even talk to a friend at a large reseller whose company has a presence in one of the last roadshow locations to see if I could go with them.

A $2 billion-a-year business, a national/international leader in its industry, 35,000 to 40,000 ports, a complete greenfield opportunity with motivated implementers.  Cisco ... you should have been more responsive ....

Sunday, February 15, 2015

Summer 2014

It is summer 2014.

As the decisions about data center location and fiber providers were underway, the optical equipment manufacturer, the quantity of equipment, and the level of spares to be bought also had to be decided.

Where HA requirements currently exist, multiple chassis, stacking or other clustering technologies have been selected over single-box solutions; scaling horizontally versus vertically has seemed to work well in the past.

Single-chassis solutions were ruled out, even if they were nine-nines reliable.  The fiber paths would end up crossing multiple state, county and city jurisdictions, over flood-prone rivers, across national rail lines and Interstate highways, down to rural county roads. A failure or maintenance need at the wrong location or time could isolate thousands of users from their applications and data.

Partway through the fiber carrier discussions it was found that one of the existing locations might end up with the east and west fiber paths running along the same road, so an alternate pathway to the west needed to be found.  A business partner located 2 miles away had 48 pairs of fiber that could be utilized to provide a diverse path if we could trench over to them. Being good partners, we agreed to provide them an extra optical product so that they could use the other portions of our ring for diversity as well.

Cisco ONS products were quite capable, but purchasing 7 chassis would be somewhat costly and require more technical involvement in configuration and support than desired.

Ciena offered most of the functionality desired, but with the requirement for an onsite hot spare the price was still a concern.  A few years earlier, when the current xWDM solution was installed, the second vendor in the running was Ciena, so their product line was known.

After a period of research into other OTN companies, a vendor named Packetlight, available through RAD, was selected following technical evaluations and discussions with the firm and its customers. It helped that some of those customers were in the same type of regulated business.  Initially a very functional model, the PL1000-E, was considered.  It offered a handful of 10Gb Ethernet and 8Gb Fibre Channel ports along with 1, 2 and 4 Gb fiber as well as 1Gb/100M Ethernet.

With three sites and our growth, the slightly larger PL1000-TE model was selected. It can be configured initially with 4 10/16Gb ports and 4 10Gb Ethernet or Fibre Channel ports, can even support down to OC-12/600Mb Ethernet, and an expansion module adds wavelength add/drop capabilities as well.  I think it can expand to 80+ wavelengths if that were ever needed, and it also offers AES data encryption.

In a meeting Packetlight demonstrated the smaller PL1000-E; I believe the other server, storage and network engineers left that meeting with sufficient training to manage the product with minimal additional support: pick an unused port, plug in an optic, use the GUI to tell the system the speed/type, and away you go ... in 30 seconds max you have a new wavelength doing whatever you wanted.  The distances between sites would not exceed the estimated 40-kilometer limit for 16G FC, so if the storage hardware selected needs 16G the PL1000-TE can transport it.