Monday, February 16, 2015

Topology layout choices.

These were some quick whiteboard ideas on a setup to connect 3 or 4 sites together.  Multiple switches at each site are a given, but the vendor (Cisco, Arista, Plexxi, Juniper, Big Switch, or Cumulus) has not really been examined yet, nor has the underlying connectivity been decided: OTV, stretched layer 2 with spanning tree blocking (which will work until it does not), VXLAN, QFabric, ACI (I was a software guy, I like the dynamic nature of ACI), or even OpenStack.


A desire I am kicking around: if the data center fabric is being built to connect all of these sites, would it be possible to build it as a greenfield off to the side and migrate the existing dissimilar core components INTO the new architecture, rather than adding a WAN transport and keeping all the aspects that reflect the 10-year evolution from Sup2s and CatOS: routed top-of-rack switches, default gateways in closets, default gateways not in closets, a Sup720 here, a Sup32-10 there, ZR optics on WAN links, a Xenpak on one side and an X2 on the other, LX4s in 6704s ... just not a lot of big-picture consistency, due to the pattern of growth over time.
  
If I can make a greenfield migration/morphing carry equal or lower risk, stay cost effective and expandable, and not look like a CCIE lab endeavor, I think that is the direction to pursue.  Some older equipment will be retired (a number of 6500s are currently operating as single-mode to multimode converters or single-mode repeaters due to WAN distance and fiber quality; they will all go away), and some will be moved down the food chain.  However, over the last few years the 6500- or 4500-based closet has been losing favor as the stackable products have become better.  A rack consisting of patch panels sandwiching 48-port switches with 6-inch patch cables is pretty hard to beat.



I like the direction of the ACI concept and the initial products.  There is time, maybe 6 months until implementation, so a maturing of the product line is OK, and I start hunting down folks who can provide information on it.  Some of the Insieme/Cisco BU staff are folks I know; I find someone who lives near where I used to in Indiana and who knows another Insieme/Cisco person, so I start trying to get more information about ACI and its ability to span data centers and perhaps operate in a topology where not every leaf switch talks to every spine switch.  Twitter is not the best method to try to contact people.
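To make the "not every leaf talks to every spine" question concrete, here is a quick back-of-the-envelope sketch of what a single full-mesh leaf/spine fabric would need at the port counts mentioned further down; the leaf and spine sizes are my own assumptions, not anything from Cisco or the ACI product line.

```python
import math

# Back-of-the-envelope fabric sizing. The 35,000-40,000 port total comes from
# this post; the switch sizes below are assumptions for illustration only.
total_access_ports = 40_000   # upper end of the estimate, all sites combined
leaf_access_ports = 48        # assume 48-port leaf switches, like the closet stackables
spine_count = 4               # assume one uplink from every leaf to each of 4 spines

leaves = math.ceil(total_access_ports / leaf_access_ports)
ports_needed_per_spine = leaves   # full mesh: every leaf lands one port on every spine

print(f"{leaves} leaves, {spine_count} spines, "
      f"{ports_needed_per_spine} leaf-facing ports needed on each spine")
# Roughly 834 leaf-facing ports per spine is more than a single fixed spine
# switch offers, which is why "does every leaf have to reach every spine?"
# is worth asking before committing to one big fabric.
```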

This proves to be difficult (understatement), but eventually I get some discussions with a few different people over the next month or two.  The 2-day Cisco ACI roadshow is even coming to my town, to my Cisco office ... but I can't go.  If at this time there were a more motivated ACI customer in the United States I would be surprised ... I have to wait for some future customer event.  I even talk to a friend at a large reseller whose company has a presence in one of the last roadshow locations to see if I could go with them.

A $2 billion a year business, a national/international leader in its industry, 35,000 to 40,000 ports, a complete greenfield opportunity with motivated implementers.  Cisco ... you should have been more responsive ...


