Wednesday, February 18, 2015

Fabric choices and the impact on hardware decisions pt1

Initially the desire was to look at OTV (Overlay Transport Virtualization) as a DCI (data center interconnect).  It is proven, and I have gone to the Las Vegas and London Networkers conferences over the last few years to see some of the advances in DC technologies - I still have my LISP-labeled sauerkraut (not going to open it).   It is kind of funny, as one of the LISP stickers said "The Overlay - it's the only way" .. not sure if foretelling the future was the intent .. but it is funny in hindsight.
One OTV oddity that I remembered .. it used to be - not sure if it still is - a design constraint that with two OTV edge devices at a site, even-numbered vlans were forwarded by one device and odd-numbered vlans by the other.

Great.  The data center vlans are ALL even.   Someone(s) thought they might need 500+ addresses on a vlan, so they were all /23s and the vlan numbers are aligned with the address blocks - vlan 300 is x.y.0.0/23 ... vlan 310 is x.y.10.0/23, and so on up to vlan 398 at x.y.98.0/23.  Since every /23 starts on an even third octet, there was no vlan 315 or 343 .... All traffic would go down one OTV link.  If this has changed ... it is OK, not really a concern now.
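For fun, here is a quick sketch of why that hurts - assuming the constraint works the way I remember it (vlan ID modulo the number of edge devices picks the forwarder); the device names are made up:

```python
# Toy illustration of the even/odd OTV constraint as I remember it - not
# Cisco's actual edge-device election code.  With two edge devices,
# vlan_id % 2 picks the forwarder, and this DC's vlans are all even
# by construction of the vlan/subnet numbering scheme.

vlans = list(range(300, 399, 2))   # vlan 300 -> x.y.0.0/23, 302 -> x.y.2.0/23, ... 398 -> x.y.98.0/23

forwarder = {v: f"edge-{(v % 2) + 1}" for v in vlans}

print(set(forwarder.values()))     # {'edge-1'} - every vlan lands on the same edge device
```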

I put together two bills of materials: four 7009s or four 7004s with single or dual sups and the necessary fabric cards, plus two extra F3 modules to add OTV support to the existing 7Ks.  Not a terribly good-looking price, but at least it is a starting point - anything else might look better by comparison.



Let's look at ASR products instead.  Maybe lower OTV performance, but the higher WAN performance might be OK.
  6 * ASR 1002-X or 6 * ASR 1006 with multiple 10G ports AND a few new switches for the new DC.
Not a terribly good-looking price either .. that bandwidth is expensive ...
All I want to do is take a packet in, look up the destination MAC, decide if it is local or remote ... encapsulate it if remote, and send it off.  The local traffic would not even hit the ASR .. just remote.
Wow, that's an expensive wrapper.
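To put that in perspective, the whole forwarding decision I just described is tiny - here is a toy sketch of it (the tables and "header" format are made up for illustration, not any vendor's code):

```python
# The whole DCI job in miniature: look up the destination MAC, and either
# switch the frame locally or encapsulate it toward the remote data center.

LOCAL_MACS = {"00:50:56:aa:00:01"}                    # learned on local ports
REMOTE_MACS = {"00:50:56:bb:00:09": "192.0.2.9"}      # MAC -> remote tunnel endpoint

def forward(dst_mac: str, frame: bytes) -> tuple[str, bytes]:
    if dst_mac in LOCAL_MACS:
        return "local switchport", frame              # never touches the DCI box
    if dst_mac in REMOTE_MACS:
        encap = f"to:{REMOTE_MACS[dst_mac]}|".encode()
        return "DCI uplink", encap + frame            # encapsulate and send it off
    return "flood", frame                             # unknown unicast - policy decision
```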

Maybe OTV was not the preferred choice from a financial standpoint.  If you want the parts and list pricing I used, please contact me outside of this post.

I do like LISP though ....

Hmm,

  ESXi is in use.  Multiple UCS deployments with lots of 10G.  This is where the vlans that would be part of the DCI actually live, so maybe ESXi/NSX might be useful for the vlans that need to span the data centers.
Darn .. wrong version of ESXi, no plans to upgrade any time soon, separation of server teams from networking and firewall teams ... it's just not going to be the solution.  Turns out ... it is not entirely free either :)


NSX is just VXLAN from the VM side, so let's look at that ..
When the core routing protocol for this environment and the primary business partner was selected, one of the desires was to be able to interoperate with any vendor's or any corporation's network.  OSPF was selected.  There are pockets of EIGRP, but those hide some WAN routing oddities.

Hey .. it's as close to an open standard as possible ... everybody supports VXLAN; it is the OSPF of fabrics.  I am pretty sure the lingua franca may change in 5 years, but today it's VXLAN.
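Part of that appeal is how simple the encapsulation is.  A minimal sketch of the VXLAN header per RFC 7348 (the VNI value here is arbitrary):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: flags byte with the
    I bit set (VNI valid), then the 24-bit VNI, reserved bytes zero."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

# On the wire, the original L2 frame just rides inside UDP:
#   outer Ethernet | outer IP | UDP (dst 4789) | VXLAN header | inner Ethernet frame
print(vxlan_header(5300).hex())  # 080000000014b400
```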
7Ks - not without those F3 modules ... OK ... NO 7Ks in the new solution.   Hmmmm ... this will come up again later, after I review the math on the electric / cooling / SmartNet costs of loaded 7Ks.

Cisco, Juniper, Arista, and a few others all make VXLAN-capable boxes ... let's take a look ...
