No matter how you decided to build out your network (tip: check out our new Network Device Buyer's Guide!), an on-site switch plays a critical role in a network, connecting your users to the rest of your IT infrastructure. Don't think of that jack in the wall, or under the cubicle, as a simple Ethernet port. As part of effective switch management, you need to think of it as the mission-critical gateway to IT services that it is.

5 Physical location considerations and connectivity options

When a user plugs their workstation into the network, it's their single connection to email, messaging, financial systems, sales engines, and many, many other company resources. Even voice communications are likely provided through that jack. If the service is poor, that user can't do their job effectively. If enough users have a bad experience, the entire business is going to be affected.

Sadly, in business networks, the on-site switch is often neglected. No more! We're here to talk about something every admin should have in their back pocket: the basics and best practices for setting up and configuring a reliable switch that will serve users well.

1. Physical location matters

I've seen switches in bathrooms, stuffed inside drop ceilings, hanging by a single screw on the wall, underneath cubicles, and on filthy shelves in a tiny closet with no airflow or climate control. I recognize that retrofitting aging buildings with wiring infrastructure is a challenge, and sometimes the solutions we're stuck with aren't ideal.

That said, try your best to get the switch into a place with, at the least, airflow. Study the switch you're installing, figure out where the air intake and exhaust locations are, and keep them clear. Even for fanless switch designs with external power supplies, it's usually a bad idea to put a switch right up against a wall. There's still heat the chassis needs to radiate, which is part of the reason most switches come with rubber feet that can be applied to the bottom.

Dust and dirt are also a problem. Switches get clogged with crud when they're installed in filthy locations, which can shorten their lifespan due to overheating. Switches in dirty places should be cleaned periodically to reduce this risk; that applies even to industrial models built for difficult environments.

2. Consider the uplink design between the closet switch and the rest of the network carefully

While the simplest thing to do is to connect a closet switch via a single cable back to the rest of the network, a dual uplink is preferable for redundancy and possibly capacity. There are several ways to safely achieve a dual uplink:

A backup link. A backup link is created when a second line is connected in parallel to the primary. Topologically, this makes a loop between the closet switch and the uplink switch. Spanning tree will detect the loop and block the backup link. If the primary link fails, the backup link becomes active. A better alternative is to connect the two links to two different switches, rather than running both links to the same switch. Spanning tree will behave the same in either case: a loop will be detected and one link blocked until the primary path fails.

[Diagram: wiring closet switch network uplink design]

A parallel Layer 2 link. A parallel Layer 2 link between switches can be achieved with link aggregation protocols such as Cisco's Port Aggregation Protocol (PAgP) or the industry-standard Link Aggregation Control Protocol (LACP). This scheme allows both links to be active and carry traffic. The link aggregation protocol makes the two links appear as a single link as far as spanning tree is concerned, while still maintaining a loop-free topology. Link aggregation can scale parallel links beyond two; four- and eight-way link aggregation bundles are common. For diversity, link aggregation bundles can be split across two physical switches that act as one virtual switch, such as the stackable versions of Cisco's Catalyst switches or Cisco chassis switches running the Virtual Switching System.

[Diagram: enterprise network closet switch uplink design]
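To make the parallel Layer 2 option concrete, here's a minimal Cisco IOS sketch of the closet-switch side of an LACP bundle. The interface numbers, port-channel ID, and VLAN list are illustrative assumptions, and the core side needs a matching bundle (built on a single logical switch, such as a stack or VSS pair, if the two links land on two physical boxes):

    ! Bundle two physical uplinks into one logical link using LACP
    interface range GigabitEthernet1/0/49 - 50
     description Uplinks to core
     channel-group 1 mode active
    !
    ! The logical interface carries the user VLANs as an 802.1Q trunk
    interface Port-channel1
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30

Because spanning tree sees only Port-channel1, both physical links forward traffic at the same time without creating a loop.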

A parallel Layer 3 link. One strategy for uplinking closet switches is to connect them to the rest of the network via routed (Layer 3) links as opposed to switched (Layer 2) links. While the wiring looks the same, the end result offers better isolation from the rest of the network. With this approach, two L3 links don't create a loop, because each link belongs to its own network segment, isolated from the other. The challenge of this design is that user VLAN segments can't span different closets, as an L2 network segment (a VLAN) will not extend beyond the L3 uplinks. Dual L3 uplinks should connect to separate switches for resiliency.

[Diagram: a parallel Layer 3 link]
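For comparison, a routed uplink on an L3-capable switch is just a handful of lines. This is a sketch only; the interface and addressing are assumptions, and the routing protocol side of the design is covered later in this post:

    ! Convert the uplink from a switched port to a routed port
    interface GigabitEthernet1/0/49
     description Routed uplink to core switch 1
     no switchport
     ip address 10.100.1.2 255.255.255.248

Because the port is now a Layer 3 interface, no VLAN extends across it, which is exactly the isolation this design is after.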

Don't forget about physical path diversity. When possible, route cabling coming into the closet via alternate physical paths. The idea is that if one cable connecting the closet switch to the rest of the network is cut, flooded, burned, or otherwise damaged, the other cable won't share the same fate. This is admittedly challenging in buildings where conduits are limited and where construction is just not amenable to such an approach. But when possible, keep the uplink cables separate.

3. Determine if the network actually needs that 10Gbps Ethernet

When you're dealing with closet switches, it's crucial to appropriately size the uplink from the closet to the rest of the network. While every network is different, in general, closet switches can tolerate an enormous amount of oversubscription. By oversubscription, we mean the ratio of user-facing ports to uplink ports.

For example, in a switch with 48 1Gbps user-facing ports, and two link-aggregated 1Gbps uplink ports, the oversubscription ratio is 24:1. In other words, there are 24 user-facing ports for each uplink port.


Over the years, I've seen very high oversubscription ratios for closet switches, as high as 96:1, with no problems.

The traffic patterns from user workstations tend to be both bursty (a quick spurt of traffic) and unsynchronized (traffic bursts happen on one machine at a time, not all at once). For these reasons, it's possible to get away with oversubscription rates on an on-site closet switch that wouldn't be acceptable in most data center designs.

But the question arises: Are 10Gbps Ethernet uplinks a requirement between the closet and the rest of the network? It depends. Let's consider a few important factors that could tip the balance for or against 10Gbps uplinks.

  • Can you afford 10Gbps Ethernet?
    Closet switch uplinks must often traverse scores or hundreds of meters to reach the rest of the network. To go long distances, fiber-optic cabling is required. To go exceptionally long distances, fiber of a particular optical quality is required, and possibly a different sort of transceiver. So while it's not as expensive to deploy 10Gbps links as it once was, the cost is definitely still an important consideration. Not all fiber is created equal, and not all 10G optical modules push light the same distance. Review this sheet from Cisco that describes a variety of 10G modules and the supported distances depending on the fiber-optic cabling in use.
  • Do you have copper cabling?
    10GBase-T is 10Gbps over copper cabling. Its primary purpose is not closet uplinks. Cisco notes, "The primary use case for 10GBASE-T is high-speed server connectivity. Other less common scenarios use 10GBASE-T for interconnecting distribution or core switches that reside within a 330-feet (100-meter) distance." Not only is 10GBase-T intended mostly for in-rack connectivity between servers and access switches, but it also has stringent requirements for copper cabling. Over distances of less than 100m, and with the correct type of copper cabling (Cat6 certified to 500MHz, shielded Cat6, Cat6A, or Cat7), it may be possible to use 10GBase-T as a closet switch uplink. But 10GBase-T is not an obvious, cost-effective answer. Ethernet's leap from 1Gbps to 10Gbps over copper is a bigger one than the leap made years ago from 100Mbps to 1Gbps.
  • Do your users move lots of data around?
    As we've discussed, high rates of oversubscription are often fine for a closet switch. But some user groups might be moving a lot more data around than the average worker. Folks who work on their local workstations with large datasets fit that category: think multimedia artists, developers working with large test databases, and similar scenarios. In those situations, a high oversubscription rate will be less tolerable, as longer bursts of traffic increase the likelihood of simultaneous user traffic streams hitting the uplink. In that case, 10Gbps uplinks reduce the oversubscription by a factor of 10 compared to 1Gbps uplinks, making contention much less likely. Less contention means faster network throughput for the user community.
  • How many supported devices do you have on the network?
    Most organizations now support employee phones, tablets, and related BYOD devices on their networks, which means the sheer number of devices on the network has increased. Today's average network user might represent three or four devices instead of a single workstation. As device count increases and access points proliferate, the likelihood of uplink contention also increases. 10Gbps uplinks can be helpful here.

4. Don't skimp on physical ports

A problem I've run into repeatedly is a 24-port switch that runs out of ports. Budget permitting, a 48-port switch (preferably with additional ports to serve as the uplinks) should always be installed to allow for additional growth. Yes, 48-port switches cost more, and that cost is often the driver for the purchase of a 24-port switch to begin with. But in the long run, 48-port switches are almost always preferable in my experience.

That said, consider the long-term impact of wireless networking in the environment. If you're finding that 802.11ac provides wired-equivalent performance, then you're on a different track. You're moving away from wired connections to wireless, where your wired port density concerns have shifted from user workstations to access points. But if "wires everywhere" is still the choice, then my comment about 24- versus 48-port switches is hopefully thought-provoking.

5. Compare switch stacks/chassis vs individual switches

Remember that switch stacks or chassis are a little different than individual switches.

A popular choice in the network closet, stackable switches and chassis switches offer a great deal of port density with a single point of management. From a design perspective, these advantages come with a couple of challenges worth keeping in mind.

Stacks and chassis can be single points of failure.

To decide if this is a concern, consider the impact on the organization if the entire closet was out of service for several hours or days. Chassis switches often offer dual supervisor engines and dual power supplies to mitigate this risk.

Some might point out that the chassis switch itself is a single point of failure, but in nearly 20 years of networking, I've only ever run into a chassis failure once.

Stackables are generally better off than chassis, in that each physical switch in the stack can usually function as the "stack master," and a new stack master will be elected in the case of a failure. In addition, power is distributed throughout the stack; a failure of one switch's power supply will only impact that switch and not the rest of the stack. Interestingly, certain Cisco stackable switches offer StackPower, which can mitigate blown power supplies in a stack.

The idea here is to make sure the uplinks from a chassis or stack do not come from the same switch or module. Too often, I've seen the dual uplinks coming from the same supervisor engine, meaning that if the supervisor fails, the chassis might be disconnected from the rest of the network, even with dual supervisors.

Dual uplinks should also be spread across different physical switches in the stack. My practice with closet switch stacks is to place one uplink at the top of the stack and the second at the bottom. Assuming a break in the stack between the top and bottom, this means that both parts of the fragmented stack will still uplink back to the main network.

Software upgrades are sometimes an all-or-nothing affair. When upgrading chassis switches, you might need to reload the entire chassis to bring up the new software, meaning users aren't able to access the rest of the network until the chassis is back online.

You might intuitively assume that switch stack upgrades can be completed one switch at a time, but the reality is that switches in a stack often require very close software versions to be members of the same stack. Therefore, upgrading one switch might render it offline until all the other switches in the stack have been upgraded as well.

This issue varies from vendor to vendor and product to product. The point is to be sure the organization is able to cope with a software upgrade process that potentially takes hundreds of user-facing ports offline while the upgrade is going on.

4 Key switch configuration elements and considerations

A few things to keep in mind.

1. Spanning tree configuration

If you chose the backup link or parallel Layer 2 design, then you've extended your Layer 2 domain from the core network into the closet. That means your closet switch is participating in the global spanning tree domain and needs to be configured appropriately.

I've written on spanning tree design before, and in that post I mention the importance of placing and enforcing the root bridge. When dealing with a closet switch, the key is to ensure that the closet switch doesn't become the root bridge.

If a closet switch does become the root bridge, then links in or around the physical core of the network could end up blocked. The result could be some odd (and unexpected) forwarding paths in the network that traverse the closet switch.

To help prevent this scenario, my recommendation is to set the closet switch's bridge priority to a very high number (the maximum your platform allows; on switches using extended system IDs that's 61440), rather than leave it at the default value of 32768. If you've already set your core switches to a low priority value below 32768, explicitly raising the closet switch's priority might seem unnecessary, but thinking ahead is critical. Setting the closet switch to a higher number helps avoid an unexpected spanning tree result in the future.

Also worth mentioning is that I'm assuming the use of rapid spanning tree. 802.1w is a significant rewrite of the original 802.1D spanning tree, and features a number of performance-related enhancements. Rapid spanning tree includes equivalents of several early Cisco spanning tree enhancements, such as UplinkFast, that will help your switch converge on a new topology more quickly if something changes.
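Pulling those two recommendations together, here's a minimal Cisco IOS sketch for the closet switch. The VLAN range is illustrative, and 61440 is the highest priority configurable on switches that use extended system IDs:

    ! Run rapid spanning tree (Rapid PVST+ on Catalyst switches)
    spanning-tree mode rapid-pvst
    !
    ! Make the closet switch as unattractive a root bridge candidate as possible
    spanning-tree vlan 1-4094 priority 61440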

Practically speaking, let's say you've implemented the backup link scheme where one link is blocking and another forwarding. If the forwarding link goes down, your switch will notice and react. Eventually, traffic will begin to flow across the remaining link. How long "eventually" is varies between the original spanning tree and rapid spanning tree, with rapid spanning tree being faster. The exact timers and processes involved are beyond the scope of this blog post, but I recommend this white paper from CCIE Petr Lapukhov if you're interested in digging into the details.

If you chose the parallel Layer 3 link design, then you've created an isolated Layer 2 spanning tree domain on the switch itself. The question becomes whether or not the closet switch needs to be the root of that spanning tree domain. In most cases, you'll want it to be the root, because you don't want other switches plugged into the network, say, at a user's desk, to become the spanning tree root bridge of that local domain.
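In that isolated-domain case, the configuration goal flips. A minimal sketch, assuming the user VLANs are 10, 20, and 30:

    ! In the parallel Layer 3 design, the closet switch should win its local root election
    spanning-tree mode rapid-pvst
    spanning-tree vlan 10,20,30 root primary

The root primary keyword is a macro that programs a suitably low priority, so a desktop switch left at default settings can't take over as root.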


2. Routing configuration

Many closet switches are Layer 3 switches, meaning they have the ability to route traffic between different blocks of IP addresses resident in separate VLANs. In the Cisco Catalyst product line, 29xx series switches are usually Layer 2 only, while 37xx and higher model switches are Layer 3 capable.

Assuming an L3-capable switch running parallel links, there are a few different routing schemes you could choose. Rather than go through all the different possibilities, let's review one solid approach worth considering in a Cisco environment: dual EIGRP links. (You might like to read my introduction to EIGRP before going on.)

[Diagram: closet switch EIGRP configuration]

In this design, each L3 link is assigned a /29 network block. A /29 is a block of eight addresses, six of which are actually usable. In our example, the link to switch 1 uses the range 10.100.1.0 through 10.100.1.7, where addresses .1 through .6 can actually be used.

I recommend /29s over the more customary /30 point-to-point links, which only offer two usable addresses. /29s allow for future flexibility without renumbering the link, such as when an additional device like a firewall, WAN optimizer, or replacement switch needs to be added.

IP address conservation is usually not an issue on private networks using RFC1918 address space, so a /29 is a useful addressing scheme for point-to-point links without going overboard.

In this scenario, our closet switch has two EIGRP links, one to each of a pair of switches back in the core network. Assuming both links have the same bandwidth and delay characteristics, the EIGRP routing process will see these links as equal cost and load balance traffic between the two links. In the event that one of the links goes down, EIGRP will detect that there's no longer a connection and converge on the remaining link.
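Here's a minimal sketch of what that can look like on the closet switch. EIGRP autonomous system 100, the second /29 block (10.100.2.0/29), and the interface numbers are assumptions for illustration:

    ! Two routed uplinks, one to each core switch
    interface GigabitEthernet1/0/49
     no switchport
     ip address 10.100.1.2 255.255.255.248
    interface GigabitEthernet1/0/50
     no switchport
     ip address 10.100.2.2 255.255.255.248
    !
    ! EIGRP forms a neighbor relationship over each uplink and
    ! load balances across the two equal-cost paths
    router eigrp 100
     network 10.100.1.0 0.0.0.7
     network 10.100.2.0 0.0.0.7
     ! plus network statements covering the user-facing VLAN interfaces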

A couple of additional notes:

  • The closet switch should very likely be configured as an EIGRP stub router.
    Unless you've made some unusual network design choices, there's no reason a closet switch should ever be a transit router. The only networks a closet switch originates are the connected VLANs accessed by users. So there's no reason for the core network to query the closet switch about lost routes originating in other parts of the network.

    Another way to think about it is to consider the access switch a dead end. If the edge of the network diagram is the closet switch, then that switch is indeed a dead end: nowhere further to go. And thus, configuring it as a stub is exactly the right thing to do.
  • The closet switch doesn't need to know the entire routing table. All the closet switch really needs to know is a default route.
    Why? In our sample design, the only place a closet switch can send anything is into the core network. A granular routing table is only useful to a router (or Layer 3 switch) if traffic for some IP destinations is reachable via one link, and traffic for other IP destinations is reachable via other links. In our case, that's not true. So why clutter up the closet switch's routing table with a bunch of IP destinations that are all reachable via the exact same two links?

    Instead, the core switches can summarize traffic into a default route using an "ip summary-address eigrp" statement on the interfaces that uplink to the closet switch. The closet switch will then learn only a default route instead of the entire core routing table. (Both of these notes appear in the configuration sketch below.)
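A minimal sketch of both notes, again assuming EIGRP autonomous system 100 and an illustrative core-side interface number:

    ! On the closet switch: advertise only connected routes, never act as transit
    router eigrp 100
     eigrp stub connected summary
    !
    ! On each core switch, on the interface facing the closet switch:
    ! advertise only a default route instead of the full routing table
    interface TenGigabitEthernet1/1/1
     ip summary-address eigrp 100 0.0.0.0 0.0.0.0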

3. Quality of service considerations

Another big topic that we can only give an introduction to here is Quality of Service (QoS). QoS is the big idea that some network traffic is more important than other network traffic. Important traffic (like voice and video, for example, although it could be anything you define) must be delivered, while other traffic can tolerate some loss. (For more detail on QoS, you might like to read my series here.)

For this blog post, let's make a couple of high-level observations:

  • If you're not running IP phones or video devices through the closet switch, you can probably do okay without a QoS scheme. Without voice and video traffic, what you're sending through the switch is all data traffic, most likely TCP traffic. In the rare case of congestion between a closet switch and the core network, the retransmission and sliding window mechanisms built into TCP can cope. In most situations, this is sufficient. Exceptions to this general rule include interactive applications like SSH, where a delay due to congestion could make the application difficult to use. In such a case, a QoS scheme might be useful to prioritize that interactive traffic.
  • When running real-time voice and video through your switch, probably because you have IP phones rolled out to the organization, the idea is to send voice traffic (people talking on the phone) out a low-jitter "priority" queue, and video traffic out a queue that guarantees enough bandwidth for its needs. Jitter refers to the variation in time between packet deliveries; for voice traffic, delivering packets evenly is important for a good-quality call. Video traffic is more tolerant of jitter than voice, but all the packets still need to get where they're going. (A simplified queueing policy is sketched below.)
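To give a flavor of that policy, here's a heavily simplified MQC-style sketch. The class names, DSCP markings, and percentages are assumptions you'd tune to your traffic, and the exact QoS commands supported vary quite a bit between Catalyst families:

    ! Classify voice and video by their DSCP markings
    class-map match-any VOICE
     match dscp ef
    class-map match-any VIDEO
     match dscp af41
    !
    ! Voice gets a low-latency priority queue, video a bandwidth guarantee
    policy-map UPLINK-OUT
     class VOICE
      priority percent 10
     class VIDEO
      bandwidth percent 30
    !
    ! Applied outbound on the uplink, where congestion would occur
    interface GigabitEthernet1/0/49
     service-policy output UPLINK-OUT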

4. Other configuration considerations

Once the physical connectivity and forwarding design have been nailed down, a smart next step is to ensure the design continues working as intended. There are a couple of major points to consider.

First, security should be put in place to prevent switch configurations from being altered by an unauthorized party. Note that a switch configuration left to its defaults is not secure, so the wise administrator will take care to enter non-default usernames and passwords, and disable unencrypted management protocols like Telnet, SNMPv2, and HTTP, instead using SSH, SNMPv3, and HTTPS.
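As a starting point, here's a minimal Cisco IOS hardening sketch along those lines. The username, domain name, and SNMPv3 names and passwords are placeholders, not recommendations:

    ! Local credentials and SSH-only CLI access
    username netadmin privilege 15 secret Use-A-Strong-Secret-Here
    ip domain-name example.com
    crypto key generate rsa modulus 2048
    ip ssh version 2
    line vty 0 15
     login local
     transport input ssh
    !
    ! Disable the unencrypted web UI; keep HTTPS only if you need it
    no ip http server
    ip http secure-server
    !
    ! SNMPv3 with authentication and encryption instead of SNMPv2 community strings
    snmp-server group MONITORING v3 priv
    snmp-server user nms-poller MONITORING v3 auth sha AuthPassHere priv aes 128 PrivPassHere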

Another common-sense security step is to use an access list to limit the source IP addresses that are allowed to manage the switch. This prevents, say, a user at his desk from discovering the switch and trying to log in, or malware from attempting to send a new configuration to the switch via SNMP.
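Continuing the sketch above, a standard ACL applied to the VTY lines (and to the SNMPv3 group) limits where management traffic can come from; the management subnet shown is an assumption:

    ! Only the management subnet may open SSH sessions or poll SNMP
    ip access-list standard MGMT-HOSTS
     permit 10.99.0.0 0.0.0.255
     deny   any log
    !
    line vty 0 15
     access-class MGMT-HOSTS in
    !
    snmp-server group MONITORING v3 priv access MGMT-HOSTS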

If your client runs a highly secure environment, you might consider more aggressive tools in the network closet. For example, IP source guard, dynamic ARP inspection, DHCP snooping, and port security are features that can be deployed to ensure that systems using the switch are who they claim to be, aren't performing roles they shouldn't, and are behaving normally. Implementation of these features is complex and beyond our scope here, but they're well worth investigating and understanding whether you ultimately implement them or not.
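To show roughly what those features look like, here's a heavily simplified sketch for a single user VLAN (VLAN 10), one access port, and the uplink; real deployments need careful planning of trusted ports and rate limits that's beyond this post:

    ! Track DHCP bindings and drop rogue DHCP servers on VLAN 10
    ip dhcp snooping
    ip dhcp snooping vlan 10
    !
    ! Use the snooping binding table to validate ARP and source IP addresses
    ip arp inspection vlan 10
    !
    interface GigabitEthernet1/0/1
     switchport mode access
     switchport access vlan 10
     ip verify source
     switchport port-security
     switchport port-security maximum 3
     switchport port-security violation restrict
    !
    ! The uplink toward the legitimate DHCP server must be trusted
    interface Port-channel1
     ip dhcp snooping trust
     ip arp inspection trust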

Another concern is switch monitoring: making sure the switch is up and running, that links (especially the uplinks to the core switch) are not being over-utilized, and that the switch ports are running error-free. Many network monitoring and management tools, including Auvik, can help with these tasks, handling switch monitoring along with reporting, troubleshooting, and configuration management.

The result of all this design and configuration work is a closet switch you can trust. You'll be able to rest a bit easier, confident in a switch that's quietly forwarding traffic from the closet in the very best way possible.


Want to grab a full version of the No Sweat Guide to Effective Switch Management? Download your free copy here.


  1. nucco

    Nice, informative article. I was looking for some information about what kind of QoS, if any, a typical switch would apply to the uplink when there is contention. I wish you'd covered this, but it seems to be a hard topic to find information about.

    Just for completeness, my use case is a home network, and I'd be fine with some simple fair queueing where every competing port gets an equal share of the bandwidth.

  2. Steve Petryschuk

    Hi Nucco – Take a look at your network infrastructure to ensure it supports QoS before diving in further. Many legacy consumer-grade devices don't support QoS, though some of the newer ones do. We're working on more content in this area, but in the interim, check out articles on "port-based QoS".
