Every industry loves its terms and jargon. Stop me if you’ve heard this one before: “I’ve always said that one of my core competencies is getting the most bang for my buck out of the sweat equity I put in during my 9-to-5.”

Sure, the sentence doesn’t really make any sense, but it sounds good enough when you say it. And that’s precisely the point of jargon: sounding good without saying much.

The IT industry is no different. There are lots of network traffic terms that, while maybe not as jargon-y as the phrase above, don’t mean anything to anyone outside of our world. Sometimes the jargon gets so tangled that folks in IT end up using different terms for the same key concepts we all need to understand.

Are you one of those people? Or are you new to IT and trying to get the basic terms down? Maybe you can’t quite remember the difference between bandwidth and throughput. No matter, you’ve come to the right place.

To really understand network traffic and the important role it plays in network management and security, there are a few key concepts and terms we should understand and agree on. Let’s take a look at a few.

1. Bandwidth

Even though bandwidth may be one of the most well-known network traffic terms, it’s often used by different people to describe different things.

In the simplest terms, bandwidth is the theoretical maximum amount of network traffic a given link can support. For example, if you’re hardwired into your network right now, you’re likely connected to a switch by a 1Gbps (gigabits per second) Ethernet cable. Or maybe you’re at home, paying your ISP for a 100Mbps (megabits per second) service. Those speeds are your bandwidth.
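To put some numbers to that, here’s a quick back-of-the-envelope sketch in Python (the file size and link speeds are just illustrative) showing the best-case transfer time each of those links could theoretically deliver:

```python
# Theoretical best-case transfer time for a 1 GB file.
# Note: link speeds are in bits per second, file sizes in bytes.
FILE_SIZE_BYTES = 1_000_000_000          # 1 GB
FILE_SIZE_BITS = FILE_SIZE_BYTES * 8

for name, bits_per_second in [("1Gbps Ethernet", 1_000_000_000),
                              ("100Mbps ISP link", 100_000_000)]:
    seconds = FILE_SIZE_BITS / bits_per_second
    print(f"{name}: {seconds:.0f} seconds")
# 1Gbps Ethernet: 8 seconds
# 100Mbps ISP link: 80 seconds
```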

When describing network traffic, it’s helpful to think of it like road traffic, since most of us can picture cars on a highway. Think of bandwidth as the total capacity of the highway. If cars were driving the speed limit bumper to bumper down a highway, and you could measure how many pass a specific point in a specific period of time, you’d have tallied the bandwidth of the highway.

2. Throughput

While bandwidth tells us a theoretical maximum for any given connection, throughput is a much more practical number, as it tells us the actual amount of data that’s flowing across that link.

Continuing with our road traffic example, throughput can be thought of as the actual number of cars passing a specific point within a specific period of time. Your 1Gbps Ethernet connection might in reality only ever achieve a few hundred megabits per second of throughput.

The most common question you’ll hear, then, is: why is the throughput (actual traffic) less than the bandwidth (theoretical limit)? There are a number of factors that can affect your throughput, such as:

  • The limitations of any downstream connections, like your 100Mbps ISP connection
  • Not enough demand for the full 1Gbps connection
  • Other environmental factors, like your distance from a wireless access point
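If you want to see your own throughput number, a rough way is to time a transfer and divide bytes received by elapsed time. Here’s a minimal Python sketch; the URL is a placeholder you’d swap for any large file you’re allowed to download:

```python
# A rough throughput measurement: time how long a download takes and
# divide bytes received by elapsed time. The URL below is a placeholder;
# substitute any reasonably large file you're allowed to fetch.
import time
import urllib.request

URL = "https://example.com/some-large-file.bin"  # hypothetical test file

start = time.monotonic()
with urllib.request.urlopen(URL) as response:
    data = response.read()
elapsed = time.monotonic() - start

megabits = len(data) * 8 / 1_000_000
print(f"Received {megabits:.1f} Mb in {elapsed:.1f} s "
      f"= {megabits / elapsed:.1f} Mbps of throughput")
```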

3. Packets

Packets are segments of data that travel at the network layer, or Layer 3, of the OSI model. Packets consist of a header and payload data. The header contains information such as the protocol in use, source and destination IP addresses, and time-to-live (TTL) values. The payload is the actual data being transmitted from the source to the destination.
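To make the header/payload split concrete, here’s a small Python sketch that unpacks the fields mentioned above from a standard 20-byte IPv4 header (no options). The sample packet at the bottom is hand-built for illustration:

```python
# A minimal sketch of pulling the fields this section describes out of a
# raw IPv4 header (the standard 20-byte header, no options).
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack protocol, TTL, and source/destination IPs from an IPv4 header."""
    (ver_ihl, _dscp, _total_len, _ident, _flags_frag,
     ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,                      # e.g. 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Example: a hand-built header for a TCP packet from 192.0.2.1 to 198.51.100.7
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.7"))
print(parse_ipv4_header(sample))
```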

Going back to our traffic analogy, think of packets as the individual cars on the road. The header is the engine that moves the car from Point A to Point B, and the driver is the payload—the thing you want to get to your destination.

4. Latency

Remember when you were younger, watching a storm? You’d count the seconds between seeing the lightning and hearing the thunder, then divide by five to gauge roughly how many miles away the lightning was. Let’s think about latency that way. When lightning strikes, it’s “sending” the thunder your way. Latency is the delay between when the thunder is sent and when you receive it.

Put in networking terms, latency is the delay between data being sent and it reaching its destination. Latency is usually measured in milliseconds (ms), and the most common tool for measuring it is an ICMP ping.
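A true ICMP ping needs raw-socket privileges in most environments, so here’s a hedged stand-in in Python: timing a TCP handshake, which takes roughly one round trip, so it works as a crude latency gauge. The host and port are just examples:

```python
# Rough latency estimate: time a TCP handshake, which takes about one
# round trip. Not a true ICMP ping, but it needs no special privileges.
import socket
import time

HOST, PORT = "example.com", 443   # any reachable host/port will do

start = time.monotonic()
with socket.create_connection((HOST, PORT), timeout=5):
    pass                          # connection established; close immediately
elapsed_ms = (time.monotonic() - start) * 1000
print(f"Connect time to {HOST}:{PORT} is roughly {elapsed_ms:.1f} ms")
```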

5. Round trip time (RTT)

The next logical step from latency, round trip time (RTT) is just that: the time it takes for packets to travel a round trip, from source to destination and back. In physical terms, if you threw a tennis ball at a wall, how long would it take for the ball to come back?

It’s important to note that RTT isn’t just two times latency. A number of factors can affect round trip time, like the throughput on the network between the two communicating devices and the load on the end devices themselves.
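In practice, the everyday way to measure RTT is still the ping utility. Here’s a minimal Python sketch that shells out to the system ping; note the flags vary by OS (-c sets the count on Linux and macOS, while Windows uses -n), as does the format of the output:

```python
# Measuring RTT with the system's ping utility. Flags vary by OS:
# "-c" (count) works on Linux/macOS; Windows uses "-n" instead.
import subprocess

result = subprocess.run(
    ["ping", "-c", "4", "example.com"],
    capture_output=True, text=True, timeout=30,
)
print(result.stdout)   # the summary line reports min/avg/max RTT
```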

6. Jitter

A natural extension of latency, jitter is a measure of how much latency varies between individual packets. For example, if it always takes the same amount of time to send a packet between two hosts, jitter is very low, as the latency isn’t changing. If the latency between packets fluctuates significantly and frequently between the two hosts, jitter is high, as the latency is changing a lot and changing often.

High levels of jitter can negatively affect the performance of a variety of real-time applications that rely on low latency, like VoIP calls, video calls, and streaming video.
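There’s no single formula for jitter in every context (RFC 3550 defines a smoothed estimator for RTP, for instance), but a simple, common approximation is the average difference between consecutive latency samples. A sketch with made-up numbers:

```python
# A simple jitter estimate: the mean absolute difference between
# consecutive latency samples. (RFC 3550 defines a smoothed variant;
# this is the back-of-the-envelope version.)
def jitter_ms(latencies_ms: list[float]) -> float:
    deltas = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(deltas) / len(deltas)

steady = [20.1, 20.3, 19.9, 20.2, 20.0]    # low jitter
spiky  = [20.0, 55.0, 18.0, 90.0, 21.0]    # high jitter
print(f"steady link: {jitter_ms(steady):.1f} ms of jitter")
print(f"spiky link:  {jitter_ms(spiky):.1f} ms of jitter")
```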

7. Congestion

Network congestion occurs when the network has more data to send than it has capacity to send it. Like rush hour on a highway, there are only so many lanes the cars can use. As the throughput of network traffic climbs toward the maximum bandwidth of the network, congestion is going to occur.

Congestion can significantly affect network performance too, as data may be stuck in a queue waiting to be forwarded through the link, or dropped altogether until the congestion clears up. It’s a universal truth—no one likes gridlock.
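To see why queues fill and packets drop, here’s a toy Python simulation (all the numbers are invented) of a link offered more traffic per tick than it can forward:

```python
# Toy congestion model: packets arrive faster than the link can forward
# them, so the queue grows until the buffer overflows and packets drop.
ARRIVAL_PER_TICK = 12     # packets offered each tick (demand)
SERVICE_PER_TICK = 10     # packets the link can forward each tick (capacity)
QUEUE_LIMIT = 50          # buffer size before drops begin

queue = dropped = 0
for tick in range(1, 51):
    queue += ARRIVAL_PER_TICK
    queue -= min(queue, SERVICE_PER_TICK)
    if queue > QUEUE_LIMIT:
        dropped += queue - QUEUE_LIMIT
        queue = QUEUE_LIMIT

print(f"after 50 ticks: queue={queue}, dropped={dropped}")
# demand exceeds capacity by 2 packets/tick, so the buffer fills
# and the excess is eventually dropped
```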

8. Packet loss

As the name implies, packet loss occurs when network packets don’t make it to their destination and are lost along the way. As mentioned, when congestion is bad, a network will drop queued packets until it has the capacity to try again.

Often in situations of packet loss, protocols higher up the OSI stack, like TCP, will attempt to resend the data, causing further network congestion and further packet loss. High packet loss is one of the common causes of slow internet, slow network, and slow Wi-Fi complaints.

In addition to network congestion, packet loss can occur for many other reasons, including hardware issues and software application bugs.
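Packet loss is usually reported as the percentage of sent packets that never arrived, which is what ping’s summary line shows. A trivial sketch with illustrative numbers:

```python
# Packet loss is usually expressed as the percentage of packets sent
# that never arrived. The figures below are picked for illustration.
def loss_percent(sent: int, received: int) -> float:
    return (sent - received) / sent * 100

print(f"{loss_percent(sent=1000, received=982):.1f}% packet loss")  # 1.8%
```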

9. Flows

Network flows are one of the more complex and useful concepts related to network traffic monitoring. In theory, a network flow is a set of packets sent between two or more endpoints on a network. In practice, network flows come in a variety of different forms (host-to-host, broadcast, anycast, etc.) and use many different network protocols.

So can any group of packets be considered a flow? Not exactly. A flow is typically a group of packets with attributes in common, such as source and destination IP addresses, ports, and protocol (the classic 5-tuple). Think of it like a conversation between two people: the words are the packets, and the entire conversation is the flow.
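Here’s a minimal sketch of that grouping in Python: bucket packets by the 5-tuple and tally packets and bytes per flow. The packet records are made up for illustration:

```python
# A minimal sketch of how flow grouping works: bucket packets by the
# classic 5-tuple (src IP, dst IP, src port, dst port, protocol).
from collections import defaultdict

packets = [
    {"src": "10.0.0.5", "dst": "93.184.216.34", "sport": 52344,
     "dport": 443, "proto": "TCP", "bytes": 1500},
    {"src": "10.0.0.5", "dst": "93.184.216.34", "sport": 52344,
     "dport": 443, "proto": "TCP", "bytes": 900},
    {"src": "10.0.0.7", "dst": "8.8.8.8", "sport": 41022,
     "dport": 53, "proto": "UDP", "bytes": 74},
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for p in packets:
    key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
    flows[key]["packets"] += 1
    flows[key]["bytes"] += p["bytes"]

for key, stats in flows.items():
    print(key, stats)
# the two TCP packets collapse into one flow; the DNS query is its own flow
```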

10. Flow protocols

Devices that track these flows often report on them to an external collector. To do that, the devices rely on flow protocols, or a defined set of rules for communicating the details of those conversations.

Types of flow protocols include NetFlow, IPFIX, J-Flow, and sFlow. The main benefit of flow protocols is they enable administrators to look beyond basic network monitoring statistics like bandwidth utilization and gain deeper insights into what’s actually happening on the network. For example, instead of just knowing a given link has high bandwidth utilization, you can see that a specific user downloading a disproportionate number of large files is causing the issue.
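To make “flow protocol” a little more concrete, here’s a sketch that unpacks the fixed 24-byte NetFlow v5 packet header with Python’s struct module. A real collector would go on to parse the flow records that follow the header; this stops at the header, and the sample datagram is synthetic:

```python
# Unpacking the fixed 24-byte NetFlow v5 packet header.
import struct

def parse_netflow_v5_header(datagram: bytes) -> dict:
    (version, count, _sys_uptime, _unix_secs, _unix_nsecs,
     flow_sequence, _engine_type, _engine_id, _sampling) = struct.unpack(
        "!HHIIIIBBH", datagram[:24])
    return {"version": version, "record_count": count,
            "flow_sequence": flow_sequence}

# Example: a synthetic header announcing 5 flow records
sample = struct.pack("!HHIIIIBBH", 5, 5, 123456, 1700000000, 0, 42, 0, 0, 0)
print(parse_netflow_v5_header(sample))
# {'version': 5, 'record_count': 5, 'flow_sequence': 42}
```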

11. IOPS

While not specifically a network traffic term, IOPS (pronounced “eye-ops”) often comes up when discussing network performance. IOPS refers to the input/output operations per second a storage device can perform.

Storage devices are limited in how quickly they can read and write data. As a device reaches its IOPS limit, data may be queued or dropped by the application, which is another possible cause of packet loss.
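The arithmetic connecting IOPS to storage throughput is simply operations per second times I/O size. A quick sketch with illustrative figures:

```python
# Storage throughput = IOPS x I/O size. Figures are illustrative.
def throughput_mb_per_s(iops: int, io_size_kib: int) -> float:
    return iops * io_size_kib * 1024 / 1_000_000

print(f"{throughput_mb_per_s(10_000, 4):.0f} MB/s")    # small random I/O: ~41 MB/s
print(f"{throughput_mb_per_s(10_000, 128):.0f} MB/s")  # large sequential I/O: ~1311 MB/s
```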


Now that we’ve covered these basic network traffic terms, are you ready to start digging into monitoring and understanding your network traffic? Download our free ebook on finding blind spots and keeping networks healthy & secure with Auvik TrafficInsights.

Terry Critchley

Steve, while not disagreeing with your (and others’) definition of latency, I find it easier to consider the ‘delay’ in parts: 1. the unchanging time due to the physical characteristics of the medium, and 2. the ‘add-ons’ due to queuing and other factors. See the paper below.
https://www.datacenterdynamics.com/en/opinions/performance-myths-and-legends-part-ii/
The rest of the article is useful.
Regards

Steve Petryschuk

Thanks Terry. Great extension to the latency concept by introducing an unavoidable native latency and an ‘extra’ latency. Unfortunately, both can be problems that affect end-user experience, but it’s great to focus on what you can work to minimize: that ‘extra’ latency.
