Six tips for testing broadband CPE and Wi-Fi router performance

Performance can make or break your product


When developing or deploying broadband CPE, Wi-Fi routers, or home/business network devices, what should you look for when testing performance? It's more than simply sending max-rate data through the device. Many different factors can affect your product's performance - here are six tips to guide you when qualifying device performance during testing and development.

1. Consider your performance results for both throughput and latency


While throughput measures the data rate of successful transmissions, latency measures the delay between when a transmission is sent by the source and when it is successfully processed by the sink. Latency causes different problems for different applications - for example, watching a video over a high-latency connection forces the receiver to buffer more, and high latency will disrupt fast-paced online gaming. Both factors are affected by the performance of the device's network processing.

What can affect throughput and latency? If the network processing can't keep up with demand, the device will drop packets. Drops reduce throughput when packets never arrive, while retransmissions at lower layers recover the data but introduce latency. Some of this is out of the network processor's hands - when there is disruption at the physical layer in DSL or Wi-Fi, retransmissions keep the connection robust, but they may add delay.

There's no single answer for what good throughput and latency are, and users have different tolerances for different applications. Add Wi-Fi to the mix and it gets messier still. Be sure to consider both in your testing.
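
If you're scripting a quick check yourself, here's a minimal sketch that captures both numbers in one pass using the open-source iperf3 and ping tools. The server address is a placeholder for an iperf3 server running on the far side of the device under test, and the ping parsing assumes Linux-style output:

```python
# Minimal sketch: measure throughput and latency through the DUT in one pass.
# Assumes "iperf3 -s" is already running at DUT_SERVER (placeholder address).
import json
import subprocess

DUT_SERVER = "192.168.1.100"  # placeholder: host on the far side of the DUT

def measure_throughput(seconds=10):
    """Run a TCP iperf3 test and return the received rate in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", DUT_SERVER, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

def measure_latency(count=20):
    """Ping through the DUT and return the average round-trip time in ms."""
    out = subprocess.run(
        ["ping", "-c", str(count), DUT_SERVER],
        capture_output=True, text=True, check=True,
    )
    # Last line (Linux): rtt min/avg/max/mdev = 0.549/0.682/0.989/0.132 ms
    stats = out.stdout.strip().splitlines()[-1]
    return float(stats.split("=")[1].split("/")[1])

if __name__ == "__main__":
    print(f"throughput: {measure_throughput():.1f} Mbit/s")
    print(f"latency:    {measure_latency():.2f} ms avg RTT")
```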

2. Understand application/transport layer testing


Many performance testing tools and processes use lower layer (that is, layer 2 or 3) packets. This is fine if you're looking for a simple "line rate" measurement, but it won't exercise what the device is actually supposed to do: handle the different applications that users actually run.

Also understand exactly what the source and sink compare during the test. At the transport layer it's the success of the transmission that matters, and that gives different results for TCP and UDP. Because TCP retransmits lost packets, packet loss affects the result without counting against the success of the transmission - it shows up as added latency rather than lost throughput. With UDP, a lost packet generally shows up directly as lost throughput.
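
To see the difference concretely, here's a hedged sketch that runs the same test over TCP and then UDP with iperf3 and reads out the transport-level signals described above (the server address and offered rate are placeholders):

```python
# Contrast transport-layer "success": TCP loss shows up as retransmits
# (latency), UDP loss shows up directly as missing throughput.
import json
import subprocess

DUT_SERVER = "192.168.1.100"  # placeholder: iperf3 server behind the DUT

def run_iperf3(udp=False, rate="100M", seconds=10):
    """Run one iperf3 test and return the parsed JSON report."""
    cmd = ["iperf3", "-c", DUT_SERVER, "-t", str(seconds), "-J"]
    if udp:
        cmd += ["-u", "-b", rate]  # UDP at a fixed offered rate
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

tcp_report = run_iperf3(udp=False)
udp_report = run_iperf3(udp=True)

# TCP: throughput usually holds up, but loss surfaces as retransmits.
print("TCP Mbit/s:     ", tcp_report["end"]["sum_received"]["bits_per_second"] / 1e6)
print("TCP retransmits:", tcp_report["end"]["sum_sent"]["retransmits"])

# UDP: no retries, so every dropped datagram is throughput lost for good.
print("UDP Mbit/s:", udp_report["end"]["sum"]["bits_per_second"] / 1e6)
print("UDP lost %:", udp_report["end"]["sum"]["lost_percent"])
```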

3. Test at different fixed rates

Most throughput testing amounts to "packet blasting" - trying to push the maximum possible throughput between two endpoints. However, network processors perform advanced functions to handle different rates of traffic. Moreover, testing the exact scenarios your product will be deployed in provides a better baseline for qualifying device performance in the real world.

Run tests at specific, fixed data rates during your performance testing. Not only do they provide easy pass/fail criteria, but you can match them up with the SLA rates that customers will see with their service. You can also "ramp up" testing at increasing fixed rates to uncover hidden flaws in product stability.
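
One way to script this outside a dedicated test tool is iperf3's fixed-rate UDP mode. In this sketch the rate list and the 95% pass threshold are illustrative placeholders, not recommendations:

```python
# Fixed-rate "ramp up" sketch: offer each rate in turn and check how much
# of it actually made it through the DUT.
import json
import subprocess

DUT_SERVER = "192.168.1.100"            # placeholder address
RATES_MBPS = [50, 100, 250, 500, 1000]  # e.g. the SLA tiers you actually sell
PASS_FRACTION = 0.95                    # accept 95% of the offered fixed rate

for rate in RATES_MBPS:
    out = subprocess.run(
        ["iperf3", "-c", DUT_SERVER, "-u", "-b", f"{rate}M", "-t", "30", "-J"],
        capture_output=True, text=True, check=True,
    )
    achieved = json.loads(out.stdout)["end"]["sum"]["bits_per_second"] / 1e6
    verdict = "PASS" if achieved >= rate * PASS_FRACTION else "FAIL"
    print(f"{rate:>5} Mbit/s offered -> {achieved:7.1f} achieved  {verdict}")
```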

4. Test using multiple clients


It's not enough to use a single source and sink in your performance testing. Users are adding more and more devices to their networks, and many of those devices run high-throughput applications like video streaming.

Set up your performance testing to run multiple streams to multiple clients. Even better, start with a few clients and ramp up the number of throughput streams to get an idea of your product's scalability.
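
If you're scripting this yourself, one approach is to launch several concurrent iperf3 clients, each against its own server port, since a single iperf3 server instance only runs one test at a time. The addresses, ports, and client counts below are all placeholders:

```python
# Scalability sketch: ramp up the number of concurrent clients and watch
# how aggregate and per-client throughput change. Assumes one iperf3 server
# per port (5201, 5202, ...) on the far side of the DUT.
import json
import subprocess

DUT_SERVER = "192.168.1.100"  # placeholder address
BASE_PORT = 5201              # iperf3's default port

def run_clients(n_clients, seconds=30):
    """Launch n concurrent TCP clients and return per-client Mbit/s."""
    procs = [
        subprocess.Popen(
            ["iperf3", "-c", DUT_SERVER, "-p", str(BASE_PORT + i),
             "-t", str(seconds), "-J"],
            stdout=subprocess.PIPE, text=True,
        )
        for i in range(n_clients)
    ]
    rates = []
    for proc in procs:
        stdout, _ = proc.communicate()
        report = json.loads(stdout)
        rates.append(report["end"]["sum_received"]["bits_per_second"] / 1e6)
    return rates

for n in (1, 2, 4, 8):
    rates = run_clients(n)
    print(f"{n} clients: {sum(rates):.0f} Mbit/s total, "
          f"{min(rates):.0f} Mbit/s worst client")
```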

5. Test for long-term stability


Many products see degradation over time, which greatly affects the end-user experience. When a whole family is watching Netflix at all hours of the day, small design problems can become major stability issues. If there are bugs in your code or poor cleanup that causes memory leaks, they can eventually degrade the network processing ability of the device - and they are often triggered by the ongoing operation of other protocols.

Running throughput tests for long periods, especially at different rates and in the presence of other protocol activity, can help root out these subtle issues. Running performance testing for several hours or days (for example, over a weekend) will provide valuable insight into your product's long-term stability and overall end-user experience.
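
A simple soak-test loop along these lines can be scripted with iperf3; the duration, sample length, and log format here are placeholders:

```python
# Soak-test sketch: take short throughput samples for many hours and log
# them, so slow degradation (e.g. from a memory leak) shows up as a trend.
import json
import subprocess
import time

DUT_SERVER = "192.168.1.100"  # placeholder address
SOAK_HOURS = 48               # e.g. run over a weekend
SAMPLE_SECONDS = 60           # length of each throughput sample

deadline = time.time() + SOAK_HOURS * 3600
with open("soak_log.csv", "w") as log:
    log.write("timestamp,mbps\n")
    while time.time() < deadline:
        out = subprocess.run(
            ["iperf3", "-c", DUT_SERVER, "-t", str(SAMPLE_SECONDS), "-J"],
            capture_output=True, text=True,
        )
        if out.returncode == 0:
            report = json.loads(out.stdout)
            mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
        else:
            mbps = 0.0  # a failed sample is itself a stability data point
        log.write(f"{time.time():.0f},{mbps:.1f}\n")
        log.flush()
        time.sleep(30)  # idle gap leaves room for other protocol activity
```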

6. Use repeatable, automated performance testing


Performance is highly subjective. You'll end up building your own metrics, but what you really want to know is how performance will change over time due to the real-world behavior of the device - that is, as users actually use it, will performance degrade? Testing this in a way that provides empirical results requires repeatability and consistency. You'll also want to be able to mix performance tests with functional tests to simulate user activity.

The CDRouter Performance add-on lets you build test packages for these combinations that execute consistently. You can set your own thresholds for success and failure, so if you know that 90% of a target rate is the number you're looking for, you can test for exactly that.

CDRouter Performance also includes a set of easy-to-setup, standardized, fixed-rate test cases with clear pass/fail results to automate specific SLA rate tests or to "ramp up" rates over time for stability testing.

CDRouter works with Continuous Integration through the CDRouter API, so if you're trying to improve performance across firmware revisions you'll be able to track it seamlessly.
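
For illustration, here's roughly what a CI gate might look like against the CDRouter web API. The endpoint paths, payload fields, and status values below are assumptions made for this sketch - check the CDRouter API documentation for the exact schema on your release:

```python
# Hedged sketch of a CI gate: launch a CDRouter test package, wait for the
# result, and fail the build if it didn't pass. Endpoints and field names
# are illustrative assumptions, not the documented CDRouter API schema.
import time
import requests

CDROUTER_URL = "https://cdrouter.example.com"  # placeholder system URL
API_TOKEN = "REPLACE_ME"                       # API token for your account
PACKAGE_ID = 42                                # your performance test package

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Launch the package (assumed endpoint and payload).
job = requests.post(f"{CDROUTER_URL}/api/v1/jobs/",
                    json={"package_id": PACKAGE_ID},
                    headers=headers).json()["data"]

# Poll until the job produces a result (assumed field names).
while not job.get("result_id"):
    time.sleep(10)
    job = requests.get(f"{CDROUTER_URL}/api/v1/jobs/{job['id']}/",
                       headers=headers).json()["data"]

result = requests.get(f"{CDROUTER_URL}/api/v1/results/{job['result_id']}/",
                      headers=headers).json()["data"]
print("CDRouter result:", result.get("status"))
raise SystemExit(0 if result.get("status") == "passed" else 1)
```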