Network latency is the time it takes for information to travel from one point to another, and efforts to reduce it have been going on since humans first started communicating.
A brilliant example of this was seen in the early days of
the telegraph, when news between North America and Europe still had to
travel by boat before being re-transmitted over land-based telegraph lines. A
few enterprising telegraph companies started setting up stations on the southwest
coast of Ireland to intercept United Kingdom-bound ships carrying news from the
US. They would re-transmit the news and, in doing so, reduce network latency by
several hours or days. The completion of the transoceanic telegraph lines put an end to transporting messages by ship and further reduced the latency between the continents by days.
Network latency is even more critical in today’s information
age, where delay is often measured in milliseconds (or microseconds, in the
case of financial markets) rather than days. The relentless move towards Internet-based
business, and the expectation that network response times will be close to instantaneous, have meant that networks must be engineered to minimize delay.
Some enterprises have drawn a direct correlation between delay and revenue. Amazon famously claimed that every 100-millisecond reduction in delay led to a one percent increase in sales. Google likewise stated that a half-second delay in returning search results led to a 20 percent reduction in traffic to its search pages.
Excess latency can have a profound effect on user experience, from delay during a simple phone conversation to slow-loading web pages and stuttering streaming video. Ask any online gamer how lag affects the experience.
There are plenty of examples that quantify the results of excess delay, and plenty of tools available to measure latency; the real challenge, however, lies in isolating and correcting the actual cause of the delay. Technologies such as virtualization and cloud computing make much better use of existing assets, but they also add unintended layers of complexity and make troubleshooting more challenging.
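To make the measurement side concrete, here is a minimal sketch of one common approach: timing TCP connection setup, which approximates one network round trip to a host. The target host and sample count here are illustrative assumptions, not part of any particular monitoring product.

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Time TCP connection setup to host:port; each sample approximates
    one network round trip (SYN -> SYN/ACK) plus local stack overhead."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass  # connection established; we only care about setup time
        results.append((time.perf_counter() - start) * 1000.0)
    return results

if __name__ == "__main__":
    times = tcp_connect_latency_ms("example.com")  # hypothetical target
    print(f"min/avg/max: {min(times):.1f}/{sum(times)/len(times):.1f}/{max(times):.1f} ms")
```

A sketch like this shows where delay exists, but, as noted above, not why: the connect time lumps together propagation, queuing, and endpoint processing, which is exactly why isolating the cause is the harder problem.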
Fortunately, there are tools available to help address these issues. Application performance monitoring (APM) and infrastructure performance monitoring (IPM) solutions monitor applications and help ensure that security, availability, and performance objectives are met.
One of the key requirements these tools must meet is to monitor the bitstream without affecting its performance or availability, and without excessive additional cost. Schemes such as port mirroring add cost and consume switch resources.
One key enabling technology in the monitoring application is the traffic access point (TAP) module. This completely passive device splits the incoming signal, sending one copy to the monitoring equipment and the other to the IT equipment. A well-designed TAP should be fully integrated into the fiber infrastructure, with the links engineered to accommodate the TAP while maintaining link performance.
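Because a passive optical TAP diverts part of the light, engineering the link to accommodate it is largely a power-budget exercise. The sketch below shows that arithmetic for a split-ratio TAP; the 70/30 split, 0.5 dB excess loss, and the example transmitter and receiver figures are illustrative assumptions, not vendor specifications.

```python
import math

def splitter_leg_loss_db(split_fraction: float, excess_loss_db: float = 0.5) -> float:
    """Insertion loss (dB) of a passive splitter leg carrying `split_fraction`
    of the input power, plus excess loss from connectors and fabrication."""
    return -10.0 * math.log10(split_fraction) + excess_loss_db

def tap_link_margin_db(tx_power_dbm: float, rx_sensitivity_dbm: float,
                       fiber_and_connector_loss_db: float,
                       live_fraction: float = 0.7) -> float:
    """Remaining power margin on the live leg of a TAP'd link.
    A positive margin means the link still closes with the TAP in place."""
    tap_loss = splitter_leg_loss_db(live_fraction)
    return tx_power_dbm - fiber_and_connector_loss_db - tap_loss - rx_sensitivity_dbm

# Example: -3 dBm transmitter, -14 dBm receiver sensitivity, 4 dB of plant loss.
# A 70/30 TAP costs about -10*log10(0.7) + 0.5 = ~2.05 dB on the live leg.
print(f"link margin with TAP: {tap_link_margin_db(-3.0, -14.0, 4.0):.2f} dB")
```

If the margin comes out negative, the link cannot absorb the TAP's insertion loss, which is why the TAP must be accounted for when the link is designed rather than bolted on afterward.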
As history has shown, reducing latency is an ongoing
challenge. For many industries, having the lowest latency network will provide
a distinct competitive advantage. Engineering networks to provide low latency
and putting the tools in place to monitor and isolate sources of delay can have
a great impact on a company’s success.

We have come a long way since the transoceanic telegraph, but we still fight latency issues on several levels. The fiber TAP is the latest tool to help cut down on this costly issue. If you have any questions about fiber TAPs or other network latency solutions, please leave a comment below and I will be sure to respond.