

Internet speed, or how fast data transfers in a network, is calculated using different metrics: latency, bandwidth, and throughput. This article aims to explain these metrics and how the TCP protocol - the way most data is transmitted over the Internet nowadays - impacts them.

Sending off your data packets

Accessing the Internet is based on an exchange of information - when you shop online or stream a movie, your IP sends a flow of information in the form of data packets making a request. In turn, the receiver responds by sending back another flow of information, also in the form of data packets.

When using the TCP protocol, these data packets are transmitted in sequential order through a network of lanes to the destination - when they reach their destination, a confirmation is sent back to your IP. Only a limited number of packets can be sent without receiving confirmation that the prior packets have reached their destination. At the same time, the number of packets that can travel through the lane during a specific amount of time is also limited. The rate at which data packets can travel through the network is called bandwidth.

However, bandwidth isn't the only element that impacts your real Internet speed. As each packet travels through the network, it passes through multiple nodes where it gets redirected toward its destination. The time it takes for your packets to reach their destination is called latency. A number of factors, including the physical distance to the destination and any delays encountered along the route, can slow a packet's arrival at its destination and, ultimately, the confirmation sent back.
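The send-and-confirm cycle described above can be sketched as a small simulation: a sender may have only a fixed number of unconfirmed packets "in flight" at once, and each confirmation arrives one round trip after its packet was sent. This is a simplified model for illustration (real TCP windows are measured in bytes and adapt over time, and the tick-based clock is an assumption of the sketch):

```python
import collections

def send_with_window(num_packets: int, window_size: int, rtt_ticks: int) -> int:
    """Simulate confirmed, in-order delivery: at most `window_size`
    unconfirmed packets in flight; each confirmation comes back
    `rtt_ticks` ticks after its packet is sent. Returns the tick at
    which the last confirmation arrives."""
    tick = 0
    next_to_send = 0
    in_flight = collections.deque()  # ticks at which confirmations arrive
    while in_flight or next_to_send < num_packets:
        # Fill the window with new packets, up to the in-flight limit.
        while next_to_send < num_packets and len(in_flight) < window_size:
            in_flight.append(tick + rtt_ticks)
            next_to_send += 1
        # Advance time to the next confirmation and retire all that arrived.
        tick = in_flight.popleft()
        while in_flight and in_flight[0] <= tick:
            in_flight.popleft()
    return tick

print(send_with_window(8, 2, 100))  # 400: eight packets, two per round trip
print(send_with_window(8, 4, 100))  # 200: doubling the window halves the time
```

The second call shows why the window limit matters: with the same latency, a larger window lets more packets travel per round trip, so the whole transfer finishes sooner.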

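Because a sender can push at most one window of data per round trip, latency puts a hard ceiling on throughput no matter how high the raw bandwidth is. A minimal back-of-the-envelope calculation (the window size and round-trip time here are illustrative assumptions, not measurements):

```python
def max_throughput_bytes_per_sec(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput: at most one full window per round trip."""
    return window_bytes / rtt_seconds

# Illustrative numbers: a 64 KiB window and 100 ms of round-trip latency.
window = 64 * 1024   # bytes allowed to be unconfirmed at once
rtt = 0.100          # seconds for a packet plus its confirmation

throughput = max_throughput_bytes_per_sec(window, rtt)
print(f"{throughput / 1024:.0f} KiB/s")  # prints "640 KiB/s"
```

With these numbers the connection tops out around 640 KiB/s even on a gigabit link, which is why cutting latency (or growing the window) can speed up a transfer when extra bandwidth cannot.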