Impact of Buffering Time Reduction on False Congestion Detection in TCP Vegas Over OBS Networks

Van Hoa Le

Abstract

Since the Transmission Control Protocol (TCP) is the dominant protocol on the Internet and optical burst switching (OBS) is the optical transmission solution that can meet future high-bandwidth requirements, TCP over OBS is a promising combination for the next-generation Internet. A problem with this combination, however, is that operations at the OBS layer must be controlled so that false congestion detection at the TCP layer is minimised. In TCP Vegas over OBS, congestion detection at the TCP layer is based on the round-trip time (RTT): congestion is declared whenever the current RTT exceeds a given threshold RTTmax. The extra delay may stem from actual loss, but it may also stem from the additional time consumed by OBS-layer operations. Reducing the time these operations take therefore reduces false congestion detection at the TCP layer. This paper investigates the impact of reducing the buffering time at the ingress node on the false congestion detection rate at the TCP layer; the burst buffering time is reduced by nesting the offset time within the assembly time. Simulation results and analysis show that transmission efficiency in TCP Vegas over OBS networks improves significantly: throughput remains high, and the delivery success rate increases markedly compared to the conventional buffering mode.
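
The abstract's argument hinges on two quantities, so a small illustration may help. Below is a minimal Python sketch, not taken from the paper: the function names, the timing values, and the max()-based overlap model for the nested mode are all illustrative assumptions (the abstract states only that the offset time is nested in the assembly time). It shows how summing the assembly and offset delays can push the measured RTT past RTTmax and trigger a false congestion signal, while overlapping the two delays keeps the RTT below the threshold.

```python
# Illustrative sketch only; timing values and the overlap model are assumed.

def ingress_delay_conventional(assembly_time: float, offset_time: float) -> float:
    """Conventional mode: the offset countdown starts only after the
    burst is fully assembled, so the two delays add up."""
    return assembly_time + offset_time

def ingress_delay_nested(assembly_time: float, offset_time: float) -> float:
    """Nested mode (assumed model): the offset time overlaps the assembly
    time, so only the longer of the two determines the buffering delay."""
    return max(assembly_time, offset_time)

def vegas_detects_congestion(rtt: float, rtt_max: float) -> bool:
    """Per the abstract, TCP Vegas declares congestion whenever the
    current RTT exceeds the given RTTmax."""
    return rtt > rtt_max

if __name__ == "__main__":
    base_rtt = 40.0              # ms, propagation + processing (assumed)
    rtt_max = 50.0               # ms, Vegas threshold (assumed)
    assembly, offset = 6.0, 8.0  # ms, OBS ingress delays (assumed)

    for name, delay_fn in [("conventional", ingress_delay_conventional),
                           ("nested", ingress_delay_nested)]:
        rtt = base_rtt + delay_fn(assembly, offset)
        print(f"{name:12s}: RTT = {rtt:.1f} ms, "
              f"false congestion = {vegas_detects_congestion(rtt, rtt_max)}")
```

With these assumed numbers, the conventional mode yields an RTT of 54.0 ms (above RTTmax, a false congestion signal), while the nested mode yields 48.0 ms (below RTTmax), which is the effect the paper's buffering-time reduction aims for.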
