The explosive growth of the Internet and the increasing amount of online information have made the Internet an important information source. This expansion, however, has also brought increasing heterogeneity in the client devices and network connections used to access the Web. The Internet is a network of networks, a mesh of diverse transmission media with remarkable heterogeneity and dynamism in bandwidth capacity and latency characteristics. Networking technology offers solutions for different environments, ranging from low bandwidth (9.6–28.8 Kbps), as with cellular and wire-line modems, through medium bandwidth (128 Kbps–1.5 Mbps), as with ISDN, DSL, and cable modems, to high bandwidth (10–100 Mbps), as with local Ethernet. Different networks have different frame sizes, incurring additional conversion services at gateways; they also have different scheduling policies, and their interconnection results in multiplexing and demultiplexing of traffic. Network delays are inevitable, since Internet traffic is not only growing in proportion to available bandwidth but also changing in character with the emergence of new networked applications. It is difficult to predict the peak data rates in different network regions at any particular time, so over-provisioning is not a realistic alternative.
The Internet Protocol (IP) provides a ‘best effort’ service to applications, routing packets independently (using unique addressing) and delivering them seamlessly over heterogeneous networks (using fragmentation and reassembly). It fundamentally advocates leaving complexity at the edges and keeping the network core simple, in line with the 'end-to-end argument'. It has proved to be a robust and scalable solution for traditional Internet applications such as email, file transfer, and the Web. It depends on higher layers of the protocol stack to satisfy other application-specific data transfer constraints such as reliability, latency, and consistency of data throughput. Under conditions of unpredictable delays and losses in the network, distributed multimedia applications can suffer severe performance degradation. For example, the unpredictable, bursty nature of network traffic often creates transient congestion, causing routing delays and lost packets; the usability of an IP-based telephone service is severely limited by the resulting sub-optimal round-trip times. This poses great technical challenges in providing a uniform, application-driven, integrated multimedia platform that can hide device and network differences while exposing system and network state information to application developers.
Several considerations argue for managing QoS adaptation at the application level:
- It is consistent with the end-to-end argument of leaving complexity at the edges and keeping the network core simple.
- Application performance is determined by the degree of satisfaction of the application's resource requirements; hence the actual quality of service provided to an application can be better determined at the application level.
- An application's dynamic resource needs can be handled better at the application level.
- Application-level mechanisms build on IP and can readily leverage QoS support available at the network level.
Each client is governed by a resource management policy that captures the degree of QoS adaptation tolerated by the application. The policy defines local adaptations in response to variations in resource availability, and it is configurable to enable deployment in a heterogeneous environment.
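A minimal sketch of such a configurable per-client policy might look as follows; the class name, the loss-based trigger, and the thresholds are illustrative assumptions rather than the actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-client resource management policy.
# Level 0 is the richest presentation; higher levels trade quality
# for lower resource consumption, up to the tolerated maximum.
@dataclass
class AdaptationPolicy:
    max_level: int = 3              # degree of degradation the application tolerates
    degrade_threshold: float = 0.05 # loss fraction that triggers degradation
    upgrade_threshold: float = 0.01 # loss fraction below which quality is restored
    level: int = 0                  # current adaptation level

    def on_loss_report(self, loss_fraction: float) -> int:
        """Adjust the local adaptation level as resource availability varies."""
        if loss_fraction > self.degrade_threshold and self.level < self.max_level:
            self.level += 1         # resources scarce: degrade one step
        elif loss_fraction < self.upgrade_threshold and self.level > 0:
            self.level -= 1         # resources plentiful: restore one step
        return self.level
```

Configuring `max_level` and the thresholds per deployment is what makes the same policy code usable across heterogeneous clients.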
Figure: Adaptive QoS Mechanism
The server monitors the state of the clients via RTCP receiver reports and uses this information to adapt the information content.
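Assuming the loss fraction carried in RTCP receiver reports is the feedback signal, the sender's rate decision could be sketched roughly as follows; the AIMD-style rule, the thresholds, and the rate bounds are illustrative assumptions:

```python
# Hypothetical sketch: the sender aggregates the loss fraction reported in
# RTCP receiver reports and adapts its sending rate, which in turn drives
# the choice of information content it transmits.
def adapt_rate(current_bps: float, loss_fractions: list,
               min_bps: float = 64_000, max_bps: float = 1_500_000) -> float:
    """Return the new sending rate given recent receiver-report loss fractions."""
    avg_loss = sum(loss_fractions) / len(loss_fractions)
    if avg_loss > 0.05:                 # congestion signalled by receivers
        new_bps = current_bps * 0.5     # multiplicative decrease
    elif avg_loss < 0.01:               # headroom available
        new_bps = current_bps + 50_000  # additive increase
    else:
        new_bps = current_bps           # hold steady in the dead zone
    return max(min_bps, min(max_bps, new_bps))
```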
Server-side adaptation has the advantage that it allows both static (off-line) and dynamic (on-the-fly) content adaptation. The former refers to an authoring post-processing situation, where the adaptation automatically creates multiple versions of the authored content at any time after the content has been created; the latter refers to performing an on-the-fly adaptation as each request comes in. The server architecture provides more author control, since the adaptation can be tied to the content authoring process, allowing the author to provide hints on the adaptations for different circumstances.
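The static/dynamic distinction above might be sketched as follows; `transcode` is a hypothetical stand-in for any real media transformation:

```python
# Hypothetical stand-in for a real media transformation at a quality level.
def transcode(content: str, level: int) -> str:
    return f"{content}@level{level}"

# Static: authoring post-processing creates every version ahead of time,
# trading storage for zero transformation cost at request time.
def static_adapt(content: str, levels: range) -> dict:
    return {lvl: transcode(content, lvl) for lvl in levels}

# Dynamic: each incoming request is served by transcoding on demand.
def dynamic_adapt(content: str, requested_level: int) -> str:
    return transcode(content, requested_level)
```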
Sender-side adaptations should be implemented judiciously, i.e. only if a majority of clients experience performance degradation, since they impose additional computational load and resource consumption on the server. The static approach generates multiple versions of the content, making content management more cumbersome and requiring more storage. Hence server-side adaptations have to be augmented with client-side adaptations.
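The "adapt only when a majority of clients are degraded" rule can be expressed as a simple predicate; the loss threshold here is an illustrative assumption:

```python
# Hypothetical sketch: trigger sender-side adaptation only when a majority
# of receivers report degradation, to avoid needless server load.
def should_adapt(loss_fractions: list, threshold: float = 0.05) -> bool:
    degraded = sum(1 for loss in loss_fractions if loss > threshold)
    return degraded > len(loss_fractions) / 2
```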
Client-side adaptations are dictated by the encoding and transmission techniques adopted by the sender. Layered encoding with a layered transmission scheme at the sender can be used to enable receiver-side adaptation. Layered transmission is implemented by sending each encoded layer to a separate multicast group; the receiver selects the appropriate transmission quality by subscribing to a certain number of multicast groups. An alternative to layered encoding is to encode and transmit multiple copies of the data on separate multicast channels, each independently providing a different level of service; the receiver then selects the appropriate transmission quality by subscribing to the corresponding multicast group. Thus the receiver adapts by independently tuning to the transmitted service level that best fits its needs, capabilities, and resource availability. The granularity of adaptation is predetermined by the granularity of the layered encoding. Layered encoding involves complex encoding algorithms, and although in most cases encoding can be done off-line, synchronizing the decoded streams at the receiver for playback adds to the overall delay. The delay in switching between levels is also a crucial parameter: very short delays can cause oscillations, while large delays result in sub-optimal application performance. Additionally, layered transmission can be prioritized, with the base layer given the highest priority and successively higher layers decreasing priorities, for better network support of the application; the priorities, however, have to be supported end-to-end in the network.
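A rough sketch of a receiver-driven layered-multicast subscriber, where quality is tuned by joining or leaving layer groups; the group addresses and loss thresholds are hypothetical:

```python
# Illustrative layer group addresses, base layer first.
LAYER_GROUPS = ["224.1.1.0", "224.1.1.1", "224.1.1.2", "224.1.1.3"]

class LayeredReceiver:
    """Sketch of a receiver that subscribes to a prefix of the layer groups."""

    def __init__(self) -> None:
        self.subscribed = 1          # always keep the base layer

    def groups(self) -> list:
        return LAYER_GROUPS[: self.subscribed]

    def on_loss_report(self, loss_fraction: float) -> None:
        # Drop the highest enhancement layer under loss; add one when clear.
        if loss_fraction > 0.05 and self.subscribed > 1:
            self.subscribed -= 1     # leave one multicast group
        elif loss_fraction < 0.01 and self.subscribed < len(LAYER_GROUPS):
            self.subscribed += 1     # join the next enhancement layer
```

The switching thresholds matter here for exactly the reason the text gives: reacting too eagerly causes join/leave oscillation, reacting too slowly leaves the receiver at a sub-optimal level.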
In proxy-based adaptation, proxy gateways act as transcoders for clients with similar network or device constraints. Such gateways are placed at appropriate locations, for example between bordering heterogeneous networks, to deliver different levels of service to network regions with different characteristics. The client connects to the application through the gateway, which makes requests to the server on behalf of the client. The proxy intercepts the reply from the server, decides on and performs the adaptation, and then sends the transformed content on to the client. The transcoder can convert the received media coding, for example from a high bit-rate to a low bit-rate coding; alternatively, it can run a server-like adaptive rate control algorithm in response to receiver feedback. Since deployment is at intermediate nodes in the network, this adaptation is more difficult to implement and coordinate. A transcoder-based architecture makes it easy to place adaptation geographically close to the clients, with no changes needed to existing clients and servers. However, since a proxy can potentially take content from many servers with widely varying characteristics, it is difficult to determine which modification is most appropriate for each content type. Also, for secured or proprietarily encoded content, such as Real Networks streaming media, deploying a transcoder involves coordinating with the service or content provider in order to access the content for adaptation.
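The proxy's request path can be sketched as below; `fetch_from_server`, the client profiles, and the transcoder table are hypothetical stand-ins for real network fetching and media transcoding:

```python
# Hypothetical stand-in for the proxy fetching content on the client's behalf.
def fetch_from_server(url: str) -> str:
    return f"<original content of {url}>"

# Illustrative per-profile transformations; a real proxy would re-encode media.
TRANSCODERS = {
    "low-bandwidth": lambda content: content.upper(),  # stand-in for re-encoding
    "default": lambda content: content,                # pass through unchanged
}

def proxy_request(url: str, client_profile: str) -> str:
    reply = fetch_from_server(url)                     # intercept server reply
    transcode = TRANSCODERS.get(client_profile, TRANSCODERS["default"])
    return transcode(reply)                            # adapted content to client
```

The difficulty the text describes shows up in the table lookup: with many upstream servers and content types, choosing the right entry per reply is the hard part.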
1. Manish Mahajan, Ashish Desai, Manish Parashar, Experiments with Adaptive QoS Management for Multimedia Applications in Heterogeneous Environments, submitted to Multimedia Tools and Applications. [Introduction]
2. Narendra Shaha, Ashish Desai, Manish Parashar, Multimedia Content Adaptation for QoS Management over Heterogeneous Networks, to appear in International Conference on Internet Computing 2001, Las Vegas, USA. [PDF]
3. Ashish Desai, An Adaptive QoS Mechanism for Multimedia Applications in Heterogeneous Environments, M.S. Thesis, Department of Electrical and Computer Engineering, Rutgers University, October 2001. [PDF]