This paper describes the implementation of an IMS testbed based on open-source technologies and operating systems. The testbed provides rich communication services, i.e., Instant Messaging, Network Address Book and Presence, as well as VoIP and PSTN interconnectivity. Our validation tests indicate that the performance of the testbed is comparable to that of similar testbeds, but that operating-system virtualization significantly affects signalling delays.
Simulation is a tool that can be used to assess the functionality and performance of communication networks and protocols. However, efficient simulation of complex communication systems is not a trivial task. In this paper, we discuss modeling and simulation of bus-based communication networks and present the results of modeling and simulation of a multigigabit/s LAN. We used parallel simulation techniques to reduce the simulation time of the LAN and implemented both an optimistic and a conservative parallel simulation scheme. Our experimental results on a shared-memory multiprocessor indicate that the conservative parallel simulation scheme is superior to the optimistic one for this specific application. The parallel simulator based on the conservative scheme demonstrates a linear speedup for large networks.
This paper presents a performance evaluation of the IP Spectrum Aware Geographic routing protocol (IPSAG). IPSAG is an opportunistic cognitive routing protocol that determines a source-destination route in a hop-by-hop manner, based on both global and local information. Simulation results are reported for a particular case of IPSAG, where the cognitive radio (CR) nodes are uniformly distributed inside the cognitive radio network (CRN) and a two-dimensional random walk model is used to model the mobility of CR nodes. The results show that the IPSAG protocol performs well in the case of a highly mobile CRN and that the source-destination path is successfully found in the majority of cases, especially when the network is densely populated.
The usage of the Internet is rapidly increasing, and a large part of the Internet traffic is generated by the World Wide Web (WWW) and its associated protocol, the HyperText Transfer Protocol (HTTP). Several important parameters that affect the performance of the WWW are bandwidth, scalability and latency. To address these parameters and to improve the overall performance of the system, it is important to understand and characterize the application-level characteristics. This article reports on the measurement and analysis of HTTP traffic collected on the student access network at the Blekinge Institute of Technology in Karlskrona, Sweden. The analysis covers various HTTP traffic parameters, e.g., inter-session timings, inter-arrival timings, request message sizes, response codes and number of transactions. The reported results can be useful for building synthetic workloads for simulation and benchmarking purposes.
A wavelet-based tool for the analysis of Long-Range Dependent (LRD) traffic is reported, allowing for semiparametric estimation of the Hurst (H) parameter. The tool has also been validated using fBm and fGn models, and the obtained estimates show excellent agreement with the theoretical results.
The study of long-range dependence (LRD) properties in real traffic has received increasing attention in traffic analysis. A wavelet-based tool for the analysis of LRD is presented in this paper, together with a semi-parametric estimator of the Hurst parameter. The estimator has been proved to be unbiased under very general conditions and efficient under Gaussian assumptions. An analysis of the Bellcore Ethernet traces as well as of some VBR video traces using the wavelet-based estimator is reported.
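The core idea behind such wavelet-based estimators, the so-called logscale diagram, can be sketched in a few lines of Python. This is an illustrative sketch under simplifying assumptions (a Haar wavelet and our own choice of scale range), not the authors' actual tool:

```python
import numpy as np

def hurst_wavelet(x, min_scale=2, max_scale=6):
    """Semi-parametric Hurst estimate via a Haar-wavelet logscale diagram.

    For an LRD process, the variance of the detail coefficients at dyadic
    scale j grows roughly as 2^{j(2H-1)}; regressing log2(variance) on j
    therefore yields a slope from which H can be recovered.
    """
    d = np.asarray(x, dtype=float)
    scales, log_var = [], []
    for j in range(1, max_scale + 1):
        n = len(d) // 2 * 2
        pairs = d[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # Haar detail coeffs
        d = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)       # Haar approximation
        if j >= min_scale:
            scales.append(j)
            log_var.append(np.log2(np.mean(detail ** 2)))
    slope = np.polyfit(scales, log_var, 1)[0]
    return (slope + 1) / 2.0  # slope is approximately 2H - 1
```

For white noise the detail variance is flat across scales, so the estimate comes out near H = 0.5; positively correlated LRD traffic pushes the slope, and hence H, upward.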
The paper presents the characteristics of two recently suggested Cognitive Radio (CR) routing protocols, the IP Spectrum Aware Geographic (IPSAG) and Head-Cluster IPSAG (HC-IPSAG) protocols, relative to other multi-hop routing protocols. The comparison is guided by specific CR performance criteria, such as mobility and spectrum-availability awareness. The protocols are representative of different multi-hop routing protocol types. The analysis shows that our protocols respond well to CR demands, especially at high mobility. These findings are supported by statistical results obtained through simulations. However, the price is an increased overhead in the control information needed to perform routing.
This paper presents a new routing protocol suggested for Cognitive Radio Networks (CRNs), called Head Cluster IP Spectrum Aware Geographic (HC-IPSAG), and evaluates its performance. The protocol extends the previously proposed IPSAG routing protocol to the case of larger CRNs. The CRN domain is split into clusters, each represented by a head node, and inter-cluster routing is done by using the IPSAG routing concepts within the virtual network created by the cluster head nodes. A CRN simulation model has been developed to study the performance of HC-IPSAG. Our results show that the protocol performs well under high mobility, although performance decreases as the number of clusters grows.
Cognitive Radio Networks (CRNs) have become very popular due to their innovative idea of increasing spectrum efficiency by means of smart secondary users, which use otherwise unused licensed channels. In the CRN context we have suggested a new routing algorithm called the IP Spectrum Aware Geographic algorithm (IPSAG) [1]. This paper presents the validation of the correctness of IPSAG in the case of stationary CR nodes. A CRN model is created by overlaying a channel map with a node map, and it is then shown that, inside the CRN, the source-destination path is successfully found in the majority of cases.
The main goals of the paper are towards an understanding of the delay process in the best-effort Internet for both non-congested and congested networks. A dedicated measurement system for delay measurements in IP routers is reported, which follows the specifications of IETF RFC 2679. The system uses both passive measurements and active probing. Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics. Pareto traffic models are used to generate self-similar traffic in the link. The reported results take the form of several important statistics regarding the processing delay of a router, router delay for a single data flow, router delay for several data flows, as well as end-to-end delay for a chain of routers. We confirm earlier reported results that the delay in IP routers is generally influenced by traffic characteristics, link conditions and, to some extent, details of the hardware implementation and different IOS releases. The delay in IP routers usually shows heavy-tailed characteristics. It may also occasionally show extreme values, which are due to improper functioning of the routers.
This report is a contribution towards a better understanding of traffic measurements associated with e2e delays occurring in best-effort networks. We describe problems and solutions associated with OWTT delay measurements, and give examples of such measurements. A dedicated measurement system for delay measurements in IP routers is reported, which follows the specifications of IETF RFC 2679. The system uses both passive measurements and active probing. Dedicated application-layer software is used to generate traffic. Pareto traffic models are used to generate self-similar traffic in the link. Both packet inter-arrival times and packet sizes match real traffic models. A passive measurement system is used for data collection, based on several so-called Measurement Points, each of them equipped with DAG monitoring cards. Hashing is used for the identification and matching of packets. The combination of passive and active measurements, together with the DAG monitoring system, gives us a unique possibility to perform precise traffic measurements as well as the flexibility needed to compensate for the lack of analytic solutions. The real value of our study lies in the hop-by-hop instrumentation of the devices involved in the transfer of IP packets. The mixture of passive and active traffic measurements allows us to study changes in traffic patterns relative to specific reference points and to observe the different factors contributing to the observed changes. This approach gives us the possibility to better understand the diverse components that may impact packet delay performance, as well as to measure queueing delays in operational routers.
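The hashing-based matching of packets across Measurement Points can be illustrated as follows. The field selection and function names are our own sketch of the general idea (hash header fields that stay invariant across hops, then match timestamps), not the exact scheme used in the measurement system:

```python
import hashlib

def packet_key(src, dst, proto, ip_id, payload):
    """Build a matching key from fields that do not change as a packet
    crosses routers (TTL and the IP checksum do change, so they are
    excluded). The same packet captured at two Measurement Points then
    maps to the same key."""
    h = hashlib.md5()
    h.update(f"{src}|{dst}|{proto}|{ip_id}|".encode())
    h.update(payload)
    return h.hexdigest()

def one_way_delays(trace_a, trace_b):
    """Given two {key: timestamp} dictionaries built with packet_key at an
    upstream and a downstream Measurement Point, return the per-packet
    one-way delay for every packet seen at both points."""
    return {k: trace_b[k] - trace_a[k] for k in trace_a.keys() & trace_b.keys()}
```

With hardware-timestamped captures (as from DAG cards), the timestamp difference per matched key directly gives the per-hop or end-to-end delay of that packet.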
The paper reports on a dedicated measurement system for delay measurements in IP routers, which follows the specifications of IETF RFC 2679. The system uses both passive and active measurements. Dedicated application-layer software is used to generate traffic. Pareto traffic models are used to generate self-similar traffic in the link. Both packet inter-arrival times and packet sizes match real traffic models. A passive measurement system is used for data collection, based on several so-called Measurement Points, each of them equipped with DAG monitoring cards. Hashing is used for the identification and matching of packets. The combination of passive measurements and active probing, together with the DAG monitoring system, gives us a unique possibility to perform precise traffic measurements as well as the flexibility needed to compensate for the lack of analytic solutions.
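Generating the heavy-tailed Pareto inter-arrival times used for self-similar load can be sketched via inverse-transform sampling. The function name and parameter values here are illustrative, not those used in the measurement system:

```python
import numpy as np

def pareto_interarrivals(n, alpha=1.5, xm=0.001, rng=None):
    """Draw n Pareto-distributed inter-arrival times by inverse-transform
    sampling: X = xm * U**(-1/alpha), with U uniform on (0, 1).
    For 1 < alpha < 2 the distribution has infinite variance, and
    aggregating many sources with such heavy-tailed periods produces
    self-similar traffic on the link. Parameter values are illustrative."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    return xm * u ** (-1.0 / alpha)
```

The shape parameter alpha controls the tail heaviness (smaller alpha gives burstier traffic), while xm sets the minimum inter-arrival time.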
In recent years, the Internet has experienced unprecedented growth, which, in turn, has led to an increased demand for real-time and multimedia applications with high Quality-of-Service (QoS) demands. This evolution has created difficult challenges for Internet Service Providers (ISPs): to provide good QoS for their clients as well as to offer differentiated service subscriptions for those clients who are willing to pay more for value-added services. Furthermore, several types of overlay networks have recently seen tremendous development in the Internet. Overlay networks can be viewed as networks operating at an inter-domain level. The overlay hosts learn of each other and form loosely coupled peer relationships. The major advantage of overlay networks is their ability to establish subsidiary topologies on top of the underlying network infrastructure, acting as brokers between an application and the required network connectivity. Moreover, new services that cannot be implemented (or are not yet supported) in the existing network infrastructure are much easier to deploy in overlay networks. In this context, multicast overlay services have become a feasible solution for applications and services that need (or benefit from) multicast-based functionality. Nevertheless, multicast overlay networks need to address several issues related to efficient and scalable congestion control schemes to attain widespread deployment and acceptance from both end-users and various service providers. This report aims at presenting an overview and taxonomy of current solutions that provide congestion control in overlay multicast environments. The report describes several protocols and algorithms that are able to offer a reliable communication paradigm in unicast, multicast as well as multicast overlay environments. Further, several error control techniques and mechanisms operating in these environments are also presented.
In addition, this report forms the basis for further research work on reliable and QoS-aware multicast overlay networks. The research work is part of a bigger research project, "Routing in Overlay Networks (ROVER)". The ROVER project was granted in 2006 by EuroNGI Network of Excellence (NoE) to the Dept. of Telecommunication Systems at Blekinge Institute of Technology (BTH).
The paper reports on a performance study of several Application Layer Multicast (ALM) protocols. Three categories of overlay multicast networks are investigated, namely Application Level Multicast Infrastructure (ALMI), Narada and NICE (the recursive acronym "NICE Is the Internet Cooperative Environment"). The performance of the overlay multicast protocols is evaluated with reference to a set of performance metrics that capture both application- and network-level performance. The study focuses on the control overhead induced by the protocols under study. This further relates to the scalability of the protocols with an increasing number of multicast participants. In order to get a better assessment of the operation of these protocols under "real-life"-like conditions, we implemented in our simulations a heavy-tailed delay at the network level and churn behavior of the overlay nodes. Our performance study contributes to a deeper understanding and better assessment of the requirements for such protocols targeted at, e.g., media streaming.
IP Television (IPTV) and other media distribution applications are expected to be among the next Internet killer applications. One indication of this is the corporate backing that the IP Multimedia Subsystem (IMS) is getting. However, the bandwidth utilization of these applications is still an issue, as the volume of multimedia grows due to larger image resolutions and higher-bitrate audio. One way of managing this increase in bandwidth requirements is to use existing end-host bandwidth to decrease the load on the content server in a Peer-to-Peer (P2P) fashion. One of the most successful P2P applications is BitTorrent (BT), a swarming file transfer system. This paper presents an implementation of a BT simulator intended for future use in investigating modifications to the BT system to provide streaming media capabilities. The simulator is validated against real-world measurements and a brief performance analysis is provided.
The paper reports on a modeling and evaluation study of session characteristics of BitTorrent traffic. BitTorrent is a second generation Peer-to-Peer (P2P) application recently developed as an alternative to the classical client-server model to reduce the load burden on content servers and networks. Results are reported on measuring, modeling and analysis of application layer traces collected at the Blekinge Institute of Technology (BIT) and a local ISP. For doing this, a dedicated measurement infrastructure has been developed at BIT to collect P2P traffic. A dedicated modeling methodology has been put forth as well. New results are reported on session characteristics of BitTorrent, and it is observed that session interarrivals can be accurately modeled by the hyper-exponential distribution while session durations and sizes can be reasonably well modeled by the lognormal distribution.
This paper reports on a modeling and evaluation study of session characteristics of BitTorrent traffic. BitTorrent is a second generation Peer-to-Peer (P2P) application recently developed as an alternative to the classical client-server model to reduce the load burden on content servers and networks. Results are reported on measuring, modeling and analysis of application layer traces collected at the Blekinge Institute of Technology (BTH) and at a local Internet Service Provider (ISP). For doing this, a measurement infrastructure has been developed at BTH to collect P2P traffic. A dedicated modeling methodology has been put forth as well. New results are reported on session characteristics of BitTorrent, and it is observed that session interarrival times can be accurately modeled by the hyper-exponential distribution while session durations and sizes can be reasonably well modeled by the lognormal distribution.
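Sampling from a hyper-exponential model of this kind is straightforward: pick a phase at random, then draw an exponential variate with that phase's rate. The mixing probabilities and rates below are illustrative placeholders, not the fitted values from the paper:

```python
import numpy as np

def hyperexp_sample(n, probs, rates, rng=None):
    """Sample n values from a k-phase hyper-exponential distribution:
    choose phase i with probability probs[i], then draw Exp(rates[i]).
    Such mixtures capture the high variability of P2P session
    interarrival times better than a single exponential."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs)
    rates = np.asarray(rates)
    phase = rng.choice(len(probs), size=n, p=probs)  # random phase per sample
    return rng.exponential(1.0 / rates[phase])       # scale = 1/rate
```

The mean of the mixture is the probability-weighted sum of the phase means, i.e. sum(probs[i] / rates[i]); mixing a fast and a slow phase yields a coefficient of variation above 1, which a plain exponential cannot reproduce.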
The paper reports on a detailed study of the BitTorrent content distribution system. We first present a measurement infrastructure designed to allow detailed, message-level capture and analysis of P2P traffic. An associated modeling methodology is presented as well. These tools have been used to measure and model the BitTorrent protocol, for which session interarrival times are observed to exhibit exponential characteristics. We also observe that session durations and sizes are modeled with a lognormal distribution.
BitTorrent is a second generation Peer-to-Peer application that has been recently developed as an alternative to the classical client-server model to reduce the load burden on content servers and networks. The protocol relies on the use of swarming techniques for distributing content. No search functionality is built into the protocol, and the signaling is geared only towards an efficient dissemination of data. The paper reports on measurement and analysis of BitTorrent traffic collected at the Blekinge Institute of Technology (BIT), Karlskrona, Sweden. We measure and analyze data from local BitTorrent client sessions at BIT. The characteristics of the signaling traffic exchanged among the participating peers in a BitTorrent distribution swarm are investigated. A dedicated approach based on combining instrumentation at the application layer with flow identification and extraction at the transport layer is used for traffic measurements.
BitTorrent, a replicating Peer-to-Peer (P2P) file sharing system, has become extremely popular in recent years. According to CacheLogic, the BitTorrent traffic volume increased from 26% to 52% of the total P2P traffic volume during the first half of 2004. This paper reports new results obtained on modelling and analysis of BitTorrent traffic collected at the Blekinge Institute of Technology (BTH) as well as at a local Internet Service Provider (ISP). In particular, we report on new request models for a BitTorrent peer during downloading.
BitTorrent (bt), a Peer-to-Peer (p2p) distribution system, is a major bandwidth consumer in the current Internet. This paper reports on a measurement study of bt traffic intended to identify potential traffic invariants. To this end, we employ high-accuracy packet capture hardware together with specialized parsing software. Important characteristics regarding bt sessions and messages as well as general system characteristics are reported. The results indicate that characteristics such as session inter-arrival times are corroborated to be exponentially distributed, while other characteristics are shown to differ from previously reported results. These differences are attributed to changes in the core bt algorithms. Further, it is observed that long- and heavy-tailed distributions can be used to model several characteristics. The log-normal, Pareto and mixtures of these distributions are used to model session sizes and durations, while the Weibull distribution has been observed to model message inter-arrival and inter-departure times for bandwidth-limited clients.
Video-on-Demand (VoD) has been hailed as a "killer app" of the Internet for a long time, but it has yet to really make a major breakthrough, due to various technical and political reasons. The recent influx of community-driven video distribution has reactualized the research into efficient distribution methods for streaming video. In this paper, we propose a number of modifications and extensions to the BitTorrent (BT) distribution and replication system to make it suitable for use in providing a streaming video delivery service, and implement parts of these in a simulator. Also, we report on a simulation study of the implemented extensions to the BT system.
IPTV has been hailed as a "killer app" of the Internet for a long time, but it has yet to really make a major breakthrough, due to various technical and political reasons. The recent influx of community-driven video distribution has re-actualized the research into efficient distribution methods for streaming video. In this paper, we suggest a number of modifications and extensions to the BitTorrent distribution and replication system to make it suitable for use in providing a streaming video delivery service, and implement parts of these in a simulator. Also, we report on a simulation study as well as a large-scale real-world study of the extensions on PlanetLab. Results show that BitTorrent can be used as a bandwidth-efficient alternative to traditional IPTV solutions.
This work motivates and details the concept of QoE-aware sustainable throughput in the area of video streaming. Sustainable throughput serves as a means to compare video streaming solutions in terms of Quality of Experience (QoE) and energy efficiency (EE). It builds upon the QoE Provisioning-Delivery Hysteresis (PDH) and denotes the maximal throughput at which QoE deteriorations can be kept below a quantifiable level, which in turn allows the EE of different video streaming solutions to be compared on QoE-fair grounds. In this work, we particularly focus on delivery problems stemming from outage-prone links, as they are typical for mobile systems. Well adapted to the nature of the video-associated data streams and disturbances, a stochastic fluid flow model is used that allows for straightforward calculation of sustainable throughput values. We also discuss the application of sustainable throughput for comparisons among different streaming solutions and their offered QoE and EE, respectively.
This paper reports on transfer rate models for the Gnutella signaling protocol. New results on message-level and IP-level rates are presented. The models are based on traffic captured at the Blekinge Institute of Technology (BTH) campus in Sweden and offer several levels of granularity: message type, application layer and network layer. The aim is to obtain parsimonious models suitable for analysis and simulation of P2P workload.
The paper reports on in-depth measurements and analysis of Gnutella signaling traffic collected at the Blekinge Institute of Technology (BIT), Karlskrona, Sweden. The measurements are based on a week-long packet trace collected with the help of the well-known tcpdump application. Furthermore, a novel approach has been used to measure and analyze Gnutella signaling traffic. Associated with this, a dedicated tcptrace module has been developed and used to decode the packet trace, down to individual Gnutella messages. The measurement infrastructure consists of a Gnutella node running in ultrapeer mode and protocol decoding software. Detailed traffic characteristics have been collected and analyzed, such as session durations and interarrival times, and Gnutella message sizes and durations. Preliminary results show a high degree of variability in the Gnutella signaling traffic, which is mostly created by the QUERY messages. Furthermore, the Gnutella session interarrival times are observed to resemble the exponential distribution.
The paper reports on a measurement infrastructure developed at the Blekinge Institute of Technology (BIT) for the purpose of performing traffic measurements and analysis on Peer-to-Peer (P2P) traffic. The measurement methodology is based on using application logging as well as link-layer packet capture. This offers the possibility to measure application-layer information with link-layer accuracy. Details are reported on this methodology, together with a description of the BIT measurement infrastructure. The paper also reports on traffic measurements done on the BitTorrent and Gnutella protocols from an end-client perspective, together with some measurement results of salient protocol characteristics. Preliminary results show a high degree of variability of the BitTorrent and Gnutella traffic, where, in the case of Gnutella, a large contribution comes from the signaling traffic.
The paper focuses on signaling traffic between Gnutella peers that implement the latest Gnutella protocol specifications (v0.6). In particular, we provide analytically tractable statistical models at session level, message level and IP datagram level for traffic crossing a Gnutella ultrapeer at the Blekinge Institute of Technology (BTH) in Karlskrona, Sweden. To the best of our knowledge, this is the first work that provides Gnutella v0.6 statistical models at this level of detail. These models can be implemented straightforwardly in network simulators such as ns-2 and OMNeT++. The results show that incoming requests to open a session arrive according to a Poisson process. Incoming Gnutella messages across all established sessions can be described by a compound Poisson distribution. Mixture distribution models for message transfer rates include a heavy-tailed component.
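A compound Poisson message-count model of the kind described can be sketched as follows: session openings arrive as Poisson events, and each opening brings a random batch of messages. The geometric batch-size law and all parameter values are illustrative choices, not values fitted from the Gnutella trace:

```python
import numpy as np

def compound_poisson_counts(n_intervals, lam=5.0, batch_mean=3.0, rng=None):
    """Per-interval message counts as a compound Poisson process:
    the number of session openings per interval is Poisson(lam), and each
    opening contributes a geometric batch of messages with mean batch_mean.
    The resulting count has mean lam * batch_mean but is overdispersed
    relative to a plain Poisson, as batch arrivals cluster messages."""
    rng = rng or np.random.default_rng()
    arrivals = rng.poisson(lam, n_intervals)
    return np.array([rng.geometric(1.0 / batch_mean, size=a).sum() if a else 0
                     for a in arrivals])
```

In a simulator such as ns-2 or OMNeT++, the same two-stage draw (Poisson arrivals, then a batch size per arrival) can drive a message-generation module directly.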
The goal of quality of service (QoS) routing in overlay networks is to address deficiencies in today's Internet Protocol (IP) routing. This is achieved by application-layer protocols executed on end-nodes, which search for alternate paths that can provide better QoS for the overlay hosts. In the first part of this paper we introduce fundamental concepts of QoS routing and the current state of the art in overlay networks for QoS. In the remaining part of the paper we report performance results for the Overlay Routing Protocol (ORP) framework developed at Blekinge Institute of Technology (BTH) in Karlskrona, Sweden. The results show that QoS paths can be established and maintained as long as one is willing to accept a protocol overhead of at most 1.5% of the network capacity.
The global Internet has seen tremendous growth in terms of nodes and user base as well as of types of applications. One of the most important consequences of this growth is an increased complexity of the traffic models experienced in the networks. Each application has a set of unique characteristics in terms of the way it performs its transactions as well as the way its transaction processing profile maps onto unique network resource requirements. In order to support Internet applications effectively, it is therefore important to understand and characterize the application-level transactions and to investigate their scaling properties. Recent advances in high-resolution traffic monitoring and analysis capabilities have enabled us to build realistic models for the TCP/IP protocol stack with diverse network applications. In this paper we report investigations of classical applications such as FTP, SMTP and HTTP to evaluate end-to-end performance requirements and, accordingly, to assess end-user performance metrics such as Service Level Agreements (SLAs) for the WWW. Our results show the presence of a robust correlation structure in the traffic streams that has a fundamental bearing on the user-perceived quality of the applications.
The global Internet has seen tremendous growth in terms of nodes and user base as well as of types of applications. One of the most important consequences of this growth is related to an increased complexity of the traffic experienced in these networks. Each application has a set of unique characteristics in terms of performance characteristics, transactions as well as the way the transaction processing profile maps onto unique network resource requirements. In order to support Internet applications effectively, it is therefore important to understand and to characterize the application level transactions as well as the effect of different TCP/IP control mechanisms on application-level parameters. It is the goal of this paper to model and to evaluate the characteristics of World Wide Web traffic. Results are reported on measuring, modeling and analysis of specific Hyper Text Transfer Protocol traffic collected from different (classes of) sites together with methodologies used for capturing HTTP flows as well as for modeling. The paper concludes with a discussion on the structure of Web pages and a model for the generation of the number of embedded pages in a Web page is suggested.
The focus of the paper is on resource engineering for supporting Service Level Agreements (SLAs) in IP networks. SLAs at both link level and application level are considered. Using an object-oriented simulation model, a case study is presented for client-server interactions generated by mixed traffic conditions in a Frame Relay (FR) WAN. Performance issues of Short-Range Dependent (SRD) and Long-Range Dependent (LRD) traffic under different resource control regimes are compared. The results show that a major portion of the end-to-end delay comes from the queueing delay at the WAN ingress points, which is due to the significant bandwidth differences that may exist between LAN and WAN link layers. The results also highlight the role that the TCP window size and FR PVC control mechanisms play in the provision of delay performance for Internet services.
This report describes the equipment and software used in our research on traffic measurements.
This report covers traffic measurement, analysis and modeling of several Internet applications (SMTP, HTTP, FTP).
This report concerns the dimensioning of Internet resources with respect to delay performance.
The global Internet has seen tremendous growth in terms of nodes and user base as well as of types of applications. One of the most important consequences of this growth is related to an increased complexity of the traffic experienced in these networks. Each application has a set of unique characteristics in terms of performance characteristics, transactions as well as the way the transaction processing profile maps onto unique network resource requirements. In order to support Internet applications effectively, it is therefore important to understand and to characterize the application level transactions as well as the effect of different TCP/IP control mechanisms on application-level parameters. It is the goal of this paper to model and to evaluate the characteristics of World Wide Web traffic. Results are reported on measuring, modeling and analysis of specific Hyper Text Transfer Protocol traffic collected from different (classes of) sites together with methodologies used for capturing HTTP flows as well as for modeling. The paper concludes with a discussion on the structure of Web pages and a model for the generation of the number of embedded pages in a Web page is suggested.
This paper reviews the current state of the art in the rapidly developing areas of ATM traffic controls and traffic modeling, and identifies future research areas to facilitate the implementation of control methods that can support a desired quality of service without sacrificing network utilization. Two sets of issues are identified: one on the impact of realistic traffic on the efficacy of traffic controls in supporting specific traffic management objectives, and the other dealing with the extent to which controls modify traffic characteristics. These issues are illustrated using the example of traffic shaping of individual ON-OFF sources that have infinite-variance sojourn times.
The global Internet has seen tremendous growth in terms of nodes and user base as well as of types of applications. One of the most important consequences of this growth is related to an increased complexity of the traffic experienced in these networks. Each application has a set of unique characteristics in terms of its performance characteristics, its transactions as well as the way the transaction processing profile maps onto unique network resource requirements. In order to support Internet applications effectively, it is therefore important to understand and to characterise the application level transactions. Recent advances in high resolution traffic monitoring and analysis capabilities have enabled us to build up realistic models for diverse network applications. In this paper we report investigations of classical applications such as FTP, SMTP, and HTTP to evaluate end-to-end network performance requirements. Our results show the presence of a robust correlation structure in the traffic streams that has a fundamental bearing on the user perceived quality of the applications.
Recent measurements on LAN, MAN and WAN traffic have demonstrated that Long-Range Dependence (LRD) is an invariant property irrespective of the network technology being employed. As a consequence, the performance of the network is dominated by the LRD properties of the network traffic. Latency in information access is one of the most important factors in the user-perceived Quality of Service (QoS) of network applications. Almost all applications follow the client-server paradigm to transfer information entities, which are typically files or typed-in messages. The distribution of the sizes of these information entities is best described by heavy-tailed distributions, which results in LRD. This fundamental property impacts all aspects of application-layer performance (e.g., response time) and network-layer performance (e.g., packet loss and delay). In traditional traffic models, network resource management aims at capturing the timescales at which bursts occur and dimensioning the network for the burst sizes occurring at those timescales. This management paradigm has proven to be ineffective for traffic with a significant level of LRD. Because of LRD, bursts of all possible sizes occur at timescales spanning several orders of magnitude. Engineering of network resources to protect application-layer QoS is therefore an important task. In this paper, heavy-tailed distributions are used to model the information contents transferred by some of the classical network applications such as FTP, SMTP and HTTP. The parameters of these models are based on high-resolution non-intrusive monitoring of busy periods in live networks. The clients and the servers are modelled as ON-OFF sources producing LRD phenomena at the packet level through aggregation.
The user-level quality of these applications is investigated (in terms of end-to-end delay performance), and preliminary results are reported showing how the quality is affected by bandwidth and buffer allocation schemes.
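The ON-OFF construction described above can be illustrated in a short simulation. This is only a sketch of the general idea, not the paper's simulator; the number of sources, the Pareto shape parameter and the mean OFF time are illustrative values chosen here.

```python
import random

# Illustrative sketch (not the paper's model): aggregate ON-OFF sources
# whose ON periods are Pareto-distributed (heavy-tailed, 1 < shape < 2);
# the superposition of many such sources is known to exhibit LRD.
random.seed(42)

def simulate(n_sources=50, horizon=10_000, mean_off=10.0, shape=1.5):
    rate = [0] * horizon                  # sources in the ON state per slot
    for _ in range(n_sources):
        t = random.uniform(0, mean_off)   # random initial phase
        while t < horizon:
            on = random.paretovariate(shape)          # heavy-tailed ON time
            for slot in range(int(t), min(horizon, int(t + on))):
                rate[slot] += 1
            t += on + random.expovariate(1.0 / mean_off)  # exponential OFF
    return rate

def agg_var(x, m):
    # Variance of the m-aggregated series; for LRD traffic it decays
    # more slowly than 1/m as m grows (the variance-time plot).
    blocks = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
    mu = sum(blocks) / len(blocks)
    return sum((b - mu) ** 2 for b in blocks) / len(blocks)

rate = simulate()
print(agg_var(rate, 1), agg_var(rate, 100))
```

Plotting `agg_var` against the aggregation level `m` on a log-log scale is the standard variance-time diagnostic: a slope shallower than -1 indicates long-range dependence in the aggregate stream.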
We model and evaluate an infrastructure-based cognitive radio network where impatient unlicensed or secondary users (SUs) offer elastic traffic. The interference created by SUs to the licensed users is analyzed when the secondary network uses two different spectrum access schemes: the conventional random access scheme and a new scheme we refer to as access with preference. To further control the interference, a limit is set on the number of channels SUs have access to. With this new constraint, the abandonment probability, the mean delay and the throughput of the SUs degrade significantly. To improve the QoS perceived by the active SUs, we define and evaluate an admission control scheme for the SUs.
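Impatience of the kind described can be captured, in its simplest textbook form, by an M/M/c queue with exponentially impatient customers (M/M/c+M). The sketch below estimates the abandonment probability for such a system; it is only an abstraction of the setting above (the paper's model additionally covers elastic traffic, access with preference and admission control), and all parameter values are illustrative.

```python
import random

# Sketch of an M/M/c+M system: impatient SUs compete for c channels and,
# while waiting, abandon at rate theta. All timers are exponential, so the
# system is a birth-death chain and can be simulated jump by jump.
random.seed(7)

def abandonment_prob(lam=8.0, mu=1.0, c=5, theta=0.5, n_events=200_000):
    n = 0                       # SUs currently in the system
    arrivals = abandons = 0
    for _ in range(n_events):
        r_arr = lam             # arrival rate
        r_srv = min(n, c) * mu  # total service rate
        r_abn = max(n - c, 0) * theta  # total abandonment rate
        u = random.uniform(0, r_arr + r_srv + r_abn)
        if u < r_arr:
            n += 1
            arrivals += 1
        elif u < r_arr + r_srv:
            n -= 1              # a service completes
        else:
            n -= 1              # a waiting SU gives up
            abandons += 1
    return abandons / arrivals

p = abandonment_prob()
print(f"estimated abandonment probability: {p:.3f}")
```

With an offered load above capacity (here lam = 8 against c*mu = 5), roughly (lam - c*mu)/lam of the arrivals abandon in the long run, which the simulation reproduces.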
Overlay networks have been shown to be very effective in supporting and enhancing network performance and in enabling new applications and protocols without interfering with the design of the underlying networks. One of the most challenging open issues in overlay networks, however, is path overlap, where overlay paths may share the same physical link; the ability of overlay networks to quickly recover from congestion and path failures is thus severely affected. This chapter reviews graph-theoretic methods for selecting a set of topologically diverse routers that provide independent paths for better availability, performance and reliability in overlay networks. Moreover, it proposes a graph decomposition-based approach for maximizing path diversity without degrading network performance in terms of latency. Some remarks on future developments and challenges in the field of overlay networks are included.
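One standard graph-theoretic measure behind such diversity analyses is the number of edge-disjoint paths between two routers, which by Menger's theorem equals the max flow in the unit-capacity graph. The sketch below computes it with a plain BFS-augmenting (Edmonds-Karp style) routine; it illustrates the general measure, not the chapter's specific decomposition approach, and the sample topology is made up.

```python
from collections import deque, defaultdict

# Count edge-disjoint paths between routers s and t by computing a
# unit-capacity max flow with BFS augmenting paths (Menger's theorem).
def edge_disjoint_paths(edges, s, t):
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:                 # undirected physical links
        cap[(u, v)] += 1; cap[(v, u)] += 1
        adj[u].add(v); adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}             # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                # no augmenting path left
        v = t                          # push one unit along the path found
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1; cap[(v, u)] += 1
            v = u
        flow += 1

# Toy topology: two disjoint routes a-b-d and a-c-d, plus a cross link b-c.
g = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("b", "c")]
print(edge_disjoint_paths(g, "a", "d"))  # 2
```

A router-selection heuristic can then prefer candidate sets whose pairwise disjoint-path counts are highest, trading this off against latency as discussed above.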
CONVINcE is a CELTIC-Plus project dedicated to minimizing the power consumption in IP-based video distribution networks, from the headend to the terminal. The entire video distribution chain is considered in the project, covering a wide range of entities involved in this process. Examples of these entities are the headend, edge cloud, Content Distribution Network (CDN), core backbone network, Radio Access Network (RAN) as well as fixed and mobile terminals. Related to this, one of the most difficult research questions regards the provision of minimum end-to-end power consumption for video streams combined with the best possible Quality of Experience (QoE) obtained at the terminal. It requires solving a number of sophisticated research questions, among them modelling and optimization problems.
Modeling and predicting a user's location is complex and a challenge for seamless mobility in heterogeneous networks. Markov models and information-theoretic techniques are appropriate for performing location prediction. The paper characterizes the user's location as a discrete sequence. We survey and describe Markovian methods and information-theoretic techniques for location prediction.
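The Markovian idea can be shown in a few lines: treat the location history as a discrete symbol sequence, learn transition counts, and predict the most frequent successor. This is a minimal order-1 sketch of the family of methods surveyed; the toy cell-ID trace is made up for illustration.

```python
from collections import Counter, defaultdict

# Order-1 Markov location predictor: count transitions in a discrete
# location sequence and predict the most frequent next location.
def train(seq):
    model = defaultdict(Counter)
    for cur, nxt in zip(seq, seq[1:]):
        model[cur][nxt] += 1
    return model

def predict(model, loc):
    if loc not in model:
        return None                    # location never seen: no prediction
    return model[loc].most_common(1)[0][0]

trace = list("ABABCABABCABAB")         # toy cell-ID sequence
m = train(trace)
print(predict(m, "A"))                 # 'B' follows 'A' most often
```

Higher-order variants condition on the last k locations instead of one, and information-theoretic predictors (e.g., LZ-based ones) adapt the context length to the regularity of the sequence.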
In dynamic spectrum access networks, the unused licensed spectrum of primary users (PUs) is opened to unlicensed secondary users (SUs) to improve spectrum efficiency. The problem we study in this paper is to maximize the SU throughput while guaranteeing PU performance. We design a simple time-based threshold strategy that maximizes the SU throughput while guaranteeing PU performance by learning directly from interaction with the PU environment. At a given time, the SU decides whether it should transmit and, if so, how to do so while protecting the PU performance. It is observed that such strategies perform close to the strategy optimized based on prior knowledge of PU traffic models. Simulations are used for performance evaluation.
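One plausible reading of a time-based threshold policy is: once the channel is sensed idle, the SU transmits only for the first tau time units of the idle period, then stays silent until the next PU cycle. The sketch below estimates SU throughput and per-cycle collision probability under this interpretation; it is a hypothetical illustration with made-up exponential idle/busy parameters, not the paper's evaluated policy.

```python
import random

# Hypothetical time-based threshold policy: per idle period, the SU
# transmits for at most tau time units. A collision is counted when the
# PU returns while the SU is still transmitting (idle period < tau).
random.seed(3)

def evaluate(tau, mean_idle=2.0, mean_busy=1.0, n_cycles=100_000):
    su_time = 0.0        # total time the SU spends transmitting
    collisions = 0       # cycles in which the SU overlaps the PU
    for _ in range(n_cycles):
        idle = random.expovariate(1.0 / mean_idle)
        if idle >= tau:
            su_time += tau       # SU stops before the PU comes back
        else:
            su_time += idle      # PU returns mid-transmission
            collisions += 1
    horizon = n_cycles * (mean_idle + mean_busy)
    return su_time / horizon, collisions / n_cycles

thr, pcol = evaluate(tau=1.0)
print(f"SU throughput share: {thr:.3f}, collision prob per cycle: {pcol:.3f}")
```

Sweeping tau trades SU throughput against PU protection; a learning SU can tune tau online from observed collisions instead of assuming the idle-time distribution, which is the spirit of the model-free approach described above.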
In dynamic spectrum access networks, the unused licensed spectrum of primary users (PUs) is opened to unlicensed secondary users (SUs) to improve spectrum efficiency. We design a simple time-based threshold policy for collective protection of PUs, enabled by an out-of-band channel. In particular, multiple SUs may be widely distributed over a geographic area. The interference that collocated SUs cause to each other, termed self-interference, becomes a major factor that may degrade the SUs' communication performance. We establish an analytical framework for carrier sense multiple access (CSMA) based coexistence mechanisms when integrated into a family of time-based threshold policies, and study its performance through theoretical analysis.
The use of simulations has become increasingly frequent in the study and performance evaluation of network systems. The simulation environment deeply influences the results, so a model that simulates realistic node movement is necessary for the study of wireless networks. Simple mobility models do not provide realistic scenarios: movements are often completely random, uncorrelated and in open space, without the possibility of considering the effects of obstacles or rules that limit and guide the movement. In this paper, we propose a more realistic model designed for indoor environments (but applicable to outdoor scenarios as well). Given the map of the obstacles in the simulation area (e.g., a floor plan), the nodes can move in a random walk while avoiding crossing the obstacles (e.g., walls), follow a specified virtual path that connects the whole simulation area, or use a hybrid of the two. Our tool creates a file containing the movement of the nodes during the whole simulation time. Simulation results show that node movements are highly dependent on the different obstacle maps and pathways. Furthermore, a mathematical demonstration is given to validate the results obtained by simulation in a simple case.
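The obstacle-avoiding random-walk mode can be sketched as a walk on a grid map where wall cells are forbidden. The floor plan, trace format and parameters below are made up for illustration; the described tool's actual map and file formats are not specified here.

```python
import random

# Toy obstacle-aware random walk: '#' cells are walls, '.' cells are free;
# at each step the node picks uniformly among the free neighbouring cells
# and its position is appended to a movement trace.
random.seed(0)

FLOOR_PLAN = [
    "..........",
    "..####....",
    "..#..#....",
    "..####....",
    "..........",
]

def free(x, y):
    return (0 <= y < len(FLOOR_PLAN) and 0 <= x < len(FLOOR_PLAN[0])
            and FLOOR_PLAN[y][x] == '.')

def random_walk(start, steps):
    x, y = start
    trace = [(x, y)]
    for _ in range(steps):
        moves = [(dx, dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if free(x + dx, y + dy)]
        if not moves:
            break                      # boxed in: node stays put
        dx, dy = random.choice(moves)
        x, y = x + dx, y + dy
        trace.append((x, y))
    return trace

trace = random_walk((0, 0), 200)
assert all(free(x, y) for x, y in trace)   # the walk never crosses a wall
```

The virtual-path mode mentioned above would instead constrain the candidate moves to cells of a predefined corridor graph, and the hybrid mode would mix the two move sets.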
Cognitive Radio (CR) promises to provide better spectrum utilization and increase the availability of (on demand) broadband access to the Network of the Future. CR relies on devices that can sense their (radio) environment, understand it, and model it, so that they can exploit the spectrum through appropriate transmission parameters (in the space, time, and frequency domains) that do not interfere with legacy and/or licensed devices using the same spectrum in the same general area. According to the CR paradigm, the use of spectrum that is not used by primary (licensed) users is opportunistic. Secondary users sense the spectrum and decide to use it on their own, possibly following additional rules and protocols. Because multiple such CR devices (secondary users) can find themselves in the same area and compete for the same spectrum, the question that arises is whether these devices behave fairly and appropriately. Opportunities exist for competing CR devices to try to grab more resources by spreading misinformation and impersonating licensed devices.
Cognitive Radio Networks (CRNs) are emerging as a solution to increase spectrum utilization by using unused or lightly used spectrum in radio environments. The basic idea is to allow unlicensed users access to licensed spectrum, under the condition that the interference perceived by the licensed users is minimal. New communication and networking technologies need to be developed to allow the spectrum to be used more efficiently and to increase spectrum utilization. This means that a number of technical challenges must be solved for this technique to gain acceptance. The most important issues concern Dynamic Spectrum Access (DSA), architectural issues (with a focus on network reconfigurability), deployment of smaller cells, and security.
CONVINcE is a 2.5-year CELTIC-Plus project, started in September 2014, that addresses the challenges of reducing the power consumption in IP-based video distribution networks. An end-to-end approach is adopted in the project, from the Head End, where contents are encoded and streamed, to the terminals, where they are consumed, also embracing access and core networks, Content Distribution Networks as well as Video Distribution Networks. Eighteen industrial and academic partners from five European countries participate in the project. The project leader is Thomson Video Networks in France and the scientific project leader is Blekinge Institute of Technology in Sweden.