The information-security landscape evolves continuously, with new vulnerabilities discovered daily and increasingly sophisticated exploit tools. Vulnerability risk management (VRM) is among the most crucial cyber defenses for reducing the attack surface of IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands considerable manual administrative effort to prioritize patches. There is therefore an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM) to address these challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize patching. Moreover, patch prioritization is customized to an organization's context by allowing the organization to select the vulnerability management mode and to weigh the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only criterion considered; and (iv) criteria are weighted based on the organization's risk appetite. The results verify the efficiency of the proposed solution compared with the Rudder vulnerability management tool (CVE plugin). While Rudder produces a ranking independent of the scenario, ACVRM can sort vulnerabilities according to the organization's criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities.
Vulnerability patch management is one of the most complex issues facing IT organizations, due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying the patch and its potential side effects. Hence, there is a need to automate the patch management procedure in order to meet patch deadlines with a limited number of available experts. This study proposes and implements an automated patch management procedure to address these challenges. The method also includes logic to automatically handle errors that might occur during patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The results indicate that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management.
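The error-handling logic summarized above lends itself to a compact control loop. The Python sketch below is a hypothetical illustration only: the callables deploy, verify, and rollback and the retry budget are assumptions standing in for the real tooling, not the study's actual implementation.

```python
# Hypothetical sketch of an automated deploy-verify-recover loop for one
# patch on one host. deploy/verify/rollback are assumed callables standing
# in for the real tooling; MAX_RETRIES is an illustrative policy choice.

MAX_RETRIES = 2  # assumed retry budget

def apply_patch(host, patch, deploy, verify, rollback):
    for _ in range(1 + MAX_RETRIES):
        if not deploy(host, patch):
            continue                       # deployment error: retry
        if verify(host, patch):
            return "patched"               # verified: no human needed
        rollback(host, patch)              # verification failed: undo
    return "escalate"                      # automation gave up: human expert

# Toy demo with stubbed tooling that fails verification once.
state = {"tries": 0}
def deploy(h, p): return True
def verify(h, p):
    state["tries"] += 1
    return state["tries"] > 1
def rollback(h, p): pass

print(apply_patch("web01", "example-fix", deploy, verify, rollback))  # patched
```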
Vulnerability Risk Management (VRM) is a critical element of cloud security that directly impacts the security assurance level of cloud providers. Today, VRM is a challenging process because of the dramatic increase in known vulnerabilities (+26% in the last five years) and because it is ever more dependent on the organization's context. Moreover, a vulnerability's severity score depends on the Vulnerability Database (VD) selected as a reference in VRM. All these factors pose a new challenge for security specialists in evaluating and patching vulnerabilities. This study provides a framework that improves the classification and evaluation phases of vulnerability risk management while using multiple vulnerability databases as a reference. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlight the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs.
In the last three years, the unprecedented increase in discovered vulnerabilities ranked with critical and high severity has raised new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing, and remediating this high rate of vulnerabilities is labour-intensive, especially for enterprises dealing with complex computing infrastructures such as Infrastructure-as-a-Service providers. Hence, there is a demand for new criteria to prioritize vulnerability remediation and for new automated/autonomic approaches to VRM.
In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (ACVRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity alone. The proposed solution considers multiple vulnerability databases in order to achieve broad coverage of known vulnerabilities and to determine the vulnerability rank. After describing the new VRM methodology, we focus on the problem of obtaining a single vulnerability score through normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for the organization's needs and specifications.
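To illustrate the normalization-and-fusion step, the following minimal Python sketch rescales per-database severity scores to a common range and combines them with organization-specific weights. All database names, score ranges, and weights are illustrative assumptions; the paper's parametric normalization is not reproduced here.

```python
# Minimal sketch: normalize per-VD severity scores to [0, 1], then fuse
# them with organization-chosen weights into a single rank score.
# Database names, score ranges, and weights below are assumptions.

def normalize(score, lo, hi):
    """Min-max rescale a VD-specific severity score to [0, 1]."""
    return (score - lo) / (hi - lo) if hi > lo else 0.0

def fuse(norm_scores, weights):
    """Weighted average of normalized per-VD scores."""
    total = sum(weights[vd] for vd in norm_scores)
    return sum(weights[vd] * s for vd, s in norm_scores.items()) / total

raw     = {"VD_A": 7.8, "VD_B": 61.0}             # raw severity scores
ranges  = {"VD_A": (0.0, 10.0), "VD_B": (0.0, 100.0)}
weights = {"VD_A": 0.7, "VD_B": 0.3}              # organization preferences

norm = {vd: normalize(s, *ranges[vd]) for vd, s in raw.items()}
print(f"fused vulnerability score: {fuse(norm, weights):.3f}")   # 0.729
```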
In some cases, application-level measurements are the only way for an application to gain an understanding of the performance offered by the underlying network(s). It may also be that an application-level measurement is the only practical way to verify the availability of a particular service. Hence, as more and more applications perform measurements of various networks, be they fixed or mobile, it is crucial to understand the context in which application-level measurements operate, as well as their capabilities and limitations. To this end, in this paper we discuss some of the fundamentals of computer network performance measurements, and in particular the key aspects to consider when using application-level measurements to estimate network performance properties.
Due to the complex diversity of contemporary Internet services, computer network measurements have gained considerable interest during recent years. Since they supply network research, development, and operations with data important for network traffic modelling, performance and trend analysis, etc., the quality of these measurements affects the results of these activities and thus the perception of the network and its services. This thesis contains a systematic investigation of computer network measurements and a comprehensive overview of factors influencing the quality of performance parameters obtained from them. This is done using a novel network performance framework consisting of four modules: Generation, Measurement, Analysis, and Visualization. These modules cover all major aspects controlling the quality of computer network measurements and thus the validity of all conclusions based on them. One major source of error is the timestamp accuracy obtained from measurement hardware and software. Therefore, a method is presented that estimates this timestamp accuracy. The method has been used to evaluate the timestamp accuracy of some commonly used hardware (Agilent J6800/J6830A and Endace DAG 3.5E) and software (Packet Capture Library). Furthermore, the influence of analysis on the quality of performance parameters is discussed. An example demonstrates how the quality of a performance metric (bitrate) is affected by different measurement tools and analysis methods. The thesis also contains performance evaluations of traffic generators, of how accurately application-level measurements describe network behaviour, and of the quality of performance parameters obtained from PING and J-OWAMP. The major conclusion is that measurement systems and tools must be calibrated, verified, and validated for the task of interest before being used for computer network measurements. A guideline is presented on how to obtain performance parameters at a desired quality level.
Due to the complex diversity of contemporary Internet applications, computer network measurements have gained considerable interest during recent years. Since they supply network research, development, and operations with data important for network traffic modelling, performance and trend analysis, etc., the quality of these measurements affects the results of these activities and thus the perception of the network and its services. One major source of error is the timestamp accuracy obtained from measurement hardware and software. Against this background, we present a method that can estimate the timestamp accuracy obtained from measurement hardware and software. The method is used to evaluate the timestamp accuracy of some commonly used measurement hardware and software. Results are presented for the Agilent J6800/J6830A measurement system, the Endace DAG 3.5E card, the Packet Capture Library (PCAP) with either PF_RING or Memory Mapping, and a RAW socket using either the kernel PDU timestamp (ioctl) or the CPU counter (TSC) to obtain timestamps.
We currently observe a rising interest in mobile broadband, which users expect to perform similarly to its fixed counterpart. On the other hand, the capacity allocation process on mobile access links is far less transparent to the user; still, its properties need to be known in order to minimize the impact of the network on application performance. This paper investigates the impact of packet size on the minimal one-way delay for the uplink in third-generation mobile networks. For interactive and real-time applications such as VoIP, one-way delays are of major importance for user perception; however, they are challenging to measure due to their sensitivity to clock synchronization. Therefore, the paper applies a robust and innovative method to assure the quality of these measurements. Results from measurements on several Swedish mobile operators show that applications can gain significantly in terms of one-way delay by choosing optimal packet sizes. We show that, in certain cases, an increased packet size can improve one-way delay performance by up to several hundred milliseconds.
The number of mobile broadband users is increasing. Furthermore, these users have high expectations of the capabilities of mobile broadband, comparable to those of fixed networks. On the other hand, the capacity assignment process on mobile access links is far from transparent to the user, and its properties need to be known in order to minimize the impact of the network on application performance. This paper investigates the impact of packet size on the characteristics of the one-way delay for the downlink in third-generation mobile networks. For interactive and real-time applications such as VoIP, one-way delays are of major importance for user perception; however, they are challenging to measure due to their sensitivity to clock synchronization. Therefore, the paper applies a robust and innovative method to assure the quality of these measurements. We focus on the downlink as this is still the link that carries the most traffic to the user, and its quality will have a significant impact on all IP-based services. Results from measurements on several Swedish mobile operators reveal the possibility to partly control one-way delay and its variability by choosing appropriate packet sizes. In particular, packet sizes leading to the use of WCDMA entail significant but hardly varying one-way delays. On the other hand, we also show that HSDPA networks can deliver large amounts of data at rather high speed, but at the cost of huge variability in the one-way delay.
In this paper we describe a distributed passive measurement infrastructure. Its goals are to reduce the cost and configuration effort per measurement. The infrastructure is scalable with regard to link speeds and measurement locations. A prototype is currently deployed at our university, and a demo is online at http://inga.its.bth.se/projects/dpmi. The infrastructure differentiates between measurements and the analysis of measurements; this way, the actual measurement equipment can focus on the practical issues of packet measurements. By using a modular approach, the infrastructure can handle many different capturing devices. The infrastructure can also deal with the security and privacy aspects that might arise during measurements.
In this work, we present a systematic study of how the traffic of different transport protocols (UDP, TCP, and ICMP) is treated in three operational Swedish 3G networks. This is done by studying the impact that protocol and packet size have on the one-way delay (OWD) across the networks. We do this using a special method that allows us to calculate the exact OWD without having to face the clock synchronization problems that are normally associated with OWD calculations. From our results we see that all three protocols are treated similarly by all three operators for packet sizes smaller than 250 bytes and larger than 1100 bytes. We also show that larger packet sizes are given preferential treatment, with both a smaller median OWD and a smaller standard deviation. It is also clear that ICMP is given better performance than TCP and UDP.
In this paper we present a novel framework supporting distributed network management using a self-organizing peer-to-peer overlay network. The overlay consists of several Distributed Network Agents which can perform distributed tests and distributed monitoring for fault and performance management. In that way, the concept is able to overcome disadvantages that come along with a central management unit, such as lack of scalability and reliability. So far, little attention has been paid to the quality of service experienced by the end user. Our self-organizing management overlay provides a reliable and scalable basis for distributed tests that incorporate the end user. The use of distributed, self-organizing software will also reduce the capital and operational expenditures of the operator, since fewer entities have to be installed and operated.
In this work, we analyze, from passive measurements, the correlations between user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take the customers' experience into account in network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different from, and sometimes better than, those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality of service. Furthermore, we show that user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
The number of mobile operators providing Internet access to end users is growing. However, irrespective of the access network, we observe a distinct sensitivity of user perception to response and download times, in particular for interactive services on the web. In order to facilitate the choice of the right network for a given task, this paper presents a systematic study of web download time and corresponding throughput as a function of file size. Based on measurement data from three Swedish mobile operators and a particular strategy for choosing file sizes, we find surprisingly simple, yet sufficiently accurate approximations of download times. These approximations are based on simple-to-measure parameters and provide valuable quantitative insights into the acceleration of HTTP/TCP/IP-based data delivery. The paper discusses the emergence of these approximations and the related errors. Furthermore, it correlates the findings with Quality of Experience, thus building bridges between performance, user perception, and provisioning issues.
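The abstract does not spell out the approximation; a plausible minimal model of this kind expresses download time as a fixed startup offset plus size over long-flow throughput. In the sketch below, t0_s and rate_bps stand in for the "simple-to-measure parameters"; all values are made up for illustration and are not the paper's results.

```python
# Hypothetical minimal model: download time = startup offset + serialization.
# t0_s (connection setup / slow-start cost) and rate_bps (long-flow
# throughput) are assumed parameters, not values from the paper.

def download_time(size_bytes, t0_s, rate_bps):
    return t0_s + 8.0 * size_bytes / rate_bps

for size in (10_000, 100_000, 1_000_000, 10_000_000):
    t = download_time(size, t0_s=0.8, rate_bps=4_000_000)  # assumed 3G-like link
    print(f"{size:>10} B -> {t:7.2f} s, goodput {8 * size / t / 1e6:5.2f} Mbit/s")
```

Such a model makes the qualitative finding visible: for small files the startup offset dominates and goodput is far below link rate, while for large files the download time approaches pure serialization.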
The notion and topic of Quality of Experience (QoE) keeps attracting the attention of manufacturers, operators, and researchers. It links user perception and expectations on one side with technical Quality of Service (QoS) parameters, management, pricing schemes, etc. on the other. Such links are needed in order to balance user satisfaction with the economic aspects of service provisioning. However, the notion of QoE as such is not without controversy. Technicians, used to a world of objective and clearly definable parameters, tend to fear the subjective, somewhat fuzzy parts associated with end-user perception. Vice versa, customer relationship and marketing departments might find themselves uncomfortable with technical parameters that may not reflect user perception in some tense situations. Nevertheless, the appearance and utility of a networked service depend on the underlying technical solutions and their performance. Thus, we face the challenge of bringing it all together, which essentially describes the spirit of the 18th ITC Specialist Seminar on Quality of Experience (ITC-SS 18). ITC Specialist Seminars have a very good reputation for gathering experts and their high-quality contributions around a performance-oriented topic of mutual interest. ITC-SS 18 is intended as a meeting place for experts, researchers, practitioners, vendors, and customers. It is devoted to presentations and discussions of QoE concepts, analysis, management approaches, etc., both from industry and academia. While many conferences are dominated by academia, one third of the submissions to ITC-SS 18 originated from industry. The contributions have been peer-reviewed by at least three independent reviewers, and finally we selected 18 papers to be presented. Additionally, two keynote speeches reflect one industrial and one academic approach to QoE analysis and implementation. For the ITC, the leading conference for performance modeling and analysis of communication networks & systems, ITC-SS 18 opens a window towards the end user. ITC-SS 18 takes place in Karlskrona on May 29-30, 2008. It is organized by the Dept. of Telecommunication Systems (ATS) within the School of Engineering (TEK) at Blekinge Institute of Technology (BTH), in cooperation with the International Advisory Council (IAC) of ITC.
Mobile devices with ever-increasing functionality and the ubiquitous availability of wireless communication networks are driving forces behind innovative mobile applications enriching our daily life. One performance measure for a successful application deployment is the ability of heterogeneous networks to support application-data flows within certain delay boundaries. However, the quantitative impact of this measure is unknown and practically infeasible to determine in real time due to the resource constraints of mobile devices. We research practical methods for measurement-based performance evaluation of the heterogeneous data communication networks that support mobile application-data flows. We apply the lightweight Comparative Output-Input Analysis (COIA) method, estimating the additional delay induced on the flow over an observation interval of interest (e.g., one second). The additional delay is the amount of delay that exceeds the non-avoidable, minimal end-to-end delay caused by the network's propagation, serialization, and transmission. We propose five COIA methods to estimate additional delay and validate their accuracy with measurements obtained from existing healthcare and multimedia streaming applications. Despite their simplicity, our methods prove to be accurate in relation to an observation interval of interest, and robust under a variety of network conditions. The methods offer novel insights into application-data delays with regard to the performance of heterogeneous data communication networks.
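The definition above (delay exceeding the non-avoidable minimum) can be illustrated in a few lines of Python. This is only the definition, not any of the paper's five COIA estimators; the baseline proxy, the per-interval aggregation, and the sample values are assumptions.

```python
# Illustrating "additional delay": per observation interval, the portion of
# one-way delay above the minimal (propagation + serialization + transmission)
# delay, here approximated by the smallest observed delay. Sample data and
# the 1-second interval are illustrative; this is not a COIA estimator.

INTERVAL_S = 1.0  # observation interval of interest (assumed)

samples = [(0.1, 0.052), (0.4, 0.050), (1.2, 0.081), (1.7, 0.064)]  # (t, delay) in s
baseline = min(d for _, d in samples)   # proxy for the non-avoidable delay

buckets = {}
for t, d in samples:
    buckets.setdefault(int(t // INTERVAL_S), []).append(d - baseline)

for k in sorted(buckets):
    extra = sum(buckets[k]) / len(buckets[k])
    print(f"interval {k}: mean additional delay {extra * 1e3:.1f} ms")
```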
This conceptual paper focuses on revealing challenges and offering concepts associated with the incorporation of the Quality of Experience (QoE) paradigm into the design of mobile video systems. The corresponding design framework combines the application, middleware, and networking layers in a unique cross-layer approach, in which all layers jointly analyse the quality of the video and its delivery in the face of volatile conditions. Particular ingredients of the framework are efficient video processing, advanced real-time scheduling, and reduced-reference metrics at the application and network layers.
The usage of network-demanding applications, such as video streaming on mobile terminals, is growing rapidly. However, network and/or service providers might not be able to guarantee the perceived quality of video streaming, which demands a high packet transmission rate. In order to satisfy user expectations and to minimize user churn, it is important for network operators to infer the end-user perceived quality of video streaming. Today, the most reliable method of obtaining end-user perceived quality is through subjective tests, and the preferred location is the user interface, as it is the point of the application closest to the end user. The end-user perceived quality of video streaming is highly influenced by occasional freezes: technically, extraordinary time gaps between two consecutive pictures displayed to the user, i.e., high inter-picture times. In this paper, we present a QoE instrumentation for video streaming, VLQoE. We added functionality to the VLC player to record a set of metrics from the user interface, the application level, the network level, and the available sensors of the device. To the best of our knowledge, VLQoE is the first tool of its kind that can be used in user experiments for video streaming. Using the tool, we present a two-state model based on the inter-picture time for HTTP- and RTSP-based video streaming via 3.5G. Next, we studied the influence of the inter-picture time on user-perceived quality through a user study, investigating the minimum user-perceived inter-picture time and the user response time.
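A two-state (play/freeze) view driven by inter-picture time can be sketched as below. The 500 ms freeze threshold and the sample gaps are purely assumed values for illustration, not the thresholds or model parameters identified in the study.

```python
# Minimal sketch of a two-state playback model driven by inter-picture time:
# a picture gap above a threshold counts as "freeze", otherwise "play".
# The threshold and sample gaps are illustrative assumptions.

FREEZE_THRESHOLD_S = 0.5  # assumed, not the study's value

def playback_states(inter_picture_times):
    return ["freeze" if gap > FREEZE_THRESHOLD_S else "play"
            for gap in inter_picture_times]

gaps = [0.04, 0.04, 1.20, 0.04, 0.70, 0.04]   # seconds between pictures
states = playback_states(gaps)
print(states)
print(f"time frozen: {sum(g for g, s in zip(gaps, states) if s == 'freeze'):.2f} s")
```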
OpenFlow flow tables in Open vSwitch contain valuable information about installed flows, priorities, packet actions, and routing policies. Their importance is emphasized when collocated tenants compete for the limited entries available for installing flow rules. OpenFlow flow tables are a security asset that requires confidentiality and integrity guarantees. However, commodity software switch implementations, such as Open vSwitch, do not implement protection mechanisms capable of preventing attackers from obtaining information about the installed flows or modifying flow tables. We adopt a novel approach to enabling OpenFlow flow table protection through decomposition. We identify core assets requiring security guarantees, isolate OpenFlow flow tables through decomposition, and implement a prototype using Open vSwitch and Software Guard Extensions enclaves. An evaluation of the prototype on a distributed testbed demonstrates that the approach is practical and indicates directions for further improvements.
Packet delay variation plays an important role in network performance degradation and affects user-perceived quality, especially in the case of real-time services such as video streaming, VoIP, etc. Lightweight methods for detecting network performance issues are desirable compared to task-intensive measurement and analysis. Against this background, this paper discusses the applicability of the Coefficient of Throughput Variation (CoTV) to quickly and reliably detect bottleneck behavior between two arbitrary points in a network. The CoTV can be used as a reduced-reference metric. It is relatively simple to calculate, compare, and interpret, yet powerful enough to provide control feedback when facing changes in network performance. In this paper, we demonstrate the above-mentioned properties of the CoTV through a formula relating it to the sample interval of the throughput and the variability of the delay. The latter is shown to be detectable on time scales that are significantly larger than that of the variability itself. This observation is a key enabler for reducing the load on the device that performs the analysis, due to the possibility of using large sample intervals. We also describe how the difference and ratio of the CoTV at the outlet and inlet can be used to identify network performance issues.
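Assuming the CoTV follows the usual coefficient-of-variation definition (standard deviation over mean) applied to per-interval throughput samples, a minimal sketch of the inlet/outlet comparison looks as follows; the sample values and the reading of the ratio are illustrative.

```python
# Minimal sketch: CoTV = stdev/mean of per-interval throughput samples
# (assuming the usual coefficient-of-variation definition). Comparing the
# outlet against the inlet, as described above, can flag bottleneck
# behaviour. Sample values are made up.
import statistics

def cotv(samples):
    mean = statistics.mean(samples)
    return statistics.pstdev(samples) / mean if mean else float("inf")

inlet  = [9.8, 10.1, 10.0, 9.9, 10.2]   # Mbit/s per sample interval
outlet = [9.5, 6.0, 10.3, 4.9, 9.3]     # same flow after a congested hop

print(f"CoTV inlet : {cotv(inlet):.3f}")
print(f"CoTV outlet: {cotv(outlet):.3f}")
print(f"ratio      : {cotv(outlet) / cotv(inlet):.1f}  (>> 1 hints at a bottleneck)")
```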
Traffic shapers are used by researchers to emulate the behavior of networks and applications in test environments, typically with user-defined traffic shaping parameters such as throughput and loss. Traffic shapers are also used for the enforcement of SLAs, so they are of interest to Internet Service Providers. However, the output of traffic shapers may not be as accurate as desired. Therefore, it is important to assess the accuracy of traffic shaper implementations. In this paper, we evaluate two traffic shapers with regard to the performance of their throughput shaping. For this evaluation, traces were collected. The properties of the resulting throughput at the outlet of the shaper are compared to the properties of the throughput at the inlet, in combination with the preset shaper parameters. We also compare shapers installed on Advanced Micro Devices (AMD) and Intel platforms, and we use different PDU sizes and load levels to test the influence of those parameters on the shaping. We are furthermore able to deduce internal shaper parameters, such as packet rate and buffer size, from our measurements, and we analyse the statistical properties of the packet departure process. The extensive measurement results in this paper allow for a detailed assessment of whether shaper performance is up to the mark for a desired timescale. In general, the performance of both shapers and both hardware platforms can be considered satisfactory on the investigated time scales between 1 ms and 1 s, with a slight advantage for NetEm on AMD.
With the growth of the mobile Internet, the popularity of multimedia services and applications has increased rapidly. As a result, end users have become quality-conscious. To fulfill users' expectations, the study of quality of experience (QoE) is becoming very important for both researchers and service providers. This paper analyses the impact on the perceived quality of received videos encoded with the H.264 baseline profile, which is suitable for mobile video, and streamed through an emulated network with packet loss and packet delay variation. To evaluate the video QoE, tests are conducted on a mobile device and on a laptop. The users' responses show that the baseline profile of H.264 is very sensitive to packet loss and packet delay variation. Moreover, there is no considerable difference in users' perception whether the test is conducted on the mobile device or on the laptop playing a video of the same resolution.
In this paper, we consider the application of partial buffer sharing to an M/G/1/K queueing system for cognitive radio networks (CRNs). It is assumed that the CRN is subject to Nakagami-m fading. Secondary users are allowed to utilize the licensed radio spectrum of the primary users through underlay spectrum access. A finite buffer at the secondary transmitter is partitioned into two regions: the first region serves both classes of packets, while the second region serves only packets of the highest-priority class. The examined CRN can therefore be modeled as an M/G/1/K queueing system with partial buffer sharing. An embedded Markov chain is applied to analyze the queueing behavior of the system. Utilizing the balance equations and the normalization equation, the equilibrium state distribution of the system at an arbitrary time instant can be found. This outcome is used to investigate the impact of queue length, arrival rates, and fading parameters on queueing performance measures such as blocking probability, throughput, mean packet transmission time, channel utilization, mean number of packets in the system, and mean packet waiting time for each class of packets.
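The partial-buffer-sharing admission rule described above can be stated in a few lines; the buffer size, threshold, and demo arrivals below are illustrative assumptions, not the paper's parameters.

```python
# Partial buffer sharing, as described above: a K-slot buffer with a
# threshold K1. Below K1 both packet classes are admitted; the remaining
# K - K1 slots are reserved for the high-priority class. K, K1, and the
# demo arrival sequence are assumed values.

K, K1 = 10, 6  # total buffer size and shared-region threshold (assumed)

def admit(queue_len, high_priority):
    if queue_len >= K:
        return False         # buffer full: block every arrival
    if queue_len < K1:
        return True          # shared region: both classes accepted
    return high_priority     # reserved region: high priority only

queue = 0
for is_high in [False, False, True, False, True, True, False, True, True, True, False]:
    if admit(queue, is_high):
        queue += 1
print(f"admitted {queue} of 11 arrivals")   # low-priority arrivals blocked above K1
```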
In this paper, we develop a queueing analysis for opportunistic decode-and-forward (DF) relay networks. It is assumed that the networks undergo Nakagami-m fading and that the external arrival process follows a Poisson distribution. Selecting the best relay according to the opportunistic relaying scheme, the source first transmits its signal to the best relay, which then attempts to decode the reception and forwards the output to the destination. Each relay is assumed to operate in full-duplex mode, i.e., it can receive and transmit signals simultaneously. The communication process throughout the network can be modeled as a queueing network structured from sub-systems of M/G/1 and G/G/1 queueing stations. We invoke an approximate analysis, the so-called method of decomposition, to analyze the performance behavior of the considered relay network: the whole queueing network is broken into separate queues which are then investigated individually. Based on this approach, the end-to-end packet transmission time and throughput of the considered relay network are quantified in comparison with networks using partial relay selection (PRS).
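Under the decomposition approach, each station is analyzed in isolation; for the M/G/1 building block, the classic Pollaczek-Khinchine mean-value formula applies, as sketched below with illustrative parameters. This shows the building block only, not the paper's end-to-end analysis.

```python
# Pollaczek-Khinchine mean waiting time for an M/G/1 station:
#   W_q = lam * E[S^2] / (2 * (1 - rho)),  with rho = lam * E[S] < 1.
# lam, es, es2 below are illustrative values, not the paper's parameters.

def mg1_mean_wait(lam, es, es2):
    rho = lam * es
    assert rho < 1.0, "station must be stable"
    return lam * es2 / (2.0 * (1.0 - rho))

lam, es = 0.5, 1.0          # arrival rate, mean service time (assumed)
es2 = 2.0 * es ** 2         # second moment for exponential service
wq = mg1_mean_wait(lam, es, es2)
print(f"mean wait {wq:.2f}, mean sojourn {wq + es:.2f}")
```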
Internet traffic monitoring and analysis have been playing a crucial role in understanding and characterizing user behavior on the web. In particular, ON-OFF models capture the essential phases of user communication with web servers. The OFF phases reflect both deliberate and accidental gaps in the traffic flow. In this paper, we present a passive monitoring and analysis method devised to assist in the identification of such traffic gaps that may result in degradation of Quality of Experience (QoE). Our first contribution is a revised ON-OFF model that caters for OFF times reflecting accidental gaps induced by the network. Second, a wavelet-based criterion is proposed to differentiate between network-induced traffic gaps and user think times. The proposed method is intended to be implemented in near real time, as it does not require any deep packet inspection. Both web service providers and network operators may use this method to obtain objective evidence of the appearance of QoE problems from link-level measurements.
Quality of Experience (QoE) is becoming an increasingly popular research area in the ICT-related industry and academia. There is a need for a user-centric approach to the design and monitoring of network applications and services. This stimulates the need for an online passive mechanism to monitor user activity in real time. The mechanism may vary with the type of application. In this work, we focus on web browsing, as it is one of the most popular applications on the Internet. The aim is to let users browse freely and monitor their activity, instead of asking them subjectively about their usage experience. Monitoring TCP connection interruptions could be one way to obtain indications about users' feelings on the Web. This is based on the idea that, in case of bad performance, users break a web browsing session by pressing the reset or stop button in the browser, hence generating a TCP reset at the TCP flow level. In the present work, we carry this analysis further to observe user interruptions in relation to transfer sizes and durations. We want to see how users react to bad performance and try to explain why we observe smaller mean transfer sizes. Do users avoid launching large transfers in the case of bad performance, or do they try to get the same files but stop the transfers that are becoming too long?
Many emerging smart applications and services employ Web technology, and users nowadays surf the Web from any device via any kind of access network. Typically, high page latencies trigger users to abort ongoing transfers, resulting in abrupt terminations of the TCP connections. This paper presents a systematic study of the termination process of TCP connections and identifies the reasons behind the observed sequences of termination flags. Monitoring and classification of the termination behavior of TCP connections can provide indications about the user-perceived performance of Web transfers. From the results, it is observed that TCP termination behavior depends heavily on the client-side application. Therefore, a set of criteria is required to identify the abortions made by the user.
The fluctuating performance of wireless and mobile networks has triggered the need for smart algorithms to assess the user perception resulting from the quality of network services. While efforts have been made to model the user experience resulting from network performance, there is still a need for practical methods to assess user-perceived performance in the real environment. In this work, we present a set of criteria to observe user behavior on the Web passively from the network level. The criteria are based on the monitoring of TCP control flags and HTTP requests. Thus, information about user actions performed in the web browser can be inferred by monitoring the TCP termination flags and by keeping track of the HTTP requests. Along the way, we also present some anomalies observed in the TCP connection termination process, which may result in performance degradation of Web transfers.
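A hypothetical sketch of this kind of criterion is given below: it maps an observed termination-flag sequence, together with whether an HTTP request was still outstanding, to a coarse label. The specific mapping (e.g. a client reset with a pending request read as a user abort) is an illustrative assumption, not the paper's exact rule set.

```python
# Hypothetical classification of TCP connection terminations from control
# flags plus HTTP request state, in the spirit of the criteria described
# above. Flag encoding ('RST_C' = reset from client, 'FIN_S' = FIN from
# server, ...) and the label mapping are assumptions for illustration.

def classify_termination(flags, pending_http_request):
    if not flags:
        return "ongoing or timed out"
    first = flags[0]
    if first == "RST_C":
        # Client-side reset while a request is outstanding: likely the user
        # pressed stop/reload or navigated away.
        return "user abort" if pending_http_request else "client reset"
    if first == "RST_S":
        return "server reset"
    return "graceful close"    # FIN exchange initiated by either side

print(classify_termination(["RST_C"], pending_http_request=True))    # user abort
print(classify_termination(["FIN_C", "FIN_S"], pending_http_request=False))
```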
Passive monitoring of user-perceived performance degradation is an important tool for service providers to improve customer loyalty. In this paper, we discuss our ongoing work on the development of two network-based methods to objectively assess user-perceived network performance. One method is based on the observation of TCP connections interrupted by the users; it allows us to relate the user's interest in the service to the network performance. The other method is simple and based on the identification of traffic gaps in user transfers that may hurt user perception. This work, amongst others, provokes a discussion on the impact of the frequency and duration of such gaps.
In network emulation, traffic shapers are used to shape the performance of the network. They are provided with certain inputs in a test environment to vary the network performance accordingly, in order to investigate the effects of different network conditions on applications in realistic yet emulated scenarios. However, it is very important for the shapers to work as intended in order to successfully realize the desired network conditions; if their functioning deviates from the desired specification, they may make the results of network emulations unrealistic and unreliable. In this work, we evaluate the delay shaping of three traffic shapers, NIST Net, NetEm, and KauNet, through the results obtained from a number of experiments, and present a comparison of their delay-shaping output. This comparison can help select the most suitable shaper for the required shaping. Effects of hardware platforms on the shaping are also filtered out by performing the experiments with shapers installed on Advanced Micro Devices (AMD) and Intel platforms separately. Different Protocol Data Unit (PDU) sizes are used in the experiments to test the influence of packet sizes on the shaping. These delay evaluation results are then complemented by Coefficient of Throughput Variation (CoTV) results.
During the last decade, we have witnessed a rapid development of extended reality (XR) technologies such as augmented reality (AR) and virtual reality (VR). Further, there have been tremendous advancements in artificial intelligence (AI) and machine learning (ML). These two trends will have a significant impact on future digital societies. The vision of an immersive, ubiquitous, and intelligent virtual space opens up new opportunities for creating an enhanced digital world in which the users are at the center of the development process, so-called intelligent realities (IRs). The "Human-Centered Intelligent Realities" (HINTS) profile project will develop concepts, principles, methods, algorithms, and tools for human-centered IRs, thus leading the way for future immersive, user-aware, and intelligent interactive digital environments. The HINTS project is centered around an ecosystem combining XR and communication paradigms to form novel intelligent digital systems. HINTS will provide users with new ways to understand, collaborate with, and control digital systems. These novel ways will be based on visual and data-driven platforms which enable tangible, immersive cognitive interactions within real and virtual realities, thus exploiting digital systems in a more efficient, effective, engaging, and resource-aware manner. Moreover, the systems will be equipped with cognitive features based on AI and ML, which allow users to engage with digital realities and data in novel forms. This paper describes the HINTS profile project and its initial results.
In many cases, application-level measurements are the only way for an application to evaluate and adapt to the performance offered by the underlying networks. Applications perceive heterogeneous networking environments spanning multiple administrative domains as black boxes, inaccessible to lower-level measurement instrumentation. However, application-level measurements can be inaccurate and differ significantly from lower-level ones, amongst others due to the influence of the protocol stacks. In this paper we quantify and discuss such differences using the DPMI, with Measurement Points instrumented with DAG 3.5E cards for reference link-level measurements. We shed light on various impacts on the timestamp accuracy of application-level measurements. Moreover, we quantify the accuracy of generating traffic with a constant inter-packet time. The latter is essential for an accurate emulation of application-level streaming traffic and thus for obtaining realistic end-to-end performance measurements.
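One way to quantify the accuracy of constant inter-packet-time generation is to compare observed send timestamps against the nominal spacing, as in the following minimal sketch; the timestamps and the nominal interval are illustrative values, not measurement results from the paper.

```python
# Minimal sketch: deviation of observed inter-packet times from a nominal
# constant spacing. Timestamps (seconds) and the 10 ms nominal interval
# are made-up values for illustration.
import statistics

NOMINAL_S = 0.010
timestamps = [0.0000, 0.0102, 0.0199, 0.0308, 0.0401, 0.0503]

ipts = [b - a for a, b in zip(timestamps, timestamps[1:])]
errors = [ipt - NOMINAL_S for ipt in ipts]
print(f"mean IPT error {statistics.mean(errors) * 1e6:+.0f} us, "
      f"stdev {statistics.pstdev(errors) * 1e6:.0f} us")
```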