• 51. Jena, Ajit K.
Traffic Control in ATM Networks: Engineering Impacts of Realistic Traffic Processes, 1996, Conference paper (Refereed)

This paper reviews the current state of the art in the rapidly developing areas of ATM traffic controls and traffic modeling, and identifies future research areas to facilitate the implementation of control methods that can support a desired quality of service without sacrificing network utilization. Two sets of issues are identified, one on the impacts of realistic traffic on the efficacy of traffic controls in supporting specific traffic management objectives, and the other dealing with the extent to which controls modify traffic characteristics. These issues are illustrated using the example of traffic shaping of individual ON-OFF sources that have infinite variance sojourn times.

• 52. Karlsson, Pär
Modelling of traffic with high variability over long time scales with MMPPs, 1996, Conference paper (Refereed)

We describe the first steps in the evaluation of an idea to match the high variability found in measurements of traffic by Markov-modulated Poisson processes (MMPPs). It has been shown that one can arrange the parameters of a complex MMPP in a way that at least makes it visually self-similar over a limited time scale. The big benefit of having an MMPP as a traffic model is that it is much easier to analyse mathematically than competing models, such as chaotic maps and fractional Brownian motion. We suggest starting with a two-state MMPP and matching its four parameters to a certain time scale. By splitting each of the two states into two new states, and adjusting the parameters associated with the new states to another (finer) time scale, variability over larger time scales is introduced. The resulting states can then be split again, until the required accuracy is obtained. In the splitting of states, the mean of the stage above must be conserved at each stage when defining the new states. The main purpose of our models is to model the queue filling behaviour of a real-life traffic process. To determine the suitability of our models this is the most important qualification and it is used to evaluate the models.
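The base two-state MMPP described above can be sketched as a short simulation (illustrative only: the function name and parameter values are ours, and the state-splitting refinement is not shown). Each state holds for an exponential dwell time and emits Poisson arrivals at its own rate:

```python
import random

def simulate_mmpp(rates, switch, t_end, seed=0):
    """Simulate a two-state Markov-modulated Poisson process.

    rates  -- (lambda0, lambda1): Poisson arrival rate in each state
    switch -- (r01, r10): transition rates out of state 0 and state 1
    Returns the list of arrival times up to t_end.
    """
    rng = random.Random(seed)
    state, t, arrivals = 0, 0.0, []
    while t < t_end:
        lam = rates[state]
        r = switch[0] if state == 0 else switch[1]
        dwell = rng.expovariate(r)          # time spent in current state
        tau = t
        while True:                         # Poisson(lam) arrivals in the dwell
            tau += rng.expovariate(lam)
            if tau > min(t + dwell, t_end):
                break
            arrivals.append(tau)
        t += dwell
        state = 1 - state                   # flip to the other state
    return arrivals
```

Matching the four parameters (two rates, two switch rates) to a chosen time scale, then splitting states, would follow the procedure the abstract outlines.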

• 53. Karlsson, Pär
On the Characteristics of WWW Traffic and the Relevance to ATM, 1997, Conference paper (Refereed)
• 54. Karlsson, Pär
TCP/IP User Level Modeling for ATM, 1998, Conference paper (Refereed)

We propose a method for performance modeling of TCP/IP over ATM. The modeling is focused on user level behavior and demands. The basic components of our model are the arrivals of new TCP connections according to a Poisson process, and file sizes following heavy-tailed distributions. Using simulations we investigate the impacts of the behavior of such a source on the traffic at lower layers in the network. The benefits of considering the whole system in this way are several. Compared to commonly suggested models operating solely on the link level, a more complete and thorough view of the system is attained. The model also lends itself easily to studies of improvements and modifications of the involved protocols, as well as new ways of handling the traffic. The verification of our model demonstrates that it captures relevant features shown to be present in traffic measurements, such as high variability over long time-scales, self-similarity, long-range dependence, and buffering characteristics.
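The two basic components of the model, Poisson connection arrivals and heavy-tailed file sizes, can be sketched as a trace generator (a sketch under our own assumptions: a Pareto distribution is used here as one common heavy-tailed choice, and all names and parameters are illustrative):

```python
import random

def user_level_trace(conn_rate, alpha, x_min, t_end, seed=0):
    """Generate (start_time, file_size) pairs: connection start times
    form a Poisson process of rate conn_rate, file sizes follow a
    Pareto(alpha, x_min) distribution, i.e. P(X > x) = (x_min/x)^alpha."""
    rng = random.Random(seed)
    t, trace = 0.0, []
    while True:
        t += rng.expovariate(conn_rate)     # exponential inter-arrivals
        if t > t_end:
            break
        u = 1.0 - rng.random()              # uniform in (0, 1]
        size = x_min * u ** (-1.0 / alpha)  # inverse-transform sampling
        trace.append((t, size))
    return trace
```

With a tail index alpha below 2 the sizes have infinite variance, which is the property linked to the long-range dependence the abstract mentions.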

• 55. Karlsson, Pär
The Characteristics of WWW Traffic and the Relevance to ATM, 1997, Conference paper (Refereed)

This document describes a study of the characteristics of recorded WWW traffic. Several parameters of the traffic are investigated. The results are used to investigate a scenario where ATM is used as the underlying transport mechanism. Problems with the deployment of ATM in the approach taken are considered and suggestions for improvements are made. The high variability of the traffic implies that a fixed allocation of bandwidth between the mean and peak rate is an infeasible way to achieve a reasonable utilization of the system, since this results in tremendous buffering demands. This calls for a different view of the way to study the system under consideration. Different properties of the traffic must be taken care of by different methods. Variations over longer time-scales are dealt with by means of capacity allocation, and fluctuations of shorter duration are buffered. This realistic way of looking at the system might also put different tasks, such as traffic modelling, in a new light. For instance, does a model that is to be used for buffer dimensioning have to capture traffic behavior on time-scales that are longer than can reasonably be buffered anyway?

• 56. Larsson, Sven-Olof
A Local Approach for VPC Capacity Management, 1998, Conference paper (Refereed)

By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will simplify routing at transit nodes, connection admission control, and QoS management by traffic segregation. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads (New Year's Eve), and changes in the types of traffic (as in the introduction of new services), there is a need to cope with this by adaptive capacity reallocation between different VPCs. We have developed a type of VPC capacity management policy that uses an allocation function to determine the needed capacity for the coming updating interval, based on the current number of active connections but independent of the offered traffic. In this work we propose and evaluate a method to obtain an optimal parameter setting of the allocation function based only on average values for a network link. We also discuss the influence of different factors on the allocation function.
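The allocation-function idea can be illustrated with a hypothetical functional form. The paper does not specify the form; the linear-plus-headroom shape below and the parameters `a` and `b` are purely illustrative of a function that maps the current number of active connections to a reserved capacity for the next updating interval:

```python
import math

def vpc_allocation(active, a=1.0, b=2.0, cap_min=1.0):
    """Hypothetical VPC allocation function: from the current number of
    active connections only (independent of offered traffic), reserve
    a*active units plus a sqrt-shaped headroom term for arrivals during
    the coming updating interval.  a, b, cap_min are illustrative; the
    paper fits its parameters from average values for a network link."""
    return max(cap_min, a * active + b * math.sqrt(active))
```

The headroom term is the part that such a policy would tune per link; a too-small headroom raises call blocking, a too-large one wastes reserved capacity.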

• 57. Larsson, Sven-Olof
An Evaluation of a Local Approach for VPC Capacity Management, 1998, Conference paper (Refereed)

By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will simplify routing at transit nodes, connection admission control, and QoS management by traffic segregation. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in the introduction of new services), there is a need to cope with this by adaptive capacity reallocation between different VPCs. We have developed a type of local VPC capacity management policy that uses an allocation function to determine the needed capacity for the coming updating interval, based on the current number of active connections but independent of the offered traffic. We determine its optimal parameters, and the optimal updating interval for different overhead costs. Our policy is shown to be able to combine benefits from both VP and VC routing by fast capacity reallocations. The method of signalling is easy to implement and our evaluations indicate that the method is robust. This paper is based on our earlier work, described in [19]. The calculations are simplified and the methodology is changed.

• 58. Larsson, Sven-Olof
VPC Management in ATM Networks, 1998, Report (Other academic)

The goals of VPC management functions are to reduce the call blocking probability and to increase responsiveness, stability, and fairness. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads (e.g. New Year's Eve), and changes in the types of traffic (as in the introduction of new services), this can be met by adaptive capacity reallocation and topology reconfiguration. Brief explanations of the closely related concepts of effective bandwidth and routing are given, together with an overview of ATM. Fundamentally different approaches for VPC capacity reallocation are compared and their pros and cons are discussed. Finally, a further development of one of the approaches is described.

• 59. Larsson, Sven-Olof
VPC Management in ATM Networks, 1997, Licentiate thesis, comprehensive summary (Other academic)
• 60.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
A Comparison between Different Approaches for VPC Bandwidth Management, 1997, Conference paper (Refereed)

By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will enable segregation of traffic with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in the introduction of new services), there is a need to cope with this by adaptive capacity reallocation between different VPCs. The focus of this paper is to introduce a distributed approach for VPC management and compare it to a local and a centralised one. Our results show the pros and cons of the different approaches.

• 61. Larsson, Sven-Olof
A Comparison between Different Approaches for VPC Bandwidth Management, 1997, Conference paper (Refereed)

By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will enable segregation of traffic with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in the introduction of new services), there is a need to cope with this by adaptive capacity reallocation between different VPCs. The focus of this paper is to introduce a distributed approach for VPC management and compare it to a local and a centralised one. Our results show the pros and cons of the different approaches.

• 62. Larsson, Sven-Olof
A Study of a Distributed Approach for VPC Network Management, 1997, Conference paper (Refereed)

By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will enable segregation of traffic with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the type of traffic (as in the introduction of new services), there is a need to cope with this by adaptive capacity reallocation between different VPCs. The focus of this paper is to introduce a distributed approach for VPC management and compare it to a central one. Our results show that the distributed approach is an interesting alternative.

• 63.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Performance Evaluation of a Distributed Approach for VPC Network Management, 1997, Conference paper (Refereed)

By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will enable segregation of traffic with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunication traffic experiences variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in the introduction of new services), there is a need to cope with this by adaptive capacity reallocation. By using VPC capacity reallocation, responsiveness to traffic fluctuations increases. The focus of this paper is to propose and evaluate a distributed approach for VPC management with multiple routes. The evaluation is done in networks with one type of traffic and Poissonian call arrivals. The size of the network is moderate and can be seen as a core ATM network.

• 64. Larsson, Sven-Olof
Performance Evaluation of a Local Approach for VPC Capacity Management, 1998. In: IEICE transactions on communications, ISSN 0916-8516, E-ISSN 1745-1345, Vol. 81, no 5, p. 870-876. Article in journal (Refereed)
• 65.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Performance Evaluation of Different Local and Distributed Approaches for VP Network Management, 1996, Conference paper (Refereed)

As telecommunication traffic experiences variations in intensity, due to office hours, inaccurate forecasting, quick changes in traffic load, and changes in the types of traffic (as in the introduction of new services), there is a need to cope with this by adaptively changing the capacity in the network. This can be done by reserving transmission capacity on a series of links from one node to another, making a virtual path (VP) between these nodes. The control of the VPs can be centralised, distributed or local. The idea of local and distributed approaches is to increase robustness and performance compared to a central approach, which depends on a central computer for the virtual path network (VPN) management. The main focus of this paper is to simulate different strategies and compare them to each other. The measured parameters are blocked traffic, the amount of signalling, unused capacity and maximal VP blocking. The simulation is done in a non-hierarchical network with one type of traffic, i.e. a VP subnetwork using statistical multiplexing and having Poissonian traffic streams. The size of the network is moderate. The VPN management will handle slow to medium-fast variations, typically on the order of minutes up to hours. The traffic variations are met by reshaping the VPN to match the current demands.

• 66. Lazraq, Tawfiq
Modelling a 10 Gbits/Port Shared Memory ATM Switch, 1997, Conference paper (Refereed)

The speed of optical transmission links is growing at a rate which is difficult for the microelectronic technology of ATM switches to follow. In order to close the transmission rate gap between optical transmission links and ATM switches, ATM switches operating at multi-Gbit/s rates have to be developed. A 10 Gbit/s/port shared memory ATM switch is under development at Linköping Institute of Technology (LiTH) and Lund Institute of Technology (LTH) in Sweden. It has 8 inputs and 8 outputs. The switch will be implemented on a single chip in 0.8 μm BiCMOS. We report on a performance analysis of the switch under a specific traffic model. This traffic model emulates LAN-type traffic. Performance analysis is crucial for evaluating and dimensioning the very high speed ATM switch.

• 67. Lennerstad, Håkan
Logical graphs: how to map mathematics, 1996. In: ZDM - Zentralblatt für Didaktik der Mathematik, ISSN 0044-4103, Vol. 27, no 3, p. 87-92. Article in journal (Refereed)

A logical graph is a certain directed graph with which any mathematical theory or proof can be presented: its logic is formulated in graph form. Compared to the usual narrative description, the presentation usually gains in overview, clarity and precision. A logical graph formulation can be thought of as a detailed and complete map of the mathematical landscape. The main goal in the design of logical graphs is didactical: to improve the orientation in a mathematical proof or theory for a reader, and thus to improve access to mathematics.

• 68. Lennerstad, Håkan
The directional display, 1997, Conference paper (Refereed)

The directional display contains and shows several images; which particular image is visible depends on the viewing direction. This is achieved by packing information at high density on a surface, by a certain back illumination technique, and by explicit mathematical formulas which eliminate projection deformations and make it possible to automate the production of directional displays. The display is illuminated but involves no electronic components. A patent is pending for the directional display. Directional dependency of an image can be used in several ways. One is to achieve three-dimensional effects. In contrast to holograms, large size and full color involve no problems. Another application of the technique is to show moving sequences. Yet another is to make a display more directionally independent than conventional displays. It is also possible and useful in several contexts to show different text in different directions with the same display. These features can be combined.

• 69.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
The Geometry of the Directional Display, 1996, Report (Refereed)

The directional display is a new kind of display which can contain and show several images; which particular image is visible depends on the viewing direction. This is achieved by packing information at high density on a surface, by a certain back illumination technique, and by explicit mathematical formulas which make it possible to automate the printing of a display to obtain desired effects. The directional dependency of the display can be used in several different ways. One is to achieve three-dimensional effects. In contrast to holograms, large size and full color here involve no problems. Another application of the basic technique is to show moving sequences. Yet another is to make a display more directionally independent than today's displays. A patent is pending for the invention in Sweden.

• 70. Lennerstad, Håkan
An Optimal Execution Time Estimate of Static versus Dynamic Allocation in Multiprocessor Systems, 1992, Report (Other academic)

Consider a multiprocessor with $k$ identical processors, executing parallel programs consisting of $n$ processes. Let $T_s(P)$ and $T_d(P)$ denote the execution times for the program $P$ with optimal static and dynamic allocations respectively, i.e. allocations giving minimal execution time. We derive a general and explicit formula for the maximal execution time ratio $g(n,k)=\max T_s(P)/T_d(P)$, where the maximum is taken over all programs $P$ consisting of $n$ processes. Any interprocess dependency structure for the programs $P$ is allowed, only avoiding deadlock. Overhead for synchronization and reallocation is neglected. Basic properties of the function $g(n,k)$ are established, from which we obtain a global description of the function. Plots of $g(n,k)$ are included. The results are obtained by investigating a mathematical formulation. The mathematical tools involved are essentially tools of elementary combinatorics. The formula is a combinatorial function applied to certain extremal matrices corresponding to extremal programs. It is mathematically complicated but rapidly computed for reasonable $n$ and $k$, in contrast to the NP-completeness of the problems of finding optimal allocations.
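The ratio $T_s(P)/T_d(P)$ can be explored numerically for the special case of independent processes (the paper allows arbitrary dependency structures, which this sketch does not model; task times below are illustrative). Static allocation is brute-forced; the dynamic makespan uses the idealised lower bound that free reallocation achieves for independent, preemptible processes:

```python
from itertools import product

def static_makespan(times, k):
    """Optimal static allocation: brute-force over all assignments
    of processes to k processors (tiny inputs only -- the paper
    notes the general problem is NP-complete)."""
    best = float("inf")
    for assign in product(range(k), repeat=len(times)):
        loads = [0.0] * k
        for t, p in zip(times, assign):
            loads[p] += t
        best = min(best, max(loads))
    return best

def dynamic_makespan(times, k):
    """With free reallocation and no dependencies, an idealised
    dynamic schedule reaches max(longest process, total work / k)."""
    return max(max(times), sum(times) / k)

# Illustrative program of 3 independent processes on k = 2 processors:
times = [4.0, 3.0, 3.0]
ratio = static_makespan(times, 2) / dynamic_makespan(times, 2)  # 6.0 / 5.0
```

Here the best static split ({4} vs {3, 3}) leaves one processor idle at the end, giving a ratio of 1.2; the formula $g(n,k)$ in the report bounds how large this ratio can get over all programs.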

• 71. Lennerstad, Håkan
Combinatorics for multiprocessor scheduling optimization and other contexts in computer architecture, 1996, Conference paper (Refereed)

The method described consists of two steps. First, unnecessary programs are eliminated through a sequence of program transformations. Second, within the remaining set of programs, sometimes regarded as matrices, those where all possible combinations of synchronizations occur equally frequently are proven to be extremal. At this stage we obtain a formulation which is simple enough to allow explicit formulas to be derived. It turns out that the same method can be used for obtaining worst-case bounds on other NP-hard problems within computer architecture.

• 72. Lennerstad, Håkan
Optimal combinatorial functions comparing multiprocess allocation performance in multiprocessor systems, 2000. In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, p. 1816-1838. Article in journal (Refereed)

For the execution of an arbitrary parallel program P, consisting of a set of processes with any executable interprocess dependency structure, we consider two alternative multiprocessors. The first multiprocessor has q processors and allocates parallel programs dynamically; i.e., processes may be reallocated from one processor to another. The second employs cluster allocation with k clusters and u processors in each cluster: here processes may be reallocated within a cluster only. Let $T_d(P,q)$ and $T_c(P,k,u)$ be execution times for the parallel program P with optimal allocations. We derive a formula for the program independent performance function $$G(k,u,q)=\sup_P \frac{T_c(P,k,u)}{T_d(P,q)}.$$ Hence, with optimal allocations, the execution of P can never take more than a factor G(k,u,q) longer time with the second multiprocessor than with the first, and there exist programs showing that the bound is sharp. The supremum is taken over all parallel programs consisting of any number of processes. Only overhead for synchronization and reallocation is neglected. We further present a tight bound which exploits a priori knowledge of the class of parallel programs intended for the multiprocessors, thus resulting in a sharper bound. The function g(n,k,u,q) is the above supremum taken over all parallel programs consisting of n processes. The functions G and g can be used in various ways to obtain tight performance bounds, aiding in multiprocessor architecture decisions.

• 73. Lennerstad, Håkan
Optimal Combinatorial Functions Comparing Multiprocess Allocation Performance in Multiprocessor Systems, 1993, Report (Other academic)

For the execution of an arbitrary parallel program P, consisting of a set of processes, we consider two alternative multiprocessors. The first multiprocessor has q processors and allocates parallel programs dynamically, i.e. processes may be reallocated from one processor to another. The second employs cluster allocation with k clusters and u processors in each cluster; here processes may be reallocated within a cluster only. Let $T_d(P,q)$ and $T_c(P,k,u)$ be execution times for the parallel program P with optimal allocations. We derive a formula for the program independent performance function $$G(k,u,q)=\sup_P \frac{T_c(P,k,u)}{T_d(P,q)}.$$ Hence, with optimal allocations, the execution of $P$ can never take more than a factor $G(k,u,q)$ longer time with the second multiprocessor than with the first, and there exist programs showing that the bound is sharp. The supremum is taken over all parallel programs consisting of any number of processes. Any interprocess dependency structure is allowed for the parallel programs, except deadlock. Only overhead for synchronization and reallocation is neglected. We further present optimal formulas which exploit a priori knowledge of the class of parallel programs intended for the multiprocessor, thus resulting in sharper optimal bounds. The function $g(n,k,u,q)$ is the above maximum taken over all parallel programs consisting of $n$ processes. The function $s(n,v,k,u)$ is the same maximum, with $q=n$, taken over all parallel programs of $n$ processes which have a degree of parallelism characterized by a certain parallel profile vector $v=(v_1,...,v_n)$. The functions can be used in various ways to obtain optimal performance bounds, aiding in multiprocessor architecture decisions. An immediate application is the evaluation of heuristic allocation algorithms. It is well known that the problems of finding the corresponding optimal allocations are NP-complete. We thus in effect present a methodology to obtain optimal control of NP-complete scheduling problems.

• 74. Lennerstad, Håkan
Optimal Scheduling Results for Parallel Computing, 1996. In: Applications on advanced architecture computers / [ed] Astfalk, Greg, Philadelphia, USA: SIAM, 1996, p. 155-164. Chapter in book (Refereed)

Load balancing is one of many possible causes of poor performance on parallel machines. If good load balancing of the decomposed algorithm or data is not achieved, much of the potential gain of the parallel algorithm is lost to idle processors. Each of the two extremes of load balancing - static allocation and dynamic allocation - has advantages and disadvantages. This chapter illustrates the relationship between static and dynamic allocation of tasks.

• 75. Lennerstad, Håkan
Optimal Worst Case Formulas Comparing Cache Memory Associativity, 1995, Report (Other academic)

Consider an arbitrary program $P$ which is to be executed on a computer with two alternative cache memories. The first cache is set associative or direct mapped. It has $k$ sets and $u$ blocks in each set, this is called a (k,u)$-cache. The other is a fully associative cache with$q$blocks - a$(1,q)$-cache. We present formulas optimally comparing the performance of a$(k,u)$-cache compared to a$(1,q)$-cache for worst case programs. Optimal mappings of the program variables to the cache blocks are assumed. Let$h(P,k,u)$denote the number of cache hits for the program$P$, when using a$(k,u)$-cache and an optimal mapping of the program variables of$P$to the cache blocks. We establish an explicit formula for the quantity $$\inf_P \frac{h(P,k,u)}{h(P,1,q)},$$ where the infimum is taken over all programs$P$which contain$n$variables. The formula is a function of the parameters$n,k,u$and$q$only. We also deduce a formula for the infimum taken over all programs of any number of variables, this formula is a function of$k,u$and$q$. We further prove that programs which are extremal for this minimum may have any hit ratio, i.e. any ratio$h(P,1,q)/m(P)$. Here$m(P)$is the total number of memory references for the program P. We assume the commonly used LRU replacemant policy, that each variable can be stored in one memory block, and is free to be stored in any block. Since the problems of finding optimal mappings are NP-hard, the results provide optimal bounds for NP-hard quantities. The results on cache hits can easily be transformed to results on access times for different cache architectures. • 76. Lennerstad, Håkan Optimal worst case formulas comparing cache memory associativity2000In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, p. 872-905Article in journal (Refereed) In this paper we derive a worst case formula comparing the number of cache hits for two different cache memories. From this various other bounds for cache memory performance may be derived. 
Consider an arbitrary program P which is to be executed on a computer with two alternative cache memories. The rst cache is set-associative or direct-mapped. It has k sets and u blocks in each set; this is called a (k, u)-cache. The other is a fully associative cache with q blocks-a (1, q)-cache. We derive an explicit formula for the ratio of the number of cache hits h(P, k, u) for a(k, u)-cache compared to a (1, q)-cache for a worst case program P. We assume that the mappings of the program variables to the cache blocks are optimal. The formula quantifies the ratio [GRAPHICS] where the in mum is taken over all programs P with n variables. The formula is a function of the parameters n, k, u, and q only. Note that the quantity h ( P, k, u) is NP-hard. We assume the commonly used LRU (least recently used) replacement policy, that each variable can be stored in one memory block, and that each variable is free to be mapped to any set. Since the bound is decreasing in the parameter n, it is an optimal bound for all programs with at most n variables. The formula for cache hits allows us to derive optimal bounds comparing the access times for cache memories. The formula also gives bounds ( these are not optimal, however) for any other replacement policy, for direct-mapped versus set-associative caches, and for programs with variables larger than the cache memory blocks. • 77. Lindström, Fredric Delayed Filter Update: An Acoustic Echo Canceler Structure for Improved Doubletalk Detection2003Conference paper (Refereed) • 78. Lundberg, Lars Optimal bounds on the gain of permitting dynamic allocation of communication channels in distributed computing1999In: Acta Informatica, ISSN 0001-5903, E-ISSN 1432-0525, p. 425-446Article in journal (Refereed) Consider a distributed system consisting of n computers connected by a number of identical broadcast channels. All computers may receive messages from all channels. 
We distinguish between two kinds of systems: systems in which the computers may send on any channel (dynamic allocation) and system where the send port of each computer is statically allocated to a particular channel. A distributed task (application) is executed on the distributed system. A task performs execution as well as communication between its subtasks. We compare the completion time of the communication for such a task using dynamic allocation and k(d) channels with the completion time using static allocation and k(s) channels. Some distributed tasks will benefit very much from allowing dynamic allocation, whereas others will work fine with static allocation. In this paper we define optimal upper and lower bounds on the gain (or loss) of using dynamic allocation and k(d) channels compared to static allocation and k(s) channels. Our results show that, for some tasks, the gain of permitting dynamic allocation is substantial, e.g. when k(s) = k(d) = 3, there are tasks which will complete 1.89 times faster using dynamic allocation compared to using the best possible static allocation, but there are no tasks with a higher such ratio. • 79. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. A Comparative Study of Three New Object-Oriented Methods1995Report (Refereed) In this paper we will compare and contrast some of the newer methods with some of the established methods in the field of object-oriented software engineering. The methods re-viewed are Solution-Based Modelling, Business Object Notation and Object Behaviour Analysis. The new methods offer new solutions and ideas to issues such as object identi-fication from scenarios, traceability supporting techniques, criteria for phase completion and method support for reliability. Although all these contributions, we identified some issues, particular design for dynamic binding, that still have to be taken into account in an object-oriented method. • 80. 
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Applying the Object-Oriented Framework Technique to a Family of Embedded Systems1996Report (Refereed)

This paper discusses some experiences from a project developing an object-oriented framework for a family of fire alarm system products. TeleLarm AB, a Swedish security company, initiated the project. One application has so far been generated from the framework, with successful results. The released application has shown zero defects and has proved to be highly flexible. Fire alarm systems have a long lifetime and have high reliability and flexibility requirements. The most important observations presented in this paper are that the programming language C++ can be used successfully for small embedded systems, and that object-orientation and framework techniques offer flexibility and reusability in such systems. It has also been noted that design for verifiability and testability is very important, affecting as it does both maintainability and reliability.

• 81.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Verifying Framework-Based Applications by Establishing Conformance1996Report (Refereed)

The use of object-oriented frameworks is one way to increase productivity by reusing both design and code. In this paper, a framework-based application is viewed as composed of a framework part and an increment. It is difficult to relate the intended behaviour of the final application to specific increment requirements; it is therefore difficult to test the increment using traditional testing methods. Instead, the notion of increment conformance is proposed, meaning that the increment is designed in conformance with the intentions of the framework designers. This intention is specified as a set of composability constraints defined as an essential part of the framework documentation.
Increment conformance is established by verifying the composability constraints by means of code and design inspection. Conformance of the increment is a necessary but not sufficient condition for correct behaviour of the final application.

• 82.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Tight Bounds on the Minimum Euclidean Distance for Block Coded Phase Shift Keying1996Report (Refereed)

We present upper and lower bounds on the minimum Euclidean distance $d_{Emin}(C)$ for block coded PSK. The upper bound is an analytic expression depending on the alphabet size $q$, the block length $n$ and the number of codewords $|C|$ of the code $C$. This bound is valid for all block codes with $q \geq 4$ and with medium or high rate: codes where $|C| > (q/3)^n$. The lower bound is valid for Gray coded binary codes only. This bound is a function of $q$ and of the minimum Hamming distance $d_{Hmin}(B)$ of the corresponding binary code $B$. We apply the results to two main classes of block codes for PSK: Gray coded binary codes and multilevel codes. There are several known codes in both classes which satisfy the upper bound on $d_{Emin}(C)$ with equality. These codes are therefore best possible, given $q$, $n$ and $|C|$. We can deduce that the upper bound is optimal or near optimal for many parameters $q$, $n$ and $|C|$. In the case of Gray coded binary codes, both bounds can be applied. It follows for many binary codes that the upper and the lower bounds on $d_{Emin}(C)$ coincide. Hence, for these codes $d_{Emin}(C)$ is maximal.

• 83.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Some Results on Optimal Decisions in Network Oriented Load Control in Signaling Networks1996Report (Refereed)

Congestion control in signaling system number 7 is a necessity to fulfil the requirements of a telecommunication network that satisfies customers' requirements on quality of service. Heavy network load is an important source of customer dissatisfaction, as congested networks result in deteriorated quality of service. With the introduction of a Congestion Control Mechanism (CCM) that annihilates service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically. Annihilation of already delayed sessions lets other sessions benefit and increases the useful overall network throughput. This report discusses the importance of customer satisfaction and the relation between congestion in signaling networks and customer dissatisfaction. The advantage of using network profit as a network performance metric is also addressed. Network profit and network costs are given a stringent definition with respect to customer satisfaction. An expression for the marginal cost of accepting or annihilating sessions is also given. Finally, the CCM is refined using a decision theoretic approach that bases the decision of annihilation on the average profit attached to each of the two possible actions, i.e. annihilate the session or not. The decision theoretic approach uses a load dependent probability distribution for the completion time. The results in this report indicate that the decision theoretic approach to the CCM (DCCM) is robust and can handle very high overloads, both transient and focused, keeping the network profit at a high level.
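The profit-based annihilation decision described in this report can be sketched as a comparison of expected values. Everything in the sketch below is an illustrative assumption rather than the report's actual model: the exponential completion-time distribution, the way its mean scales with load, and all revenue/cost figures.

```python
import math

def p_on_time(predicted_mean, deadline, load):
    """P(completion <= deadline) under an assumed exponential completion-time
    model whose mean grows with the network load (illustrative model only)."""
    mean = predicted_mean * (1.0 + load)
    return 1.0 - math.exp(-deadline / mean)

def dccm_decide(predicted_mean, deadline, load,
                revenue=1.0, badwill=0.3, processing_cost=0.1):
    """Pick the action with the higher expected profit: continue the session
    or annihilate it (hypothetical revenue/cost figures)."""
    p = p_on_time(predicted_mean, deadline, load)
    # Continuing earns revenue if on time, pays bad-will if late, and always
    # consumes processing capacity; annihilating only pays the bad-will cost.
    ev_continue = p * revenue - (1.0 - p) * badwill - processing_cost
    ev_annihilate = -badwill
    return "continue" if ev_continue >= ev_annihilate else "annihilate"
```

At light load the session is almost certain to complete in time and is kept; at extreme overload the low completion probability makes annihilation the more profitable action, which frees capacity for sessions that can still earn revenue.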

• 84. Pettersson, Stefan
A Decision Theoretic Approach to Congestion Control in Signalling Networks1996Conference paper (Refereed)

Congestion control in signaling system number 7 is a necessity to fulfil the requirements of a telecommunication network that satisfies customers' requirements on quality of service. Heavy network load is an important source of customer dissatisfaction, as congested networks result in deteriorated quality of service. With the introduction of a Congestion Control Mechanism (CCM) that annihilates service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically. Annihilation of already delayed sessions lets other sessions benefit and increases the useful overall network throughput. This paper uses a decision theoretic approach that bases the decision of annihilation on the average profit attached to each of the two possible actions, i.e. annihilate or not. We describe the load dependent probability distribution for the completion time, and discuss the use of attributes attached to each session describing the outcome of any performed CCM action, e.g. the bad-will costs associated with annihilation. These attributes are also used to calculate the network profit for a given network load. The results in this paper indicate that the decision theoretic approach to the CCM (DCCM) can handle very high overloads, keeping the network profit at a reasonable level.

• 85. Pettersson, Stefan
A Profit Optimizing Strategy for Congestion Control in Signaling Networks1995Conference paper (Refereed)

Congestion control in signaling system number 7 (SS7) is a necessity to fulfil the requirements of a telecommunication network that satisfies customers' requirements on quality of service. Heavy network load is an important source of customer dissatisfaction, as congested networks result in deteriorated quality of service. With the introduction of a Congestion Control Mechanism (CCM) that annihilates service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically. Annihilation of already delayed sessions lets other sessions benefit and increases the overall network throughput. This paper investigates the possibilities of using a decision theoretic approach that bases the decision of annihilation on the average loss attached to each of the two possible actions, i.e. annihilate or not. Attributes are attached to each session describing the outcome of any performed CCM action, e.g. the economic loss connected with the annihilation of a session. The attributes are also used to calculate the network loss for a given network load. The results in this paper indicate that the decision theoretic approach can decrease the network loss by up to 40% for the improved CCM (ICCM) compared to an ordinary CCM.

• 86. Pettersson, Stefan
Economical Aspects of a Congestion Control Mechanism in a Signaling Network1995Conference paper (Refereed)

Congestion control in signaling system 7 (SS7) is a necessity for fulfilling the requirements of a telecommunication network that provides customer satisfaction. Heavy network load is a source of customer dissatisfaction, as congested networks result in unsuccessful calls. With the introduction of network profit as a metric, it is possible to study the efficiency of an annihilation congestion control mechanism (ACCM) from the operator's point of view. Several strategies for applying the ACCM are investigated. A model describing the income and cost for a call is also introduced.

• 87. Pettersson, Stefan
Network Oriented Load Control in Intelligent Networks1997Conference paper (Refereed)

Heavy network load in signaling system number 7 is an important source of customer dissatisfaction, as congested networks result in deteriorated quality of service. With the introduction of a Congestion Control Mechanism (CCM) that rejects service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically, and with it customer satisfaction. Rejection of already delayed sessions lets other sessions benefit and increases the useful overall network throughput. The decision of rejection is based upon Bayesian decision theory, which takes into account the cost or revenue attached to each action, i.e. whether to reject the session or not. More valuable sessions are then given priority through the network, at the expense of less valuable sessions. To clearly display the benefit of this approach we propose to use network profit as a performance metric. This paper summarises the ongoing research and discusses the future direction of this project. Of special interest is the deployment of new services in the IN and the implications this has for network load and the profit made by the operator.

• 88. Popescu, Adrian
Modeling and Analysis of Network Applications and Services1998Other (Other academic)

Recent traffic measurement studies from a wide range of working packet networks have convincingly shown the presence of self-similar (long-range dependent, LRD) properties in both local and wide area traffic traces. LRD processes are characterized (in the case of finite variance) by self-similarity of aggregated summands, slowly decaying covariances, heavy-tailed distributions and a spectral density that tends to infinity for frequencies approaching zero. This discovery calls into question some of the basic assumptions made by most of the research in control, engineering and operations of broadband integrated systems. At present, there is mounting evidence that self-similarity is of fundamental importance for a number of teletraffic engineering problems, such as traffic measurement and modeling, queueing behavior and buffer sizing, admission control, congestion control, etc. These impacts have highlighted the need for precise and computationally feasible methods to estimate diverse LRD parameters. In particular, real-time estimation of measured data traces and off-line analysis of enormous collected data sets call for accurate and effective estimation techniques. A wavelet-based tool for the analysis of LRD is presented in this paper, together with a semi-parametric estimator of the Hurst parameter. The estimator has been proved to be unbiased under fractional Brownian motion (fBm) and Gaussian assumptions. An analysis of the Bellcore Ethernet traces using the wavelet-based estimator is also reported.

• 89. Popescu, Adrian
NEMESIS: A Multigigabit Optical Local Area Network1994Conference paper (Refereed)

A new architecture is developed for an integrated 20 Gbps fiber optic Local Area Network (LAN) that supports data rates up to 9.6 Gbps. The architecture does not follow the standard, vertically-oriented Open System Interconnection (OSI) layering approach of other LANs. Instead, a horizontally-oriented model is introduced for the communication process to open up the three fundamental bottlenecks, i.e., the opto-electronic, service and processing bottlenecks, that occur in multi-Gbps integrated communication over multiwavelength optical networks. Furthermore, the design also follows a new concept, called Wavelength-Dedicated-to-Application (WDA), in opening up the opto-electronic and service bottlenecks. Separate, simplified, and application-oriented protocols supporting both circuit- and packet-switching are used to open up the processing bottleneck.

• 90. Popescu, Adrian
Dynamic Time Sharing: A New Approach For Congestion Management1996Conference paper (Refereed)

A new approach for bandwidth allocation and congestion control, of the Rate Controlled admission with Priority Scheduling service type, is reported in this paper. It is called Dynamic Time Sharing (DTS) because of the dynamic nature of the procedure for resource partitioning to allocate and guarantee a required bandwidth for every traffic class. The approach is based on guaranteeing specific traffic parameters (bandwidth requirements) through a policing unit, and then optimizing the bandwidth assignment within the network for specific parameters of interest (such as delay or jitter, and loss). The optimization process is based on the parameters guaranteed by the policing unit. A batch admission policy is used at the edges of the network, according to a specific framing strategy, to follow the traffic characteristics (e.g., the traffic constraint function) of different traffic classes. Another framing (congestion control) strategy is used within the network, based on the different (delay/loss) requirements of the traffic classes. Proper management of bandwidth and buffer resources is provided in every (switch) node of the network, so as to guarantee the diverse performance measures of interest.
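A policing unit that guarantees declared bandwidth parameters is commonly illustrated with a token bucket; the sketch below is that generic mechanism, not the DTS paper's actual policing discipline, and the class name and parameter values are arbitrary.

```python
class Policer:
    """Token-bucket policing of one traffic class: a cell is admitted only
    while the class stays within its declared rate and burst size."""
    def __init__(self, rate, burst):
        self.rate = rate        # declared cells per second
        self.burst = burst      # maximum burst, in cells
        self.tokens = burst
        self.last = 0.0

    def admit(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A class that keeps within its declared rate sees every cell admitted; a burst beyond the bucket depth is rejected until tokens accumulate again, which is what makes the guaranteed parameters usable for downstream optimization.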

• 91. Popescu, Adrian
Dynamic Time Sharing: A New Approach For Congestion Management1997In: ATM Networks: Performance Modelling and Analysis / [ed] Kouvatsos, Demetres, London: Chapman & Hall , 1997Chapter in book (Other academic)

A new approach for bandwidth allocation and congestion control, of the Rate Controlled admission with Priority Scheduling service type, is reported in this paper. It is called Dynamic Time Sharing (DTS) because of the dynamic nature of the procedure for resource partitioning to allocate and guarantee a requested bandwidth for every traffic class. The approach is based on guaranteeing specific traffic parameters (bandwidth requirements) through a policing unit, and then optimizing the bandwidth assignment within the network for specific parameters of interest (such as delay or jitter, and loss). The optimization process is based on the parameters guaranteed by the policing unit. The policing unit also functions to enforce the "fair" sharing of resources. A batch admission policy is used at the edges of the network, according to a specific framing strategy, to follow the traffic characteristics (e.g., the traffic constraint function) of different traffic classes. The DTS mechanism also allows another framing (congestion control) strategy to be used within the network, based on the different (delay/loss) requirements of the traffic classes. Proper management of bandwidth and buffer resources is provided in every (switch) node of the network, so as to guarantee the diverse performance measures of interest.

• 92. Pruthi, Parag
HTTP Interactions with TCP1998Conference paper (Refereed)

In this paper we describe our simulation models for evaluating the end-to-end performance of HTTP transactions. We first analyze several gigabytes of traffic traces collected from a production Frame Relay network. Using these traces we extract web traffic and analyze the server web pages accessed by actual users. We analyze over 25,000 web pages and develop a web client/server interaction model based upon our analysis of many servers' contents. We make specific contributions by analyzing the popularity of web servers and the number of bytes transferred from them during a busy hour. We also compute the distribution of the number of embedded items within a web document. We then use these models to drive a network simulation and show the effects of the TCP/IP flow control and retransmission mechanisms on the source parameters. One of our important contributions is to show that the Hurst parameter is robust with regard to TCP/IP flow and error control. Using the simulation studies we show that the end-to-end application message delay has a heavy-tailed distribution, and we discuss how such distributions arise in the network context.

• 93. Pruthi, Parag
Effect of Controls on Self-Similar Traffic1997Conference paper (Refereed)

Tremendous advances in technology have made Giga- and Terabit networks possible today. Similar advances need to be made in the management and control of these networks if these technologies are to be successfully accepted in the market place. Although years of research have been expended on designing control mechanisms for fair resource allocation as well as for guaranteeing Quality of Service, the discovery of the self-similar nature of traffic flows in all packet networks and services, irrespective of topology, technology or protocols, leads one to wonder whether these control mechanisms are applicable in the real world. In an attempt to answer this question we have designed network simulators consisting of realistic client/server interactions over various protocol stacks and network topologies. Using this testbed we present some preliminary results which show that simple flow control mechanisms and bounded resources cannot alter the heavy-tailed nature of the offered traffic. We also discuss methods by which application level models can be designed and their impacts on network performance can be studied.
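The heavy-tailed offered traffic discussed here is commonly modeled by ON-OFF sources with Pareto-distributed sojourn times, where a shape parameter between 1 and 2 gives infinite variance. The sketch below is that generic textbook source, not the authors' simulator; all parameter values and function names are illustrative.

```python
import random

def pareto(alpha, xm):
    """Inverse-CDF sample from a Pareto(alpha, xm) distribution; for
    1 < alpha < 2 the variance is infinite (heavy-tailed sojourn times)."""
    u = 1.0 - random.random()          # u in (0, 1], so the sample is >= xm
    return xm / (u ** (1.0 / alpha))

def on_off_trace(n_periods, alpha=1.5, xm=1.0, rate=1.0):
    """Alternating ON/OFF periods with Pareto durations for one source;
    returns (state, duration, bytes) per period (parameters are arbitrary)."""
    trace, on = [], True
    for _ in range(n_periods):
        d = pareto(alpha, xm)
        trace.append(("ON" if on else "OFF", d, rate * d if on else 0.0))
        on = not on
    return trace
```

Aggregating many such sources produces self-similar traffic, which is one way to see why bounded buffers and simple flow control cannot remove the heavy tail of the offered load.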

• 94.
Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, School of Engineering, Department of Mechanical Engineering.
Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Science. Blekinge Institute of Technology, School of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences. Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
A new equation and exact solutions describing focal fields in media with modular nonlinearity2017In: Nonlinear dynamics, ISSN 0924-090X, E-ISSN 1573-269X, Vol. 89, no 3, p. 1905-1913Article in journal (Refereed)

Brand-new equations, which can be regarded as modifications of the Khokhlov–Zabolotskaya–Kuznetsov or Ostrovsky–Vakhnenko equations, are suggested. These equations are quite general in that they describe the nonlinear wave dynamics in media with modular nonlinearity. Such media exist among composites, meta-materials, inhomogeneous and multiphase systems. These new models are interesting for two reasons: (1) the equations admit exact analytic solutions, and (2) the solutions describe real physical phenomena. The equations model nonlinear focusing of wave beams. It is shown that a stationary waveform exists inside the focal zone. Steady-state profiles are constructed by matching functions describing the positive and negative branches of exact solutions of an equation of Klein–Gordon type. Such profiles have been observed many times in experiments and numerical studies. The non-stationary waves can contain singularities of two types: discontinuity of the wave and of its derivative. These singularities are eliminated by introducing dissipative terms into the equations, thereby increasing their order. © 2017 The Author(s)

• 95. Rönngren, Robert
Parallel Simulation of a High Speed LAN1994Conference paper (Refereed)

In this paper we discuss modeling and simulation of a multigigabit/s LAN. We use parallel simulation techniques to reduce the simulation time. Optimistic and conservative parallel simulators have been used. Our results on a shared memory multiprocessor indicate that the conservative method is superior to the optimistic one for the specific application. Further, the parallel simulator based on the conservative scheme shows a linear speedup for large networks.

• 96. Svensson, Anders
Dynamic Alternation between Load Sharing Algorithms1992Conference paper (Refereed)

Load sharing algorithms can use sender-initiated, receiver-initiated, or symmetrically-initiated schemes to improve performance in distributed systems. The relative performance of these schemes has been shown to depend on the system load. The author proposes an adaptive symmetrically-initiated scheme where all nodes alternate between a sender-initiated and a receiver-initiated algorithm depending on the current system load. Simulations show that the mean job response times for the proposed scheme are superior to the best attained by its two algorithms used separately and simultaneously. The alternating scheme performs best at intermediate and high loads, when the job arrival process is bursty, and when it is costly to find complementary nodes.
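The alternation rule can be sketched as a threshold decision with hysteresis: sender-initiated at light load (idle receivers are easy to find), receiver-initiated at heavy load. The thresholds and the keep-current behavior in between are illustrative choices, not taken from the paper.

```python
def next_scheme(current, load, low=0.4, high=0.7):
    """Pick the load-sharing scheme for the next period based on the
    measured system load. Between the two thresholds the current scheme
    is kept, so a node does not oscillate around a single threshold."""
    if load >= high:
        return "receiver-initiated"
    if load <= low:
        return "sender-initiated"
    return current
```

A node would call this periodically with a smoothed load estimate and switch its probing behavior accordingly.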

• 97. Svensson, Anders
History, an Intelligent Load Sharing Filter1990Conference paper (Refereed)

The author proposes a filter component to be included in a load-sharing algorithm to detect short-lived jobs not worth considering for remote execution. Three filters are presented. One filter, called History, detects short-lived jobs by using job names and statistics based on previous executions. Job traces are collected from diskless workstations connected by a local area network and supported by a distributed file system. Trace-driven simulation is then used to evaluate History with respect to the other filters. Two load-sharing algorithms show significant improvement in the mean job response ratio when the History filter is added.
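The History idea, predicting a job's lifetime from statistics of past executions keyed by job name, can be sketched as a per-name running average with a cutoff. The threshold value, class name and method names below are hypothetical, not from the paper.

```python
from collections import defaultdict

class HistoryFilter:
    """Predict short-lived jobs from statistics of previous executions,
    keyed by job name, in the spirit of the History filter."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold                  # seconds (illustrative)
        self.stats = defaultdict(lambda: [0, 0.0])  # name -> [count, total time]

    def record(self, name, runtime):
        entry = self.stats[name]
        entry[0] += 1
        entry[1] += runtime

    def worth_migrating(self, name):
        count, total = self.stats[name]
        if count == 0:
            return True   # no history: let the job through the filter
        return total / count >= self.threshold
```

Jobs whose historical mean runtime falls below the threshold are filtered out before the load-sharing algorithm considers them for remote execution, avoiding migration overhead that would exceed the job's own runtime.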

• 98.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Optimization of Circuit Switched Networks1996Conference paper (Refereed)
• 99.
Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
The Adaptive Cross Validation Method-applied to the Control of Circuit Switched Networks1996Conference paper (Refereed)

The adaptive cross validation (ACV) method is a very general method for system performance optimization. It can be used in the design phase to determine how a system should be designed, or during the operational phase to dynamically determine the control of the system. A novel formalism, suitable for presenting mathematical models in technical publications, is used to formally describe the studied system, a circuit switched network. The generality of the method implies that it is valid for general distributions describing inter-arrival and holding times, as well as for complex routing methods which make an analytical approach infeasible. The method is used for real-time control of two routing algorithm parameters. The results are very satisfactory.

• 100. Svensson, Anders
The Adaptive Cross Validation Method: Design and Control of Dynamical Systems1996Conference paper (Refereed)

A new approach to finding optimal settings of model parameters, as well as optimal models of dynamical systems, is presented. The Adaptive Cross Validation (ACV) method is based on a number of well known ideas which are combined into a general optimization tool. It can be used for both design (off-line) and control (on-line) of widespread applications, which can be both continuous and discrete. The method is specially suited for modular optimization problems. A new mathematical model formalism for describing systems is also introduced. A controlled system application, a circuit-switched network, is used as an example to clarify the method.
