Search results 1 - 50 of 141
  • 1. Algestam, Henrik
    et al.
    Offesson, Marcus
    Lundberg, Lars
    Using components to increase maintainability in a large telecommunication system, 2002. Conference paper (Refereed)
  • 2.
    Aziz, Hussein Muzahim
    et al.
    Blekinge Institute of Technology, School of Computing.
    Fiedler, Markus
    Blekinge Institute of Technology, School of Computing.
    Grahn, Håkan
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Compressing Video Based on Region of Interest, 2013. Conference paper (Refereed)
    Abstract [en]

    Real-time video streaming suffers from bandwidth limitations that cannot handle the high amount of video data. To reduce the amount of data to be streamed, we propose an adaptive technique that crops the important part of each video frame, called the Region of Interest (ROI), and drops the parts outside it. The Sum of Absolute Differences (SAD) is computed over consecutive video frames on the server side to identify and extract the ROI. The ROIs are extracted from the frames that lie between reference frames according to three scenarios; the scenarios are designed to position the reference frames within the video frame sequence. On the mobile side, linear interpolation from the reference frames reconstructs the parts outside the ROI. We evaluate the proposed approach for the three scenarios by looking at the size of the compressed videos and by measuring video quality using the Mean Opinion Score (MOS). The results show that our technique significantly reduces the amount of data to be streamed over wireless networks while providing acceptable video quality to the mobile viewers.

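    [Sketch] A minimal illustration of the SAD-based ROI idea from entry 2 above, in Python with NumPy. Frames are assumed to be 8-bit grayscale arrays; the block size and motion threshold are invented for illustration, not taken from the paper.

        import numpy as np

        def sad_map(prev, cur, block=16):
            """Per-block Sum of Absolute Differences between consecutive frames."""
            diff = np.abs(cur.astype(np.int32) - prev.astype(np.int32))
            h = (diff.shape[0] // block) * block
            w = (diff.shape[1] // block) * block
            return (diff[:h, :w]
                    .reshape(h // block, block, w // block, block)
                    .sum(axis=(1, 3)))

        def roi_box(prev, cur, block=16, threshold=4096):
            """Bounding box (y0, x0, y1, x1) of blocks whose SAD exceeds the threshold."""
            active = sad_map(prev, cur, block) > threshold
            ys, xs = np.nonzero(active)
            if ys.size == 0:
                return None  # no significant motion between the two frames
            return (ys.min() * block, xs.min() * block,
                    (ys.max() + 1) * block, (xs.max() + 1) * block)
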
  • 3. Aziz, Hussein Muzahim
    et al.
    Fiedler, Markus
    Blekinge Institute of Technology, School of Computing.
    Grahn, Håkan
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Eliminating the Effects of Freezing Frames on User Perceptive by Using a Time Interleaving Technique, 2012. In: Multimedia Systems, ISSN 0942-4962, E-ISSN 1432-1882, Vol. 18, no 3, p. 251-262. Article in journal (Refereed)
    Abstract [en]

    Streaming video over a wireless network faces several challenges, such as high packet error rates, bandwidth variations, and delays, which can have negative effects on the video streaming; the viewer will perceive a frozen picture for certain durations due to loss of frames. In this study, we propose a Time Interleaving Robust Streaming (TIRS) technique to significantly reduce the frozen video problem and provide satisfactory quality for the mobile viewer. This is done by reordering the streaming video frames as groups of even and odd frames. The objective of streaming the video in this way is to avoid losing a sequence of neighbouring frames in case of a long interruption. We evaluate our approach by using a user panel and mean opinion score (MOS) measurements, where the users observe three levels of frame losses. The results show that our technique significantly improves the smoothness of the video on the mobile device in the presence of frame losses, while the transmitted data increase by only about 9% (due to reduced time locality).

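    [Sketch] The reordering idea behind TIRS, as described in entry 3 above: stream alternating groups of even- and odd-indexed frames so that a burst loss costs every other frame rather than a contiguous run. The group size is an invented parameter; the paper's actual grouping may differ.

        def interleave(frames, group=8):
            """Reorder frames into even-indexed then odd-indexed halves per window."""
            out = []
            for start in range(0, len(frames), 2 * group):
                window = frames[start:start + 2 * group]
                out.extend(window[::2])   # even positions first
                out.extend(window[1::2])  # then odd positions
            return out

        def deinterleave(frames, group=8):
            """Receiver-side inverse of interleave()."""
            out = []
            for start in range(0, len(frames), 2 * group):
                window = frames[start:start + 2 * group]
                half = (len(window) + 1) // 2
                evens, odds = window[:half], window[half:]
                for e, o in zip(evens, odds):
                    out.extend([e, o])
                if len(evens) > len(odds):
                    out.append(evens[-1])
            return out

        assert deinterleave(interleave(list(range(20)))) == list(range(20))
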
  • 4. Aziz, Hussein Muzahim
    et al.
    Fiedler, Markus
    Grahn, Håkan
    Lundberg, Lars
    Streaming Video as Space-Divided Sub-Frames over Wireless Networks, 2010. Conference paper (Refereed)
    Abstract [en]

    Real-time video streaming suffers from lost, delayed, and corrupted frames due to transmission over error-prone channels. As an effect of that, the user may notice a frozen picture on the screen. In this work, we propose a technique to eliminate frozen video and provide satisfactory quality to the mobile viewer by splitting the video frames into sub-frames. Multiple description coding (MDC) is used to generate multiple bitstreams based on frame splitting, transmitted over multiple channels. We evaluate our approach by using mean opinion score (MOS) measurements. MOS is used to evaluate our scenarios, where the users observe three levels of frame losses for real-time video streaming. The results show that our technique significantly improves the video smoothness on the mobile device in the presence of frame losses during transmission.

  • 5. Aziz, Hussein Muzahim
    et al.
    Grahn, Håkan
    Lundberg, Lars
    Eliminating the Freezing Frames for the Mobile User over Unreliable Wireless Networks, 2009. Conference paper (Refereed)
    Abstract [en]

    The main challenge of real-time video streaming over a wireless network is to provide good quality of service (QoS) to the mobile viewer. However, wireless networks have a limited bandwidth that may not be able to handle the continuous video frame sequence, and video frames may be dropped or corrupted during transmission. This can severely affect the video quality. In this study we propose a mechanism to eliminate frozen video and provide satisfactory quality for the mobile viewer. This is done by splitting the video frames into sub-frames that are transmitted over multiple channels. We present a subjective test based on the Mean Opinion Score (MOS). MOS is used to evaluate our scenarios, where the users observe three levels of frame losses for real-time video streaming. The results indicate that our technique significantly improves the perceived video quality.

  • 6. Aziz, Hussein Muzahim
    et al.
    Grahn, Håkan
    Lundberg, Lars
    Sub-Frame Crossing for Streaming Video over Wireless Networks, 2010. Conference paper (Refereed)
    Abstract [en]

    Transmitting real-time streaming video over a wireless network cannot guarantee that all frames are received by the mobile devices. The characteristics of a wireless network in terms of available bandwidth, frame delay, and frame losses cannot be known in advance. In this work, we propose a new mechanism for streaming video over a wireless channel that prevents freezing frames on the mobile devices. This is done by splitting each video frame into two sub-frames and combining each of them with a sub-frame from a different sequence position in the streaming video. In case of a lost or dropped frame, there is still a possibility that the other half (sub-frame) will be received by the mobile device. The received sub-frames are then reconstructed to their original shape. A rate adaptation mechanism is also highlighted in this work. We show that the server can skip up to 50% of the sub-frames while we are still able to reconstruct the received sub-frames and eliminate frozen pictures on the mobile device.

  • 7. Aziz, Hussein Muzahim
    et al.
    Lundberg, Lars
    Graceful degradation of mobile video quality over wireless network, 2009. Conference paper (Refereed)
    Abstract [en]

    Real-time video transmission over wireless channels has become an important topic in wireless communication because the limited bandwidth of a wireless network must handle a high amount of video frames. Video frames must arrive at the client before their playout time, with enough time to display the contents of the frames. Real-time video transmission is particularly sensitive to delay, as it has a strict bounded end-to-end delay constraint; video applications impose stringent requirements on communication parameters, and frames lost or dropped due to excessive delay are the primary factors affecting the user-perceived quality. In this study we investigate ways of obtaining a graceful and controlled degradation of the quality by introducing redundancy in the frame sequence and compensating for this by limiting colour coding and resolution. The effect is a double streaming mechanism: we obtain less freezing at the expense of limited colours and resolution. Our experiments apply to scenarios where users observe three types of dropping load for real-time video streaming. The mean opinion score is used as the measurement tool to evaluate the video quality, and we demonstrate and argue that the proposed technique improves the user-perceived video quality.

  • 8. Baca, Dejan
    et al.
    Carlsson, Bengt
    Lundberg, Lars
    Evaluating the Cost Reduction of Static Code Analysis for Software Security, 2008. Conference paper (Refereed)
    Abstract [en]

    Automated static code analysis is an efficient technique to increase the quality of software during early development. This paper presents a case study in which mature software with known vulnerabilities is subjected to a static analysis tool. The value of the tool is estimated based on reported failures from customers. An average of 17% cost savings would have been possible if the static analysis tool had been used. The tool also had a 30% success rate in detecting known vulnerabilities and at the same time found 59 new vulnerabilities in the three examined products.

  • 9.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Carlsson, Bengt
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Improving software security with static automated code analysis in an industry setting, 2013. In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 43, no 3, p. 259-279. Article in journal (Refereed)
    Abstract [en]

    Software security can be improved by identifying and correcting vulnerabilities. In order to reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach for early detection. So far, only a few empirical studies have been conducted in an industrial context to evaluate static automated code analysis. A case study was conducted to evaluate static code analysis in industry, focusing on defect detection capability, deployment, and usage of static automated code analysis with a focus on software security. We identified that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, as did the developers' perception of the tool's merit. Classifying the warnings from the tool was harder for the developers than correcting them. The correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose to use multiple persons (at least two) in classifying a warning; the same goes for deciding how to act based on the warning.

  • 10. Baca, Dejan
    et al.
    Petersen, Kai
    Carlsson, Bengt
    Lundberg, Lars
    Static Code Analysis to Detect Software Security Vulnerabilities: Does Experience Matter?, 2009. Conference paper (Refereed)
    Abstract [en]

    Code reviews with static analysis tools are today recommended by several security development processes. Developers are expected to use the tools' output to detect the security threats they themselves have introduced in the source code. This approach assumes that all developers can correctly identify a warning from a static analysis tool (SAT) as a security threat that needs to be corrected. We have conducted an industry experiment with a state-of-the-art static analysis tool and real vulnerabilities. We found that average developers do not correctly identify the security warnings, and only developers with specific experience are better than chance at detecting the security vulnerabilities. Specific SAT experience more than doubled the number of correct answers, and a combination of security experience and SAT experience almost tripled the number of correct security answers.

  • 11.
    Boddapati, Venkatesh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Petef, Andrej
    Sony Mobile Communications AB, SWE.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying environmental sounds using image recognition networks, 2017. In: Procedia Computer Science / [ed] Toro C., Hicks Y., Howlett R.J., Zanni-Merk C., Frydman C., Jain L.C., Elsevier B.V., 2017, Vol. 112, p. 2048-2056. Conference paper (Refereed)
    Abstract [en]

    Automatic classification of environmental sounds, such as dog barking and glass breaking, is becoming increasingly interesting, especially for mobile devices. Most mobile devices contain both cameras and microphones, and companies that develop mobile devices would like to provide functionality for classifying both videos/images and sounds. In order to reduce the development costs one would like to use the same technology for both of these classification tasks. One way of achieving this is to represent environmental sounds as images, and use an image classification neural network when classifying images as well as sounds. In this paper we consider the classification accuracy for different image representations (Spectrogram, MFCC, and CRP) of environmental sounds. We evaluate the accuracy for environmental sounds in three publicly available datasets, using two well-known convolutional deep neural networks for image recognition (AlexNet and GoogLeNet). Our experiments show that we obtain good classification accuracy for the three datasets.

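    [Sketch] The sound-as-image idea from entry 11 above, assuming the librosa library: an environmental sound clip becomes a log-scaled mel spectrogram that an image CNN such as AlexNet or GoogLeNet can consume. Parameter values are illustrative, not the paper's.

        import numpy as np
        import librosa

        def sound_to_image(path, sr=22050, n_mels=128):
            """Load a clip and return a 2-D array that behaves like a grayscale image."""
            y, sr = librosa.load(path, sr=sr)
            mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
            img = librosa.power_to_db(mel, ref=np.max)  # log scale (dB)
            # Normalise to [0, 1] so the CNN sees pixel-like intensities.
            return (img - img.min()) / (img.max() - img.min() + 1e-9)
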
  • 12.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kota, Sai M. Harsha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sköld, Lars
    Telenor, SWE.
    Analysis of Organizational Structure through Cluster Validation Techniques: Evaluation of email communications at an organizational level, 2017. In: 2017 17th IEEE International Conference on Data Mining Workshops (ICDMW 2017) / [ed] Gottumukkala, R., Ning, X., Dong, G., Raghavan, V., Aluru, S., Karypis, G., Miele, L., Wu, X., IEEE, 2017, p. 170-176. Conference paper (Refereed)
    Abstract [en]

    In this work, we report an ongoing study that aims to apply cluster validation measures for analyzing email communications at the organizational level of a company. This analysis can be used to evaluate the company structure and to produce recommendations for structural improvements. Our initial evaluations, based on data in the form of email logs and the organizational structure of a large European telecommunication company, show that cluster validation techniques can be useful tools for assessing the organizational structure through objective analysis of internal email communications, and for simulating and studying different reorganization scenarios.

  • 13. Bosch, Jan
    et al.
    Lundberg, Lars
    Software architecture: Engineering quality attributes, 2003. In: Journal of Systems and Software, ISSN 0164-1212, Vol. 66, no 3, p. 183-186. Article in journal (Refereed)
  • 14. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    A Tool for Binding Threads to Processors, 2001. Conference paper (Refereed)
    Abstract [en]

    Many multiprocessor systems are based on distributed shared memory. It is often important to statically bind threads to processors in order to avoid remote memory accesses and thus improve performance. Finding a good allocation takes a long time, and it is hard to know when to stop searching for a better one. It is sometimes impossible to run the application on the target machine. The developer therefore needs a tool that finds good allocations without the target multiprocessor. We present a tool that uses a greedy algorithm and produces allocations that are more than 40% faster (on average) than those produced by a bin-packing algorithm. The number of allocations to be evaluated can be reduced by 38% with only a 2% performance loss. Finally, an algorithm is proposed that is promising in avoiding local maxima.

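    [Sketch] A simple load-based greedy allocation of the kind compared against bin packing in entry 14 above: longest thread first, always onto the currently least-loaded processor. The paper's tool also accounts for thread interactions, which this sketch ignores.

        import heapq

        def greedy_allocation(thread_times, n_processors):
            """Return {thread_id: processor} using the longest-thread-first heuristic."""
            loads = [(0.0, p) for p in range(n_processors)]  # (load, processor)
            heapq.heapify(loads)
            assignment = {}
            for tid, t in sorted(enumerate(thread_times), key=lambda x: -x[1]):
                load, p = heapq.heappop(loads)   # least-loaded processor
                assignment[tid] = p
                heapq.heappush(loads, (load + t, p))
            return assignment

        # e.g. greedy_allocation([5.0, 3.0, 3.0, 2.0, 2.0], 2) balances 7.5 vs 7.5
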
  • 15. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    An Allocation Strategy Using Shadow-processors and Simulation Technique, 2001. Conference paper (Refereed)
    Abstract [en]

    Efficient performance tuning of parallel programs for multiprocessors is often hard. When it comes to assigning threads to processors, there is not much support from commercial operating systems such as Solaris. The only known value is, in the best case, the total execution time of each thread. The developer is left with the bin-packing algorithm and no knowledge about the interactions and dependencies between the threads. In the worst case, the bin-packing algorithm assigns the threads to the processors such that the program has the longest possible execution time; a simple example of such a program is shown. We present a way of retrieving more information and a test mechanism that makes it possible to compare two different assignments of threads to processors, also with regard to the interactions and dependencies between the threads. We also propose an algorithm that gives the best assignment of threads to processors in the case above, where the bin-packing algorithm gave the worst possible assignment. The algorithm uses shadow-processors and requires more processors than the target machine during some allocation steps. Thus, a simulation tool like the one presented here must be used.

  • 16. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    Performance Optimization using Critical Path Analysis in Multithreaded Programs on Multiprocessors, 1999. Report (Other academic)
    Abstract [en]

    Efficient performance tuning of parallel programs is often hard. Optimization is often done after the program is written, as a last effort to increase performance. With sequential programs, each (executed) code segment affects the total execution time of the program; thus, any code segment that is optimized in a sequential program decreases the execution time. For a parallel program executed on a multiprocessor this is not always true, due to dependencies between the different threads. As a result, certain code segments of the execution may not affect the total execution time of the program, and optimization of such code segments will not increase performance. In this paper we present a new approach to the optimization phase: our approach finds the critical path of the multithreaded program, and optimization is only done on those specific code segments. We have implemented the critical path analysis in a performance optimization tool.

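    [Sketch] The core of critical path analysis as motivated in entry 16 above: only code segments on the longest time-weighted path through the dependency graph can shorten the total execution time. A toy DAG version in Python; the paper's tool extracts the graph from real executions.

        from functools import lru_cache

        def critical_path(duration, successors):
            """duration: {segment: time}; successors: {segment: [segments]} (a DAG).
            Returns the list of segments on the longest time-weighted path."""
            @lru_cache(maxsize=None)
            def longest_from(seg):
                best = (0.0, None)  # (tail length, next segment)
                for nxt in successors.get(seg, []):
                    tail = longest_from(nxt)[0]
                    if tail > best[0]:
                        best = (tail, nxt)
                return (duration[seg] + best[0], best[1])

            path = [max(duration, key=lambda s: longest_from(s)[0])]
            while longest_from(path[-1])[1] is not None:
                path.append(longest_from(path[-1])[1])
            return path
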
  • 17.
    Broberg, Magnus
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Lundberg, Lars
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Grahn, Håkan
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Performance Optimization using Extended Critical Path Analysis in Multithreaded Programs on Multiprocessors, 2001. In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, Vol. 61, no 1, p. 115-136. Article in journal (Refereed)
  • 18. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    Selecting simulation models when predicting parallel program behavior, 2002. Conference paper (Refereed)
    Abstract [en]

    The use of multiprocessors is an important way to increase the performance of a parallel program. This means that the program has to be parallelized to make use of the multiple processors. Parallelization is unfortunately not an easy task, so development tools supporting parallel programs are important. Further, it is the customer who decides the number of processors in the target machine, and as a result the developer has to make sure that the program runs efficiently on any number of processors. Many simulation tools support the developer by simulating any number of processors and predicting the performance based on a uniprocessor execution trace. This popular technique gives reliable results in many cases. Based on our experience from developing such a tool, and from studying other (commercial) tools, we have identified three basic simulation models. Due to the flexibility of general-purpose programming languages and operating systems, like C/C++ and Sun Solaris, two of the models may cause deadlock in a deadlock-free program. Selecting the appropriate model is difficult; we show that the three models have significantly different accuracy when using real-world programs. Based on the findings, we present a practical scheme for when to use the models.

  • 19. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    Selecting Simulation Models when Predicting Parallel Program Behaviour, 2002. Conference paper (Refereed)
  • 20. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    Selecting Simulation Models when Predicting Parallel Program Behaviour, 2002. Report (Other academic)
    Abstract [en]

    The use of multiprocessors is an important way to increase the performance of a supercomputing program. This means that the program has to be parallelized to make use of the multiple processors. Parallelization is unfortunately not an easy task, so development tools supporting parallel programs are important. Further, it is the customer who decides the number of processors in the target machine, and as a result the developer has to make sure that the program runs efficiently on any number of processors. Many simulation tools support the developer by simulating any number of processors and predicting the performance based on a uniprocessor execution trace. This popular technique gives reliable results in many cases. Based on our experience from developing such a tool, and from studying other (commercial) tools, we have identified three basic simulation models. Due to the flexibility of general-purpose programming languages and operating systems, like C/C++ and Sun Solaris, two of the models may cause deadlock in a deadlock-free program. Selecting the appropriate model is difficult, since we also show in this paper that the three models have significantly different accuracy when using real-world programs. Based on the findings, we present a practical scheme for when to use the three models.

  • 21. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    Visualization and performance prediction of multithreaded Solaris programs by tracing kernel threads, 1999. Conference paper (Refereed)
    Abstract [en]

    Efficient performance tuning of parallel programs is often hard. We present a performance prediction and visualization tool called VPPB. Based on a monitored uniprocessor execution, VPPB shows the (predicted) behaviour of a multithreaded program using any number of processors, and the program behaviour is visualized as a graph. The first version of VPPB was unable to handle I/O operations. This version has, by an improved tracing technique, added the possibility to trace activities at the kernel level as well. Thus, VPPB is now able to trace various I/O activities, e.g., manipulation of OS-internal buffers, physical disk I/O, socket I/O, and RPC. VPPB allows flexible performance tuning of parallel programs developed for shared memory multiprocessors using a standardized environment: C/C++ programs that use the thread package in Solaris 2.X.

  • 22. Broberg, Magnus
    et al.
    Lundberg, Lars
    Grahn, Håkan
    VPPB: A Visualization and Performance Prediction Tool for Multithreaded Solaris Programs, 1998. Conference paper (Refereed)
    Abstract [en]

    Efficient performance tuning of parallel programs is often hard. In this paper we describe an approach that uses a uniprocessor execution of a multithreaded program as reference to simulate a multiprocessor execution. The speed-up is predicted, and the program behaviour is visualized as a graph, which can be used in the performance tuning process. The simulator considers scheduling as well as hardware parameters, e.g., the thread priority, number of LWPs, and number of CPUs. The visualization part shows the simulated execution in two graphs: one showing the threads' behaviour over time and the other the amount of parallelism over time. In the first graph it is possible to relate an event in the graph to the code line causing the event. Validation using a Sun multiprocessor with eight processors and five scientific parallel applications shows that the speed-up predictions are within +/-6% of a real execution.

  • 23. Broberg, Magnus
    et al.
    Lundberg, Lars
    Klonowska, Kamilla
    A Method for Bounding the Minimal Completion Time in Multiprocessors, 2002. Report (Other academic)
    Abstract [en]

    The cluster systems used today usually prohibit a running process on one node from being reallocated to another node. A parallel program developer thus has to decide how processes should be allocated to the nodes in the cluster. Finding an allocation that results in minimal completion time is NP-hard, and (non-optimal) heuristic algorithms have to be used. One major drawback with heuristics is that we do not know whether the result is close to optimal or not. In this paper we present a method for finding a guaranteed minimal completion time for a given program. The method can be used as a bound that helps the user determine when it is worthwhile to continue the heuristic search. Based on some parameters derived from the program, as well as some parameters describing the hardware platform, the method produces the minimal completion time bound. The method includes an aggressive branch-and-bound algorithm that has been shown to reduce the search space to 0.0004%. A practical demonstration of the method is presented using a tool that automatically derives the necessary program parameters and produces the bound without the need for a multiprocessor. This makes the method accessible for practitioners.

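    [Sketch] The branch-and-bound method of entry 23 above computes a much tighter bound, but the textbook baseline it improves on is easy to state: no allocation can finish faster than the average load per node, nor faster than the longest single process.

        def naive_completion_bound(process_times, n_nodes):
            """Classic lower bounds on the minimal completion time (makespan)."""
            return max(sum(process_times) / n_nodes, max(process_times))

        # e.g. naive_completion_bound([4, 3, 3, 2], 2) == 6.0
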
  • 24.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Energy-Aware Adaptation in Managed Cassandra Datacenters, 2016. In: Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC / [ed] Gupta I., Diao Y., IEEE, 2016, p. 60-71. Conference paper (Refereed)
    Abstract [en]

    Today, Apache Cassandra, a highly scalable and available NoSQL datastore, is widely used by enterprises of all sizes and for application areas that range from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine-tuning, and operation of Cassandra datacenters. As for all complex services, human-assisted management of a multi-tenant Cassandra datacenter is unrealistic; rather, there is a growing demand for autonomic management solutions. In this paper, we present an optimal energy-aware adaptation model for managed Cassandra datacenters that modifies the system configuration by orchestrating three different actions: horizontal scaling, vertical scaling, and energy-aware placement. The model is built from a real case based on real application data from Ericsson AB. We compare the performance of the optimal adaptation with two heuristics that avoid system perturbations due to re-configuration actions triggered by subscription of new tenants and/or changes in the SLA. One of the heuristics is local optimisation; the second is a best-fit-decreasing algorithm, selected as a reference point because it is representative of a wide range of research and practical solutions. The main finding is that the heuristics' performance depends on the scenario and workload, and neither dominates in all cases. Besides, in high-load scenarios, the suboptimal system configuration obtained with a heuristic adaptation policy introduces a penalty in electric energy consumption in the range [+25%, +50%] compared with the energy consumed by an optimal system configuration.

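    [Sketch] The best-fit-decreasing reference heuristic named in entry 24 above, in its generic bin-packing form: place the most demanding VMs first, each onto the node that leaves the least slack, powering on a new node only when nothing fits. Capacities and demands are abstract illustrative units.

        def best_fit_decreasing(vm_demands, node_capacity):
            """Return ([(demand, node_index), ...], number_of_nodes_used)."""
            free = []        # remaining capacity of each powered-on node
            placement = []
            for demand in sorted(vm_demands, reverse=True):
                fits = [(free[i] - demand, i) for i in range(len(free))
                        if free[i] >= demand]
                if fits:
                    _, i = min(fits)                     # tightest fit wins
                    free[i] -= demand
                else:
                    free.append(node_capacity - demand)  # power on a new node
                    i = len(free) - 1
                placement.append((demand, i))
            return placement, len(free)
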
  • 25.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Optimal adaptation for Apache Cassandra, 2016. In: SoSeMC workshop at 13th IEEE International Conference on Autonomic Computing / [ed] IEEE, IEEE Computer Society, 2016. Conference paper (Refereed)
  • 26.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Energy-Aware Adaptation Model for Big Data Platforms, 2016. In: 2016 IEEE International Conference on Autonomic Computing (ICAC) / [ed] IEEE, IEEE, 2016, p. 349-350. Conference paper (Refereed)
    Abstract [en]

    Platforms for big data include mechanisms and tools to model, organize, store, and access big data (e.g. Apache Cassandra, HBase, Amazon SimpleDB, Dynamo, Google BigTable). Resource management for those platforms is a complex task and must also account for multi-tenancy and infrastructure scalability. Human-assisted control of big data platforms is unrealistic, and there is a growing demand for autonomic solutions. In this paper we propose a QoS- and energy-aware adaptation model designed to cope with the real case of a Cassandra-as-a-Service provider.

  • 27. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Company-wide Implementation of Metrics for Early Software Fault Detection, 2007. Conference paper (Refereed)
  • 28. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Identification of test process improvements by combining fault trigger classification and faults-slip-through measurement, 2005. Conference paper (Refereed)
  • 29. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Introducing Test Automation and Test-Driven Development: An Experience Report, 2004. Conference paper (Refereed)
  • 30. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Quality Impact of Introducing Component-Level Test Automation and Test-Driven Development, 2007. Conference paper (Refereed)
  • 31. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Results from Introducing Component-Level Test Automation and Test-Driven Development, 2006. In: Journal of Systems and Software, ISSN 0164-1212, Vol. 79, no 7, p. 1001-1014. Article in journal (Refereed)
  • 32. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Using Fault Slippage Measurement for Monitoring Software Process Quality during Development, 2006. Conference paper (Refereed)
  • 33. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Olsson, David
    Automated Software Component Verification and Test-Driven Development, 2003. Conference paper (Refereed)
  • 34. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Olsson, David
    Introducing Test Automation and Test-Driven Development: An Experience Report, 2005. In: Electronic Notes in Theoretical Computer Science, ISSN 1571-0661, E-ISSN 1571-0661, Vol. 116, p. 3-15. Article in journal (Refereed)
  • 35. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    A model for software rework reduction through a combination of anomaly metrics, 2008. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 81, no 11, p. 1968-1982. Article in journal (Refereed)
    Abstract [en]

    Analysis of anomalies reported during testing of a project can tell a lot about how well the processes and products work. Still, organizations rarely use anomaly reports for more than progress tracking, although projects commonly spend a significant part of the development time on finding and correcting faults. This paper presents an anomaly metrics model that organizations can use for identifying improvements in the development process, i.e. to reduce the cost and lead-time spent on rework-related activities and to improve the quality of the delivered product. The model is the result of a four-year research project performed at Ericsson.

  • 36. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    Determining the Improvement Potential of a Software Development Organization through Fault Analysis: A Method and a Case Study, 2004. Conference paper (Refereed)
    Abstract [en]

    Successful software process improvement depends on the ability to analyze past projects and determine which parts of the process could become more efficient. One typical data source is the faults that are reported during product development. Starting from an industrial need, this paper provides a solution based on a measure called faults-slip-through, i.e. a measure that tells which faults should have been found in earlier phases. From the measure, the improvement potential of different parts of the development process is estimated by calculating the cost of the faults that slipped through the phase where they should have been found. The usefulness of the method was demonstrated by applying it to two completed development projects at Ericsson AB. The results show that the implementation phase had the largest improvement potential, since it caused the largest faults-slip-through cost to later phases, i.e. 81 and 84 percent of the total improvement potential in the two studied projects.

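    [Sketch] The improvement-potential arithmetic described in entry 36 above, reduced to its core: for each phase, multiply the number of faults that slipped out of it by the extra cost of fixing them late. All figures in the example are invented, not Ericsson's data.

        def improvement_potential(slipped, late_cost, early_cost):
            """slipped: {phase: #faults that should have been found in that phase};
            late_cost / early_cost: average fix cost per fault, per phase."""
            return {phase: n * (late_cost[phase] - early_cost[phase])
                    for phase, n in slipped.items()}

        # e.g. 120 faults slipping out of implementation, 8h late fix vs 1h early fix:
        # improvement_potential({'implementation': 120}, {'implementation': 8.0},
        #                       {'implementation': 1.0}) -> {'implementation': 840.0}
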
  • 37. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    Faults-slip-through – A Concept for Measuring the Efficiency of the Test Process, 2006. In: Software Process: Improvement and Practice, ISSN 1077-4866, Vol. 11, no 1, p. 47-59. Article in journal (Refereed)
  • 38. Diestelkamp, Wolfgang
    et al.
    Lundberg, Lars
    Performance tuning a generic database system by data striping, 2000. Conference paper (Refereed)
    Abstract [en]

    In this paper we briefly present an existing database system with its advantages and disadvantages. The system uses an approach that significantly improves flexibility and maintainability. A performance model has been developed for the system, making it possible to quantitatively assess the performance reduction caused by the increased maintainability. Validations using real-world scenarios and data show that the performance model is very accurate. The model and validation show that the performance loss of the flexible approach is substantial. Based on insights gained from the performance model, we improve performance by using different data-striping techniques. We show the effect of standard RAID 0 striping, and then further improve our results by using range partitioning.

  • 39. Dittrich, Yvonne
    et al.
    Lindeberg, Olle
    Ludvigsson, Ingela
    Lundberg, Lars
    Wessman, Bengt
    Diestelkamp, Wolfgang
    Tillman, Marie
    Design for Change, 2001. Report (Other academic)
    Abstract [en]

    The report summarises the first year of the research project 'Design for Design in Use of Database Applications'. It focuses on end user tailoring and adaptable systems.

  • 40. Dittrich, Yvonne
    et al.
    Lundberg, Lars
    Lindeberg, Olle
    End-User Development by Tailoring. Blurring the border between Use and Development, 2003. Conference paper (Refereed)
  • 41. Elwing, Robert
    et al.
    Paulsson, Ulf
    Lundberg, Lars
    Performance of SOAP in Web Service Environment Compared to CORBA, 2002. Conference paper (Refereed)
    Abstract [en]

    Web Services is a new concept that promises flexibility and interconnection between different systems. The communication in Web Services uses SOAP (Simple Object Access Protocol), which is based on XML. Together with an industrial partner, we have made experiments with SOAP in a Web Service environment to find out the response time of SOAP compared to CORBA. It turns out that a direct and naive use of SOAP would result in a response time degradation of a factor of 400 compared to CORBA. We identified the major reasons for the poor performance of SOAP and evaluated some performance improvement techniques. After applying these techniques, the performance of CORBA is 7 times better than SOAP.

  • 42.
    Fiedler, Markus
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Arlos, Patrik
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    Pettersson, Mats
    Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
    QoE-based Cross-Layer Design of Mobile Video Systems: Challenges and Concepts, 2009. Conference paper (Refereed)
    Abstract [en]

    This conceptual paper focuses on revealing challenges and offering concepts associated with the incorporation of the Quality of Experience (QoE) paradigm into the design of mobile video systems. The corresponding design framework combines the application, middleware, and networking layers in a unique cross-layer approach, in which all layers jointly analyse the quality of the video and its delivery in the face of volatile conditions. Particular ingredients of the framework are efficient video processing, advanced real-time scheduling, and reduced-reference metrics on the application and network layers.

  • 43. Forsman, Mattias
    et al.
    Glad, Andreas
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Algorithms for Automated Live Migration of Virtual Machines, 2015. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 101, p. 110-126. Article in journal (Refereed)
    Abstract [en]

    We present two strategies to balance the load in a system with multiple virtual machines (VMs) through automated live migration. When the push strategy is used, overloaded hosts try to migrate workload to less loaded nodes. On the other hand, when the pull strategy is employed, the light-loaded hosts take the initiative to offload overloaded nodes. The performance of the proposed strategies was evaluated through simulations. We have discovered that the strategies complement each other, in the sense that each strategy comes out as “best” under different types of workload. For example, the pull strategy is able to quickly re-distribute the load of the system when the load is in the range low-to-medium, while the push strategy is faster when the load is medium-to-high. Our evaluation shows that when adding or removing a large number of virtual machines in the system, the “best” strategy can re-balance the system in 4–15 minutes.

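    [Sketch] One round of the push strategy from entry 43 above: an overloaded host tries to move its smallest VMs to the currently least-loaded host. The watermarks are invented thresholds; the paper's algorithms and parameters are more elaborate.

        def push_round(hosts, high=0.85, low=0.60):
            """hosts: {name: [vm loads as fractions of capacity]}. Returns migrations."""
            migrations = []
            for src in list(hosts):
                while sum(hosts[src]) > high and hosts[src]:
                    vm = min(hosts[src])                       # cheapest VM to move
                    dst = min(hosts, key=lambda h: sum(hosts[h]))
                    if dst == src or sum(hosts[dst]) + vm > low:
                        break                                  # no host with headroom
                    hosts[src].remove(vm)
                    hosts[dst].append(vm)
                    migrations.append((vm, src, dst))
            return migrations
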
  • 44. Häggander, Daniel
    et al.
    Liden, P
    Lundberg, Lars
    A method for automatic optimization of dynamic memory management in C++, 2001. Conference paper (Refereed)
    Abstract [en]

    In C++, the memory allocator is often a bottleneck that severely limits performance and scalability on multiprocessor systems. The traditional solution is to optimize the C library memory allocation routines. An alternative is to attack the problem on the source code level, i.e. modify the application's source code. Such an approach makes it possible to achieve more efficient and customized memory management. Implementing and maintaining such source code optimizations is, however, both laborious and costly, since it is a manual procedure. Applications developed using object-oriented techniques, such as frameworks and design patterns, tend to use a great deal of dynamic memory to offer dynamic features. These features are mainly used for maintainability reasons, and temporal locality often characterizes the run-time behavior of the dynamic memory operations. We have implemented a pre-processor-based method, named Amplify, which in a completely automated procedure optimizes (object-oriented) C++ applications to exploit the temporal locality in dynamic memory usage. Test results show that Amplify can obtain significant speed-up for synthetic applications and that it was useful for a commercial product.

  • 45. Häggander, Daniel
    et al.
    Lundberg, Lars
    Attacking the dynamic memory problem for SMPs, 2000. Conference paper (Refereed)
    Abstract [en]

    We have studied three large object oriented telecommunication server applications. In order to obtain high performance, these applications are executed on SMPs. Dynamic memory management was a serious serialization bottleneck in all applications. The dynamic memory problem was attacked in a number of ways for the three applications, and in this paper we summarize our experiences from these attacks. There are two basic ways of attacking the problem: either to reduce the cost of using dynamic memory or to reduce the usage of dynamic memory. The problem can also be attacked at different levels, i.e. the operating system level, the implementation level, and the software architecture and design level. Each of the investigated ways of attacking the dynamic memory problem has its advantages and disadvantages. We argue that the attack should focus on the operating system level when dealing with existing code and on the software architecture level when developing new applications.

  • 46. Häggander, Daniel
    et al.
    Lundberg, Lars
    Matton, J
    Quality attribute conflicts: Experiences from a large telecommunication application, 2001. Conference paper (Refereed)
    Abstract [en]

    Modern telecommunication applications must provide high availability and performance. They must also be maintainable in order to reduce the maintenance cost and time-to-market for new versions. Previous studies have shown that the ambition to build maintainable systems may result in very poor performance. Here we evaluate an application called SDP pre-paid and show that the ambition to build systems with high performance and availability can lead to a complex software design with poor maintainability. We show that more than 85% of the SDP code is due to performance and availability optimizations. By implementing an SDP prototype with an alternative architecture, we show that the code size can be reduced by an order of magnitude by removing the performance and availability optimizations from the source code and instead using modern fault-tolerant hardware and third-party software. The performance and availability of the prototype are at least as good as the old SDP's. The hardware and third-party software cost is only 20-30% higher for the prototype. We also define three guidelines that help us focus the additional hardware investments on the parts where they are really needed.

  • 47.
    Karlsson, Tim
    et al.
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Performance evaluation of Cauchy Reed-Solomon coding on multicore systems, 2013. Conference paper (Refereed)
    Abstract [en]

    We have evaluated the performance of Cauchy Reed-Solomon (CRS) encoding of data blocks with sizes from 32 kB to 256 MB. The performance measurements are done on an Intel processor with 4 cores and integrated graphics support. We also used an AMD graphics card in our performance evaluations. Three versions of the CRS algorithm were developed: one sequential version and two OpenCL versions. The OpenCL versions have been targeted to the CPU, the integrated GPU, and the AMD graphics card. The measurements show that the graphics card performs better than the CPU for large buffers. However, the highest throughput is obtained for one of the CPU versions and moderate buffer sizes (around 1 MB).

  • 48.
    Klonowska, Kamilla
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Lennerstad, Håkan
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Svahnberg, Charlie
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Optimal recovery schemes in fault tolerant distributed computing, 2005. In: Acta Informatica, ISSN 0001-5903, E-ISSN 1432-0525, Vol. 41, no 6, p. 341-365. Article in journal (Refereed)
    Abstract [en]

    Clusters and distributed systems offer fault tolerance and high performance through load sharing. When all n computers are up and running, we would like the load to be evenly distributed among the computers. When one or more computers break down, the load on these computers must be redistributed to other computers in the system. The redistribution is determined by the recovery scheme. The recovery scheme is governed by a sequence of integers modulo n. Each sequence guarantees minimal load on the computer that has maximal load, even when the most unfavorable combinations of computers go down. We calculate the best possible such recovery schemes for any number of crashed computers by an exhaustive search, where brute-force testing is avoided by a mathematical reformulation of the problem and a branch-and-bound algorithm. The search nevertheless has a high complexity. Optimal sequences, and thus a corresponding optimal bound, are presented for a maximum of twenty-one computers in the distributed system or cluster.

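    [Sketch] How a recovery scheme of the kind described in entry 48 above could be followed at run time, assuming the scheme is given as a sequence of step sizes modulo n. The interpretation of the sequence here is illustrative; the paper's contribution is deriving the optimal sequences themselves.

        def recovery_target(crashed, sequence, n, alive):
            """Walk from the crashed node along the recovery sequence (mod n)
            until a live node is found; that node takes over the load."""
            offset = 0
            for step in sequence:
                offset += step
                candidate = (crashed + offset) % n
                if candidate in alive:
                    return candidate
            raise RuntimeError('no live node reachable with this sequence')

        # e.g. recovery_target(0, [1, 2, 3, 4], 5, alive={2, 3, 4}) -> 3
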
  • 49. Klonowska, Kamilla
    et al.
    Lundberg, Lars
    Lennerstad, Håkan
    The maximum gain of increasing the number of preemptions in multiprocessor scheduling, 2009. In: Acta Informatica, ISSN 0001-5903, Vol. 46, no 4, p. 285-295. Article in journal (Refereed)
    Abstract [en]

    We consider the optimal makespan C(P, m, i) of an arbitrary set P of independent jobs scheduled with i preemptions on a multiprocessor with m identical processors. We compare the ratio of such makespans for i and j preemptions, respectively, where i < j. This ratio depends on P, but we are interested in the P that maximizes the ratio, i.e. we calculate a formula for the worst-case ratio G(m, i, j), defined as G(m, i, j) = max C(P, m, i)/C(P, m, j), where the maximum is taken over all sets P of independent jobs.

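    Restating the abstract's definition of the worst-case ratio in display form (nothing added beyond the abstract's own notation):

        G(m, i, j) \;=\; \max_{P} \frac{C(P, m, i)}{C(P, m, j)}, \qquad i < j,

    where the maximum is taken over all sets P of independent jobs and C(P, m, i) is the optimal makespan of P on m identical processors with i preemptions.
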
  • 50. Klonowska, Kamilla
    et al.
    Lundberg, Lars
    Lennerstad, Håkan
    Using Modulo Golomb Rulers for Optimal Recovery Schemes in Distributed Computing, 2003. Conference paper (Refereed)