  • 1501.
    Tännström, Ulf Nilsson
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    GPGPU separation of opaque and transparent mesh polygons (2014). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    Context: By doing a depth-prepass in a tiled forward renderer, pixels can be prevented from being shaded more than once. More aggressive culling of lights that might contribute to tiles can also be performed. In order to produce artifact-free rendering, only meshes containing fully opaque polygons can be included in the depth-prepass. This limits the benefit of the depth-prepass for scenes containing large, mostly opaque, meshes that have some portions of transparency in them. Objectives: The objective of this thesis was to classify the polygons of a mesh as either opaque or transparent using the GPU, and then to separate the polygons into two different vertex buffers depending on the classification. This allows all opaque polygons in a scene to be used in the depth-prepass, potentially increasing render performance. Methods: An implementation was done in OpenCL and used to measure the time it took to separate the polygons in meshes of different complexity. The polygon separation times were then compared to the time it took to load the meshes into the game. The effect the polygon separation had on rendering times was also investigated. Results: The results showed that polygon separation times were highly dependent on the number of polygons and the texture resolution. It took roughly 350 ms to separate a mesh with 100k polygons and a 2048x2048 texture, while the same mesh with a 1024x1024 texture took a quarter of the time. In the test scene used, the rendering times differed only slightly. Conclusions: Whether the polygon separation should be performed when loading the mesh or when exporting it depends on the game. For games with a lower geometrical and textural detail level it may be feasible to separate the polygons each time the mesh is loaded, but for most games it would be recommended to perform it once, when exporting the mesh.
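
    The classification step can be sketched as follows: a minimal CPU-side Python sketch of logic the thesis implements as an OpenCL kernel. The per-triangle UV bounding-box test and the array layout are our assumptions, not the thesis' exact kernel:

```python
import numpy as np

def classify_polygons(tri_uvs, alpha, opaque_thresh=1.0):
    """Mark each triangle opaque or transparent by sampling the texture's
    alpha channel over the triangle's UV bounding box (a conservative
    approximation of the exact triangle footprint).

    tri_uvs: (n_tris, 3, 2) per-vertex UV coordinates, assumed in [0, 1]
    alpha:   (h, w) texture alpha channel, values in [0, 1]
    Returns a boolean mask: True = fully opaque triangle.
    """
    h, w = alpha.shape
    scale = np.array([w - 1, h - 1])
    opaque = np.empty(len(tri_uvs), dtype=bool)
    for i, uv in enumerate(tri_uvs):
        u0, v0 = np.floor(uv.min(axis=0) * scale).astype(int)
        u1, v1 = np.ceil(uv.max(axis=0) * scale).astype(int)
        region = alpha[v0:v1 + 1, u0:u1 + 1]
        opaque[i] = bool((region >= opaque_thresh).all())
    return opaque

def separate(triangles, opaque_mask):
    """Split triangles into two buffers, mirroring the separation step:
    the opaque ones can then be used in the depth-prepass."""
    return triangles[opaque_mask], triangles[~opaque_mask]
```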

  • 1502.
    Tärnskär, Filip
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Helgason, Jonatan
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    "Film med undertext": - "En studie om tillgänglighet av film för döva och personer med hörselnedsättning."2016Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This bachelor thesis studies the experience of movies for the Deaf and hard of hearing. The purpose is to study the possibilities of increasing the accessibility of movies when you can no longer rely on the narrative audio. The investigative part of the thesis consists of two parts. The first part is a survey where the target group, the Deaf and hard of hearing, were asked to describe their experience of movies. The second part is a practical test of the survey results in the form of a media production.

    Through being humble and inclusive, and with the use of subtitles, we can make our film more accessible to the Deaf and the hard of hearing.

  • 1503.
    Törn, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison Between Two Different Screen Space Ambient Occlusion Techniques (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. In this project a comparison between two screen space ambient occlusion techniques is presented. The techniques are Scalable AO (SAO) and Multiresolution SSAO (MSSAO), since both use mipmaps to accelerate their calculations.

    Objectives. The aim is to see how big the difference is between the results of these two techniques and a golden reference, an object-space ray-traced texture created with mental ray in Maya, and how long the computation takes.

    Methods. The comparisons between the AO textures that these techniques produce and the golden references are performed using the Structural Similarity Index (SSIM) and Perceptual Image Difference (PDIFF).

    Results. On the lowest resolution, both techniques execute in about the same time on average, except that SAO with the shortest distance is faster. The only effect caused by the shorter distance, in this case, is that more samples are taken in higher resolution mipmap levels than when longer distances are used. MSSAO achieved a better SSIM value, meaning that MSSAO is more similar to the golden reference than SAO. As the resolution increases, the SSIM values of the two techniques become more similar, with SAO getting a better value and MSSAO getting slightly worse, while the execution time for MSSAO increases more than that of SAO.

    Conclusions. It is concluded that MSSAO is better than SAO at lower resolutions, while SAO is better at larger resolutions. I would recommend that SAO be used for indoor scenes where there are not many small geometry parts close to each other that should occlude each other. MSSAO should be used for outdoor scenes with a lot of vegetation, which has many small geometry parts close to each other that should occlude. At higher resolutions MSSAO takes longer computational time compared with SAO, while at lower resolutions the computational time is similar.
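
    The SSIM comparison described in the Methods paragraph can be sketched with scikit-image. A minimal sketch, assuming the AO results and the golden reference are available as grayscale arrays (PDIFF is a separate command-line tool, not shown):

```python
from skimage.metrics import structural_similarity

def compare_to_reference(ao_image, reference):
    """Compare a screen-space AO result against a ray-traced golden
    reference. Both inputs are 2-D grayscale arrays in [0, 1]; returns
    the SSIM score, where 1.0 means the images are identical."""
    return structural_similarity(ao_image, reference, data_range=1.0)

# Hypothetical usage: the technique whose output scores the higher SSIM
# is the one closer to the golden reference.
# ssim_sao = compare_to_reference(sao_output, reference)
# ssim_mssao = compare_to_reference(mssao_output, reference)
```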

  • 1504.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Computational decision-support for railway traffic management and associated configuration challenges: An experimental study (2015). In: Journal of Rail Transport Planning & Management, ISSN 2210-9706, Vol. 5, no. 3, p. 95-109, article id 10.1016/j.jrtpm.2015.09.002. Article in journal (Refereed).
    Abstract [en]

    This paper investigates potential configuration challenges in the development of optimization-based computational re-scheduling support for railway traffic networks. The paper presents results from an experimental study on how the characteristics of different situations influence the problem formulation and the resulting re-scheduling solutions. Two alternative objective functions are applied: minimization of the delays at the end stations which exceed three minutes, and minimization of delays larger than three minutes at intermediary commercial stops and at end stations. The study focuses on the congested, single-tracked Iron Ore line located in Northern Sweden. A combinatorial optimization model adapted to the special restrictions of this line is applied to 20 different disturbance scenarios and solved using commercial optimization software. The resulting re-scheduling solutions are analyzed numerically and visually in order to better understand the practical impact of using the suggested problem formulations in this context. The results show that the two alternative objective functions result in structurally quite different re-scheduling solutions. All scenarios were solved to optimality within 1 minute or less, which indicates that commercial solvers can handle practical problems of a relevant size for this type of setting, but the type of scenario also has a significant impact on the computation time.
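
    A plausible linearized formalization of the first objective function, in our own notation (the paper's exact model is not reproduced here): with $T$ the set of trains and $d_t$ the delay in minutes of train $t$ at its end station,

    \[ \min \sum_{t \in T} z_t \qquad \text{s.t.} \quad z_t \ge d_t - 3, \quad z_t \ge 0 \quad \forall t \in T, \]

    so only delay beyond three minutes is penalized; the second objective would extend the sum over intermediary commercial stops as well.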

  • 1505.
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Configuration of an optimization-based decision support for railway traffic management in different contexts (2015). In: 6th International Conference on Railway Operations Modelling and Analysis, Tokyo, March 23-26, 2015. Conference paper (Refereed).
    Abstract [en]

    This paper investigates potential configuration challenges in the development of optimization-based computational re-scheduling support for railway traffic networks. The paper presents results from an experimental study on how the characteristics of different situations and the network influence the problem formulation and the resulting re-scheduling solutions. Two alternative objective functions are applied: a) minimization of the delays at the end stations which exceed three minutes, and b) minimization of delays larger than three minutes at intermediary commercial stops and at end stations. The study focuses on the congested, single-tracked Iron Ore line located in Northern Sweden and partially in Norway. A combinatorial optimization model adapted to the special restrictions of this line is applied and solved using commercial optimization software. 20 different disturbance scenarios are solved, and the resulting re-scheduling solutions are analyzed numerically and visually in order to better understand their practical impact. The results show that the two alternative, but similar, objective functions result in structurally quite different re-scheduling solutions. The results also show that the selected objective functions have some flaws when it comes to scheduling trains that are ahead of their schedule, either through early departure or through having a lot of margin time due to waiting at meeting/passing locations. These early trains are not always "pushed" forward unless the objective function promotes that in some way. All scenarios were solved to optimality within 1 minute or less, which indicates that commercial solvers can handle practical problems of a relevant size for this type of setting.

  • 1506. Tümmler, Christoph
    et al.
    Mival, Oli
    Lim Jumelle, Ai Keow
    Holanec, Ivo
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Social Technological Alignment Matrix (2014). Conference paper (Refereed).
    Abstract [en]

    This paper uses the term "implementation" to refer to the process of integrating a new technology into established workflows. Especially in health care this has proven to be a very critical phase, and many large-scale projects have failed on this very last mile. Although strategies such as requirements engineering, co-design and user interaction design have been proposed to reduce the risk of end-user rejection and, subsequently, project failure, there is still no tool to analyze, predict and quantify user acceptance and to identify critical areas that might be addressed before the start of the implementation phase in order to reduce resistance and increase effectiveness and efficiency.

  • 1507. Ulziit, B.
    et al.
    Warraich, Z.A.
    Gencel, Cigdem
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A conceptual framework of challenges and solutions for managing global software maintenance (2015). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 27, no. 10, p. 763-792. Article in journal (Refereed).
    Abstract [en]

    Context: The software maintenance process in globally distributed settings brings significant management challenges to software organizations. Objectives: To investigate the factors specific to managing the software maintenance process in globally distributed settings, and best practices in software organizations. Method: A systematic literature review and interviews with industry practitioners were conducted. For analysis and synthesis, the grounded theory method was used. Results: We identified a number of management challenges and mitigation strategies and then classified them under people, process, product, and technology factors. Overall, a structure of challenges and solutions, the conceptual framework, has been developed that may be used to understand and classify global maintenance challenges. Conclusions: The distributed software maintenance process has specific management challenges in relation to process, people, product, and technology. Therefore, companies performing maintenance in distributed settings should consider these factors, which are not present in the general global software development literature, although many lessons apply to both. © 2015 John Wiley and Sons, Ltd.

  • 1508.
    Umesh, Akella
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance analysis of transmission protocols for H.265 encoder (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In recent years there has been a predominant increase in multimedia services such as live streaming, Video on Demand (VoD), video conferencing, and videos for learning. Streaming high-quality video has become a challenge for service providers aiming to enhance the user's watching experience, as they cannot guarantee the perceived quality. To meet the user's expectations, it is also important to estimate the quality of video perceived by the user. Different video streaming protocols are used to stream from server to client. This research is not focused on the user's experience; it is mainly focused on the performance behavior of the protocols.

    In this study, we investigate the performance of the HTTP, RTSP and WebRTC protocols when streaming video produced by an H.265 encoder. The study addresses the objective assessment of the different protocols for VoD streaming at the network and application layers. Packet loss and delay variations are introduced at the network layer using the network emulator NetEm while streaming from server to client. The metrics at the network and application layers are collected and analyzed. The video is streamed from server to client, and the quality of the video is checked by some of the users.

    The research has been carried out using an experimental testbed. Metrics such as packet counts at the network layer and stream bitrate at the application layer are collected for the HTTP, RTSP and WebRTC protocols. Variable delays and packet losses are injected into the network to emulate real-world conditions.

    Based on the results obtained, it was found at the application layer that, out of the three protocols HTTP, RTSP and WebRTC, the stream bitrate of the video transmitted using HTTP was lower than that of the others; hence, HTTP performs better at the application layer. At the network layer, the packet counts of the transmitted video were collected on the TCP port for HTTP and on the UDP port for the RTSP and WebRTC protocols. The performance of HTTP was found to be stable in most of the scenarios. Comparing RTSP and WebRTC, more packets were counted for RTSP than for WebRTC, because the protocol and the streamer use more resources to transmit the video. Hence, RTSP and WebRTC both perform relatively well.

  • 1509.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Coordinating requirements engineering and software testing (2015). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The development of large, software-intensive systems is a complex undertaking that is generally tackled by a divide and conquer strategy. Organizations thereby face the challenge of coordinating the resources which enable the individual aspects of software development, commonly solved by adopting a particular process model. The alignment between requirements engineering (RE) and software testing (ST) activities is of particular interest as those two aspects are intrinsically connected: requirements are an expression of user/customer needs, while testing increases the likelihood that those needs are actually satisfied.

    The work in this thesis is driven by empirical problem identification, analysis and solution development towards two main objectives. The first is to develop an understanding of RE and ST alignment challenges and characteristics. Building this foundation is a necessary step that facilitates the second objective, the development of solutions relevant and scalable to industry practice that improve REST alignment.

    The research methods employed to work towards these objectives are primarily empirical. Case study research is used to elicit data from practitioners, while technical action research and field experiments are conducted to validate the developed solutions in practice.

    This thesis contains four main contributions: (1) an in-depth study on REST alignment challenges and practices encountered in industry; (2) a conceptual framework in the form of a taxonomy providing constructs that further our understanding of REST alignment; (3) an operationalization of the taxonomy in an assessment framework, REST-bench, designed to be lightweight and applicable as a postmortem when closing development projects; and (4) an extensive investigation into the potential of information retrieval techniques to improve test coverage, a common REST alignment challenge, resulting in a solution prototype, risk-based testing supported by topic models (RiTTM).

    REST-bench has been validated in five cases and has been shown to be efficient and effective in identifying improvement opportunities in the coordination of RE and ST. Most of the concepts operationalized from the REST taxonomy were found to be useful, validating the conceptual framework. RiTTM, on the other hand, was validated in a single-case experiment where it showed great potential, in particular by identifying test cases that were originally overlooked by expert test engineers, effectively improving test coverage.

  • 1510.
    Unterkalmsteiner, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Abrahamsson, Pekka
    Wang, XiaoFeng
    Nguyen-Duc, Anh
    Shah, Syed
    Bajwa, Sohaib Shahid
    Baltes, Guido H.
    Conboy, Kieran
    Cullina, Eoin
    Dennehy, Denis
    Edison, Henry
    Fernandez-Sanchez, Carlos
    Garbajosa, Juan
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Klotins, Eriks
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hokkanen, Laura
    Kon, Fabio
    Lunesu, Ilaria
    Marchesi, Michele
    Morgan, Lorraine
    Oivo, Markku
    Selig, Christoph
    Seppänen, Pertti
    Sweetman, Roger
    Tyrväinen, Pasi
    Ungerer, Christina
    Yagüe, Agustin
    Software Startups: A Research Agenda (2016). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 10, no. 1, p. 89-123. Article in journal (Refereed).
    Abstract [en]

    Software startup companies develop innovative, software-intensive products within limited timeframes and with few resources, searching for sustainable and scalable business models. Software startups are quite distinct from traditional mature software companies, but also from micro-, small-, and medium-sized enterprises, introducing new challenges relevant for software engineering research. This paper's research agenda focuses on software engineering in startups, identifying, in particular, 70+ research questions in the areas of supporting startup engineering activities, startup evolution models and patterns, ecosystems and innovation hubs, human aspects in software startups, applying startup concepts in non-startup environments, and methodologies and theories for startup research. We connect and motivate this research agenda with past studies in software startup research, while pointing out possible future directions. While all authors of this research agenda have their main background in Software Engineering or Computer Science, their interest in software startups broadens the perspective to the challenges, but also to the opportunities, that emerge from multi-disciplinary research. Our audience is therefore primarily software engineering researchers, even though we aim at stimulating collaborations and research that crosses disciplinary boundaries. We believe that with this research agenda we cover a wide spectrum of the software startup industry's current needs.

  • 1511.
    Unterkalmsteiner, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Large-scale Information Retrieval in Software Engineering - An Experience Report from Industrial Application (2016). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 21, no. 6, p. 2324-2365. Article in journal (Refereed).
    Abstract [en]

    Background: Software Engineering activities are information intensive. Research proposes Information Retrieval (IR) techniques to support engineers in their daily tasks, such as establishing and maintaining traceability links, fault identification, and software maintenance. Objective: We describe an engineering task, test case selection, and illustrate our problem analysis and solution discovery process. The objective of the study is to gain an understanding of the extent to which IR techniques (one potential solution) can be applied to test case selection and provide decision support in a large-scale, industrial setting. Method: We analyze, in the context of the studied company, how test case selection is performed and design a series of experiments evaluating the performance of different IR techniques. Each experiment provides lessons learned from implementation, execution, and results, feeding into its successor. Results: The three experiments led to the following observations: 1) there is a lack of research on scalable parameter optimization of IR techniques for software engineering problems; 2) scaling IR techniques to industry data is challenging, in particular for latent semantic analysis; 3) the IR context poses constraints on the empirical evaluation of IR techniques, requiring more research on developing valid statistical approaches. Conclusions: We believe that our experiences in conducting a series of IR experiments with industry-grade data are valuable for peer researchers so that they can avoid the pitfalls that we have encountered. Furthermore, we identified challenges that need to be addressed in order to bridge the gap between laboratory IR experiments and real applications of IR in industry.
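
    To make the IR pipeline concrete, here is a minimal sketch of LSA-based test case selection using scikit-learn. The function name and parameters are illustrative only, not the paper's implementation (which had to scale far beyond this):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def rank_test_cases(change_description, test_case_texts, n_components=100):
    """Rank test case descriptions by LSA similarity to a change request.
    TruncatedSVD over a TF-IDF matrix is the standard sparse
    approximation of latent semantic analysis."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(test_case_texts + [change_description])
    svd = TruncatedSVD(n_components=min(n_components, tfidf.shape[1] - 1))
    lsa = svd.fit_transform(tfidf)
    # The last row is the query; rank test cases by cosine similarity.
    scores = cosine_similarity(lsa[-1:], lsa[:-1])[0]
    return sorted(enumerate(scores), key=lambda pair: -pair[1])
```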

  • 1512.
    Unterkalmsteiner, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Process Improvement Archaeology: What led us here and what's next? (2018). In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 35, no. 4, p. 53-61. Article in journal (Refereed).
    Abstract [en]

    While in every organization corporate culture and history change over time, intentional efforts to identify performance problems are of particular interest when trying to understand the current state of an organization. The results of past improvement initiatives can shed light on the evolution of an organization and represent, with the advantage of perfect hindsight, a learning opportunity for future process improvements. We encountered the opportunity to test this premise in an applied research collaboration with the Swedish Transport Administration (STA), the government agency responsible for the planning, implementation and maintenance of long-term rail, road, shipping and aviation infrastructure in Sweden.

  • 1513.
    Unterkalmsteiner, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Requirements quality assurance in industry: Why, what and how? (2017). In: Lecture Notes in Computer Science, Springer, 2017, Vol. 10153, p. 77-84. Conference paper (Refereed).
    Abstract [en]

    Context and Motivation: Natural language is the most common form of specifying requirements in industry. The quality of the specification depends on the capability of the writer to formulate requirements aimed at different stakeholders: they are an expression of the customer's needs that are used by analysts, designers and testers. Given this central role of requirements as a means to communicate intention, assuring their quality is essential to reduce misunderstandings that lead to potential waste. Problem: Quality assurance of requirements specifications is largely a manual effort that requires expertise and domain knowledge. However, this demanding cognitive process is also congested by trivial quality issues that should not occur in the first place. Principal ideas: We propose a taxonomy of requirements quality assurance complexity that characterizes the cognitive load of verifying a quality aspect from the human perspective, and automation complexity and accuracy from the machine perspective. Contribution: Once this taxonomy is realized and validated, it can serve as the basis for a decision framework for automated requirements quality assurance support.

  • 1514.
    Unterkalmsteiner, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Klotins, Eriks
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Assessing Requirements Engineering and Software Test Alignment - Five Case Studies (2015). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 109, no. C, p. 62-77. Article in journal (Refereed).
    Abstract [en]

    The development of large, software-intensive systems is a complex undertaking that we generally tackle by a divide and conquer strategy. Companies thereby face the challenge of coordinating individual aspects of software development, in particular between requirements engineering (RE) and software testing (ST). A lack of REST alignment can not only lead to wasted effort but also to defective software. However, before a company can improve the mechanisms of coordination, they need to be understood first. With REST-bench we aim at providing an assessment tool that illustrates the coordination in software development projects and identifies concrete improvement opportunities. We have developed REST-bench on the sound fundamentals of a taxonomy of REST alignment methods and validated the method in five case studies. Following the principles of technical action research, we collaborated with five companies, applying REST-bench and iteratively improving the method based on the lessons we learned. We applied REST-bench both in Agile and plan-driven environments, in projects lasting from weeks to years, and staffed with as many as 1000 employees. The improvement opportunities we identified and the feedback we received indicate that the assessment was effective and efficient. Furthermore, participants confirmed that their understanding of the coordination between RE and ST improved.

  • 1515.
    Upadhya, Bhanu
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Altoumaimi, Rasha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Altoumaimi, Thelal
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Characteristics and Control of the Motor System in E-bikes (2014). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    This study is based on e-bikes, mainly 'pedelecs' (under Swedish standards). Pedelec* is the category of e-bikes that denotes electric bicycles with specific standards in terms of motor power and speed limitations. Our analysis is concerned with Sweden in particular because, even though the category is defined by the EU for Europe, the standards still vary between some countries within Europe itself. In this research and experiment we present useful findings about e-bike features in terms of power, comfort and cost. We have likewise tried to test e-bike reliability with respect to technical grounds, geographical conditions, and people's awareness and interests. Similarly, the share of bike users, import conditions, growth and decline trends, and other influencing factors have been analyzed to understand the e-bike's possibilities in Sweden. To highlight the e-bike's features and importance, we have done a thorough investigation, comparing it with ordinary bicycles and ordinary vehicles on common criteria such as cost effectiveness, power efficiency, leisure use, easy accessibility and environmental effects. The findings show e-bikes to be the most effective solution on various grounds compared to other transport alternatives, especially for short-distance and inner-city travel. On the theoretical side, we introduce the components used in an e-bike, how they operate, and their importance in terms of power consumption and energy delivery (motor capacity), quality of performance (types of components and features), and other comparative technical aspects. To better understand the situation on the ground, a short survey was conducted to gauge people's awareness of e-bikes and their remarks on the product; based on its conclusions we report predictions on the e-bike's development and popularity prospects in Sweden. While analyzing the facts, we found that a pedelec in the US may not be a pedelec in Sweden, because the standard varies from country to country. According to the European classification standard, a pedelec must have a motor capacity of at most 250 W and must stop assisting when the speed exceeds 25 km/h. As to the popularity of e-bikes, the number of e-bikes sold in China has reached 200 million, and Germany is leading the way in Europe; given the favorable conditions in Sweden, we therefore predict high potential there. Statistics show that Sweden is a bicycle country: around 525,000 bicycles were sold in 2012, and 6,500 e-bikes were imported the same year, suggesting that its growth potential is real. In the mathematical analysis of the e-bike's functions, four different calculations were made, keeping the weight of the rider constant but varying the other common parameters in order to quantify the drag in the equation. Taking the average power, we observed that around 157 watts is required going uphill at a 4% gradient and a speed of around 10 km/h. This result was then also verified experimentally. Based on this information, the experiment tested how much energy is dissipated in 2 minutes, using six samples to corroborate the result.

    After unsuccessful attempts to take angle measurements while riding outside or in the lab, this was achieved to some degree, with some complications, on a running machine in a gym. The results show that the battery voltage dropped to 37.8 V from the 40.8 V recorded at the beginning of the experiment, with an applied current of around 4.8 A. The angle measurements precisely indicate the behavior of the e-bike on hillsides of various steepness, that is, the angle formed relative to the level surface. When the e-bike goes uphill it creates a positive angle; this is our main concern, because there the difference in power consumption increases suddenly. An angle is also formed when the e-bike moves downhill, but that is a negative angle and makes no difference to power consumption, so we restrict the angle measurements to positive angles only. The battery used in the experiment was rated 36 V / 9 Ah (i.e., 0.324 kWh). With this battery fully charged we obtained a range of up to 32 km (or 10 Wh/km), which is inversely proportional** to the rider's weight and the drag. To sum up the experiment, the results revealed that battery performance directly depends on weather conditions, the weight of the rider and the area where the cycle is ridden. When the e-bike is used in hilly areas, the speed slows considerably, to 13 km/h, because of the opposing force, and that is when excessive power is consumed. This part was difficult to test correctly in the lab, because the opposing drag or pull could not be simulated accurately; when it was tried outdoors, we could not keep the motor running stably because of the pedal-dependent motor system, so precise readings were not obtained there either. When it was finally tried in the gym, the outcome was that at every angle the power consumed by the battery, or the energy dissipated, was around 3.7 watts. Even then it was not possible to take measurements matching real-life conditions, because other relevant parameters such as wind, friction, tire size, weather and the rider's weight could not be taken into consideration.
    ___________
    *Pedelec is an abbreviation of "Pedal Electric Bicycle". The characteristic of a pedelec is to assist human pedaling power rather than replace it completely.
    **When the rider is heavy, it takes more power to move them forward, which means that power consumption rises and the e-bike's total range is correspondingly reduced. The same holds when there is drag, which impedes the e-bike's natural motion under normal conditions and thus demands additional power; as these factors rise, power efficiency declines, and with it the total range of the e-bike.
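
    The ~157 W climbing figure can be reproduced from a standard bicycle power model. A minimal Python sketch; the total mass, drag area (CdA) and rolling resistance coefficient below are our assumptions, not values taken from the report:

```python
# Illustrative re-creation of the abstract's climbing-power estimate.
G = 9.81    # gravitational acceleration, m/s^2
RHO = 1.2   # air density, kg/m^3

def required_power(mass_kg, speed_kmh, gradient, cda=0.6, crr=0.006):
    """Average power (W) to hold speed_kmh on a slope of the given
    gradient (rise/run, e.g. 0.04 for 4%; sin ~ tan for small angles)."""
    v = speed_kmh / 3.6                   # speed in m/s
    p_climb = mass_kg * G * v * gradient  # lifting rider + bike
    p_roll = crr * mass_kg * G * v        # rolling resistance
    p_drag = 0.5 * RHO * cda * v ** 3     # aerodynamic drag
    return p_climb + p_roll + p_drag

# ~120 kg total mass, 4% gradient, 10 km/h lands near the ~157 W
# average power reported above.
print(round(required_power(120, 10, 0.04)))  # -> 158
```

    With these assumed parameters the climbing term dominates, which matches the abstract's observation that power consumption rises sharply on positive gradients.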

  • 1516.
    Uppalapati, Navya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Optimizing Energy Consumption Using Live Migration (2016). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Cloud Computing has evolved and advanced over recent years due to its concept of sharing computing resources rather than having local servers handle applications. The growth of Cloud Computing has resulted in a large number of datacenters around the world containing thousands of nodes. The nodes are used to process various forms of workloads. Generally, datacenter efficiency is calculated solely based on how fast workload can be processed. Recently, energy consumption has been adopted as an additional efficiency metric. The main reasons for this development are increased environmental awareness and escalating costs related to supplying power to a large number of units and to datacenter cooling. Cloud providers have developed the concept of virtualization, where multiple operating systems and applications run on the same server at the same time. A key feature enabled by virtualization is migrating a virtual machine from one physical host to another. In particular, the capability of Virtual Machine (VM) migration brings multiple benefits such as elastic resource sharing and energy-aware consolidation. Live Virtual Machine migration in datacenters has great potential to decrease energy consumption up to a certain level of usage.

    Objectives: The aim of this thesis is to perform cold and/or live migration to relocate Virtual Machines among hosts in a datacenter, thereby reducing energy consumption. PowerAPI is used to estimate the energy consumption of each VM. A heuristic algorithm is developed and evaluated in order to optimize energy consumption. The overall CPU utilization is calculated during live migration while the energy consumption is being optimized.

    Method: With the obtained knowledge about VM migration and the factors that influence the migration process, a heuristic algorithm is designed for limiting energy consumption in a datacenter. The algorithm takes the energy distribution over a set of VMs and corresponding hosts as input. Its output is a redistribution of VMs to the hosts such that the overall energy consumption is lowered. The proposed model is implemented and evaluated in an OpenStack environment.
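
    As an illustration of such a heuristic, here is a minimal first-fit-decreasing consolidation sketch in Python. It is one plausible reading of "redistribution that lowers overall energy", not the thesis' actual algorithm:

```python
def redistribute(vms, hosts, capacity):
    """vms: dict vm_id -> CPU demand; hosts: list of host ids; capacity:
    CPU capacity per host. Packs VMs onto as few hosts as possible so the
    remaining hosts can idle or power down (the sketch assumes a feasible
    packing exists and omits infeasibility handling)."""
    placement, load = {}, {h: 0.0 for h in hosts}
    # Place the largest VMs first; each goes on the first already-loaded
    # host with room, so a new host is only "opened" when unavoidable.
    for vm, demand in sorted(vms.items(), key=lambda p: -p[1]):
        target = next((h for h in hosts
                       if load[h] > 0 and load[h] + demand <= capacity),
                      None)
        if target is None:  # no partially loaded host fits: open a new one
            target = next(h for h in hosts if load[h] == 0)
        placement[vm] = target
        load[target] += demand
    return placement

# Example: four VMs consolidated onto 2 of 3 hosts; the third stays idle.
print(redistribute({"a": 0.6, "b": 0.5, "c": 0.3, "d": 0.4},
                   ["h1", "h2", "h3"], capacity=1.0))
```

    First-fit decreasing is a classic bin-packing heuristic; packing VMs onto fewer hosts is where the energy saving comes from.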

    Results: The experimental study gives the energy consumption of each node, which is then summed up to give the total energy consumption of the datacenter. The results are taken with the default OpenStack VM placement algorithm as well as with the heuristic algorithm developed in this work. The comparison of the results indicates that the total energy consumption of the datacenter is reduced when the heuristic is used. The overall CPU utilization of each node is evaluated, and the values are almost the same with and without the heuristic.

    Conclusion: The analysis of the results shows that the overall energy consumption of the datacenter is optimized by relocating the virtual machines among hosts according to the algorithm, using virtual machine live migration. It also shows that CPU utilization does not vary much when live migration is used to optimize the energy consumption.

  • 1517.
    Uppströmer, Viktor
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Råberg, Henning
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Detecting Lateral Movement in Microsoft Active Directory Log Files: A supervised machine learning approach (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Cyber attacks pose a high threat to companies and organisations worldwide. With the cost of a data breach reaching $3.86 million on average, the demand is high for a rapid solution to detect cyber attacks as early as possible. Advanced persistent threats (APTs) are sophisticated cyber attacks with long persistence inside the network. During an APT, the attacker will spread its foothold over the network. This stage, one of the most critical steps in an APT, is called lateral movement. The purpose of the thesis is to investigate lateral movement detection with a machine learning approach. Five machine learning algorithms are compared using repeated cross-validation followed by statistical testing to determine the best performing algorithm and feature importance. Features used for learning the classifiers are extracted from Active Directory log entries that relate to each other through a shared workstation, IP, or account name. These features are the basis of a semi-synthetic dataset that constitutes a multiclass classification problem.

    The experiment concludes that all five algorithms perform with an accuracy of 0.998. RF displays the highest F1 score (0.88) and recall (0.858), SVM performs best on the precision metric (0.972), and DT has the lowest computational cost (1237 ms). Based on these results, the thesis concludes that the algorithms RF, SVM, and DT perform best in different scenarios. For instance, SVM should be used if a low number of false positives is favoured; if generally balanced performance across multiple metrics is preferred, RF will perform best. The results also show that a significant number of the examined features can be disregarded in future experiments, as they do not impact the performance of any classifier.
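
    The evaluation set-up can be sketched with scikit-learn as below. Only the three algorithms named in the results are shown, and feature extraction from the log entries is assumed to have already produced X and y:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

def compare_classifiers(X, y):
    """Repeated stratified cross-validation over several classifiers,
    scored on the metrics discussed above (macro-averaged for the
    multiclass setting)."""
    models = {
        "RF": RandomForestClassifier(),
        "SVM": SVC(),
        "DT": DecisionTreeClassifier(),
    }
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
    scoring = ["accuracy", "precision_macro", "recall_macro", "f1_macro"]
    return {name: cross_validate(model, X, y, cv=cv, scoring=scoring)
            for name, model in models.items()}
```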

  • 1518.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Improving Expert Estimation of Software Development Effort in Agile Contexts (2018). Doctoral thesis, comprehensive summary (Other academic).
  • 1519.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort Estimation in Co-located and Globally Distributed Agile Software Development: A Comparative Study (2016). In: Proceedings of 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA) / [ed] Heidrich, J.; Vogelezang, F., IEEE, 2016, p. 219-224. Conference paper (Refereed).
    Abstract [en]

    Context: Agile methods are used by both co-located and globally distributed teams. Recently, separate studies have been conducted to understand how effort estimation is practiced in Agile Software Development (ASD) in co-located and distributed contexts, and there is a need to compare their findings. Objectives: The objective of this comparative study is to identify the similarities and differences in how effort estimation is practiced in co-located and globally distributed ASD. Method: We combined the data of two surveys to conduct this comparative study. The first survey was conducted to identify the state of the practice on effort estimation in co-located ASD, while the second identified the same in the globally distributed ASD context. Results: The main findings of this comparative study are: 1) Agile practitioners, both in co-located and distributed contexts, apply techniques that use experts' subjective assessment to estimate effort. 2) Story points are the most frequently used size metric in both co-located and distributed agile contexts. 3) A team's prior experience and skill level are leading cost drivers in both contexts; distributed agile practitioners cited additional cost drivers related to the geographical distance between distributed teams. 4) In both co-located and distributed agile contexts, effort is estimated mainly at the iteration and release planning levels. 5) With regard to the accuracy of effort estimates, underestimation is dominant in both co-located and distributed agile software development. Conclusions: Similar techniques and size metrics are used to estimate effort by both co-located and distributed agile teams. The main difference lies in the factors that are considered important cost drivers: global barriers due to cultural, geographical and temporal differences are important cost and effort drivers for distributed ASD. These additional cost drivers should be considered when estimating the effort of a distributed agile project to avoid gross underestimation.

  • 1520.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Taxonomies in software engineering: A systematic mapping study and a revised taxonomy development method (2017). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 85, p. 43-59. Article in journal (Refereed).
    Abstract [en]

    Context: Software Engineering (SE) is an evolving discipline with new subareas being continuously developed and added. To structure and better understand the SE body of knowledge, taxonomies have been proposed in all SE knowledge areas. Objective: The objective of this paper is to characterize the state-of-the-art research on SE taxonomies. Method: A systematic mapping study was conducted, based on 270 primary studies. Results: An increasing number of SE taxonomies have been published since 2000 in a broad range of venues, including the top SE journals and conferences. The majority of taxonomies can be grouped into the following SWEBOK knowledge areas: construction (19.55%), design (19.55%), requirements (15.50%) and maintenance (11.81%). Illustration (45.76%) is the most frequently used approach for taxonomy validation. Hierarchy (53.14%) and faceted analysis (39.48%) are the most frequently used classification structures. Most taxonomies rely on qualitative procedures to classify subject matter instances, but in most cases (86.53%) these procedures are not described in sufficient detail. The majority of the taxonomies (97%) target unique subject matters, and many taxonomy papers are cited frequently. Most SE taxonomies are designed in an ad-hoc way. To address this issue, we have revised an existing method for developing taxonomies in a more systematic way. Conclusion: There is a strong interest in taxonomies in SE, but few taxonomies are extended or revised. Taxonomy design decisions regarding the used classification structures, procedures and descriptive bases are usually not well described and motivated. (C) 2017 The Authors. Published by Elsevier B.V.

  • 1521.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Damm, Lars-Ola
    Ericsson, SWE.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort Estimation in Large-Scale Software Development: An Industrial Case Study (2018). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, p. 21-40. Article in journal (Refereed).
    Abstract [en]

    Context: Software projects frequently incur schedule and budget overruns. Planning and estimation are particularly challenging in large and globally distributed projects. While software engineering researchers have been investigating effort estimation for many years to help practitioners improve their estimation processes, there is little research about effort estimation in large-scale distributed agile projects. Objective: The main objective of this paper is three-fold: i) to identify how effort estimation is carried out in large-scale distributed agile projects; ii) to analyze the accuracy of the effort estimation processes in large-scale distributed agile projects; and iii) to identify the factors that impact the accuracy of effort estimates in large-scale distributed agile projects. Method: We performed an exploratory longitudinal case study. The data collection was operationalized through archival research and semi-structured interviews. Results: The main findings of this study are: 1) underestimation is the dominant trend in the studied case, 2) re-estimation at the analysis stage improves the accuracy of the effort estimates, 3) requirements with large size/scope incur larger effort overruns, 4) immature teams incur larger effort overruns, 5) requirements developed in multi-site settings incur larger effort overruns as compared to requirements developed in a collocated setting, and 6) requirements priorities impact the accuracy of the effort estimates. Conclusion: Effort estimation is carried out at the quotation and analysis stages in the studied case. It is a challenging task involving coordination amongst many different stakeholders. Furthermore, lack of details and changes in requirements, immaturity of the newly on-boarded teams, and the challenges associated with the large scale add complexities to the effort estimation process.

  • 1522.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Effort estimation in agile software development: a survey on the state of the practice (2015). In: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE 2015), ACM Digital Library, 2015. Conference paper (Refereed).
  • 1523.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology, School of Computing.
    An Effort Estimation Taxonomy for Agile Software Development (2017). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 27, no. 4, p. 641-674. Article in journal (Refereed).
    Abstract [en]

    In Agile Software Development (ASD), effort estimation plays an important role during release and iteration planning. The state of the art and practice on effort estimation in ASD have recently been identified, but this knowledge has not yet been organized. The aim of this study is twofold: (1) to organize the knowledge on effort estimation in ASD and (2) to use this organized knowledge to support practice and future research on effort estimation in ASD. We applied a taxonomy design method to organize the identified knowledge as a taxonomy of effort estimation in ASD. The proposed taxonomy offers a faceted classification scheme to characterize the estimation activities of agile projects. Our agile estimation taxonomy consists of four dimensions: estimation context, estimation technique, effort predictors and effort estimate. Each dimension in turn has several facets. We applied the taxonomy to characterize the estimation activities of 10 agile projects identified from the literature, to assess whether all important estimation-related aspects are reported. The results showed that studies do not report complete information related to estimation. The taxonomy was also used to characterize the estimation activities of four agile teams from three different software companies. The practitioners involved in the investigation found the taxonomy useful in characterizing and documenting the estimation sessions. © 2017 The Author(s).
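
    As a sketch of how the four dimensions could be used to document an estimation session; the facet values shown are illustrative examples, not the paper's full facet set:

```python
from dataclasses import dataclass

@dataclass
class AgileEstimationRecord:
    """One record per estimation session, one field per taxonomy dimension."""
    estimation_context: str    # e.g. "iteration planning"
    estimation_technique: str  # e.g. "planning poker"
    effort_predictors: tuple   # e.g. ("story points", "team experience")
    effort_estimate: str       # e.g. "8 story points"

session = AgileEstimationRecord("iteration planning", "planning poker",
                                ("story points", "team experience"),
                                "8 story points")
```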

  • 1524.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Weidt, F.
    Britto, R.
    Effort estimation in Agile Software Development: A systematic literature review (2014). Conference paper (Refereed).
    Abstract [en]

    Context: Ever since the emergence of agile methodologies in 2001, many software companies have shifted to Agile Software Development (ASD), and since then many studies have been conducted to investigate effort estimation within this context; however, to date there is no single study that presents a detailed overview of the state of the art in effort estimation for ASD. Objectives: The aim of this study is to provide a detailed overview of the state of the art in the area of effort estimation in ASD. Method: To report the state of the art, we conducted a systematic literature review in accordance with the guidelines proposed in the evidence-based software engineering literature. Results: A total of 25 primary studies were selected; the main findings are: i) subjective estimation techniques (e.g. expert judgment, planning poker, the use case points estimation method) are the most frequently applied in an agile context; ii) use case points and story points are the most frequently used size metrics; iii) MMRE (Mean Magnitude of Relative Error) and MRE (Magnitude of Relative Error) are the most frequently used accuracy metrics; iv) team skills, prior experience and task size are cited as the three important cost drivers for effort estimation in ASD; and v) Extreme Programming (XP) and Scrum are the only two agile methods identified in the primary studies. Conclusion: Subjective estimation techniques, e.g. expert judgment-based techniques, planning poker or the use case points method, are the ones used most in agile effort estimation studies. As for size metrics, the ones used most in the primary studies were story points and use case points. Several research gaps were identified, relating to the agile methods, size metrics and cost drivers, thus suggesting numerous possible avenues for future work.

  • 1525.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Weidt, Francila
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort estimation in agile software development: a systematic literature review (2014). In: Proceedings of the 10th International Conference on Predictive Models in Software Engineering, 2014, p. 82-91. Conference paper (Refereed).
    Abstract [en]

    Context: Ever since the emergence of agile methodologies in 2001, many software companies have shifted to Agile Software Development (ASD), and since then many studies have been conducted to investigate effort estimation within this context; however, to date there is no single study that presents a detailed overview of the state of the art in effort estimation for ASD. Objectives: The aim of this study is to provide a detailed overview of the state of the art in the area of effort estimation in ASD. Method: To report the state of the art, we conducted a systematic literature review in accordance with the guidelines proposed in the evidence-based software engineering literature. Results: A total of 25 primary studies were selected; the main findings are: i) subjective estimation techniques (e.g. expert judgment, planning poker, the use case points estimation method) are the most frequently applied in an agile context; ii) use case points and story points are the most frequently used size metrics; iii) MMRE (Mean Magnitude of Relative Error) and MRE (Magnitude of Relative Error) are the most frequently used accuracy metrics; iv) team skills, prior experience and task size are cited as the three important cost drivers for effort estimation in ASD; and v) Extreme Programming (XP) and Scrum are the only two agile methods identified in the primary studies. Conclusion: Subjective estimation techniques, e.g. expert judgment-based techniques, planning poker or the use case points method, are the ones used most in agile effort estimation studies. As for size metrics, the ones used most in the primary studies were story points and use case points. Several research gaps were identified, relating to the agile methods, size metrics and cost drivers, thus suggesting numerous possible avenues for future work.
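
    For reference, the accuracy metrics named above have standard definitions in the effort estimation literature (with $e_i$ the actual and $\hat{e}_i$ the estimated effort of task $i$):

    \[ \mathrm{MRE}_i = \frac{\lvert e_i - \hat{e}_i \rvert}{e_i}, \qquad \mathrm{MMRE} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{MRE}_i . \]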

  • 1526.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Minhas, Nasir Mehmood
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Use of personality tests in empirical software engineering studies: A review of ethical issues2019In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 237-242Conference paper (Refereed)
    Abstract [en]

    There has been much research on personality and its impact on software engineering practice. These studies use different psychological tests to identify the personality types of software practitioners. The administration of these tests requires expertise, and since humans are involved, other ethical issues, such as consent, also become important. In this study, we evaluated a small sample of 15 studies that used the psychological test Myers-Briggs Type Indicator (MBTI) in a software engineering context with respect to different ethical issues related to informed consent, the qualification of the test administrators and the use of appropriate tests. The results show that most of the studies in our sample fall seriously short with respect to various ethical issues.

  • 1527.
    Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Neto, Pedro
    Federal University of Piaui, BRA.
    Developing and Using Checklists to Improve Software Effort Estimation: a Multi-Case Study2018In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 146, p. 286-309Article in journal (Refereed)
    Abstract [en]

    Expert judgment based effort estimation techniques are widely used for estimating software effort. In the absence of process support, experts may overlook important factors during estimation, leading to inconsistent estimates. This can cause underestimation, which is a common problem in software projects. This multi-case study aims to improve expert estimation of software development effort. Our goal is two-fold: 1) to propose a process to develop and evolve estimation checklists for agile teams, and 2) to evaluate the usefulness of the checklists in improving expert estimation processes. The use of checklists improved the accuracy of the estimates in two case companies; in particular, the underestimation bias was reduced to a large extent. For the third case, we could not perform a similar analysis due to the unavailability of historical data. However, when the checklist was used in two sprints, the estimates were quite accurate (median Balanced Relative Error (BRE) bias of -0.05). The study participants from the case companies observed several benefits of using the checklists during estimation, such as increased confidence in estimates, improved consistency due to help in recalling relevant factors, more objectivity in the process, improved understanding of the tasks being estimated, and reduced chances of missing tasks.
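
    As an illustration of the accuracy measure cited above, here is a minimal Python sketch, assuming the common definitions BRE = |act - est| / min(act, est) and BRE bias = (act - est) / min(act, est); the numbers are illustrative only.

        def bre(actual, estimate):
            # Balanced Relative Error: penalizes over- and underestimation
            # symmetrically by dividing by the smaller of the two values.
            return abs(actual - estimate) / min(actual, estimate)

        def bre_bias(actual, estimate):
            # Signed variant: positive for underestimation, negative for
            # overestimation (as in the median bias of -0.05 above).
            return (actual - estimate) / min(actual, estimate)

        print(bre(100, 80), bre_bias(100, 80))   # underestimate -> +0.25
        print(bre(80, 100), bre_bias(80, 100))   # overestimate  -> -0.25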

  • 1528.
    Utbult, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Refai, Rami El
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Musikalisk gestaltning av Trudvang2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The purpose of this bachelor's thesis is to create music for a people that lacks an established musical culture. To do this, peoples were selected from the fictional game world Drakar och Demoner: Trudvang. Old Norse culture was an inspiration for Trudvang and was studied to demonstrate similarities to the peoples of Trudvang. The parallels between the different cultures formed the preliminary study of the thesis, which showed that there is a demand for music in the game, whereupon two peoples from Trudvang, the elves and the Viranns, were selected to be portrayed through music. Based on instruments that the Sami and the Anglo-Saxons might plausibly have used, we produced music for the second part of the research. The music was evaluated by distributing a questionnaire to experienced players on the game's forum, to determine whether the portrayal of the elves and the Viranns was successful. The results showed that it is suitable to portray these peoples with the produced music and, as a result, give them a musical culture.

  • 1529.
    V N ANJANAYA UDAY, MAJETI
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effects of Development Platform Heterogeneity in Testing of Heterogeneous systems: An Industrial Survey2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Over the years, software has evolved into large and complex systems of systems. According to the literature, a heterogeneous system is defined as “a system comprised of n number of subsystems where at least one subsystem exhibits heterogeneity with respect to other subsystem”. Research on heterogeneous systems has also received considerable attention in recent years, as a result of shifts in technology and customer needs. In heterogeneous systems, heterogeneity may occur in different dimensions for different systems.

    Objectives. The main aim of this thesis is “to investigate the effects of development platform heterogeneity in heterogeneous system on the test process”. The objectives in support of this aim are to determine the influence of platform heterogeneity on software testing and to investigate best practices for testing heterogeneous systems with different types of heterogeneity.

    Methods. An industrial survey and interviews with practitioners were used as the research methods in this thesis. The purpose of the survey is to help testers understand how platform heterogeneity affects the test process.

    Results. In this research, data related to the effects and to best practices in heterogeneous systems was gathered from both the survey and the interviews.

    Conclusions. This thesis investigated the effects of development platform heterogeneity on the test process in heterogeneous systems and identified best practices for testing heterogeneous systems that exhibit different types of heterogeneity. In addition, the different types of development platforms used in industry to develop heterogeneous systems were identified.

  • 1530.
    Vaara, Jonatan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Rahden, Tomas
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Vem Kan Spela Förutan Bild?: En undersökning om diegesis och ledande ljud i förstapersonsspel2016Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This bachelor’s thesis examines the question “How can diegetic sound design be used to guide players in first-person shooters?”. This is done with theories from authors including Michel Chion, Kristine Jørgensen and Karen Collins, which are then applied in two analyses of the games “Half-Life 2” (Valve Corporation 2004) and “Shadow Warrior” (Flying Wild Hog 2013) to find out which techniques they use when working with sound. We then used these techniques in our own first-person shooter, a game without any form of visual feedback, in order to perform a game test and draw our own conclusions concerning guiding sound design. The results show that it is possible to play a game without any visual feedback, as long as the sound design follows a set of specific rules.

  • 1531.
    Vadlamudi, Jithin Chand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    How a Discrete event simulation model can relieve congestion at a RORO terminal gate system: Case study: RORO port terminal in the Port of Karlshamn.2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Due to the increasing demand for RORO shipping services, the RORO terminal gate system needs to handle a larger number of vehicles for every RORO vessel departure. Various congestion problems can therefore occur; to address all possible congestion-related problems at the RORO terminal, terminal gate systems are being equipped with advanced technologies and upgraded to fully or partially automated gate systems.

    Objectives. Considering the future increase in demand for wheeled cargo shipping, this study proposes a solution for reducing congestion and investigates optimal positions for each automated gate system service at the RORO port terminal.

    Methods. In this Master thesis, as part of a qualitative study, we conduct a literature review and a case study to learn about the existing work related to this research problem and about the real-world operation and behaviour of a RORO terminal gate system. Later, applying the knowledge acquired from these qualitative studies, we perform a discrete event simulation experiment using the AnyLogic Professional 7.02 simulation software to address the defined research objectives.

    Results. Considering the peak and low periods of present and future estimated demand volumes as different scenarios, simulation results are generated for different key performance indicators. The values of these key performance indicators address the various research objectives.

    Conclusions. This study concludes that the average queue length at each automated gate system service indicates the optimal position for that service and directly addresses the congestion problem. We also conclude that, for every estimated increase in vehicles arriving at the RORO terminal, assigning optimal arrival time windows to the respective vehicle types minimizes congestion at the automated gate system.
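
    As an illustration of the discrete event simulation approach used above, here is a minimal Python sketch of a single automated gate lane with Poisson vehicle arrivals and a fixed service time, reporting the time-averaged queue length; it is far simpler than the AnyLogic model in the thesis, and all parameters are hypothetical.

        import heapq
        import random

        def simulate_gate(arrival_rate, service_time, horizon, seed=1):
            # One gate lane: Poisson vehicle arrivals, fixed service time,
            # FIFO queue; returns the time-averaged queue length.
            rng = random.Random(seed)
            events = [(rng.expovariate(arrival_rate), "arrival")]
            queue, server_busy = 0, False
            area, last_t = 0.0, 0.0            # integral of queue length
            while events:
                t, kind = heapq.heappop(events)
                if t > horizon:
                    break
                area += queue * (t - last_t)
                last_t = t
                if kind == "arrival":
                    heapq.heappush(
                        events, (t + rng.expovariate(arrival_rate), "arrival"))
                    if server_busy:
                        queue += 1
                    else:
                        server_busy = True
                        heapq.heappush(events, (t + service_time, "departure"))
                else:                          # departure
                    if queue > 0:
                        queue -= 1
                        heapq.heappush(events, (t + service_time, "departure"))
                    else:
                        server_busy = False
            return area / last_t if last_t > 0 else 0.0

        # Hypothetical numbers: 2 vehicles/minute arriving, 20 s check-in.
        print(simulate_gate(arrival_rate=2 / 60, service_time=20,
                            horizon=4 * 3600))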

  • 1532.
    Vajrapu, Rakesh Guptha
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kothwar, Sravika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Requirements Prioritization Practices in Software Start-ups: A Qualitative research based on Start-ups in India2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Requirements prioritization is used in software product management and is concerned with identifying the most valuable requirements from a given set. This is necessary to satisfy the needs of customers, to provide support for stakeholders and, more importantly, for release planning. Irrespective of the size of the organization (small, medium or large), requirements prioritization is important for minimizing risk during development. However, few studies explore how requirements prioritization is practiced in start-ups. Software start-ups are becoming important suppliers of innovative and software-intensive products. Earlier studies suggest that requirements discovery and validation is the core activity in start-ups. However, due to limited resources, start-ups need to prioritize which requirements to focus on; getting this wrong leads to wasted resources. While larger organizations may be able to afford such waste, start-ups cannot. Moreover, researchers have identified that start-ups are not small versions of large companies and that existing software development practices cannot be transferred directly, due to the low rigor of current studies. Thus, we planned to conduct an exploratory study on requirements prioritization practices in the context of software start-ups.

    Objectives: The main aim of our study is to explore the state of the art of requirements prioritization practices used in start-ups. We also identify the challenges associated with these practices and some possible solutions.

    Methods: In this qualitative research, we conduct a literature review, drawing on article sources such as IEEE Xplore, Scopus and Google Scholar, to identify prioritization practices and challenges in general. An interview study is conducted using semi-structured interviews to collect data from practitioners. Thematic analysis was used to analyze the interview data.

    Results: We identified 15 practices from 8 different start-up companies, with corresponding challenges and possible solutions. Our results show mixed findings regarding prioritization practices at start-ups: of the 8 companies, 6 followed formal methods, while in the remaining 2 prioritization was informal and unclear. The results show that the value-based method is the dominant prioritization technique in start-ups, and that the customer input and return on investment aspects of prioritization play a key role compared to other aspects.

    Conclusions: The results of this study provide an understanding of the various requirements prioritization practices in start-ups and the challenges faced in implementing them. These results are validated against the answers found in the literature. The solutions identified for the corresponding challenges allow practitioners to approach them in a better way. As this study focused only on Indian software start-ups, it is recommended to extend it to Swedish software start-ups as well, to get a broader perspective; a larger sample size is also recommended. This study may help future research on requirements engineering in start-ups. It may also help practitioners who intend to found a software start-up to get an idea of what challenges they may face while prioritizing requirements, and how to mitigate them.
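
    Since the results above name the value-based method as the dominant prioritization technique, here is a minimal Python sketch of one simple form of it, ranking requirements by their value-to-cost ratio; the requirements and scores are hypothetical.

        # Hypothetical requirements with rough value and cost scores (1-10).
        requirements = [
            {"id": "R1", "desc": "social login",    "value": 8, "cost": 3},
            {"id": "R2", "desc": "export to CSV",   "value": 4, "cost": 2},
            {"id": "R3", "desc": "usage analytics", "value": 6, "cost": 5},
        ]

        # Rank by value-to-cost ratio, highest first.
        ranked = sorted(requirements,
                        key=lambda r: r["value"] / r["cost"], reverse=True)
        for r in ranked:
            print(f'{r["id"]}: ratio={r["value"] / r["cost"]:.2f} ({r["desc"]})')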

  • 1533.
    Vaka, Kranthi
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Narla, Karthik
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The impact of maturity, scale and distribution on software quality: An industrial case study2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. In this ever-changing world of software development, organizations are increasingly adopting distributed development. Implementing various development processes in such a distributed environment gives rise to numerous issues that affect the quality of the product. These issues can be due to the involvement of architects across national borders during development. In this research, the focus is on improving software quality by addressing the impact of maturity and scale between teams and their effect on the code review process, and further on identifying the issues behind distribution between teams separated by geographical, temporal and cultural distances.

    Objectives. The main objective of this research is to identify how factors such as maturity, scale and distribution impact the code review process and thereby software quality. Based on the code review comments in the data set, the factors examined in this research are the evolvability of defects and the difference in the quality of software developed by mature and immature teams within the code review process. Subsequently, the issues related to the impact of geographical, temporal and cultural distances on the types of defects revealed during distributed development are identified.

    Methods. To achieve these objectives, a case study was conducted at Ericsson. A mixed approach was chosen that includes archival data and semi-structured interviews to gather data for this research. Archival analysis was one of the data collection methods used, for reviewing the comments in the data set and gathering quantitative results for the study. We employed approaches such as descriptive statistics, hypothesis testing and graphical representation to analyze the data. Moreover, to strengthen these results, a semi-structured group interview was conducted to triangulate the data and collect additional insights about the code review process in large-scale organizations.

    Results. By conducting this research, it is inferred that teams with a lower level of maturity produce a larger number of defects. Of the defects found in the archival data, 35.11% were functional, 59.03% maintainability, 0.11% compatibility, 0.028% security, 0.73% reliability, 4.96% performance efficiency and 0.014% portability defects. The majority of defects were of the functional and maintainability types, which impact software quality in a distributed environment. A further finding relates to the evolvability of defects within immature teams, which shows no particular trend of increase or decrease in the number of defects. Issues that occur due to distribution between teams were also identified. Overall, this study suggests the impact of maturity and scale on software quality through quantitative analysis, validated through interviews; the interviews were also used to gather information about the issues in the data set related to the impact of global software engineering (GSE) distances on the code review process.

    Conclusions. This research concludes that, in these types of projects, immature teams produce a larger number of defects than mature teams. This is because, when large-scale projects are distributed globally, it is harder to share and acquire knowledge between teams, increase group learning and mentor teams located at immature sites. Immature developers have problems understanding the structure of the code, and new architects need to acquire knowledge of the scope and the real-time issues in order to improve software quality. Using the results presented in this thesis, researchers can identify new gaps and extend the research on the various influences on the code review process in a distributed environment.

  • 1534.
    Vakkalanka, Sairam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Narayanasetty, SR Phanindra Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating Research on Teaching Modeling in Software Engineering -A Systematic Mapping Study2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Modeling is an important activity that is used in different phases of software engineering. Without models and modeling, it is nearly impossible to design and develop software systems, which creates a need for modeling to be taught in software engineering. A number of models, methods, tools and languages for teaching modeling in software engineering have been reported, which suggests the need for a classification and an overview of the area. This research investigates the state of published research on teaching modeling in software engineering, in order to provide a systematic overview and classification of the different ways of teaching modeling, with an insight into their importance and relevance to this research area. Objectives: The overall goal of the research was achieved by fulfilling the following objectives: understanding how systematic mapping is conducted, developing a systematic mapping process that provides data for investigating the published research, applying the process, and finally reflecting on the results of the mappings, analyzing the importance and evaluating the relevance of the published research. Methods: A systematic literature review was used as a tool to understand and inspect how systematic mapping has been carried out in software engineering. Based on the results of the systematic literature review, new guidelines were formulated for conducting systematic mapping. These guidelines were used to investigate the published research on teaching modeling in software engineering. The results obtained through the systematic mapping were evaluated based on industrial relevance, rigor and citation count, to examine their importance and identify research gaps. Results: 131 articles were classified into five classes (Languages, Course Design, Curriculum Design, Diagrams and Others) using a semi-manual classification scheme and classification facets such as type of audience, type of contribution, type of research, type of publication, publication year, type of research method and type of study setting. After evaluating industrial relevance, rigor and citation ranking on the classification results, 8 processes, 4 tools, 3 methods, 2 measurement metrics and 1 model for teaching modeling in software engineering were extracted. When compared with an existing classification based on interviews and discussions, our classification provides a wider overview with a deeper insight into the different ways of teaching modeling in software engineering. Conclusions: The results of this systematic mapping study indicate that research activity on teaching modeling in software engineering is increasing, with the Unified Modeling Language (UML) being the most widely researched area. Most research emphasizes teaching modeling to students in academia, which indicates a research gap in developing methods, models, tools and processes for teaching modeling to practitioners in industry. Also, considering the citation ranking, industrial relevance and rigor of the articles, areas such as course design and curriculum development are highly neglected, suggesting the need for more research focus.

  • 1535.
    Appana, Vamsi
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating Industrial Relevance in Search Based Software Engineering Research: A Systematic Mapping Study and Survey2017Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Search Based Software Engineering (SBSE) is an important field within software engineering. Although much research has been performed on SBSE and its search techniques in software development over the past few years, SBSE appears to have limited industrial relevance at the moment, because most academic work has been limited to the application of search techniques. Hence, the author proposes a study to chart the trend of the SBSE literature over the past few years and to analyze to what degree current SBSE research is relevant to industry.

  • 1536.
    van der Mei, Rob
    et al.
    Centrum Wiskunde and Informatica (CWI), NLD.
    van den Berg, Hans
    TNO, NLD.
    Ganchev, Ivan
    University of Limerick, IRE.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Leitner, Philipp
    University of Zurich, CHE.
    Lassila, Pasi
    Aalto University, FIN.
    Burakowski, Wojciech
    Warsaw University of Technology, POL.
    Liberal, Fidel
    University of the Basque Country (UPV/EHU), ESP.
    Arvidsson, Åke
    Kristianstad University, SWE.
    Hoßfeld, Tobias
    Institute of Computer Science and Business Information Systems (ICB), DEU.
    Wac, Katarzyna
    University of Geneva, CHE.
    Melvin, Hugh
    National University of Ireland, IRE.
    Galinac Grbac, Tihana
    University of Rijeka, HRV.
    Haddad, Yoram
    JCT-Lev Academic Center, ISR.
    Key, Peter
    Microsoft Research Ltd., GBR.
    State of the Art and Research Challenges in the Area of Autonomous Control for a Reliable Internet of Services2018In: Autonomous Control for a Reliable Internet of Services: Methods, Models, Approaches, Techniques, Algorithms, and Tools / [ed] Ganchev, Ivan, van der Mei, Robert D., van den Berg, J.L., Springer Publishing Company, 2018Chapter in book (Refereed)
    Abstract [en]

    The explosive growth of the Internet has fundamentally changed the global society. The emergence of concepts like service-oriented architecture (SOA), Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), Network as a Service (NaaS) and Cloud Computing in general has catalyzed the migration from the information-oriented Internet into an Internet of Services (IoS). This has opened up virtually unbounded possibilities for the creation of new and innovative services that facilitate business processes and improve the quality of life. However, this also calls for new approaches to ensuring quality and reliability of these services. The goal of this book chapter is to first analyze the state-of-the-art in the area of autonomous control for a reliable IoS and then to identify the main research challenges within it. A general background and high-level description of the current state of knowledge is presented. Then, for each of the three subareas, namely the autonomous management and real-time control, methods and tools for monitoring and service prediction, and smart pricing and competition in multi-domain systems, a brief general introduction and background are presented, and a list of key research challenges is formulated.

  • 1537.
    Vangala, Shivakanthreddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Pattern Recognition applied to Continuous integration system.2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: This thesis focuses on regression testing in a continuous integration environment, i.e. integration testing that ensures that changes made in new development code do not introduce new faults into the software product. Continuous integration is a software development practice that integrates all development, testing and deployment activities. In continuous integration, regression testing is done by manually selecting and prioritizing test cases from a larger set of test cases. The main challenge with manual test case selection and prioritization is that needed test cases are in some cases left out of the selected subset, because testers did not include them manually when designing the hourly-cycle regression test suite for a particular feature in the product. Ericsson, in whose environment this thesis was conducted, therefore aims at improving its test case selection and prioritization in regression testing using pattern recognition.

    Objectives: This thesis suggests prediction models, based on pattern recognition algorithms, for predicting future test case failures using historical data. This helps to improve the present quality of the continuous integration environment by selecting an appropriate subset of test cases from the larger set for regression testing. There exist several candidate pattern recognition algorithms that are promising for predicting test case failures. Based on the characteristics of the data collected at Ericsson, suitable pattern recognition algorithms were selected and predictive models were built. Finally, two predictive models were evaluated and the best performing model was integrated into the continuous integration system.

    Methods: An experiment was chosen as the research method, because the discovery of cause-and-effect relationships between dependent and independent variables can be used to evaluate the predictive models. The experiment was conducted in RStudio, which facilitates training the predictive models on historical continuous integration data. The predictive ability of the algorithms was evaluated using prediction accuracy metrics.

    Results: After implementing two predictive models (neural networks and k-nearest neighbors) using the continuous integration data, the neural network achieved a prediction accuracy of 75.3%, while k-nearest neighbors achieved 67.75%.

    Conclusions: This research investigated the feasibility of an adaptive, self-learning test machinery based on pattern recognition in a continuous integration environment, to improve test case selection and prioritization in regression testing. Neural networks proved more effective at predicting failing test cases, at 75.3%, than k-nearest neighbors. A predictive model can only make continuous integration more efficient than the present static test case selection and prioritization if its prediction capability approaches 100%; at 75.3%, roughly a quarter of the failures are still missed. This research can therefore only conclude that neural networks currently offer a 75.3% prediction capability, which may approach 100% in the future when more data is available. The present Ericsson continuous integration system also needs to improve its storage of historical data, as it can currently store only 30 days of history, while the predictive models require large amounts of data to give good predictions. At present, Ericsson uses the Jenkins automation server to support continuous integration; other automation servers such as TeamCity, Travis CI, GoCD and CircleCI can store data for more than 30 days, and using them would mitigate the data storage problem.
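
    For illustration, here is a minimal Python sketch of the two model families compared above, trained on synthetic data; the features (recent failure rate, lines changed, test case age) and labels are invented for the example and are not Ericsson's actual data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.random(n),              # recent failure rate of the test case
            rng.integers(0, 500, n),    # lines changed in the tested area
            rng.integers(1, 365, n),    # age of the test case in days
        ])
        # Synthetic rule: high failure rate plus large changes -> likely failure.
        y = (X[:, 0] * X[:, 1] + rng.normal(0, 20, n) > 60).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        for model in (
            make_pipeline(StandardScaler(),
                          KNeighborsClassifier(n_neighbors=5)),
            make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=1000, random_state=0)),
        ):
            model.fit(X_tr, y_tr)
            print(model.steps[-1][0],
                  accuracy_score(y_te, model.predict(X_te)))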

  • 1538.
    Varanasi, Panchajanya
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Perspective of Leadership Requirement in Scrum Based Software Development - A Case Study2018Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Software development has witnessed great innovation over the past few years, with a good number of technologies, tools and practices entering the industry. Client demands and collaboration in the development process are also increasing. Many new practices and methodologies are emerging, and Agile is one of the most prominent; within Agile, the Scrum methodology is currently in particularly high demand. As software development processes and practices change, so do the leadership styles within them. Leadership is critical for the success of any team. This study intends to explore this model and the requirement of leadership in Scrum-based software development in a practical scenario.

    Objectives. Leadership, which is essential in any software project, differs between traditional methodologies and the Scrum methodology of Agile software development. Through a case study, the author investigates and explores the perspective of leadership requirements in Scrum-based software development in a practical scenario. The study aims to gather and analyze the leadership model implemented in two domestic projects in an Indian company, involved in two distinct domains, and to sum up the impressions gained. The study also aims to assess whether the gathered knowledge adds to the existing body of knowledge on the phenomenon or, on the contrary, whether suggestions for improvement can be given to the case units.

    Methods. The case study method was chosen for this explorative study. A literature review was conducted prior to the case study to gain knowledge of the phenomenon; it also answered one of the research questions and partially helped answer another. A multiple case study was conducted through semi-structured personal interviews, tool analysis and direct observation in the case units. Qualitative data analysis was performed using Grounded Theory on these three kinds of collected data. The results were compared with the literature and conformance or variance analyzed. This comparative analysis is used for making recommendations to the case units for improvement, or for additions to the existing body of knowledge.

    Results. Through the results of the literature review, the leadership models in software development, including Agile Scrum, were summed up. Through the results of the case study, the leadership models and features implemented in the case units were identified. These results were then validated against and contrasted with the results of the literature review: how the literature models and the case unit models of leadership differed was studied, and the justification for the leadership model implemented in the practical situation was analyzed. Following a review of the models employed in the case environments, the perspective of leadership in the two Scrum-based software development projects is summed up. Finally, it is assessed what effect the case study has on the existing body of knowledge on the phenomenon and what modifications can be proposed to the case units, based on the results and analysis.

    Conclusion. It is concluded that the case units are implementing a mix of situational leadership and transformational leadership. The telling and selling modes of situational leadership are prominent, while participating and delegating rank lower. Some of the important features of transformational leadership, such as self-management, organizational consciousness, adaptability and proactiveness, are implemented, but not all features of the model are adopted. Even Scrum is implemented in a modified way, extending only controlled autonomy with closer monitoring, and this has a direct effect on the leadership. On the whole, directive leadership is in play, with collaborative leadership co-existing situationally.

  • 1539.
    Vasireddy, Sindhu
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Impact of Initial Delay and Stallings on the Quality of Experience of the User2018Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: In telecommunications, it is important for network providers to have knowledge of generic relationships between multi-dimensional QoE and QoS parameters in order to provide quality service to customers, keeping in mind constraints such as time, money and labor. So far, there have been several research works in the literature on formulating a generic quantitative relationship between a single QoE and a single QoS parameter. The most common examples of mappings between a QoS parameter and QoE are the exponential model (the IQX hypothesis), the logarithmic model (the Weber-Fechner law) and the power model. However, it has been less common to study the multi-dimensional relationship between QoE and QoS parameters.

    Objective: The purpose of this work is to discuss the impact of several QoS parameters on QoE. The proposal put forth in the existing literature is that a multiplicative model better explains the impact of QoS parameters on the overall quality as perceived by the user. This proposal was, however, never backed by subjective data.

    Method: We performed several subjective tests to test our hypothesis, using non-adaptive streaming of videos in a monitored server-client setup. In these tests, the objective was to obtain Mean Opinion Scores (MOS) for varying QoS parameters such as the initial delay and the number of stalls. Network shaping was used to introduce the disturbances in the videos. The experimental setup consisted of a total of 27 experiments per user, and each user was handed a questionnaire. The questionnaire consisted mainly of four questions aimed at gathering feedback from the users regarding the quality of the videos shown to them. Users were asked to mark their MOS on a continuous scale. The videos were subjected to three different values each of initial delay, number of stalls and resolution. The average duration per stall throughout the experiments was maintained at 2 seconds.

    Results: Data was collected from 15 users; thus, in total 405 MOS values were recorded for the 27 combinations of initial delay, number of stalls and resolution. The impact of initial delay and stalls on the QoE, as indicated by the MOS, was then categorized and studied for each resolution. With the help of regression tools in MATLAB and Solver in Excel, possible models that explain the multi-dimensional QoS-QoE relationship were studied.

    Conclusion: The results mostly pointed towards the multiplicative model, just as proposed by the existing literature. It was also observed that initial delay alone has little impact on the overall QoE, so its impact could be described by either an additive or a multiplicative model. The impact of stalls on QoE, however, was found to be multiplicative.
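
    To make the model families named above concrete, here is a minimal Python sketch of the single-parameter mappings and a multiplicative combination on a 1-5 MOS scale; all coefficients are hypothetical and are not the fitted values from this thesis.

        import math

        def iqx(x, a, b, g):
            # Exponential mapping (IQX hypothesis): QoE = a*exp(-b*x) + g.
            return a * math.exp(-b * x) + g

        def weber_fechner(x, a, b):
            # Logarithmic mapping (Weber-Fechner law): QoE = a - b*ln(1 + x).
            return a - b * math.log(1.0 + x)

        def mos(initial_delay_s, n_stalls):
            # Multiplicative combination of per-parameter impairment factors,
            # each scaled to ~1.0 when the impairment is absent.
            base = 4.5
            f_delay = weber_fechner(initial_delay_s, a=1.0, b=0.05)
            f_stall = iqx(n_stalls, a=1.0, b=0.4, g=0.0)
            return max(1.0, min(5.0, base * f_delay * f_stall))

        print(mos(initial_delay_s=2, n_stalls=0))   # ~4.25
        print(mos(initial_delay_s=2, n_stalls=3))   # ~1.28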

  • 1540.
    Vellanki, Mohit
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Evaluation of Cassandra in a Virtualized Environment2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Apache Cassandra is an open-source, scalable NoSQL database that distributes data over many commodity servers. It avoids a single point of failure by copying and storing the data in different locations. Cassandra uses a ring design rather than the traditional master-slave design.

    Virtualization is the technique by which the physical resources of a machine are divided and utilized by several virtual machines. It is the fundamental technology that allows cloud computing to provide resource sharing among users.

    Objectives. Through this research, the effects of virtualization on Cassandra are observed by comparing a virtual machine arrangement to a physical machine arrangement, along with the overhead caused by virtualization.

    Methods. An experiment is conducted in this study to identify the aforementioned effects of virtualization on Cassandra compared to physical machines. Cassandra runs on physical machines with Ubuntu 14.04 LTS arranged in a multi-node cluster. Results are obtained by executing the mixed, read-only and write-only operations of the cassandra-stress tool on the data populated in this cluster. This procedure is repeated for 100% and 66% workloads. The same procedure is repeated in the virtual machine cluster and the results are compared.

    Results. The virtualization overhead is identified in terms of CPU utilization, and the effects of virtualization on Cassandra are identified in terms of disk utilization, throughput and latency.

    Conclusions. The overhead caused by virtualization is observed, and the effect of this overhead on the performance of Cassandra is identified. The consequence of the virtualization overhead is related to the change in the performance of Cassandra.
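
    As a sketch of how the benchmark runs above could be driven from a script, the following assumes the cassandra-stress tool is on the PATH and a cluster node is reachable; the node address, operation counts and thread counts are illustrative.

        import subprocess

        NODE_IP = "192.168.1.10"   # hypothetical seed node

        def run_stress(args):
            # Invoke cassandra-stress against the cluster and fail loudly
            # if the run does not complete.
            cmd = ["cassandra-stress"] + args + ["-node", NODE_IP]
            print(">>", " ".join(cmd))
            subprocess.run(cmd, check=True)

        run_stress(["write", "n=100000", "-rate", "threads=32"])   # populate
        run_stress(["read",  "n=100000", "-rate", "threads=32"])   # read only
        run_stress(["mixed", "ratio(write=1,read=3)", "n=100000",
                    "-rate", "threads=32"])                        # mixed load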

  • 1541.
    Velpula, Chaitanyakumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Requirements Negotiation and Conflict Resolution in Distributed Software Development: A Systematic Mapping Study and Survey2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The main aim of this thesis is to explore the industrial practices of requirements negotiation and conflict resolution within distributed software development. The motivation for this study is to gain insight into industrial practices, in particular the interventions (communication tools, models, communication media) used by practitioners to resolve requirements negotiations and conflicts between clients and suppliers, since many researchers have proposed such interventions in the literature. Context: In requirements engineering, requirements negotiation and conflict resolution are crucial activities for achieving common ground between clients and suppliers, and they are considered among the crucial factors for delivering successful software. The shift from traditional collocated practices to a distributed environment offers both benefits and drawbacks, which have been studied earlier by researchers, but surprisingly few studies explore distributed requirements negotiation and conflict resolution practices. This research investigates the state of requirements negotiation and conflict resolution activities in distributed software development, with an insight into their importance and relevance to this research area.

    Objectives: The overall goal of this thesis is to understand how requirements negotiation and conflict resolution are performed in distributed software development: what tools are available for these activities, whether the existing tools are good enough to cope with industrial practice, which tools, methods and approaches are most widely used and, most importantly, whether present research is able to bridge the gap within distributed software development.

    Methods: This thesis comprises two research methodologies: 1. a systematic mapping study (SMS), to identify the interventions proposed in the literature for performing requirements negotiation and conflict resolution in industrial software development within a distributed environment; and 2. an industrial survey, to identify the industrial practices used to perform requirements negotiation and conflict resolution in industrial software development within a distributed environment.

    Results: 20 studies were identified through the systematic mapping study (SMS). After analyzing the obtained studies, the list of interventions (preparatory activities, communication tools, models) was gathered and analyzed. Thereupon, an industrial survey based on the obtained literature was conducted, which received 41 responses. Effective communication media for preparatory activities in requirements negotiation and conflict resolution are identified, and a validation of communication tools for effective requirements negotiation and conflict resolution is performed. Apart from the validation, this study provides a list of factors that affect requirements negotiation and conflict resolution activities in distributed software development.

    Conclusions: To conclude, the results of this study will help practitioners gain more insight into requirements negotiation and conflict resolution in distributed software engineering. This study identified the preparatory activities involved in effective communication for requirements negotiation, along with effective tools, models and factors affecting requirements negotiation and conflict resolution. In addition, the results obtained from the literature were validated through the survey. Practitioners can benefit from the end results by knowing the effective requirements negotiation and conflict resolution interventions (communication tools, models, communication media) for early planning in distributed software development. As future work, researchers can extend the study by looking into the real-time approaches followed by practitioners to perform both activities.

  • 1542.
    Vemula, S Sai Srinivas Jayapala
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems. Blekinge Institute of Technology.
    Performance Evaluation of OpenStack Deployment Tools2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Cloud computing allows access to a collection of computing resources that can be easily provisioned, configured and released on demand with minimum cost and effort. OpenStack is an open source cloud management platform aimed at providing public or private IaaS clouds on standard hardware. Since deploying OpenStack manually is tedious and time-consuming, several tools that automate the deployment of OpenStack are available. Usually, cloud administrators choose a tool based on its level of automation, ease of use or interoperability with their existing tools. Another desired factor when choosing a deployment tool is its deployment speed; however, cloud administrators cannot select based on this factor, since there is no previous work comparing deployment tools with respect to deployment time. This thesis aims to address this issue.

    The main aim of the thesis is to evaluate the performance of OpenStack deployment tools with respect to operating system provisioning and OpenStack deployment time on physical servers. Furthermore, the effect of varying the number of nodes, the OpenStack architecture deployed and the resources (cores and RAM) provided to the deployment node on provisioning and deployment times is also analyzed. The tools are also classified based on the stages of deployment and the method of deploying OpenStack services. In this thesis we evaluate the performance of MAAS, Foreman, Mirantis Fuel and Canonical Autopilot.

    The performance of the tools is measured via an experimental research method. Operating system provisioning time and OpenStack deployment time are measured while varying the number of nodes/OpenStack architecture and the resources provided to the deployment node, i.e. cores and RAM.

    Results show that the provisioning time of MAAS is lower than that of Mirantis Fuel, which in turn is lower than that of Foreman, for all node scenarios and resource cases considered. Furthermore, for all three tools, provisioning time increases as the number of nodes increases; however, the increase is smallest for MAAS. Similarly, the results for bare-metal OpenStack deployment time show that Canonical Autopilot outperforms Mirantis Fuel by a significant margin for all OpenStack scenarios and resource cases considered. Furthermore, as the number of nodes in an OpenStack scenario and its complexity increase, the deployment time for both tools increases.

    From the research, it is concluded that MAAS and Canonical Autopilot perform better as provisioning and bare-metal OpenStack deployment tools, respectively, than the other tools analyzed. Furthermore, it can be concluded that an increase in the number of nodes/OpenStack architecture leads to an increase in both provisioning time and OpenStack deployment time for all the tools. Finally, after analyzing the results, the tools are classified based on their method of deploying OpenStack services, i.e. parallel or role-wise parallel.
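
    The deployment-time measurements above boil down to timing a tool's run; here is a minimal Python sketch of such a timing wrapper, with a placeholder command rather than the actual CLI of any of the four tools.

        import subprocess
        import time

        def timed_run(cmd):
            # Run the deployment command and return its wall-clock duration.
            start = time.monotonic()
            subprocess.run(cmd, check=True)
            return time.monotonic() - start

        duration = timed_run(["echo", "deploy-openstack"])  # placeholder command
        print(f"deployment took {duration:.1f} s")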

  • 1543.
    Vemulapalli, Revanth
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Mada, Ravi Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance of Disk I/O operations during the Live Migration of a Virtual Machine over WAN2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Virtualization is a technique that allows several virtual machines (VMs) to run on a single physical machine (PM) by adding a virtualization layer above the physical host's hardware. Many virtualization products allow a VM to be migrated from one PM to another without interrupting the services running on the VM. This is called live migration and offers many potential advantages, such as server consolidation, reduced energy consumption, disaster recovery, reliability, and efficient workflows such as "Follow-the-Sun". At present, the advantages of VM live migration are limited to Local Area Networks (LANs), as migrations over Wide Area Networks (WANs) offer lower performance due to IP address changes in the migrating VMs and large network latency. For scenarios that require migrations, shared storage solutions like iSCSI (block storage) and NFS (file storage) are used to store the VM's disk, to avoid the high latencies associated with disk state migration when private storage is used. When using iSCSI or NFS, all disk I/O operations generated by the VM are encapsulated and carried to the shared storage over the IP network. The underlying WAN latency will affect the performance of applications requesting disk I/O from the VM. In this thesis, our objective was to determine the performance of shared and private storage when VMs are live migrated in networks with high latency, with WANs as the typical case. To achieve this objective, we used Iometer, a disk benchmarking tool, to investigate the I/O performance of iSCSI and NFS when used as shared storage for live migrating Xen VMs over emulated WANs. In addition, we configured the Distributed Replicated Block Device (DRBD) system to provide private storage for our VMs through incremental disk replication. We then studied the I/O performance of the private storage solution in the context of live disk migration and compared it to the performance of shared storage based on iSCSI and NFS. The results from our testbed indicate that the DRBD-based solution should be preferred over the considered shared storage solutions, because DRBD consumed less network bandwidth and had a lower maximum I/O response time.
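
    As a much simpler stand-in for the Iometer measurements described above, the following Python sketch times synchronous 4 KiB writes on the storage under test and reports average and maximum response time; the mount path and request count are hypothetical.

        import os
        import time

        PATH = "/mnt/storage-under-test/io_probe.bin"  # hypothetical mount
        BLOCK = os.urandom(4096)
        N = 1000

        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        latencies = []
        for _ in range(N):
            t0 = time.monotonic()
            os.write(fd, BLOCK)
            os.fsync(fd)                 # force the write to stable storage
            latencies.append(time.monotonic() - t0)
        os.close(fd)

        print(f"avg {sum(latencies) / N * 1000:.2f} ms, "
              f"max {max(latencies) * 1000:.2f} ms")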

  • 1544.
    Venigalla, Thejaswi
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Akkapaka, Raj Kiran
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Teletraffic Models for Mobile Network Connectivity.2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    We are in an era marked by tremendous global growth in mobile traffic and subscribers, due to the shift in mobile communication technology from the second generation to the third and fourth generations. Usage of packet-data applications in particular has recorded remarkable growth. The need for mobile communication networks capable of providing an ever increasing spectrum of services calls for efficient techniques for the analysis, monitoring and design of networks. To meet the ever increasing demands of users and to ensure reliability and affordability, system models must be developed that can capture the characteristics of the actual network load and yield acceptably precise predictions of performance in a reasonable amount of time. This can be achieved using teletraffic models, as they capture the behaviour of a system through interpretable functions and parameters. Past years have seen a great many teletraffic models for different purposes; nevertheless, there is no model that provides a proper framework for analysing mobile networks. This report attempts to provide a framework for analysing mobile traffic; based on the analysis, we design teletraffic models that represent realistic mobile networks and calculate the buffer underflow probability.

  • 1545.
    Venkesh, Kandari
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Implementation and Performance Optimization of WebRTC Based Remote Collaboration System2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 1546.
    Vesterlund, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Wiklund, Viktor
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Is this your smart phone?: On connecting MAC-addresses to a specific individual using access point data2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The potential to track individuals is becoming ever greater in today's society. We want to develop a method that is easy to understand, so that more people can participate in the discussion about the collection and storage of seemingly non-invasive device data and about personal integrity.

    Objectives. In this work we investigate the potential to connect a WiFi-enabled device to a known individual by analysing log files. Since we want to keep the method as simple as possible, we chose not to use machine learning, as it might add unnecessary layers of complexity.

    Methods. The experiments were performed against a test group consisting of six persons. The dataset used consisted of authentication logs from a university WiFi network, collected during one month, and data acquired by capturing WiFi traffic.

    Results. We were able to connect 67% of the targeted test persons to their smart phones and 60% to their laptops.

    Conclusions. In this work we conclude that a device identifier in combination with data that can tie it to a location at a given time is to be seen as sensitive information with regard to personal integrity. We also conclude that it is possible to create and use an easy method to connect a device to a given person.
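
    To make the idea concrete, here is a minimal Python sketch of the kind of log correlation the method relies on; the log format, MAC addresses and presence data are entirely hypothetical.

        from collections import Counter

        # WiFi authentication log rows: (hour, MAC address, access point).
        auth_log = [
            (9,  "aa:bb:cc:dd:ee:01", "AP-library"),
            (9,  "aa:bb:cc:dd:ee:02", "AP-cafe"),
            (13, "aa:bb:cc:dd:ee:01", "AP-lab"),
            (13, "aa:bb:cc:dd:ee:03", "AP-lab"),
        ]
        # Where the target person is known to have been: (hour, access point).
        known_presence = {(9, "AP-library"), (13, "AP-lab")}

        # Score each MAC by how often it was seen where and when the
        # person was present.
        scores = Counter()
        for hour, mac, ap in auth_log:
            if (hour, ap) in known_presence:
                scores[mac] += 1

        # The MAC that co-occurs with the person most often is the candidate.
        print(scores.most_common(1))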

  • 1547.
    Vestman, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Asynchronous Event Communication Technique for Soft Real-Time GPGPU Applications2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context Interactive GPGPU applications require low response time feedback from events such as user input in order to provide a positive user experience. Communication of these events must be performed asynchronously so as not to cause significant performance penalties.

    Objectives In this study the usage of CPU/GPU shared virtual memory to perform asynchronous communication is explored. Previous studies have shown that shared virtual memory can increase computational performance compared to other types of memory.

    Methods A communication technique that aimed to utilize the performance increasing properties of shared virtual memory was developed and implemented. The implemented technique was then compared to an implementation using explicitly transferred memory in an experiment measuring the performance of the various stages involved in the technique.

    Results The results from the experiment revealed that utilizing shared virtual memory for asynchronous communication was in general slightly slower than, or comparable to, using explicitly transferred memory. In some cases, where the memory access pattern was right, utilization of shared virtual memory led to a 50% reduction in execution time compared to explicitly transferred memory.

    Conclusions It was concluded that shared virtual memory can be utilized for performing asynchronous communication, and that by utilizing shared virtual memory a performance increase can be achieved over explicitly transferred memory. In addition, it was concluded that careful consideration of data size and access pattern is required to utilize the performance-increasing properties of shared virtual memory.

  • 1548.
    Vestman, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cloud application platform - Virtualization vs Containerization: A comparison between application containers and virtual machines2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. As the number of organizations using cloud application platforms to host their applications increases, the priority of distributing the physical resources within those platforms increases as well. The goal is to host a higher quantity of applications per physical server while retaining satisfactory performance and a certain scalability. The modern needs of customers occasionally also imply an assurance of a certain privacy for their applications.

    Objectives. In this study, two types of instances for hosting applications in cloud application platforms, virtual machines and application containers, are comparatively analyzed. The investigation aims to expose the advantages and disadvantages of each instance type in order to determine which is more appropriate for use in cloud application platforms, in terms of performance, scalability and user isolation.

    Methods. The comparison is done on a server running Ubuntu 16.04. The virtual machine is created using DevStack, a development environment for OpenStack, while the application container is hosted by Docker. Each instance runs an Apache web server for handling HTTP requests. The comparison is performed by running different benchmark tools for different key usage scenarios while observing the resource usage of each instance.

    Results. The results are produced by investigating the user isolation and resource occupation of each instance: the file system, active process handling and resource allocation are examined after creation. Benchmark tools are executed locally on each instance to compare the use of physical resources. The number of CPU operations executed within a given time is measured to determine processor performance, while the speed of read and write operations to main memory is measured to determine RAM performance. A file is also transmitted between the host server and the application to compare network performance, by examining the transfer speed. Lastly, a set of benchmark tools is executed on the host server to measure the HTTP request handling performance and scalability of each instance. The number of requests handled per second is observed, as well as the resource usage of request handling at an increasing rate of served requests and clients.

    Conclusions. The virtual machine is a better choice for applications where privacy is a higher priority, due to its complete isolation and abstraction from the rest of the physical server. Virtual machines perform better at handling a higher quantity of requests per second, while application containers are faster at transferring files over the network. The container requires significantly fewer resources than the virtual machine to run and execute tasks, such as responding to HTTP requests. When it comes to scalability, the preferred type of instance depends on the priority of the key usage scenarios: virtual machines have quicker response times for HTTP requests, but application containers occupy fewer physical resources, which makes it possible to run a higher quantity of containers than virtual machines simultaneously on the same physical server.
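
    The thesis relies on dedicated benchmark tools. Purely as an illustration of the requests-per-second measurement described above, here is a small Python load-generator sketch; the URL, request count and client count are placeholders, not the thesis's configuration.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/"   # placeholder: the instance under test
    REQUESTS = 1000                  # total requests to issue
    CLIENTS = 16                     # concurrent clients

    def fetch(_):
        # Issue one HTTP GET and read the full response body.
        with urlopen(URL) as resp:
            resp.read()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.perf_counter() - start
    print(f"{REQUESTS / elapsed:.1f} requests/second at {CLIENTS} clients")
    ```

    Repeating such a run at an increasing number of clients, while sampling CPU and memory usage on the host, is one way to observe the scalability behavior the thesis measures.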

  • 1549.
    Viding, Emmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Does Your TV Spy on You?: The security, privacy and safety issues with IoT2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The Internet of Things is growing steadily, both in Sweden and globally. This relatively new technology improves the lives of many, but at the price of their security, privacy and safety.

    This thesis consists of a literature study and an online survey. It investigates what security, privacy and safety risks Internet of Things devices may bring, how aware people are of these risks, how users can minimize the risk of being hacked or attacked, and what manufacturers can do to make safer Internet of Things devices.

    The survey was created based on the risks related to Internet of Things devices that were found during the literature study.

    It was possible to identify security, privacy and safety risks related to the Internet of Things, as well as answers to how both users and manufacturers can protect their devices from being hacked. The survey showed a correlation between how interested people are in technology and how aware they are of the risks of the Internet of Things.

    Internet of Things devices can be used for DDoS attacks, espionage and eavesdropping. People who are interested in technology tend to protect themselves more actively (by changing default passwords and updating the software) than those who are not.
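
    As an aside on the reported correlation: for Likert-style survey answers, a rank correlation is a natural measure. A minimal sketch of how such a correlation could be computed follows; the data is invented for illustration and is not the thesis's survey responses.

    ```python
    from scipy.stats import spearmanr

    # Invented example data: each pair is one respondent's self-rated
    # technology interest and risk-awareness score (1-5 Likert scales).
    interest  = [5, 4, 2, 1, 3, 5, 2, 4, 1, 3]
    awareness = [4, 5, 2, 1, 3, 5, 1, 4, 2, 2]

    # Spearman's rank correlation suits ordinal Likert-scale answers.
    rho, p_value = spearmanr(interest, awareness)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
    ```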

  • 1550.
    Vighagen, Anders
    Hallgren, Jesper
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Visuellt kaos2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    This thesis deals with the narrative similarities and differences between comics, literature and film, and how these can be used to create a non-linear narrative. We wanted to find where the strengths of comics lie, in order to tell a story in a new way that we are not used to seeing. To investigate this, we researched what non-linear storytelling is and what distinguishes different storytelling media from one another. We then applied this research in the creation of a comic book, in which we tested the techniques and methods we had learned about. The result is based on the experience we gained from creating the comic book and on how the various narrative similarities and differences could be applied to a comic. The finished product is described in the results section, and the parts relevant to the research question are described in detail, giving a clear picture of what comics can achieve.
