  • 51.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
Overview of the ICST International Software Testing Contest, 2017. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, p. 550-551. Conference paper (Refereed)
    Abstract [en]

In the software testing contest, practitioners and researchers are invited to pit their test approaches against similar approaches in order to evaluate pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated on the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing, Verification and Validation (ICST), 2017. © 2017 IEEE.

  • 52.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nutalapati, Swetha Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Continuous Integration, Deployment and Testing in DevOps Environment, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

Context. Owing to a multitude of factors like rapid changes in technology, market needs, and business competitiveness, software companies these days face pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles, has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies to overcome those challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

Methods. A mixed-methods approach comprising both qualitative and quantitative research methods is used to accomplish the research objectives. A Systematic Literature Review (SLR) is conducted to identify the benefits and challenges of DevOps adoption, and the DevOps practices. Interviews are conducted to further validate the SLR findings and to identify solutions to overcome DevOps adoption challenges, as well as the DevOps practices. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data and to identify other benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for conducting the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption, and the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed, and interviews are conducted. The interview data is thematically coded, and a list of the benefits, challenges of DevOps adoption and solutions to overcome them, DevOps practices, and problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, DevOps practices and problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document. The final list can aid researchers and software practitioners in obtaining a better understanding regarding the functioning and adoption of DevOps. Also, it has been observed that there is a need for more empirical research in this domain.

  • 53.
    Ambreen, T.
    et al.
    Int Islamic Univ, PAK.
    Ikram, N.
    Riphah Int Univ, PAK.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Niazi, M.
    King Fahd Univ Petr & Minerals, SAU.
Empirical research in requirements engineering: trends and opportunities, 2018. In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 23, no 1, p. 63-95. Article in journal (Refereed)
    Abstract [en]

Requirements engineering (RE), being a foundation of software development, has gained great recognition in the recent era of the prevailing software industry. A number of journals and conferences have published a great amount of RE research in terms of various tools, techniques, methods, and frameworks, with a variety of processes applicable in different software development domains. The plethora of empirical RE research needs to be synthesized to identify trends and future research directions. To represent the state of the art of requirements engineering, along with various trends and opportunities of empirical RE research, we conducted a systematic mapping study to synthesize the empirical work done in RE. We used four major databases, IEEE, ScienceDirect, SpringerLink and ACM, and identified 270 primary studies up to the year 2012. An analysis of the data extracted from the primary studies shows that empirical research work in RE has been on the increase since the year 2000. Requirements elicitation with 22 % of the total studies, requirements analysis with 19 % and the RE process with 17 % are the major focus areas of empirical RE research. Non-functional requirements were found to be the most researched emerging area. The empirical work in the sub-area of requirements validation and verification is scarce and shows a decreasing trend. The majority of the studies (50 %) used a case study research method, followed by experiments (28 %), whereas experience reports are few (6 %). A common trend in almost all RE sub-areas is the proposal of new interventions. The leading intervention types are guidelines, techniques and processes. The interest in RE empirical research is on the rise as a whole. However, the requirements validation and verification area, despite its recognized importance, lacks empirical research at present. Furthermore, requirements evolution and privacy requirements also have little empirical research. These RE sub-areas need the attention of researchers for more empirical research. At present, the focus of empirical RE research is more on proposing new interventions. In future, there is a need to replicate existing studies as well as to evaluate the RE interventions in more real contexts and scenarios. The practitioners' involvement in RE empirical research needs to be increased so that they share their experiences of using different RE interventions and also inform us about the current requirements-related challenges and issues that they face in their work. © 2016 Springer-Verlag London

  • 54.
    Amiri, Mohammad Reza Shams
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rohani, Sarmad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Automated Camera Placement using Hybrid Particle Swarm Optimization, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

Context. Automatic placement of surveillance cameras' 3D models in an arbitrary floor plan containing obstacles is a challenging task. The problem becomes more complex when different types of region of interest (RoI) and minimum resolution are considered. An automatic camera placement decision support system (ACP-DSS) integrated into a 3D CAD environment could assist surveillance system designers with the process of finding good camera settings under multiple constraints.

Objectives. In this study we designed and implemented two subsystems: a camera toolset in SketchUp (CTSS) and a decision support system using an enhanced Particle Swarm Optimization (PSO) algorithm (HPSO-DSS). The objective for the proposed algorithm was to have good computational performance in order to quickly generate a solution for the automatic camera placement (ACP) problem. The new algorithm benefited from aspects of other heuristics, such as hill-climbing and greedy algorithms, as well as a number of new enhancements.

Methods. Both CTSS and ACP-DSS were designed and constructed using the information technology (IT) research framework. A state-of-the-art evolutionary optimization method, Hybrid PSO (HPSO), implemented to solve the ACP problem, was the core of our decision support system.

Results. The CTSS was evaluated by a group of its potential users, who employed the tool and then answered a survey; the evaluation confirmed a high level of satisfaction among the respondents. Various aspects of the HPSO algorithm were compared to two other algorithms (PSO and a Genetic Algorithm), all implemented to solve our ACP problem.

Conclusions. The HPSO algorithm provided an efficient mechanism to solve the ACP problem in a timely manner. The integration of ACP-DSS into CTSS may aid surveillance designers to plan and validate the design of their security systems adequately and more easily. The quality of CTSS as well as the solutions offered by ACP-DSS were confirmed by a number of field experts.
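
To make the optimization core concrete, here is a minimal sketch of one canonical PSO velocity-and-position update over a candidate camera configuration. It is not the thesis's HPSO: the hill-climbing and greedy enhancements, the visibility-based objective and all names below are assumptions for the example.

```go
// Hypothetical sketch of one canonical PSO update step over a camera's
// placement parameters. The thesis's HPSO adds hill-climbing and greedy
// enhancements (and a coverage objective) that are not shown here.
package main

import (
	"fmt"
	"math/rand"
)

// Particle holds a candidate camera configuration (e.g. x, y, pan).
type Particle struct {
	pos, vel, best []float64 // current position, velocity, personal best
}

// step applies the standard PSO velocity and position update.
func step(p *Particle, globalBest []float64, w, c1, c2 float64) {
	for d := range p.pos {
		r1, r2 := rand.Float64(), rand.Float64()
		p.vel[d] = w*p.vel[d] + // inertia
			c1*r1*(p.best[d]-p.pos[d]) + // cognitive pull
			c2*r2*(globalBest[d]-p.pos[d]) // social pull
		p.pos[d] += p.vel[d]
	}
}

func main() {
	p := Particle{
		pos:  []float64{1.0, 2.0, 0.3},
		vel:  make([]float64, 3),
		best: []float64{1.5, 2.5, 0.1},
	}
	step(&p, []float64{2.0, 3.0, 0.0}, 0.7, 1.5, 1.5)
	fmt.Println(p.pos) // moved toward the personal and global bests
}
```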

  • 55.
    Amjad, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Malhi, Rohail Khan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Burhan, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
DIFFERENTIAL CODE SHIFTED REFERENCE IMPULSE-BASED COOPERATIVE UWB COMMUNICATION SYSTEM, 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

Cooperative Impulse Radio Ultra-Wideband (IR-UWB) communication is a radio technology popular for short-range communication systems, as it enables single-antenna mobiles in a multi-user environment to share their antennas by creating a virtual MIMO system to achieve transmit diversity. In order to improve cooperative IR-UWB system performance, we use Differential Code Shifted Reference (DCSR). Simulations are used to compute the Bit Error Rate (BER) of DCSR in a cooperative IR-UWB system using different numbers of Decode-and-Forward relays while changing the distance between the source and destination nodes. The results suggest that, compared to a Code Shifted Reference (CSR) cooperative IR-UWB communication system, the DCSR system performs better in terms of BER, power efficiency and channel capacity. The simulations are performed for both non-line of sight (N-LOS) and line of sight (LOS) conditions, and the results confirm that the system performs better in a LOS channel environment. The simulation results also show that performance improves as the number of relay nodes is increased to a sufficiently large number.

  • 56.
    Ammar, Doreid
    et al.
    Norwegian Univ Sci & Technol, NOR.
    De Moor, Katrien
    Norwegian Univ Sci & Technol, NOR.
    Xie, Min
    Next Generat Serv, Telenor Res, NOR.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Heegaard, Poul
    Norwegian Univ Sci & Technol, NOR.
Video QoE Killer and Performance Statistics in WebRTC-based Video Communication, 2016. Conference paper (Refereed)
    Abstract [en]

In this paper, we investigate session-related performance statistics of a Web-based Real-Time Communication (WebRTC) application called appear.in. We explore the characteristics of these statistics and how they may relate to users' Quality of Experience (QoE). More concretely, we ran a series of two-party tests according to different test scenarios and collected real-time session statistics by means of Google Chrome's WebRTC-internals tool. Despite the fact that the Chrome statistics have a number of limitations, our observations indicate that they are useful for QoE research when these limitations are known and carefully handled in post-processing analysis. The results from our initial tests show that a combination of performance indicators measured at the sender's and receiver's end may help to identify severe video freezes (an important QoE killer) in the context of WebRTC-based video communication. The performance indicators used in this paper are significant drops in data rate, non-zero packet loss ratios, non-zero PLI values, and non-zero bucket delay.
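
To make the last sentence concrete, the sketch below flags one statistics interval as a likely freeze when a sharp data-rate drop coincides with non-zero loss, PLI or bucket delay. The Sample fields and the 80% drop threshold are illustrative assumptions, not Chrome's actual WebRTC-internals keys or the paper's thresholds.

```go
// Hedged sketch: flags a likely freeze from one interval of session
// statistics. The Sample fields and the 80% drop threshold are
// illustrative assumptions, not Chrome's actual WebRTC-internals keys.
package main

import "fmt"

// Sample holds per-interval receiver-side indicators.
type Sample struct {
	BitrateKbps   float64 // received video data rate
	PacketsLost   int     // packets lost in this interval
	PLICount      int     // Picture Loss Indications sent
	BucketDelayMs float64 // buffered, not-yet-sent data delay
}

// likelyFreeze reports a severe-freeze candidate: a sharp data-rate
// drop coinciding with any non-zero loss, PLI or bucket delay.
func likelyFreeze(prev, cur Sample) bool {
	drop := prev.BitrateKbps > 0 && cur.BitrateKbps < 0.2*prev.BitrateKbps
	return drop && (cur.PacketsLost > 0 || cur.PLICount > 0 || cur.BucketDelayMs > 0)
}

func main() {
	prev := Sample{BitrateKbps: 900}
	cur := Sample{BitrateKbps: 60, PLICount: 2}
	fmt.Println(likelyFreeze(prev, cur)) // true
}
```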

  • 57.
Amujala, Narayana Kailash
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
Sanki, John Kennedy
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
Video Quality of Experience through Emulated Mobile Channels, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

Over the past few years, Internet traffic has increased steeply, and most of it is video traffic. The latest Cisco forecast estimates that by 2017 online video will be a widely adopted service with a large customer base. As networks become increasingly ubiquitous, applications are turning equally intelligent. A typical video communication chain involves transmission of encoded raw video frames with subsequent decoding at the receiver side. One such intelligent codec that is gaining large research attention is H.264/SVC, which can adapt dynamically to end-device configurations and network conditions. With such bandwidth-hungry video communications running over lossy mobile networks, it is extremely important to quantify end-user acceptability. This work primarily investigates problems at the player user interface level rather than physical layer disturbances. We chose inter-frame time at the application layer to quantify the user experience (player UI) under varying lower-layer settings such as noise and link power, with illustrative demonstrator cases. The results show that extreme noise and low link-power settings have an adverse effect on the user experience in the temporal dimension: the videos are affected by frequent jumps and freezes.
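
A minimal sketch of the application-layer metric the thesis relies on: inter-frame time, the gap between successive frames observed at the player UI. The timestamps and the freeze interpretation below are assumptions for the example.

```go
// Minimal sketch of the inter-frame-time metric: the gap between
// successive frames observed at the player UI. Timestamps and the
// freeze interpretation are assumptions for the example.
package main

import (
	"fmt"
	"time"
)

// interFrameTimes returns the gaps between consecutive frame arrivals;
// unusually large gaps correspond to visible jumps and freezes.
func interFrameTimes(frames []time.Time) []time.Duration {
	var gaps []time.Duration
	for i := 1; i < len(frames); i++ {
		gaps = append(gaps, frames[i].Sub(frames[i-1]))
	}
	return gaps
}

func main() {
	t0 := time.Now()
	frames := []time.Time{t0, t0.Add(40 * time.Millisecond), t0.Add(540 * time.Millisecond)}
	fmt.Println(interFrameTimes(frames)) // the 500ms gap would be perceived as a freeze
}
```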

  • 58.
Ananth, Indirajith Vijai
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Study on Assessing QoE of 3DTV Using Subjective Methods, 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

The ever increasing popularity and enormous growth of the 3D movie industry is stimulating the penetration of 3D services into home entertainment systems. Providing a third dimension gives an intense visual experience to viewers. Being a new field, there are several ongoing research efforts to measure the end user's viewing experience. Research groups including 3D TV manufacturers, service providers and standards organizations are interested in improving user experience. Recent research in 3D video quality measurement has revealed uncertain issues as well as more well-known results. Measuring perceptual stereoscopic video quality by subjective testing can provide practical results. This thesis studies and investigates three different rating scales (Video Quality, Visual Discomfort and Sense of Presence) and compares them by subjective testing, combined with two viewing distances of 3H and 5H, where H is the height of the display screen. This thesis work shows that a single rating scale produces the same result as three different scales, and that viewing distance has little or no impact on the Quality of Experience (QoE) of 3DTV for the 3H and 5H distances under symmetric coding impairments.

  • 59.
    Anderberg, Ted
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Rosén, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
Follow the Raven: A Study of Audio Diegesis within a Game’s Narrative, 2017. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

Virtual Reality is one of the next big things in gaming; more and more games delivering an immersive VR experience are popping up. Words such as immersion and presence have quickly become buzzwords that are often used to describe a VR game or experience. This interactive simulation of reality is literally turning people's heads. The crowd pleaser, the ability to look around in 360 degrees, is however casting a shadow on the aural aspect. This study focused on this problem in relation to audio narrative. We examined which differences we could identify between a purely diegetic audio narrative and one utilizing a mix of diegetic and non-diegetic sound, and how to grab the player's attention and guide them to places in order for them to progress in the story. By spatializing audio using HRTF, we tested this dilemma through a game comparison with the help of soundscapes by R. Murray Schafer and the auditory hierarchy by David Sonnenschein, as well as inspiration from Actor-Network Theory. In our game comparison we found that while the synthesized, non-diegetic sound ensured that the sound grabs the player's attention, the risk of breaking the player's immersion also increases.

  • 60.
    Anderdahl, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Darner, Alice
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
Particle Systems Using 3D Vector Fields with OpenGL Compute Shaders, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

Context. Particle systems and particle effects are used to simulate a realistic and appealing atmosphere in many virtual environments. However, they occupy a significant amount of computational resources. The demand for more advanced graphics increases with each generation; likewise, particle systems need to become increasingly more detailed.

Objectives. This thesis proposes a texture-based 3D vector field particle system, computed on the Graphics Processing Unit, and compares it to an equation-based particle system.

Methods. Several tests were conducted comparing different situations and parameters for the methods. All of the tests measured the computational time needed to execute the different methods.

Results. We show that the texture-based method was effective in very specific situations, where it was expected to outperform the equation-based one. Otherwise, the equation-based particle system is still the most efficient.

Conclusions. Generally the equation-based method is preferred, except in very specific cases. The texture-based method is most efficient for static particle systems and when a huge number of forces is applied to a particle system. Texture-based vector fields are hardly useful otherwise.
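
The contrast between the two compared methods can be sketched as follows: an equation-based force is evaluated in closed form, while the texture-based method looks a force up in a discretised 3D vector field. The nearest-neighbour lookup below stands in for the GPU's filtered texture fetch; all names are assumptions for the example.

```go
// Sketch contrasting the two compared approaches: a closed-form force
// versus a lookup in a discretised 3D vector field. Nearest-neighbour
// sampling stands in for the GPU's filtered texture fetch.
package main

import "fmt"

type Vec3 struct{ X, Y, Z float64 }

// equationForce is the equation-based path, e.g. constant gravity.
func equationForce(Vec3) Vec3 { return Vec3{0, -9.81, 0} }

// Field is an N*N*N grid of force vectors spanning the unit cube.
type Field struct {
	N    int
	Data []Vec3 // length N*N*N
}

// sample maps a position in [0,1)^3 to the nearest grid cell's vector.
func (f Field) sample(p Vec3) Vec3 {
	clamp := func(v float64) int {
		i := int(v * float64(f.N))
		if i < 0 {
			return 0
		}
		if i >= f.N {
			return f.N - 1
		}
		return i
	}
	x, y, z := clamp(p.X), clamp(p.Y), clamp(p.Z)
	return f.Data[(z*f.N+y)*f.N+x]
}

func main() {
	f := Field{N: 2, Data: make([]Vec3, 8)}
	f.Data[0] = Vec3{1, 0, 0} // a force stored in one cell
	p := Vec3{0.1, 0.2, 0.3}
	fmt.Println(equationForce(p), f.sample(p))
}
```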

  • 61.
    Andersen, Dennis
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
Screen-Space Subsurface Scattering, A Real-time Implementation Using Direct3D 11.1 Rendering API, 2015. Independent thesis Basic level (degree of Bachelor), 180 HE credits. Student thesis
    Abstract [en]

Context. Subsurface scattering is the effect of light scattering within a material. Many materials on earth possess translucent properties. It is therefore an important factor to consider when trying to render realistic images. Historically the effect has been used in offline rendering with ray tracers, but it is now considered a real-time rendering technique based on approximations of previous models. Early real-time methods approximate the effect in object texture space, which does not scale well in real-time applications such as games. A relatively new approach makes it possible to apply the effect as a post-processing effect using GPGPU capabilities, making it compatible with most modern rendering pipelines.

Objectives. The aim of this thesis is to explore the possibilities of a dynamic real-time solution to subsurface scattering that uses a modern rendering API to utilize GPGPU programming and modern data management, combined with previous techniques.

Methods. The proposed subsurface scattering technique is implemented in a delimited real-time graphics engine using a modern rendering API, and its impact on performance is evaluated by conducting several experiments with specific properties.

Results. The results obtained hint that, by using a flexible solution to represent materials, execution time lands at an acceptable rate and the technique could be used in real time. The results show that the execution time grows nearly linearly with the number of layers and the strength of the effect. Because the technique is performed in screen space, the performance scales with subsurface scattering screen coverage and screen resolution.

Conclusions. The technique could be used in real time and could trivially be integrated into most existing rendering pipelines. Further research and testing should be done to determine how the effect scales in a complex 3D game environment.

  • 62.
    Andersson, Anders Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
Facial Feature Tracking and Head Pose Tracking as Input for Platform Games, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are defined as points distributed on a face according to certain facial features, such as eye corners and the face contour. This opens up for using facial feature movements as a hands-free human-computer interaction technique. Such alternatives to traditional input devices can give a more interesting gaming experience. They also open up for more intuitive controls and can possibly give greater access to computers and video game consoles for certain disabled users with difficulties using their arms and/or fingers.

This research explores using facial feature tracking to control a character's movements in a platform game. The aim is to interpret facial feature tracker data and convert facial feature movements to game input controls. The facial feature input is compared with other hands-free input methods, as well as traditional keyboard input. The other hands-free input methods explored are head pose estimation and a hybrid of the facial feature and head pose estimation inputs. Head pose estimation is a method where the application extracts the angles at which the user's head is tilted. The hybrid input method utilises both head pose estimation and facial feature tracking.

The input methods are evaluated by user performance and subjective ratings from voluntary participants playing a platform game using each of the input methods. Performance is measured by the time, the number of jumps and the number of turns it takes for a user to complete a platform level. Jumping is an essential part of platform games: to reach the goal, the player has to jump between platforms, and an inefficient input method might make this a difficult task. Turning is the action of changing the direction of the player character from facing left to facing right or vice versa. This measurement is intended to pick up difficulties in controlling the character's movements; if the player makes many turns, it is an indication that it is difficult to use the input method to control the character's movements efficiently.

The results suggest that keyboard input is the most effective input method, while it is also the least entertaining of the input methods. There is no significant difference in performance between facial feature input and head pose input. The hybrid input version has the best results overall of the alternative input methods. The hybrid input method achieved significantly better performance results than the head pose and facial feature input methods, while its results showed no statistically significant difference from the keyboard input method.

    Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Control
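
As a rough illustration of how head pose estimation output can be converted into the game controls described above, the sketch below thresholds yaw and pitch angles into left/right movement and jumping. The dead zone and jump threshold are invented for the example, not taken from the thesis.

```go
// Illustrative sketch of mapping head pose estimation output to
// platform-game controls. Thresholds are invented for the example.
package main

import "fmt"

// Input is the platform game's control state for one frame.
type Input struct {
	MoveLeft, MoveRight, Jump bool
}

// fromHeadPose thresholds yaw and pitch (in degrees) into controls.
func fromHeadPose(yaw, pitch float64) Input {
	const deadZone = 10.0 // ignore small involuntary head movements
	return Input{
		MoveLeft:  yaw < -deadZone,
		MoveRight: yaw > deadZone,
		Jump:      pitch > 15.0, // head tilted back triggers a jump
	}
}

func main() {
	fmt.Println(fromHeadPose(-17, 3)) // {true false false}: move left
}
```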

  • 63.
    Andersson, David
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nilsson, Eric
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Internet of Things: A survey about knowledge and thoughts, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 64.
    Andersson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Silhouette-based Level of Detail: A comparison of real-time performance and image space metrics, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

Context. The geometric complexity of objects in games and other real-time applications is a crucial aspect of the application's performance. Such applications usually redraw the screen 30-60 times per second, sometimes even more often, which can be a hard task in an environment with a high number of geometrically complex objects. The concept called Level of Detail, often abbreviated LoD, aims to alleviate the load on the hardware by introducing methods and techniques to minimize the amount of geometry while still maintaining the same, or a very similar, result.

    Objectives. This study will compare four of the often used techniques, namely Static LoD, Unpopping LoD, Curved PN Triangles, and Phong Tessellation. Phong Tessellation is silhouette-based, and since the silhouette is considered one of the most important properties, the main aim is to determine how it performs compared to the other three techniques.

    Methods. The four techniques are implemented in a real-time application using the modern rendering API Direct3D 11. Data will be gathered from this application to use in several experiments in the context of both performance and image space metrics.

Conclusions. This study has shown that all of the techniques used work in real time, but with varying results. From the experiments it can be concluded that the best technique to use is Unpopping LoD. It has good performance and provides a good visual result with the least average amount of popping of the compared techniques. The dynamic techniques are not suitable as substitutes for Unpopping LoD, but further research could be conducted to examine how they can be used together, and how the objects themselves can be designed with the dynamic techniques in mind.

  • 65.
    Andersson, Linda
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Evaluation of HMI Development for Embedded System Control, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

Context: Interface development is increasing in complexity, and applications with many functionalities that are reliable, understandable and easy to use have to be developed. To be able to compete, the time-to-market has to be short and the development cost-effective. The development process is important and there are many aspects that can be improved. The needs of the development and the knowledge among the developers are key factors. Here code reuse, standardization and the usability of the development tool play an important role and can have a large positive impact on the development process and the quality of the final product.

Objectives: A framework for describing important properties of HMI development tools is presented. A representative collection of two development tools is selected and described, and based on the experiences from the case study its applicability is mapped to the evaluation framework.

Methods: Interviews were made with HMI developers to get information from the field. Following that, a case study of two different development tools was made to highlight the pros and cons of each tool.

Results: The properties presented in the evaluation framework are that the toolkit should be open to multiple platforms, accessible to the developer, support custom templates, require non-extensive coding knowledge and be reusable. The evaluated frameworks show that it is hard to meet all the demands.

Conclusions: Finding a well-suited development toolkit is not an easy task. The choice should be made depending on the needs of the HMI applications and the available development resources.

  • 66.
    Andersson, Lukas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
Comparison of Anti-Aliasing in Motion, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

Background. Aliasing is a problem that every 3D game has, because the resolutions that monitors currently use are not high enough. Aliasing is when you look at an object in a 3D world and see that it has jagged edges where it should be smooth. It can be reduced by a technique called anti-aliasing.

Objectives. The objective of this study is to compare three different techniques, Fast Approximate Anti-Aliasing (FXAA), Subpixel Morphological Anti-Aliasing (SMAA) and Temporal Anti-Aliasing (TAA), in motion, to see which is a good default for games.

    Methods. An experiment was run where 20 people participated and tested a real-time prototype which had a camera moving through a scene multiple times with different anti-aliasing techniques.

Results. The results showed that TAA consistently performed best in the tests of blurry picture quality, aliasing and flickering. SMAA and FXAA were comparable to TAA only in the blur part of the test and fell behind in all other parts.

Conclusions. TAA is a great anti-aliasing technique for avoiding aliasing and flickering while in motion. Blur was thought to be a problem, but as the test shows, most participants did not feel that blur was a problem for any of the techniques used.

  • 67.
    Andersson, Marcus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nilsson, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Improving Integrity Assurances of Log Entries From the Perspective of Intermittently Disconnected Devices, 2014. Student thesis
    Abstract [en]

It is common today in large corporate environments for system administrators to employ centralized systems for log collection and analysis. The log data can come from any device, from smartphones to large-scale server clusters. During an investigation of a system failure or suspected intrusion, these logs may contain vital information. However, the trustworthiness of this log data must be confirmed. The objective of this thesis is to evaluate the state of the art and provide practical solutions and suggestions in the field of secure logging. In this thesis we focus on solutions that do not require a persistent connection to a central log management system. To this end a prototype logging framework was developed, including client, server and verification applications. The client employs different techniques for signing log entries. The focus of this thesis is to evaluate each signing technique from both a security and a performance perspective. This thesis evaluates "Traditional RSA-signing", "Traditional Hash-chains", "Itkis-Reyzin's asymmetric FSS scheme" and "RSA signing and tick-stamping with TPM", the latter being a novel technique developed by us. In our evaluations we recognized the inability of the evaluated techniques to detect so-called "truncation attacks", so a truncation detection module was also developed, which can be used independently of and side by side with any signing technique. In this thesis we conclude that our novel Trusted Platform Module technique has the most to offer in terms of log security; however, it does introduce a hardware dependency on the TPM. We have also shown that the truncation detection technique can be used to assure an external verifier of the number of log entries that have at least passed through the log client software.
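
For concreteness, here is a minimal sketch in the spirit of the "Traditional Hash-chains" technique evaluated above: each tag binds an entry to the whole prefix before it, so tampering or reordering breaks verification, while silently dropping the tail (a truncation attack) goes undetected, which is exactly what the thesis's separate detection module targets. The function and entry names are our own.

```go
// Minimal hash-chain sketch: tag_i = H(tag_{i-1} || entry_i), so each
// tag authenticates every entry before it. Truncation (dropping the
// tail) is NOT detected here; the thesis adds a separate module for that.
package main

import (
	"crypto/sha256"
	"fmt"
)

// chain returns the running tag for each log entry.
func chain(entries []string) [][32]byte {
	tags := make([][32]byte, len(entries))
	prev := [32]byte{} // public initial value
	for i, e := range entries {
		h := sha256.New()
		h.Write(prev[:])
		h.Write([]byte(e))
		copy(tags[i][:], h.Sum(nil))
		prev = tags[i]
	}
	return tags
}

func main() {
	tags := chain([]string{"boot", "login alice", "config change"})
	fmt.Printf("%x\n", tags[len(tags)-1]) // final tag authenticates the whole prefix
}
```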

  • 68.
    Andersson, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
Analys av Arbetsmiljöverkets tillämpning av enkätverktyget NOSACQ-50, 2013. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

In 2013, the Swedish Work Environment Authority (Arbetsmiljöverket) introduced a new web-based survey tool intended to measure the safety culture of companies and organizations. This survey was based on earlier research that produced a questionnaire, NOSACQ-50, for precisely this purpose. It is worth noting that the Authority's version is shortened: some statements were removed and some were rewritten. This is what the analysis addresses: how are the results affected by these changes made by the Authority? The analysis examines the possible margin of error in two different ways. First, a theoretical margin of error is calculated, showing how much the results can differ. Then the results of the survey are analysed with the same variables as established in the theoretical analysis. It turns out that the Authority's version of the survey can give rise to a margin of error of almost 0.8175 points. This margin is surprisingly large, even though it is based on a very improbable situation. The next part of the analysis shows that the survey has a margin of error of <0.00 points, which means that the results are not affected to any great extent. This gives an interesting final result where a large margin of error was demonstrated in theory, but in practice it is nearly non-existent. How the result should be interpreted is not entirely clear. There are a number of sources of error that must be considered, such as the low number of participants in the survey. The analysis is also largely based on subjective assessments, which reduces the credibility of the results. The author has therefore drawn the conclusion that there is an obvious difference in the results between the objects of analysis in theory. However, there is not enough data to establish any difference in practice. Nor is it possible to determine whether the theoretical analysis and its results are correct, only that the difference is there.

  • 69.
    Andersson, Tobias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Brenden, Christoffer
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Parallelism in Go and Java: A Comparison of Performance Using Matrix Multiplication, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

This thesis makes a comparison between the performance of Go and Java using parallelized implementations of the Classic Matrix Multiplication Algorithm (CMMA). The comparison attempts to only use features for parallelization, goroutines for Go and threads for Java, while keeping other parts of the code as generic and comparable as possible to accurately measure the performance during parallelization.

In this report we ask the question of how programming languages compare in terms of multithreaded performance. In high-performance systems such as those designed for mathematical calculations or servers meant to handle requests from millions of users, multithreading and by extension performance are vital. We would like to find out if and how much the choice of programming language could benefit these systems in terms of parallelism and multithreading.

Another motivation is to analyze techniques and programming languages that have emerged that hide the complexity of handling multithreading and concurrency from the user, letting the user specify keywords or commands from which the language takes over and creates and manages the thread scheduling on its own. The Go language is one such example. Is this new technology an improvement over developers coding threads themselves, or is the technology not quite there yet?

To these ends, experiments were done with multithreaded matrix multiplication, implemented using goroutines for Go and threads for Java, and performed with sets of 4096x4096 matrices. Background programs were limited and each set of calculations was run multiple times to get average values for each calculation, which were then finally compared to one another.

Results from the study showed that Go had ~32-35% better performance than Java between 1 and 4 threads, with the difference diminishing to ~2-5% at 8 to 16 threads. The difference, however, was believed to be mostly unrelated to parallelization, as both languages maintained near identical performance scaling as the number of threads increased until the scaling flatlined for both languages at 8 threads and up. Java did continue to gain a slight increase going from 4 to 8 threads, but this was believed to be due to inefficient resource utilization on Java's part or due to Java having better utilization of hyper-threading than Go.

In conclusion, Go was found to be considerably faster than Java when going from the main thread and up to 4 threads. At 8 threads and onward, Java and Go performed roughly equally. Regarding performance differences between thread counts within the languages themselves, no noticeable performance increase or decrease was found when creating 1 thread versus running the matrix multiplication directly on the main thread for either of the two languages. Coding multithreading in Go was found to be easier than in Java while providing greater or equal performance: Go just requires the 'go' keyword, while Java requires thread creation and management. This would put Go in favor for those trying to avoid the complexity of multithreading while also seeking its benefits.
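
A minimal sketch of the goroutine-parallel classic matrix multiplication benchmarked above, with rows partitioned across a fixed number of workers. Details such as loop order and work splitting are our assumptions, not the thesis's exact implementation.

```go
// Row-partitioned parallel CMMA using goroutines; a sketch of the kind
// of implementation the thesis benchmarks, not its exact code.
package main

import (
	"fmt"
	"sync"
)

// multiply computes c = a*b for square matrices using `workers` goroutines.
func multiply(a, b [][]float64, workers int) [][]float64 {
	n := len(a)
	c := make([][]float64, n)
	for i := range c {
		c[i] = make([]float64, n)
	}
	rowsPer := (n + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		start, end := w*rowsPer, (w+1)*rowsPer
		if end > n {
			end = n
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			for i := start; i < end; i++ {
				for k := 0; k < n; k++ { // ikj order is cache-friendlier than ijk
					for j := 0; j < n; j++ {
						c[i][j] += a[i][k] * b[k][j]
					}
				}
			}
		}(start, end)
	}
	wg.Wait()
	return c
}

func main() {
	a := [][]float64{{1, 2}, {3, 4}}
	fmt.Println(multiply(a, a, 2)) // [[7 10] [15 22]]
}
```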

  • 70.
Sekáč, Andrej
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Performance evaluation based on data from code reviews, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process.

Objectives. This thesis uses machine learning methods to approximate a review expert’s performance evaluation function. Due to limitations in the size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on performance. In this research we propose features and analyse their relevance to the development performance evaluation task.

Methods. This thesis uses Radial Basis Function networks as the regression algorithm for approximating the performance evaluation function, and Metric Based Regularisation as the semi-supervised learning method. For the analysis of the feature set and goodness of fit we use statistical tools together with manual analysis.

Results. The semi-supervised learning method achieved an accuracy similar to supervised versions of the algorithm. The feature analysis showed that there is a significant negative correlation between the performance evaluation and three other features. A manual verification of the learned models on unlabelled data achieved 73.68% accuracy.

Conclusions. We have not managed to prove that the used semi-supervised learning method performs better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to change size and the amount of code lines modified in later parts of development are relevant to the performance evaluation task with high probability. The achieved accuracy of models close to 75% leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the performance evaluation approximation.
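
For readers unfamiliar with the regression model named above, here is a minimal Radial Basis Function network evaluation, y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2*sigma^2)). The centres, weights and width are invented for the example; training and the Metric Based Regularisation step are not shown.

```go
// Hedged sketch of RBF-network evaluation over a feature vector.
// Centres, weights and sigma are invented; no training is shown.
package main

import (
	"fmt"
	"math"
)

// rbf is a Gaussian basis function centred at c.
func rbf(x, c []float64, sigma float64) float64 {
	var d2 float64
	for i := range x {
		d := x[i] - c[i]
		d2 += d * d
	}
	return math.Exp(-d2 / (2 * sigma * sigma))
}

// predict evaluates the network for one review-feature vector.
func predict(x []float64, centres [][]float64, weights []float64, sigma float64) float64 {
	var y float64
	for i, c := range centres {
		y += weights[i] * rbf(x, c, sigma)
	}
	return y
}

func main() {
	centres := [][]float64{{0, 0}, {1, 1}}
	weights := []float64{0.8, 0.2}
	fmt.Printf("%.3f\n", predict([]float64{0.5, 0.5}, centres, weights, 1.0))
}
```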

  • 71.
    Andresen, Mario
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Johnsson, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
Analysmetod för designade ljudbilder: Skapandet av nya förhållningssätt för ljuddesign, 2015. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

The purpose of this bachelor's thesis was to produce an analysis method for deconstructing sound design in film. The goal of the analysis method was to create an approach for sound designers that makes it easier to learn from existing sound work and, through practice, possibly contribute to a more advanced way of thinking about what sound can do in a film production.

The method is intended as a support for aspiring sound designers who have acquired technical skills but struggle with the creative part, where the problems do not have equally concrete solutions. The method is also intended as a supplement for more experienced sound designers, for practising their skills or improving their own process.

We think the method reached those goals, but it also turned out to be more flexible than that. By applying the method before working on our production, in which we created the sound for a film clip, the design process became much easier to get through. We therefore believe that a method like ours can be an important part of bringing sound in earlier in a film production.

  • 72.
    Andén, Calle
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Moyle, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
Artem ex Machina: En undersökning av emergence som fenomen och som metod vid skapandet av posthumanistisk konst, 2017. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

This bachelor's thesis is an investigation of the concept of emergence and how it can be used in the creation of digital interactive art. We examine the relationship between the designer and the computer, and how the user can interact with both of these to contribute to and shape the creation.

To demonstrate this, we have created a simulation intended to mimic early human behaviour at a high level: the rise of civilizations, interaction between groups of people, and the exploitation of natural resources. We discuss the ethical and political consequences that follow from creating such a simulation, and what kind of interaction we encourage in our design.

  • 73.
    Angelova, Milena
    et al.
    Technical University of Sofia-branch Plovdiv, BUL.
Devagiri, Vishnu Manasa
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boeva, Veselka
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Linde, Peter
    Blekinge Institute of Technology, The Library.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
An Expertise Recommender System Based on Data from an Institutional Repository (DiVA), 2018. In: Proceedings of the 22nd edition of the International Conference on ELectronic PUBlishing, 2018. Conference paper (Refereed)
    Abstract [en]

Finding experts in academia is an important practical problem, e.g. recruiting reviewers for reviewing conference, journal or project submissions, partner matching for research proposals, finding relevant M.Sc. or Ph.D. supervisors, etc. In this work, we discuss an expertise recommender system that is built on data extracted from the Blekinge Institute of Technology (BTH) instance of the institutional repository system DiVA (Digital Scientific Archive). DiVA is a publication and archiving platform for research publications and student essays used by 46 publicly funded universities and authorities in Sweden and the rest of the Nordic countries (www.diva-portal.org). The DiVA classification system is based on the three-level classification system of the Swedish Higher Education Authority (UKÄ) and Statistics Sweden (SCB). Using the classification terms associated with student M.Sc. and B.Sc. theses published in the DiVA platform, we have developed a prototype system which can be used to identify and recommend thesis subject supervisors in academia.

  • 74.
    Annavarjula, Vaishnavi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Computer-Vision Based Retinal Image Analysis for Diagnosis and Treatment, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

Context- Vision is one of the five elementary physiological senses. Vision is enabled via the eye, a very delicate sense organ that is highly susceptible to damage, and damage results in loss of vision. The damage comes in the form of injuries or diseases such as diabetic retinopathy and glaucoma. While it is not possible to predict accidents, predicting the onset of disease in its earliest stages is highly attainable. Owing to the leaps in imaging technology, it is also possible to provide near-instant diagnosis by utilizing computer vision and image processing capabilities.

Objectives- In this thesis, an algorithm is proposed and implemented to classify images of the retina as healthy or into one of two classes of unhealthy images, i.e., diabetic retinopathy and glaucoma, thus aiding diagnosis. Additionally, the algorithm is studied to investigate which image transformation is more feasible to implement within the scope of this algorithm and which region of the retina helps most in accurate diagnosis.

Methods- An experiment has been designed to facilitate the development of the algorithm. The algorithm is developed in such a way that it can accept all the values of a dataset concurrently and perform both domain transforms independently of each other.

Results- It is found that blood vessels help best in predicting disease associations, with the classifier giving an accuracy of 0.93 and a Cohen’s kappa score of 0.90. Frequency-transformed images also gave good prediction accuracy, with 0.93 on blood vessel images and 0.87 on optic disk images.

Conclusions- It is concluded that blood vessels extracted from the fundus images after frequency transformation give the highest accuracy for the developed algorithm when it uses a bag of visual words and an image category classifier model.

Keywords- Image Processing, Machine Learning, Medical Imaging

  • 75.
    Ansari, Yousuf Hameed
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Siddiqui, Sohaib Ahmed
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
Quality Assessment for HEVC Encoded Videos: Study of Transmission and Encoding Errors, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

There is a demand for video quality measurements in modern video applications, specifically in wireless and mobile communication. In real-time video streaming, the quality of video often degrades due to different factors such as encoder and transmission errors. HEVC/H.265 is considered one of the most promising codecs for compression of ultra-high-definition videos. In this research, full-reference video quality assessment is performed. The raw-format reference videos were taken from the Texas database to build the test video data set. The videos were encoded in HEVC format using the HM9 reference software, and encoding errors were introduced during the encoding process by adjusting the QP values. To introduce packet loss into the videos, a real-time environment was created: videos were sent from one system to another over UDP using NETCAT, and packet loss was induced at different packet loss ratios using NETEM. After the compilation of the video data set, two kinds of analysis were performed to assess video quality. Subjective analysis was carried out with human subjects, and objective analysis was achieved by applying five quality metrics: PSNR, SSIM, UIQI, VFI and VSNR. The objective measurement scores are compared with the subjective ones, and in the end results are deduced using classical correlation methods.
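
As an example of one of the five full-reference metrics listed above, the sketch below computes PSNR for two equally sized 8-bit greyscale frames. This is the standard textbook formula, not the thesis's exact evaluation pipeline.

```go
// Sketch of PSNR for two equally sized 8-bit greyscale frames;
// the standard formula, not the thesis's exact pipeline.
package main

import (
	"fmt"
	"math"
)

// psnr returns the peak signal-to-noise ratio in decibels.
func psnr(ref, test []uint8) float64 {
	var mse float64
	for i := range ref {
		d := float64(ref[i]) - float64(test[i])
		mse += d * d
	}
	mse /= float64(len(ref))
	if mse == 0 {
		return math.Inf(1) // identical frames
	}
	return 10 * math.Log10(255*255/mse)
}

func main() {
	ref := []uint8{10, 20, 30}
	test := []uint8{12, 18, 30}
	fmt.Printf("%.2f dB\n", psnr(ref, test)) // ~43.87 dB
}
```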

  • 76.
    Antman, Benjamin
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
Cybernetic Social Space: A Theoretical Comparison of Mediating Spaces in Digital Culture, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

This essay makes a theoretical comparison of the intricate social production in digital and real spaces, proposing a model for the non-technical exploration of the social production of spaces relating to human digital technology. The 'social space' proposed by Henri Lefebvre (1974), responsible for producing material space, and the holistic model of 'cybernetic space' proposed by Ananda Mitra and Rae Lynn Schwartz (2001), responsible for supporting the production of real and digital spaces, are argued to collaboratively produce cybernetic social spaces, serving as the definition of a unified model for the production of spaces in contemporary society. Digital spaces are argued to be a close analogue of the classical 'social space'. Two native cybernetic spaces are presented and discussed, argued to be responsible for the transitive production of digital and real spaces as they survey and situate the production of cybernetic social space. Finally, two case studies exemplifying the aesthetics and politics of cybernetic space are presented, analyzed and discussed in accordance with the proposed model of cybernetic social space.

  • 77.
Anwar, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Software Quality Characteristics Tested For Mobile Application Development: Literature Review and Empirical Survey, 2015. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

Smartphone use is increasing day by day, and there is a large number of app users. Due to this greater use of apps, the testing of mobile applications should be done correctly and flawlessly to ensure that the applications are effective.

  • 78.
    Aouachria, Moufida
    et al.
    Universite du Quebec a Montreal, CAN.
    Leshob, Abderrahmane
    Universite du Quebec a Montreal, CAN.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ghomari, Abdessamed Réda
    Ecole nationale superieure d'Informatique, DZA.
    Hadaya, Pierre
    Universite du Quebec a Montreal, CAN.
Business Process Integration: How to Achieve Interoperability through Process Patterns, 2017. In: Proceedings - 14th IEEE International Conference on E-Business Engineering, ICEBE 2017 - Including 13th Workshop on Service-Oriented Applications, Integration and Collaboration, SOAIC 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 109-117. Conference paper (Refereed)
    Abstract [en]

Business process integration (BPI) is a crucial technique for supporting inter-organizational business interoperability. BPI allows the automation of business processes and the integration of systems across numerous organizations. The integration of organizations' process models is one of the most frequently addressed and used approaches to achieve BPI. However, this model integration is complex and requires designers to have extensive experience, in particular when organizations' business processes are incompatible. This paper considers the issue of modeling cross-organization processes out of a collection of organizations' private process models. To this end, we propose six adaptation patterns to resolve incompatibilities when combining organizations' processes. Each pattern is formalized with a workflow net. © 2017 IEEE.

  • 79.
    Aoun, Peter
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Berg, Nils
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Moving an on-screen cursor with the Emotiv Insight EEG headset: An evaluation through case studies, 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

Today smartphones are everywhere and they ease the lives of millions of people every day. However, there are people who, for various reasons, are unable to enjoy the benefits of these devices because they cannot interact with a smartphone in the intended way: using their hands.

    In this thesis we investigate an alternative method for interacting with a smartphone; using a commercially available electroencephalography (EEG) headset. EEG is a technique for measuring and recording brain activity, often through the use of sensors placed along the scalp of the user.

    We developed a prototype of a brain-computer interface (BCI) for use with android and the Emotiv Insight commercial EEG headset. The prototype allows the user to control an on-screen cursor in one dimension within an android application using the Emotiv Insight.

    We performed three case studies with one participant in each. The participants had no prior experience with EEG headsets or BCIs. We had them train to use the Emotiv Insight with our BCI prototype. After the training was completed, they performed a series of tests in order to measure their ability to control an on-screen cursor in one dimension. Finally, the participants filled out a questionnaire regarding their subjective experiences of using the Emotiv Insight.

    These case studies showed the inadequacies of the Emotiv Insight. All three participants had issues with training and using the headset. These issues are reflected in our tests, where 44 out of 45 attempts at moving the cursor to a specific area resulted in a failure. All participants also reported fatigue and headaches during the case studies. We also concluded that the Emotiv Insight provides a poor user experience because of fatigue in longer sessions and the amount of work needed to train the headset.

  • 80.
    APPELQVIST, ALBIN
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    TIMALM, DANIEL
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Ljuddesign i andrapersonsperspektivet2017Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With this bachelor thesis we want to explore how the sound design of a game world can be used and adapted to create sound design in the second-person perspective. Since the second-person perspective appears mainly in games, we want to form an idea of what the soundscape can be like from the second-person perspective in audiovisual media. With the help of Michael Pauen's criteria for the first-, second- and third-person perspectives, we build an understanding of the philosophical aspects of the different perspectives. Pauen's criteria for the second-person perspective are based on the distinction and understanding between one's own consciousness and the other person's consciousness. Other important concepts used in our study are syncresis, our ability to connect sound to image, and diegesis, which determines which elements exist within a film's own world. The concepts of diffraction and situated knowledge have helped us use the knowledge we already possess, focus on all aspects of the sound design, and explore different viewpoints on how the second-person perspective can be portrayed. This knowledge is then applied to techniques such as sound recording, sound mapping, foley and mixing. Our experiments with recording and sound design for different views have resulted in an audiovisual work that shows what the second-person perspective can look like and what the soundscape could be. What we have found is that the shift between the protagonist and the second person in the narrative must be made clear in the audiovisual medium, and that it would have been clearer for the viewer to recognize the second-person perspective if it had been portrayed in longer narrative arcs. As explorers, we have become more aware of how the second-person perspective can be portrayed.

  • 81.
    Ardito, Luca
    et al.
    Politecnico di Torino, ITA.
    Coppola, Riccardo
    Politecnico di Torino, ITA.
    Torchiano, Marco
    Politecnico di Torino, ITA.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards automated translation between generations of GUI-based tests for mobile devices2018In: Companion Proceedings for the ISSTA/ECOOP 2018 Workshops, Association for Computing Machinery, Inc , 2018, p. 46-53Conference paper (Refereed)
    Abstract [en]

    Market demands for faster delivery and higher software quality are progressively becoming more stringent. A key hindrance for software companies to meet such demands is how to test the software due to to the intrinsic costs of development, maintenance and evolution of testware. Especially since testware should be defined, and aligned, with all layers of system under test (SUT), including all graphical user interface (GUI) abstraction levels. These levels can be tested with different generations of GUI-based test approaches, where 2nd generation, or Layout-based, tests leverage GUI properties and 3rd generation, or Visual, tests make use of image recognition. The two approaches provide different benefits and drawbacks and are seldom used together because of the aforementioned costs, despite growing academic evidence of the complementary benefits. In this work we propose the proof of concept of a novel two-step translation approach for Android GUI testing that we aim to implement, where a translator first creates a technology independent script with actions and elements of the GUI, and then translates it to a script with the syntax chosen by the user. The approach enables users to translate Layout-based to Visual scripts and vice versa, to gain the benefits (e.g. robustness, speed and ability to emulate the user) of both generations, whilst minimizing the drawbacks (e.g. development and maintenance costs). We outline our approach from a technical perspective, discuss some of the key challenges with the realization of our approach, evaluate the feasibility and the advantages provided by our approach on an open-source Android application, and discuss the potential industrial impact of this work. © 2018 ACM.

  • 82.
    Arlock, Jonatan
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Digitala Hantverk: Kunskapstraditioner i informationssamhället2016Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
  • 83.
    Armon, Negin
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Illustrerad narrativ2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The art of telling stories with images and illustrations has existed for thousands of years. This bachelor thesis explores the subject of illustrated narrative and describes various aspects and principles that are crucial for conveying a message using illustrations. Today, illustrated narrative is associated above all with comic books, picture books and graphic novels that tell a story in the form of a series of images. In the theoretical part of the work I first go through the history of illustrated narrative and then present its modern principles. Moreover, I report on my research into the communicative value of images and the interplay between image and text. In order to transform my theoretical understanding and research into practical form, in the production section of this work I present an illustrated book. In doing so, I explain the creative process of creating illustrated stories as well as the principles that are important for creating balance and harmony in illustrations.

  • 84.
    Arnesson, Andreas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Codename one and PhoneGap, a performance comparison2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Creating smartphone applications for more than one operating system requires knowledge of several programming languages, more code maintenance, higher development costs and longer development time. To make this easier, cross-platform tools (CPTs) exist. But using a CPT can decrease the performance of the application, and applications with low performance are more likely to be uninstalled, which makes developers lose income. There are four main CPT approaches: hybrid, interpreter, web and cross-compiler. Each has different advantages and disadvantages. This study examines the performance difference between two CPTs, Codename One and PhoneGap. The performance measurements CPU load, memory usage, energy consumption, execution time and application size are used to compare the CPTs. Whether cross-compilers have better performance than other CPT approaches is also investigated. An experiment is conducted in which three applications are created with native Android, Codename One and PhoneGap, and performance measurements are taken. A literature study with research from IEEE and Engineering Village is conducted on the different CPT approaches. PhoneGap performed best with the shortest execution time, least energy consumption and least CPU usage, while Codename One had the smallest application size and least memory usage. The available research on CPT performance is sparse and of limited quality. The difference between PhoneGap and Codename One is small except for writing to SQLite. No basis was found for the statement that cross-compilers have better performance than other CPT approaches.
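
    The SQLite finding suggests the flavor of measurement involved. As a hedged illustration only (desktop Python rather than a mobile CPT, so absolute numbers are not comparable to the thesis), one can time a batch of inserts like this:

    import sqlite3, time

    def time_sqlite_writes(n=10_000):
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
        start = time.perf_counter()
        con.executemany("INSERT INTO t VALUES (?, ?)",
                        ((i, "x" * 64) for i in range(n)))
        con.commit()
        return time.perf_counter() - start

    print(f"{time_sqlite_writes():.3f} s for 10k inserts")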

  • 85.
    Arnesson, Andreas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lewenhagen, Kenneth
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison and Prediction of Temporal Hotspot Maps2018Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. To aid law enforcement agencies when coordinating and planning their efforts to prevent crime, there is a need to investigate methods used in such areas. With the help of crime analysis methods, law enforcement are more efficient and pro-active in their work. One analysis method is temporal hotspot maps. The temporal hotspot map is often represented as a matrix with a certain resolution, such as hours and days, if the aim is to show occurrences of hour in correlation to weekday. This thesis includes a software prototype that allows for the comparison, visualization and prediction of temporal data.

    Objectives. This thesis explores if multiprocessing can be utilized to improve execution time for the following two temporal analysis methods, Aoristic and Getis-Ord*. Furthermore, to what extent two temporal hotspot maps can be compared and visualized is researched. Additionally, it was investigated if a naive method could be used to predict temporal hotspot maps accurately. Lastly, this thesis explores how different software packaging methods compare to certain aspects defined in this thesis.

    Methods. An experiment was performed to answer if multiprocessing could improve execution time of Getis-Ord* or Aoristic. To explore how hotspot maps can be compared, a case study was carried out. Another experiment was used to answer if a naive forecasting method can be used to predict temporal hotspot maps. Lastly, a theoretical analysis was executed to extract how different packaging methods work in relation to defined aspects.

    Results. For both Getis-Ord* and Aoristic, the sequential implementations achieved the shortest execution time. The Jaccard measure calculated the similarity most accurately. The naive forecasting method created proved not adequate and a more advanced method is preferred. Forecasting Swedish burglaries with three previous months produced a mean of only 12.1% overlap between hotspots. The Python package method accumulated the highest score of the investigated packaging methods.

    Conclusions. The results showed that multiprocessing, in the language Python, is not beneficial to use for Aoristic and Getis-Ord* due to the high level of overhead. Further, the naive forecasting method did not prove practically useful in predicting temporal hotspot maps.
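
    The Jaccard measure named in the results can be sketched concretely: treat the two hour-by-weekday matrices as sets of hotspot cells and divide the size of their intersection by the size of their union. The thresholding rule below (top decile of cells) is an assumption for illustration, not the thesis's exact procedure.

    import numpy as np

    def jaccard(map_a, map_b, quantile=0.9):
        # Mark the top 10% of cells in each map as hotspots, then compare sets.
        hot_a = map_a >= np.quantile(map_a, quantile)
        hot_b = map_b >= np.quantile(map_b, quantile)
        intersection = np.logical_and(hot_a, hot_b).sum()
        union = np.logical_or(hot_a, hot_b).sum()
        return intersection / union if union else 1.0

    rng = np.random.default_rng(0)
    a = rng.poisson(3, size=(24, 7))  # 24 hours x 7 weekdays of event counts
    b = rng.poisson(3, size=(24, 7))
    print(f"Jaccard overlap: {jaccard(a, b):.2f}")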

  • 86.
    Arvola Bjelkesten, Kim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Feasibility of Point Grid Room First Structure Generation: A bottom-up approach2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Procedural generation becomes increasingly important for video games in an age where the scope of the required content demands both a lot of time and work. One of the fronts of this field is structure generation, where algorithms create models for the game developers to use. Objectives. This study aims to explore the feasibility of the bottom-up approach within the field of structure generation for video games. Methods. An algorithm using the bottom-up approach, PGRFSG, was developed, and a user study was used to validate the results. Each participant evaluated five structures, giving each a score based on whether it belongs in a video game. Results. The participants' evaluations show that among the structures generated were some that definitely belonged in a video game world. Two of the five structures got a high score, though for one structure that was deemed not to be the case. Conclusions. Based on the results presented, a conclusion can be made that the PGRFSG algorithm creates structures that belong in a video game world and that the bottom-up approach is a suitable one for structure generation.

  • 87.
    Asif, Sajjad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating Web Size Metrics for Early Web Cost Estimation2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context Web engineering is a new research field which applies engineering principles to produce quality web applications. Web applications have become more complex with the passage of time, and it is quite difficult to analyze web metrics for estimation given the wide range of web applications. Correct estimates of web development effort play a very important role in the success of large-scale web development projects.

    Objectives In this study I investigated the size metrics and cost drivers used by web companies for early web cost estimation. I also aimed to validate them through industrial interviews and a web quote form, designed from the most frequently occurring metrics after analyzing different companies. Secondly, this research revisits previous work done by Mendes (a senior researcher and contributor in this research area) to validate whether early web cost estimation trends have remained the same or changed. The ultimate goal is to help companies with web cost estimation.

    Methods The first research question is answered by conducting an online survey of 212 web companies and examining their web predictor forms (quote forms). All companies included in the survey used web forms to give quotes on web development projects based on gathered size and cost measures. The second research question is answered by extracting the most frequently occurring size metrics from the results of Survey 1. The list of size metrics is validated by two methods: (i) industrial interviews conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form designed using the validated results from the industrial interviews and sent to web companies around the world to seek data on real web projects. Data gathered from the web projects is analyzed using a CBR (case-based reasoning) tool, and the results are validated against the industrial interview results along with Survey 1. The final results are compared with the earlier research to answer the third research question: whether size metrics have changed. All research findings are contributed to the Tukutuku research benchmark project.

    Results “Number of pages/features” and “responsive implementation” are the top web size metrics for early web cost estimation.

    Conclusions. This research investigated metrics which can be used for early web cost estimation at the early stage of web application development. This is the stage where the application is not built yet, but requirements are being collected and an expected cost estimate is being evaluated. A list of new metric variables is presented which can be added to the Tukutuku project.
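
    A minimal sketch of the case-based reasoning step, under invented data and features (the thesis used the Tukutuku variables): estimate a new project's effort as the mean effort of its k most similar past cases.

    import numpy as np

    # past projects: (number of pages/features, responsive? 1/0) -> effort hours
    cases = np.array([[10, 0], [25, 1], [40, 1], [60, 0], [80, 1]], dtype=float)
    efforts = np.array([40.0, 120.0, 180.0, 220.0, 350.0])

    def estimate(new_case, k=2):
        scaled = (cases - cases.mean(0)) / cases.std(0)   # normalize features
        target = (np.asarray(new_case, float) - cases.mean(0)) / cases.std(0)
        dist = np.linalg.norm(scaled - target, axis=1)    # distance to each case
        nearest = np.argsort(dist)[:k]
        return efforts[nearest].mean()                    # mean of k analogues

    print(f"Estimated effort: {estimate([30, 1]):.0f} hours")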

  • 88.
    Asklund, Ulf
    et al.
    Lund University, SWE.
    Höst, Martin
    Lund University, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Monitoring Effect of Architectural Changes2016In: Software Quality.: The Future of Systems- and Software Development / [ed] Winkler, Dietmar, Biffl, Stefan, Bergsmann, Johannes, 2016, p. 97-108Conference paper (Refereed)
    Abstract [en]

    A common situation is that an initial architecture has been sufficient in the initial phases of a project, but when the size and complexity of the product increase, the architecture must be changed. In this paper, experiences are presented from changing an architecture into independent units, providing basic reuse of main functionality while giving higher priority to independence than to reuse. An objective was also to introduce metrics in order to monitor the architectural changes. The change was studied in a case study through weekly meetings with the team, collected metrics, and questionnaires. The new architecture was well received by the development team, who found it to be less fragile. Concerning the metrics for monitoring, it was concluded that a high abstraction level was useful for the purpose.

  • 89.
    Askwall, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Utvärderingsmetod Säkerhetskultur: Ett första steg i en valideringsprocess2013Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Companies today invest a great deal of money in securing their physical and logical assets with technical protection mechanisms. However, all security ultimately depends in some way on the judgment and knowledge of the individual. How can an organization determine that it can trust the individual's judgment and knowledge? How can it determine whether it has a good security culture? By evaluating the security culture, organizations can gain better input for their risk management work and a better ability to handle threats to the organization's assets. Existing research in the field of security culture disagrees both on what constitutes a good security culture and, above all, on how the culture should be evaluated. This research effort is thus an attempt to develop an intuitive evaluation method that organizations can use to evaluate their security culture. The evaluation method resembles a gap analysis in which an organization's desired culture is established and data is collected through a questionnaire survey. The data is compiled and used to create an index of the prevailing culture in comparison with the desired culture. In this initial attempt, the reliability of the survey is tested using Cronbach's alpha, and the validity is tested through a form of confirmatory factor analysis. The results show how an index representing an organization's security culture is created. Good reliability of the evaluation method can be demonstrated, and the author finds good arguments for the usefulness of such a method in proactive security work. However, circumstances have made it very difficult to demonstrate good validity in this initial study.
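
    The Cronbach's alpha reliability check mentioned above can be computed directly from the questionnaire matrix; a minimal sketch (with made-up answers) follows.

    import numpy as np

    def cronbach_alpha(items):
        # rows = respondents, columns = questionnaire items
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    answers = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5]])
    print(f"alpha = {cronbach_alpha(answers):.2f}")  # > 0.7 is often deemed acceptable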

  • 90. Astor, Philipp
    et al.
    Adam, Marc
    Jerčić, Petar
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Schaaff, Kristina
    Weinhardt, Christof
    Integrating biosignals into information systems: A NeuroIS tool for improving emotion regulation2013In: Journal of Management Information Systems, ISSN 0742-1222, E-ISSN 1557-928X, Vol. 30, no 3, p. 247-277Article in journal (Refereed)
    Abstract [en]

    Traders and investors are aware that emotional processes can have material consequences on their financial decision performance. However, typical learning approaches for debiasing fail to overcome emotionally driven financial dispositions, mostly because of subjects' limited capacity for self-monitoring. Our research aims at improving decision makers' performance by (1) boosting their awareness of their emotional state and (2) improving their skills for effective emotion regulation. To that end, we designed and implemented a serious game-based NeuroIS tool that continuously displays the player's individual emotional state, via biofeedback, and adapts the difficulty of the decision environment to this emotional state. The design artifact was then evaluated in two laboratory experiments. Taken together, our study demonstrates how information systems design science research can contribute to improving financial decision making by integrating physiological data into information technology artifacts. Moreover, we provide specific design guidelines for how biofeedback can be integrated into information systems.

  • 91.
    Atchukatla, Mahammad suhail
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Algorithms for efficient VM placement in data centers: Cloud Based Design and Performance Analysis2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Recent trends show that cloud computing adoption is continuously increasing in every organization. Demand for cloud datacenters has therefore increased tremendously over time, resulting in significantly higher resource utilization in the datacenters. In this thesis work, research was carried out on optimizing the energy consumption of a datacenter by packing virtual machines. The CloudSim simulator was used for evaluating bin-packing algorithms, and for the practical implementation the OpenStack cloud computing environment was chosen as the platform for this research.

    Objectives: In this research, our objectives are as follows:

    • Perform simulation of the algorithms in the CloudSim simulator.
    • Estimate and compare the energy consumption of different packing algorithms.
    • Design an OpenStack testbed to implement a bin-packing algorithm.

    Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit and Enhanced Best Fit algorithms. We design a heuristic model for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in the OpenStack environment.

    Results: In most cases, the Enhanced Best Fit algorithm gives the best results. Results are obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work. The comparison of results indicates that the total energy consumption of the datacenter is reduced without affecting potential service level agreements.

    Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and minimize the energy consumption of the physical machines by shutting down unused physical machines. The results indicate that CPU utilization does not vary much when live migration of virtual machines is performed.
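
    As a hedged illustration of the bin-packing idea behind such placement (a one-dimensional toy, not the thesis's heuristic), First Fit Decreasing sorts VMs by demand and places each on the first host with room, so fewer hosts need to stay powered on:

    def first_fit_decreasing(vm_demands, host_capacity):
        hosts = []  # each host is a list of placed VM demands
        for demand in sorted(vm_demands, reverse=True):
            for host in hosts:
                if sum(host) + demand <= host_capacity:
                    host.append(demand)
                    break
            else:
                hosts.append([demand])  # power on a new host
        return hosts

    placement = first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.3, 0.6], host_capacity=1.0)
    print(f"{len(placement)} hosts used: {placement}")  # 3 hosts for this input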

  • 92.
    Atla, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Power Profiling of different Heterogeneous Computers2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: In the present world, there is an increase in the usage of communication services. The growth in usage and in services relying on the communication network has brought an increase in energy consumption for all the resources involved, such as computers and other networking components. Energy consumption has become another efficiency metric, so there is a need for efficient networking services in various fields, which can be obtained by using efficient networking components like computers. For that purpose we have to know the energy usage behavior of each component. Similarly, with the growth in the use of large datacenters there is a huge requirement for computational resources. For efficient use of these resources we need to measure each component of the system and its contribution to the total power consumption of the system. This can be achieved by power profiling of different heterogeneous computers, for estimating and optimizing the usage of the resources.

    Objectives: In this study, we investigate the power profiles of different heterogeneous computers at the component level using a predefined workload. The total power consumption of each system component is measured and evaluated using the Open Energy Monitor (OEM). Methods: In order to perform the power profiling, an experimental test bed is implemented. Experiments with different workloads on each component are conducted on all the computers. The power for each system under test is measured using the OEM, which is connected to each system under test (SUT).

    Results: From the results obtained, the power profiles of the different SUTs are tabulated and analyzed. The profiling is done at component level under different workload scenarios for four different heterogeneous computers. From the results and analysis it can be stated that there is a variation in the power consumed by each component of a computer based on its configuration. From the results we evaluate the superposition property of component power consumption.
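
    The superposition check can be illustrated with a toy calculation: the idle baseline plus the per-component power deltas should approximate the total measured under combined load. All numbers below are invented, not measurements from the thesis.

    idle = 20.0                                      # watts, system at rest
    delta = {"cpu": 35.0, "disk": 6.0, "net": 4.0}   # extra watts per stressed component

    predicted_combined = idle + sum(delta.values())
    measured_combined = 63.0                         # hypothetical meter reading

    error = abs(predicted_combined - measured_combined) / measured_combined
    print(f"predicted {predicted_combined:.1f} W, measured {measured_combined:.1f} W "
          f"({error:.1%} deviation)")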

  • 93. Avritzer, Alberto
    et al.
    Beecham, Sarah
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kroll, Josiane
    Menaché, Daniel
    Noll, John
    Paasivaara, Maria
    Extending Survivability Models for Global Software Development with Media Synchronicity Theory2015In: Proceeding of the IEEE 10th International Conference on Global Software Engineering, IEEE Communications Society, 2015, p. 23-32Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new framework to assess the survivability of software projects, accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework an assessment of how inadequate conveyance and convergence in the communication infrastructure selected by the project impact the system's ability to recover from project disasters. We propose an analytical model to assess how a project recovers from disasters related to process and communication failures. Our model is based on media synchronicity theory to account for how information exchange impacts recovery. Then, using the proposed model, we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

  • 94.
    Avutu, Neeraj
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Evaluation of MongoDB on AWS and OpenStack2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 95.
    Axelsson, Arvid
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Light Field Coding Using Panoramic Projection2014Student thesis
    Abstract [en]

    A new generation of 3d displays provides depth perception without the need for glasses and allows the viewer to see content from many different directions. Providing video for these displays requires capturing the scene with several cameras at different viewpoints, the data from which together forms light field video. Encoding such video with existing video coding requires a large amount of data, and it increases quickly with the number of views, which this application needs to be high. One such coding is the multiview extension of High Efficiency Video Coding (mv-hevc), which encodes a number of similar video streams as different layers. A new coding scheme for light field video, called Panoramic Light Field (plf), is implemented and evaluated in this thesis. The main idea behind the coding is to project all points in a scene that are visible from any of the viewpoints to a single, global view, similar to how texture mapping maps a texture onto a 3d model in computer graphics. Whereas objects ordinarily shift position in the frame as the camera position changes, this is not the case when using this projection: a visible point in space is projected to the same image pixel regardless of viewpoint, resulting in large similarities between images from different viewpoints. The similarity between the layers in light field video helps to achieve more efficient compression when the projection is combined with existing multiview coding. In order to evaluate the scheme, 3d content was created and software was developed to encode it using plf. Video using this coding is compared to existing technology: a straightforward encoding of the views using mv-hevc. The results show that the plf coding performs better on the sample content at lower quality levels, while it is worse at higher bitrates due to quality loss from the projection procedure. It is concluded that plf is a promising technology, and suggestions are given for future research that may improve its performance further.
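
    The fixed-pixel property can be sketched with a concrete projection. Assuming an equirectangular panorama (the thesis's exact projection may differ; this is an illustrative stand-in), a world-space point maps to the same pixel no matter which camera observes it:

    import math

    def to_panorama(x, y, z, width=2048, height=1024):
        # Map a world-space point to equirectangular pixel coordinates.
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.atan2(x, z)      # longitude, -pi..pi
        phi = math.asin(y / r)        # latitude, -pi/2..pi/2
        u = (theta / (2 * math.pi) + 0.5) * width
        v = (0.5 - phi / math.pi) * height
        return int(u), int(v)

    # The pixel depends only on the point's world position, not on any viewpoint.
    print(to_panorama(1.0, 0.5, 2.0))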

  • 96.
    Axelsson, Erika
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Hjältarna vi ser: Hur barn blir påverkade av media2018Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The theme of this thesis is to investigate how women are portrayed as superheroes and whether there are any differences between female and male heroes. The thesis also examines how children are influenced by media, how they analyze what they see, and how they express themselves in pictures. The method consists partly of a workshop with third-grade pupils to investigate which superheroes spontaneously came to the children's minds. The children then painted their own superheroes, and the paintings were analyzed to see whether boys and girls painted their superheroes differently. The children's drawings did differ; one example was that body build differed between the boys' and the girls' paintings. The second part of the method is an analysis of four films using the Bechdel test to investigate how, and whether, women's roles have changed over time. It was not always self-evident that a film would pass. Today there are differences in how men and women are portrayed in superhero films. This is noticeable partly in their powers and partly in how they learn to master them. Although women are given more and more space, men make up the largest part of the films, as men usually have the leading role.

  • 97.
    Axelsson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparison of user accuracy and speed when performing 3D game target practice using a computer monitor and virtual reality headset2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Consumer grade Virtual Reality (VR) headsets are on the rise, and with them comes an increasing number of digital games which support VR. How players perceive the gameplay and how well they perform at the game's tasks can be key factors in designing new games.

    This master's thesis aims to evaluate whether a user can perform a game task, specifically target practice, in less time and/or more accurately when using a VR headset as opposed to a computer screen and mouse. To gather statistics and measure the differences, an experiment was conducted using a test application developed alongside this report. The experiment recorded accuracy scores and time taken in tests performed by 35 test participants using both a VR headset and a computer screen.

    The resulting data sets are presented in the results chapter of this report. A Kolmogorov-Smirnov normality test and Student's paired samples t-test were performed on the data to establish its statistical significance. After analysis, the results are reviewed, discussed and conclusions are made.

    This study concludes that when performing the experiment, the use of a VR headset decreased the user's accuracy and, to a lesser extent, also increased the time the user took to hit all targets. An argument was made that most users' longer previous experience with a computer screen and mouse gave this method an unfair advantage; with equally long training, VR use might achieve similar results.
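
    The statistical procedure described is standard and can be sketched with SciPy on invented per-participant scores (the real data came from the 35 participants):

    import numpy as np
    from scipy import stats

    monitor = np.array([12.1, 10.4, 11.8, 9.9, 13.0, 10.7])  # e.g. seconds per run
    vr      = np.array([13.5, 11.9, 12.2, 11.4, 14.1, 12.0])

    diff = vr - monitor
    # Kolmogorov-Smirnov test of the standardized differences against a normal
    print(stats.kstest((diff - diff.mean()) / diff.std(ddof=1), "norm"))
    # Student's paired samples t-test on the two conditions
    print(stats.ttest_rel(vr, monitor))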

  • 98.
    Ayyagari, Nitin Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Databases For Mediation Systems: Design and Data scaling approach2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: There is continuous growth in data generation due to the wide usage of modern communication systems. Systems have to be designed which can handle the processing of these data volumes efficiently. Mediation systems are meant to serve this purpose, and databases form an integral part of them. The suitability of databases for such systems is the principal theme of this work.

    Objectives: The objective of this thesis is to identify the key requirements for databases that can be used as part of mediation systems, to gain a thorough understanding of their various features and the data models commonly used, and to benchmark their performance.

    Methods: Previous work carried out on various databases is studied as part of a literature review. A test bed is set up as part of an experiment, and performance metrics such as throughput and total time taken are measured through a Java-based client. Thorough analysis is carried out by varying parameters such as data volume and the number of threads in the client.

    Results: Cassandra has very good write performance for event and batch operations. Cassandra also has slightly better read performance than MySQL Cluster, but this difference fades with a smaller number of threads in the client.

    Conclusions: On evaluating MySQL Cluster and Cassandra, we conclude that they have several features suitable for mediation systems. On the other hand, Cassandra does not guarantee ACID transactions, while MySQL Cluster has good support for them. Further evaluation is needed of new-generation databases which are not yet mature.
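
    The core of such a benchmarking client reduces to timing a batch of operations and deriving throughput. The sketch below uses a stand-in write function; the thesis used a Java client against Cassandra and MySQL Cluster, so this is illustrative only.

    import time

    def measure_throughput(write, n_ops=10_000):
        # Time n_ops calls to the supplied write function.
        start = time.perf_counter()
        for i in range(n_ops):
            write(i)
        elapsed = time.perf_counter() - start
        return n_ops / elapsed, elapsed

    store = {}  # stand-in for a real database session
    ops_per_s, total = measure_throughput(lambda i: store.__setitem__(i, "event"))
    print(f"{ops_per_s:,.0f} ops/s, total time {total:.2f} s")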

  • 99. Baca, Dejan
    et al.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    A Novel Security-Enhanced Agile Software Development Process Applied in an Industrial Setting2015In: Proceedings 10th International Conference on Availability, Reliability and Security ARES 2015, IEEE Computer Society Digital Library, 2015Conference paper (Refereed)
    Abstract [en]

    A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., the baseline process of the comparison outlined in this paper, required 2.7 employee hours for every risk identified in the analysis process, compared to, on average, 1.5 hours for SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, a more than five-fold increase. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure for security was 1% in the previous development process.

  • 100.
    Bachu, Rajesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A framework to migrate and replicate VMware Virtual Machines to Amazon Elastic Compute Cloud: Performance comparison between on premise and the migrated Virtual Machine2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context Cloud computing is the new trend in the IT industry. Traditionally, obtaining servers was quite time-consuming for companies: the whole process of researching what kind of hardware to buy, getting budget approval, purchasing the hardware and getting access to the servers could take weeks or months. In order to save time and reduce expenses, most companies are moving towards the cloud. One of the well-known cloud providers is Amazon Elastic Compute Cloud (EC2). Amazon EC2 makes it easy for companies to obtain virtual servers (known as compute instances) in a cloud quickly and inexpensively. Another advantage of Amazon EC2 is the flexibility it offers, so companies can even import/export the Virtual Machines (VMs) that they have built which meet their IT security, configuration, management and compliance requirements into Amazon EC2.

    Objectives In this thesis, we investigate importing a VM running on VMware into Amazon EC2. In addition, we make a performance comparison between a VM running on VMware and a VM with the same image running on Amazon EC2.

    Methods Case study research was done to select a persistent method to migrate VMware VMs to Amazon EC2. In addition, an experiment was conducted to measure the performance of a Virtual Machine running on VMware and compare it with the same Virtual Machine running on EC2. We measure the performance in terms of CPU and memory utilization as well as disk read/write speed, using well-known open-source benchmarks from the Phoronix Test Suite (PTS).

    Results The import of VM snapshots (VMDK, VHD and RAW formats) to EC2 was investigated using three methods provided by AWS. Performance was compared by running each benchmark 25 times on each Virtual Machine.

    Conclusions Importing a VM to EC2 was successful only with the RAW format, and replication was not successful as AWS installs some software and drivers while importing the VM to EC2. The migrated EC2 VM performs better than the on-premise VMware VM in terms of CPU and memory utilization and disk read/write speed.
