  • 1.
    ABBAS, FAHEEM
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Intelligent Container Stacking System at Seaport Container Terminal (2016). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context: The workload at seaport container terminals is increasing gradually, so terminal performance must improve to meet demand. The key section of a container terminal is the container stacking yard, which is an integral part of both the seaside and the landside, so its performance affects both. The main problem in this area is unproductive moves of containers. A well-planned stacking area is therefore needed to increase terminal performance and maximize utilization of existing resources.

    Objectives: In this work, we analyzed the existing container stacking system at the Helsingborg seaport container terminal, Sweden, investigated previously proposed solutions to the problem, and identified the optimization technique best suited to finding a good solution. We then suggested a solution, tested it, and analyzed the simulation-based results against the desired outcome.

    Methods: To identify the problem, methods, and proposed solutions in the domain of container stacking yard management, a literature review was conducted using several e-resources/databases. A genetic algorithm (GA) with the best parametric values was used to obtain the best optimized solution. A discrete event simulation model for container stacking in the yard was built and integrated with the genetic algorithm. A mathematical model was proposed to show how cost minimization depends on the number of container moves.

    Results: The GA achieved a high fitness value over successive generations for storing 150 containers at the best locations in a block with 3 tier levels while minimizing unproductive moves in the yard. A comparison between the genetic algorithm and Tabu Search was made to verify whether the GA performed better. A simulation model integrated with the GA produced simulation-based results showing container handling with resources such as AGVs, yard cranes, and delivery trucks, together with the container stacking and retrieval system in the yard. The mathematical model showed that container stacking cost is directly proportional to the number of moves.

    Conclusions: We identified the key factor (unproductive moves) that underlies the other key factors (time and cost) and affects the performance of the stacking yard and of the seaport terminal as a whole. We focused on this drawback of the stacking system and proposed a solution that makes the system more efficient, saving both time and cost. A genetic algorithm is a good approach for solving the unproductive-moves problem in container stacking systems.
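
    As a minimal illustration of the cost relation and search loop described above: the sketch below assumes containers are identified by departure order and uses random re-stacking in place of the thesis's actual crossover and mutation operators; all parameter values (50 stacks, 3 tiers, population 30) are illustrative, not the thesis's tuned settings.

        import random

        COST_PER_MOVE = 1.0  # stacking cost is proportional to moves: C = k * m

        def unproductive_moves(plan):
            # A move is unproductive when a container that departs earlier is
            # buried beneath one that departs later (a reshuffle is needed).
            moves = 0
            for stack in plan:
                for tier, departure in enumerate(stack):
                    moves += sum(1 for above in stack[tier + 1:] if above > departure)
            return moves

        def fitness(plan):
            return 1.0 / (1.0 + COST_PER_MOVE * unproductive_moves(plan))

        def random_plan(departures, stacks=50, tiers=3):
            plan = [[] for _ in range(stacks)]
            for d in departures:
                random.choice([s for s in plan if len(s) < tiers]).append(d)
            return plan

        def evolve(departures, pop_size=30, generations=200):
            population = [random_plan(departures) for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                survivors = population[: pop_size // 2]
                population = survivors + [random_plan(departures) for _ in survivors]
            return max(population, key=fitness)

        best = evolve(list(range(150)))  # 150 containers in a 3-tier block, as above
        print(unproductive_moves(best))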

  • 2.
    Abbireddy, Sharath
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A Model for Capacity Planning in Cassandra: Case Study on Ericsson’s Voucher System (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Cassandra is a NoSQL (Not only Structured Query Language) database that serves large amounts of data with high availability. Cassandra data storage dimensioning, also known as Cassandra capacity planning, refers to predicting the amount of disk storage required when a particular product is deployed using Cassandra. This is an important phase in any product development lifecycle involving a Cassandra data storage system. Capacity planning is based on many factors, which are classified as Cassandra-specific and product-specific. This study identifies the different Cassandra-specific and product-specific factors affecting disk space in a Cassandra data storage system. Based on these factors, a model is built to predict the disk storage for Ericsson’s voucher system. A case study was conducted on Ericsson’s voucher system and its Cassandra cluster. Interviews were conducted with different Cassandra users within Ericsson R&D to learn their opinions on capacity planning approaches and the factors affecting disk space in Cassandra. Responses from the interviews were transcribed and analyzed using grounded theory. A total of 9 Cassandra-specific factors and 3 product-specific factors were identified and documented. Using these 12 factors, a model was built and used to predict the disk space required for the voucher system’s Cassandra cluster. The factors affecting disk space when deploying Cassandra are now exhaustively identified, which makes the capacity planning process more efficient. Using these factors, the voucher system’s disk space for deployment was predicted successfully.
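
    A minimal sketch of a linear disk-capacity model of the kind the thesis builds; the inputs below (replication factor, compaction overhead, snapshot reserve, index overhead) are common Cassandra sizing considerations used here as assumptions, not the 9 + 3 factors the thesis documents.

        def estimate_disk_bytes(records, avg_record_bytes,
                                replication_factor=3,
                                compaction_overhead=0.5,  # spare space for compaction
                                snapshot_reserve=0.2,     # space held for snapshots
                                index_overhead=0.1):      # secondary index growth
            raw = records * avg_record_bytes
            replicated = raw * replication_factor
            return replicated * (1 + compaction_overhead
                                 + snapshot_reserve
                                 + index_overhead)

        # e.g. 100 million vouchers of roughly 200 bytes each
        print(estimate_disk_bytes(100_000_000, 200) / 1e9, "GB")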

  • 3.
    Abdelraheem, Mohamed Ahmed
    et al.
    SICS Swedish ICT AB, SWE.
    Gehrmann, Christian
    SICS Swedish ICT AB, SWE.
    Lindström, Malin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nordahl, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Executing Boolean queries on an encrypted Bitmap index (2016). In: CCSW 2016 - Proceedings of the 2016 ACM Cloud Computing Security Workshop, co-located with CCS 2016, Association for Computing Machinery (ACM), 2016, pp. 11-22. Conference paper (Refereed).
    Abstract [en]

    We propose a simple and efficient searchable symmetric encryption scheme based on a Bitmap index that evaluates Boolean queries. Our scheme provides a practical solution in settings where communications and computations are very constrained as it offers a suitable trade-off between privacy and performance.
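
    A minimal sketch of the plaintext half of the idea: a Bitmap index evaluates Boolean queries with bitwise operations over per-value bitmaps. The paper's symmetric-encryption layer is deliberately omitted, and the attribute names are invented for illustration.

        # One bit per record, one bitmap per attribute value.
        index = {
            "city=Karlskrona": 0b10110,
            "year=2016":       0b01110,
            "type=paper":      0b11010,
        }

        def query_and(*terms):
            result = -1                 # all bits set
            for t in terms:
                result &= index[t]      # Boolean AND is bitwise AND over bitmaps
            return result

        def query_or(*terms):
            result = 0
            for t in terms:
                result |= index[t]      # Boolean OR is bitwise OR
            return result

        hits = query_and("city=Karlskrona", "year=2016")
        print([i for i in range(5) if hits >> i & 1])  # matching record positions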

  • 4.
    Abdelrasoul, Nader
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Optimization Techniques For an Artificial Potential Fields Racing Car Controller (2013). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Building autonomous racing car controllers is a growing field of computer science which has been receiving great attention lately. An approach named Artificial Potential Fields (APF) is widely used for path finding and obstacle avoidance in robotics and vehicle motion control systems. The use of APF results in a collision-free path; it can also be used to achieve other goals such as overtaking and maneuverability. Objectives. The aim of this thesis is to build an autonomous racing car controller that achieves good performance in terms of speed, time, and damage level. To fulfill this aim we need optimality in the controller's choices, because racing requires the highest possible performance, and we need to build the controller using algorithms that do not incur high computational overhead. Methods. We used Particle Swarm Optimization (PSO) in combination with APF to achieve optimal car control. The Open Racing Car Simulator (TORCS) was used as a testbed for the proposed controller; we conducted two experiments with different configurations to test the performance of our APF-PSO controller. Results. The obtained results showed that the APF-PSO controller performed well compared to top-performing controllers. The results also showed that the use of PSO enhanced performance compared to using APF alone. High performance was demonstrated in solo driving and in racing competitions, with the exception of an increased level of damage; however, the damage level was not very high and did not result in a controller shutdown. Conclusions. Based on the obtained results we conclude that the use of PSO with APF yields high performance at low computational cost.
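
    A minimal sketch of the attractive/repulsive force computation at the core of APF; the gains and influence radius below are illustrative assumptions, standing in for the values the thesis tunes with PSO.

        import math

        def apf_force(car, goal, obstacles, k_att=1.0, k_rep=100.0, influence=20.0):
            # Attractive component pulls the car toward the goal.
            fx = k_att * (goal[0] - car[0])
            fy = k_att * (goal[1] - car[1])
            # Repulsive components push away from obstacles within range.
            for ox, oy in obstacles:
                dx, dy = car[0] - ox, car[1] - oy
                d = math.hypot(dx, dy)
                if 0 < d < influence:
                    rep = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
                    fx += rep * dx / d
                    fy += rep * dy / d
            return fx, fy  # steer/accelerate along this resultant vector

        print(apf_force(car=(0, 0), goal=(100, 0), obstacles=[(10, 2)]))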

  • 5.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    García Martín, Eva
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Johansson, Christian
    NODA Intelligent Systems AB, SWE.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Trend analysis to automatically identify heat program changes (2017). In: Energy Procedia, Elsevier, 2017, Vol. 116, pp. 407-415. Conference paper (Refereed).
    Abstract [en]

    The aim of this study is to improve the monitoring and controlling of heating systems located at customer buildings through the use of a decision support system. To achieve this, the proposed system applies a two-step classifier to detect manual changes of the temperature of the heating system. We use data from the Swedish company NODA, active in energy optimization and services for energy efficiency, to train and test the suggested system. The decision support system is evaluated through an experiment and the results are validated by experts at NODA. The results show that the decision support system can detect changes within three days of their occurrence while considering only daily average measurements.
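
    A minimal sketch of a two-step detector over daily averages; the outlier-then-persistence logic is an illustrative assumption, not the paper's actual classifier, though it mirrors the reported detection within three days.

        def detect_changes(daily_avg, window=14, threshold=2.0, persist=3):
            # Step 1: flag days deviating from the recent baseline.
            flags = []
            for i in range(window, len(daily_avg)):
                base = daily_avg[i - window:i]
                mean = sum(base) / window
                std = (sum((x - mean) ** 2 for x in base) / window) ** 0.5 or 1e-9
                flags.append(abs(daily_avg[i] - mean) / std > threshold)
            # Step 2: keep only deviations that persist for three days.
            return [i + window for i in range(len(flags) - persist + 1)
                    if all(flags[i:i + persist])]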

  • 6.
    Abheeshta, Putta
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden (2016). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. System Development Methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency of usage of development practices and the acceptance factors for adopting a development methodology are crucial for software organisations. These acceptance factors and development practices differ across geographical locations. Many challenges have been reported in the literature with respect to mismatched development practices when collaborating across organisations in distributed development. However, little research has addressed the differences in development practices and in the acceptance factors for adopting a particular development methodology. Objectives. The primary objectives of the research are to find out a) differences in (i) practice usage and (ii) acceptance factors such as organisational, social and cultural factors, and b) to explore the reasons for the differences and investigate the consequences of such differences when collaborating, across organisations located in India and Sweden. Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find out the usage frequency of development practices and the acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for and consequences of the differences. Literature evidence was used to support the results collected from the interviews. Results. From the survey, organisations in India adopted plan-driven practices at a higher frequency than in Sweden, whereas agile practices were adopted at a higher frequency in Sweden than in India. The number of organisations adopting "pure agile" methodologies was significantly higher in Sweden. Significant differences were found across acceptance factors such as cultural, organisational, image and career factors between India and Sweden. Cultural, social, human, business and organisational factors are responsible for these differences across development practices and acceptance factors. Challenges related to communication, coordination and control were found to result from the differences when collaborating between Indian and Swedish sites. Conclusions. The study signifies the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations. A mismatch between these practices will lead to various challenges. The study draws insights into various non-technical factors such as cultural, human, organisational, business and social factors at play when collaborating between organisations; variations across these factors will lead to many coordination, communication and control issues. Keywords: Development Practices, Agile Development, Plan Driven Development, Acceptance Factors, Global Software Development.

  • 7.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radio Electronics, NioGuard Security Lab, Kharkiv, Ukraine.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    A Sandboxing Method to Protect Cloud Cyberspace (2015). In: Proceedings of 2015 IEEE East-West Design & Test Symposium (EWDTS), IEEE Communications Society, 2015. Conference paper (Refereed).
    Abstract [en]

    This paper addresses the problem of protecting cloud environments against targeted attacks, which have become a popular means of gaining access to an organization's confidential information and to the resources of cloud providers. In 2015 alone, eleven targeted attacks were discovered by Kaspersky Lab; one of them, Duqu2, successfully attacked the Lab itself. In this context, security researchers show rising concern about protecting corporate networks and the cloud infrastructure used by large organizations against such attacks. This article describes the possibility of applying a sandboxing method within a cloud environment to strengthen the security perimeter of the cloud.

  • 8.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radio Electronics, UKR.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cloud incident response model (2016). In: Proceedings of 2016 IEEE East-West Design and Test Symposium, EWDTS 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016. Conference paper (Refereed).
    Abstract [en]

    This paper addresses the problem of incident response in clouds. A conventional incident response model is formulated to serve as the basis for a cloud incident response model. Minimization of incident handling time is considered the key criterion of the proposed cloud incident response model; it can be achieved at the expense of embedding redundancy into the cloud infrastructure, represented by Network and Security Controllers, and introducing a Security Domain for threat analysis and cloud forensics. These architectural changes are discussed and applied within the cloud incident response model. © 2016 IEEE.

  • 9.
    Adapa, Sasank Sai Sujan
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    APPLYING LEAN PRINCIPLES FOR PERFORMANCE ORIENTED SERVICE DESIGN OF VIRTUAL NETWORK FUNCTIONS FOR NFV INFRASTRUCTURE: Concepts of Lean (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Network Function Virtualization (NFV) was recently proposed by the European Telecommunications Standards Institute (ETSI) to improve network service flexibility by virtualizing the network services and applications that run on hardware. To virtualize network functions, the software is decoupled from the underlying physical hardware. NFV aims to transform industries by reducing capital investment in hardware through the use of commercial off-the-shelf (COTS) hardware, and it enables rapid, innovative growth in telecom services through software-based service deployment.

    Objectives. This thesis work aims to investigate how business organizations function and what the roles are in defining a service relationship model. The work also aims to define a service relationship model and to validate it via a proof of concept (PoC) using network function virtualization as a service. Finally, we apply lean principles to the defined service relationship model to reduce waste and investigate how lean helps the model prove performance-service oriented.

    Methods. The essence of this work is to make a business organization lean by investigating its actions and applying lean principles. To elaborate, this thesis work involves a review of papers from IEEE, TMF, IETF and Ericsson. It results in the modelling of a PoC by following a requirement analysis methodology and by applying lean principles to eliminate unnecessary processes that do not add value.

    Results. The results of the work include a full-fledged service relationship model comprising three service levels, with roles that fit the requirement specifications of an NFV infrastructure. The results also show the service levels' functionalities and the relationships between the roles. The services that need to be standardized are defined with a syntax for describing network functions. Lean principles benefit the service relationship model by reducing waste factors, thereby yielding a PoC that is performance-service oriented.

    Conclusions. We conclude that the defined roles fit the designed service relationship model. Moreover, the model can sustain the flow of service by standardizing the sub-services and reducing waste as interpreted through lean principles, and further use-case proof of the model in full-scale industry trials is needed. We also conclude that the syntax for describing network functions should follow lean principles, which are essential for sub-service standardization. The PoC defined here can serve as an assurance for the NFV infrastructure.

  • 10.
    Addu, Raj Kiran
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Potuvardanam, Vinod Kumar
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Effect of Codec Performance on Video QoE for videos encoded with Xvid, H.264 and WebM/VP8 (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    In recent years, there has been significant growth in multimedia services such as mobile video streaming, Video-on-Demand and video conferencing. This has led to the development of various video coding techniques aiming to deliver high quality video while using the available bandwidth efficiently. This upsurge in the usage of video applications has also made end users more quality-conscious. In order to meet users’ expectations, Quality of Experience (QoE) studies have gained utmost importance from both researchers and service providers. This thesis compares the performance of the H.264/AVC, Xvid and WebM/VP8 video codecs in wired and wireless networks. Codec performance is evaluated for different packet loss and delay variation values, using both subjective and objective assessment methods. In the subjective assessment method, video codec performance is evaluated using the ITU-T recommended Absolute Category Rating (ACR) method, in which perceptual video quality ratings are taken from users and then averaged to obtain a Mean Opinion Score. These scores are used to analyze the performance of the encoded videos with respect to users’ perception. In addition, the quality of the encoded video is measured using an objective assessment method: the objective metric SSIM (Structural Similarity). Based on the results, it was found that for lower packet loss and delay variation values H.264 showed better results than Xvid and WebM/VP8, whereas WebM/VP8 outperformed Xvid and H.264 for higher packet loss and delay variation values. On the whole, H.264 and WebM/VP8 performed better than Xvid. It was also found that all three video codecs performed better in the wired network than in the wireless network.
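
    A minimal sketch of the objective (SSIM) comparison described above, averaging frame-wise scores over a video; cv2 and scikit-image are real libraries with these calls, while the file names are placeholders.

        import cv2                                   # pip install opencv-python
        from skimage.metrics import structural_similarity

        def mean_ssim(reference_path, encoded_path):
            ref = cv2.VideoCapture(reference_path)
            enc = cv2.VideoCapture(encoded_path)
            scores = []
            while True:
                ok1, f1 = ref.read()
                ok2, f2 = enc.read()
                if not (ok1 and ok2):
                    break
                g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
                g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
                scores.append(structural_similarity(g1, g2))
            return sum(scores) / len(scores)

        print(mean_ssim("reference.avi", "encoded_h264.avi"))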

  • 11. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andrews, Anneliese
    Bhatti, Khurram
    An experiment on the effectiveness and efficiency of exploratory testing (2015). In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no 3, pp. 844-878. Article in journal (Refereed).
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

  • 12. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards benchmarking feature subset selection methods for software fault prediction (2016). In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, pp. 33-58. Chapter in book (Refereed).
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve the classification accuracy of NB and C4.5. There is no single best FSS method for all datasets, but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
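
    A minimal sketch of one FSS-method/learner combination from the comparison above: an information-gain-style ranking (approximated here by mutual information) feeding naive Bayes, scored by AUC over 10-fold cross-validation. scikit-learn stands in for the study's actual experimental setup, and the synthetic dataset is a placeholder for a PROMISE dataset.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.pipeline import make_pipeline

        X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                                   random_state=0)

        # AUC with all attributes.
        auc_full = cross_val_score(GaussianNB(), X, y, cv=10, scoring="roc_auc")

        # AUC after selecting the top-ranked attributes.
        fss_nb = make_pipeline(SelectKBest(mutual_info_classif, k=8), GaussianNB())
        auc_fss = cross_val_score(fss_nb, X, y, cv=10, scoring="roc_auc")

        print(f"AUC before FSS: {auc_full.mean():.3f}, after: {auc_fss.mean():.3f}")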

  • 13.
    Ahlström, Eric
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Holmqvist, Lucas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparing Traditional Key Frame and Hybrid Animation (2017). In: SCA '17 Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, ACM Digital Library, 2017, article no. a20. Conference paper (Refereed).
    Abstract [en]

    In this research the authors explore a hybrid approach which uses the basic concept of key frame animation together with procedural animation to reduce the number of key frames needed for an animation clip. The two approaches are compared by conducting an experiment where the participating subjects were asked to rate them based on their visual appeal.
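
    A minimal sketch of the hybrid idea: sparse key frames interpolated as usual, with a procedural layer (here, a sinusoidal wobble) adding the detail the dropped key frames no longer encode; the interpolation and oscillation parameters are illustrative assumptions.

        import math

        def lerp(a, b, t):
            return a + (b - a) * t

        def hybrid_pose(keyframes, time, wobble_amp=0.05, wobble_hz=2.0):
            # keyframes: time-sorted list of (time, value) pairs.
            for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
                if t0 <= time <= t1:
                    base = lerp(v0, v1, (time - t0) / (t1 - t0))
                    # Procedural layer on top of the key-frame interpolation.
                    return base + wobble_amp * math.sin(2 * math.pi * wobble_hz * time)
            return keyframes[-1][1]

        print(hybrid_pose([(0.0, 0.0), (1.0, 10.0)], 0.5))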

  • 14.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    An Investigation of CPU utilization relationship between host and guests in a Cloud infrastructure (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Cloud computing has stood as a revolution in the IT world in recent years. This technology facilitates resource sharing by reducing hardware costs for business users, and it promises energy efficiency and better resource utilization to service providers. CPU utilization is a key metric considered in resource management across clouds.

    The main goal of this thesis is to investigate CPU utilization behavior with regard to host and guest, which helps in understanding the relationship between them. An understanding of this relationship is expected to be helpful in resource management.

    Working towards our goal, the methodology we adopted is experimental research. This involves experimental modeling, measurements and observations of the results. The experimental setup covers several complex scenarios including a cloud and a standalone virtualization system. The results are further analyzed for visual correlation.

    Results show that CPU utilization in the cloud and virtualization scenarios coincides. More experimental scenarios were designed based on the first observations. The results obtained show irregular behavior between the PM and VM under variable workload.

    CPU utilization retrieved from the cloud and from a standalone system is similar. In the 100% workload situations, CPU utilization was constant and no correlation coefficient could be obtained. Lower workloads showed correlation to varying degrees in most of the cases in our correlation analysis. It is expected that a larger number of iterations could change the output. Further analysis of these relationships for proper resource management techniques remains future work.
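
    A minimal sketch of the correlation step between host (PM) and guest (VM) CPU samples; the sample arrays are placeholders, and the zero-variance caveat matches the constant 100%-load case noted above.

        def pearson(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            # Undefined when either series is constant (zero variance),
            # as with CPU utilization pinned at 100%.
            return cov / (sx * sy)

        host_cpu = [22.1, 35.4, 41.0, 38.2, 55.9]    # % utilization samples
        guest_cpu = [18.7, 30.2, 39.5, 35.0, 51.3]
        print(pearson(host_cpu, guest_cpu))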

  • 15.
    Ahmed, Qutub Uddin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mujib, Saifullah Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Context Aware Reminder System: Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems (2014). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. A reminder system offers flexibility in daily life activities and assists in being independent. A reminder system not only helps with remembering daily life activities, but also serves to a great extent people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems such as disabilities or mild dementia. Traditional reminders, based on a fixed set of defined activities, are not enough to address the necessity in a wider context. To make the reminder more flexible, the user's current activities or contexts need to be considered. To recognize a user's current activity, different types of sensors can be used, and these sensors are available in Smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context-based, it is important to identify the context, and the user's activities need to be recognized at a particular moment. With this in mind, this research aims to understand the relevant contexts and activities, identify an effective way to recognize a user's three different activities (drinking, walking and jogging) using Smartphone sensors (accelerometer and gyroscope), and propose a model that uses the properties of this activity recognition. Methods. This research combined a survey and interviews with an exploratory Smartphone sensor experiment to recognize users' activity. An online survey was conducted with 29 participants, and interviews were held in cooperation with Karlskrona Municipality; four elderly people participated in the interviews. For the experiment, data for three different user activities were collected using Smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results. The survey and interviews helped in understanding which activities of daily living should be considered in designing the reminder system, and how and when it should be used. For instance, most survey participants already use some sort of reminder system, most of them use a Smartphone, and one of the most important tasks they forget is to take their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three activities: for walking and jogging the pattern is discrete, while for drinking the pattern is complex and can overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided evidence that a reminder system based on users' activity is essential in daily life. The large number of Smartphone users motivated this research to use Smartphone sensors to identify users' activity, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to recorded Smartphone sensor (accelerometer and gyroscope) data; the approach was evaluated with 99% accuracy on the experimental data. The study concluded by proposing a model that uses the properties of the activity identification and by developing a prototype reminder system. Preliminary tests were performed on the model, but further empirical validation and verification are needed.
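
    A minimal sketch of the kind of simple calculation the thesis applies to sensor data: the orientation-independent magnitude of each accelerometer reading, with windowed variance separating the activity patterns; the thresholds and labels are illustrative assumptions, not the thesis's evaluated model.

        import math

        def magnitudes(samples):
            # samples: list of (x, y, z) accelerometer readings in m/s^2
            return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

        def classify(window):
            mags = magnitudes(window)
            mean = sum(mags) / len(mags)
            var = sum((m - mean) ** 2 for m in mags) / len(mags)
            if var < 0.5:
                return "drinking"  # low, complex motion energy (hard to separate)
            return "walking" if var < 8.0 else "jogging"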

  • 16.
    Aivars, Sablis
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Benefits of transactive memory systems in large-scale development (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis.
    Abstract [en]

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work.

    Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on the related TMS literature we also focus on task allocation strategies, task characteristics and management decisions regarding the project structure, team structure and team composition.

    Methods. We use the data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews to measure transactive memory systems and their role in determining team performance. We measure teams’ TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation.

    Results. Data from two companies and 9 teams were analyzed and a positive influence of well-developed TMS on team performance was found. We found that in large-scale software development, teams need not only a well-developed internal TMS, but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects.

    Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization. 

  • 17.
    Akama-kisseh, Jerome
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    EXPLORING COMPUTERIZED TROUBLE TICKETING SYSTEM AND ITS BENEFITS IN VODAFONE GHANA (2016). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Today more than ever, a computerized trouble ticketing system is a booming information technology system that can make the difference between staying in business and falling behind in the competitive global telecommunication arena.

    This quantitative exploratory survey utilised conveniently selected research subjects to explore the computerized trouble ticketing system and its inherent benefits at Vodafone Ghana Plc. A cross-section of vital data collected with the aid of structured questionnaires was analyzed using a descriptive statistics model.

    The study revealed that effective and efficient usage of computerized trouble ticketing systems benefits the company in terms of customer satisfaction, competitive advantage and business intelligence in the competitive telecom arena. Nevertheless, the smooth realization of these inherent benefits is constantly challenged by the complexity of managing the volumes of data generated, an intense era of competition, the high cost of trouble ticketing systems, as well as rapid technological obsolescence in computerized trouble ticketing applications in the telecommunication market.

    The study recommended the quick and effective adoption of a differentiation strategy, a cost leadership strategy and customer relationship management: customer-centric measures that can build the kind of sustainable long-term customer relationships that create value for the company as well as for the customers.

  • 18.
    Akkineni, Srinivasu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The impact of RE process factors and organizational factors during alignment between RE and V&V: Systematic Literature Review and Survey (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Requirements engineering (RE) and verification and validation (V&V) need to be integrated to assure the successful development of a software project; activating both competences in the early stages of the project helps products meet customer expectations regarding quality and functionality. This quality, however, can only be achieved by aligning RE and V&V. Organizations follow different practices, such as requirements, verification, validation, control and tool practices, for alignment and to address the challenges faced during alignment between RE and V&V. However, studies are still needed to understand the alignment practices, challenges and factors that enable successful alignment between RE and V&V.

    Objectives: In this study, an exploratory investigation is carried out to determine the impact of RE process factors and organizational factors during the alignment between RE and V&V. The main objectives of this study are:

    1. To find the list of RE practices that facilitate alignment between RE and V&V.
    2. To categorize RE practices with respect to their requirement phases.
    3. To find the list of RE process and organizational factors that influence alignment between RE and V&V, together with their impact.
    4. To identify the challenges that are faced during the alignment between RE and V&V.
    5. To obtain a list of challenges that are addressed by RE practices during the alignment between RE and V&V.

    Methods: In this study a systematic literature review (SLR) was conducted using a snowballing procedure to identify relevant information about RE practices, challenges, RE process factors and organizational factors. The studies were captured from the Engineering Village database. Rigor and relevance analysis was performed to assess the quality of the studies obtained through the SLR. Further, a questionnaire intended for an industrial survey was prepared from the gathered literature and distributed to practitioners from the software industry in order to collect empirical information for this study. Thereafter, the data obtained from the industrial survey were analyzed using statistical analysis and the chi-square significance test.

    Results: 20 studies relevant to this study were identified through the SLR. After analyzing these studies, lists of RE process factors, organizational factors, challenges and RE practices during the alignment between RE and V&V were gathered. An industrial survey built on the obtained literature received 48 responses. The alignment between RE and V&V is affected by RE process factors and organizational factors, as also reported by the survey respondents. Moreover, this study finds additional RE process factors and organizational factors at play during the alignment between RE and V&V, together with their impact. Another contribution is addressing the challenges left unaddressed by the RE practices identified in the literature. Additionally, the categorized RE practices were validated with respect to their requirement phases.

    Conclusions: The results of this study will help practitioners gain more insight into the alignment between RE and V&V. This study identified the impact of RE process factors and organizational factors during the alignment between RE and V&V, along with the importance of the challenges faced during that alignment, and it addressed the challenges left unaddressed by the RE practices in the literature. Survey respondents believe that many RE process and organizational factors have a negative impact on the alignment between RE and V&V depending on the size of the organization. In addition, the results for applying RE practices at different requirement phases were validated through the survey. Practitioners can draw benefits from this research, and researchers can extend this study to the remaining alignment practices.

  • 19.
    Akser, M.
    et al.
    Ulster University, GBR.
    Bridges, B.
    Ulster University, GBR.
    Campo, G.
    Ulster University, GBR.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Curran, K.
    Ulster University, GBR.
    Fitzpatrick, L.
    Ulster University, GBR.
    Hamilton, L.
    Ulster University, GBR.
    Harding, J.
    Ulster University, GBR.
    Leath, T.
    Ulster University, GBR.
    Lunney, T.
    Ulster University, GBR.
    Lyons, F.
    Ulster University, GBR.
    Ma, M.
    University of Huddersfield, GBR.
    Macrae, J.
    Ulster University, GBR.
    Maguire, T.
    Ulster University, GBR.
    McCaughey, A.
    Ulster University, GBR.
    McClory, E.
    Ulster University, GBR.
    McCollum, V.
    Ulster University, GBR.
    Mc Kevitt, P.
    Ulster University, GBR.
    Melvin, A.
    Ulster University, GBR.
    Moore, P.
    Ulster University, GBR.
    Mulholland, E.
    Ulster University, GBR.
    Muñoz, K.
    BijouTech, CoLab, Letterkenny, Co., IRL.
    O’Hanlon, G.
    Ulster University, GBR.
    Roman, L.
    Ulster University, GBR.
    SceneMaker: Creative technology for digital storytelling (2017). In: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering / [ed] Brooks A.L., Brooks E., Springer Verlag, 2017, Vol. 196, pp. 29-38. Conference paper (Refereed).
    Abstract [en]

    The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017.

  • 20.
    Alahmad, Yazan
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Gender-Bending in Massively Multiplayer Online Role-playing Games: Reasons & Consequences of gender-bending (2015). Independent thesis, Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis.
  • 21.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A study of value in agile software development organizations (2017). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, pp. 271-288. Article in journal (Refereed).
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and although the primary critical success factor for software-intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it examines what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure and evaluate value-creation activities.

  • 22.
    Albinsson, Mattias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Andersson, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving Quality of Experience through Performance Optimization of Server-Client Communication (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In software engineering it is important to consider how a potential user experiences the system during usage. No software user will have a satisfying experience if they perceive the system as slow, unresponsive, unstable or hiding information. Additionally, if the system restricts users to only a limited set of actions, their experience degrades further. To evaluate the effect these issues have on a user’s perceived experience, a measure called Quality of Experience is applied.

    In this work the foremost objective was to improve how a user experiences a system suffering from the previously mentioned issues when searching for large amounts of data. To achieve this objective, the system was evaluated to identify the issues present and which of them affected the user-perceived Quality of Experience the most. The evaluated system was a warehouse management system developed and maintained by Aptean AB’s office in Hässleholm, Sweden. The system consists of multiple clients and a server, sending data over a network. The evaluation took the form of a case study analyzing the system’s performance, together with a survey answered by Aptean staff to gain knowledge of how the system is experienced when searching for large amounts of data. From the results, the three issues impacting Quality of Experience the most were identified: (1) interaction, a limited set of actions during a search; (2) transparency, limited representation of search progress and received data; (3) execution time, search completion taking a long time.

    After the system was analyzed, hypothesized technological solutions were implemented to resolve the identified issues. The first solution divided the data into multiple partitions, the second decreased the data size sent over the network by applying compression, and the third combined the two technologies. Following the implementations, a final set of measurements together with the same survey was performed to compare the solutions based on their performance and the improvement gained in perceived Quality of Experience.

    The most significant improvement in perceived Quality of Experience was achieved by the data partitioning solution. While the combination of solutions offered a slight further improvement, it was primarily thanks to data partitioning, making that technology a more suitable solution for the identified issues than compression, which only slightly improved perceived Quality of Experience. When the data was partitioned, updates were sent more frequently, which not only allowed the user a larger set of actions during a search but also improved the information available in the client regarding search progress and received data. While data partitioning did not improve the execution time, it gave the user a first set of data quickly rather than forcing the user to wait idly, so the user experienced the system as fast. The results indicate that to increase the user’s perceived Quality of Experience for systems with server-client communication, data partitioning offers several opportunities for improvement.
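
    A minimal sketch of the winning idea: the server streams results in fixed-size partitions so the client can show progress and partial data, and the user can act, after the first chunk arrives; the generator-based framing is an illustrative assumption, not Aptean's implementation.

        def search_results(rows, partition_size=200):
            # Server side: yield partitions as they are produced instead of
            # one monolithic payload at search completion.
            for i in range(0, len(rows), partition_size):
                yield rows[i:i + partition_size]

        def client_render(stream):
            received = 0
            for partition in stream:
                received += len(partition)
                # UI stays responsive: progress and partial data are visible,
                # and actions such as cancel or refine remain available.
                print(f"showing {received} rows so far...")

        client_render(search_results(list(range(1000))))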

  • 23. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kolstrom, Pirjo
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing (2016). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, pp. 66-80. Article in journal (Refereed).
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.
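
    A minimal sketch of a break-even ROI calculation of the kind the cost model performs; the linear cost form and the numbers are assumptions for illustration, not the paper's model.

        def months_to_positive_roi(dev_cost, maint_per_month, manual_per_month):
            # Automation pays off once cumulative manual-testing cost exceeds
            # development cost plus cumulative maintenance cost:
            #     manual * t >= dev + maint * t
            saving = manual_per_month - maint_per_month
            if saving <= 0:
                return float("inf")  # automation never breaks even
            return dev_cost / saving

        # The less time currently spent on manual testing, the longer to ROI:
        print(months_to_positive_roi(dev_cost=400, maint_per_month=10,
                                     manual_per_month=50))  # 10.0 months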

  • 24. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ryrholm, Lisa
    Visual GUI testing in practice: challenges, problems and limitations (2015). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, pp. 694-744. Article in journal (Refereed).
    Abstract [en]

    In today’s software development industry, high-level tests such as Graphical User Interface (GUI) based system and acceptance tests are mostly performed with manual practices that are often costly, tedious and error prone. Test automation has been proposed to solve these problems but most automation techniques approach testing from a lower level of system abstraction. Their suitability for high-level tests has therefore been questioned. High-level test automation techniques such as Record and Replay exist, but studies suggest that these techniques suffer from limitations, e.g. sensitivity to GUI layout or code changes, system implementation dependencies, etc. Visual GUI Testing (VGT) is an emerging technique in industrial practice with perceived higher flexibility and robustness to certain GUI changes than previous high-level (GUI) test automation techniques. The core of VGT is image recognition which is applied to analyze and interact with the bitmap layer of a system’s front end. By coupling image recognition with test scripts, VGT tools can emulate end user behavior on almost any GUI-based system, regardless of implementation language, operating system or platform. However, VGT is not without its own challenges, problems and limitations (CPLs) but, like for many other automated test techniques, there is a lack of empirically-based knowledge of these CPLs and how they impact industrial applicability. Crucially, there is also a lack of information on the cost of applying this type of test automation in industry. This manuscript reports an empirical, multi-unit case study performed at two Swedish companies that develop safety-critical software. It studies their transition from manual system test cases into tests automated with VGT. In total, four different test suites that together include more than 300 high-level system test cases were automated for two multi-million lines of code systems. The results show that the transitioned test cases could find defects in the tested systems and that all applicable test cases could be automated. However, during these transition projects a number of hurdles had to be addressed; a total of 58 different CPLs were identified and then categorized into 26 types. We present these CPL types and an analysis of the implications for the transition to and use of VGT in industrial software development practice. In addition, four high-level solutions are presented that were identified during the study, which would address about half of the identified CPLs. Furthermore, collected metrics on cost and return on investment of the VGT transition are reported together with information about the VGT suites’ defect finding ability. Nine of the identified defects are reported, 5 of which were unknown to testers with extensive experience from using the manual test suites. The main conclusion from this study is that even though there are many challenges related to the transition and usage of VGT, the technique is still valuable, flexible and considered cost-effective by the industrial practitioners. The presented CPLs also provide decision support in the use and advancement of VGT and potentially other automated testing techniques similar to VGT, e.g. Record and Replay.
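
    A minimal sketch of the VGT core loop the abstract describes: locate a GUI element in the screen's bitmap layer and interact with it. pyautogui is one real library offering these calls (confidence= additionally requires opencv-python); the screenshot file name is a placeholder.

        import pyautogui  # pip install pyautogui

        def click_button(image_path, tries=10):
            # Image recognition against the bitmap layer works regardless of
            # the tested system's implementation language, OS or platform.
            for _ in range(tries):
                try:
                    pos = pyautogui.locateCenterOnScreen(image_path, confidence=0.9)
                except pyautogui.ImageNotFoundException:
                    pos = None  # newer versions raise instead of returning None
                if pos:
                    pyautogui.click(pos)
                    return True
            return False

        click_button("save_button.png")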

  • 25.
    Aleksandr, Polescuk
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Linking Residential Burglaries using the Series Finder Algorithm in a Swedish Context (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. A minority of criminals performs a majority of crimes today. It is known that every criminal or group of offenders to some extent has a particular pattern (modus operandi) for how crimes are performed. Computational power can therefore be employed to discover crimes that follow the same pattern and were possibly carried out by the same criminal. The goal of this thesis was to apply the existing Series Finder algorithm to a feature-rich dataset containing data about Swedish residential burglaries.

    Objectives. The following objectives were achieved to complete this thesis: an existing Series Finder implementation was modified to fit the Swedish police force's dataset and its MatLab code was converted to Python; an experiment setup was designed with appropriate metrics and statistical tests; and the modified Series Finder implementation was evaluated against both Spatial-Temporal and Random models.

    Methods. The experimental methodology was chosen in order to achieve the objectives. An initial experiment was performed to find right parameters to use for main experiments. Afterward, a proper investigation with dependent and independent variables was conducted.

    Results. After the metrics calculations and the statistical tests applications, the accurate picture revealed how each model performed. Series Finder showed better performance than a Random model. However, it had lower performance than the Spatial-Temporal model. The possible causes of one model performing better than another are discussed in analysis and discussion section.

    Conclusions. After completing objectives and answering research questions, it could be clearly seen how the Series Finder implementation performed against other models. Despite its low performance, Series Finder still showed potential, as presented in future work.
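
    As a hedged sketch of the crime-linkage idea behind Series Finder, the snippet below scores pairs of burglaries by combining spatial proximity, temporal proximity and modus operandi overlap; the features, weights and threshold are hypothetical stand-ins, not the algorithm's actual parameters.

    from math import hypot

    burglaries = [
        {"id": 1, "x": 10.0, "y": 4.0, "day": 100, "entry": "window", "tools": "crowbar"},
        {"id": 2, "x": 10.6, "y": 4.2, "day": 107, "entry": "window", "tools": "crowbar"},
        {"id": 3, "x": 55.0, "y": 30.0, "day": 102, "entry": "door", "tools": "lockpick"},
    ]

    def similarity(a, b):
        spatial = 1.0 / (1.0 + hypot(a["x"] - b["x"], a["y"] - b["y"]))
        temporal = 1.0 / (1.0 + abs(a["day"] - b["day"]))
        mo = sum(a[k] == b[k] for k in ("entry", "tools")) / 2.0  # modus operandi overlap
        return 0.3 * spatial + 0.2 * temporal + 0.5 * mo  # hypothetical weights

    THRESHOLD = 0.5  # hypothetical linkage cut-off
    for i, a in enumerate(burglaries):
        for b in burglaries[i + 1:]:
            score = similarity(a, b)
            if score > THRESHOLD:
                print(f"crimes {a['id']} and {b['id']} may belong to one series (score {score:.2f})")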

  • 26.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is effectiveness sufficient to choose an intervention?: Considering resource use in empirical software engineering, 2016. In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2016, Ciudad Real, Spain, September 8-9, 2016, 2016, 54. Conference paper (Refereed)
  • 27.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development, 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, 213-227 p. Article in journal (Refereed)
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and take no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information- and communication-related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was applied to a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM were evaluated from the practitioners’ perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature of the identified improvements and their impact on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating its practical usefulness for waste removal with a focus on information-flow-related issues.

  • 28.
    Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating strategies for study selection in systematic literature studies, 2014. In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, 2014, Vol. article 45. Conference paper (Refereed)
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population and there is homogeneity in the results and quality of the articles.

  • 29.
    Alibabaei, Navid
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Wireless Mesh Networks: a comparative study of Ad-Hoc routing protocols toward more efficient routing, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Each day, the dream of seamless networking and connectivity everywhere is getting closer to becoming a reality. In this regard, mobile Ad-Hoc networks (MANETs) have been a hot topic in the last decade, but MANETs today account for only a tiny share of our everyday network connectivity, with infrastructure-based networks carrying the major share. On the other hand, since the future of networking arguably belongs to Ad-Hoc operation, we can for now give our everyday infrastructure networks a taste of Ad-Hoc capability; such networks are called Wireless Mesh Networks (WMN), and routing plays a vital role in their functionality. In this thesis we examine the functionality of three Ad-Hoc routing protocols, AODV, OLSR and GRP, through simulation in OPNET 17.5. To this end we set up four different scenarios that vary in the number of nodes, background traffic and node mobility. The protocols are measured by network throughput, end-to-end delay of the transmitted packets and packet loss ratio as performance metrics. After running the simulations and gathering the results, we study them comparatively, first per scenario and then per protocol. In conclusion, since former studies suggest that AODV, OLSR and GRP are among the best routing protocols for WMNs, this research does not crown a single best routing protocol based on the obtained results; instead, we discuss the network conditions under which each of these protocols performs best and suggest suitable routing mechanisms for different networks based on the preceding analysis.
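
    The three performance metrics named above can be computed from a per-packet trace as sketched below; the trace records and their values are hypothetical placeholders for data exported from a simulator such as OPNET.

    packets = [
        {"sent": 0.00, "recv": 0.05, "bits": 8000},
        {"sent": 0.10, "recv": 0.18, "bits": 8000},
        {"sent": 0.20, "recv": None, "bits": 8000},  # lost packet
        {"sent": 0.30, "recv": 0.37, "bits": 8000},
    ]

    delivered = [p for p in packets if p["recv"] is not None]
    duration = max(p["recv"] for p in delivered) - min(p["sent"] for p in packets)

    throughput_bps = sum(p["bits"] for p in delivered) / duration
    avg_delay_s = sum(p["recv"] - p["sent"] for p in delivered) / len(delivered)
    loss_ratio = 1 - len(delivered) / len(packets)

    print(f"throughput: {throughput_bps / 1000:.1f} kbit/s")
    print(f"end-to-end delay: {avg_delay_s * 1000:.1f} ms")
    print(f"packet loss ratio: {loss_ratio:.0%}")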

  • 30.
    Altaf, Moaz
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    SMI-S for the Storage Area Network (SAN) Management, 2014. Student thesis
    Abstract [en]

    Storage vendors have their own standards for the management of their storage resources, which creates interoperability issues across different storage products. With the advent of the protocol named Storage Management Initiative-Specification (SMI-S), the Storage Networking Industry Association (SNIA) has taken a major step toward making storage management more effective and organized. SMI-S supersedes approaches based on the Simple Network Management Protocol (SNMP) and has been ratified as an ISO standard. The main objective of SMI-S is to provide interoperable management of heterogeneous storage vendor systems by unifying Storage Area Network (SAN) management. SMI-S is a guide to building systems from modules that ‘plug’ together: SMI-S-compliant storage modules that use the CIM ‘language’ and adhere to the CIM schema interoperate in a system regardless of which vendor built them. SMI-S is object-oriented; any physical or abstract storage-related element can be defined as a CIM object. SMI-S can thus unify SAN management systems, works well in heterogeneous storage environments, and offers cross-platform, cross-vendor storage resource management. This thesis discusses the use of SMI-S at Compuverde, a storage solution provider founded by Stefan Bernbo and located in Karlskrona in southeastern Sweden. Like other leading storage providers, Compuverde decided to deploy SMI-S to manage its Storage Area Network (SAN) and to achieve interoperability. This work was done to help Compuverde deploy the SMI-S protocol for SAN management which, among many of its features, would create alerts/traps in case of a disk failure in the SAN. In this way, the company can keep its clients’ data safe and secure and maintain its reputation for reliability in the storage industry. Since Compuverde regularly uses Microsoft Windows, and Microsoft has started to support SMI-S for storage provisioning in System Center 2012 Virtual Machine Manager (SCVMM), this work was done using SCVMM 2012 and Windows Server 2012. The SMI-S provider used for this work was a QNAP TS-469 Pro.
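
    As a rough sketch of what talking to an SMI-S provider looks like through its CIM object model, the snippet below uses the Python library pywbem; the host, credentials and choice of CIM class are hypothetical placeholders.

    import pywbem

    # Connect to a hypothetical SMI-S provider's CIM/WBEM endpoint.
    conn = pywbem.WBEMConnection(
        "https://smis-provider.example.com:5989",
        ("admin", "password"),
        default_namespace="root/cimv2",
    )

    # Every storage element is exposed as a CIM object; enumerate disk drives
    # and report their operational status (e.g. to detect a failed disk).
    for disk in conn.EnumerateInstances("CIM_DiskDrive"):
        print(disk["DeviceID"], disk["OperationalStatus"])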

  • 31.
    Aluguri, Tarun
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Evaluation of OpenStack Deployment Tools, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cloud computing enables on-demand access to a shared pool of computing resources that can be easily provisioned, configured and released with minimal management cost and effort. OpenStack is an open source cloud management platform aimed at providing private or public IaaS clouds on standard hardware. Since deploying OpenStack manually is tedious and time-consuming, there are several tools that automate its deployment. Usually, cloud admins choose a tool based on its level of automation, its ease of use or its interoperability with their existing tools. Another desirable factor when choosing a deployment tool, however, is its deployment speed. Cloud admins cannot select on this factor since there is no previous work comparing deployment tools by deployment time. This thesis aims to address this issue.

    The main aim of the thesis is to evaluate the performance of OpenStack deployment tools with respect to operating system provisioning and OpenStack deployment time on physical servers. Furthermore, the effect on provisioning and deployment times of varying the number of nodes, the OpenStack architecture deployed and the resources (cores and RAM) provided to the deployment node is analyzed. The tools are also classified based on their stages of deployment and their method of deploying OpenStack services. In this thesis we evaluate the performance of MAAS, Foreman, Mirantis Fuel and Canonical Autopilot.

    The performance of the tools is measured via an experimental research method. Operating system provisioning time and OpenStack deployment time are measured while varying the number of nodes/OpenStack architecture and the resources provided to the deployment node, i.e. cores and RAM.

    Results show that the provisioning time of MAAS is lower than that of Mirantis Fuel, which in turn is lower than that of Foreman. Furthermore, for all three tools, provisioning time increases as the number of nodes increases, but the increase is smallest for MAAS. Similarly, the results for bare metal OpenStack deployment time show that Canonical Autopilot outperforms Mirantis Fuel by a significant margin for all OpenStack scenarios considered, and that deployment time for both tools increases with the number of nodes in a scenario.

    From the research it is concluded that MAAS and Canonical Autopilot perform better than the other analyzed tools as a provisioning tool and a bare metal OpenStack deployment tool, respectively. Furthermore, it can be concluded that an increase in the number of nodes/OpenStack architecture leads to an increase in both provisioning time and OpenStack deployment time for all the tools.
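
    A minimal sketch of the measurement idea, wall-clock timing around a deployment step, is shown below; the deploy_openstack.sh script and its flag are hypothetical placeholders, since each tool (MAAS, Foreman, Mirantis Fuel, Canonical Autopilot) has its own real interface.

    import subprocess
    import time

    def timed_run(cmd):
        """Run one deployment step and return its wall-clock duration in seconds."""
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        return time.monotonic() - start

    for nodes in (3, 6, 9):  # vary the OpenStack scenario size
        # Hypothetical wrapper script standing in for the tool's real CLI.
        duration = timed_run(["./deploy_openstack.sh", "--nodes", str(nodes)])
        print(f"{nodes} nodes: {duration:.1f} s")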

  • 32.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Chalmers, SWE.
    On the long-term use of visual gui testing in industrial practice: a case study, 2017. In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no 6, 2937-2971 p. Article in journal (Refereed)
    Abstract [en]

    Visual GUI Testing (VGT) is a tool-driven technique for automated GUI-based testing that uses image recognition to interact with and assert the correctness of the behavior of a system through its GUI as it is shown to the user. The technique’s applicability, e.g. defect-finding ability, and feasibility, e.g. time to positive return on investment, have been shown through empirical studies in industrial practice. However, there is a lack of studies that evaluate the usefulness and challenges associated with VGT when used long-term (years) in industrial practice. This paper evaluates how VGT was adopted and applied, and why it was abandoned, at the music streaming application development company Spotify after several years of use. A qualitative study with two workshops and five carefully chosen employees was performed at the company, supported by a survey, and analyzed with a grounded theory approach to answer the study’s three research questions. The interviews provide insights into the challenges, problems and limitations, but also the benefits, that Spotify experienced during the adoption and use of VGT. However, due to the technique’s drawbacks, VGT has been abandoned in favor of a new technique/framework, simply called the Test interface. The Test interface is considered more robust and flexible for Spotify’s needs but has several drawbacks, including that it does not test the actual GUI as shown to the user the way VGT does. From the study’s results it is concluded that VGT can be used long-term in industrial practice, but it requires organizational change as well as engineering best practices to be beneficial. Through synthesis of the study’s results and results from previous work, a set of guidelines is presented that aims to aid practitioners in adopting and using VGT in industrial practice. However, due to the abandonment of the technique, future research is required to analyze in what types of projects the technique is, and is not, viable long-term. To this end, we also present Spotify’s Test interface solution for automated GUI-based testing and conclude that it has its own benefits and drawbacks.

  • 33.
    Alégroth, Emil
    et al.
    Chalmers, SWE.
    Gustafsson, Johan
    SAAB AB, SWE.
    Ivarsson, Henrik
    SAAB AB, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Replicating Rare Software Failures with Exploratory Visual GUI Testing, 2017. In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 34, no 5, 53-59 p., article id 8048660. Article in journal (Refereed)
    Abstract [en]

    Saab AB developed software containing a defect that manifested itself only after months of continuous system use. After years of customer failure reports the defect still persisted, until Saab developed failure replication based on Visual GUI Testing.

  • 34.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
    Overview of the ICST International Software Testing Contest, 2017. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, 550-551 p. Conference paper (Refereed)
    Abstract [en]

    In the software testing contest, practitioners and researchers are invited to pit their test approaches against similar approaches, to evaluate pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated using the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing, Verification and Validation (ICST), 2017.

  • 35.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nutalapati, Swetha Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Continuous Integration, Deployment and Testing in DevOps Environment, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Owing to a multitude of factors like rapid changes in technology, market needs and business competitiveness, software companies these days are facing pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles, has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

    Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies to overcome those challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Methods. A mixed methods approach, combining qualitative and quantitative research methods, is used to accomplish the research objectives. A Systematic Literature Review (SLR) is conducted to identify the benefits and challenges of DevOps adoption and the DevOps practices. Interviews are conducted to further validate the SLR findings and to identify the solutions to overcome DevOps adoption challenges. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data and to identify other benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption and of the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed and interviews are conducted. The interview data is thematically coded, and a list of the benefits, the challenges of DevOps adoption and solutions to overcome them, the DevOps practices, and the problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, the DevOps practices and the problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document and can aid researchers and software practitioners in obtaining a better understanding of the functioning and adoption of DevOps. It has also been observed that there is a need for more empirical research in this domain.

  • 36. Ambreen, T.
    et al.
    Ikram, N.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Niazi, M.
    Empirical research in requirements engineering: trends and opportunities, 2016. In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, 1-33 p. Article in journal (Refereed)
    Abstract [en]

    Requirements engineering (RE), as a foundation of software development, has gained great recognition in the software industry in recent years. A number of journals and conferences have published a great amount of RE research in terms of various tools, techniques, methods and frameworks, with a variety of processes applicable in different software development domains. This plethora of empirical RE research needs to be synthesized to identify trends and future research directions. To represent the state of the art of requirements engineering, along with various trends and opportunities of empirical RE research, we conducted a systematic mapping study to synthesize the empirical work done in RE. We used four major databases (IEEE, ScienceDirect, SpringerLink and ACM) and identified 270 primary studies published up to the year 2012. An analysis of the data extracted from the primary studies shows that empirical research work in RE has been on the increase since the year 2000. Requirements elicitation, with 22 % of the total studies, requirements analysis, with 19 %, and the RE process, with 17 %, are the major focus areas of empirical RE research. Non-functional requirements were found to be the most researched emerging area. Empirical work in the sub-area of requirements validation and verification is scarce and shows a decreasing trend. The majority of the studies (50 %) used a case study research method, followed by experiments (28 %), whereas experience reports are few (6 %). A common trend in almost all RE sub-areas is the proposal of new interventions; the leading intervention types are guidelines, techniques and processes. Interest in empirical RE research is on the rise as a whole. However, the requirements validation and verification area, despite its recognized importance, currently lacks empirical research, and requirements evolution and privacy requirements also have little empirical research; these RE sub-areas need the attention of researchers. At present, the focus of empirical RE research is mostly on proposing new interventions. In the future, there is a need to replicate existing studies and to evaluate RE interventions in more realistic contexts and scenarios. Practitioners’ involvement in empirical RE research needs to be increased so that they share their experiences of using different RE interventions and inform us about the current requirements-related challenges and issues that they face in their work.

  • 37.
    Amiri, Mohammad Reza Shams
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rohani, Sarmad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Automated Camera Placement using Hybrid Particle Swarm Optimization, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Automatic placement of surveillance cameras' 3D models in an arbitrary floor plan containing obstacles is a challenging task. The problem becomes more complex when different types of regions of interest (RoI) and minimum-resolution requirements are considered. An automatic camera placement decision support system (ACP-DSS) integrated into a 3D CAD environment could assist surveillance system designers in finding good camera settings under multiple constraints. Objectives. In this study we designed and implemented two subsystems: a camera toolset in SketchUp (CTSS) and a decision support system using an enhanced Particle Swarm Optimization (PSO) algorithm (HPSO-DSS). The objective for the proposed algorithm was good computational performance, in order to quickly generate a solution to the automatic camera placement (ACP) problem. The new algorithm benefited from aspects of other heuristics, such as hill-climbing and greedy algorithms, as well as a number of new enhancements. Methods. Both CTSS and ACP-DSS were designed and constructed using the information technology (IT) research framework. A state-of-the-art evolutionary optimization method, Hybrid PSO (HPSO), implemented to solve the ACP problem, was the core of our decision support system. Results. CTSS was evaluated by some of its potential users, who employed it and then answered a survey; the evaluation showed a high level of satisfaction among the respondents. Various aspects of the HPSO algorithm were compared to two other algorithms (PSO and a Genetic Algorithm), all implemented to solve our ACP problem. Conclusions. The HPSO algorithm provided an efficient mechanism to solve the ACP problem in a timely manner. The integration of ACP-DSS into CTSS may help surveillance designers plan and validate the design of their security systems adequately and more easily. The quality of CTSS, as well as of the solutions offered by ACP-DSS, was confirmed by a number of field experts.
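
    For illustration, a minimal particle swarm optimization core of the kind such a DSS could build on is sketched below; the coverage() objective is a hypothetical stand-in for the real fitness function (RoI coverage under resolution and obstacle constraints), and all parameters are toy values.

    import random

    def coverage(position):
        # Hypothetical fitness: replace with a real RoI-coverage model.
        x, y, pan = position
        return -((x - 5.0) ** 2 + (y - 3.0) ** 2) - abs(pan) * 0.1

    def pso(dim=3, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm)]
        vel = [[0.0] * dim for _ in range(swarm)]
        pbest = [p[:] for p in pos]                 # personal bests
        gbest = max(pbest, key=coverage)            # global best
        for _ in range(iters):
            for i in range(swarm):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if coverage(pos[i]) > coverage(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = max(pbest, key=coverage)
        return gbest

    print(pso())  # best camera (x, y, pan) found under the toy objective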

  • 38.
    Amjad, Shoaib
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Malhi, Rohail Khan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Burhan, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    DIFFERENTIAL CODE SHIFTED REFERENCE IMPULSE-BASED COOPERATIVE UWB COMMUNICATION SYSTEM, 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Cooperative Impulse Radio Ultra-Wideband (IR-UWB) communication is a radio technology popular for short-range communication systems, as it enables single-antenna mobiles in a multi-user environment to share their antennas by creating a virtual MIMO system to achieve transmit diversity. In order to improve the performance of the cooperative IR-UWB system, we use Differential Code Shifted Reference (DCSR). Simulations are used to compute the Bit Error Rate (BER) of DCSR in a cooperative IR-UWB system for different numbers of Decode-and-Forward relays while changing the distance between the source and destination nodes. The results suggest that, compared to a Code Shifted Reference (CSR) cooperative IR-UWB communication system, the DCSR cooperative IR-UWB communication system performs better in terms of BER, power efficiency and channel capacity. The simulations are performed for both non-line-of-sight (N-LOS) and line-of-sight (LOS) conditions, and the results confirm that the system performs better in a LOS channel environment. The simulation results also show that performance improves as the number of relay nodes is increased to a sufficiently large number.
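
    The Monte-Carlo structure of such a BER simulation is sketched below; for brevity it uses plain BPSK over an AWGN channel as a stand-in for the DCSR IR-UWB transceiver chain, which is far more involved, but the loop (transmit known bits, add noise, count errors) is the same.

    import math
    import random

    def ber_bpsk_awgn(ebn0_db, n_bits=100_000):
        """Estimate BER of BPSK over AWGN by Monte-Carlo simulation."""
        ebn0 = 10 ** (ebn0_db / 10)
        sigma = math.sqrt(1 / (2 * ebn0))  # noise std for unit-energy symbols
        errors = 0
        for _ in range(n_bits):
            bit = random.randint(0, 1)
            symbol = 1.0 if bit else -1.0
            received = symbol + random.gauss(0.0, sigma)
            if (received > 0) != bool(bit):
                errors += 1
        return errors / n_bits

    for snr in (0, 4, 8):
        print(f"Eb/N0 = {snr} dB -> BER ~ {ber_bpsk_awgn(snr):.4f}")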

  • 39.
    Ammar, Doreid
    et al.
    Norwegian Univ Sci & Technol, NOR.
    De Moor, Katrien
    Norwegian Univ Sci & Technol, NOR.
    Xie, Min
    Next Generat Serv, Telenor Res, NOR.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Heegaard, Poul
    Norwegian Univ Sci & Technol, NOR.
    Video QoE Killer and Performance Statistics in WebRTC-based Video Communication, 2016. Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate session-related performance statistics of a Web-based Real-Time Communication (WebRTC) application called appear.in. We explore the characteristics of these statistics and examine how they may relate to users' Quality of Experience (QoE). More concretely, we ran a series of two-party tests according to different test scenarios and collected real-time session statistics by means of Google Chrome's WebRTC-internals tool. Despite the fact that the Chrome statistics have a number of limitations, our observations indicate that they are useful for QoE research when these limitations are known and carefully handled during post-processing analysis. The results from our initial tests show that a combination of performance indicators measured at the sender's and receiver's ends may help to identify severe video freezes (an important QoE killer) in the context of WebRTC-based video communication. The performance indicators used in this paper are significant drops in data rate, non-zero packet loss ratios, non-zero PLI values, and non-zero bucket delay.
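
    A minimal sketch of combining these indicators into a freeze flag is shown below; the sample records and thresholds are hypothetical, standing in for the per-second statistics collected from WebRTC-internals.

    samples = [
        {"t": 0, "bitrate_kbps": 1200, "packet_loss": 0.00, "pli": 0},
        {"t": 1, "bitrate_kbps": 1150, "packet_loss": 0.00, "pli": 0},
        {"t": 2, "bitrate_kbps": 90, "packet_loss": 0.07, "pli": 3},  # suspect freeze
        {"t": 3, "bitrate_kbps": 1100, "packet_loss": 0.00, "pli": 0},
    ]

    DROP_FACTOR = 0.25  # bitrate below 25% of the running average counts as a significant drop

    avg = samples[0]["bitrate_kbps"]
    for s in samples[1:]:
        dropped = s["bitrate_kbps"] < DROP_FACTOR * avg
        lossy = s["packet_loss"] > 0 or s["pli"] > 0
        if dropped and lossy:
            print(f"possible freeze (QoE killer) at t={s['t']}s")
        avg = 0.8 * avg + 0.2 * s["bitrate_kbps"]  # exponential moving average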

  • 40.
    AMUJALA, NARAYANA KAILASH
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    SANKI, JOHN KENNEDY
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Video Quality of Experience through Emulated Mobile Channels, 2014. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Over the past few years, Internet traffic has increased sharply, and most of this traffic is video. The latest Cisco forecast estimates that, by 2017, online video will be a widely adopted service with a large customer base. As networks become increasingly ubiquitous, applications are turning equally intelligent. A typical video communication chain involves transmission of encoded raw video frames with subsequent decoding at the receiver side. One intelligent codec that is gaining substantial research attention is H.264/SVC, which can adapt dynamically to end-device configurations and network conditions. With such bandwidth-hungry video communication running over lossy mobile networks, it is extremely important to quantify end-user acceptability. This work primarily investigates problems at the player user interface level rather than physical-layer disturbances. We chose inter-frame time at the application layer to quantify the user experience (player UI) for varying lower-layer settings such as noise and link power, with illustrative demonstrator cases. The results show that extreme noise and low link-power settings have an adverse effect on the user experience in the temporal dimension: the videos are affected by frequent jumps and freezes.
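
    The inter-frame-time metric itself is simple to compute, as the hedged sketch below shows; the frame timestamps and the freeze threshold are hypothetical values.

    frame_times = [0.00, 0.04, 0.08, 0.12, 0.52, 0.56, 0.60]  # seconds; one 400 ms freeze

    FREEZE_THRESHOLD = 0.1  # gaps above 100 ms are treated as stutter/freeze here

    gaps = [b - a for a, b in zip(frame_times, frame_times[1:])]
    for i, gap in enumerate(gaps, start=1):
        if gap > FREEZE_THRESHOLD:
            print(f"freeze before frame {i}: inter-frame time {gap * 1000:.0f} ms")
    print(f"mean inter-frame time: {sum(gaps) / len(gaps) * 1000:.1f} ms")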

  • 41.
    ananth, Indirajith Vijai
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Study on Assessing QoE of 3DTV Using Subjective Methods, 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    The ever increasing popularity of, and enormous growth in, the 3D movie industry is the stimulus for the penetration of 3D services into home entertainment systems. Providing a third dimension gives an intense visual experience to viewers. Being a new field, there are several ongoing research efforts to measure the end user's viewing experience. Research groups including 3DTV manufacturers, service providers and standards organizations are interested in improving the user experience. Recent research in 3D video quality measurement has revealed uncertain issues as well as more well-known results. Measuring perceptual stereoscopic video quality by subjective testing can provide practical results. This thesis studies and investigates three different rating scales (video quality, visual discomfort and sense of presence) and compares them by subjective testing, combined with two viewing distances of 3H and 5H, where H is the height of the display screen. This thesis work shows that a single rating scale produces the same result as the three different scales, and that viewing distance has little or no impact on the Quality of Experience (QoE) of 3DTV for the 3H and 5H distances under symmetric coding impairments.

  • 42.
    Anderberg, Ted
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Rosén, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Follow the Raven: A Study of Audio Diegesis within a Game’s Narrative, 2017. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Virtual Reality is one of the next big things in gaming, and more and more games delivering an immersive VR experience are popping up. Words such as immersion and presence have quickly become buzzwords that are often used to describe a VR game or experience. This interactive simulation of reality is literally turning people's heads. The crowd pleaser, the ability to look around in 360 degrees, is however casting a shadow on the aural aspect. This study focused on this problem in relation to audio narrative. We examined which differences we could identify between a purely diegetic audio narrative and one utilizing a mix of diegetic and non-diegetic sound, and how to grab the player's attention and guide them to places in order for them to progress in the story. By spatializing audio using HRTF, we tested this dilemma through a game comparison with the help of soundscapes by R. Murray Schafer and the auditory hierarchy by David Sonnenschein, as well as inspiration from Actor Network Theory. In our game comparison we found that while the synthesized, non-diegetic sound reliably grabbed the player's attention, it also increased the risk of breaking the player's immersion.

  • 43.
    Anderdahl, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Darner, Alice
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Particle Systems Using 3D Vector Fields with OpenGL Compute Shaders, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    Context. Particle systems and particle effects are used to simulate a realistic and appealing atmosphere in many virtual environments. However, they occupy a significant amount of computational resources. The demand for more advanced graphics increases with each generation, and particle systems likewise need to become increasingly detailed. Objectives. This thesis proposes a texture-based 3D vector field particle system, computed on the Graphics Processing Unit, and compares it to an equation-based particle system. Methods. Several tests were conducted comparing different situations and parameters for the methods. All of the tests measured the computational time needed to execute the different methods. Results. We show that the texture-based method was effective in the very specific situations where it was expected to outperform the equation-based one. Otherwise, the equation-based particle system is still the most efficient. Conclusions. Generally the equation-based method is preferred, except in very specific cases. The texture-based method is most efficient for static particle systems and when a huge number of forces is applied to a particle system; texture-based vector fields are hardly useful otherwise.
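
    The texture-based idea can be sketched as below: particle velocities are fetched from a sampled 3D vector field (a small NumPy grid standing in for a 3D texture) rather than evaluated from a closed-form equation; grid size, bounds and time step are hypothetical, and on the GPU the update loop body would be a compute shader.

    import numpy as np

    GRID, BOUND, DT = 16, 1.0, 0.016
    field = np.random.uniform(-1, 1, size=(GRID, GRID, GRID, 3)).astype(np.float32)

    particles = np.random.uniform(-BOUND, BOUND, size=(1024, 3)).astype(np.float32)

    def step(p):
        # Map positions to grid indices (nearest-neighbour "texture fetch").
        idx = ((p + BOUND) / (2 * BOUND) * (GRID - 1)).round().astype(int)
        idx = np.clip(idx, 0, GRID - 1)
        v = field[idx[:, 0], idx[:, 1], idx[:, 2]]  # per-particle velocity lookup
        return np.clip(p + v * DT, -BOUND, BOUND)

    for _ in range(100):  # each iteration corresponds to one dispatch on the GPU
        particles = step(particles)
    print(particles[:3])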

  • 44.
    Andersen, Dennis
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Screen-Space Subsurface Scattering, A Real-time Implementation Using Direct3D 11.1 Rendering API, 2015. Independent thesis Basic level (degree of Bachelor), 180 HE credits. Student thesis
    Abstract [en]

    Context. Subsurface scattering is the effect of light scattering within a material. Many materials on earth possess translucent properties, so it is an important factor to consider when trying to render realistic images. Historically the effect has been used in offline rendering with ray tracers, but it is now considered a real-time rendering technique, implemented using approximations of previous models. Early real-time methods approximate the effect in object texture space, which does not scale well to real-time applications such as games. A relatively new approach makes it possible to apply the effect as a post-processing effect using GPGPU capabilities, making it compatible with most modern rendering pipelines.

    Objectives. The aim of this thesis is to explore the possibilities of a dynamic real-time solution to subsurface scattering, using a modern rendering API to utilize GPGPU programming and modern data management, combined with previous techniques.

    Methods. The proposed subsurface scattering technique is implemented in a delimited real-time graphics engine using a modern rendering API, and its impact on performance is evaluated by conducting several experiments with specific properties.

    Results. The results obtained hint that, by using a flexible solution to represent materials, execution time stays at an acceptable level and the technique could be used in real time. The results show that execution time grows nearly linearly with the number of layers and the strength of the effect. Because the technique is performed in screen space, performance scales with subsurface scattering screen coverage and screen resolution.

    Conclusions. The technique can be used in real time and can trivially be integrated into most existing rendering pipelines. Further research and testing should be done to determine how the effect scales in a complex 3D game environment.

  • 45.
    Andersson, Anders Tobias
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Facial Feature Tracking and Head Pose Tracking as Input for Platform Games, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are points distributed on a face in relation to certain facial features, such as eye corners and the face contour. This opens up for using facial feature movements as a hands-free human-computer interaction technique. Such alternatives to traditional input devices can give a more interesting gaming experience. They also open up for more intuitive controls and can give greater access to computers and video game consoles for certain disabled users who have difficulties using their arms and/or fingers.

    This research explores using facial feature tracking to control a character's movements in a platform game. The aim is to interpret facial feature tracker data and convert facial feature movements to game input controls. The facial feature input is compared with other hands-free input methods, as well as with traditional keyboard input. The other hands-free input methods explored are head pose estimation, where the application extracts the angles at which the user's head is tilted, and a hybrid of facial feature and head pose estimation input, which utilises both.

    The input methods are evaluated by user performance and subjective ratings from voluntary participants playing a platform game with each input method. Performance is measured by the time, the number of jumps and the number of turns it takes for a user to complete a platform level. Jumping is an essential part of platform games: to reach the goal, the player has to jump between platforms, and an inefficient input method can make this a difficult task. Turning is the action of changing the direction of the player character from facing left to facing right or vice versa. This measurement is intended to pick up difficulties in controlling the character's movements; if the player makes many turns, it is an indication that the input method makes it difficult to control the character efficiently.

    The results suggest that keyboard input is the most effective input method, while it is also the least entertaining. There is no significant difference in performance between facial feature input and head pose input. The hybrid input has the best overall results of the alternative input methods: it achieved significantly better performance than the head pose and facial feature input methods, and its results showed no statistically significant difference from keyboard input.

    Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Control
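
    As a toy illustration of the head pose input method, the sketch below maps yaw/pitch angles to platform-game controls; the tracker interface, dead zone and thresholds are hypothetical, since a real system would obtain the angles from a head pose estimator each frame.

    from dataclasses import dataclass

    @dataclass
    class HeadPose:
        yaw_deg: float    # left/right rotation
        pitch_deg: float  # up/down rotation

    DEADZONE_YAW = 8.0  # ignore small involuntary head movements
    JUMP_PITCH = -12.0  # looking up beyond 12 degrees triggers a jump

    def pose_to_controls(pose):
        controls = set()
        if pose.yaw_deg > DEADZONE_YAW:
            controls.add("move_right")
        elif pose.yaw_deg < -DEADZONE_YAW:
            controls.add("move_left")
        if pose.pitch_deg < JUMP_PITCH:
            controls.add("jump")
        return controls

    print(pose_to_controls(HeadPose(yaw_deg=15.0, pitch_deg=-20.0)))
    # expected: {'move_right', 'jump'} (set ordering may vary)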

  • 46.
    Andersson, Jonas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Silhouette-based Level of Detail: A comparison of real-time performance and image space metrics, 2016. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. The geometric complexity of objects in games and other real-time applications is a crucial aspect of application performance. Such applications usually redraw the screen between 30 and 60 times per second, sometimes even more often, which can be a hard task in an environment with a high number of geometrically complex objects. The concept called Level of Detail, often abbreviated LoD, aims to alleviate the load on the hardware by introducing methods and techniques to minimize the amount of geometry while still maintaining the same, or a very similar, result.

    Objectives. This study compares four commonly used techniques: Static LoD, Unpopping LoD, Curved PN Triangles, and Phong Tessellation. Phong Tessellation is silhouette-based, and since the silhouette is considered one of an object's most important properties, the main aim is to determine how it performs compared to the other three techniques.

    Methods. The four techniques are implemented in a real-time application using the modern rendering API Direct3D 11. Data is gathered from this application and used in several experiments, covering both performance and image space metrics.

    Conclusions. This study has shown that all of the techniques work in real time, but with varying results. From the experiments it can be concluded that the best technique to use is Unpopping LoD: it has good performance and provides a good visual result with the least average amount of popping of the compared techniques. The dynamic techniques are not suitable as a substitute for Unpopping LoD, but further research could examine how they can be used together, and how objects themselves can be designed with the dynamic techniques in mind.

  • 47.
    Andersson, Linda
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluation of HMI Development for Embedded System Control, 2014. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    Context: Interface development is increasing in complexity, and applications with many functionalities that are reliable, understandable and easy to use have to be developed. To be able to compete, time-to-market has to be short and development cost-effective. The development process is important, and there are many aspects that can be improved. The needs of the development effort and the knowledge among the developers are key factors. Here, code reuse, standardization and the usability of the development tool play an important role and can have a substantial positive impact on the development process and the quality of the final product. Objectives: A framework for describing important properties of HMI development tools is presented. Two representative development tools are selected and described, and based on the experiences from the case study their applicability is mapped to the evaluation framework. Methods: Interviews were conducted with HMI developers to get information from the field. Following that, a case study of the two development tools was made to highlight the pros and cons of each tool. Results: The properties presented in the evaluation framework are that the toolkit should be open to multiple platforms, be accessible to the developer, support custom templates, require no extensive coding knowledge and be reusable. The evaluated frameworks show that it is hard to meet all the demands. Conclusions: Finding a well-suited development toolkit is not an easy task. The choice should be made depending on the needs of the HMI applications and the available development resources.

  • 48.
    Andersson, Marcus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nilsson, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving Integrity Assurances of Log Entries From the Perspective of Intermittently Disconnected Devices, 2014. Student thesis
    Abstract [en]

    It is common today in large corporate environments for system administrators to employ centralized systems for log collection and analysis. The log data can come from any device, from smart-phones to large-scale server clusters. During an investigation of a system failure or suspected intrusion these logs may contain vital information. However, the trustworthiness of this log data must be confirmed. The objective of this thesis is to evaluate the state of the art and provide practical solutions and suggestions in the field of secure logging. We focus on solutions that do not require a persistent connection to a central log management system. To this end a prototype logging framework was developed, including client, server and verification applications. The client employs different techniques for signing log entries, and the focus of this thesis is to evaluate each signing technique from both a security and a performance perspective. This thesis evaluates traditional RSA signing, traditional hash chains, the Itkis-Reyzin asymmetric forward-secure signature (FSS) scheme, and RSA signing with tick-stamping via a Trusted Platform Module (TPM), the latter being a novel technique developed by us. In our evaluations we recognized the inability of the evaluated techniques to detect so-called truncation attacks, so a truncation detection module was also developed, which can be used independently of, and side by side with, any signing technique. We conclude that our novel Trusted Platform Module technique has the most to offer in terms of log security; however, it introduces a hardware dependency on the TPM. We have also shown that the truncation detection technique can assure an external verifier of the number of log entries that have at least passed through the log client software.
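
    A minimal sketch of one of the evaluated ideas, a traditional hash chain, is given below: each tag binds a log entry to the entire history before it, so modifying or reordering earlier entries invalidates every later tag, while truncation of the tail remains the classic blind spot that motivates a separate detection mechanism. The seed and entries are toy values.

    import hashlib

    def chain(entries, seed=b"initial-secret"):
        """Return one chained SHA-256 tag per log entry."""
        tags, link = [], seed
        for entry in entries:
            link = hashlib.sha256(link + entry.encode()).digest()
            tags.append(link.hex())
        return tags

    log = ["boot ok", "login alice", "disk error"]
    tags = chain(log)

    # A verifier holding the seed recomputes the chain; tampering shows up.
    assert chain(log) == tags
    tampered = ["boot ok", "login mallory", "disk error"]
    assert chain(tampered) != tags
    print("chain verifies; tampering detected")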

  • 49.
    Andersson, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Analysis of Arbetsmiljöverket's application of the survey tool NOSACQ-50, 2013. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    In 2013, the Swedish Work Environment Authority (Arbetsmiljöverket) introduced a new web-based survey tool intended to measure the safety culture of companies and organizations. The survey was based on earlier research that produced a questionnaire, NOSACQ-50, for exactly this purpose. Notably, Arbetsmiljöverket's version is shortened: some statements were removed and some were rewritten. This is what the analysis examines: how are the results affected by the changes Arbetsmiljöverket made? The analysis investigates the possible margin of error in two ways. First, a theoretical margin of error is calculated, showing how much the results can differ. Then the results from the survey are analyzed with the same variables as were established in the theoretical analysis. It turns out that Arbetsmiljöverket's version of the survey can give rise to a margin of error of nearly 0.8175 points. This margin is surprisingly large, even though it is based on a very improbable situation. The next part of the analysis shows that the survey exhibits a margin of error of <0.00 points, which means that the results are not affected to any great extent. This gives an interesting final result: a large margin of error is demonstrated in theory, but in practice it is close to nonexistent. How the result should be interpreted is not entirely clear. There are a number of sources of error that must be taken into account, such as the low number of participants in the survey. The analysis also relies to a large extent on subjective judgments, which reduces the credibility of the results. The author has therefore concluded that there is an evident difference in results between the analyzed objects in theory. However, there is not enough data to establish any difference in practice, nor can it be determined whether the theoretical analysis and its results are correct, only that the difference exists in theory.

  • 50.
    Andrej, Sekáč
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance evaluation based on data from code reviews, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Modern code review tools such as Gerrit have made great amounts of code review data available, from open source projects as well as commercial projects. Code reviews are used to keep the quality of the produced source code under control, but the stored data could also be used to evaluate the software development process.

    Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to the limited size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on performance. We also propose features and analyse their relevance to the development performance evaluation task.

    Methods. This thesis uses Radial Basis Function networks as the regression algorithm for approximating the performance evaluation function, and Metric Based Regularisation as the semi-supervised learning method. For the analysis of the feature set and the goodness of fit we use statistical tools together with manual analysis.

    Results. The semi-supervised learning method achieved accuracy similar to the supervised versions of the algorithm. The feature analysis showed a significant negative correlation between the performance evaluation and three other features. A manual verification of the learned models on unlabelled data achieved 73.68 % accuracy.

    Conclusions. We did not manage to show that the semi-supervised learning method performs better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to change size, and the number of code lines modified in later parts of development are, with high probability, relevant to the performance evaluation task. The achieved model accuracy, close to 75 %, leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in approximating the performance evaluation.
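
    As a hedged sketch of the regression algorithm named above, the snippet below fits a small Radial Basis Function network by regularised least squares; the data, centre selection and hyperparameters are toy placeholders, not the thesis's actual setup.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(40, 1))           # toy review features
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)   # toy performance scores

    def rbf_design(X, centers, gamma=1.0):
        # Pairwise squared distances -> Gaussian basis activations.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)

    centers = X[rng.choice(len(X), size=10, replace=False)]
    Phi = rbf_design(X, centers)
    lam = 1e-3  # ridge regularisation strength
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

    X_new = np.array([[0.5]])
    print(rbf_design(X_new, centers) @ w)  # predicted score for a new data point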
