201 - 250 of 593
  • 201.
    Graziotin, Daniel
    et al.
    Universität Stuttgart, DEU.
    Fagerholm, Fabian
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wang, Xiaofeng
    Free University of Bozen-Bolzano, ITA.
    Abrahamsson, Pekka
    Jyväskylän Yliopisto, FIN.
    What happens when software developers are (un)happy. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 140, p. 32-47. Article in journal (Refereed)
    Abstract [en]

    The growing literature on affect among software developers mostly reports on the linkage between happiness, software quality, and developer productivity. Understanding happiness and unhappiness in all its components – positive and negative emotions and moods – is an attractive and important endeavor. Scholars in industrial and organizational psychology have suggested that understanding happiness and unhappiness could lead to cost-effective ways of enhancing working conditions and job performance, and of limiting the occurrence of psychological disorders. Our comprehension of the consequences of (un)happiness among developers is still too shallow, being mainly expressed in terms of development productivity and software quality. In this paper, we study what happens when developers are happy and unhappy while developing software. Qualitative data analysis of responses given by 317 questionnaire participants identified 42 consequences of unhappiness and 32 of happiness. We found consequences of happiness and unhappiness that are beneficial and detrimental for developers’ mental well-being, the software development process, and the produced artifacts. Our classification scheme, available as open data, enables new happiness research opportunities of the cause-effect type, and it can act as a guideline for practitioners for identifying damaging effects of unhappiness and for fostering happiness on the job. © 2018

  • 202.
    Greiff, Magnus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Johansson, André
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Symfony vs Express: A Server-Side Framework Comparison. 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Context. Considering the rising popularity of Node.js and the fact that a very large percentage of websites today are based on PHP, there is a need to understand the similarities and differences between these languages. Comparing their most popular server-side frameworks is valuable to developers in seeing the advantages of using one over the other - for both user and developer.

    Objectives. In this study we investigate how Express.js and Symfony compare against each other in terms of installation, functionality and performance. This will provide understanding of when to use JavaScript frameworks and when to use PHP frameworks for server-side projects.

    Method. A literature study was done to answer what similarities and differences exist between the frameworks. To explore how they compare in performance when multiple users are actively sending requests to the server, an experiment was performed. Another experiment was carried out to measure performance in CPU-intensive applications.

    Results. The results show that both frameworks are quick to install and it is a fast process to set up a basic application. Both frameworks are highly customizable and configurable because they are supported by a big open source community; the only difference is that Express supports single-page applications, which Symfony cannot do on its own. Express was better than Symfony at handling multiple concurrent users in terms of CPU usage and the time taken for the requests. For 100 and 1000 requests, Express CPU usage varied more than Symfony's, but at 10000 and 100000 it varied less. In all tests with concurrent users, Express was faster. Tests performed in the second experiment showed that Symfony is only able to use 1 core when making the requests while Express is able to use multiple cores. Even though Symfony was limited to 1 core it was faster, most likely because it used more memory.

    Conclusions. This study shows that there are more similarities than differences between Express and Symfony. They both strive for high customization and high flexibility with a goal to make tedious tasks easier for the developer. Both rely on open source modules and components to add additional functionality. Out of the box, Express comes with less functionality as it strives to be minimalistic. However, installing Symfony is slightly quicker than installing Express and requires no code. There are currently more daily downloads of Express than of Symfony, and it is therefore considered more popular. Express supports JavaScript-only development for front- and back-end and is able to handle more concurrent users than Symfony, and is therefore better for high-traffic websites. Symfony, however, handles CPU-intensive applications better than Express and loads large data sets faster, making it a good choice for applications with a lot of data and high CPU usage.

  • 203.
    Gren, Lucas
    et al.
    Chalmers University of Technology, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is it possible to disregard obsolete requirements?: An initial experiment on a potentially new bias in software effort estimation. 2017. In: Proceedings - 2017 IEEE/ACM 10th International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 56-61. Conference paper (Refereed)
    Abstract [en]

    Effort estimation is a complex area in decision-making, and is influenced by a diversity of factors that could increase the estimation error. The effects on effort estimation accuracy of having obsolete requirements in specifications have not yet been studied. This study aims at filling that gap. A total of 150 students were asked to provide effort estimates for different amounts of requirements, and one group was explicitly told to disregard some of the given requirements. The results show that even the extra text instructing participants to exclude requirements from the estimation task led the subjects to give higher estimates. The effect of having obsolete requirements in requirements specifications and backlogs is not taken into account enough in software effort estimation today, and this study provides empirical evidence that it possibly should be. We also suggest different psychological explanations for the observed effect. © 2017 IEEE.

  • 204.
    Gren, Lucas
    et al.
    Chalmers, SWE.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Group development and group maturity when building agile teams: A qualitative and quantitative investigation at eight large companies. 2017. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 124, p. 104-119. Article in journal (Refereed)
    Abstract [en]

    The agile approach to projects focuses more on close-knit teams than traditional waterfall projects do, which means that aspects of group maturity become even more important. This psychological aspect is not much researched in connection with the building of an “agile team.” The purpose of this study is to investigate how building agile teams is connected to a group development model taken from social psychology. We conducted ten semi-structured interviews with coaches, Scrum Masters, and managers responsible for the agile process from seven different companies, and collected survey data from 66 group members from four companies (a total of eight different companies). The survey included an agile measurement tool and one part of the Group Development Questionnaire. The results show that the practitioners define group developmental aspects as key factors in a successful agile transition. Also, the quantitative measurement of agility was significantly correlated to the group maturity measurement. We conclude that adding these psychological aspects to the description of the “agile team” could increase the understanding of agility and partly help define an “agile team.” We propose that future work should develop specific guidelines for how software development teams at different maturity levels might adopt agile principles and practices differently.

  • 205. Gren, Lucas
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Group Maturity and Agility, Are They Connected?: A Survey Study. 2015. In: Proceedings of the 41st EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2015, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    The focus on psychology has increased within software engineering due to the project management innovation "agile development processes". The agile methods do not explicitly consider group development aspects; they simply assume what is described in group psychology as mature groups. This study was conducted with 45 employees and their twelve managers (N=57) from two SAP customers in the US that were working with agile methods, and the data were collected via an online survey. The selected Agility measurement was correlated to a Group Development measurement and showed significant convergent validity, i.e., a more mature team is also a more agile team. This means that the agile methods would probably benefit from taking group development into account when their practices are being introduced.

  • 206. Gren, Lucas
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Work motivational challenges regarding the interface between agile teams and a non-agile surrounding organization: A case study. 2014. In: 2014 AGILE CONFERENCE (AGILE), IEEE Press, 2014, p. 11-15. Conference paper (Refereed)
  • 207.
    Gustafsson, Jimmy
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evolving Neuromodulatory Topologies for Plasticity in Video Game Playing. 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In the last decades, neural networks have become more common in video games. Neuroevolution helps us generate optimal network topologies for specific tasks, but there are still unexplored areas of neuroevolution, and ways of improving the performance of neural networks, which we could utilize for video game playing. The aim of this thesis is to find a suitable fitness evaluation and improve the plasticity of evolved neural networks, as well as to compare the performance and general video game playing abilities of established neuroevolution methods. Using Analog Genetic Encoding we implement evolving neuromodulatory topologies in a typical two-dimensional platformer video game, and have it play against itself without neuromodulation, and against a popular genetic algorithm known as Neuroevolution of Augmenting Topologies. A suitable input and output structure is developed, as well as an appropriate fitness evaluation for properly mating and mutating a population of neural networks. The benefits of neuromodulation are tested by running and completing a number of tile-based platformer video game levels. The results show increased performance in networks influenced by neuromodulators, but no general video game playing abilities are obtained. This shows us that a more advanced general gameplay learning method with higher influence is required.

  • 208.
    Haghverdian, Pol
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Olsson, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Identification of cloud service use-cases and quality aspects: end-user perspective: Learnability, Operability and Security quality attributes and their corresponding use cases. 2016. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. With the entry of smart-phones on the market in the beginning of 2007 came the integration of an mp3 player, camera and GPS into an all-in-one device. As this integration was realized, creating and storing one's own content became easier. The need for more storage therefore became a problem, as the smart-phones were limited in capacity. The 3G network was on the rise, and cloud solutions could help address the storage problems users had started to have.

    Objectives. In this study we will evaluate what can be done with use cases in terms of quality attributes, seen from a user's perspective, by having users rank use cases for cloud services. With further investigation we will make a contribution regarding what the differences between public and personal clouds are.

    Methods. Use-cases were found through the conducted empirical study and were based on a systematic mapping review. In this review, a number of article sources were used, including Google search, BTH Summon and Google Scholar. Studies were selected after reading the articles and checking whether the papers matched our defined inclusion criteria. We also designed a survey with a variable number of questions depending on what the participant would answer. The questions were phrased in terms of functionality interpreted from the use-cases found in the SLM.

    Results. Through our SLM we found six different use-cases, which were Recovery, Collaborative working, Password protection, Backup, Version tracking and Media streaming. The identified quality attributes gave two or more mappings to their corresponding use-case. As for the comparison between different clouds, only two out of six use-cases were implemented for the Personal cloud.

    Conclusions. This led us to the conclusion that the vendors have mostly been focusing on the storage part of the Personal cloud, but there are solutions for increasing the functionality. Those solutions will probably not fit everyone, as they involve open source software and require the user to have the skills to handle installation and other procedures.

  • 209.
    Hansson, Karl
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hernvall, Mikael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Performance and Perceived Realism in Rasterized 3D Sound Propagation for Interactive Virtual Environments. 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. 3D sound propagation is important for immersion and realism in interactive and dynamic virtual environments. However, this is difficult to model in a physically accurate manner under real-time constraints. Computer graphics techniques are used in acoustics research to increase performance, yet there is little utilization of the especially efficient rasterization techniques, possibly due to concerns about physical accuracy. Fortunately, psychoacoustics has shown that perceived realism is not equivalent to physical accuracy. This indicates that perceptually realistic and high-performance 3D sound propagation may be achievable with rasterization techniques.

    Objectives. This thesis investigates whether 3D sound propagation can be modelled with high performance and perceived realism using rasterization-based techniques.

    Methods. A rasterization-based solution for 3D sound propagation is implemented. Its perceived realism is measured using psychoacoustic evaluations. Its performance is analyzed through computation time measurements with varying sound source and triangle count, and theoretical calculations of memory consumption. The performance and perceived realism of the rasterization-based solution is compared with an existing solution.

    Results. The rasterization-based solution shows both higher performance and perceived realism than the existing solution.

    Conclusions. 3D sound propagation can be modelled with high performance and perceived realism using rasterization-based techniques. Thus, rasterized 3D sound propagation may provide efficient, low-cost, perceptually realistic 3D audio for areas where immersion and perceptual realism are important, such as video games, serious games, live entertainment events, architectural design, art production and training simulations.

  • 210.
    Hassan, Ali
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Markowicz, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Real-time snow simulation with compression and thermodynamics. 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background: Snow simulation can be used to enhance the visual experience in applications such as games. Previously, snow has been simulated in real time through two-dimensional grid-based methods, which are limited when it comes to dynamic interactions. To widen the scope of what games current game engines can produce, an approach to simulating the behavior of snow with non-recoverable compression and phase transition is proposed.

    Objective: The objective of this thesis is to construct a particle simulation model to simulate the behaviors of snow in regards to compression and phase transition in real-time. The solution is limited to the behavior of deposited snow, and will therefore not consider the aspect of snowfall and realistic visualization.

    Method: The method consists of a particle simulation with incorporated functionality of compression and thermodynamics. Test cases based on compression, phase transition and performance have been conducted.

    Results and Conclusions: The results show that the model captures phase transition between the phases of snow, ice, and water. Compression by external forces and self-weight is also captured, but with missing behavior in terms of bond creation between grains. Performance measurements indicate that the simulation is applicable for real-time applications. It is concluded that the approach is a good stepping stone for future improvements of snow simulation.

  • 211.
    Heidari, Ramin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Android Elastic Service Execution and Evaluation. 2013. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context. Mobile devices have recently attained huge popularity in people's lives. During recent years, there have been many attempts to propose approaches for delegating and executing the computation-intensive parts of mobile applications on more powerful remote servers, due to the shortage of resources on mobile devices. However, there are still research challenges in this area regarding the models and principles that govern the circumstances of executing a part of a mobile application remotely on a server, along with the effects of such execution on the smartphone's resources.

    Objectives. The aim of this research is to propose a model for executing the service component of an Android application on a remote server. The study exploits an enhancement of Android operating system functionality to execute service components on a remote, more powerful machine. It reports the model as well as the enhancements made to achieve this purpose. Additionally, an experiment is conducted to find out what factors determine whether to execute a computation locally on the mobile device or offload it to be executed on a remote machine.

    Methods. Two research methodologies have been used in performing this research: a case study and a controlled experiment. In the case study we investigate the feasibility of enhancing the functionality of the Android operating system to run service components of Android applications on a remote server. We propose a new model for this purpose and motivate it using several different resources, such as journal and conference papers and the Android developer site. A prototype of the model is implemented in order to be put to use in the next part of our study. Second, a controlled experiment is conducted on the resulting prototype of the case study to explore the principles that govern executing the service component of an Android application on a remote, more powerful machine and the effect of this execution on the mobile device's resources.

    Results. A model for executing the service component of an Android application on a powerful remote server is proposed, and a prototype is implemented according to the model. The effects of executing Android service components on a remote machine on the energy consumption as well as the performance of a smartphone are investigated. Moreover, we examine when it would be beneficial to offload an intensive computation in order to execute it on the remote server.

    Conclusions. We conclude that it is feasible to enhance the Android OS to execute the service component of an Android application on a remote server. We also conclude that there is a strong correlation between the amount of payload and the amount of computation that needs to be executed on a remote server. Basically, offloading the computation is beneficial when there is a large amount of computation with a small amount of communication and payload. Furthermore, we conclude that the execution time for the intensive computations increases drastically when they are executed on the server, but for smaller amounts of computation the performance is better when the execution is on the smartphone. Besides that, we note that the energy consumption on the smartphone grows gradually once the payload exceeds a particular size.

  • 212.
    Holmersson, Marcus
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Web applications as a common tool: Template applications and guidelines for none software developers. 2019. Independent thesis Basic level (university diploma), 80 credits / 120 HE credits. Student thesis
  • 213. Holt, Nina Elisabeth
    et al.
    Briand, Lionel
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Empirical evaluations on the cost-effectiveness of state-based testing: An industrial case study. 2014. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 8, p. 890-910. Article in journal (Refereed)
    Abstract [en]

    Context. Test models describe the expected behavior of the software under test and provide the basis for test case and oracle generation. When test models are expressed as UML state machines, this is typically referred to as state-based testing (SBT). Despite the importance of being systematic while testing, all testing activities are limited by resource constraints. Thus, reducing the cost of testing while ensuring sufficient fault detection is a common goal in software development. No rigorous industrial case studies of SBT have yet been published.

    Objective. In this paper, we evaluate the cost-effectiveness of SBT on actual control software by studying the combined influence of four testing aspects: coverage criterion, test oracle, test model and unspecified behavior (sneak paths).

    Method. An industrial case study was used to investigate the cost-effectiveness of SBT. To enable the evaluation of SBT techniques, a model-based testing tool was configured and used to automatically generate test suites. The test suites were evaluated using 26 real faults collected in a field study.

    Results. Results show that the more detailed and rigorous the test model and oracle, the higher the fault-detection ability of SBT. A less precise oracle achieved 67% fault detection, but the overall cost reduction of 13% was not enough to make the loss an acceptable trade-off. Removing details from the test model significantly reduced the cost by 85%. Interestingly, only a 24–37% reduction in fault detection was observed. Testing for sneak paths killed the remaining eleven mutants that could not be killed by the conformance test strategies.

    Conclusions. Each of the studied testing aspects influences cost-effectiveness and must be carefully considered in context when selecting strategies. Regardless of these choices, sneak-path testing is a necessary step in SBT since sneak paths are common while also undetectable by conformance testing.

  • 214.
    Huang, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Load time optimization of JavaScript web applications. 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Background. Websites are getting larger in size each year; the median size increased from 1479.6 kilobytes to 1699.0 kilobytes on desktop and from 1334.3 kilobytes to 1524.1 kilobytes on mobile. There are several methods that can be used to decrease the size. In my experiment I use the methods tree shaking, code splitting, gzip, bundling, and minification.

    Objectives. I will investigate how using the methods separately affects the loading time and conduct a survey targeted at participants who work as JavaScript developers in the field.

    Methods. I used Vue to create a website and ran Lighthouse tests against it, all within two Docker containers to make reproduction easier. Interviews with JavaScript developers were conducted to find out whether they use these methods in their work.

    Results. The best result is obtained by using all of the methods (gzip, minification, tree shaking, code splitting, and bundling) in combination. If gzip is the only option available to the developer, we can see around a 60% decrease in loading time. The interviews showed that most developers did not use or did not know of tree shaking and code splitting. Some frameworks have these methods built in to work automatically; therefore the developers do not know that they are being utilized.

    Conclusions. Since tree shaking and code splitting are two relatively new techniques, there are not many scientifically measured values available. From the results, we can conclude that using all of the mentioned methods gives the best result in loading time. All of the methods affect the loading time, and using gzip alone decreases it by around 60%.

  • 215.
    Hultstrand, Sebastian
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Olofsson, Robin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Git - CLI or GUI: Which is most widely used and why? 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Many of us have encountered the discussion about which interface is better for working with Git, the command line or a graphical one. This thesis is an attempt to find out which user interface new Git users prefer and which user interface experienced Git users prefer.

    We aimed to find out if there is anything significant to be gained from using either of the interfaces in comparison to the other. Lastly, we looked at which factors influence Git users' choice of user interface, and how.

    We have collected data through three interviews and a survey, which yielded approximately 370 responses. Based on our results, we have found that the command-line interface is, in general, the more popular user interface amongst Git users. We have also found that most users stop using graphical user interfaces as their primary user interface as they get more experience with Git. They usually change their primary user interface to a command-line interface, or start using both the graphical user interface and the command-line interface together. The results from our study regarding why are presented in this thesis.

  • 216.
    Hyrynsalmi, Sami
    et al.
    Tampere University of Technology, FIN.
    Klotins, Eriks
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tripathi, Nirnaya
    University of Oulu, FIN.
    Pompermaier, Leandro Bento
    PUCRS—Pontifical Catholic University of Rio Grande do Sul, BRA.
    Prikladnicki, Rafael
    PUCRS—Pontifical Catholic University of Rio Grande do Sul, BRA.
    What is a minimum viable (video) game?: Towards a research agenda. 2018. In: Lect. Notes Comput. Sci., Springer Verlag, 2018, Vol. 11195, p. 217-231. Conference paper (Refereed)
    Abstract [en]

    The concept of ‘Minimum Viable Product’ (MVP) is largely adopted in the software industry as well as in academia. Minimum viable products are used to test hypotheses regarding the target audience, save resources from unnecessary development work and guide a company towards a stable business model. As the game industry is becoming an important business domain, it is no surprise that the concept has also been adopted in game development. This study surveys how a Minimum Viable Game (MVG) is defined and what is reported in the extant literature, and presents results from a small case study survey of nine game development companies. The study shows that despite the popularity of minimum viable games in industrial fora, the presented views on the concept diverge and there is a lack of practical guidelines and research supporting game companies. This study points out research gaps in the area and calls for action to further develop the concept and to define guidelines. © IFIP International Federation for Information Processing 2018.

  • 217.
    Hörnlund, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Schenström, Rasmus
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Indoor Location Surveillance: Utilizing Wi-Fi and Bluetooth Signals. 2019. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Personal information has nowadays become valuable to many stakeholders. We want to find out how much information someone can gather from our daily devices, such as a smartphone, using some budget devices together with some programming knowledge. Can we gather enough information to be able to determine the location of a target device? The main objectives of our bachelor thesis are to determine the accuracy of positioning for nearby personal devices using trilateration of short-distance communications (Wi-Fi vs Bluetooth), but also how much and what information our devices leak without our knowledge, with respect to personal integrity. We collected Wi-Fi and Bluetooth data from four target devices in total. Two different experiments were executed, a calibration experiment and a visualization experiment. The data were collected by capturing the Wi-Fi and Bluetooth Received Signal Strength Indication (RSSI) transmitted wirelessly from the target devices. We then apply a method called trilateration to be able to pinpoint a target to a location. In theory, Bluetooth signals are twice as accurate as Wi-Fi signals. In practice, we were able to locate a target device with an accuracy of 5 - 10 meters. Bluetooth signals are stable but have a long response time, while Wi-Fi signals have a short response time but high fluctuation in the RSSI values. The idea itself, being able to determine a handheld device's position, is not impossible, as can be seen from our results. It may, though, require more powerful hardware to secure an acceptable accuracy. On the other hand, achieving this kind of result with hardware as cheap as Raspberry Pis is truly amazing.
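    A minimal sketch of the trilateration idea mentioned in this abstract (not the thesis's implementation; the path-loss constants and receiver positions below are assumed values used only for illustration). Distances are estimated from RSSI with a log-distance path-loss model, and the three resulting circles are intersected by linearizing them pairwise:

```typescript
// Hypothetical illustration; constants and positions are assumptions.
// Log-distance path-loss model: txPower is the RSSI (dBm) at 1 m,
// n is the environment exponent (roughly 2-4 indoors).
function rssiToDistance(rssi: number, txPower = -59, n = 2.0): number {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

interface Beacon { x: number; y: number; d: number; } // receiver position and estimated distance (m)

// Trilateration: subtract the circle equations (x-xi)^2 + (y-yi)^2 = di^2
// pairwise to get two linear equations, then solve the 2x2 system.
function trilaterate(b1: Beacon, b2: Beacon, b3: Beacon): { x: number; y: number } {
  const A = 2 * (b2.x - b1.x), B = 2 * (b2.y - b1.y);
  const C = b1.d ** 2 - b2.d ** 2 - b1.x ** 2 + b2.x ** 2 - b1.y ** 2 + b2.y ** 2;
  const D = 2 * (b3.x - b2.x), E = 2 * (b3.y - b2.y);
  const F = b2.d ** 2 - b3.d ** 2 - b2.x ** 2 + b3.x ** 2 - b2.y ** 2 + b3.y ** 2;
  const det = A * E - B * D; // non-zero as long as receivers are not collinear
  return { x: (C * E - B * F) / det, y: (A * F - C * D) / det };
}

// Example: three receivers at known positions and RSSI readings from a target.
const pos = trilaterate(
  { x: 0, y: 0, d: rssiToDistance(-65) },
  { x: 10, y: 0, d: rssiToDistance(-70) },
  { x: 0, y: 10, d: rssiToDistance(-72) },
);
console.log(pos);
```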

  • 218. Ickin, Selim
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Why do users install and delete apps?: A survey study. 2017. In: Lecture Notes in Business Information Processing, Springer Verlag, 2017, Vol. 304, p. 186-191. Conference paper (Refereed)
    Abstract [en]

    Practitioners in the area of mobile application development usually rely on a set of app-related success factors, the majority of which are directly related to their economic/business profit (e.g., number of downloads, or in-app purchase revenue). However, also gathering the user-related success factors that explain why users choose, download, and install apps, as well as the user-related failure factors that explain why users delete apps, might help practitioners understand how to improve the market impact of their apps. The objectives were to identify (i) the reasons why users choose and install mobile apps from app stores, and (ii) the reasons why users uninstall the apps. A questionnaire-based survey involving 121 users from 26 different countries was conducted. © Springer International Publishing AG 2017.

  • 219. Ihantola, Petri
    et al.
    Vihavainen, Arto
    Ahadi, Alireza
    Butler, Matthew
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Edwards, Stephen H.
    Isohanni, Essi
    Korhonen, Ari
    Petersen, Andrew
    Rivers, Kelly
    Rubio, Miguel Ángel
    Sheard, Judy
    Skupas, Bronius
    Spacco, Jaime
    Szabo, Claudia
    Toll, Daniel
    Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies. 2016. In: Proceedings of the 2015 ITiCSE on Working Group Reports, ACM Digital Library, 2016, p. 41-63. Conference paper (Refereed)
    Abstract [en]

    Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005-2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.

  • 220.
    Ilyas, Bilal
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Elkhalifa, Islam
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Static Code Analysis: A Systematic Literature Review and an Industrial Survey. 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Static code analysis is a software verification technique that refers to the process of examining code without executing it in order to capture defects early, avoiding costly fixes later. The lack of realistic empirical evaluations in software engineering has been identified as a major issue limiting the ability of research to impact industry and, in turn, preventing feedback from industry that can improve, guide and orient research. Studies have emphasized rigor and relevance as important criteria to assess the quality and realism of research. Rigor defines how adequately a study has been carried out and reported, while relevance defines the potential impact of the study on industry. Despite the importance of static code analysis techniques and their existence for more than three decades, the empirical evaluations in this field are few in number and do not take rigor and relevance into consideration.

    Objectives: The aim of this study is to contribute toward bridging the gap between static code analysis research and industry by improving the ability of research to impact industry and vice versa. This study has two main objectives. The first is developing guidelines for researchers, which involves exploring the existing work in static code analysis research to identify the current status, shortcomings, rigor and industrial relevance of the research, and the reported benefits/limitations of different static code analysis techniques, and, finally, giving recommendations to researchers to help make future research more industrially oriented. The second is developing guidelines for practitioners, which involves investigating the adoption of different static code analysis techniques in industry and identifying the benefits/limitations of these techniques as perceived by industrial professionals, then cross-analyzing the findings of the SLR and the survey to draw final conclusions, and, finally, giving recommendations to professionals to help them decide which techniques to adopt.

    Methods: A sequential exploratory strategy, characterized by the collection and analysis of qualitative data (systematic literature review) followed by the collection and analysis of quantitative data (survey), has been used to conduct this research. In order to achieve the first objective, a thorough systematic literature review was conducted using the Kitchenham guidelines. To achieve the second objective, a questionnaire-based online survey was conducted, targeting professionals from the software industry in order to collect their responses regarding the usage of different static code analysis techniques, as well as their benefits and limitations. The quantitative data obtained were subjected to statistical analysis for further interpretation and to draw results from them.

    Results: In static code analysis research, inspection and static analysis tools received significantly more attention than the other techniques. The benefits and limitations of static code analysis techniques were extracted, and seven recurrent variables were used to report them. The existing research work in the static code analysis field significantly lacks rigor and relevance, and the reasons behind this have been identified. Some recommendations are developed outlining how to improve static code analysis research and make it more industrially oriented. From the industrial point of view, static analysis tools are widely used, followed by informal reviews, while inspections and walkthroughs are rarely used. The benefits and limitations of different static code analysis techniques, as perceived by industrial professionals, have been identified along with the influential factors.

    Conclusions: The SLR concluded that techniques with a formal, well-defined process and process elements have received more attention in research; however, this does not necessarily mean that such a technique is better than the other techniques. Experiments have been used widely as a research method in static code analysis research, but the outcome variables in the majority of the experiments are inconsistent. The use of experiments in an academic context contributed nothing to improving the relevance, while the inadequate reporting of validity threats and their mitigation strategies contributed significantly to the poor rigor of the research. The benefits and limitations of different static code analysis techniques identified by the SLR could not complement the survey findings, because the rigor and relevance of most of the studies reporting them were weak. The survey concluded that the adoption of static code analysis techniques in industry is influenced more by the software life-cycle models in practice in organizations, while software product type and company size do not have much influence. The amount of attention a static code analysis technique has received in research does not necessarily influence its adoption in industry, which indicates a wide gap between research and industry. However, company size, product type, and software life-cycle model do influence professionals' perception of the benefits and limitations of different static code analysis techniques.

  • 221.
    Irshad, Mohsin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Assessing Reusability in Automated Acceptance Tests. 2018. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Context: Automated acceptance tests have become a core practice of agile software development (e.g. Extreme Programming). These tests are closely tied to requirements specifications and provide a mechanism for continuous validation of software requirements. Software reuse has evolved with the introduction of each new reusable artefact (e.g., reuse of code, reuse of frameworks, tools etc.). In this study, we have investigated the reusability of automated acceptance tests, keeping in view their close association with textual requirements.

    Objective: As automated acceptance tests are closely related to software requirements, we have used existing research in software engineering to identify reusability-related characteristics of software requirements and applied these characteristics to automated acceptance tests. This study attempts to address the following aspects: (i) what important reuse characteristics should be considered when measuring the reusability of automated acceptance tests? (ii) how can reusability be measured in automated acceptance tests? and (iii) how can the cost avoided through reuse of automated acceptance tests be calculated?

    Method: We have used a combination of research methods to answer the different aspects of our study. We started by identifying reusability-related characteristics of software requirements, with the help of a systematic literature review. Later, we tried to identify the reusability-related characteristics of defect reports, and the process is documented using an experience report. After identifying the characteristics from the previous two studies, we used these characteristics in two case studies conducted on Behaviour-Driven Development (BDD) test cases (i.e., acceptance tests of a textual nature). We proposed two approaches that can identify the reuse potential of automated acceptance tests and evaluated these approaches in industry. Later, to calculate the cost avoided through reuse, we proposed and evaluated a method that is applicable to any reusable artifact.

    Results: The results from the systematic literature review show that text-based requirements reuse approaches are most commonly used in industry. Structuring these text-based requirements and identifying the reusable requirements by matching are the two commonly used methods for enabling requirements reuse. The results from the second study, an industrial experience report, indicate that defect reports can be formulated using a template and that defect triage meetings can be used to identify important test cases related to defect reports. The results from these two studies, text-based requirements reuse approaches and template-based defect reports, were included when identifying approaches to measure the reuse potential of BDD test cases. The two proposed approaches for measuring reuse potential, Normalised Compression Distance (NCD) and Similarity Ratio, were evaluated in industry. The evaluation indicated that the Similarity Ratio approach performed better than the NCD approach; however, the results from both approaches were comparable with the results gathered with the help of expert analysis. The cost-related aspects of reusable acceptance tests were addressed and evaluated using a method that calculates the cost avoidance through reuse. The industrial evaluation of the method and guidelines shows that the method is artifact-independent.

    Conclusions: The evidence from this study shows that the automated acceptance tests are reusable, similar to text-based software requirements and their reuse potential can be calculated as well. The industrial evaluation of the three studies (i.e. approaches to measure reuse potential, calculation of cost avoidance and defect reports in triage meetings) shows that the overall results are applicable to the industry. However, further work is required to evaluate the reuse potential of automated acceptance tests in different contexts. 
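    A minimal sketch of the Normalised Compression Distance (NCD) idea referred to in the results above (not the thesis's implementation; the zlib-based compressor and the example scenarios are assumptions used only for illustration):

```typescript
// Hypothetical sketch of NCD-based similarity between two textual test cases.
// Uses Node's built-in zlib as the compressor C(.).
import { deflateSync } from "zlib";

const clen = (s: string): number => deflateSync(Buffer.from(s)).length;

// NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y));
// values near 0 indicate high similarity (reuse potential), near 1 low similarity.
function ncd(x: string, y: string): number {
  const cx = clen(x), cy = clen(y), cxy = clen(x + y);
  return (cxy - Math.min(cx, cy)) / Math.max(cx, cy);
}

// Example: two BDD-style scenarios with overlapping steps score closer to 0.
const a = "Given a logged-in user When the user opens the cart Then items are listed";
const b = "Given a logged-in user When the user opens the wishlist Then items are listed";
console.log(ncd(a, b).toFixed(2));
```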

  • 222.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review of software requirements reuse approaches. 2018. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 93, no Jan, p. 223-245. Article in journal (Refereed)
    Abstract [en]

    Context: Early software reuse is considered the most beneficial form of software reuse. Hence, previous research has focused on supporting the reuse of software requirements. Objective: This study aims to identify and investigate the current state of the art with respect to (a) what requirements reuse approaches have been proposed, (b) the methods used to evaluate the approaches, (c) the characteristics of the approaches, and (d) the quality of empirical studies on requirements reuse with respect to rigor and relevance. Method: We conducted a systematic review, and a combination of snowball sampling and database search has been used to identify the studies. The rigor and relevance scoring rubric has been used to assess the quality of the empirical studies. Multiple researchers have been involved in each step to increase the reliability of the study. Results: Sixty-nine studies were identified that describe requirements reuse approaches. The majority of the approaches used structuring and matching of requirements as a method to support requirements reuse, and text-based artefacts were commonly used as input to these approaches. Further evaluation of the studies revealed that the majority of the approaches are not validated in industry. The subset of empirical studies (22 in total) was analyzed for rigor and relevance, and two studies achieved the maximum score for rigor and relevance based on the rubric. It was found that mostly text-based requirements reuse approaches were validated in industry. Conclusion: From the review, it was found that a number of approaches already exist in the literature, but many approaches are not validated in industry. The evaluation of rigor and relevance of the empirical studies shows that these do not contain details of context, validity threats, and the industrial settings, thus highlighting the need for industrial evaluation of the approaches. © 2017 Elsevier B.V.

  • 223.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Afzal, Wasif
    Capturing cost avoidance through reuse: Systematic literature review and industrial evaluation. 2016. In: ACM International Conference Proceeding Series, ACM Press, 2016, Vol. 01-03-June-2016. Conference paper (Refereed)
    Abstract [en]

    Background: Cost avoidance through reuse shows the benefits gained by software organisations when reusing an artefact. Cost avoidance captures benefits that are not captured by cost savings, e.g. spending that would have increased in the absence of the cost avoidance activity. This type of benefit can be combined with quality aspects of the product, e.g. costs avoided because of defect prevention. Cost avoidance is a key driver for software reuse. Objectives: The main objectives of this study are: (1) to assess the status of capturing cost avoidance through reuse in academia; (2) based on the first objective, to propose improvements in the capturing of reuse cost avoidance, integrate these into an instrument, and evaluate the instrument in the software industry. Method: The study starts with a systematic literature review (SLR) on the capturing of cost avoidance through reuse. Later, a solution is proposed and evaluated in industry to address the shortcomings identified during the systematic literature review. Results: The results of the systematic literature review describe three previous studies on reuse cost avoidance and show that no solution to capture reuse cost avoidance was validated in industry. Afterwards, an instrument and a data collection form are proposed that can be used to capture the cost avoided by reusing any type of reuse artefact. The instrument and data collection form (describing guidelines) were demonstrated to a focus group as part of a static evaluation. Based on the feedback, the instrument was updated and evaluated in industry at 6 development sites, in 3 different countries, covering 24 projects in total. Conclusion: The proposed solution performed well in the industrial evaluation. With this solution, practitioners were able to calculate reuse cost avoidance and use the results as decision support for identifying potential artefacts to reuse.

  • 224.
    Isaksson, Conny
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Elmgren, Gustav
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A ticket to blockchains. 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
  • 225.
    Jabangwe, Ronald
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Quality Evaluation for Evolving Systems in Distributed Development Environments. 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Context: There is an overwhelming prevalence of companies developing software in global software development (GSD) contexts. The existing body of knowledge, however, falls short of providing comprehensive empirical evidence on the implication of GSD contexts on software quality for evolving software systems. Therefore there is limited evidence to support practitioners that need to make informed decisions about ongoing or future GSD projects. Objective: This thesis work seeks to explore changes in quality, as well as to gather confounding factors that influence quality, for software systems that evolve in GSD contexts. Method: The research work in this thesis includes empirical work that was performed through exploratory case studies. This involved analysis of quantitative data consisting of defects as an indicator for quality, and measures that capture software evolution, and qualitative data from company documentations, interviews, focus group meetings, and questionnaires. An extensive literature review was also performed to gather information that was used to support the empirical investigations. Results: Offshoring software development work, to a location that has employees with limited or no prior experience with the software product, as observed in software transfers, can have a negative impact on quality. Engaging in long periods of distributed development with an offshore site and eventually handing over all responsibilities to the offshore site can be an alternative to software transfers. This approach can alleviate a negative effect on quality. Finally, the studies highlight the importance of taking into account the GSD context when investigating quality for software that is developed in globally distributed environments. This helps with making valid inferences about the development settings in GSD projects in relation to quality. Conclusion: The empirical work presented in this thesis can be useful input for practitioners that are planning to develop software in globally distributed environments. For example, the insights on confounding factors or mitigation practices that are linked to quality in the empirical studies can be used as input to support decision-making processes when planning similar GSD projects. Consequently, lessons learned from the empirical investigations were used to formulate a method, GSD-QuID, for investigating quality using defects for evolving systems. The method is expected to help researchers avoid making incorrect inferences about the implications of GSD contexts on quality for evolving software systems, when using defects as a quality indicator. This in turn will benefit practitioners that need the information to make informed decisions for software that is developed in similar circumstances.

  • 226.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Handover of managerial responsibilities in global software development: a case study of source code evolution and quality. 2015. In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 23, no 4, p. 539-566. Article in journal (Refereed)
    Abstract [en]

    Studies report on the negative effect on quality in global software development (GSD) due to communication and coordination-related challenges. However, empirical studies reporting on the magnitude of the effect are scarce. This paper presents findings from an embedded explanatory case study on the change in quality over time, across multiple releases, for products that were developed in a GSD setting. The GSD setting involved periods of distributed development between geographically dispersed sites as well as a handover of project management responsibilities between the involved sites. Investigations were performed on two medium-sized products from a company that is part of a large multinational corporation. Quality is investigated quantitatively using defect data and measures that quantify two source code properties, size and complexity. Observations were triangulated with subjective views from company representatives. There were no observable indications that the distribution of work or handover of project management responsibilities had an impact on quality on both products. Among the product-, process- and people-related success factors, we identified well-designed product architectures, early handover planning and support from the sending site to the receiving site after the handover and skilled employees at the involved sites. Overall, these results can be useful input for decision-makers who are considering distributing development work between globally dispersed sites or handing over project management responsibilities from one site to another. Moreover, our study shows that analyzing the evolution of size and complexity properties of a product’s source code can provide valuable information to support decision-making during similar projects. Finally, the strategy used by the company to relocate responsibilities can also be considered as an alternative for software transfers, which have been linked with a decline in efficiency, productivity and quality.

  • 227.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A method for investigating the quality of evolving object-oriented software using defects in global software development projects2016In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 28, no 8, p. 622-641Article in journal (Refereed)
    Abstract [en]

    Context: Global software development (GSD) projects can have distributed teams that work independently in different locations or team members that are dispersed. The various development settings in GSD can influence quality during product evolution. When evaluating quality using defects as a proxy, the development settings have to be taken into consideration. Objective: The aim is to provide a systematic method for supporting investigations of the implication of GSD contexts on defect data as a proxy for quality. Method: A method engineering approach was used to incrementally develop the proposed method. This was done through applying the method in multiple industrial contexts and then using lessons learned to refine and improve the method after application. Results: A measurement instrument and visualization was proposed incorporating an understanding of the release history and understanding of GSD contexts. Conclusion: The method can help with making accurate inferences about development settings because it includes details on collecting and aggregating data at a level that matches the development setting in a GSD context and involves practitioners at various phases of the investigation. Finally, the information that is produced from following the method can help practitioners make informed decisions when planning to develop software in comparable circumstances. Copyright © 2016 John Wiley & Sons, Ltd.

  • 228.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Lero / Regulated Software Research Centre.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Hessbo, Emil
    Distributed Software Development in an Offshore Outsourcing Project: A Case Study of Source Code Evolution and Quality2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 72, p. 125-136Article in journal (Refereed)
    Abstract [en]

    Context: Offshore outsourcing collaborations can result in distributed development, which has been linked to quality-related concerns. However, there are few studies that focus on the implication of distributed development on quality, and they report inconsistent findings using different proxies for quality. Thus, there is a need for more studies, as well as to identify useful proxies for certain distributed contexts. The presented empirical study was performed in a context that involved offshore outsourcing vendors in a multisite distributed development setting.

    Objective: The aim of the study is to investigate how quality changes during evolution in a distributed development environment that incurs organizational changes in terms of the number of companies involved.

    Method: A case study approach is followed in the investigation. Only post-release defects are used as a proxy for external quality, due to unreliable pre-release defect data such as defects reported during integration. Focus group meetings were also held with practitioners.

    Results: The results suggest that practices that can be grouped into product, people, and process categories can help ensure post-release quality. However, post-release defects are insufficient for showing a conclusive impact of the development setting on quality. This is because the development teams worked independently as isolated distributed teams, and integration defects would better reflect the impact of the development setting on quality.

    Conclusions: The mitigation practices identified can be useful information to practitioners that are planning to engage in similar globally distributed development projects. Finally, it is important to take into consideration the arrangement of distributed development teams in global projects, and to use the context to identify appropriate proxies for quality in order to draw correct conclusions about the implications of the context. This would help with providing practitioners with well-founded findings about the impact on quality of globally distributed development settings.

  • 229.
    Jabbari, Ramtin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    Fraunhofer Institute for Experimental Software Engineering IESE, DEU.
    Towards a benefits dependency network for DevOps based on a systematic literature review2018In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 30, no 11, article id e1957Article in journal (Refereed)
    Abstract [en]

    DevOps as a new way of thinking for software development and operations has received much attention in the industry, while it has not been thoroughly investigated in academia yet. The objective of this study is to characterize DevOps by exploring its central components in terms of principles, practices and their relations to the principles, challenges of DevOps adoption, and benefits reported in the peer-reviewed literature. As a key objective, we also aim to realize the relations between DevOps practices and benefits in a systematic manner. A systematic literature review was conducted. Also, we used the concept of benefits dependency network to synthesize the findings, in particular, to specify dependencies between DevOps practices and link the practices to benefits. We found that in many cases, DevOps characteristics, i.e., principles, practices, benefits, and challenges, were not sufficiently defined in detail in the peer-reviewed literature. In addition, only a few empirical studies are available, which can be attributed to the nascency of DevOps research. Also, an initial version of the DevOps benefits dependency network has been derived. The definition of DevOps principles and practices should be emphasized given the novelty of the concept. Further empirical studies are needed to improve the benefits dependency network presented in this study. © 2018 John Wiley & Sons, Ltd.
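    A benefits dependency network links practices to the further practices and benefits they enable. The toy Python sketch below illustrates that idea as a small directed graph; the entries are illustrative assumptions, not the network derived in the paper.

```python
# Toy benefits dependency network: each node maps to the practices or
# benefits it enables (edges point from enabler to enabled).
network = {
    "continuous integration": ["continuous delivery"],
    "infrastructure as code": ["continuous delivery", "reproducible environments"],
    "continuous delivery": ["faster time to market", "more frequent releases"],
}

def reachable(node, graph, seen=None):
    """Collect everything that transitively depends on adopting `node`."""
    seen = set() if seen is None else seen
    for successor in graph.get(node, []):
        if successor not in seen:
            seen.add(successor)
            reachable(successor, graph, seen)
    return seen

# Which benefits does adopting continuous integration eventually enable?
print(reachable("continuous integration", network))
```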

  • 230.
    Jabbari, Ramtin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    What is DevOps?: A Systematic Mapping Study on Definitions and Practices2016Conference paper (Refereed)
  • 231.
    Jain, Aman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Aduri, Raghu ram
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Quality metrics in continuous delivery: A mixed approach2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Continuous delivery deals with the concept of deploying user stories as soon as they are finished rather than waiting for the sprint to end. This concept increases the chances of early improvement to the software and provides the customer with a clear view of the final product that is expected from the software organization, but little research has been done on the quality of the product developed and the ways to measure it. This research is conducted in the context of presenting a checklist of quality metrics that can be used by practitioners to ensure good-quality product delivery.

    Objectives. In this study, the authors strive towards the accomplishment of the following objectives: the first objective is to identify the quality metrics being used in agile approaches and continuous delivery by organizations. The second objective is to evaluate the usefulness and limitations of the identified metrics and to identify new metrics. The final objective is to present and evaluate a solution, i.e., a checklist of metrics that can be used by practitioners to ensure the quality of a product developed using continuous delivery.

    Methods. To accomplish the objectives, the authors used a mixture of approaches. First, a literature review was performed to identify the quality metrics being used in continuous delivery. Based on the data obtained from the literature review, the authors performed an online survey using a questionnaire posted on an online questionnaire hosting website. The online questionnaire was intended to find the usefulness of the identified metrics and the limitations of using them, and also to identify new metrics based on the responses obtained. The authors then conducted interviews comprising a few closed-ended and a few open-ended questions, which helped the authors to validate the usage of the metrics checklist.

    Results. Based on the literature review performed at the start of the study, the authors obtained data regarding the background of continuous delivery and research performed on continuous delivery by various practitioners, as well as a list of quality metrics used in continuous delivery. Later, the authors conducted an online survey using a questionnaire that resulted in a ranking of the usefulness of quality metrics and the identification of new metrics used in continuous delivery. Based on the data obtained from the online questionnaire, a checklist of quality metrics involved in continuous delivery was generated.

    Conclusions. Based on the interviews conducted to validate the checklist of metrics (generated as a result of the online questionnaire), the authors conclude that the checklist of metrics is fit for use in industry, provided some necessary changes are made to it based on the project requirements. The checklist will act as a reminder to practitioners regarding the quality aspects that need to be measured during product development, and may also serve as a starting point when planning the metrics that need to be measured during the project.

  • 232.
    Jalali, Samireh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Angelis, Lefteris
    Investigating the Applicability of Agility Assessment Surveys: A Case Study2014In: Journal of Systems and Software, ISSN 0164-1212 , Vol. 98, p. 172-190Article in journal (Refereed)
    Abstract [en]

    Context: Agile software development has become popular in the past decade despite not being a particularly well-defined concept. The general principles in the Agile Manifesto can be instantiated in many different ways, and hence the perception of Agility may differ quite a lot. This has resulted in several conceptual frameworks being presented in the research literature to evaluate the level of Agility. However, the evidence of actual use in practice of these frameworks is limited. Objective: The objective in this paper is to identify online surveys that can be used to evaluate the level of Agility in practice, and to evaluate the surveys in an industrial setting. Method: Surveys for evaluating Agility were identified by systematically searching the web. Based on an exploration of the surveys found, two surveys were identified as most promising for our objective. The two surveys selected were evaluated in a case study with three Agile teams in a software consultancy company. The case study included a self-assessment of the Agility level by using the two surveys, interviews with the Scrum master and a team representative, interviews with the customers of the teams and a focus group meeting for each team. Results: The perception of team Agility was judged by each of the teams and their respective customer, and the outcome was compared with the results from the two surveys. Agility profiles were created based on the surveys. Conclusions: It is concluded that different surveys may very well judge Agility differently, which supports the viewpoint that it is not a well-defined concept. The researchers and practitioners agreed that one of the surveys, at least in this specific case, provided a better and more holistic assessment of the Agility of the teams in the case study.

  • 233.
    Jasarevic, Mirza
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Impact of Weather Phenomena on Object Detection: Testing YOLOv3 In Traffic- And Weather Simulations2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: Object detection is gaining more influence in everyday life; various institutions and agencies are utilizing it to help streamline their day-to-day tasks. It helps process large quantities of data and requires fewer resources, hence making for a promising tool in the future. It still faces many baseline issues, such as weather conditions obstructing the shape or tone of an object and thereby causing misidentifications. This can be as harmless as a toy misidentifying a frown for a smile and reacting happily to your sorrow, or as harmful as a self-driving car misidentifying a dark car. If, for instance, the darker car’s lights were in disrepair, and it was placed under the shade of a few trees at a crossing, the first car might continue through when it should have slowed down in good time or even stopped to prevent an accident. The intent of this research is to delve into those aspects and scenarios where the weather and natural lighting outdoors can affect object detection in traffic from the perspective of a police vehicle’s camera. Evidence of law enforcement attempting implementation of the technology can be readily found on the internet even as far back as 2010, providing the right relevance for this study.

    Realization (Method): The research will be conducted using four common categories of objects encountered in everyday traffic: cars, people, motorbikes and trains. Each category will have three instances in the form of images in their relevant setting, on streets for instance, to represent them in the conducted tests. Each test consists of four filters: contrast, blur, noise and resizing. For each filter there will be 20 versions, i.e. every fifth degree will be an option to apply, and these will all be combined to make a total of 20^4 combinations for each image; then all combinations will be tested and detections will be registered.

    Objectives: The scope of objectives for this study was to find out which of the four categories was easiest to detect, which of the four filters was most disruptive, and to find out if there are any rules of thumb for what degrees of each filter could be considered a threshold beyond which detection is not guaranteed.

    Results: The results showed cars and people to be the easiest to detect, noise to be the most obstructive filter, and contrast to guarantee detection up to 30% of application from the original luminance. Blur and change of size were negligible in impact and thus did not matter, while noise was too complex to give a clear answer regarding the percentage of noise beyond which no further detections are made.

    Conclusions: What could be concluded from this study is that certain visual effects are harmless, that contrast and noise are predominant, and that more research into the disruption caused by noise should be done. Noise here means particles or specks of black or white color in the shape of pixels strewn across an image (i.e. “salt-and-pepper noise”) to simulate things such as snow, rain etc. Object detection has led to costly mistakes in the public sphere even very recently, so it needs more optimization. But it has many uses in many fields such as medicine, law enforcement, statistics and fire departments, and for broader, commercial use, models need more training.
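    A minimal Python sketch of the filter-and-combine procedure described in the method above: four filters (contrast, blur, salt-and-pepper noise, resizing) are applied in every combination of 20 strengths before a detector would be run. The file name, strength ranges and the commented-out detector call are illustrative assumptions, not the thesis setup.

```python
from itertools import product

import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def salt_and_pepper(img: Image.Image, amount: float) -> Image.Image:
    """Flip a fraction `amount` of pixels to pure black or white."""
    arr = np.array(img.convert("RGB"))
    mask = np.random.rand(*arr.shape[:2]) < amount
    arr[mask] = np.where(np.random.rand(int(mask.sum()), 1) < 0.5, 0, 255)
    return Image.fromarray(arr)

def apply_filters(img, contrast, blur, noise, scale):
    out = ImageEnhance.Contrast(img).enhance(contrast)   # contrast filter
    out = out.filter(ImageFilter.GaussianBlur(blur))     # blur filter
    out = salt_and_pepper(out, noise)                    # noise filter
    return out.resize((int(out.width * scale), int(out.height * scale)))

levels = [i / 20 for i in range(1, 21)]              # 20 strengths per filter -> 20**4 variants
image = Image.open("traffic_scene.jpg").convert("RGB")  # hypothetical input image
for c, b, n, s in product(levels, repeat=4):
    variant = apply_filters(image, 0.5 + c, b * 5, n * 0.2, 0.5 + s)
    # detections = yolo_v3.detect(variant)  # detector call omitted in this sketch
```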

  • 234.
    Jerkenhag, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparing machine learning methods for classification and generation of footprints of buildings from aerial imagery2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Up-to-date mapping data is of great importance in social services and disaster relief as well as in city planning. The vast amounts of data and the constant increase of geographical changes lead to large loads of continuous manual analysis. This thesis takes the process of updating maps and breaks it down to the problem of discovering buildings by comparing different machine learning methods to automate the finding of buildings. The chosen methods, YOLOv3 and Mask R-CNN, are based on Region Convolutional Neural Networks (R-CNN) due to their capabilities of image analysis in both speed and accuracy. The image data supplied by Lantmäteriet makes up the training and testing data; this data is then used by the chosen machine learning methods. The methods are trained at different time limits, the generated models are tested and the results analysed. The results lay the ground for whether the model is reasonable to use in a fully or partly automated system for updating mapping data from aerial imagery. The tested methods showed volatile results through their first hour of training, with YOLOv3 being more so than Mask R-CNN. After the first hour and until the eighth hour, YOLOv3 shows a higher level of accuracy compared to Mask R-CNN. For YOLOv3, it seems that with more training, the recall increases while precision decreases. For Mask R-CNN, however, there is some trade-off between the recall and precision throughout the eight hours of training. While there is a 90 % confidence interval that the accuracy of YOLOv3 is decreasing for each hour of training after the first hour, the Mask R-CNN method shows that its accuracy is increasing for every hour of training, however with low confidence, and can therefore not be scientifically relied upon. Due to differences in setups, the image size varies between the methods, even though they train and test on the same areas; this results in a fair evaluation where YOLOv3 analyses one square kilometre 1.5 times faster than the Mask R-CNN method does. Both methods show potential for automated generation of footprints; however, the YOLOv3 method solely generates bounding boxes, leaving the step of polygonization to manual work, while Mask R-CNN does, as the name implies, create a mask in which the object is encapsulated. This extra step is thought to further automate the manual process and, with viable results, speed up the updating of map data.
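    A hedged Python sketch of how detected building boxes could be matched against ground-truth footprints with an intersection-over-union (IoU) threshold to obtain precision and recall figures like those discussed above; it illustrates the metrics only and is not the evaluation code from the thesis.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, threshold=0.5):
    matched, tp = set(), 0
    for det in detections:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(det, gt) >= threshold:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Example: two detections, one of which overlaps a true building footprint.
print(precision_recall([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 11, 11)]))  # (0.5, 1.0)
```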

  • 235.
    Ji, Yuan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Zheng, Hengyuan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The Challenge for Practitioners to Adopt Requirement Prioritization Techniques in Practice2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: Requirements prioritization and its techniques remain an important research topic. However, industry adoption of these techniques still lacks research and faces many challenges. This topic also involves technology transfer.

    Objectives: The objective of this study is to find out what challenges practitioners face when adopting requirement prioritization techniques in practice.

    Methods: We use a literature review and two interview-based surveys. The literature review studies the requirement prioritization techniques in the literature. The first interview studies the status of the techniques used by practitioners, and the second interview studies the practitioners’ views on the techniques recommended in the literature as well as the adoption challenges. The interview data is mainly analyzed by thematic analysis.

    Results: The literature review presents the procedures of 49 requirement prioritization techniques found in the literature. The first interview round presents the technique procedures and other conditions of 11 practitioners. With the above two results, we identified the techniques recommended to these 11 interviewees and then conducted the second interview round to discover more of the interviewees’ views and the challenges of technique adoption, which are also compared with related work.

    Conclusions: Overall, there are many challenges for practitioners when adopting requirement prioritization techniques. As an independent subject, practitioners’ adoption of prioritization techniques still needs to be studied further: 1. Studying this subject needs to involve the scope of technology transfer; 2. Some challenges in requirement prioritization can also hamper practitioners’ technique adoption and should be alleviated separately.

  • 236.
    Jiang Axelsson, Bohui
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A LTE UPCUL architecture design combining Multi-Blackboards and Pipes & Filters architectures2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The single blackboard architecture is widely used in the LTE application area. Despite its several benefits, this architecture limits the synchronization possibilities of the developed systems and increases the signal operational latency. As a result, the DSP (Digital Signal Processing) utilization is suboptimal.

    Objectives. In this thesis, we design a new architecture, which combines concepts of the Multi-Blackboards and Pipes & Filters architectures, as a replacement for the current single blackboard architecture at Ericsson. The implementation of the new architecture makes the environment asynchronous. We evaluate the new architecture in a simulated environment at Ericsson with 222225 connection items from 9000 base stations all over the world. Each connection item has a complete UE session and one of several possible connection statuses, e.g. active, inactive, connected, DRX sleeping, postponed. These connection items can be from any country in the world.

    Methods. We design a new architecture for the UPCUL component of the LTE network based on an analysis of real network data from Ericsson. We perform a case study to develop and evaluate the new architecture at Ericsson.

    Results. We evaluate the new architecture by performing a case study at Ericsson. The results of the case study show that the new architecture not only increases DSP utilization by 35%, but also decreases signal operational latency by 53%, FO operation time by 20% and FO operation cycles by 20%. Also, the new architecture improves correctness performance.

    Conclusions. We conclude that the new architecture increases DSP utilization and decreases the signal operational latency, and therefore improves the performance of the UPCUL component of LTE. Due to time constraints, we only considered four LTE FOs (Function Objects) and related signals. Future work should focus mainly on the other FOs and signals. We also analyze the unconsidered FOs and provide an integration solution table which contains solutions for integrating these unconsidered FOs into the new architecture. The second avenue for future work is to resize the two blackboard storages. We find that the maximum memory size of needed UE sessions per sub-frame is only 1.305% of the memory size of all UE sessions (31650 bytes). So the memory size of the blackboard storage should be adjusted on the basis of needed UE sessions instead of all UE sessions.
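    A conceptual Python sketch of the combined architecture described above: processing stages are connected by queues (Pipes & Filters) and each stage keeps its shared state on its own blackboard rather than a single global one. Stage names and data items are illustrative assumptions, not Ericsson's UPCUL implementation.

```python
from queue import Queue
from threading import Thread

class Blackboard:
    """A small shared key-value store; one instance per concern instead of one global board."""
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data.get(key)
    def write(self, key, value):
        self._data[key] = value

def stage(handler, inbox, outbox, board):
    """A filter: consume items from its pipe, update its blackboard, pass items on."""
    while (item := inbox.get()) is not None:
        outbox.put(handler(item, board))
    outbox.put(None)  # propagate shutdown to the next stage

def decode(item, board):
    board.write(item["ue_id"], "decoded")
    return item

def schedule(item, board):
    board.write(item["ue_id"], "scheduled")
    return item

q_in, q_mid, q_out = Queue(), Queue(), Queue()
decode_board, schedule_board = Blackboard(), Blackboard()
Thread(target=stage, args=(decode, q_in, q_mid, decode_board)).start()
Thread(target=stage, args=(schedule, q_mid, q_out, schedule_board)).start()

for ue in ({"ue_id": 1}, {"ue_id": 2}):
    q_in.put(ue)
q_in.put(None)  # end-of-stream marker
```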

  • 237.
    Jiang, Haozhen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Yi
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison of Different Techniques of Web GUI-based Testing with the Representative Tools Selenium and EyeSel2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Software testing is becoming more and more important in the software development life-cycle, especially for web testing. Selenium is one of the most widely used property-based Graphical User Interface (GUI) web testing tools. Nevertheless, it also has some limitations. For instance, Selenium cannot test the web components in some specific plugins or HTML5 video frames. But it is important for testers to verify the functionality of plugins or videos on websites. Recently, the theory of image recognition-based GUI testing has been introduced, which can locate and interact with the components to be tested on websites by image recognition. Only a few papers compare property-based GUI web testing and image recognition-based GUI testing. Hence, we formulated our research objectives based on this main gap.

    Objectives. We want to compare these two different techniques with EyeSel which is the tool represents the image recognition-based GUI testing and Selenium which is the tool represents the property-based GUI testing. We will evaluate and compare the strengths and drawbacks of these two tools by formulating specific JUnit testing scripts. Besides, we will analyze the comparative result and then evaluate if EyeSel can solve some of the limitations associated with Selenium. Therefore, we can conclude the benefits and drawbacks of property-based GUI web testing and image recognition-based GUI testing.  

    Methods. We conduct an experiment to develop test cases based on websites’ components both by Selenium and EyeSel. The experiment is conducted in an educational environment and we select 50 diverse websites as the subjects of the experiment. The test scripts are written in Java and run from Eclipse. The experiment data is collected for comparing and analyzing these two tools.

    Results. We use quantitative analysis and qualitative analysis to analyze our results. First of all, we use quantitative analysis to evaluate the effectiveness and efficiency of the two GUI web testing tools. The effectiveness is measured by the number of components that can be tested by these two tools, while the efficiency is measured by the test cases’ development time and execution time. The results are as follows: (1) EyeSel can test a larger number of components in web testing than Selenium. (2) Testers need more time to develop test cases with Selenium than with EyeSel. (3) Selenium executes the test cases faster than EyeSel. (4) “Result (1)” indicates that the effectiveness of EyeSel is better than that of Selenium, while “Results (2)(3)” indicate that the efficiency of EyeSel is better than that of Selenium. Secondly, we use qualitative analysis to evaluate four quality characteristics (learnability, robustness, portability, functionality) of the two GUI web testing tools. The results show that the portability and functionality of Selenium are better than those of EyeSel, while the learnability of EyeSel is better than that of Selenium. And both of them have good robustness in web testing.

    Conclusions. After analyzing the results of the comparison between Selenium and EyeSel, we conclude that (1) image recognition-based GUI testing is more effective than property-based GUI web testing, (2) image recognition-based GUI testing is more efficient than property-based GUI web testing, (3) the portability and functionality of property-based GUI web testing are better than those of image recognition-based GUI testing, (4) the learnability of image recognition-based GUI testing is better than that of property-based GUI web testing, and (5) both of them are good at different aspects of robustness.
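    A minimal Python sketch contrasting the two locating strategies compared above: property-based lookup through Selenium's API versus image-based lookup via OpenCV template matching (OpenCV stands in for EyeSel here). The URL, element id and reference image are illustrative assumptions.

```python
import cv2
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")

# Property-based locating: the element is found through a DOM attribute.
driver.find_element(By.ID, "submit-button").click()

# Image-based locating: the element is found by matching a reference image
# against a screenshot of the rendered page.
driver.save_screenshot("page.png")
page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
button = cv2.imread("submit_button.png", cv2.IMREAD_GRAYSCALE)  # reference image of the element
scores = cv2.matchTemplate(page, button, cv2.TM_CCOEFF_NORMED)
_, best_score, _, top_left = cv2.minMaxLoc(scores)
if best_score > 0.9:  # confident match; a tool like EyeSel would now click here
    centre = (top_left[0] + button.shape[1] // 2, top_left[1] + button.shape[0] // 2)
    print("element found at", centre)
```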

  • 238.
    Johansson, Eric
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Adapting the web: Analysis of guidelines for responsive design2019Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Smartphone usage is higher than ever and the number is steadily increasing, but not all websites on the Internet are adapted for use on smartphones. This study set out to find common and proven guidelines in the current scientific literature and to create a guide on how to best adapt a desktop website so that it is optimized for use on smartphones. The areas of research were usability, readability and energy saving. The body of literature on the subject was reviewed and the result was compiled into a list of guidelines. The guidelines were used to compare the desktop version versus the smartphone version of 5 frequently visited websites.

    The result was summarized with a score for each website, and their respective solutions for displaying components on small screens were noted. A prototype website was constructed in two versions: one responsive and one unresponsive. The two versions of the prototype website were then tested by a group of testers. The tests concluded that the guidelines raised user satisfaction and readability. Sufficient energy-saving metrics could not be extracted in the way design and usability were tested, and energy saving had to be excluded from the testing.

    The list of guidelines showed that there are solutions for solving readability, usability, and energy-saving issues on smartphones. The testing concluded that there was an increase in text readability and usability of the website when the guidelines were implemented. Further testing of energy saving must be conducted to test the validity of the remaining untested guidelines.

  • 239.
    Johnell, Carl
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Parallel programming in Go and Scala: A performance comparison2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis provides a performance comparison of parallel programming in Go and Scala. Go supports concurrency through goroutines and channels. Scala has parallel collections, futures and actors that can be used for concurrent and parallel programming. The experiment used two different types of algorithms to compare the performance between Go and Scala. Parallel versions of matrix multiplication and matrix chain multiplication were implemented with goroutines and channels in Go. Matrix multiplication was implemented with parallel collections and futures in Scala, and chain multiplication was implemented with actors.

    The results from the study show that Scala has better performance than Go; parallel matrix multiplication was about 3x faster in Scala. However, goroutines and channels are more efficient than actors. Go performed better than Scala when the number of goroutines and actors increased in the benchmark for parallel chain multiplication.

    Both Go and Scala have features that make parallel programming easier, but I found that Go as a language was easier to learn and understand than Scala. I recommend anyone interested in Go to try it out because of its ease of use.
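    The following is a generic Python sketch of the row-parallel matrix multiplication that the thesis benchmarks with goroutines/channels in Go and parallel collections in Scala; it only illustrates the decomposition and is not a reproduction of either implementation.

```python
from concurrent.futures import ProcessPoolExecutor

def multiply_row(args):
    """Compute one row of the product: dot products of a row of A with each column of B."""
    row, b = args
    return [sum(x * y for x, y in zip(row, col)) for col in zip(*b)]

def parallel_matmul(a, b, workers=4):
    # Each worker computes whole result rows, mirroring a one-task-per-row split.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(multiply_row, ((row, b) for row in a)))

if __name__ == "__main__":
    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    print(parallel_matmul(a, b))  # [[19, 22], [43, 50]]
```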

  • 240.
    Jonarv Hultgren, Susanne
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tennevall, Philip
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Saving resources through smart farming: An IoT experiment study2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: Smart farming, agritech, is growing in popularity and is starting to develop rapidly with some already existing technology that is implemented in agriculture for both industrial and private use.

    Objectives: The goal of this thesis is to investigate the benefits and issues of implementing technology in agriculture, agritech. In this thesis the investigation and research are performed by conducting a literature study and an experiment.

    Realization: A prototype was created to monitor the soil moisture level, calculate the average soil moisture value, and water the plants when needed. This was then compared to a manually watered pot to investigate if agritech could reduce the water usage when maintaining plants.

    Results: The result of the experiment indicates that it is possible to improve the use of resources such as human labor, time spent on maintaining the plants and water usage.

    Conclusions: The conclusion of this thesis is that, with the help of agritech, human workers can spend more time on other tasks and on maintaining the implemented technology, instead of observing the plants to see if they need watering and watering them manually. Water usage may also be minimized with the help of sensors that make sure the plants only get watered when needed, by constantly checking the soil moisture level.
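    A minimal Python sketch of the watering loop described above; the threshold, sampling interval and the sensor/pump functions are hypothetical stand-ins for the prototype's hardware interface.

```python
import time

MOISTURE_THRESHOLD = 40   # water when the average moisture drops below 40 %
SAMPLE_INTERVAL = 60      # seconds between sensor readings

def read_moisture_sensors():
    """Return soil-moisture readings in percent (hardware-specific in a real prototype)."""
    return [38, 42, 39]   # placeholder values

def run_pump(seconds):
    print(f"watering for {seconds} s")  # would drive a relay/pump on real hardware

while True:
    readings = read_moisture_sensors()
    average = sum(readings) / len(readings)
    if average < MOISTURE_THRESHOLD:
        run_pump(5)
    time.sleep(SAMPLE_INTERVAL)
```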

  • 241.
    Josyula, Jitendra
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Panamgipalli, Sarat
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Practitioners' Information Needs and Sources: A Survey Study2018In: Proceedings - 2018 9th International Workshop on Empirical Software Engineering in Practice, IWESEP 2018, IEEE , 2018, p. 1-6Conference paper (Refereed)
    Abstract [en]

    Software engineering practitioners have information needs to support strategic, tactical and operational decision-making. However, there is scarce research on understanding which information needs exist and how they are currently fulfilled in practice. This study aims to identify the information needs, the frequency of their occurrence, the sources of information used to satisfy the needs, and the perception of practitioners regarding the usefulness of the sources currently used. For this purpose, a literature review was conducted to aggregate the current state of understanding in this area. We built on the results of the literature review and developed further insights through in-depth interviews with 17 practitioners. We further triangulated the findings from these two investigations by conducting a web-based survey (with 83 completed responses). Based on the results, we infer that information regarding product design, product architecture and requirements gathering are the most frequently faced needs. Software practitioners mostly use blogs, community forums, product documentation, and discussion with colleagues to address their information needs.

  • 242.
    Josyula, Jitendra Rama Aswadh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Panamgipalli, Soma Sekhara Sarat Chandra
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Identifying the information needs and sources of software practitioners.: A mixed method approach.2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Every day software practitioners need information to resolve a number of questions. These information needs should be identified and addressed in order to successfully develop and deliver a software system. One of the ways to address these information needs is to make use of information sources like blogs, websites, documentation etc. Identifying the needs and sources of software practitioners can benefit the practitioners as well as the software development process. However, the information needs and sources of software practitioners have only been partially studied; research has mostly focused on knowledge management in software engineering. The context of this study is therefore to identify the information needs and information sources of software practitioners and also to investigate practitioners’ perception of different information sources.

    Objectives. In this study we primarily investigated some of the information needs of software practitioners and the information sources that they use to fulfill their needs. Secondly, we investigated practitioners’ perception of available information sources by identifying the aspects that they consider while using different information sources.

    Methods. To achieve the research objectives, this study conducted an empirical investigation by performing a survey with two data collection techniques. A simple literature review was also performed initially to identify some of the information needs and sources of software practitioners. We then started the survey by conducting semi-structured interviews based on the data obtained from the literature. Moreover, an online questionnaire was designed after conducting the preliminary analysis of the data obtained from both the interviews and the literature review. The coding process of grounded theory was used for analyzing the data obtained from the interviews, and descriptive statistics were used for analyzing the data obtained from the online questionnaire. The data obtained from both the qualitative and quantitative methods is triangulated by comparing the identified information needs and sources with those that are presented in the literature.

    Results. From the preliminary literature review, we identified seven information needs and six information sources. Based on the results of the literature review, we then conducted interviews with software practitioners and identified nine information needs and thirteen information sources. From the interviews we also investigated the aspects that software practitioners look into while using different information sources, and thus we identified four major aspects. We then validated the results from the literature review and interviews with the help of an online questionnaire. From the online questionnaire, we finally identified the frequency of occurrence of the identified information needs and the frequency of use of different information sources.

    Conclusions. We identified that software practitioners currently face nine types of information needs, out of which information on clarifying requirements and information on product design and architecture are the most frequently faced needs. To address these needs most practitioners use information sources like blogs and community forums, product documentation and discussion with colleagues. Research articles are moderately used, while IT magazines and social networking sites are least used to address their needs. We also identified that most practitioners consider the reliability/accuracy of the information source an extremely important factor. The identified information needs and sources, along with the practitioners’ perception, are clearly elucidated in the document.

    A future direction of this work could be testing the applicability of the identified information needs by extending the sample population. There is also scope for research on how the identified information needs can be minimized to make information acquisition easier for practitioners.

  • 243.
    Kaliniak, Paweł
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Wrocław University of Technology.
    Conversion of SBVR Behavioral Business Rules to UML Diagrams: Initial Study of Automation2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Automating the conversion of business rules into source code in a software development project can reduce time and effort in the development phase. In this thesis we discuss the automatic conversion of behavioral business rules, defined in the Semantics of Business Vocabulary and Rules (SBVR) standard, to fragments of Unified Modeling Language diagrams: activity, sequence and state machine. It is a conversion from the Computation Independent Model (CIM) to the Platform Independent Model (PIM) level defined by Model Driven Architecture (MDA). A PIM in MDA can be further transformed into a Platform Specific Model, which is prepared for source code generation.

    Objectives. The aim of this thesis is to initially explore the field of automatic conversion of behavioral business rules - conversion from an SBVR representation to fragments of UML diagrams. This is done by fulfilling the following objectives:

    -To find out the properties of an SBVR behavioral rule which ensure that the rule can be automatically converted to parts of UML behavioral diagrams (activity, sequence, state machine).

    -To propose a mapping of SBVR constructs to constructs of UML behavioral diagrams.

    -To prepare guidelines which help to specify SBVR behavioral business rules in such a way that they can be automatically transformed into fragments of selected UML behavioral diagrams.

    Methods. Expert opinion and a case study were applied. Business analysts from industry and academia were asked to convert a set of SBVR behavioral business rules to UML behavioral diagrams: activity, sequence and state machine. The analysis of the set of business rules and their conversions to UML diagrams was the basis for fulfilling the objectives.

    Results. 2 syntactic and 3 semantic properties were defined. Conversion rules which define the mapping of SBVR behavioral business rule constructs to UML constructs were defined: 5 rules for conversion to activity diagrams, 6 for conversion to sequence diagrams, and 5 for conversion to state machine diagrams. 6 guidelines were defined which are intended to help in the specification of behavioral business rules that can be automatically transformed into UML diagrams according to the presented conversion rules.

    Conclusions. The research performed in this thesis is an initial study of the automatic conversion of behavioral business rules from SBVR notation to UML behavioral diagram notation. Validation of the defined properties, conversion rules and guidelines can be done in industry as future work. Re-executing the research on larger and more diverse sets of behavioral business rules taken from industry projects, with sufficiently broad access to business analysts from industry and academia, could help to improve the results.

  • 244. Kalinowski, Marcos
    et al.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Travassos, G.H.
    An industry ready defect causal analysis approach exploring Bayesian networks2014In: Lecture Notes in Business Information Processing, Vienna: Springer , 2014, Vol. 166, p. 12-33Conference paper (Refereed)
    Abstract [en]

    Defect causal analysis (DCA) has shown itself to be an efficient means to improve the quality of software processes and products. A DCA approach exploring Bayesian networks, called DPPI (Defect Prevention-Based Process Improvement), resulted from research following an experimental strategy. Its conceptual phase considered evidence-based guidelines acquired through systematic reviews and feedback from experts in the field. Afterwards, in order to move towards industry readiness, the approach evolved based on the results of an initial proof of concept and a set of primary studies. This paper describes the experimental strategy followed and provides an overview of the resulting DPPI approach. Moreover, it presents results from applying DPPI in industry in the context of a real software development lifecycle, which allowed further comprehension and insights into using the approach from an industrial perspective.

  • 245.
    Kancharla, Akshitha
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Pannala, Akhil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Factors for Accelerating the Development Speed in Systems of Artificial Intelligence2019Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background: With the increase in the application of Artificial Intelligence, there is an urge to find ways to increase the development speed of these systems (time-to-market), because time is one of the most expensive and valuable resources in software development. Faster development speed is essential for companies to survive. There are articles in the literature that state the factors/antecedents for improving the development speed in Traditional Software Engineering. However, we cannot draw direct conclusions from these factors because development in Traditional Software Engineering and Artificial Intelligence differ.

    Objectives: The primary objectives of this research are: a) Conduct a literature review to identify the list of factors that affect the speed of Traditional Software Engineering. b) Perform an In-depth interview study to evaluate whether the listed factors of Traditional Software Engineering can be applied in accelerating the development of AI systems engineering.

    Methods: The method chosen to address research question 1 is a Systematic Literature Review. The reason for selecting a Systematic Literature Review (SLR) is that we follow a specific, well-defined structure to identify, analyze and interpret the data about the research question with evidence. The search strategy snowballing is the best alternative for conducting the SLR as per the guidelines given by Wohlin. The method chosen to address research question 2 is an in-depth interview study. We conduct interviews to gather information related to our research. Here, the participant is the interviewee, who may be a data scientist or project manager in the field of AI, and the interviewer is a student. Each interviewee lists the factors that affect the development speed of AI systems and ranks them based on their importance using Trello.

    Results: The result of the systematic literature review is the list of papers obtained from the snowball sampling. From the collected data, factors are extracted which are then used for the interviews. The interviews are conducted based on the questionnaire that was prepared. All the interviews are recorded and then transcribed. The transcribed data is analyzed using Conventional Content Analysis.

    Conclusions: The study identifies the factors that will help accelerate the development speed of Artificial Intelligence systems. The identified factors are mostly non-technical such as team leadership, trust, etc. By selecting suitable research methods for each research question, the objectives are addressed.

  • 246.
    Karlson, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Olsson, Fredrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating the Newly Graduated Students Experience after University2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Today’s labor market is teeming with software development jobs, and employees are needed more than ever. With this statement, one would believe it is easy for a newly graduated student to start their career. However, according to several studies, there are specific areas where newly graduated Software Engineering students struggle when beginning their first job. Currently, there is a displacement about what the school should focus on when teaching their students. This causes various challenges to arise for newly graduated students when they are initially starting their career. To address this issue, this study aims to identify whether or not there exists a gap between the education provided by the universities and what is expected from the industry. In accordance with this, the purpose is also to point out which areas might be challenging for newly graduated students, and to highlight how the school and industry can benefit from the results of this study. By conducting interviews with both newly graduated students with one to three years of working experience and personnel responsible for hiring new employees at companies, this study gives an insight into which common areas new graduates may struggle with. The results specify several areas which are challenging to newly graduated students. The greatest challenges which the newly graduated students faced were areas revolving around soft skills. This was in accordance with the opinions of the recruiters, suggesting that these areas are what the school should focus more on. Other differences between the newly graduated interviewees’ opinions and the recruiters’ opinions are also highlighted in the report. Several subjects in school could improve their way of teaching. Furthermore, there are possibilities for companies to better adjust their on-boarding of newly graduated employees. By addressing the challenges which newly graduated employees face, companies can provide their new employees with a better understanding of how to properly work and function in the industry today.

  • 247.
    Karlsson, Jan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Eriksson, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    How the choice of Operating System can affect databases on a Virtual Machine2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    As databases grow in size, the need for optimizing databases is becoming a necessity. Choosing the right operating system to support your database becomes paramount to ensure that the database is fully utilized. Furthermore, with the virtualization of operating systems becoming more commonplace, we find ourselves with more choices than we ever faced before. This paper demonstrates why the choice of operating system plays an integral part in deciding the right database for your system in a virtual environment. This paper contains an experiment which measured the benchmark performance of a database management system on various virtual operating systems. This experiment shows the effect a virtual operating system has on the database management system that runs upon it. These findings will help to promote future research into this area as well as provide a foundation on which future research can be based.

  • 248.
    Kartheek arun sai ram, chilla
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kavya, Chelluboina
    Investigating Metrics that are Good Predictors of Human Oracle Costs An Experiment2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Human oracle cost, the cost associated with estimating the correctness of the output for given test inputs, is evaluated manually by humans; this cost is significant and is a concern in the software test data generation field. This study has been designed in this context to assess metrics that might predict human oracle cost.

    Objectives. The major objective of this study is to address the human oracle cost; for this, the study identifies the metrics that are good predictors of human oracle cost and can further help to solve the oracle problem. In this process, the suitable metrics identified from the literature are applied to the test input, to see if they can help in predicting the correctness of the output for the given test input. Methods. Initially a literature review was conducted to find some of the metrics that are relevant to the test data. Besides finding the aforementioned metrics, our literature review also tried to find some possible code metrics that can be applied to test data. Before conducting the actual experiment, two pilot experiments were conducted. To accomplish our research objectives, an experiment was conducted at BTH with master students as the sample population. Further, group interviews were conducted to check if the participants perceive any new metrics that might impact the correctness of the output. The data obtained from the experiment and the interviews was analyzed using a linear regression model in the SPSS suite. Further, to analyze the accuracy vs. metric data, a linear discriminant model in the SPSS program suite was used.

    Results. Our literature review resulted in 4 metrics that are suitable for our study. As our test input is HTML, we took HTML depth, size, compression size and number of tags as our metrics. Also, from the group interviews another 4 metrics were drawn, namely the number of lines of code and the number of <div>, anchor <a> and paragraph <p> tags as individual metrics. The linear regression model, which analyses the time vs. metric data, shows significant results, but with multicollinearity affecting the result there was no variance among the considered metrics. So, the results of our study are reported after adjusting for multicollinearity. Besides the above analysis, a linear discriminant model, which analyses the accuracy vs. metric data, was used to predict the metrics that influence accuracy. The results of our study show that the metrics positively correlate with time and accuracy.

    Conclusions. For the time vs. metric data, when multicollinearity is adjusted for by applying a step-wise regression reduction technique, the program size, compression size and <div> tag influence the time taken by the sample population. From the accuracy vs. metric data, the number of <div> tags and the number of lines of code influence the accuracy of the sample population.
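    A hedged Python sketch of how the four metrics above (size, gzip-compressed size, number of tags and nesting depth) could be computed for an HTML test input; it only illustrates the metrics and is not the instrument used in the experiment.

```python
import gzip
from html.parser import HTMLParser

class TagStats(HTMLParser):
    """Count tags and track the maximum nesting depth while parsing."""
    def __init__(self):
        super().__init__()
        self.depth = self.max_depth = self.tags = 0
    def handle_starttag(self, tag, attrs):
        self.tags += 1
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)
    def handle_endtag(self, tag):
        self.depth -= 1

def html_metrics(html: str) -> dict:
    parser = TagStats()
    parser.feed(html)
    raw = html.encode("utf-8")
    return {
        "size_bytes": len(raw),
        "compressed_bytes": len(gzip.compress(raw)),
        "tag_count": parser.tags,
        "max_depth": parser.max_depth,
    }

print(html_metrics("<html><body><div><p>hello</p></div></body></html>"))
```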

  • 249. Kashfi, P.
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nilsson, A.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A conceptual ux-aware model of requirements2016In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, 2016, Vol. 9856 LNCS, p. 234-245Conference paper (Refereed)
    Abstract [en]

    User eXperience (UX) is becoming increasingly important for the success of software products. Yet, many companies still face various challenges in their work with UX. Part of these challenges relates to inadequate knowledge and awareness of UX, and to the fact that current UX models are commonly neither practical nor well integrated into existing Software Engineering (SE) models and concepts. Therefore, we present a conceptual UX-aware model of requirements for software development practitioners. This layered model shows the interrelation between UX and functional and quality requirements. The model is developed based on current models of UX and software quality characteristics. Through the model we highlight the main differences between various requirement types, in particular essentially subjective and accidentally subjective quality requirements. We also present the result of an initial validation of the model through interviews with 12 practitioners and researchers. Our results show that the model can raise practitioners’ knowledge and awareness of UX, in particular in relation to requirements and testing activities. It can also facilitate UX-related communication among stakeholders with different backgrounds. © IFIP International Federation for Information Processing 2016.

  • 250.
    Kashfi, Pariya
    et al.
    Chalmers; Gothenburg Univ, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nilsson, Agneta
    Chalmers; Gothenburg Univ, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evidence-based Timelines for User eXperience Software Process Improvement Retrospectives2016In: 2016 42ND EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS (SEAA), IEEE Computer Society, 2016, p. 59-62Conference paper (Refereed)
    Abstract [en]

    We performed a retrospective meeting at a case company to reflect on its decade of Software Process Improvement (SPI) activities for enhancing UX integration. We supported the meeting with a pre-generated timeline of the main activities. This approach is a refinement of a similar approach that is used in Agile projects to improve the effectiveness and decrease the memory bias of retrospective meetings. The method is evaluated by gathering practitioners' views using a questionnaire. We conclude that UX research and practice can benefit from the SPI body of knowledge. We also argue that a cross-section evidence-based timeline retrospective meeting is useful for enhancing UX work in companies, especially for identifying and reflecting on 'organizational issues'. This approach also provides a cross-section longitudinal overview of the SPI activities that cannot easily be gained in other common SPI learning approaches.
