151 - 200 of 1526
  • 151.
    Boddapati, Venkatesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying Environmental Sounds with Image Networks2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Environmental Sound Recognition, unlike Speech Recognition, is an area that is still in the developing stages with respect to using Deep Learning methods. Sound can be converted into images by extracting spectrograms and the like. Object Recognition from images using deep Convolutional Neural Networks is a rapidly developing area that holds high promise. The same technique has been studied and applied here, but to image representations of sound.

    Objectives. In this study, investigation is done to determine the best possible accuracy of performing a sound classification task using existing deep Convolutional Neural Networks by comparing the data pre-processing parameters. Also, a novel method of combining different features into a single image is proposed and its effect tested. Lastly, the performance of an existing network that fuses Convolutional and Recurrent Neural architectures is tested on the selected datasets.

    Methods. Experiments were conducted to analyze the effects of data pre-processing parameters on the best achievable accuracy with two CNNs. A further experiment determined whether the proposed method of feature combination is beneficial. Finally, an experiment was conducted to test the performance of a combined network.

    Results. GoogLeNet had the highest classification accuracy: 73% on the 50-class dataset and 90-93% on the 10-class datasets. The sampling rate and frame length values that contributed to the high scores on the respective datasets are 16 kHz/40 ms and 8 kHz/50 ms. The proposed combination of features does not improve the classification accuracy. The fused CRNN network could not achieve high accuracy on the selected datasets.

    Conclusions. It is concluded that deep networks designed for object recognition can successfully classify environmental sounds, and the pre-processing parameter values for achieving the best accuracy were determined. The novel method of feature combination does not significantly improve the accuracy compared to spectrograms alone. The fused network, which learns spatial and temporal features from spectral images, performs poorly in the classification task compared to the convolutional network alone.
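    The spectrogram extraction step this thesis relies on can be sketched as a short-time Fourier transform. The sketch below assumes the 16 kHz / 40 ms parameters reported in the results; the 20 ms hop length is a hypothetical choice the abstract does not specify.

```python
import numpy as np

def spectrogram(signal, sample_rate=16000, frame_ms=40, hop_ms=20):
    """Magnitude spectrogram: one FFT column per overlapping, windowed frame."""
    frame_len = int(sample_rate * frame_ms / 1000)   # 640 samples at 16 kHz / 40 ms
    hop_len = int(sample_rate * hop_ms / 1000)
    window = np.hanning(frame_len)
    cols = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len] * window
        cols.append(np.abs(np.fft.rfft(frame)))
    # Transpose so rows are frequency bins and columns are time frames,
    # giving the image-like representation fed to the CNN.
    return np.array(cols).T

# One second of a 440 Hz tone sampled at 16 kHz.
t = np.arange(16000) / 16000.0
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

    The tone shows up as a bright horizontal band near frequency bin 18 (440 * 640 / 16000 ≈ 17.6), which is what makes such images classifiable by image networks.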

  • 152.
    Boddapati, Venkatesh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Petef, Andrej
    Sony Mobile Communications AB, SWE.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying environmental sounds using image recognition networks2017In: Procedia Computer Science / [ed] Toro C.,Hicks Y.,Howlett R.J.,Zanni-Merk C.,Toro C.,Frydman C.,Jain L.C.,Jain L.C., Elsevier B.V. , 2017, Vol. 112, p. 2048-2056Conference paper (Refereed)
    Abstract [en]

    Automatic classification of environmental sounds, such as dog barking and glass breaking, is becoming increasingly interesting, especially for mobile devices. Most mobile devices contain both cameras and microphones, and companies that develop mobile devices would like to provide functionality for classifying both videos/images and sounds. In order to reduce the development costs one would like to use the same technology for both of these classification tasks. One way of achieving this is to represent environmental sounds as images, and use an image classification neural network when classifying images as well as sounds. In this paper we consider the classification accuracy for different image representations (Spectrogram, MFCC, and CRP) of environmental sounds. We evaluate the accuracy for environmental sounds in three publicly available datasets, using two well-known convolutional deep neural networks for image recognition (AlexNet and GoogLeNet). Our experiments show that we obtain good classification accuracy for the three datasets. © 2017 The Author(s).

  • 153.
    Bodicherla, Saikumar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Pamulapati, Divyani
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Knowledge Management Maturity Model for Agile Software Development2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Knowledge is a major asset of an organization, enabling the enterprise to be more productive and to deliver high-complexity services. Knowledge management plays a key role in agile software development because it supports cultural infrastructure values like collaboration, communication, and knowledge transfer. This research aims to explore how organizations that adopt Agile Software Development (ASD) implement knowledge management utilizing practices that support the key process areas. Several knowledge management maturity models have been proposed over the past decade, but none of them is a Knowledge Management Maturity Model (KMMM) stated specifically for Agile software development. To fill this research gap, we introduce a maturity model which emphasizes knowledge management in ASD among practitioners. This maturity model helps organizations assess their knowledge management and provides a road map for any further improvement required in their processes.

    Objectives: In this thesis, we investigate the key process areas of knowledge management maturity models that could support agile software development. Through this investigation, we found that organizations should place emphasis on key process areas and their practices in order to improve the software process. The objectives of this research include:

    • Explore the key process areas and practices of knowledge management in the knowledge management maturity models. 
    • Identify the views of practitioners on knowledge management practices and key process areas for Agile software development.
    • Propose a maturity model for knowledge management in Agile software development based on practitioners' opinions.

    Methods: In this research, we applied two methods, a systematic mapping study and a survey, to fulfil our aim and objectives. We conducted the systematic mapping study through the snowballing process to investigate empirical literature about knowledge management maturity models. To triangulate the systematic mapping results, we conducted a survey. The survey responses were analyzed statistically using descriptive statistics.

    Results: From the systematic mapping, we identified 18 articles and analyzed 24 practices of knowledge management maturity models. These practices fall into key process areas such as process, people, and technology. Based on the systematic mapping results, 9 KM practices found in the KMMM literature were listed in the survey questionnaire and answered by software engineering practitioners. Moreover, 5 new practices for agile that were not found in the KMMM literature were suggested in the survey. To address the systematic mapping and survey results, we propose a maturity model which emphasizes knowledge management practices in ASD among practitioners.

    Conclusions: This thesis lists the main elements of the practices utilized by organizations and also shows the usage of maturity levels at each practice in detail. Furthermore, this thesis helps organizations assess the current maturity level of each practice in a real process. Hence, researchers can utilize the model from this thesis and further improve KM in their organizations.

  • 154.
    Bodireddigari, Sai Srinivas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Framework To Measure the Trustworthiness of the User Feedback in Mobile Application Stores2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Mobile application stores like Google Play, Apple store, Windows store have over 3 million apps. Users download the applications from their respective stores and they generally prefer the apps with the highest ratings. In response to the present situation, application stores provide categories like editor’s choice or top charts, providing better visibility for the applications. Customer reviews play such a critical role in the development of the application and the organization that there might be flawed reviews or biased opinions about the application due to many factors. The biased opinions and flawed reviews are likely to cause user review untrustworthiness. The reviews and ratings in the mobile application stores are used by organizations to make the applications more efficient and more adaptable to the user. This context leads to the importance of user review trustworthiness and of managing the trustworthiness in user feedback by knowing the causes of mistrust. Hence, there is a need for a framework to understand the trustworthiness of user-given feedback.

    Objectives: In the following study the author aims to accomplish the following objectives: firstly, exploring the causes of untrustworthiness in user feedback for an application in mobile application stores such as the Google Play store; secondly, exploring the effects of trustworthiness on users and developers; finally, proposing a framework for managing the trustworthiness in the feedback.

    Methods: To accomplish the objectives, the author used a qualitative research method. The data collection method is an interview-based survey conducted with 13 participants to find out the causes of untrustworthiness in user feedback from the user’s perspective and the developer’s perspective. The author followed thematic coding for qualitative data analysis.

    Results: The author identifies 11 codes from the description of the transcripts and explores the relationship between trustworthiness and the causes. The 11 codes were put into 4 themes, and a thematic network was created between the themes. The relations were then analyzed with cost-effect analysis.

    Conclusions: We conclude from the analysis that 11 causes affect trustworthiness from the user’s perspective and 9 causes affect trustworthiness from the developer’s perspective. Segregating trustworthy feedback from untrustworthy feedback is important for developers, as the next releases should be planned based on it. Finally, inclusion and exclusion criteria to help developers manage trustworthy user feedback are defined.

  • 155.
    Boer, de, Wiebe Douwe
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Participatory Design Ideals2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The Swedish academic discipline Informatics has roots in the Scandinavian design approach Participatory Design (PD). PD’s point of departure is to design ICT, and the new environment it becomes part of, together with the future users, driven by the ideal to bring more democracy to the workplace. PD builds on the Action Research (AR) and industrial democracy tradition that started in the 1960s in Scandinavia, in which the powerful Scandinavian trade unions have a central role. The aim of the unions is to prepare the workers for, and have influence on, the introduction of new technologies that (are expected to) change the work and work environment of the workers. In the 1970s, when more computers emerge in the workplace, this leads to the development of PD. An important difference from AR is that the aim of PD is to actually design new ICT and the new environment it becomes part of.

    During the UTOPIA project in the first half of the 1980s, much referred to in PD literature and led by project leader and PD pioneer Pelle Ehn, it was discovered that bringing the different expertise of designers/researchers and workers together in design-by-doing processes also results in more appropriate ICT.

     

    With ICT being ubiquitous nowadays, influencing most aspects of our lives inside and outside the workplace, and with trade unions playing another role in (Scandinavian) society, a question is how PD should develop further. PD pioneer Morten Kyng (also a UTOPIA designer/researcher) proposes a framework for next PD practices in a discussion paper. The first element he mentions in the framework is ideals: the designer/researcher should as a first step consider what ideals to pursue as a person and for the project, and then consider how to discuss the goals of the project partners; Kyng makes no further suggestions on how to approach this.

    This design and research thesis aims to design and propose some PD processes for arriving, at the beginning of a PD/design project, at shared ideals to pursue, based on a better understanding of the political and philosophical background of PD, including design as a discipline in its own right.

     

    For a better understanding of the political and philosophical roots of PD, and of design as a discipline in its own right, Pelle Ehn’s early (PD research) work and (PD) influences and supporting theories are explored, next to Kyng’s discussion paper (framework) and the reactions of his debate partners to it. It is found that politics and what ideals to pursue in PD are sensitive and (still) important subjects in PD, and, one could argue, in a broader sense also for design in general. In relation to this, related disciplines like Computer Ethics and Value Sensitive Design, as well as more recently formulated ideals for PD and its relation to ethics, are also explored. As a result, a proposal for a redesigned framework for next PD practices is designed as a design artefact, in which the element ideals is most elaborated.

    The understanding of design as a discipline in its own right is then further explored through a selection of different models and quotes from related (design) literature, which are reflected on, also in relation to PD, and used as reminders in a design process leading to a proposal for a model that tries to reframe the relation between design, practice and research.

     

    Finally, some methods, processes and techniques used in PD, design, AR and related literature that can contribute to design proposals for design processes that enable the design of ideals using a PD approach are explored. These are used as reminders in design-by-doing processes, in which suggestions for techniques and processes to design ideals together with participants are tried out in real-life situations, reflected on, and iteratively further developed. Trying to avoid framing as much as possible, (semi-)anonymity and silence seem to be important ingredients in these processes to stimulate the generation of idea(l)s as free from bias and dominance patterns as possible. An additional design artefact developed in this context is a template for an annotated portfolio used to describe and reflect on the different processes.

  • 156.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Angelova, Milena
    Technical University Sofia, BUL.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rosander, Oliver
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tsiporkova, Elena
    Collective Center for the Belgian Technological Industry, BEL.
    Evolutionary clustering techniques for expertise mining scenarios2018In: ICAART 2018 - Proceedings of the 10th International Conference on Agents and Artificial Intelligence, Volume 2 / [ed] van den Herik J.,Rocha A.P., SciTePress , 2018, Vol. 2, p. 523-530Conference paper (Refereed)
    Abstract [en]

    The problem addressed in this article concerns the development of evolutionary clustering techniques that can be applied to adapt the existing clustering solution to a clustering of newly collected data elements. We are interested in clustering approaches that are specially suited for adapting clustering solutions in the expertise retrieval domain. This interest is inspired by practical applications such as expertise retrieval systems where the information available in the system database is periodically updated by extracting new data. The experts available in the system database are usually partitioned into a number of disjoint subject categories. It is becoming impractical to re-cluster this large volume of available information. Therefore, the objective is to update the existing expert partitioning by the clustering produced on the newly extracted experts. Three different evolutionary clustering techniques are considered to be suitable for this scenario. The proposed techniques are initially evaluated by applying the algorithms on data extracted from the PubMed repository. Copyright © 2018 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
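    The adaptation scenario described above, updating an existing expert partitioning with newly extracted data instead of re-clustering everything, can be illustrated with a minimal incremental-assignment sketch. This is a simplified stand-in for the three evolutionary clustering techniques the paper evaluates; the distance threshold and the centroid representation are assumptions.

```python
import math

def update_partition(centroids, new_points, threshold):
    """Adapt an existing clustering to newly collected data: each new point
    joins the nearest existing cluster, or opens a new cluster when the
    nearest centroid is further away than `threshold`."""
    centroids = [tuple(c) for c in centroids]
    assignments = []
    for p in new_points:
        dists = [math.dist(p, c) for c in centroids]
        best = min(range(len(dists)), key=dists.__getitem__)
        if dists[best] <= threshold:
            assignments.append(best)
        else:
            centroids.append(tuple(p))   # a new subject category emerges
            assignments.append(len(centroids) - 1)
    return centroids, assignments

# Two existing expert clusters; the third new point is far from both.
cents, assign = update_partition([(0, 0), (10, 10)],
                                 [(1, 1), (9, 9), (50, 50)], threshold=5)
```

    The appeal of this family of approaches is that only the new points are touched, so the cost is proportional to the update, not to the full database.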

  • 157.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Inst Technol, Comp Sci & Engn Dept, Karlskrona, Sweden..
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Inst Technol, Comp Sci & Engn Dept, Karlskrona, Sweden..
    Kota, Sai M. Harsha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sköld, Lars
    Telenor , SWE.
    Analysis of Organizational Structure through Cluster Validation Techniques Evaluation of email communications at an organizational level2017In: 2017 17TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW 2017) / [ed] Gottumukkala, R Ning, X Dong, G Raghavan, V Aluru, S Karypis, G Miele, L Wu, X, IEEE , 2017, p. 170-176Conference paper (Refereed)
    Abstract [en]

    In this work, we report an ongoing study that aims to apply cluster validation measures for analyzing email communications at an organizational level of a company. This analysis can be used to evaluate the company structure and to produce further recommendations for structural improvements. Our initial evaluations, based on data in the forms of emails logs and organizational structure for a large European telecommunication company, show that cluster validation techniques can be useful tools for assessing the organizational structure using objective analysis of internal email communications, and for simulating and studying different reorganization scenarios.

  • 158.
    Boivie, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Digital Wanderlust: Med digital materia som följeslagare i skapandet2017Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With this bachelor thesis I aim to bring to light the role of the computer in digital creative work. This is accomplished by treating the code that makes up digital objects as a form of matter, and with Karen Barad’s agential realism and other research into digital materiality as a point of reference, this matter is invited into the creative process as an actor. I’ve been striving for a glimpse of how digital matter comes to life when it’s allowed an active part in the creative process, to see how it expresses itself. By engaging with the digital matter through diffraction and remix as methods I’ve been given an insight into the core of it, and through the process I’ve been working alongside digital matter in intra-action.

    Ultimately I can see how digital matter won’t appear alone; I myself and the computer are both entangled together with the digital matter as a result of the intra-actions we’ve been engaging in. My intervention in digital matter becomes visible as glitches, traces of decay that give the digital matter, which can be so fleeting, more concrete and material characteristics. The unintelligible complexity of digital matter also comes to light when it’s allowed influence, as it appears visually. With this knowledge I’ve gained the awareness that digital matter does not have an absolute appearance, and this thesis can be seen as an investigation into how digital matter can appear.

  • 159.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Anton, Borg
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Clustering residential burglaries using multiple heterogeneous variablesIn: International Journal of Information Technology & Decision MakingArticle in journal (Refereed)
    Abstract [en]

    To identify series of residential burglaries, detecting linked crimes performed by the same constellations of criminals is necessary. Comparison of crime reports today is difficult as crime reports traditionally have been written as unstructured text and often lack a common information-basis. Based on a novel process for collecting structured crime scene information, the present study investigates the use of clustering algorithms to group similar crime reports based on combined crime characteristics from the structured form. Clustering quality is measured using Connectivity and Silhouette index, stability using Jaccard index, and accuracy is measured using Rand index and a Series Rand index. The performance of clustering using combined characteristics was compared with the spatial characteristic. The results suggest that the combined characteristics perform better than or similar to the spatial characteristic. In terms of practical significance, the presented clustering approach is capable of clustering cases using a broader decision basis.
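    Of the validation measures listed in the abstract, the Jaccard index used for stability can be sketched directly: it compares two clusterings of the same crime reports by counting the pairs co-clustered in both solutions against the pairs co-clustered in at least one.

```python
from itertools import combinations

def jaccard_index(labels_a, labels_b):
    """Jaccard index between two clusterings of the same items: pairs
    co-clustered in both solutions / pairs co-clustered in at least one."""
    both = either = 0
    for i, j in combinations(range(len(labels_a)), 2):
        a = labels_a[i] == labels_a[j]   # pair together in clustering A?
        b = labels_b[i] == labels_b[j]   # pair together in clustering B?
        both += a and b
        either += a or b
    return both / either if either else 1.0
```

    A value of 1.0 means the two solutions group the cases identically; values near 0 indicate an unstable clustering.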

  • 160.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Bala, Jaswanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Filtering Estimated Crime Series Based on Route Calculations on Spatio-temporal Data2016In: European Intelligence and Security Informatics Conference / [ed] Brynielsson J.,Johansson F., IEEE, 2016, p. 92-95Conference paper (Refereed)
    Abstract [en]

    Law enforcement agencies strive to link serial crimes, most preferably based on physical evidence, such as DNA or fingerprints, in order to solve criminal cases more efficiently. However, physical evidence is more common at crime scenes in some crime categories than others. For crime categories with relative low occurrence of physical evidence it could instead be possible to link related crimes using soft evidence based on the perpetrators' modus operandi (MO). However, crime linkage based on soft evidence is associated with considerably higher error-rates, i.e. crimes being incorrectly linked. In this study, we investigate the possibility of filtering erroneous crime links based on travel time between crimes using web-based direction services, more specifically Google maps. A filtering method has been designed, implemented and evaluated using two data sets of residential burglaries, one with known links between crimes, and one with estimated links based on soft evidence. The results show that the proposed route-based filtering method removed 79 % more erroneous crimes than the state-of-the-art method relying on Euclidean straight-line routes. Further, by analyzing travel times between crimes in known series it is indicated that burglars on average have up to 15 minutes for carrying out the actual burglary event. © 2016 IEEE.
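    The route-based filtering idea can be sketched without a web-based direction service by falling back on great-circle distance and an assumed average travel speed. The 15-minute minimum time for the burglary itself echoes the paper's analysis, while the 50 km/h speed and the field names are hypothetical; the paper's actual method queries Google Maps routes instead.

```python
import math

def travel_minutes(lat1, lon1, lat2, lon2, speed_kmh=50.0):
    """Haversine distance converted to minutes of travel at an assumed
    average speed; a stand-in for a routing-service query."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    km = 2 * r * math.asin(math.sqrt(a))
    return 60.0 * km / speed_kmh

def plausible_link(crime_a, crime_b, min_crime_minutes=15):
    """Keep a link only if the time gap between the two offenses covers
    the travel time plus the time needed to commit the first burglary."""
    gap = abs(crime_a["time_min"] - crime_b["time_min"])
    travel = travel_minutes(crime_a["lat"], crime_a["lon"],
                            crime_b["lat"], crime_b["lon"])
    return gap >= travel + min_crime_minutes
```

    Links failing this check are physically implausible and can be filtered out before any MO-based analysis.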

  • 161.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A statistical method for detecting significant temporal hotspots using LISA statistics2017In: Proceedings - 2017 European Intelligence and Security Informatics Conference, EISIC 2017, IEEE Computer Society, 2017, p. 123-126Conference paper (Refereed)
    Abstract [en]

    This work presents a method for detecting statistically significant temporal hotspots, i.e. the date and time of events, which is useful for improved planning of response activities. Temporal hotspots are calculated using Local Indicators of Spatial Association (LISA) statistics. The temporal data is in a 7x24 matrix that represents a temporal resolution of weekdays and hours-in-the-day. Swedish residential burglary events are used in this work for testing the temporal hotspot detection approach. However, the presented method is also useful for other events as long as they contain temporal information, e.g. attack attempts recorded by intrusion detection systems. By using the method for detecting significant temporal hotspots it is possible for domain-experts to gain knowledge about the temporal distribution of the events, and also to learn at which times mitigating actions could be implemented.
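    A simplified version of the LISA computation on the 7x24 weekday-by-hour matrix can be sketched as a local Moran's I with the 8 surrounding cells as neighbours. The neighbourhood definition, the edge wrapping, and the injected example hotspot are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def local_morans_i(counts):
    """Local Moran's I for each cell of a weekday-by-hour count matrix,
    using the 8 surrounding cells (wrapping at the edges) as neighbours.
    Large positive values mark candidate temporal hotspots."""
    z = (counts - counts.mean()) / counts.std()
    neigh = np.zeros_like(z)
    for dr, dc in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        neigh += np.roll(np.roll(z, dr, axis=0), dc, axis=1)
    return z * (neigh / 8.0)

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(7, 24)).astype(float)
counts[4, 20:23] += 15          # inject a Friday-evening hotspot
lisa = local_morans_i(counts)
```

    The injected evening block stands out because its cells are both high themselves and surrounded by high neighbours, which is exactly the pattern a local indicator rewards.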

  • 162.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluating Temporal Analysis Methods Using Residential Burglary Data2016In: ISPRS International Journal of Geo-Information, Special Issue on Frontiers in Spatial and Spatiotemporal Crime Analytics, ISSN 2220-9964, Vol. 5, no 9, p. 1-22Article in journal (Refereed)
    Abstract [en]

    Law enforcement agencies, as well as researchers, rely on temporal analysis methods in many crime analyses, e.g., spatio-temporal analyses. A number of temporal analysis methods are being used, but a structured comparison in different configurations is yet to be done. This study aims to fill this research gap by comparing the accuracy of five existing, and one novel, temporal analysis methods in approximating offense times for residential burglaries that often lack precise time information. The temporal analysis methods are evaluated in eight different configurations with varying temporal resolution, as well as the amount of data (number of crimes) available during analysis. A dataset of all Swedish residential burglaries reported between 2010 and 2014 is used (N = 103,029). From that dataset, a subset of burglaries with known precise offense times is used for evaluation. The accuracy of the temporal analysis methods in approximating the distribution of burglaries with known precise offense times is investigated. The aoristic and the novel aoristic_ext method perform significantly better than three of the traditional methods. Experiments show that the novel aoristic_ext method was most suitable for estimating crime frequencies in the day-of-the-year temporal resolution when reduced numbers of crimes were available during analysis. In the other configurations investigated, the aoristic method showed the best results. The results also show the potential from temporal analysis methods in approximating the temporal distributions of residential burglaries in situations when limited data are available.
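    The aoristic method compared in this study can be sketched as follows: a crime whose offense time is only known as an interval spreads one unit of weight evenly over the temporal units the interval covers. The hour-of-day resolution below is one illustrative choice; the paper evaluates several resolutions.

```python
from collections import defaultdict

def aoristic_counts(crimes, n_units=24):
    """Aoristic analysis: each crime with an uncertain offense time spreads
    one unit of weight evenly over the temporal units its interval covers."""
    weights = defaultdict(float)
    for start, end in crimes:               # start/end as hour-of-day units
        units = [u % n_units for u in range(start, end + 1)]
        for u in units:
            weights[u] += 1.0 / len(units)  # equal share per covered unit
    return dict(weights)

# One burglary known to have happened at 14:00, and one known only to
# have happened somewhere between 22:00 and 02:00 (wrapping past midnight).
hist = aoristic_counts([(14, 14), (22, 26)])
```

    Each crime contributes exactly one unit of weight in total, so the resulting histogram still sums to the number of crimes while respecting the time uncertainty.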

  • 163.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Melander, Ulf
    En strukturerad metod för registrering och automatisk analys av brott2014In: The Past, the Present and the Future of Police Research: Proceedings from the fifth Nordic Police Research seminar / [ed] Rolf Granér och Ola Kronkvist, 2014Conference paper (Refereed)
    Abstract [sv]

    This article describes a method used in the police regions South, West and Stockholm to collect structured crime scene information from residential burglaries, and how the collected information can be analyzed with automatic methods that can assist crime coordinators in their work. These automated analyses can be used as filtering or selection tools for residential burglaries and thereby make the work more efficient and easier. Furthermore, the method can be used to determine the probability that two crimes were committed by the same offender, which can help the police identify series of crimes. This is possible because offenders tend to commit crimes in a similar manner, and it is possible, based on structured crime scene information, to automatically find these patterns. The chapter presents and evaluates a prototype of an IT-based decision support system and two automatic methods for crime coordination.

  • 164.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Svensson, Martin
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Hildeby, Jonas
    Polisen, SWE.
    Predicting burglars' risk exposure and level of pre-crime preparation using crime scene data2018In: Intelligent Data Analysis, ISSN 1088-467X, Vol. 22, no 1, p. 167-190, article id IDA 322-3210Article in journal (Refereed)
    Abstract [en]

    Objectives: The present study aims to extend current research on how offenders’ modus operandi (MO) can be used in crime linkage, by investigating the possibility to automatically estimate offenders’ risk exposure and level of pre-crime preparation for residential burglaries. Such estimations can assist law enforcement agencies when linking crimes into series and thus provide a more comprehensive understanding of offenders and targets, based on the combined knowledge and evidence collected from different crime scenes. Methods: Two criminal profilers manually rated offenders’ risk exposure and level of pre-crime preparation for 50 burglaries each. In an experiment we then analyzed to what extent 16 machine-learning algorithms could generalize both offenders’ risk exposure and preparation scores from the criminal profilers’ ratings onto 15,598 residential burglaries. All included burglaries contain structured and feature-rich crime descriptions which learning algorithms can use to generalize offenders’ risk and preparation scores from. Results: Two models created by Naïve Bayes-based algorithms showed the best performance, with an AUC of 0.79 and 0.77 for estimating offenders' risk and preparation scores respectively. These algorithms were significantly better than most, but not all, of the other algorithms. Both scores showed promising distinctiveness between linked series, as well as consistency for crimes within series compared to randomly sampled crimes. Conclusions: Estimating offenders' risk exposure and pre-crime preparation can complement traditional MO characteristics in the crime linkage process. The estimations also show indications of functioning for cross-category crimes that otherwise lack comparable MO. Future work could focus on increasing the number of manually rated offenses as well as fine-tuning the Naïve Bayes algorithm to increase its estimation performance.
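    The Naïve Bayes estimation of risk and preparation scores can be illustrated with a minimal Bernoulli Naïve Bayes over binary crime-scene features. The toy features and labels below are invented for illustration; the paper's actual feature set and algorithm variant may differ.

```python
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Fit class priors and per-feature Bernoulli probabilities with
    Laplace smoothing. X holds binary crime-scene indicator vectors and
    y holds 0/1 scores (e.g. low/high risk as rated by the profilers)."""
    model = {}
    n_features = len(X[0])
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        probs = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                 for j in range(n_features)]
        model[c] = (math.log(prior), probs)
    return model

def predict(model, x):
    """Return the class with the highest log posterior."""
    def log_post(c):
        log_prior, probs = model[c]
        return log_prior + sum(math.log(p if xi else 1.0 - p)
                               for xi, p in zip(x, probs))
    return max((0, 1), key=log_post)

# Hypothetical example: 3 binary MO features, labels rated by profilers.
X = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 0, 0]]
y = [1, 1, 0, 0]
model = train_bernoulli_nb(X, y)
```

    Scoring a new burglary then amounts to comparing two smoothed log posteriors, which scales easily to the 15,598 cases mentioned in the abstract.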

  • 165.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Malmö University, SWE.
    Baca, Dejan
    Fidesmo AB, SWE.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Introducing a novel security-enhanced agile software development process2017In: International Journal of Secure Software Engineering, ISSN 1947-3036, E-ISSN 1947-3044, Vol. 8, no 2Article in journal (Refereed)
    Abstract [en]

    In this paper, a novel security-enhanced agile software development process, SEAP, is introduced. It has been designed, tested, and implemented at Ericsson AB, specifically in the development of a mobile money transfer system. Two important features of SEAP are 1) that it includes additional security competences, and 2) that it includes continuous, integrated risk analysis for identifying potential threats. As a general finding of implementing SEAP in software development, the developers solve a large proportion of the risks in a timely, yet cost-efficient manner. The default agile software development process at Ericsson AB, i.e. where SEAP was not included, required significantly more employee hours for every risk identified compared to when integrating SEAP. The default development process left 50.0% of the risks unattended in the software version that was released, while the application of SEAP reduced that figure to 22.5%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.9%, a more than fivefold increase.

  • 166.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On the risk exposure of smart home automation systems2014In: Proceedings 2014 International Conference on Future Internet of Things and Cloud, IEEE Computer Society Digital Library, 2014Conference paper (Refereed)
    Abstract [en]

    A recent study has shown that more than every fourth person in Sweden feels that they have poor knowledge and control over their energy use, and that four out of ten would like to be more aware of and have better control over their consumption [5]. A solution is to provide householders with feedback on their energy consumption, for instance, through a smart home automation system [10]. Studies have shown that householders can reduce energy consumption by up to 20% when given such feedback [5] [10]. Home automation is a prime example of a smart environment built on various types of cyber-physical systems generating volumes of diverse, heterogeneous, complex, and distributed data from a multitude of applications and sensors. Thereby, home automation is also an example of an Internet of Things (IoT) scenario, where a communication network extends the present Internet by including everyday items and sensors [22]. Home automation is attracting more and more attention from commercial actors, such as energy suppliers, infrastructure providers, and third-party software and hardware vendors [8] [10]. Among the non-commercial stakeholders are various governmental institutions, municipalities, as well as end-users.

  • 167.
    BONAM, VEERA VENKATA SIVARAMAKRISHNA
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Multipath TCP and Measuring end-to-end TCP Throughput: Multipath TCP Descriptions and Ways to Improve TCP Performance2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Internet applications make use of the services provided by a transport protocol, such as TCP (a reliable, in-order stream protocol). We use the term Transport Service to mean the end-to-end service provided to applications by the transport layer.

    That service can only be provided correctly if information about the intended usage is supplied by the application. The application may determine this information at design time, compile time, or run time, and it may include guidance on whether a feature is required, a preference of the application, or something in between.

    Multipath TCP (MPTCP) adds the capability of using multiple paths to a regular TCP session. Even though it is designed to be fully backward compatible with applications, the data transport differs from regular TCP, and there are several additional degrees of freedom that a particular application may want to exploit.

    Multipath TCP is particularly useful in the context of wireless networks; using both Wi-Fi and a mobile network is a typical use case. In addition to the gains in throughput from inverse multiplexing, links may be added or dropped as the user moves in or out of coverage without disrupting the end-to-end TCP connection. The problem of link handover is thus solved by abstraction in the transport layer, without any special mechanisms at the network or link level.

    Handover functionality can then be implemented at the endpoints without requiring special functionality in the sub-networks, in accordance with the Internet's end-to-end principle. Multipath TCP can balance a single TCP connection across multiple interfaces and reach very high throughput.
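    As a rough sketch of the inverse-multiplexing idea described above (the round-robin scheduler and path names are simplifications; real MPTCP schedulers also weigh RTT and congestion windows):

```python
def stripe_segments(data, mss, paths):
    """Split the byte stream into MSS-sized segments carrying
    connection-level sequence numbers, then stripe them across
    subflows round-robin."""
    segments = [(seq, data[seq:seq + mss]) for seq in range(0, len(data), mss)]
    subflows = {p: [] for p in paths}
    for i, seg in enumerate(segments):
        subflows[paths[i % len(paths)]].append(seg)
    return subflows

def reassemble(subflows):
    """The receiver merges per-subflow segments by the shared sequence
    space, so the application sees one ordered byte stream."""
    all_segs = [seg for segs in subflows.values() for seg in segs]
    return b"".join(payload for _, payload in sorted(all_segs))

data = b"multipath tcp keeps one ordered stream over many paths"
flows = stripe_segments(data, mss=8, paths=["wifi", "lte"])
assert reassemble(flows) == data
```

    The shared sequence space is what makes handover transparent: if one subflow disappears, its unacknowledged segments can be resent on another path without the application noticing.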

  • 168.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On Descriptive and Predictive Models for Serial Crime Analysis2014Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Law enforcement agencies regularly collect crime scene information. There exists, however, no detailed, systematic procedure for this. The data collected is affected by the experience or current condition of law enforcement officers. Consequently, the data collected might differ vastly between crime scenes. This is especially problematic when investigating volume crimes. Law enforcement officers regularly do manual comparison on crimes based on the collected data. This is a time-consuming process; especially as the collected crime scene information might not always be comparable. The structuring of data and introduction of automatic comparison systems could benefit the investigation process. This thesis investigates descriptive and predictive models for automatic comparison of crime scene data with the purpose of aiding law enforcement investigations. The thesis first investigates predictive and descriptive methods, with a focus on data structuring, comparison, and evaluation of methods. The knowledge is then applied to the domain of crime scene analysis, with a focus on detecting serial residential burglaries. This thesis introduces a procedure for systematic collection of crime scene information. The thesis also investigates impact and relationship between crime scene characteristics and how to evaluate the descriptive model results. The results suggest that the use of descriptive and predictive models can provide feedback for crime scene analysis that allows a more effective use of law enforcement resources. Using descriptive models based on crime characteristics, including Modus Operandi, allows law enforcement agents to filter cases intelligently. Further, by estimating the link probability between cases, law enforcement agents can focus on cases with higher link likelihood. This would allow a more effective use of law enforcement resources, potentially allowing an increase in clear-up rates.

  • 169.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Clustering Residential Burglaries Using Modus Operandi and Spatiotemporal Information2016In: International Journal of Information Technology and Decision Making, ISSN 0219-6220, Vol. 15, no 1, p. 23-42Article in journal (Refereed)
    Abstract [en]

    To identify series of residential burglaries, detecting linked crimes performed by the same constellations of criminals is necessary. Comparison of crime reports today is difficult, as crime reports traditionally have been written as unstructured text and often lack a common information basis. Based on a novel process for collecting structured crime scene information, the present study investigates the use of clustering algorithms to group similar crime reports based on combined crime characteristics from the structured form. Clustering quality is measured using Connectivity and the Silhouette index (SI), stability using the Jaccard index, and accuracy using the Rand index (RI) and a Series Rand index (SRI). The performance of clustering using the combined characteristics was compared with that of the spatial characteristic alone. The results suggest that the combined characteristics perform better than or similar to the spatial characteristic. In terms of practical significance, the presented clustering approach is capable of clustering cases using a broader decision basis.
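    Of the quality measures named above, the Silhouette index is simple to sketch. This toy version (plain Python with Euclidean distances, not the paper's implementation) averages each point's (b - a) / max(a, b) score, where a is mean distance to the point's own cluster and b to the nearest other cluster:

```python
from math import dist

def silhouette_index(points, labels):
    """Mean silhouette over all points, with Euclidean distances.
    Values near 1 indicate tight, well-separated clusters."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:                       # singleton cluster: define s = 0
            scores.append(0.0)
            continue
        a = sum(dist(p, q) for q in own) / len(own)                       # cohesion
        b = min(sum(dist(p, q) for q in members) / len(members)
                for other, members in clusters.items() if other != l)     # separation
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```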

  • 170.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eliasson, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Detecting Crime Series Based on Route Estimation and Behavioral Similarity2017In: 2017 EUROPEAN INTELLIGENCE AND SECURITY INFORMATICS CONFERENCE (EISIC) / [ed] Brynielsson, J, IEEE , 2017, p. 1-8Conference paper (Refereed)
    Abstract [en]

    A majority of crimes are committed by a minority of offenders. Previous research has provided some support for the theory that serial offenders leave behavioral traces at the crime scene which could be used to link crimes to serial offenders. The aim of this work is to investigate to what extent it is possible to use geographic route estimations and behavioral data to detect serial offenders. Experiments were conducted using behavioral data from authentic burglary reports to investigate whether it was possible to find crime routes with high similarity. Further, burglary reports from serial offenders were used to investigate to what extent it was possible to detect serial offenders' crime routes. The results show that crime series with the same offender on average had a higher behavioral similarity than random crime series. Sets of crimes with high similarity, but without a known offender, would be interesting for law enforcement to investigate further. The algorithm is also evaluated on 9 crime series containing a maximum of 20 crimes per series. The results suggest that it is possible to detect crime series with high similarity using analysis of both geographic routes and behavioral data recorded at crime scenes.
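    The behavioral similarity measure is not specified in the abstract; one common, hypothetical choice for set-valued crime scene characteristics is Jaccard similarity, averaged over all pairs in a candidate series:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two behavioural feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def series_similarity(series):
    """Mean pairwise Jaccard over all crime pairs in a candidate series."""
    pairs = list(combinations(series, 2))
    return sum(jaccard(x, y) for x, y in pairs) / len(pairs)

# Hypothetical MO feature sets for three burglaries
crime1 = {"window entry", "night", "tools brought", "jewellery stolen"}
crime2 = {"window entry", "night", "tools brought", "electronics stolen"}
crime3 = {"door forced", "daytime", "no tools"}
assert jaccard(crime1, crime2) > jaccard(crime1, crime3)
```

    A series sharing an offender would then be expected to score higher on `series_similarity` than a randomly sampled set of crimes, mirroring the finding reported above.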

  • 171.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eliasson, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Detecting Crime Series Based on Route Estimation and Behavioral Similarity2017Conference paper (Refereed)
    Abstract [en]

    A majority of crimes are committed by a minority of offenders. Previous research has provided some support for the theory that serial offenders leave behavioral traces at the crime scene which could be used to link crimes to serial offenders. The aim of this work is to investigate to what extent it is possible to use geographic route estimations and behavioral data to detect serial offenders. Experiments were conducted using behavioral data from authentic burglary reports to investigate whether it was possible to find crime routes with high similarity. Further, burglary reports from serial offenders were used to investigate to what extent it was possible to detect serial offenders' crime routes. The results show that crime series with the same offender on average had a higher behavioral similarity than random crime series. Sets of crimes with high similarity, but without a known offender, would be interesting for law enforcement to investigate further. The algorithm is also evaluated on 9 crime series containing a maximum of 20 crimes per series. The results suggest that it is possible to detect crime series with high similarity using analysis of both geographic routes and behavioral data recorded at crime scenes.

  • 172.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Melander, Ulf
    Boeva, Veselka
    Detecting serial residential burglaries using clustering2014In: Expert Systems with Applications, ISSN 0957-4174, Vol. 41, no 11, p. 5252-5266Article in journal (Refereed)
    Abstract [en]

    According to the Swedish National Council for Crime Prevention, law enforcement agencies solved approximately three to five percent of the reported residential burglaries in 2012. Internationally, studies suggest that a large proportion of crimes are committed by a minority of offenders. Law enforcement agencies, consequently, are required to detect series of crimes, or linked crimes. Comparison of crime reports today is difficult, as no systematic or structured way of reporting crimes exists, and no ability to search multiple crime reports exists. This study presents a systematic data collection method for residential burglaries. A decision support system for comparing and analysing residential burglaries is also presented. The decision support system consists of an advanced search tool and a plugin-based analytical framework. In order to find similar crimes, law enforcement officers have to review a large number of crimes. The potential use of the cut-clustering algorithm to group crimes based on characteristics, in order to reduce the number of crimes to review in residential burglary analysis, is investigated. The characteristics used are modus operandi, residential characteristics, stolen goods, spatial similarity, or temporal similarity. Clustering quality is measured using the modularity index and accuracy is measured using the Rand index. The clustering solution with the best quality performance used residential characteristics, spatial proximity, and modus operandi, suggesting that the choice of which characteristics to use when grouping crimes can positively affect the end result. The results suggest that a high-quality clustering solution performs significantly better than a random guesser. In terms of practical significance, the presented clustering approach is capable of reducing the number of cases to review while keeping most connected cases. While the approach might miss some connections, it is also capable of suggesting new connections. The results also suggest that while crime series clustering is feasible, further investigation is needed.
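    The Rand index used for accuracy above measures pairwise agreement between two clusterings; a minimal sketch (not the paper's code):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of point pairs on which two clusterings agree:
    a pair counts if both clusterings put it together, or both apart."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

    Note that the index is invariant to cluster relabelling: `rand_index([0,0,1,1], [1,1,0,0])` is 1.0, since both groupings pair the same cases together.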

  • 173.
    Borg, Markus
    et al.
    RISE SICS AB, SWE.
    Alegroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Runeson, Per
    Lunds Universitet, SWE.
    Software Engineers' Information Seeking Behavior in Change Impact Analysis: An Interview Study2017In: IEEE International Conference on Program Comprehension, IEEE Computer Society , 2017, p. 12-22Conference paper (Refereed)
    Abstract [en]

    Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to support information seeking are task-specific; thus, understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and development artifacts that are not source code. We show that engineers have different information seeking behaviors, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support rather than formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to diverse information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by empowering several seeking alternatives, including searching, browsing, and tracing. © 2017 IEEE.

  • 174.
    Borg, Markus
    et al.
    SICS Swedish ICT AB, SWE.
    Luis de la Vara, Jose
    Univ Carlos III Madrid, ESP.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Practitioners' Perspectives on Change Impact Analysis for Safety-Critical Software: A Preliminary Analysis2016In: COMPUTER SAFETY, RELIABILITY, AND SECURITY, SAFECOMP 2016, 2016, p. 346-358Conference paper (Refereed)
    Abstract [en]

    Safety standards prescribe change impact analysis (CIA) during evolution of safety-critical software systems. Although CIA is a fundamental activity, there is a lack of empirical studies about how it is performed in practice. We present a case study on CIA in the context of an evolving automation system, based on 14 interviews in Sweden and India. Our analysis suggests that engineers on average spend 50-100 h on CIA per year, but the effort varies considerably with the phases of projects. Also, the respondents presented different connotations to CIA and perceived the importance of CIA differently. We report the most pressing CIA challenges, and several ideas on how to support future CIA. However, we show that measuring the effect of such improvement solutions is non-trivial, as CIA is intertwined with other development activities. While this paper only reports preliminary results, our work contributes empirical insights into practical CIA.

  • 175.
    Borg, Markus
    et al.
    SICS Swedish ICT AB, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Regnell, Björn
    Lund University, SWE.
    Runeson, Per
    Lund University, SWE.
    Supporting Change Impact Analysis Using a Recommendation System: An Industrial Case Study in a Safety-Critical Context2017In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 43, no 7, p. 675-700Article in journal (Refereed)
    Abstract [en]

    Change Impact Analysis (CIA) during software evolution of safety-critical systems is a labor-intensive task. Several authors have proposed tool support for CIA, but very few tools were evaluated in industry. We present a case study on ImpRec, a recommendation system for software engineering (RSSE), tailored for CIA at a process automation company. ImpRec builds on assisted tracing, using information retrieval solutions and mining software repositories to recommend development artifacts potentially impacted when resolving incoming issue reports. In contrast to the majority of tools for automated CIA, ImpRec explicitly targets development artifacts that are not source code. We evaluate ImpRec in a two-phase study. First, we measure the correctness of ImpRec’s recommendations by a simulation based on 12 years’ worth of issue reports in the company. Second, we assess the utility of working with ImpRec by deploying the RSSE in two development teams on different continents. The results suggest that ImpRec presents about 40 percent of the true impact among the top-10 recommendations. Furthermore, user log analysis indicates that ImpRec can support CIA in industry, and developers acknowledge the value of ImpRec in interviews. In conclusion, our findings show the potential of reusing traceability associated with developers’ past activities in an RSSE.
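    The "about 40 percent of the true impact among the top-10 recommendations" figure corresponds to recall@10; a minimal sketch with hypothetical artifact IDs:

```python
def recall_at_k(recommended, true_impact, k=10):
    """Share of the true impact set that appears among the top-k
    recommendations (the metric behind 'x% of true impact in top-10')."""
    hits = set(recommended[:k]) & set(true_impact)
    return len(hits) / len(true_impact)

# Hypothetical ranked recommendations and ground-truth impact set
recs = ["req-12", "test-4", "design-7", "req-3", "test-9"]
truth = {"req-12", "design-7", "req-44", "test-2", "req-3"}
assert recall_at_k(recs, truth, k=5) == 0.6   # 3 of 5 impacted artifacts found
```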

  • 176.
    Bouhennache, Rafik
    et al.
    Science and Technology Institute, University Center of Mila, DZA.
    Bouden, Toufik
    Mohammed Seddik Ben Yahia University of Jijel, DZA.
    Taleb-Ahmed, Abdmalik
    University of Valenciennes, FRA.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A new spectral index for the extraction of built-up land features from Landsat 8 satellite imagery2018In: Geocarto International, ISSN 1010-6049, E-ISSN 1752-0762Article in journal (Refereed)
    Abstract [en]

    Extracting built-up areas from remote sensing data such as Landsat 8 satellite imagery is a challenge. We have investigated it by proposing a new index, referred to as the Built-up Land Features Extraction Index (BLFEI). The BLFEI index has the advantage of simplicity and good separability between the four major components of the urban system, namely built-up, barren, vegetation, and water. The histogram overlap method and the Spectral Discrimination Index (SDI) are used to study separability. The BLFEI index uses the two shortwave-infrared bands and the red and green bands of the visible spectrum. OLI imagery of Algiers, Algeria, was used to extract built-up areas through BLFEI and some previously developed built-up indices used for comparison. Water areas are masked out, and Otsu's thresholding algorithm is then applied to automatically find the optimal value for extracting built-up land from the waterless regions. BLFEI, the new index, improved separability by 25% and accuracy by 5%.
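    The BLFEI formula itself is not reproduced in the abstract, so it is not sketched here. The Otsu thresholding step that binarises the index image, however, is a standard algorithm and can be illustrated on integer-valued pixels:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method on integer-valued pixels in [0, levels): choose the
    threshold that maximises the between-class variance of the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_var, best_t = -1.0, 0
    for t in range(levels):
        w_bg += hist[t]               # background weight up to level t
        if w_bg == 0:
            continue
        w_fg = total - w_bg           # foreground weight above level t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

    Applied to a bimodal index image (after masking water), pixels above the returned threshold would be labelled built-up.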

  • 177.
    Bowin, Hampus
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Johansson, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Scalability of the Bitcoin and Nano protocols: a comparative analysis2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In the past year, cryptocurrencies have gained a lot of attention because of the increase in price. This attention has increased the number of people trading and investing in different cryptocurrencies, which has led to an increased number of transactions flowing through the different networks. This has revealed scalability issues in some of them, especially in the most popular cryptocurrency, Bitcoin. Many people are working on solutions to this problem. One proposed solution replaces the blockchain with a DAG structure. In this report, the scalability of Bitcoin’s protocol is compared to the scalability of the protocol used in the newer cryptocurrency Nano. The comparison is conducted in terms of throughput and latency. To perform this comparison, an experiment was conducted where tests were run with an increasing number of nodes, and each test sent a different number of transactions per second from every node. Our results show that Nano’s protocol scales better regarding both throughput and latency, and we argue that the reason for this is that the Bitcoin protocol uses a blockchain as a global data structure, unlike Nano, which uses a block-lattice structure where each node has its own local blockchain.
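    The abstract measures scalability in terms of throughput and latency; a minimal sketch of how both could be computed from per-transaction send/confirm timestamps (the trace below is hypothetical):

```python
def throughput_and_latency(records):
    """records: (send_time, confirm_time) per transaction, in seconds.
    Throughput = confirmed tx / observation window;
    latency = mean confirmation delay."""
    sends = [s for s, _ in records]
    confirms = [c for _, c in records]
    window = max(confirms) - min(sends)
    throughput = len(records) / window
    latency = sum(c - s for s, c in records) / len(records)
    return throughput, latency

# hypothetical trace: 4 transactions confirmed over a 2-second window
trace = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.6), (1.5, 2.0)]
tput, lat = throughput_and_latency(trace)
assert tput == 2.0                  # 4 tx / 2.0 s
assert abs(lat - 0.525) < 1e-9      # mean of 0.5, 0.5, 0.6, 0.5
```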

  • 178.
    BRAMAH-LAWANI, ALEX
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    REQUIREMENTS ELICITATION AND SPECIFICATION FOR HAPTIC INTERFACES FOR VISUALLY IMPAIRED USERS2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 179.
    Brask, Jessica
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Hedberg, Frida
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Recensera mig du kåta man2016Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this bachelor essay we have made a participant observation on a website where sexual services are for sale. This website contains a forum where sex buyers and sex sellers discuss their thoughts about sex and the purchase of sexual services. Sex buyers often present themselves as victims of their own extreme sexual cravings. They argue that as long as there are sexual services for sale, there is going to be a demand to buy them. We notice that there is an acceptance among sex buyers of having a temporary sexual relation in exchange for money, and that these so-called "intemperately horny men" tend to protect one another. We argue, in contrast to the sex buyers, that as long as there is an existing "craving" for sexual services, the phenomenon of sexual trafficking will continue, much because many victims of sexual trafficking are being sold as if it were by their own free will. Because of this, we will in this bachelor essay problematize sexual trafficking as a consequence of sex buyers' actions. As media producers we would like to use pictures to shed light on the consequences of sexual trafficking and at the same time emphasize this through provocative internet activism.

  • 180.
    Bredemo, Fredrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Hörbarhet i praktiken: En Actor Network analys av arbetet kring hörbarhet2015Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The television industry is a huge industry governed by a few big broadcasting networks. The biggest distributor and producer of television in Sweden is Sveriges Television (SVT), and anyone working with sound will, more likely than not, work for them in a project at some point.

    In this study I have analyzed the results of two months of empirical work and identified the actors that make up the network “God hörbarhet” (good audibility). This is interesting because it helps determine a more solid definition of the term “good audibility”.

    I have then put this more solid definition up against SVT’s loose definition, with the sole aim of expanding on their current system and delivery specifications.

  • 181.
    BRHANIE, BEKALU MULLU
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Multi-Label Classification Methods for Image Annotation2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 182. Brik, Bouziane
    et al.
    Lagraa, Nasreddine
    Abderrahmane, Lakas
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    DDGP: Distributed Data Gathering Protocol for vehicular networks2016In: Vehicular Communications, ISSN 2214-2096, Vol. 4, p. 15-29Article in journal (Refereed)
    Abstract [en]

    Vehicular Ad-Hoc Networks (VANets) are an emerging research area offering a wide range of applications, including safety, road traffic efficiency, and infotainment applications. Recently, researchers have been studying the possibility of making use of deployed VANet applications for data collection. In this case, vehicles are considered as mobile collectors that gather both real-time and delay-tolerant data and deliver them to interested entities. In this paper, we propose a novel Distributed Data Gathering Protocol (DDGP) for the collection of delay-tolerant as well as real-time data in both urban and highway environments. The main contribution of DDGP is a new medium access technique that enables vehicles to access the channel in a distributed way based on their location information. In addition, DDGP implements a new aggregation scheme, which deletes redundant, expired, and undesired data. We provide an analytical proof of correctness of DDGP, in addition to a performance evaluation through an extensive set of simulation experiments. Our results indicate that DDGP enhances the efficiency and the reliability of the data collection process by outperforming existing schemes in terms of several criteria, such as delay and message overhead, aggregation ratio, and data retransmission rate. (C) 2016 Elsevier Inc. All rights reserved.
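    The aggregation scheme deletes redundant, expired, and undesired data; a hedged sketch of such a filter (field names, the TTL rule, and the "same type and location" notion of redundancy are assumptions, not the paper's exact scheme):

```python
def aggregate(readings, now, ttl, wanted_types):
    """Filter a batch of sensed records the way a DDGP-style aggregator
    might: drop expired records (older than ttl), drop undesired types,
    and collapse duplicates (same type and location) to the freshest copy."""
    fresh = [r for r in readings
             if now - r["time"] <= ttl and r["type"] in wanted_types]
    best = {}
    for r in sorted(fresh, key=lambda r: r["time"]):
        best[(r["type"], r["loc"])] = r   # a fresher record overwrites an older one
    return list(best.values())

readings = [
    {"type": "speed", "loc": "A", "time": 100, "val": 50},
    {"type": "speed", "loc": "A", "time": 110, "val": 55},  # redundant, fresher
    {"type": "speed", "loc": "B", "time": 10,  "val": 40},  # expired
    {"type": "fuel",  "loc": "A", "time": 115, "val": 9},   # undesired type
]
kept = aggregate(readings, now=120, ttl=60, wanted_types={"speed"})
assert len(kept) == 1 and kept[0]["val"] == 55
```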

  • 183. Brik, Bouziane
    et al.
    Lagraa, Nasreddine
    Lakas, Abderrahmane
    Cherroun, Hadda
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    ECDGP: extended cluster-based data gathering protocol for vehicular networks2015In: Wireless Communications & Mobile Computing, ISSN 1530-8669, E-ISSN 1530-8677Article in journal (Refereed)
  • 184.
    Brisland, Karl
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Användarupplevelse på webben under skiftande tekniska förutsättningar: en fallstudie av ett implementationsprojekt2018Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    This report documents the development of an interactive timeline in web format, with high demands on appearance, function, visual effects, and a consistent user experience across different platforms. The implementation was carried out on the basis of theoretical guidelines obtained through literature studies, and the outcome was then evaluated partly by assessing its compliance with these guidelines and partly through a user study involving a number of different devices. The results of the user study were then used to assess the practical relevance of the theoretical foundation. The end product is a well-functioning web application that, with certain concessions, behaves consistently on all tested devices and satisfies the theoretical recommendations well, primarily from design and performance perspectives. The user tests revealed certain weaknesses, which were addressed as far as possible, and showed a very strong desire among users to control navigation and information gathering themselves. The tests also showed that the identified guidelines were both valuable and useful, including with regard to how the user study itself should be carried out. The project resulted in a well-founded best practice and valuable insights into user behavior, in particular the use of attention-grabbing elements in the design, and clearly demonstrated the great added value that user testing can provide. Finally, it is noted that, owing to technical limitations, it is not always possible to achieve complete consistency in user experience under varying conditions.

  • 185.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects2015Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with development of software in a globally distributed fashion. There is evidence that these challenges affect many processes related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies gathering evidence on effort estimation in the GSE context. In addition, there is no common terminology for classifying GSE scenarios focusing on effort estimation.

    Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field.

    Method: Systematic literature review (to identify and analyze the state of the art), survey (to identify and analyze the state of the practice), systematic mapping (to identify practices to design software engineering taxonomies), and literature survey (to complement the states of the art and practice) were the methods employed in this thesis.

    Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. time, geographical and socio-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the conducted mapping study, we reported a method that can be used to design new SE taxonomies. The aforementioned results were combined to extend and specialize an existing GSE taxonomy to make it suitable for effort estimation. The usage of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects. The results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects focusing on effort estimation.

    Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; researchers and practitioners will be able to gather evidence, compare new studies and find new gaps in an easier way. The findings from this thesis show that more research must be conducted on effort estimation in the GSE context. For example, the way the cost drivers are measured should be further investigated. It is also necessary to conduct further research to clarify the role and impact of sourcing strategies on the effort estimates’ accuracies. Finally, we believe that it is possible to design an instrument based on the specialized GSE effort estimation taxonomy that helps practitioners to perform the effort estimation process in a way tailored for the specific needs of the GSE context.

  • 186.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Strategizing and Evaluating the Onboarding of Software Developers in Large-Scale Globally Distributed Legacy Projects (2017). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Recruitment and onboarding of software developers are essential steps in software development undertakings. The need for adding new people is often associated with large-scale long-living projects and globally distributed projects. The former are challenging because they may contain large amounts of legacy (and often complex) code (legacy projects). The latter are challenging because the inability to find sufficient resources in-house may lead to onboarding people at a distance, often across many distinct sites. While onboarding is of great importance for companies, there is little research about the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed projects with large amounts of legacy code. Furthermore, no study has proposed any systematic approaches to support the design of onboarding strategies and evaluation of onboarding results in the aforementioned context.

    Objective: The aim of this thesis is two-fold: i) identify the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed legacy projects; and ii) propose solutions to support the design of onboarding strategies and evaluation of onboarding results in large-scale globally distributed legacy projects.

    Method: In this thesis, we employed literature review, case study, and business process modeling. The main case investigated in this thesis is the development of a legacy telecommunication software product in Ericsson.

    Results: The results show that the performance (productivity, autonomy, and lead time) of new developers/teams onboarded in remote locations in large-scale distributed legacy projects is much lower than the performance of mature teams. This suggests that new teams have a considerable performance gap to overcome. Furthermore, we learned that onboarding problems can be amplified by the following challenges: the complexity of the product and technology stack, distance to the main source of product knowledge, lack of team stability, training expectation misalignment, and lack of formalism and control over onboarding strategies employed in different sites of globally distributed projects. To help companies address the challenges identified in this thesis, we propose a process to support the design of onboarding strategies and the evaluation of onboarding results.

    Conclusions: The results show that scale, distribution and complex legacy code may make onboarding more difficult and demand longer periods of time for new developers and teams to achieve high performance. This means that onboarding in large-scale globally distributed legacy projects must be planned well ahead and companies must be prepared to provide extended periods of mentoring by expensive and scarce resources, such as software architects. Failure to foresee and plan such resources may result in inaccurate effort estimates on the one hand, and unavailability of mentors on the other. The process put forward herein can help companies to deal with the aforementioned problems through more systematic, effective and repeatable onboarding strategies.

  • 187.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cruzes, Daniella
    SINTEF Digital, NOR.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šāblis, Aivars
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Onboarding Software Developers and Teams in Three Globally Distributed Legacy Projects: A Multi-Case Study (2018). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 30, no 4, article id e1921. Article in journal (Refereed)
    Abstract [en]

    Onboarding is the process of supporting new employees regarding their social and performance adjustment to their new job. Software companies have faced challenges with recruitment and onboarding of new team members, and there is no study that investigates this in a holistic way. In this paper, we conducted a multi-case study to investigate the onboarding of software developers/teams, associated challenges, and areas for further improvement in three globally distributed legacy projects. We employed Bauer's model for onboarding to identify the current state of the onboarding strategies employed in each case. We learned that the employed strategies are semi-formalized. Besides, in projects with multiple sites, some functions are executed locally and the onboarding outcomes may be hard to control. We also learned that onboarding in legacy projects is especially challenging and that decisions to distribute such projects across multiple locations should be approached carefully. In our cases, the challenges to learn legacy code were further amplified by the project scale and the distance to the original sources of knowledge. Finally, we identified practices that can be used by companies to increase the chances of being successful when onboarding software developers and teams in globally distributed legacy projects.

  • 188.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Freitas, Vitor
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort Estimation in Global Software Development: A Systematic Literature Review (2014). In: Proceedings of the 2014 9th IEEE International Conference on Global Software Engineering, 2014, p. 135-144. Conference paper (Refereed)
    Abstract [en]

    Nowadays, software systems are a key factor in the success of many organizations, as in most cases they play a central role in helping them attain a competitive advantage. However, despite their importance, software systems may be quite costly to develop, substantially decreasing companies’ profits. In order to tackle this challenge, many organizations look for ways to decrease costs and increase profits by applying new software development approaches, like Global Software Development (GSD). Some aspects of a software project, like communication, cooperation and coordination, are more challenging in globally distributed than in co-located projects, since language, cultural and time zone differences are factors which can increase the effort required to perform a software project globally. Communication, coordination and cooperation aspects directly affect the effort estimation of a project, which is one of the critical tasks related to the management of a software development project. There are many studies related to effort estimation methods/techniques for co-located projects. However, there is evidence that the co-located approaches do not fit GSD. So, this paper presents the results of a systematic literature review of effort estimation in the context of GSD, which aimed at helping both researchers and practitioners to have a holistic view of the current state of the art regarding effort estimation in the context of GSD. The results suggest that there is room to improve the current state of the art on effort estimation in GSD.

  • 189.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An Empirical Investigation on Effort Estimation in Agile Global Software Development (2015). In: Proceedings of the 2015 IEEE 10th International Conference on Global Software Engineering, 2015, p. 38-45. Conference paper (Refereed)
    Abstract [en]

    Effort estimation is a project management activity that is mandatory for the execution of software projects. Despite its importance, there have been just a few studies published on such activities within the Agile Global Software Development (AGSD) context. Their aggregated results were recently published as part of a secondary study that reported the state of the art on effort estimation in AGSD. This study aims to complement the above-mentioned secondary study by means of an empirical investigation on the state of the practice on effort estimation in AGSD. To do so, a survey was carried out using as instrument an on-line questionnaire and a sample comprising software practitioners experienced in effort estimation within the AGSD context. Results show that the effort estimation techniques used within the AGSD and collocated contexts remained unchanged, with planning poker being the one employed the most. Sourcing strategies were found to have no or a small influence upon the choice of estimation techniques. With regard to effort predictors, global challenges such as cultural and time zone differences were reported, in addition to factors that are commonly considered in the collocated context, such as team experience. Finally, many challenges that impact the accuracy of the effort estimates were reported by the respondents, such as problems with the software requirements and the fact that the communication effort between sites is not properly accounted for.

  • 190.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Specialized Global Software Engineering Taxonomy for Effort Estimation (2016). In: International Conference on Global Software Engineering, IEEE Computer Society, 2016, p. 154-163. Conference paper (Refereed)
    Abstract [en]

    To facilitate the sharing and combination of knowledge by Global Software Engineering (GSE) researchers and practitioners, the need for a common terminology and knowledge classification scheme has been identified, and as a consequence, a taxonomy and an extension were proposed. In addition, a systematic literature review and a survey on the state of the art and the state of the practice of effort estimation in GSE, respectively, were conducted, showing that despite its importance in practice, the GSE effort estimation literature is scarce and reported in an ad-hoc way. Therefore, this paper proposes a specialized GSE taxonomy for effort estimation, which was built on the recently proposed general GSE taxonomy (including the extension) and was also based on the findings from two empirical studies and expert knowledge. The specialized taxonomy was validated using data from eight finished GSE projects. Our effort estimation taxonomy for GSE can help both researchers and practitioners by supporting the reporting of new GSE effort estimation studies, i.e. new studies will be easier to identify, compare, aggregate and synthesize. Further, it can also help practitioners by providing them with an initial set of factors that can be considered when estimating effort for GSE projects.

  • 191.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bloom's taxonomy in software engineering education: A systematic mapping study (2015). In: Frontiers in Education Conference (FIE), 2015, IEEE Communications Society, 2015, p. 392-399. Conference paper (Refereed)
    Abstract [en]

    Designing and assessing learning outcomes can be a challenging activity for any Software Engineering (SE) educator. To support the process of designing and assessing SE courses, educators have applied the cognitive domain of Bloom's taxonomy. However, to the best of our knowledge, the evidence on the usage of Bloom's taxonomy in SE higher education has not yet been systematically aggregated or reviewed. Therefore, in this paper we report the state of the art on the usage of Bloom's taxonomy in SE education, identified by conducting a systematic mapping study. As a result of the performed systematic mapping study, 26 studies were deemed relevant. The main findings from these studies are: i) Bloom's taxonomy has mostly been applied at the undergraduate level for both design and assessment of software engineering courses; ii) software construction is the leading SE subarea in which Bloom's taxonomy has been applied. The results clearly point out the usefulness of Bloom's taxonomy in the SE education context. We intend to use the results from this systematic mapping study to develop a set of guidelines to support the usage of Bloom's taxonomy cognitive levels to design and assess SE courses.

  • 192.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A TAXONOMY OF WEB EFFORT PREDICTORS (2017). In: Journal of Web Engineering, ISSN 1540-9589, E-ISSN 1544-5976, Vol. 16, no 7-8, p. 541-570. Article in journal (Refereed)
    Abstract [en]

    Web engineering as a field has emerged to address challenges associated with developing Web applications. It is known that the development of Web applications differs from the development of non-Web applications, especially regarding some aspects such as Web size metrics. The classification of existing Web engineering knowledge would be beneficial for both practitioners and researchers in many different ways, such as finding research gaps and supporting decision making. In the context of Web effort estimation, a taxonomy was proposed to classify the existing size metrics, and more recently a systematic literature review was conducted to identify aspects related to Web resource/effort estimation. However, there is no study that classifies Web predictors (both size metrics and cost drivers). The main objective of this study is to organize the body of knowledge on Web effort predictors by designing and using a taxonomy, aiming at supporting both research and practice in Web effort estimation. To design our taxonomy, we used a recently proposed taxonomy design method. As input, we used the results of a previously conducted systematic literature review (updated in this study), an existing taxonomy of Web size metrics and expert knowledge. We identified 165 unique Web effort predictors from a final set of 98 primary studies; they were used as one of the bases to design our hierarchical taxonomy. The taxonomy has three levels, organized into 13 categories. We demonstrated the utility of the taxonomy and body of knowledge by using examples. The proposed taxonomy can be beneficial in the following ways: i) it can help to identify research gaps and some literature of interest and ii) it can support the selection of predictors for Web effort estimation. We also intend to extend the taxonomy to include effort estimation techniques and accuracy metrics.
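    To illustrate how a hierarchical predictor taxonomy of this kind can support predictor selection in practice, the sketch below classifies a predictor by walking a two-level category tree. All category and predictor names here are invented for the example; they are not the paper's actual levels or 13 categories.

```python
# Toy two-level predictor taxonomy; categories and predictors are
# hypothetical illustrations, not the paper's actual taxonomy.
TAXONOMY = {
    "size metrics": {
        "length": ["number of web pages", "number of images"],
        "functionality": ["number of server-side scripts"],
    },
    "cost drivers": {
        "people": ["team experience"],
        "project": ["delivery constraints"],
    },
}

def classify(predictor):
    """Return the (level-1, level-2) taxonomy path of a predictor, or None."""
    for level1, subcategories in TAXONOMY.items():
        for level2, predictors in subcategories.items():
            if predictor in predictors:
                return (level1, level2)
    return None
```

    In a real application, the tree would be populated from the three levels and 165 predictors identified in the paper, so that looking up a predictor immediately yields its classification path.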

  • 193.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Effort Estimation in Agile Global Software Development Context (2014). In: Agile Methods. Large-Scale Development, Refactoring, Testing, and Estimation: XP 2014 International Workshops, Rome, Italy, May 26-30, 2014, Revised Selected Papers, Springer, 2014, Vol. 199, p. 182-192. Conference paper (Refereed)
    Abstract [en]

    Both Agile Software Development (ASD) and Global Software Development (GSD) are 21st century trends in the software industry. Many studies are reported in the literature wherein software companies have applied an agile method or practice GSD. Given that effort estimation plays a remarkable role in software project management, how do companies perform effort estimation when they use an agile method in a GSD context? Based on two effort estimation Systematic Literature Reviews (SLR) - one within the ASD context and the other in a GSD context - this paper reports a study in which we combined the results of these SLRs to report the state of the art of effort estimation in the Agile Global Software Development (AGSD) context.

  • 194.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Minhas, Nasir Mehmood
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A quasi-experiment to evaluate the impact of mental fatigue on study selection process (2017). In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2017, p. 264-269. Conference paper (Refereed)
    Abstract [en]

    Context: Existing empirical evidence indicates that loss of alertness associated with mental fatigue is highly correlated with fluctuations in the performance of people carrying out auditory tasks. In software engineering research, mental fatigue may affect the results of study selection (an auditory task) when conducting secondary studies such as systematic literature reviews or systematic mapping studies. However, to date there is no empirical study that reports an in-depth investigation of the relationship between mental fatigue and researchers' selection decisions during the study selection process. Objective: The main objective of this paper is to report the design and preliminary results of an investigation about the impact of mental fatigue on the study selection process of secondary studies. Method: We designed and piloted a quasi-experiment. Results: The preliminary results do not indicate that mental fatigue negatively impacts the correctness of selection decisions and confidence. However, it is important to note that the preliminary results are only based on six subjects. Conclusion: This paper brings awareness about the role of mental fatigue in the conduct of secondary studies. Although the preliminary results do not indicate any meaningful relationship, we believe that it is worthwhile to continue the research, by adding more subjects and revising the design of the reported quasi-experiment.

  • 195.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mendes, Emilia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Extended Global Software Engineering Taxonomy (2016). In: Journal of Software Engineering Research and Development, ISSN 2195-1721, Vol. 4, no 3. Article in journal (Refereed)
    Abstract [en]

    In Global Software Engineering (GSE), the need for a common terminology and knowledge classification has been identified to facilitate the sharing and combination of knowledge by GSE researchers and practitioners. A GSE taxonomy was recently proposed to address such a need, focusing on a core set of dimensions; however its dimensions do not represent an exhaustive list of relevant GSE factors. Therefore, this study extends the existing taxonomy, incorporating new GSE dimensions that were identified by means of two empirical studies conducted recently.

  • 196.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Damm, Lars-Ola
    Ericsson, SWE.
    Software Architects in Large-Scale Distributed Projects: An Ericsson Case Study (2016). In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 33, no 6, p. 48-55, article id 7725230. Article in journal (Refereed)
    Abstract [en]

    Software architects are key assets for successful development projects. However, not much research has investigated the challenges they face in large-scale distributed projects. So, researchers investigated how architects at Ericsson were organized, their roles and responsibilities, and the effort they spent guarding and governing a large-scale legacy product developed by teams at multiple locations. Despite recent trends such as microservices and agile development, Ericsson had to follow a more centralized approach to deal with the challenges of scale, distribution, and monolithic architecture of a legacy software product. So, the architectural decisions were centralized to a team of architects. The team extensively used code reviews to not only check the code's state but also reveal defects that could turn into maintainability problems. The study results also suggest that the effort architects spend designing architecture, guarding its integrity and evolvability, and mentoring development teams is directly related to team maturity. In addition, significant investment is needed whenever new teams and locations are onboarded.

  • 197.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lars-Ola, Damm
    Ericsson, SWE.
    Experiences from Measuring Learning and Performance in Large-Scale Distributed Software Development (2016). In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM Digital Library, 2016, article id 17. Conference paper (Refereed)
    Abstract [en]

    Background: Developers and development teams in large-scale software development are often required to learn continuously. Organizations also face the need to train and support new developers and teams on-boarded in ongoing projects. Although learning is associated with performance improvements, experience shows that training and learning do not always result in better performance, or that significant improvements may take too long.

    Aims: In this paper, we report our experiences from establishing an approach to measure learning results and associated performance impact for developers and teams in Ericsson.

    Method: Experiences reported herein are a part of an exploratory case study of an on-going large-scale distributed project in Ericsson. The data collected for our measurements included archival data and expert knowledge acquired through both unstructured and semi-structured interviews. While performing the measurements, we faced a number of challenges, documented in the form of lessons learned.

    Results: We aggregated our experience in eight lessons learned related to collection, preparation and analysis of data for further measurement of learning potential and performance in large-scale distributed software development.

    Conclusions: Measuring learning and performance is a challenging task. Major problems were related to data inconsistencies caused by, among other factors, distributed nature of the project. We believe that the documented experiences shared herein can help other researchers and practitioners to perform similar measurements and overcome the challenges of large-scale distributed software projects, as well as proactively address these challenges when establishing project measurement programs.

  • 198.
    Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lars-Ola, Damm
    Ericsson, SWE.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Learning and Performance Evolution of Immature Remote Teams in Large-Scale Software Projects: An Industrial Case Study. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025. Article in journal (Refereed)
    Abstract [en]

    Context: Large-scale distributed software projects with long life cycles often involve a considerable amount of complex legacy code. The combination of scale and distribution challenges, and the difficulty to acquire knowledge about large amounts of complex legacy code, may make the onboarding of new developers/teams problematic. This may lead to extended periods of low performance.

    Objective: The main objective of this paper is to analyze the learning processes and performance evolutions (team productivity and team autonomy) of remote software development teams added late to a large-scale legacy software product development, and to propose recommendations to support the learning of remote teams.

    Method: We conducted a case study in Ericsson, collecting data through archival research, semi-structured interviews, and workshops. We analyzed the collected data using descriptive, inferential and graphical statistics and soft qualitative analysis.

    Results: The results show that the productivity and autonomy of immature remote teams are on average 3.67 and 2.27 times lower than those of mature teams, respectively. Furthermore, their performance had a steady increase during almost the entire first year and dropped (productivity) or stagnated (autonomy) for a great part of the second year. In addition to these results, we also identified four challenges that affected the learning process and performance evolution of immature remote teams: complexity of the product and technology stack, distance to the main source of product knowledge, lack of team stability, and training expectation misalignment.

    Conclusion: The results indicate that scale, distribution and complex legacy code may make learning more difficult and demand a long period to achieve high performance. To support the learning of remote teams, we put forward five recommendations. We believe that our quantitative analysis, as well as the identified factors and recommendations, can help other companies to onboard new remote teams in large-scale legacy product development projects.

  • 199.
    Brodén, Alexander
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Pihl Bohlin, Gustav
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Towards Real-Time NavMesh Generation Using GPU Accelerated Scene Voxelization (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Producing NavMeshes for pathfinding in computer games is a time-consuming process. Recast and Detour is a pair of state-of-the-art libraries that allows automation of NavMesh generation. It builds on a technique called Scene Voxelization, where triangle geometry is converted to voxels in heightfields. The algorithm is expensive in terms of execution time. A fast voxelization algorithm could be useful in real-time applications where geometry is dynamic. In recent years, voxelization implementations on the GPU have been shown to outperform CPU implementations in certain configurations.

    Objectives. The objective of this thesis is to find a GPU-based alternative to Recast’s voxelization algorithm, and determine when the GPU-based solution is faster than the reference.

    Methods. This thesis proposes a GPU-based alternative to Recast’s voxelization algorithm, designed to be an interchangeable step in Recast’s pipeline, in a real-time application where geometry is dynamic. Experiments were conducted to show how accurately the algorithm generates heightfields, how fast the execution time is in certain configurations, and how the algorithm scales with different sets of input data.

    Results. The proposed algorithm, when run on an AMD Radeon RX 480 GPU, was shown to be both accurate and fast in certain configurations. At low voxelfield resolutions, it outperformed the reference algorithm on typical Recast reference models. The biggest performance gain was shown when the input contained large numbers of small triangles. The algorithm performs poorly when the input data has triangles that are big in relation to the size of the voxels, and an optional optimization was presented to address this issue. Another optimization was presented that further increases performance gain when many instances of the same mesh are voxelized.

    Conclusions. The objectives of the thesis were met. A fast, GPUbased algorithm for voxelization in Recast was presented, and conclusions about when it can outperform the reference algorithm were drawn. Possibilities for even greater performance gains were identified for future research.

  • 200.
    Bron, Mikael
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hantering av fysiska säkerhetsrisker – en kunskapsöversikt [Management of physical security risks – a knowledge overview] (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The competence to manage risks related to health, security, fire and safety is a sought-after skill. This is especially noticeable in both business and public administration job postings for the recruitment of managers, administrators or coordinators to security departments. At the same time there is little specialist literature available in Swedish on the subject of risk management in the context of protecting assets and people from physical security threats. The lack of literature affects the study of risk management from a physical and procedural security perspective, particularly at an academic level where this is a relatively new topic. To move forward and expand the field of knowledge is an important step, not only for the scientific community but also for the industry. This bachelor thesis attempts to be an initial but significant contribution to a topic that is likely to grow. By mapping what has already been published on the subject in English, as well as summing up and analyzing the scientific knowledge from similar disciplines, the thesis has also had an additional goal: to reach out with knowledge to those dealing with risk management in practice, and thus to raise their awareness and develop their professional skills.

    The purpose of this study is to present the current state of knowledge and at the same time to show the width and depth of the risk management process. This is done by identifying similarities and differences in definitions, process descriptions, problems and best practice of the studied areas, while at the same time accounting for any criticism offered against risk management as a concept. The results show that there are more similarities than differences in the risk management process and methods, regardless of whether the purpose is to protect people and assets from health hazards, crime, fire or accidents.

    The paper has been conducted as a descriptive literature study and a comparative textual analysis. The risk management process has been described with reference to the generic ISO standard (31000:2009, Risk management – Principles and guidelines). Also, ten common risk analysis methods that cover all steps in the risk assessment process have been described. The narrative and related analysis follow the same order as the ISO-standard process description. The material has been supplemented and compared with guidelines and scientific papers from three types of risk management contexts: (1) health hazards, (2) fire and safety, and (3) security.

    The paper also provides examples of the inconsistent use of terms and definitions both between and within different disciplines involved in risk management. One of the conclusions of the report is that creating a unified, universal terminology to be used in the security context is probably impossible as well as unnecessary. Instead, certain terminological misunderstandings can be avoided by providing clear definitions and explanations of their meaning in each particular case.
