151 - 200 of 1681
  • 151. Beyene, Ayne A.
    et al.
    Welemariam, Tewelle
    Persson, Marie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improved concept drift handling in surgery prediction and other applications, 2015. In: Knowledge and Information Systems, ISSN 0219-1377, Vol. 44, no 1, p. 177-196. Article in journal (Refereed)
    Abstract [en]

    The article presents a new algorithm for handling concept drift: the Trigger-based Ensemble (TBE) is designed to handle concept drift in surgery prediction, but it is shown to perform well on other classification problems as well. In primary care, queries about the need for surgical treatment are referred to a surgeon specialist. In secondary care, referrals are reviewed by a team of specialists. The possible outcomes of this review are that the referral: (i) is canceled, (ii) needs to be complemented, or (iii) is predicted to lead to surgery. In the third case, the referred patient is scheduled for an appointment with a surgeon specialist. This article focuses on the binary prediction of the third case (surgery prediction). The guidelines for the referral and its review change due to, e.g., scientific developments and shifts in clinical practice. Existing decision support is based on the expert-systems approach, which usually requires manual updates when clinical practice changes. In order to automatically revise decision rules, the occurrence of concept drift (CD) must be detected and handled. Existing CD handling techniques are often specialized; it is challenging to develop a more generic technique that performs well regardless of CD type. Experiments are conducted to measure the impact of CD on prediction performance and to reduce CD impact. The experiments evaluate and compare TBE to three existing CD handling methods (AWE, Active Classifier, and Learn++) on one real-world dataset and one artificial dataset. TBE significantly outperforms the other algorithms on both datasets but is less accurate on noisy synthetic variations of the real-world dataset.
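
    The trigger-based idea can be illustrated with a minimal sketch. This is not the paper's TBE algorithm (whose ensemble machinery is more involved); the toy learner, window size, threshold, and data layout are all invented for illustration. The core loop is: monitor accuracy over a sliding window of labelled examples and retrain when it drops.

```python
# Minimal, illustrative trigger-based drift handling: NOT the TBE
# algorithm from the paper, just the core "detect, then retrain" idea.
from collections import deque

class ThresholdClassifier:
    """Toy learner: predicts 1 when x exceeds a fitted cut point."""
    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        self.cut = (min(pos) + max(neg)) / 2 if pos and neg else 0.0
        return self

    def predict(self, x):
        return 1 if x > self.cut else 0

def run_stream(stream, window=30, threshold=0.7):
    """Process (x, y) pairs; retrain on the last `window` labelled
    examples whenever windowed accuracy falls below `threshold`.
    Returns the number of times the trigger fired."""
    recent = deque(stream[:window], maxlen=window)   # labelled history
    model = ThresholdClassifier().fit(*zip(*recent))
    hits, retrains = deque(maxlen=window), 0
    for x, y in stream[window:]:
        hits.append(model.predict(x) == y)           # evaluate before learning
        recent.append((x, y))
        if len(hits) == window and sum(hits) / window < threshold:
            model = ThresholdClassifier().fit(*zip(*recent))  # trigger fired
            hits.clear()
            retrains += 1
    return retrains
```

    On a stream whose labelling rule changes halfway through, the trigger fires at least once; on a stable stream it stays silent.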

  • 152.
    Bheri, Sujeet
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Vummenthala, SaiKeerthana
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An Introduction to the DevOps Tool Related Challenges, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Introduction: DevOps bridges the gap between development and operations by improving collaboration while automating as many steps as possible, from developing the software to releasing the product to customers. To automate software development activities, DevOps relies on tools. There are many challenges associated with tool implementation, such as choosing suitable tools and integrating new tools with existing tools and practices. There must be a clear understanding of what kinds of tools DevOps practitioners use and what challenges each tool creates for them.

    Objectives: The main aim of our study is to investigate the tool-related challenges faced by DevOps practitioners and to compare the findings with the related literature. Our contributions are (i) a comprehensive set of tools used by developers and operators in the software industry; (ii) tool-related challenges faced by practitioners; and (iii) suggested recommendations, and their effectiveness, for mitigating the above challenges.

    Methods: We adopted a case study approach to achieve our research objectives, with a literature review and semi-structured interviews as data collection methods. Results: In our study we identified seven tools used by developers and operators that were not reported in the literature, such as IntelliJ, Neo4j, and Postman. We identified tool-related challenges from the practitioners, such as difficulty in choosing suitable tools, lack of maturity in tools such as Git, and the effort of learning new tools. We also identified recommendations for addressing tool-related challenges, such as tech talks and seminars, and using complementary tools to overcome the limitations of other tools, as well as benefits related to the adoption of such recommendations.

    Conclusion: We expect the DevOps tool landscape to change as old tools either become more sophisticated or fall out of use, and new tools are developed to better support DevOps and integrate more easily with the deployment pipeline. With regard to tool-related challenges, both the literature review and the interviews show that a lack of knowledge about how to select appropriate tools, and the time it takes to learn DevOps practices, are common challenges. Regarding the suggested recommendations, the most feasible appears to be seminars and knowledge-sharing events that educate practitioners on how to use tools better and how to identify suitable tools.

  • 153.
    Bihl, Erik
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    The captivating use of silence in film: How silence affects the emotional aspect of cinema, 2017. Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis I use both Dense Clarity - Clear Density and qualitative interviewing as methods to guide me through this examination of sound design. Through studying other works and executing personal tests, I try to find out whether there is a need to use sound and silence in a creative way to evoke emotion. I examine films as well as literature from the 1960s all the way to the 2000s to see how the use of silence has unfolded over the years. I also create a visual production that strengthens my theory that silence affects narrative more than it is credited for. But the essay is not just about silence; it also revolves around sound, expanding into how sound correlates with emotion and how one can apply it to a production.

  • 154.
    Bilski, Mateusz
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Migration from blocking to non-blocking web frameworks, 2014. Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    The problem of performance and scalability of web applications challenges most software companies. It is difficult to maintain the performance of a web application while the number of users continuously increases. The common solution to this problem is scalability. A web application can handle incoming and outgoing requests using blocking or non-blocking Input/Output operations. The way a single server handles requests affects its ability to scale and depends on the web framework used to build the application. This is especially important for Resource Oriented Architecture (ROA) based applications, which consist of distributed Representational State Transfer (REST) web services. This research was inspired by a real problem stated by a software company that was considering migration to a non-blocking web framework but did not know the possible profits. The objective of the research was to evaluate the influence of the web framework's type on the performance of ROA based applications and to provide guidelines for assessing the profits of migrating from blocking to non-blocking JVM web frameworks. First, an internet ranking was used to obtain a list of the most popular web frameworks. Then, the web frameworks were used to conduct two experiments that investigated the influence of the web framework's type on the performance of ROA based applications. Next, consultations with software architects were arranged in order to find a method for approximating the performance of the overall application. Finally, the guidelines were prepared based on the consultations and the results of the experiments. Three blocking and non-blocking, highly ranked, JVM based web frameworks were selected. The first experiment showed that the non-blocking web frameworks can provide performance up to 2.5 times higher than blocking web frameworks in ROA based applications. The experiment performed on an existing application showed an average 27% performance improvement after the migration. The elaborated guidelines successfully convinced the company that provided the application for testing to conduct the migration in the production environment. The experiment results proved that migration from blocking to non-blocking web frameworks increases the performance of a web application. The prepared guidelines can help software architects decide whether a migration is worthwhile. However, the guidelines are context dependent, and further investigation is needed to make them more general.
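
    The core difference the thesis measures can be sketched in a few lines. This is an illustration only: asyncio stands in for a non-blocking web framework, and sleeps simulate I/O waits; no real JVM framework is involved.

```python
# Illustrative contrast between blocking and non-blocking request
# handling. time.sleep() simulates a 0.1 s I/O wait (e.g. a database
# call); asyncio plays the role of a non-blocking framework's event loop.
import asyncio
import time

def handle_blocking():
    time.sleep(0.1)           # the thread is stuck until the "I/O" finishes

async def handle_non_blocking():
    await asyncio.sleep(0.1)  # yields; the event loop serves other requests

def serve_blocking(n_requests):
    """One thread, requests handled strictly one after another."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_blocking()
    return time.perf_counter() - start

def serve_non_blocking(n_requests):
    """One thread, all waits overlap on the event loop."""
    async def main():
        await asyncio.gather(*(handle_non_blocking() for _ in range(n_requests)))
    start = time.perf_counter()
    asyncio.run(main())
    return time.perf_counter() - start
```

    With ten simulated requests, the blocking server needs roughly one second of wall-clock time while the non-blocking one finishes in roughly a tenth of that; real frameworks add their own overheads, which is why the thesis measures actual implementations.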

  • 155.
    bin Ali, Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Operationalization of lean thinking through value stream mapping with simulation and FLOW, 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: The continued success of Lean thinking beyond manufacturing has led to an increasing interest in utilizing it in software engineering (SE). Value Stream Mapping (VSM) had a pivotal role in the operationalization of Lean thinking. However, this has not been recognized in SE adaptations of Lean. Furthermore, there are two main shortcomings in existing adaptations of VSM for an SE context. First, the assessments of the potential of the proposed improvements are based on idealistic assertions. Second, the current VSM notation and methodology are unable to capture the myriad of significant information flows, which in software development go beyond just the schedule information about the flow of a software artifact through a process. Objective: This thesis seeks to assess Software Process Simulation Modeling (SPSM) as a solution to the first shortcoming of VSM. In this regard, guidelines to perform simulation-based studies in industry are consolidated, and the usefulness of VSM supported with SPSM is evaluated. To overcome the second shortcoming of VSM, a suitable approach for capturing rich information flows in software development is identified and its usefulness in supporting VSM is evaluated. Overall, an attempt is made to supplement existing guidelines for conducting VSM to overcome its known shortcomings and support the adoption of Lean thinking in SE. The usefulness and scalability of these proposals are evaluated in an industrial setting. Method: Three literature reviews, one systematic literature review, four industrial case studies, and a case study in an academic context were conducted as part of this research. Results: Little evidence to substantiate the claims of the usefulness of SPSM was found. Hence, prior to combining it with VSM, we consolidated the guidelines for conducting an SPSM-based study and evaluated the use of SPSM in academic and industrial contexts. In education, it was found to be a useful complement to other teaching methods, and in industry, it triggered useful discussions and was used to challenge practitioners’ perceptions about the impact of existing challenges and proposed improvements. The combination of VSM with FLOW (a method and notation to capture information flows, since existing VSM adaptations for SE are insufficient for this purpose) was successful in identifying challenges and improvements related to information needs in the process. Both proposals, supporting VSM with simulation and with FLOW, led to the identification of waste and improvements (which would not have been possible with conventional VSM), generated more insightful discussions and resulted in more realistic improvements. Conclusion: This thesis characterizes the context and shows how SPSM was beneficial in both industrial and academic contexts. FLOW was found to be a scalable, lightweight supplement to strengthen the information flow analysis in VSM. Through successful industrial application and uptake, this thesis provides evidence of the usefulness of the proposed improvements to the VSM activities.

  • 156.
    Bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Nicolau de Franca, Breno Bernard
    Univ Fed Rio de Janeiro, ESE Grp, PESC COPPE, BR-68511 Rio De Janeiro, Brazil..
    Evaluation of simulation-assisted value stream mapping for software product development: Two industrial cases, 2015. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 68, p. 45-61. Article in journal (Refereed)
    Abstract [en]

    Context: Value stream mapping (VSM) as a tool for lean development has led to significant improvements in different industries. In a few studies, it has been successfully applied in a software engineering context. However, some shortcomings have been observed, in particular a failure to capture the dynamic nature of the software process when evaluating improvements, i.e., improvements and target values are based on idealistic situations. Objective: To overcome the shortcomings of VSM by combining it with software process simulation modeling, and to provide reflections on the process of conducting VSM with simulation. Method: Using case study research, VSM was used for two products at Ericsson AB, Sweden. Ten workshops were conducted in this regard. Simulation in this study was used as a tool to support discussions instead of as a prediction tool. The results have been evaluated from the perspective of the participating practitioners, an external observer, and reflections of the researchers conducting the simulation as elicited by the external observer. Results: Significant constraints hindering the product development from reaching the stated improvement goals for shorter lead time were identified. The use of simulation was particularly helpful in having more insightful discussions and in challenging assumptions about the likely impact of improvements. However, simulation results alone were found insufficient to emphasize the importance of reducing waiting times and variations in the process. Conclusion: The framework to assist VSM with simulation presented in this study was successfully applied in two cases. The involvement of various stakeholders, consensus-building steps, emphasis on flow (through waiting time and variance analysis) and the use of simulation proposed in the framework led to realistic improvements with a high likelihood of implementation.

  • 157.
    bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review on the industrial use of software process simulation, 2014. In: Journal of Systems and Software, ISSN 0164-1212, Vol. 97. Article in journal (Refereed)
    Abstract [en]

    Context: Software process simulation modelling (SPSM) captures the dynamic behaviour and uncertainty in the software process. Existing literature has conflicting claims about its practical usefulness: SPSM is useful and has an industrial impact; SPSM is useful and has no industrial impact yet; SPSM is not useful and has little potential for industry. Objective: To assess the conflicting standpoints on the usefulness of SPSM. Method: A systematic literature review was performed to identify, assess and aggregate empirical evidence on the usefulness of SPSM. Results: In the primary studies, to date, the persistent trend is that of proof-of-concept applications of software process simulation for various purposes (e.g. estimation, training, process improvement, etc.). They score poorly on the stated quality criteria. Also, only a few studies report some initial evaluation of the simulation models for the intended purposes. Conclusion: There is a lack of conclusive evidence to substantiate the claimed usefulness of SPSM for any of the intended purposes. A few studies that report the cost of applying simulation do not support the claim that it is an inexpensive method. Furthermore, there is a paramount need for improvement in conducting and reporting simulation studies, with an emphasis on evaluation against the intended purpose.

  • 158.
    bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Use and evaluation of simulation for software process education: a case study, 2014. Conference paper (Refereed)
    Abstract [en]

    Software engineering is an applied discipline, and its concepts are difficult to grasp at a theoretical level alone. In the context of a project management course, we introduced and evaluated the use of software process simulation (SPS) based games for improving students’ understanding of software development processes. The effects of the intervention were measured by evaluating the students’ arguments for choosing a particular development process. The arguments were assessed with the Evidence-Based Reasoning framework, which was extended to assess the strength of an argument. The results indicate that students generally have difficulty providing strong arguments for their choice of process models. Nevertheless, the assessment indicates that the intervention of the SPS game had a positive impact on the students’ arguments. Even though the illustrated argument assessment approach can be used to provide formative feedback to students, its use is rather costly and cannot be considered a replacement for traditional assessments.

  • 159.
    Birgersson, Frida
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Onomatopoesi i skönlitteratur: gemensamma tolkningar av ljudhärmande ord [Onomatopoeia in fiction: shared interpretations of sound-imitating words], 2016. Independent thesis Basic level (university diploma), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Previous and recent studies show that onomatopoeia is present in fiction stories, but the studies lack valid information within the subject. More detailed research can be found within language studies. The purpose of this study is to shed light on onomatopoeia, how it is presented in fiction productions, and whether people share a mutual vision of sound-imitating words that might be used in these productions. The study was conducted with a qualitative method and a phenomenological perspective. 36 individuals participated in a questionnaire. A number of sounds were presented to the participants, who then had to type the sound as text. BONK and SPLOOSH showed a mutual vision, and the results were presented as tables. A product was created to unite the previous and recent studies with the results from the questionnaire. This resulted in an interactive book that, with the help of digital tools, creates an original product.

  • 160.
    Bitla, Krishna Sai
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Veesamsetty, Sairam Sagar
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Measuring Process Flow using Metrics in Agile Software Development, 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Software project management focuses on planning and executing the activities for developing software. Agile software project management helps to plan shorter iterations and accommodate frequent changes to customer requirements. Developing process flow metrics helps to monitor the process and to tune it for the given context.

    Objectives. The main objectives of the thesis are to identify process flow metrics and frameworks that are suitable for measuring process flow in Agile projects, especially projects with significant dependence on hardware components. Apart from the metrics identified in the literature, we identify the impact, challenges, and advantages of using agile models with the help of productivity and process flow metrics, implement them in a test-phase project, and compare the productivity of the agile model with the waterfall model.

    Methods. The thesis presents a two-step study. The first step was to perform a Systematic Literature Review (SLR) and collect from the literature the metrics that can be used for comparing the productivity of both processes. The second step was to conduct a case study at Volvo Cars to gain a better understanding of the impact of agile and of how process flow metrics can be used in real time for measurement and comparison.

    Results. In the first step, the SLR identified 363 metrics that can be used by software teams, of which 10 were suitable for the comparison in our case study, as required by the second step of the thesis. In the second step, in the first iteration after the transition, the team following the agile process achieved a 6.25% increase in productivity over the team following the traditional process. Several advantages of, and challenges faced during, the transition were identified, which might have affected the achieved productivity.

    Conclusions. We conclude from the results that metrics can be used as a tool to enhance the benefits of the Agile process. Process flow metrics are well suited to comparing differences in productivity between processes and to making improvements to current processes. The use of process flow metrics increases all team members' insight into the progress of the project and guides them to enhance team performance and stay on track with the project schedule.
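
    As a hedged illustration of what process flow metrics look like in practice (the work items, dates, and iteration length below are invented, not data from the Volvo Cars case), two common ones, lead time and throughput, can be computed directly from when items were started and finished:

```python
# Two basic process flow metrics computed from completed work items.
# The items and the 10-day iteration length are invented examples.
from datetime import date

items = [  # (started, finished) for work items completed in one iteration
    (date(2019, 3, 4), date(2019, 3, 8)),
    (date(2019, 3, 4), date(2019, 3, 12)),
    (date(2019, 3, 6), date(2019, 3, 13)),
]

lead_times = [(done - start).days for start, done in items]
avg_lead_time = sum(lead_times) / len(lead_times)  # days from start to finish

iteration_days = 10
throughput = len(items) / iteration_days           # completed items per day
```

    Tracking these two numbers per iteration is enough to compare the flow of two processes on the same backlog, which is the kind of comparison the thesis performs.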

  • 161.
    Bjarnason, Elizabeth
    et al.
    Lund Univ, SWE.
    Morandini, Mirko
    Fdn Bruno Kessler, ITA.
    Borg, Markus
    Lund Univ, SWE.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felderer, Michael
    Univ Innsbruck, AUT.
    Staats, Matthew
    Google Inc, CHE.
    2nd International Workshop on Requirements Engineering and Testing (RET 2015), 2015. In: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Vol. 2, IEEE, 2015, p. 997-998. Conference paper (Refereed)
    Abstract [en]

    The RET (Requirements Engineering and Testing) workshop provides a meeting point for researchers and practitioners from the two separate fields of Requirements Engineering (RE) and Testing. The goal is to improve the connection and alignment of these two areas through an exchange of ideas, challenges, practices, experiences and results. The long-term aim is to build a community and a body of knowledge within the intersection of RE and Testing. One of the main outputs of the 1st workshop was a collaboratively constructed map of the area of RET, showing the topics relevant to RET. The 2nd workshop will continue in the same interactive vein and include a keynote, paper presentations with ample time for discussions, and a group exercise. For true impact and relevance, this cross-cutting area requires contributions from both RE and Testing, and from both researchers and practitioners. For that reason we welcome a range of paper contributions, from short experience papers to full research papers, that clearly cover connections between the two fields.

  • 162. Bjarnason, Elizabeth
    et al.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Borg, Markus
    Engström, Emelie
    A Multi-Case Study of Agile Requirements Engineering and the Use of Test Cases as Requirements, 2016. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 77, p. 61-79. Article in journal (Refereed)
    Abstract [en]

    [Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements, test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual, for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine-executable specification, and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.
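
    The practice the study examines, an executable test doubling as the documented requirement, can be sketched as follows. The discount rule and the order_total function are hypothetical, invented for illustration and not taken from the studied companies:

```python
# A behaviour-driven style test acting as the requirement itself.
# Both the rule (10% discount for VIP orders of 100 or more) and the
# order_total() function are hypothetical examples.

def order_total(items, vip=False):
    """Hypothetical system under test: items are (price, quantity) pairs."""
    total = sum(price * qty for price, qty in items)
    if vip and total >= 100:
        total *= 0.9
    return round(total, 2)

def test_vip_discount_requirement():
    # Given a VIP customer with an order worth at least 100
    items = [(50.0, 2)]                    # order total: 100.0
    # When the order total is computed
    total = order_total(items, vip=True)
    # Then a 10% discount shall be applied
    assert total == 90.0
```

    The Given/When/Then comments carry the requirement's wording and the assertion is the machine-checkable acceptance criterion, which roughly corresponds to the behaviour-driven variant identified in the paper.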

  • 163.
    Bjarnason, Elizabeth
    et al.
    Lund University.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engström, Emelie
    Lund University.
    Borg, Markus
    Lund University.
    An Industrial Case Study on the Use of Test Cases as Requirements, 2015. In: Lecture Notes in Business Information Processing, Springer, 2015, p. 27-39. Conference paper (Refereed)
    Abstract [en]

    It is a conundrum that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements documentation, test cases are commonly used as requirements. We have investigated this agile practice at three companies in order to understand how test cases can fill the role of requirements. We performed a case study based on twelve interviews performed in a previous study. The findings include a range of benefits and challenges in using test cases for eliciting, validating, verifying, tracing and managing requirements. In addition, we identified three scenarios for applying the practice, namely as a mature practice, as a de facto practice and as part of an agile transition. The findings provide insights into how the role of requirements may be met in agile development, including challenges to consider.

  • 164.
    Bjäreholt, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    RISC-V Compiler Performance: A Comparison between GCC and LLVM/clang, 2017. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    RISC-V is a new open-source instruction set architecture (ISA) whose first mass-produced processors were manufactured in December 2016. It focuses on both efficiency and performance and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture. The goal of this thesis is to evaluate the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance is evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They are run with both the GCC and LLVM/clang compilers at different optimization levels and compared, in performance per clock, to the ARM architecture, which is mature yet rather similar to RISC-V. Compiler support for the RISC-V target is still in development, and the focus of this thesis is the current performance differences between the GCC and LLVM compilers on this architecture. The platforms we execute the benchmarks on are the Freedom E310 processor on the SiFive HiFive1 board for RISC-V and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 has a similar clock speed and is aimed at a similar target audience.

    The results show that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference. At the lower optimization levels -O1, -O0 (no optimizations) and -Os (optimizing for a smaller executable code size), GCC performs much worse than ARM: 46% of the performance at -O1, 8.2% at -Os and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. When turning off optimizations (-O0), GCC for RISC-V reached 9.2% of the ARM performance in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options made a very minor impact on performance, making it 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3; even with optimizations it was still slower than GCC without optimizations. In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There does seem to be room for improvement at the lower optimization levels, however, which in turn could also possibly increase the performance of the higher optimization levels. With the LLVM/clang compiler, on the other hand, a lot of work needs to be done to make it competitive in both performance and stability with the GCC compiler and other architectures. Why -O0 is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.

  • 165.
    Björklund, Johanna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Thorburn, Kyle
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Miljö som berättar: En karaktärs berättelse genom Environmental Storytelling2015Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study aims to explore and gain new insight into how one could tell a game character's story through Environmental Storytelling. By doing this we hope to make game characters feel more real and alive, while at the same time furthering the game’s narrative. Before delving into the creation and examination of our game, we establish that it is important to let players come to their own conclusions regarding the perceived story, even if those conclusions are inaccurate. After creating a game in an attempt to answer our research question, we let other people play it. We believe this is important because we, the creators of the game, already know everything there is to know about the character in question. The research uses Actor-Network Theory as its methodology, as we believe it can help us understand why something works or does not work. This research concludes that it is all the small details that add up to enrich a character's story.

  • 166.
    Björkman, Adam
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kardos, Max
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Threat Analysis of Smart Home Assistants Involving Novel Acoustic Based Attack-Vectors2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Background. Smart home assistants are becoming more common in our homes. Often taking the form of a speaker, these devices enable communication via voice commands. Through this communication channel, users can for example order a pizza, check the weather, or call a taxi. When a voice command is given to the assistant, the command is sent to cloud services over the Internet, enabling a multitude of functions associated with security and privacy risks. Furthermore, with an always-active Internet connection, smart home assistants are part of the Internet of Things, a class of devices that has historically not been secure. Therefore, it is crucial to understand the security situation and the risks that a smart home assistant brings with it.

    Objectives. This thesis aims to investigate and compile threats towards smart home assistants in a home environment. Such a compilation could be used as a foundation during the creation of a formal model for securing smart home assistants and other devices with similar properties.

    Methods. Through literature studies and threat modelling, current vulnerabilities in smart home assistants and systems with similar properties were found and compiled. A few vulnerabilities were tested against two smart home assistants through experiments to verify which vulnerabilities are present in a home environment. Finally, methods for the prevention and protection of the vulnerabilities were found and compiled.

    Results. Overall, 27 vulnerabilities towards smart home assistants and 12 towards similar systems were found and identified. The majority of the found vulnerabilities focus on exploiting the voice interface. In total, 27 methods to prevent vulnerabilities in smart home assistants or similar systems were found and compiled. Eleven of the found vulnerabilities did not have any reported protection methods. Finally, we performed one experiment consisting of four attacks against two smart home assistants with mixed results; one attack was not successful, while the others were either completely or partially successful in exploiting the target vulnerabilities.

    Conclusions. We conclude that vulnerabilities exist for smart home assistants and similar systems. The vulnerabilities differ in execution difficulty and impact. However, we consider smart home assistants safe enough to use with the accompanying protection methods activated.

  • 167.
    Björneskog, Amanda
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Goniband Shoshtari, Nima
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparison of Security and Risk awareness between different age groups2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The Internet has become a 'necessity' in the everyday life of just below 50% of the world population. While the growth of the Internet has created a great platform for helping people and making life easier, it has also brought a lot of malicious activity. Nowadays people hack or use social engineering on other people for a living; scamming and fraud are part of their daily life. Therefore security awareness is truly important and sometimes vital. We wanted to look at the difference in security awareness depending on which year you were born, in relation to the IT boom and the growth of the Internet. Does it matter whether you lived through the earlier stages of the Internet or not? We found that security awareness did increase with age, but whether this was due to the candidates growing up before or after the IT boom, or due to the fact that younger people tend to be more inattentive, is hard to tell. Our result is that the youngest age group, 16-19, was more prone to security risks, due to an indifferent mindset regarding their data and information.

  • 168.
    Bjöörn, Christopher
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Johnsson, Jacob
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Universe-defining rules2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    In this work we investigate how the concept of play can be applied to digital games, and how to introduce a fictional universe and the rules that define that universe. The purpose of this work is to increase the quality of digital games by increasing our understanding of how such rules may be introduced. The question to be answered is “how may realistic, semi-realistic and fictional rules be introduced in a digital game?”. This work is based partly on analyses of why some introductions of rules are often accepted and others are not, partly on the evaluation of a product created by us, and partly on earlier research. The work is split into two parts: one research part and one production part. To answer the question, research about what is previously known has been conducted, and a digital game has been produced in which the main rule separating the fictional universe from ours is paranormal activity, or ghosts.
    Keywords: rules, magic circle, immersion and game production.

  • 169.
    Blal, Redouane
    et al.
    Universite du Quebec a Montreal, CAN.
    Leshob, Abderrahmane
    Universite du Quebec a Montreal, CAN.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mili, Hafedh
    Universite du Quebec a Montreal, CAN.
    Boubaker, Anis
    Universite du Quebec a Montreal, CAN.
    From inter-organizational business process models to service-oriented architecture models2018In: Service Oriented Computing and Applications, ISSN 1863-2386, E-ISSN 1863-2394, Vol. 12, no 3-4, p. 227-245Article in journal (Refereed)
    Abstract [en]

    Today’s business processes are becoming increasingly complex and often cross the boundaries of organizations. On the one hand, to support their business processes, modern organizations use enterprise information systems that need to be aware of the organizations’ processes and contexts. Such systems are called Process-Aware Information Systems (PAIS). On the other hand, service-oriented architecture (SOA) is a fast-emerging architectural style that has been widely adopted by modern organizations to design and implement the PAISs that support their business processes. This paper aims to bridge the gap between inter-organizational business processes and the SOA-based PAISs that support them. It proposes a novel model-driven design method that generates SOA models expressed in SoaML, taking the specification of collaborative business processes expressed in BPMN as input. We present the principles underlying the approach, the state of an ongoing implementation, and the results of two studies conducted to empirically validate the method in the context of ERP key processes. © 2018, Springer-Verlag London Ltd., part of Springer Nature.

  • 170.
    Blidkvist, Jesper
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Westgren, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Re-texturing and compositing new material on pre-rendered media: Using DirectX and UV sampling2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: This thesis investigates a new method for re-texturing and compositing new or additional material onto specific pre-rendered images using various blend equations. This is done by sampling a number of render passes created alongside the original source material, most notably a UV pass for accurate texture positioning and different lighting passes to enhance control over the final result. This allows comparatively simple and cheap compositing without the overhead that other commercially available tools might add.

    Objectives: Render the necessary UV coordinates and lighting calculations from a 3D application to two separate textures. Sample said textures in DirectX and use the information to accurately light and position the additional dynamic material for blending with the pre-rendered media.

    Method: The thesis uses an implementation method in which quantitative data is gathered by comparing the resulting composited images against a Gold Standard render using two common image comparison methods, the Structural Similarity Index (SSIM) and Peak Signal to Noise Ratio (PSNR).
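    Of the two comparison metrics, PSNR is simple enough to sketch directly (SSIM is considerably more involved); this follows the standard definition, not code from the thesis:

    ```python
    import numpy as np

    def psnr(img, ref, max_val=255.0):
        """Peak Signal to Noise Ratio in dB between an image and a reference."""
        mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(max_val ** 2 / mse)
    ```

    Higher values mean the composite is closer to the Gold Standard render; identical images give infinite PSNR.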

    Results: The results of this implementation indicate that both the perceived and measured similarity are close enough to prove the validity of this method.

    Conclusions: This thesis shows the possibility and practical use of DirectX as a tool capable of the most fundamental compositing operations. In its current state, the implementation is limited in terms of flexibility and functionality when compared to other proprietary compositing software packages, and some visual artefacts and quality issues are present. There are, however, no indications that these issues could not be solved with additional work.
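    The core re-texturing step described above, looking up a texel for each screen pixel via a pre-rendered UV pass and blending it over the background, can be sketched in NumPy (a simplified CPU version of what a DirectX shader would do on the GPU; all names are illustrative):

    ```python
    import numpy as np

    def retexture(uv_pass, texture, base, mask):
        """Composite `texture` onto `base` at positions given by a UV pass.

        uv_pass: HxWx2 float array with UV coordinates in [0, 1]
        texture: ThxTwx3 texture to sample
        base:    HxWx3 pre-rendered background
        mask:    HxW bool array marking the pixels to re-texture
        """
        th, tw = texture.shape[:2]
        # Nearest-neighbour lookup: map UV in [0, 1] to integer texel indices.
        u = np.clip((uv_pass[..., 0] * (tw - 1)).round().astype(int), 0, tw - 1)
        v = np.clip((uv_pass[..., 1] * (th - 1)).round().astype(int), 0, th - 1)
        sampled = texture[v, u]        # HxWx3, one texel per screen pixel
        out = base.copy()
        out[mask] = sampled[mask]      # hard-mask "over" blend
        return out
    ```

    A production version would use bilinear filtering and the lighting passes to modulate the sampled colour before blending.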

  • 171.
    Bloom, Filip
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Competitive Coevolution for micromanagement in StarCraft: Brood War2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context. Interest in and research on neural networks and their capacity for finding solutions to nonlinear problems have increased greatly in recent years.

    Objectives. This thesis attempts to compare competitive coevolution to traditional neuroevolution in the game StarCraft: Brood War.

    Methods. Implementing and evolving AI-controlled players for the game StarCraft and evaluating their performance.

    Results. Fitness values and win rates against the default StarCraft AI and between the networks were gathered.

    Conclusions. The neural networks failed to improve under the given circumstances. The best networks performed on par with the default StarCraft AI.

  • 172.
    Boddapati, Venkatesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying Environmental Sounds with Image Networks2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Environmental Sound Recognition, unlike Speech Recognition, is an area that is still in the developing stages with respect to using Deep Learning methods. Sound can be converted into images by extracting spectrograms and the like. Object Recognition from images using deep Convolutional Neural Networks is a currently developing area holding high promise. The same technique has been studied and applied, but on image representations of sound.

    Objectives. In this study, investigation is done to determine the best possible accuracy of performing a sound classification task using existing deep Convolutional Neural Networks by comparing the data pre-processing parameters. Also, a novel method of combining different features into a single image is proposed and its effect tested. Lastly, the performance of an existing network that fuses Convolutional and Recurrent Neural architectures is tested on the selected datasets.

    Methods. Experiments were conducted to analyze the effects of data pre-processing parameters on the best possible accuracy with two CNNs. A further experiment was conducted to determine whether the proposed method of feature combination is beneficial. Finally, an experiment to test the performance of a combined network was conducted.

    Results. GoogLeNet had the highest classification accuracy of 73% on 50-class dataset and 90-93% on 10-class datasets. The sampling rate and frame length values of the respective datasets which contributed to the high scores are 16kHz, 40ms and 8kHz, 50ms respectively. The proposed combination of features does not improve the classification accuracy. The fused CRNN network could not achieve high accuracy on the selected datasets.

    Conclusions. It is concluded that deep networks designed for object recognition can be successfully used to classify environmental sounds, and the pre-processing parameter values for achieving the best accuracy were determined. The novel method of feature combination does not significantly improve the accuracy when compared to spectrograms alone. The fused network, which learns the spatial and temporal features from spectral images, performs poorly in the classification task compared to the convolutional network alone.

  • 173.
    Boddapati, Venkatesh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Petef, Andrej
    Sony Mobile Communications AB, SWE.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Classifying environmental sounds using image recognition networks2017In: Procedia Computer Science / [ed] Toro C.,Hicks Y.,Howlett R.J.,Zanni-Merk C.,Toro C.,Frydman C.,Jain L.C.,Jain L.C., Elsevier B.V. , 2017, Vol. 112, p. 2048-2056Conference paper (Refereed)
    Abstract [en]

    Automatic classification of environmental sounds, such as dog barking and glass breaking, is becoming increasingly interesting, especially for mobile devices. Most mobile devices contain both cameras and microphones, and companies that develop mobile devices would like to provide functionality for classifying both videos/images and sounds. In order to reduce the development costs one would like to use the same technology for both of these classification tasks. One way of achieving this is to represent environmental sounds as images, and use an image classification neural network when classifying images as well as sounds. In this paper we consider the classification accuracy for different image representations (Spectrogram, MFCC, and CRP) of environmental sounds. We evaluate the accuracy for environmental sounds in three publicly available datasets, using two well-known convolutional deep neural networks for image recognition (AlexNet and GoogLeNet). Our experiments show that we obtain good classification accuracy for the three datasets. © 2017 The Author(s).
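    As a rough illustration of the sound-to-image step the paper relies on, a log-magnitude spectrogram can be computed with a plain NumPy short-time Fourier transform (window and hop sizes here are illustrative, not the paper's settings):

    ```python
    import numpy as np

    def spectrogram(signal, frame_len=400, hop=160):
        """Log-magnitude spectrogram of a 1-D signal, as a 2-D 'image'.

        frame_len=400 and hop=160 correspond to 25 ms / 10 ms at 16 kHz
        (illustrative values only).
        """
        n_frames = 1 + (len(signal) - frame_len) // hop
        window = np.hanning(frame_len)
        frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                           for i in range(n_frames)])
        spec = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
        return np.log1p(spec).T                     # frequency x time image
    ```

    The resulting 2-D array can be saved as an image and fed to a network such as AlexNet or GoogLeNet like any other picture.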

  • 174.
    Bodicherla, Saikumar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Pamulapati, Divyani
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Knowledge Management Maturity Model for Agile Software Development2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Knowledge is a major asset of an organization, enabling the enterprise to be more productive and to deliver highly complex services. Knowledge management plays a key role in agile software development because it supports cultural infrastructure values like collaboration, communication, and knowledge transfer. This research explores how organizations that adopt Agile Software Development (ASD) implement knowledge management using practices that support the key process areas. Several knowledge management maturity models have been proposed over the past decade, but none of them is a Knowledge Management Maturity Model (KMMM) aimed specifically at agile software development. To fill this research gap, we introduce a maturity model which emphasizes knowledge management in ASD among practitioners. This maturity model helps organizations assess their knowledge management and provides a road map for any further improvement required in their processes. 

    Objectives: In this thesis, we investigate the key process areas of knowledge management maturity models that could support agile software development. Through this investigation, we found that organizations should emphasize key process areas and their practices in order to improve the software process. The objectives of this research include:

    • Explore the key process areas and practices of knowledge management in the knowledge management maturity models. 
    • Identify the views of practitioners on knowledge management practices and key process areas for Agile software development.
    • Propose a maturity model for knowledge management in Agile software development based on practitioners’ opinions. 

    Methods: In this research, we used two methods, a systematic mapping study and a survey, to fulfil our aim and objectives. We conducted the systematic mapping study through the snowballing process to investigate the empirical literature on knowledge management maturity models. To triangulate the systematic mapping results, we conducted a survey. The survey responses were analyzed statistically using descriptive statistics.

    Results: From the systematic mapping, we identified 18 articles and analyzed 24 practices of knowledge management maturity models. These practices fall under key process areas such as process, people, and technology. Based on the systematic mapping results, 9 KM practices found in the KMMM literature were listed in the survey questionnaire and answered by software engineering practitioners. Moreover, 5 other practices for agile that were not found in the KMMM literature were suggested in the survey. To address the systematic mapping and survey results, we propose a maturity model which emphasizes knowledge management practices in ASD among practitioners.

    Conclusions: This thesis lists the main elements of the practices utilized by organizations and shows in detail the use of maturity levels for each practice. Furthermore, it helps organizations assess the current maturity level of each practice in a real process. Hence, researchers can utilize the model from this thesis to further improve KM in organizations.

  • 175.
    Bodireddigari, Sai Srinivas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Framework To Measure the Trustworthiness of the User Feedback in Mobile Application Stores2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Mobile application stores like Google Play, the Apple App Store, and the Windows Store host over 3 million apps. Users download applications from their respective stores and generally prefer the apps with the highest ratings. In response, application stores have introduced categories like editor’s choice or top charts, providing better visibility for applications. Customer reviews play a critical role in the development of an application and the organization behind it, yet there may be flawed reviews or biased opinions about an application due to many factors. Biased opinions and flawed reviews are likely to make user reviews untrustworthy. Reviews and ratings in the mobile application stores are used by organizations to make their applications more efficient and more adaptable to the user. This context highlights the importance of user review trustworthiness and of managing the trustworthiness of user feedback by knowing the causes of mistrust. Hence, there is a need for a framework to understand the trustworthiness of user-given feedback.

    Objectives: In this study the author aims to accomplish the following objectives: first, exploring the causes of untrustworthiness in user feedback for an application in mobile application stores such as the Google Play store; second, exploring the effects of trustworthiness on users and developers; finally, proposing a framework for managing the trustworthiness of feedback.

    Methods: To accomplish the objectives, the author used a qualitative research method. The data collection method was an interview-based survey conducted with 13 participants to find out the causes of untrustworthiness in user feedback from the user’s and the developer’s perspectives. The author followed thematic coding for the qualitative data analysis.

    Results: The author identified 11 codes from the interview transcripts and explored the relationship between trustworthiness and the causes. The 11 codes were grouped into 4 themes, and a thematic network was created between the themes. The relations were then analyzed with cost-effect analysis.

    Conclusions: From the analysis, we conclude that 11 causes affect trustworthiness from the user’s perspective and 9 causes from the developer’s perspective. Segregating trustworthy feedback from untrustworthy feedback is important for developers, as the next releases should be planned based on it. Finally, inclusion and exclusion criteria to help developers manage trustworthy user feedback are defined. 

  • 176.
    Boer, de, Wiebe Douwe
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Participatory Design Ideals2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The Swedish academic discipline Informatics has roots in the Scandinavian design approach Participatory Design (PD). PD’s point of departure is to design ICT, and the new environment it becomes part of, together with the future users, driven by the ideal of bringing more democracy to the workplace. PD builds on the Action Research (AR) and industrial democracy tradition that started in Scandinavia in the 1960s, in which the powerful Scandinavian trade unions had a central role. The aim of the unions was to prepare the workers for, and have influence on, the introduction of new technologies that change (or are expected to change) the work and work environment of the workers. In the 1970s, when more computers emerged in the workplace, this led to the development of PD. An important difference from AR is that the aim of PD is to actually design new ICT and the new environment it becomes part of.

    During the UTOPIA project in the first half of the 1980s, much referred to in the PD literature and led by project leader and PD pioneer Pelle Ehn, it was discovered that bringing the different expertise of designers/researchers and workers together in design-by-doing processes also results in more appropriate ICT.

     

    With ICT being ubiquitous nowadays, influencing most aspects of our lives inside and outside the workplace, and with trade unions playing a different role in (Scandinavian) society, a question is how PD should develop further. PD pioneer Morten Kyng (also a UTOPIA designer/researcher) proposes a framework for next PD practices in a discussion paper. The first element he mentions in the framework is ideals: the designer/researcher should, as a first step, consider what ideals to pursue as a person and for the project, and then consider how to discuss the goals of the project partners; Kyng makes no further suggestions for how to approach this.

    This design and research thesis aims to design and propose some PD processes for arriving, at the beginning of a PD/design project, at shared ideals to pursue, based on a better understanding of the political and philosophical background of PD, including design as a discipline in its own right.

     

    For a better understanding of the political and philosophical roots of PD, and of design as a discipline in its own right, Pelle Ehn’s early (PD research) work and his (PD) influences and supporting theories are explored, next to Kyng’s discussion paper (framework) and the reactions of his debate partners to it. We find that politics, and what ideals to pursue in PD, are sensitive and (still) important subjects in PD, and, one could argue, in a broader sense also for design in general. In relation to this, related disciplines like Computer Ethics and Value Sensitive Design, as well as more recently formulated ideals for PD and its relation to ethics, are also explored. As a result, a proposal for a redesigned framework for next PD practices is designed as a design artefact, in which the element of ideals is the most elaborated.

    The understanding of design as a discipline in its own right is then further developed by exploring a selection of different models and quotes from the related (design) literature, which are reflected on in relation to PD and used as reminders in a design process, leading to a proposal for a model that tries to reframe the relation between design, practice and research.

     

    Finally, some methods, processes and techniques used in PD, design, AR and related literature that can contribute to design proposals for design processes enabling the design of ideals using a PD approach are explored. These are used as reminders in design-by-doing processes, in which suggestions for techniques and processes to design ideals together with participants are tried out in real-life situations, reflected on, and iteratively developed further. To avoid framing as much as possible, (semi-)anonymity and silence seem to be important ingredients in these processes for stimulating the generation of idea(l)s as free as possible from bias and dominance patterns. An additional design artefact developed in this context is a template for an annotated portfolio used to describe and reflect on the different processes. 

  • 177.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Angelova, Milena
    Technical University Sofia, BUL.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rosander, Oliver
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tsiporkova, Elena
    Collective Center for the Belgian Technological Industry, BEL.
    Evolutionary clustering techniques for expertise mining scenarios2018In: ICAART 2018 - Proceedings of the 10th International Conference on Agents and Artificial Intelligence, Volume 2 / [ed] van den Herik J.,Rocha A.P., SciTePress , 2018, Vol. 2, p. 523-530Conference paper (Refereed)
    Abstract [en]

    The problem addressed in this article concerns the development of evolutionary clustering techniques that can be applied to adapt the existing clustering solution to a clustering of newly collected data elements. We are interested in clustering approaches that are specially suited for adapting clustering solutions in the expertise retrieval domain. This interest is inspired by practical applications such as expertise retrieval systems where the information available in the system database is periodically updated by extracting new data. The experts available in the system database are usually partitioned into a number of disjoint subject categories. It is becoming impractical to re-cluster this large volume of available information. Therefore, the objective is to update the existing expert partitioning by the clustering produced on the newly extracted experts. Three different evolutionary clustering techniques are considered to be suitable for this scenario. The proposed techniques are initially evaluated by applying the algorithms on data extracted from the PubMed repository. Copyright © 2018 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
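    The adaptation scenario described in the abstract, assigning newly extracted experts to an existing partitioning instead of re-clustering everything, can be sketched with a simple incremental centroid update (an illustration of the general idea only, not one of the article's three techniques; all names are assumptions):

    ```python
    import numpy as np

    def adapt_clustering(centroids, counts, new_points):
        """Adapt an existing clustering to newly collected data elements.

        Each new point is assigned to its nearest centroid, which is then
        updated as a running mean -- one cheap way to avoid re-clustering
        the full data set.
        """
        centroids = centroids.astype(float).copy()
        counts = counts.copy()
        for p in new_points:
            k = np.argmin(np.linalg.norm(centroids - p, axis=1))
            counts[k] += 1
            centroids[k] += (p - centroids[k]) / counts[k]  # incremental mean
        return centroids, counts
    ```

    Evolutionary clustering techniques additionally weigh the historical partitioning against the new data, rather than only absorbing new points.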

  • 178.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Angelova, Milena
    TU of Sofia, BUL.
    Kohstall, Jan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cluster Validation Measures for Label Noise Filtering2018In: 9th International Conference on Intelligent Systems 2018: Theory, Research and Innovation in Applications, IS 2018 - Proceedings / [ed] JardimGoncalves, R; Mendonca, JP; Jotsov, V; Marques, M; Martins, J; Bierwolf, R, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 109-116Conference paper (Refereed)
    Abstract [en]

    Cluster validation measures are designed to find the partitioning that best fits the underlying data. In this paper, we show that these well-known and scientifically proven validation measures can also be used in a different context, i.e., for filtering mislabeled instances or class outliers prior to training in supervised learning problems. A technique, entitled CVI-based Outlier Filtering, is proposed in which mislabeled instances are identified and eliminated from the training set, and a classification hypothesis is then built from the set of remaining instances. The proposed approach assigns each instance several cluster validation scores representing its potential of being an outlier with respect to the clustering properties the used validation measures assess. We examine CVI-based Outlier Filtering and compare it against the LOF detection method on ten data sets from the UCI data repository using five well-known learning algorithms and three different cluster validation indices. In addition, we study two approaches for filtering mislabeled instances: local and global. Our results show that for most learning algorithms and data sets, the proposed CVI-based outlier filtering algorithm outperforms the baseline method (LOF). The greatest increase in classification accuracy has been achieved by combining at least two of the used cluster validation indices and global filtering of mislabeled instances. © 2018 IEEE.
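    A minimal sketch of the idea behind CVI-based filtering, using a single per-instance silhouette score computed against the class labels (the paper combines several validation indices and studies local vs. global filtering; names and the threshold here are illustrative):

    ```python
    import numpy as np

    def silhouette_filter(X, y, threshold=0.0):
        """Flag potential label noise via per-instance silhouette scores.

        Instances whose silhouette score with respect to their own class
        falls below `threshold` are treated as mislabeling candidates.
        Returns a boolean mask of instances to keep.
        """
        keep = np.ones(len(X), dtype=bool)
        for i in range(len(X)):
            d = np.linalg.norm(X - X[i], axis=1)
            same = (y == y[i])
            same[i] = False                      # exclude the point itself
            a = d[same].mean() if same.any() else 0.0         # own-class distance
            b = min(d[y == c].mean() for c in set(y) if c != y[i])  # nearest other class
            s = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
            keep[i] = s >= threshold
        return keep
    ```

    A classifier is then trained only on `X[keep]`, `y[keep]`, mirroring the filter-then-train pipeline the paper evaluates.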

  • 179.
    Boeva, Veselka
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Inst Technol, Comp Sci & Engn Dept, Karlskrona, Sweden..
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. Blekinge Inst Technol, Comp Sci & Engn Dept, Karlskrona, Sweden..
    Kota, Sai M. Harsha
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sköld, Lars
    Telenor , SWE.
    Analysis of Organizational Structure through Cluster Validation Techniques: Evaluation of email communications at an organizational level2017In: 2017 17TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW 2017) / [ed] Gottumukkala, R Ning, X Dong, G Raghavan, V Aluru, S Karypis, G Miele, L Wu, X, IEEE , 2017, p. 170-176Conference paper (Refereed)
    Abstract [en]

    In this work, we report an ongoing study that aims to apply cluster validation measures for analyzing email communications at an organizational level of a company. This analysis can be used to evaluate the company structure and to produce further recommendations for structural improvements. Our initial evaluations, based on data in the form of email logs and organizational structure for a large European telecommunication company, show that cluster validation techniques can be useful tools for assessing the organizational structure using objective analysis of internal email communications, and for simulating and studying different reorganization scenarios.

  • 180.
    Boinapally, Kashyap
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Security Certificate Renewal Management2019Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE creditsStudent thesis
    Abstract [en]

    Context. An SSL encrypted client-server communication is necessary to maintain the security and privacy of the communication. For SSL encryption to work, there must be a security certificate, which has a fixed expiry period. Renewing the certificate only after it has expired wastes time and effort on the part of the company.

    Objectives. In this study, a new system has been developed and implemented, which sends a certificate during prior communication and does not wait for the certificate to expire. The process was automated to a certain extent so as not to compromise the security of the system, while speeding up the process and reducing the downtime.

    Methods. Experiments have been conducted to test the new system and compare it to the old system. The experiments were conducted to analyze the packets and the downtime occurring from certificate renewal.

    Results. The results of the experiments show that there is a significant reduction in downtime. This was achieved due to the implementation of the new system and semi-automation.

    Conclusions. The system has been implemented, and it greatly reduces the downtime occurring due to the expiry of security certificates. Semi-automation was applied so as not to hamper security and to make the system robust.

  • 181.
    Boivie, Joakim
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Digital Wanderlust: Med digital materia som följeslagare i skapandet2017Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With this bachelor thesis I aim to bring to light the role of the computer in digital creative work. This is accomplished by treating the code that makes up digital objects as a form of matter, and with Karen Barad’s agential realism and other research into digital materiality as a point of reference, this matter is invited into the creative process as an actor. I’ve been striving for a glimpse of how digital matter comes to life when it’s allowed an active part in the creative process, to see how it expresses itself. By engaging with the digital matter through diffraction and remix as methods I’ve been given an insight into the core of it, and through the process I’ve been working alongside digital matter in intra-action.

    Ultimately I can see how digital matter won’t appear alone; I myself and the computer are both entangled together with the digital matter as a result of the intra-actions we’ve been engaging in. My intervention in digital matter becomes visible as glitches, traces of decay that give the digital matter, which can be so fleeting, more concrete and material characteristics. The unintelligible complexity of digital matter also comes to light when it’s allowed influence, as it appears visually. With this knowledge I’ve gained the awareness that digital matter does not have an absolute appearance, and this thesis can be seen as an investigation into how digital matter can appear.

  • 182.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Anton, Borg
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Clustering residential burglaries using multiple heterogeneous variablesIn: International Journal of Information Technology & Decision MakingArticle in journal (Refereed)
    Abstract [en]

    To identify series of residential burglaries, detecting linked crimes performed by the same constellations of criminals is necessary. Comparison of crime reports today is difficult as crime reports traditionally have been written as unstructured text and often lack a common information-basis. Based on a novel process for collecting structured crime scene information, the present study investigates the use of clustering algorithms to group similar crime reports based on combined crime characteristics from the structured form. Clustering quality is measured using Connectivity and Silhouette index, stability using Jaccard index, and accuracy is measured using Rand index and a Series Rand index. The performance of clustering using combined characteristics was compared with spatial characteristic. The results suggest that the combined characteristics perform better or similar to the spatial characteristic. In terms of practical significance, the presented clustering approach is capable of clustering cases using a broader decision basis.

  • 183.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Bala, Jaswanth
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Filtering Estimated Crime Series Based on Route Calculations on Spatio-temporal Data2016In: European Intelligence and Security Informatics Conference / [ed] Brynielsson J.,Johansson F., IEEE, 2016, p. 92-95Conference paper (Refereed)
    Abstract [en]

    Law enforcement agencies strive to link serial crimes, most preferably based on physical evidence, such as DNA or fingerprints, in order to solve criminal cases more efficiently. However, physical evidence is more common at crime scenes in some crime categories than others. For crime categories with relatively low occurrence of physical evidence it could instead be possible to link related crimes using soft evidence based on the perpetrators' modus operandi (MO). However, crime linkage based on soft evidence is associated with considerably higher error-rates, i.e. crimes being incorrectly linked. In this study, we investigate the possibility of filtering erroneous crime links based on travel time between crimes using web-based direction services, more specifically Google Maps. A filtering method has been designed, implemented and evaluated using two data sets of residential burglaries, one with known links between crimes, and one with estimated links based on soft evidence. The results show that the proposed route-based filtering method removed 79% more erroneous crimes than the state-of-the-art method relying on Euclidean straight-line routes. Further, analysis of travel times between crimes in known series indicates that burglars on average have up to 15 minutes for carrying out the actual burglary event. © 2016 IEEE.
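    The travel-time filtering idea can be sketched as follows. This is a hedged illustration, not the paper's implementation: the paper queries a web-based direction service for route travel times, while here a hypothetical constant-speed estimate over straight-line distance stands in for that service, and the minimum-stay margin is an invented parameter.

    ```python
    from datetime import datetime, timedelta

    AVG_SPEED_KMH = 40.0  # assumed average travel speed between scenes

    def travel_time_hours(dist_km):
        return dist_km / AVG_SPEED_KMH

    def is_feasible_link(a, b, min_stay_hours=0.25):
        """a, b: (timestamp, x_km, y_km). A link is feasible only if the time
        gap covers the estimated travel time plus a minimum stay at the scene."""
        (ta, xa, ya), (tb, xb, yb) = a, b
        gap_h = abs((tb - ta).total_seconds()) / 3600.0
        dist_km = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5  # straight-line stand-in
        return gap_h >= travel_time_hours(dist_km) + min_stay_hours

    def filter_links(links):
        """Drop candidate crime links the offender could not plausibly have traveled."""
        return [pair for pair in links if is_feasible_link(*pair)]

    t0 = datetime(2016, 1, 1, 20, 0)
    near = ((t0, 0.0, 0.0), (t0 + timedelta(hours=1), 20.0, 0.0))      # 20 km in 1 h
    far = ((t0, 0.0, 0.0), (t0 + timedelta(minutes=30), 80.0, 0.0))    # 80 km in 30 min
    kept = filter_links([near, far])  # the infeasible 80 km link is removed
    ```

    Replacing `travel_time_hours` with a call to a routing service gives the route-based variant the paper evaluates.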

  • 184.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A statistical method for detecting significant temporal hotspots using LISA statistics2017In: Proceedings - 2017 European Intelligence and Security Informatics Conference, EISIC 2017, IEEE Computer Society, 2017, p. 123-126Conference paper (Refereed)
    Abstract [en]

    This work presents a method for detecting statistically significant temporal hotspots, i.e. the date and time of events, which is useful for improved planning of response activities. Temporal hotspots are calculated using Local Indicators of Spatial Association (LISA) statistics. The temporal data is in a 7x24 matrix that represents a temporal resolution of weekdays and hours-in-the-day. Swedish residential burglary events are used in this work for testing the temporal hotspot detection approach. However, the presented method is also useful for other events as long as they contain temporal information, e.g. attack attempts recorded by intrusion detection systems. By using the method for detecting significant temporal hotspots it is possible for domain-experts to gain knowledge about the temporal distribution of the events, and also to learn at which times mitigating actions could be implemented.
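    A minimal sketch of a LISA-style local statistic on a 7x24 weekday-by-hour matrix follows. The exact statistic, neighborhood, and significance cutoff are assumptions, not the paper's: each cell is scored by how much its 3x3 neighborhood mean exceeds the global mean, in units of the global standard deviation.

    ```python
    def local_scores(m):
        """Score every cell of a 2-D count matrix by its neighborhood deviation."""
        rows, cols = len(m), len(m[0])
        vals = [v for row in m for v in row]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        scores = {}
        for r in range(rows):
            for c in range(cols):
                neigh = [m[rr][cc]
                         for rr in (r - 1, r, r + 1) if 0 <= rr < rows
                         for cc in (c - 1, c, c + 1) if 0 <= cc < cols]
                scores[(r, c)] = (sum(neigh) / len(neigh) - mean) / std
        return scores

    def hotspots(m, z=1.96):
        """Cells whose local score exceeds the (assumed) significance cutoff."""
        return [cell for cell, s in local_scores(m).items() if s > z]

    # 7 weekdays x 24 hours, with elevated counts on Friday (row 4) around 18-20.
    matrix = [[1] * 24 for _ in range(7)]
    for h in (18, 19, 20):
        matrix[4][h] = 30
    hs = hotspots(matrix)
    ```

    Domain experts would read the surviving cells as the weekday/hour combinations worth targeting with mitigating actions.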

  • 185.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Evaluating Temporal Analysis Methods Using Residential Burglary Data2016In: ISPRS International Journal of Geo-Information, Special Issue on Frontiers in Spatial and Spatiotemporal Crime Analytics, ISSN 2220-9964, Vol. 5, no 9, p. 1-22Article in journal (Refereed)
    Abstract [en]

    Law enforcement agencies, as well as researchers, rely on temporal analysis methods in many crime analyses, e.g., spatio-temporal analyses. A number of temporal analysis methods are being used, but a structured comparison in different configurations is yet to be done. This study aims to fill this research gap by comparing the accuracy of five existing, and one novel, temporal analysis methods in approximating offense times for residential burglaries that often lack precise time information. The temporal analysis methods are evaluated in eight different configurations with varying temporal resolution, as well as the amount of data (number of crimes) available during analysis. A dataset of all Swedish residential burglaries reported between 2010 and 2014 is used (N = 103,029). From that dataset, a subset of burglaries with known precise offense times is used for evaluation. The accuracy of the temporal analysis methods in approximating the distribution of burglaries with known precise offense times is investigated. The aoristic and the novel aoristic_ext method perform significantly better than three of the traditional methods. Experiments show that the novel aoristic_ext method was most suitable for estimating crime frequencies in the day-of-the-year temporal resolution when reduced numbers of crimes were available during analysis. In the other configurations investigated, the aoristic method showed the best results. The results also show the potential of temporal analysis methods in approximating the temporal distributions of residential burglaries in situations when limited data are available.
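    The classic aoristic method referenced above can be sketched in a few lines (the paper's aoristic_ext extension is not reproduced here): each crime whose exact offense time is unknown contributes a total probability mass of 1, spread evenly over every temporal unit inside its possible time window.

    ```python
    def aoristic(crimes, n_units=24):
        """crimes: list of (start_unit, end_unit) inclusive windows, e.g. hours.
        Returns the estimated crime frequency per temporal unit."""
        freq = [0.0] * n_units
        for start, end in crimes:
            units = range(start, end + 1)
            weight = 1.0 / len(units)  # spread one crime evenly over its window
            for u in units:
                freq[u] += weight
        return freq

    # One burglary known only to have occurred between 08:00 and 11:59,
    # and one with a precisely known offense hour of 10:00.
    freq = aoristic([(8, 11), (10, 10)])
    ```

    The total mass always equals the number of crimes, so the resulting curve is directly comparable across methods.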

  • 186.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Ickin, Selim
    Ericsson Research, SWE.
    Gustafsson, Jörgen
    Ericsson Research, SWE.
    Anomaly detection of event sequences using multiple temporal resolutions and Markov chains2019In: Knowledge and Information Systems, ISSN 0219-1377, E-ISSN 0219-3116Article in journal (Refereed)
    Abstract [en]

    Streaming data services, such as video-on-demand, are getting increasingly popular, and they are expected to account for more than 80% of all Internet traffic in 2020. In this context, it is important for streaming service providers to detect deviations in service requests due to issues or changing end-user behaviors in order to ensure that end-users experience high quality in the provided service. Therefore, in this study we investigate to what extent sequence-based Markov models can be used for anomaly detection by means of the end-users’ control sequences in the video streams, i.e., event sequences such as play, pause, resume and stop. This anomaly detection approach is further investigated over three different temporal resolutions in the data, more specifically: 1 h, 1 day and 3 days. The proposed anomaly detection approach supports anomaly detection in ongoing streaming sessions as it recalculates the probability for a specific session to be anomalous for each new streaming control event that is received. Two experiments are used for measuring the potential of the approach, which gives promising results in terms of precision, recall, F1-score and Jaccard index when compared to k-means clustering of the sessions. © 2019, The Author(s).
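    Sequence-based Markov scoring of control events can be sketched as follows. Details such as the Laplace smoothing and the use of mean log-likelihood are assumptions, not necessarily the authors' choices: transition probabilities are learned from normal sessions, and a session is scored by the average log-probability of its transitions, which can be updated as each new event arrives.

    ```python
    import math
    from collections import defaultdict

    def train(sessions, alpha=1.0):
        """Learn smoothed transition probabilities over the observed event set."""
        counts = defaultdict(lambda: defaultdict(float))
        events = {e for s in sessions for e in s}
        for s in sessions:
            for a, b in zip(s, s[1:]):
                counts[a][b] += 1
        probs = {}
        for a in events:
            total = sum(counts[a].values()) + alpha * len(events)  # Laplace smoothing
            probs[a] = {b: (counts[a][b] + alpha) / total for b in events}
        return probs

    def score(session, probs):
        """Mean log-likelihood of the session's transitions (higher = more normal)."""
        lp = [math.log(probs[a][b]) for a, b in zip(session, session[1:])]
        return sum(lp) / len(lp)

    normal = [["play", "pause", "resume", "stop"]] * 50
    probs = train(normal)
    ok = score(["play", "pause", "resume", "stop"], probs)
    odd = score(["play", "stop", "play", "stop"], probs)  # deviating session
    ```

    Sessions whose score falls far below the scores of normal sessions would be flagged as anomalous.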

  • 187.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Melander, Ulf
    En strukturerad metod för registrering och automatisk analys av brott2014In: The Past, the Present and the Future of Police Research: Proceedings from the fifth Nordic Police Research seminar / [ed] Rolf Granér och Ola Kronkvist, 2014Conference paper (Refereed)
    Abstract [sv]

    This article describes a method used in the police regions South, West, and Stockholm to collect structured crime scene information from residential burglaries, and how the collected information can be analyzed with automatic methods that can assist crime coordinators in their work. These automated analyses can be used as filtering or selection tools for residential burglaries, thereby making the work more efficient. Furthermore, the method can be used to determine the probability that two crimes were committed by the same offender, which can help the police identify series of crimes. This is possible because offenders tend to commit crimes in a similar manner, and it is possible, based on structured crime scene information, to automatically find these patterns. The chapter presents and evaluates a prototype of an IT-based decision support system and two automatic methods for crime coordination.

  • 188.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Svensson, Martin
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Hildeby, Jonas
    Polisen, SWE.
    Predicting burglars' risk exposure and level of pre-crime preparation using crime scene data2018In: Intelligent Data Analysis, ISSN 1088-467X, Vol. 22, no 1, p. 167-190, article id IDA 322-3210Article in journal (Refereed)
    Abstract [en]

    Objectives: The present study aims to extend current research on how offenders’ modus operandi (MO) can be used in crime linkage, by investigating the possibility to automatically estimate offenders’ risk exposure and level of pre-crime preparation for residential burglaries. Such estimations can assist law enforcement agencies when linking crimes into series and thus provide a more comprehensive understanding of offenders and targets, based on the combined knowledge and evidence collected from different crime scenes. Methods: Two criminal profilers manually rated offenders’ risk exposure and level of pre-crime preparation for 50 burglaries each. In an experiment we then analyzed to what extent 16 machine-learning algorithms could generalize both offenders’ risk exposure and preparation scores from the criminal profilers’ ratings onto 15,598 residential burglaries. All included burglaries contain structured and feature-rich crime descriptions which learning algorithms can use to generalize offenders’ risk and preparation scores from. Results: Two models created by Naïve Bayes-based algorithms showed the best performance, with an AUC of 0.79 and 0.77 for estimating offenders' risk and preparation scores, respectively. These algorithms were significantly better than most, but not all, of the other algorithms. Both scores showed promising distinctiveness between linked series, as well as consistency for crimes within series compared to randomly sampled crimes. Conclusions: Estimating offenders' risk exposure and pre-crime preparation can complement traditional MO characteristics in the crime linkage process. The estimations also show potential to function for cross-category crimes that otherwise lack comparable MO. Future work could focus on increasing the number of manually rated offenses as well as fine-tuning the Naïve Bayes algorithm to increase its estimation performance.

  • 189.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Malmö University, SWE.
    Baca, Dejan
    Fidesmo AB, SWE.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Introducing a novel security-enhanced agile software development process2017In: International Journal of Secure Software Engineering, ISSN 1947-3036, E-ISSN 1947-3044, Vol. 8, no 2Article in journal (Refereed)
    Abstract [en]

    In this paper, a novel security-enhanced agile software development process, SEAP, is introduced. It has been designed, tested, and implemented at Ericsson AB, specifically in the development of a mobile money transfer system. Two important features of SEAP are: 1) it includes additional security competences, and 2) it includes the continuous conduction of an integrated risk analysis for identifying potential threats. As a general finding of implementing SEAP in software development, the developers solve a large proportion of the risks in a timely, yet cost-efficient manner. The default agile software development process at Ericsson AB, i.e. where SEAP was not included, required significantly more employee hours spent for every risk identified compared to when integrating SEAP. The default development process left 50.0% of the risks unattended in the software version that was released, while the application of SEAP reduced that figure to 22.5%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.9%, a more than fivefold increase.

  • 190.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On the risk exposure of smart home automation systems2014In: Proceedings 2014 International Conferenceon Future Internet of Things and Cloud, IEEE Computer Society Digital Library, 2014Conference paper (Refereed)
    Abstract [en]

    A recent study has shown that more than every fourth person in Sweden feels that they have poor knowledge and control over their energy use, and that four out of ten would like to be more aware of and have better control over their consumption [5]. A solution is to provide the householders with feedback on their energy consumption, for instance, through a smart home automation system [10]. Studies have shown that householders can reduce energy consumption by up to 20% when gaining such feedback [5] [10]. Home automation is a prime example of a smart environment built on various types of cyber-physical systems generating volumes of diverse, heterogeneous, complex, and distributed data from a multitude of applications and sensors. Thereby, home automation is also an example of an Internet of Things (IoT) scenario, where a communication network extends the present Internet by including everyday items and sensors [22]. Home automation is attracting more and more attention from commercial actors, such as energy suppliers, infrastructure providers, and third-party software and hardware vendors [8] [10]. Among the non-commercial stakeholders, there are various governmental institutions and municipalities, as well as end-users.

  • 191.
    Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rekanar, Kaavya
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Analysis and text classification of privacy policies from rogue and top-100 fortune global companies2019In: International Journal of Information Security and Privacy, ISSN 1930-1650, E-ISSN 1930-1669, Vol. 13, no 2, p. 47-66Article in journal (Refereed)
    Abstract [en]

    In the present article, the authors investigate to what extent supervised binary classification can be used to distinguish between legitimate and rogue privacy policies posted on web pages. 15 classification algorithms are evaluated using a data set that consists of 100 privacy policies from legitimate websites (belonging to companies that top the Fortune Global 500 list) as well as 67 policies from rogue websites. A manual analysis of all policy content was performed and clear statistical differences in terms of both length and adherence to seven general privacy principles are found. Privacy policies from legitimate companies have a 98% adherence to the seven privacy principles, which is significantly higher than the 45% associated with rogue companies. Out of the 15 evaluated classification algorithms, Naïve Bayes Multinomial is the most suitable candidate to solve the problem at hand. Its models show the best performance, with an AUC measure of 0.90 (0.08), which outperforms most of the other candidates in the statistical tests used. Copyright © 2019, IGI Global.
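    A minimal multinomial Naive Bayes text classifier, in the spirit of the algorithm the study found most suitable, can be sketched as follows. The tokens, labels, and training texts are invented toy data, not the paper's corpus, and the smoothing choice is an assumption.

    ```python
    import math
    from collections import Counter

    def train_nb(docs):
        """docs: list of (token_list, label). Returns per-label priors and
        Laplace-smoothed token log-probabilities."""
        labels = Counter(lab for _, lab in docs)
        vocab = {t for toks, _ in docs for t in toks}
        token_counts = {lab: Counter() for lab in labels}
        for toks, lab in docs:
            token_counts[lab].update(toks)
        model = {}
        for lab in labels:
            total = sum(token_counts[lab].values()) + len(vocab)  # add-one smoothing
            model[lab] = (
                math.log(labels[lab] / len(docs)),
                {t: math.log((token_counts[lab][t] + 1) / total) for t in vocab},
            )
        return model

    def classify(tokens, model):
        """Pick the label with the highest posterior log-probability."""
        def log_post(lab):
            prior, lp = model[lab]
            return prior + sum(lp[t] for t in tokens if t in lp)
        return max(model, key=log_post)

    # Toy stand-ins for legitimate and rogue privacy-policy snippets.
    docs = [(["we", "never", "share", "your", "data"], "legit"),
            (["we", "respect", "your", "privacy"], "legit"),
            (["winner", "install", "free", "toolbar"], "rogue"),
            (["free", "prize", "install", "now"], "rogue")]
    model = train_nb(docs)
    ```

    In the study, features would instead be word counts drawn from the full policy texts.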

  • 192.
    Bonam, Veera Venkata Sivaramakrishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Multipath TCP and Measuring end-to-end TCP Throughput: Multipath TCP Descriptions and Ways to Improve TCP Performance2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Internet applications make use of the services provided by a transport protocol, such as TCP (a reliable, in-order stream protocol). We use the term Transport Service to mean the end-to-end service provided to the application by the transport layer.

    That service can only be provided correctly if information about the intended usage is supplied by the application. The application may determine this information at design time, compile time, or run time, and it may include guidance on whether a feature is required, a preference by the application, or something in between.

    Multipath TCP (MPTCP) adds the capability of using multiple paths to a regular TCP session. Even though it is designed to be fully backward compatible with applications, the data transport differs compared to regular TCP, and there are several additional degrees of freedom that a particular application may want to exploit.

     

    Multipath TCP is particularly useful in the context of wireless networks; using both Wi-Fi and a mobile network is a typical use case. In addition to the gains in throughput from inverse multiplexing, links may be added or dropped as the user moves in or out of coverage without disrupting the end-to-end TCP connection. The problem of link handover is thus solved by abstraction in the transport layer, without any special mechanisms at the network or link level.

     

    Handover functionality can then be implemented at the endpoints without requiring special functionality in the sub-networks, in accordance with the Internet's end-to-end principle. Multipath TCP can balance a single TCP connection across multiple interfaces and reach very high throughput.
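    The inverse-multiplexing and handover ideas above can be illustrated with a toy scheduler. This is not the MPTCP protocol itself (which operates on subflows with its own sequence spaces per RFC 6824); it only shows how one segment stream can be spread over the currently available paths and survive a path disappearing.

    ```python
    def schedule(segments, subflows):
        """Round-robin scheduling of segments over the currently available subflows."""
        out = {sf: [] for sf in subflows}
        for i, seg in enumerate(segments):
            out[subflows[i % len(subflows)]].append(seg)
        return out

    segments = list(range(10))
    # With both paths up, the stream is split across Wi-Fi and cellular.
    both = schedule(segments, ["wifi", "cellular"])
    # After "moving out of Wi-Fi coverage", the same stream continues on one path,
    # without any change visible to the application above.
    cell_only = schedule(segments, ["cellular"])
    ```

    Aggregate throughput in the two-path case is roughly the sum of the per-path rates, which is the gain the abstract refers to.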

  • 193.
    Bond, David
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Nyblom, Madelein
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Evaluation of four different virtual locomotion techniques in an interactive environment2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background: Virtual Reality (VR) devices are becoming more and more common as game systems. Even though modern VR head-mounted displays (HMDs) allow the user to walk in real life, they still limit the user to the space of the room they are playing in, and the player will need virtual locomotion in games where the environment size exceeds that of the real-life play area. Evaluations of multiple VR locomotion techniques have already been done, usually evaluating motion sickness or usability. A common theme in many of these is that the task is search based, in an environment with low focus on interaction. Therefore, in this thesis, four VR locomotion techniques are evaluated in an environment with focus on interaction, to see if a difference exists and whether one technique is optimal. The VR locomotion techniques are: Arm-Swinging, Point-Tugging, Teleportation, and Trackpad.

    Objectives: A VR environment is created with focus on interaction in this thesis. In this environment the user has to grab and hold onto objects while using a locomotion technique. This study then evaluates which VR locomotion technique is preferred in the environment. This study also evaluates whether there is a difference in preference and motion sickness, in an environment with high focus in interaction compared to one with low focus.

    Methods: A user study was conducted with 15 participants. Every participant performed a task with every VR locomotion technique, which involved interaction. After each technique, the participant answered a simulator sickness questionnaire, and an overall usability questionnaire.

    Results: The results achieved in this thesis indicate that Arm-Swinging was the most enjoyed locomotion technique in the overall usability questionnaire. However, Teleportation received the best ratings for tiredness and feeling overwhelmed. Teleportation also did not cause motion sickness, while the rest of the locomotion techniques did.

    Conclusions: A difference can be seen for VR locomotion techniques between an environment with low focus on interaction and an environment with high focus. This difference was seen in both the overall usability questionnaire and the motion sickness questionnaire. It was concluded that Arm-Swinging could be the most fitting VR locomotion technique for an interactive environment, while Teleportation could be more optimal for longer sessions.

  • 194.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    On Descriptive and Predictive Models for Serial Crime Analysis2014Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Law enforcement agencies regularly collect crime scene information. There exists, however, no detailed, systematic procedure for this. The data collected is affected by the experience or current condition of law enforcement officers. Consequently, the data collected might differ vastly between crime scenes. This is especially problematic when investigating volume crimes. Law enforcement officers regularly compare crimes manually based on the collected data. This is a time-consuming process, especially as the collected crime scene information might not always be comparable. The structuring of data and introduction of automatic comparison systems could benefit the investigation process. This thesis investigates descriptive and predictive models for automatic comparison of crime scene data with the purpose of aiding law enforcement investigations. The thesis first investigates predictive and descriptive methods, with a focus on data structuring, comparison, and evaluation of methods. The knowledge is then applied to the domain of crime scene analysis, with a focus on detecting serial residential burglaries. This thesis introduces a procedure for systematic collection of crime scene information. The thesis also investigates impact and relationship between crime scene characteristics and how to evaluate the descriptive model results. The results suggest that the use of descriptive and predictive models can provide feedback for crime scene analysis that allows a more effective use of law enforcement resources. Using descriptive models based on crime characteristics, including Modus Operandi, allows law enforcement agents to filter cases intelligently. Further, by estimating the link probability between cases, law enforcement agents can focus on cases with higher link likelihood. This would allow a more effective use of law enforcement resources, potentially allowing an increase in clear-up rates.

  • 195.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Clustering Residential Burglaries Using Modus Operandi and Spatiotemporal Information2016In: International Journal of Information Technology and Decision Making, ISSN 0219-6220, Vol. 15, no 1, p. 23-42Article in journal (Refereed)
    Abstract [en]

    To identify series of residential burglaries, detecting linked crimes performed by the same constellations of criminals is necessary. Comparison of crime reports today is difficult as crime reports traditionally have been written as unstructured text and often lack a common information-basis. Based on a novel process for collecting structured crime scene information, the present study investigates the use of clustering algorithms to group similar crime reports based on combined crime characteristics from the structured form. Clustering quality is measured using Connectivity and Silhouette index (SI), stability using Jaccard index, and accuracy is measured using Rand index (RI) and a Series Rand index (SRI). The performance of clustering using combined characteristics was compared with spatial characteristic. The results suggest that the combined characteristics perform better or similar to the spatial characteristic. In terms of practical significance, the presented clustering approach is capable of clustering cases using a broader decision basis.
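    The Rand index used above to measure clustering accuracy has a compact definition worth making concrete: the fraction of instance pairs on which two partitions agree, i.e. pairs placed together in both partitions or apart in both. The labelings below are invented toy data.

    ```python
    from itertools import combinations

    def rand_index(labels_a, labels_b):
        """Pairwise agreement between two partitions of the same instances."""
        agree = total = 0
        for i, j in combinations(range(len(labels_a)), 2):
            same_a = labels_a[i] == labels_a[j]
            same_b = labels_b[i] == labels_b[j]
            agree += same_a == same_b  # both together, or both apart
            total += 1
        return agree / total

    truth = [0, 0, 1, 1]  # e.g. known crime series membership
    pred = [0, 0, 1, 2]   # a clustering that splits the second series
    ri = rand_index(truth, pred)  # 5 of 6 pairs agree
    ```

    The Series Rand index mentioned in the abstract is a variant adapted to crime series and is not reproduced here.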

  • 196.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eliasson, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Detecting Crime Series Based on Route Estimation and Behavioral Similarity2017In: 2017 EUROPEAN INTELLIGENCE AND SECURITY INFORMATICS CONFERENCE (EISIC) / [ed] Brynielsson, J, IEEE , 2017, p. 1-8Conference paper (Refereed)
    Abstract [en]

    A majority of crimes are committed by a minority of offenders. Previous research has provided some support for the theory that serial offenders leave behavioral traces at the crime scene which could be used to link crimes to serial offenders. The aim of this work is to investigate to what extent geographic route estimations and behavioral data can be used to detect serial offenders. Experiments were conducted using behavioral data from authentic burglary reports to investigate whether it was possible to find crime routes with high similarity. Further, burglary reports from serial offenders were used to investigate to what extent serial offender crime routes could be detected. The results show that crime series with the same offender had, on average, a higher behavioral similarity than random crime series. Sets of crimes with high similarity, but without a known offender, would be interesting for law enforcement to investigate further. The algorithm is also evaluated on 9 crime series containing a maximum of 20 crimes per series. The results suggest that it is possible to detect crime series with high similarity using analysis of both geographic routes and behavioral data recorded at crime scenes.
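    As a rough sketch of the route-estimation idea, assuming each crime is a (timestamp, (lat, lon)) pair; the representation and helper names are illustrative assumptions, not the paper's algorithm:

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(p, q):
        """Great-circle distance in km between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def route_length_km(crimes):
        """Length of the route visiting crime scenes in time order."""
        ordered = [pos for _, pos in sorted(crimes)]
        return sum(haversine_km(a, b) for a, b in zip(ordered, ordered[1:]))
    ```

    Ordering the scenes of a candidate series by time yields an estimated offender route whose geometry can then be compared across series, alongside the behavioral similarity of the reports.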

  • 197.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Eliasson, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Detecting Crime Series Based on Route Estimation and Behavioral Similarity2017Conference paper (Refereed)
    Abstract [en]

    A majority of crimes are committed by a minority of offenders. Previous research has provided some support for the theory that serial offenders leave behavioral traces at the crime scene which could be used to link crimes to serial offenders. The aim of this work is to investigate to what extent geographic route estimations and behavioral data can be used to detect serial offenders. Experiments were conducted using behavioral data from authentic burglary reports to investigate whether it was possible to find crime routes with high similarity. Further, burglary reports from serial offenders were used to investigate to what extent serial offender crime routes could be detected. The results show that crime series with the same offender had, on average, a higher behavioral similarity than random crime series. Sets of crimes with high similarity, but without a known offender, would be interesting for law enforcement to investigate further. The algorithm is also evaluated on 9 crime series containing a maximum of 20 crimes per series. The results suggest that it is possible to detect crime series with high similarity using analysis of both geographic routes and behavioral data recorded at crime scenes.

  • 198.
    Borg, Anton
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Melander, Ulf
    Boeva, Veselka
    Detecting serial residential burglaries using clustering2014In: Expert Systems with Applications, ISSN 0957-4174 , Vol. 41, no 11, p. 5252-5266Article in journal (Refereed)
    Abstract [en]

    According to the Swedish National Council for Crime Prevention, law enforcement agencies solved approximately three to five percent of the residential burglaries reported in 2012. Internationally, studies suggest that a large proportion of crimes are committed by a minority of offenders. Law enforcement agencies are consequently required to detect series of crimes, or linked crimes. Comparing crime reports is difficult today, as no systematic or structured way of reporting crimes exists, and no ability to search across multiple crime reports exists. This study presents a systematic data collection method for residential burglaries, together with a decision support system for comparing and analysing residential burglaries. The decision support system consists of an advanced search tool and a plugin-based analytical framework. In order to find similar crimes, law enforcement officers have to review a large number of crimes. The study investigates the potential use of the cut-clustering algorithm to group crimes based on their characteristics, in order to reduce the number of crimes to review in residential burglary analysis. The characteristics used are modus operandi, residential characteristics, stolen goods, spatial similarity, and temporal similarity. Clustering quality is measured using the modularity index and accuracy using the Rand index. The clustering solutions with the best quality scores were based on residential characteristics, spatial proximity, and modus operandi, suggesting that the choice of characteristic used when grouping crimes can positively affect the end result. The results suggest that a high-quality clustering solution performs significantly better than a random guesser. In terms of practical significance, the presented clustering approach is capable of reducing the number of cases to review while keeping most connected cases. While the approach might miss some connections, it is also capable of suggesting new ones. The results also suggest that while crime series clustering is feasible, further investigation is needed.
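    The modularity index mentioned above scores how well a partition separates a graph into densely connected groups. A minimal sketch of Newman modularity over an edge list (an illustration on a toy graph, not the study's implementation or data):

    ```python
    def modularity(edges, communities):
        """Newman modularity Q of a partition of an undirected graph:
        Q = sum over communities of (within-edge fraction
            - expected fraction given node degrees)."""
        m = len(edges)
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        q = 0.0
        for nodes in communities:
            within = sum(1 for u, v in edges if u in nodes and v in nodes)
            d = sum(degree.get(n, 0) for n in nodes)
            q += within / m - (d / (2 * m)) ** 2
        return q

    # Two triangles joined by a single bridge edge: a clear two-cluster graph.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # about 0.357
    ```

    In the crime-analysis setting, nodes would be cases and edges high-similarity links; a high-modularity partition groups strongly linked cases together.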

  • 199.
    Borg, Markus
    et al.
    RISE SICS AB, SWE.
    Alegroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Runeson, Per
    Lunds Universitet, SWE.
    Software Engineers' Information Seeking Behavior in Change Impact Analysis: An Interview Study2017In: IEEE International Conference on Program Comprehension, IEEE Computer Society , 2017, p. 12-22Conference paper (Refereed)
    Abstract [en]

    Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to supporting information seeking are task-specific; thus, understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and on development artifacts that are not source code. We show that engineers exhibit different information seeking behaviors, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support over formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to this diversity in information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by enabling several seeking alternatives, including searching, browsing, and tracing. © 2017 IEEE.

  • 200.
    Borg, Markus
    et al.
    RISE Research Institutes of Sweden AB, SWE.
    Chatzipetrou, Panagiota
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Papatheocharous, Efi
    RISE Research Institutes of Sweden AB, SWE.
    Shah, Syed Muhammad Ali
    iZettle, SWE.
    Axelsson, Jakob
    RISE Research Institutes of Sweden AB, SWE.
    Selecting component sourcing options: A survey of software engineering's broader make-or-buy decisions2019In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, p. 18-34Article in journal (Refereed)
    Abstract [en]

    Context: Component-based software engineering (CBSE) is a common approach to develop and evolve contemporary software systems. When evolving a system based on components, make-or-buy decisions are frequent, i.e., whether to develop components internally or to acquire them from external sources. In CBSE, several different sourcing options are available: (1) developing software in-house, (2) outsourcing development, (3) buying commercial-off-the-shelf software, and (4) integrating open source software components. Objective: Unfortunately, there is little available research on how organizations select component sourcing options (CSO) in industry practice. In this work, we seek to contribute empirical evidence to CSO selection. Method: We conduct a cross-domain survey on CSO selection in industry, implemented as an online questionnaire. Results: Based on 188 responses, we find that most organizations consider multiple CSOs during software evolution, and that the CSO decisions in industry are dominated by expert judgment. When choosing between candidate components, functional suitability acts as an initial filter, then reliability is the most important quality. Conclusion: We stress that future solution-oriented work on decision support has to account for the dominance of expert judgment in industry. Moreover, we identify considerable variation in CSO decision processes in industry. Finally, we encourage software development organizations to reflect on their decision processes when choosing whether to make or buy components, and we recommend using our survey for a first benchmarking. © 2019
