  • Erlandsson, Fredrik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Bródka, Piotr
    Wrocław University of Science and Technology, Poland.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Johnson, Henric
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Do We Really Need To Catch Them All?: A New User-Guided Social Media Crawling Method2017In: Entropy, ISSN 1099-4300, E-ISSN 1099-4300Article in journal (Refereed)
    Abstract [en]

    With the growing use of popular social media services like Facebook and Twitter, it is hard to collect all content from the networks without access to the core infrastructure or paying for it. Thus, if all content cannot be collected, one must consider which data are of most importance. In this work we present a novel User-Guided Social Media Crawling method (USMC) that is able to collect data from social media, utilizing the wisdom of the crowd to decide the order in which user-generated content should be collected, to cover as many user interactions as possible. USMC is validated by crawling 160 Facebook public pages, containing 368 million users and 1.3 billion interactions, and it is compared with two other crawling methods. The results show that it is possible to cover approximately 75% of the interactions on a Facebook page by sampling just 20% of its posts, and at the same time reduce the crawling time by 53%. What is more, the social network constructed from the 20% sample has more than 75% of the users and edges compared to the social network created from all posts, and has a very similar degree distribution.
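    The crowd-guided ordering at the heart of USMC can be illustrated with a minimal sketch (hypothetical names; using a post's interaction count as the crowd signal is an assumption here — the paper's actual ranking may differ):

```python
def crawl_order(posts):
    """Order posts for crawling by a crowd signal, here the interaction count:
    fetch the posts users engaged with most first, so that a small sample of
    posts covers a large share of all interactions."""
    return sorted(posts, key=lambda p: p[1], reverse=True)

def interaction_coverage(posts, fraction):
    """Share of all interactions covered after crawling `fraction` of the posts."""
    ordered = crawl_order(posts)
    n = max(1, int(len(ordered) * fraction))
    total = sum(count for _, count in posts)
    return sum(count for _, count in ordered[:n]) / total

# A skewed engagement distribution: sampling 20% of the posts
# already covers most of the interactions.
posts = [("p1", 80), ("p2", 10), ("p3", 5), ("p4", 3), ("p5", 2)]
print(interaction_coverage(posts, 0.2))  # -> 0.8
```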

  • Public defence: 2017-12-14 10:00
    Pilthammar, Johan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering.
    Elastic Press and Die Deformations in Sheet Metal Forming Simulations 2017Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Never before has the car industry been as challenging, interesting, and demanding as it is today. New and advanced techniques are being continuously introduced, which has led to increasing competition in an almost ever-expanding car market. As the pace and complexity heighten in the car market, manufacturing processes must advance at an equal speed.

    An important manufacturing process within the automotive industry, and the focus of this thesis, is sheet metal forming (SMF). Sheet metal forming is used to create door panels, structural beams, and trunk lids, among other parts, by forming sheets of metal in press lines with stamping dies. The SMF process has been simulated for the past couple of decades with finite element (FE) simulations, whereby one can predict factors such as shape, strains, thickness, springback, risk of failure, and wrinkles. A factor that most SMF simulations do not currently include is the die and press elasticity. This factor is handled manually during the die tryout phase, which is often long and expensive.

    The importance of accurately representing press and die elasticity in SMF simulations is the focus of this research project. The research objective is to achieve virtual tryout and improved production support through SMF simulations that consider elastic die and press deformations. Loading a die with production forces and including the deformations in SMF simulations achieves a reliable result. It is impossible to achieve accurate simulation results without including the die deformations.

    This thesis also describes numerical methods for optimizing and compensating tool surfaces against press and die deformations. In order for these compensations to be valid, it is imperative to accurately represent dies and presses. A method of measuring and inverse modeling the elasticity of a press table has been developed and is based on digital image correlation (DIC) measurements and structural optimization with FE software.

    Optimization, structural analysis, and SMF simulations together with experimental measurements have immense potential to improve simulation results and significantly reduce the lead time of stamping dies. Last but not least, improved production support and die design are other areas that can benefit from these tools.

  • Femmer, Henning
    et al.
    Technical University Munich, GER.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Which requirements artifact quality defects are automatically detectable?: A case study2017In: , 2017Conference paper (Refereed)
    Abstract [en]

    [Context:] The quality of requirements engineering artifacts, e.g. requirements specifications, is acknowledged to be an important success factor for projects. Therefore, many companies spend significant amounts of money to control the quality of their RE artifacts. To reduce spending and improve the RE artifact quality, methods were proposed that combine manual quality control, i.e. reviews, with automated approaches. [Problem:] So far, we have seen various approaches to automatically detect certain aspects in RE artifacts. However, we still lack an overview of what can and cannot be automatically detected. [Approach:] Starting from an industry guideline for RE artifacts, we classify 166 existing rules for RE artifacts along various categories to discuss the share and the characteristics of those rules that can be automated. For those rules that cannot be automated, we discuss the main reasons. [Contribution:] We estimate that 53% of the 166 rules can be checked automatically, either perfectly or with a good heuristic. Most rules need only simple techniques for checking. The main reason why some rules resist automation is their imprecise definition. [Impact:] By giving first estimates and analyses of automatically detectable and not automatically detectable rule violations, we aim to provide an overview of the potential of automated methods in requirements quality control.

  • Unterkalmsteiner, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Process Improvement Archaeology: What led us here and what’s next?In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194Article in journal (Refereed)
    Abstract [en]

    While in every organization corporate culture and history change over time, intentional efforts to identify performance problems are of particular interest when trying to understand the current state of an organization. The results of past improvement initiatives can shed light on the evolution of an organization and represent, with the advantage of perfect hindsight, a learning opportunity for future process improvements. We encountered the opportunity to test this premise in an applied research collaboration with the Swedish Transport Administration (STA), the government agency responsible for the planning, implementation and maintenance of long-term rail, road, shipping and aviation infrastructure in Sweden.

  • Papatheocharous, Efi
    et al.
    RISE SICS, SWE.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cicchetti, Antonio
    Mälardalens University, SWE.
    Sentilles, Séverine
    Mälardalens University, SWE.
    Muhammad Ali Shah, Syed
    RISE SICS, SWE.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Decision support for choosing architectural assets in the development of software-intensive systems: The GRADE taxonomy2015In: ECSAW '15 Proceedings of the 2015 European Conference on Software Architecture Workshops / [ed] Matthias Galster, ACM Digital Library, 2015, 48Conference paper (Refereed)
    Abstract [en]

    Engineering software-intensive systems is a complex process that typically involves making many critical decisions. A continuous challenge during system design, analysis and development is deciding on the reference architecture that could reduce risks and deliver the expected functionality and quality of a product or a service to its users. The lack of documented evidence on strategies supporting decision-making in the selection of architectural assets in systems and software engineering creates an impediment to learning, improving and also reducing the risks involved. In order to fill this gap, ten experienced researchers in the field of decision support for the selection of architectural assets in engineering software-intensive systems conducted a workshop to reduce traceability of strategies and define a dedicated taxonomy. The result was the GRADE taxonomy, whose key elements can be used to support decision-making, as exemplified through a real case instantiation for validation purposes. The overall aim is to support future work of researchers and practitioners on decision-making in the context of architectural assets in the development of software-intensive systems. The taxonomy may be used in three ways: (i) to identify new opportunities in structuring decisions; (ii) to support the review of alternatives and enable informed decisions; and (iii) to evaluate decisions by retrospectively describing the decisions, the factors impacting them, and the outcomes.

  • Wagner, Stefan
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Secondary Characteristic Classes of Lie Algebra ExtensionsManuscript (preprint) (Other academic)
    Abstract [en]

    We introduce a notion of secondary characteristic classes of Lie algebra extensions. As a spin-off of our construction we obtain a new proof of Lecomte’s generalization of the Chern–Weil homomorphism.

  • Wagner, Stefan
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Schwieger, Kay
    iteratec GmbH, GER.
    Noncommutative Coverings of Quantum ToriManuscript (preprint) (Other academic)
    Abstract [en]

    We introduce a framework for coverings of noncommutative spaces. Moreover, we study noncommutative coverings of irrational quantum tori and characterize all such coverings that are connected in a reasonable sense.

  • Cheddad, Abbas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Object recognition using shape growth pattern2017In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis (ISPA), IEEE Computer Society Digital Library, 2017, 47-52 p.Conference paper (Refereed)
    Abstract [en]

    This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics, which we hereafter term the shape growth pattern (SGP). This work investigates the feasibility of such a notion and also builds upon our prior work on structure-preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods in the classification process (convolutional neural networks (CNN) and random forests (RF)), since they perform well in pattern recognition tasks. The results show a clear improvement in the proposed approach's classification accuracy (especially for data sets with limited training samples) as well as robustness against noise when compared to existing methods.
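    The growth-pattern idea can be sketched as follows: the feature vector is the shape's area after each successive dilation. This is a simplified sketch using a plain 4-connected structuring element on a point-set representation; the paper's Delaunay-based, structure-preserving dilation is more involved.

```python
def dilate(shape):
    """One binary dilation step with a 4-connected (cross) structuring element;
    `shape` is a set of (x, y) pixel coordinates."""
    return shape | {(x + dx, y + dy)
                    for (x, y) in shape
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}

def shape_growth_pattern(shape, n_dilations=10):
    """Area of the shape after each successive dilation: the SGP feature
    vector that augments the usual bank of binary-image features."""
    areas = []
    for _ in range(n_dilations):
        shape = dilate(shape)
        areas.append(len(shape))
    return areas

square = {(x, y) for x in range(3) for y in range(3)}  # a 3x3 square
print(shape_growth_pattern(square, 3))  # the area grows with each dilation
```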

  • Trojer, Lena
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics. BTH.
    Sharing Fragile Future: feminist technoscience in contexts of implication2017Book (Refereed)
    Abstract [en]

    Like a winding string passing tryings at risk, this book is my endeavour to make explicit the situatedness and responsibility of research and researchers in the trouble, be it in the ‘grand challenges’ of our time or in the very local challenges of survival. Efforts to promote more complex and integrated understandings of ‘society in science’, or of science as a political arena, are urgent when facing the incalculabilities in our late modern spheres of society. There is no doubt that technologies co-evolve out of interactions in specific contexts. This implies that the responsibility for where and how technologies travel, and with what use, is a collective one. No innocent position exists. The demand on us as knowledge and technology producers is focused on the direct reality-producing consequences of our research, and thus puts us right into the context of implication.

    The frames of understanding are developed within feminist technoscience, linked to practitioners and writers of mode 2 knowledge production. How can feminist research, as well as other research disciplines taking a critical view of science, mobilize the transformatory potential needed?

    Part I presents insights into needed relocations in (onto)epistemological infrastructures and Part II a positioning in the fields of feminist research and feminist technoscience. Part III includes experiences and discussions about two political dimensions – research political initiatives to support feminist research followed by reflections on the convergence of science and politics. Part IV offers examples of research in contexts of not only application but implication.

  • Erlandsson, Fredrik
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Bródka, Piotr
    Wrocƚaw University of Technology, POL.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Seed selection for information cascade in multilayer networks2017In: Complex Networks & Their Applications VI: Proceedings of the 6th International Workshop on Complex Networks and their Applications (COMPLEX NETWORKS 2017), Springer-Verlag New York, 2017Chapter in book (Refereed)
    Abstract [en]

    Information spreading is an interesting field in the domain of online social media. In this work, we investigate how different seed selection strategies affect spreading processes simulated using the independent cascade model on eighteen multilayer social networks. Fifteen networks are built from user interaction data extracted from Facebook public pages, and three of them are multilayer networks downloaded from a public repository (two of them being Twitter networks). The results indicate that various state-of-the-art seed selection strategies for single-layer networks, like K-Shell or VoteRank, do not perform as well on multilayer networks and are outperformed by Degree Centrality.
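    The spreading process that the seed strategies are evaluated on, the independent cascade model, can be sketched on a single-layer graph (the paper works with multilayer networks; this single-layer version and all names in it are a simplification, not the paper's implementation):

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """One run of the independent cascade model: each newly activated node
    gets a single chance to activate each inactive neighbour with
    probability p. `graph` maps node -> list of neighbours."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in graph.get(node, []):
                if neighbour not in active and rng.random() < p:
                    active.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return active

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(independent_cascade(graph, ["a"], p=1.0))  # with p=1 every node is reached
```

A seed selection strategy is then scored by the average cascade size its seed set produces over many such runs.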

  • Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Liyao, Ma
    University of Jinan, CHI.
    Wei, Cheng
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wei, Wen
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Prashant, Goswami
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Guohua, Bai
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    An Improved k-Nearest Neighbours Method for Traffic Time Series Imputation2017Conference paper (Refereed)
    Abstract [en]

    Intelligent transportation systems (ITS) are becoming more and more effective, benefiting from big data. Despite this, missing data is a problem that prevents many prediction algorithms in ITS from working effectively. Much work has been done to impute those missing data. Among different imputation methods, k-nearest neighbours (kNN) has shown excellent accuracy and efficiency. However, the general kNN is designed for matrices rather than time series, so it does not exploit time series characteristics such as windows and gap-sensitive weights. This work introduces gap-sensitive windowed kNN (GSW-kNN) imputation for time series. The results show that GSW-kNN is 34% more accurate than benchmarking methods, and it remains robust even when the missing ratio increases to 90%.
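    The windowed-kNN idea can be sketched as follows. This is a simplified sketch, not the GSW-kNN algorithm itself: it compares the window preceding a gap with every other window in the series and averages the values that follow the k most similar windows; the function and parameter names are illustrative.

```python
import math

def knn_impute(series, idx, window=3, k=5):
    """Impute the missing value series[idx] (marked as None): compare the
    `window` values preceding the gap with every other complete window in
    the series and average the values following the k most similar ones."""
    pattern = series[idx - window:idx]
    candidates = []
    for j in range(window, len(series)):
        if j == idx:
            continue
        win = series[j - window:j]
        if any(v is None for v in win) or series[j] is None:
            continue
        dist = math.dist(win, pattern)  # Euclidean distance between windows
        candidates.append((dist, series[j]))
    candidates.sort(key=lambda t: t[0])
    nearest = [value for _, value in candidates[:k]]
    return sum(nearest) / len(nearest)

# A periodic flow series with one missing reading (None):
series = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, None]
print(knn_impute(series, 8, window=2, k=2))  # -> 3.0
```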

  • Sun, Bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Wei, Cheng
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Prashant, Goswami
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Guohua, Bai
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Flow-Aware WPT k-Nearest Neighbours Regression for Short-Term Traffic Prediction2017In: Proceedings - IEEE Symposium on Computers and Communications, Institute of Electrical and Electronics Engineers (IEEE), 2017, Vol. 07, 48-53 p., 8024503Conference paper (Refereed)
    Abstract [en]

    Robust and accurate traffic prediction is critical in modern intelligent transportation systems (ITS). One widely used method for short-term traffic prediction is k-nearest neighbours (kNN). However, choosing the right parameter values for kNN is problematic. Although many studies have investigated this problem, they did not consider all parameters of kNN at the same time. This paper aims to improve kNN prediction accuracy by tuning all parameters simultaneously with respect to dynamic traffic characteristics. We propose weighted parameter tuples (WPT) to calculate the weighted average dynamically according to the flow rate. Comprehensive experiments are conducted on one year of real-world data. The results show that flow-aware WPT kNN performs better than manually tuned kNN as well as benchmark methods such as extreme gradient boosting (XGB) and seasonal autoregressive integrated moving average (SARIMA). Thus, it is recommended to use dynamic parameters regarding traffic flow and to consider all parameters at the same time.

  • Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Strategizing and Evaluating the Onboarding of Software Developers in Large-Scale Globally Distributed Legacy Projects2017Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Recruitment and onboarding of software developers are essential steps in software development undertakings. The need for adding new people is often associated with large-scale long-living projects and globally distributed projects. The former are challenging because they may contain large amounts of legacy (and often complex) code (legacy projects). The latter are challenging because the inability to find sufficient resources in-house may lead to onboarding people at a distance, and often in many distinct sites. While onboarding is of great importance for companies, there is little research about the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed projects with large amounts of legacy code. Furthermore, no study has proposed any systematic approach to support the design of onboarding strategies and the evaluation of onboarding results in the aforementioned context.

    Objective: The aim of this thesis is two-fold: i) identify the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed legacy projects; and ii) propose solutions to support the design of onboarding strategies and evaluation of onboarding results in large-scale globally distributed legacy projects.

    Method: In this thesis, we employed literature review, case study, and business process modeling. The main case investigated in this thesis is the development of a legacy telecommunication software product in Ericsson.

    Results: The results show that the performance (productivity, autonomy, and lead time) of new developers/teams onboarded in remote locations in large-scale distributed legacy projects is much lower than the performance of mature teams. This suggests that new teams have a considerable performance gap to overcome. Furthermore, we learned that onboarding problems can be amplified by the following challenges: the complexity of the product and technology stack, distance to the main source of product knowledge, lack of team stability, training expectation misalignment, and lack of formalism and control over onboarding strategies employed in different sites of globally distributed projects. To help companies addressing the challenges we identified in this thesis, we propose a process to support the design of onboarding strategies and the evaluation of onboarding results.

    Conclusions: The results show that scale, distribution and complex legacy code may make onboarding more difficult and demand longer periods of time for new developers and teams to achieve high performance. This means that onboarding in large-scale globally distributed legacy projects must be planned well ahead, and companies must be prepared to provide extended periods of mentoring by expensive and scarce resources, such as software architects. Failure to foresee and plan for such resources may result in inaccurate effort estimates on the one hand, and unavailability of mentors on the other. The process put forward herein can help companies to deal with the aforementioned problems through more systematic, effective and repeatable onboarding strategies.

  • Jagtap, Santosh
    et al.
    Delft University of Technology.
    Kandachar, Prabhu
    Delft University of Technology.
    Towards Linking Disruptive Innovations and BOP Markets2009Conference paper (Refereed)
    Abstract [en]

    The base of the world economic pyramid consists of 4 billion people typically earning less than 4 USD per day. This population is generally called the base of the pyramid (BoP). Much research on BoP markets focuses on motivating companies to enter these markets to create a win-win situation, such that companies can gain benefits and BoP customers can satisfy their unmet or under-served needs. The reviewed literature suggests the need for innovation to successfully deploy products and services in these BoP markets. The reviewed research on disruptive innovations suggests that these innovations provide a good opportunity in new markets, in contrast to companies’ mainstream markets. This paper presents the findings of the initial phase of our research, and attempts to demonstrate that the BoP can present a potential new market for companies to successfully employ disruptive innovations. This is shown by synthesizing the reviewed literature on: (1) design, development, marketing, and distribution of products and services in BoP markets; and (2) disruptive innovations.

  • Jagtap, Santosh
    Delft University of Technology.
    Requirements and Use of In-Service Information in an Engineering Redesign Task: Case Studies From the Aerospace Industry2010In: Journal of the Association for Information Science and Technology, ISSN 2330-1635, E-ISSN 2330-1643Article in journal (Refereed)
    Abstract [en]

    This article describes the research stimulated by a fundamental shift that is occurring in the manufacture and marketing of aero engines for commercial and defense purposes, away from the selling of products to the provision of services. This research was undertaken in an aerospace company, which designs and manufactures aero engines and also offers contracts under which it remains responsible for the maintenance of engines. These contracts allow the company to collect far more data about the in-service performance of their engines than was previously available. This article aims at identifying what parts of this in-service information are required when components or systems of existing engines need to be redesigned because they have not performed as expected in service. In addition, this article aims at understanding how designers use this in-service information in a redesign task. In an attempt to address these aims, we analyzed five case studies involving redesign of components or systems of an existing engine. The findings show that the in-service information accessed by the designers mainly contains the undesired physical actions (e.g., deterioration mechanisms, deterioration effects, etc.) and the causal chains of these undesired physical actions. We identified a pattern in the designers’ actions regarding the use of these causal chains. The designers generated several solutions that utilized the causal chains seen in the in-service information. The findings provide a sound basis for developing tools and methods to support designers in effectively satisfying their in-service information requirements in a redesign task.

  • Usman, Muhammad
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Damm, Lars-Ola
    Ericsson, SWE.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Effort Estimation in Large-Scale Software Development: An Industrial Case StudyIn: Article in journal (Refereed)
    Abstract [en]

    Context: Software projects frequently incur schedule and budget overruns. Planning and estimation are particularly challenging in large and globally distributed projects. While software engineering researchers have been investigating effort estimation for many years to help practitioners to improve their estimation processes, there is little research about effort estimation in large-scale distributed agile projects. Objective: The main objective of this paper is three-fold: i) to identify how effort estimation is carried out in large-scale distributed agile projects; ii) to analyze the accuracy of the effort estimation processes in large-scale distributed agile projects; and iii) to identify the factors that impact the accuracy of effort estimates in large-scale distributed agile projects. Method: We performed an exploratory longitudinal case study. The data collection was operationalized through archival research and semi-structured interviews. Results: The main findings of this study are: 1) underestimation is the dominant trend in the studied case, 2) re-estimation at the analysis stage improves the accuracy of the effort estimates, 3) requirements with large size/scope incur larger effort overruns, 4) immature teams incur larger effort overruns, 5) requirements developed in multi-site settings incur larger effort overruns as compared to requirements developed in a collocated setting, and 6) requirements priorities impact the accuracy of the effort estimates. Conclusion: Effort estimation is carried out at the quotation and analysis stages in the studied case. It is a challenging task involving coordination amongst many different stakeholders. Furthermore, lack of details and changes in requirements, immaturity of the newly on-boarded teams, and the challenges associated with the large scale add complexities in the effort estimation process.

  • Britto, Ricardo
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Institute of Technology.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lars-Ola, Damm
    Ericsson, SWE.
    Experiences from Measuring Learning and Performance in Large-Scale Distributed Software Development2016In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM Digital Library, 2016, 17Conference paper (Refereed)
    Abstract [en]

    Background: Developers and development teams in large-scale software development are often required to learn continuously. Organizations also face the need to train and support new developers and teams onboarded in ongoing projects. Although learning is associated with performance improvements, experience shows that training and learning do not always result in better performance, or significant improvements may take too long.

    Aims: In this paper, we report our experiences from establishing an approach to measure learning results and associated performance impact for developers and teams in Ericsson.

    Method: Experiences reported herein are a part of an exploratory case study of an on-going large-scale distributed project in Ericsson. The data collected for our measurements included archival data and expert knowledge acquired through both unstructured and semi-structured interviews. While performing the measurements, we faced a number of challenges, documented in the form of lessons learned.

    Results: We aggregated our experience in eight lessons learned related to collection, preparation and analysis of data for further measurement of learning potential and performance in large-scale distributed software development.

    Conclusions: Measuring learning and performance is a challenging task. Major problems were related to data inconsistencies caused by, among other factors, distributed nature of the project. We believe that the documented experiences shared herein can help other researchers and practitioners to perform similar measurements and overcome the challenges of large-scale distributed software projects, as well as proactively address these challenges when establishing project measurement programs.

  • Silvander, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards Intent-Driven Systems2017Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Context: Software supporting an enterprise’s business, also known as a business support system, needs to support the correlation of activities between actors as well as influence the activities based on knowledge about the value networks in which the enterprise acts. This can be supported with the help of intent-driven systems. The aim of intent-driven systems is to capture stakeholders’ intents and transform these into a form that enables computer processing of them. Only then are different machine actors able to negotiate with each other on behalf of their respective stakeholders and their intents, and suggest a mutually beneficial agreement.

    Objective: When building a business support system it is critical to separate the business model of the business support system itself from the business models used by the enterprise which is using the business support system. The core idea of intent-driven systems is the possibility to change behavior of the system itself, based on stakeholder intents. This requires a separation of concerns between the parts of the system used to execute the stakeholder business, and the parts which are used to design the business based on stakeholder intents. The business studio is a software that supports the realization of business models used by the enterprise by configuring the capabilities provided by the business support system. The aim is to find out how we can support the design of a business studio which is based on intent-driven systems.

    Method: We use the design science framework as our research framework. During our design science study we have used the following research methods: systematic literature review, case study, quasi-experiment, and action research.

    Results: We have produced two design artifacts as a first step towards supporting the design of a business studio: the models and quasi-experiment in Chapter 3, and the action research in Chapter 4. The models developed during the case study have proved to be a valuable artifact for the stakeholder, and the stakeholder regards the results of the quasi-experiment and the action research as new problem-solving knowledge.

    Conclusion: The synthesis shows a need for further research regarding semantic interchange of information, actor interaction in intent-driven systems, and the governance of intent-driven systems.

  • Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Borg, Anton
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A statistical method for detecting significant temporal hotspots using LISA statistics2017Conference paper (Refereed)
    Abstract [en]

    This work presents a method for detecting statistically significant temporal hotspots, i.e. the date and time of events, which is useful for improved planning of response activities. Temporal hotspots are calculated using Local Indicators of Spatial Association (LISA) statistics. The temporal data is arranged in a 7x24 matrix that represents a temporal resolution of weekdays and hours-of-the-day. Swedish residential burglary events are used in this work to test the temporal hotspot detection approach, although the presented method is also applicable to other events as long as they contain temporal information, e.g. attack attempts recorded by intrusion detection systems. By using the method for detecting significant temporal hotspots, domain experts can gain knowledge about the temporal distribution of the events and learn at which times mitigating actions could be implemented.
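
    A LISA-style computation on such a weekday-by-hour matrix can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name, the 4-neighbour (rook) contiguity weights, and the synthetic burglary counts are all assumptions made for the example.

    ```python
    import numpy as np

    def local_morans_i(counts):
        """Local Moran's I for each cell of a weekday-by-hour (7x24)
        event-count matrix, using 4-neighbour (rook) contiguity with
        row-standardised weights. Illustrative simplification only."""
        z = (counts - counts.mean()) / counts.std()
        rows, cols = counts.shape
        lisa = np.zeros_like(z, dtype=float)
        for r in range(rows):
            for c in range(cols):
                neigh = []
                if r > 0:
                    neigh.append(z[r - 1, c])
                if r < rows - 1:
                    neigh.append(z[r + 1, c])
                if c > 0:
                    neigh.append(z[r, c - 1])
                if c < cols - 1:
                    neigh.append(z[r, c + 1])
                # Row-standardised weights reduce to the mean of the
                # neighbouring z-scores; a hot cell surrounded by hot
                # neighbours yields a large positive score.
                lisa[r, c] = z[r, c] * np.mean(neigh)
        return lisa

    # Synthetic weekday-by-hour counts with an injected Friday-evening peak
    rng = np.random.default_rng(0)
    counts = rng.poisson(2.0, size=(7, 24)).astype(float)
    counts[4, 18:22] += 15  # hypothetical hotspot: Friday 18:00-22:00
    scores = local_morans_i(counts)
    ```

    Cells with high positive scores mark candidate temporal hotspots; in the full method these would additionally be tested for statistical significance, e.g. via conditional permutation of the matrix values.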

  • Boldt, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    Malmö University, SWE.
    Baca, Dejan
    Fidesmo AB, SWE.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Introducing a novel security-enhanced agile software development process2017In: International Journal of Secure Software Engineering, ISSN 1947-3036, E-ISSN 1947-3044, Vol. 8, no 2Article in journal (Refereed)
    Abstract [en]

    In this paper, a novel security-enhanced agile software development process, SEAP, is introduced. It has been designed, tested, and implemented at Ericsson AB, specifically in the development of a mobile money transfer system. Two important features of SEAP are 1) that it adds dedicated security competences, and 2) that it includes the continuous execution of an integrated risk analysis for identifying potential threats. As a general finding of implementing SEAP in software development, the developers resolve a large proportion of the risks in a timely, yet cost-efficient manner. The default agile software development process at Ericsson AB, i.e. without SEAP, required significantly more employee hours per identified risk than the process with SEAP integrated. The default development process left 50.0% of the risks unattended in the software version that was released, while the application of SEAP reduced that figure to 22.5%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.9%, a more than fivefold increase.