  • 101.
    Arvola Bjelkesten, Kim
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kreativa teknologier.
    Feasibility of Point Grid Room First Structure Generation: A bottom-up approach (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Procedural generation becomes increasingly important for video games in an age where the scope of the required content demands both a lot of time and work. One of the fronts of this field is structure generation, where algorithms create models for the game developers to use. Objectives. This study aims to explore the feasibility of the bottom-up approach within the field of structure generation for video games. Methods. An algorithm using the bottom-up approach, PGRFSG, was developed, and a user study was used to validate the results. Each participant evaluated five structures, giving each a score based on whether it belongs in a video game. Results. The participants' evaluations show that among the generated structures were some that definitely belonged in a video game world. Two of the five structures got a high score, though one structure was deemed not to belong. Conclusions. Based on the results presented, it can be concluded that the PGRFSG algorithm creates structures that belong in a video game world and that the bottom-up approach is a suitable one for structure generation.

  • 102.
    Aryal, Dhiraj
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Shakya, Anup
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    A Taxonomy of SQL Injection Defense Techniques (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: SQL injection attacks (SQLIA) pose a serious threat to web applications by allowing attackers to gain unhindered access to the underlying databases, which may contain sensitive information. Many methods and techniques have been proposed by researchers and practitioners to mitigate the SQL injection problem. However, deploying those methods and techniques without a clear understanding can induce a false sense of security, and a classification of such techniques helps to dispel it. Objectives: This paper focuses on classifying such techniques by building a taxonomy of SQL injection defense techniques. Methods: A systematic literature review (SLR) was conducted using five reputed and familiar e-databases: IEEE, ACM, Engineering Village (Inspec/Compendex), ISI Web of Science and Scopus. Results: 61 defense techniques were found, and based on these techniques a taxonomy of SQL injection defense techniques was built. Our taxonomy consists of various dimensions that can be grouped under two higher-order terms: detection method and evaluation criteria. Conclusion: The taxonomy provides a basis for comparison among different defense techniques. Organizations can use our taxonomy to choose suitable techniques depending on their available resources and environments. Moreover, this classification can lead towards a number of future research directions in the field of SQL injection.
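    The defense family at the root of most such taxonomies is the parameterized (prepared) query. As a minimal, hypothetical sketch (not taken from the thesis; the table and payload are invented for illustration), the following Python example contrasts a vulnerable string-built query with a parameterized one:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # a typical injection payload

    # Vulnerable: string concatenation lets the payload rewrite the query.
    vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())  # returns rows it should not

    # Defended: a parameterized query treats the payload as plain data.
    safe = "SELECT role FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns []
    ```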

  • 103.
    Asghari, Negin
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Evaluating GQM+ Strategies Framework for Planning Measurement System (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Most organizations are aware of the significance of software measurement programs in helping them assess and improve the ways they develop software. Measurement plays a vital role in improving software processes and products. However, the number of failing measurement programs is high and the reasons vary. A recent approach for planning measurement programs is GQM+Strategies, which makes an important extension to existing approaches: it links measurements and improvement activities to strategic goals and to ways of achieving these goals. However, concrete guidance on how to collect the information needed to use GQM+Strategies has not yet been provided in the literature. Objectives. The contribution of this research is to propose and assess an elicitation approach (the Goal Strategy Elicitation (GSE) approach) for the information needed to apply GQM+Strategies in an organization, which also leads to a partial evaluation of GQM+Strategies as such. In this thesis, the initial focus is placed on eliciting the goals and strategies in the most efficient way. Methods. The primary research approach used is action research, which allows a new method or technique to be assessed flexibly in an iterative manner, where the feedback of one iteration is taken into the next iteration, thus improving on the method or technique proposed. Complementary to that, we used a literature review with the primary focus to position the work, explore GQM+Strategies, and determine which elicitation approaches for the support of measurement programs have been proposed. Results. The Goal Strategy Elicitation (GSE) approach was developed as a tool for eliciting goals and strategies within the software organization to contribute to planning a measurement program. The iterations showed that the elicitation approach should not be too structured (e.g. template/notation based), but rather should support the stakeholders in expressing their thoughts relatively freely. Hence, the end result was an interview guide, not based on notations (as in the first iteration), asking questions in a way that the interviewees are able to express themselves easily without having to e.g. distinguish definitions for goals and strategies. Conclusions. We conclude that the GSE approach is a strong tool for the software organization to elicit the goals and strategies to support GQM+Strategies. The GSE approach evolved in each iteration, and the latest iteration together with the guideline is still used within the studied company for eliciting goals and strategies; the organization acknowledged that it will continue to do so. Moreover, we conclude that there is a need for further empirical validation of the GSE approach in full-scale industry trials.

  • 104.
    Asif, Sajjad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Investigating Web Size Metrics for Early Web Cost Estimation (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Web engineering is a new research field which utilizes engineering principles to produce quality web applications. Web applications have become more complex over time, and it is quite difficult to analyze web metrics for estimation given the wide range of web applications. Correct estimates of web development effort play a very important role in the success of large-scale web development projects.

    Objectives. In this study I investigated size metrics and cost drivers used by web companies for early web cost estimation. I also aimed to validate the findings through industrial interviews and a web quote form, designed from the metrics that occurred most frequently across the analyzed companies. Secondly, this research revisits previous work by Mendes (a senior researcher and contributor in this research area) to check whether early web cost estimation trends have stayed the same or changed. The ultimate goal is to help companies with web cost estimation.

    Methods. The first research question was answered by conducting an online survey of 212 web companies and examining their web predictor forms (quote forms). All companies included in the survey used web forms to give quotes on web development projects based on gathered size and cost measures. The second research question was answered by identifying the most frequently occurring size metrics from the results of Survey 1. The list of size metrics was validated by two methods: (i) industrial interviews conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form designed using the validated results from the industrial interviews and sent to web companies around the world to seek data on real web projects. The data gathered from web projects were analyzed using a CBR tool, and the results were validated against the industrial interview results along with Survey 1. The final results were compared with the earlier research to answer the third research question, whether size metrics have changed. All research findings were contributed to the Tukutuku research benchmark project.

    Results. "Number of pages/features" and "responsive implementation" are the top web size metrics for early web cost estimation.

    Conclusions. This research investigated metrics that can be used for web cost estimation at the early stage of web application development, when the application is not yet built, requirements are still being collected, and an expected cost estimate is being evaluated. A list of new metric variables that can be added to the Tukutuku project is presented.

  • 105.
    Ask, Anna Vikström
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Reasons for fire fighting in projects (2003). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    This work is a study examining the causes of fire fighting in software projects. Fire fighting is the practice of reactive management, i.e. focus being put on solving the problem of the moment. The study is performed in two parts: one part is a literature study examining what academia considers to be the reasons for fire fighting and how to minimise the problem; the other is an interview series performed in industry with the purpose of finding what practitioners consider the causes of the fire fighting phenomenon. The interview series indicates that the main causes are problems related to requirements and problems caused by persons with key knowledge leaving the project.

  • 106.
    Asklund, Ulf
    et al.
    Lund University, SWE.
    Höst, Martin
    Lund University, SWE.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Experiences from Monitoring Effect of Architectural Changes (2016). In: Software Quality: The Future of Systems- and Software Development / [ed] Winkler, Dietmar; Biffl, Stefan; Bergsmann, Johannes, 2016, pp. 97-108. Conference paper (Refereed)
    Abstract [en]

    A common situation is that an initial architecture has been sufficient in the initial phases of a project, but when the size and complexity of the product increase, the architecture must be changed. In this paper, experiences are presented from changing an architecture into independent units, providing basic reuse of main functionality although giving higher priority to independence than to reuse. An objective was also to introduce metrics in order to monitor the architectural changes. The change was studied in a case study through weekly meetings with the team, collected metrics, and questionnaires. The new architecture was well received by the development team, who found it to be less fragile. Concerning the metrics for monitoring, it was concluded that a high abstraction level was useful for the purpose.

  • 107.
    Aslam, Khurum
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Khurum, Mahvish
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    A Model for Early Requirements Triage and Selection Utilizing Product Strategies (2007). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    In market-driven product development, large numbers of requirements flow in continuously. It is critical for product management to select the requirements aligned with overall business goals and discard others as early as possible. It has been suggested in the literature to utilize product strategies for early requirements triage and selection. However, no explicit method/model/framework has been suggested for how to do it. This thesis presents a model for early requirements triage and selection utilizing product strategies, based on a literature study and interviews with people at two organizations about their requirements triage and selection processes and product strategy formulation. The model is validated statically within the same two organizations.

  • 108. Aspvall, Bengt
    et al.
    Halldorsson, MM
    Manne, F
    Approximations for the general block distribution of a matrix (2001). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 262, no. 1-2, pp. 145-160. Journal article (Refereed)
    Abstract [en]

    The general block distribution of a matrix is a rectilinear partition of the matrix into orthogonal blocks such that the maximum sum of the elements within a single block is minimized. This corresponds to partitioning the matrix onto parallel processors so as to minimize processor load while maintaining regular communication patterns. Applications of the problem include various parallel sparse matrix computations, compilers for high-performance languages, particle-in-cell computations, video and image compression, and simulations associated with a communication network. We analyze the performance guarantee of a natural and practical heuristic based on iterative refinement, which has previously been shown to give good empirical results. When p² is the number of blocks, we show that the tight performance ratio is Θ(√p). When the matrix has rows of large cost, the details of the objective function of the algorithm are shown to be important, since a naive implementation can lead to an Ω(p) performance ratio. Extensions to more general cost functions, higher-dimensional arrays, and randomized initial configurations are also considered. © 2001 Elsevier Science B.V. All rights reserved.
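    To make the objective concrete, here is a minimal sketch (invented for illustration, not the paper's algorithm) of the cost function the iterative refinement heuristic tries to minimize: the load of the heaviest block under a given rectilinear cut placement.

    ```python
    import numpy as np

    def block_cost(matrix, row_cuts, col_cuts):
        """Cost of a general block distribution: the maximum element sum
        over the orthogonal blocks induced by the row and column cuts."""
        bands = np.split(matrix, row_cuts, axis=0)
        blocks = [blk for band in bands
                  for blk in np.split(band, col_cuts, axis=1)]
        return max(blk.sum() for blk in blocks)

    # A 4x4 matrix partitioned into 2x2 = 4 blocks (p = 2).
    m = np.arange(16).reshape(4, 4)
    print(block_cost(m, [2], [2]))  # load of the most heavily loaded block
    ```

    An iterative refinement heuristic would repeatedly move the cut positions and re-evaluate this maximum until no move improves it.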

  • 109.
    Auer, Florian
    et al.
    University of Innsbruck, AUT.
    Felderer, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Lenarduzzi, Valentina
    Tampere University of Technology, FIN.
    Towards defining a microservice migration framework (2018). In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2018, Vol. Part F147763. Conference paper (Refereed)
    Abstract [en]

    Microservices are becoming more and more popular. As a result, some companies have started to believe that microservices are the solution to all of their problems and rush to adopt microservices without sufficient knowledge about the impacts. Most of the time they expect to decrease their maintenance effort or to ease the deployment process. However, re-architecting a system to microservices is not always beneficial. In this work we propose a work plan to identify a decision framework that supports practitioners in understanding the possible migration-based benefits and issues. This will lead to more reasoned decisions and mitigate the risk of migration. © 2018 Copyright held by the owner/author(s).

  • 110.
    Augustsson, Christopher
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Dynamic vs Static user-interface: Which one is easier to learn? And will it make you more efficient? (2019). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Excel offers great flexibility and allows non-programmers to create complex functionality, but at the same time it can become very nested, with cells pointing to other cells, especially if there have been many changes over an extended period. This has happened to ICS, a small company whose focus is on calibration, among an array of different things relating to material testing. The system they have for field calibrations today has become overly complicated and hard to maintain, and consists of multiple Excel spreadsheets. The conclusion has been that a new system needs to be developed, but the question of how remains. By creating a prototype using modern web technologies, this study has evaluated whether a web application can meet the specific functional requirements ICS has and whether it is a suitable solution for a new system. The prototype was put through manual user tests, and the results show that the prototype meets all the requirements, meaning that a web application could work as a replacement. During the user tests, this study has also evaluated the differences in learnability and efficiency between the static user interface of the current Excel-based system and the dynamic user interface of the web-based prototype. The users performed a calibration with both systems, and parameters such as time to completion and number of errors made were recorded. By comparing the test results from both systems, this study has concluded that a dynamic user interface is more likely to improve learnability for novice users, but has a low impact on efficiency for expert users.

  • 111. Aurum, Aybüke
    et al.
    Jeffery, Ross; Wohlin, Claes; Handzic, Meliha
    Managing Software Engineering Knowledge (2003). Collection/Anthology (Other academic)
  • 112. Aurum, Aybüke
    et al.
    Petersson, Håkan
    Wohlin, Claes
    State-of-the-art: Software Inspections after 25 Years (2002). In: Software Testing, Verification & Reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 12, no. 3, pp. 133-154. Journal article (Refereed)
    Abstract [en]

    Software inspections, which were originally developed by Michael Fagan in 1976, are an important means to verify and achieve sufficient quality in many software projects today. Since Fagan's initial work, the importance of software inspections has long been recognized by software developers and many organizations. Various proposals have been made by researchers in the hope of improving Fagan's inspection method. The proposals include structural changes to the process and several types of support for the inspection process. Most of the proposals have been empirically investigated in different studies. This review paper focuses on the software inspection process in the light of Fagan's inspection method, and it summarizes and reviews other types of software inspection processes that have emerged in the last 25 years. The paper also addresses important issues related to the inspection process and examines experimental studies and their findings that are of interest, with the purpose of identifying future avenues of research in software inspection.

  • 113. Aurum, Aybüke
    et al.
    Wohlin, Claes
    A Value-Based Approach in Requirements Engineering: Explaining Some of the Fundamental Concepts (2007). Conference paper (Refereed)
  • 114. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Aligning Requirements with Business Objectives: A Framework for Requirements Engineering Decisions (2005). Conference paper (Refereed)
    Abstract [en]

    As software development continues to increase in complexity, involving far-reaching consequences, there is a need for decision support to improve the decision making process in requirements engineering (RE) activities. This research begins with a detailed investigation of the complexity of decision making during RE activities on organizational, product and project levels. Secondly, it presents a conceptual model which describes the RE decision making environment in terms of stakeholders, information requirements, decision types and business objectives. The purpose of this model is to facilitate the development of decision support systems in RE and to help further structure and analyse the decision making process in RE.

  • 115. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Applying Decision-Making Models in Requirements Engineering (2002). Conference paper (Refereed)
  • 116. Aurum, Aybüke
    et al.
    Wohlin, Claes
    The Fundamental Nature of Requirements Engineering Activities as a Decision-Making Process (2003). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 45, no. 14, pp. 945-954. Journal article (Refereed)
    Abstract [en]

    The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.

  • 117. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Petersson, Håkan
    Increasing the Understanding of Effectiveness in Software Inspections Using Published Data Sets (2005). In: Journal of Research and Practice in Information Technology, ISSN 1443-458X, Vol. 37, no. 3, pp. 51-64. Journal article (Refereed)
    Abstract [en]

    Since its inception into software engineering, software inspection has been viewed as a cost-effective way of increasing software quality. Despite this, many questions remain unanswered regarding, for example, ideal team size or cost-effectiveness. This paper addresses some of these questions by performing an analysis using 30 published data sets from empirical experiments of software inspections. The main question concerns determining a suitable team size for software inspections. The effectiveness of different team sizes is also studied. Furthermore, the differences in mean effectiveness between different team sizes are investigated based on the inspection environmental context, document types and reading technique. It is concluded that it is possible to choose a suitable team size based on the effectiveness of inspections. This can be used as a tool to assist in the planning of inspections. A particularly interesting result is that the variation in effectiveness between different teams is considerably higher for certain types of documents than for others. Our findings contain important information for anyone planning, controlling or managing software inspections.
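    A common baseline when reasoning about inspection team size (a hypothetical illustration, not the paper's model) is the nominal-team assumption: if each reviewer independently finds a given defect with some probability, the team misses it only if every reviewer does. The sketch below shows how effectiveness grows, with diminishing returns, as the team grows:

    ```python
    # Nominal-team effectiveness under the independence assumption:
    # E(team) = 1 - prod_i (1 - p_i)
    def nominal_team_effectiveness(individual_rates):
        miss = 1.0
        for p in individual_rates:
            miss *= 1.0 - p  # probability every reviewer misses the defect
        return 1.0 - miss

    # Diminishing returns: each added reviewer helps less than the last.
    for size in range(1, 7):
        print(size, round(nominal_team_effectiveness([0.3] * size), 3))
    ```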

  • 118. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Porter, A.
    Aligning Software Project Decisions: A Case Study (2006). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 16, no. 6, pp. 795-818. Journal article (Refereed)
  • 119. Avritzer, Alberto
    et al.
    Beecham, Sarah
    Britto, Ricardo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Kroll, Josiane
    Menaché, Daniel
    Noll, John
    Paasivaara, Maria
    Extending Survivability Models for Global Software Development with Media Synchronicity Theory (2015). In: Proceedings of the IEEE 10th International Conference on Global Software Engineering, IEEE Communications Society, 2015, pp. 23-32. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new framework to assess the survivability of software projects, accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework an assessment of the impact of inadequate conveyance and convergence in the communication infrastructure selected by the project on the system's ability to recover from project disasters. We propose an analytical model to assess how the project recovers from disasters related to process and communication failures. Our model is based on media synchronicity theory to account for how information exchange impacts recovery. Then, using the proposed model, we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data-gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

  • 120.
    Awan, Nasir Majeed
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Alvi, Adnan Khadem
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Predicting software test effort in iterative development using a dynamic Bayesian network (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    It is important to manage iterative projects in a way that maximizes quality and minimizes cost. To achieve high quality, accurate project estimates are of high importance. It is challenging to predict the effort required to perform test activities in iterative development. If testers put extra effort into testing, the schedule might be delayed; if testers spend less effort, quality could be affected. Currently there is no model for test effort prediction in iterative development that overcomes such challenges. This paper introduces and validates a dynamic Bayesian network to predict test effort in iterative software development. In this research work, the proposed framework is evaluated in a number of ways: first, the framework's behavior is observed by considering different parameters and performing an initial validation; second, the framework is validated by incorporating data from two industrial projects. The accuracy of the results has been verified through different prediction accuracy measurements and statistical tests. The results confirm that the framework has the ability to predict test effort in iterative projects accurately.
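    The core mechanic of a dynamic Bayesian network, carrying a belief forward from one iteration to the next and updating it with each new observation, can be sketched in a few lines. The network below is a toy stand-in; its states, probabilities and observations are invented, not taken from the thesis:

    ```python
    # Hypothetical two-slice sketch: a hidden "test effort level"
    # (low/high) evolves between iterations, and observed defect
    # counts update the belief via Bayes' rule.
    TRANSITION = {"low": {"low": 0.7, "high": 0.3},
                  "high": {"low": 0.2, "high": 0.8}}
    # P(observe "many defects" | effort level)
    EMISSION = {"low": 0.6, "high": 0.2}

    def forward_step(belief, saw_many_defects):
        # Predict: push the belief through the transition model.
        predicted = {s: sum(belief[p] * TRANSITION[p][s] for p in belief)
                     for s in ("low", "high")}
        # Update: weight by the likelihood of the observation.
        like = {s: EMISSION[s] if saw_many_defects else 1 - EMISSION[s]
                for s in predicted}
        unnorm = {s: predicted[s] * like[s] for s in predicted}
        z = sum(unnorm.values())
        return {s: v / z for s, v in unnorm.items()}

    belief = {"low": 0.5, "high": 0.5}
    for obs in (True, True, False):  # defect reports per iteration
        belief = forward_step(belief, obs)
        print(belief)
    ```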

  • 121.
    Awan, Rashid
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Requirements Engineering Process Maturity Model for Market Driven Projects: The REPM-M Model (2005). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Many software projects run over budget or face failures during operations. One big reason for this is that a software company develops the wrong software due to a wrong interpretation of the requirements. Requirements engineering (RE) is a well-known discipline within software engineering that deals with this problem. RE is the process of eliciting, analyzing and specifying requirements so that there won't be any ambiguity between the development company and the customers. Another emerging discipline within requirements engineering is requirements engineering for market-driven projects, which deals with the requirements engineering of a product targeting a mass market. In this thesis, a maturity model is developed which can be used to assess the maturity of the requirements engineering process for market-driven projects. The objective of this model is to provide a quick assessment tool through which a company can learn the strengths and weaknesses of its requirements engineering process.

  • 122.
    Axelsson, Mattias
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Sonesson, Johan
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Business Process Performance Measurement for Rollout Success (2004). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Business process improvement for increased product quality is of continuous importance in the software industry. Quality managers in this sector need effective, hands-on tools for decision-making in engineering projects and for rapidly spotting key improvement areas. Measurement programs are a widespread approach for introducing quality improvement in software processes, yet employing all-embracing state-of-the-art quality assurance models is labor intensive. Unfortunately, these do not primarily focus on measures, revealing a need for an instant and straightforward technique for identifying and defining measures in projects without the resources or need for entire measurement programs. This thesis explores and compares prevailing quality assurance models using measures, rendering the Measurement Discovery Process, constructed from selected parts of the PSM and GQM techniques. The composed process is applied to an industrial project with the given prerequisites, providing a set of measures that are subsequently evaluated. In addition, the application gives a foundation for analysis of the Measurement Discovery Process. The application and analysis of the process show its general applicability to projects with similar constraints, as well as the importance of formal target processes and exhaustive project domain knowledge among measurement implementers. Even though the Measurement Discovery Process is subject to future refinement, it is clearly a step towards rapid delivery of tangible business performance indicators for process improvement.

  • 123. Axelsson, Stefan
    Using Normalized Compression Distance for Classifying File Fragments (2010). Conference paper (Refereed)
    Abstract [en]

    We have applied the generalised and universal distance measure NCD (Normalised Compression Distance) to the problem of determining the types of file fragments by example. A corpus of files that can be redistributed to other researchers in the field was developed, and the NCD algorithm, using k-nearest-neighbour as the classification algorithm, was applied to a random selection of file fragments. The experiment covered circa 2000 fragments from 17 different file types. While the overall accuracy of the n-valued classification only improved the prior probability of the class from approximately 6% to circa 50% overall, the classifier reached accuracies of 85%-100% for the most successful file types.
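    NCD measures how much better two objects compress together than apart: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length. The sketch below is a minimal illustration using zlib and 1-nearest-neighbour; the reference fragments are invented, and the paper's corpus and choice of compressor are not reproduced:

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance approximated with zlib."""
        cx = len(zlib.compress(x))
        cy = len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    def classify(fragment, references):
        """1-nearest-neighbour over (label, bytes) reference fragments."""
        return min(references, key=lambda r: ncd(fragment, r[1]))[0]

    refs = [("text", b"the quick brown fox jumps over the lazy dog " * 20),
            ("binary", bytes(range(256)) * 4)]
    print(classify(b"a lazy dog and a quick fox " * 20, refs))  # -> "text"
    ```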

  • 124.
    Axelsson, Stefan
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Bajwa, Kamran Ali
    Srikanth, Mandhapati Venkata
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    File Fragment Analysis Using Normalized Compression Distance (2013). Conference paper (Refereed)
    Abstract [en]

    The first step when recovering deleted files using file carving is to identify the file type of a block, also called file fragment analysis. Several researchers have demonstrated the applicability of Kolmogorov complexity methods, such as the normalized compression distance (NCD), to this problem. NCD methods compare the results of compressing a pair of data blocks with the compressed concatenation of the pair. One parameter that is required is the compression algorithm to be used. Prior research has identified the NCD compressor properties that yield good performance. However, no studies have focused on their applicability to file fragment analysis. This paper describes the results of experiments on a large corpus of files and file types with different block lengths. The experimental results demonstrate that, in the case of file fragment analysis, compressors with the desired properties do not perform statistically better than compressors with less computational complexity.

  • 125.
    Ayalew, Tigist
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Kidane, Tigist
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Identification and Evaluation of Security Activities in Agile Projects: A Systematic Literature Review and Survey Study (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Today's software development industry requires high-speed software delivery from the development team. In order to achieve this, organizations make the transition from their conventional software development methods to agile development methods while preserving customer satisfaction. Even though this approach is becoming a popular development method, from a security point of view it has some disadvantages, because the method imposes several constraints, such as the lack of a complete overview of the product, a higher development pace and a lack of documentation. Although a security engineering (SE) process is necessary in order to build secure software, no SE process has been developed specifically for agile models. As a result, SE processes that are commonly used in the waterfall model are being used in agile models. However, there is a clash or disparity between the established waterfall SE processes and the ideas and methodologies proposed by the agile manifesto. This means that, while agile models work with short development increments that adapt easily to change, the existing SE processes work in a plan-driven development setting and try to reduce the defects found in a program before the occurrence of threats, through heavy and inflexible processes. This study aims at bridging the gap between the agile model and security by providing an insightful understanding of the SE processes that are used in the current agile industry. Objectives: The objectives of this thesis are to identify and evaluate security activities from high-profile waterfall SE processes that are used in the current agile industry, and then to suggest the security activities that are most compatible with and beneficial to the agile model, based on the study results. Methods: The study involved two approaches: a systematic literature review and a survey. The systematic literature review has two main aims: the first is to gain a comprehensive understanding of security in the agile process model; the second is to identify high-profile SE processes that are commonly used in the waterfall model. Moreover, it helped to compare the thesis results with other work previously done in the area. A survey was conducted to identify and evaluate waterfall security activities that are used in current agile industry projects. The evaluation criteria were based on the integration cost of a security activity and the benefit it provides to agile projects. Results: The results of the systematic review are organized in tabular form for clear understanding and easy analysis. High-profile SE processes and their activities were obtained. These results were used as an input for the survey study. From the survey study, security activities that are used in the current agile industry were identified. Furthermore, the identified security activities were evaluated in terms of benefit and cost. As a result, the security activities that are most compatible and beneficial were identified for the agile process model. Conclusions: To develop secure software in the agile model, there is a need for an SE process or practice that can address security issues in every phase of the agile project lifecycle. This can be done either by integrating the most compatible and beneficial security activities from waterfall SE processes into the agile process or by creating a new SE process. In this thesis, it was found that, of the investigated high-profile waterfall SE processes, none was fully compatible with and beneficial to agile projects.

  • 126. Azhar, Damir
    et al.
    Riddle, Patricia
    Mendes, Emilia
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Mittas, Nikolaos
    Angelis, Lefteris
    Using ensembles for web effort estimation (2013). Conference paper (Refereed)
    Abstract [en]

    Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared by effort estimation in the general software estimation domain. A previous study in this domain has shown that ensembles of estimation techniques can be used to address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques are similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques, two approaches were investigated: the first involved replicating the methodology used in the previous study, while the second used the Scott-Knott algorithm. Both approaches were applied to the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data.
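    The mechanics of an ensemble estimator are simple: fit several solo techniques and combine their predictions. The sketch below uses toy project data and an unweighted mean combination, both assumed for illustration; the paper selects its superior solo techniques via replication and Scott-Knott before combining:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Toy project data: [pages, features] -> effort (person-hours).
    X = np.array([[5, 2], [10, 4], [20, 6], [40, 10], [60, 12]])
    y = np.array([80, 150, 300, 620, 900])

    solos = [LinearRegression(), KNeighborsRegressor(n_neighbors=2),
             DecisionTreeRegressor(random_state=0)]
    for model in solos:
        model.fit(X, y)

    new_project = np.array([[30, 8]])
    solo_estimates = [m.predict(new_project)[0] for m in solos]
    ensemble_estimate = np.mean(solo_estimates)  # simple mean ensemble
    print(solo_estimates, ensemble_estimate)
    ```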

  • 127.
    Azhar, Muhammad Saad Bin
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Aslam, Ammad
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Multiple Coordinated Information Visualization Techniques in Control Room Environment (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Presenting large amounts of multivariate data is not a simple problem. When there are multiple correlated variables involved, it becomes difficult to comprehend the data using traditional means. Information visualization techniques provide an interactive way to present and analyze such data. This thesis was carried out at ABB Corporate Research, Västerås, Sweden. The use of Parallel Coordinates and Multiple Coordinated Views was suggested to realize interactive reporting and trending of multivariate data for ABB's Network Manager SCADA system. A prototype was developed, and an empirical study was conducted to evaluate the suggested design and test it for usability from an actual industry perspective. With the help of this prototype and the evaluations carried out, we are able to achieve stronger results regarding the effectiveness and efficiency of the visualization techniques used. The results confirm that such interfaces are more effective, efficient and intuitive for filtering and analyzing multivariate data.
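    Parallel coordinates draw each multivariate observation as a polyline across one vertical axis per variable, so correlated variables show up as bundled line patterns. Here is a minimal sketch with pandas and matplotlib; the variables and values are invented and this is not the ABB prototype:

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    # Toy multivariate process data: each row is one observation,
    # "state" is the category used to colour the lines.
    df = pd.DataFrame({
        "voltage": [0.9, 1.0, 1.1, 0.4, 0.5],
        "current": [0.8, 0.9, 1.0, 0.3, 0.2],
        "temperature": [0.5, 0.6, 0.4, 0.9, 1.0],
        "state": ["normal", "normal", "normal", "alarm", "alarm"],
    })

    parallel_coordinates(df, "state", colormap="coolwarm")
    plt.show()  # correlated variables appear as bundled line patterns
    ```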

  • 128.
    AZIZ, YASSAR
    et al.
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, Avdelningen för matematik och naturvetenskap.
    ASLAM, MUHAMMAD NAEEM
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, Avdelningen för matematik och naturvetenskap.
    Traffic Engineering with Multi-Protocol Label Switching, Performance Comparison with IP networks (2008). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Traffic engineering (TE) deals with the geometric design planning and traffic operation of networks, network devices and the relationships between routers for the transportation of data. TE is the aspect of network engineering that concentrates on performance optimization of operational networks. It involves techniques and the application of knowledge to attain performance objectives, including the movement of data through the network, reliability, planning of network capacity and efficient use of network resources. This thesis addresses the problems of traffic engineering and suggests a solution using the concept of Multi-Protocol Label Switching (MPLS). MPLS is a modern technique for forwarding network data that broadens routing with path control and packet forwarding. In this thesis MPLS is evaluated on the basis of its performance and efficiency in sending data from source to destination. A Matlab-based simulation tool was developed to compare MPLS with an IP network in a simulated environment. The results show the performance of the MPLS network in comparison with the IP network.

  • 129. Baca, Dejan
    Automated static code analysis: A tool for early vulnerability detection (2009). Licentiate thesis, comprising articles (Other academic)
    Abstract [en]

    Software vulnerabilities are added into programs during their development. Architectural flaws are introduced during planning and design, while implementation faults are created during coding. Penetration testing is often used to detect these vulnerabilities. This approach is expensive because it is performed late in development, and any correction would increase lead time. An alternative would be to detect and correct vulnerabilities in the phase of development where they are the least expensive to detect and correct. Source code audits have often been suggested and used to detect implementation vulnerabilities. However, manual audits are time consuming and require extended expertise to be efficient. A static code analysis tool could achieve the same results as a manual audit but at a fraction of the time. Through a set of case studies and experiments at Ericsson AB, this thesis investigates the technical capabilities and limitations of using a static analysis tool as an early vulnerability detector. The investigation is extended to the human factor by examining how developers interact with and use the static analysis tool. The contributions of this thesis include the identification of the tool's capabilities, so that further security improvements can focus on other types of vulnerabilities. By using static analysis early in development, possible cost-saving measures are identified. Additionally, the thesis presents the limitations of static code analysis, the most important being the incorrect warnings that are reported by static analysis tools. In addition, a development process overhead was deemed necessary to successfully use static analysis in an industry setting.

  • 130.
    Baca, Dejan
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Developing Secure Software: in an Agile Process (2012). Doctoral thesis, comprising articles (Other academic)
    Abstract [en]

    Background: Software developers are facing increased pressure to lower development time, release new software versions more frequently to customers and adapt to a faster market. This new environment forces developers and companies to move from a plan-based waterfall development process to a flexible agile process. By minimizing pre-development planning and instead increasing communication between customers and developers, the agile process tries to create a new, more flexible way of working. This new way of working allows developers to focus their efforts on the features that customers want. With increased connectivity and faster feature releases, the security of the software product is stressed. To develop secure software, many companies use security engineering processes that are plan-heavy and inflexible. These two approaches are each other's opposites, and they directly contradict each other. Objective: The objective of the thesis is to evaluate how to develop secure software in an agile process: in particular, which existing best practices can be incorporated into an agile project and still provide the same benefit as if the project were using a waterfall process, and how the best practices can be incorporated and adapted to fit the process while the improvement is still measured. Some security engineering concepts are useful, but the best practice is not agile compatible and would require extensive adaptation to integrate with an agile project. Method: The primary research method used throughout the thesis is case studies conducted in a real industry setting. As secondary methods for data collection, a variety of approaches have been used, such as semi-structured interviews, workshops, study of literature, and use of historical data from the industry. Results: The security engineering best practices were investigated through a series of case studies. The basic compatibility of agile and security engineering was assessed in the literature, by developers and in practical studies. The security engineering best practices were grouped based on their purpose and their compatibility with the agile process. One well-known and popular best practice, automated static code analysis, was thoroughly investigated for its usefulness, deployment and risks of use as part of the process. For the risk analysis practices, a novel approach was introduced and improved. As such, a way of adapting existing practices to agile is proposed. Conclusion: With regard to agile and security engineering, we did not find that any of the investigated processes was agile compatible. Agile is reaction-driven and adapts to change, while security engineering processes are proactive and try to prevent threats before they happen. To develop secure software in an agile process, developers should adopt and adapt key concepts from security engineering. These changes will affect the flexibility of the agile process, but they are a necessity if developers want the same software security state that security engineering processes can provide.

  • 131. Baca, Dejan
    et al.
    Carlsson, Bengt
    Agile development with security engineering activities (2011). Conference paper (Refereed)
    Abstract [en]

    Agile software development has been used by industry to create a more flexible and lean software development process, i.e. making it possible to develop software at a faster rate and with more agility during development. There are, however, concerns that the higher development pace and lack of documentation are creating less secure software. We have therefore looked at three known security engineering processes, Microsoft SDL, Cigital Touchpoints and Common Criteria, and identified what specific security activities they perform. We then compared these activities with an agile development process that is used in industry. Developers from a large telecommunication manufacturer were interviewed to learn their impressions of using these security activities in an agile development process. We produced a security-enhanced agile development process that we present in this paper. This new agile process uses activities from already established security engineering processes that provide the benefits the developers wanted without hindering or obstructing the agile process in a significant way.

  • 132. Baca, Dejan
    et al.
    Carlsson, Bengt
    Lundberg, Lars
    Evaluating the Cost Reduction of Static Code Analysis for Software Security (2008). Conference paper (Refereed)
    Abstract [en]

    Automated static code analysis is an efficient technique to increase the quality of software during early development. This paper presents a case study in which mature software with known vulnerabilities is subjected to a static analysis tool. The value of the tool is estimated based on reported failures from customers. An average of 17% cost savings would have been possible if the static analysis tool had been used. The tool also had a 30% success rate in detecting known vulnerabilities and at the same time found 59 new vulnerabilities in the three examined products.

  • 133.
    Baca, Dejan
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Carlsson, Bengt
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Petersen, Kai
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Improving software security with static automated code analysis in an industry setting (2013). In: Software: Practice and Experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 43, no. 3, pp. 259-279. Journal article (Refereed)
    Abstract [en]

    Software security can be improved by identifying and correcting vulnerabilities. In order to reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach for early detection. So far, only a few empirical studies have been conducted in an industrial context to evaluate static automated code analysis. A case study was conducted to evaluate static code analysis in industry, focusing on defect detection capability, deployment, and usage of static automated code analysis with a focus on software security. We identified that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, as did the developers' perception of the tool's merit. Classifying the warnings from the tool was harder for the developers than correcting them. The correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose to use multiple persons (at least two) in classifying a warning. The same goes for making the decision of how to act based on the warning.

  • 134.
    Baca, Dejan
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Petersen, Kai
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Countermeasure graphs for software security risk assessment: An action research (2013). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 86, no. 9, pp. 2411-2428. Journal article (Refereed)
    Abstract [en]

    Software security risk analysis is an important part of improving software quality. In previous research we proposed countermeasure graphs (CGs), an approach to conduct risk analysis, combining the ideas of different risk analysis approaches. The approach was designed for reuse and easy evolvability to support agile software development. CGs have not been evaluated in industry practice in agile software development. In this research we evaluate the ability of CGs to support practitioners in identifying the most critical threats and countermeasures. The research method used is participatory action research where CGs were evaluated in a series of risk analyses on four different telecom products. With Peltier (used prior to the use of CGs at the company) the practitioners identified attacks with low to medium risk level. CGs allowed practitioners to identify more serious risks (in the first iteration 1 serious threat, 5 high risk threats, and 11 medium threats). The need for tool support was identified very early, tool support allowed the practitioners to play through scenarios of which countermeasures to implement, and supported reuse. The results indicate that CGs support practitioners in identifying high risk security threats, work well in an agile software development context, and are cost-effective.
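    The prioritization idea behind such risk-analysis graphs can be illustrated with a toy model; the scores, names and the scoring rule below are hypothetical, not the paper's notation or exact formulation. Threats carry risk values, edges record which countermeasure mitigates which threat, and countermeasures are ranked by the total risk they cover:

    ```python
    # Hypothetical toy model: rank countermeasures by covered risk.
    threat_risk = {"sql_injection": 9, "weak_auth": 7, "log_tampering": 4}
    mitigates = {
        "input_validation": ["sql_injection"],
        "two_factor_auth": ["weak_auth"],
        "append_only_logs": ["log_tampering"],
        "security_review": ["sql_injection", "weak_auth", "log_tampering"],
    }

    def rank_countermeasures(threat_risk, mitigates):
        scores = {cm: sum(threat_risk[t] for t in threats)
                  for cm, threats in mitigates.items()}
        return sorted(scores.items(), key=lambda kv: -kv[1])

    for cm, score in rank_countermeasures(threat_risk, mitigates):
        print(f"{cm}: {score}")
    ```

    Tool support of the kind the paper describes lets practitioners re-run such a ranking as they play through scenarios of which countermeasures to implement.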

  • 135. Baca, Dejan
    et al.
    Petersen, Kai
    Prioritizing Countermeasures through the Countermeasure Method for Software Security (CM-Sec) (2010). Conference paper (Refereed)
    Abstract [en]

    Software security is an important quality aspect of a software system. Therefore, it is important to integrate software security touch points throughout the development life-cycle. So far, the focus of touch points in the early phases has been on the identification of threats and attacks. In this paper we propose a novel method focusing on the end product by prioritizing countermeasures. The method provides an extension to attack trees and a process for identification and prioritization of countermeasures. The approach has been applied on an open-source application and showed that countermeasures could be identified. Furthermore, an analysis of the effectiveness and cost-efficiency of the countermeasures could be provided.

  • 136. Baca, Dejan
    et al.
    Petersen, Kai
    Carlsson, Bengt
    Lundberg, Lars
    Static Code Analysis to Detect Software Security Vulnerabilities: Does Experience Matter? (2009). Conference paper (Refereed)
    Abstract [en]

    Code reviews with static analysis tools are today recommended by several security development processes. Developers are expected to use the tools' output to detect the security threats they themselves have introduced in the source code. This approach assumes that all developers can correctly identify a warning from a static analysis tool (SAT) as a security threat that needs to be corrected. We have conducted an industry experiment with a state of the art static analysis tool and real vulnerabilities. We have found that average developers do not correctly identify the security warnings and only developers with specific experiences are better than chance in detecting the security vulnerabilities. Specific SAT experience more than doubled the number of correct answers and a combination of security experience and SAT experience almost tripled the number of correct security answers.

  • 137. Badampudi, Deepika
    Decision-making support for choosing among different component origins (2018). Doctoral thesis, comprising articles (Other academic)
  • 138.
    Badampudi, Deepika
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Factors Affecting Efficiency of Agile Planning: A Case Study (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Planning in software projects is a difficult problem due to the uncertainty associated with it. There are many factors that cause difficulty in formulating a plan, and few factors that influence the efficiency of planning have been identified in previous studies. The literature focuses only on technical aspects, such as requirements selection and estimation, in order to plan a release or iteration. Objectives. The objective of this study is to identify factors that affect planning efficiency. The context in which the objective is achieved is large-scale complex projects that are distributed across multiple teams in multiple global sites. The motivation for selecting the large-scale context is that most of the existing release planning approaches discussed in the literature were investigated in small-scale projects; this context therefore allows studying the planning process in large-scale industry. Methods. A case study was conducted at Siemens' Development Centre in Bangalore, India. A total of 15 interviews were conducted to investigate the planning process adopted by Siemens. To achieve triangulation, process documents such as release planning documents were studied and direct observation of a planning meeting was performed; therefore, multiple sources were used to collect evidence. Results. The identified challenges are grouped into technical and non-technical categories. In total, 9 technical factors and 11 non-technical factors were identified. The identified factors are also classified based on the context in which they affect the planning. In addition, 6 effects of the factors were identified, and improvements perceived by the participants are discussed in this study.

  • 139.
    Badampudi, Deepika
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Reporting Ethics Considerations in Software Engineering Publications2017Inngår i: 11TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON EMPIRICAL SOFTWARE ENGINEERING AND MEASUREMENT (ESEM 2017), IEEE , 2017, s. 205-210Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Ethical guidelines of software engineering journals require authors to provide statements related to conflicts of interest and the process of obtaining consent (if human subjects are involved). The objective of this study is to review the reporting of ethical considerations in Empirical Software Engineering - An International Journal. The results indicate that two out of seven studies reported some ethical information, however not explicitly. The ethical discussions focused on anonymity and confidentiality. Ethical aspects such as competence, comprehensibility and vulnerability of the subjects were not discussed in any of the papers reviewed in this study. It is important not only to state that consent was obtained, but also to report the procedure of obtaining consent, in order to improve accountability and trust.

  • 140.
    Badampudi, Deepika
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards decision-making to choose among different component origins2016Licentiatavhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays and cost overruns.

    Objective: The overall goal of this thesis is to improve the software development/building process so that it delivers timely, high-quality and cost-efficient solutions. The objective is to select the origin of the components (in-house, outsourcing, components off-the-shelf (COTS) or open source software (OSS)) that facilitates this improvement. A system can be built of components from one origin or from a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of the component and to optimize the development.

    Method: Making decisions among different origins requires investigating the component origins themselves. We conducted a case study to explore the existing challenges in software development. The next step was to identify factors that influence the choice among component origins through a systematic literature review, using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from the literature into practice.

    Results: The results of this thesis indicate that contextual factors of software-intensive systems, such as domain regulations, hinder software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS, and OSS) are being used. Several factors, such as time, cost and license implications, influence the selection of component origins. Solutions have been proposed to support the decision-making; however, these solutions consider only a subset of the factors identified in the literature.

    Conclusions: Each component origin has advantages and disadvantages, and depending on the scenario one origin is more suitable than the others. Investigating the different scenarios and the suitability of the component origins is recognized as future work of this thesis, together with providing models to support the decision-making process.

  • 141.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Britto, Ricardo
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Unterkalmsteiner, Michael
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Modern code reviews: Preliminary results of a systematic mapping study2019Inngår i: PROCEEDINGS OF EASE 2019 - EVALUATION AND ASSESSMENT IN SOFTWARE ENGINEERING, Association for Computing Machinery , 2019, s. 340-345Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Reviewing source code is a common practice in modern, collaborative development environments. In the past few years, research on modern code reviews has gained interest among practitioners and researchers. The objective of our investigation is to observe the evolution of research on modern code reviews, identify research gaps, and serve as a basis for future research. We use a systematic mapping approach to identify and classify 177 research papers. As a preliminary result of our investigation, we present in this paper a classification scheme of the main contributions of modern code review research between 2005 and 2018. © 2019 Association for Computing Machinery.

  • 142.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Software Component Decision-making: In-house, OSS, COTS or Outsourcing: A Systematic Literature Review2016Inngår i: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, s. 105-124Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Component-based software systems require decisions on component origins for acquiring components. A component origin is an alternative for where to obtain a component. Objective: To identify factors that could influence the decision to choose among different component origins, and solutions for decision-making (for example, optimization) in the literature. Method: A systematic review of peer-reviewed literature has been conducted. Results: In total we included 24 primary studies. The component origin comparisons mainly focused on in-house vs. COTS and COTS vs. OSS. We identified 11 factors affecting or influencing the decision to select a component origin. When component origins were compared, there was little evidence on the relative (either positive or negative) effect of a component origin on the factor. Most of the solutions were proposed for in-house vs. COTS selection, and time, cost and reliability were the factors most considered in the solutions. Optimization models were the technique most commonly proposed in the solutions. Conclusion: The topic of choosing component origins is a green field for research, in great need of empirical comparisons between the component origins, as well as of how to decide between different combinations of them.
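
    As a hedged illustration of the kind of decision support the review surveys, the sketch below implements a simple weighted-factor comparison over the time, cost and reliability factors named above; the weights and scores are invented and not taken from any primary study.

    ```python
    # Hypothetical weighted-scoring sketch of a component-origin decision,
    # in the spirit of the optimization models the review identifies.
    # Factors, weights and scores are invented for illustration.
    FACTORS = {"time": 0.4, "cost": 0.35, "reliability": 0.25}  # weights sum to 1

    # Score each origin per factor on a 1..5 scale (higher is better).
    origins = {
        "in-house":  {"time": 2, "cost": 2, "reliability": 5},
        "COTS":      {"time": 4, "cost": 3, "reliability": 4},
        "OSS":       {"time": 4, "cost": 5, "reliability": 3},
        "outsource": {"time": 3, "cost": 3, "reliability": 3},
    }

    def weighted_score(scores):
        return sum(FACTORS[f] * s for f, s in scores.items())

    for origin, scores in origins.items():
        print(f"{origin:10s} {weighted_score(scores):.2f}")
    print("preferred origin:", max(origins, key=lambda o: weighted_score(origins[o])))
    ```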

  • 143.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Fricker, Samuel
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Moreno, Ana
    Perspectives on Productivity and Delays in Large-Scale Agile Projects2013Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Many large and distributed companies run agile projects in development environments that are inconsistent with the original agile ideas. Problems that result from these inconsistencies can affect the productivity of development projects and the timeliness of releases. To be effective in such contexts, the agile ideas need to be adapted. We take an inductive approach for reaching this aim by basing the design of the development process on observations of how context, practices, challenges, and impacts interact. This paper reports the results of an interview study of five agile development projects in an environment that was unfavorable for agile principles. Grounded theory was used to identify the challenges of these projects and how these challenges affected productivity and delays according to the involved project roles. Productivity and delay-influencing factors were discovered that related to requirements creation and use, collaboration, knowledge management, and the application domain. The practitioners’ explanations about the factors' impacts are, on one hand, a rich empirical source for avoiding and mitigating productivity and delay problems and, on the other hand, a good starting point for further research on flexible large-scale development.

  • 144.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wnuk, Krzysztof
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Franke, Ulrik
    Swedish Institute of Computer Science, SWE.
    Šmite, Darja
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Cicchetti, Antonio
    Mälardalens högskola, SWE.
    A decision-making process-line for selection of software asset origins and components2018Inngår i: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, s. 88-104Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include: in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution from our previous work was validated in two cases in the first two companies. In this validation, no activity in the proposed set was perceived to be missing, although not all activities were conducted, and those that were conducted were not executed in a specific order. The refinement of the solution into a process-line approach therefore increases flexibility and better captures the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company. © 2017 Elsevier Inc.
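
    As a rough illustration of the process-line idea (the activity names are our invention, not the paper's actual set), the sketch below treats the process-line as a superset of optional activities from which each organization derives its own subset, in its own order.

    ```python
    # Hypothetical sketch of a process-line: a superset of optional
    # decision-making activities from which each organization composes
    # its own process. Activity names are invented for illustration.
    PROCESS_LINE = {
        "identify candidate origins",
        "define evaluation criteria",
        "estimate cost and effort",
        "assess legal/license implications",
        "prototype and evaluate",
        "negotiate and decide",
    }

    def build_process(activities):
        """Derive a company-specific process: any subset of the line, any order."""
        unknown = [a for a in activities if a not in PROCESS_LINE]
        if unknown:
            raise ValueError(f"not in the process-line: {unknown}")
        return list(activities)

    # One company's instantiation: a subset, in its own order.
    print(build_process(["identify candidate origins",
                         "estimate cost and effort",
                         "negotiate and decide"]))
    ```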

    Fulltext available from 2020-01-01 12:19
  • 145.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Bayesian Synthesis for Knowledge Translation in Software Engineering: Method and Illustration2016Inngår i: 2016 42th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2016Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Systematic literature reviews in software engineering are necessary to synthesize evidence from multiple studies to provide knowledge and decision support. However, synthesis methods are underutilized in software engineering research. Moreover, translation of synthesized data (the outcomes of a systematic review) into recommendations for practitioners is seldom practiced. The objective of this paper is to introduce the use of Bayesian synthesis in software engineering research, in particular to translate research evidence into practice by providing the possibility to combine contextualized expert opinions with research evidence. We adopted the Bayesian synthesis method from health research and customized it for software engineering research. The proposed method is described and illustrated using an example from the literature. Bayesian synthesis provides a systematic approach to incorporating subjective opinions in the synthesis process, thereby making the synthesis results more suitable to the context in which they will be applied and facilitating the interpretation and translation of knowledge into action. None of the synthesis methods used in software engineering allows for the integration of subjective opinions; hence, Bayesian synthesis can add a new dimension to the synthesis process in software engineering research.
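
    The following is a minimal numeric sketch of the general idea, not the paper's actual procedure: expert opinion enters as a prior, study results as the likelihood, and the posterior is the synthesized, context-adjusted estimate (a standard normal-normal conjugate update).

    ```python
    # Minimal sketch of Bayesian synthesis (not the paper's actual
    # procedure): expert opinion as a prior, study results as the
    # likelihood, posterior as the synthesized estimate.
    # Normal-normal conjugate update with known observation variance.

    def bayes_update(prior_mean, prior_var, obs_mean, obs_var, n):
        """Posterior for a normal mean given n observations of known variance."""
        prior_prec = 1.0 / prior_var
        data_prec = n / obs_var
        post_var = 1.0 / (prior_prec + data_prec)
        post_mean = post_var * (prior_prec * prior_mean + data_prec * obs_mean)
        return post_mean, post_var

    # Invented numbers: experts expect an effect around 0.3 with high
    # uncertainty; 12 studies report a mean effect of 0.5, variance 0.04.
    mean, var = bayes_update(prior_mean=0.3, prior_var=0.1,
                             obs_mean=0.5, obs_var=0.04, n=12)
    print(f"synthesized effect: {mean:.3f} (variance {var:.4f})")
    ```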

  • 146.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    An Evaluation of Knowledge Translation in Software Engineering2019Inngår i: International Symposium on Empirical Software Engineering and Measurement, IEEE Computer Society , 2019Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Knowledge translation is defined, in health sciences, as 'the exchange, synthesis and ethically sound application of research results in practice'. The objective of this paper is to implement and conduct a feasibility evaluation of a knowledge translation framework in software engineering. We evaluated the outcome of the knowledge translation framework in an industrial setting, along with the effectiveness of the interventions undertaken as part of knowledge translation in a multi-case study. The results of the evaluation suggest that the practitioners perceive the knowledge translation framework to be valuable and useful. In conclusion, this paper contributes towards the reporting of a systematic implementation of knowledge translation and evaluating its use in software engineering. © 2019 IEEE.

  • 147.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Contextualizing research evidence through knowledge translation in software engineering2019Inngår i: PROCEEDINGS OF EASE 2019 - EVALUATION AND ASSESSMENT IN SOFTWARE ENGINEERING, Association for Computing Machinery , 2019, s. 306-311Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Usage of software engineering research in industrial practice is a well-known challenge. Synthesis of knowledge from multiple research studies is needed to provide evidence-based decision support for industry. The objective of this paper is to present a vision of how a knowledge translation framework may look in software engineering research, in particular how to translate research evidence into practice by combining contextualized expert opinions with research evidence. We adopted the knowledge translation framework from health care research and adapted and combined it with a Bayesian synthesis method. The framework provided in this paper includes a description of each step of knowledge translation in software engineering. Knowledge translation using Bayesian synthesis intends to provide a systematic approach towards contextualized, collaborative and consensus-driven application of research results. In conclusion, this paper contributes towards the application of knowledge translation in software engineering through the presented framework. © 2019 Association for Computing Machinery.

  • 148.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Guidelines for Knowledge Translation in Software EngineeringInngår i: Artikkel i tidsskrift (Fagfellevurdert)
  • 149.
    Badampudi, Deepika
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Wohlin, Claes
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Experiences from Using Snowballing and Database Searches in Systematic Literature Studies2015Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Background: Systematic literature studies are common in software engineering. There are two main ways of conducting the searches for these types of studies: snowballing and database searches. In snowballing, the reference lists (backward snowballing - BSB) and citations (forward snowballing - FSB) of relevant papers are reviewed to identify new papers, whereas in a database search, different databases are searched using predefined search strings. Objective: Snowballing has not been used as extensively as database search, hence it is important to evaluate its efficiency and reliability when used as a search strategy in literature studies, and to compare it to database searches. Method: In this paper, we applied snowballing in a literature study and reflected on the outcome. We also compared database search with backward and forward snowballing. Database search and snowballing were conducted independently by different researchers, and the searches of our literature study were compared with respect to the efficiency and reliability of the findings. Results: Out of the total number of papers found, snowballing identified 83% of the papers, compared to 46% for the database search. Snowballing failed to identify a few relevant papers, which potentially could have been addressed with a more comprehensive start set. Conclusion: The efficiency of snowballing is comparable to database search. It can potentially be more reliable than a database search; however, the reliability is highly dependent on the creation of a suitable start set.
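
    To illustrate the two search strategies, below is a small sketch (the citation graph and relevance check are invented) of snowballing as an iteration over references (BSB) and citations (FSB) starting from a start set.

    ```python
    # Illustrative sketch of snowballing over a citation graph: from a
    # start set, backward snowballing follows references and forward
    # snowballing follows citations, until no new relevant papers appear.

    # paper -> (references it cites, papers citing it); invented data
    GRAPH = {
        "P1": ({"P2", "P3"}, {"P4"}),
        "P2": (set(), {"P1"}),
        "P3": (set(), {"P1", "P5"}),
        "P4": ({"P1"}, set()),
        "P5": ({"P3"}, set()),
    }

    def snowball(start_set, is_relevant):
        included, frontier = set(), set(start_set)
        while frontier:
            paper = frontier.pop()
            if paper in included or not is_relevant(paper):
                continue
            included.add(paper)
            refs, cites = GRAPH.get(paper, (set(), set()))
            frontier |= (refs | cites) - included  # BSB and FSB in one step
        return included

    print(snowball({"P1"}, is_relevant=lambda p: True))
    ```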

  • 150.
    Bahrieh, Sara
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap.
    Sensor Central / Automotive Systems2013Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [en]

    How can objects detected by different devices be displayed in one coordinate system? Nowadays most vehicles are equipped with front and back sensors to assist the driver. Companies who provide this technology need an application that enables easy fusion of data from these sensors and recording of the process. Besides sensor design, the programming of the sensors is an important aspect. BASELABS Connect offers a solution in a user-friendly way. Creating a Sensor Central component for BASELABS Connect is the main goal of this thesis. Sensor Central requires six position variables for each sensor in order to map the objects from all sensors into one common coordinate system. In this thesis, such a component was created: mounted between the sensors and the charting component, it converts object locations from the individual sensor positions into one coordinate system and is usable from other vehicles too.
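
    The underlying geometry can be sketched as follows, assuming the six position variables are the sensor's translation (x, y, z) and orientation (roll, pitch, yaw); this is our illustration and does not show BASELABS Connect's actual API.

    ```python
    # Sketch of a sensor-to-vehicle frame transform, assuming six pose
    # variables per sensor: translation x, y, z and roll, pitch, yaw.
    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        """Z-Y-X (yaw-pitch-roll) rotation from sensor frame to vehicle frame."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return rz @ ry @ rx

    def sensor_to_vehicle(obj_xyz, mount_xyz, roll, pitch, yaw):
        """Map an object detected in a sensor frame into the vehicle frame."""
        return rotation_matrix(roll, pitch, yaw) @ np.asarray(obj_xyz) \
               + np.asarray(mount_xyz)

    # Object 10 m ahead of a front sensor mounted 3.8 m forward, yawed 5 deg.
    print(sensor_to_vehicle([10, 0, 0], [3.8, 0, 0.5],
                            roll=0.0, pitch=0.0, yaw=np.radians(5)))
    ```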
