Results 51 - 100 of 1861
  • 51. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    HOMEBOTS: Intelligent Decentralized Services for Energy Management (1996). Conference paper (Refereed)
  • 52.
    Akkineni, Srinivasu
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    The impact of RE process factors and organizational factors during alignment between RE and V&V: Systematic Literature Review and Survey (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Requirements engineering (RE) and verification and validation (V&V) need to be integrated to assure the successful development of a software project. Engaging both competences in the early stages of a project helps the product meet customer expectations regarding quality and functionality; this quality can be achieved by aligning RE and V&V. Organizations follow various practices (concerning requirements, verification, validation, control, tooling, etc.) to achieve alignment and to address the challenges faced during alignment between RE and V&V. However, studies are still needed to understand the alignment practices, challenges, and factors that enable successful alignment between RE and V&V.

    Objectives: In this study, an exploratory investigation is carried out to understand the impact of RE process factors and organizational factors on the alignment between RE and V&V. The main objectives of this study are:

    1. To find the list of RE practices that facilitate alignment between RE and V&V.
    2. To categorize RE practices with respect to their requirement phases.
    3. To find the list of RE process and organizational factors that influence alignment between RE and V&V, along with their impact.
    4. To identify the challenges that are faced during the alignment between RE and V&V.
    5. To obtain the list of challenges that are addressed by RE practices during the alignment between RE and V&V.

    Methods: In this study, a systematic literature review (SLR) was conducted using a snowballing procedure to identify relevant information about RE practices, challenges, RE process factors, and organizational factors. The studies were retrieved from the Engineering Village database, and rigor and relevance analysis was performed to assess their quality. Further, a questionnaire for an industrial survey was prepared from the gathered literature and distributed to practitioners from the software industry in order to collect empirical information for this study. The data obtained from the industrial survey were then analyzed using statistical analysis and the chi-square significance test.

    Results: 20 studies relevant to this study were identified through the SLR. From the analysis of these studies, lists of RE process factors, organizational factors, challenges, and RE practices during alignment between RE and V&V were gathered. An industrial survey based on the obtained literature was then conducted, which received 48 responses. The survey respondents confirmed that RE process factors and organizational factors impact the alignment between RE and V&V. Moreover, this study identifies additional RE process factors and organizational factors relevant to the alignment, along with their impact. Another contribution is addressing, through RE practices, challenges that were unaddressed in the literature. Additionally, the categorization of RE practices with respect to their requirement phases was validated.

    Conclusions: The results obtained from this study will help practitioners gain more insight into the alignment between RE and V&V. This study identified the impact of RE process factors and organizational factors on the alignment between RE and V&V, along with the importance of the challenges faced during the alignment. It also addressed, through RE practices, challenges that were unaddressed in the literature. The survey respondents believe that many RE process and organizational factors have a negative impact on the alignment between RE and V&V depending on the size of the organization. In addition, the results for applying RE practices at different requirement phases were validated through the survey. Practitioners can identify the benefits from this research, and researchers can extend this study to the remaining alignment practices.
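The chi-square significance test named in the Methods of this entry can be illustrated with a small, self-contained sketch. The contingency table below is hypothetical (the thesis's actual survey categories and counts are not reproduced here); only the 48-response total mirrors the abstract.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square statistic and p-value for a 2x2 contingency
    table of observed counts, table = [[a, b], [c, d]]."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    # With 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(x / 2)).
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Hypothetical cross-tabulation of 48 survey responses:
# organization size (small/large) vs. reported negative impact (yes/no).
chi2, p = chi_square_2x2([[20, 8], [6, 14]])
# chi2 is approximately 8.07, p is approximately 0.005,
# i.e. significant at the 0.05 level for this made-up data.
```

For a 2x2 table the closed-form p-value via `erfc` avoids external dependencies; for larger tables one would normally reach for a statistics library instead.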

  • 53.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A study of value in agile software development organizations (2017). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, pp. 271-288. Article in journal (Refereed)
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.

  • 54.
    Alahyari, Hiva
    et al.
    Chalmers, SWE.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Berntsson Svensson, Richard
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    An exploratory study of waste in software development organizations using agile or lean approaches: A multiple case study at 14 organizations (2019). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 107, pp. 78-94. Article in journal (Refereed)
    Abstract [en]

    Context: The principal focus of lean is the identification and elimination of waste from the process with respect to maximizing customer value. Similarly, the purpose of agile is to maximize customer value and minimize unnecessary work and time delays. In both cases the concept of waste is important. Through an empirical study, we explore how waste is approached in agile software development organizations. Objective: This paper explores the concept of waste in agile/lean software development organizations and how it is defined, used, prioritized, reduced, or eliminated in practice. Method: The data were collected using semi-structured open interviews; 23 practitioners from 14 embedded software development organizations were interviewed, representing two core roles in each organization. Results: The respondents identified various wastes, grouped into 10 categories. Not all of the mentioned wastes were waste per se; some could be symptoms caused by wastes. Of the seven wastes of lean, the respondents ranked task-switching as the most important and extra features as the least important. However, most companies neither have their own definition of waste nor use an established one; more importantly, very few actively identify or try to eliminate waste in their organizations beyond local initiatives at the project level. Conclusion: In order to identify, recognize, and eliminate waste, a common understanding and a joint, holistic view of the concept is needed. It is also important to optimize the whole organization and the whole product, as waste on one level can be important on another; sub-optimization should therefore be avoided. Furthermore, to achieve sustainable and effective waste handling, both the short-term and the long-term perspectives need to be considered. © 2018 Elsevier B.V.

  • 55.
    Alam, Payam Norouzi
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Agile Process Recommendations for a Market-driven Company (2003). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    This master thesis discusses problems in a small market-driven software development company. Rapid changes within the company result from a constantly changing market and must be managed with a tailored process that covers needs such as a short time-to-market. Misunderstandings when managing ideas from marketing, and challenging issues such as communication gaps between marketing staff and developers, are discussed. These problem areas, which stem from fast changes and a lack of processes and structures, were identified through a case study with interviews of selected staff. This thesis recommends several measures influenced by agile software development with Scrum and XP, chosen to fit the problem areas identified in the interviews. They will serve as a starting point for the company to improve the situation and to use XP and Scrum for further improvements.

  • 56.
    Al-Daajeh, Saleh
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Balancing Dependability Quality Attributes for Increased Embedded Systems Dependability (2009). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Embedded systems are used in many critical applications where a failure can have serious consequences. Achieving a high level of dependability is therefore an ultimate goal. To achieve it, however, we need to understand the interrelationships between the different dependability quality attributes and other quality attributes of embedded systems. This research study provides indicators of the relationship between the dependability quality attributes and other quality attributes for embedded systems by identifying the impact of architectural tactics as candidate solutions for constructing dependable embedded systems.

  • 57. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Kolstrom, Pirjo
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing (2016). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, pp. 66-80. Article in journal (Refereed)
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research provides only limited empirical data from industrial practice about the maintenance costs of automated tests and the factors that affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported, where interviews about, and empirical work with, Visual GUI Testing were performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors that affect maintenance are observed, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower the overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive economic ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.
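The closing observation of this abstract (less current manual testing implies a longer wait for positive ROI) follows from a simple break-even argument: automation pays back once its fixed development cost has been recovered through per-iteration savings. The function below is a generic sketch of that argument, not the paper's actual cost model; all names and numbers are illustrative.

```python
import math

def breakeven_iterations(dev_cost, maint_per_iter, manual_per_iter):
    """Smallest number of test iterations n for which automation's
    cumulative cost (dev_cost + n * maint_per_iter) falls strictly below
    manual testing's (n * manual_per_iter); None if maintaining the
    automated suite is not cheaper than running the tests manually."""
    saving = manual_per_iter - maint_per_iter
    if saving <= 0:
        return None  # automation never reaches positive ROI
    return math.floor(dev_cost / saving) + 1

# Same automation effort, different amounts of current manual testing
# (hypothetical person-hours):
print(breakeven_iterations(300, 10, 40))  # heavy manual testing -> 11
print(breakeven_iterations(300, 10, 15))  # light manual testing -> 61
```

The two calls show the effect the abstract describes: halving the per-iteration manual effort pushes the break-even point from 11 iterations to 61.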

  • 58.
    Ali, Israr
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Shah, Syed Shahab Ali
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Usability Requirements for GIS Application: Comparative Study of Google Maps on PC and Smartphone (2011). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: Smartphones are gaining popularity due to their mobility, computing capacity, and energy efficiency. Email, text messaging, navigation, and visualizing geospatial data through browsers are common smartphone features. Geospatial data is collected in computing formats and made publicly available; as demand increases, usability evaluation becomes important. Identifying usability requirements is as important as identifying conventional functional requirements in software engineering, and non-functional usability requirements are objective and testable using measurable metrics. Objectives: Usability evaluation plays an important role in the interaction design process as well as in identifying user needs and requirements. Comparative usability requirements are identified for the evaluation of a geographical information system (Google Maps) on a personal computer (laptop) and a smartphone (iPhone). Methods: The ISO 9241-11 guide on usability is used as an input model for identifying and specifying the usability level of Google Maps on both devices. The authors set target values for the usability requirements of the tasks and questionnaire on each device, such as the acceptable level of task completion, rate of efficiency, and participants' agreement with each measure, following ISO 9241-11. The usability test was conducted using the co-discovery technique on six pairs of graduate students. Interviews were conducted to validate the test results, and questionnaires were distributed to collect feedback from participants. Results: The non-functional usability requirements were tested using five metrics measured on user performance and satisfaction. In the usability test, the acceptable level of task completion and rate of efficiency were met on the personal computer but not on the iPhone. In the questionnaire, neither device met participants' agreement with each measure; only effectiveness was met, on the personal computer. Usability test, interview, and questionnaire feedback are included in the results. Conclusions: The authors provide suggestions based on the test results and identify usability issues for improving Google Maps on the personal computer and iPhone.
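The ISO 9241-11-style measures this entry refers to (task completion as an effectiveness proxy, rate of efficiency) can be sketched as simple ratios. This is a generic illustration under assumed definitions; the thesis's actual target values and data are not reproduced, and all numbers below are hypothetical.

```python
def completion_rate(tasks_completed, tasks_attempted):
    """Effectiveness proxy: fraction of attempted tasks completed."""
    return tasks_completed / tasks_attempted

def efficiency(tasks_completed, total_time_seconds):
    """Time-based efficiency proxy: completed tasks per minute."""
    return tasks_completed * 60.0 / total_time_seconds

# Hypothetical session: 9 of 12 tasks completed in 30 minutes.
print(completion_rate(9, 12))  # -> 0.75
print(efficiency(9, 1800))     # -> 0.3 tasks per minute
```

Comparing such ratios against pre-set target values per device is the kind of pass/fail judgment the Results paragraph reports.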

  • 59.
    Ali, Nauman Bin
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Is effectiveness sufficient to choose an intervention?: Considering resource use in empirical software engineering (2016). In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2016, Ciudad Real, Spain, September 8-9, 2016, article id 54. Conference paper (Refereed)
  • 60.
    Ali, Nauman bin
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Edison, Henry
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Towards Innovation Measurement in Software Industry (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: In today's highly competitive business environment, with shortened product and technology life cycles, it is critical for the software industry to innovate continuously. To help an organisation achieve this goal, a better understanding and control of the activities and determinants of innovation is required. This can be achieved through an innovation measurement initiative which assesses innovation capability, output, and performance. Objective: This study explores definitions of innovation, innovation measurement frameworks, key elements of innovation, and metrics that have been proposed in the literature and used in industry. The degree of empirical validation and the context of the studies were also investigated, as were the perception of innovation, its importance, its challenges, and the state of practice of innovation measurement in the software industry. Methods: In this study, a systematic literature review was conducted, followed by an online questionnaire and face-to-face interviews. The systematic review used seven electronic databases: Compendex, Inspec, IEEE Xplore, ACM Digital Library, Business Source Premier, Science Direct, and Scopus. Studies were subjected to preliminary, basic, and advanced criteria to judge the relevance of papers. The online questionnaire targeted software industry practitioners with different roles and firm sizes; a total of 94 completed and usable responses from 68 unique firms were collected. Seven face-to-face semi-structured interviews were conducted with four industry practitioners and three academics. Results: Based on the findings of the literature review, interviews, and questionnaire, a comprehensive definition of innovation was identified which may be used in the software industry. The metrics for the evaluation of determinants, inputs, outputs, and performance were aggregated and categorised, and a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through the interviews. Conclusions: The importance of innovation measurement is well recognised in both academia and industry. However, innovation measurement is not a common practice in industry. Major reasons include the lack of available metrics and of data collection mechanisms to measure innovation; the organisations that do measure innovation use only a few metrics that do not cover the entire spectrum of innovation, partly because of the lack of a consistent definition of innovation in industry. Moreover, there is a lack of empirical validation of the metrics and determinants of innovation: although there are some static validations, full-scale industry trials are currently missing. For the software industry, a unique challenge is the development of alternative measures, since some of the existing metrics are inapplicable in this context. The conceptual model constructed in this study is one step towards identifying the measurable key aspects of innovation and understanding the innovation capability and performance of software firms.

  • 61.
    Ali, Nauman bin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Engström, Emelie
    Lund University, SWE.
    Taromirad, Masoumeh
    Halmstad University, SWE.
    Mousavi, Muhammad Raza
    Halmstad University, SWE.
    Minhas, Nasir Mehmood
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Helgesson, Daniel
    Lund University, SWE.
    Kunze, Sebastian
    Halmstad University, SWE.
    Varshosaz, Mahsa
    Halmstad University, SWE.
    On the search for industry-relevant regression testing research (2019). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no. 4, pp. 2020-2055. Article in journal (Refereed)
    Abstract [en]

    Regression testing is a means to assure that a change in the software, or its execution environment, does not introduce new defects. It involves the expensive undertaking of rerunning test cases. Several techniques have been proposed to reduce the number of test cases to execute in regression testing; however, there is no research on how to assess the industrial relevance and applicability of such techniques. We conducted a systematic literature review with two goals: firstly, to enable researchers to design and present regression testing research with a focus on industrial relevance and applicability, and secondly, to facilitate the industrial adoption of such research by addressing the attributes of concern from the practitioners' perspective. Using a reference-based search approach, we identified 1068 papers on regression testing. We then reduced the scope to only include papers with explicit discussions about relevance and applicability (i.e., mainly studies involving industrial stakeholders). Uniquely in this literature review, practitioners were consulted at several steps to increase the likelihood of achieving our aim of identifying factors important for relevance and applicability. We have summarised the results of these consultations and an analysis of the literature in three taxonomies, which capture aspects of industrial relevance regarding the regression testing techniques. Based on these taxonomies, we mapped 38 papers reporting the evaluation of 26 regression testing techniques in industrial settings.

  • 62.
    Ali, Nauman Bin
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Petersen, Kai
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    A consolidated process for software process simulation: State of the Art and Industry Experience (2012). Conference paper (Refereed)
    Abstract [en]

    Software process simulation is a complex task, and in order to conduct a simulation project, practitioners require support through a process for software process simulation modelling (SPSM), including what steps to take and what guidelines to follow in each step. This paper provides a literature-based consolidated process for SPSM, where the steps and guidelines for each step are identified through a review of the literature and complemented by experience from applying these recommendations in action research at a large telecommunication vendor. We found five simulation processes in the SPSM literature, resulting in a consolidated seven-step process. The consolidated process was successfully applied at the studied company, and the experiences of doing so are reported.

  • 63.
    Ali, Nauman bin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    FLOW-assisted value stream mapping in the early phases of large-scale software development (2016). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, pp. 213-227. Article in journal (Refereed)
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and take no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information- and communication-related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was applied to a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM were evaluated from the practitioners' perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature of the improvements and their impact on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information flow related issues.

  • 64.
    Ali, Nauman Bin
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Petersen, Kai
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Mäntylä, Mika
    Testing highly complex system of systems: An industrial case study (2012). Conference paper (Refereed)
    Abstract [en]

    Systems of systems (SoS) are highly complex and are integrated on multiple levels (unit, component, system, system of systems). Many characteristics of SoS (such as operational and managerial independence of constituent systems and their integration into the SoS) make their development and testing challenging. Contribution: This paper provides an understanding of SoS testing in large-scale industry settings with respect to challenges and how to address them. Method: The research method used is case study research; data were collected through interviews, documentation, and fault slippage data. Results: We identified challenges related to SoS with respect to fault slippage, test turn-around time, and test maintainability, and classified the testing challenges into general testing challenges, challenges amplified by SoS, and SoS-specific challenges. Interestingly, the interviewees agreed on the challenges even though we sampled them with diversity in mind, which indicates that the number of interviews conducted was sufficient to answer our research questions. We also identified solution proposals for the challenges, categorized under four classes: developer quality assurance, function test, testing on all levels, and requirements engineering and communication. Conclusion: We conclude that although over half of the challenges we identified can be categorized as general testing challenges, SoS still have unique and amplified challenges stemming from SoS characteristics. Furthermore, the interviews and fault slippage data indicated that different areas of the software process should be improved, which suggests that using only one of these methods would have led to an incomplete picture of the challenges in the case company.

  • 65.
    Ali, Nauman bin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Usman, Muhammad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    A critical appraisal tool for systematic literature reviews in software engineering (2019). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, pp. 48-50. Article, research review (Refereed)
    Abstract [en]

    Context: Methodological research on systematic literature reviews (SLRs) in software engineering (SE) has so far focused on developing and evaluating guidelines for conducting systematic reviews. However, support for the quality assessment of completed SLRs has not received the same level of attention. Objective: To raise awareness of the need for a critical appraisal tool (CAT) for assessing the quality of SLRs in SE, and to initiate a community-based effort towards the development of such a tool. Method: We reviewed the literature on the quality assessment of SLRs to identify the frequently used CATs in SE and other fields. Results: We identified that the CATs currently used in SE were borrowed from medicine but have not kept pace with substantial advancements in that field. Conclusion: In this paper, we have argued the need for a CAT for the quality appraisal of SLRs in SE and identified a tool that has the potential for application in SE. Furthermore, we have presented our approach for adapting this state-of-the-art CAT for assessing SLRs in SE. © 2019 The Authors

  • 66.
    Ali, Nauman bin
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Usman, Muhammad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Reliability of search in systematic reviews: Towards a quality assessment framework for the automated-search strategy (2018). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, pp. 133-147. Article in journal (Refereed)
    Abstract [en]

    Context: The trust in systematic literature reviews (SLRs) to provide credible recommendations is critical for establishing evidence-based software engineering (EBSE) practice. The reliability of SLR as a method is not a given and largely depends on the rigor of the attempt to identify, appraise and aggregate evidence. Previous research, by comparing SLRs on the same topic, has identified search as one of the reasons for discrepancies in the included primary studies. This affects the reliability of an SLR, as the papers identified and included in it are likely to influence its conclusions. Objective: We aim to propose a comprehensive evaluation checklist to assess the reliability of an automated-search strategy used in an SLR. Method: Using a literature review, we identified guidelines for designing and reporting automated-search as a primary search strategy. Using the aggregated design, reporting and evaluation guidelines, we formulated a comprehensive evaluation checklist. The value of this checklist was demonstrated by assessing the reliability of search in 27 recent SLRs. Results: Using the proposed evaluation checklist, several additional issues (not captured by the current evaluation checklist) related to the reliability of search in recent SLRs were identified. These issues severely limit the coverage of literature by the search and also the possibility to replicate it. Conclusion: Instead of solely relying on expensive replications to assess the reliability of SLRs, this work provides means to objectively assess the likely reliability of a search-strategy used in an SLR. It highlights the often-assumed aspect of repeatability of search when using automated-search. Furthermore, by explicitly considering repeatability and consistency as sub-characteristics of a reliable search, it provides a more comprehensive evaluation checklist than the ones currently used in EBSE. © 2018 Elsevier B.V.

  • 67.
    Ali, Nauman
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Evaluating strategies for study selection in systematic literature studies (2014). In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, 2014, article 45. Conference paper (Refereed)
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population, and there is a homogeneity of results and quality of the articles.

  • 68.
    Alipour, Philip Baback
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Ali, Muhammad
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    An Introduction and Evaluation of a Lossless Fuzzy Binary AND/OR Compressor (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    We report a new lossless data compression algorithm (LDC) for implementing predictably fixed compression values. The fuzzy binary AND/OR algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not offer techniques for maximum LDCs that generate fixed values of Cr >= 2:1. However, the current model is evaluated to serve multidimensional LDCs with fixed value generation, contrasting with the popular methods used in probabilistic LDCs, such as Shannon entropy. The entropy introduced here is 'fuzzy binary', in a 4D hypercube bit-flag model, with a product value of at least 50% compression. We have implemented the compression and simulated the decompression phase for lossless versions of the FBAR logic. We further compared our algorithm with the results obtained by other compressors. Our statistical tests show that the presented algorithm significantly competes with other LDC algorithms on both the temporal and spatial factors of compression. The current algorithm is a stepping stone towards quantum information models solving complex negative entropies, yielding double-efficient LDCs with > 87.5% space savings.
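The figures quoted in this abstract (a ratio Cr of at least 2:1, space savings above 87.5%) follow from two standard formulas. The sketch below is not FBAR itself; it uses zlib as a stand-in lossless codec purely to show how the compression ratio and space-savings percentage are computed, and how losslessness is verified by a round trip.

```python
import zlib

def compression_stats(data: bytes):
    """Compress losslessly and report the ratio and space savings.

    Cr      = original size / compressed size  (2.0 means "2:1")
    savings = 1 - compressed / original        (0.875 means 87.5% saved)
    """
    compressed = zlib.compress(data, 9)
    cr = len(data) / len(compressed)
    savings = 1 - len(compressed) / len(data)
    # Lossless means decompression restores the input exactly.
    assert zlib.decompress(compressed) == data
    return cr, savings

# Highly redundant input compresses far beyond the 2:1 threshold.
cr, savings = compression_stats(b"AB" * 4096)
```

Note that both metrics describe the same outcome: savings = 1 - 1/Cr, so Cr = 2:1 corresponds to exactly 50% savings.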

  • 69.
    Allahyari, Hiva
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    On the concept of Understandability as a Property of Data mining Quality (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    This paper reviews methods for evaluating and analyzing the comprehensibility and understandability of models generated from data in the context of data mining and knowledge discovery. The motivation for this study is the fact that the majority of previous work has focused on increasing the accuracy of models, ignoring user-oriented properties such as comprehensibility and understandability. Approaches for analyzing the understandability of data mining models have been discussed on two different levels: one regarding the type of the models' presentation and the other considering the structure of the models. In this study, we present a summary of existing assumptions regarding both approaches, followed by an empirical examination of understandability from the user's point of view through a survey. From the results of the survey, we find that models represented as decision trees are more understandable than models represented as decision rules. Using the survey results regarding the understandability of a number of models, in conjunction with quantitative measurements of the complexity of the models, we are able to establish a correlation between the complexity and understandability of the models.
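The final step described above, relating a quantitative complexity measure to survey-based understandability scores, amounts to computing a correlation coefficient. The sketch below uses a plain Pearson correlation; the data are hypothetical, invented only to illustrate the computation (the thesis's actual measures and scores are not reproduced here).

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: model complexity (e.g. node count) versus the mean
# understandability score from a survey (higher = easier to understand).
complexity = [5, 12, 20, 35, 50]
understandability = [4.6, 4.1, 3.2, 2.5, 1.9]
r = pearson(complexity, understandability)  # strongly negative
```

A strongly negative r would support the intuition that more complex models are rated as harder to understand.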

  • 70.
    Almroth, Tobias
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Data visualization for the modern web: A look into tools and techniques for visualizing data in Angular 5 applications (2018). Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This paper looks into how data is best visualized and how visualizations should be designed to be most easily perceived. Furthermore, the study looks into which tools are available on the market today for visualizing data in Angular 5 applications. With regard to a client, a developer team from the Swedish Police IT department, the tools are evaluated and the one most suitable for the client is found. The paper also looks into how a dynamic data solution can be developed in Angular 5: a solution where data can be selected in one component and displayed in another.

    To answer these questions, a study of previous research into data visualization was conducted, as well as a review of how Angular 5 applications can be developed. Interviews with the client were held, where their specific requirements on visualization tools were identified. After searching for and listing the visualization tools available on the market, the tools were evaluated against the client's requirements and a prototype application was developed, showcasing both the most suitable tool and its integration, as well as a dynamic data solution in Angular 5.

    In conclusion, data visualizations should be made as simple as possible, with the main focus on the data. When it comes to tools, the one most suitable for the client was Chart.js, which integrated easily into an Angular 5 application; an application that, thanks to Angular's features, is well equipped for handling and developing dynamic data solutions.

  • 71.
    Almström, Malin
    et al.
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Olsson, Christina
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Förbättrad Kravhantering med hjälp av Lösningsinriktad Pedagogik [Improved Requirements Engineering using Solution-Focused Pedagogy] (2002). Independent thesis, Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    The purpose of this thesis was to improve methods used during the requirements engineering phase. User-centred systems engineering has some problem areas, which are examined and verified in order to create a new guideline for developers. This guideline aims to make requirements engineering more effective and to help developers create more concrete requirements. It is not uncommon that system development projects end with unsatisfied users or delayed deliveries. The reasons are various kinds of communication problems between users and developers during the verification of requirements. There is a therapy model, called solution-focused therapy, used in family and individual therapy. The model focuses on solutions for the future instead of problems in the past. This method had never been used in systems development until today. Based on literature studies and scenarios, we have shown that it is possible to transfer this pedagogy to systems development, especially requirements engineering. During our investigation we have shown that the pedagogy counters the difficulties of user-centred design. The pedagogy model can be used in four kinds of methods for capturing requirements: questionnaires, interviews, workshops and observations. The model activates the user and makes the user more involved. To show this, we have applied the pedagogy model to scenarios taken from earlier experiences of requirements engineering. The outcome of this investigation is that developers can create a more pleasant communication atmosphere with this pedagogy. As a result, users become more willing and helpful in creating a new system, which makes it easier for developers and users to cooperate. Many communication problems can be avoided if you know how to work around them.

  • 72.
    Al-Refai, Ali
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Pandiri, Srinivasreddy
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Cloud Computing: Trends and Performance Issues (2011). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: Cloud Computing is a very fascinating concept these days; it is attracting many organizations to move their utilities and applications into dedicated data centers so that they can be accessed from the Internet. This allows users to focus solely on their businesses while Cloud Computing providers handle the technology. Choosing the best provider is a challenge for organizations that are willing to step into the Cloud Computing world. A single cloud center generally cannot deliver a large scale of resources for the cloud tenants; therefore, multiple cloud centers need to collaborate to achieve some business goals and to provide the best possible services at the lowest possible costs. However, a number of aspects, legal issues, challenges, and policies should be taken into consideration when moving a service into the Cloud environment. Objectives: The aim of this research is to identify and elaborate the major technical and strategic differences between Cloud Computing providers in order to enable organization management, system designers and decision makers to have better insight into the strategies of the different Cloud Computing providers. It is also to understand the risks and challenges of implementing Cloud Computing, and how those issues can be moderated. This study will try to define Multi-Cloud Computing by studying the pros and cons of this new domain. It also aims to study the concept of load balancing in the cloud in order to examine performance over multiple cloud environments. Methods: In this master thesis a number of research methods are used, including a systematic literature review, contacting experts from the relevant field (interviews) and performing a quantitative methodology (experiment). Results: Based on the findings of the literature review, interviews and experiment, we obtained the following results for the research questions: 1) a comprehensive study identifying and comparing the major Cloud Computing providers; 2) a list of impacts of Cloud Computing (legal aspects, trust and privacy); 3) a definition of Multi-Cloud Computing and an identification of its benefits and drawbacks; 4) performance results for the cloud environment from an experiment on a load balancing solution. Conclusions: Cloud Computing is a central interest for many organizations nowadays. More and more companies are starting to step into Cloud Computing service technologies; Amazon, Google, Microsoft, SalesForce, and Rackspace are the top five major providers in the market today. However, no Cloud is perfect for all services. The legal framework is very important for the protection of the user's private data; it is a key factor for the safety of the user's personal and sensitive information. The privacy threats vary according to the nature of the cloud scenario: some clouds and services face very low privacy threats compared to others, and the public cloud that is accessed through the Internet is among the most exposed as privacy concerns increase. Lack of visibility of the provider supply chain will lead to suspicion and ultimately distrust. The evolution of Cloud Computing suggests that, in the near future, the so-called Cloud will in fact be a Multi-Cloud environment composed of a mixture of private and public Clouds, forming an adaptive environment. Load balancing in the Cloud Computing environment is different from typical load balancing; the architecture of cloud load balancing uses a number of commodity servers to perform the load balancing. The performance of the cloud differs depending on the cloud's location, even for the same provider. The HAProxy load balancer shows a positive effect on the cloud's performance at high loads; the effect is unnoticeable at lower loads. These effects can vary depending on the location of the cloud.
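The load-balancing architecture described above, a dispatcher spreading requests over a number of commodity servers, can be sketched with a minimal round-robin dispatcher. This is not HAProxy itself (HAProxy is configured, not coded, and supports several balancing algorithms); the class and backend names below are hypothetical, chosen only to illustrate the rotation principle.

```python
from itertools import cycle
from collections import Counter

class RoundRobinBalancer:
    """Minimal dispatcher in the spirit of round-robin balancing:
    each incoming request is handed to the next backend in rotation,
    so load spreads evenly regardless of request order."""

    def __init__(self, backends):
        self._ring = cycle(backends)

    def route(self, request):
        # The request itself is ignored here; a real balancer would
        # also track backend health and connection counts.
        return next(self._ring)

balancer = RoundRobinBalancer(["cloud-a", "cloud-b", "cloud-c"])
assignments = Counter(balancer.route(i) for i in range(99))
# 99 requests over 3 backends: each backend receives exactly 33.
```

Round-robin is only one of the strategies a production balancer offers; least-connections or weighted schemes behave differently under uneven request costs.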

  • 73.
    Alégroth, Emil
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Feldt, Robert
    Chalmers, SWE.
    On the long-term use of visual GUI testing in industrial practice: a case study (2017). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no. 6, pp. 2937-2971. Journal article (Refereed)
    Abstract [en]

    Visual GUI Testing (VGT) is a tool-driven technique for automated GUI-based testing that uses image recognition to interact with and assert the correctness of the behavior of a system through its GUI as it is shown to the user. The technique's applicability, e.g. defect-finding ability, and feasibility, e.g. time to positive return on investment, have been shown through empirical studies in industrial practice. However, there is a lack of studies that evaluate the usefulness and challenges associated with VGT when used long-term (years) in industrial practice. This paper evaluates how VGT was adopted and applied, and why it was abandoned, at the music streaming application development company Spotify after several years of use. A qualitative study with two workshops and five well-chosen employees is performed at the company, supported by a survey, which is analyzed with a grounded theory approach to answer the study's three research questions. The interviews provide insights into the challenges, problems and limitations, but also benefits, that Spotify experienced during the adoption and use of VGT. However, due to the technique's drawbacks, VGT has been abandoned for a new technique/framework, simply called the Test interface. The Test interface is considered more robust and flexible for Spotify's needs but has several drawbacks, including that it does not test the actual GUI as shown to the user as VGT does. From the study's results it is concluded that VGT can be used long-term in industrial practice, but it requires organizational change as well as engineering best practices to be beneficial. Through synthesis of the study's results, and results from previous work, a set of guidelines is presented that aims to aid practitioners in adopting and using VGT in industrial practice. However, due to the abandonment of the technique, future research is required to analyze in what types of projects the technique is, and is not, viable long-term. To this end, we also present Spotify's Test interface solution for automated GUI-based testing and conclude that it has its own benefits and drawbacks.
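The core VGT mechanism, locating a known image on screen and asserting GUI state from pixels, can be illustrated with a naive exact-match search over a tiny grid. Real VGT tools (e.g. Sikuli-style frameworks) perform fuzzy matching over actual screenshots; this toy sketch only demonstrates the principle, and all data in it are made up.

```python
def locate(screen, pattern):
    """Naive image recognition: exhaustive exact-match search for
    `pattern` inside `screen` (both given as lists of equal-length rows
    of pixel values). Returns (row, col) of the top-left match, or None."""
    ph, pw = len(pattern), len(pattern[0])
    sh, sw = len(screen), len(screen[0])
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            if all(screen[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c)
    return None

# A 1-bit "screenshot" containing a 2x2 button glyph somewhere on it.
screen = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
button = [[1, 1],
          [1, 1]]
hit = locate(screen, button)
# A VGT assertion would be: the button is visible, so `hit` is not None.
```

In a real tool, the returned coordinates would then drive a click or an assertion; the brittleness of this pixel-level coupling is exactly the kind of drawback the Spotify study reports.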

  • 74.
    Alégroth, Emil
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gonzalez-Huerta, Javier
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards a mapping of software technical debt onto testware (2017). In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 404-411, article id 8051379. Conference paper (Refereed)
    Abstract [en]

    Technical Debt (TD) is a metaphor used to explain the negative impact that sub-optimal design decisions have on the long-term perspective of a software project. Although TD is acknowledged by both researchers and practitioners to have a strong negative impact on software development, its study on Testware has so far been very limited. This gap in knowledge is important to address due to the growing popularity of Testware (scripted automated testing) in software development practice. In this paper we present a mapping analysis that connects 21 well-known, object-oriented Software TD items to Testware, establishing them as Testware Technical Debt (TTD) items. The analysis indicates that most Software TD items are applicable or observable as TTD items, often in similar form and with roughly the same impact as for Software artifacts (e.g. reducing the quality of the produced artifacts, lowering the effectiveness and efficiency of the development process whilst increasing costs). In the analysis, we also identify three types of connections between Software TD and TTD items with varying levels of impact and criticality. Additionally, the study finds support for previous research results in which specific TTD items unique to Testware were identified. Finally, the paper outlines several areas of future research into TTD. © 2017 IEEE.

  • 75.
    Alégroth, Emil
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Petersen, Kai
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Mattsson, Michael
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey - Appendix A: Questionnaire Introduction. Decision-making in Practice / Appendix B: Survey results (2019). Dataset
  • 76.
    Alégroth, Emil
    et al.
    Chalmers, SWE.
    Gustafsson, Johan
    SAAB AB, SWE.
    Ivarsson, Henrik
    SAAB AB, SWE.
    Feldt, Robert
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Replicating Rare Software Failures with Exploratory Visual GUI Testing (2017). In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 34, no. 5, pp. 53-59, article id 8048660. Journal article (Refereed)
    Abstract [en]

    Saab AB developed software that had a defect that manifested itself only after months of continuous system use. After years of customer failure reports, the defect still persisted, until Saab developed failure replication based on visual GUI testing. © 1984-2012 IEEE.

  • 77.
    Alégroth, Emil
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Karlsson, Arvid
    Cilbuper IT, Gothenburg, SWE.
    Radway, Alexander
    Techship Krokslatts Fabriker, SWE.
    Continuous Integration and Visual GUI Testing: Benefits and Drawbacks in Industrial Practice (2018). In: Proceedings - 2018 IEEE 11th International Conference on Software Testing, Verification and Validation, ICST 2018, Institute of Electrical and Electronics Engineers Inc., 2018, pp. 172-181. Conference paper (Refereed)
    Abstract [en]

    Continuous integration (CI) is growing in industrial popularity, spurred on by market trends towards faster delivery and higher-quality software. A key facilitator of CI is automated testing that should be executed, automatically, on several levels of system abstraction. However, many systems lack the interfaces required for automated testing. Others lack test automation coverage of the system under test's (SUT) graphical user interface (GUI) as it is shown to the user. One technique that shows promise to solve these challenges is Visual GUI Testing (VGT), which uses image recognition to stimulate and assert the SUT's behavior. Research has presented the technique's applicability and feasibility in industry, but only limited support, from an academic setting, that the technique is applicable in a CI environment. In this paper we present an industrial design research study with the objective of helping bridge the gap in knowledge regarding VGT's applicability in an industrial CI environment. Results, acquired from interviews, observations and quantitative analysis of 17,567 test executions, collected over 16 weeks, show that VGT provides similar benefits to other automated test techniques for CI. However, several significant drawbacks, such as high costs, are also identified. The study concludes that, although VGT is applicable in an industrial CI environment, its severe challenges require more research and development before the technique becomes efficient in practice. © 2018 IEEE.

  • 78.
    Alégroth, Emil
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
    Overview of the ICST International Software Testing Contest (2017). In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, pp. 550-551. Conference paper (Refereed)
    Abstract [en]

    In the software testing contest, practitioners and researchers are invited to pit their test approaches against similar approaches, to evaluate their pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated on the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing (ICST), 2017. © 2017 IEEE.

  • 79.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Nutalapati, Swetha Bindu
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Continuous Integration, Deployment and Testing in DevOps Environment (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Owing to a multitude of factors like rapid changes in technology, market needs, and business competitiveness, software companies these days are facing pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained a wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, the organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

    Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies to overcome the challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Methods. A mixed methods approach having both qualitative and quantitative research methods is used to accomplish the research objectives. A Systematic Literature Review is conducted to identify the benefits and challenges of DevOps adoption, and the DevOps practices. Interviews are conducted to further validate the SLR findings, and to identify the solutions to overcome DevOps adoption challenges and the DevOps practices. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data, and to identify other benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for conducting the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption, and the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed, and interviews are conducted. The interview data is thematically coded, and a list of the benefits, challenges of DevOps adoption and solutions to overcome them, DevOps practices, and problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, DevOps practices and problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document. The final list can aid researchers and software practitioners in obtaining a better understanding regarding the functioning and adoption of DevOps. Also, it has been observed that there is a need for more empirical research in this domain.

  • 80.
    Ambreen, T.
    et al.
    Int Islamic Univ, PAK.
    Ikram, N.
    Riphah Int Univ, PAK.
    Usman, Muhammad
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Niazi, M.
    King Fahd Univ Petr & Minerals, SAU.
    Empirical research in requirements engineering: trends and opportunities (2018). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 23, no. 1, pp. 63-95. Journal article (Refereed)
    Abstract [en]

    Requirements engineering (RE), being a foundation of software development, has gained great recognition in the recent era of the prevailing software industry. A number of journals and conferences have published a great amount of RE research in terms of various tools, techniques, methods, and frameworks, with a variety of processes applicable in different software development domains. The plethora of empirical RE research needs to be synthesized to identify trends and future research directions. To represent the state of the art of requirements engineering, along with various trends and opportunities of empirical RE research, we conducted a systematic mapping study to synthesize the empirical work done in RE. We used four major databases, IEEE, ScienceDirect, SpringerLink and ACM, and identified 270 primary studies up to the year 2012. An analysis of the data extracted from the primary studies shows that empirical research work in RE has been on the increase since the year 2000. Requirements elicitation with 22 % of the total studies, requirements analysis with 19 % and the RE process with 17 % are the major focus areas of empirical RE research. Non-functional requirements were found to be the most researched emerging area. The empirical work in the sub-area of requirements validation and verification is limited and has a decreasing trend. The majority of the studies (50 %) used a case study research method, followed by experiments (28 %), whereas experience reports are few (6 %). A common trend in almost all RE sub-areas is proposing new interventions. The leading intervention types are guidelines, techniques and processes. Interest in empirical RE research is on the rise as a whole. However, the requirements validation and verification area, despite its recognized importance, lacks empirical research at present. Furthermore, requirements evolution and privacy requirements also have little empirical research. These RE sub-areas need the attention of researchers. At present, the focus of empirical RE research is mostly on proposing new interventions. In the future, there is a need to replicate existing studies as well as to evaluate RE interventions in more real contexts and scenarios. Practitioners' involvement in empirical RE research needs to be increased so that they share their experiences of using different RE interventions and also inform us about the current requirements-related challenges and issues that they face in their work. © 2016 Springer-Verlag London

  • 81.
    Amiri, Javad Mohammadian
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Padmanabhuni, Venkata Vinod Kumar
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    A Comprehensive Evaluation of Conversion Approaches for Different Function Points (2011). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: Software cost and effort estimation are important activities for the planning and estimation of software projects. One major input for cost and effort estimation is the functional size of software, which can be measured in a variety of ways. Having several methods for measuring one entity makes converting between the outputs of these methods important. Objectives: In this study we investigate different techniques that have been proposed for conversion between different Functional Size Measurement (FSM) methods. We address conceptual similarities and differences between methods, empirical approaches proposed for conversion, evaluation of the proposed approaches and improvement opportunities that are available for current approaches. Finally, we propose a new conversion model based on accumulated data. Methods: We conducted a systematic literature review to investigate the similarities and differences between FSM methods and the approaches proposed for conversion. We also identified some improvement opportunities for the current conversion approaches. Sources for articles were IEEE Xplore, Engineering Village, Science Direct, ISI, and Scopus. We also performed snowball sampling to decrease the chance of missing any relevant papers. We further evaluated the existing models for conversion after merging the data from publicly available datasets. Building on suggestions for improvement, we developed a new model and then validated it. Results: Conceptual similarities and differences between methods are presented along with all methods and models that exist for conversion between different FSM methods. We also made three major contributions to existing empirical methods: for one existing method (piecewise linear regression) we used a systematic and rigorous way of finding the discontinuity point; we evaluated several existing models to test their reliability based on a merged dataset; and we accumulated all data from the literature in order to find the nature of the relation between IFPUG and COSMIC using the LOESS regression technique. Conclusions: We conclude that many concepts used by different FSM methods are common, which enables conversion. In addition, statistical results show that the proposed approach to enhance the piecewise linear regression model slightly improves the model's test results; even this small improvement can affect project costs considerably. The results of the model evaluation show that it is not possible to say which model predicts unseen data better than the others; which model should be used depends on the practitioner's concerns. Finally, the accumulated data confirm that the empirical relation between IFPUG and COSMIC is not linear and is represented better by two separate lines than by other models. We also note that, unlike the COSMIC manual's claim that the discontinuity point should be around 200 FP, in the merged dataset the discontinuity point is around 300 to 400. Finally, we propose a new conversion approach using a systematic approach and piecewise linear regression. Tested on new data, this model shows improvement in MMRE and Pred(25).
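The accuracy measures used to validate conversion models of this kind, MMRE and Pred(25), are standard in effort-estimation research and easy to state in code. The IFPUG-to-COSMIC numbers below are hypothetical, chosen only to exercise the formulas; they are not data from the thesis.

```python
def mre(actual, predicted):
    """Magnitude of relative error for a single observation."""
    return abs(actual - predicted) / actual

def mmre(actuals, predictions):
    """Mean MRE over a test set (lower is better)."""
    pairs = list(zip(actuals, predictions))
    return sum(mre(a, p) for a, p in pairs) / len(pairs)

def pred(actuals, predictions, level=0.25):
    """Pred(25): fraction of predictions within 25% of the actual value
    (higher is better)."""
    pairs = list(zip(actuals, predictions))
    return sum(mre(a, p) <= level for a, p in pairs) / len(pairs)

# Hypothetical COSMIC sizes predicted from IFPUG function points.
actual    = [100, 200, 300, 400]
predicted = [110, 140, 310, 390]
# Three of four predictions fall within 25% of the actual value,
# so Pred(25) = 0.75; the 140-for-200 miss (MRE = 0.3) drags MMRE up.
```

A model improvement is then simply a lower MMRE and/or a higher Pred(25) on held-out data.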

  • 82.
    Andersson, Alve
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Att sticka ut i mängden: En studie av tekniker för variation av instansierade modeller2013Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    Despite recent advances in hardware, real-time rendering of large crowds is still no trivial task. This task is known as crowd rendering. Efficient crowd rendering is often based on instancing, but instancing comes with a problem: it creates clones. This thesis aims to examine and evaluate a number of techniques used to create variety among instanced models. These techniques will collectively be referred to as varied instancing. Another goal is to determine how many models are needed before varied instancing pays off compared to non-instanced rendering. The method used is to measure the time of each update on the GPU for each technique using a measurement instrument. Each technique has been implemented in an application created specifically for this purpose. The analysis of the measurements resulted in three categories of GPU workload share: rising with instance count and declining with polygon count, falling with instance count and declining with polygon count, and even across instance and polygon counts. The number of instances needed for varied instancing to pay off compared to non-instanced rendering was determined to be somewhere between 100 and 300 models, depending on the number of polygons.

  • 83.
    Andersson, Björn
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Persson, Marie
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Software Reliability Prediction – An Evaluation of a Novel Technique2004Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    Along with continuously increasing computerization, our expectations of software and hardware reliability increase considerably. Software reliability has therefore become one of the most important software quality attributes. Software reliability modeling based on test data is done to estimate whether the current reliability level meets the requirements for the product, and it also provides possibilities to predict reliability. The costs of software development and testing, together with profit considerations related to software reliability, are among the main motivations for software reliability prediction. Software reliability prediction currently uses a range of models for this purpose, whose parameters have to be set in order to tune each model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), has been developed to challenge the models used today. An experiment is set up to investigate whether TIE is useful in a software reliability prediction context, based on a comparison between the ordinary reliability prediction models and TIE.

  • 84. Andersson, Emma
    et al.
    Peterson, Anders
    Törnquist Krasemann, Johanna
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Quantifying railway timetable robustness in critical points2013Inngår i: Journal of Rail Transport Planning and Management, ISSN 2210-9706, Vol. 3, nr 3, s. 95-110Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Several European railway traffic networks experience high capacity consumption during large parts of the day, resulting in delay-sensitive traffic systems with insufficient robustness. One fundamental challenge is therefore to assess the robustness and find strategies to decrease the sensitivity to disruptions. Accurate robustness measures are needed to determine whether a timetable is sufficiently robust and to suggest where improvements should be made. Existing robustness measures are useful when comparing different timetables with respect to robustness. They are, however, not as useful for suggesting precisely where and how robustness should be increased. In this paper, we propose a new robustness measure that incorporates the concept of critical points. This concept can be used in the practical timetabling process to find weaknesses in a timetable and to provide suggestions for improvements. In order to quantitatively assess how crucial a critical point may be, we have defined the measure robustness in critical points (RCP). We present results from an experimental study in which RCP is benchmarked against several existing measures. The results demonstrate the relevance of the concept of critical points and of RCP, and show how RCP contributes to the set of already defined robustness measures.

  • 85.
    Andersson, Madelene
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Systemutveckling i praktiken: konsten att tillmötesgå den okända användarens krav2002Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    System development has become more and more concentrated on development for the Web, and this has resulted in larger target groups. It will most surely continue to be so, considering that the Web will be the infrastructure of business and services in the future. A big target group means that the owner of a system can earn a lot of money from the paying users, but that assumes that the system can meet user needs. If a system on the Web does not satisfy the users' demands, they will use a competitor's system instead, because it is only a mouse-click away. That is why the business, already during the development process, has to take the role of the users seriously. Even if all users cannot take part in the process, at least some users can, and it would be a shame not to take advantage of this kind of expert knowledge. This report describes how a system development project can be carried out in practice, what a developer can do to satisfy each user's requirements even though the users are not specified, and why a useful system is classified as an investment in the future.

  • 86.
    Andersson, Magnus
    Blekinge Tekniska Högskola.
    Sökmotoroptimering med analysverktyg2018Independent thesis Basic level (university diploma), 10 poäng / 15 hpOppgave
  • 87.
    Andersson, Marcus
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Nilsson, Alexander
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Improving Integrity Assurances of Log Entries From the Perspective of Intermittently Disconnected Devices2014Oppgave
    Abstract [en]

    It is common today in large corporate environments for system administrators to employ centralized systems for log collection and analysis. The log data can come from any device, from smart-phones to large-scale server clusters. During an investigation of a system failure or suspected intrusion these logs may contain vital information, but the trustworthiness of this log data must be confirmed. The objective of this thesis is to evaluate the state of the art and provide practical solutions and suggestions in the field of secure logging. We focus on solutions that do not require a persistent connection to a central log management system. To this end a prototype logging framework was developed, including client, server and verification applications, where the client employs different techniques for signing log entries. The focus of this thesis is to evaluate each signing technique from both a security and a performance perspective. We evaluate "Traditional RSA-signing", "Traditional Hash-chains", "Itkis-Reyzin's asymmetric FSS scheme" and "RSA signing and tick-stamping with TPM", the latter being a novel technique developed by us. In our evaluations we recognized the inability of the evaluated techniques to detect so-called 'truncation attacks', so a truncation detection module was also developed which can be used independently of, and side by side with, any signing technique. We conclude that our novel Trusted Platform Module technique has the most to offer in terms of log security, although it introduces a hardware dependency on the TPM. We have also shown that the truncation detection technique can assure an external verifier of the number of log entries that have at least passed through the log client software.
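The "Traditional Hash-chains" technique evaluated above can be illustrated with a minimal sketch, assuming HMAC-SHA-256 as the per-entry MAC (the thesis does not prescribe these primitives, and the function names here are ours; real forward-secure schemes also evolve the key per entry). Note how the chain detects interior tampering but not consistent truncation.

```python
import hashlib
import hmac

def chain_logs(entries, key):
    """Sign each log entry with a MAC that also covers the previous MAC,
    so modifying, deleting or reordering an interior entry breaks the chain."""
    macs, prev = [], b"\x00" * 32
    for entry in entries:
        mac = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
        macs.append(mac)
        prev = mac
    return macs

def verify_chain(entries, macs, key):
    """Recompute the chain and compare MACs in constant time."""
    prev = b"\x00" * 32
    for entry, mac in zip(entries, macs):
        expected = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev = mac
    return len(entries) == len(macs)
```

Trimming the same number of items from both the entry list and the MAC list still verifies, which is exactly the truncation attack the authors' separate detection module is meant to catch.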

  • 88.
    Andersson, Patrik
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Johansson, Sakarias
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Rendering with Marching Cubes, looking at Hybrid Solutions2012Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [en]

    Marching Cubes is a rendering technique that has advantages in many areas. It is a technique for representing scalar fields as a three-dimensional mesh. It is used for geographical applications as well as scientific ones, mainly in the medical industry to visually render medical data of the human body. It is also an interesting technique to explore for use in computer games and other real-time applications, since it can create some really interesting renderings. The main focus of this paper is to present a novel hybrid solution using marching cubes and heightmaps to render terrain, and to determine whether it is suitable for real-time applications. The paper takes both a theoretical and an implementation-oriented approach to the hybrid solution. The results across several tests for different scenarios show that the hybrid solution works well for today's real-time applications using a modern graphics card and CPU (Central Processing Unit).

  • 89.
    Andersson, Tobias
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Brenden, Christoffer
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Parallelism in Go and Java: A Comparison of Performance Using Matrix Multiplication2018Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    This thesis makes a comparison between the performance of Go and Java using parallelized implementations of the Classic Matrix Multiplication Algorithm (CMMA). The comparison attempts to only use features for parallelization, goroutines for Go and threads for Java, while keeping other parts of the code as generic and comparable as possible to accurately measure the performance during parallelization. In this report we ask the question of how programming languages compare in terms of multithreaded performance. In high-performance systems, such as those designed for mathematical calculations or servers meant to handle requests from millions of users, multithreading and by extension performance are vital. We would like to find out if, and by how much, the choice of programming language could benefit these systems in terms of parallelism and multithreading. Another motivation is to analyze techniques and programming languages that have emerged that hide the complexity of handling multithreading and concurrency from the user, letting the user specify keywords or commands from which the language takes over and creates and manages the thread scheduling on its own. The Go language is one such example. Is this new technology an improvement over developers coding threads themselves, or is the technology not quite there yet? To these ends, experiments were done with multithreaded matrix multiplication, implemented using goroutines for Go and threads for Java, and performed with sets of 4096x4096 matrices. Background programs were limited, and each set of calculations was run multiple times to get average values, which were then compared to one another. Results from the study showed that Go had ~32-35% better performance than Java between 1 and 4 threads, with the difference diminishing to ~2-5% at 8 to 16 threads.
    The difference, however, was believed to be mostly unrelated to parallelization, as both languages maintained near-identical performance scaling as the number of threads increased, until the scaling flatlined for both languages at 8 threads and up. Java did continue to gain a slight increase going from 4 to 8 threads, but this was believed to be due to inefficient resource utilization on Java's part, or to Java having better utilization of hyper-threading than Go. In conclusion, Go was found to be considerably faster than Java when going from the main thread up to 4 threads. At 8 threads and onward, Java and Go performed roughly equally. As for performance differences within each language, no noticeable increase or decrease was found when creating 1 thread versus running the matrix multiplication directly on the main thread, for either of the two languages. Coding multithreading in Go was found to be easier than in Java while providing greater or equal performance: Go just requires the 'go' keyword, while Java requires thread creation and management. This puts Go in favor for those trying to avoid the complexity of multithreading while also seeking its benefits.
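The row-partitioned decomposition used in such experiments can be sketched as follows — here in Python rather than the Go and Java of the thesis, with our own function names. CPython's global interpreter lock means plain threads show the structure of the decomposition but not the speedup; the thesis' goroutine and Java-thread versions run truly in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_rows(a, b, start, end):
    """Compute rows [start, end) of a @ b with the classic triple loop (CMMA)."""
    m, p = len(b), len(b[0])
    block = []
    for i in range(start, end):
        row = [0] * p
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                row[j] += aik * b[k][j]
        block.append(row)
    return block

def parallel_matmul(a, b, workers=4):
    """Split the result rows into one contiguous chunk per worker."""
    n = len(a)
    step = -(-n // workers)  # ceiling division
    bounds = [(s, min(s + step, n)) for s in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        blocks = pool.map(lambda se: matmul_rows(a, b, *se), bounds)
    return [row for block in blocks for row in block]
```

The Go version replaces the thread pool with one goroutine per chunk and a sync.WaitGroup; the Java version starts one Thread per chunk and joins them.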

  • 90.
    Ansari, Rehan Javed.
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Dodda, Sandhya Rani.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    The Use of SCRUM in Global Software Development – An Empirical Study2010Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    The trend toward global software development is increasing day by day. Global software development covers the development of software globally, bringing in knowledge about the market. There are several challenges that have an impact on developing software globally. In this study we investigate the management challenges faced in globally distributed projects, the scrum practices organizations implement, and the benefits of implementing scrum. For our research, we performed a literature review to find the various challenges in managing globally distributed software projects and the various scrum practices that are discussed. We conducted industrial case studies to find out the challenges organizations face in globally distributed projects, the scrum practices they follow to overcome those challenges, and the benefits of implementing scrum in GSD. In order to provide quantitative support for the management challenges and scrum practices discussed in the literature review, surveys were conducted. We used grounded theory to analyze the data gathered during the study. Several challenges faced by organizations developing software globally, as well as several scrum practices, were identified in the interviews. A few challenges remain to be addressed in future research.

  • 91.
    Ansari, Umair Azeem
    et al.
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för industriell ekonomi.
    Ali, Syed Umair
    Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för industriell ekonomi.
    Application of LEAN and BPR principles for Software Process Improvement (SPI): A case study of a large software development organization2014Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    Background: Like other businesses, the failures and problems faced by the software development industry over time have motivated experts to look to software process improvement to create quality software rapidly, repeatedly, and reliably. Objective: The purpose of this study is to evaluate if and how Lean thinking and principles, primarily associated with the auto manufacturing industry, can be applied to the software development lifecycle for Software Process Improvement (SPI). The secondary aim is to analyze how BPR can be integrated with Lean software development for process improvement. Method: A derived Lean-BPR adoption pattern model is used as the theoretical framework for this thesis. The seven Lean software development principles, along with the four-step BPR process, are selected as process improvement patterns, which affect the KPIs of a software organization. This research study incorporates both qualitative and quantitative methods and data to analyze the objectives of the study. The methodological framework of Plan-Do-Check-Act is used in the case study to implement process re-engineering incorporating Lean and BPR principles. The impact of adopting the Lean and BPR principles is assessed in terms of cost, productivity, product quality and resource management. Results: Application of Lean and BPR principles for software process improvement in the organization under study resulted in a 79% improvement in test coverage, a 60% reduction in time for test execution and analysis, and a 44% reduction in the cost of fixing defects that had previously been passed on to the customer. Conclusion: Based on the case study results, it can be concluded that Lean, a bottom-up approach characterized by the empowerment of employees to analyze and improve their own working process, can be effectively combined with the IT-centric, traditionally top-down BPR approach for improving KPIs and software processes.

  • 92.
    ANWAR, WALEED
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Software Quality Characteristics Tested For Mobile Application Development: Literature Review and Empirical Survey2015Independent thesis Advanced level (degree of Master (One Year)), 10 poäng / 15 hpOppgave
    Abstract [en]

    Smartphone use is increasing day by day, and there is a large number of app users. Due to the growing use of apps, the testing of mobile applications should be done correctly and flawlessly to ensure the effectiveness of mobile applications.

  • 93.
    Aouachria, Moufida
    et al.
    Universite du Quebec a Montreal, CAN.
    Leshob, Abderrahmane
    Universite du Quebec a Montreal, CAN.
    Gonzalez-Huerta, Javier
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Ghomari, Abdessamed Réda
    Ecole nationale superieure d'Informatique, DZA.
    Hadaya, Pierre
    Universite du Quebec a Montreal, CAN.
    Business Process Integration: How to Achieve Interoperability through Process Patterns2017Inngår i: Proceedings - 14th IEEE International Conference on E-Business Engineering, ICEBE 2017 - Including 13th Workshop on Service-Oriented Applications, Integration and Collaboration, SOAIC 207, Institute of Electrical and Electronics Engineers Inc. , 2017, s. 109-117Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Business process integration (BPI) is a crucial technique for supporting inter-organizational business interoperability. BPI allows the automation of business processes and the integration of systems across numerous organizations. The integration of organizations' process models is one of the most addressed and used approaches to achieve BPI. However, this model integration is complex and requires that designers have extensive experience, in particular when organizations' business processes are incompatible. This paper considers the issue of modeling cross-organization processes out of a collection of organizations' private process models. To this end, we propose six adaptation patterns to resolve incompatibilities when combining organizations' processes. Each pattern is formalized with a workflow net. © 2017 IEEE.

  • 94.
    Ardito, Luca
    et al.
    Politecnico di Torino, ITA.
    Coppola, Riccardo
    Politecnico di Torino, ITA.
    Torchiano, Marco
    Politecnico di Torino, ITA.
    Alégroth, Emil
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards automated translation between generations of GUI-based tests for mobile devices2018Inngår i: Companion Proceedings for the ISSTA/ECOOP 2018 Workshops, Association for Computing Machinery, Inc , 2018, s. 46-53Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Market demands for faster delivery and higher software quality are progressively becoming more stringent. A key hindrance for software companies to meet such demands is how to test the software, due to the intrinsic costs of development, maintenance and evolution of testware, especially since testware should be defined, and aligned, with all layers of the system under test (SUT), including all graphical user interface (GUI) abstraction levels. These levels can be tested with different generations of GUI-based test approaches, where 2nd generation, or layout-based, tests leverage GUI properties and 3rd generation, or visual, tests make use of image recognition. The two approaches provide different benefits and drawbacks and are seldom used together because of the aforementioned costs, despite growing academic evidence of their complementary benefits. In this work we propose the proof of concept of a novel two-step translation approach for Android GUI testing that we aim to implement, where a translator first creates a technology-independent script with the actions and elements of the GUI, and then translates it into a script with the syntax chosen by the user. The approach enables users to translate layout-based scripts to visual scripts and vice versa, to gain the benefits (e.g. robustness, speed and ability to emulate the user) of both generations while minimizing the drawbacks (e.g. development and maintenance costs). We outline our approach from a technical perspective, discuss some of the key challenges in realizing it, evaluate its feasibility and advantages on an open-source Android application, and discuss the potential industrial impact of this work. © 2018 ACM.

  • 95.
    Areskoug, Andreas
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Jämförelse av J2EE och .NET från ett Web Services perspektiv.2006Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    This thesis compares the performance of Web Services hosted on either the J2EE or the .NET platform, and investigates which platform should be chosen to host Web Services, based mainly on performance.

  • 96.
    Arneng, Per
    et al.
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Bladh, Richard
    Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
    Performance Analysis of Distributed Object Middleware Technologies2003Independent thesis Advanced level (degree of Master (One Year))Oppgave
    Abstract [en]

    Each day, new computers around the world connect to the Internet or some other network. The increasing number of people and computers on the Internet has led to a demand for more services in different domains that can be accessed from many locations in the network. When computers communicate they use different kinds of protocols to deliver a service. One of these protocol families is remote procedure calls between computers. Remote procedure calls have been around for quite some time, but it is with the Internet that their usage has increased substantially, especially in their object-oriented form, which follows from object-oriented programming having become a popular choice amongst programmers. When a programmer has to choose a distributed object middleware there is a lot to take into consideration, and one of those things is performance. This master's thesis aims to compare the performance of different distributed object middleware technologies, give an overview of the performance differences between them, and make it easier for a programmer to choose one of the technologies when performance is an important factor. In this thesis we have evaluated the performance of CORBA, DCOM, RMI, RMI-IIOP, Remoting-TCP and Remoting-HTTP. The results of this evaluation show that DCOM and RMI are the distributed object middleware technologies with the best overall performance in terms of throughput and round-trip time. Remoting-TCP generally generates the least network traffic, while Remoting-HTTP generates the most due to its SOAP-formatted protocol.

  • 97.
    Arnesson, Andreas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Codename one and PhoneGap, a performance comparison2015Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    Creating smartphone applications for more than one operating system requires knowledge of several programming languages, more code maintenance, higher development costs and longer development time. To make this easier, cross-platform tools (CPTs) exist. But using a CPT can decrease the performance of the application, and applications with low performance are more likely to be uninstalled, which makes developers lose income. There are four main CPT approaches: hybrid, interpreter, web and cross-compiler, each with different advantages and disadvantages. This study examines the performance difference between two CPTs, Codename One and PhoneGap. The performance measurements CPU load, memory usage, energy consumption, execution time and application size are used to compare the CPTs. Whether cross-compilers have better performance than the other CPT approaches is also investigated. An experiment is conducted in which three applications are created with native Android, Codename One and PhoneGap, and performance measurements are made. A literature study with research from IEEE and Engineering Village is conducted on the different CPT approaches. PhoneGap performed best, with the shortest execution time, least energy consumption and least CPU usage, while Codename One had the smallest application size and least memory usage. The available research on CPT performance is sparse and of limited quality. The difference between PhoneGap and Codename One is not large, except for writing to SQLite. No basis was found for the statement that cross-compilers have better performance than other CPT approaches.

  • 98.
    Aroseus, Zara
    et al.
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Langeström, Emmie
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Lindberg, Tobias
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Vår utvecklingsprocess: designaspekter på ett befintligt webbsystem2001Independent thesis Basic level (degree of Bachelor)Oppgave
    Abstract [sv]

    This report is the result of a 20-credit bachelor project. The project the report describes concerns the development of a web shop system for a number of companies, which in turn bought the web system from the company Opti Use, which gave us the assignment. The report discusses how the development process proceeded and what it was like to work with an existing system. It also describes how the solution to the task took shape and how the development process influenced it. The methods used during the project are described in terms of how they changed in order to achieve the desired result. The meaning of the title reflects the project group's development process during the work on the project, in relation to earlier experiences during the MDA programme.

  • 99.
    Arsalan, Muhammad
    Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap.
    Future Tuning Process For Embedded Control Systems2009Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    This master's thesis concerns the development of embedded control systems. The development process for embedded control systems involves several steps, such as control design, rapid prototyping, fixed-point implementation and hardware-in-the-loop simulation. Another step, which Volvo is not currently (September 2009) using within climate control, is on-line tuning. One reason for not using this technique today is that the available tools for this task (ATI Vision, INCA from ETAS or CalDesk from dSPACE) do not handle parameter dependencies in a satisfactory way. With these constraints, on-line tuning is not possible and the controller development process is more laborious and time-consuming. The main task of this thesis is to solve the problem with parameter dependencies and to make on-line tuning possible.

  • 100.
    Arslan, Muhammad
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Riaz, Muhammad Assad
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    A Roadmap for Usability and User Experience Measurement during early phases of Web Applications Development2010Independent thesis Advanced level (degree of Master (Two Years))Oppgave
    Abstract [en]

    Web usability and user experience (UX) play a vital role in the success or failure of web applications. However, measuring usability and UX during the software development life cycle presents many challenges. Based on a systematic literature review, this thesis discusses the current usability and user experience evaluation and measurement methods and the defined measures, as well as their applicability during the software development life cycle. The challenges of using those methods were also identified. In order to elaborate further on the challenges, we conducted informal interviews within a software company. Based on the findings, we defined a usability and user experience measurement and evaluation roadmap for web application development companies. The roadmap contains a set of usability evaluation and measurement methods, as well as measures, that we found suitable for use during the early stages (requirements, design, and development) of the web application development lifecycle. To validate the applicability of the defined roadmap, a case study was performed on a real-time, market-oriented real estate web application. The results, a discussion of the findings, and future research directions are presented.
