Search results 1-50 of 3848
  • 1.
    ABBAS, FAHEEM
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Intelligent Container Stacking System at Seaport Container Terminal (2016). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context: The workload at seaport container terminals is increasing gradually, so terminal performance must improve to meet demand. The key section of a container terminal is the container stacking yard, which is an integral part of both the seaside and the landside, so its performance affects both sides. The main problem in this area is unproductive container moves. A well-planned stacking area is therefore needed to increase terminal performance and make maximum use of existing resources.

    Objectives: In this work, we analyze the existing container stacking system at the Helsingborg seaport container terminal, Sweden, investigate previously proposed solutions to the problem, and identify the optimization technique best suited to finding a good solution. We then propose a solution, test it, and analyze the simulation-based results against the desired outcome.

    Methods: To identify the problem, methods, and proposed solutions in the domain of container stacking yard management, a literature review was conducted using several e-resources/databases. A genetic algorithm (GA) with the best parametric values is used to obtain the best optimized solution. A discrete event simulation model for container stacking in the yard was built and integrated with the genetic algorithm, and a mathematical model is proposed to show how cost minimization depends on the number of container moves.

    Results: The GA achieved a high fitness value across generations for storing 150 containers at the best locations in a block with 3 tier levels while minimizing unproductive moves in the yard. A comparison between the genetic algorithm and Tabu Search was made to verify whether the GA performed better. The simulation model with the GA was used to obtain simulation-based results and to show container handling using resources such as AGVs, yard cranes, and delivery trucks, together with the container stacking and retrieval system in the yard. The mathematical model shows that container stacking cost is directly proportional to the number of moves.

    Conclusions: We identified the key factor (unproductive moves) that underlies the other key factors (time and cost) and affects the performance of the stacking yard and, in turn, the whole seaport terminal. We focused on this drawback of the stacking system and proposed a solution that makes the system more efficient, saving both time and cost. A genetic algorithm is a good approach for solving the unproductive-moves problem in a container stacking system.
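    The cost relation at the heart of the thesis (stacking cost directly proportional to unproductive moves) can be sketched in code. The block below is a minimal illustration, not the author's implementation: it counts the containers that block each retrieval target in a 3-tier block and uses that count as a GA-style fitness to minimize. All names and the toy random search are hypothetical; the thesis evolves layouts with a genetic algorithm and a discrete event simulation instead.

```python
import random

TIERS = 3  # stack height limit, matching the thesis' 3-tier block

def unproductive_moves(stacks, retrieval_order):
    """Count blocking containers over one full retrieval sequence.

    Each container sitting above a retrieval target counts as one
    unproductive (relocation) move; where the blockers are re-stacked
    is ignored, which is a deliberate simplification.
    """
    stacks = [list(s) for s in stacks]  # work on a copy
    moves = 0
    for target in retrieval_order:
        for stack in stacks:
            if target in stack:
                moves += len(stack) - stack.index(target) - 1
                stack.remove(target)
                break
    return moves

def fitness(layout, retrieval_order):
    # Fewer unproductive moves -> higher fitness (cost ~ number of moves).
    return 1.0 / (1.0 + unproductive_moves(layout, retrieval_order))

containers = list(range(12))
order = random.sample(containers, len(containers))

def random_layout():
    shuffled = random.sample(containers, len(containers))
    return [shuffled[i:i + TIERS] for i in range(0, len(shuffled), TIERS)]

# Toy search: random sampling stands in here to keep the sketch short.
best = max((random_layout() for _ in range(200)),
           key=lambda lay: fitness(lay, order))
print("moves for best layout:", unproductive_moves(best, order))
```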

  • 2.
    Abbas, Gulfam
    et al.
    Blekinge Institute of Technology, School of Computing.
    Asif, Naveed
    Blekinge Institute of Technology, School of Computing.
    Performance Tradeoffs in Software Transactional Memory (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Transactional memory (TM), a new programming paradigm, is one of the latest approaches to writing programs for next-generation multicore and multiprocessor systems. TM is an alternative to lock-based programming and a promising solution to a hefty and mounting problem facing programmers who develop for Chip Multi-Processor (CMP) architectures: it simplifies synchronization on shared data structures in a way that is scalable and composable. Software Transactional Memory (STM), a fully software-based approach to TM, can be defined as a non-blocking synchronization mechanism in which sequential objects are automatically converted into concurrent objects. In this thesis, we present a performance comparison of four STM implementations: RSTM by V. J. Marathe et al., TL2 by D. Dice et al., TinySTM by P. Felber et al., and SwissTM by A. Dragojevic et al. The comparison deepens our understanding of the tradeoffs involved and helps us assess which design choices and configuration parameters may lead to better and more efficient STMs; in particular, the suitability of each STM is analyzed against the others. A literature study was carried out to select STM implementations for experimentation, and an experiment was performed to measure the performance tradeoffs between them. The empirical evaluations in this thesis conclude that SwissTM has significantly higher throughput than the other state-of-the-art STM implementations (RSTM, TL2, and TinySTM), consistently outperforming them on execution time and aborts per commit on the STAMP benchmarks. The transaction retry rate measurements, however, show that TL2 performs better than RSTM, TinySTM, and SwissTM.
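    The two headline measurements in this comparison, throughput and aborts per commit, are simple ratios over per-run counters. A minimal sketch of how such numbers might be aggregated, with made-up figures rather than the thesis' STAMP results:

```python
from dataclasses import dataclass

@dataclass
class StmRun:
    stm: str          # e.g. "SwissTM", "TL2"
    commits: int      # committed transactions
    aborts: int       # aborted/retried transactions
    seconds: float    # wall-clock execution time

def throughput(run: StmRun) -> float:
    return run.commits / run.seconds             # committed tx per second

def aborts_per_commit(run: StmRun) -> float:
    return run.aborts / run.commits if run.commits else float("inf")

# Invented numbers for illustration only.
runs = [StmRun("SwissTM", 1_200_000, 90_000, 10.0),
        StmRun("TL2",       950_000, 40_000, 10.0)]
for r in runs:
    print(f"{r.stm}: {throughput(r):,.0f} tx/s, "
          f"{aborts_per_commit(r):.3f} aborts/commit")
```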

  • 3.
    Abbireddy, Sharath
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A Model for Capacity Planning in Cassandra: Case Study on Ericsson’s Voucher System (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Cassandra is a NoSQL (Not only Structured Query Language) database that serves large amounts of data with high availability. Cassandra data storage dimensioning, also known as Cassandra capacity planning, refers to predicting the amount of disk storage required when a particular product is deployed using Cassandra. This is an important phase in any product development lifecycle that involves Cassandra data storage. Capacity planning is based on many factors, which can be classified as Cassandra-specific and product-specific. This study identifies the different Cassandra-specific and product-specific factors affecting disk space in a Cassandra data storage system; based on these factors, a model is built to predict the disk storage for Ericsson’s voucher system. A case study was conducted on Ericsson’s voucher system and its Cassandra cluster, and interviews were conducted with different Cassandra users within Ericsson R&D to collect their opinions on capacity planning approaches and the factors affecting disk space for Cassandra. Responses from the interviews were transcribed and analyzed using grounded theory. A total of 9 Cassandra-specific factors and 3 product-specific factors were identified and documented, and a model built from these 12 factors was used to predict the disk space required for the voucher system’s Cassandra. The factors affecting disk space for deploying Cassandra are now exhaustively identified, which makes the capacity planning process more efficient. Using these factors, the voucher system’s disk space for deployment was predicted successfully.
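    As a rough picture of what such a capacity model computes, the sketch below estimates Cassandra disk usage from a handful of generic sizing factors (replication, storage overhead, compaction headroom). The thesis' 12 interview-derived factors are not listed in the abstract, so every factor and constant here is an assumption for illustration only.

```python
def estimate_disk_bytes(rows: int,
                        avg_row_bytes: int,
                        replication_factor: int = 3,
                        storage_overhead: float = 1.25,   # indexes, tombstones, metadata
                        compaction_headroom: float = 2.0  # free space compaction may need
                        ) -> int:
    """First-order Cassandra disk sizing: raw data volume, replicated,
    then inflated by storage overhead and compaction headroom."""
    raw = rows * avg_row_bytes
    return int(raw * replication_factor * storage_overhead * compaction_headroom)

# e.g. 100 million vouchers at roughly 300 bytes each (hypothetical)
print(f"{estimate_disk_bytes(100_000_000, 300) / 1e12:.2f} TB")
```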

  • 4.
    Abdelraheem, Mohamed Ahmed
    et al.
    SICS Swedish ICT AB, SWE.
    Gehrmann, Christian
    SICS Swedish ICT AB, SWE.
    Lindström, Malin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nordahl, Christian
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Executing Boolean queries on an encrypted Bitmap index (2016). In: CCSW 2016 - Proceedings of the 2016 ACM Cloud Computing Security Workshop, co-located with CCS 2016, Association for Computing Machinery (ACM), 2016, pp. 11-22. Conference paper (Refereed).
    Abstract [en]

    We propose a simple and efficient searchable symmetric encryption scheme based on a Bitmap index that evaluates Boolean queries. Our scheme provides a practical solution in settings where communications and computations are very constrained as it offers a suitable trade-off between privacy and performance.
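    The reason Boolean queries map naturally onto a Bitmap index is that each keyword owns a bitmap with one bit per record, so AND/OR over query terms become bitwise operations. The sketch below shows only this plaintext mechanic with a hypothetical index; the paper's actual contribution, evaluating such queries over encrypted bitmaps, is not reproduced.

```python
# One bitmap per keyword; bit i is set when record i matches the keyword.
NUM_RECORDS = 5
index = {
    "cargo":  0b10110,
    "tanker": 0b01100,
    "sweden": 0b10101,
}

def query_and(*words):           # records matching ALL words
    result = (1 << NUM_RECORDS) - 1   # start with every record bit set
    for w in words:
        result &= index[w]
    return result

def query_or(*words):            # records matching ANY word
    result = 0
    for w in words:
        result |= index[w]
    return result

print(bin(query_and("cargo", "sweden")))   # 0b10100 -> records 2 and 4
print(bin(query_or("tanker", "sweden")))   # 0b11101
```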

  • 5.
    Abdelrasoul, Nader
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Optimization Techniques For an Artificial Potential Fields Racing Car Controller (2013). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Building autonomous racing car controllers is a growing field of computer science that has received great attention lately. An approach named Artificial Potential Fields (APF) is widely used for path finding and obstacle avoidance in robotics and vehicle motion control systems. The use of APF yields a collision-free path and can also serve other goals such as overtaking and maneuverability. Objectives. The aim of this thesis is to build an autonomous racing car controller that achieves good performance in terms of speed, time, and damage level. To fulfill this aim, the controller's choices need to be optimal, because racing requires the highest possible performance, and the controller must be built from algorithms that do not incur high computational overhead. Methods. We used Particle Swarm Optimization (PSO) in combination with APF to achieve optimal car control. The Open Racing Car Simulator (TORCS) was used as a testbed for the proposed controller, and we conducted two experiments with different configurations to test the performance of our APF-PSO controller. Results. The obtained results showed that the APF-PSO controller performed well compared to top-performing controllers, and that adding PSO improved performance compared to using APF alone. High performance was demonstrated in solo driving and in racing competitions, with the exception of an increased level of damage; however, the damage level was not very high and did not cause a controller shutdown. Conclusions. Based on the obtained results, we conclude that using PSO with APF yields high performance at low computational cost.
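    The APF idea, a goal that attracts and obstacles that repel, reduces to summing force vectors. Below is a minimal 2-D sketch under common textbook assumptions (linear attractive force, inverse repulsive force with a cutoff radius); the gain constants are exactly the kind of parameters the thesis tunes with PSO, but their values here are arbitrary.

```python
import math

def apf_force(pos, goal, obstacles,
              k_att=1.0, k_rep=100.0, influence=5.0):
    """Resultant artificial-potential-field force at `pos` = (x, y)."""
    # Attractive component: pulls linearly toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive components: push away from each obstacle inside the
    # influence radius, growing sharply as the distance shrinks.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

print(apf_force((0, 0), goal=(10, 0), obstacles=[(3, 0.5)]))
```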

  • 6.
    Abghari, Shahrooz
    et al.
    Blekinge Institute of Technology, School of Computing.
    Kazemi, Samira
    Blekinge Institute of Technology, School of Computing.
    Open Data for Anomaly Detection in Maritime Surveillance (2012). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Maritime Surveillance (MS) has received increased attention from a civilian perspective in recent years. Anomaly detection (AD) is one of the many techniques available for improving safety and security in the MS domain. Maritime authorities utilize various confidential data sources for monitoring maritime activities; however, a paradigm shift on the Internet has created new sources of data for MS. These newly identified data sources, which provide publicly accessible data, are the open data sources. Taking advantage of open data sources in addition to the traditional data sources in the AD process will increase the accuracy of MS systems. Objectives: The goal is to investigate the potential of open data as a complementary resource for AD in the MS domain. To achieve this goal, the first step is to identify the open data sources applicable to AD. Then, a framework for AD based on the integration of open and closed data sources is proposed. Finally, according to the proposed framework, an AD system with the ability to use open data sources is developed, and the accuracy of the system and the validity of its results are evaluated. Methods: To measure the system's accuracy, an experiment is performed by means of two-stage random sampling on the vessel traffic data, and the number of true/false positive and negative alarms in the system is verified. To evaluate the validity of the system's results, the system is used for a period of time by subject matter experts from the Swedish Coastguard, who check the detected anomalies against the data available at the Coastguard in order to obtain the number of true and false alarms. Results: The experimental outcomes indicate that the accuracy of the system is 99%. In addition, the Coastguard validation results show that among the evaluated anomalies, 64.47% are true alarms, 26.32% are false, and 9.21% belong to vessels that remain unchecked due to the lack of corresponding data in the Coastguard data sources. Conclusions: This thesis concludes that using open data as a complementary resource for detecting anomalous behavior in the MS domain is not only feasible but will also improve the efficiency of surveillance systems by increasing accuracy and covering some unseen aspects of maritime activities.
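    The evaluation figures quoted above come from standard confusion-matrix arithmetic. The sketch below uses invented counts, chosen only so that the printed percentages match the ones reported (99% accuracy; 64.47% true alarms out of 76 evaluated anomalies):

```python
def accuracy(tp, fp, tn, fn):
    """Fraction of all classifications that were correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def true_alarm_rate(true_alarms, false_alarms, unchecked=0):
    """Fraction of evaluated anomalies confirmed as true alarms."""
    return true_alarms / (true_alarms + false_alarms + unchecked)

# Hypothetical counts that reproduce the thesis' quoted percentages.
print(f"accuracy:    {accuracy(tp=49, fp=1, tn=941, fn=9):.2%}")   # 99.00%
print(f"true alarms: {true_alarm_rate(49, 20, 7):.2%}")            # 64.47%
```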

  • 7.
    Abheeshta, Putta
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden (2016). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. System development methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency of usage of development practices and the acceptance factors for adopting a development methodology are crucial for software organisations. Acceptance factors and development practices differ across geographical locations, and the literature reports many challenges arising from mismatches in development practices when organisations collaborate in distributed development. However, little research has been done on the differences in development practices and in the acceptance factors for adopting a particular development methodology. Objectives. The primary objectives of the research are to find out (a) differences in (i) practice usage and (ii) acceptance factors such as organisational, social, and cultural factors, and (b) to explore the reasons for the differences and investigate the consequences of such differences during collaboration, across organisations located in India and Sweden. Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find out the usage frequency of development practices and the acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for the differences and their consequences, and literature evidence was used to support the results collected from the interviews. Results. The survey shows that organisations in India have adopted plan-driven practices at a higher frequency than those in Sweden, while agile practices are adopted at a higher frequency in Sweden than in India. The number of organisations adopting "pure agile" methodologies is significantly higher in Sweden. Significant differences were found in acceptance factors such as cultural, organisational, image, and career factors between India and Sweden. Cultural, social, human, business, and organisational factors are responsible for these differences in development practices and acceptance factors. Challenges related to communication, coordination, and control were found to result from the differences when collaborating between Indian and Swedish sites. Conclusions. The study shows the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations. Mismatches between these practices lead to various challenges. The study draws insights into various non-technical factors, such as cultural, human, organisational, business, and social factors, in collaboration between organisations; variations across these factors lead to many coordination, communication, and control issues. Keywords: Development Practices, Agile Development, Plan-Driven Development, Acceptance Factors, Global Software Development.

  • 8.
    Abrahamsson, Charlotte
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Wessman, Mattias
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    WLAN Security: IEEE 802.11b or Bluetooth - which standard provides best security methods for companies? (2004). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [en]

    Which security holes and security methods do IEEE 802.11b and Bluetooth offer, and which standard provides the best security methods for companies? These are the two questions this thesis addresses. The purpose is to give companies more information about the security aspects of using WLANs. An introduction to the subject of WLANs is presented to give an overview before the description of the two WLAN standards, IEEE 802.11b and Bluetooth. The thesis gives an overview of how IEEE 802.11b and Bluetooth work and an in-depth description of the security issues of the two standards; the security methods available to companies, the security flaws, and what can be done to create a secure WLAN are all important aspects of this thesis. To offer guidance on which WLAN standard to choose, the two standards are compared from a company's point of view with the security issues in mind. We present our conclusion, which recommends that companies use Bluetooth over IEEE 802.11b, since it offers better security methods.

  • 9.
    Abrahamsson, Lisa
    et al.
    Blekinge Institute of Technology.
    Lagerqvist, Amelie
    Blekinge Institute of Technology.
    Att formas och att formge: Normkritisk mönsterformgivning (2015). Independent thesis, Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Our society is made up of norms that say how we should look, behave, and live our lives. Many of these norms focus on how you, as a man or woman, should behave in relation to your gender. Gender norms are reflected in design and in what we design: products, furniture, and buildings. With a gender-focused, norm-critical approach, we examine how gender norms are reflected in design and how we as designers can design with norm criticism. With pattern design as an exploratory approach, we want to highlight and examine how gender norms affect, and are repeated in, what we design, just as a pattern is defined by lines and shapes that are repeated or experienced as repeated.

    Our investigation shows that patterns can contribute to discussion and stance-taking in multiple forums. We examine how unisex, male, and female can be embodied in patterns, and how norm criticism works as an approach in a design process.

  • 10.
    Abualhana, Munther
    et al.
    Blekinge Institute of Technology, School of Computing.
    Tariq, Ubaid
    Blekinge Institute of Technology, School of Computing.
    Improving QoE over IPTV using FEC and Retransmission (2009). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    IPTV (Internet Protocol Television), a modern concept among emerging technologies focused on providing cutting-edge high-resolution television, broadcast, and other services, is now easily available, with high-speed internet as the only requirement. Every time a new technology is deployed locally, it faces tremendous problems, whether from the technological point of view of enhancing performance or when it comes to satisfying customers. This cutting-edge technology has led researchers to experiment with different tools to provide better quality while building on existing ones. Our target in this dissertation is to present a few interesting facets of IPTV and to propose a cache that can re-collect the packets travelling from the streaming server to the end user. This cache would be fixed in the access node; based on certain assumptions from prior research, we can then determine how quickly retransmission can take place when the end user responds using the RTCP protocol and asks for retransmission of corrupted or lost packets. In the last section, we set up our scenario with the streaming server on one side and the client (end user) on the other, and make assumptions based on throughput, response time, and traffic.
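    The cache proposed in the abstract can be pictured as a bounded buffer of recently forwarded packets in the access node, keyed by sequence number, from which a retransmission is served as soon as the client's RTCP-style report flags a loss. A minimal sketch under those assumptions (no real RTP/RTCP parsing; all names are hypothetical):

```python
from collections import OrderedDict

class RetransmissionCache:
    """Keep the last `capacity` packets so a lost one can be resent
    from the access node instead of from the distant streaming server."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.packets = OrderedDict()      # seq -> payload

    def forward(self, seq, payload):
        self.packets[seq] = payload
        if len(self.packets) > self.capacity:
            self.packets.popitem(last=False)   # evict the oldest packet

    def nack(self, seq):
        """Client reported seq lost; return payload, or None if evicted."""
        return self.packets.get(seq)

cache = RetransmissionCache(capacity=3)
for seq in range(5):
    cache.forward(seq, f"frame-{seq}".encode())
print(cache.nack(4))   # b'frame-4' -> fast local retransmission
print(cache.nack(0))   # None -> already evicted, must be fetched upstream
```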

  • 11.
    Abu-Sheikh, Khalil
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Reviewing and Evaluating Techniques for Modeling and Analyzing Security Requirements (2007). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    The software engineering community has recognized the importance of addressing security requirements alongside other functional requirements from the beginning of the software development life cycle, and several techniques have been developed to achieve this goal. We therefore conducted a theoretical study that reviews and evaluates some of the techniques used to model and analyze security requirements. The Abuse Cases, Misuse Cases, Data Sensitivity and Threat Analyses, Strategic Modeling, and Attack Trees techniques are investigated in detail to understand and highlight the similarities and differences between them. We found that using these techniques generally helps requirements engineers specify more detailed security requirements. All of these techniques cover the concepts of security, but at different levels, and the existence of different techniques provides a variety of levels for modeling and analyzing security requirements. This helps requirements engineers decide which technique to use to address security issues for the system under investigation. Finally, we found that using only one of these techniques is not sufficient to satisfy the security requirements of the system under investigation. Consequently, we consider it beneficial to combine the Abuse Cases or Misuse Cases techniques with the Attack Trees technique, or to combine the Strategic Modeling and Attack Trees techniques, in order to model and analyze security requirements. The focus on the Attack Trees technique is due to the reusability of the produced attack trees; this technique also helps cover a wide range of attacks, thereby covering security concepts as well as security requirements in a proper way.
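    Of the techniques reviewed, attack trees lend themselves most directly to mechanical analysis: a goal node is reached through AND/OR combinations of sub-attacks, so a tree can be costed recursively. A minimal sketch with a hypothetical tree, not one from the thesis:

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    name: str
    gate: str = "LEAF"                 # "AND", "OR", or "LEAF"
    cost: float = 0.0                  # cost of a leaf attack step
    children: list = field(default_factory=list)

def cheapest_attack(node: AttackNode) -> float:
    """Minimal attacker cost to achieve this node's goal:
    AND sums its children, OR takes the cheapest child."""
    if node.gate == "LEAF":
        return node.cost
    child_costs = [cheapest_attack(c) for c in node.children]
    return sum(child_costs) if node.gate == "AND" else min(child_costs)

tree = AttackNode("read secret", "OR", children=[
    AttackNode("steal laptop", cost=500),
    AttackNode("network attack", "AND", children=[
        AttackNode("phish credentials", cost=100),
        AttackNode("bypass 2FA", cost=800),
    ]),
])
print(cheapest_attack(tree))   # 500 -> the laptop-theft branch is cheapest
```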

  • 12.
    Acharya, Mod Nath
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aslam, Nazam
    Blekinge Institute of Technology, School of Computing.
    Coordination in Global Software Development: Challenges, associated threats, and mitigating practices (2012). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Global Software Development (GSD) is an emerging trend in today's software world in which teams are geographically dispersed, either in close proximity or globally. GSD provides certain advantages to development companies, such as low development cost and access to cheap and skilled labour, but it is noted as riskier and more challenging than projects developed by teams under the same roof. GSD projects are inherently cooperative: many software developers work on a common project, share information, and coordinate activities. Coordination is a fundamental part of software development. GSD comprises different types of development arrangements, i.e., insourcing, outsourcing, nearshoring, or farshoring, and whichever arrangement a company selects, coordination challenges remain. Knowledge of the potential challenges, the associated threats to coordination, and the practices that mitigate them therefore plays a vital role in running a successful global project.

  • 13.
    Adamala, Szymon
    et al.
    Blekinge Institute of Technology, School of Management.
    Cidrin, Linus
    Blekinge Institute of Technology, School of Management.
    Key Success Factors in Business Intelligence (2011). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Business Intelligence can bring critical capabilities to an organization, but the implementation of such capabilities is often plagued with problems and issues. Why is it that certain projects fail while others succeed? The theoretical problem and the aim of this thesis is to identify the factors that are present in successful Business Intelligence projects and to organize them into a framework of critical success factors. A survey was conducted during the spring of 2011 to collect primary data on Business Intelligence projects. It was directed at a number of professionals operating in the Business Intelligence field in large enterprises, primarily located in Poland and primarily vendors; but given the similarity of Business Intelligence initiatives across countries and the increasing globalization of large enterprises, the conclusions of this thesis may well be relevant and applicable to projects conducted in other countries. The findings confirm that Business Intelligence projects wrestle with both technological and non-technological problems, but the non-technological problems are found to be harder and more time-consuming to solve than their technological counterparts. The thesis also shows that critical success factors for Business Intelligence projects differ from success factors for IS projects in general, and that Business Intelligence projects have critical success factors that are unique to the subject matter. Major differences are predominantly found in the non-technological factors, such as the presence of a specific business need to be addressed by the project and a clear vision to guide the project. Results show that successful projects have specific factors present more frequently than unsuccessful ones; among the factors with the greatest differences are the type of project funding, the business value provided by each iteration of the project, and the alignment of the project to a strategic vision for Business Intelligence. Furthermore, the thesis provides a framework of critical success factors that, according to the results of the study, explains 61% of the variability in project success. Given these findings, managers responsible for introducing Business Intelligence capabilities should focus on a number of non-technological factors to increase the likelihood of project success. Areas that should be given special attention are: making sure that the Business Intelligence solution is built with end users in mind, that it is closely tied to the company's strategic vision, and that the project is properly scoped and prioritized to concentrate on the best opportunities first. Keywords: Critical Success Factors, Business Intelligence, Enterprise Data Warehouse Projects, Success Factors Framework, Risk Management

  • 14.
    Adamov, Alexander
    et al.
    Kharkiv National University of Radio Electronics, NioGuard Security Lab, Kharkiv, Ukraine.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    A Sandboxing Method to Protect Cloud Cyberspace (2015). In: Proceedings of the 2015 IEEE East-West Design & Test Symposium (EWDTS), IEEE Communications Society, 2015. Conference paper (Refereed).
    Abstract [en]

    This paper addresses the problem of protecting cloud environments against targeted attacks, which have become a popular means of gaining access to organizations' confidential information and to the resources of cloud providers. In 2015 alone, eleven targeted attacks were discovered by Kaspersky Lab; one of them, Duqu2, successfully attacked the Lab itself. In this context, security researchers show rising concern about protecting corporate networks and the cloud infrastructure used by large organizations against this type of attack. This article describes the possibility of applying a sandboxing method within a cloud environment to enforce the cloud's security perimeter.

  • 15. Adams, Liz
    et al.
    Börstler, Jürgen
    What It's Like to Participate in an ITiCSE Working Group (2011). In: ACM SIGCSE Bulletin, Vol. 43, no. 1. Article in journal (Other academic).
  • 16. Adams, R.
    et al.
    Fincher, S.
    Pears, A.
    Börstler, Jürgen
    Umeå University, Department of Computing Science.
    Bousted, J.
    Dalenius, P.
    Eken, G.
    Heyer, T.
    Jacobsson, A.
    Lindberg, V.
    Molin, B.
    Moström, J.-E.
    Umeå University, Department of Computing Science.
    Wiggberg, M.
    What is the Word for Engineering in Swedish: Swedish Students' Conceptions of their Discipline (2007). Report (Other academic).
  • 17.
    Adebomi, Oyekanlu Emmanuel
    et al.
    Blekinge Institute of Technology, School of Computing.
    Mwela, John Samson
    Blekinge Institute of Technology, School of Computing.
    Impact of Packet Losses on the Quality of Video Streaming (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    In this thesis, the impact of packet losses on the quality of received videos sent across a network that exhibits normal perturbations, such as jitter, delays, and packet drops, has been examined. The dynamic behavior of a normal network was simulated using Linux and the Network Emulator (NetEm). People's perceptions of the received video were used to rate the quality of several videos of differing speeds. In accordance with the ITU's guideline of using Mean Opinion Scores (MOS), the effects of packet drops were analyzed. Excel and Matlab were used as tools for analyzing people's opinions, which indicate the impact that different loss rates have on the transmitted videos. The statistical methods used for evaluating the data are the mean and variance. We conclude that people's opinions converge when losses become extremely high on videos with highly variable scene changes.
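    The statistics named here, the mean and variance of Mean Opinion Scores per loss rate, are one-liners; a sketch with invented ratings on the 1-5 MOS scale:

```python
from statistics import mean, pvariance

# Hypothetical viewer ratings (1-5 MOS scale) per packet-loss rate.
ratings = {
    "0%": [5, 5, 4, 5, 4],
    "1%": [4, 3, 4, 4, 3],
    "5%": [2, 2, 3, 1, 2],
}
for loss, scores in ratings.items():
    print(f"loss {loss}: MOS mean={mean(scores):.2f}, "
          f"variance={pvariance(scores):.2f}")
```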

  • 18.
    Adeyinka, Oluwaseyi
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Service Oriented Architecture & Web Services: Guidelines for Migrating from Legacy Systems and Financial Consideration (2008). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    The purpose of this study is to present guidelines that can be followed when introducing service-oriented architecture (SOA) through the use of Web services. These guidelines will be especially useful for organizations migrating from existing legacy systems, where the financial implications of such an investment must also be weighed to determine whether it is worthwhile. The proposed implementation guide aims to increase the chances that IT departments will integrate SOA into their systems successfully and secure strong financial commitment from executive management. Service-oriented architecture is a new concept, a new way of looking at a system, which has emerged in the IT world and can be implemented by several methods, of which Web services is one platform. Since it is a developing technology, organizations need to be cautious about how they implement it to obtain maximum benefits. Though a well-designed service-oriented environment can simplify and streamline many aspects of information technology and business, achieving this state is not an easy task. Traditionally, management finds it very difficult to justify the considerable cost of modernization, let alone to shoulder the risk, without achieving some benefits in terms of business value. The study identifies common best practices for implementing SOA and using Web services, and steps to migrate successfully from legacy systems to componentized or service-enabled systems. The study also identifies how to present the financial return on investment and the business benefits to management in order to secure the necessary funds. This master's thesis is based on an academic literature study, professional research journals and publications, and interviews with business organizations currently working on service-oriented architecture. I present guidelines that can assist migration from legacy systems to service-oriented architecture, based on an analysis comparing the information sources mentioned above.

  • 19.
    Adolfsen, Linus
    Blekinge Institute of Technology, School of Engineering.
    Parameterstyrd tillverkning av rör för marina fartyg (2012). Student thesis.
    Abstract [sv]

    The content of this report is the result of a component of the Development Engineer programme in Mechanical Engineering. The work was carried out in cooperation between Linus Adolfsen, Kockums AB, and Blekinge Institute of Technology. The report covers two main parts, one practical and one theoretical. The first, practical part was about finding a method to bridge the step from model to reality in an efficient way. This resulted in software developed in-house that can read the output file from Tribon (CAD software) and translate it into a program file for the Herber CNC 90 bending machine. The second part is theoretical and consists of an analysis of the operations from the perspective of enabling prefabrication. The result is an analysis of the operations concerned, with proposals for how to address the problems and obstacles that exist today. It also gave rise to many suggestions for further studies.

  • 20.
    Adolfsson, Elin
    Blekinge Institute of Technology, School of Computing.
    Sociala medier för att hantera kundkontakter (2012). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [sv]

    CONTEXT. As the Internet grows and more and more people join social media platforms such as Facebook, new ways of interacting and communicating emerge. At the same time, the number of organizations and companies is increasing, and they in turn must find new ways to make their marketing stand out and be noticed. To reach the target audience with a message, the organization must be where the audience is; Facebook has therefore become a new part of companies' marketing toolbox. This study focuses on Malmö City Library's use of Facebook as a communication tool for managing customer contacts. OBJECTIVES. The purpose of this master's thesis is to examine Malmö City Library's use of Facebook, with a primary focus on customer perception and the customer's experience of the communication, in order to determine how Malmö City Library should use Facebook for marketing purposes. METHODS. This study is based on an empirical investigation comprising a content analysis of Malmö City Library's Facebook page and an online survey of those who "like" and follow the page. RESULTS. The results of the empirical study show that Malmö City Library faces a broad target audience, where segmentation is necessary to create and spread the right message to the primary audience. Malmö City Library publishes status updates relatively evenly across the days of the month, and it is possible to identify different categories of updates with different audience responses and experiences from the followers. The results of the study are presented as charts and tables with descriptions to improve illustration and thus understanding. CONCLUSIONS. The conclusion of this master's thesis is that most respondents are positive about Malmö City Library's use of Facebook. Based on various existing theories, marketing on social networks is a good way to interact, communicate, and get feedback from customers in order to build good customer relationships. This is very important at present, when the number of companies and the advertising noise are as high as they are. There is no simple road to success with only one correct path; it depends on each specific organization and its specific primary target audience.

  • 21.
    Adolfsson, Victor
    Blekinge Institute of Technology, Department of Business Administration and Social Science.
    Säkerhetskapital: En del av det Intellektuella Kapitalet (2002). Independent thesis, Basic level (degree of Bachelor). Student thesis.
    Abstract [sv]

    There is a lack of methods for measuring information security within companies, and companies' assets have shifted from a focus on machines and raw materials to knowledge (intellectual capital). The report explores whether there are parts of a company's intellectual capital that protect the company's assets and processes; this capital is called security capital. How could a company's information security be made visible through its intellectual capital, and how are concepts from information security and company valuation connected? The purpose of the thesis is to increase the understanding of how information security relates to intellectual capital. The report is based on literature studies of intellectual capital and information security. Data were collected partly from listed companies' annual reports and partly from press releases and stock exchange information. This information was then analyzed both quantitatively and qualitatively, and the concept of security capital emerged. Theories of company valuation, intellectual capital, risk management, and information security are presented and form the frame of reference within which the concept of security capital is placed in context. The concept of security capital is presented in the form of models and situations in which different perspectives on security capital are analyzed and evaluated. The conclusions mainly take the form of models and descriptions of how security capital can be viewed in relation to intellectual capital and other concepts. The area is complex, but parts of the results (which are at a high level of abstraction) can be used to value other types of intangible assets.

  • 22.
    Adolfsson, Victor
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    The State of the Art in Distributed Mobile Robotics (2001). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Distributed Mobile Robotics (DMR) is a multidisciplinary research area with many open research questions, and this thesis is a survey of the state of the art in DMR research. DMR, sometimes referred to as cooperative robotics or multi-robotic systems, is about how multiple robots can cooperate to achieve goals and complete tasks better than single-robot systems. This master's thesis covers architectures, communication, learning, exploration, and many other areas.

  • 23.
    Aftarczuk, Kamila
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems (2007). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    The goal of this master's thesis is to identify and evaluate data mining algorithms that are commonly implemented in modern Medical Decision Support Systems (MDSS), which are used in various healthcare units all over the world. These institutions store large amounts of medical data, which may contain relevant medical information hidden in patterns buried among the records. In this research, several popular MDSSs are analyzed to determine the data mining algorithms they most commonly use; three algorithms were identified: Naïve Bayes, Multilayer Perceptron, and C4.5. Prior to the analyses, the algorithms are calibrated: several test configurations are tried in order to determine the best settings. Afterwards, a final comparison orders the algorithms with respect to their performance, based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology, and diabetes. The analyses show that it is very difficult to name a single data mining algorithm as the most suitable for medical data, as the results for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out Naïve Bayes as the best classifier for the given domain, followed by the Multilayer Perceptron and C4.5.
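    The comparison the thesis runs in WEKA can be mirrored in scikit-learn, with a decision tree using the entropy criterion standing in for C4.5 and a synthetic dataset standing in for the UCI medical data. A sketch of the protocol only, not a reproduction of the thesis' results:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a UCI medical dataset (e.g. diabetes).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "C4.5 (approx.)": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```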

  • 24. Afzal, Wasif
    Lessons from applying experimentation in software engineering prediction systems (2008). Conference paper (Refereed).
    Abstract [en]

    Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure and compare models' accuracy. This paper discusses our experience and presents useful lessons and guidelines for experimenting with software engineering prediction systems, using a typical software engineering experimentation process as a baseline. We found that the typical experimentation process in software engineering supports the development of prediction systems, and we highlight issues more central to the domain of software engineering prediction systems.

  • 25.
    Afzal, Wasif
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Metrics in Software Test Planning and Test Design Processes (2007). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measuring these attributes clarifies their characteristics and the relationships between them, which in turn supports informed decision making. The field of software engineering suffers from infrequent, incomplete, and inconsistent measurement. Software testing is an integral part of software development and provides opportunities for measuring process attributes; measuring the attributes of the software testing process gives management better insight into it. The aim of this thesis is to investigate the metric support for software test planning and test design processes. The study comprises an extensive literature review and follows a methodical approach consisting of two steps. The first step analyzes the key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of these processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, together with metric support for each identified attribute. The literature survey showed that there are a number of measurable attributes for the software test planning and test design processes; the study partitions these attributes into multiple categories and, for each attribute, surveys the existing measurements. A consolidation of these measurements is presented in this thesis, intended to give management an opportunity to consider improvements in these processes.

  • 26. Afzal, Wasif
    Search-based approaches to software fault prediction and software testing (2009). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Efficient and cost-effective verification and validation activities are therefore both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, such as when to stop testing, the testing schedule, and testing resource allocation, need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can support important decisions such as those outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research on predictive modeling of other attributes using symbolic regression in genetic programming, thus presenting a broader perspective. As an extension of the application of search-based techniques within software verification and validation, this thesis further investigates the extent to which search-based techniques are applied for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence of other search-based techniques applied to the testing of non-functional system properties, thereby contributing to the growing application of search-based techniques in diverse activities within software verification and validation.

  • 27. Afzal, Wasif
    Search-Based Prediction of Software Quality: Evaluations and Comparisons (2011). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs of developing software. Efficient and effective software V&V activities are therefore both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules, and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting the reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, as with many real-world problems, software V&V can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and performing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, across a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable for supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools for developing future decision support for effective management of software V&V activities.

  • 28. Afzal, Wasif
    Using faults-slip-through metric as a predictor of fault-proneness (2010). Conference paper (Refereed).
    Abstract [en]

    The majority of software faults are present in a small number of modules; therefore, accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches: a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks), and search-based techniques (genetic programming, GP, and artificial immune recognition systems), on FST data collected from two large industrial projects in the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, GP showed impressive results in comparison with the other techniques for predicting fault-prone modules at both the integration and system test levels, and the use of the faults-slip-through metric in general provided good prediction results at the two test levels. (i) The accuracy of GP is statistically significant in comparison with the majority of the techniques for predicting fault-prone modules at the integration and system test levels. (ii) The faults-slip-through metric has the potential to be a generally useful predictor of fault-proneness at the integration and system test levels.
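    The paper's headline measurement, area under the ROC curve for classifiers that flag fault-prone modules, can be sketched on synthetic per-module data in which the FST count is simply one input feature. Everything below (features, labels, classifier choice) is an assumption for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-module features: faults-slip-through count and size (kLOC).
fst = rng.poisson(3, n)
kloc = rng.exponential(2, n)
X = np.column_stack([fst, kloc])
# Synthetic ground truth: modules with more slipped-through faults
# are more likely to be fault-prone at system test.
y = (fst + rng.normal(0, 1, n) > 4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```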

  • 29. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andrews, Anneliese
    Bhatti, Khurram
    An experiment on the effectiveness and efficiency of exploratory testing (2015). In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no. 3, pp. 844-878. Article in journal (Refereed).
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

  • 30. Afzal, Wasif
    et al.
    Torkar, Richard
    A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data (2008). Conference paper (Refereed).
    Abstract [en]

    A number of software reliability growth models (SRGMs) have been proposed in the literature. For several reasons, such as violation of the models' assumptions and model complexity, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data from three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data, without the need for underlying assumptions. The results show the strengths of using GP for predicting fault count data.

  • 31.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Torkar, Richard
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Incorporating Metrics in an Organizational Test Strategy (2008). Conference paper (Refereed).
    Abstract [en]

    An organizational-level test strategy needs to incorporate metrics to make testing activities visible and available for process improvement. The majority of testing measurements are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support the software test planning and test design processes. We have assembled metrics for these two process types to support management in carrying out evidence-based test process improvements and to incorporate suitable metrics into an organization-level test strategy. The study is composed of two steps: the first creates a relevant context by analyzing key phases in the software testing lifecycle, while the second identifies the attributes of the software test planning and test design processes along with metric support for each identified attribute.

  • 32. Afzal, Wasif
    et al.
    Torkar, Richard
    On the application of genetic programming for software engineering predictive modeling: A systematic review (2011). In: Expert Systems with Applications, ISSN 0957-4174, Vol. 38, no. 9, pp. 11984-11997. Article, review/survey (Refereed).
    Abstract [en]

    The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.

  • 33. Afzal, Wasif
    et al.
    Torkar, Richard
    Suitability of Genetic Programming for Software Reliability Growth Modeling, 2008. Conference paper (Refereed)
    Abstract [en]

    Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.

  • 34. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards benchmarking feature subset selection methods for software fault prediction, 2016. In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, 33-58 p. Chapter in book (Refereed)
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For each dataset, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), were used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values of the different FSS methods for either C4.5 or NB, a smaller set of FSS methods (IG, RLF, GP) consistently selects fewer attributes without degrading classification accuracy. We conclude that, in general, FSS is beneficial as it helps improve the classification accuracy of NB and C4.5. There is no single best FSS method for all datasets, but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries.
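
    As an illustration of one cell of such a benchmark, the sketch below ranks attributes with scikit-learn's mutual information scorer (standing in for information gain), then compares naïve Bayes AUC under 10-fold cross-validation before and after selection. The synthetic dataset and the choice of k=10 attributes are assumptions of this sketch; the paper used the PROMISE datasets and C4.5 as the second learner.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        # Synthetic stand-in for a fault data set: 40 attributes, few informative.
        X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                                   random_state=0)
        nb = GaussianNB()

        # AUC over 10-fold cross-validation before feature subset selection.
        auc_before = cross_val_score(nb, X, y, cv=10, scoring='roc_auc').mean()

        # Keep the 10 highest-ranked attributes, then re-evaluate.
        X_fss = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)
        auc_after = cross_val_score(nb, X_fss, y, cv=10, scoring='roc_auc').mean()

        print(f'AUC before FSS: {auc_before:.3f}   after FSS: {auc_after:.3f}')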

  • 35. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A Systematic Mapping Study on Non-Functional Search-Based Software Testing, 2008. Conference paper (Refereed)
  • 36. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A systematic review of search-based testing for non-functional system properties, 2009. In: Information and Software Technology, ISSN 0950-5849, Vol. 51, no 6, 957-976 p. Article in journal (Refereed)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible given the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (a combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of this systematic review is to examine existing work on non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in applying these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process and published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants, including linear genetic programming) and swarm intelligence methods. The review reports on the different fitness functions used to guide the search in each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
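
    The core transformation the review describes (test objective recast as a fitness function, searched by a metaheuristic) can be sketched in a few lines. Below, simulated annealing searches for an input that maximizes measured execution time; the system_under_test function, the neighbourhood move and the cooling schedule are all invented stand-ins, and real NFSBST work measures the actual software under test (timing noise alone makes this toy unreliable).

        import math
        import random
        import time

        def system_under_test(x):
            # Invented stand-in: running time varies with the input value.
            total = 0
            for i in range(1000 + (x % 977) * 50):
                total += i % (x + 1)
            return total

        def fitness(x):
            # The non-functional objective as a fitness function: measured
            # execution time (longer = fitter, i.e. closer to worst case).
            start = time.perf_counter()
            system_under_test(x)
            return time.perf_counter() - start

        current = random.randrange(10_000)
        current_fit = fitness(current)
        temperature = 1e-3
        for step in range(200):
            neighbour = max(0, current + random.randint(-100, 100))
            neighbour_fit = fitness(neighbour)
            delta = neighbour_fit - current_fit
            # Always accept longer-running inputs; sometimes accept worse ones.
            if delta > 0 or random.random() < math.exp(delta / temperature):
                current, current_fit = neighbour, neighbour_fit
            temperature *= 0.98  # cooling schedule

        print(f'input {current} drove execution time to {current_fit * 1e3:.3f} ms')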

  • 37. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Prediction of fault count data using genetic programming, 2008. Conference paper (Refereed)
    Abstract [en]

    Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models' inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data from three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures, in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.

  • 38.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology, School of Computing.
    Torkar, Richard
    Blekinge Institute of Technology, School of Computing.
    Feldt, Robert
    Blekinge Institute of Technology, School of Computing.
    Resampling Methods in Software Quality Classification, 2012. In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 22, no 2, 203-223 p. Article in journal (Refereed)
    Abstract [en]

    Given the number of algorithms for classification and prediction in software engineering, there is a need for a systematic way of assessing their performance. Performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on what modeling technique or set of predictor variables is the most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets, with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. The location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and the area under an ROC curve (AUC) are used as accuracy indicators. Results: The results show that, in terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: Certain data set properties may be responsible for the insignificant differences between the resampling methods based on AUC. These include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
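
    A sketch of the comparison scaffold follows: one classifier scored under four of the five resampling schemes. Logistic regression stands in for the GP and MLR models, the data are synthetic, and pooling per-sample predictions for the leave-one-out AUC is a pragmatic choice of this sketch, not necessarily the paper's procedure.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import (LeaveOneOut, cross_val_predict,
                                             cross_val_score, train_test_split)
        from sklearn.utils import resample

        X, y = make_classification(n_samples=200, n_features=10, random_state=1)
        clf = LogisticRegression(max_iter=1000)

        # Hold-out validation (2/3 train, 1/3 test).
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=1)
        holdout = roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

        # 10-fold cross-validation.
        tenfold = cross_val_score(clf, X, y, cv=10, scoring='roc_auc').mean()

        # Leave-one-out: pool the per-sample predictions, then compute one AUC.
        proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method='predict_proba')
        loocv = roc_auc_score(y, proba[:, 1])

        # Non-parametric bootstrap: train on a resample, test on the rest.
        idx = resample(np.arange(len(y)), random_state=1)
        oob = np.setdiff1d(np.arange(len(y)), idx)
        boot = roc_auc_score(y[oob], clf.fit(X[idx], y[idx]).predict_proba(X[oob])[:, 1])

        print(f'hold-out {holdout:.3f}  10-fold {tenfold:.3f}  '
              f'LOOCV {loocv:.3f}  bootstrap {boot:.3f}')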

  • 39. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Search-based prediction of fault count data, 2009. Conference paper (Refereed)
    Abstract [en]

    Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, such as matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantage that its evolution does not depend on a particular model structure and is also independent of the assumptions that are common in traditional time-domain parametric software reliability growth models. This research applies genetic programming to fault count prediction in a series of experiments and compares the results with traditional approaches to assess efficiency gains.
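
    The contrast the abstract draws can be shown with two lines of classical regression: numpy's polyfit fixes the model form (here, a quadratic) and fits only its coefficients, whereas GP also searches over the form itself, as in the GP sketch under entry 30 above. The weekly counts below are invented.

        import numpy as np

        weeks = np.arange(1, 9)
        faults = np.array([1, 3, 4, 7, 9, 12, 14, 15])  # invented fault counts
        coeffs = np.polyfit(weeks, faults, deg=2)  # coefficients of a fixed quadratic form
        print('fitted quadratic coefficients:', coeffs)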

  • 40. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Gorschek, Tony
    Genetic programming for cross-release fault count predictions in large and complex software projects, 2010. In: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica. Hershey, USA: IGI Global, 2010. Chapter in book (Refereed)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs of faults. Predicting faults might be even more important with the emergence of shorter release cycles and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated a need i) to improve the validity of results through comparisons across a number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet received sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. Together, these software projects span several years of development and come from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively low model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria, while remaining average on most of the qualitative measures.

  • 41. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, School of Computing.
    Feldt, Robert
    Blekinge Institute of Technology, School of Computing.
    Gorschek, Tony
    Blekinge Institute of Technology, School of Computing.
    Prediction of faults-slip-through in large software projects: an empirical evaluation, 2014. In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 22, no 1, 51-86 p. Article in journal (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determining which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to the unit, function, integration, and system test phases of a large industrial project. The objective is to quantify the improvement potential in different test phases by striving to find faults in the right phase. The results show that a range of techniques is useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition systems, and particle swarm optimization-based artificial neural networks) consistently gives better predictions, having a representation at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that human predictions of the number of faults slipping through to various test phases can be well supported by search-based techniques. A combination of human judgment and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results.

  • 42. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Wikstrand, Greger
    Search-based prediction of fault-slip-through in large software projects, 2010. Conference paper (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determining which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization-based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through the unit, function, integration and system testing phases. The objective is to quantify the improvement potential in different testing phases by striving to find the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression at predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.
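
    Of the three comparison measures named above, the absolute relative error is simple enough to state inline. A minimal sketch on invented fault-slip-through predictions:

        # Absolute relative error per phase, and its mean; all numbers invented.
        predicted = [12, 30, 7, 19]   # predicted faults slipping into 4 phases
        actual = [10, 28, 9, 20]      # observed counts

        are = [abs(p - a) / a for p, a in zip(predicted, actual)]
        print('per-phase ARE:', ['%.2f' % e for e in are])
        print('mean ARE     :', sum(are) / len(are))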

  • 43.
    Agardh, Johannes
    et al.
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Johansson, Martin
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Pettersson, Mårten
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Designing Future Interaction with Today's Technology, 1999. Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    Information technology plays an increasing part in our lives. In this thesis we discuss how technology can relate to humans and human activity. We take our starting point in concepts like Calm Technology and Tacit Interaction and examine how these visions and concepts can be used in the process of designing an artifact for a real work practice. We have conducted workplace studies of truck drivers and traffic leaders, looking at how they find their way to the right addresses, and we design a truck navigation system that aims to suit the truck drivers' work practice.

  • 44. Agbesi, Collinson Colin Mawunyo
    Promoting Accountable Governance Through Electronic Government, 2016. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Electronic government (e-Government) is a purposeful system of organized delegation of power, control, management and resource allocation in a harmonized, centralized or decentralized way via networks, assuring efficiency, effectiveness and transparency of processes and transactions. This phenomenon is changing the way governments all over the world do business and deliver services. Improving service to citizens and other groups, and managing scarce resources efficiently, have led governments to seek alternative ways of rendering services and managing processes. Analog and mechanical processes of governing and management have proved inefficient and unproductive in recent times. The search for alternative and better ways of governing has revealed that digital and electronic governing is more beneficial than mechanical processes. The Internet and information and communication technology (ICT/IT) have brought significant change to governments. There has also been increased research in the area of electronic government, but the field still lacks a sound theoretical framework, which is necessary for a better understanding of the factors influencing the adoption of electronic government systems and the integration of various electronic government applications.

    The efficient and effective allocation and distribution of scarce resources has also become an issue, and there has been a concerted global effort to improve the use and management of scarce resources over the last decade. The purpose of this research is to gain an in-depth understanding of how electronic government can be used to provide accountability, security and transparency in government decision-making processes for the allocation and distribution of resources in the educational sector of Ghana. Research questions were developed to help achieve this aim. The study also provides a detailed literature review, which helped to answer the research questions and guide data collection. A combined quantitative and qualitative research method was chosen to collect vital information and better understand the study area. Both self-administered questionnaires and interviews were used to collect data relevant to the study, and a thorough analysis of related work was conducted.

    Finally, the research concluded by addressing the research questions, discussing the results and providing some vital recommendations. It was found that electronic government is a fast, reliable, accountable and transparent means of communication and interaction between governments, public institutions and citizens. Electronic government is thus crucial in transforming the educational sector of Ghana for better management of resources. It was also noted that information and communication technology (ICT) is the enabling force that helps electronic government communicate with citizens, supports e-government operations and provides efficiency, effectiveness and better services within the educational sector of Ghana.

  • 45.
    Agushi, Camrie
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Innovation inom Digital Rights Management [Innovation within Digital Rights Management], 2005. Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    The thesis deals with the topic of Digital Rights Management (DRM), more specifically the innovation trends within DRM. It focuses on three driving forces of DRM: firstly, DRM technologies; secondly, DRM standards; and thirdly, DRM interoperability. These driving forces are discussed and analyzed in order to explore innovation trends within DRM, forming a multi-faceted overview of today's DRM context. One conclusion is that the aspect of Intellectual Property Rights is an important indicator of the direction in which DRM innovation is heading.

  • 46.
    Ahl, Viggo
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    An experimental comparison of five prioritization methods: Investigating ease of use, accuracy and scalability, 2005. Independent thesis Advanced level (degree of Master (One Year)), Student thesis
    Abstract [en]

    Requirements prioritization is an important part of developing the right product at the right time. There are different ideas about which method is best for prioritizing requirements. This thesis takes a closer look at five different methods and puts them into a controlled experiment in order to find out which method is best to use. The experiment was designed to find out which method yields the most accurate result, each method's ability to scale up to many more requirements, how long it took to prioritize with the method, and finally how easy the method was to use. These four criteria combined indicate which method is more suitable, i.e. the best method, for prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the computer algorithm binary search tree, planning game from the ideas of extreme programming, and the old but well-used 100 points method. The fifth method is a new one, which combines planning game with the analytic hierarchy process. Analysis of the data from the experiment indicates that planning game combined with the analytic hierarchy process could be a good candidate. However, the result from the experiment clearly indicates that the binary search tree yields accurate results, is able to scale up, and was the easiest method to use. For these three reasons, the binary search tree is clearly the better method for prioritizing requirements.
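
    A minimal sketch of the binary search tree method evaluated here: each new requirement is compared pairwise against the nodes along one path and inserted, and an in-order traversal then yields the full ranking, needing O(n log n) comparisons on average. The requirement names and the scripted stand-in for the human judgement are invented.

        # Pairwise-comparison insertion; the value map is a scripted stand-in
        # for the human judgement the experiment's subjects supplied.
        PERCEIVED_VALUE = {'export to PDF': 2, 'offline mode': 1, 'single sign-on': 3}

        class Node:
            def __init__(self, requirement):
                self.requirement = requirement
                self.lower = None   # less important requirements
                self.higher = None  # more important requirements

        def is_more_important(a, b):
            return PERCEIVED_VALUE[a] > PERCEIVED_VALUE[b]

        def insert(root, requirement):
            if root is None:
                return Node(requirement)
            if is_more_important(requirement, root.requirement):
                root.higher = insert(root.higher, requirement)
            else:
                root.lower = insert(root.lower, requirement)
            return root

        def ranking(root):
            # In-order traversal: least to most important.
            if root is None:
                return []
            return ranking(root.lower) + [root.requirement] + ranking(root.higher)

        root = None
        for req in PERCEIVED_VALUE:
            root = insert(root, req)
        print('Priority order (most important last):', ranking(root))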

  • 47.
    Ahlberg, Mårten
    et al.
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    Liedstrand, Peter
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    24-timmarsmyndighetens användbarhet [The usability of the 24/7 agency], 2004. Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [en]

    Communication with public authorities and administrations via the Internet has increased in recent years. We have therefore focused our bachelor thesis on this area and on citizens' need for usable web services. In this thesis we study a growing user group: elderly citizens. During the study we analyzed the usability of the 24/7 agency ("24-timmarsmyndigheten") through user tests. The combination of conversations and meetings with individuals, observations of interactions, and literature studies gave us the opportunity to explore the users' needs. Users' needs are central to how they perceive and interact with the 24/7 agency. The websites we used in our user tests are connected to the 24/7 agency. By analytically studying the information, we arrived at five important design proposals and guidelines that we consider necessary when e-services within the 24/7 agency are developed.

  • 48.
    Ahlgren, Johan
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Karlsson, Robert
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    En studie av inbyggda brandväggar: Microsoft XP och Red Hat Linux [A study of built-in firewalls: Microsoft XP and Red Hat Linux], 2003. Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [en]

    This bachelor thesis investigates how well the built-in firewalls of two operating systems work in symbiosis with a user's most common Internet services, and how similar they are in their protection against threats. The two operating systems we started from were Microsoft Windows XP and Red Hat Linux 8.0. Our working hypothesis was: the two built-in firewalls are largely similar in their protection against Internet threats and satisfy users' service needs. The methods we used to answer our research question were divided into a functionality test and a security test. In the functionality test, the most common Internet services were tried with the built-in firewall enabled to see whether any complications arose. In the security test, the two built-in firewalls underwent scanning and vulnerability checks using several tools. From the results we can conclude that the built-in firewalls handle the most common Internet services, but that they differ in their exposure towards the Internet. Windows XP is completely invisible to the outside, while Red Hat's built-in firewall reveals a great deal of information about the host computer that could be used for malicious purposes. We therefore ultimately falsified our hypothesis, since the two built-in firewalls were not equal in their protection against external Internet threats.

  • 49.
    Ahlström, Catharina
    et al.
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Fridensköld, Kristina
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    How to support and enhance communication: in a student software development project, 2002. Independent thesis Basic level (degree of Bachelor), Student thesis
    Abstract [en]

    This report, in which we have put an emphasis on the word communication, is based on a student software development project conducted during spring 2002. We describe how the use of design tools plays a key role in supporting communication in group activities and to what extent communication can be supported and enhanced by tools such as mock-ups and metaphors in a group project. We also describe the design process from initial sketches to a final mock-up of a GUI for a postcard demo application.

  • 50.
    Ahlström, Eric
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Holmqvist, Lucas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Comparing Traditional Key Frame and Hybrid Animation, 2017. In: SCA '17 Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, ACM Digital Library, 2017, nr. a20. Conference paper (Refereed)
    Abstract [en]

    In this research the authors explore a hybrid approach which uses the basic concept of key frame animation together with procedural animation to reduce the number of key frames needed for an animation clip. The two approaches are compared by conducting an experiment where the participating subjects were asked to rate them based on their visual appeal.
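
    A minimal sketch of the hybrid idea: sparse key frames give the broad motion through linear interpolation, and a procedural term layers detail on top so fewer keys are needed. The sine-based secondary motion and the key values are invented for illustration, not taken from the paper's formulation.

        import math

        KEYS = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)]  # (time, value) key frames

        def keyframe_value(t):
            # Piecewise-linear interpolation between the surrounding key frames.
            for (t0, v0), (t1, v1) in zip(KEYS, KEYS[1:]):
                if t0 <= t <= t1:
                    a = (t - t0) / (t1 - t0)
                    return v0 + a * (v1 - v0)
            return KEYS[-1][1]  # hold the last key beyond the clip

        def hybrid_value(t, amplitude=0.1, frequency=8.0):
            # Procedural detail rides on the key-framed base motion.
            return keyframe_value(t) + amplitude * math.sin(frequency * t)

        for frame in range(7):
            t = frame * 0.5
            print(f't={t:.1f}  base={keyframe_value(t):+.3f}  hybrid={hybrid_value(t):+.3f}')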
