Results 101 - 150 of 4932
  • 101.
    Akhter, Adeel
    et al.
    Blekinge Institute of Technology, School of Computing.
    Azhar, Hassan
    Blekinge Institute of Technology, School of Computing.
    Statistical Debugging of Programs written in Dynamic Programming Language: RUBY (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Debugging is an important and critical phase of the software development process, and a serious, demanding practice in functional test-driven development. Software vendors encourage their programmers to practice test-driven development during the initial development phases to capture bug traces and the code coverage affected by diagnosed bugs. Source code with few threats of bugs or faulty executions is assumed to be highly efficient and stable, especially when real-time software products are in consideration. Because the development of software projects relies on a great number of users and testers, an effective fault localization technique is required: one that can highlight the most critical areas of the software system at the code as well as the modular level, so that a debugging algorithm can be used to debug the application source code. Nowadays many software systems, complex or simple, are coupled with open bug repositories to localize bugs. Any inconsistency or imperfection in an early development phase of a software product results in a less efficient system and lower reliability. Statistical debugging of program source code for visualization of faults is an important and efficient way to select and rank suspicious lines of code. This research provides guidelines for practicing statistical debugging on programs coded in the Ruby programming language, and presents the statistical debugging techniques available for dynamic programming languages. Firstly, statistical debugging techniques with different predicate-based approaches from previous work in the subject area were thoroughly reviewed. Secondly, a new process of statistical debugging for programs coded in Ruby is introduced by generating dynamic predicates. Results were analyzed by experimenting with multiple programs written in Ruby of different complexity levels. The analysis of the experimentation on the candidate programs shows that SOBER is more efficient and accurate in bug identification than the Cause Isolation Scheme. It is concluded that, despite extensive research in the field of statistical debugging and fault localization, it is not possible to identify the majority of bugs. Moreover, SOBER and the Cause Isolation Scheme are found to be the two most mature and effective statistical debugging algorithms for bug identification within software source code.
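
    As a rough illustration of the predicate-based scoring this abstract refers to, the sketch below ranks predicates by how much more often they hold in failing runs than in passing runs. This is a simplified Liblit-style "Increase" score, not the exact SOBER or Cause Isolation Scheme formulas, and all predicate names and counts are hypothetical.

    ```python
    # Simplified predicate ranking for statistical debugging (illustrative only).
    # Each predicate records how many passing/failing runs it was observed and true in.
    from dataclasses import dataclass

    @dataclass
    class PredicateStats:
        name: str
        true_in_fail: int      # failing runs where the predicate evaluated true
        true_in_pass: int      # passing runs where the predicate evaluated true
        observed_fail: int     # failing runs where the predicate was reached
        observed_pass: int     # passing runs where the predicate was reached

    def failure_score(p: PredicateStats) -> float:
        # P(fail | predicate true) minus P(fail | predicate observed):
        # how much does the predicate being true raise the failure probability?
        fail_given_true = p.true_in_fail / max(p.true_in_fail + p.true_in_pass, 1)
        fail_given_obs = p.observed_fail / max(p.observed_fail + p.observed_pass, 1)
        return fail_given_true - fail_given_obs

    predicates = [
        PredicateStats("x > len(buf)", 40, 2, 50, 100),   # hypothetical counts
        PredicateStats("ptr.nil?",     10, 9, 50, 100),
    ]
    for p in sorted(predicates, key=failure_score, reverse=True):
        print(f"{p.name}: {failure_score(p):.3f}")        # most suspicious first
    ```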

    Download full text (pdf)
    FULLTEXT01
  • 102.
    Akinwande, Gbenga Segun
    Blekinge Institute of Technology, School of Computing.
    Signaling Over Protocols Gateways in Next-Generation Networks (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    In this thesis, I examined various signalling systems in both wired and mobile networks, with emphasis on SIGTRAN. SIGTRAN is the protocol suite applicable in current new-generation and next-generation networks, most notably because it enables a service provider to integrate wireline and wireless services within the same architecture. This concept is an important component in today’s Triple-play communication, and hence this thesis provides a broad view of signalling and protocol gateways in traditional and next-generation networks. Signal flow in a typical new-generation network was examined by carrying out a discrete event simulation of a UMTS network using OPNET Modeler 14.5. Through both Packet-Switching (PS) and Circuit-Switching (CS) signalling, I was able to examine the QoS of a UMTS network. Precisely, I looked at throughput on the UMTS network by implementing the WFQ and MDRR scheduling schemes.

    Download full text (pdf)
    FULLTEXT01
  • 103. Akkermans, Hans
    et al.
    Gustavsson, Rune
    Ygge, Fredrik
    An Integrated Structured Analysis Approach to Intelligent Agent Communication (1998). Report (Other academic).
    Abstract [en]

    Intelligent multi-agent systems offer promising approaches for knowledge-intensive distributed applications. Now that such systems are being applied on a wider industrial scale, there is a practical need for structured analysis and design methods, similar to those that exist for more conventional information and knowledge systems. These are still lacking for intelligent agent software. In this paper, we describe how the process of agent communication specification can be carried out through a structured analysis approach. The structured analysis approach we propose is an integrated extension of the CommonKADS methodology, a widely used standard for knowledge analysis and systems development. Our approach is based on and illustrated by a large-scale multi-agent application for distributed energy load management in industries and households, called Homebots, which is discussed as an extensive industrial case study.

    Download full text (pdf)
    FULLTEXT01
  • 104. Akkermans, Hans
    et al.
    Gustavsson, Rune
    Ygge, Fredrik
    Pragmatics of Agent Communication (1998). Report (Other academic).
    Abstract [en]

    The process of agent communication modeling has not yet received much attention in the knowledge systems area. Conventional knowledge systems are rather simple with respect to their communication structure: often it is a straightforward question-and-answer sequence between system and end user. However, this is different in recent intelligent multi-agent systems. Therefore, agent communication aspects are now in need of a much more advanced treatment in knowledge management, acquisition and modeling. In general, a much better integration between the respective achievements of multi-agent and knowledge-based systems modeling is an important research goal. In this paper, we describe how agent communications can be specified as an extension of well-known knowledge modeling techniques. The emphasis is on showing how a structured process of communication requirements analysis proceeds, based on existing results from agent communication languages. The guidelines proposed are illustrated by and based on a large-scale industrial multi-agent application for distributed energy load management in industries and households, called Homebots. Homebots enable cost savings in energy consumption by coordinating their actions through an auction mechanism.

    Download full text (pdf)
    FULLTEXT01
  • 105. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Smart Software as Customer Assistant in Large-Scale Distributed Load Management (1997). Conference paper (Refereed).
  • 106. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    Homebots: Intelligent Decentralized Services for Energy Management (1996). Report (Other academic).
    Abstract [en]

    The deregulation of the European energy market, combined with emerging advanced capabilities of information technology, provides strategic opportunities for new knowledge-oriented services on the power grid. HOMEBOTS is the name we have coined for one of these innovative services: decentralized power load management at the customer side, automatically carried out by a ‘society’ of interactive household, industrial and utility equipment. They act as independent intelligent agents that communicate and negotiate in a computational market economy. The knowledge and competence aspects of this application are discussed, using an improved version of task analysis according to the COMMONKADS knowledge methodology. Illustrated by simulation results, we indicate how customer knowledge can be mobilized to achieve joint goals of cost and energy savings. General implications for knowledge creation and its management are discussed.

    Download full text (pdf)
    FULLTEXT01
  • 107. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    HOMEBOTS: Intelligent Decentralized Services for Energy Management (1996). Conference paper (Refereed).
  • 108.
    Akkineni, Srinivasu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The impact of RE process factors and organizational factors during alignment between RE and V&V: Systematic Literature Review and Survey (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Requirements engineering (RE) and verification and validation (V&V) need to be integrated to assure successful development of a software project. Activating both competences in the early stages of the project supports products in meeting customer expectations regarding quality and functionality. This quality, however, can only be achieved by aligning RE and V&V. Organizations follow different practices, concerning e.g. requirements, verification, validation, control and tools, to achieve alignment and to address the different challenges faced during the alignment between RE and V&V. However, studies are needed to understand the alignment practices, challenges and factors that enable successful alignment between RE and V&V.

    Objectives: In this study, an exploratory investigation is carried out to determine the impact of two kinds of factors, RE process factors and organizational factors, on the alignment between RE and V&V. The main objectives of this study are:

    1. To find the list of RE practices that facilitate alignment between RE and V&V.
    2. To categorize RE practices with respect to their requirement phases.
    3. To find the list of RE process and organizational factors that influence alignment between RE and V&V, together with their impact.
    4. To identify the challenges that are faced during the alignment between RE and V&V.
    5. To obtain the list of challenges that are addressed by RE practices during the alignment between RE and V&V.

    Methods: In this study a Systematic Literature Review (SLR) is conducted using a snowballing procedure to identify relevant information about RE practices, challenges, RE process factors and organizational factors. The studies were retrieved from the Engineering Village database. Rigor and relevance analysis was performed to assess the quality of the studies obtained through the SLR. Further, a questionnaire intended for an industrial survey was prepared from the gathered literature and distributed to practitioners from the software industry in order to collect empirical information for this study. Thereafter, the data obtained from the industrial survey were analyzed using statistical analysis and chi-square significance tests.

    Results: 20 studies relevant to this study were identified through the SLR. After analyzing the obtained studies, lists of RE process factors, organizational factors, challenges and RE practices during alignment between RE and V&V were compiled. Thereupon, an industrial survey was constructed from the obtained literature, which received 48 responses. Alignment between RE and V&V is impacted by RE process factors and organizational factors, as also stated by the respondents of the survey. Moreover, this study finds additional RE process factors and organizational factors affecting the alignment between RE and V&V, together with their impact. Another contribution is addressing the challenges not previously addressed by the RE practices obtained through the literature. Additionally, the categorization of RE practices with respect to their requirement phases was validated.

    Conclusions: To conclude, the obtained results from this study will help practitioners gain more insight into the alignment between RE and V&V. This study identified the impact of RE process factors and organizational factors on the alignment between RE and V&V, along with the importance of the challenges faced during the alignment. It also addressed the challenges left unaddressed by the RE practices obtained through the literature. Respondents of the survey believe that many RE process and organizational factors have a negative impact on the alignment between RE and V&V, depending on the size of the organization. In addition, the results for applying RE practices at different requirement phases were validated through the survey. Practitioners can identify the benefits from this research, and researchers can extend this study to the remaining alignment practices.

    Download full text (pdf)
    fulltext
  • 109.
    Akser, M.
    et al.
    Ulster University, GBR.
    Bridges, B.
    Ulster University, GBR.
    Campo, G.
    Ulster University, GBR.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Curran, K.
    Ulster University, GBR.
    Fitzpatrick, L.
    Ulster University, GBR.
    Hamilton, L.
    Ulster University, GBR.
    Harding, J.
    Ulster University, GBR.
    Leath, T.
    Ulster University, GBR.
    Lunney, T.
    Ulster University, GBR.
    Lyons, F.
    Ulster University, GBR.
    Ma, M.
    University of Huddersfield, GBR.
    Macrae, J.
    Ulster University, GBR.
    Maguire, T.
    Ulster University, GBR.
    McCaughey, A.
    Ulster University, GBR.
    McClory, E.
    Ulster University, GBR.
    McCollum, V.
    Ulster University, GBR.
    Mc Kevitt, P.
    Ulster University, GBR.
    Melvin, A.
    Ulster University, GBR.
    Moore, P.
    Ulster University, GBR.
    Mulholland, E.
    Ulster University, GBR.
    Muñoz, K.
    BijouTech, CoLab, Letterkenny, Co., IRL.
    O’Hanlon, G.
    Ulster University, GBR.
    Roman, L.
    Ulster University, GBR.
    SceneMaker: Creative technology for digital storytelling (2018). In: Lect. Notes Inst. Comput. Sci. Soc. Informatics Telecommun. Eng. / [ed] Brooks A.L., Brooks E., Springer Verlag, 2018, Vol. 196, p. 29-38. Conference paper (Refereed).
    Abstract [en]

    The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017.

    Download full text (pdf)
    fulltext
  • 110.
    Alahari, Yeshwanth
    et al.
    Blekinge Institute of Technology, School of Computing.
    Buddhiraja, Prashant
    Blekinge Institute of Technology, School of Computing.
    Analysis of packet loss and delay variation on QoE for H.264 and WebM/VP8 Codecs (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    The popularity of multimedia services over the Internet has increased in recent years. These services include Video on Demand (VoD) and mobile TV, which are predominantly growing, and user expectations towards the quality of videos are gradually increasing. Different video codecs are used for encoding and decoding. Recently Google introduced the VP8 codec, an open source compression format. It was introduced to compete with the existing popular codec H.264/AVC, developed by the ITU-T Video Coding Experts Group (VCEG), as by 2016 there will be a license fee for H.264. In this work we compare the performance of H.264/AVC and WebM/VP8 in an emulated environment. NetEm is used as an emulator to introduce delay/delay variation and packet loss. We evaluated the user perception of impaired videos using the Mean Opinion Score (MOS), following the International Telecommunication Union (ITU) recommendation for Absolute Category Rating (ACR), and analyzed the results using statistical methods. It was found that both video codecs exhibit similar performance under packet loss, but in the case of delay variation the H.264 codec shows better results than WebM/VP8. Moreover, along with the MOS ratings, we also studied how user feelings and online video watching experience impact perception.
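
    For readers unfamiliar with NetEm, the emulation step described above boils down to configuring a queueing discipline on the outgoing interface. A minimal sketch follows, assuming a Linux host, an `eth0` interface and illustrative delay/jitter/loss values (not the thesis's actual settings); the `tc` commands require root privileges.

    ```python
    # Apply NetEm delay/jitter and packet loss on an interface (illustrative values).
    import subprocess

    def apply_netem(interface: str, delay_ms: int, jitter_ms: int, loss_pct: float) -> None:
        # Equivalent to: tc qdisc add dev <if> root netem delay 100ms 20ms loss 1%
        subprocess.run(
            ["tc", "qdisc", "add", "dev", interface, "root", "netem",
             "delay", f"{delay_ms}ms", f"{jitter_ms}ms", "loss", f"{loss_pct}%"],
            check=True,
        )

    def clear_netem(interface: str) -> None:
        # Remove the emulation when the measurement run is done.
        subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

    apply_netem("eth0", delay_ms=100, jitter_ms=20, loss_pct=1.0)
    ```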

    Download full text (pdf)
    FULLTEXT01
  • 111.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A study of value in agile software development organizations (2017). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, p. 271-288. Article in journal (Refereed).
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.

  • 112.
    Alahyari, Hiva
    et al.
    Chalmers, SWE.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An exploratory study of waste in software development organizations using agile or lean approaches: A multiple case study at 14 organizations (2019). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 107, p. 78-94. Article in journal (Refereed).
    Abstract [en]

    Context: The principal focus of lean is the identification and elimination of waste from the process with respect to maximizing customer value. Similarly, the purpose of agile is to maximize customer value and minimize unnecessary work and time delays. In both cases the concept of waste is important. Through an empirical study, we explore how waste is approached in agile software development organizations. Objective: This paper explores the concept of waste in agile/lean software development organizations and how it is defined, used, prioritized, reduced, or eliminated in practice. Method: The data were collected using semi-structured open interviews. 23 practitioners from 14 embedded software development organizations were interviewed, representing two core roles in each organization. Results: Various wastes, categorized in 10 different categories, were identified by the respondents. Not all of the mentioned wastes were necessarily waste per se, but could be symptoms caused by wastes. Of the seven wastes of lean, task-switching was ranked as the most important and extra-features as the least important waste in the respondents’ opinion. However, most companies do not have their own, or use an established, definition of waste; more importantly, very few actively identify or try to eliminate waste in their organizations beyond local initiatives at project level. Conclusion: In order to identify, recognize and eliminate waste, a common understanding and a joint, holistic view of the concept is needed. It is also important to optimize the whole organization and the whole product, as waste on one level can be important on another; thus sub-optimization should be avoided. Furthermore, to achieve sustainable and effective waste handling, both the short-term and the long-term perspectives need to be considered. © 2018 Elsevier B.V.

  • 113.
    Alam, Md. Shamser
    Blekinge Institute of Technology, School of Engineering.
    On sphere detection for OFDM based MIMO systems (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Mobile wireless communication systems have been growing fast and continuously over the past two decades. In order to meet the demands of this rapid growth, the standardization bodies, together with wireless researchers and mobile operators around the world, have been constantly working on new technical specifications. An important problem in modern communication is the NP-complete problem of Maximum Likelihood (ML) detection of signals transmitted over the Multiple Input Multiple Output (MIMO) channel of an OFDM transceiver system. The development of the Sphere Decoder (SD), a result of the rapid advancement in signal processing techniques, provides ML detection for MIMO channels at polynomial average-case time complexity. There are, however, weaknesses in existing SDs: the performance of the most current proposals is very sensitive to the choice of the search radius parameter, and at high spectral efficiencies, when the SNR is low or the problem dimension is high, the complexity coefficient can become very large. Detecting a vector of symbols, drawn from a finite alphabet and transmitted over a MIMO channel with Gaussian noise, is important in digital communications and is encountered in several different applications, including the detection of symbols spatially multiplexed over a multiple-antenna channel and the multi-user detection problem. Efficient algorithms for these detection problems are well recognized. The sphere decoding algorithm has optimal performance with respect to error probability and has proved extremely efficient in terms of computational complexity for moderately sized problems at high signal-to-noise ratio. At high SNR the algorithm has polynomial average complexity, but it is understood to have exponential worst-case complexity; the exponential rate of growth of the complexity is derived, and it is positive for finite SNR and small at high SNR. To obtain the sphere decoding solution, depth-first stack-based sequential decoding applying Schnorr-Euchner enumeration within the maximum likelihood framework is used. This thesis focuses on the receiver part of the transceiver system and takes a close look at near-optimal algorithms for sphere detection of a vector of symbols transmitted over a MIMO channel. The analysis and algorithms are general in nature.
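
    As a concrete picture of the depth-first search the abstract describes, here is a minimal real-valued sphere decoder sketch. It uses natural enumeration order rather than Schnorr-Euchner ordering, a fixed initial radius, and a hypothetical 4-PAM alphabet; complex MIMO systems are usually handled via the standard real-valued decomposition first.

    ```python
    # Minimal depth-first sphere decoder for y = Hx + n, x from a finite alphabet.
    import numpy as np

    def sphere_decode(y, H, alphabet, radius):
        Q, R = np.linalg.qr(H)          # H = QR turns the search into a triangular one
        z = Q.T @ y
        n = H.shape[1]
        best = {"x": None, "d2": radius ** 2}

        def search(level, partial, dist2):
            if dist2 > best["d2"]:      # prune branches outside the current sphere
                return
            if level < 0:               # reached a leaf: a full candidate vector
                best["x"], best["d2"] = partial.copy(), dist2
                return
            for s in alphabet:          # Schnorr-Euchner would order these by residual
                partial[level] = s
                r = z[level] - R[level, level:] @ partial[level:]
                search(level - 1, partial, dist2 + r ** 2)

        search(n - 1, np.zeros(n), 0.0)
        return best["x"]                # None means the radius was chosen too small

    # Toy usage: 2x2 real channel, 4-PAM alphabet.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(2, 2))
    x = np.array([3.0, -1.0])
    y = H @ x + 0.01 * rng.normal(size=2)
    print(sphere_decode(y, H, alphabet=[-3.0, -1.0, 1.0, 3.0], radius=2.0))
    ```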

    Download full text (pdf)
    FULLTEXT01
  • 114.
    Alam, Payam Norouzi
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Agile Process Recommendations for a Market-driven Company (2003). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    In this master thesis, problems in a small market-driven software development company are discussed. Problems such as fast changes in the company are a result of the constantly changing market. The fast changes must be managed within the company with a tailored process covering needs like short time-to-market. Misunderstandings when managing ideas from marketing, and challenging issues such as communication gaps between marketing staff and developers, are discussed. These problem areas are identified in the case study through interviews with selected staff. The problems arise from fast changes and a lack of processes and structures. This paper recommends several points influenced by agile software development with Scrum and XP. The recommendations are chosen to fit the problem areas identified in the interviews. They will work as a starting point for the company to improve the situation and to use XP and Scrum for further improvements.

    Download full text (pdf)
    FULLTEXT01
  • 115.
    Alam, Tariq
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Muhammad
    Blekinge Institute of Technology, School of Computing.
    The Challenge of Usability Evaluation of Online Social Networks with a Focus on Facebook (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    In today’s era, online social networks are gaining extensive popularity among internet users. People use online social networks for different purposes, like sharing information, chatting with friends and family, and planning to hang out. It is then no surprise that online social networks should be easy to use and easily understandable. Many researchers have previously evaluated different online social networks, but there is no study which addresses usability concerns about online social networks with a focus on Facebook at an academic level (using students as subjects). The main rationale behind this study is to find out the efficiency of different usability testing techniques from a social network’s point of view, with a focus on Facebook, and the issues related to usability. To conduct this research, we adopted a combination of qualitative and quantitative approaches. Graduate students from BTH participated in the usability tests. Our findings are that although think-aloud is more efficient than remote testing, the difference is not very significant. We found from the survey that usability issues in Facebook concern the profile, media, picture tagging, chatting, etc.

    Download full text (pdf)
    FULLTEXT01
  • 116.
    Alam, Zahidul
    Blekinge Institute of Technology, School of Computing.
    Usability of a GNU/Linux Distribution from Novice User’s Perspective (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    The term Open Source Software (OSS) has been around for a long time in the world of computer science. Open source software development is a process by which we can manufacture economical, high-quality software whose source can be re-used in the improvement of the software. The success of OSS relies on several factors, e.g. usability, functionality, market focus etc., but in the end how popular the software will be is measured by the number of users downloading the software and how usable the software is for its users. Open Source Software has achieved a reputation for stability, security and functionality, and most of this software has been utilized by expert-level IT users. But from the point of view of general, non-expert users, the usability of open source software has faced the most criticism [25, 26, 27, 28, 29, and 30]. These usability issues for general users are also responsible for the limited distribution of open source software [24]. The development process should apply a “user-centered” methodology [25, 26, 27, 28, 29, and 30]. In this thesis paper, the issues of usability in OSS development and how the usability of open source software can be improved will be discussed. Besides this, I investigate the usability of the free, open source, Linux-based operating system Ubuntu and try to find out how this OSS meets usability standards.

    Download full text (pdf)
    FULLTEXT01
  • 117. Alaves, Dimas
    et al.
    Machado, Renato
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    da Costa, Daniel Benevides
    Legg, Andrei Piccinini
    Uchoa-Filho, Bartolomeu F.
    A dynamic hybrid antenna/relay selection scheme for the multiple-access relay channel (2014). In: 2014 11th International Symposium on Wireless Communications Systems (ISWCS), IEEE, 2014, p. 594-599. Conference paper (Refereed).
    Abstract [en]

    We propose a dynamic hybrid antenna/relay selection scheme for multiple-access relay systems. The proposed scheme aims to boost the system throughput while keeping a good error performance. By using the channel state information, the destination node performs a dynamic selection between the signals provided by the multi-antenna relay, located in the inter-cell region, and the relay nodes geographically distributed over the cells. The multi-antenna relay and the single-antenna relay nodes employ the decode-remodulate-and-forward and amplify-and-forward protocols, respectively. Results reveal that the proposed scheme offers a good tradeoff between spectral efficiency and diversity gain, which is one of the main requirements for the next generation of wireless communications systems.

    Download full text (pdf)
    fulltext
  • 118. Albin, Bernhardsson
    et al.
    Björling, Ivar
    Generation and evaluation of collision geometry based on drawings (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Background. Many video games allow for creative expression. Attractive Interactive AB is developing such a game, in which players draw their own levels using pen and paper. For such a game to work, collision geometry needs to be generated from photos of hand-drawn video game levels.

    Objectives. The main goal of the thesis is to create an algorithm for generating collision geometry from photos of hand-drawn video game levels and to determine whether the generated geometry can replace handcrafted geometry. Handcrafted geometry is manually created using vector graphics editing tools.

    Methods. A method for generating collision geometry from photos of drawings is implemented. The quality of the generated geometry is evaluated and compared to handcrafted geometry in terms of vertex count and positional accuracy. Ground truths are used to determine the positional accuracy of collision geometry by calculating the resemblance of the created collision geometry and the respective ground truth.

    Results. The generated geometry has a higher positional accuracy and on average a lower vertex count than the handcrafted geometry. Performance measurements for two different configurations of the algorithm are presented.

    Conclusions. Collision geometry generated by the presented algorithm has a higher quality than handcrafted geometry. Thus, the generated geometry can replace handcrafted geometry.
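
    The positional-accuracy comparison described above can be pictured as rasterizing both the generated geometry and its ground truth and measuring their overlap. Below is a minimal sketch using intersection-over-union as the resemblance measure; the thesis's exact metric may differ, and the masks here are hypothetical.

    ```python
    # Compare generated collision geometry against a ground-truth mask (illustrative).
    import numpy as np

    def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        # Intersection-over-union of two boolean occupancy grids.
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return inter / union if union else 1.0

    # Hypothetical rasterizations of generated geometry and its ground truth.
    generated = np.zeros((64, 64), dtype=bool); generated[10:40, 10:40] = True
    truth = np.zeros((64, 64), dtype=bool); truth[12:40, 10:42] = True
    print(f"positional accuracy (IoU): {iou(generated, truth):.2f}")
    ```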

    Download full text (pdf)
    fulltext
  • 119.
    Albinsson, Mattias
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Andersson, Linus
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Improving Quality of Experience through Performance Optimization of Server-Client Communication (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In software engineering it is important to consider how a potential user experiences the system during usage. No software user will have a satisfying experience if they perceive the system as slow, unresponsive, unstable or hiding information. Additionally, if the system restricts the users to only a limited set of actions, their experience will degrade further. In order to evaluate the effect these issues have on a user's perceived experience, a measure called Quality of Experience is applied.

    In this work the foremost objective was to improve how a user experienced a system suffering from the previously mentioned issues when searching for large amounts of data. To achieve this objective the system was evaluated to identify the issues present and which issues were affecting the user-perceived Quality of Experience the most. The evaluated system was a warehouse management system developed and maintained by Aptean AB's office in Hässleholm, Sweden. The system consisted of multiple clients and a server, sending data over a network. The evaluation of the system took the form of a case study analyzing its performance, together with a survey performed by Aptean staff to gain knowledge of how the system was experienced when searching for large amounts of data. From the results, the three issues impacting Quality of Experience the most were identified: (1) interaction; a limited set of actions during a search, (2) transparency; limited representation of search progress and received data, (3) execution time; search completion taking a long time.

    After the system was analyzed, hypothesized technological solutions were implemented to resolve the identified issues. The first solution divided the data into multiple partitions, the second decreased the data size sent over the network by applying compression, and the third was a combination of the two technologies. Following the implementations, a final set of measurements together with the same survey was performed to compare the solutions based on their performance and the improvement gained in perceived Quality of Experience.

    The most significant improvement in perceived Quality of Experience was achieved by the data partitioning solution. While the combination of solutions offered a slight further improvement, it was primarily thanks to data partitioning, making that technology a more suitable solution for the identified issues compared to compression, which only slightly improved perceived Quality of Experience. When the data was partitioned, updates were sent more frequently, which not only allowed the user a larger set of actions during a search but also improved the information available in the client regarding search progress and received data. While data partitioning did not improve the execution time, it offered the user a first set of data quickly, not forcing the user to wait idly, making the user experience the system as fast. The results indicated that to increase the user's perceived Quality of Experience for systems with server-client communication, data partitioning offers several opportunities for improvement.
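
    A rough sketch of the partitioning idea: instead of returning one large result, the server yields fixed-size partitions (optionally compressed) so the client can render and act on the first partition immediately. Everything below, including the partition size and zlib as the compressor, is an assumption for illustration, not Aptean's actual implementation.

    ```python
    # Stream query results in partitions, optionally compressing each one.
    import json
    import zlib
    from typing import Iterable, Iterator

    def encode(batch: list, compress: bool) -> bytes:
        payload = json.dumps(batch).encode()
        return zlib.compress(payload) if compress else payload

    def partition_results(rows: Iterable[dict], size: int = 500,
                          compress: bool = False) -> Iterator[bytes]:
        batch = []
        for row in rows:
            batch.append(row)
            if len(batch) == size:
                yield encode(batch, compress)   # client can render this immediately
                batch = []
        if batch:
            yield encode(batch, compress)       # final, possibly smaller partition

    rows = ({"id": i, "article": f"A-{i}"} for i in range(1200))
    for i, part in enumerate(partition_results(rows, size=500, compress=True)):
        print(f"partition {i}: {len(part)} bytes on the wire")
    ```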

    Download full text (pdf)
    fulltext
  • 120.
    Alborn, Jonas
    Blekinge Institute of Technology, School of Planning and Media Design.
    3D och kommunal fysisk planering [3D and municipal spatial planning] (2012). Independent thesis Basic level (degree of Bachelor). Student thesis.
    Abstract [sv, translated to English]

    In recent years, techniques for 3D visualization have become increasingly widely used in municipal spatial planning. This, plus the fact that I use the technology myself in my daily work as a planning architect, raised questions about the reasons for adopting the technology, the decisions and expectations that formed the basis for its introduction, and the research support the technology has with respect to visualization, clarity and communicability in planning work. This degree project in Spatial Planning at BTH aims to shed light on these questions. The work consists of a literature search for relevant research on the subject, a questionnaire given to a small sample of employees and politicians in four municipalities that are members of a 3D network, a document search on the same municipalities' websites, and a complementary questionnaire among planning architects in five other municipalities that are not members of the aforementioned network. Research support for the effectiveness of, or the advantages and disadvantages of, using 3D models for increased understanding and communication between officials, politicians and the public in municipal spatial planning is lacking. By compiling research in the fields of environmental psychology and illustrative plan presentation, as well as Swedish architecture research, one can nevertheless find clues to the possibilities and difficulties of using 3D visualization and its role as a means of communication. Used correctly, the technology could strengthen the possibility of illustration, but there are also risks connected to its use. The main reason municipalities adopt 3D technology in spatial planning is stated, by planning architects, municipal officials responsible for the 3D technology, and politicians alike, to be the desire to increase citizens' understanding of proposed changes to the physical environment. This is also usually reflected in the municipalities' documents, where such exist; not all municipalities have documented official policy documents on the issue. The research, however, points to risks in using visualization in early stages of the process. The concluding discussion touches on the questions of the thesis, potential problems and possibilities of using the technology, and suggestions for areas of further study. The conclusions note that there is a discrepancy between the predominantly positive view of its use in the municipalities and what the research shows.

    Download full text (pdf)
    FULLTEXT01
  • 121.
    Al-Daajeh, Saleh
    Blekinge Institute of Technology, School of Computing.
    Balancing Dependability Quality Attributes for Increased Embedded Systems Dependability (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Embedded systems are used in many critical applications where a failure can have serious consequences. Achieving a high level of dependability is therefore an ultimate goal. In order to achieve this goal, however, we need to understand the interrelationships between the different dependability quality attributes and other embedded system quality attributes. This research study provides indicators of the relationship between dependability quality attributes and other quality attributes of embedded systems by identifying the impact of architectural tactics as candidate solutions for constructing dependable embedded systems.

    Download full text (pdf)
    FULLTEXT01
  • 122.
    Aldabaldetreku, Rita
    et al.
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development.
    Lautiainen, Juuso
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development.
    Minkova, Alina
    Blekinge Institute of Technology, Faculty of Engineering, Department of Strategic Sustainable Development.
    The Role of Knowledge Management in Strategic Sustainable Development: Comparing Theory and Practice in Companies Applying the FSSD (2016). Independent thesis Advanced level (degree of Master (One Year)), 20 HE credits. Student thesis.
    Abstract [en]

    The purpose of this study is to explore the role of knowledge management (KM) in integrating sustainability into business strategy in companies applying the framework for strategic sustainable development (FSSD). Corporations have the potential to be key players in moving society towards sustainability, but they lack clear definitions and guidelines around strategic sustainable development (SSD). The authors focus on the benefits of KM in organisations applying the FSSD, which offers general strategic guidelines but does not address the complexity of managing the new sustainability knowledge. This study first examines the scientific literature around KM and the FSSD and compares it with the results of expert interviews to develop a State of the Art Model of KM for SSD. Then the model is compared to current practices of corporations applying the FSSD and the gap is examined. The results of the analysis show that the concept of KM is widely discussed in the literature, yet it does not have much presence in the business world. The value of knowledge is recognised, but KM is not much used and no structured practices were identified. It was concluded that companies would benefit from a strategic KM system when integrating sustainability.

    Download full text (pdf)
    fulltext
  • 123.
    Aldalaty, Khalid
    Blekinge Institute of Technology, School of Engineering.
    Mobile IP handover delay reduction using seamless handover architecture (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Seamless communication is becoming the main aspect of the next generation of mobile and wireless networks. Roaming among multiple wireless access networks connected together through one IP core makes mobility support for the internet a very critical and important research topic nowadays. Mobile IP is one of the most successful solutions for mobility support in IP-based networks, but it has poor performance in terms of handover delay. Many improvements have been made to reduce the handover delay, resulting in two new standards: Hierarchical MIP (HMIP) and Fast MIP (FMIP), but the delay still does not match the seamless handover requirements. Finally, Seamless MIP (S-MIP) has been suggested by many working groups, combining an intelligent handover algorithm with a movement tracking scheme. In this thesis, we present the handover delay reduction approaches, specifically Seamless Mobile IP. The thesis studies the effects of S-MIP on the handover delay and on network performance as well. A simulation study compares the standard MIP and the newly suggested S-MIP protocol in terms of handover delay, packet loss and bandwidth requirements. The thesis concludes with an analysis of the simulation results, an evaluation of S-MIP performance, and finally some suggestions for future work.

    Download full text (pdf)
    FULLTEXT01
  • 124. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kolstrom, Pirjo
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing (2016). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, p. 66-80. Article in journal (Refereed).
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.
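
    The cost model mentioned in the abstract estimates the time to positive return on investment; the exact model is in the paper, but the break-even intuition can be sketched as below, with all costs hypothetical.

    ```python
    # Break-even sketch for test automation ROI (not the paper's exact model).
    def cycles_to_positive_roi(dev_cost: float, manual_cost_per_cycle: float,
                               maint_cost_per_cycle: float) -> float:
        # Automation pays off once cumulative savings exceed the development cost.
        saving = manual_cost_per_cycle - maint_cost_per_cycle
        if saving <= 0:
            raise ValueError("automation never breaks even: maintenance >= manual cost")
        return dev_cost / saving

    # E.g. 400 h to automate, 40 h per manual regression cycle, 10 h maintenance/cycle:
    print(cycles_to_positive_roi(400, 40, 10))  # ~13.3 test cycles to break even
    ```

    This also matches the abstract's observation that the less time a company currently spends on manual testing, the longer it takes to reach positive ROI: a smaller per-cycle saving divides the same development cost.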

  • 125. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ryrholm, Lisa
    Visual GUI testing in practice: challenges, problems and limitations (2015). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, p. 694-744. Article in journal (Refereed).
    Abstract [en]

    In today’s software development industry, high-level tests such as Graphical User Interface (GUI) based system and acceptance tests are mostly performed with manual practices that are often costly, tedious and error prone. Test automation has been proposed to solve these problems but most automation techniques approach testing from a lower level of system abstraction. Their suitability for high-level tests has therefore been questioned. High-level test automation techniques such as Record and Replay exist, but studies suggest that these techniques suffer from limitations, e.g. sensitivity to GUI layout or code changes, system implementation dependencies, etc. Visual GUI Testing (VGT) is an emerging technique in industrial practice with perceived higher flexibility and robustness to certain GUI changes than previous high-level (GUI) test automation techniques. The core of VGT is image recognition which is applied to analyze and interact with the bitmap layer of a system’s front end. By coupling image recognition with test scripts, VGT tools can emulate end user behavior on almost any GUI-based system, regardless of implementation language, operating system or platform. However, VGT is not without its own challenges, problems and limitations (CPLs) but, like for many other automated test techniques, there is a lack of empirically-based knowledge of these CPLs and how they impact industrial applicability. Crucially, there is also a lack of information on the cost of applying this type of test automation in industry. This manuscript reports an empirical, multi-unit case study performed at two Swedish companies that develop safety-critical software. It studies their transition from manual system test cases into tests automated with VGT. In total, four different test suites that together include more than 300 high-level system test cases were automated for two multi-million lines of code systems. The results show that the transitioned test cases could find defects in the tested systems and that all applicable test cases could be automated. However, during these transition projects a number of hurdles had to be addressed; a total of 58 different CPLs were identified and then categorized into 26 types. We present these CPL types and an analysis of the implications for the transition to and use of VGT in industrial software development practice. In addition, four high-level solutions are presented that were identified during the study, which would address about half of the identified CPLs. Furthermore, collected metrics on cost and return on investment of the VGT transition are reported together with information about the VGT suites’ defect finding ability. Nine of the identified defects are reported, 5 of which were unknown to testers with extensive experience from using the manual test suites. The main conclusion from this study is that even though there are many challenges related to the transition and usage of VGT, the technique is still valuable, flexible and considered cost-effective by the industrial practitioners. The presented CPLs also provide decision support in the use and advancement of VGT and potentially other automated testing techniques similar to VGT, e.g. Record and Replay.
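
    The core mechanism, image recognition driving the GUI through its bitmap layer, can be illustrated with off-the-shelf libraries. The sketch below uses OpenCV template matching and pyautogui as an illustration of the general VGT idea, not the tooling used in the study; the template filename and threshold are assumptions.

    ```python
    # Find a GUI element on screen by its bitmap and click it (core VGT idea).
    import cv2
    import numpy as np
    import pyautogui

    def click_on_image(template_path: str, threshold: float = 0.9) -> bool:
        template = cv2.imread(template_path)            # reference bitmap of the widget
        screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
        scores = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, top_left = cv2.minMaxLoc(scores)    # best match and its location
        if best < threshold:
            return False                                # widget not visible on screen
        h, w = template.shape[:2]
        pyautogui.click(top_left[0] + w // 2, top_left[1] + h // 2)
        return True

    # A test script then becomes a sequence of such interactions plus assertions:
    assert click_on_image("save_button.png"), "Save button not found on screen"
    ```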

    Download full text (pdf)
    fulltext
  • 126.
    Aleksandr, Polescuk
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Linking Residential Burglaries using the Series Finder Algorithm in a Swedish Context (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. A minority of criminals performs the majority of crimes today. It is known that every criminal or group of offenders to some extent has a particular pattern (modus operandi) for how crimes are performed. Therefore, computational power can be employed to discover crimes that follow the same pattern and possibly are carried out by the same criminal. The goal of this thesis was to apply the existing Series Finder algorithm to a feature-rich dataset containing data about Swedish residential burglaries.

    Objectives. The following objectives were achieved to complete this thesis: modifications were performed on an existing Series Finder implementation to fit the Swedish police force's dataset, and the MATLAB code was converted to Python. Furthermore, an experiment setup was designed with appropriate metrics and statistical tests. Finally, the modified Series Finder implementation was evaluated against both Spatial-Temporal and Random models.

    Methods. An experimental methodology was chosen in order to achieve the objectives. An initial experiment was performed to find the right parameters for the main experiments. Afterwards, a proper investigation with dependent and independent variables was conducted.

    Results. After the metric calculations and the statistical tests were applied, an accurate picture emerged of how each model performed. Series Finder showed better performance than the Random model, but lower performance than the Spatial-Temporal model. The possible causes of one model performing better than another are discussed in the analysis and discussion section.

    Conclusions. After completing the objectives and answering the research questions, it could be clearly seen how the Series Finder implementation performed against the other models. Despite its low performance, Series Finder still showed potential, as presented in future work.
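
    Pattern-based crime linkage of the kind Series Finder performs rests on pairwise similarity between crimes across spatial, temporal and modus operandi features. Below is a simplified sketch of such a similarity score; the feature names, decay constants and weights are hypothetical, not the algorithm's actual ones.

    ```python
    # Toy pairwise similarity between two burglaries (not the actual Series Finder).
    import math

    def similarity(a: dict, b: dict) -> float:
        # Spatial closeness: decays with the distance in km between crime sites.
        spatial = math.exp(-math.dist(a["xy"], b["xy"]) / 2.0)
        # Temporal closeness: decays with the number of days between offences.
        temporal = math.exp(-abs(a["day"] - b["day"]) / 30.0)
        # Modus operandi: fraction of categorical features that match.
        keys = ["entry_point", "tool", "dwelling_type"]
        mo = sum(a[k] == b[k] for k in keys) / len(keys)
        return 0.4 * spatial + 0.2 * temporal + 0.4 * mo

    c1 = {"xy": (0.0, 0.0), "day": 10, "entry_point": "window",
          "tool": "crowbar", "dwelling_type": "villa"}
    c2 = {"xy": (1.0, 1.0), "day": 18, "entry_point": "window",
          "tool": "crowbar", "dwelling_type": "flat"}
    print(f"link score: {similarity(c1, c2):.2f}")  # higher = more likely same series
    ```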

    Download full text (pdf)
    fulltext
  • 127.
    Alesö, Rikard
    et al.
    Blekinge Institute of Technology, School of Planning and Media Design.
    Widén, Fredrika
    Blekinge Institute of Technology, School of Planning and Media Design.
    Simone de Beauvoirs feminism: De digitala spelen idag [Simone de Beauvoir's feminism: digital games today] (2013). Independent thesis Basic level (degree of Bachelor). Student thesis.
    Abstract [sv, translated to English]

    This bachelor thesis proceeds from two research questions connected to feminism: "How can Simone de Beauvoir's feminism be portrayed in digital games?" and "What typical gender roles exist in today's digital games?". To answer these questions, we studied de Beauvoir's book "The Second Sex", carried out game analyses, and distributed written interviews to players. The result was a prototype of a digital game whose plot is directly inspired by selected chapters of Simone de Beauvoir's book "The Second Sex". We intend this game to serve as an example for future game designers of how games can be developed in an alternative way, and also of how digital games can become a new medium for philosophers to use.

    Download full text (pdf)
    FULLTEXT01
  • 128. Alexandrova, A. A.
    et al.
    Ibragimov, Nail
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    Lukashchuk, V. O.
    Group classification and conservation laws of nonlinear filtration equation with a small parameter (2014). In: Communications in Nonlinear Science & Numerical Simulation, ISSN 1007-5704, E-ISSN 1878-7274, Vol. 19, no 2, p. 364-370. Article in journal (Refereed).
    Abstract [en]

    Group classification of the perturbed nonlinear filtration equation is performed assuming that the perturbation is an arbitrary function of the dependent variable. The nonlinear self-adjointness of the equation under consideration is investigated. Using these results, the approximate conservation laws are constructed.
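
    For orientation, the unperturbed nonlinear filtration equation studied in this line of work is commonly written as below; the perturbed form with the small parameter is shown under the assumption, made here for illustration based on the abstract, that the perturbation is an arbitrary function of the dependent variable.

    ```latex
    % Nonlinear filtration equation and its perturbation by a small parameter
    \begin{align}
      u_t &= f(u_x)\,u_{xx}, && f'(u_x) \neq 0, \\
      u_t &= f(u_x)\,u_{xx} + \varepsilon\, g(u), && \varepsilon \ll 1,
    \end{align}
    % g(u): arbitrary perturbation depending on the dependent variable u
    ```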

  • 129. Algestam, Henrik
    et al.
    Offesson, Marcus
    Lundberg, Lars
    Using components to increase maintainability in a large telecommunication system (2002). Conference paper (Refereed).
  • 130.
    Ali, Hazrat
    Blekinge Institute of Technology, School of Computing.
    A Performance Evaluation of RPL in Contiki (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    A Wireless Sensor Network is formed of several small devices with the capability of sensing a physical characteristic and sending it hop by hop to a central node via low-power, short-range transceivers. The sensor network lifetime strongly depends on the routing protocol in use. The routing protocol is responsible for forwarding traffic and making routing decisions. If the routing decisions made are not intelligent, more re-transmissions will occur across the network, consuming the wireless sensor network's limited resources such as energy, bandwidth and processing. Therefore a careful and extensive performance analysis is needed for the routing protocols used by any wireless sensor network. In this study we investigate objective functions and the parameters most influential on the performance of the Routing Protocol for Low power and Lossy Networks (RPL) in Contiki (a WSN OS), and then evaluate RPL performance in terms of energy, latency, packet delivery ratio, control overhead, and convergence time for the network. We have carried out extensive simulations yielding a detailed analysis of different RPL parameters with respect to the five performance metrics. The study provides an insight into the RPL settings suitable for different application areas. Experimental results show that ETX is a better objective function, and that ContikiRPL provides very efficient network convergence (14 s), control traffic overhead (1300 packets), energy consumption (1.5% radio-on time), latency (0.5 s), and packet delivery ratio (98%) in our sample RPL simulation of one hour with 80 nodes, after careful configuration of the DIO interval minimum/doublings, radio duty cycling, and frequency of application messages.
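
    The ETX objective function the abstract finds preferable ranks candidate parents by expected transmission count. A minimal sketch of the idea follows; the link statistics are hypothetical, and real RPL/ContikiRPL implementations smooth these estimates over time rather than using raw ratios.

    ```python
    # Rank RPL parent candidates by path ETX (expected transmission count).
    def link_etx(df: float, dr: float) -> float:
        # ETX = 1 / (df * dr): df/dr are forward/reverse delivery ratios of the link.
        return 1.0 / (df * dr)

    def path_etx(parent_advertised_etx: float, df: float, dr: float) -> float:
        # A node's path cost = parent's advertised path ETX + cost of the link to it.
        return parent_advertised_etx + link_etx(df, dr)

    candidates = {
        "parent_A": path_etx(1.0, df=0.9, dr=0.8),   # close to root, decent link
        "parent_B": path_etx(2.5, df=1.0, dr=1.0),   # perfect link, deeper in the DODAG
    }
    best = min(candidates, key=candidates.get)
    print(best, candidates)  # preferred parent under the ETX objective function
    ```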

    Download full text (pdf)
    FULLTEXT01
  • 131.
    Ali, Israr
    et al.
    Blekinge Institute of Technology, School of Computing.
    Shah, Syed Shahab Ali
    Blekinge Institute of Technology, School of Computing.
    Usability Requirements for GIS Application: Comparative Study of Google Maps on PC and Smartphone (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: The smartphone is gaining popularity due to its mobility, computing capacity and energy efficiency. Email, text messaging, navigation and visualizing geo-spatial data through browsers are common features of smartphones. Geo-spatial data is collected in computing format and made publicly available; therefore the need for usability evaluation becomes important due to its increasing demand. Identifying usability requirements is as important as identifying conventional functional requirements in software engineering. Non-functional usability requirements are objective and testable using measurable metrics. Objectives: Usability evaluation plays an important role in the interaction design process as well as in identifying user needs and requirements. Comparative usability requirements are identified for the evaluation of a geographical information system (Google Maps) on a personal computer (laptop) and a smartphone (iPhone). Methods: The ISO 9241-11 guide on usability is used as an input model for identifying and specifying the usability level of Google Maps on both devices for the intended output. The authors set target values for the usability requirements of tasks and the questionnaire on each device, such as the acceptability level of task completion, the rate of efficiency, and participants' agreement with each measure, following ISO 9241-11. The usability test is conducted using the co-discovery technique on six pairs of graduate students. Interviews are conducted to validate the test results, and questionnaires are distributed to get feedback from participants. Results: The non-functional usability requirements were tested using five metrics measuring user performance and satisfaction. In the usability test, the acceptability level of task completion and the rate of efficiency were met on the personal computer but not on the iPhone. In the questionnaire, neither device met participants' agreement with each measure; only effectiveness was met, on the personal computer. Usability test, interview and questionnaire feedback are included in the results. Conclusions: The authors provide suggestions based on the test results and identify usability issues for the improvement of Google Maps on the personal computer and the iPhone.

    Download full text (pdf)
    FULLTEXT01
  • 132.
    Ali, Muhammad Usman
    Blekinge Institute of Technology, School of Computing.
    Cloud Computing as a Tool to Secure and Manage Information Flow in Swedish Armed Forces Networks2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    In the last few years cloud computing has created much hype in the IT world. It has provided new strategies to cut costs and achieve better utilization of resources. Despite its advantages, the cloud infrastructure has long been discussed for its vulnerabilities and security issues. There is a long list of service providers and clients who have implemented different service structures using cloud infrastructure. Despite all these efforts, many organizations, especially those with higher security concerns, have doubts about data privacy and theft protection in the cloud. This thesis aims to encourage Swedish Armed Forces (SWAF) networks to move to cloud infrastructures, as this is a technology that will make a huge difference and revolutionize service delivery models in the IT world. Organizations avoiding it will lag behind, but at the same time organizations should adopt the cloud strategy most reliable and compatible with their requirements. This document provides insight into different technologies and tools implemented specifically for monitoring and security in the cloud. Much emphasis is placed on virtualization technology because cloud computing relies heavily on it. The Amazon EC2 cloud is analyzed from a security point of view. An extensive survey has also been conducted to understand market trends and people's perceptions of cloud implementation, security threats, cost savings and the reliability of different services provided.

    Download full text (pdf)
    FULLTEXT01
  • 133.
    Ali, Muhammad Usman
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aasim, Muhammad
    Blekinge Institute of Technology, School of Computing.
    Usability Evaluation of Digital Library BTH a case study2009Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Libraries have for hundreds of years been an important entity for every kind of institute, especially in the educational sector. In the age of computers and the internet, people use electronic resources to fulfill their needs and requirements, and libraries have therefore converted to computerized systems. People can access and use library resources from their own computers via the internet. This modern way of running a library is called a digital library. Digital libraries are becoming popular for their flexibility of use and because more users can be served at a time. As the number of users increases, issues relevant to interaction also arise in using digital library interfaces and utilizing their e-resources. In this thesis we evaluate usability factors and issues in digital libraries, taking as a case study the existing digital library system at BTH. This thesis report describes digital libraries and how users are facilitated by them, and discusses usability issues relevant to digital libraries. Users have been the main source for evaluating and judging usability issues while interacting with and using this digital library. The results obtained showed dissatisfaction among users regarding the usability of BTH's digital library. The authors used usability evaluation techniques to evaluate the functionality and services provided by the BTH digital library system interface. Moreover, based on the results of our case study, suggestions for improvement of BTH's digital library are presented. Hopefully, these suggestions will help make the BTH digital library system more usable, efficient and effective for users.

    Download full text (pdf)
    FULLTEXT01
  • 134.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is effectiveness sufficient to choose an intervention?: Considering resource use in empirical software engineering2016In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2016, Ciudad Real, Spain, September 8-9, 2016, 2016, article id 54Conference paper (Refereed)
  • 135.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Edison, Henry
    Blekinge Institute of Technology, School of Computing.
    Towards Innovation Measurement in Software Industry2010Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: In today's highly competitive business environment, with shortened product and technology life-cycles, it is critical for the software industry to innovate continuously. To help an organisation achieve this goal, a better understanding and control of the activities and determinants of innovation is required. This can be achieved through an innovation measurement initiative which assesses innovation capability, output and performance. Objective: This study explores definitions of innovation, innovation measurement frameworks, key elements of innovation and metrics that have been proposed in the literature and used in industry. The degree of empirical validation and the context of the studies were also investigated, as were practitioners' perception of innovation, its importance and challenges, and the state of practice of innovation measurement in the software industry. Methods: In this study, a systematic literature review, followed by an online questionnaire and face-to-face interviews, was conducted. The systematic review used seven electronic databases: Compendex, Inspec, IEEE Xplore, ACM Digital Library, Business Source Premier, Science Direct and Scopus. Studies were subjected to preliminary, basic and advanced criteria to judge the relevance of papers. The online questionnaire targeted software industry practitioners with different roles and firm sizes; a total of 94 completed and usable responses from 68 unique firms were collected. Seven face-to-face semi-structured interviews were conducted with four industry practitioners and three academics. Results: Based on the findings of the literature review, interviews and questionnaire, a comprehensive definition of innovation was identified which may be used in the software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were aggregated and categorised. A conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review, and further refined after feedback from academia and industry through interviews. Conclusions: The importance of innovation measurement is well recognised in both academia and industry. However, innovation measurement is not a common practice in industry. Some of the major reasons are the lack of available metrics and data collection mechanisms to measure innovation. The organisations that do measure innovation use only a few metrics that do not cover the entire spectrum of innovation, partly because of the lack of a consistent definition of innovation in industry. Moreover, there is a lack of empirical validation of the metrics and determinants of innovation: although there is some static validation, full-scale industry trials are currently missing. For the software industry, a unique challenge is the development of alternative measures, since some of the existing metrics are inapplicable in this context. The conceptual model constructed in this study is one step towards identifying measurable key aspects of innovation, to understand the innovation capability and performance of software firms.

    Download full text (pdf)
    FULLTEXT01
  • 136.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engström, Emelie
    Lund University, SWE.
    Taromirad, Masoumeh
    Halmstad University, SWE.
    Mousavi, Muhammad Raza
    Halmstad University, SWE.
    Minhas, Nasir Mehmood
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Helgesson, Daniel
    Lund University, SWE.
    Kunze, Sebastian
    Halmstad University, SWE.
    Varshosaz, Mahsa
    Halmstad University, SWE.
    On the search for industry-relevant regression testing research2019In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no 4, p. 2020-2055Article in journal (Refereed)
    Abstract [en]

    Regression testing is a means to assure that a change in the software, or its execution environment, does not introduce new defects. It involves the expensive undertaking of rerunning test cases. Several techniques have been proposed to reduce the number of test cases to execute in regression testing; however, there is no research on how to assess industrial relevance and applicability of such techniques. We conducted a systematic literature review with the following two goals: firstly, to enable researchers to design and present regression testing research with a focus on industrial relevance and applicability and secondly, to facilitate the industrial adoption of such research by addressing the attributes of concern from the practitioners' perspective. Using a reference-based search approach, we identified 1068 papers on regression testing. We then reduced the scope to only include papers with explicit discussions about relevance and applicability (i.e. mainly studies involving industrial stakeholders). Uniquely in this literature review, practitioners were consulted at several steps to increase the likelihood of achieving our aim of identifying factors important for relevance and applicability. We have summarised the results of these consultations and an analysis of the literature in three taxonomies, which capture aspects of industrial relevance regarding the regression testing techniques. Based on these taxonomies, we mapped 38 papers reporting the evaluation of 26 regression testing techniques in industrial settings.

    Download full text (pdf)
    fulltext
  • 137.
    Ali, Nauman Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    A consolidated process for software process simulation: State of the Art and Industry Experience2012Conference paper (Refereed)
    Abstract [en]

    Software process simulation is a complex task, and in order to conduct a simulation project practitioners require support through a process for software process simulation modelling (SPSM), including what steps to take and what guidelines to follow in each step. This paper provides a literature-based consolidated process for SPSM, in which the steps and the guidelines for each step are identified through a review of the literature and complemented by experience from applying these recommendations in action research at a large telecommunication vendor. We found five simulation processes in the SPSM literature, which we consolidated into a seven-step process. The consolidated process was successfully applied at the studied company, and the experiences of doing so are reported.

    Download full text (pdf)
    fulltext
  • 138.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development2016In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, p. 213-227Article in journal (Refereed)
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and take no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information- and communication-related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was applied to a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners' perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature of the improvements and their impact on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information-flow-related issues.

    Download full text (pdf)
    fulltext
  • 139.
    Ali, Nauman Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Mäntylä, Mika
    Testing highly complex system of systems: An industrial case study2012Conference paper (Refereed)
    Abstract [en]

    Systems of systems (SoS) are highly complex and are integrated on multiple levels (unit, component, system, system of systems). Many of the characteristics of SoS (such as the operational and managerial independence of their constituents and the integration of complex systems into a system of systems) make their development and testing challenging. Contribution: This paper provides an understanding of SoS testing in large-scale industry settings with respect to challenges and how to address them. Method: The research method used is case study research. As data collection methods we used interviews, documentation, and fault slippage data. Results: We identified challenges related to SoS with respect to fault slippage, test turn-around time, and test maintainability. We also classified the testing challenges into general testing challenges, challenges amplified by SoS, and challenges that are SoS-specific. Interestingly, the interviewees agreed on the challenges even though we sampled them with diversity in mind, which indicated that the number of interviews conducted was sufficient to answer our research questions. We also identified solution proposals to the challenges, categorized under four classes: developer quality assurance, function test, testing on all levels, and requirements engineering and communication. Conclusion: We conclude that although over half of the challenges we identified can be categorized as general testing challenges, SoS still have their own unique and amplified challenges stemming from SoS characteristics. Furthermore, the interviews and fault slippage data indicated that different areas in the software process should be improved, which suggests that using only one of these methods would have led to an incomplete picture of the challenges in the case company.

  • 140.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A critical appraisal tool for systematic literature reviews in software engineering2019In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, p. 48-50Article, review/survey (Refereed)
    Abstract [en]

    Context: Methodological research on systematic literature reviews (SLRs) in software engineering (SE) has so far focused on developing and evaluating guidelines for conducting systematic reviews. However, support for quality assessment of completed SLRs has not received the same level of attention. Objective: To raise awareness of the need for a critical appraisal tool (CAT) for assessing the quality of SLRs in SE, and to initiate a community-based effort towards the development of such a tool. Method: We reviewed the literature on the quality assessment of SLRs to identify the frequently used CATs in SE and other fields. Results: We identified that the CATs currently used in SE were borrowed from medicine but have not kept pace with substantial advancements in that field. Conclusion: In this paper, we argue the need for a CAT for the quality appraisal of SLRs in SE. We also identify a tool that has the potential for application in SE, and present our approach for adapting this state-of-the-art CAT for assessing SLRs in SE.

    Download full text (pdf)
    fulltext
  • 141.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reliability of search in systematic reviews: Towards a quality assessment framework for the automated-search strategy2018In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, p. 133-147Article in journal (Refereed)
    Abstract [en]

    Context: The trust in systematic literature reviews (SLRs) to provide credible recommendations is critical for establishing evidence-based software engineering (EBSE) practice. The reliability of the SLR as a method is not a given and largely depends on the rigor of the attempt to identify, appraise and aggregate evidence. Previous research, by comparing SLRs on the same topic, has identified search as one of the reasons for discrepancies in the included primary studies. This affects the reliability of an SLR, as the papers identified and included in it are likely to influence its conclusions. Objective: We aim to propose a comprehensive evaluation checklist to assess the reliability of an automated-search strategy used in an SLR. Method: Using a literature review, we identified guidelines for designing and reporting automated search as a primary search strategy. Using the aggregated design, reporting and evaluation guidelines, we formulated a comprehensive evaluation checklist. The value of this checklist was demonstrated by assessing the reliability of search in 27 recent SLRs. Results: Using the proposed evaluation checklist, several additional issues (not captured by the current evaluation checklists) related to the reliability of search in recent SLRs were identified. These issues severely limit the coverage of literature by the search and also the possibility to replicate it. Conclusion: Instead of relying solely on expensive replications to assess the reliability of SLRs, this work provides means to objectively assess the likely reliability of a search strategy used in an SLR. It highlights the often-assumed aspect of repeatability of search when using automated search. Furthermore, by explicitly considering repeatability and consistency as sub-characteristics of a reliable search, it provides a more comprehensive evaluation checklist than the ones currently used in EBSE.

  • 142.
    Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating strategies for study selection in systematic literature studies2014In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, 2014, article id 45Conference paper (Refereed)
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population and there is homogeneity in the results and quality of the articles.

    Download full text (pdf)
    fulltext
  • 143.
    Ali, Sajjad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Asad
    Blekinge Institute of Technology, School of Computing.
    Performance Analysis of AODV, DSR and OLSR in MANET2010Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    A mobile ad hoc network (MANET) consists of mobile wireless nodes. Communication between these mobile nodes is carried out without any centralized control; a MANET is a self-organized and self-configurable network in which the mobile nodes move arbitrarily and can receive and forward packets as routers. Routing is a critical issue in MANETs and hence the focus of this thesis, along with a performance analysis of routing protocols. We compare three routing protocols: AODV, DSR and OLSR. Our simulation tool is OPNET Modeler. The performance of these routing protocols is analyzed using three metrics: delay, network load and throughput. All three routing protocols are explained in depth together with these metrics. A comparative analysis of the protocols is carried out, and finally a conclusion is presented as to which routing protocol is best suited for mobile ad hoc networks.
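
    A minimal sketch of how such metrics can be computed from a packet trace, assuming an invented (t_sent, t_received, size) record format rather than OPNET Modeler's native output:

        # Hypothetical trace post-processing for delay and throughput (invented data).
        trace = [(0.00, 0.12, 512), (0.50, 0.71, 512), (1.00, 1.18, 1024)]

        delays = [r - s for s, r, _ in trace]
        avg_delay = sum(delays) / len(delays)                 # mean end-to-end delay (s)
        span = max(r for _, r, _ in trace) - min(s for s, _, _ in trace)
        throughput = sum(b for *_, b in trace) * 8 / span     # bits per second over the trace

        print(f"avg delay={avg_delay*1000:.0f} ms  throughput={throughput/1000:.1f} kbit/s")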

    Download full text (pdf)
    FULLTEXT01
  • 144.
    Ali, Wajahat
    et al.
    Blekinge Institute of Technology, School of Computing.
    Muhammad, Asad
    Blekinge Institute of Technology, School of Computing.
    Response Time Effects on Quality of Security Experience2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    The recent decade has witnessed enormous development in internet technology worldwide. Initially the internet was designed for applications such as electronic mail and file transfer. With the technology evolving and becoming popular, people use the internet for e-banking, e-shopping, social networking, e-gaming, voice and many other applications. Most internet traffic is generated by the activities of end users requesting a specific webpage or web-based application. The high demand for internet applications has driven service operators to provide reliable services to the end user, and user satisfaction has become a major challenge. Quality of Service is a measure of the performance of a particular service; Quality of Experience is a subjective measure of the user's perception of the overall performance of the network. The high demand for internet usage in everyday life has made people concerned about the security of information on web pages that require authentication. The user-perceived Quality of Security Experience depends on the Quality of Experience and the Response Time for web page authentication. Different factors such as jitter, packet loss, delay, network speed, supply chains and the type of security algorithm play a vital role in the response time for authentication. In this work we perform a qualitative and quantitative analysis of user-perceived security and Quality of Experience with increasing and decreasing Response Times for web page authentication, and we try to derive a relationship between the Quality of Experience of security and the Response Time.
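
    As one hedged illustration of deriving such a relationship, QoE research often fits an exponential mapping from a QoS factor to opinion scores (the IQX hypothesis). The sketch below fits that form to invented response-time/score pairs; it is not the thesis's data or model:

        # Fit QoE(t) = a*exp(-b*t) + c to invented (response time, mean opinion score)
        # pairs, with the floor c fixed at 1.0, via log-linear least squares.
        import math

        samples = [(0.5, 4.6), (1.0, 4.1), (2.0, 3.2), (4.0, 2.1), (8.0, 1.3)]
        c = 1.0
        xs = [t for t, _ in samples]
        ys = [math.log(q - c) for _, q in samples]   # linearize: ln(QoE - c) = ln(a) - b*t
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        a = math.exp(my + b * mx)
        print(f"QoE(t) = {a:.2f} * exp(-{b:.2f} * t) + {c}")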

    Download full text (pdf)
    FULLTEXT01
  • 145.
    Ali, Waqas
    Blekinge Institute of Technology, School of Computing.
    Case Study Of Mobile Internet User Experience2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    The mobile internet is currently considered the future of the internet: the number of mobile handsets sold compared to desktop PCs is striking, indicating the potential of the mobile internet and a future market relying strongly on mobile devices. At the same time, the number of mobile internet users is growing slowly. Particularly in markets where internet access through computers is very simple, mobile internet users seem unenthusiastic about using the internet on mobile phones. The author of this study posited, on the basis of literature findings, that this lack of interest is due to an unsatisfactory mobile internet user experience. This thesis is an effort in the complex area of the mobile internet and sheds some light on how to improve the mobile internet user experience. The main focus of this research work is the identification of hurdles and challenges for the mobile internet user experience and the exploration of the concepts present in academia. To understand these properly, the author performed a systematic literature review (SLR), whose overall objective is to examine the existing work on the thesis topic. This in-depth study of the literature revealed that the mobile internet user experience is categorized into aspects, elements and factors by different researchers, which are considered central to the mobile internet user experience. A few other factors complicate matters further, such as usage context and user expectations. In this work, current problems of the mobile internet user experience are identified systematically, which had not been done before, and then discussed in a way that provides academia with a better understanding of the mobile internet user experience. To fulfil the aim and objectives, the author conducted a detailed systematic review of the empirical studies from 1998 to 2012. The studies were identified from the most authentic, scientifically and technically peer-reviewed databases, such as Scopus, Evillage, IEEE Xplore and the ACM Digital Library. From the SLR results, we found different aspects, elements, factors and challenges of the mobile internet user experience. The most common challenges faced by users and reported in academia were screen size, input facilities, usability of services, and data traffic costs. The information attained during this thesis study through academia (the literature) is presented in a descriptive way, reflecting an emerging trend of using the internet on mobile devices. Through this study the author presents the influencing perspectives of the mobile internet user experience that need to be considered for the advancement of the mobile internet. The presented work adds a contribution in the sense that, to the best of our knowledge, no systematic review effort has been done in this area before.

    Download full text (pdf)
    FULLTEXT01
  • 146.
    Ali, Zahoor
    et al.
    Blekinge Institute of Technology, School of Computing.
    Arfeen, Muhammad Qummer ul
    Blekinge Institute of Technology, School of Computing.
    The role of Machine Learning in Predicting CABG Surgery Duration2011Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context. The operating room (OR) is one of the most expensive resources of a hospital, and its mismanagement is associated with high costs and lost revenue. There are various factors which may cause OR mismanagement; one of them is wrong estimation of surgery duration. Surgeons underestimate or overestimate surgery duration, which causes under- or overutilization of the OR and medical staff. Resolving the issue of wrong estimates can improve overall OR planning. Objectives. In this study we investigate two different techniques for feature selection and compare different regression-based modeling techniques for surgery duration prediction. The technique with the lowest mean absolute error is used for building a model, and we further propose a framework for implementing this model in a real-world setup. Results. In our case the selected technique for feature selection (correlation-based feature selection with best-first search in the backward direction) could not produce better results than the expert-opinion-based approach. Linear regression outperformed the other techniques on both data sets; comparatively, the mean absolute error of linear regression on the expert-opinion-based data set was the lowest. Conclusions. We conclude that patterns exist in the relationship between the predicted quantity (surgery duration) and other important features related to patient characteristics; thus, machine learning tools can be used for predicting surgery duration. We also conclude that the proposed framework may be used as a decision support tool to facilitate surgery duration prediction, which can improve the planning of ORs and their resources.
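
    A minimal sketch of the modeling step described above: ordinary linear regression evaluated by mean absolute error, using scikit-learn. The features and values are invented stand-ins for the patient characteristics used in the thesis:

        # Linear regression for duration prediction, scored by MAE (invented data).
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        # Invented features per patient, e.g. [age, comorbidity flag, graft count].
        X = [[63, 1, 3], [71, 0, 2], [58, 1, 4], [66, 0, 3], [74, 1, 5], [60, 0, 2]]
        y = [240, 205, 260, 225, 290, 200]   # surgery duration in minutes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
        model = LinearRegression().fit(X_tr, y_tr)
        print("MAE (minutes):", mean_absolute_error(y_te, model.predict(X_te)))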

    Download full text (pdf)
    FULLTEXT01
  • 147.
    Alipour, Philip Baback
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Muhammad
    Blekinge Institute of Technology, School of Computing.
    An Introduction and Evaluation of a Lossless Fuzzy Binary AND/OR Compressor2010Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    We report a new lossless data compression algorithm (LDC) for implementing predictably fixed compression values. The fuzzy binary AND-OR algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not suffice as a technique for maximum LDCs generating fixed values of Cr >= 2:1. The current model, however, is evaluated to serve multidimensional LDCs with fixed-value generation, in contrast to the popular methods used in probabilistic LDCs, such as Shannon entropy. The entropy introduced here is 'fuzzy binary', in a 4D hypercube bit-flag model, with a product value of at least 50% compression. We have implemented the compression and simulated the decompression phase for lossless versions of the FBAR logic. We further compared our algorithm with the results obtained by other compressors. Our statistical test shows that the presented algorithm measurably and significantly competes with other LDC algorithms on both the temporal and spatial factors of compression. The current algorithm is a stepping stone towards quantum information models solving complex negative entropies, giving double-efficient LDCs with > 87.5% space savings.
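
    The sketch below is not an FBAR implementation; it only illustrates the evaluation arithmetic the abstract relies on (compression ratio Cr and percentage space savings), with Python's standard-library compressors as stand-ins:

        # Compression ratio and space savings for a highly regular sample input.
        import bz2, lzma, zlib

        data = b"abcd" * 4096
        for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress),
                               ("lzma", lzma.compress)]:
            out = compress(data)
            cr = len(data) / len(out)                     # Cr; e.g. 2:1 means cr == 2.0
            savings = 100.0 * (1 - len(out) / len(data))  # space savings in percent
            print(f"{name}: Cr={cr:.1f}:1  savings={savings:.1f}%")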

    Download full text (pdf)
    FULLTEXT01
  • 148.
    Alisic, Senadin
    et al.
    Blekinge Institute of Technology, School of Management.
    Karapistoli, Eirini
    Blekinge Institute of Technology, School of Management.
    Katkic, Adis
    Blekinge Institute of Technology, School of Management.
    Key Drivers for the Successful Outsourcing of IT Services2012Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Background: Services are without doubt the driving force in the economies of many countries today. The increased importance of the service sector in industrialized economies, and its productivity rates, are testified by the fact that the current list of Fortune 500 companies contains more service companies and fewer manufacturing companies than in previous decades. Many products today are being transformed into services or have a higher service component than previously. In the development of this increasingly important bundling of services with products, outsourcing and offshoring play a key role. Companies have been outsourcing work for many years now, making outsourcing a well-established phenomenon. Outsourcing to foreign countries, referred to as offshoring, has also been fuelled by ICT and globalization, allowing firms to capitalize on price and cost differentials between countries. Constant improvements in technology and global communications virtually guarantee that the future will bring much more outsourcing of services, and more specifically, outsourcing of IT services. Since outsourcing and offshoring strategies play an important role in IT services, we investigate the drivers that affect the successful outcome of an offshore outsourcing engagement. Purpose: The principal aim of the present study is therefore twofold: a) to identify key drivers for the successful outsourcing of IT services seen from the outsourcing partner's perspective, and b) to investigate how the outsourcing partner prioritizes these drivers.

    Download full text (pdf)
    FULLTEXT01
  • 149.
    Allahyari, Hiva
    Blekinge Institute of Technology, School of Computing.
    On the concept of Understandability as a Property of Data mining Quality2010Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    This paper reviews methods for evaluating and analyzing the comprehensibility and understandability of models generated from data in the context of data mining and knowledge discovery. The motivation for this study is the fact that the majority of previous work has focused on increasing the accuracy of models, ignoring user-oriented properties such as comprehensibility and understandability. Approaches for analyzing the understandability of data mining models have been discussed on two different levels: one regarding the type of the models' presentation and the other considering the structure of the models. In this study, we present a summary of existing assumptions regarding both approaches, followed by empirical work examining understandability from the user's point of view through a survey. From the results of the survey, we find that models represented as decision trees are more understandable than models represented as decision rules. Using the survey results regarding the understandability of a number of models in conjunction with quantitative measurements of the complexity of the models, we are able to establish a correlation between the complexity and the understandability of the models.
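
    A minimal sketch of that final analysis step, assuming node count as a complexity proxy and invented mean survey ratings; Pearson's r is computed with the standard library (Python 3.10+):

        # Correlate model complexity with mean survey understandability ratings.
        from statistics import correlation

        complexity = [5, 9, 14, 22, 31]                # e.g. nodes per tree/rule set (assumed proxy)
        understandability = [4.6, 4.2, 3.5, 2.8, 2.1]  # invented mean rating per model (1-5 scale)

        print(f"Pearson r = {correlation(complexity, understandability):.2f}")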

    Download full text (pdf)
    FULLTEXT01
  • 150. Allahyari, Hiva
    et al.
    Lavesson, Niklas
    User-oriented Assessment of Classification Model Understandability2011Conference paper (Refereed)
    Abstract [en]

    This paper reviews methods for evaluating and analyzing the understandability of classification models in the context of data mining. The motivation for this study is the fact that the majority of previous work has focused on increasing the accuracy of models, ignoring user-oriented properties such as comprehensibility and understandability. Approaches for analyzing the understandability of data mining models have been discussed on two different levels: one regarding the type of the models' presentation and the other considering the structure of the models. In this study, we present a summary of existing assumptions regarding both approaches, followed by empirical work examining understandability from the user's point of view through a survey. The results indicate that decision tree models are more understandable than rule-based models. Using the survey results regarding the understandability of a number of models in conjunction with quantitative measurements of the complexity of the models, we are able to establish a correlation between the complexity and the understandability of the models.

    Download full text (pdf)
    FULLTEXT01