  • 301.
    Ayalew, Tigist
    et al.
    Blekinge Institute of Technology, School of Computing.
    Kidane, Tigist
    Blekinge Institute of Technology, School of Computing.
    Identification and Evaluation of Security Activities in Agile Projects: A Systematic Literature Review and Survey Study (2012). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Context: Today’s software development industry requires high-speed software delivery from the development team. To achieve this, organizations transform from their conventional software development method to an agile development method while preserving customer satisfaction. Even though this approach is becoming a popular development method, it has some disadvantages from a security point of view, because it imposes several constraints such as the lack of a complete overview of the product, a higher development pace and a lack of documentation. Although a security engineering (SE) process is necessary in order to build secure software, no SE process has been developed specifically for the agile model. As a result, SE processes that are commonly used in the waterfall model are being used in agile models. However, there is a clash between the established waterfall SE processes and the ideas and methodologies proposed by the agile manifesto. While agile models work with short development increments that adapt easily to change, the existing SE processes work in a plan-driven development setting and try to reduce defects before threats occur through heavy and inflexible processes. This study aims at bridging the gap between the agile model and security by providing an insightful understanding of the SE processes that are used in the current agile industry. Objectives: The objectives of this thesis are to identify and evaluate security activities from high-profile waterfall SE processes that are used in the current agile industry, and then to suggest the most compatible and beneficial security activities for the agile model based on the study results. Methods: The study involved two approaches: a systematic literature review and a survey. The systematic literature review has two main aims: the first is to gain a comprehensive understanding of security in an agile process model; the second is to identify high-profile SE processes that are commonly used in the waterfall model. Moreover, it helped to compare the thesis results with previous work in the area. A survey was conducted to identify and evaluate waterfall security activities that are used in current agile industry projects. The evaluation criteria were based on the integration cost of a security activity and the benefit it provides to agile projects. Results: The results of the systematic review are organized in tabular form for clear understanding and easy analysis. High-profile SE processes and their activities are obtained, and these results are used as input for the survey study. From the survey study, security activities that are used in the current agile industry are identified. Furthermore, the identified security activities are evaluated in terms of benefit and cost. As a result, the security activities that are most compatible with and beneficial to the agile process model are identified. Conclusions: To develop secure software in an agile model, there is a need for an SE process or practice that can address security issues in every phase of the agile project lifecycle. This can be done either by integrating the most compatible and beneficial security activities from waterfall SE processes with the agile process or by creating a new SE process. In this thesis, it has been found that, of the investigated high-profile waterfall SE processes, none was fully compatible with and beneficial to agile projects.

  • 302. Ayani, Rassul
    et al.
    Ismailov, Yuri
    Liljenstam, Michael
    Popescu, Adrian
    Rajaei, Hassan
    Rönngren, Robert
    Modeling and Simulation of a High Speed LAN (1995). In: Simulation (San Diego, Calif.), ISSN 0037-5497, E-ISSN 1741-3133, Vol. 64, no 1, p. 7-14. Article in journal (Refereed)
    Abstract [en]

    Simulation is a tool that can be used to assess the functionality and performance of communication networks and protocols. However, efficient simulation of complex communication systems is not a trivial task. In this paper, we discuss modeling and simulation of bus-based communication networks and present the results of modeling and simulation of a multigigabit/s LAN. We used parallel simulation techniques to reduce the simulation time of the LAN and implemented both an optimistic and a conservative parallel simulation scheme. Our experimental results on a shared memory multiprocessor indicate that the conservative parallel simulation scheme is superior to the optimistic one for this specific application. The parallel simulator based on the conservative scheme demonstrates a linear speedup for large networks.

  • 303.
    Ayichiluhm, Theodros
    et al.
    Blekinge Institute of Technology, School of Computing.
    Mohan, Vivek
    Blekinge Institute of Technology, School of Computing.
    IPv6 Monitoring and Flow Detection (2013). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    IPv6 privacy extensions, implemented in major operating systems, hide the user’s identity by using temporary, randomly generated IPv6 addresses rather than the former EUI-64 format, where the MAC address is part of the IPv6 address. This solution for privacy has created a problem for network administrators who need to back-trace an IPv6 address to a specific MAC address, since the temporary IP address once used by the node is removed from the interface after a period of time. An IPv6 Ethernet test bed is set up to investigate IPv6 implementation dynamics in the Windows 7 and Ubuntu 10.04 operating systems. The test bed is extended to investigate the effects of temporary IPv6 addresses due to IPv6 privacy extensions on the on-going sessions of different applications including ping, File Transfer Protocol (FTP) and video streaming (HTTP and RTP). On the basis of the knowledge obtained from these investigations of the dynamics of IPv6 privacy extensions, this work proposes Internet Protocol version 6 Host Tracking (IPv6HoT), a web-based IPv6-to-MAC mapping solution. IPv6HoT uses the Simple Network Management Protocol (SNMP) to forward the IPv6 neighbor table from routers to Network Management Stations (NMS). This thesis work provides guidelines for configuring IPv6 privacy extensions in Ubuntu 10.04 and Windows 7; the difference in implementation between these two operating systems is also presented. The results show that temporary IPv6 addressing has a definite effect on the on-going sessions of video streaming and FTP applications. Applications running as a server on a temporary IPv6 address encountered more frequent on-going session interruptions than applications running as a server on a public IPv6 address. When temporary IPv6 addresses were configured to host FTP and video streaming applications, their on-going sessions were permanently interrupted. It is also observed that LFTP, a client FTP application, resumes an interrupted session.
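    A small illustrative sketch (not part of the IPv6HoT tool described above): privacy extensions complicate IPv6-to-MAC back-tracing because they replace the deterministic, MAC-derived EUI-64 interface identifier with random temporary ones. The function below shows the standard modified EUI-64 derivation; the MAC address in the example is hypothetical.

        # Modified EUI-64 derivation: without privacy extensions, the IPv6
        # interface identifier is derived from the MAC address like this,
        # which is what makes back-tracing an address to a MAC possible.
        def mac_to_eui64_interface_id(mac: str) -> str:
            octets = [int(b, 16) for b in mac.split(":")]
            octets[0] ^= 0x02                                  # flip the universal/local bit
            eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
            return ":".join("%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2))

        # Hypothetical host: MAC 00:1a:2b:3c:4d:5e -> interface ID 021a:2bff:fe3c:4d5e,
        # so on prefix 2001:db8::/64 it would autoconfigure 2001:db8::21a:2bff:fe3c:4d5e.
        print(mac_to_eui64_interface_id("00:1a:2b:3c:4d:5e"))

    A temporary address generated by privacy extensions carries no such structure, so the mapping must instead be collected from the routers' neighbor tables, which is what IPv6HoT does via SNMP.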

  • 304.
    Ayoubi, Tarek
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Distributed Data Management Supporting Healthcare Workflow from Patients’ Point of View (2007). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    A patient’s mobility throughout his or her lifetime leaves a trail of information scattered in laboratories, clinical institutes, primary care units, and other hospitals. Hence, the medical history of a patient is valuable when the patient is referred to specialized healthcare units or undergoes home care/personal care in elderly-stage cases. Despite the rhetoric about patient-centred care, few attempts have been made to measure and improve in this arena. In this thesis, we describe and implement a high-level view of patient-centric information management, deploying, at a preliminary stage, Agent Technologies and Grid Computing. We thus develop and propose an infrastructure that allows us to monitor and survey the patient from the doctor’s point of view, and investigate a Persona, on the patients’ side, that functions and collaborates among different medical information structures. The Persona attempts to interconnect all the major agents (human and software) and realize a distributed grid info-structure that directly affects the patient, thereby revealing an adequate and cost-effective solution for the most critical information needs. The results of the literature survey, consolidating Healthcare Information Management with emerging intelligent Multi-Agent System Technologies (MAS) and Grid Computing, are intended to provide a solid basis for further advancements and assessments in this field, by proposing a bridging framework between the home-care sector and the flexible agent architecture throughout the healthcare domain.

  • 305.
    Ayub, Muhammad
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    Choquet and Sugeno Integrals (2009). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    In many real-world problems, most criteria have interdependent or interactive characteristics, which cannot be evaluated exactly by additive measures. For human subjective evaluation processes it is better to apply the Choquet and Sugeno integral models together with the definition of the lambda-fuzzy measure, in which the property of additivity is not necessary. This thesis presents the application of fuzzy integrals as a tool for criteria aggregation in decision problems. Finally, the research gives examples of evaluating medicines, with illustrations of the hierarchical structure of the lambda-fuzzy measure for the Choquet and Sugeno integral models.
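    For reference, the standard discrete definitions behind the models named above (general textbook formulations, not the thesis's own notation) can be written as follows, for criteria ordered so that f(x_(1)) <= ... <= f(x_(n)) and A_(i) = {x_(i), ..., x_(n)}:

        % Discrete Choquet and Sugeno integrals of f with respect to a fuzzy measure g
        \[
        (C)\!\int f \, dg = \sum_{i=1}^{n} \bigl[ f(x_{(i)}) - f(x_{(i-1)}) \bigr] \, g(A_{(i)}),
        \qquad f(x_{(0)}) = 0,
        \]
        \[
        (S)\!\int f \, dg = \max_{1 \le i \le n} \min\bigl( f(x_{(i)}),\, g(A_{(i)}) \bigr).
        \]
        % A lambda-fuzzy measure replaces additivity by, for disjoint A and B:
        \[
        g(A \cup B) = g(A) + g(B) + \lambda\, g(A)\, g(B), \qquad \lambda > -1,
        \]
        % with lambda fixed by the normalization g(X) = 1, i.e.
        \[
        \prod_{i=1}^{n} \bigl( 1 + \lambda\, g_i \bigr) = 1 + \lambda .
        \]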

  • 306.
    Ayub, Yasir
    et al.
    Blekinge Institute of Technology, School of Computing.
    Faruki, Usman
    Blekinge Institute of Technology, School of Computing.
    Container Terminal Operations Modeling through Multi agent based Simulation (2009). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    This thesis aims to propose a multi-agent based hierarchical model for the operations of container terminals. We have divided our model into four key agents that are involved in each sub processes. The proposed agent allocation policies are recommended for different situations that may occur at a container terminal. A software prototype is developed which implements the hierarchical model. This web based application is used in order to simulate the various processes involved in the following operations on the marine side in a case study of a container terminal in Sweden by adopting a multi-agent based simulation technique. Due to the increase in usage of container transportation, container terminals are experiencing difficulties in the management of the operations. The software provides a decision support capability to terminal managers for scheduling and managing the operations effectively while also visually presenting the time it takes to complete the process and its associated cost. Terminal managers need to implement certain policies to improve the management and operations of the container terminal. The policies are evaluated and tested under various cases to provide a more comparative overview. The results of the simulation experiments indicate that the waiting time for arriving vessels is decreasing when in queue with more than three vessels arriving on same day.

  • 307.
    Azam, Muhammad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ahmad, Luqman
    Blekinge Institute of Technology, School of Computing.
    A Comparative Evaluation of Usability for the iPhone and iPad (2011). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Many everyday systems and products seem to be designed with little regard to usability. This leads to frustration, wasted time and errors, so the usability of a product is important for its survival in the market. In many previous studies the usability evaluation of the iPhone and iPad was carried out individually, and very little work has been done on comparative usability evaluation. In particular, no study had been conducted on comparative usability evaluation and performance measurement of the iPhone versus the iPad in a controlled environment. In this research work, the authors performed a comparative usability evaluation and measured the performance of the iPhone and iPad on selected applications, considering young users as well as elderly users. Another objective of this study was to identify usability issues in the performance of the iPhone and iPad. Survey and experiment techniques were used to achieve the defined objectives. The survey questionnaire consisted of 42 statements covering different usability aspects. The objectives of the survey study were to validate the issues identified in the literature study, identify new issues and measure the significant differences in user opinions for the iPhone and iPad. The experiment studies helped to measure performance differences between the devices for the three user groups (novice users, experienced users, elderly users) and among the groups for each device. A further objective was to measure the satisfaction level of the participating users with the iPhone and iPad. The experiment was performed in a controlled environment. In total six tasks (two tasks per application) were defined and each participant performed the same tasks on both devices. In general the authors found that the participants performed better on the iPad, with lower error rates compared to the iPhone.

  • 308.
    Azam, Muhammad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Hussain, Izhar
    Blekinge Institute of Technology, School of Computing.
    The Role of Interoperability in eHealth (2009). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    The lack of interoperability in systems and services has long been recognized as one of the major challenges to the wider implementation of eHealth applications. The opportunities and positive benefits of achieving interoperability are considerable, whereas various barriers and challenges act as impediments. The purpose of this study was to investigate interoperability among different health care organizations. The knowledge gained from this study should help health care organizations understand their interoperability problems. In the first phase, a literature review identified interoperability challenges in Sweden and other EU countries. On the basis of these findings, interviews were conducted to learn about the strategies and planning for interoperability in health care organizations. After analysis of the interviews, questionnaires were used to gather the opinions of different medical IT administrators and health professionals. From the analysis of the interviews and questionnaires, the authors find that adopting an eHealth standard, a common system, a common medical language, and ensuring the security of patients’ health record information could be implemented in health organizations in Sweden and other EU countries.

  • 309. Azhar, Damir
    et al.
    Riddle, Patricia
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Mittas, Nikolaos
    Angelis, Lefteris
    Using ensembles for web effort estimation (2013). Conference paper (Refereed)
    Abstract [en]

    Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared by effort estimation in the general software estimation domain. A previous study in this domain has shown that using ensembles of estimation techniques can be used to address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques will be similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques two approaches were investigated: The first involved replicating the methodology used in the previous study, while the second approach used the Scott-Knott algorithm. Both approaches were done using the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data
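    The ensemble idea referred to above can be illustrated with a minimal sketch (this is not the paper's exact procedure, which builds ensembles from solo techniques deemed superior via replication or the Scott-Knott algorithm): an ensemble simply combines the estimates of several solo techniques, for example by taking their mean or median.

        # Minimal ensemble sketch: combine solo effort estimates (hypothetical values).
        import statistics

        def ensemble_estimate(solo_estimates, combiner="median"):
            """solo_estimates: effort values (e.g. person-hours) from the selected solo techniques."""
            return statistics.mean(solo_estimates) if combiner == "mean" else statistics.median(solo_estimates)

        print(ensemble_estimate([120.0, 150.0, 138.0]))          # -> 138.0
        print(ensemble_estimate([120.0, 150.0, 138.0], "mean"))  # -> 136.0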

  • 310.
    Azhar, Muhammad Saad Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aslam, Ammad
    Blekinge Institute of Technology, School of Computing.
    Multiple Coordinated Information Visualization Techniques in Control Room Environment (2009). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Presenting a large amount of multivariate data is not a simple problem. When there are multiple correlated variables involved, it becomes difficult to comprehend the data using traditional means. Information Visualization techniques provide an interactive way to present and analyze such data. This thesis has been carried out at ABB Corporate Research, Västerås, Sweden. The use of Parallel Coordinates and Multiple Coordinated Views has been suggested to realize interactive reporting and trending of multivariate data for ABB’s Network Manager SCADA system. A prototype was developed and an empirical study was conducted to evaluate the suggested design and test its usability from an actual industry perspective. With the help of this prototype and the evaluations carried out, we are able to draw stronger conclusions regarding the effectiveness and efficiency of the visualization techniques used. The results confirm that such interfaces are more effective, efficient and intuitive for filtering and analyzing multivariate data.
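    As an illustration of the visualization technique named above (not the ABB prototype itself), a parallel-coordinates plot draws one axis per variable and one polyline per observation, which is what makes correlated multivariate data readable at a glance; the data frame below is hypothetical.

        # Parallel-coordinates sketch with pandas/matplotlib (illustrative data only).
        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        df = pd.DataFrame({
            "voltage_kV":   [398, 402, 395, 410],
            "current_A":    [510, 480, 530, 460],
            "frequency_Hz": [49.9, 50.0, 50.1, 49.8],
            "load_MW":      [190, 185, 201, 178],
            "state":        ["normal", "normal", "alarm", "normal"],   # colour key
        })

        parallel_coordinates(df, class_column="state")   # one axis per column, one line per row
        plt.show()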

  • 311.
    Aziz, Hussein
    Blekinge Institute of Technology, School of Computing.
    Streaming Video over Unreliable and Bandwidth Limited Networks (2013). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The main objective of this thesis is to provide a smooth video playout on the mobile device over wireless networks. The parameters that specify the wireless channel include bandwidth variation, frame losses, and outage time. These parameters may affect the quality of the video negatively, and the mobile users may notice sudden stops during video playout, i.e., the picture is momentarily frozen, followed by a jump from one scene to a different one. This thesis focuses on eliminating frozen pictures and reducing the amount of video data that need to be transmitted. In order to eliminate frozen scenes on the mobile screen, we propose three different techniques. In the first technique, the video frames are split into sub-frames; these sub-frames are streamed over different channels. In the second technique the sub-frames are “crossed” and sent together with other sub-frames that are from different positions in the streaming video sequence. If some sub-frames are lost during the transmission, a reconstruction mechanism is applied on the mobile device to recreate the missing sub-frames. In the third technique, we propose a Time Interleaving Robust Streaming (TIRS) technique to stream the video frames in a different order. The benefit of that is to avoid losing a sequence of neighbouring frames. A missing frame from the streaming video is reconstructed based on the surrounding frames on the mobile device. In order to reduce the amount of video data that is streamed over limited bandwidth channels, we propose two different techniques. These two techniques are based on identifying and extracting a high-motion region of the video frames. We call this the Region Of Interest (ROI); the other parts of the video frames are called the non-Region Of Interest (non-ROI). The ROI is transmitted with high quality, whereas the non-ROI is interpolated from a number of reference frames. In the first technique the ROI is a fixed-size region; we considered four different types of ROI and three different scenarios. The scenarios are based on the position of the reference frames in the streaming frame sequence. In the second technique the ROI is identified based on the motion in the video frames; therefore the size, position, and shape of the ROI will differ from one video to another according to the video characteristics. The videos are coded using ffmpeg to study the effect of the proposed techniques on the encoding size. Subjective and objective metrics are used to measure the quality level of the reconstructed videos that are obtained from the proposed techniques. Mean Opinion Score (MOS) measurements are used as a subjective metric based on human opinions, while for the objective metric the Structural Similarity (SSIM) index is used to compare the similarity between the original frames and the reconstructed frames.
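    For reference, the objective metric mentioned at the end of the abstract, the Structural Similarity (SSIM) index between two image windows x and y, is commonly defined as follows (standard formulation, not thesis-specific notation):

        \[
        \mathrm{SSIM}(x, y) =
        \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
             {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
        \]
        % where \mu_x, \mu_y are window means, \sigma_x^2, \sigma_y^2 variances,
        % \sigma_{xy} the covariance, and C_1, C_2 small constants that stabilize the division.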

  • 312.
    Aziz, Hussein Muzahim
    et al.
    Blekinge Institute of Technology, School of Computing.
    Fiedler, Markus
    Blekinge Institute of Technology, School of Computing.
    Grahn, Håkan
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Compressing Video Based on Region of Interest (2013). Conference paper (Refereed)
    Abstract [en]

    Real-time video streaming suffers from bandwidth limitations that make it impossible to handle the high amount of video data. To reduce the amount of data to be streamed, we propose an adaptive technique that crops the important part of the video frames and drops the part outside it; the important part is called the Region of Interest (ROI). The Sum of Absolute Differences (SAD) is computed over consecutive video frames on the server side to identify and extract the ROI. The ROI is extracted from the frames that lie between reference frames, based on three scenarios. The scenarios are designed to position the reference frames in the video frame sequence. Linear interpolation is performed from the reference frames to reconstruct the part outside the ROI on the mobile side. We evaluate the proposed approach for the three scenarios by looking at the size of the compressed videos, and we measure the quality of the videos by using the Mean Opinion Score (MOS). The results show that our technique significantly reduces the amount of data to be streamed over wireless networks while acceptable video quality is provided to the mobile viewers.
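    A minimal sketch of the server-side idea described above (an illustration, not the authors' implementation): compute per-pixel absolute differences between consecutive grayscale frames, the building block of the SAD measure, and keep the smallest bounding box around the pixels with significant change as the ROI. The threshold value is an assumption.

        # Motion-based ROI extraction sketch (illustrative only).
        import numpy as np

        def roi_from_motion(prev_frame: np.ndarray, cur_frame: np.ndarray, threshold: int = 30):
            """Frames: 2-D grayscale arrays. Returns (top, bottom, left, right) or None."""
            diff = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
            moving = diff > threshold                      # pixels with significant change
            if not moving.any():
                return None                                # no motion detected
            rows = np.where(moving.any(axis=1))[0]
            cols = np.where(moving.any(axis=0))[0]
            return rows[0], rows[-1], cols[0], cols[-1]    # bounding box of the motion

        # The region outside this box is dropped on the server and later
        # interpolated from the reference frames on the mobile side.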

  • 313. Aziz, Hussein Muzahim
    et al.
    Fiedler, Markus
    Blekinge Institute of Technology, School of Computing.
    Grahn, Håkan
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Eliminating the Effects of Freezing Frames on User Perceptive by Using a Time Interleaving Technique (2012). In: Multimedia Systems, ISSN 0942-4962, E-ISSN 1432-1882, Vol. 18, no 3, p. 251-262. Article in journal (Refereed)
    Abstract [en]

    Streaming video over a wireless network faces several challenges such as high packet error rates, bandwidth variations, and delays, which could have negative effects on the video streaming and the viewer will perceive a frozen picture for certain durations due to loss of frames. In this study, we propose a Time Interleaving Robust Streaming (TIRS) technique to significantly reduce the frozen video problem and provide a satisfactory quality for the mobile viewer. This is done by reordering the streaming video frames as groups of even and odd frames. The objective of streaming the video in this way is to avoid the losses of a sequence of neighbouring frames in case of a long sequence interruption. We evaluate our approach by using a user panel and mean opinion score (MOS) measurements; where the users observe three levels of frame losses. The results show that our technique significantly improves the smoothness of the video on the mobile device in the presence of frame losses, while the transmitted data are only increased by almost 9% (due to reduced time locality).
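    A simplified sketch of the interleaving idea (the actual TIRS ordering and reconstruction are more elaborate): streaming the even-numbered and odd-numbered frames as separate groups means that a burst loss removes every other frame rather than a run of neighbours, and each missing frame can then be approximated from a received neighbour.

        # Time-interleaving sketch (illustrative only).
        def interleave(frames):
            """Reorder frames as the even-indexed group followed by the odd-indexed group."""
            return frames[0::2] + frames[1::2]

        def reconstruct(frames):
            """Replace lost frames (None) with the nearest received neighbour."""
            out = list(frames)
            for i, frame in enumerate(out):
                if frame is None:
                    if i > 0 and out[i - 1] is not None:
                        out[i] = out[i - 1]
                    elif i + 1 < len(out) and out[i + 1] is not None:
                        out[i] = out[i + 1]
            return out

        print(interleave(["f0", "f1", "f2", "f3", "f4"]))     # ['f0', 'f2', 'f4', 'f1', 'f3']
        print(reconstruct(["f0", None, "f2", None, "f4"]))    # ['f0', 'f0', 'f2', 'f2', 'f4']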

  • 314. Aziz, Hussein Muzahim
    et al.
    Fiedler, Markus
    Grahn, Håkan
    Lundberg, Lars
    Streaming Video as Space-Divided Sub-Frames over Wireless Networks (2010). Conference paper (Refereed)
    Abstract [en]

    Real-time video streaming suffers from lost, delayed, and corrupted frames due to transmission over error-prone channels. As an effect of that, the user may notice a frozen picture on the screen. In this work, we propose a technique to eliminate the frozen video and provide a satisfactory quality to the mobile viewer by splitting the video frames into sub-frames. Multiple description coding (MDC) is used to generate multiple bitstreams based on frame splitting, which are transmitted over multiple channels. We evaluate our approach by using mean opinion score (MOS) measurements. MOS is used to evaluate our scenarios, where the users observe three levels of frame losses for real-time video streaming. The results show that our technique significantly improves the video smoothness on the mobile device in the presence of frame losses during the transmission.

  • 315. Aziz, Hussein Muzahim
    et al.
    Grahn, Håkan
    Lundberg, Lars
    Eliminating the Freezing Frames for the Mobile User over Unreliable Wireless Networks (2009). Conference paper (Refereed)
    Abstract [en]

    The main challenge of real-time video streaming over a wireless network is to provide good quality of service (QoS) to the mobile viewer. However, wireless networks have a limited bandwidth that may not be able to handle the continuous video frame sequence, and video frames could also be dropped or corrupted during the transmission. This could severely affect the video quality. In this study we present a mechanism to eliminate the frozen video and provide a satisfactory quality for the mobile viewer. This is done by splitting the video frames into sub-frames and transmitting them over multiple channels. We present a subjective test, the Mean Opinion Score (MOS). MOS is used to evaluate our scenarios, where the users observe three levels of frame losses for real-time video streaming. The results indicate that our technique significantly improves the perceived video quality.

  • 316. Aziz, Hussein Muzahim
    et al.
    Grahn, Håkan
    Lundberg, Lars
    Sub-Frame Crossing for Streaming Video over Wireless Networks (2010). Conference paper (Refereed)
    Abstract [en]

    Transmitting a real-time video stream over a wireless network cannot guarantee that all the frames are received by the mobile devices. The characteristics of a wireless network in terms of the available bandwidth, frame delay, and frame losses cannot be known in advance. In this work, we propose a new mechanism for streaming video over a wireless channel. The proposed mechanism prevents freezing frames on the mobile devices. This is done by splitting each video frame into two sub-frames and combining them with sub-frames from different sequence positions in the streaming video. In case of a lost or dropped frame, there is still a possibility that the other half (sub-frame) will be received by the mobile device. The received sub-frames are reconstructed to their original shape. A rate adaptation mechanism is also highlighted in this work. We show that the server can skip up to 50% of the sub-frames and we are still able to reconstruct the received sub-frames and eliminate the freezing picture on the mobile device.
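    A simplified sketch of the splitting step only (an illustration; the crossing with sub-frames from other sequence positions and the rate adaptation are omitted): each frame is split into its even and odd pixel rows, and a full-height frame can be approximated from one surviving sub-frame by repeating rows.

        # Sub-frame splitting and single-half reconstruction (illustrative only).
        import numpy as np

        def split_subframes(frame: np.ndarray):
            """Split a 2-D frame into its even-row and odd-row sub-frames."""
            return frame[0::2, :], frame[1::2, :]

        def rebuild_from_subframe(sub: np.ndarray, height: int) -> np.ndarray:
            """Approximate a full-height frame from one sub-frame by duplicating rows."""
            return np.repeat(sub, 2, axis=0)[:height, :]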

  • 317. Aziz, Hussein Muzahim
    et al.
    Lundberg, Lars
    Graceful degradation of mobile video quality over wireless network (2009). Conference paper (Refereed)
    Abstract [en]

    Real-time video transmission over wireless channels has become an important topic in wireless communication because of the limited bandwidth of wireless networks, which have to handle a high amount of video frames. Video frames must arrive at the client before the playout time, with enough time to display the contents of the frames. Real-time video transmission is particularly sensitive to delay as it has a strictly bounded end-to-end delay constraint; video applications impose stringent requirements on communication parameters, and frames lost or dropped due to excessive delay are the primary factors affecting the user-perceived quality. In this study we investigate ways of obtaining a graceful and controlled degradation of the quality, by introducing redundancy in the frame sequence and compensating for this by limiting colour coding and resolution. The effect of this double streaming mechanism is less freezing at the expense of limited colours and resolution. Our experiments are applied to scenarios where users can observe three types of dropping load for real-time video streaming; the mean opinion score is used to evaluate the video quality, and we demonstrate and argue that the proposed technique improves the user-perceived video quality.

  • 318. Aziz, Maryam
    et al.
    Masum, M. E.
    Babu, M. J.
    Rahman, Suhaimi Ab
    Nordberg, Jörgen
    Blekinge Institute of Technology, School of Computing.
    Mobility impact on the end-to-end delay performance for VoIP over LTE (2012). In: Procedia Engineering, Coimbatore: Elsevier, 2012, Vol. 30, p. 491-498. Conference paper (Refereed)
    Abstract [en]

    Long Term Evolution (LTE) is the last step towards the 4th generation of cellular networks. This revolution is necessitated by the unceasing increase in demand for high speed connection on LTE networks. This paper focuses on the performance evaluation of End-to-End delay under variable mobility speed for VoIP (Voice over IP) in the LTE network. In the course of E2E performance evaluation, realizing simulation approach three scenarios have been modeled using OPNET 16.0. The first one is the baseline network while among other two, one consists of VoIP traffic solely and the other consists of FTP along with VoIP. E2E delay has been measured for both scenarios in various cases under the varying mobility speed of the node. Simulation results have been studied and presented in terms of comparative performance analysis of the three network scenarios. In light of the result analysis, the performance quality of a VoIP network (with and without the presence of additional network traffic) in LTE has been determined and discussed. The simulation results for baseline VoIP network (non-congested) congested VoIP network and congested VoIP with FTP network show that as the speed of node is gradually increased, E2E delay slightly increases.

  • 319.
    Aziz, Md. Tariq
    et al.
    Blekinge Institute of Technology, School of Computing.
    Islam, Mohammad Saiful
    Blekinge Institute of Technology, School of Computing.
    Performance Evaluation of Real–Time Applications over DiffServ/MPLS in IPv4/IPv6 Networks (2011). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Over the last years, we have witnessed a rapid deployment of real-time applications on the Internet as well as many research works about Quality of Service (QoS), particularly in IPv4 (Internet Protocol version 4). The inevitable exhaustion of the remaining IPv4 address pool has become progressively evident. As the evolution of the Internet Protocol (IP) continues, the deployment of IPv6 QoS is underway. Today, there is limited experience in the deployment of QoS for IPv6 traffic in MPLS backbone networks in conjunction with DiffServ (Differentiated Services) support. DiffServ itself does not have the ability to control the traffic along the end-to-end path when a number of links of the path are congested. In contrast, MPLS Traffic Engineering (TE) can control the traffic and set up the end-to-end routing path before data is forwarded. From the evolution of IPv4 QoS solutions, we know that the integration of DiffServ and MPLS TE satisfies the guaranteed QoS requirement for real-time applications. This thesis presents a QoS performance study of real-time applications such as voice and video conferencing over DiffServ with or without MPLS TE in IPv4/IPv6 networks using the Optimized Network Engineering Tool (OPNET). This thesis also studies the interaction of Expedited Forwarding (EF) and Assured Forwarding (AF) traffic aggregation and link congestion, as well as the effect on various performance metrics such as packet end-to-end delay, packet delay variation, queuing delay, throughput and packet loss. The effectiveness of DiffServ and MPLS TE integration in IPv4/IPv6 networks is illustrated and analyzed. The thesis shows that IPv6 experiences more delay and worse loss performance than its IPv4 counterpart.

  • 320.
    AZIZ, YASSAR
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    ASLAM, MUHAMMAD NAEEM
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    Traffic Engineering with Multi-Protocol Label Switching, Performance Comparison with IP networks (2008). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Traffic Engineering (TE) deals with the geometric design planning and traffic operation of networks, network devices and the relationship of routers for the transportation of data. TE is the feature of network engineering which concentrates on problems of performance optimization of operational networks. It involves techniques and the application of knowledge to attain performance objectives, which include the movement of data through the network, reliability, planning of network capacity and efficient use of network resources. This thesis addresses the problems of traffic engineering and suggests a solution by using the concept of Multi-Protocol Label Switching (MPLS). We have performed simulations in a Matlab environment to compare the performance of MPLS against an IP network. MPLS is a modern technique for forwarding network data; it extends routing with path control and packet forwarding. In this thesis MPLS is evaluated on the basis of its performance and efficiency in sending data from source to destination. A MATLAB-based simulation tool is developed to compare MPLS with an IP network in a simulated environment. The results show the performance of the MPLS network in comparison with the IP network.

  • 321.
    Babaeeghazvini, Parinaz
    Blekinge Institute of Technology, School of Engineering.
    EEG enhancement for EEG source localization in brain-machine speller (2013). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    A Brain-Computer Interface (BCI) is a system for communicating with the external world through brain activity. The brain activity is measured by Electro-Encephalography (EEG) and then processed by a BCI system. EEG source reconstruction could be a way to improve the accuracy of EEG classification in an EEG-based brain-computer interface (BCI). In this thesis, BCI methods were applied to derived sources whose EEG enhancement made it possible to obtain more accurate EEG detection, and brought a new application to BCI technology: recognition of letter-writing imagery from brain waves. The BCI system enables people to write and type letters by their brain activity (EEG). To this end, the first part of the thesis is dedicated to EEG source reconstruction techniques to select the most optimal EEG channels for task classification purposes. For this purpose, the changes in EEG signal power from the rest state to the motor imagery task were used to find the location of an active single equivalent dipole. Implementing an inverse problem solution on the power changes by the Multiple Sparse Priors (MSP) method generated a scalp map whose fitting showed the localization of EEG electrodes. Having the optimized locations, the secondary objective was to choose the most optimal EEG features and rhythm for an efficient classification. This became possible by feature ranking using 1-Nearest Neighbor leave-one-out. The feature vectors were computed by applying the combined multitaper and Welch (pwelch) methods. The features were classified by several methods: a normal-densities-based quadratic classifier (qdc), a k-nearest neighbor classifier (knn), mixture-of-Gaussians classification, and a neural network classifier trained using back-propagation. Results show that the selected features and classifiers are able to recognize the imagination of writing letters of the alphabet with high accuracy.
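    As an illustration of the kind of spectral feature referred to above (the thesis combines multitaper and Welch estimates; only a plain Welch estimate via SciPy is sketched here, and the 8-12 Hz band is an assumed mu-rhythm band): the band power of one EEG channel can serve as a classification feature.

        # Band-power feature from one EEG channel using Welch's PSD estimate.
        import numpy as np
        from scipy.signal import welch

        def band_power(eeg: np.ndarray, fs: float, lo: float = 8.0, hi: float = 12.0) -> float:
            """eeg: 1-D samples of one channel; fs: sampling frequency in Hz."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-second segments
            band = (freqs >= lo) & (freqs <= hi)
            return float(np.trapz(psd[band], freqs[band]))        # integrate PSD over the band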

  • 322.
    Babar, Shahzad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Mehmood, Aamer
    Blekinge Institute of Technology, School of Computing.
    Enhancing Accessibility of Web Based GIS Applications through User Centered Design (2010). Independent thesis Advanced level (degree of Master (Two Years)), Student thesis
    Abstract [en]

    Web accessibility emerged as a problem when disabled and elderly people started interacting with web content soon after the inception of the World Wide Web. When web-based GIS applications appeared on the web and the number of users of these kinds of applications increased, these applications faced a similar accessibility problem. The intensity of web accessibility problems in GIS-based applications has increased rapidly during recent years due to the extensive interaction of users with maps. Web accessibility problems faced by users of GIS applications are identified by content evaluation and user interaction. Users are involved in the identification of accessibility problems because guidelines and automated tools are not sufficient for that purpose. A User Centered Design approach is used to include users in the development process, and this has also helped in identifying the users' accessibility problems at early stages. The thesis report identifies the accessibility issues in web-based GIS applications by content evaluation and user interaction evaluation. MapQuest, a web-based GIS application, is taken as a case study to identify the web accessibility problems in GIS applications. The report also studies how the accessibility of web-based GIS applications can be enhanced by using the UCD approach in the development process of GIS applications.

  • 323. Baca, Dejan
    Automated static code analysis: A tool for early vulnerability detection (2009). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software vulnerabilities are added into programs during their development. Architectural flaws are introduced during planning and design, while implementation faults are created during coding. Penetration testing is often used to detect these vulnerabilities. This approach is expensive because it is performed late in development and any correction would increase lead time. An alternative is to detect and correct vulnerabilities in the phase of development where they are the least expensive to detect and correct. Source code audits have often been suggested and used to detect implementation vulnerabilities. However, manual audits are time consuming and require extensive expertise to be efficient. A static code analysis tool could achieve the same results as a manual audit but in a fraction of the time. Through a set of case studies and experiments at Ericsson AB, this thesis investigates the technical capabilities and limitations of using a static analysis tool as an early vulnerability detector. The investigation is extended to studying the human factor by examining how the developers interact with and use the static analysis tool. The contributions of this thesis include the identification of the tool's capabilities so that further security improvements can focus on other types of vulnerabilities. By using static analysis early in development, possible cost-saving measures are identified. Additionally, the thesis presents the limitations of static code analysis, the most important limitation being the incorrect warnings that are reported by static analysis tools. In addition, a development process overhead was deemed necessary to successfully use static analysis in an industry setting.

  • 324.
    Baca, Dejan
    Blekinge Institute of Technology, School of Computing.
    Developing Secure Software: in an Agile Process (2012). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Software developers are facing increased pressure to lower development time, release new software versions more frequently to customers and adapt to a faster market. This new environment forces developers and companies to move from a plan-based waterfall development process to a flexible agile process. By minimizing the pre-development planning and instead increasing the communication between customers and developers, the agile process tries to create a new, more flexible way of working. This new way of working allows developers to focus their efforts on the features that customers want. With increased connectivity and the faster feature release, the security of the software product is stressed. To develop secure software, many companies use security engineering processes that are plan-heavy and inflexible. These two approaches are each other's opposites and they directly contradict each other. Objective: The objective of the thesis is to evaluate how to develop secure software in an agile process. In particular, which existing best practices can be incorporated into an agile project and still provide the same benefit as if the project were using a waterfall process, and how the best practices can be incorporated and adapted to fit the process while still measuring the improvement. Some security engineering concepts are useful but the best practice is not agile compatible and would require extensive adaptation to integrate with an agile project. Method: The primary research method used throughout the thesis is case studies conducted in a real industry setting. As secondary methods for data collection a variety of approaches have been used, such as semi-structured interviews, workshops, study of literature, and use of historical data from the industry. Results: The security engineering best practices were investigated through a series of case studies. The basic agile and security engineering compatibility was assessed in literature, by developers and in practical studies. The security engineering best practices were grouped based on their purpose and their compatibility with the agile process. One well-known and popular best practice, automated static code analysis, was thoroughly investigated for its usefulness, deployment and the risks of its use as part of the process. For the risk analysis practices, a novel approach was introduced and improved. As such, a way of adapting existing practices to agile is proposed. Conclusion: With regard to agile and security engineering we did not find that any of the investigated processes was agile compatible. Agile is reaction driven and adapts to change, while the security engineering processes are proactive and try to prevent threats before they happen. To develop secure software in an agile process the developers should adopt and adapt key concepts from security engineering. These changes will affect the flexibility of the agile process but they are a necessity if developers want the same software security state that security engineering processes can provide.

  • 325. Baca, Dejan
    Identifying Security Relevant Warnings from Static Code Analysis Tools through Code Tainting (2010). Conference paper (Refereed)
    Abstract [en]

    Static code analysis tools are often used by developers as early vulnerability detectors. Due to their automation they are less time-consuming and error-prone than manual reviews. However, they produce large quantities of warnings that developers have to manually examine and understand. In this paper, we look at a solution that makes static code analysis tools more useful as early vulnerability detectors. We use flow-sensitive, interprocedural and context-sensitive data flow analysis to determine the point of user input and its migration through the source code to the actual exploit. By determining a vulnerability's point of entry we lower the number of warnings a tool produces and we provide the developer with more information about why a warning could be a real security threat. We use our approach in three different ways depending on the tool examined. First, with the commercial static code analysis tool Coverity, we reanalyze its results and create a set of warnings that are specifically relevant from a security perspective. Secondly, we altered the open source analysis tool FindBugs to only analyze code that has been tainted by user input. Third, we created our own analysis tool that focuses on XSS vulnerabilities in Java code.
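    A toy illustration of the tainting idea (far simpler than the flow-sensitive, interprocedural, context-sensitive analysis in the paper; all names are hypothetical): taint is introduced at user-input sources, propagated through assignments, and a warning is kept only if tainted data can reach a security-relevant sink.

        # Toy taint propagation used to filter static-analysis warnings (illustrative only).
        SOURCES = {"read_request_param"}            # where user input enters the program
        SINKS = {"run_sql", "write_html"}           # where tainted data becomes exploitable

        def filter_security_warnings(assignments, warnings):
            """assignments: ordered (target, source) pairs; warnings: dicts with 'var' and 'sink'."""
            tainted = set()
            for target, source in assignments:
                if source in SOURCES or source in tainted:
                    tainted.add(target)             # taint flows through the assignment
            return [w for w in warnings if w["var"] in tainted and w["sink"] in SINKS]

        flow = [("q", "read_request_param"), ("sql", "q"), ("n", "local_constant")]
        warns = [{"var": "sql", "sink": "run_sql"}, {"var": "n", "sink": "run_sql"}]
        print(filter_security_warnings(flow, warns))   # only the warning about 'sql' remains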

  • 326. Baca, Dejan
    et al.
    Boldt, Martin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Carlsson, Bengt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Jacobsson, Andreas
    A Novel Security-Enhanced Agile Software Development Process Applied in an Industrial Setting (2015). In: Proceedings 10th International Conference on Availability, Reliability and Security ARES 2015, IEEE Computer Society Digital Library, 2015. Conference paper (Refereed)
    Abstract [en]

    A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., the baseline process of the comparison outlined in this paper, required 2.7 employee hours spent for every risk identified in the analysis process compared to, on the average, 1.5 hours for the SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., more than a five times increment. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure, i.e., for security, was 1% in the previous development process.

  • 327. Baca, Dejan
    et al.
    Carlsson, Bengt
    Agile development with security engineering activities (2011). Conference paper (Refereed)
    Abstract [en]

    Agile software development has been used by industry to create a more flexible and lean software development process, i.e. making it possible to develop software at a faster rate and with more agility during development. There are, however, concerns that the higher development pace and lack of documentation are creating less secure software. We have therefore looked at three known Security Engineering processes, Microsoft SDL, Cigital Touchpoints and Common Criteria, and identified which specific security activities they perform. We then compared these activities with an Agile development process that is used in industry. Developers from a large telecommunication manufacturer were interviewed to learn their impressions of using these security activities in an agile development process. We produced a security-enhanced Agile development process that we present in this paper. This new Agile process uses activities from already established security engineering processes that provide the benefits the developers wanted but do not hinder or obstruct the Agile process in a significant way.

  • 328. Baca, Dejan
    et al.
    Carlsson, Bengt
    Lundberg, Lars
    Evaluating the Cost Reduction of Static Code Analysis for Software Security (2008). Conference paper (Refereed)
    Abstract [en]

    Automated static code analysis is an efficient technique to increase the quality of software during early development. This paper presents a case study in which mature software with known vulnerabilities is subjected to a static analysis tool. The value of the tool is estimated based on reported failures from customers. An average of 17% cost savings would have been possible if the static analysis tool had been used. The tool also had a 30% success rate in detecting known vulnerabilities and at the same time found 59 new vulnerabilities in the three examined products.

  • 329.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Carlsson, Bengt
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Improving software security with static automated code analysis in an industry setting (2013). In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 43, no 3, p. 259-279. Article in journal (Refereed)
    Abstract [en]

    Software security can be improved by identifying and correcting vulnerabilities. In order to reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach for early detection. So far, only a few empirical studies have been conducted in an industrial context to evaluate static automated code analysis. A case study was conducted to evaluate static code analysis in industry, focusing on defect detection capability, deployment, and usage of static automated code analysis with a focus on software security. We identified that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, but so did the developers' perception of the tool's merit. Classifying the warnings from the tool was harder for the developers than correcting them. The correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose to use multiple persons (at least two) in classifying a warning. The same goes for making the decision of how to act based on the warning.

  • 330.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Countermeasure graphs for software security risk assessment: An action research (2013). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 86, no 9, p. 2411-2428. Article in journal (Refereed)
    Abstract [en]

    Software security risk analysis is an important part of improving software quality. In previous research we proposed countermeasure graphs (CGs), an approach to conduct risk analysis, combining the ideas of different risk analysis approaches. The approach was designed for reuse and easy evolvability to support agile software development. CGs have not been evaluated in industry practice in agile software development. In this research we evaluate the ability of CGs to support practitioners in identifying the most critical threats and countermeasures. The research method used is participatory action research where CGs were evaluated in a series of risk analyses on four different telecom products. With Peltier (used prior to the use of CGs at the company) the practitioners identified attacks with low to medium risk level. CGs allowed practitioners to identify more serious risks (in the first iteration 1 serious threat, 5 high risk threats, and 11 medium threats). The need for tool support was identified very early, tool support allowed the practitioners to play through scenarios of which countermeasures to implement, and supported reuse. The results indicate that CGs support practitioners in identifying high risk security threats, work well in an agile software development context, and are cost-effective.

  • 331. Baca, Dejan
    et al.
    Petersen, Kai
    Prioritizing Countermeasures through the Countermeasure Method for Software Security (CM-Sec) (2010). Conference paper (Refereed)
    Abstract [en]

    Software security is an important quality aspect of a software system. Therefore, it is important to integrate software security touch points throughout the development life-cycle. So far, the focus of touch points in the early phases has been on the identification of threats and attacks. In this paper we propose a novel method focusing on the end product by prioritizing countermeasures. The method provides an extension to attack trees and a process for identification and prioritization of countermeasures. The approach has been applied on an open-source application and showed that countermeasures could be identified. Furthermore, an analysis of the effectiveness and cost-efficiency of the countermeasures could be provided.

  • 332. Baca, Dejan
    et al.
    Petersen, Kai
    Carlsson, Bengt
    Lundberg, Lars
    Static Code Analysis to Detect Software Security Vulnerabilities: Does Experience Matter? 2009. Conference paper (Refereed)
    Abstract [en]

    Code reviews with static analysis tools are today recommended by several security development processes. Developers are expected to use the tools' output to detect the security threats they themselves have introduced in the source code. This approach assumes that all developers can correctly identify a warning from a static analysis tool (SAT) as a security threat that needs to be corrected. We have conducted an industry experiment with a state-of-the-art static analysis tool and real vulnerabilities. We found that average developers do not correctly identify the security warnings, and that only developers with specific experience are better than chance in detecting the security vulnerabilities. Specific SAT experience more than doubled the number of correct answers, and a combination of security experience and SAT experience almost tripled the number of correct security answers.
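
    The claim "better than chance" suggests a simple statistical check; assuming a binary vulnerable/not-vulnerable judgement per warning (so chance corresponds to p = 0.5), one developer's answers could be tested roughly as follows (numbers invented):

        # Sketch: is a developer's classification accuracy better than chance (p = 0.5)?
        from scipy.stats import binomtest

        correct, total = 34, 50  # hypothetical answers from one developer
        result = binomtest(correct, total, p=0.5, alternative="greater")
        print(f"accuracy = {correct / total:.2f}, one-sided p-value = {result.pvalue:.4f}")
        # A small p-value indicates performance better than chance.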

  • 333.
    Bachu, Rajesh
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A framework to migrate and replicate VMware Virtual Machines to Amazon Elastic Compute Cloud: Performance comparison between on premise and the migrated Virtual Machine. 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Cloud Computing is the new trend in the IT industry. Traditionally, obtaining servers was quite time-consuming for companies. The whole process of researching what kind of hardware to buy, getting budget approval, purchasing the hardware and getting access to the servers could take weeks or months. In order to save time and reduce expenses, most companies are moving towards the cloud. One of the well-known cloud providers is Amazon Elastic Compute Cloud (EC2). Amazon EC2 makes it easy for companies to obtain virtual servers (known as compute instances) in a cloud quickly and inexpensively. Another advantage of using Amazon EC2 is the flexibility it offers: companies can even import/export the Virtual Machines (VMs) that they have built, which meet the companies' IT security, configuration, management and compliance requirements, into Amazon EC2.

    Objectives: In this thesis, we investigate importing a VM running on VMware into Amazon EC2. In addition, we make a performance comparison between a VM running on VMware and a VM with the same image running on Amazon EC2.

    Methods: Case study research was used to select a suitable method to migrate VMware VMs to Amazon EC2. In addition, an experiment was conducted to measure the performance of the Virtual Machine running on VMware and compare it with the same Virtual Machine running on EC2. We measure performance in terms of CPU and memory utilization as well as disk read/write speed, using well-known open-source benchmarks from the Phoronix Test Suite (PTS).

    Results: Importing VM snapshots (VMDK, VHD and RAW formats) to EC2 was investigated using three methods provided by AWS. Performance was compared by running each benchmark 25 times on each Virtual Machine.

    Conclusions: Importing the VM to EC2 was successful only with the RAW format, and replication was not successful because AWS installs some software and drivers while importing the VM to EC2. The migrated EC2 VM performs better than the on-premise VMware VM in terms of CPU and memory utilization and disk read/write speed.
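
    For readers unfamiliar with the import step, a minimal sketch using boto3 is shown below; the bucket, key and region are placeholders, and the sketch assumes the disk image has already been uploaded to S3 in RAW format (the format reported as successful in this thesis):

        # Sketch of importing a RAW disk image from S3 into EC2 with boto3 (placeholder names).
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        task = ec2.import_image(
            Description="on-premise VMware VM",
            DiskContainers=[{
                "Description": "root volume",
                "Format": "raw",
                "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "vm-disk.raw"},
            }],
        )

        # Check the import task until the resulting AMI is ready.
        status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
        print(status["ImportImageTasks"][0]["Status"])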

  • 334. Bachu, Yashwanth
    Packaging Demand Forecasting in Logistics using Deep Neural Networks. 2019. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background: Logistics plays a vital role in supply chain management, and logistics operations depend on the availability of packaging material for packing goods and material to be shipped. Forecasting packaging material demand over a long period of time helps the organization plan to meet the demand. This research proposes using time-series data with Deep Neural Networks (DNNs) for long-term forecasting. Objectives: This study identifies DNNs used for forecasting packaging demand and for similar problems with data comparable to that available at the organization (Volvo), identifies the best-practice approach for long-term forecasting, and then combines this approach with the identified and selected DNNs for forecasting. The end objective of the thesis is to suggest the best DNN model for packaging demand forecasting. Methods: An experiment was conducted to evaluate the DNN models selected for demand forecasting. Three models were selected through a preliminary systematic literature review. Another systematic literature review was performed in parallel to identify metrics for evaluating the models' performance. Results from the preliminary literature review were instrumental in performing the experiment. Results: All three models observed in this study perform well and produce reasonable forecasts. Given the type and amount of historical data the models were given to learn from, the three models differ only slightly in their forecasting performance measures. Comparisons were made using the measures selected in the literature review. For a better understanding of the impact of batch size on model performance, the three models were trained with two different batch sizes. Conclusions: The proposed models produce reasonable forecasts of packaging demand for planning the next 52 weeks (∼ 1 year). The results show that by adopting DNNs for forecasting, reliable packaging demand can be forecasted from time-series data for packaging material. The CNN-LSTM combination performs better than the respective individual models by a small margin. Extending the forecasting to a more granular level of the supply chain (individual suppliers and plants) would benefit the organization by controlling inventory and avoiding excess inventory.
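
    A minimal sketch of the CNN-LSTM combination evaluated in the thesis is given below; the window length, forecast horizon, layer sizes and training data are assumptions for illustration, not the thesis configuration:

        # Sketch of a CNN-LSTM forecaster for weekly packaging demand (illustrative settings).
        import numpy as np
        from tensorflow.keras import Sequential
        from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

        WINDOW, HORIZON = 104, 52                 # look back ~2 years, forecast 52 weeks

        model = Sequential([
            Conv1D(32, kernel_size=3, activation="relu", input_shape=(WINDOW, 1)),
            MaxPooling1D(pool_size=2),
            LSTM(64),
            Dense(HORIZON),                       # one output per forecasted week
        ])
        model.compile(optimizer="adam", loss="mse")

        # Dummy data: 500 sliding windows of weekly demand (replace with the real series).
        X = np.random.rand(500, WINDOW, 1)
        y = np.random.rand(500, HORIZON)
        model.fit(X, y, epochs=2, batch_size=32, verbose=0)
        print(model.predict(X[:1]).shape)         # (1, 52)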

  • 335. Badampudi, Deepika
    Decision-making support for choosing among different component origins. 2018. Doctoral thesis, comprehensive summary (Other academic)
  • 336.
    Badampudi, Deepika
    Blekinge Institute of Technology, School of Computing.
    Factors Affecting Efficiency of Agile Planning: A Case Study. 2012. Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Context: Planning in software projects is a difficult problem due to the uncertainty associated with it. There are many factors that make it difficult to formulate a plan. Few of the factors that influence the efficiency of planning have been identified in previous studies; the literature focuses only on technical aspects such as requirements selection and estimation in order to plan a release or iteration. Objectives: The objective of this study is to identify factors that affect planning efficiency. The context in which the objective is achieved is large-scale complex projects that are distributed across multiple teams at multiple global sites. The motivation for selecting a large-scale context is that most of the existing release planning approaches discussed in the literature were investigated in small-scale projects; this context therefore allows studying the planning process in large-scale industry. Methods: A case study was conducted at Siemens' Development Centre in Bangalore, India. A total of 15 interviews were conducted to investigate the planning process adopted by Siemens. To achieve triangulation, process documents such as release planning documents were studied and direct observation of the planning meeting was performed; multiple sources were thus used to collect evidence. Results: The identified challenges are grouped into technical and non-technical categories. In total, 9 technical factors and 11 non-technical factors were identified. The identified factors are also classified based on the context in which they affect planning. In addition, 6 effects of the factors are identified, and improvements perceived by the participants are discussed in this study.

  • 337.
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reporting Ethics Considerations in Software Engineering Publications. 2017. In: 11TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON EMPIRICAL SOFTWARE ENGINEERING AND MEASUREMENT (ESEM 2017), IEEE, 2017, p. 205-210. Conference paper (Refereed)
    Abstract [en]

    Ethical guidelines of software engineering journals require authors to provide statements related to conflicts of interest and the process of obtaining consent (if human subjects are involved). The objective of this study is to review the reporting of ethical considerations in Empirical Software Engineering - An International Journal. The results indicate that two out of seven studies reported some ethical information, although not explicitly. The ethical discussions focused on anonymity and confidentiality. Ethical aspects such as competence, comprehensibility and vulnerability of the subjects were not discussed in any of the papers reviewed in this study. It is important not only to state that consent was obtained, but also to report the procedure of obtaining consent, in order to improve accountability and trust.

  • 338.
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards decision-making to choose among different component origins. 2016. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software solutions, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays or cost overruns related to the software development.  

    Objective: The overall goal of this thesis is to improve the software development/building process to provide timely, high quality and cost efficient solutions. The objective is to select the origin of the components (in-house, outsource, components off-the-shelf (COTS) or open source software (OSS)) that facilitates the improvement. The system can be built of components from one origin or a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of a component and to optimize the development. 

    Method: It is necessary to investigate the component origins to make decisions to select among different origins. We conducted a case study to explore the existing challenges in software development.  The next step was to identify factors that influence the choice to select among different component origins through a systematic literature review using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from literature into practice.  

    Results: The results of this thesis indicate that the context of software-intensive systems such as domain regulations hinder the software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS, and OSS) are being used for software development. Several factors such as time, cost and license implications influence the selection of component origins. Solutions have been proposed to support the decision-making. However, these solutions consider only a subset of factors identified in the literature.   

    Conclusions: Each component origin has some advantages and disadvantages. Depending on the scenario, one component origin is more suitable than the others. It is important to investigate the different scenarios and suitability of the component origins, which is recognized as future work of this thesis. In addition, the future work is aimed at providing models to support the decision-making process.

  • 339.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Modern code reviews - Preliminary results of a systematic mapping study. 2019. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 340-345. Conference paper (Refereed)
    Abstract [en]

    Reviewing source code is a common practice in a modern and collaborative coding environment. In the past few years, research on modern code reviews has gained interest among practitioners and researchers. The objective of our investigation is to observe the evolution of research related to modern code reviews, identify research gaps and serve as a basis for future research. We use a systematic mapping approach to identify and classify 177 research papers. As a preliminary result of our investigation, we present in this paper a classification scheme of the main contributions of modern code review research between 2005 and 2018. © 2019 Association for Computing Machinery.

  • 340.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Component Decision-making: In-house, OSS, COTS or Outsourcing: A Systematic Literature Review. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, p. 105-124. Article in journal (Refereed)
    Abstract [en]

    Component-based software systems require decisions on component origins for acquiring components. A component origin is an alternative for where to get a component from. Objective: To identify factors that could influence the decision to choose among different component origins, and solutions for decision-making (for example, optimization), in the literature. Method: A systematic review of peer-reviewed literature has been conducted. Results: In total we included 24 primary studies. The component origins compared were mainly in-house vs. COTS and COTS vs. OSS. We identified 11 factors affecting or influencing the decision to select a component origin. When component origins were compared, there was little evidence on the relative (either positive or negative) effect of a component origin on the factors. Most of the solutions were proposed for in-house vs. COTS selection, and time, cost and reliability were the factors most considered in the solutions. Optimization models were the technique most commonly used in the solutions. Conclusion: The topic of choosing component origins is a green field for research, and in great need of empirical comparisons between the component origins, as well as of how to decide between different combinations of them.
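
    One simple instance of the kind of decision support the review surveys is a weighted-scoring comparison of component origins over the most considered factors (time, cost, reliability); the weights and scores below are invented for illustration:

        # Toy weighted-scoring comparison of component origins (all numbers invented).
        weights = {"time": 0.4, "cost": 0.3, "reliability": 0.3}

        origins = {  # scores on a 1-5 scale, higher is better for the acquiring company
            "in-house": {"time": 2, "cost": 2, "reliability": 5},
            "COTS": {"time": 4, "cost": 3, "reliability": 4},
            "OSS": {"time": 4, "cost": 5, "reliability": 3},
            "outsourced": {"time": 3, "cost": 3, "reliability": 3},
        }

        totals = {o: sum(weights[f] * s for f, s in scores.items()) for o, scores in origins.items()}
        for origin, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{origin:<10} {total:.2f}")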

  • 341.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, School of Computing.
    Fricker, Samuel
    Blekinge Institute of Technology, School of Computing.
    Moreno, Ana
    Perspectives on Productivity and Delays in Large-Scale Agile Projects. 2013. Conference paper (Refereed)
    Abstract [en]

    Many large and distributed companies run agile projects in development environments that are inconsistent with the original agile ideas. Problems that result from these inconsistencies can affect the productivity of development projects and the timeliness of releases. To be effective in such contexts, the agile ideas need to be adapted. We take an inductive approach for reaching this aim by basing the design of the development process on observations of how context, practices, challenges, and impacts interact. This paper reports the results of an interview study of five agile development projects in an environment that was unfavorable for agile principles. Grounded theory was used to identify the challenges of these projects and how these challenges affected productivity and delays according to the involved project roles. Productivity and delay-influencing factors were discovered that related to requirements creation and use, collaboration, knowledge management, and the application domain. The practitioners’ explanations about the factors' impacts are, on one hand, a rich empirical source for avoiding and mitigating productivity and delay problems and, on the other hand, a good starting point for further research on flexible large-scale development.

  • 342.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Franke, Ulrik
    Swedish Institute of Computer Science, SWE.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cicchetti, Antonio
    Mälardalens högskola, SWE.
    A decision-making process-line for selection of software asset origins and components. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, p. 88-104. Article in journal (Refereed)
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, which are described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution in our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted and the activities that were conducted were not executed in a specific order. The refinement of the solution into a process-line approach therefore increases flexibility and better captures the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company. © 2017 Elsevier Inc.

  • 343.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bayesian Synthesis for Knowledge Translation in Software Engineering: Method and Illustration. 2016. In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Systematic literature reviews in software engineering are necessary to synthesize evidence from multiple studies to provide knowledge and decision support. However, synthesis methods are underutilized in software engineering research. Moreover, translation of synthesized data (the outcomes of a systematic review) into recommendations for practitioners is seldom practiced. The objective of this paper is to introduce the use of Bayesian synthesis in software engineering research, in particular to translate research evidence into practice by providing the possibility to combine contextualized expert opinions with research evidence. We adopted the Bayesian synthesis method from health research and customized it for use in software engineering research. The proposed method is described and illustrated using an example from the literature. Bayesian synthesis provides a systematic approach to incorporating subjective opinions in the synthesis process, thereby making the synthesis results more suitable to the context in which they will be applied and facilitating the interpretation and translation of knowledge into action/application. None of the synthesis methods used in software engineering allows for the integration of subjective opinions; hence, Bayesian synthesis can add a new dimension to the synthesis process in software engineering research.
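
    As a rough picture of the underlying idea (not the specific procedure in the paper), a conjugate Beta-Binomial update shows how a contextualized expert prior can be combined with evidence from primary studies; all numbers below are invented:

        # Illustration: combining an expert prior with study evidence via a Beta-Binomial update.
        from scipy.stats import beta

        a_prior, b_prior = 7, 3        # expert prior: practice succeeds ~70% of the time
        successes, failures = 18, 7    # evidence: 18 of 25 reported applications succeeded

        posterior = beta(a_prior + successes, b_prior + failures)
        print(f"posterior mean = {posterior.mean():.2f}, "
              f"95% interval = ({posterior.ppf(0.025):.2f}, {posterior.ppf(0.975):.2f})")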

  • 344.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Contextualizing research evidence through knowledge translation in software engineering. 2019. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2019, p. 306-311. Conference paper (Refereed)
    Abstract [en]

    Usage of software engineering research in industrial practice is a well-known challenge. Synthesis of knowledge from multiple research studies is needed to provide evidence-based decision support for industry. The objective of this paper is to present a vision of what a knowledge translation framework may look like in software engineering research, in particular how to translate research evidence into practice by combining contextualized expert opinions with research evidence. We adopted the framework of knowledge translation from health care research and adapted and combined it with a Bayesian synthesis method. The framework provided in this paper includes a description of each step of knowledge translation in software engineering. Knowledge translation using Bayesian synthesis intends to provide a systematic approach towards contextualized, collaborative and consensus-driven application of research results. In conclusion, this paper contributes towards the application of knowledge translation in software engineering through the presented framework. © 2019 Association for Computing Machinery.

  • 345.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Guidelines for Knowledge Translation in Software Engineering. Article in journal (Refereed)
  • 346.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Using Snowballing and Database Searches in Systematic Literature Studies. 2015. Conference paper (Refereed)
    Abstract [en]

    Background: Systematic literature studies are commonly used in software engineering. There are two main ways of conducting the searches for these types of studies: snowballing and database searches. In snowballing, the reference lists (backward snowballing - BSB) and citations (forward snowballing - FSB) of relevant papers are reviewed to identify new papers, whereas in a database search, different databases are searched using predefined search strings to identify new papers. Objective: Snowballing has not been used as extensively as database searches. Hence it is important to evaluate its efficiency and reliability when used as a search strategy in literature studies, and to compare it to database searches. Method: In this paper, we applied snowballing in a literature study and reflected on the outcome. We also compared database search with backward and forward snowballing. Database search and snowballing were conducted independently by different researchers. The searches of our literature study were compared with respect to the efficiency and reliability of the findings. Results: Out of the total number of papers found, snowballing identified 83% of the papers compared to 46% for the database search. Snowballing failed to identify a few relevant papers, which potentially could have been addressed by identifying a more comprehensive start set. Conclusion: The efficiency of snowballing is comparable to database search. It can potentially be more reliable than a database search; however, the reliability is highly dependent on the creation of a suitable start set.
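
    The reported coverage figures correspond to a simple set comparison of the papers each strategy finds relative to the union of relevant papers; a sketch with made-up paper identifiers:

        # Sketch: coverage of two search strategies relative to all relevant papers found.
        snowballing = {"P1", "P2", "P3", "P5", "P6", "P8", "P9", "P10"}
        database = {"P2", "P4", "P5", "P7", "P9"}

        all_relevant = snowballing | database
        for name, found in (("snowballing", snowballing), ("database search", database)):
            print(f"{name:<15} found {len(found) / len(all_relevant):.0%} of {len(all_relevant)} relevant papers")
        print("found by only one strategy:", sorted(snowballing ^ database))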

  • 347.
    Bahrieh, Sara
    Blekinge Institute of Technology, School of Engineering.
    Sensor Central / Automotive Systems. 2013. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [en]

    How can objects detected by different devices be displayed in one coordinate system? Nowadays most vehicles are equipped with front and rear sensors to assist the driver in the driving process. Companies that provide this technology need an application that enables easy data fusion from these sensors and recording of the process. Besides sensor design, programming the sensors is an important aspect. BASELABS Connect offers a solution in a user-friendly way. Creating the Sensor Central component for BASELABS Connect is the main goal of this thesis. Sensor Central from BASELABS Connect requires six variables describing each sensor's position in order to map the objects from all sensors into one common coordinate system. This thesis set out to create such a component, mounted between all the sensors and the charting component, to convert object locations from the different sensors' positions into one coordinate system and to be usable from other vehicles as well.
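
    The core conversion the component performs can be sketched as a rigid transformation from a sensor's local frame into the vehicle frame; the thesis uses six pose variables per sensor, while the toy example below assumes a planar case (x, y, yaw) for brevity:

        # Sketch: mapping a detected object from a sensor's frame into the vehicle frame (2D case).
        import numpy as np

        def sensor_to_vehicle(obj_xy, sensor_x, sensor_y, sensor_yaw):
            """Rotate by the sensor's yaw, then translate by its mounting position."""
            c, s = np.cos(sensor_yaw), np.sin(sensor_yaw)
            T = np.array([[c, -s, sensor_x],
                          [s,  c, sensor_y],
                          [0.0, 0.0, 1.0]])
            x, y = obj_xy
            return (T @ np.array([x, y, 1.0]))[:2]

        # Object 10 m ahead of a rear sensor mounted at (-3.8, 0) and facing backwards (pi rad).
        print(sensor_to_vehicle((10.0, 0.0), sensor_x=-3.8, sensor_y=0.0, sensor_yaw=np.pi))
        # -> approximately [-13.8, 0.0] in the vehicle frame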

  • 348. Bai, Guohua
    A Sociocybernetic Model of Sustainable Social Systems. 2009. Conference paper (Refereed)
    Abstract [en]

    The ongoing economic crisis around the world calls for a theoretical understanding and deep analysis of what has gone wrong in our economic system in particular and our social system as a whole. Discussions in many forums and mass media have mostly focused on a level of first-order cause-effects, such as the bank and credit system in relation to house loans and car industries, and where and how much the stimulus packages should be distributed. This is what I call the liveability-level problem. A second-order understanding of the fundamental system structure and the relationships between social subsystems, however, has not been properly addressed. This is what I call the sustainability problem. This paper proposes an epistemological model based on the cybernetic feedback principle and Activity Theory to interpret the second-order problems that are deeply embedded in our social-economic system structure, so that liveability and sustainability are coherently discussed within a socio-cybernetic system. The first part of the paper briefly introduces principles of feedback from cybernetics, especially the behaviours of positive and negative feedback. Then, Activity Theory and related concepts from social autopoietic theory are introduced; the aim of introducing these concepts is to provide the basic elements/components for the construction of a double-loop feedback model in the second part. Finally, the current economic crisis is interpreted based on the constructed model to verify the usability of the proposed model.

  • 349.
    Bai, Guohua
    Blekinge Institute of Technology, School of Computing.
    A Sustainable Platform for E-services System Design. 2004. In: Journal of Systems Science and Systems Engineering, ISSN 1004-3756, E-ISSN 1861-9576, Vol. 13, no. 4. Article in journal (Refereed)
    Abstract [en]

    By integrating systems thinking and social psychology, this paper presents an Activity System Theory (AST) approach to the platform design of e-service systems in general, and e-healthcare systems in particular. In the first part, some important principles of AST and a sustainable model of human activity systems are introduced. Then a project, 'Integrated Mobile Information System for Healthcare (IMIS)', is presented to demonstrate how to construct a comprehensive platform for various complex e-service systems based on the sustainable model of AST. Our research focused on the complex e-healthcare system in Sweden, and the results showed that the AST model can provide designers of e-service systems with a comprehensive and sustainable platform for designing various kinds of e-service systems.

  • 350.
    Bai, Guohua
    Blekinge Institute of Technology, School of Computing.
    Activity System Theory Approach to Healthcare Information System. 2004. Conference paper (Refereed)
    Abstract [en]

    A healthcare information system is a very complex system and has to be approached from systemic perspectives. This paper presents an Activity System Theory (AST) approach that integrates systems thinking and social psychology. In the first part of the paper, the Activity System Theory is presented; in particular, a recursive model of the human activity system is introduced. A project, 'Integrated Mobile Information System for Diabetic Healthcare (IMIS)', is then used to demonstrate a practical application of the Activity System Theory, especially in constructing a healthcare information system. Our conclusion is that the activity system model can provide service system designers with a comprehensive and integrated framework for designing healthcare information systems in particular, and various kinds of service systems in general.
