301 - 350 of 1682
  • 301.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Non-orthogonal multiple access for DF cognitive cooperative radio networks2018In: IEEE International Conference on Communications Workshops, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 1-6Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a power domain non-orthogonal multiple access (NOMA) scheme for cognitive cooperative radio networks (CCRNs). In the proposed scheme, a secondary transmitter communicates with two secondary users (SUs) by allocating transmit powers inversely proportional to the channel power gains on the links to the respective SUs. A decode-and-forward (DF) secondary relay is deployed which decodes the superimposed signals associated with the two SUs. Then, power domain NOMA is used to forward the signals from the relay to the two SUs based on the channel power gains on the corresponding two links. Mathematical expressions for the outage probability and ergodic capacity of each SU and the overall power domain NOMA CCRN are derived. Numerical results are provided to reveal the impact of the power allocation coefficients at the secondary transmitter and secondary relay, the interference power threshold at the primary receiver, and the normalized distances of the SUs on the outage probability and ergodic capacity of each SU and the whole NOMA system. © 2018 IEEE.
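The power-domain NOMA idea in this abstract — allocating transmit power inversely proportional to the channel power gains, so the weaker link gets the larger share — can be sketched in a few lines. This is an illustrative toy, not the paper's derivation; the function names and the unit-noise normalization are assumptions.

```python
# Illustrative power-domain NOMA sketch (hypothetical helper names):
# transmit power shares are inversely proportional to channel power gains,
# so the user with the weaker channel receives the larger share.

def noma_coefficients(g1: float, g2: float) -> tuple[float, float]:
    """Power allocation coefficients (a1, a2) with a1 + a2 = 1,
    each inversely proportional to its channel power gain."""
    inv1, inv2 = 1.0 / g1, 1.0 / g2
    total = inv1 + inv2
    return inv1 / total, inv2 / total

def sinr_pair(p: float, g1: float, g2: float, n0: float = 1.0):
    """SINR of the weak user (decodes its own signal, treating the strong
    user's signal as interference) and SNR of the strong user after
    successive interference cancellation (SIC). Assumes g1 < g2."""
    a1, a2 = noma_coefficients(g1, g2)           # a1 -> weak user
    sinr_weak = (p * a1 * g1) / (p * a2 * g1 + n0)
    snr_strong = (p * a2 * g2) / n0              # interference removed by SIC
    return sinr_weak, snr_strong
```

With `g1 = 0.2` and `g2 = 1.0`, the weak user gets 5/6 of the power budget, which is what lets it decode directly while the strong user cancels the superimposed signal first.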

  • 302.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    On Secrecy Capacity of Full-Duplex Cognitive Cooperative Radio Networks2017In: 2017 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we analyze the secrecy capacity of a full-duplex underlay cognitive cooperative radio network (CCRN) in the presence of an eavesdropper and under the interference power constraint of a primary network. The full-duplex mode is used at the secondary relay to improve the spectrum efficiency which in turn leads to an improvement of the secrecy capacity of the full-duplex CCRN. We utilize an approximation-and-fitting method to convert the complicated expression of the signal-to-interference-plus-noise ratio into polynomial form which is then utilized to derive an expression for the secrecy capacity. Numerical results are provided to illustrate the effect of network parameters such as transmit power, interference power limit, self-interference parameters of the full-duplex mode, and distances among links on the secrecy capacity. To reveal the benefits of the full-duplex CCRN, we compare the secrecy capacity obtained when the secondary relay operates in full-duplex and half-duplex mode.

  • 303.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Optimal Power Allocation for Hybrid Cognitive Cooperative Radio Networks with Imperfect Spectrum Sensing2018In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 10365-10380Article in journal (Refereed)
    Abstract [en]

    In this paper, two optimal power allocation strategies for hybrid interweave-underlay cognitive cooperative radio networks (CCRNs) are proposed to maximize channel capacity and minimize outage probability. The proposed power allocation strategies are derived for the case of Rayleigh fading taking into account the impact of imperfect spectrum sensing on the performance of the hybrid CCRN. Based on the optimal power allocation strategies, the transmit powers of the secondary transmitter and secondary relay are adapted according to the fading conditions, the interference power constraint imposed by the primary network (PN), the interference from the PN to the hybrid CCRN, and the total transmit power limit of the hybrid CCRN. Numerical results are provided to illustrate the effect of the interference power constraint of the PN, arrival rate of the PN, imperfect spectrum sensing, and the transmit power constraint of the hybrid CCRN on channel capacity and outage probability. Finally, comparisons of the channel capacity and outage probability of underlay, overlay, and hybrid interweave-underlay CCRNs are presented to show the advantages of the hybrid spectrum access.

  • 304.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    Outage Probability and Secrecy Capacity of a Non-orthogonal Multiple Access System2017In: 11th International Conference on Signal Processing and Communication Systems, ICSPCS, 2017 / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we analyze the outage probability and secrecy capacity of a non-orthogonal multiple access (NOMA) system in the presence of an eavesdropper. In order to enhance spectral efficiency, a base station communicates with two users simultaneously in the same frequency band by superimposing the transmit signals to the users in the power domain. Specifically, the user with the worse channel conditions is allocated higher power such that it is able to directly decode its signal from the received superimposed signal. At the user with the better channel conditions, the interference due to NOMA is processed by successive interference cancelation. Given these system settings and accounting for decoding thresholds, we analyze the outage probability of the NOMA system over Rayleigh fading channels. Furthermore, based on the locations of the users and eavesdropper, the secrecy capacity is analyzed to assess the level of security provided to the legitimate users in the presence of an eavesdropper. Here, the decoding thresholds of legitimate users and eavesdropper are also included in the analysis of the secrecy capacity. Through numerical results, the effects of network parameters on system performance are assessed as well as the superiority of NOMA in terms of secrecy capacity over traditional orthogonal multiple access.

  • 305.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    Outage Probability of a Hybrid AF-DF Protocol for Two-Way Underlay Cognitive Cooperative Radio Networks2017In: 11th International Conference on Signal Processing and Communication Systems, ICSPCS 2017 / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2017, p. 1-6Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a hybrid amplify-and-forward (AF) and decode-and-forward (DF) scheme for two-way cognitive cooperative radio networks (CCRNs). The proposed scheme applies the AF scheme when the signal-to-interference-plus-noise ratio (SINR) at the relay is below a predefined threshold such that the relay cannot successfully decode the signal. On the other hand, when the SINR at the relay is greater than the predefined threshold, it decodes the signal and then forwards it to the destination, i.e., avoiding noise and interference amplification at the relay. An analytical expression of the outage probability of the hybrid AF-DF two-way CCRN is derived based on the probability density function and cumulative distribution function of the SINR in AF and DF mode. Numerical results are provided to illustrate the influence of network parameters such as transmit power, interference power constraint of the primary network, fading conditions, and link distances on the outage probability. Finally, the numerical results show that the hybrid strategy is able to improve system performance significantly compared to conventional AF or DF relaying.

  • 306.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance of a Non-orthogonal Multiple Access System with Full-Duplex Relaying2018In: IEEE Communications Letters, ISSN 1089-7798, E-ISSN 1558-2558, Vol. 22, no 10, p. 2084-2087Article in journal (Refereed)
    Abstract [en]

    In this paper, we study a power-domain non-orthogonal multiple access (NOMA) system in which a base station (BS) superimposes the transmit signals to the users. To enhance spectral efficiency and link reliability for the far-distance user, a full-duplex (FD) relay assists the BS while the near-distance user is reached over the direct link. For this setting, we analyze outage probability and sum rate of the NOMA system over Nakagami-m fading with integer fading severity parameter m. Numerical results are provided for outage probability and sum rate to show the effect of system parameters on the performance of the FD NOMA system over Nakagami-m fading.

  • 307.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Performance Optimization for Hybrid Two-Way Cognitive Cooperative Radio Networks with Imperfect Spectrum Sensing2018In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 70582-70596Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider a two-way cognitive cooperative radio network (TW-CCRN) with hybrid interweave-underlay spectrum access in the presence of imperfect spectrum sensing. Power allocation strategies are proposed that maximize the sum-rate and minimize the outage probability of the hybrid TW-CCRN. Specifically, based on the state of the primary network (PN), fading conditions, and system parameters, suitable power allocation strategies subject to the interference power constraint of the PN are derived for each transmission scenario of the hybrid TW-CCRN. Given the proposed power allocation strategies, we analyze the sum-rate and outage probability of the hybrid TW-CCRN over Rayleigh fading taking imperfect spectrum sensing into account. Numerical results are presented to illustrate the effect of the arrival rate, interference power threshold, transmit power of the PN, imperfect spectrum sensing, and maximum total transmit power on the sum-rate and outage probability of the hybrid TW-CCRN.

  • 308.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Symbol error rate and achievable rate of cognitive cooperative radio networks utilizing non-orthogonal multiple access2018In: 2018 IEEE 7th International Conference on Communications and Electronics, ICCE 2018, Institute of Electrical and Electronics Engineers Inc. , 2018, Vol. Code 141951, p. 33-38Conference paper (Refereed)
    Abstract [en]

    In this paper, we study the employment of power-domain non-orthogonal multiple access (NOMA) concepts for a cooperative cognitive relay network (CCRN) downlink system in order to allow a base station (BS) to simultaneously transmit signals to a primary user (PU) and a secondary user (SU). As such, the considered system falls into the field of cognitive radio inspired power-domain NOMA. In this scheme, the interference power constraint of the PU imposed to SUs in conventional underlay CCRNs is replaced by controlling the power allocation coefficients at the BS and relay. Specifically, expressions for the symbol error rates at the PU and SU for different modulation schemes as well as expressions for the achievable rates are derived. On this basis, the effect of system parameters such as total transmit power and power allocation coefficients on the performance of the CCRN with power-domain NOMA is numerically studied. These numerical results provide insights into selecting favorable operation modes of the CCRN employing power-domain NOMA. © 2018 IEEE.

  • 309.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Phan, H.
    Duy Tan University, VNM.
    MAC Protocol for Opportunistic Spectrum Access in Multi-Channel Cognitive Relay Networks2017In: IEEE Vehicular Technology Conference, Institute of Electrical and Electronics Engineers Inc. , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a medium access control (MAC) protocol for multi-channel cognitive cooperative radio networks (CCRNs). In this protocol, each secondary user (SU) senses for spectrum opportunities within M licensed bands of the primary users (PUs). To enhance the accuracy of spectrum sensing, we employ cooperative sequential spectrum sensing where SUs mutually exchange their sensing results. Moreover, the information obtained from cooperative spectrum sensing at the physical layer is integrated into the channel negotiation process at the MAC layer to alleviate the hidden terminal problem. Finally, the performance of the proposed MAC protocol in terms of aggregate throughput of the CCRNs is analyzed. Numerical results are provided to assess the impact of channel utilization by PUs, number of contending CCRNs, number of licensed bands, and false alarm probability of SUs on the aggregate throughput. © 2017 IEEE.

  • 310.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Channel Reservation for Dynamic Spectrum Access of Cognitive Radio Networks with Prioritized Traffic2015In: 2015 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATION WORKSHOP, IEEE Communications Society, 2015, p. 883-888Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a strategy to coordinate the dynamic spectrum access (DSA) of different types of traffic. It is assumed that the DSA assigns spectrum bands to three kinds of prioritized traffic: the traffic of the primary network, and the Class 1 and Class 2 traffic of the secondary network. Possessing the licensed spectrum, the primary traffic has the highest access priority and can access the spectrum bands at any time. The secondary Class 1 traffic has higher priority compared to secondary Class 2 traffic. In this system, a channel reservation scheme is deployed to control spectrum access of the traffic. Specifically, the optimal number of reservation channels is applied to minimize the forced termination probability of the secondary traffic while satisfying a predefined blocking probability of the primary network. To investigate the system performance, we model state transitions of the DSA as a multi-dimensional Markov chain with three state variables representing the number of primary, Class 1, and Class 2 packets in the system. Based on this chain, important performance measures, i.e., blocking probability and forced termination probability, are derived for the Class 1 and Class 2 secondary traffic.
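The channel-reservation idea in this abstract can be illustrated with a deliberately simplified model: a classic one-dimensional guard-channel birth-death chain with a single secondary class, rather than the paper's three-dimensional chain. Everything below (function name, rates, the reduction to one class) is an assumption for illustration only.

```python
# Simplified guard-channel sketch (1-D birth-death chain, hypothetical API):
# C channels total, r reserved for primary traffic. Secondary arrivals are
# admitted only while fewer than C - r channels are busy; primary arrivals
# are admitted up to C. Stationary probabilities follow the product form
# pi[n] = pi[n-1] * birth(n-1) / (n * mu).

def guard_channel_probs(C: int, r: int, lam_p: float, lam_s: float, mu: float):
    """Return (primary blocking prob., secondary blocking prob.)."""
    pi = [1.0]                                   # unnormalized pi[0]
    for n in range(1, C + 1):
        birth = lam_p + lam_s if (n - 1) < (C - r) else lam_p
        pi.append(pi[-1] * birth / (n * mu))
    z = sum(pi)
    pi = [p / z for p in pi]
    blocking_primary = pi[C]                     # blocked only when all C busy
    blocking_secondary = sum(pi[C - r:])         # blocked in all guard states
    return blocking_primary, blocking_secondary
```

Sweeping `r` trades secondary blocking against primary blocking, which is exactly the optimization the abstract describes, extended there to two secondary classes and forced termination.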

  • 311.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Sundstedt, Veronica
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Analysis of Variance of Opinion Scores for MPEG-4 Scalable and Advanced Video Coding2018In: 2018 12TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS) / [ed] Wysocki, TA Wysocki, BJ, IEEE , 2018Conference paper (Refereed)
    Abstract [en]

    In this paper, we conduct an analysis of variance (ANOVA) on opinion scores for MPEG-4 scalable video coding (SVC) and advanced video coding (AVC) standards. This work relies on a publicly available database providing opinion scores from subjective experiments for several scenarios such as different bit rates and resolutions. In particular, ANOVA is used for statistical hypothesis testing to compare two or more sets of opinion scores instead of being constrained to pairs of sets of opinion scores as would be the case for t-tests. As the ANOVA tests of the different scenarios are performed for mean opinion scores (MOS), box plots are also provided in order to assess the distribution of the opinion scores around the median. It is shown that the opinion scores given to the reference videos in SVC and AVC for different resolutions are statistically significantly similar regardless of the content. Further, for the opinion scores of the considered database, the ANOVA tests support the hypothesis that AVC generally outperforms SVC although the performance difference may be less pronounced for higher bit rates. This work also shows that additional insights on the results of subjective experiments can be obtained by extending the analysis of opinion scores beyond MOS to ANOVA tests and box plots.
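The one-way ANOVA used in this abstract reduces to a short computation: partition the total variance of the opinion scores into between-group and within-group sums of squares and form the F statistic. A minimal sketch on toy data (the function name and the toy scores are assumptions, not the paper's data):

```python
# One-way ANOVA F statistic, computed directly from its definition.

def one_way_anova_F(groups: list[list[float]]) -> float:
    """F = (SS_between / df_between) / (SS_within / df_within)
    for k groups of opinion scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)

# e.g. two toy groups of opinion scores:
F = one_way_anova_F([[1, 2, 3], [2, 3, 4]])  # F = 1.5
```

A large F (compared against the F distribution with `df_b, df_w` degrees of freedom) rejects the hypothesis that the groups share a mean, which is how the paper compares SVC and AVC score sets beyond pairwise t-tests.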

  • 312.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Univ Reading, GBR.
    Hybrid spectrum access with relay assisting both primary and secondary networks under imperfect spectrum sensing2016In: EURASIP Journal on Wireless Communications and Networking, ISSN 1687-1472, E-ISSN 1687-1499, article id 244Article in journal (Refereed)
    Abstract [en]

    This paper proposes a novel hybrid interweave-underlay spectrum access for a cognitive amplify-and-forward relay network where the relay forwards the signals of both the primary and secondary networks. In particular, the secondary network (SN) opportunistically operates in interweave spectrum access mode when the primary network (PN) is sensed to be inactive and switches to underlay spectrum access mode if the SN detects that the PN is active. A continuous-time Markov chain approach is utilized to model the state transitions of the system. This enables us to obtain the probability of each state in the Markov chain. Based on these probabilities and taking into account the impact of imperfect spectrum sensing of the SN, the probability of each operation mode of the hybrid scheme is obtained. To assess the performance of the PN and SN, we derive analytical expressions for the outage probability, outage capacity, and symbol error rate over Nakagami-m fading channels. Furthermore, we present comparisons between the performance of underlay cognitive cooperative radio networks (CCRNs) and the performance of the considered hybrid interweave-underlay CCRN in order to reveal the advantages of the proposed hybrid spectrum access scheme. Finally, with the assistance of the secondary relay, performance improvements for the PN are illustrated by means of selected numerical results.

  • 313.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Duy Tan Univ, VNM.
    Optimal Secrecy Capacity of Underlay Cognitive Radio Networks with Multiple Relays2016In: MILCOM 2016 - 2016 IEEE MILITARY COMMUNICATIONS CONFERENCE, 2016, p. 162-167Conference paper (Refereed)
    Abstract [en]

    In this paper, we study the secrecy capacity of an underlay cooperative cognitive radio network (CCRN) where multiple relays are deployed to assist the secondary transmission. An optimal power allocation algorithm is proposed for the secondary transmitter and secondary relays to obtain the maximum secrecy capacity while satisfying the interference power constraint at the primary receiver and the transmit power budget of the CCRN. Since the optimization problem for the secrecy capacity is non-convex, we utilize an approximation and fitting method to convert the optimization problem into a geometric programming problem which is then solved by applying the logarithmic barrier function. Numerical results are provided to study the effect of network parameters on the secrecy capacity. Through the numerical results, the advantage of the proposed power allocation algorithm compared to equal power allocation can also be observed.

  • 314.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Performance Evaluation of Cognitive Multi-Relay Networks with Multi-Receiver Scheduling2014Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate the performance of cognitive multiple decode-and-forward relay networks under the interference power constraint of the primary receiver wherein the cognitive downlink channel is shared among multiple secondary relays and secondary receivers. In particular, only the relay and the secondary receiver that offer the highest instantaneous signal-to-noise ratio are scheduled to transmit signals. Accordingly, only one transmission route that offers the best end-to-end quality is selected for communication at a particular time instant. To quantify the system performance, we derive expressions for outage probability and symbol error rate over Nakagami-m fading with integer values of fading severity parameter m. Finally, numerical examples are provided to illustrate the effect of system parameters such as fading conditions and the number of secondary relays and secondary receivers on the secondary system performance.

  • 315.
    Chunduri, Annapurna
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry has neither infinite resources nor the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions.

    Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, to propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps.

    Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and closed-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania.

    Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions. 

  • 316.
    Chunduri, Annapurna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Adenmark, Mikael
    Scania AB, SWE.
    An effective verification strategy for testing distributed automotive embedded software functions: A case study2016In: Lecture Notes in Computer Science / [ed] Amasaki S.,Mikkonen T.,Felderer M.,Abrahamsson P.,Duc A.N.,Jedlitschka A., 2016, p. 233-248Conference paper (Refereed)
    Abstract [en]

    Integration testing of automotive embedded software functions that are distributed across several Electronic Control Unit (ECU) system software modules is a complex and challenging task in today’s automotive industry. They neither have infinite resources, nor have the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the testers’ experience typically leads to both test gaps and test redundancies. Here, we address this challenge by proposing a verification strategy that enhances the process in order to identify and mitigate such gaps and redundancies in automotive system software testing. This helps increase test coverage by taking more data-driven decisions for integration testing of the functions. The strategy was developed in a case study at a Swedish automotive company that involved multiple data collection steps. After static validation of the proposed strategy it was evaluated on one distributed automotive software function, the Fuel Level Display, and found to be both feasible and effective. © Springer International Publishing AG 2016.

  • 317.
    Cicchetti, Antonio
    et al.
    Malardalen Univ, Vasteras, Sweden..
    Borg, Markus
    SICS Swedish Inst Comp Sci, Kista, Sweden..
    Sentilles, Severine
    Malardalen Univ, Vasteras, Sweden..
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Carlson, Jan
    Malardalen Univ, Vasteras, Sweden..
    Papatheocharous, Efi
    SICS Swedish Inst Comp Sci, Kista, Sweden..
    Towards Software Assets Origin Selection Supported by a Knowledge Repository2016In: PROCEEDINGS 2016 1ST INTERNATIONAL WORKSHOP ON DECISION MAKING IN SOFTWARE ARCHITECTURE, IEEE Computer Society, 2016, p. 22-29Conference paper (Refereed)
    Abstract [en]

    Software architecture is no longer a mere system specification resulting from the design phase; it also includes the process by which that specification was carried out. In this respect, design decisions in component-based software engineering play an important role: they are used to enhance the quality of the system, keep the current market level, maintain partnership relationships, reduce costs, and so forth. For non-trivial systems, a recurring situation is the selection of an asset origin, that is, whether to go for in-house development, outsourcing, open-source, or COTS when a certain functionality is missing. Usually, the decision-making process follows a case-by-case approach in which historical information is largely neglected: this avoids the overhead of keeping detailed documentation about past decisions, but it hampers consistency among multiple, possibly related, decisions. The ORION project aims at developing a decision support framework in which historical decision information plays a pivotal role: it is used to analyse current decision scenarios, take well-founded decisions, and store the collected data for future exploitation. In this paper, we outline the potential of such a knowledge repository, including the information intended to be stored in it, and when and how to retrieve it within a decision case.

  • 318.
    Clark, David
    et al.
    UCL, GBR.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Yoo, Shin
    UCL, GBR.
    Information Transformation: An Underpinning Theory for Software Engineering2015In: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Vol 2, IEEE , 2015, p. 599-602Conference paper (Refereed)
    Abstract [en]

    Software engineering lacks underpinning scientific theories both for the software it produces and the processes by which it does so. We propose that an approach based on information theory can provide such a theory, or rather many theories. We envision that such a benefit will be realised primarily through research based on the quantification of information involved and a mathematical study of the limiting laws that arise. However, we also argue that less formal but more qualitative uses for information theory will be useful. The main argument in support of our vision is based on the fact that both a program and an engineering process to develop such a program are fundamentally processes that transform information. To illustrate our argument we focus on software testing and develop an initial theory in which a test suite is input/output adequate if it achieves the channel capacity of the program as measured by the mutual information between its inputs and its outputs. We outline a number of problems, metrics and concrete strategies for improving software engineering, based on information theoretical analyses. We find it likely that similar analyses and subsequent future research to detail them would be generally fruitful for software engineering.
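The adequacy notion this abstract proposes — a test suite is input/output adequate if it achieves the channel capacity of the program, measured as the mutual information between inputs and outputs — can be illustrated for a small deterministic program. For a deterministic mapping under a uniform input distribution, I(X;Y) reduces to the output entropy H(Y). The function name and the toy "parity" program below are illustrative assumptions, not from the paper.

```python
# Mutual information I(X;Y) in bits for a deterministic program
# under uniformly distributed inputs. Deterministic => I(X;Y) = H(Y).

from collections import Counter
from math import log2

def mutual_information(program, inputs) -> float:
    outputs = [program(x) for x in inputs]
    n = len(outputs)
    # output entropy H(Y) from the empirical output distribution
    return -sum((c / n) * log2(c / n) for c in Counter(outputs).values())

# Toy program: parity collapses 256 equally likely 8-bit inputs
# to 2 outputs, so only 1 bit of the 8 input bits survives.
capacity_used = mutual_information(lambda x: x % 2, range(256))
```

In this framing, a test suite whose inputs already elicit all the information the program can transmit (here, both parities) is adequate; exercising more inputs cannot raise I(X;Y) further.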

  • 319.
    Clementson, Martin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Augustsson, John
    User Study of Quantized MIP Level Data In Normal Mapping Techniques2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

     The standard MIP mapping technique halves the resolution of textures for each level of the MIP chain. In this thesis the bits per pixel (bpp) is reduced as well. Normal maps are generally used with MIP maps, and today's industry standard for these is usually 24 bpp. The reduction is simulated, as there is currently no support for the lower bpp in GPU hardware.

    Objectives: To render images of normal mapped objects with decreasing bpp for each level in a MIP chain and evaluate these against the standard MIP mapping technique using a subjective user study and an objective image comparison method.

    Methods: Custom software is implemented to render the images, with quantized normal maps manually placed in a MIP chain. For the subjective experiment a 2AFC test is used, and the objective part consists of a PDIFF comparison of the images.

    Results: The results indicate that as the MIP level is increased and the bpp is lowered, users can increasingly see a difference.

    Conclusions: The results show that participants can see a difference as the bpp is reduced, which indicates that normal mapping is not suitable for this method; however, further study is required before this technique can be dismissed as an applicable method.

  • 320.
    Cosic Prica, Srdjan
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Video Games and Software Engineers: Designing a study based on the benefits from Video Games and how they can improve Software Engineers2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Context: This is a study about investigating whether playing video games can improve any skills and characteristics of a software engineer. Due to lack of resources and time, this study focuses on designing a study that others may use to measure the results and determine whether video games actually can improve software engineers.

    Objectives: The main objectives are finding the benefits of playing video games and how those benefits are discovered, meaning what types of games and for how long someone needs to play in order to be affected and show improvements. Another objective is to find out what skills are requested and required in a software engineer. The study is then designed based on the information gathered.

    Methods: A large amount of literature study is involved. The method is parallel research: while reading about the benefits of playing video games, also reading and trying to find corresponding benefits in what is requested and required in software engineers.

    Results: There are many cognitive benefits from video games that are also beneficial for software engineers. There is no recorded limit to how long a study of playing video games can go on before it proves to have negative consequences. That means that the study designed from the information gathered is very customizable, and there are many results that can be measured.

    Conclusions: There is a very high chance that playing video games can result in better software engineers, because the benefits that games provide are connected to skills requested and required by employers and other expert software engineers who have been in the business for a long time and have high responsibility over other teams of software engineers.

  • 321. Cousin, Philippe
    et al.
    Fricker, Samuel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Felmy, Dean
    Le Gall, Franck
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Validation and Quality in FI-PPP e-Health Use Case, FI-STAR Project2014Conference paper (Refereed)
  • 322.
    Cuenca, Juan
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    What are the affordances fostered by social media for amateurs artists?2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The research engages the use of social media sites by musicians, with a focus on Facebook. It determines which advantages the platform makes available for musicians, allowing them to employ Do It Yourself strategies of production, audience relationship management and self-management. The importance of audience response and demographics allows any musician to integrate keen insight into content delivery and thus optimize their management accordingly. This thesis will establish the affordances that engage what aspects and uses of Facebook are changing the way amateurs operate. The research appropriates the context of professionalism to the variables of knowledgeability, know-how, and Stebbins' (1977) seven variables (confidence, perseverance, continuation commitment, preparedness and self-conception) in order to note a definition of the modern amateur in contrast to professionals.

  • 323.
    Cynthia, Salkovic
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Esoko and WhatsApp Communication in Ghana: Mobile Services such as Esoko and WhatsApp in Reshaping Interpersonal Digital Media Communication in Ghana2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The predominant use of mobile media such as SMS and MIM across various sectors in Ghana is incontrovertibly influencing and reshaping interpersonal communications. This paper looked at the use of the Esoko SMS and WhatsApp MIM platforms and how the use of these two dominant platforms is enhancing and reshaping digital communication in rural and urban Ghana respectively, as barriers of socioeconomic factors limit the use of sophisticated technologies in the rural setting. This is done by employing Madianou and Miller's notion of "polymedia" to draw on the moral, social and emotional use of mobile media in enacting interpersonal relationships and communications whilst keeping the recursive repercussions in focus.

  • 324.
    Dagala, Wadzani Jabani
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Analysis of Total Cost of Ownership for Medium Scale Cloud Service Provider with emphasis on Technology and Security2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Total cost of ownership (TCO) is a great factor to consider when deciding to deploy cloud computing. The cost to own or run a data centre outweighs the thought of the IT manager or owner of the business organisation. The research work is concerned with specifying the factors that sum the TCO for medium scale service providers with respect to technology and security. A valid analysis was made with respect to the cloud service providers' expenses and how to reduce the cost of ownership. In this research work, a review of related articles from a wide range of sources was used, reading through the abstracts and overviews of the articles to find their relevance to the subject. A further interview was conducted with two medium scale cloud service providers and one cloud user. In this study, an average calculation of the TCO was made and a proposed cost reduction method was implemented. We made a proposal on how users should decide which cloud services to deploy in terms of cost and security. We conclude that many articles have focused their TCO calculation on the building without placing emphasis on security. Security accumulates a huge amount under hidden cost, and this research work identified the hidden cost, made an average calculation, and proffered a method of reducing the TCO.

  • 325.
    Damm, Johan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Lindgren, Alexander
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Rockman, Carl
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Afrika, webben och den digitala skiljelinjen2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The thesis describes a study and a production concerning the development of an adapted web application that can defy the phenomenon of the digital divide.

  • 326.
    Dan, Sjödahl
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cascaded Voxel Cone-Tracing Shadows: A Computational Performance Study2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Background. Real-time shadows in 3D applications have for decades been implemented with a solution called Shadow Mapping or some variant of it. This is a solution that is easy to implement and has good computational performance, nevertheless it does suffer from some problems and limitations. But there are newer alternatives and one of them is based on a technique called Voxel Cone-Tracing. This can be combined with a technique called Cascading to create Cascaded Voxel Cone-Tracing Shadows (CVCTS).

    Objectives. To measure the computational performance of CVCTS to gain better insight into it and provide data and findings to help developers make an informed decision on whether this technique is worth exploring, and to identify where the performance problems with the solution lie.

    Methods. A simple implementation of CVCTS was developed in OpenGL, aimed at simulating a solution that could be used for outdoor scenes in 3D applications. It had several different parameters that could be changed. Computational performance measurements were then made with these parameters at different settings.

    Results. The data was collected and analyzed before drawing conclusions. The results showed several parts of the implementation that could potentially be very slow and why this was the case.

    Conclusions. The slowest parts of the CVCTS implementation were the Voxelization and Cone-Tracing steps. It might be possible to use the thesis's CVCTS solution in, for example, a game if the settings are not too high, but that is a stretch. Little time could be spent during the thesis on optimizing the solution, and thus it is possible that its performance could be increased.

  • 327.
    Daniel, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    John, Johansson
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Hjälte vs Skurk: ”I rymden kan ingen höra dig skrika yippee-ki-yay motherfucker”2016Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Meetings between people can be exciting and make you discover something new, also in movies. Meetings between the characters drive the plot forward and create exciting interaction between the characters. We have chosen to focus on meetings between the hero and the villain, two strong and important character roles in film. The character roles are taken from Vogler, The Writer's Journey (1997). To get a broader picture of how the roles may appear in film, we have chosen to analyze two films in different genres: Die Hard (1988) and Alien (1979). We have used the analytical model presented in From Antz to Titanic - A student's guide to film analysis (2000) and adapted the model to fit our purposes. One part of the analysis focuses on the technical aspects of the scenes that can help move the story forward. Based on our survey of character roles and the meetings between them, we have created a movie to portray this.

  • 328.
    Danielsson, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Rasmusson, Jim
    Sony Mobile Communications AB, SWE.
    Comparing Two Generations of Embedded GPUs Running a Feature Detection AlgorithmManuscript (preprint) (Other academic)
    Abstract [en]

    Graphics processing units (GPUs) in embedded mobile platforms are reaching performance levels where they may be useful for computer vision applications. We compare two generations of embedded GPUs for mobile devices when running a state-of-the-art feature detection algorithm, i.e., Harris-Hessian/FREAK. We compare architectural differences, execution time, temperature, and frequency on Sony Xperia Z3 and Sony Xperia XZ mobile devices. Our results indicate that the performance soon will be sufficient for real-time feature detection, the GPUs have no temperature problems, and support for large work-groups is important.

  • 329.
    Danielsson, Max
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Viability of Feature Detection on Sony Xperia Z3 using OpenCL2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Embedded platform GPUs are reaching a level of performance comparable to desktop hardware. Therefore it becomes interesting to apply Computer Vision techniques to modern smartphones. The platform holds different challenges, as energy use and heat generation can be an issue depending on the load distribution on the device.

    Objectives. We evaluate the viability of a feature detector and descriptor on the Xperia Z3. Specifically, we evaluate the pair based on real-time execution, heat generation and performance.

    Methods. We implement the feature detection and feature descriptor pair Harris-Hessian/FREAK for GPU execution using OpenCL, focusing on embedded platforms. We then study the heat generation of the application and its execution time, and compare our method to two other methods, FAST/BRISK and ORB, to evaluate the vision performance.

    Results. Execution time data for the Xperia Z3 and a desktop GeForce GTX 660 is presented. Run-time temperature values for a run of nearly an hour are presented with correlating CPU and GPU activity. Images containing comparison data for BRISK, ORB and Harris-Hessian/FREAK are shown with performance data and discussion around notable aspects.

    Conclusion. Execution times on the Xperia Z3 are deemed insufficient for real-time applications, while desktop execution shows that there is future potential. Heat generation is not a problem for the implementation. Implementation improvements are discussed at great length for future work. Performance comparisons of Harris-Hessian/FREAK suggest that the solution is very vulnerable to rotation, but superior on scale-variant images. It generally appears suitable for near-duplicate comparisons, delivering a much greater number of keypoints. Finally, insight into OpenCL application development on Android is given.

  • 330. Danielsson, Max
    et al.
    Sievert, Thomas
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Rasmusson, Jim
    Sony Mobile Communications AB.
    Feature Detection and Description using a Harris-Hessian/FREAK Combination on an Embedded GPU2016Conference paper (Refereed)
    Abstract [en]

    GPUs in embedded platforms are reaching performance levels comparable to desktop hardware, thus it becomes interesting to apply Computer Vision techniques. We propose, implement, and evaluate a novel feature detector and descriptor combination, i.e., we combine the Harris-Hessian detector with the FREAK binary descriptor. The implementation is done in OpenCL, and we evaluate the execution time and classification performance. We compare our approach with two other methods, FAST/BRISK and ORB. Performance data is presented for the mobile device Xperia Z3 and the desktop Nvidia GTX 660. Our results indicate that the execution times on the Xperia Z3 are insufficient for real-time applications while desktop execution shows future potential. Classification performance of Harris-Hessian/FREAK indicates that the solution is sensitive to rotation, but superior in scale variant images.

  • 331.
    Dantuluri, Kishore Varma
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Impact of Virtualization on Timestamp Accuracy2015Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    The ever-increasing demand for high quality services requires a good quantification of performance parameters such as delay and jitter. Consider one of these parameters, jitter, which is the difference between the inter-arrival time of two subsequent packets and the average inter-arrival time. The arrival or departure time of a packet is termed its timestamp. The accuracy of the timestamp will influence any performance metric based on the arrival/departure time of a packet. Hence, knowledge or awareness of time-stamping accuracy is important for performance evaluation. This study investigates how the time-stamping process is affected by virtualization.

  • 332.
    Darisipudi, Veeravenkata Naga S Maniteja
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Sustainable Throughput – QoE Perspective2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In recent years, there has been a significant increase in the demand for streaming high quality videos on smart mobile phones. In order to meet user quality requirements, it is important to maintain end user quality while taking resource consumption into consideration. This demand caught the attention of the research communities and network providers to prioritize Quality of Experience (QoE) in addition to Quality of Service (QoS). In order to meet users' expectations, QoE studies have gained utmost importance, thus creating the challenge of evaluating QoE in such a way that quality, cost and energy consumption are taken into account. This gave way to the concept of QoE-aware sustainable throughput, which denotes the maximal throughput at which QoE problems can still be kept at a desired level.

    The aim of the thesis is to determine the sustainable throughput values from the QoE perspective. The values are observed for different delay and packet loss values in wireless and mobile scenarios. The evaluation is done using the subjective video quality assessment method.

    In the subjective assessment method, the evaluation is done using the ITU-T recommended Absolute Category Rating (ACR). The video quality ratings are taken from the users, and are then averaged to obtain the Mean Opinion Score (MOS). The obtained scores are used for analysis in determining the sustainable throughput values from the users’ perspective.

    From the results it is determined that, for all the video test cases, the videos are rated as having better quality at low packet loss and low delay values. The quality of videos in the presence of delay is rated higher than the video quality in the case of packet loss. It was observed that high resolution videos are fragile in the presence of larger disturbances, i.e., high packet loss and larger delays. Considering all the cases, it can be observed that the QoE disturbance due to delivery issues is at an acceptable minimum for the 360px video. Hence, the 480x360 video is the threshold for sustaining video quality.

  • 333.
    Dasari, Siva Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Tree Models for Design Space Exploration in Aerospace Engineering2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    A crucial issue in the design of aircraft components is the evaluation of a large number of potential design alternatives. This evaluation involves expensive procedures; consequently, it slows down the search for optimal design samples. As a result, a scarce or small number of design samples with a high dimensional parameter space and high non-linearity poses issues in the learning of surrogate models. Furthermore, surrogate models have more issues in handling qualitative (discrete) data than in handling quantitative (continuous) data. These issues bring the need for investigation of surrogate modelling methods for the most effective use of available data.

     The goal of the thesis is to support engineers in the early design phase of the development of new aircraft engines, specifically a component of the engine known as the Turbine Rear Structure (TRS). For this, tree-based approaches are explored for surrogate modelling for the purpose of exploring larger search spaces and speeding up the evaluation of design alternatives. First, we have investigated the performance of tree models on the design concepts of TRS. Second, we have presented an approach to explore the design space using tree models, Random Forests. This approach includes hyperparameter tuning, and the extraction of parameter importances and if-then rules from surrogate models for a better understanding of the design problem. With this approach, we have shown that the performance of tree models improves with hyperparameter tuning when using design concept data of TRS. Third, we performed sensitivity analysis to study the thermal variations on TRS and hence support robust design using tree models. Furthermore, the performance of tree models has been evaluated on mathematical linear and non-linear functions. The results of this study have shown that tree models fit well on non-linear functions. Last, we have shown how tree models support the integration of value and sustainability parameter data (quantitative and qualitative) together with TRS design concept data in order to assess the impact of these parameters on the product life cycle in the early design phase.


  • 334.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Blekinge Institute of Technology.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Predictive Modelling to Support Sensitivity Analysis for Robust Design in Aerospace EngineeringIn: Article in journal (Refereed)
  • 335.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Andersson, Petter
    GKN Aerospace Engine Systems, SWE.
    Random Forest Surrogate Models to Support Design Space Exploration in Aerospace Use-case2019In: IFIP Advances in Information and Communication Technology, Springer-Verlag New York, 2019, Vol. 559Conference paper (Refereed)
    Abstract [en]

    In engineering, design analyses of complex products rely on computer simulated experiments. However, high-fidelity simulations can take significant time to compute. It is impractical to explore design space by only conducting simulations because of time constraints. Hence, surrogate modelling is used to approximate the original simulations. Since simulations are expensive to conduct, generally, the sample size is limited in aerospace engineering applications. This limited sample size, and also non-linearity and high dimensionality of data, make it difficult to generate accurate and robust surrogate models. The aim of this paper is to explore the applicability of Random Forests (RF) to construct surrogate models to support design space exploration. RF generates meta-models or ensembles of decision trees, and it is capable of fitting highly non-linear data given quite small samples. To investigate the applicability of RF, this paper presents an approach to construct surrogate models using RF. This approach includes hyperparameter tuning to improve the performance of the RF model, and the extraction of design parameters' importance and if-then rules from the RF models for better understanding of the design space. To demonstrate the approach using RF, quantitative experiments are conducted with datasets of a Turbine Rear Structure use-case from the aerospace industry and results are presented.

  • 336.
    Dasari, Siva Krishna
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lavesson, Niklas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Andersson, Petter
    Engineering Method Development, GKN Aerospace Engine Systems Sweden.
    Persson, Marie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tree-Based Response Surface Analysis2015Conference paper (Refereed)
    Abstract [en]

    Computer-simulated experiments have become a cost effective way for engineers to replace real experiments in the area of product development. However, one single computer-simulated experiment can still take a significant amount of time. Hence, in order to minimize the amount of simulations needed to investigate a certain design space, different approaches within the design of experiments area are used. One of the used approaches is to minimize the time consumption and simulations for design space exploration through response surface modeling. The traditional methods used for this purpose are linear regression, quadratic curve fitting and support vector machines. This paper analyses and compares the performance of four machine learning methods for the regression problem of response surface modeling. The four methods are linear regression, support vector machines, M5P and random forests. Experiments are conducted to compare the performance of tree models (M5P and random forests) with the performance of non-tree models (support vector machines and linear regression) on data that is typical for concept evaluation within the aerospace industry. The main finding is that comprehensible models (the tree models) perform at least as well as or better than traditional black-box models (the non-tree models). The first observation of this study is that engineers understand the functional behavior, and the relationship between inputs and outputs, for the concept selection tasks by using comprehensible models. The second observation is that engineers can also increase their knowledge about design concepts, and they can reduce the time for planning and conducting future experiments.

  • 337.
    Davidsson, Paul
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Holmgren, Johan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Persson, Jan A.
    Ramstedt, Linda
    Multi Agent Based Simulation of Transport Chains2008Conference paper (Refereed)
    Abstract [en]

    An agent-based tool for micro-level simulation of transport chains (TAPAS) is described. It is more powerful than traditional approaches as it is able to capture the interactions between individual actors of a transport chain, as well as their heterogeneity and decision making processes. Whereas traditional approaches rely on assumed statistical correlation between different parameters, TAPAS relies on causality, i.e., the decisions and negotiations that lead to the transports being performed. An additional advantage is that TAPAS is able to capture time aspects, such as, the influence of timetables, arrival times, and time-differentiated taxes and fees. TAPAS is composed of two layers, one layer simulating the physical activities taking place in the transport chain, e.g., production, storage, and transports of goods, and another layer simulating the different actors’ decision making processes and interaction. The decision layer is implemented as a multi-agent system using the JADE platform, where each agent corresponds to a particular actor. We demonstrate the use of TAPAS by investigating how the actors in a transport chain are expected to act when different types of governmental control policies are applied, such as, fuel taxes, road tolls, and vehicle taxes. By analyzing the costs and environmental effects, TAPAS provides guidance in decision making regarding such control policies. We argue that TAPAS may also complement existing approaches in different ways, for instance by generating input data such as transport demand. Since TAPAS models a larger part of the supply chain, the transport demand is a natural part of the output. Studies may concern operational decisions like choice of consignment size and frequency of deliveries, as well as strategic decisions like where to locate storages, terminals, etc., choice of producer, and adaptation of vehicle fleets.

  • 338.
    de Carvalho, Renata M.
    et al.
    Univ Quebec, LATECE Lab, Montreal, PQ, Canada..
    Mili, Hafedh
    Univ Quebec, LATECE Lab, Montreal, PQ, Canada..
    Boubaker, Anis
    Univ Quebec, LATECE Lab, Montreal, PQ, Canada..
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ringuette, Simon
    Trisotech Inc, Montreal, PQ, Canada..
    On the analysis of CMMN expressiveness: revisiting workflow patterns2016In: 2016 IEEE 20TH INTERNATIONAL ENTERPRISE DISTRIBUTED OBJECT COMPUTING WORKSHOP (EDOCW), 2016, p. 54-61Conference paper (Refereed)
    Abstract [en]

    Traditional business process modeling languages use an imperative style to specify all possible execution flows, leaving little flexibility to process operators. Such languages are appropriate for low-complexity, high-volume, mostly automated processes. However, they are inadequate for case management, which involves the low-volume, high-complexity, knowledge-intensive work processes of today's knowledge workers. OMG's Case Management Model and Notation (CMMN), which uses a declarative style to specify constraints placed on a process execution, aims at addressing this need. To the extent that typical case management situations do include at least some measure of imperative control, it is legitimate to ask whether an analyst working exclusively in CMMN can comfortably model the range of behaviors s/he is likely to encounter. This paper aims at answering this question by trying to express the extensive collection of Workflow Patterns in CMMN. Unsurprisingly, our study shows that the workflow patterns fall into three categories: 1) the ones that are handled by CMMN basic constructs, 2) those that rely on CMMN's engine capabilities, and 3) the ones that cannot be handled by the current CMMN specification. A CMMN tool builder can propose patterns of the second category as companion modeling idioms, which can be translated behind the scenes into standard CMMN. The third category is problematic, however, since its support in CMMN tools will break model interoperability.

  • 339.
    de la Vara, Jose Luis
    et al.
    Carlos III University of Madrid, ESP.
    Borg, Markus
    SICS Swedish ICT AB, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Moonen, Leon
    Certus Centre for Software V&V, NOR.
    An Industrial Survey of Safety Evidence Change Impact Analysis Practice2016In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 42, no 12, p. 1095-1117Article in journal (Refereed)
    Abstract [en]

    Context. In many application domains, critical systems must comply with safety standards. This involves gathering safety evidence in the form of artefacts such as safety analyses, system specifications, and testing results. These artefacts can evolve during a system's lifecycle, creating a need for change impact analysis to guarantee that system safety and compliance are not jeopardised. Objective. We aim to provide new insights into how safety evidence change impact analysis is addressed in practice. The knowledge about this activity is limited despite the extensive research that has been conducted on change impact analysis and on safety evidence management. Method. We conducted an industrial survey on the circumstances under which safety evidence change impact analysis is addressed, the tool support used, and the challenges faced. Results. We obtained 97 valid responses representing 16 application domains, 28 countries, and 47 safety standards. The respondents had most often performed safety evidence change impact analysis during system development, from system specifications, and fully manually. No commercial change impact analysis tool was reported as used for all artefact types and insufficient tool support was the most frequent challenge. Conclusion. The results suggest that the different artefact types used as safety evidence co-evolve. In addition, the evolution of safety cases should probably be better managed, the level of automation in safety evidence change impact analysis is low, and the state of the practice can benefit from over 20 improvement areas.

  • 340.
    Deekonda, Rahul
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Sirigudi, Prithvi Raj
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Assessment of Agile Maturity Models: A Survey2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. In recent years, Agile has gained considerable importance in the field of software development. Many organizations and software practitioners have already adopted agile practices due to their inherent flexibility, and agile development methodologies have increasingly replaced traditional development methods. Agile is a family of several methodologies, namely Scrum, eXtreme Programming (XP), and several others. These methods embed different sets of agile practices for organizations to adopt and implement in their development processes. However, there is still a need for empirical research to understand the benefits of implementing the agile practices that contribute to the overall success of a software project. Several agile maturity models have been published over the past decade, but not all of them have been empirically validated. Hence, additional research in the context of agile maturity is needed.

    Objectives. This study focuses on providing comprehensive knowledge of agile maturity models, which help guide organizations in implementing agile practices. Several maturity models have been published, each recommending a different set of agile practices to industry. The primary aim is to compare the agile maturity models and to investigate how agile practices are implemented in industry. Subsequently, the benefits and limitations faced by software practitioners due to the implementation of agile practices are identified.

    Methods. For this research, an industrial survey was conducted to identify the agile practices that are implemented in industry. In addition, the survey aimed at identifying the benefits and limitations of implementing agile practices. A literature review was conducted to identify the order of agile practices recommended in agile maturity models.

    Results. From the available literature, nine maturity models were extracted together with their sets of recommended agile practices. The results from the survey and the literature were then compared and analyzed to identify commonalities and differences regarding the implementation of agile practices in a certain order. From the survey results, the benefits and limitations of implementing agile practices in a particular order were identified and reported.

    Conclusions. The findings from the literature review and the survey result in an evaluation of the agile maturity models with regard to the implementation of agile practices.

  • 341.
    Dehghannayyeri, Atefeh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Developing a Fluid Flow Model for Mobile Video Transmission in the Presence of Play-Out Hysteresis2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This work focuses on improving video transmission quality over a mobile link. More specifically, the impact of buffering and link outages on the freeze probability of transmitted videos is studied. It introduces a new fluid flow model that provides an approximation of the freeze probability in the presence of play-out hysteresis. The proposed model is used to study the impact of two streaming buffer sizes over different possible combinations of outage parameters (data channel on/off times). The outcome of this thesis shows that outage parameters play a dominant role in freezing of streaming video content, and that an increase in these parameters cannot be easily compensated for by an increase in the size of the receiving buffer. Generally, in most cases when there is a variation in outage parameters, an increased buffer size has a negative impact on the freeze probability. To lower the probability of freeze during video playback over a weak mobile link, it is better to sacrifice resolution just to keep the video content playing. Similarly, shifting focus from off to on times brings better results than increasing buffer size.
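    The interplay of buffering, link outages, and play-out hysteresis described above can be illustrated with a toy discrete-time simulation. The sketch below is a hedged approximation under assumed parameters, not the thesis's fluid flow model: the link alternates between fixed on/off periods, playback freezes when the buffer empties, and resumes only once the fill level crosses a hysteresis threshold.

```python
def freeze_fraction(on_time, off_time, fill_rate, play_rate,
                    buffer_size, resume_threshold, steps=100000):
    """Fraction of time playback is frozen for a two-state on/off link.

    Toy discrete-time sketch: during 'on' periods the buffer fills at
    fill_rate while playback drains it at play_rate; during 'off'
    periods it only drains.  Playback freezes when the buffer runs
    empty and resumes once the fill level reaches resume_threshold
    (the play-out hysteresis).
    """
    buf, playing, frozen = 0.0, False, 0
    for t in range(steps):
        link_on = (t % (on_time + off_time)) < on_time
        if link_on:
            buf = min(buffer_size, buf + fill_rate)
        if not playing and buf >= resume_threshold:
            playing = True  # hysteresis: resume only above the threshold
        if playing:
            if buf >= play_rate:
                buf -= play_rate
            else:
                buf, playing = 0.0, False  # buffer empty -> freeze
        if not playing:
            frozen += 1
    return frozen / steps

# Longer link outages should freeze playback more often:
print(freeze_fraction(50, 10, 2.0, 1.0, 100, 20),
      freeze_fraction(50, 60, 2.0, 1.0, 100, 20))
```

    Consistent with the thesis's observation, stretching the off time in this toy model raises the freeze fraction far more than any realistic buffer enlargement can compensate for.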

  • 342.
    Dehqan, Agri
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    "Writing For the enemy": Kurdish Language standardization online2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The aim of this thesis is to study some of the challenges that the Kurdish language and its standardization face, and offer a bottom-up solution through the “collective intelligence” and “good faith collaboration” of Wikipedia. Therefore, the fragmentation in the Kurdish language—caused by both external factors and those that are inherent to the language itself— is discussed and analyzed. Furthermore, this thesis describes some of the efforts that have been made to unify the Kurdish language, its dialects and its different writing systems. Even though these issues exist both in the physical world as well as online, they are rendered more conspicuous on the Internet. As a result, the problems in Kurdish cross-dialect communication are more pronounced. In spite of that, web 2.0 and its favored platforms for online collaboration provide ample opportunity for the general user of the language to participate in solving such linguistic problems. An overview of Wikipedia, as the world’s most successful platform for online collaboration, is presented along with some of its rules and policies. Additionally, an account of the current Kurdish Wikipedia in three dialects of Kurdish: Kurmanji, Sorani and Zazaki is provided. The situation and shortcomings of Kurdish versions of Wikipedia are examined through two case studies based on two Wikipedia articles in Kurdish and their English and Persian counterparts. Moreover, I argue that a robust Kurdish Wikipedia can be a viable solution for standardizing the language, encouraging orthographic consistency, and unifying Kurdish writing systems and knowledge/information dissemination in Kurdish.

  • 343.
    Delgado, Sergio Mellado
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Velasco, Alberto Díaz
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Indoor Positioning using the Android Platform2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    In recent years, there has been a great increase in the development of wireless technologies and location services, and numerous projects in the location field have arisen. In addition, with the appearance of the open Android operating system, wireless technologies are being developed faster than ever. This project covers the design and development of a system that combines wireless, location, and Android technologies in the implementation of an indoor positioning system. As a result, an Android application has been obtained, which detects the position of a phone in a simple and useful way. The application is based on Android's WifiManager API. It combines the data stored in a SQL database with the Wi-Fi data received at any given time; afterwards, the position of the user is determined with the algorithm that has been implemented. This application is able to obtain the position of any person who is inside a building with Wi-Fi coverage, and display it on the screen of any device running the Android operating system. Besides estimating the position, the system displays a map that shows in which quadrant of the room the user is positioned in real time. The system has been designed with a simple interface so that people without technical knowledge can use it. Finally, several tests and simulations of the system have been carried out to assess its operation and accuracy. The performance of the system was verified in two different places, and changes were made to the Java code to improve its precision and effectiveness. The tests showed that the placement of the access points (APs) and the configuration of the wireless network are important factors that should be taken into account to minimize interference and errors in the estimation of the position.
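    A common way to realize this kind of system is Wi-Fi fingerprinting: received signal strengths (RSSI) from several access points are stored per room quadrant in a database, and each live scan is matched against the stored fingerprints. The sketch below is illustrative Python with made-up fingerprint values, not the thesis's Android/Java implementation; it uses a nearest-neighbour match with Euclidean distance over per-AP RSSI values.

```python
import math

# Hypothetical fingerprint database: quadrant -> RSSI (dBm) per access point.
FINGERPRINTS = {
    "NW": {"ap1": -40, "ap2": -70, "ap3": -80},
    "NE": {"ap1": -70, "ap2": -40, "ap3": -80},
    "SW": {"ap1": -80, "ap2": -70, "ap3": -40},
    "SE": {"ap1": -60, "ap2": -60, "ap3": -60},
}

def locate(scan):
    """Return the quadrant whose stored fingerprint is closest
    (Euclidean distance over per-AP RSSI values) to the live scan.
    Access points missing from the scan count as a weak -100 dBm."""
    def dist(fingerprint):
        return math.sqrt(sum((scan.get(ap, -100) - rssi) ** 2
                             for ap, rssi in fingerprint.items()))
    return min(FINGERPRINTS, key=lambda quadrant: dist(FINGERPRINTS[quadrant]))

print(locate({"ap1": -42, "ap2": -68, "ap3": -79}))  # → NW
```

    In a real deployment the database would be built in an offline calibration phase, one averaged scan per quadrant, which is also where AP placement and interference matter most.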

  • 344.
    Demir, Muhammed Fatih
    et al.
    Karatay Üniversitesi, TUR.
    Cankirli, Aysenur
    Karatay Üniversitesi, TUR.
    Karabatak, Begum
    Turkcell, Nicosia, CYP.
    Yavariabdi, Amir
    Karatay Üniversitesi, TUR.
    Mendi, Engin
    Karatay Üniversitesi, TUR.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Real-Time Resistor Color Code Recognition using Image Processing in Mobile Devices2018In: 9th International Conference on Intelligent Systems 2018: Theory, Research and Innovation in Applications, IS 2018 - Proceedings / [ed] JardimGoncalves, R; Mendonca, JP; Jotsov, V; Marques, M; Martins, J; Bierwolf, R, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 26-30Conference paper (Refereed)
    Abstract [en]

    This paper proposes a real-time video analysis algorithm to read the resistance value of a resistor using a color recognition technique. To achieve this, firstly, a nonlinear filter is applied to the input video frame to smooth intensity variations and remove impulse noise. After that, a photometric invariants technique is employed to transform the video frame from the RGB color space to the Hue-Saturation-Value (HSV) color space, which decreases the sensitivity of the proposed method to illumination changes. Next, a region of interest is defined to automatically detect the resistor's colors, and then a Euclidean distance based clustering strategy is employed to recognize the color bars. The proposed method provides a wide range of color classification, covering twelve colors. In addition, it requires relatively little computation time, which makes it suitable for real-time mobile video applications. The experiments were performed on a variety of test videos, and the results show that the proposed method has a low error rate compared to other resistor color code recognition mobile applications. © 2018 IEEE.
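    The colour-bar classification step can be sketched as follows. This is a minimal Python illustration under assumed HSV reference values, not the authors' implementation: each band pixel is converted from RGB to HSV and labelled with the reference colour at the smallest Euclidean distance in HSV space; the digit values then follow the standard resistor colour code.

```python
import colorsys

# Assumed HSV reference points for a few band colours (H, S, V in [0, 1]).
REFERENCE = {
    "black":  (0.00, 0.00, 0.05),
    "brown":  (0.08, 0.75, 0.40),
    "red":    (0.00, 0.90, 0.80),
    "orange": (0.08, 0.90, 0.95),
    "yellow": (0.16, 0.90, 0.95),
    "green":  (0.33, 0.90, 0.60),
    "blue":   (0.60, 0.90, 0.70),
}
DIGIT = {"black": 0, "brown": 1, "red": 2, "orange": 3,
         "yellow": 4, "green": 5, "blue": 6}

def classify(rgb):
    """Label an (r, g, b) pixel (0-255) with the nearest reference
    colour by (squared) Euclidean distance in HSV space.
    Note: pure reds can wrap to hue ~1.0; a robust version would
    handle that hue wrap-around explicitly."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return min(REFERENCE, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip((h, s, v), REFERENCE[name])))

def resistance(band_pixels):
    """First two bands are digits, the third is the multiplier (ohms)."""
    d1, d2, mult = (DIGIT[classify(p)] for p in band_pixels)
    return (10 * d1 + d2) * 10 ** mult

print(resistance([(200, 40, 30), (200, 40, 30), (120, 60, 20)]))  # → 220 (red-red-brown)
```

    The paper's pipeline additionally denoises the frame and restricts classification to a detected region of interest; this sketch covers only the nearest-colour decision.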

  • 345.
    Demirsoy, Ali
    et al.
    Borsa Istanbul, TUR.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Semantic Knowledge Management System to Support Software Engineers: Implementation and Static Evaluation through Interviews at Ericsson2018In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 12, no 1, p. 237-263Article in journal (Refereed)
    Abstract [en]

    Background: In large-scale corporations in the software engineering context, information overload problems occur as stakeholders continuously produce useful information on process life-cycle issues, matters related to specific products under development, etc. Information overload makes finding relevant information (e.g., how did the company apply the requirements process for product X?) challenging, which is the primary focus of this paper. Contribution: In this study the authors aimed at evaluating the ease of implementing a semantic knowledge management system at Ericsson, including the essential components of such systems (such as text processing, ontologies, semantic annotation and semantic search). Thereafter, feedback on the usefulness of the system was collected from practitioners. Method: A single case study was conducted at a development site of Ericsson AB in Sweden. Results: It was found that semantic knowledge management systems are challenging to implement; this refers in particular to the implementation and integration of ontologies. Specific ontologies for structuring and filtering are essential, such as domain ontologies and ontologies distinct to the organization. Conclusion: To be readily adopted and transferable to practice, the desired ontologies need to be implemented and integrated into semantic knowledge management frameworks with ease, given that the desired ontologies depend on organizations and domains.

  • 346.
    Dennis, Rojas
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Värdeskapande i agil systemutveckling: En komparativ studie mellan mjukvaruverksamheter i Karlskronaregionen och om hur de ser på värdeskapande2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates how five companies interpret and deliver value in their software processes. The analysis uses the Software Value Map, a model that can serve as a tool for decision making in value creation. The purpose is to understand how different decisions affect the value of each delivery and product. By studying economic and decision theories, we understand the importance and impact they have on value creation when products are developed. The results of this study show that local businesses prioritize customer-based value aspects to generate value. There are also similarities and differences among the companies and their staff in how they weigh the different aspects that generate value.

  • 347.
    Devagiri, Vishnu Manasa
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Splicing Forgery Detection and the Impact of Image Resolution2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: There has been a rise in the usage of digital images these days. Digital images are being used in many areas, such as medicine and warfare. As images are used to make many important decisions, it is necessary to know whether the images used are clean or forged. In this thesis, we consider the area of splicing forgery, and we also analyze the impact of low-resolution images on the considered algorithms.

    Objectives. Through this thesis, we try to improve the detection rate of splicing forgery detection. We also examine how the examined splicing forgery detection algorithm performs on low-resolution images and with the considered classification algorithms (classifiers).

    Methods: The research methods used in this research are implementation and experimentation. Implementation was used to answer the first research question, i.e., to improve the detection rate in splicing forgery. Experimentation was used to answer the second research question. The results of the experiment were analyzed using statistical analysis to find out how the examined algorithm works on different image resolutions and with the considered classifiers.

    Results: A one-tailed Wilcoxon signed rank test was conducted to compare which algorithm performs better. The T+ value obtained was less than the critical value T0, so the null hypothesis was rejected and the alternative hypothesis, which states that Algorithm 2 (our enhanced version of the algorithm) performs better than Algorithm 1 (the original algorithm), was accepted. Experiments were conducted, and the accuracy of the algorithms in different cases was noted; ROC curves were plotted to obtain the AUC parameter. The accuracy and AUC parameters were used to determine the performance of the algorithms.
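    The decision rule reported here (reject the null when the observed T+ falls below the critical value) rests on the signed-rank statistic, which can be computed as in this illustrative sketch on toy paired data, not the thesis's measurements: rank the non-zero absolute differences (averaging ranks on ties) and sum the ranks belonging to positive differences.

```python
def wilcoxon_t_plus(sample_a, sample_b):
    """T+ statistic of the Wilcoxon signed-rank test for paired samples."""
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    ordered = sorted(diffs, key=abs)
    # Assign the average rank to each group of tied absolute values.
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        avg = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[abs(ordered[k])] = avg
        i = j
    # T+ sums the ranks of the positive differences.
    return sum(ranks[abs(d)] for d in diffs if d > 0)

print(wilcoxon_t_plus([10, 12, 9, 14, 13], [8, 11, 10, 9, 13]))  # → 8.5
```

    A sufficiently small T+ for the direction being tested, below the tabulated one-tailed critical value at the chosen significance level, rejects the null hypothesis of no difference.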

    Conclusions: After the results were analyzed using statistical analysis, we came to the conclusion that Algorithm 2 performs better than Algorithm 1 in detecting forged images. It was also observed that Algorithm 1 improves its performance on low-resolution images when trained on original images and tested on images of different resolutions, whereas Algorithm 2 improves its performance when trained and tested on images of the same resolution. There was not much variance in the performance of either algorithm across images of different resolutions. As for the classifiers, Algorithm 1 improves its performance when using a linear SVM, whereas Algorithm 2 improves its performance when using a simple tree classifier.

  • 348.
    Devagiri, Vishnu Manasa
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Splicing Forgery Detection and the Impact of Image Resolution2017In: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE - ECAI 2017, IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    With the development of the Internet and the increase in online storage space, there has been an explosion in the volume of videos and images circulating online. An important part of digital forensics tasks is to scrutinise some of these images to make important decisions. Digital tampering of images can impede the reliability of these decisions. Through this paper we attempt to improve the detection rate of splicing forgery. We also examine how well the examined splicing forgery detection algorithm works on low-resolution images. In this paper, the aim is to enhance the accuracy of an existing algorithm. A one-tailed Wilcoxon signed rank test was utilised to compare the performance of the different algorithms.

  • 349.
    Devulapally, Gopi Krishna
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Agile in the context of Software Maintenance: A Case Study2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Adopting agile practices has proven successful for many practitioners, both academically and practically, in the development scenario. However, the adoption of agile practices during software maintenance has only been partially studied, with a focus mostly on benefits. The success factors of agile practices during development cannot be transferred directly to maintenance, as maintenance differs from development in many aspects. The context of this research is to study the adoption of different agile practices during software maintenance.

    Objectives: In this study, an attempt has been made to accomplish the following objectives: firstly, to identify the different agile practices that are adopted in practice during software maintenance; secondly, to identify the advantages and disadvantages of adopting those agile practices during software maintenance.

    Methods: To accomplish the objectives, a case study was conducted at Capgemini, Mumbai, India. Data was collected by conducting two rounds of interviews across five different projects that have adopted agile practices during software maintenance. A closed-ended questionnaire and an open-ended questionnaire were used for the first and second rounds of interviews, respectively; different questionnaires were chosen because each round aimed to accomplish different research objectives. Apart from the interviews, direct observation of the agile practices in each project was performed to achieve data triangulation. Finally, a validation survey was conducted among the second-round interview participants and other practitioners from outside the case study to validate the data collected during the second round of interviews.

    Results: A simple literature review identified 30 agile practices adopted during software maintenance. Analysis of the first round of interviews identified 22 practices as mostly adopted and used during software maintenance. The results of adopting those agile practices are categorized as advantages and disadvantages. In total, 12 advantages and 8 disadvantages were identified through the second-round interviews and validated through the validation survey, respectively. Finally, a cause-effect relationship is drawn between the identified agile practices and their consequences.

    Conclusions: Adopting agile practices has both positive and negative results. Adopting agile practices during perfective and adaptive maintenance has more advantages, but adopting them during corrective maintenance may not have as many advantages as for the other types of maintenance. Hence, one has to consider the type of maintenance work before adopting agile practices during software maintenance.

  • 350.
    Diebold, Philipp
    et al.
    Fraunhofer IESE, GER.
    Mendez, Daniel
    Technische Universitat Munchen, GER.
    Wagner, Stefan
    Universitat Stuttgart, GER.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Results of the 2nd international workshop on the impact of agile practices (ImpAct 2017)2017In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2017, Vol. F129907Conference paper (Refereed)
    Abstract [en]

    At present, agile development is a dominant development process in software engineering. Yet, due to different contexts, agile methods also require adaptations (e.g., Scrum-but). Since adaptation means adding, modifying or dropping some agile elements, it is important to know what the effects and importance of these elements are. Given the weak state of empirical evidence in this area, we initiated the workshop series on the Impact of Agile Practices (ImpAct). This paper provides a summary of the second workshop of this series, especially its lightning talks and discussions. The major outcomes include interesting observations, such as negatively rated practices and contradicting experiences, as well as follow-up activities organized in a roadmap. © 2017 ACM.
