251 - 300 of 1681
  • 251.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Resilient Smart Grids. 2014. In: 2014 First International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S&T), IEEE, 2014, p. 79-82. Conference paper (Refereed)
    Abstract [en]

    The usefulness of configurable and shared experiment platforms in the design and implementation of future Resilient Smart Grids is demonstrated. A set of antagonistic threats is identified, and remotely controlled experiments to harness them are presented and assessed.

  • 252.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    The art of war in the cyber world. 2018. In: 2017 4th International Scientific-Practical Conference Problems of Infocommunications Science and Technology, PIC S and T 2017 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018, p. 42-44. Conference paper (Refereed)
    Abstract [en]

    The paper focuses on cyber weapons used in Advanced Persistent Threat (APT) attacks in present and future cyber warfare. The combined use of propaganda and cyber warfare supports military operations on the ground and is exemplified with the ongoing Russian hybrid warfare in Ukraine. New models and methods to develop future trustworthy critical infrastructures in our societies are presented. Some mitigation ideas to meet the challenges of future hybrid warfare are also discussed. © 2017 IEEE.

  • 253.
    Carlsson, Anders
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Kuzminykh, Ievgeniia
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Gustavsson, Rune
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Virtual Security Labs Supporting Distance Education in ReSeLa Framework. 2019. In: Advances in Intelligent Systems and Computing / [ed] Auer M.E., Tsiatsos T., Springer Verlag, 2019, Vol. 917, p. 577-587. Conference paper (Refereed)
    Abstract [en]

    To meet the high demand for educating the next generation of MSc students in cyber security, we propose a well-composed curriculum and a configurable cloud-based learning support environment, ReSeLa. The proposed system is a result of the EU TEMPUS project ENGENSEC and has been extensively validated and tested. © 2019, Springer Nature Switzerland AG.

  • 254.
    Carlsson, Oskar
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Nabhani, Daniel
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    User and Entity Behavior Anomaly Detection using Network Traffic. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 255.
    Carlström, Elin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Danbrant, Sofie
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Att bryta den Fjärde Väggen: Metalepsis i spel [Breaking the Fourth Wall: Metalepsis in Games]. 2019. Independent thesis, Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The act of “breaking the fourth wall” in video games, addressing the player from beyond the game's scope, is something that has interested us for a long time. Searching the internet for the fourth wall in video games proved futile until we stumbled upon the term metalepsis. Metalepsis literally means “a jump across”; it originates from narratology and concerns how the different levels of a narrative are managed. In this Bachelor thesis we take a deep dive into the term, with the help of Karin Kukkonen's and Sonja Klimek's work, and into how it is used in different media such as literature, film, music videos and video games. With the help of Astrid Ensslin's and Huaxin Wei's work, we discovered why metalepsis in video games is an uncharted area and why studies of narrative in video games remain shallow. With our research question we want to contribute more information about metalepsis in video games and to examine a conscious use of metalepsis in the development of a video game. By surveying metalepsis in video games and other media, we have built a picture of how it has been used so far and then applied those observations to our own video game.

    This thesis focuses on a metalepsis called the Möbius Strip, which consists of a narrative that restarts within itself. Using Georg Kreisler's book Der Schattenspringer, we have made an interpretation of how the Möbius Strip could play out. We have then developed a video game based on this interpretation and compared the results to the book's version of the metalepsis.

    What this thesis contributes is more tangible examples of the use of metalepsis in video games so far. With the help of Sonja Klimek's description of the metalepsis, we have been able to discuss how our interpretation of the Möbius Strip relates to the book. We have only managed to scratch the surface of metalepsis in video games and believe there are multiple other avenues to investigate.

  • 256.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A study on performance measures for auto-scaling CPU-intensive containerized applications. 2019. In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543. Article in journal (Refereed)
    Abstract [en]

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure to activate auto-scaling actions aimed at guaranteeing QoS constraints. First, the correlation between absolute and relative usage measures, and how a resource allocation decision can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share that each container has of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses the absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
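
    A minimal sketch of the distinction the abstract draws, with invented numbers rather than the paper's data: an absolute measure relates container CPU time to total host capacity, while a relative measure relates it to the CPU actually consumed, and the two can diverge sharply.

        # Illustrative only: absolute vs. relative CPU usage measures.
        def absolute_usage(container_cpu_s, interval_s, host_cores):
            # fraction of the host's total CPU capacity used by the container
            return container_cpu_s / (interval_s * host_cores)

        def relative_usage(container_cpu_s, all_containers_cpu_s):
            # container's share of the CPU time consumed by all containers
            return container_cpu_s / all_containers_cpu_s

        # One container used 2 CPU-seconds in a 10 s window on a 4-core host,
        # while all containers together used 2.5 CPU-seconds.
        print(absolute_usage(2.0, 10.0, 4))  # 0.05 -> host is mostly idle
        print(relative_usage(2.0, 2.5))      # 0.80 -> looks heavily loaded

    A scaler fed the relative figure would scale out even though the host is nearly idle, which is the kind of misjudgement the paper's analysis is concerned with.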

  • 257.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Cardellini, Valeria
    University of Rome, ITA.
    Interino, Gianluca
    University of Rome, ITA.
    Palmirani, Monica
    University of Bologna, ITA.
    Research challenges in legal-rule and QoS-aware cloud service brokerage. 2018. In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 78, Part 1, p. 211-223. Article in journal (Refereed)
    Abstract [en]

    The ICT industry, and specifically critical sectors such as healthcare, transportation, energy and government, require as mandatory the compliance of ICT systems and services with legislation and regulation, as well as with standards. In the era of cloud computing, this compliance management issue is exacerbated by the distributed nature of the system and by the limited control that customers have over the services. Today, the cloud industry is aware of this problem (as evidenced by the compliance programs of many cloud service providers), and the research community is addressing the many facets of the legal-rule compliance checking and quality assurance problem. Cloud service brokerage plays an important role in legislation compliance and QoS management of cloud services. In this paper we discuss our experience in designing a legal-rule and QoS-aware cloud service broker, and we explore related research issues. Specifically, we provide three main contributions to the literature: first, we describe the detailed design architecture of the legal-rule and QoS-aware broker. Second, we discuss our design choices, which rely on state-of-the-art solutions available in the literature. We cover four main research areas: cloud broker service deployment, seamless cloud service migration, cloud service monitoring, and legal-rule compliance checking. Finally, from the literature review in these research areas, we identify and discuss research challenges.

  • 258.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Energy-Aware Adaptation in Managed Cassandra Datacenters. 2016. In: Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC / [ed] Gupta I., Diao Y., IEEE, 2016, p. 60-71. Conference paper (Refereed)
    Abstract [en]

    Today, Apache Cassandra, a highly scalable and available NoSQL datastore, is widely used by enterprises of all sizes and for application areas that range from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine tuning and operation of Cassandra datacenters. As for all complex services, human-assisted management of a multi-tenant Cassandra datacenter is unrealistic; rather, there is a growing demand for autonomic management solutions. In this paper, we present an optimal energy-aware adaptation model for managed Cassandra datacenters that modifies the system configuration by orchestrating three different actions: horizontal scaling, vertical scaling and energy-aware placement. The model is built from a real case based on real application data from Ericsson AB. We compare the performance of the optimal adaptation with two heuristics that avoid system perturbations due to re-configuration actions triggered by the subscription of new tenants and/or changes in the SLA. One heuristic is local optimisation; the second is a best-fit decreasing algorithm, selected as a reference point because it is representative of a wide range of research and practical solutions. The main finding is that the heuristics' performance depends on the scenario and workload, and neither dominates in all cases. Moreover, in high-load scenarios, the suboptimal system configuration obtained with a heuristic adaptation policy introduces a penalty in electric energy consumption in the range [+25%, +50%] compared with the energy consumed by an optimal system configuration.
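
    The best-fit decreasing heuristic mentioned as the reference point can be sketched as plain bin packing; the demands and capacity below are hypothetical, not the paper's adaptation model:

        # Generic best-fit-decreasing placement sketch.
        def best_fit_decreasing(demands, host_capacity):
            hosts = []       # remaining capacity per open host
            placement = {}
            for vdc, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
                # best fit: the open host that leaves the least slack
                fits = [(cap - demand, i) for i, cap in enumerate(hosts) if cap >= demand]
                if fits:
                    _, i = min(fits)
                    hosts[i] -= demand
                else:
                    hosts.append(host_capacity - demand)  # open a new host
                    i = len(hosts) - 1
                placement[vdc] = i
            return placement, len(hosts)

        demands = {"tenant1": 6, "tenant2": 3, "tenant3": 4, "tenant4": 2}
        print(best_fit_decreasing(demands, host_capacity=8))  # uses 2 hosts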

  • 259.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Energy-aware Auto-scaling Algorithms for Cassandra Virtual Data Centers. 2017. In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543, Vol. 20, no. 3, p. 2065-2082. Article in journal (Refereed)
    Abstract [en]

    Apache Cassandra is a highly scalable and available NoSQL datastore, widely used by enterprises of all sizes and for application areas that range from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine tuning and operation of Cassandra Virtual Data Centers (VDCs). This paper addresses the problem of energy-efficient auto-scaling of Cassandra VDCs in managed Cassandra data centers. We propose three energy-aware auto-scaling algorithms: Opt, LocalOpt and LocalOpt-H. The first provides the optimal scaling decision, orchestrating horizontal and vertical scaling and optimal placement. The other two are heuristics and provide sub-optimal solutions: both orchestrate horizontal scaling and optimal placement, and LocalOpt also considers vertical scaling. In this paper we provide an analysis of the computational complexity of the optimal and heuristic auto-scaling algorithms; we discuss the issues in auto-scaling Cassandra VDCs and provide best practices for using auto-scaling algorithms; and we evaluate the performance of the proposed algorithms under programmed SLA variation, unexpected surges of throughput and failures of physical nodes. We also compare the performance of the energy-aware auto-scaling algorithms with that of two energy-blind auto-scaling algorithms, namely BestFit and BestFit-H. The main findings are: VDC allocation aimed at reducing energy consumption, or resource usage in general, can heavily reduce the reliability of Cassandra in terms of the consistency level offered; horizontal scaling of Cassandra is very slow and makes it hard to manage surges of throughput; vertical scaling is a valid alternative, but it is not supported by all cloud infrastructures.

  • 260.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Optimal adaptation for Apache Cassandra. 2016. In: SoSeMC workshop at 13th IEEE International Conference on Autonomic Computing / [ed] IEEE, IEEE Computer Society, 2016. Conference paper (Refereed)
  • 261.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    An Energy-Aware Adaptation Model for Big Data Platforms. 2016. In: 2016 IEEE International Conference on Autonomic Computing (ICAC) / [ed] IEEE, IEEE, 2016, p. 349-350. Conference paper (Refereed)
    Abstract [en]

    Platforms for big data include mechanisms and tools to model, organize, store and access big data (e.g. Apache Cassandra, HBase, Amazon SimpleDB, Dynamo, Google BigTable). Resource management for those platforms is a complex task and must also account for multi-tenancy and infrastructure scalability. Human-assisted control of big data platforms is unrealistic, and there is a growing demand for autonomic solutions. In this paper we propose a QoS- and energy-aware adaptation model designed to cope with the real case of a Cassandra-as-a-Service provider.

  • 262.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    Spindox S.p.A, ITA.
    Auto-scaling of Containers: The Impact of Relative and Absolute Metrics. 2017. In: 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems, FAS*W 2017 / [ed] IEEE, IEEE, 2017, p. 207-214, article id 8064125. Conference paper (Refereed)
    Abstract [en]

    Today, the cloud industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management. This paper addresses the problem of selecting the most appropriate performance metrics to activate auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions. We propose and evaluate the performance of a new auto-scaling algorithm that can reduce the response time by a factor between 0.66 and 0.5 compared to Kubernetes' current horizontal auto-scaling algorithm.
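
    For context, Kubernetes' horizontal autoscaler follows, to the best of our knowledge, a proportional rule of the form desired = ceil(current * metric / target); the paper's contribution lies in which metric is fed into such a rule. A hedged sketch:

        import math

        def desired_replicas(current_replicas, current_metric, target_metric):
            # proportional scaling rule; max() keeps at least one replica
            return max(1, math.ceil(current_replicas * current_metric / target_metric))

        # 4 containers observed at 90% CPU against a 60% target -> scale out to 6
        print(desired_replicas(4, 0.90, 0.60))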

  • 263.
    Casalicchio, Emiliano
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Perciballi, Vanessa
    University of Rome, ITA.
    Measuring Docker Performance: What a Mess!!! 2017. In: ICPE 2017 - Companion of the 2017 ACM/SPEC International Conference on Performance Engineering, ACM, 2017, p. 11-16. Conference paper (Refereed)
    Abstract [en]

    Today, a new technology is changing the way platforms for the internet of services are designed and managed. This technology is the container (e.g. Docker and LXC). The internet of services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as a base technology for large-scale systems opens many challenges in the area of run-time resource management, for example auto-scaling, optimal deployment and monitoring. Monitoring of container-based systems, specifically, is the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.

  • 264.
    Cavallin, Fritjof
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Pettersson, Timmie
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Real-time View-dependent Triangulation of Infinite Ray Cast Terrain. 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and does not generate a mesh representation, which may be useful in certain use cases.

    Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time without making use of any preprocessed data, and compare the performance and visual quality of the implementation with that of a ray marched solution.

    Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm.

    Results. In all tests performed, the proposed method performs better in terms of frame rate than the ray marched version. The visual similarity between the two methods depends heavily on the quality setting of the triangulation.

    Conclusions. The proposed method can perform better than a ray marched version, but is more reliant on CPU processing, and can suffer from visual popping artifacts as the terrain is refined.
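
    The ray marching baseline described in the Background can be sketched in a few lines; the height function and fixed step size below are placeholders, not the authors' implementation:

        import math

        def height(x, z):  # stand-in procedural height field
            return 5.0 * math.sin(0.1 * x) * math.cos(0.1 * z)

        def ray_march(origin, direction, step=0.5, max_dist=500.0):
            t = 0.0
            while t < max_dist:
                px, py, pz = (origin[i] + direction[i] * t for i in range(3))
                if py <= height(px, pz):
                    return t       # approximate hit distance
                t += step
            return None            # ray escaped without hitting terrain

        print(ray_march((0.0, 20.0, 0.0), (0.0, -0.05, 1.0)))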

  • 265.
    Chadalapaka, Gayatri
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Assessment of Spectrum Sharing Systems: with Service Differentiation. 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 266.
    Chai, Yi
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    A novel progressive mesh representation method based on the half-edge data structure and √3 subdivision. 2015. Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    Progressive mesh (PM) representation can meet the requirements of generating multiple resolutions for a detailed 3D model. This research proposes a new PM representation method to improve PM storage efficiency and reduce PM generation time. In existing PM representation methods, more than four adjacent vertices are stored for each vertex in the PM representation. Furthermore, these methods use the inefficient vertex and face list representation during the generation process. In our proposed method, only three vertices are stored by using the √3 subdivision scheme, and the efficient half-edge data structure replaces the vertex and face list representation. To evaluate the proposed method, a designed experiment is conducted using three common test 3D models. The results illustrate the improvements compared with previous methods.
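
    The half-edge structure at the core of the proposal stores, for each directed edge, its origin vertex, its opposite half-edge, its successor, and its incident face. A bare-bones sketch follows; boundary handling and the √3 subdivision rules are omitted:

        class HalfEdge:
            __slots__ = ("origin", "twin", "next", "face")
            def __init__(self, origin):
                self.origin = origin  # index of the vertex this half-edge leaves
                self.twin = None      # oppositely oriented half-edge
                self.next = None      # next half-edge around the same face
                self.face = None      # face on the left of this half-edge

        def face_vertices(he):
            # walk one triangle and collect its three vertex indices
            return [he.origin, he.next.origin, he.next.next.origin]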

  • 267.
    Chalasani, Trishala
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Automated Assessment for the Therapy Success of Foreign Accent Syndrome: Based on Emotional Temperature. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Foreign Accent Syndrome (FAS) is a rare neurological disorder in which, among other symptoms, the patient's emotional speech is affected. As FAS is one of the mildest speech disorders, there has not been much research on cost-effective biomarkers that reflect the recovery of speech competences.

    Objectives. In this pilot study, we implement the Emotional Temperature biomarker and check its validity for assessing FAS. We compare the results of the implemented biomarker with another biomarker based on global distances for FAS and identify the better one.

    Methods. To reach the objective, the emotional speech data of two patients at different phases of treatment are considered. After preprocessing, experiments are performed on various window sizes, and the correctly classified instances observed in automatic recognition are used to calculate the Emotional Temperature. Further, we use the better biomarker for tracking the recovery of the patients' speech.

    Results. The Emotional Temperature of the patients is calculated and compared with the ground truth and with that of the other biomarker. The Emotional Temperature is also calculated to track the emergence of compensatory skills in speech.

    Conclusions. A biomarker based on the frame view of the speech signal has been implemented. The implementation uses a state-of-the-art feature set and is thus an improved version of the classical Emotional Temperature. The biomarker has been used to automatically assess the recovery of two patients diagnosed with FAS. It has been compared against the global-view biomarker and has advantages over it. It has also been compared to human evaluations and captures the same dynamics.

  • 268.
    Chamala, Navneet Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reasons Governing the Adoption and Denial of TickITplus: A Survey. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Software Process Improvement (SPI) initiatives like the Capability Maturity Model (CMM), Capability Maturity Model Integration (CMMI), Bootstrap, etc. have been developed with the primary agenda of continuous software process improvement. Similarly, about two decades ago, the United Kingdom Accreditation Service (UKAS) laid down a set of guidelines based on the ISO quality standards for certifying organizations, named TickIT. TickIT is now obsolete; its successor scheme, TickITplus, has taken its place with significant additions. All companies certified under the TickIT guidelines (more than 1000) were asked to move to TickITplus in order to keep their certification. However, three years after the inception of TickITplus, only 70 companies have adopted it, far below the number of TickIT-certified organizations. This thesis investigates why most companies have not adopted TickITplus and why the 70 organizations that did chose to move.

    Objectives. In this study, an attempt has been made to accomplish the following objectives: identify the changes introduced in the new scheme; identify the factors a software organization considers when adopting or migrating to a new software quality certification scheme; validate these factors with the help of a survey and interviews; and analyze the results of the survey and interviews to provide the reasons why most organizations haven't adopted the TickITplus certification scheme.

    Methods. This research uses a mixed-method approach incorporating both quantitative and qualitative research methods. An online survey was conducted with the help of an online questionnaire as the quantitative leg; two survey questionnaires were framed to gather responses. For the qualitative leg, interviews were conducted to gain a wider understanding of the factors that led an organization to migrate or not to migrate to TickITplus. The gathered data were analyzed using statistical methods, bivariate and univariate analysis for the quantitative part, while thematic coding was applied for the qualitative part. Triangulation was used to validate the data by correlating the results from the survey and interviews with those extracted from the literature review.

    Results. Reasons why companies have moved to TickITplus, and why others haven't taken it up, were gathered from the survey and interviews. High costs and low customer demand were identified as the main reasons for organizations not to choose TickITplus, while the organizations that moved to TickITplus did so mainly based on customer requirements. A few other reasons besides these were also identified and are presented in this document.

    Conclusions. The conclusions cite the cost incurred in implementing TickITplus, which was considered very expensive, as a main reason for not selecting it. Customer demand was also low, which was identified as another factor behind the relatively small number of TickITplus-certified organizations. On the other hand, among the TickITplus-certified firms, customer demand was the prominent reason for moving to TickITplus, and the lack of appropriate people to take up the work was considered an important hindrance while implementing it. Several other reasons and challenges were also identified and are detailed in the document.

  • 269.
    Chapala, Usha Kiran
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Peteti, Sridhar
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Continuous Video Quality of Experience Modelling using Machine Learning Model Trees. 1996. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Adaptive video streaming is perpetually influenced by unpredictable network conditions, which cause playback interruptions like stalling, rebuffering and video bit rate fluctuations. This leads to potential degradation of end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts end users' QoE under these unstable conditions is therefore of immediate interest, and the service provider needs root cause analysis for these degradations. Such sudden changes in trend are not visible from monitoring data of the underlying network service, which makes it challenging to detect the change and model the instantaneous QoE. For this modelling, continuous-time QoE ratings are considered rather than a single overall QoE rating per video. To reduce the risk of users churning, network providers should give the best quality to their users.

    In this thesis, we propose QoE modelling to analyze changes in user reactions over time using machine learning models. The machine learning models are used to predict the QoE ratings and the change patterns in the ratings. We test the model on a publicly available video quality dataset which contains users' subjective QoE ratings for network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. M5P produces mathematical equations at its leaves, which lead to further insights. The results show that the model tree is a good approach for predicting continuous QoE and for detecting change points in the ratings, and indicate to what extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights by examining the exponential transitions between different levels of predicted ratings. The outcome explains the user behavior: when quality decreases, user ratings decrease faster than they increase when quality improves over time. The model tree thereby supports earlier work on exponential transitions of instantaneous QoE over time in the context of user reactions to sudden changes such as video freezes.
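
    M5P itself ships with Weka; as a rough Python stand-in, a regression tree fit on invented per-window features illustrates the continuous-rating prediction setup. The feature names and data below are ours, not the thesis dataset:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 2))            # [bitrate_norm, stall_ratio]
        y = 4.0 * X[:, 0] - 3.0 * X[:, 1] + 1.0   # synthetic continuous rating
        model = DecisionTreeRegressor(max_depth=3).fit(X, y)
        print(model.predict([[0.8, 0.1]]))        # rating for one new window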

  • 270.
    Charla, Shiva Bhavani Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Examining Various Input Patterns Effecting Software Application Performance: A Quasi-experiment on Performance Testing. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Nowadays, non-functional testing has a great impact on the real-time environment. Non-functional testing helps to analyze the performance of an application on both server and client. Load testing attempts to cause the system under test to respond incorrectly in a situation that differs from its normal operation but is rarely encountered in real-world use; examples include providing abnormal inputs to the software or placing real-time software under unexpectedly high loads. High loads are typically induced on the application to test its performance, but a particular pattern of low load could also stress a real-time system. For example, repeatedly making a request to the system every 11 seconds might cause a fault if the system transitions to a standby state after 10 seconds of inactivity. The primary aim of this study is to find various low-load input patterns affecting the software, rather than simply high-load inputs. A quasi-experiment was chosen as the research method for this study. Performance testing was performed on a web application with the help of a tool called HP LoadRunner. A comparison was made between low-load and high-load patterns to analyze the performance of the application and to identify bottlenecks under different loads.
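
    The 11-second example from the abstract amounts to a periodic low-load driver like the following sketch; the URL and timings are placeholders, not the studied application:

        import time
        import urllib.request

        URL = "http://example.com/app"  # hypothetical system under test

        def low_load_pattern(n_requests=10, period_s=11):
            # one request every 11 s, just past a supposed 10 s standby timeout,
            # so the system keeps transitioning out of standby
            for _ in range(n_requests):
                t0 = time.time()
                with urllib.request.urlopen(URL, timeout=30) as resp:
                    resp.read()
                print(f"latency {time.time() - t0:.2f}s")
                time.sleep(period_s)

        low_load_pattern()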

  • 271.
    Chatzipetrou, Panagiota
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Alégroth, Emil
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Papatheocharous, Efi
    RISE SICS AB, SWE.
    Borg, Markus
    RISE SICS AB, SWE.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Component selection in Software Engineering: Which attributes are the most important in the decision process? 2018. In: EUROMICRO Conference Proceedings, IEEE Conference Proceedings, 2018, p. 198-205. Conference paper (Refereed)
    Abstract [en]

    Component-based software engineering is a common approach to develop and evolve contemporary software systems, where different component sourcing options are available: 1) software developed internally (in-house), 2) software development outsourced, 3) commercial off-the-shelf software, and 4) open source software. However, there is little available research on which attributes of a component are the most important ones when selecting new components. The objective of the present study is to investigate what matters the most to industry practitioners during component selection. We conducted a cross-domain anonymous survey with industry practitioners involved in component selection. First, the practitioners selected the most important attributes from a list. Next, they prioritized their selection using the Hundred-Dollar ($100) test. We analyzed the results using Compositional Data Analysis. The descriptive results showed that Cost was clearly considered the most important attribute during component selection. Other important attributes for the practitioners were: Support of the component, Longevity prediction, and Level of off-the-shelf fit to product. Next, an exploratory analysis was conducted based on the practitioners' inherent characteristics, using nonparametric tests and biplots. It appears that smaller organizations and less mature products focus on different attributes than bigger organizations and mature products, which focus more on Cost.
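
    The Hundred-Dollar test mentioned in the abstract has each respondent distribute exactly 100 points across the attributes; a small aggregation sketch with invented responses:

        # Invented responses, for illustration only.
        responses = [
            {"cost": 50, "support": 30, "longevity": 20},
            {"cost": 40, "support": 20, "longevity": 40},
            {"cost": 70, "support": 10, "longevity": 20},
        ]

        def mean_allocation(responses):
            totals = {}
            for r in responses:
                assert sum(r.values()) == 100  # each respondent spends $100
                for attr, points in r.items():
                    totals[attr] = totals.get(attr, 0) + points
            return {a: p / len(responses) for a, p in sorted(totals.items())}

        print(mean_allocation(responses))  # mean points per attribute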

  • 272.
    Chatzipetrou, Panagiota
    et al.
    Aristotle Univ Thessaloniki, Dept Informat, GR-54006 Thessaloniki, Greece.
    Angelis, Lefteris
    Aristotle Univ Thessaloniki, Dept Informat, GR-54006 Thessaloniki, Greece.
    Barney, Sebastian
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    An experience-based framework for evaluating alignment of software quality goals. 2015. In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 23, no. 4, p. 567-594. Article in journal (Refereed)
    Abstract [en]

    Efficient quality management of software projects requires knowledge of how the various groups of stakeholders involved in software development prioritize the product and project goals. Agreements or disagreements among members of a team may originate from inherent groupings, depending on various professional or other characteristics. These agreements are not easily detected by conventional practices (discussions, meetings, etc.), since natural language expressions are often obscure, subjective, and prone to misunderstandings. It is therefore essential to have objective tools that can measure the alignment among the members of a team; especially critical for software development is the degree of alignment with respect to the prioritization of the goals of the software product. The paper proposes an experience-based framework of statistical and graphical techniques for the systematic study of prioritization alignment, such as hierarchical cluster analysis, analysis of cluster composition, correlation analysis, and the closest-agreement directed graph. This framework can provide a thorough and global picture of a team's prioritization perspective and can potentially aid managerial decisions regarding team composition and leadership. The framework is applied and illustrated in a study related to global software development, where 65 individuals, in different roles, geographic locations and professional relationships with a company, prioritize 24 goals from their individual perception of the actual situation and for an ideal situation.

  • 273.
    Chatzipetrou, Panagiota
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ouriques, Raquel
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Approaching the Relative Estimation Concept with Planning Poker. 2018. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2018, p. 21-25. Conference paper (Refereed)
    Abstract [en]

    Simulation is a powerful instrument in the education process that can help students experience a realistic context and understand complex concepts required to accomplish practitioners' tasks. The present study aims to investigate software engineering students' perception of the usefulness of the Planning Poker technique in relation to their understanding of the relative estimation concept. We conducted a simulation exercise where students first estimated tasks applying the concepts of relative estimation as explained in the lecture, and then estimated tasks applying the Agile Planning Poker technique. To investigate the students' perception, we used a survey at the end of each exercise. The preliminary results did not show statistical significance in the students' confidence to estimate the user stories relatively. However, the students' comments and feedback indicate that students are more confident in using Agile Planning Poker when asked to estimate user stories. The study will be replicated in the near future with a different group of students with a different background, to gain a better understanding and also identify possible flaws of the exercise. © 2018 Association for Computing Machinery.

  • 274.
    Chatzipetrou, Panagiota
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Van Solingen, Rini
    Delft University of Technology, NLD.
    When and who leaves matters: Emerging results from an empirical study of employee turnover. 2018. In: Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2018), Association for Computing Machinery (ACM), 2018, article id a53. Conference paper (Refereed)
    Abstract [en]

    Background: Employee turnover in global software development (GSD) is an extremely important issue, especially in Western companies offshoring to emerging nations. Aims: In this case study we investigated an offshore vendor company and, in particular, whether employee retention is related to employee experience. Moreover, we studied whether we can identify a threshold associated with employees' tendency to leave the particular company. Method: We used a case study and applied and presented descriptive statistics, contingency tables, results from a Chi-Square test of association, and post hoc tests. Results: The emerging results showed that employee retention and company experience are associated. In particular, almost 90% of the employees leave the company within the first year, while within the second year the rate is about 50%. Thus, there is an indication that two years is the retention threshold for the investigated offshore vendor company. Conclusions: The results are preliminary and point to the need for building a prediction model, which should include more inherent characteristics of the projects, to aid companies in avoiding massive turnover waves. © 2018 ACM.

  • 275.
    Chavali, Srikavya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Automation of a Cloud Hosted Application: Performance, Automated Testing, Cloud Computing. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the existing requirements of the customer. Software testing is one of the “Verification and Validation,” or V&V, software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce the manual effort and to perform testing continuously, thereby increasing the quality of the product.

    Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, known as Test Data Library or Test Report Analyzer, through automation, and to measure the impact of the automation on the release cycles of the organization.

    Methods: Automation is implemented using the Scrum methodology, an agile software development process. Using Scrum, working software can be delivered to the customers incrementally and empirically, with its functionality updated along the way. The Test Data Library or Test Report Analyzer functionality of the cloud application is verified by deploying a testing device, whereby the test cases can be analyzed as passed or failed.

    Results: Automation of the Test Report Analyzer functionality of the cloud-hosted application is built using TestComplete, and the impact on release cycles is reduced. Using automation, a change of nearly 24% in release cycles can be observed, thereby reducing the manual effort and increasing the quality of delivery.

    Conclusion: Automation of a cloud-hosted application requires no manual effort, so time can be utilized effectively and the application can be tested continuously, increasing its efficiency and quality.

  • 276. Che, X.
    et al.
    Niu, Y.
    Shui, B.
    Fu, J.
    Fei, G.
    Goswami, Prashant
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zhang, Y.
    A novel simulation framework based on information asymmetry to evaluate evacuation plan. 2015. In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 31, no. 6-8, p. 853-861. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a novel framework to simulate crowd behavior under emergency situations in a confined space with multiple exits. In our work, we take information asymmetry into consideration, which is used to model the different behaviors exhibited by pedestrians because of their different knowledge about the environment. We categorize the factors influencing the preferred velocity into two groups, intrinsic and extrinsic factors, which are unified into a single space called the influence space. At the same time, a finite state machine is employed to control individual behavior; different strategies are used to compute the preferred velocity in different states, so that our framework can reproduce the phenomenon of decision change. Our experimental results show that our framework can be employed to analyze the factors influencing the escape time, such as the number and location of exits, the density distribution of the crowd, and so on. Thus it can be used to design and evaluate evacuation plans. © 2015 Springer-Verlag Berlin Heidelberg

  • 277.
    Chebudie, Abiy Biru
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Monitoring of Video Streaming Quality from Encrypted Network Traffic: The Case of YouTube Streaming. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Video streaming applications contribute a major share of Internet traffic. Consequently, monitoring and management of video streaming quality has gained significant importance in recent years. Disturbances in the video, such as the amount of buffering and bitrate adaptations, affect user Quality of Experience (QoE). Network operators usually monitor such events from network traffic with the help of Deep Packet Inspection (DPI). However, it is becoming difficult to monitor such events due to traffic encryption. To address this challenge, this thesis work makes two key contributions. First, it presents a test-bed, which performs automated video streaming tests under controlled time-varying network conditions and measures performance at network and application level. Second, it develops and evaluates machine learning models for the detection of video buffering and bitrate adaptation events, which rely on information extracted from packet headers. The findings of this work suggest that buffering and bitrate adaptation events within 60-second intervals can be detected using a Random Forest model with an accuracy of about 70%. Moreover, the results show that features based on time-varying patterns of downlink throughput and packet inter-arrival times play a distinctive role in the detection of such events.
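
    A hedged sketch of the detection setup the thesis describes, with synthetic stand-ins for the per-interval packet-header features; the real features and labels come from the thesis test-bed:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        # columns: mean downlink throughput, throughput std, mean inter-arrival
        X = rng.uniform(size=(500, 3))
        y = (X[:, 0] < 0.3).astype(int)  # fake "buffering" label: low throughput

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
        clf = RandomForestClassifier(n_estimators=100, random_state=1)
        clf.fit(X_tr, y_tr)
        print(f"accuracy: {clf.score(X_te, y_te):.2f}")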

  • 278.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Structure Preserving Binary Image Morphing using Delaunay Triangulation. 2017. In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 85, p. 8-14. Article in journal (Refereed)
    Abstract [en]

    Mathematical morphology has been of great significance to several scientific fields. Dilation, as one of the fundamental operations, has been very much reliant on common methods based on set theory and on using specifically shaped structuring elements to morph binary blobs. We hypothesised that by performing morphological dilation while exploiting the geometric relationship between dot patterns, one can gain some advantages. The Delaunay triangulation was our choice to examine the feasibility of this hypothesis due to its favourable geometric properties. We compared our proposed algorithm to existing methods, and it becomes apparent that Delaunay-based dilation has the potential to emerge as a powerful tool in preserving object structure and elucidating the influence of noise. Additionally, defining a structuring element is no longer needed in the proposed method, and the dilation is adaptive to the topology of the dot patterns. We assessed the property of object structure preservation by using common measurement metrics. We also demonstrated this property through handwritten digit classification using HOG descriptors extracted from dilated images of the different approaches and trained using Support Vector Machines. The confusion matrix shows that our algorithm has the best accuracy estimate in 80% of the cases. In both experiments, our approach shows a consistent improved performance over other methods, which advocates for the suitability of the proposed method.
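
    A rough sketch of the underlying idea, not the paper's algorithm: triangulate the foreground points of a binary image and keep only short Delaunay edges, so growth follows the geometry of the dot pattern rather than a fixed structuring element. The length threshold below is our own placeholder.

        import numpy as np
        from scipy.spatial import Delaunay

        def short_delaunay_edges(points, max_len=5.0):
            tri = Delaunay(points)
            edges = set()
            for simplex in tri.simplices:
                for i in range(3):  # the three edges of each triangle
                    a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
                    if np.linalg.norm(points[a] - points[b]) <= max_len:
                        edges.add((int(a), int(b)))
            return edges

        pts = np.array([[0, 0], [3, 0], [0, 3], [20, 20], [22, 21]], float)
        print(short_delaunay_edges(pts))  # edges connect only nearby dots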

  • 279.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Towards Query by Text Example for pattern spotting in historical documents. 2016. In: Proceedings - CSIT 2016: 2016 7th International Conference on Computer Science and Information Technology, IEEE Computer Society, 2016, article id 7549479. Conference paper (Refereed)
    Abstract [en]

    Historical documents essentially consist of handwritten texts that exhibit a variety of perceptual environment complexities. The cursive and connected nature of text lines on the one hand, and the presence of artefacts and noise on the other, hinder achieving plausible results using current image processing algorithms. In this paper, we present a new algorithm, which we term QTE (Query by Text Example), that allows for training-free and binarisation-free pattern spotting in scanned handwritten historical documents. Our algorithm gives promising results on a subset of our database, revealing a ~83% success rate in locating word patterns supplied by the user.

  • 280.
    Cheddad, Abbas
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Kusetogullari, Hüseyin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Object recognition using shape growth pattern. 2017. In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, ISPA, IEEE Computer Society Digital Library, 2017, p. 47-52, article id 8073567. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a preprocessing stage to augment the bank of features that one can retrieve from binary images, to help increase the accuracy of pattern recognition algorithms. To this end, by applying successive dilations to a given shape, we can capture a new dimension of its vital characteristics, which we term hereafter the shape growth pattern (SGP). This work investigates the feasibility of this notion and also builds upon our prior work on structure-preserving dilation using Delaunay triangulation. Experiments on two public data sets are conducted, including comparisons to existing algorithms. We deployed two renowned machine learning methods in the classification process (convolutional neural networks (CNN) and random forests (RF)), since they perform well in pattern recognition tasks. The results show a clear improvement in the proposed approach's classification accuracy (especially for data sets with limited training samples), as well as robustness against noise, when compared to existing methods.
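
    The SGP idea reduces to recording the object's area after each successive dilation; the sketch below uses a classical structuring-element dilation as a stand-in for the paper's Delaunay-based variant:

        import numpy as np
        from scipy.ndimage import binary_dilation

        def growth_pattern(binary_img, steps=5):
            areas = []
            img = binary_img.astype(bool)
            for _ in range(steps):
                img = binary_dilation(img)     # one dilation step
                areas.append(int(img.sum()))   # area after this step
            return areas                       # SGP-style feature vector

        img = np.zeros((32, 32), bool)
        img[14:18, 14:18] = True  # a small square blob
        print(growth_pattern(img))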

  • 281.
    Chen, Hao
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Xu, Luyang
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Architecture and Framework for Programmable Automation Controller: A Systematic Literature Review and A Case Study. 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Background. A PAC controller is a strengthened version of a PLC controller; their functions are very similar, but their essence and construction differ. PLCs and PACs have many successful applications in the field of industrial automation control. There is a lot of literature about the software architecture of PLC control systems, but almost no relevant literature on software architecture based on PAC control systems. A suitable software architecture is indispensable to the design and development of a well-performing and stable automatic control system: the quality and pattern of the software architecture can even affect the stability and efficiency of the control system.

    Objectives. Based on these problems, we defined two primary objectives. The first is to investigate the architecture of some existing large industrial control systems, and to analyze and summarize the application scenarios, advantages, and disadvantages of these architectural patterns. The second, building on the results of the first, is to propose and design an automated control solution architecture model based on a PAC control system, implemented and applied in a printing house. In the process, we summarize the challenges and obstacles encountered in implementing the solution and provide guidance and reference for those involved in the field.

    Methods. For the first objective, we used a systematic literature review to collect data about existing ICS architectures. Concerning the second objective, a case study was conducted in a printing house in Karlskrona, Sweden, in which we proposed a software architecture model suitable for a PAC automation control system. We then developed and tested the automation control system and summarized the challenges and obstacles encountered during the implementation.

    Results. The existing ICS (Industrial Control System) architecture models and the critical problems and challenges in implementing ICS are identified. From the existing literature, we summarized five commonly used large industrial control system architecture models, which mainly use composite structures, that is, combinations of multiple architectural patterns. Critical problems in industrial control systems, such as information security and production reliability, are also identified. In the case study, we put forward an automatic control solution for the printing house based on the SLR results. We designed the hardware deployment architecture and the software control architecture of the system. Generally speaking, the architecture is based on a client/server (C/S) structure. In the development of the client, we adopted the popular MVC architectural pattern. In the longitudinal view of the whole system, an extended hierarchical architecture model is used, and the core control system follows a modular design. The whole control system is composed of six parts: four PAC terminal subsystems, one server-side program, and one client program. After extended development and testing, the system went into production, and its production efficiency improved compared with the old system. Its extended functions, such as Production Report and Tag Print, are deeply satisfying for the customers.

    Conclusions. In this research, we summarize and compare the advantages and disadvantages of several commonly used industrial control system architectures. In addition, we propose a software architecture model and develop an automation control system based on PAC. This fills the gap left by the lack of studies on software architecture for PAC-based automation control systems. Our results can help software engineers and developers in the ICS field to develop their own PAC-based automation control systems.

  • 282.
    Chen, Mingda
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    He, Yao
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Exploration on Automated Software Requirement Document Readability Approaches. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. The requirements analysis phase, the very beginning of the software development process, has been identified as a very important phase in the software development lifecycle. The Software Requirement Specification (SRS) is the output of the requirements analysis phase, and its quality factors play an important role in evaluation work. Readability is an important SRS quality factor, but there are few available automated approaches for measuring it, because of its tight dependency on readers' perceptions. Low readability of SRS documents has a serious impact on the whole software development process; it is therefore urgent to propose effective automated approaches for measuring SRS document readability. Using traditional readability indexes to analyze the readability of SRS documents automatically is a potentially feasible approach. However, the effectiveness of this approach has not been systematically evaluated before.

    Objectives. In this study, we first aim to understand the readability of texts and investigate approaches to score text readability manually. We then investigate existing automated readability approaches for texts and their working theories. Next, we evaluate the effectiveness of measuring the readability of SRS documents using these automated readability approaches. Finally, we rank these automated approaches by their effectiveness.

    Methods. To find out how humans score the readability of texts manually and to investigate existing automated readability approaches for texts, a systematic literature review is chosen as the research methodology. An experiment is chosen to explore the effectiveness of the automated readability approaches.

    Results. We found 67 articles in the systematic literature review. According to the review, humans judging the readability of texts through reading is the most common way of scoring text readability manually. Additionally, we found four available automated readability assessment tools and seven available automated readability assessment formulas. After executing the experiment, we found that the actual effectiveness of all selected approaches is not high, with Coh-Metrix presenting the highest effectiveness among the selected approaches.

    Conclusions. Coh-Metrix is the most effective automated readability approach, but directly applying Coh-Metrix to SRS document readability assessment cannot be recommended, since its evaluated effectiveness is not high enough. In addition, all selected approaches are based on surface metrics of readability; no semantic factors are blended into the assessments. Hence, quantifying human perception and adding semantic analysis to SRS document readability assessment could be two future research directions. One classical formula of the kind evaluated here is sketched below.
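
    For orientation, the Flesch Reading Ease score is one of the traditional readability formulas of the kind the thesis evaluates; the abstract does not name the seven formulas, so this specific choice is an assumption. A minimal sketch:

        def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
            """Classical Flesch Reading Ease: higher scores mean easier text."""
            return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

        # Example: a 120-word requirement section with 8 sentences and 180 syllables.
        print(round(flesch_reading_ease(120, 8, 180), 1))  # ~64.7, "plain English" range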

  • 283.
    Chen, Shajin
    Blekinge Institute of Technology, Faculty of Computing, Department of Technology and Aesthetics.
    Weibo's Role in Shaping Public Opinion and Political Participation in China2014Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    This thesis examines the role of microblogging in shaping public opinion and political participation in China, with particular focus on the sociopolitical implications and challenges that the weibo phenomenon has brought to Chinese society. I explore some of the prominent features of weibo and the role they play in framing the public sphere. Along with an in-depth study of two weibo cases, the results show that microblogging provides a unique platform for Chinese citizens to participate in civic engagement and to organize their collective opinions. The study also demonstrates that weibo has a significant impact on spurring social change. Further, weibo discourse encourages interaction between government and ordinary citizens, and it also changes traditional Chinese politics by enabling public political participation. However, the spread of rumors and network violence are some of the disadvantages inherent to the weibo phenomenon that should be of concern. More importantly, the analysis reveals that the initial reasons behind the weibo phenomenon were long-term social conflicts and continuous information control by the state. Weibo certainly provides a remarkable platform for freedom of speech, but it should not be considered a panacea for social change in China.

  • 284.
    Chen, Xiaoran
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Image enhancement effect on the performance of convolutional neural networks2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Image enhancement algorithms can be used to enhance the visual quality of images for human vision. Can image enhancement algorithms also be used in the field of computer vision? The convolutional neural network, as the most powerful image classifier at present, has excellent performance in the field of image recognition. This thesis explores whether image enhancement algorithms can be used to improve the performance of convolutional neural networks.

    Objectives. The purpose of this thesis is to explore the effect of image enhancement algorithms on the performance of CNN models in deep learning and transfer learning, respectively. Five different image enhancement algorithms were selected: contrast limited adaptive histogram equalization (CLAHE), the successive mean quantization transform (SMQT), adaptive gamma correction, the wavelet transform, and the Laplace operator.

    Methods. Experiments are used as the research method. Three groups of experiments are designed; they respectively explore whether enhancing grayscale images can improve CNN performance in deep learning, whether enhancing color images can improve CNN performance in deep learning, and whether enhancing RGB images can improve CNN performance in transfer learning.

    Results. In deep learning, when training a complete CNN model, using the Laplace operator to enhance grayscale images can improve the recall of the CNN. However, the remaining image enhancement algorithms cannot improve CNN performance on either the grayscale or the color image datasets. In addition, in transfer learning, when fine-tuning a pre-trained CNN model, using contrast limited adaptive histogram equalization (CLAHE), the successive mean quantization transform (SMQT), the wavelet transform, or the Laplace operator reduces CNN performance.

    Conclusions. The experiments show that in deep learning, image enhancement algorithms may improve CNN performance when training complete CNN models, but not all image enhancement algorithms do so; in transfer learning, when fine-tuning a pre-trained CNN model, image enhancement algorithms may reduce CNN performance. The CLAHE step evaluated here is sketched below.
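
    A minimal sketch of the CLAHE enhancement step named in the abstract, assuming OpenCV; the file name and parameter values are illustrative, not the thesis's actual configuration:

        import cv2

        # Load a grayscale image (path is a placeholder).
        img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

        # Contrast Limited Adaptive Histogram Equalization: equalizes contrast
        # locally in 8x8 tiles, clipping the histogram to limit noise amplification.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(img)

        cv2.imwrite("sample_clahe.png", enhanced)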

  • 285.
    Chennamsetty, Harish
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experimentation in Global Software Engineering2015Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Software engineering researchers are guided by research principles when conducting any type of research. Though there are many guidelines detailing how a particular research method can be applied, there is always a need to continue improving existing empirical research strategies. The context of this thesis is guidelines for conducting controlled experiments in Global Software Engineering (GSE). With this thesis, the state-of-the-art of conducting experiments in GSE research is explored. Objectives: The goal of this thesis is to analyze the existing experiments in GSE research. The research problems addressed with GSE experiments and the state-of-the-art of overall GSE experiment design are analyzed, and guidelines are drawn that provide future GSE researchers with strategies for mitigating or solving GSE-specific experimentation challenges. Methods: A systematic literature review (SLR) is conducted to review all GSE experiments found in the literature. The search process covered 6 databases, and specific search and quality assessment criteria were used to select the GSE experiments. Furthermore, interviews are conducted with GSE research experts to evaluate a set of guidelines (the author's recommendations) that address the challenges of conducting GSE experiments. Thematic analysis is performed to analyze the evaluation results and to implement suggestions given by the interviewees. Conclusions: The results obtained from the SLR provide an understanding of the state-of-the-art and of the challenges of conducting controlled experiments in GSE. The challenges identified in GSE controlled experiments concern the experiment study setting, involving subjects, and addressing GSE-relevant threats to validity in GSE experiments. Nine guidelines are framed, each addressing a specific challenge. The final guidelines (resulting after the interviews) provide effective recommendations to GSE researchers conducting controlled experiments.

  • 286.
    Chikkala, Sai Sandeep
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    EVALUATION CRITERIA FOR SELECTION OF API PRODUCTS: Practitioners' Perspective2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The approach of developing software systems with the use of third-party components, i.e., COTS or OSS, has increased globally. In this study, an API product refers to either a software component or a software service, or both packaged together, that can be accessed through an API. Developers are faced with a plethora of alternative choices when selecting an API product. With this increase in component adoption, API product providers face the challenge of designing their product to be more attractive than others. This requires providers to be educated about developer behavior when choosing an API product. Understanding the selection practices of developers can help providers improve the packaging of API products, making them more suitable for selection.

    Objectives. The objective of this study is to investigate the criteria that developers use when reasoning about the acceptability of a software component.

    Methods. A background study is performed to identify the evaluation criteria proposed in the literature. An empirical study using qualitative content analysis is performed, in which 480 reviews of different API products are analyzed to understand the criteria from the practitioners' perspective.

    Results. Nine relevant criteria that developers use to reason about accepting or rejecting an API product are identified, and 30 sub-criteria related to these nine criteria are described in the study.

    Conclusions. This study concludes that the identified nine criteria play an important role in developers' assessment of API products. It is also found that the criteria have a significant impact on the ratings of API products. These criteria could guide API product providers to make better choices when developing their products.

  • 287.
    Chilukuri, Megh Phani Dutt
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Power Profiling of Network Switches2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. In the present world, the usage of telecommunication networking services is increasing, and efficient networking services in various fields require efficient networking components. For that purpose, the parameters of these components must be known, and one of the most important parameters is the energy usage of networking components. Therefore, there is a need for power profiling of network switches.

    Objectives. The objective of this research is to profile the power usage of different network components (switches) under various load scenarios. Power measurements are done using the open energy monitoring tool emonpi.

    Methods. The research is carried out using an experimental test bed. Experiments are conducted with different configurations to obtain different load conditions for sources and destinations whose traffic passes through the DUT (Device Under Test). The power usage of each DUT is measured with the emonpi monitoring tool. The experiments are then conducted for different load scenarios and different switches, and the results are discussed.

    Conclusion. From the results obtained, the power profiles of the different DUTs are tabulated and analyzed for different numbers of active ports and load scenarios for the Cisco 2950, Cisco 3560, and Netgear GS-724T. The results and analysis show that the Cisco 2950 has the highest power usage in all considered scenarios, with respect to both packet rate and number of active ports. The Netgear GS-724T has the lowest power usage of the three switches, as it has green-switch characteristics in all scenarios. The Cisco 3560 lies between the other two, as it has Cisco's energy-efficient management. From this, we propose a simple model for energy/power measurement; a sketch of turning sampled power into energy follows.
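
    As an illustration of how sampled power readings of the kind emonpi produces can be turned into an energy figure, here is a minimal sketch; the sample values and the one-second interval are hypothetical, not measurements from the thesis:

        import numpy as np

        # Hypothetical power samples in watts, one per second, for a switch under load.
        power_w = np.array([22.1, 22.4, 23.0, 24.8, 24.6, 23.9])
        t_s = np.arange(len(power_w))  # sample timestamps in seconds

        # Integrate power over time (trapezoidal rule) to get energy in joules,
        # then convert to watt-hours.
        energy_j = np.trapz(power_w, t_s)
        energy_wh = energy_j / 3600.0
        print(f"{energy_j:.1f} J = {energy_wh:.4f} Wh")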

  • 288.
    Chinta, Ruthvik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Measurement of Game Immersion through Subjective Approach2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. People nowadays engage more and more often in playing video games: some play for enjoyment, some for stress relief, and so on. The degree of involvement of a player with a game is generally described as game immersion. People immersed in playing a game do not realize that they are becoming dissociated from the outside world and losing track of time.

    Objectives. The main objective of this research is to explore the relationship between game immersion and game experience using the five factors of game immersion. In addition, the study explores different methods that can be used to measure game immersion.

    Methods. Initially, a literature review was conducted to explore the meaning of game immersion and the different methods that can be used to measure it; next, a user study in the form of an experiment was conducted to measure game immersion. After the experiment, regression analysis was performed on the resulting data to describe the relation between game immersion and game experience.

    Results. After the experiment, participants were asked to answer the IEQ questionnaire, and the answers were analyzed using regression analysis. An inverse (negative) linear relationship was observed between game immersion and game experience.

    Conclusions. From the observed inverse linear relationship, it is concluded that game immersion levels decrease as game experience increases. The regression step is sketched below.
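
    A minimal sketch of the kind of simple linear regression used to relate the two variables; the data values are invented for illustration and are not the study's measurements:

        import numpy as np

        # Hypothetical data: years of gaming experience vs. IEQ immersion score.
        experience = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0])
        immersion = np.array([152.0, 148.0, 141.0, 133.0, 128.0, 117.0])

        # Fit immersion = slope * experience + intercept.
        slope, intercept = np.polyfit(experience, immersion, 1)
        print(f"slope={slope:.2f}, intercept={intercept:.2f}")
        # A negative slope corresponds to the inverse relationship reported above.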

  • 289.
    Chivukula, Krishna Varaynya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Monitoring and Analysis of CPU load relationships between Host and Guests in a Cloud Networking Infrastructure: An Empirical Study2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 290.
    Chodapaneedi, Mani Teja
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Manda, Samhith
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Engagement of Developers in Open Source Projects: A Multi-Case Study2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Companies' use of open source projects has been increasing, bringing gains in innovation and productivity that help sustain their competitiveness. These projects involve developers across the globe who may contribute to several projects at once and who constantly engage with a project to improve and uplift it. In each open source project, the intensity and motivation with which developers engage and contribute vary over time.

    This research first aims to identify how the engagement and activity of developers in open source projects vary over time, and second to assess the reasons for the variance in the engagement activities of developers involved in various open source projects.

    First, a literature review was conducted to identify the available metrics that help analyze developer engagement in open source projects. Second, we conducted a multi-case study investigating developer engagement in 10 different open source projects of the Apache foundation. The GitHub repositories were mined to gather data about the engagement activities of developers in the selected projects. To identify the reasons for the variation in developer engagement and activity, we analyzed the documentation of each project and interviewed 10 developers and 5 instructors, who provided additional insights into the challenges of contributing to open source projects.

    The results of this research contain the list of factors that affect developer engagement with open source projects, extracted from the case studies and strengthened through the interviews. From the data collected by repository mining, the selected projects were categorized by increasing or decreasing developer activeness. Using the archival data collected from the selected projects, the factors corporate support, community involvement, distribution of issues and contributions, and specificity of guidelines were identified as key factors in the success of open source projects, as reflected in contributor engagement. In addition, insights on using open source projects were collected from the perspectives of both developers and instructors.

    This research provided a deeper insight into the working of open source projects and the driving factors that influence contributor engagement and activeness. It is evident from this research that the stated factors, corporate support, community involvement, distribution of issues and contributions, and specificity of guidelines, impact the engagement and activeness of developers, so open source projects satisfying these factors can expect increased contributor engagement and activeness. The research also surfaces the existing challenges and benefits of contributing to open source projects from different perspectives; a sketch of the repository mining step follows.
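
    As an illustration of the repository mining named in the methods, the following is a minimal sketch using GitHub's REST API contributor-statistics endpoint; the repository name is a placeholder, and the thesis's actual mining pipeline is not described in the abstract:

        import requests

        # Placeholder repository; any public GitHub repository works here.
        url = "https://api.github.com/repos/apache/kafka/stats/contributors"
        resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})

        if resp.status_code == 202:
            # GitHub computes contributor statistics lazily; retry after a pause.
            print("Statistics are being generated, try again shortly.")
        else:
            resp.raise_for_status()
            # Each entry holds one contributor plus weekly commit counts ("c"),
            # the kind of per-developer time series needed to study engagement.
            for contributor in resp.json()[:5]:
                login = contributor["author"]["login"]
                weekly_commits = [week["c"] for week in contributor["weeks"]]
                print(login, "total commits:", sum(weekly_commits))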

  • 291.
    Chu, Thi My Chin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    On Capacity of Full-Duplex Cognitive Cooperative Radio Networks with Optimal Power Allocation2017In: 2017 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    In this paper, we examine a full-duplex transmission scheme for cognitive cooperative radio networks (CCRNs) to improve capacity. In this network, the secondary transmitter and secondary relay are allowed to utilize the licensed spectrum of the primary user by using underlay spectrum access. We assume that the CCRN is subject to the interference power constraint of the primary receiver and maximum transmit power limit of the secondary transmitter and secondary relay. Under these constraints, we propose an optimal power allocation policy for the secondary transmitter and the secondary relay based on average channel state information (CSI) to optimize capacity. Then, we derive an expression for the corresponding achievable capacity of the secondary network over Nakagami-m fading. Numerical results are provided for several scenarios to study the achievable capacity that can be offered by this full-duplex underlay CCRN using the proposed optimal power allocation scheme.
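
    For orientation, the constraints named above admit a common textbook underlay formulation; this is a sketch for context only, not the paper's derived policy, and the symbols (P_max for the transmit power limit, I_th for the interference threshold of the primary receiver, g for the channel from the secondary transmitter to the primary receiver) are assumptions:

        P_s = \min\left( P_{\max},\ \frac{I_{th}}{\mathbb{E}[|g|^2]} \right),
        \qquad
        C = B \log_2\left( 1 + \mathrm{SNR}(P_s) \right)

    That is, the secondary transmit power is capped by whichever is tighter, the hardware limit or the average-CSI interference budget at the primary receiver, and the achievable capacity follows from the resulting signal-to-noise ratio.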

  • 292.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    On the Performance Assessment of Advanced Cognitive Radio Networks2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Due to the rapid development of wireless communications together with the inflexibility of the current spectrum allocation policy, the radio spectrum is becoming more and more exhausted. One of the critical challenges of wireless communication systems is to efficiently utilize the limited frequency resources to support the growing demand for high data rate wireless services. As a promising solution, cognitive radios have been suggested to deal with the scarcity and under-utilization of radio spectrum. The basic idea behind cognitive radios is to allow unlicensed users, also called secondary users (SUs), to access the licensed spectrum of primary users (PUs), which improves spectrum utilization. In order not to degrade the performance of the primary networks, SUs have to deploy interference control, interference mitigation, or interference avoidance techniques to minimize the interference incurred at the PUs. Cognitive radio networks (CRNs) have stimulated a variety of studies on improving spectrum utilization. In this context, this thesis has two main objectives. Firstly, it investigates the performance of single hop CRNs with spectrum sharing and opportunistic spectrum access. Secondly, it analyzes the performance improvements of two hop cognitive radio networks when incorporating advanced radio transmission techniques. The thesis is divided into three parts consisting of an introduction and two research parts based on peer-reviewed publications. Fundamental background on radio propagation channels, cognitive radios, and advanced radio transmission techniques is discussed in the introduction. In the first research part, the performance of single hop CRNs is analyzed. Specifically, underlay spectrum access using M/G/1/K queueing approaches is presented in Part I-A, while dynamic spectrum access with prioritized traffic is studied in Part I-B. In the second research part, the performance benefits of integrating advanced radio transmission techniques into cognitive cooperative radio networks (CCRNs) are investigated. In particular, opportunistic spectrum access for amplify-and-forward CCRNs is presented in Part II-A, where collaborative spectrum sensing is deployed among the SUs to enhance the accuracy of spectrum sensing. In Part II-B, the effect of channel estimation error and feedback delay on the outage probability and symbol error rate (SER) of multiple-input multiple-output CCRNs is investigated. In Part II-C, adaptive modulation and coding is employed for decode-and-forward CCRNs to improve the spectrum efficiency and to avoid buffer overflow at the relay. Finally, a hybrid interweave-underlay spectrum access scheme for a CCRN is proposed in Part II-D. In this work, the dynamic spectrum access of the PUs and SUs is modeled as a Markov chain, which is then utilized to evaluate the outage probability, SER, and outage capacity of the CCRN.

  • 293.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Adaptive Modulation and Coding with Queue Awareness in Cognitive Incremental Decode-and-Forward Relay Networks2014Conference paper (Refereed)
    Abstract [en]

    This paper studies the performance of adaptive modulation and coding in a cognitive incremental decode-and-forward relaying network where a secondary source can communicate with a secondary destination either directly or via an intermediate relay. To maximize transmission efficiency, a policy that flexibly switches between relaying and direct transmission is proposed: the mode that gives the higher average transmission efficiency is selected for communication. Specifically, direct transmission is chosen if its instantaneous signal-to-noise ratio (SNR) is higher than one half of that of the relaying transmission, in which case the modulation and coding scheme (MCS) of the direct transmission is selected based only on its instantaneous SNR. In the relaying transmission, since the MCSs of the source-to-relay and relay-to-destination transmissions are selected independently of each other, buffering of packets at the relay is necessary. To avoid buffer overflow at the relay, the MCS for the relaying transmission is selected by considering both the queue state and the respective instantaneous SNR. Finally, a finite-state Markov chain is used to analyze key performance indicators such as outage probability and average transmission efficiency of the cognitive relay network; the switching rule is sketched below.
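
    A minimal sketch of the switching rule as stated in the abstract; the MCS thresholds and the queue-aware adjustment are hypothetical stand-ins, not the paper's actual scheme:

        def choose_mode(snr_direct: float, snr_relay: float) -> str:
            # Direct transmission wins if its instantaneous SNR exceeds
            # half of the relaying path's SNR (the rule stated in the abstract).
            return "direct" if snr_direct > 0.5 * snr_relay else "relay"

        def choose_mcs(snr_db: float, queue_fill: float = 0.0) -> int:
            # Hypothetical SNR thresholds (dB) for MCS indices 0..3.
            thresholds = [5.0, 10.0, 15.0]
            mcs = sum(snr_db >= t for t in thresholds)
            # Illustrative queue awareness: when the relay buffer is nearly
            # full, favor a higher-rate MCS on the relay-to-destination link
            # so the queue drains faster.
            if queue_fill > 0.8 and mcs < 3:
                mcs += 1
            return mcs

        print(choose_mode(12.0, 20.0))           # direct (12 > 0.5 * 20)
        print(choose_mcs(12.0, queue_fill=0.9))  # 3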

  • 294.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Cognitive AF Relay Assisting both Primary and Secondary Transmission with Beamforming2014Conference paper (Refereed)
    Abstract [en]

    This paper investigates the system performance of a cognitive relay network with underlay spectrum sharing wherein the relay is exploited to assist both the primary and secondary transmitters in forwarding their signals to the respective destinations. To exploit spatial diversity, beamforming transmission is implemented at the transceivers of the primary and secondary networks. Particularly, exact expressions for the outage probability and symbol error rate (SER) of the primary transmission and tight bounded expressions for the outage probability and SER of the secondary transmission are derived. Furthermore, an asymptotic analysis for the primary network, which is utilized to investigate the diversity and coding gain of the network, is developed. Finally, numerical results are presented to show the benefits of the proposed system.

  • 295.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Delay Analysis for Cognitive Ad Hoc Networks Using Multi-channel Medium Access Control2014In: IET Communications, ISSN 1751-8628, E-ISSN 1751-8636, Vol. 8, no 7, p. 1083-1093Article in journal (Refereed)
    Abstract [en]

    In this study, the authors analyse the average end-to-end packet delay for a cognitive ad hoc network where multiple secondary nodes randomly contend for access to the licensed bands of primary users in non-slotted time mode. Before accessing the licensed bands, each node must perform spectrum sensing and collaboratively exchange the sensing results with the other nodes of the corresponding communication as a means of improving the accuracy of spectrum sensing. Furthermore, medium access control with a collision avoidance mechanism, based on the distributed coordination function specified by IEEE 802.11, is applied to coordinate spectrum access for this cognitive ad hoc network. To evaluate the system performance, the authors model the considered network as an open G/G/1 queuing network and utilise the method of diffusion approximation to analyse the end-to-end packet delay. The analysis takes into account not only the number of secondary nodes, the arrival rate of primary users, and the arrival rate of secondary users but also the effect of the number of licensed bands on the average end-to-end packet delay of the networks.
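
    For orientation, a classical two-moment approximation for a single G/G/1 queue's mean waiting time; the paper's diffusion approximation for an open queueing network is more involved, so this is a related textbook formula rather than the authors' result:

        W_q \approx \frac{\rho}{1-\rho} \cdot \frac{c_a^2 + c_s^2}{2} \cdot \mathbb{E}[S]

    Here \rho is the utilization, c_a^2 and c_s^2 are the squared coefficients of variation of the inter-arrival and service times, and \mathbb{E}[S] is the mean service time; an end-to-end delay estimate then sums such per-node delays along a packet's route.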

  • 296.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Dynamic Spectrum Access for Cognitive Radio Networks with Prioritized Traffics2014In: IEEE Communications Letters, ISSN 1089-7798, E-ISSN 1558-2558, Vol. 18, no 7, p. 1218-1221Article in journal (Refereed)
    Abstract [en]

    We develop a dynamic spectrum access (DSA) strategy for cognitive radio networks where prioritized traffic is considered. It is assumed that there are three classes of traffic: one traffic class of the primary user and two traffic classes of the secondary users, namely, Class 1 and Class 2. The traffic of the primary user has the highest priority, i.e., the primary users can access the spectrum at any time with the largest bandwidth demand. Furthermore, Class 1 has higher access and handoff priority as well as a larger bandwidth demand than Class 2. To evaluate the performance of the proposed DSA, we model the state transitions of the DSA as a multi-dimensional Markov chain with three state variables, which represent the number of packets in the system for the primary users, secondary Class 1, and secondary Class 2. In particular, the blocking probability and dropping probability of the two secondary traffic classes are assessed; the generic steady-state computation behind such chains is sketched below.
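
    Metrics like blocking probability fall out of the chain's stationary distribution. A minimal sketch of that generic computation for a toy continuous-time Markov chain; the 3-state generator matrix is invented for illustration and is unrelated to the paper's multi-dimensional model:

        import numpy as np

        # Toy generator matrix Q for a 3-state CTMC (each row sums to zero).
        Q = np.array([[-0.7,  0.5,  0.2],
                      [ 0.3, -0.8,  0.5],
                      [ 0.1,  0.4, -0.5]])

        # Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance
        # equation with the normalization constraint.
        A = np.vstack([Q.T[:-1], np.ones(3)])
        b = np.array([0.0, 0.0, 1.0])
        pi = np.linalg.solve(A, b)
        print(pi)  # stationary probabilities; a blocking probability is the
                   # stationary mass of the "system full" states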

  • 297.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hybrid Interweave-Underlay Spectrum Access for Cognitive Cooperative Radio Networks2014In: IEEE Transactions on Communications, ISSN 0090-6778, E-ISSN 1558-0857, Vol. 62, no 7, p. 2183-2197Article in journal (Refereed)
    Abstract [en]

    In this paper, we study a hybrid interweave-underlay spectrum access system that integrates amplify-and-forward relaying. In hybrid spectrum access, the secondary users flexibly switch between interweave and underlay schemes based on the state of the primary users. A continuous-time Markov chain is proposed to model and analyze the spectrum access mechanism of this hybrid cognitive cooperative radio network (CCRN). Utilizing the proposed Markov model, steady-state probabilities of spectrum access for the hybrid CCRN are derived. Furthermore, we assess performance in terms of outage probability, symbol error rate (SER), and outage capacity of this CCRN for Nakagami-m fading with integer values of fading severity parameter m. Numerical results are provided showing the effect of network parameters on the secondary network performance such as the primary arrival rate, the distances from the secondary transmitters to the primary receiver, the interference power threshold of the primary receiver in underlay mode, and the average transmit signal-to-noise ratio of the secondary network in interweave mode. To show the performance improvement of the CCRN, comparisons for outage probability, SER, and capacity between the conventional underlay scheme and the hybrid scheme are presented. The numerical results show that the hybrid approach outperforms the conventional underlay spectrum access.

  • 298.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Phan, Hoc
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance analysis of MIMO cognitive amplify-and-forward relay networks with orthogonal space–time block codes2015In: Wireless Communications & Mobile Computing, ISSN 1530-8669, E-ISSN 1530-8677, Vol. 15, p. 1659-1679Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the performance of multiple-input multiple-output cognitive amplify-and-forward relay networks using orthogonal space–time block coding over independent Nakagami-m fading. It is assumed that both the direct transmission and the relaying transmission from the secondary transmitter to the secondary receiver are applicable. In order to process the received signals from these links, selection combining is adopted at the secondary receiver. To evaluate the system performance, an expression for the outage probability valid for an arbitrary number of transceiver antennas is presented. We also derive a tight approximation for the symbol error rate to quantify the error probability. In addition, the asymptotic performance in the high signal-to-noise ratio regime is investigated to render insights into the diversity behavior of the considered networks. To reveal the effect of network parameters on the system performance in terms of outage probability and symbol error rate, selected numerical results are presented. In particular, these results show that the performance of the system is enhanced when increasing the number of antennas at the transceivers of the secondary network. However, increasing the number of antennas at the primary receiver leads to a degradation in the secondary system performance.

  • 299.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies. Blekinge Inst Technol, SE-37179 Karlskrona, Sweden..
    Capacity Analysis of Two-Tier Networks with MIMO Cognitive Small Cells in Nagakami-m Fading2017In: 2017 IEEE 13TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS (WIMOB), IEEE , 2017, p. 457-463Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider a two-tier cellular network consisting of a primary macro cell base station (PMBS) which is overlaid by cognitive small cell base stations (CSBSs) to achieve efficient spectrum utilization. The deployment of two-tier cellular networks can provide higher capacity for the system but also causes cross-tier, intra-tier, and inter-tier interference within the cellular networks. Thus, we employ transmit and receive beamforming in the considered two-tier cellular network to mitigate interference. We first design the receive beamforming vector for a primary user (PU) such that it cancels all inter-tier interference from other PUs. Then, the transmit beamforming vectors at the secondary users (SUs) are designed to null out the cross-tier interference to the PUs. Further, the receive beamforming vectors at the SUs are designed to mitigate the cross-tier interference from the PUs to the SUs. Finally, the transmit beamforming vector at the PMBS is designed to maximize the signal-to-interference-plus-noise ratio at the PUs. To quantify the performance of the system, we derive an expression for the channel capacity in the downlink from the CSBSs to the SUs. Numerical results are provided to reveal the effect of network parameters such as intra-tier interference distances, fading conditions, and number of antennas on the channel capacity of the SUs.
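
    A minimal sketch of the null-space (zero-forcing) flavor of interference-cancelling receive beamforming described above, assuming numpy; the channel matrices are random placeholders, and the paper's actual designs involve further SINR optimization:

        import numpy as np

        rng = np.random.default_rng(0)
        n_rx = 4  # receive antennas at the user
        # Two interference channels stacked as rows (placeholder values).
        H_int = rng.standard_normal((2, n_rx)) + 1j * rng.standard_normal((2, n_rx))

        # Receive beamformer orthogonal to both interference channels:
        # take a null-space basis vector of H_int via the SVD.
        _, _, Vh = np.linalg.svd(H_int)
        w = Vh.conj().T[:, -1]  # right singular vector of a zero singular value

        print(np.abs(H_int @ w))  # ~[0, 0]: interference nulled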

  • 300.
    Chu, Thi My Chinh
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Zepernick, Hans-Juergen
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Downlink outage analysis for cognitive cooperative radio networks employing non-orthogonal multiple access2018In: 2018 IEEE 7th International Conference on Communications and Electronics, ICCE 2018, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 27-32Conference paper (Refereed)
    Abstract [en]

    In this paper, we employ power-domain non-orthogonal multiple access (NOMA) to simultaneously transmit signals to both a primary user (PU) and a secondary user (SU) of a cognitive cooperative radio network (CCRN). Higher priority is given to the PU over the SU by keeping the power allocation coefficients at the base station (BS) and relay (R) above a certain threshold. In this way, similar to the interference power limit imposed by the PU in a conventional underlay CCRN, the power allocation coefficients at the BS and R of the CCRN can be controlled to maintain a given outage performance. Analytical expressions for the cumulative distribution function of the end-to-end signal-to-interference-plus-noise ratios at the PU and SU are derived and then used to assess the outage probabilities of both users. Numerical results are presented to study the impact of system parameters on the outage performance of the CCRN with power-domain NOMA. In addition, it is illustrated that increased downlink performance can be obtained by combining power-domain NOMA with CCRNs. © 2018 IEEE.
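
    For orientation, the textbook two-user power-domain NOMA downlink SINRs that underlie analyses of this kind; the notation (power split a_1 + a_2 = 1 with a_1 > a_2 for the prioritized PU, transmit SNR \rho, channels h_{PU} and h_{SU}) is assumed here rather than taken from the paper:

        \mathrm{SINR}_{PU} = \frac{a_1 \rho |h_{PU}|^2}{a_2 \rho |h_{PU}|^2 + 1},
        \qquad
        \mathrm{SINR}_{SU} = a_2 \rho |h_{SU}|^2

    The PU decodes its signal while treating the SU's superimposed signal as noise, whereas the SU first removes the PU's higher-power signal via successive interference cancellation and then decodes its own signal.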
