  • 101.
    Kona, Srinand
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Compactions in Apache Cassandra: Performance Analysis of Compaction Strategies in Apache Cassandra, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: The global communication system is growing tremendously, leading to the generation of a wide range of data. Telecom operators, which generate large amounts of data, need to manage these data efficiently. As the technology involved in database management systems advances, there has been a remarkable growth of NoSQL databases in the 21st century. Apache Cassandra is an advanced NoSQL database system, which is popular for handling semi-structured and unstructured Big Data. Cassandra has an effective way of consolidating data by using different compaction strategies. This research is focused on analyzing the performance of different compaction strategies in different use cases for the default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra for a write-heavy workload.

    Objectives: In this study, we investigate the appropriate performance metrics to evaluate the performance of compaction strategies. We provide a detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write-heavy (90/10) workload, using the default cassandra-stress tool.

    Methods: A detailed literature review was conducted to study NoSQL databases and the workings of the different compaction strategies in Apache Cassandra. The performance metrics were chosen based on this literature review and on the opinions of the supervisors and Ericsson's Apache Cassandra team. Two different tools were developed for collecting the values of the considered metrics: the first, written in Jython, collects the Cassandra metrics, and the second, written in Python, collects the operating system metrics. The graphs were generated in Microsoft Excel, using the values obtained from the scripts.

    Results: Date Tiered Compaction Strategy and Size Tiered Compaction Strategy showed more or less similar behaviour during the stress tests conducted. Leveled Compaction Strategy showed some remarkable results that affected system performance, as compared to the Date Tiered and Size Tiered Compaction Strategies. Date Tiered Compaction Strategy does not perform well for the default Cassandra stress model. Size Tiered Compaction Strategy can be preferred for the default Cassandra stress model, but is not suitable for big data.

    Conclusions: With a detailed analysis and logical comparison of metrics, we conclude that Leveled Compaction Strategy performs better for a write-heavy (90/10) workload using the default Cassandra stress model, as compared to the Size Tiered and Date Tiered Compaction Strategies.
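As a toy illustration of the bucketing idea behind the Size Tiered Compaction Strategy discussed above, the following sketch groups SSTables of similar size into compaction candidates. This is a hypothetical simplification, not Cassandra's actual implementation; the real strategy uses additional parameters and special-cases small tables.

```python
# Toy sketch of size-tiered compaction bucketing (hypothetical
# simplification of Cassandra's SizeTieredCompactionStrategy).

def bucket_sstables(sizes, bucket_ratio=1.5, min_threshold=4):
    """Group SSTable sizes (MB) into buckets of similar size; a bucket
    becomes compaction-eligible once it holds min_threshold tables."""
    buckets = []  # each bucket: list of sizes with a similar average
    for size in sorted(sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if avg / bucket_ratio <= size <= avg * bucket_ratio:
                b.append(size)
                break
        else:
            buckets.append([size])
    return [b for b in buckets if len(b) >= min_threshold]

# Four ~100 MB tables form one eligible bucket; the lone 1 GB table does not.
eligible = bucket_sstables([100, 110, 95, 105, 1000])
```

Running the eligible buckets through a merge step (writing one new SSTable per bucket) is, in essence, what a compaction round does.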

  • 102.
    Konduru, Prathisrihas Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Analysis of Service in Heterogeneous Operational Environments, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. In recent years there has been a rapid increase in demand for cloud services, as cloud computing has become a flexible platform for hosting microservices over the Internet. Microservices are the core elements of service oriented architecture (SOA) that facilitate the deployment of distributed software systems.

    Objectives. This thesis work aims at developing a typical service architecture to facilitate the deployment of compute and I/O intensive services. The thesis work also aims at evaluating the service times of these services when their respective sub services are deployed in heterogeneous environments with various loads.

    Methods. The thesis work has been carried out using an experimental test bed in order to evaluate the performance. The transport-level performance metric response time is measured: the time taken by the server to serve the request sent by the client. Experiments have been conducted based on the objectives to be achieved.

    Results. The results obtained from the experimentation contain the average service times of a service when it is deployed in both virtual and non-virtual environments, where the virtual environment is provided by Docker containers. The results also cover variations in the placement of the respective sub-services.

    Conclusions. From the results, it can be concluded that the total service times are lower in the non-virtual environment than in the container environment.
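The response-time metric described in the methods above can be sketched client-side as a timed loop around the request. This is a hypothetical stand-in: `request_fn` replaces the real HTTP call to the deployed service, which is not shown in the abstract.

```python
import time
import statistics

def measure_response_times(request_fn, n=5):
    """Time n invocations of request_fn and return the average response
    time in seconds, mirroring the transport-level metric: the time from
    sending the request to receiving the server's reply."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        request_fn()  # in the experiment, an HTTP call to the (sub)service
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Stand-in for a real service call: sleep ~10 ms per request.
avg = measure_response_times(lambda: time.sleep(0.01))
```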

  • 103.
    Kuruganti, NSR Sankaran
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Distributed databases for Multi Mediation: Scalability, Availability & Performance, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context: Multi Mediation is a process of collecting data from network(s) and network elements, pre-processing this data and distributing it to various systems like Big Data analysis, billing systems, network monitoring systems, and service assurance. With the growing demand for networks and the emergence of new services, the data collected from networks is growing. This data needs to be organized efficiently, which can be done using databases. Although RDBMS offers scale-up solutions to handle voluminous data and concurrent requests, this approach is expensive, so alternatives like distributed databases are attractive. A suitable distributed database for Multi Mediation needs to be investigated.

    Objectives: In this research we analyze two distributed databases in terms of performance, scalability and availability. The inter-relations between performance, scalability and availability of distributed databases are also analyzed. The distributed databases that are analyzed are MySQL Cluster 7.4.4 and Apache Cassandra 2.0.13. Performance, scalability and availability are quantified, measurements are made in the context of Multi Mediation system.

    Methods: The methods to carry out this research are both qualitative and quantitative. Qualitative study is made for the selection of databases for evaluation. A benchmarking harness application is designed to quantitatively evaluate the performance of distributed database in the context of Multi Mediation. Several experiments are designed and performed using the benchmarking harness on the database cluster.

    Results: The results collected include the average response time and average throughput of the distributed databases in various scenarios. The average throughput and average INSERT response time results favor the Apache Cassandra low-availability configuration. MySQL Cluster's average SELECT response time is better than Apache Cassandra's for greater numbers of client threads, in both the high-availability and low-availability configurations.

    Conclusions: Although Apache Cassandra outperforms MySQL Cluster, support for transactions and ACID compliance should not be forgotten when selecting a database. Apart from the contextual benchmarks, organizational choices, development costs, resource utilization etc. are more influential parameters for the selection of a database within an organization. There is still a need for further evaluation of distributed databases.
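A benchmarking harness of the kind described above can be sketched as timed operations issued from several client threads, reporting average response time and throughput. This is a minimal illustration under stated assumptions; the thesis's harness issued real INSERT/SELECT statements against the database clusters.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_benchmark(op, clients=4, ops_per_client=25):
    """Run `op` from several client threads; return (average response
    time in s, throughput in ops/s). `op` stands in for a real database
    request such as an INSERT or SELECT."""
    latencies = []  # list.append is thread-safe under CPython's GIL
    def client():
        for _ in range(ops_per_client):
            t0 = time.perf_counter()
            op()
            latencies.append(time.perf_counter() - t0)
    wall0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        for _ in range(clients):
            pool.submit(client)
    wall = time.perf_counter() - wall0
    total_ops = clients * ops_per_client
    return sum(latencies) / total_ops, total_ops / wall

avg_rt, throughput = run_benchmark(lambda: None)
```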

  • 104.
    Lindholm, Rickard
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Analysis of Resource Isolation and Resource Management in Network Virtualization, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. Virtualized networks are considered a major technological advancement, offering plenty of functional benefits compared to today's dedicated networking elements. Virtualization allows network designers to separate networks and adapt resources to the actual loads, in other words, load balancing. Virtual networks would enable minimized downtime for deploying updates and similar tasks, by performing a simple migration and then updating the linking after properly testing and preparing the virtual machine with the new software. Once this technology is proven efficient, or evaluated and adapted to address its existing flaws, virtualized networks will take over the tasks of today's dedicated networking elements. But there are still unknown behaviors and effects of the technology, for example how the scheduler or hypervisor handles the virtual separation, since the virtual machines share the same physical transmission resources.

    Objectives. By performing the experiments in this thesis, the hope is to learn about the effects of virtualization and how it performs under stress, and thereby also about the efficiency of network virtualization. The experiments are conducted by creating scripts, using already written programs and systems, adding different loads and measuring the effects; this is documented so that other students and researchers can benefit from the research done in this thesis.

    Methods. In this thesis five different methodologies are used: experimental validation, statistical comparative analysis, resource sharing, control theory and literature review. Two systems are compared to previous research by evaluating and analyzing the statistical results. As mentioned earlier, the investigation focuses on how the scheduler executes the resource sharing under stress. The first experiment, which is the control test, is designed without any interference: a 5 Mbit/s UDP stream goes through the system under test and is timestamped at measurement points on both the ingress and the egress. The second experiment adds an interfering load of a 5 Mbit/s UDP stream on the same system under test. Since it is a complex system, a fair amount of literature reviewing was done, mostly to gain an understanding and overview of the different parts of the system so that some obstacles could be avoided.

    Results. The statistical comparative analysis of the experiments produced two graphs and two tables containing the coefficient of variation of the two experiments. The control test produced a graph with a fairly even distribution over the time intervals, with a coefficient-of-variation difference on the order of 10^-3, increasing somewhat over the larger time intervals. The second experiment, with two virtual machines and an interfering packet stream, is more concentrated in the 0.0025 s and 0.005 s intervals, with a larger difference than the control test, on the order of 10^-2, showing some signs of a bottleneck in the system.

    Conclusions. Since performing the experiments and the statistical handling of the data took longer than expected, the choice was made not to redeploy the system using Open vSwitch instead of Linux Bridge; hence there are no other experiments to compare the performance with. But from research referred to under related works, it was concluded that the difference between Open vSwitch and Linux Bridge is small when compared without introducing any load. This is also confirmed on the Open vSwitch website, which states that Open vSwitch uses the same base as Linux Bridge. Linux Bridge performs according to expectations; it is a simple yet powerful tool, and the results confirm the previous research claiming that there are bottlenecks in the system. According to the pre-set validity requirement for this experiment, a coefficient-of-variation difference greater than 10^-5 would be significant; the measured difference was on the order of 10^-2, which supports the theory that there are bottlenecks in the system. In the future it would be interesting to examine the effects of different hypervisors, virtualization techniques, packet generators etcetera to tackle these problems. One company that has taken countermeasures is Intel, whose DPDK confronts these efficiency problems by tailoring the scheduler towards the specific tasks. The downside of Intel's DPDK is that it limits the user to Intel processors and removes one of the most important benefits of virtualization, the independence; but Intel has tried to keep it as independent as possible by maintaining DPDK as open source.
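The coefficient of variation used to compare the two experiments above is simply the standard deviation divided by the mean. A sketch with hypothetical one-way delay samples (the thesis's measured data is not reproduced here):

```python
import statistics

def coefficient_of_variation(samples):
    """CoV = population standard deviation / mean, the dispersion metric
    used to compare the control test against the loaded experiment."""
    return statistics.pstdev(samples) / statistics.mean(samples)

# Hypothetical one-way delays (seconds): the run with an interfering
# stream is more spread out, so its CoV is higher.
baseline = [0.0025, 0.0026, 0.0025, 0.0024, 0.0025]
loaded   = [0.0025, 0.0050, 0.0030, 0.0045, 0.0025]
delta = coefficient_of_variation(loaded) - coefficient_of_variation(baseline)
```

A positive `delta` of this kind is the signature of the bottleneck effect the thesis reports.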

  • 105.
    Loman, Helen Laestadius
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Emotionsreglering och stämningsläge som faktorer för högt och lågt risktagande i beslutsprocessen, 2015. Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    The purpose of this thesis is to investigate the ways in which emotion regulation and mood affect risk taking in decision making competence. Emotion regulation is an essential part of the attachment process, the development of an autonomous self, and functional stress-coping systems. Research indicates that there is a relation between distorted emotion regulation and some psychological disorders such as depression, PTSD, anxiety disorders and borderline (emotionally unstable) personality disorder. There is a definite need to analyze the relation between emotion regulation and decision making competence, since it is relatively unexplored. A digital questionnaire was distributed amongst the participants, containing three different tests measuring mood, risk taking in relation to decision making competence, and emotion regulation strategies. Chi-square tests for independence were carried out to test the relations between emotion regulation and risk taking, mood and risk taking, and mood and emotion regulation. The results showed no significant relation for any of the pairs. However, there seemed to be a relation between mood and emotion regulation competence.
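The chi-square test for independence used in this study rests on the Pearson statistic computed over a contingency table. A sketch with hypothetical counts (the thesis's actual survey data is not reproduced here; a real test would also compare the statistic against a critical value):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table (list of
    rows), e.g. mood (rows) vs high/low risk taking (columns)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table; under independence each expected cell is 25.
stat = chi_square_statistic([[20, 30], [30, 20]])
```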

  • 106.
    Lorentzen, Charlott
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    On User Perception of Authentication in Networks, 2014. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Authentication solutions are designed to stop unauthorized users from gaining access to a secured system. However, each time an authentication process occurs, an authorized user needs to wait in expectation of approved access. This effort can be perceived as either a positive or a negative experience: if it is perceived as a security measure, it is usually experienced as positive; if it is perceived as waiting time, it is usually experienced as negative. The trade-off between security, user-friendliness and simplicity plays an important role in the domain of user acceptability. From the users' point of view, security is both necessary and disturbing at the same time. The overall focus in this thesis is on user perception of authentication in communication networks. An authentication procedure, or login, normally includes several steps and messages between a client and a server. In addition, the connection could suffer from low Quality of Service, i.e., each step in the authentication process will add to a longer response time. Longer response times in turn imply lower Quality of Experience, i.e., a worse user perception. The thesis first presents a concept for investigating user perception. A framework is developed in which different criteria and evaluation methods for authentication schemes are presented. This framework is then used to investigate user perception of the response times of a web authentication procedure. The derived result, which is an exponential function, is compared to models for user perception of web performance. The comparison indicates that users perceive logins similarly, but not identically, to how they perceive standard web page loading.
    The user perception, with regard to excessive authentication times, is further studied by determining the weak point of the Extensible Authentication Protocol Method for GSM Subscriber Identity Modules (EAP-SIM) with the OpenID service. The response times are controllably increased by emulating bad network performance for EAP-SIM and other EAP methods in live setups. The obtained results show that one task of the EAP-SIM authentication deviates from the other tasks and contributes more to the total response time. This deviation points out the direction for future optimization. Finally, this thesis investigates how users of social networks perceive security, and to which extent they contribute to it. One way of contributing to security is by creating and using strong authentication credentials, e.g., passwords. Websites might enforce a password length that is insufficient for a strong password, which may give users a false perception of what constitutes a strong password. The origin of the password problem, namely the construction of passwords, and the user perception of password security are studied. A survey is conducted and the results indicate that the passwords of the respondents are not as strong as the respondents perceive them to be.
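A naive brute-force entropy estimate illustrates why short or restricted passwords are weaker than users tend to perceive, as discussed above. This is a simplistic sketch (pool size raised to the length, in bits), not the thesis's measurement method, and it ignores dictionary attacks entirely.

```python
import math
import string

def naive_entropy_bits(password):
    """Rough brute-force entropy: length * log2(character pool size).
    Overestimates real-world strength, but shows the effect of length
    and character variety."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

# An 8-character lowercase password vs a longer, mixed-class one.
weak = naive_entropy_bits("password")
stronger = naive_entropy_bits("P4ss!word")
```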

  • 107.
    Louis, Sibomana
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hans-Jürgen, Zepernick
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Ergodic Capacity of Multiuser Scheduling in Cognitive Radio Networks: Analysis and Comparison. Article in journal (Refereed)
  • 108.
    Louis, Sibomana
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hans-Jürgen, Zepernick
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Analysis of Opportunistic Scheduling with HARQ for Cognitive Radio Networks, 2015. In: IEEE Wireless Communications and Networking Conference, New Orleans, USA: IEEE Computer Society, 2015, p. 411-416. Conference paper (Refereed)
    Abstract [en]

    This paper analyzes the secondary network performance under the joint constraint of the primary user (PU) peak interference power and maximum transmit power limit of the secondary user (SU). In particular, N SU transmitters (SU-Txs) communicate with an SU receiver in the presence of a primary network with a PU transmitter and multiple PU receivers. Moreover, we exploit opportunistic scheduling where the SU-Tx with the best channel condition is scheduled for transmission. Analytical expressions of the outage probability and symbol error probability of the SU are obtained considering either perfect or statistical knowledge of the PU channel gains. Furthermore, we analyze the SU throughput for delay constrained applications with two hybrid automatic repeat request protocols, namely, repetition time diversity and incremental redundancy. Numerical results are provided to assess the effect of the number of SU-Txs and number of packet retransmissions on the secondary network performance. In addition, the impact of the primary network parameters on the secondary network is investigated.

  • 109.
    Louis, Sibomana
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hans-Jürgen, Zepernick
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hung, Tran
    On the Outage Capacity of an Underlay Cognitive Radio Network, 2015. In: 2015 9th International Conference on Signal Processing and Communication Systems (ICSPCS), IEEE Press, 2015, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider a point-to-multipoint underlay cognitive radio network under the joint constraint of the primary user peak interference power and maximum transmit power limit of the secondary user (SU). Analytical expressions for the secondary outage capacity are obtained based on exact as well as approximate expressions of the first and second moments of the channel capacity. Numerical results are provided to assess the effect of the number of SU receivers and a given SU outage probability. We also evaluate the impact of the primary network parameters on the secondary network performance.

  • 110.
    Louis, Sibomana
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hans-Jürgen, Zepernick
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Hung, Tran
    Mälardalens högskola, SWE.
    Charles, Kabiri
    University of Rwanda, RWA.
    A Framework for Packet Delay Analysis of Point-to-Multipoint Underlay Cognitive Radio Networks, 2017. In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 16, no. 9, p. 2408-2421. Article in journal (Refereed)
    Abstract [en]

    This paper presents a queueing analytical framework for the performance evaluation of the secondary user (SU) packet transmission with service differentiation in a point-to-multipoint underlay cognitive radio network. The transmit power of the SU transmitter is subject to the joint outage constraint imposed by the primary user receivers (PU-Rxs) and the SU maximum transmit power limit. The analysis considers a queueing model for secondary traffic with multiple classes, and different types of arrival and service processes under a non-preemptive priority service discipline. The SU quality of service (QoS) is characterized by a packet timeout threshold and target bit error rate. Given these settings, analytical expressions of the packet timeout probability and average transmission time are derived for opportunistic and multicast scheduling. Moreover, expressions of the average packet waiting time in the queue and the total time in the system for each class of traffic are obtained. Numerical examples are provided to illustrate the secondary network performance with respect to various parameters such as number of PU-Rxs and SU receivers, SU packet arrival process, QoS requirements, and the impact of interference from the primary network to the secondary network.
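The non-preemptive priority service discipline analyzed above has a classic textbook analogue. The sketch below computes mean queueing delays per class in an M/M/1 queue with non-preemptive priorities; this is the standard formula, not the paper's own multi-class model with timeouts and scheduling.

```python
def np_priority_wait(lambdas, mus):
    """Mean queueing delay per class in an M/M/1 queue with non-preemptive
    priorities (class 0 is highest). W_k = R / ((1 - s_{k-1})(1 - s_k)),
    where R is the mean residual service time seen by an arrival and
    s_k is the cumulative load of classes 0..k."""
    # For exponential service, E[S^2] = 2/mu^2, so R = sum(lambda_i / mu_i^2).
    R = sum(l / m**2 for l, m in zip(lambdas, mus))
    rhos = [l / m for l, m in zip(lambdas, mus)]
    waits, sigma_prev = [], 0.0
    for rho in rhos:
        sigma = sigma_prev + rho
        waits.append(R / ((1 - sigma_prev) * (1 - sigma)))
        sigma_prev = sigma
    return waits

# Two classes at equal load (total 0.5): the high-priority class waits less.
w_high, w_low = np_priority_wait([0.25, 0.25], [1.0, 1.0])
```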

  • 111.
    Lundberg, Lars
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Grahn, Håkan
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Melander, Christian
    Compuverde AB.
    Cache Support in a High Performance Fault-Tolerant Distributed Storage System for Cloud and Big Data, 2015. In: 2015 IEEE 29th International Parallel and Distributed Processing Symposium Workshops, IEEE Computer Society, 2015, p. 537-546. Conference paper (Refereed)
    Abstract [en]

    Due to the trends towards Big Data and Cloud Computing, one would like to provide large storage systems that are accessible by many servers. A shared storage can, however, become a performance bottleneck and a single point of failure. Distributed storage systems provide a shared storage to the outside world, but internally they consist of a network of servers and disks, thus avoiding the performance bottleneck and single-point-of-failure problems. We introduce a cache in a distributed storage system. The cache system must be fault tolerant so that no data is lost in case of a hardware failure. This requirement excludes the use of the common write-invalidate cache consistency protocols. The cache is implemented and evaluated in two steps. The first step focuses on design decisions that improve the performance when only one server accesses a file. In the second step we extend the cache with features for the case when more than one server accesses the same file. The cache improves the throughput significantly compared to having no cache, and the two-step evaluation approach makes it possible to quantify how different design decisions affect the performance of different use cases.
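The fault-tolerance requirement that rules out plain write-invalidate protocols can be illustrated with a toy write-through cache, which persists every write to the backing store before caching it, so a cache-server failure loses no data. This sketches the general principle only, not the Compuverde system's actual design.

```python
class WriteThroughCache:
    """Toy write-through cache: a write hits the backing store first,
    then the cache, so losing the cache never loses data."""
    def __init__(self, store):
        self.store = store   # stands in for the distributed disk layer
        self.cache = {}
    def write(self, key, value):
        self.store[key] = value   # persist first ...
        self.cache[key] = value   # ... then cache
    def read(self, key):
        return self.cache.get(key, self.store.get(key))

disk = {}
c = WriteThroughCache(disk)
c.write("blk0", b"data")
c.cache.clear()             # simulate a cache failure
recovered = c.read("blk0")  # still served from the backing store
```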

  • 112.
    Madala, Sravya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Efficient Ways to Upgrade Docker Containers in Cloud to Support Backward Compatibility: Various Upgrade Strategies to Measure Complexity, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In today's telecommunication landscape, thousands of systems are being moved into the cloud because of its wide range of features. This thesis explains efficient ways to upgrade Docker containers so as to support backward compatibility. It mainly concerns the high availability of systems in the cloud environment during upgrades. Smaller changes can be implemented automatically to some extent: minor changes can be handled by Apache Avro, where a schema is defined. But at some point the changes become too complex for Avro to handle; in a real-world example, we need to perform major changes on top of an application. Here we test different upgrade strategies and compare the code complexity, total time to upgrade, and network usage of a single-upgrade strategy versus a multiple-upgrade strategy, with and without the use of Avro. When code complexity is compared, the case without Avro performs well in the single-upgrade strategy, with less time to upgrade all six instances, but its network usage is higher than that of multiple upgrades. Thus the single-upgrade strategy is better for maintaining high availability in the cloud by performing the upgrades in an efficient manner.
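The kind of minor, backward-compatible schema change that Avro handles via field defaults can be sketched without any Avro API at all. This is a pure-Python illustration of the schema-resolution idea with hypothetical field names, not Avro's actual encoding.

```python
# Backward-compatible schema evolution via defaults: a v2 reader fills
# fields missing from v1 records with the new schema's default values.
# (Hypothetical fields; Avro's spec does this during schema resolution.)

V2_DEFAULTS = {"region": "unknown"}  # field added in v2 with a default

def decode_with_defaults(record, defaults):
    """Return the record as seen by the new schema, with missing fields
    filled from the defaults, so v2 readers can consume v1 data."""
    decoded = dict(defaults)
    decoded.update(record)
    return decoded

v1_record = {"id": 7, "name": "node-a"}  # written before the upgrade
v2_view = decode_with_defaults(v1_record, V2_DEFAULTS)
```

Changes without a sensible default (renames, type changes) are exactly the "too complex for Avro" cases the thesis addresses with explicit upgrade strategies.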

  • 113.
    Malkannagari, Akash Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Comparative Analysis of Virtual Desktops in Cloud: Performance of VMware Horizon View and OpenStack VDI, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. With the evolution of cloud computing in recent years, many companies have started providing various services using it. Desktop as a Service (DaaS) is one of the services in cloud computing, in which the backend of a Virtual Desktop Infrastructure (VDI) is hosted by a cloud service provider. Whereas SaaS (Software as a Service) is limited to web applications, all kinds of applications can be used in DaaS. Many companies provide VDI for the private cloud, among them VMware Horizon, XenDesktop by Citrix, and the OpenStack VDI solution.

    Objectives. In this thesis two VDI solutions are analyzed based on the virtual desktop launch time and the performance of the desktop in various test cases. The VDI solutions considered are VMware Horizon View and OpenStack VDI.

    Methods. The method for this research consists of two stages. The first stage was a qualitative analysis, in which a literature study and a survey were conducted. In the next stage, an experiment was set up in which different performance metrics were calculated while the virtual desktop was put under several test cases.

    Results. The results collected include the virtual desktop launch times in two scenarios and several performance metrics such as CPU usage, memory usage, average I/O size, average latency, throughput, IOPS and queue length at the processor.

    Conclusions. The performance of the virtual desktop running on OpenStack VDI was better in most of the test cases. In the test scenario where the disk was put under stress, the OpenStack VDI solution performed better than VMware Horizon View. Also considering the launch time for virtual desktops, OpenStack VDI performed better than VMware Horizon View.
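Several of the disk metrics compared above (IOPS, throughput, average I/O size, average latency) can be derived from a raw I/O trace. A sketch with hypothetical numbers, not the thesis's measurement tooling:

```python
def io_metrics(trace, window_s):
    """Aggregate an I/O trace, a list of (bytes, latency_s) tuples
    collected over window_s seconds, into summary disk metrics."""
    n = len(trace)
    total_bytes = sum(b for b, _ in trace)
    return {
        "iops": n / window_s,
        "throughput_mb_s": total_bytes / window_s / 1e6,
        "avg_io_kb": total_bytes / n / 1e3,
        "avg_latency_ms": sum(l for _, l in trace) / n * 1e3,
    }

# Hypothetical 1-second window: 100 I/Os of 4 KB, 2 ms latency each.
m = io_metrics([(4000, 0.002)] * 100, window_s=1.0)
```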

  • 114.
    Mehraban, Mehrdad
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Instant Feedback Loops – for short feedback loops and early quality assurance, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Context. In recent years, Software Quality Assurance (SQA) has become a crucial part of software development processes, and modern development processes have led to an increased demand for manual and automated code quality assurance. Manual code quality reviews can be a time-consuming and expensive process with varying results; automated code reviews are therefore a preferred alternative for streamlining this process. However, commercial and open-source static code analyzer tools often offer deep analysis with long lead times.

    Objectives. In this thesis work, the main aim is to introduce an early code quality assurance tool which features a combination of software metrics. The tool should be able to examine the code quality and complexity of a telecommunication-grade software product, such as the source code of a specific Ericsson product. It should capture the complexity and quality of a software product with regard to its efficiency, scope, flexibility, and execution time.

    Methods. For this purpose, the background section of the thesis is dedicated to in-depth research on the software metrics included in well-known static code analyzers. Then the development environment, the Ericsson source code under investigation, and the software metrics collected for evaluation are presented. Next, according to each software metric's characteristics, points of interest, and requirements, a set of steps based on Susman's action research cycle was defined. Moreover, SWAT, a suitable software analytics toolkit, was employed to extract the experiment data for each software metric from a static code analyzer named Lizard, in order to detect the most efficient software metrics. The outcome of the experiment demonstrates the relationships of the selected software metrics with one another.

    Results. The chosen software metrics were evaluated against a variety of vital factors, in particular actual defect counts for the specific Ericsson product. The most effective software metrics identified in this thesis work were combined into a new hybrid model to be utilized for early quality assurance.

    Conclusions. The proposed model, which consists of well-performing software metrics, demonstrates an impressive performance as an early code quality indicator. Consequently, the model could be studied in future research to further investigate its effectiveness and robustness as an early quality assurance tool.
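
    As a rough illustration of the kind of measure such metric-based models build on, the sketch below computes a crude cyclomatic-complexity proxy by counting decision points. Real analyzers such as Lizard parse the source properly; the keyword list here is only an assumption for demonstration.

```python
import re

# Crude McCabe-style proxy: complexity ~ 1 + number of decision points.
# A real static analyzer parses the code; this keyword count is only
# an illustration of the idea behind the metric.
DECISIONS = re.compile(r"\b(if|elif|for|while|and|or|case|except)\b")

def complexity_proxy(source: str) -> int:
    """Return 1 plus the number of decision-point keywords found."""
    return 1 + len(DECISIONS.findall(source))

snippet = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(complexity_proxy(snippet))  # two 'if' branches -> 3
```

    Combining several such metrics (size, complexity, defect counts) into one predictor is the essence of the hybrid model described above.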

  • 115.
    Mekala, Saketha Ram
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    MOBILE CREDIT USING GSM NETWORK: TOPUP FOR MOBILE PHONES2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
  • 116. Metzger, Florian
    et al.
    Rafetseder, Albert
    Romirer-Maierhofer, Peter
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Exploratory Analysis of a GGSN’s PDP Context Signaling Load2014In: Journal of Computer Networks and Communications, ISSN 2090-7141, E-ISSN 2090-715X, Vol. 2014Article in journal (Refereed)
    Abstract [en]

    This paper takes an exploratory look at control plane signaling in a mobile cellular core network. In contrast to most contributions in this field, our focus does not lie on the wireless or user-oriented parts of the network, but on signaling in the core network. In an investigation of core network data we look at statistics related to GTP tunnels and their signaling. Based on the results, we propose a definition of load at the GGSN and create an initial load queuing model. We find signs of user devices putting a burden on the core network through their behavior.

  • 117.
    Michel, Thomas
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Festival vendors: A mapping of commercial and social participatory variables2012In: Tourism, Festivals, and Cultural Events in times of Crises / [ed] Lyck, Lise, Copenhagen: Copenhagen Business School , 2012Chapter in book (Refereed)
    Abstract [en]

    Vendors constitute an important part of the success of many festivals and fairs. Collectively, they may provide a central activity of the festival, both from a visitor's point of view and that of the festival programming organization, yet they are often overlooked or under-examined by festival researchers. This study presents the findings of a survey of vendors at Sweden's largest historic festival, Medieval Week on Gotland, which seeks to capture a more nuanced picture of the vendors and the possible factors involved in their participation. The study looks at the distance travelled by vendors to the festival, years of participation, level of economic involvement, competitive awareness, and 'tipping points', i.e. those elements that might be involved in a decision not to return to the festival in the future. The implications for festival managers are discussed in terms of attracting and retaining a vital mix of festival vendors.

  • 118.
    Minhas, Tahir Nawaz
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Shahid, Muhammad
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Lövström, Benny
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Rossholm, Andreas
    Blekinge Institute of Technology, Faculty of Engineering, Department of Applied Signal Processing.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    QoE rating performance evaluation of ITU-T recommended video quality metrics in the context of video freezes2016In: Australian Journal of Electrical and Electronics Engineering, ISSN 1448-837X, Vol. 13, no 2, p. 122-131Article in journal (Refereed)
    Abstract [en]

    In real-time video streaming, video quality can be degraded due to network performance issues. Among other artefacts, video freezing and video jumping are factors that influence user experience. Service providers, operators and manufacturers are interested in evaluating the quality of experience (QoE) objectively because subjective assessment of QoE is expensive and, in many user cases, subjective assessment is not possible to perform. Different algorithms have been proposed and implemented in this regard. Some of them are in the recommendation list of the ITU Telecommunication Standardization Sector (ITU-T). In this paper, we study the effect of the freezing artefact on user experience and compare the mean opinion score of these videos with the results of two algorithms, the perceptual evaluation of video quality (PEVQ) and temporal quality metric (TQM). Both metrics are part of the ITU-T Recommendation J.247 Annex B and C. PEVQ is a full-reference video quality metric, whereas TQM is a no-reference quality metric. Another contribution of this paper is the study of the impact of different resolutions and frame rates on user experience and how accurately PEVQ and TQM measure varying frame rates.

  • 119.
    Mohammad, Taha
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Eati, Chandra Sekhar
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    A Performance Study of VM Live Migration over the WAN2015Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Virtualization is the key technology that has provided cloud computing platforms a new way for small and large enterprises to host their applications by renting the available resources. Live VM migration allows a virtual machine to be transferred from one host to another while the virtual machine is active and running. The main challenge in live migration over the WAN is maintaining network connectivity during and after the migration. We have carried out live VM migration over the WAN, migrating VM memory states of different sizes, and present our solutions based on the Open vSwitch/VXLAN and Cisco GRE approaches. VXLAN provides the mobility support needed to maintain network connectivity between the client and the virtual machine. We have set up an experimental testbed to collect the relevant performance metrics and analyzed the performance of live migration over VXLAN and GRE networks. Our experimental results show that network connectivity was maintained throughout the migration process, with negligible signaling overhead and minimal downtime. The downtime variation experienced with changes in the applied network delay was relatively higher than the variation experienced when migrating different VM memory states. The total migration time showed a strong relationship with the size of the migrating VM memory state.

  • 120.
    Moshirian, Sanaz
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance of International roaming Location Update in 3G and 4G networks2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Since a mobile network operator (MNO) relies on many Business Support Systems (BSS) and Operation Support Systems (OSS), it should be assured that the operator's systems support the requirements of the future. This thesis focuses on the end-to-end aspects that must be considered to ensure that international roaming continues to operate flawlessly. The thesis examines Long Term Evolution (LTE) in the case of international roaming by measuring the end-to-end location update delay. In order to evaluate the LTE performance of international roaming, the delay has been measured by means of tracing tools for several different international roamers, and the results have been compared with those achieved for a local user. The outcome has been compared with the corresponding results in the 3G network; statistical results are provided and graphs plotted to study the performance. Based on the results obtained in this thesis, it is concluded that the local user attaches to the network more stably, i.e. there is less fluctuation in delay times for the local user. The delay times in 3G networks are longer than in LTE networks; however, 3G networks act more stably, with less fluctuation when connecting.

  • 121.
    MOUNIKA REDDY, CHANDIRI
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Customer Churn Predictive Heuristics from Operator and Users' Perspective2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Telecommunication organizations face increasing customer-service pressure as they launch a variety of user-desired services. Delivering poor customer experiences puts customer relationships and revenues at risk. One of the metrics used by telecommunications companies to assess their relationship with customers is "churn". After substantial research in the field of churn prediction over many years, Big Data analytics with data mining techniques was found to be an efficient way of identifying churn. These techniques are usually applied to predict customer churn by building models, classifying patterns and learning from historical data. Although some work has already been undertaken with regard to the users' perspective, it appears to be in its infancy. The aim of this thesis is to validate churn-predictive heuristics from the operator perspective and close to the user end: conducting experiments with different groups of people regarding their data usage, designing a model that is close to the user end and fits the data obtained through the survey, and correlating the examined churn indicators and the traffic volume variation with the users' feedback collected by accompanying theses. A literature review is done to analyze previous work, identify the difficulties faced in analyzing users' sentiments, and understand methodologies for improving the accuracy of churn prediction algorithms. Experiments are conducted with different groups of people across the globe. Their experiences with the quality of calls and data, whether they are looking to change operator in the future, and what their reasons for churning would be, are analyzed. Their feedback is validated using existing heuristics. The collected data set is analyzed statistically and validated against different datasets obtained from operators' data.
    Statistical and Big Data analysis has also been done on monthly data volume usage provided by an operator for active and churned customers. A possible correlation of user churn with users' feedback is studied by calculating percentages and further correlating the results with the operator's data and the data produced by the mobile app. The results show that the monthly volumes alone have little decision power, and that additional attributes such as higher time resolution, age, gender and others are needed. The global survey, on the other hand, has shown similarities with the operator's customers' feedback, with issues "around the globe" such as data plan issues, pricing, and problems with connectivity and speed. Nevertheless, data preprocessing and feature selection have shown to be the key factors. Churn-predictive models achieved a better classification accuracy of 69.7% when more attributes were provided. Classification of the telecom operator's data gave an accuracy of 51.7% after preprocessing, for the variables we chose. Finally, a close observation of the end user revealed the possibility of a much higher classification precision of 95.2%.
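
    The churn metric referred to above is commonly computed as the fraction of customers lost over a period; a minimal sketch of that standard definition follows, with hypothetical figures rather than the operator's data.

```python
def churn_rate(customers_at_start, customers_lost):
    """Fraction of the customer base lost during the period."""
    return customers_lost / customers_at_start

# Hypothetical figures, not the operator's data set.
print(f"{churn_rate(2000, 130):.1%}")  # → 6.5%
```

    Prediction models such as those in the thesis go further, classifying which individual customers are likely to contribute to this rate.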

  • 122. Mugga, Charles
    et al.
    Sun, Dong
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Comparison of IPv6 Multihoming and Mobility Protocols2014Conference paper (Refereed)
    Abstract [en]

    Multihoming and mobility protocols enable computing devices to stay always best connected (ABC) to the Internet. The focus of our study is on the handover latency and rehoming time required by such protocols. We used simulations in OMNeT++ to study the performance of the following protocols that support multihoming, mobility or a combination thereof: Mobile IPv6 (MIPv6), Multiple Care-of Address Registration (MCoA), Stream Control Transmission Protocol (SCTP), and Host Identity Protocol (HIP). Our results indicate that HIP shows the best performance in all scenarios considered.

  • 123.
    Musinada, Suren
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    On energy consumption of mobile cloud gaming using GamingAnywhere2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In the contemporary world there has been a great proliferation of smartphone devices and broadband wireless networks, and the mobile gaming market among the young generation is growing tremendously because of its entertainment features. Mobile cloud gaming is a promising technology that overcomes inherent restrictions such as limited computational capacity and battery life. GamingAnywhere is an open-source cloud gaming system, which is used in this thesis to calculate the energy consumption of a mobile device. The aim of the thesis is to measure the power consumption of the mobile device when a game is streamed from the GamingAnywhere server to the GamingAnywhere client. The total power consumption is measured for four resolutions using the Monsoon hardware power monitoring tool, and the power of individual components of the mobile device, such as CPU, LCD and audio, is measured with the PowerTutor software. The memory usage of the mobile device when using GamingAnywhere is also measured with the Trepn Profiler application. Based on the obtained results, it was found that the power consumption and memory usage of the mobile device on the client side increase as the resolution varies from low to high. After mapping the hardware results against the software results, only a very small difference was identified, from which we estimate that the PowerTutor software can be used instead of the Monsoon hardware power tool, as the software is capable of measuring the power consumption of the individual components of the mobile device.

  • 124.
    Nadella, Sai Anoop
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Araga, Nikhil Reddy
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Study on Reliable Vehicular Communication for Urban and Highway Traffic Mobility2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Due to its extensive applications, VANETs have emerged as one of the important research areas in wireless networks. The main goal of vehicular technologies is to enhance traffic management by improving safety and to provide reliable data exchange and information services among vehicles.

    Vehicular communication is a cooperative technology that enables communication among vehicles, infrastructure and other devices. The V2V and V2I communication models are commonly used in vehicular networks. Recently, extensive research has been performed on a hybrid model which integrates both V2V and V2I. The main goal of this research is to study the behavior of these communication models in urban and highway traffic environments and to suggest a simulation model that provides reliable vehicular communication.

    A literature study provides background knowledge on vehicular networks. A simulation model is then designed with SUMO and NS-3 which implements all of these communication models. The simulation model is divided into phases, each representing a different communication model, and all phases are evaluated in both urban and highway traffic environments.

    Performance metrics are evaluated and analyzed to study the behavior of these models; throughput, PDR, packet drop and propagation delay are the metrics considered.

    Simulation analysis shows that the hybrid model exhibits more stable communication behavior than V2V and V2I in both urban and highway traffic environments.

  • 125.
    Nagathota, Hadassah Pearlyn
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Design and Implementation of CMT in Real-time: Evaluation based on scheduling mechanisms2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Standard transport layer protocols like UDP, TCP, and SCTP use only one access technology at a time. Concurrent Multipath Transmission (CMT) has been developed for the parallel use of access technologies. The main theme of this thesis is to implement CMT in real time and evaluate the impact of various scheduling algorithms on its performance.

    Objectives: The main objectives of this thesis are to implement a de-multiplexer at the source and a re-sequencer at the receiver, and to investigate several scheduling heuristics, analyzing their impact on selected performance metrics.

    Methods: A thorough understanding of the topic is attained through a literature review of related work. To implement and evaluate the different scheduling patterns, an experimental test bed is set up. For the transmission of data, socket programming in Python is used. While varying the parameters involved in the experiment, the performance metrics were measured, and statistical analysis was carried out on them for proper evaluation.

    Results: CMT is implemented in a real-time test bed and concurrency is validated. Weighted Round-Robin performs better than Round-Robin when the packet size is large, whereas both exhibit nearly the same behavior for smaller packet sizes.

    Conclusions: It can be concluded that Weighted Round-Robin attains higher throughput, possibly due to the higher fragmentation load when large packets are transmitted on the highly reliable path, and hence performs better than Round-Robin. Further evaluation is needed of other metrics such as delay and jitter, of other scheduling mechanisms, and in other environments.
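
    The Weighted Round-Robin scheduling compared above can be sketched as follows; the path names and the 3:1 weighting are hypothetical, not the thesis test-bed configuration.

```python
from itertools import cycle

def weighted_round_robin(paths, weights):
    """Yield path identifiers in weighted round-robin order:
    each path appears `weight` times per scheduling cycle."""
    schedule = [p for p, w in zip(paths, weights) for _ in range(w)]
    return cycle(schedule)

# Hypothetical two-path CMT setup: three packets on the primary path
# for every one packet on the secondary path.
sched = weighted_round_robin(["path_a", "path_b"], [3, 1])
first_eight = [next(sched) for _ in range(8)]
print(first_eight)
# ['path_a', 'path_a', 'path_a', 'path_b',
#  'path_a', 'path_a', 'path_a', 'path_b']
```

    With weights [1, 1] this degenerates to plain Round-Robin, the baseline the thesis compares against.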

  • 126.
    Nawaz, Omer
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems. Blekinge Inst Technol, Dept Commun Syst, Karlskrona, Sweden..
    Minhas, Tahir Nawaz
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems. Blekinge Inst Technol, Dept Commun Syst, Karlskrona, Sweden..
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems. Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems. Blekinge Institute of Technology, School of Computing. Blekinge Inst Technol, Dept Commun Syst, Karlskrona, Sweden..
    Optimal MTU for Realtime Video Broadcast with Packet Loss - A QoE Perspective2014In: 2014 9TH INTERNATIONAL CONFERENCE FOR INTERNET TECHNOLOGY AND SECURED TRANSACTIONS (ICITST), IEEE , 2014, p. 396-401Conference paper (Refereed)
    Abstract [en]

    Multimedia applications have become the prime source of Internet traffic in recent years due to high bandwidth offered by almost all access mechanisms. This highlights the need of intense research efforts to improve the already demanding user perception levels for high-performance video delivery networks. Quality of Experience (QoE) based metrics are often used to quantify user satisfaction levels regarding an application or service. In this paper, we have analyzed the impact of variable frame sizes at link layer with different packet loss scenarios to evaluate the performance degradation of H.264 based live video streams from the end user perspective using subjective tests. We have focused on both subjective and objective quantitative measures to analyze the myth that smaller packets provide better quality in error-prone networks. We found that this assumption may not be true for some cases with a considerable packet loss ratio. Moreover, we observed that full-reference video assessment software like PEVQ predicts QoE to an acceptable extent, which allows cutting cost and effort coming with subjective evaluations.
    Keywords: Quality of Experience, Multimedia communication, Streaming media

  • 127.
    Neelap, Akash Kiran
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance analysis of GPGPU and CPU on AES Encryption2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Advancements in computing have led to a tremendous increase in the amount of data being generated every minute, which needs to be stored or transferred while maintaining a high level of security. The military and armed forces today rely heavily on computers to store huge amounts of important and secret data that bear greatly on national security. The standard AES encryption algorithm, at the heart of almost every application today, gives a high degree of security but is time-consuming in the traditional sequential approach. Implementation of AES on GPUs has been an ongoing research topic for several years, but it is still either inefficient or incomplete, and demands optimization for better performance. Considering the limitations of previous research as a research gap, this paper aims to exploit efficient parallelism on the GPU and on a multi-core CPU to make a fair and reliable comparison, and to derive implementation techniques on multi-core CPU and GPU for future implementations. This paper experimentally examines the performance of a CPU and a GPGPU at different levels of optimization using Pthreads, CUDA and CUDA streams. It critically examines the behaviour of a GPU for different granularity levels and grid dimensions to determine the effect on performance. The results show a considerable acceleration in speed on an NVIDIA GPU (Quadro K4000) over single-threaded and multi-threaded implementations on a CPU (Intel® Xeon® E5-1650).

  • 128. Ngo, Hien Quoc
    et al.
    Matthaiou, Michail
    Duong, Trung Q.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Larsson, Erik G.
    Uplink Performance Analysis of Multicell MU-SIMO Systems With ZF Receivers2013In: IEEE Transactions on Vehicular Technology, ISSN 0018-9545, E-ISSN 1939-9359, Vol. 62, no 9, p. 4471-4483Article in journal (Refereed)
    Abstract [en]

    We consider the uplink of a multicell multiuser single-input multiple-output system (MU-SIMO), where the channel experiences both small- and large-scale fading. The data detection is done by using the linear zero-forcing technique, assuming the base station (BS) has perfect channel state information of all users in its cell. We derive new exact analytical expressions for the uplink rate, the symbol error rate (SER), and the outage probability per user, as well as a lower bound on the achievable rate. This bound is very tight and becomes exact in the large-number-of-antenna limit. We further study the asymptotic system performance in the regimes of high signal-to-noise ratio (SNR), large number of antennas, and large number of users per cell. We show that, at high SNRs, the system is interference limited, and hence, we cannot improve the system performance by increasing the transmit power of each user. Instead, by increasing the number of BS antennas, the effects of interference and noise can be reduced, thereby improving system performance. We demonstrate that, with very large antenna arrays at the BS, the transmit power of each user can be made inversely proportional to the number of BS antennas while maintaining a desired quality of service. Numerical results are presented to verify our analysis.
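
    The power-scaling behavior described in the abstract can be illustrated with a commonly used lower bound from the massive MIMO literature on the per-user ZF uplink rate under perfect CSI, log2(1 + p_u(M - K)β); the specific numbers below are illustrative assumptions, not results from the paper.

```python
import math

def zf_rate_lower_bound(M, K, p_u, beta=1.0):
    """Commonly used lower bound on the per-user ZF uplink rate with
    perfect CSI: log2(1 + p_u * (M - K) * beta) bits/s/Hz."""
    return math.log2(1.0 + p_u * (M - K) * beta)

E_u, K = 10.0, 10            # fixed energy budget, users per cell
for M in (20, 100, 1000, 10000):
    p_u = E_u / M            # cut transmit power as the array grows
    print(M, round(zf_rate_lower_bound(M, K, p_u), 3))
# The bound tends to log2(1 + E_u * beta) as M grows, illustrating
# that per-user power can be made inversely proportional to M while
# maintaining a desired quality of service.
```

    This matches the qualitative conclusion above: growing the antenna array, rather than raising per-user transmit power, is what improves performance.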

  • 129.
    Nutalapati, Hima Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Sustainable Throughput Measurements for Video Streaming2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With the increase in demand for video streaming services on hand-held mobile terminals with limited battery life, it is important to maintain the user's Quality of Experience (QoE) while taking resource consumption into consideration. The goal is to offer as good a quality as feasible, avoiding as much user annoyance as possible, and thus to deliver the video without uncontrollable quality distortions. This is possible when an optimal (or desirable) throughput value is chosen, since exceeding that threshold means entering a region of unstable QoE. Hence, the concept of QoE-aware sustainable throughput is introduced as the maximal value of the desirable throughput that avoids disturbances in the QoE due to delivery issues, or keeps them at an acceptable minimum.

    The thesis aims at measuring sustainable throughput values when video streams of different resolutions are streamed from a server to a mobile client over wireless links, in the presence of the network disturbances packet loss and delay. The video streams are collected at the client side for quality assessment, and the maximal throughput at which the QoE problems can still be kept at a desired level is determined.

    Scatter plots were generated for the individual opinion scores and their corresponding throughput values for each disturbance case, and regression analysis was performed to find the best fit for the observed data. Logarithmic, exponential, linear and power regressions were considered in this thesis. The R-squared values were calculated for each regression model, and the model with the R-squared value closest to 1 was determined to be the best fit. The power and logarithmic regression models had the R-squared values closest to 1.

    Better quality ratings were observed for the low-resolution videos in the presence of packet loss and delay for the considered test cases. The QoE disturbances can be kept at a desirable level for the low-resolution videos; among the test cases considered, the 360p video is more resilient under high delay and packet loss values and has better opinion scores. Hence, the throughput can be observed to be sustainable at this threshold.
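
    The model-selection step described above (fitting four regression families and picking the one with the R-squared value closest to 1) can be sketched as follows; the data points are synthetic stand-ins, not the thesis measurements.

```python
import math

def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r_squared(ys, preds):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical (throughput, opinion score) observations -- an
# illustration of the model-selection step, not the thesis data.
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
ys = [1.8, 2.6, 3.3, 4.0, 4.6]

models = {}
a, b = linfit(xs, ys)
models["linear"] = [a + b * x for x in xs]
a, b = linfit([math.log(x) for x in xs], ys)
models["logarithmic"] = [a + b * math.log(x) for x in xs]
a, b = linfit(xs, [math.log(y) for y in ys])
models["exponential"] = [math.exp(a + b * x) for x in xs]
a, b = linfit([math.log(x) for x in xs], [math.log(y) for y in ys])
models["power"] = [math.exp(a) * x ** b for x in xs]

best = max(models, key=lambda m: r_squared(ys, models[m]))
print(best, round(r_squared(ys, models[best]), 4))
```

    The exponential and power fits are computed in log space, the usual linearization trick; on real MOS data the winning family may of course differ from this synthetic example.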

  • 130.
    Palm, Eric
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Skoglund, Jakob
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Riktlinjer för yttre hot: En inblick i riktlinjer angående yttre hot för Karlskrona kommun2015Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [sv]

    The goal of this work is to better understand how guidelines should be created and used. The starting point was a number of incidents, reported in the media, where strangers made unwanted contact with preschools in the Karlskrona area. The authors wanted to review how the municipality worked with questions of external threats, which led the work towards an attempt to better understand guidelines and their use, since the municipality at the start of the work lacked guidelines for external threats of this kind. The method used is qualitative, in the form of interviews with security officers responsible for preschools in other Swedish municipalities. The results of these interviews, together with complementary material on general approaches for identifying and analyzing security problems, resulted in a proposed working method for the creation of guidelines, as well as a reminder to review existing guidelines for external threats.

  • 131.
    Pasumarthy, Sarat Chandra
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Live Migration of Virtual Machines in the Cloud: An Investigation by Measurements2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Cloud computing has grown in prevalence in recent years due to its concept of computing as a service, allowing users to offload infrastructure management costs and tasks to a cloud provider. Cloud providers leverage server virtualization technology for efficient resource utilization, faster provisioning times, reduced energy consumption, etc. Cloud computing inherits a key feature of server virtualization: live migration of virtual machines (VMs). This technique allows transferring a VM from one host to another with minimal service interruption. However, live migration is a complex process, and the cloud management software used by cloud providers can have a significant influence on the migration process.

    This thesis work aims to investigate the complex process of live migration performed by the hypervisor as well as the additional steps involved when a cloud management software or platform is present and form a timeline of these collection of steps or phases. The work also aims to investigate the performance of these phases, in terms of time, when migrating VMs with different sizes and workloads. For this thesis, the Kernel-based Virtual Machine (KVM) hypervisor and the OpenStack cloud software have been considered.

    The methodology employed is experimental and quantitative. The essence of this work is investigation by network passive measurements. To elaborate, this thesis work performs migrations on physical test-beds and uses measurements to investigate and evaluate the migration process performed by the KVM hypervisor as well as the OpenStack platform deployed on KVM hypervisors. Experiments are designed and conducted based on the objectives to be met.

    The results of the work primarily include the timeline of the migration phases of both the KVM hypervisor and the OpenStack platform. The results also include the time taken by each migration phase, as well as the total migration time and the VM downtime. The results indicate that the total migration time, the downtime and a few of the phases increase with increasing CPU load and VM size, whereas some of the phases show no such trend. It has also been observed that the transfer stage alone does not determine the total time; every phase of the process has a significant influence on the migration process.

    The conclusion from this work is that although cloud management software aids in managing the infrastructure, it has a notable impact on the migration process carried out by the hypervisor. Moreover, the migration phases and their proportions depend not only on the VM but on the physical environment as well. This thesis work focuses solely on the time factor of each phase; further evaluation of each phase with respect to its resource utilization could provide better insight into probable optimization opportunities.

  • 132.
    Peddireddy, Divya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    IP Router Testing, Isolation and Automation2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Test automation is a technique followed by present-day software development industries to reduce the time and effort invested in manual testing. The process of automating existing manual tests has now gained popularity in the Telecommunications industry as well. The Telecom industries are looking for ways to improve their existing test methods with automation and to quantify the benefit of introducing test automation.

    At the same time, the existing methods of testing for throughput calculation in industries involve measurements on a larger timescale, like one second. The possibility to measure the throughput of network elements like routers on smaller timescales gives a better understanding about the forwarding capabilities, resource sharing and traffic isolation in these network devices.

    Objectives. In this research, we develop a framework for automatically evaluating the performance of routers on multiple timescales, one second, one millisecond and less. The benefit of introducing test automation is expressed in terms of Return on Investment, by comparing the benefit of manual and automated testing. The performance of a physical router, in terms of throughput is measured for varying frame sizes and at multiple timescales.

    Methods. The method followed for expressing the benefit of test automation is quantitative. At the same time, the methodology followed for evaluating the throughput of a router on multiple timescales is experimental and quantitative, using passive measurements. A framework is developed for automatically conducting the given test, which enables the user to test the performance of network devices with minimum user intervention and with improved accuracy.

    Results. The results of this thesis work include the benefit of test automation, in terms of Return on Investment when compared to manual testing, followed by the performance of a router on multiple timescales. The results indicate that test automation can improve the existing manual testing methods by introducing greater accuracy in testing. The throughput results indicate that the performance of a physical router varies on multiple timescales, such as one second and one millisecond. The throughput of the router is evaluated for varying frame sizes. It is observed that the difference in the coefficient of variance at the egress and ingress of the router is larger for smaller frame sizes than for larger frame sizes. Likewise, the difference is larger on smaller timescales than on larger timescales.

    Conclusions. This thesis work concludes that the developed test automation framework can be used and extended for automating several test cases at the network layer. The automation framework reduces the execution time and improves accuracy when compared to manual testing. The benefit of test automation is expressed in terms of Return on Investment. The throughput results are in line with the hypothesis that the performance of a physical router varies on multiple timescales. The performance, in terms of throughput, is expressed using a previously suggested performance metric. It is observed that the difference in the coefficient of variance values (at the egress and ingress of a router) is greater on smaller timescales than on larger timescales, and greater for smaller frame sizes than for larger frame sizes.
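    As an aside (not code from the thesis), the multi-timescale throughput analysis described above can be sketched in a few lines: bin a packet-arrival trace at two different timescales and compare the coefficient of variance of the per-bin throughput. The trace, frame size and bin widths below are hypothetical.

```python
from collections import Counter

def throughput_per_bin(timestamps_s, frame_bits, bin_s):
    """Aggregate frame arrivals into fixed-width time bins and return
    the per-bin throughput in bit/s."""
    counts = Counter(int(t // bin_s) for t in timestamps_s)
    n_bins = int(max(timestamps_s) // bin_s) + 1
    return [counts.get(b, 0) * frame_bits / bin_s for b in range(n_bins)]

def coeff_of_variance(samples):
    """CoV = standard deviation / mean of the per-bin throughput."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return (var ** 0.5) / mean if mean else float("inf")

# Hypothetical trace: one 512-byte frame every 0.8 ms for about 4 s.
trace = [i * 0.0008 for i in range(5000)]
cov_1s = coeff_of_variance(throughput_per_bin(trace, 512 * 8, 1.0))
cov_1ms = coeff_of_variance(throughput_per_bin(trace, 512 * 8, 0.001))
# The same steady stream looks smooth at 1 s but bursty at 1 ms.
```

    At the 1 s timescale the per-bin rates are nearly identical (CoV close to 0), while at 1 ms the 0.8 ms spacing makes bins alternate between one and two frames, so the CoV is markedly higher — the kind of effect the thesis measures at router ingress and egress.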

  • 133.
    Penmetsa, Jyothi Spandana
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    AUTOMATION OF A CLOUD HOSTED APPLICATION: Performance, Automated Testing, Cloud Computing2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the existing requirements of the customer. Software testing is one of the “Verification and Validation,” or V&V, software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software. White-box testing, in contrast, focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce the manual effort and to perform testing continuously, thereby increasing the quality of the product.

    Objectives: In this research, a cloud hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, such as the test appliance library, through automation, and to measure the impact of the automation on the release cycles of the organisation.

    Methods: Here, automation is implemented using the Scrum methodology, an agile software development process. Using Scrum, a working software product can be delivered to the customers incrementally and empirically, with its functionalities updated in each increment. The test appliance library functionality is verified by deploying a testing device, thereby keeping track of automatic software downloads to the testing device and of license updates on it.

    Results: The test appliance functionality of the cloud hosted application is automated using the TestComplete tool, and the release cycles are found to be shortened. Through automation of the cloud hosted application, a reduction of nearly 24% in the length of the release cycles is observed, thereby reducing the manual effort and increasing the quality of delivery.

    Conclusion: Automation of a cloud hosted application requires no manual effort, so time can be utilised effectively and the application can be tested continuously, increasing the efficiency and

  • 134.
    Phan, Hoc
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jurgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Packet Loss Priority of Cognitive Radio Networks with Partial Buffer Sharing2015In: 2015 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), IEEE Computer Society, 2015, p. 7646-7652Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider the application of partial buffer sharing to an M/G/1/K queueing system for cognitive radio networks (CRNs). It is assumed that the CRN is subject to Nakagami-m fading. Secondary users are allowed to utilize the licensed radio spectrum of the primary users through underlay spectrum access. A finite buffer at the secondary transmitter is partitioned into two regions: the first region serves both classes of packets while the second region serves only packets of the highest-priority class. Therefore, the examined CRN can be modeled as an M/G/1/K queueing system using partial buffer sharing. An embedded Markov chain is applied to analyze the queueing behavior of the system. Utilizing the balance equations and the normalization equation, the equilibrium state distribution of the system at an arbitrary time instant can be found. This outcome is utilized to investigate the impact of queue length, arrival rates, and fading parameters on queueing performance measures such as blocking probability, throughput, mean packet transmission time, channel utilization, mean number of packets in the system, and mean packet waiting time for each class of packets.
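    As an illustration of the partial buffer sharing rule described above (a toy slotted-time sketch, not the paper's embedded-Markov-chain analysis): low-priority packets are admitted only while the queue occupancy is below a threshold, while high-priority packets may use the whole buffer. All parameter values below are hypothetical.

```python
import random

def admit(queue_len, pkt_class, threshold, capacity):
    """Partial buffer sharing: positions below `threshold` are shared by
    both classes; positions from threshold up to capacity are reserved
    for class-1 (high-priority) packets."""
    if queue_len >= capacity:
        return False
    if pkt_class == 1:
        return True
    return queue_len < threshold

def simulate(arrival_p, service_p, threshold, capacity, n_slots, seed=1):
    """Toy slotted simulation: per slot, at most one arrival (random
    class, prob. arrival_p) and at most one departure (prob. service_p).
    Returns the observed blocking probability per class."""
    rng = random.Random(seed)
    q = 0
    arrivals = {1: 0, 2: 0}
    blocked = {1: 0, 2: 0}
    for _ in range(n_slots):
        if rng.random() < arrival_p:
            cls = 1 if rng.random() < 0.5 else 2
            arrivals[cls] += 1
            if admit(q, cls, threshold, capacity):
                q += 1
            else:
                blocked[cls] += 1
        if q > 0 and rng.random() < service_p:
            q -= 1
    return {c: blocked[c] / max(arrivals[c], 1) for c in (1, 2)}

p = simulate(0.9, 0.5, threshold=4, capacity=8, n_slots=100_000)
# Under overload, class-2 packets see a much higher blocking probability.
```

    The reserved region trades class-2 blocking for class-1 protection, which is exactly the tension the paper's equilibrium analysis quantifies exactly.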

  • 135.
    Phan, Hoc
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Ngo, Hien Quoc
    Performance of Cognitive Radio Networks with Finite Buffer Using Multiple Vacations and Exhaustive Service2014In: 2014 8TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS), Gold Coast, Australia: IEEE , 2014Conference paper (Refereed)
    Abstract [en]

    In this paper, we analyze the performance of a cognitive radio network where the secondary transmitter, besides its own transmission, occasionally relays the primary signal. It is assumed that the secondary transmitter employs the exhaustive service mode to transmit the secondary signal and multiple vacations to relay the primary signal. When assisting the primary transmitter, we assume that the secondary transmitter utilizes the decode-and-forward protocol to process the primary signal and forwards it to the primary receiver. Furthermore, the secondary transmitter has a finite buffer, the arriving packets of the secondary network are modeled as a Poisson process, and all channels are subject to Nakagami-m fading. Modeling the system as an M/G/1/K queueing system with exhaustive service and multiple vacations, and using an embedded Markov chain approach to analyze it, we obtain several key queueing performance indicators, i.e., the channel utilization, blocking probability, mean number of packets, and mean serving time of a packet in the system. The derived formulas are then utilized to evaluate the performance of the considered system.

  • 136. Phan, Hoc
    et al.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zheng, Fu-Chun
    Amplify-and-forward relay networks with underlay spectrum access over frequency selective fading channels2015In: IEEE Vehicular Technology Conference Proceedings, IEEE Communications Society, 2015, Vol. 2015-July, p. Article number 7145676-Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate the system performance in terms of outage probability and symbol error rate of cognitive relay networks with underlay spectrum access over Nakagami-m frequency selective fading channels. Underlay spectrum access is deployed at the secondary transmitters, i.e., the secondary source and relay, as a means of providing high spectrum utilization efficiency. It is assumed that the whole system operates in frequency selective fading channels, which commonly occur in broadband communication networks. In addition, direct communication from the secondary source to the destination is present together with relay communication, such that selection combining is applied at the destination. That is, either the direct or the relay channel is selected for communication depending on which one provides the best signal-to-noise ratio (SNR). Analytical expressions for crucial performance measures such as outage probability and symbol error rate are formulated. Based on these analytical outcomes, the respective system performance is investigated through numerical results for various system parameters and scenarios. © 2015 IEEE.

  • 137.
    Phan, Hoc
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Zepernick, Hans-Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Duong, Quang Trung
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Tran, Hung
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Cognitive amplify-and-forward relay networks with beamforming under primary user power constraint over Nakagami-m fading channels2015In: Wireless Communications & Mobile Computing, ISSN 1530-8669, E-ISSN 1530-8677, Vol. 15, no 1, p. 56-70Article in journal (Refereed)
    Abstract [en]

    In this paper, we analyze the performance of cognitive amplify-and-forward (AF) relay networks with beamforming under the peak interference power constraint of the primary user (PU). We focus on the scenario where beamforming is applied at the multi-antenna secondary transmitter and receiver. Also, the secondary relay network operates in channel state information assisted AF mode, and the signals undergo independent Nakagami-m fading. In particular, closed-form expressions for the outage probability and symbol error rate (SER) of the considered network over Nakagami-m fading are presented. More importantly, asymptotic closed-form expressions for the outage probability and SER are derived. These tractable closed-form expressions for the network performance readily enable us to evaluate and examine the impact of network parameters on the system performance. Specifically, the impact of the number of antennas, the fading severity parameters, the channel mean powers, and the peak interference power is addressed. The asymptotic analysis shows that the peak interference power constraint imposed on the secondary relay network has no effect on the diversity gain. However, the coding gain is affected by the fading parameters of the links from the primary receiver to the secondary relay network.
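    As a side note, one building block of such analyses can be sketched directly: under Nakagami-m fading the instantaneous SNR is Gamma-distributed, so the outage probability has a simple closed form for integer m. This is generic textbook material, not the paper's closed-form expressions (which additionally involve beamforming and the interference constraint).

```python
from math import exp, factorial

def outage_probability(m, omega, gamma_th):
    """Outage probability P[SNR < gamma_th] over a Nakagami-m fading
    link: the instantaneous SNR is Gamma-distributed with integer shape
    m and mean omega, so the regularized lower incomplete gamma
    function reduces to a finite sum."""
    a = m * gamma_th / omega
    return 1.0 - exp(-a) * sum(a ** k / factorial(k) for k in range(m))

# m = 1 reduces to Rayleigh fading: P_out = 1 - exp(-gamma_th / omega).
p_rayleigh = outage_probability(1, 10.0, 10.0)
# For the same mean SNR, a larger m (milder fading) lowers the outage.
```

    In asymptotic analyses like the paper's, the slope of such an outage curve versus mean SNR on a log-log scale is what yields the diversity gain.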

  • 138. Phan, Hoc
    et al.
    Zheng, Fu-Chun
    Chu, Thi My Chinh
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Physical-layer network coding with multi-antenna transceivers in interference limited environments2016In: IET Communications, ISSN 1751-8628, E-ISSN 1751-8636, Vol. 10, no 4, p. 363-371Article in journal (Refereed)
    Abstract [en]

    In this study, the authors first analyse system performance of beamforming amplify-and-forward two-way relay networks with physical-layer network coding under the impact of co-channel interference from multiple surrounding terminals and then propose the associated power allocation strategies. The performance of the two-way communications in terms of outage probability, symbol error rate (SER), and total ergodic channel capacity of the system is quantified. Asymptotic performance analysis for sufficiently high signal-to-noise ratio is also provided to obtain further valuable insights into system designs. Based on the analysis, power allocation strategies to minimise the asymptotic outage probability and SER as well as to maximise the ergodic channel capacity under the total power constraint are developed. The numerical results show that the proposed power allocation approaches outperform equal power allocation given the same total power budget and other system parameters. © The Institution of Engineering and Technology.

  • 139.
    Podapati, Sasidhar
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Fitness Function for a Subscriber2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Mobile communication has become a vital part of modern communication. The cost of network infrastructure has become a deciding factor with the rise in mobile phone usage. Subscriber mobility patterns have a major effect on the load of radio cells in the network. Data analysis of subscriber mobility data is therefore of utmost priority.

    The paper aims to classify the entire dataset provided by Telenor into two main groups, i.e., “Infrastructure Stressing” and “Infrastructure Friendly”, with respect to their impact on the mobile network. The research also aims to predict the behavior of a new subscriber based on their MOSAIC group.

    A heuristic method is formulated to characterize the subscribers into three different segments based on their mobility. Tetris Optimization is used to reveal the “Infrastructure Stressing” subscribers in the mobile network. All the experiments have been conducted on the subscriber trajectory data provided by the telecom operator.

    The results from the experimentation reveal that 5 percent of the subscribers in the entire dataset are “Infrastructure Stressing”. A classification model is developed and evaluated to label a new subscriber as friendly or stressing using the WEKA machine learning tool. Naïve Bayes, k-nearest neighbor and J48 decision tree are the classification algorithms used to train the model and to find the relations between features in the labeled subscriber dataset.

  • 140.
    Popescu, Adrian
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems. Blekinge Institute of Technology.
    CONVINcE: Greening of Video Distribution Networks2015In: CONVINcE: Greening of Video Distribution Networks / [ed] Dr. Henry Tan - University of Aberdeen, United Kingdom, Karlskrona, 2015Conference paper (Other academic)
    Abstract [en]

    CONVINcE is a 2.5-year Celtic-Plus project, started in September 2014, that addresses the challenges of reducing the power consumption in IP-based video distribution networks. An end-to-end approach is adopted in the project, from the Head End, where contents are encoded and streamed, to the terminals, where they are consumed, also embracing access and core networks, Content Distribution Networks and Video Distribution Networks. Eighteen industrial and academic partners from five European countries participate in the project. The project leader is Thomson Video Networks in France and the scientific project leader is Blekinge Institute of Technology in Sweden.

  • 141.
    Popescu, Adrian
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Dealing with QoE and Power Consumption in Video Distribution Networks2016In: 2016 INTERNATIONAL CONFERENCE ON COMMUNICATIONS (COMM 2016), 2016, p. 3-8Conference paper (Refereed)
    Abstract [en]

    The paper addresses the problem of reducing the power consumption in Video Distribution Networks (VDNs) under the condition of best performance provision in terms of Quality of Experience (QoE) measured at the end user. Related to this, it has been observed that, given an end-to-end video distribution network, it is the last networking segment, ending at the terminal, that has the dominant role in the provision of end-user performance. On the other hand, the rest of the video distribution chain can be optimized so as to reduce the power consumption while still providing specific Quality of Service (QoS) parameters. The paper first provides an overview of VDNs, followed by a short presentation of the CONVINcE project. The second part is focused on the problems of performance provision in VDNs in terms of the best possible Quality of Experience and minimum end-to-end power consumption.

  • 142.
    Popescu, Adrian
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Greening of IP-Based Video Distribution Networks: Developments and Challenges2014Conference paper (Refereed)
    Abstract [en]

    The creation and distribution of video content is a multistage process that comprises the acquisition of the video source, content production and packaging, and distribution to customers. The major components are the access networks, metro/edge networks, core networks, data centers and storage networks. Today, the access networks, both wireless and wired, dominate the power consumption of the chain. However, it is expected that, with increasing access speeds, core network routing will come to dominate the power consumption of the chain. Furthermore, it is also expected that the power consumption of Data Centers (DCs) and Content Distribution Networks (CDNs) will be dominated by the power consumption of data storage for content that is infrequently downloaded, as well as by the transport of data for content that is frequently downloaded. The paper provides an overview of the problems related to the greening of IP-based video distribution, with particular focus on recent developments and the associated challenges. These are research topics planned to be addressed by the last-call Celtic-Plus project proposal CONVINcE (Consumption OptimizatioN in Video Networks). This research project has received the EUREKA Celtic-Plus label for funding approval in five European countries: France, Sweden, Finland, Romania and Turkey. The proposal was assessed as high-quality research on a topic highly relevant to our future from both an ecological and an economic point of view.

  • 143.
    Popescu, Alexandru
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Cognitive Radio Networks: Elements and Architectures2014Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    As mobility and computing become ever more pervasive in society and business, the non-optimal use of radio resources has created many new challenges for telecommunication operators. Usage patterns of modern wireless handheld devices, such as smartphones and tablets, have indicated that the signaling traffic generated is many times larger than that of a traditional laptop. Furthermore, in spite of approaching theoretical limits, e.g., through the spectral efficiency improvements brought by 4G, this is still not sufficient for many practical applications demanded by end users. Essentially, users located at the edge of a cell cannot achieve the high data throughputs promised by the 4G specifications. Worse yet, the Quality of Service bottlenecks in 4G networks are expected to become a major issue over the next years given the rapid growth of mobile devices. The main problems stem from rigid mobile system architectures with limited possibilities to reconfigure terminals and base stations depending on spectrum availability. Consequently, new solutions must be developed that coexist with legacy infrastructures and, more importantly, improve upon them to enable flexibility in the modes of operation. To control the intelligence required for such modes of operation, cognitive radio technology is a key concept suggested to be part of the so-called beyond-4th-generation mobile networks. The basic idea is to allow unlicensed users access to licensed spectrum, under the condition that the interference perceived by the licensed users is minimal. This can be achieved with the help of devices capable of accurately sensing the spectrum occupancy, learning about temporarily unused frequency bands, and able to reconfigure their transmission parameters in such a way that the spectral opportunities can be effectively exploited.
Accordingly, this indicates the need for a more flexible and dynamic allocation of the spectrum resources, which requires a new approach to cognitive radio network management. Subsequently, a novel architecture designed at the application layer is suggested to manage communication in cognitive radio networks. The goal is to improve the performance in a cognitive radio network by sensing, learning, optimization and adaptation.

  • 144.
    Popescu, Alexandru
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Yao, Yong
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Popescu, Adrian
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    A Management Architecture for Multimedia Communication in Cognitive Radio Networks2015In: Multimedia over Cognitive Radio Networks: Algorithms, Protocols, and Experiments / [ed] Hu, F; Kumar, S, CRC Press, 2015, p. 3-31Chapter in book (Other academic)
  • 145.
    Popescu, Alexandru
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Yao, Yong
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Fiedler, Markus
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Popescu, Adrian
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Network Architecture to Support Multimedia over CRN2014In: Multimedia over Cognitive Radio Networks: Algorithms, Protocols, and Experiments / [ed] Hu, Fei; Kumar, Sunil, CRC Press , 2014Chapter in book (Refereed)
  • 146.
    Pothuraju, Rohit
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Measuring and Modeling of Open vSwitch Performance: Implementation in KVM environment2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Network virtualization has become an important aspect of the Telecom industry. The need for efficient, scalable and reliable virtualized network functions is paramount to modern networking. Open vSwitch is a virtual switch that attempts to extend the usage of virtual switches to industry-grade performance levels on heterogeneous platforms. The aim of the thesis is to give an insight into the working of Open vSwitch; to evaluate the performance of Open vSwitch in virtualization scenarios such as KVM and Docker (from a second companion thesis [1]); to investigate different scheduling techniques offered by the Open vSwitch software and supported by the Linux kernel, namely FIFO, SFQ, CODEL, FQCODEL, HTB and HFSC; and to compare the performance of Open vSwitch across these scenarios and scheduling configurations in order to determine the best scenario for optimum performance.

    The methodology of the thesis involved a physical model of the system used for real-time experimentation as well as quantitative analysis. Quantitative analysis of the obtained results paved the way for unbiased conclusions. Experimental analysis was required to measure metrics such as throughput, latency and jitter in order to grade the performance of Open vSwitch in each virtualization scenario.

    The results of this thesis must be considered in context with the second companion thesis [1]; both theses aim at measuring and modeling the performance of Open vSwitch in NFV. The results of this thesis outline the performance of Open vSwitch and Linux bridge in the KVM virtualization scenario. Various scheduling techniques were measured for network performance metrics, and it was observed that Docker performed better in terms of throughput, latency and jitter. In the KVM scenario, the throughput test showed that all algorithms perform similarly, for both Open vSwitch and Linux bridges. In the round-trip latency tests, FIFO had the least round-trip latency while CODEL and FQCODEL had the highest; HTB and HFSC performed similarly. In the jitter tests, HTB and HFSC had the highest average jitter in the UDP stream test, while CODEL and FQCODEL had the least jitter for both Open vSwitch and Linux bridges.

    The conclusion of the thesis is that the virtualization layer on which Open vSwitch operates is one of the main factors determining switching performance. Docker performs better than KVM for both bridges. In the KVM scenario, irrespective of the scheduling algorithm considered, Open vSwitch performed better than Linux bridge. HTB had the highest throughput and FIFO the least round-trip latency. CODEL and FQCODEL are efficient scheduling algorithms with low jitter measurements.
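    As background (not code from the thesis), HTB builds on the token-bucket idea: a class may send as long as it has accumulated enough tokens at its configured rate. A minimal single-class token-bucket shaper can be sketched as follows; the rate and burst values are hypothetical.

```python
def token_bucket(packets, rate_bps, burst_bits):
    """Minimal single-class token-bucket shaper (the building block of
    HTB-style schedulers). `packets` is a list of (arrival_time_s,
    size_bits) in arrival order; returns each packet's departure time.
    Tokens accrue at rate_bps up to burst_bits; a packet departs once
    enough tokens are available, and packets are served FIFO."""
    tokens = burst_bits          # start with a full bucket
    clock = 0.0                  # time of the last token update
    next_free = 0.0              # earliest start for the next packet
    departures = []
    for arrival, size in packets:
        start = max(arrival, next_free)
        tokens = min(burst_bits, tokens + (start - clock) * rate_bps)
        if tokens < size:        # wait until enough tokens accumulate
            start += (size - tokens) / rate_bps
            tokens = size
        tokens -= size
        clock = next_free = start
        departures.append(start)
    return departures

# Three back-to-back 1000-bit packets through a 1000 bit/s bucket with a
# 1000-bit burst: the first passes on the initial burst, the rest are
# paced out one second apart.
deps = token_bucket([(0.0, 1000.0)] * 3, rate_bps=1000.0, burst_bits=1000.0)
```

    The burst parameter is what lets short packet trains pass unshaped, which is one reason shapers of this family can show higher jitter than CODEL-style queue-management schemes in tests like those above.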

  • 147.
    Qazi, Yasir Javed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Malik, Jawad Ahmed
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Muhammad, Safwan
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Evaluation of Error Correcting Techniques for OFDM Systems2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Orthogonal frequency-division multiplexing (OFDM) systems provide efficient spectral usage by allowing overlapping in the frequency domain. Additionally, they are highly immune to multipath delay spread. In these systems, modulation and demodulation can be done using Inverse Fast Fourier Transform (IFFT) and Fast Fourier Transform (FFT) operations, which are computationally efficient. OFDM allows suppression of inter-symbol interference (ISI), provides flexible bandwidth allocation and may increase capacity in terms of the number of users. In this work, we have investigated the performance of different error correcting techniques for OFDM systems. These techniques are based on convolutional codes, linear block codes and Reed-Solomon codes. Simulations are performed to evaluate the considered techniques under different channel conditions. Comparing the three techniques, the results show that Reed-Solomon codes perform best for all error rates, due to their consistent performance at both low and high code rates, as verified by our results.
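    As an aside, the linear-block-code family evaluated above can be illustrated with its smallest classic member, the (7,4) Hamming code, which corrects any single bit error per codeword. This is generic background, not the simulation code of the thesis.

```python
# (7,4) Hamming code in systematic form: G = [I4 | P], H = [P^T | I3].
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    """c = d * G (mod 2): 4 data bits -> 7-bit codeword."""
    return [sum(d * G[i][j] for i, d in enumerate(data)) % 2 for j in range(7)]

def decode(word):
    """Syndrome s = H * r^T (mod 2); a nonzero syndrome equals the
    column of H at the flipped position, which is then corrected."""
    s = [sum(word[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]
    word = word[:]
    if any(s):
        for j in range(7):
            if [H[i][j] for i in range(3)] == s:
                word[j] ^= 1
                break
    return word[:4]    # systematic code: the data bits come first
```

    Reed-Solomon codes generalize this idea from single bits to symbols over a Galois field, which is what makes them robust against the bursty errors a fading OFDM channel produces.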

  • 148.
    Rachapudi, Navya
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Impact of User Behavior on Resource Scaling in the XIFI Node2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Resource scaling improves the capability of a datacenter, or a group of datacenters collaborating to provide resources at low cost, to meet the demands and objectives of application services. However, it is essential to determine the requirements of the users, especially in large projects like XIFI. To expand the resource dimensions to a proportionate degree, it is important to estimate the number of users, their arrival rate, and the types of applications most often requested for resource allocation.

    In this study we frame a structure that provides deep insight into the XIFI infrastructure. Furthermore, we model the behavior of users that approach the node for resource allocation to run their applications. We aim to provide an understanding of how user behavior influences resource scaling in a XIFI node. The main objective of this thesis is to investigate the different types of applications chosen by users who request resource allocations, and the impact of their choice on resource availability.

    In the systematic review, a number of XIFI deliverables are reviewed and analyzed to understand the specifications of the XIFI architecture. A model that meets the basic requirements of a XIFI node is developed, and the design is implemented in a simulator.

    We simulated the designed structure for 30 iterations and analyzed 10,000 user requests in two cases, where the total RAM of the node is increased in the second case compared to the first. We analyze why a number of requests, and the different types of virtual machines requested for different types of applications, fail due to unavailable resources.

    From the obtained results, we conclude that increasing the total RAM in a XIFI node reduces the average number of failed requests. The failure percentage of the virtual machines to be instantiated, as requested by users, also decreases when the RAM is scaled to twice its present value. We further conclude that the user behavior that imposes load on the system determines the degree of resource scalability in the XIFI node.
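    The admission behavior described above can be sketched as a toy simulation: requests for VMs of different sizes arrive at a node with a fixed RAM budget, and a request fails when free RAM is exhausted. The flavor sizes and request mix below are illustrative assumptions, not the thesis's actual workload parameters.

```python
import random

def simulate_node(total_ram_gb, n_requests=10_000, seed=1):
    """Toy admission model: each request asks for a VM flavor with a fixed
    RAM size and fails once free RAM runs out. Flavor sizes and the uniform
    request mix are assumptions for illustration only."""
    rng = random.Random(seed)
    flavor_ram = {"small": 2, "medium": 4, "large": 8}  # GiB per VM (assumed)
    free = total_ram_gb
    failed = 0
    for _ in range(n_requests):
        need = flavor_ram[rng.choice(list(flavor_ram))]
        if need <= free:
            free -= need   # VM instantiated
        else:
            failed += 1    # request fails: RAM unavailable
    return failed

failed_base = simulate_node(total_ram_gb=512)
failed_doubled = simulate_node(total_ram_gb=1024)
print(failed_base, failed_doubled)
```

    Run against the same request stream, the node with doubled RAM can admit roughly twice as many VMs before exhausting memory, mirroring the thesis's observation that scaling RAM reduces the number of failed requests.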

  • 149.
    Rajana, Poojitha
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Performance Evaluation of Gluster and Compuverde Storage Systems: Comparative analysis2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Big Data and Cloud Computing nowadays require large amounts of storage that are accessible by many servers. To overcome performance bottlenecks and single points of failure, distributed storage systems came into force. Our main aim in this thesis is therefore to evaluate the performance of these storage systems. A file coding technique, erasure coding, is used to provide data protection for the storage systems.

    Objectives. In this study, we evaluate the performance of distributed storage systems and investigate how various patterns of I/O operations, that is, reads and writes, as well as different measurement approaches, affect storage performance.

    Methods. The method is to use a synthetic workload generator, streaming and transcoding video data, as well as a benchmark tool that generates workloads, SPECsfs2014, to evaluate the performance of the distributed storage systems GlusterFS and Compuverde, both of which are file-based storage.

    Results. In terms of throughput, Gluster and Compuverde perform similarly for both the NFS and SMB servers.

    The average latency results for both NFS and SMB shares indicate that Compuverde has lower latency.

    Comparing the two systems: with the NFS server, Compuverde delivers 100% of the requested IOPS while Gluster delivers relatively close to the requested OP rate; with the SMB server, Gluster delivers 100% of the requested IOPS while Compuverde delivers more than the requested OP rate.
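    The two comparisons in the results above, achieved versus requested operation rate and average latency, can be sketched as follows. The sample numbers are made up for illustration; a real run would parse them from the SPECsfs2014 or benchmark output.

```python
def achieved_pct(requested_ops, achieved_ops):
    """Percentage of the requested OP rate the storage system delivered."""
    return 100.0 * achieved_ops / requested_ops

def mean_latency_ms(samples_ms):
    """Average latency over per-operation samples, in milliseconds."""
    return sum(samples_ms) / len(samples_ms)

# Hypothetical results for two systems at a requested rate of 500 ops/s.
requested = 500
print(f"system A: {achieved_pct(requested, 500):.1f}% of requested rate")  # 100.0%
print(f"system B: {achieved_pct(requested, 480):.1f}% of requested rate")  # 96.0%
print(f"mean latency: {mean_latency_ms([1.8, 2.1, 2.4, 1.9]):.2f} ms")     # 2.05 ms
```

    A system "delivering 100% IOPS" in the abstract's sense corresponds to `achieved_pct` reaching 100 at the requested load point.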

  • 150.
    Ravu, Venkata Sathya Sita J S
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Compaction Strategies in Apache Cassandra: Analysis of Default Cassandra stress model2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. The present trend in a large variety of applications, ranging from the web and social networking to telecommunications, is to gather and process very large and fast-growing amounts of information, leading to a common set of problems known collectively as "Big Data". The ability to run large-scale data analytics over large numbers of data sets has, in the last decade, proved to be a competitive advantage in a wide range of industries such as retail, telecom and defense. In response to this trend, the research community and the IT industry have proposed a number of platforms to facilitate large-scale data analytics. Such platforms include a new class of databases, often referred to as NoSQL data stores. Apache Cassandra is a type of NoSQL data store. This research is focused on analyzing the performance of different compaction strategies in different use cases for the default Cassandra stress model.

    Objectives. The performance of the compaction strategies is observed in various scenarios on the basis of three use cases, write heavy (90/10), read heavy (10/90) and balanced (50/50), for the default Cassandra stress model, so as to finally provide the events and specifications that suggest when to switch from one compaction strategy to another.

    Methods. A Cassandra single-node network is deployed on a web server and its read and write performance with different compaction strategies is studied under read-heavy, write-heavy and balanced workloads. Its performance metrics are collected and analyzed.

    Results. Performance metrics of the different compaction strategies are evaluated and analyzed.

    Conclusions. With a detailed analysis and logical comparison, we conclude that the leveled compaction strategy performs better for a read-heavy (10/90) workload under the default Cassandra stress model, as compared to the size tiered and date tiered compaction strategies. For the balanced (50/50) workload, the date tiered compaction strategy performs better than the size tiered compaction strategy.
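    The workload-to-strategy conclusions above can be sketched as a small selection helper. The percentage thresholds are assumptions added for illustration (the thesis compares three fixed mixes, not a continuum), and the write-heavy fallback to Cassandra's default size tiered strategy is likewise an assumption.

```python
def suggest_compaction(read_pct, write_pct):
    """Illustrative mapping of a read/write mix to a compaction strategy,
    loosely following the thesis's conclusions for the default
    cassandra-stress model. Thresholds are assumptions."""
    if read_pct >= 90:                    # read heavy (10/90 write/read)
        return "LeveledCompactionStrategy"
    if abs(read_pct - write_pct) <= 10:   # balanced (50/50)
        return "DateTieredCompactionStrategy"
    return "SizeTieredCompactionStrategy" # Cassandra's default otherwise

print(suggest_compaction(90, 10))  # read heavy
print(suggest_compaction(50, 50))  # balanced
```

    In a real cluster the strategy is set per table (e.g. via `ALTER TABLE ... WITH compaction = {...}`), so such a helper would only inform that configuration choice, not apply it.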
