Shirinbab, Sogand
Publications (10 of 15)
Shirinbab, S., Lundberg, L. & Casalicchio, E. (2018). Performance Comparison between Horizontal Scaling of Hypervisor and Container Based Virtualization using Cassandra NoSQL Database. In: Proceedings of the 3rd International Conference on Virtualization Application and Technology. Paper presented at the 3rd International Conference on Virtualization Application and Technology (ICVAT 2018), Nov. 16-18, Sanya, China.
Performance Comparison between Horizontal Scaling of Hypervisor and Container Based Virtualization using Cassandra NoSQL Database
2018 (English) In: Proceedings of the 3rd International Conference on Virtualization Application and Technology, 2018, p. 6. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud computing promises customers the on-demand ability to scale in the face of workload variations. There are two main ways to accomplish scaling: vertical scaling and horizontal scaling. Vertical scaling refers to buying more power (CPU, RAM), i.e., a more expensive and robust server, which is less challenging to implement but quickly becomes expensive. Horizontal scaling refers to adding more servers with fewer processors and less RAM, which is usually cheaper overall and can scale very well. The majority of cloud providers prefer the horizontal scaling approach, and for them it is very important to know the advantages and disadvantages of both technologies from the perspective of application performance at scale. In this paper, we compare performance differences caused by scaling of the different virtualization technologies in terms of CPU utilization, latency, and number of transactions per second. The workload is Apache Cassandra, a leading NoSQL distributed database for Big Data platforms. Our results show that running multiple instances of the Cassandra database concurrently affected the performance of read and write operations differently: for both VMware and Docker, the maximum number of read operations was reduced when we ran several instances concurrently, whereas the maximum number of write operations increased.
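The comparison metrics named in the abstract (latency and transactions per second) reduce to simple computations over raw per-operation timings. A minimal sketch with made-up numbers, not the paper's measurements:

```python
# Compute throughput (transactions/s) and tail latency from raw
# per-operation latencies -- hypothetical numbers, not measured data.

def throughput_tps(num_ops, wall_time_s):
    """Transactions per second over a measurement window."""
    return num_ops / wall_time_s

def percentile(latencies_ms, p):
    """p-th percentile latency (nearest-rank method)."""
    ordered = sorted(latencies_ms)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

latencies = [2.1, 2.3, 2.2, 9.8, 2.4, 2.2, 2.5, 2.3, 2.6, 2.2]  # ms, made up
print(throughput_tps(len(latencies), 0.5))  # ops completed in a 0.5 s window
print(percentile(latencies, 99))            # 99th-percentile latency
```

The nearest-rank percentile used here is one common convention; benchmark tools such as cassandra-stress report comparable latency percentiles alongside throughput.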

Publisher
p. 6
Keywords
Cassandra; Cloud computing; Docker container; Horizontal scaling; NoSQL database; Performance comparison; Virtualization; VMware virtual machine
National Category
Computer Systems
Identifiers
urn:nbn:se:bth-17212 (URN)
Conference
3rd International Conference on Virtualization Application and Technology (ICVAT 2018), Nov. 16-18, Sanya, China
Available from: 2018-11-02 Created: 2018-11-02 Last updated: 2018-11-06. Bibliographically approved
Shirinbab, S., Lundberg, L. & Casalicchio, E. (2017). Performance Evaluation of Container and Virtual Machine Running Cassandra Workload. In: Essaaidi, M. & Zbakh, M. (Eds.), Proceedings of the 2017 3rd International Conference of Cloud Computing Technologies and Applications (CloudTech). Paper presented at the 3rd International Conference of Cloud Computing Technologies and Applications (CloudTech), Rabat (pp. 24-31).
Performance Evaluation of Container and Virtual Machine Running Cassandra Workload
2017 (English) In: Proceedings of the 2017 3rd International Conference of Cloud Computing Technologies and Applications (CloudTech) / [ed] Essaaidi, M. & Zbakh, M., 2017, p. 24-31. Conference paper, Published paper (Refereed)
Abstract [en]

Today, scalable and highly available NoSQL distributed databases are widely used as Big Data platforms. Such distributed databases typically run on a virtualized infrastructure that could be implemented using hypervisor-based virtualization or container-based virtualization. Hypervisor-based virtualization is a mature technology but imposes overhead on CPU, memory, networking, and disk. Recently, by sharing the operating system resources and simplifying the deployment of applications, container-based virtualization has become more popular. Container-based virtualization is lightweight in resource consumption while also providing isolation. However, its disadvantages are security issues and I/O performance. As a result, today these two technologies compete to provide virtual instances for running big data platforms. Hence, a key issue becomes the assessment of the performance of those virtualization technologies while running distributed databases. This paper presents an extensive performance comparison between VMware and Docker containers, running Apache Cassandra as the workload. Apache Cassandra is a leading NoSQL distributed database for Big Data platforms. As a baseline for comparison we used Cassandra's performance when running on a physical infrastructure. Our study shows that Docker had lower overhead than VMware when running Cassandra; in fact, Cassandra's performance on the Dockerized infrastructure was as good as on the non-virtualized one.
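The overhead this kind of study quantifies is the relative throughput loss versus the bare-metal baseline. A small sketch of that computation, with illustrative placeholder numbers rather than the paper's results:

```python
# Relative overhead of a virtualized run versus the bare-metal baseline.
# All throughput figures below are illustrative placeholders, not the
# paper's measurements.

def overhead_pct(baseline_tps, virtualized_tps):
    """Throughput loss relative to the non-virtualized baseline, in %."""
    return (baseline_tps - virtualized_tps) / baseline_tps * 100

bare_metal = 10000.0  # transactions/s on physical hardware (made up)
docker     = 9800.0   # made up
vmware     = 9100.0   # made up
print(f"Docker overhead: {overhead_pct(bare_metal, docker):.1f}%")
print(f"VMware overhead: {overhead_pct(bare_metal, vmware):.1f}%")
```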

Keywords
Cassandra, Cloud computing, Containers, Docker, NoSQL databases, Virtual machine, VMware, Big Data, Performance evaluation
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-16000 (URN)000426451400004 ()978-1-5386-1115-9 (ISBN)
Conference
3rd International Conference of Cloud Computing Technologies and Applications (CloudTech), Rabat
Available from: 2018-03-23 Created: 2018-03-23 Last updated: 2018-11-06. Bibliographically approved
Casalicchio, E., Lundberg, L. & Shirinbab, S. (2016). An Energy-Aware Adaptation Model for Big Data Platforms. In: IEEE (Ed.), 2016 IEEE International Conference on Autonomic Computing (ICAC). Paper presented at the IEEE International Conference on Autonomic Computing (ICAC), Würzburg (pp. 349-350). IEEE
An Energy-Aware Adaptation Model for Big Data Platforms
2016 (English) In: 2016 IEEE International Conference on Autonomic Computing (ICAC) / [ed] IEEE, IEEE, 2016, p. 349-350. Conference paper, Published paper (Refereed)
Abstract [en]

Platforms for big data include mechanisms and tools to model, organize, store and access big data (e.g. Apache Cassandra, HBase, Amazon SimpleDB, Dynamo, Google BigTable). Resource management for those platforms is a complex task and must also account for multi-tenancy and infrastructure scalability. Human-assisted control of a Big Data platform is unrealistic, and there is a growing demand for autonomic solutions. In this paper we propose a QoS- and energy-aware adaptation model designed to cope with the real case of a Cassandra-as-a-Service provider.

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Big Data;fault tolerant computing;power aware computing;quality of service;resource allocation;Amazon SimpleDB;Apache Cassandra;Big Data platforms;Cassandra-as-a-Service provider;Dynamo;Google BigTable;Hbase;energy-aware adaptation model;human assisted control;infrastructure scalability;multitenancy;resource management;Adaptation models;Big data;Cloud computing;Optimization;Runtime;Scalability;Throughput;Apache Cassandra;Autonomic computing;Big Data;Cloud computing;Green computing
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-13669 (URN)10.1109/ICAC.2016.13 (DOI)000390681200054 ()978-1-5090-1654-9 (ISBN)
Conference
IEEE International Conference on Autonomic Computing (ICAC), Würzburg
Available from: 2016-12-26 Created: 2016-12-26 Last updated: 2018-01-13. Bibliographically approved
Shirinbab, S., Lundberg, L. & Håkansson, J. (2016). Comparing Automatic Load Balancing using VMware DRS with a Human Expert. In: 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW). Paper presented at the IEEE International Conference on Cloud Engineering (IC2E), Apr. 4-8, 2016, TU Berlin, Berlin, Germany (pp. 239-246). IEEE
Comparing Automatic Load Balancing using VMware DRS with a Human Expert
2016 (English) In: 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), IEEE, 2016, p. 239-246. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, there has been a rapid growth of interest in dynamic management of resources in virtualized systems. Virtualization provides great flexibility in terms of resource sharing, but at the same time it brings new challenges for load balancing using automatic migration of virtual machines. In this paper, we have evaluated VMware's Distributed Resource Scheduler (DRS) in a number of realistic scenarios using multiple instances of a large industrial telecommunication application. We have measured performance on the hosts before and after migration in terms of CPU utilization, and compared DRS migrations with migrations chosen by a human expert. According to our results, DRS with the most aggressive threshold gave the best results: it balanced the load in 40% of the cases, while in the other cases it could not balance the load properly. In some cases, DRS also performed completely unnecessary migrations back and forth.
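The idea of a migration threshold can be illustrated with a toy balancer that only recommends a move when the utilization gap between hosts exceeds a configurable threshold. This is a deliberately simplified model for illustration, not VMware's actual DRS algorithm:

```python
# Toy threshold-based load balancer in the spirit of a "migration
# threshold": recommend moving a VM from the most loaded host to the
# least loaded one only when the imbalance exceeds the threshold.
# A simplified illustration, not VMware's DRS algorithm.

def recommend_migration(host_cpu_util, threshold_pct):
    """Return (src, dst) host indices, or None if the cluster is balanced."""
    src = max(range(len(host_cpu_util)), key=lambda i: host_cpu_util[i])
    dst = min(range(len(host_cpu_util)), key=lambda i: host_cpu_util[i])
    if host_cpu_util[src] - host_cpu_util[dst] <= threshold_pct:
        return None  # imbalance within tolerance: no migration
    return (src, dst)

print(recommend_migration([90, 40, 55], threshold_pct=20))  # (0, 1)
print(recommend_migration([60, 55, 58], threshold_pct=20))  # None
```

A lower threshold corresponds to a more aggressive setting: the balancer reacts to smaller imbalances, at the cost of more (possibly unnecessary) migrations.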

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Cloud Computing, Distributed Resource Scheduler (DRS), Virtual Machine Migration, Virtualization, VMware
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-13923 (URN)10.1109/IC2EW.2016.14 (DOI)000392269400047 ()978-1-5090-3684-4 (ISBN)
Conference
IEEE International Conference on Cloud Engineering (IC2E), Apr. 4-8, 2016, TU Berlin, Berlin, Germany
Available from: 2017-02-22 Created: 2017-02-22 Last updated: 2018-11-06. Bibliographically approved
Casalicchio, E., Lundberg, L. & Shirinbab, S. (2016). Energy-Aware Adaptation in Managed Cassandra Datacenters. In: Gupta, I. & Diao, Y. (Eds.), Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC. Paper presented at the International Conference on Cloud and Autonomic Computing, ICCAC 2016; Augsburg; Germany (pp. 60-71). IEEE
Energy-Aware Adaptation in Managed Cassandra Datacenters
2016 (English) In: Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC / [ed] Gupta, I. & Diao, Y., IEEE, 2016, p. 60-71. Conference paper, Published paper (Refereed)
Abstract [en]

Today, Apache Cassandra, a highly scalable and available NoSQL datastore, is widely used by enterprises of all sizes and for application areas that range from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine-tuning and operation of Cassandra datacenters. As for all complex services, human-assisted management of a multi-tenant Cassandra datacenter is unrealistic; rather, there is a growing demand for autonomic management solutions. In this paper, we present an optimal energy-aware adaptation model for managed Cassandra datacenters that modifies the system configuration by orchestrating three different actions: horizontal scaling, vertical scaling and energy-aware placement. The model is built from a real case based on application data from Ericsson AB. We compare the performance of the optimal adaptation with two heuristics that avoid system perturbations due to re-configuration actions triggered by the subscription of new tenants and/or changes in the SLA. One heuristic is local optimisation; the second is a best-fit-decreasing algorithm, selected as a reference point because it is representative of a wide range of research and practical solutions. The main finding is that each heuristic's performance depends on the scenario and workload, and neither dominates in all cases. Moreover, in high-load scenarios, the suboptimal system configuration obtained with a heuristic adaptation policy introduces a penalty in electric energy consumption in the range [+25%, +50%] compared with the energy consumed by an optimal system configuration.
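The reported [+25%, +50%] figures express the extra energy of a heuristic configuration relative to the optimal one; the underlying arithmetic is simply:

```python
# Relative energy penalty of a heuristic configuration versus the
# optimal one. The sample values are illustrative, not from the paper.

def energy_penalty_pct(optimal_kwh, heuristic_kwh):
    """Extra energy of the heuristic configuration, relative to optimal, in %."""
    return (heuristic_kwh - optimal_kwh) / optimal_kwh * 100

print(energy_penalty_pct(100.0, 130.0))  # 30.0 -> within the reported range
```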

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Adaptation models;Cloud computing;Mathematical model;Optimization;Scalability;Throughput;Tuning;Autonomic computing;apache cassandra;big data;cloud computing;green computing;optimisation;self-adaptation
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-13670 (URN)10.1109/ICCAC.2016.12 (DOI)000390252000007 ()978-1-5090-3536-6 (ISBN)
Conference
International Conference on Cloud and Autonomic Computing, ICCAC 2016; Augsburg; Germany
Available from: 2016-12-26 Created: 2016-12-26 Last updated: 2018-01-13. Bibliographically approved
Casalicchio, E., Lundberg, L. & Shirinbab, S. (2016). Optimal adaptation for Apache Cassandra. In: IEEE (Ed.), SoSeMC workshop at the 13th IEEE International Conference on Autonomic Computing. Paper presented at the SoSeMC workshop at the 13th IEEE International Conference on Autonomic Computing, Würzburg. IEEE Computer Society
Optimal adaptation for Apache Cassandra
2016 (English) In: SoSeMC workshop at the 13th IEEE International Conference on Autonomic Computing / [ed] IEEE, IEEE Computer Society, 2016. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE Computer Society, 2016
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-13004 (URN)
Conference
SoSeMC workshop at the 13th IEEE International Conference on Autonomic Computing, Würzburg
Available from: 2016-09-07 Created: 2016-09-07 Last updated: 2018-01-10. Bibliographically approved
Shirinbab, S. & Lundberg, L. (2016). Performance implications of resource over-allocation during the live migration. In: 8th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2016). Paper presented at the 8th IEEE International Conference on Cloud Computing Technology and Science, CloudCom, Luxembourg (pp. 552-557). IEEE Computer Society
Performance implications of resource over-allocation during the live migration
2016 (English) In: 8th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2016), IEEE Computer Society, 2016, p. 552-557. Conference paper, Published paper (Refereed)
Abstract [en]

As the number of cloud users increases, it becomes essential for cloud service providers to allocate the right amount of resources to virtual machines, especially during live migration. In order to increase resource utilization and reduce waste, providers have started to consider over-allocating resources. However, the benefits of over-allocation are not without inherent risks. In this paper, we conducted an experiment using a large telecommunication application that runs inside virtual machines, varying the number of vCPUs allocated to these virtual machines in order to find the choice that reduces the risk of under-allocating resources after the migration while keeping performance high during the live migration. During our measurements we used VMware's vMotion to migrate virtual machines while they were running. The results of this study will help providers of virtualized environments decide how many resources should be allocated for good performance during live migration, as well as how many resources are required for a given load.

Place, publisher, year, edition, pages
IEEE Computer Society, 2016
Series
International Conference on Cloud Computing Technology and Science, ISSN 2330-2194
Keywords
live migration, over-allocation, performance, virtualization, vmware, Cloud computing, Network security, Virtual reality, Cloud service providers, Live migrations, Resource utilizations, Telecommunication applications, Virtualized environment, Virtual machine
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-13962 (URN)10.1109/CloudCom.2016.0096 (DOI)000398536300080 ()2-s2.0-85012981838 (Scopus ID)978-1-5090-1445-3 (ISBN)
Conference
8th IEEE International Conference on Cloud Computing Technology and Science, CloudCom, Luxembourg
Available from: 2017-03-02 Created: 2017-03-02 Last updated: 2018-11-06. Bibliographically approved
Shirinbab, S. & Lundberg, L. (2015). Performance Implications of Over-allocation of Virtual CPUs. In: 2015 International Symposium on Networks, Computers and Communications (ISNCC 2015). Paper presented at the 2015 International Symposium on Networks, Computers and Communications (ISNCC 2015), May 13-15, 2015, Yasmine Hammamet, Tunisia. IEEE
Performance Implications of Over-allocation of Virtual CPUs
2015 (English) In: 2015 International Symposium on Networks, Computers and Communications (ISNCC 2015), IEEE, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

A major advantage of cloud environments is that one can balance the load by migrating virtual machines (VMs) from one server to another. High performance and high resource utilization are also important in a cloud. We have observed that over-allocation of virtual CPUs to VMs (i.e., allocating more vCPUs to the VMs than there are CPU cores on the server) can reduce performance when many VMs are running on one host. However, if we do not use any over-allocation of virtual CPUs, we may suffer from poor resource utilization after VM migration. Thus, it is important to identify and quantify performance bottlenecks when running in a virtualized environment. The results of this study will help providers of virtualized environments decide how many virtual CPUs should be allocated to each VM.
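Over-allocation as defined in the abstract (more vCPUs handed out than the host has physical cores) can be expressed as a simple ratio. A minimal sketch with hypothetical VM sizes and core counts:

```python
# Over-allocation check: total vCPUs granted to the VMs on a host
# divided by the host's physical core count. Values are hypothetical.

def overallocation_ratio(vcpus_per_vm, physical_cores):
    """Total vCPUs divided by physical cores; > 1.0 means over-allocated."""
    return sum(vcpus_per_vm) / physical_cores

vms = [4, 4, 4, 4]  # vCPUs per VM (hypothetical)
cores = 12          # physical cores on the host (hypothetical)
ratio = overallocation_ratio(vms, cores)
print(ratio)        # 16 vCPUs on 12 cores
print(ratio > 1.0)  # over-allocated
```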

Place, publisher, year, edition, pages
IEEE, 2015
Keywords
virtualization, over-allocation, VMware, virtual CPUs
National Category
Computer Systems
Identifiers
urn:nbn:se:bth-14572 (URN)000380545000019 ()978-1-4673-7467-5 (ISBN)
Conference
2015 International Symposium on Networks, Computers and Communications (ISNCC 2015), May 13-15, 2015, Yasmine Hammamet, Tunisia
Available from: 2017-06-19 Created: 2017-06-19 Last updated: 2018-11-06. Bibliographically approved
Shirinbab, S. (2014). Performance Aspects in Virtualized Software Systems. (Licentiate dissertation). Karlskrona: Blekinge Institute of Technology
Performance Aspects in Virtualized Software Systems
2014 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Virtualization has significantly improved hardware utilization by allowing IT service providers to create and run several independent virtual machine instances on the same physical hardware. One of the features of virtualization is live migration of virtual machines while they are active, which requires transfer of memory and storage from the source to the destination during the migration process. This problem is gaining importance, since one would like to provide dynamic load balancing in cloud systems where a large number of virtual machines share a number of physical servers. In order to reduce the need for copying files from one physical server to another during a live migration of a virtual machine, one would like all physical servers to share the same storage. Providing physically shared storage to a relatively large number of physical servers can easily become a performance bottleneck and a single point of failure. This has been a difficult challenge for storage solution providers, and the state-of-the-art solution is to build a so-called distributed storage system that provides a virtual shared disk to the outside world; internally, a distributed storage system consists of a number of interconnected storage servers, thus avoiding the bottleneck and single-point-of-failure problems. In this study, we have measured the performance of different distributed storage solutions and compared their performance during read/write/delete operations as well as their recovery time when a storage server goes down. In addition, we have studied the performance behavior of various hypervisors and compared them with a base system in terms of application performance, resource consumption and latency. We have also measured the performance implications of changing the number of virtual CPUs, as well as the performance of different hypervisors during live migration in terms of downtime and total migration time.
Real-time applications are also increasingly deployed in virtualized environments due to scalability and flexibility benefits. However, cloud computing research has not focused on solutions that provide real-time assurance for these applications in a way that also optimizes resource consumption in data centers. Here one of the critical issues is scheduling virtual machines that contain real-time applications in an efficient way without resulting in deadline misses for the applications inside the virtual machines. In this study, we have proposed an approach for scheduling real-time tasks with hard deadlines that are running inside virtual machines. In addition we have proposed an overhead model which considers the effects of overhead due to switching from one virtual machine to another.
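The overhead-aware scheduling idea in the second paragraph can be illustrated with a classic utilization-based schedulability test in which each task's execution time is inflated by a per-switch overhead. This is a simplified sketch of the general technique, not the authors' exact overhead model:

```python
# Utilization-based schedulability check where each task's execution
# time is inflated by a fixed VM-switch overhead. A simplified sketch
# of the general idea, not the thesis's exact model.

def schedulable_with_overhead(tasks, switch_overhead):
    """tasks: list of (C, T) = (execution time, period).
    EDF utilization bound for implicit deadlines:
    schedulable iff sum((C + overhead) / T) <= 1."""
    utilization = sum((c + switch_overhead) / t for c, t in tasks)
    return utilization <= 1.0

tasks = [(2.0, 10.0), (3.0, 15.0), (1.0, 20.0)]  # hypothetical task set
print(schedulable_with_overhead(tasks, switch_overhead=0.0))  # True
print(schedulable_with_overhead(tasks, switch_overhead=5.0))  # False
```

As the sketch shows, a large switch overhead can push an otherwise lightly loaded task set past the schedulability bound, which is why accounting for VM-switch cost matters.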

Place, publisher, year, edition, pages
Karlskrona: Blekinge Institute of Technology, 2014. p. 78
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 8
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-00599 (URN)oai:bth.se:forskinfoA5860FD9E97EA380C1257D710053191A (Local ID)978-91-7295-290-4 (ISBN)oai:bth.se:forskinfoA5860FD9E97EA380C1257D710053191A (Archive number)oai:bth.se:forskinfoA5860FD9E97EA380C1257D710053191A (OAI)
Available from: 2014-12-15 Created: 2014-10-14 Last updated: 2018-05-23. Bibliographically approved
Shirinbab, S., Lundberg, L. & Ilie, D. (2014). Performance Comparison of KVM, VMware and XenServer using a Large Telecommunication Application. Paper presented at Cloud Computing. Venice, Italy: IARIA XPS Press
Performance Comparison of KVM, VMware and XenServer using a Large Telecommunication Application
2014 (English) Conference paper, Published paper (Refereed)
Abstract [en]

One of the most important technologies in cloud computing is virtualization. This paper presents the results from a performance comparison of three well-known virtualization hypervisors: KVM, VMware and XenServer. In this study, we measure performance in terms of CPU utilization, disk utilization and response time of a large industrial real-time application. The application is running inside a virtual machine (VM) controlled by the KVM, VMware and XenServer hypervisors, respectively. Furthermore, we compare the three hypervisors based on downtime and total migration time during live migration. The results show that the Xen hypervisor results in higher CPU utilization and thus also lower maximum performance compared to VMware and KVM. However, VMware causes more write operations to disk than KVM and Xen, and Xen causes less downtime than KVM and VMware during live migration. This means that no single hypervisor has the best performance for all aspects considered here.

Place, publisher, year, edition, pages
Venice, Italy: IARIA XPS Press, 2014
Keywords
Cloud Computing, KVM, Live Migration, VMware vMotion, XenMotion
National Category
Computer Sciences
Identifiers
urn:nbn:se:bth-6482 (URN)oai:bth.se:forskinfoC6FA88A0BAE3E5B5C1257DAA005E74D0 (Local ID)978-1-61208-338-4 (ISBN)oai:bth.se:forskinfoC6FA88A0BAE3E5B5C1257DAA005E74D0 (Archive number)oai:bth.se:forskinfoC6FA88A0BAE3E5B5C1257DAA005E74D0 (OAI)
Conference
Cloud Computing
Available from: 2014-12-11 Created: 2014-12-10 Last updated: 2018-11-06. Bibliographically approved