1 - 15 of 15
  • 1.
    Casalicchio, Emiliano
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Shirinbab, Sogand
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Energy-Aware Adaptation in Managed Cassandra Datacenters (2016). In: Proceedings - 2016 International Conference on Cloud and Autonomic Computing, ICCAC / [ed] Gupta I., Diao Y., IEEE, 2016, pp. 60-71. Conference paper (Refereed)
    Abstract [en]

    Today, Apache Cassandra, a highly scalable and available NoSQL datastore, is widely used by enterprises of all sizes and for application areas that range from entertainment to big data analytics. Managed Cassandra service providers are emerging to hide the complexity of the installation, fine-tuning and operation of Cassandra datacenters. As for all complex services, human-assisted management of a multi-tenant Cassandra datacenter is unrealistic. Rather, there is a growing demand for autonomic management solutions. In this paper, we present an optimal energy-aware adaptation model for managed Cassandra datacenters that modifies the system configuration by orchestrating three different actions: horizontal scaling, vertical scaling and energy-aware placement. The model is built from a real case based on real application data from Ericsson AB. We compare the performance of the optimal adaptation with two heuristics that avoid system perturbations due to re-configuration actions triggered by the subscription of new tenants and/or changes in the SLA. One heuristic is local optimisation; the second is a best-fit-decreasing algorithm, selected as a reference point because it is representative of a wide range of research and practical solutions. The main finding is that the heuristics' performance depends on the scenario and workload, and neither dominates in all cases. Moreover, in high-load scenarios, the suboptimal system configuration obtained with a heuristic adaptation policy introduces a penalty in electric energy consumption in the range [+25%, +50%] compared with the energy consumed by an optimal system configuration.

  • 2.
    Casalicchio, Emiliano
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Shirinbab, Sogand
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Optimal adaptation for Apache Cassandra (2016). In: SoSeMC workshop at the 13th IEEE International Conference on Autonomic Computing / [ed] IEEE, IEEE Computer Society, 2016. Conference paper (Refereed)
  • 3.
    Casalicchio, Emiliano
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Shirinbab, Sogand
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    An Energy-Aware Adaptation Model for Big Data Platforms (2016). In: 2016 IEEE International Conference on Autonomic Computing (ICAC) / [ed] IEEE, IEEE, 2016, pp. 349-350. Conference paper (Refereed)
    Abstract [en]

    Platforms for big data include mechanisms and tools to model, organize, store and access big data (e.g. Apache Cassandra, HBase, Amazon SimpleDB, Dynamo, Google BigTable). Resource management for these platforms is a complex task and must also account for multi-tenancy and infrastructure scalability. Human-assisted control of a big data platform is unrealistic, and there is a growing demand for autonomic solutions. In this paper, we propose a QoS- and energy-aware adaptation model designed to cope with the real case of a Cassandra-as-a-Service provider.

  • 4.
    Lundberg, Lars
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Shirinbab, Sogand
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Real-time scheduling in cloud-based virtualized software systems (2013). Conference paper (Refereed)
    Abstract [en]

    The number of applications that use virtualized cloud-based systems is growing, and one would like to use such systems also for real-time applications with hard deadlines. There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. Traditional real-time scheduling is well understood, and most of the existing results calculate schedules based on periods, deadlines and worst-case execution times of the real-time tasks. In order to apply the existing theory also to cloud-based virtualized environments, we must obtain periods and worst-case execution times for the VMs containing real-time applications. In this paper, we describe a technique for calculating a period and a worst-case execution time for a VM containing a real-time application with hard deadlines. This new result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines.
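    The hypervisor-level step described in this abstract can be illustrated with a minimal, hypothetical sketch: once a period T and worst-case execution time C have been derived for each VM, a classical utilization test (here the Liu & Layland sufficient bound for rate-monotonic scheduling, one standard choice from existing real-time theory) can be applied to the set of VMs. The function name and the example numbers are illustrative, not taken from the paper.

```python
def rm_schedulable(vms):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    vms: list of (C, T) pairs -- worst-case execution time and period
    derived for each virtual machine (same time unit for both).
    Returns True if total utilization stays below the n*(2^(1/n) - 1)
    bound. The test is sufficient but not necessary: failing it does
    not prove the VM set is unschedulable.
    """
    n = len(vms)
    utilization = sum(c / t for c, t in vms)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three hypothetical VMs with derived (C, T) in milliseconds:
vms = [(10, 50), (15, 100), (20, 200)]
print(rm_schedulable(vms))  # utilization 0.45 vs bound ~0.78 -> True
```

    An exact schedulability check would instead use response-time analysis, but the utilization bound keeps the example short.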

  • 5.
    Shirinbab, Sogand
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Aspects in Virtualized Software Systems (2014). Licentiate thesis, comprising papers (Other academic)
    Abstract [en]

    Virtualization has significantly improved hardware utilization by allowing IT service providers to create and run several independent virtual machine instances on the same physical hardware. One of the features of virtualization is live migration of virtual machines while they are active, which requires transfer of memory and storage from the source to the destination during the migration process. This problem is gaining importance, since one would like to provide dynamic load balancing in cloud systems where a large number of virtual machines share a number of physical servers. In order to reduce the need for copying files from one physical server to another during a live migration of a virtual machine, one would like all physical servers to share the same storage. Providing a physically shared storage to a relatively large number of physical servers can easily become a performance bottleneck and a single point of failure. This has been a difficult challenge for storage solution providers, and the state-of-the-art solution is to build a so-called distributed storage system that provides a virtual shared disk to the outside world; internally, a distributed storage system consists of a number of interconnected storage servers, thus avoiding the bottleneck and single-point-of-failure problems. In this study, we have measured the performance of different distributed storage solutions and compared their performance during read/write/delete operations, as well as their recovery time in case of a storage server going down. In addition, we have studied the performance behavior of various hypervisors and compared them with a base system in terms of application performance, resource consumption and latency. We have also measured the performance implications of changing the number of virtual CPUs, as well as the performance of different hypervisors during live migration in terms of downtime and total migration time.
    Real-time applications are also increasingly deployed in virtualized environments due to scalability and flexibility benefits. However, cloud computing research has not focused on solutions that provide real-time assurance for these applications in a way that also optimizes resource consumption in data centers. One of the critical issues here is scheduling virtual machines that contain real-time applications in an efficient way without causing deadline misses for the applications inside the virtual machines. In this study, we have proposed an approach for scheduling real-time tasks with hard deadlines that are running inside virtual machines. In addition, we have proposed an overhead model which considers the effects of overhead due to switching from one virtual machine to another.

  • 6.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Implications of Over-allocation of Virtual CPUs (2015). In: 2015 International Symposium on Networks, Computers and Communications (ISNCC 2015), IEEE, 2015. Conference paper (Refereed)
    Abstract [en]

    A major advantage of cloud environments is that one can balance the load by migrating virtual machines (VMs) from one server to another. High performance and high resource utilization are also important in a cloud. We have observed that over-allocation of virtual CPUs to VMs (i.e. allocating more vCPUs to VMs than there are CPU cores on the server) can reduce performance when many VMs are running on one host. However, if we do not use any over-allocation of virtual CPUs, we may suffer from poor resource utilization after VM migration. Thus, it is important to identify and quantify performance bottlenecks when running in a virtualized environment. The results of this study will help providers of virtualized environments decide how many virtual CPUs should be allocated to each VM.

  • 7.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance implications of resource over-allocation during the live migration (2016). In: 8th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2016), IEEE Computer Society, 2016, pp. 552-557. Conference paper (Refereed)
    Abstract [en]

    As the number of cloud users increases, it becomes essential for cloud service providers to allocate the right amount of resources to virtual machines, especially during live migration. In order to increase resource utilization and reduce waste, providers have started to consider over-allocating resources. However, the benefits of over-allocation are not without inherent risks. In this paper, we conducted an experiment using a large telecommunication application running inside virtual machines, varying the number of vCPUs allocated to these virtual machines in order to find the choice that reduces the risk of under-allocating resources after the migration while increasing performance during the live migration. During our measurements we used VMware's vMotion to migrate virtual machines while they were running. The results of this study will help providers of virtualized environments decide how many resources should be allocated for better performance during live migration, as well as how many resources would be required for a given load.

  • 8.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Scheduling Tasks with Hard Deadlines in Cloud-Based Virtualized Software Systems. Manuscript (preprint) (Other academic)
    Abstract [en]

    There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. In this paper, we describe a technique for calculating a period and an execution time for a VM containing a real-time application with hard deadlines. This result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines. If the overhead for switching from one VM to another is ignored, it turns out that (infinitely) short VM periods minimize the utilization that each VM needs to guarantee that all real-time tasks in that VM will meet their deadlines. Having infinitely short VM periods is clearly not realistic, and in order to provide more useful results we have considered a fixed overhead at the beginning of each execution of a VM. Considering this overhead, a set of real-time tasks, the speed of each processor core, and a certain processor utilization of the VM containing the real-time tasks, we present a simulation study and some performance bounds that make it possible to determine whether it is possible to schedule the real-time tasks in the VM, and, if so, for which VM periods this is possible.
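    The period/overhead trade-off in this abstract can be sketched numerically. The model below is a simplification for illustration, not the paper's exact formulation: if the tasks inside a VM need a fraction u of a core, and each VM activation costs a fixed overhead O, then with VM period T the hypervisor must reserve roughly u + O/T of a core. Shrinking T lets the tasks meet tighter deadlines but inflates the overhead term, which is why infinitely short periods are only optimal when O = 0. All numbers are hypothetical.

```python
def vm_required_utilization(task_util, overhead, vm_period):
    """CPU share a VM needs at the hypervisor level.

    task_util: fraction of a core the real-time tasks need (0..1)
    overhead:  fixed cost paid at the start of each VM activation
    vm_period: period with which the hypervisor activates the VM
               (same time unit as overhead)
    """
    return task_util + overhead / vm_period

# Hypothetical numbers: tasks need 40% of a core, 1 ms switch overhead.
for period_ms in (5, 10, 50, 100):
    share = vm_required_utilization(0.40, 1.0, period_ms)
    print(f"T = {period_ms:3d} ms -> required core share {share:.2f}")
```

    The printed shares fall as the period grows, showing the overhead cost of short periods; the paper's bounds then determine which periods are still short enough for the tasks' deadlines.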

  • 9.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Scheduling Tasks with Hard Deadlines in Cloud-Based Virtualized Software Systems. Manuscript (preprint) (Other academic)
    Abstract [en]

    There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. In this paper, we describe a technique for calculating a period and an execution time for a VM containing a real-time application with hard deadlines. This result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines. If the overhead for switching from one VM to another is ignored, it turns out that (infinitely) short VM periods minimize the utilization that each VM needs to guarantee that all real-time tasks in that VM will meet their deadlines. Having infinitely short VM periods is clearly not realistic, and in order to provide more useful results we have considered a fixed overhead at the beginning of each execution of a VM. Considering this overhead, a set of real-time tasks, the speed of each processor core, and a certain processor utilization of the VM containing the real-time tasks, we present a simulation study and some performance bounds that make it possible to determine whether it is possible to schedule the real-time tasks in the VM, and, if so, for which VM periods this is possible.

  • 10.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Scheduling Tasks with Hard Deadlines in Virtualized Software Systems. Journal article (Refereed)
    Abstract [en]

    There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. In this paper, we describe a technique for calculating a period and an execution time for a VM containing a real-time application with hard deadlines. This result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines. If the overhead for switching from one VM to another is ignored, it turns out that (infinitely) short VM periods minimize the utilization that each VM needs to guarantee that all real-time tasks in that VM will meet their deadlines. Having infinitely short VM periods is clearly not realistic, and in order to provide more useful results we have considered a fixed overhead at the beginning of each execution of a VM. Considering this overhead, a set of real-time tasks, the speed of each processor core, and a certain processor utilization of the VM containing the real-time tasks, we present a simulation study and some performance bounds that make it possible to determine whether it is possible to schedule the real-time tasks in the VM, and, if so, for which VM periods this is possible.

  • 11.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Comparison between Horizontal Scaling of Hypervisor and Container Based Virtualization using Cassandra NoSQL Database (2018). In: Proceedings of the 3rd International Conference on Virtualization Application and Technology, 2018, p. 6. Conference paper (Refereed)
    Abstract [en]

    Cloud computing promises customers the on-demand ability to scale in the face of workload variations. There are different ways to accomplish scaling: one is vertical scaling and the other is horizontal scaling. Vertical scaling refers to adding more power (CPU, RAM) to a single, more expensive and robust server; it is less challenging to implement but becomes expensive quickly. Horizontal scaling refers to adding more servers, each with less processing power and RAM, which is usually cheaper overall and can scale very well. The majority of cloud providers prefer the horizontal scaling approach, and for them it is very important to know the advantages and disadvantages of both technologies from the perspective of application performance at scale. In this paper, we compare performance differences caused by scaling of the different virtualization technologies in terms of CPU utilization, latency, and the number of transactions per second. The workload is Apache Cassandra, a leading NoSQL distributed database for big data platforms. Our results show that running multiple instances of the Cassandra database concurrently affected the performance of read and write operations differently; for both VMware and Docker, the maximum number of read operations was reduced when we ran several instances concurrently, whereas the maximum number of write operations increased.

  • 12.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Performance Evaluation of Container and Virtual Machine Running Cassandra Workload (2017). In: Proceedings of 2017 3rd International Conference of Cloud Computing Technologies and Applications (CLOUDTECH) / [ed] Essaaidi, M., Zbakh, M., 2017, pp. 24-31. Conference paper (Refereed)
    Abstract [en]

    Today, scalable and highly available NoSQL distributed databases are widely used as big data platforms. Such distributed databases typically run on a virtualized infrastructure that could be implemented using hypervisor-based virtualization or container-based virtualization. Hypervisor-based virtualization is a mature technology but imposes overhead on CPU, memory, networking, and disk. Recently, by sharing the operating system resources and simplifying the deployment of applications, container-based virtualization has become more popular. Container-based virtualization is lightweight in resource consumption while also providing isolation. However, its disadvantages are security issues and I/O performance. As a result, these two technologies are today competing to provide virtual instances for running big data platforms. Hence, a key issue becomes the assessment of the performance of those virtualization technologies while running distributed databases. This paper presents an extensive performance comparison between VMware and Docker containers while running Apache Cassandra as the workload. Apache Cassandra is a leading NoSQL distributed database when it comes to big data platforms. As the baseline for comparison we used Cassandra's performance when running on a physical infrastructure. Our study shows that Docker had lower overhead compared to VMware when running Cassandra. In fact, Cassandra's performance on the Dockerized infrastructure was as good as on the non-virtualized one.

  • 13.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Erman, David
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Performance evaluation of distributed storage systems for cloud computing (2013). In: International Journal of Computers and Their Applications, ISSN 1076-5204, Vol. 20, no. 4, pp. 195-207. Journal article (Refereed)
    Abstract [en]

    The possibility to migrate a virtual server from one physical computer in a cloud to another physical computer in the same cloud is important in order to obtain a balanced load. In order to facilitate live migration of virtual servers, one needs to provide large shared storage systems that are accessible to all the physical servers used in the cloud. Distributed storage systems offer reliable and cost-effective storage of large amounts of data, and such storage systems will be used in future cloud computing. We have evaluated four large distributed storage systems. Two of these use Distributed Hash Tables (DHTs) in order to keep track of how data is distributed, and two systems use multicasting to access the stored data. We measure the read/write/delete performance, as well as the recovery time when a storage node goes down. The evaluations are done on the same hardware, consisting of 24 storage nodes and a total storage capacity of 768 TB of data. These evaluations show that the multicast approach outperforms the DHT approach.

  • 14.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Håkansson, Jim
    Ericsson AB, SWE.
    Comparing Automatic Load Balancing using VMware DRS with a Human Expert (2016). In: 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), IEEE, 2016, pp. 239-246. Conference paper (Refereed)
    Abstract [en]

    In recent years, there has been a rapid growth of interest in dynamic management of resources in virtualized systems. Virtualization provides great flexibility in terms of resource sharing, but at the same time it also brings new challenges for load balancing using automatic migrations of virtual machines. In this paper, we have evaluated VMware's Distributed Resource Scheduler (DRS) in a number of realistic scenarios using multiple instances of a large industrial telecommunication application. We have measured the performance on the hosts before and after the migration in terms of CPU utilization, and compared DRS migrations with migrations chosen by a human expert. According to our results, DRS with the most aggressive threshold gave the best results. It balanced the load in 40% of the cases, while in the remaining cases it could not balance the load properly, and in some cases it performed completely unnecessary migrations back and forth.

  • 15.
    Shirinbab, Sogand
    et al.
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Lundberg, Lars
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Ilie, Dragos
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för kommunikationssystem.
    Performance Comparison of KVM, VMware and XenServer using a Large Telecommunication Application (2014). Conference paper (Refereed)
    Abstract [en]

    One of the most important technologies in cloud computing is virtualization. This paper presents the results from a performance comparison of three well-known virtualization hypervisors: KVM, VMware and XenServer. In this study, we measure performance in terms of CPU utilization, disk utilization and response time of a large industrial real-time application. The application is running inside a virtual machine (VM) controlled by the KVM, VMware and XenServer hypervisors, respectively. Furthermore, we compare the three hypervisors based on downtime and total migration time during live migration. The results show that the Xen hypervisor results in higher CPU utilization and thus also lower maximum performance compared to VMware and KVM. However, VMware causes more write operations to disk than KVM and Xen, and Xen causes less downtime than KVM and VMware during live migration. This means that no single hypervisor has the best performance for all aspects considered here.
