Virtualization has significantly improved hardware utilization by allowing IT service providers to create and run several independent virtual machine instances on the same physical hardware. One of the key features of virtualization is live migration of virtual machines while they are running, which requires transferring memory and storage from the source host to the destination host during the migration process. This problem is gaining importance, since dynamic load balancing is desirable in cloud systems where a large number of virtual machines share a number of physical servers. To reduce the need for copying files from one physical server to another during live migration of a virtual machine, all physical servers should ideally share the same storage. However, providing physically shared storage to a relatively large number of physical servers can easily become a performance bottleneck and a single point of failure. This has been a difficult challenge for storage solution providers, and the state-of-the-art solution is a so-called distributed storage system that presents a virtual shared disk to the outside world; internally, such a system consists of a number of interconnected storage servers, thus avoiding the bottleneck and single-point-of-failure problems.

In this study, we have measured the performance of different distributed storage solutions and compared them with respect to read, write, and delete operations, as well as their recovery time when a storage server goes down. In addition, we have studied the performance of various hypervisors and compared them with a base system in terms of application performance, resource consumption, and latency. We have also measured the performance implications of changing the number of virtual CPUs, as well as the performance of different hypervisors during live migration in terms of downtime and total migration time.

Real-time applications are also increasingly deployed in virtualized environments due to scalability and flexibility benefits. However, cloud computing research has not focused on solutions that provide real-time assurance for these applications while also optimizing resource consumption in data centers. One of the critical issues here is scheduling virtual machines that contain real-time applications efficiently, without causing deadline misses for the applications inside those virtual machines. In this study, we have proposed an approach for scheduling real-time tasks with hard deadlines that run inside virtual machines. In addition, we have proposed an overhead model that accounts for the cost of switching from one virtual machine to another.
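To make the storage comparison concrete, the following is a minimal sketch of how read/write/delete latency on a mounted distributed storage volume could be measured. The mount point, file size, and number of runs are assumptions for illustration only; the study's actual measurement tooling and workloads are not described here.

```python
import os
import time

MOUNT_POINT = "/mnt/dss"      # hypothetical mount point of the distributed storage volume
FILE_SIZE = 64 * 1024 * 1024  # assumed 64 MiB test file
RUNS = 10

def timed(op):
    """Return the wall-clock duration of a single operation."""
    start = time.perf_counter()
    op()
    return time.perf_counter() - start

def benchmark(run):
    path = os.path.join(MOUNT_POINT, f"bench_{run}.bin")
    payload = os.urandom(FILE_SIZE)

    def write():
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the data out to the storage backend

    def read():
        # Note: reads may be served from the local page cache unless it is dropped first.
        with open(path, "rb") as f:
            f.read()

    def delete():
        os.remove(path)

    return timed(write), timed(read), timed(delete)

if __name__ == "__main__":
    results = [benchmark(i) for i in range(RUNS)]
    for name, column in zip(("write", "read", "delete"), zip(*results)):
        print(f"{name:6s} avg: {sum(column) / RUNS:.4f} s")
```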
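Downtime during live migration can be approximated from outside the hypervisor by probing the migrating virtual machine and recording the longest period of unreachability. The sketch below illustrates this idea only; the VM address, probe port, and probe interval are assumptions, and the study's actual downtime measurement method may differ.

```python
import socket
import time

VM_ADDRESS = ("192.0.2.10", 22)  # hypothetical VM IP and an open TCP port
PROBE_INTERVAL = 0.05            # seconds between probes
DURATION = 120                   # probe for two minutes spanning the migration

def vm_reachable(address, timeout=0.05):
    """Return True if a TCP connection to the VM succeeds within the timeout."""
    try:
        with socket.create_connection(address, timeout=timeout):
            return True
    except OSError:
        return False

def measure_downtime():
    longest_gap = 0.0
    gap_start = None
    end = time.monotonic() + DURATION
    while time.monotonic() < end:
        now = time.monotonic()
        if vm_reachable(VM_ADDRESS):
            if gap_start is not None:
                longest_gap = max(longest_gap, now - gap_start)
                gap_start = None
        elif gap_start is None:
            gap_start = now
        time.sleep(PROBE_INTERVAL)
    return longest_gap

if __name__ == "__main__":
    print(f"Longest unreachable period (approx. downtime): {measure_downtime():.3f} s")
```

Total migration time, by contrast, is typically reported by the hypervisor's own migration statistics rather than inferred from external probing.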
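As an illustration of how switching overhead can enter a schedulability analysis, the sketch below inflates each task's worst-case execution time by an assumed number of virtual-machine switches per period and applies a utilization-based EDF test. This is not the overhead model proposed in the study; the overhead value, the switches-per-period count, and the implicit-deadline assumption are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float    # worst-case execution time (ms)
    period: float  # period, assumed equal to the relative deadline (ms)

def schedulable_with_overhead(tasks, vm_switch_overhead, switches_per_period=2):
    """Utilization-based EDF test with each task's demand inflated by
    the VM-switching overhead it may suffer in one period (illustrative)."""
    utilization = sum(
        (t.wcet + switches_per_period * vm_switch_overhead) / t.period
        for t in tasks
    )
    return utilization <= 1.0, utilization

if __name__ == "__main__":
    tasks = [Task(wcet=2.0, period=10.0), Task(wcet=5.0, period=40.0)]
    ok, u = schedulable_with_overhead(tasks, vm_switch_overhead=0.3)
    print(f"total utilization = {u:.3f}, schedulable: {ok}")
```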