Cache Support in a High Performance Fault-Tolerant Distributed Storage System for Cloud and Big Data
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering. ORCID iD: 0000-0001-9947-1088
Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
Compuverde AB.
2015 (English). In: 2015 IEEE 29th International Parallel and Distributed Processing Symposium Workshops, IEEE Computer Society, 2015, p. 537-546. Conference paper, Published paper (Refereed).
Abstract [en]

Due to the trends towards Big Data and Cloud Computing, one would like to provide large storage systems that are accessible by many servers. A shared storage system can, however, become a performance bottleneck and a single point of failure. Distributed storage systems provide shared storage to the outside world, but internally they consist of a network of servers and disks, thus avoiding the performance-bottleneck and single-point-of-failure problems. We introduce a cache in a distributed storage system. The cache system must be fault tolerant so that no data is lost in case of a hardware failure. This requirement excludes the use of the common write-invalidate cache consistency protocols. The cache is implemented and evaluated in two steps. The first step focuses on design decisions that improve performance when only one server accesses a file. In the second step we extend the cache with features that address the case when more than one server accesses the same file. The cache improves throughput significantly compared to having no cache. The two-step evaluation approach makes it possible to quantify how different design decisions affect performance in different use cases.
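
The fault-tolerance requirement rules out write-invalidate protocols because, after an invalidation, the only up-to-date copy of a block lives in the writing server's cache, so losing that server loses the data. The sketch below illustrates the alternative idea of a write-through cache that replicates each write to several storage nodes before acknowledging it. It is a minimal illustration only; the class names, methods, and replication factor (StorageNode, ReplicatedWriteThroughCache, replication_factor) are assumptions for the example, not the paper's actual design.

# Illustrative sketch: a write-through cache that replicates every write to
# several backing nodes before acknowledging it, so a cached copy is never
# the sole holder of data. Names and replication factor are assumptions,
# not the implementation described in the paper.

class StorageNode:
    """Stand-in for one disk/server in the distributed storage system."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}            # block_id -> bytes

    def write(self, block_id, data):
        self.blocks[block_id] = data

    def read(self, block_id):
        return self.blocks.get(block_id)


class ReplicatedWriteThroughCache:
    """Cache in front of the storage nodes; a write is acknowledged only
    after it has been stored on `replication_factor` nodes."""
    def __init__(self, nodes, replication_factor=2):
        self.nodes = nodes
        self.replication_factor = min(replication_factor, len(nodes))
        self.cache = {}             # block_id -> bytes

    def write(self, block_id, data):
        self.cache[block_id] = data
        # Write-through: persist on several nodes so a cache or server
        # failure cannot lose the only copy of the block.
        for node in self.nodes[: self.replication_factor]:
            node.write(block_id, data)

    def read(self, block_id):
        if block_id in self.cache:          # cache hit
            return self.cache[block_id]
        for node in self.nodes:             # cache miss: fetch and populate
            data = node.read(block_id)
            if data is not None:
                self.cache[block_id] = data
                return data
        return None


if __name__ == "__main__":
    nodes = [StorageNode(f"node{i}") for i in range(3)]
    cache = ReplicatedWriteThroughCache(nodes, replication_factor=2)
    cache.write("blk-1", b"hello")
    # Simulate losing the cache: the data is still recoverable from the nodes.
    cache.cache.clear()
    assert cache.read("blk-1") == b"hello"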

Place, publisher, year, edition, pages
IEEE Computer Society, 2015. p. 537-546
Keywords [en]
big data; cloud; distributed storage systems; cache; performance evaluation
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:bth-11411
DOI: 10.1109/IPDPSW.2015.65
ISI: 000380446100062
ISBN: 978-1-4673-9739-1 (print)
OAI: oai:DiVA.org:bth-11411
DiVA, id: diva2:894241
Conference
IEEE International Parallel and Distributed Processing Symposium Workshop (IPDPSW), Hyderabad
Part of project
Bigdata@BTH - Scalable resource-efficient systems for big data analytics, Knowledge Foundation
Funder
Knowledge Foundation
Available from: 2016-01-14. Created: 2016-01-14. Last updated: 2021-05-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Lundberg, Lars; Grahn, Håkan; Ilie, Dragos
