Building reliable Ceph clusters
Lars Marowsky-Brée
http://linux.conf.au/schedule/presentation/83/
Ceph is the most popular software-defined storage technology today, and it is extremely widespread in the OpenStack world.
It enables a multitude of block, object, and file storage use cases, and its flexibility allows it to be configured as required for many different scenarios. The hardware environment can be similarly tailored. This is a sizable decision matrix, but leads to an environment optimally tuned for the required balance between performance, functionality, and cost. Dependability aspects - availability and reliability in particular - are often overlooked.
Drawing on twenty years of designing dependable distributed systems and supporting them in production, this presentation aims to make you confident in your choice of Ceph for your use case, and to help you build an architecture you can trust.
Beginning with choosing the appropriate access method for your workload, we then introduce the algorithms and technologies in Ceph as they relate to resilience and high availability. We will discuss the considerations involved in optimizing a distributed storage system for reliability, availability, durability of data, and fault tolerance. We will look at the performance of Ceph in degraded and recovery scenarios, and at how to reduce exposure. This affects the choice of hardware, the approach to feature selection, and the system architecture.
We will also talk about operational procedures to reduce unplanned downtime, speed up recovery, and improve supportability.
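As a concrete illustration of the reliability/availability trade-offs the talk covers, Ceph pools expose a replica count (`size`) and a minimum number of replicas required to keep serving I/O (`min_size`). The sketch below uses a hypothetical pool named `rbd` and the common 3-replica layout; the exact values depend on your failure domains and durability requirements.

```shell
# Hypothetical pool "rbd": keep three copies of every object
ceph osd pool set rbd size 3

# Continue serving I/O with one replica down, but refuse writes
# once only a single copy remains (protects durability)
ceph osd pool set rbd min_size 2

# Verify the settings and watch cluster health during recovery
ceph osd pool get rbd size
ceph health detail
```

Setting `min_size 1` would improve availability during failures at the cost of durability, which is exactly the kind of trade-off the presentation explores.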
Video "Building reliable Ceph clusters" from the channel linux conf au 2017 - Hobart, Australia
Video information
Published: 19 January 2017, 18:23:33
Duration: 00:40:17
Other videos on this channel
- SDC 2017 - Goodbye, XFS: Building a new, faster storage backend for Ceph - Sage Weil
- Ceph Intro & Architectural Overview
- Accelerating Ceph Performance with High Speed Networks and Protocols
- My personal fight against the modern laptop
- Doing 'Blockchain' Things
- A Conversation About Storage Clustering: Gluster VS Ceph (PART 1)
- The Trouble with FreeBSD
- The Tragedy of systemd
- Porting Games To Linux
- Adventures in laptop battery hacking
- Tech Tip Tuesday - Why CephFS is Great
- Kernel-bypass networking for fun and profit
- Ceph Day Germany - 10 ways to break your Ceph cluster
- Rook Deep Dive: Ceph - Travis Nielsen & Sebastien Han, Red Hat
- Couch to OpenStack - Understanding Where to Start Your Learning Journey
- Destroying a Storage Cluster PART 2: A Catastrophic Failure with Recovery Process
- Ceph and the CERN HPC Infrastructure
- Failing Better - When Not To Ceph and Lessons Learned - Lars Marowsky-Brée, SUSE
- Optimizing Ceph Object Storage for Production in Multisite Clouds - Michael Hackett & Vikhyat Umrao
- Proxmox: Adding NVMe Journaled Ceph Storage!