CEPH

The massive storage capability of Ceph can revitalize your organization’s IT infrastructure and your ability to manage vast amounts of data. If your organization runs applications with different storage interface needs, Ceph is for you! Ceph’s foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block, and file system storage in a single unified storage cluster, making Ceph flexible, highly reliable, and easy for you to manage. RADOS gives you extraordinary data storage scalability: thousands of client hosts or virtual machines (KVMs) accessing petabytes to exabytes of data. Ceph is completely distributed with no single point of failure, scalable to exabyte levels, and open source. Each of your applications can use the object, block, or file system interfaces to the same RADOS cluster simultaneously, which means your Ceph storage system serves as a flexible foundation for all of your data storage needs. You can use Ceph for free and deploy it on economical commodity hardware.
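
As a rough illustration of the unified RADOS layer, the sketch below uses the librados Python bindings to store and read back an object directly; the ceph.conf path and the pool name 'data' are assumptions for this example, not requirements.

    import rados

    # Connect using a cluster config and keyring prepared by the admin
    # (the conffile path and the pool name below are assumptions for this sketch).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')   # an existing pool
        try:
            # Store an object in RADOS and read it back.
            ioctx.write_full('hello-object', b'Hello, RADOS!')
            print(ioctx.read('hello-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

The block (RBD) and file (CephFS) interfaces sit on top of this same object layer, which is what allows one cluster to serve all three.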

OBJECT-BASED STORAGE
Organizations like object-based storage when deploying large-scale storage systems because it stores data more efficiently. Object-based storage systems separate the object namespace from the underlying storage hardware, which simplifies data migration.

WHY IT MATTERS
By decoupling the namespace from the underlying hardware, object-based storage systems enable you to build much larger storage clusters. You can scale out object-based storage systems using economical commodity hardware, and you can replace hardware easily when it malfunctions or fails.

THE CEPH DIFFERENCE
Ceph’s CRUSH algorithm liberates storage clusters from the scalability and performance limitations imposed by centralized data table mapping. It replicates and rebalances data within the cluster dynamically, eliminating these tedious tasks for administrators while delivering high performance and virtually unlimited scalability.
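
The toy sketch below (plain Python, not Ceph’s actual CRUSH code) illustrates the idea: every client computes an object’s placement from its name and the current cluster map instead of consulting a central lookup table, so there is no metadata bottleneck to grow or fail.

    import hashlib

    def place_object(obj_name, pg_count, osds, replicas=3):
        # Toy placement: hash the object name to a placement group (PG),
        # then map that PG deterministically onto a set of OSDs. Real CRUSH
        # also weighs devices and respects failure domains (hosts, racks).
        digest = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
        pg = digest % pg_count
        start = pg % len(osds)
        return pg, [osds[(start + i) % len(osds)] for i in range(replicas)]

    pg, targets = place_object('hello-object', pg_count=128,
                               osds=['osd.0', 'osd.1', 'osd.2', 'osd.3', 'osd.4'])
    print(pg, targets)  # every client computes the same answer from the same map

Because placement is computed rather than stored, adding or removing OSDs only changes the cluster map, and the cluster rebalances itself without an administrator editing any table.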

CEPH ARCHITECTURE
A minimum Ceph storage cluster has one monitor node (MON) and one object storage daemon (OSD). Administration tasks are done on an admin node, which can also be a MON node. There should be an odd number of MON nodes because they vote to determine which OSDs are in the cluster and working. If only the Ceph Block Device or Ceph Object Storage is used, no separate metadata servers (MDS) are needed. If the Ceph File System (CephFS) is used, then separate, scalable MDS servers are needed.
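
As a hedged sketch of how you might verify that the MONs have quorum and the OSDs are up and in, the snippet below asks the monitors for cluster status through the librados Python bindings (the conffile path is an assumption, and the JSON keys shown are those of recent Ceph releases; the same information is available from the ceph -s command).

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an assumption
    cluster.connect()

    # Send the equivalent of 'ceph status --format json' to the monitors.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'status', 'format': 'json'}), b'')
    status = json.loads(outbuf)

    print(status['health']['status'])   # e.g. HEALTH_OK
    print(status['quorum_names'])       # MONs currently in quorum
    print(status['osdmap'])             # OSD counts: total / up / in
    cluster.shutdown()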

Advantages

  • Provides strong data reliability for mission-critical operations
  • Scales to 100s of petabytes
  • Fast, with TB/s aggregate throughput
  • Can handle billions of files
  • File sizes can be from bytes to terabytes
  • Enterprise reliability
  • Unified storage (object, block, and file)

Disadvantages

  • A normal installation can require beefy networks, for example two to four 10GbE links on each server, each VLAN’d across two switches, so you can lose a switch without losing either network
  • Non-trivial in the number of switches and servers required for a best-practice Ceph storage system
  • Not the most efficient at using CPUs and SSDs, but this is improving

Best Uses

  • Can provide high availability (HA)
  • Deployed Ceph storage systems of ~30 PB have been tested
  • Cloud storage services
  • Can provide unified storage (object, block, and file), which is rare