Smart SDS: LVM or ZFS
The Smart SDS layer is the foundation of our storage system. It combines all the available disks into a storage pool, eliminates the risk of data loss should a disk fail, allocates the storage space in whatever way you need, and provides block devices or file systems for use by the upper layers.
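Under the hood these are standard LVM (or ZFS) operations; a minimal sketch of what the SDS layer automates, with disk names and sizes as placeholder assumptions:

```shell
# Gather three spare disks into a pool and carve out a block device (LVM variant).
pvcreate /dev/sdb /dev/sdc /dev/sdd        # mark the disks as physical volumes
vgcreate pool /dev/sdb /dev/sdc /dev/sdd   # combine them into one volume group
lvcreate --name vol01 --size 500G pool     # allocate a 500 GiB logical volume
mkfs.xfs /dev/pool/vol01                   # file system for the upper layers
```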
Cloud Storage: DRBD® or Ceph
Sometimes a storage system running on a single box just doesn't cut it: A single box can fail, and capacity cannot grow freely without incurring extreme costs.
With openATTIC you can guard against the failure of a single node by mirroring critical volumes using DRBD®, a rock-solid solution that has proven its reliability countless times. If you also require infinite scalability, or are considering deploying openATTIC together with OpenStack, you should know that openATTIC integrates natively with Ceph.
Unified Storage: any client, any time
All that awesome, high-performance, clustered storage space isn't worth a plugged nickel if you don't have a way to connect your clients. openATTIC connects to NFS, CIFS and iSCSI clients just as well as to high-end FibreChannel ESX or OpenStack KVM hosts.
- openATTIC runs on practically any brand of off-the-shelf hardware. Have some boxes to spare? They will do just fine. And you can attach as many disks as you want. RAID controllers can be used, but even without one ZFS will put your disks to good use.
- The classic setup
Combining Hardware RAID, Software RAID and LVM, you can build a storage system that achieves great performance and gives admins the easy maintainability they are used to: If a disk fails, just pull it out, shove in the replacement and get on with your life. This setup handles high loads by evenly distributing operations across 8 or 16 data disks.
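A sketch of that classic stack, assuming eight data disks /dev/sdb through /dev/sdi (device names are placeholders):

```shell
# Software RAID-10 across the data disks, LVM on top.
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]
pvcreate /dev/md0
vgcreate data /dev/md0
# Replacing a failed disk really is pull-and-swap plus two commands:
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
mdadm /dev/md0 --add /dev/sdj
```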
- The ZFS way
The classic setup, however, has steep hardware requirements and cannot offer advanced features such as deduplication or compression. For situations where you don't have room for enough disks, where RAID controllers cannot be used, or where you require the advanced functionality, you can switch to ZFS and get the same features with less stringent hardware requirements. The price you pay is that ZFS is a bit more involved when a disk needs to be replaced.
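The ZFS equivalent is a handful of commands (disk names are placeholders); note the extra explicit step when a disk dies:

```shell
# Pool the disks with ZFS and enable the advanced features.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zfs set compression=lz4 tank
zfs set dedup=on tank            # deduplication is RAM-hungry; enable with care
# Replacing a failed disk means telling ZFS about the new device explicitly:
zpool replace tank /dev/sdc /dev/sdf
```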
- Snapshots
When making a backup, data consistency is critical. However, you usually can't freeze all your data until the backup is complete, because that can mean a couple of hours of downtime, which is unacceptable. Snapshots let you freeze your data in an instant, giving the backup a consistent view while live traffic continues as usual. openATTIC allows snapshots to be scheduled so they cooperate smoothly with backup solutions.
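With LVM, for example, the freeze-backup-release cycle is just a few commands (volume names, paths and sizes are placeholders):

```shell
# Take a snapshot, back it up read-only, then release it.
lvcreate --snapshot --name data_snap --size 10G /dev/pool/data
mount -o ro /dev/pool/data_snap /mnt/snap
rsync -a /mnt/snap/ /backup/data/
umount /mnt/snap
lvremove -f /dev/pool/data_snap
```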
- Automated Monitoring
openATTIC automatically configures a monitoring system to make sure that when something fails, you will know.
- Thin Provisioning
People often order way more space than they actually need. With thin provisioning, openATTIC allocates physical space only as data is actually written, so the capacity nobody is using stays in the pool and can serve other volumes.
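In LVM terms, a sketch with placeholder names, where a 1 TiB pool backs a volume the clients see as 5 TiB:

```shell
lvcreate --type thin-pool --name thinpool --size 1T pool
lvcreate --thin --name vol01 --virtualsize 5T pool/thinpool
# Physical space is consumed only as data is actually written.
```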
- Multi-Node Management
Manage all your openATTIC systems in one comprehensive GUI, without having to worry about where the volumes actually reside.
- Synchronous Mirroring with DRBD®
Mirror a volume to another openATTIC host to enable that host to take over in case the first host fails. This also allows for maintenance to be done without the need for a downtime.
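A DRBD resource mirroring one volume between two hosts looks roughly like this — hostnames, devices and addresses are placeholders, and protocol C is DRBD's synchronous mode:

```
resource r0 {
  protocol C;                    # synchronous replication
  device    /dev/drbd0;
  disk      /dev/pool/vol01;
  meta-disk internal;
  on node-a { address 10.0.0.1:7789; }
  on node-b { address 10.0.0.2:7789; }
}
```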
- High Availability Clustering with Pacemaker
Cluster takeover can and should be automated, to keep the downtime as short as possible. That way, your clients won't even notice the outage because it will only last a few milliseconds.
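A sketch of such a takeover setup using the pcs shell — resource names and addresses are placeholders, and the exact constraint syntax varies between Pacemaker/pcs versions:

```shell
# Promote the DRBD mirror on one node and keep the service IP with it.
pcs resource create p_drbd ocf:linbit:drbd drbd_resource=r0 promotable
pcs resource create p_ip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24
pcs constraint order promote p_drbd-clone then start p_ip
pcs constraint colocation add p_ip with master p_drbd-clone
```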
- Cloud Storage with Ceph and OpenStack
If you need your storage space to grow infinitely over the next couple of years while being able to easily replace hardware that goes out of service, Ceph is exactly what you need — and it's built into openATTIC.
- Unprecedented Extensibility
With an open API as a central component, a storage system stops being a black box. Via cloud connectors using the REST API, openATTIC integrates into other systems such as openQRM and OpenStack.
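A REST API addresses each storage object as a resource URL. A minimal sketch of how a connector might build those URLs — the base path and the `volumes` resource name are assumptions based on typical REST layouts, not a verified endpoint list:

```python
from urllib.parse import urljoin

# Hypothetical base URL; adjust to your openATTIC host (assumption, not verified).
BASE = "http://openattic.example.com/openattic/api/"

def resource_url(resource, pk=None):
    """Build the REST URL for a resource, optionally for one specific object."""
    path = f"{resource}/{pk}" if pk is not None else resource
    return urljoin(BASE, path)

# A connector would GET these with any HTTP client:
print(resource_url("volumes"))      # list all volumes
print(resource_url("volumes", 1))   # fetch volume #1
```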
- Windows File Storage (CIFS)
openATTIC seamlessly joins a Windows domain, providing a highly available, centralized file store that gives you the reliability you need and your users the features they love.
- Virtualization Storage (NFS)
Using NFS for virtualization storage has the great advantage that thin provisioning comes practically for free, and with XFS on the storage system it delivers a massive performance boost as well: about ten times as fast as iSCSI!
Systems that have been tested to run with openATTIC include:
- VMware ESX
- KVM (oVirt, RHEV, OpenStack)
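On the storage side, such an export is a single line; a sketch with the path and client subnet as placeholders:

```
# /etc/exports — expose the VM store to the hypervisor hosts
/volumes/vmstore  192.168.10.0/24(rw,no_root_squash,sync)
```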
- Block Storage (iSCSI, FibreChannel)
The LIO project provides a unified block storage target and allows for iSCSI, FC and FCoE to be integrated smoothly into Linux. So whatever LIO supports, openATTIC will support.
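Exporting a volume through LIO comes down to a few targetcli calls; a sketch with the backing device and IQN as placeholders:

```shell
targetcli /backstores/block create name=vol01 dev=/dev/pool/vol01
targetcli /iscsi create iqn.2016-01.com.example:storage1
targetcli /iscsi/iqn.2016-01.com.example:storage1/tpg1/luns create /backstores/block/vol01
targetcli saveconfig
```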
- Object Storage (Ceph)
For large-scale setups, Ceph is the way to go, so of course openATTIC includes it.
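On the Ceph side, handing out block storage is equally brief; a sketch with pool and image names as placeholders:

```shell
ceph osd pool create volumes 128        # a replicated pool with 128 placement groups
rbd create --size 100G volumes/vol01    # a 100 GiB RADOS block device
rbd map volumes/vol01                   # appears as /dev/rbd0 on the client
```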