openATTIC joins SUSE


You may have seen today's announcement that the openATTIC development team has joined SUSE. With this move, SUSE takes over the corporate sponsor role from openATTIC's parent company, it-novum.

I'd like to share my view on what this means for openATTIC and for the community and ecosystem around the project.

First off, the license of the software or openness of the development process won't change. Quite the contrary: SUSE is fully committed to keeping openATTIC licensed under the GPL and growing the community around the project.

You will still be able to freely use it without arbitrary restrictions for your Ceph and "traditional" storage management needs.

Read more…

Automatically deploying Ceph using Salt Open and DeepSea

One key part of implementing Ceph management capabilities within openATTIC revolves around the ability to install, deploy and manage Ceph cluster nodes automatically. This requires remote node management capabilities that openATTIC currently does not provide out of the box. For "traditional" storage configurations, openATTIC needs to be installed on every storage node that is managed, although you can use a single web interface for managing all of the nodes' storage resources.

Naturally, installing openATTIC on all nodes belonging to a Ceph cluster is not feasible.

As I mentioned in my post Sneak Preview: Ceph Pool Performance Graphs, SUSE is developing a collection of Salt files for deploying, managing and automating Ceph that openATTIC will build on.

The DeepSea documentation on GitHub is a good start, but sometimes it's helpful to have a simple step-by-step guide on how to get going.

Thankfully, SUSE's Tim Serong has written up a nice article that guides you through the various steps and stages involved in installing Ceph with DeepSea: Hello Salty Goodness.
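For the impatient, the core of the DeepSea workflow is a handful of orchestration runs on the Salt master. The stage sequence below follows the DeepSea documentation at the time of writing; the exact details may change as the project evolves:

```shell
# Each stage is a Salt orchestration run, executed on the Salt master
salt-run state.orch ceph.stage.0   # preparation: update nodes, reboot if needed
salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles
# ...review the generated proposals and assign cluster roles...
salt-run state.orch ceph.stage.2   # configuration: create the cluster config
salt-run state.orch ceph.stage.3   # deployment: set up monitors and OSDs
salt-run state.orch ceph.stage.4   # services: optional gateways (RGW, iSCSI, CephFS)
```

Tim's article walks through what happens at each of these stages in much more detail.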

Hope you enjoy it!

Reduce KVM disk size with dd and sparsify

You can convert a non-sparse raw or qcow2 image to a sparse image with dd and virt-sparsify. The same approach also lets you shrink an existing image that has grown over time.

First, install the libguestfs-tools package on your system:

apt-get install libguestfs-tools

Now copy your existing image to a new one with dd:

dd if=existing_imagefile.raw of=new_imagefile.raw conv=sparse

Afterwards, use virt-sparsify to shrink the image further (in this example, I sparsify and convert the image to qcow2 in a single step):

virt-sparsify new_imagefile.raw --convert qcow2 new_imagefile.qcow2

In my case, dd with conv=sparse turned a 65 GB block device into a 40 GB raw image, and virt-sparsify then reduced it further to a 6.8 GB qcow2 image.
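To see the effect of conv=sparse yourself, compare the apparent file size with the blocks actually allocated on disk. A self-contained demonstration (the file names here are just examples):

```shell
# Create a fully allocated 100 MiB image filled with zeroes
dd if=/dev/zero of=test_image.raw bs=1M count=100 status=none

# Copy it with dd's sparse conversion: runs of zero blocks become holes
dd if=test_image.raw of=test_sparse.raw conv=sparse status=none

# The apparent size is identical, but the sparse copy allocates far fewer blocks
ls -lh test_image.raw test_sparse.raw
du -h test_image.raw test_sparse.raw
```

Note that du reports the blocks actually allocated, while ls shows the apparent size, which is why the two numbers diverge for sparse files.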

Developing with Ceph using Docker

As you're probably aware, we're putting a lot of effort into improving the Ceph management and monitoring capabilities of openATTIC in collaboration with SUSE.

One of the challenges here is that Ceph is a distributed system, usually running on a number of independent nodes/hosts. This can be somewhat of a challenge for a developer who just wants to "talk" to a Ceph cluster without actually having to fully set up and manage it.

Of course, you could be using tools like SUSE's Salt-based DeepSea project or ceph-ansible, which automate the deployment and configuration of an entire Ceph cluster to a high degree. But that still requires setting up multiple (virtual) machines, which could be a daunting or at least resource-intensive task for a developer.

While we do have a number of internal Ceph clusters in our data center that we can use for testing and development purposes, sometimes it's sufficient to have something that behaves like a Ceph cluster from an API perspective, but does not necessarily have to perform like a full-blown distributed system (and can be set up locally).

Fortunately, Docker comes to the rescue here - the nice folks at Ceph kindly provide a special Docker image labeled ceph/demo, which can be described as a "Ceph cluster in a box".
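Getting such a single-node "cluster" up is essentially a one-liner. The IP address and network below are examples and must be adjusted to match your Docker host:

```shell
# Run the all-in-one demo container; MON_IP and CEPH_PUBLIC_NETWORK
# must match your host's actual address and network (examples shown)
docker run -d --net=host \
  -e MON_IP=192.168.1.100 \
  -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
  --name ceph-demo \
  ceph/demo

# The container ships the Ceph CLI tools, so you can query it directly
docker exec ceph-demo ceph -s
```

From the outside, this container answers on the usual Ceph ports, so a development instance of openATTIC can talk to it like any other cluster.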

Read more…

Windows AD keytab file and ktutil merge

If you ever plan to set up a clustered Samba file server within a Windows Active Directory infrastructure, you'll need the following things.

The problem in a clustered Samba environment is that clients always want to connect to their network share using the same hostname/machine account.

It would be possible to just use the cluster IP instead of a new machine account, but then your users/clients will always get a popup in their Office programs warning that this isn't a trusted location.

To get rid of this annoying problem, you have to create a new machine account and merge its keytab into the existing one on your Samba servers.
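The merge itself can be done with MIT Kerberos' ktutil. A rough sketch of the procedure; the keytab paths below are examples and need to be adjusted to your setup:

```shell
# Read both keytabs into ktutil's buffer and write the combined
# set of entries out to a new file (paths are examples)
ktutil <<'EOF'
rkt /etc/krb5.keytab
rkt /tmp/cluster-account.keytab
wkt /etc/krb5.keytab.merged
quit
EOF

# Verify that the merged keytab contains entries for both principals
klist -k /etc/krb5.keytab.merged
```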

Read more…

Conference Report: Ceph Days 2016 Munich, Germany

Last Friday (23rd of September), I traveled to Munich to attend the Ceph Day and talk about openATTIC.

This Ceph Day was sponsored by Red Hat and SUSE and it was nice to see many representatives of both companies attending and speaking about Ceph-related topics. Even though the event was organized on short notice, almost 50 attendees showed up.

I was there for the entire day and attended all sessions. I also took some pictures, which can be found in my Flickr set.

Fortunately, there was just a single track of presentations. While all talks provided ample insight and new information about Ceph, I learned a lot from the following sessions:

Read more…

Speaking about openATTIC at the Ceph Days in Munich (2016-09-23)

Ceph Days are full-day events from and for the Ceph community which take place around the globe. They usually provide a good variety of talks, including technical deep-dives, best practices and updates about recent developments.

The next Ceph Day will take place in Munich, Germany next week (Friday, 23rd of September). I'll be there to give an overview and update on openATTIC, particularly on the current Ceph management and monitoring feature set, as well as an outlook on ongoing and upcoming developments.

If you're using Ceph and would like to get updates on recent development "straight from the horse's mouth", next week is your chance! I look forward to being there.

Sneak Preview: Ceph Pool Performance Graphs

As I wrote in my call for feedback and testing of the Ceph management features in openATTIC 2.0.14, we still have a lot of tasks on our plate.

Currently, we're laying the groundwork for consuming SUSE's collection of Salt files for deploying, managing and automating Ceph. Dubbed the "DeepSea" project, this framework will form the foundation of how we plan to extend the Ceph management capabilities of openATTIC to deploy and orchestrate tasks on remote Ceph nodes.

In parallel, we are working on extending the openATTIC WebUI to make the existing backend functionality accessible and usable. Next up is displaying the performance statistics for Ceph pools that we already collect in the backend (OP-1405).

To whet your appetite, here's a screenshot of the ongoing development:


Keep in mind this is work in progress. What do you think?

Seeking your feedback on the Ceph monitoring and management functionality in openATTIC

With the release of openATTIC version 2.0.14 this week, we have reached an important milestone when it comes to the Ceph management and monitoring capabilities. It is now possible to monitor and view the health and overall performance of one or multiple Ceph clusters via the newly designed Ceph cluster dashboard.

In addition to that, openATTIC now offers many options to view, create or delete various Ceph objects like pools, RBDs and OSDs.

We're well aware that we're not done yet. But even though we still have a lot of additional Ceph management features on our TODO list, we'd like to make sure that we're on the right track with what we have so far.

Therefore we are seeking feedback from early adopters and would like to encourage you to give openATTIC a try! If you are running a Ceph cluster in your environment, you could now start using openATTIC to monitor its status and perform basic administrative tasks.

All it requires is a Ceph admin key and config file. The installation of openATTIC for Ceph monitoring/management purposes is pretty lightweight, and you don't need any additional disks if you're not interested in the other storage management capabilities we provide.
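In practice, this means making the cluster's configuration and admin keyring available on the openATTIC host. A minimal sketch, assuming a reachable cluster member (here called "ceph-node" as a placeholder) and the default Ceph paths:

```shell
# Copy the cluster configuration and the admin keyring from a Ceph node
# to the host running openATTIC ("ceph-node" is a placeholder)
scp ceph-node:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp ceph-node:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
```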

We'd like to solicit your input on the following topics:

  • How do you like the existing functionality?
  • Did you find any bugs?
  • What can be improved?
  • What is missing?
  • What would be the next features we should look into?

Any feedback is welcome, either via our Google Group, IRC or our public Jira tracker. See the get involved page for details on how to get in touch with us.

Thanks in advance for your help and support!