Discussion:
oVirt and SAS shared storage??
Hans-Joachim
2013-11-16 06:28:24 UTC
Hello,

Unfortunately, I didn't get a reply to my question. So... let's try again.

Does oVirt support SAS shared storage (e.g. the MSA2000sa) as a storage domain?
If yes, what kind of storage domain do I have to choose at setup time?

Thank you for your help

Hans-Joachim
Ryan Barry
2013-11-16 14:22:34 UTC
Post by Hans-Joachim
Unfortunately, I didn't get a reply to my question. So... let's try again.
Does oVirt support SAS shared storage (e.g. the MSA2000sa) as a storage
domain?
If yes, what kind of storage domain do I have to choose at setup time?
SAS is a bus which implements the SCSI protocol in a point-to-point
fashion. The array you have is the effective equivalent of attaching
additional hard drives directly to your computer.

It is not necessarily faster than iSCSI or Fibre Channel; almost any
nearline storage these days is SAS, as are almost all of the SANs in
production and most tiered storage as well (because SAS supports SATA
drives). I'm not even sure whether NetApp still uses FC-AL drives in their
arrays; I think they're all SAS now, but don't quote me on that.

What differentiates a SAN (iSCSI or Fibre Channel) from a NAS is that a SAN
presents raw block devices over a fabric or switched medium rather than
point-to-point (point-to-point Fibre Channel still happens, but it's easier
to assume that it doesn't for the sake of argument). A NAS presents network
file systems (CIFS, GlusterFS, Lustre, NFS, Ceph, whatever), though this
also gets complicated when you start talking about distributed clustered
network file systems.

Anyway, what you have is neither of these. It's directly-attached storage.
It may work, but it's an unsupported configuration, and is only shared
storage in the sense that it has multiple controllers. If I were going to
configure it for oVirt, I would:

- Attach it to a third server and export iSCSI LUNs from it.
- Attach it to a third server and export NFS from it.
- Attach it to multiple CentOS/Fedora servers, configure clustering (so you
  get fencing, a DLM, and the other requisites of a clustered filesystem),
  and use raw cLVM block devices or GFS2/OCFS filesystems as POSIXFS storage
  for oVirt.
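The first option can be sketched with targetcli (the LIO target shell) on the intermediate server; the device path, IQNs, and initiator name below are hypothetical placeholders, so adjust them for your environment:

```shell
# Hypothetical sketch: a third server sees the MSA LUN as a local block
# device (placeholder: /dev/mapper/msa_lun0) and re-exports it over iSCSI.
# Register the block device as a LIO backstore:
targetcli /backstores/block create name=msa_lun0 dev=/dev/mapper/msa_lun0
# Create an iSCSI target (placeholder IQN):
targetcli /iscsi create iqn.2013-11.org.example:msa2000
# Map the backstore as a LUN on the default portal group:
targetcli /iscsi/iqn.2013-11.org.example:msa2000/tpg1/luns \
    create /backstores/block/msa_lun0
# Allow the oVirt host's initiator (placeholder IQN) to log in:
targetcli /iscsi/iqn.2013-11.org.example:msa2000/tpg1/acls \
    create iqn.2013-11.org.example:ovirt-host1
# Persist the configuration:
targetcli saveconfig
```

The exported target can then be added in oVirt as an ordinary iSCSI storage domain, though note this puts the third server in the data path as a single point of failure.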

--
while (!asleep) { sheep++; }
Jeff Bailey
2013-11-17 02:39:35 UTC
Post by Hans-Joachim
Unfortunately, I didn't get a reply to my question. So... let's try again.
Does oVirt support SAS shared storage (e.g. the MSA2000sa) as a
storage domain?
If yes, what kind of storage domain do I have to choose at setup time?
Post by Ryan Barry
SAS is a bus which implements the SCSI protocol in a point-to-point
fashion. The array you have is the effective equivalent of attaching
additional hard drives directly to your computer.
It is not necessarily faster than iSCSI or Fibre Channel; almost any
nearline storage these days is SAS, as are almost all of the SANs in
production and most tiered storage as well (because SAS supports SATA
drives). I'm not even sure whether NetApp still uses FC-AL drives in
their arrays; I think they're all SAS now, but don't quote me on that.
What differentiates a SAN (iSCSI or Fibre Channel) from a NAS is that
a SAN presents raw block devices over a fabric or switched medium rather
than point-to-point (point-to-point Fibre Channel still happens, but
it's easier to assume that it doesn't for the sake of argument). A NAS
presents network file systems (CIFS, GlusterFS, Lustre, NFS, Ceph,
whatever), though this also gets complicated when you start talking
about distributed clustered network file systems.
Anyway, what you have is neither of these. It's directly-attached
storage. It may work, but it's an unsupported configuration, and is
only shared storage in the sense that it has multiple controllers.
It's shared storage in every sense of the word. I would simply use an
FC domain and choose the LUNs as usual.
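Before creating that FC-type domain, it's worth confirming that every oVirt host sees the same LUN with the same identifier; a rough sketch (device paths are placeholders, and the scsi_id binary's location varies by distribution):

```shell
# Run on each oVirt host. The shared MSA LUN should show up as a
# multipath device with an identical WWID everywhere.
multipath -ll                        # list multipath maps and their WWIDs
lsscsi                               # confirm the MSA LUNs are visible on the SAS bus
/lib/udev/scsi_id -g -u /dev/sdb     # print the WWID for one path (placeholder device)
```

If the WWIDs match across hosts, the LUN should appear in the oVirt FC storage domain dialog just like a Fibre Channel LUN would.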
Post by Ryan Barry
- Attach it to a third server and export iSCSI LUNs from it.
- Attach it to a third server and export NFS from it.
- Attach it to multiple CentOS/Fedora servers, configure clustering (so
  you get fencing, a DLM, and the other requisites of a clustered
  filesystem), and use raw cLVM block devices or GFS2/OCFS filesystems
  as POSIXFS storage for oVirt.
These would be terrible choices for both performance and reliability.
It's exactly like fronting an FC LUN with all of that crud when you
could simply access the LUN directly. If the array's port count is a
problem, just put a SAS switch in between and you have an all-SAS
equivalent of a Fibre Channel SAN. This is exactly what we do in
production vSphere environments, and there are no technical reasons it
shouldn't work fine with oVirt.
_______________________________________________
Users mailing list
http://lists.ovirt.org/mailman/listinfo/users