[clug] Redundant file systems the easy way

Michael Cohen michael.cohen at netspeed.com.au
Fri Oct 14 06:47:01 GMT 2005


Hi Paul,
  I have had similar thoughts recently with regard to redundancy in a
  distributed environment, and I have even had some ideas about how it
  might work. A number of nodes all offer storage space and join a large
  peer-to-peer group. I was thinking of using a multicast group to
  transfer the data - this way all nodes can mirror files as they are
  being transmitted.
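
  As a very rough sketch of that multicast idea (in Python, with the
  group address, port and chunk size just made up for illustration):
  every node joins one multicast group, so a file streamed once can be
  mirrored by every listener at the same time.

    import socket
    import struct

    GROUP = "239.255.42.42"   # assumed site-local multicast group
    PORT = 5007
    CHUNK = 1400              # keep each datagram under a typical MTU

    def sender(path):
        """Stream a file into the multicast group, one datagram per chunk."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                sock.sendto(chunk, (GROUP, PORT))

    def listener():
        """Join the group and mirror whatever goes past into a local cache file."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        with open("cache.part", "wb") as out:
            while True:
                data, _addr = sock.recvfrom(CHUNK)
                out.write(data)

  UDP multicast is unreliable, of course, so a real version would need
  sequence numbers and some way to ask for missing chunks.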

  When a single node wants to get a file, it multicasts for it, and then
  one of the other nodes might decide to transmit it. Other nodes might
  also copy it into their local cache.
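
  The request side could be as small as the following sketch (again
  Python; the REQUEST/OFFER message names and the JSON shape are
  invented): a node multicasts a query for a file name, and any node
  holding it answers with a unicast offer.

    import json
    import socket

    GROUP, PORT = "239.255.42.42", 5008   # assumed control channel

    def request_file(name):
        """Ask the group who has a file; return the first peer that offers it."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        sock.sendto(json.dumps({"type": "REQUEST", "file": name}).encode(),
                    (GROUP, PORT))
        try:
            reply, addr = sock.recvfrom(4096)
            return addr[0], json.loads(reply)   # fetch the data from this peer
        except socket.timeout:
            return None, None                   # nobody answered

    def answer_requests(local_index, group_sock):
        """Reply to any request we can satisfy; group_sock has already joined the group."""
        while True:
            data, addr = group_sock.recvfrom(4096)
            msg = json.loads(data)
            if msg.get("type") == "REQUEST" and msg["file"] in local_index:
                group_sock.sendto(
                    json.dumps({"type": "OFFER", "file": msg["file"]}).encode(),
                    addr)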

  The idea is to have a normal, standard file system actually store the
  data (be it FAT, ext3 or whatever), so in the event of a failure the
  files can be found by regular means, but the p2p software maintains
  copies of files. When a single node wants to clean up, it can work out
  which files have sufficient copies around the network so that they can
  be deleted.
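
  The cleanup rule might look something like this sketch, where
  count_remote_copies() is a stand-in for asking the group (say, via the
  handshake above) and MIN_COPIES is whatever redundancy level the user
  wants:

    import os

    MIN_COPIES = 2   # copies that must survive elsewhere before we delete ours

    def reclaim_space(candidates, count_remote_copies):
        """Delete local copies of files that are sufficiently replicated elsewhere."""
        freed = 0
        for path in candidates:
            if count_remote_copies(path) >= MIN_COPIES:
                freed += os.path.getsize(path)
                os.remove(path)
        return freed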

  This is particularly important for MythTV because you end up having
  3-4 160GB disks around the house (one on the backend, one on each
  frontend). It would be really nice to make them all look like one big
  disk. If you need redundancy you could require at least two nodes to
  have a copy of everything, and the system would rebalance files
  automatically and transparently to ensure this is done. If two of the
  nodes are down at the moment, I don't mind getting an I/O error. In
  other words, it doesn't need to be that reliable.
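
  The rebalancing pass could be the mirror image of the cleanup one.
  This is only a sketch; count_remote_copies(), push_to_peer() and the
  peers' free_space attribute are all made-up hooks, not a real API:

    REQUIRED_COPIES = 2

    def rebalance(local_files, peers, count_remote_copies, push_to_peer):
        """Push under-replicated local files to another node."""
        for path in local_files:
            copies = 1 + count_remote_copies(path)   # our copy plus the group's
            if copies < REQUIRED_COPIES and peers:
                # Naive placement: send to whichever peer has the most free space.
                target = max(peers, key=lambda p: p.free_space)
                push_to_peer(target, path)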

Michael.
  
On Fri, Oct 14, 2005 at 01:12:58PM +1000, Paul Wayper wrote:
> People,
> 
> Having recently had to install LVM across a 250GB and 160GB
> disk (for MythTV, what else?), and finding a few SMART
> errors on the latter, I'm becoming more interested in file
> systems and underlying transports that provide some form of
> data redundancy.  Of course, if I mirror the disks I get
> full redundancy but lose half the space; if I stripe them I
> get the space but lose any chance of surviving a single disk
> failure.  I could do Multiple Device style RAID, but then I
> have to have disks that are pretty much equal in size.  All
> these solutions force you to do things a certain way in
> order to gain a large chunk of security, and I think there's
> a better way.
> 
> I've got an idea of how to create a file system that is
> distributed across multiple disks that transparently copies
> various bits of the file system across all the disks in the
> system.  The big problem is that things like LVM and MD
> exist to abstract away information about where the data is
> stored from the file system that's doing the storing; and,
> vice versa, LVM and MD don't pretend to know anything about
> the file system on top of them.  You've taken away the very
> information a file system has to know in order to protect
> itself.
> 
> Obviously this is of limited interest (in a programming,
> "how do I write one of these" sense) to most of the CLUG
> list.  My question is simply where should I go and who
> should I speak with to work on this idea?
> 
> Thanks in advance,
> 
> Paul
> -- 
> linux mailing list
> linux at lists.samba.org
> https://lists.samba.org/mailman/listinfo/linux

