Copying TBs -> error -> work around

Philip Rhoades phil at
Thu Sep 17 05:51:52 UTC 2020


On 2020-09-14 10:07, raf via rsync wrote:
> On Fri, Sep 11, 2020 at 10:53:14AM +1000, Philip Rhoades via rsync
> <rsync at> wrote:
>> Roland,
>> On 2020-09-10 21:27, Roland wrote:
>> > > with rsync hanging - after breakout on /home for writing I then get:
>> > > "Read-only file system"
>> >
>> > if your filesystem switches to read-only, you have a serious problem
>> > with your system/storage, not with rsync.
>> >
>> > rsync (or the workload) is simply triggering the problem.
>> Thanks for the response . .
>> Hmm . . but the drive that goes read-only is being read FROM, not TO
>> . . it is hard to see how that could be an issue?
>> The backstory is that a relatively recent internal 8TB Seagate
>> Barracuda had its 7.2TB sda5 (/home) partition corrupted - which was
>> suspicious in itself but not impossible of course - so I had to
>> switch temporarily to an external 4TB USB drive (a backup drive that
>> was already up-to-date) for /home. So now this exercise is rsyncing
>> back to a NEW internal 8TB Seagate Barracuda (sda5 again) . .
>> If you are correct that rsync is simply triggering an existing
>> problem on the 4TB USB drive, would that problem be recognised by an
>> fsck (ext4)? I will check this out after I switch over to the new
>> internal sda5 for /home.
>> Thanks,
>> Phil.
> file systems can be remounted read only when there are
> too many errors. perhaps that applies to read errors as
> well, not just write errors. check logs for i/o errors.
> if it were i/o errors that caused the kernel to remount
> the file system read only, it should have logged those
> errors. and you should be able to use fsck with a usb
> drive.
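
raf's suggestions above boil down to a few commands - a minimal sketch, reusing the /dev/sdc device name from this thread (adjust to your system):

```shell
# I/O errors serious enough to flip ext4 to read-only get logged by
# the kernel:
dmesg | grep -iE 'i/o error|ext4-fs error|remount'

# What ext4 does on errors is tunable; the filesystem's on-disk default
# is shown by tune2fs, and the errors=remount-ro mount option overrides
# it:
tune2fs -l /dev/sdc | grep -i 'errors behavior'

# fsck only while the drive is unmounted; -n does a read-only dry run
# before committing to repairs with -y:
umount /dev/sdc
fsck.ext4 -n /dev/sdc
```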

Hmm . . well, I gave up trying to rsync nearly the whole 4TB at once and
broke it down into individual dirs as I described in the OP, but after
that I did actually look at the 4TB USB "from" drive and there wasn't
much wrong with it:

# fsck.ext4 /dev/sdc -y
e2fsck 1.45.3 (14-Jul-2019)
/dev/sdc contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inode 9860147 extent tree (at level 1) could be shorter.  Optimize? yes

Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/sdc: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdc: 10583339/244195328 files (0.4% non-contiguous), 806625360/976754645 blocks
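
For the record, the per-directory approach mentioned above can be sketched as a loop - the mount points (/mnt/usb4tb, /mnt/new8tb) are hypothetical names, not the ones from this thread:

```shell
# Copy one top-level directory at a time so a hang or error can be
# pinned to a single directory; failures are logged and the loop
# moves on:
for d in /mnt/usb4tb/home/*/; do
    rsync -aHAX --info=progress2 "${d%/}" /mnt/new8tb/home/ \
        || echo "failed: $d" >> /tmp/rsync_failures.log
done
```

Stripping the trailing slash with `${d%/}` makes rsync copy the directory itself rather than spilling its contents into the destination.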

So then, from the corrupted original /dev/sda5, I tried to create an
image on a second, new 8TB drive with an ext4 partition on the whole
drive:
# ddrescue /dev/sda5 -d -r3 /mnte/sda5_ddrescue.img 
GNU ddrescue 1.25
Press Ctrl-C to interrupt
      ipos:    9253 MB, non-trimmed:        0 B,  current rate:       0 
      opos:    9253 MB, non-scraped:        0 B,  average rate:  37768 
non-tried:    7849 GB,  bad-sector:        0 B,    error rate:       0 
   rescued:    9253 MB,   bad areas:        0,        run time:      4m  
pct rescued:    0.11%, read errors:        0,  remaining time:     14h 
                               time since last successful read:         
Copying non-tried blocks... Pass 1 (forwards)
ddrescue: Write error: Read-only file system
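
Before rerunning ddrescue it may be worth confirming whether the destination filesystem is the one being flipped to read-only - a sketch, assuming the image lives under /mnte as above (the mapfile name is hypothetical):

```shell
# Check whether the destination filesystem has been remounted
# read-only by the kernel:
findmnt -no OPTIONS /mnte | grep -qE '(^|,)ro(,|$)' && echo "/mnte is read-only"

# If it has, remount read-write - but only after finding the cause in
# the logs, or the kernel will likely flip it back:
mount -o remount,rw /mnte

# A mapfile (third argument) lets an interrupted ddrescue run resume
# where it stopped instead of starting from scratch:
ddrescue -d -r3 /dev/sda5 /mnte/sda5_ddrescue.img /mnte/sda5_ddrescue.map
```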

So I am getting a filesystem switched to read-only in _two different_
situations - I think there is some OS (or motherboard?) problem. After I
update from Fedora 31 to 32 on one of the new 8TB drives, I might go
through the exercise again to see whether the problem is still there . .
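
Before blaming the OS or motherboard, the drives' own SMART counters are worth a look - a sketch assuming smartmontools is installed and /dev/sda is the suspect drive (device names are hypothetical):

```shell
# Non-zero reallocated/pending/uncorrectable sector counts point at
# failing media rather than an OS or motherboard problem:
smartctl -a /dev/sda | grep -iE 'reallocated|pending|uncorrect'

# The drive's internal error log, then a short (~2 minute) self-test:
smartctl -l error /dev/sda
smartctl -t short /dev/sda
```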


Philip Rhoades

PO Box 896
Cowra  NSW  2794
E-mail:  phil at
