unpredictable behaviour
tim.conway at philips.com
Tue Nov 6 10:04:00 EST 2001
I'd thought of the 32v64 issue. Here's a snatch of a trace (truss: I am
indeed running Solaris 7, as you mentioned).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
5089: read(0, " 0 , 1 2 0 )\0 _ c e l l".., 32768) = 32768
5085: read(8, "\0\0\0\0", 4) = 4
5085: poll(0xEFFF6B98, 1, 60000) = 1
5085: read(8, "BC02\0\0", 4) = 4
5085: poll(0xEFFF6B98, 1, 60000) = 1
5085: read(8, "\0\0\0\0", 4) = 4
5085: open64("/sql/rsync/test/tools/DI/dis2.2.1/DI/tools/VLSIMemoryIntegrator/solaris_bin/vlsi_PhantomGen", O_RDONLY) = 6
5085: fstat64(6, 0xEFFFF808) = 0
5085: poll(0xEFFF7098, 1, 60000) = 1
5089: write(1, " 0 , 1 2 0 )\0 _ c e l l".., 32768) = 32768
5089: poll(0xEFFFE000, 1, 60000) = 1
5089: read(0, "\0\0\0\0", 4) = 4
5089: poll(0xEFFFE0E8, 1, 60000) = 1
5085: write(4, "BB1F\0\0", 4) = 4
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It's not file size anyway... the example I gave showed that multiple
duplicate runs failed on different files, and that one randomly chosen
file that had failed was very small (<1M). That was what I meant by
unpredictable. I was hoping to find a certain file content, or exact file
size, or something, but it seems to be the product of randomness rather
than of any particular file. No file fails in any two runs. That was why
I was wondering about total memory issues... maybe something is getting
close to using it all. There's 3G, and plenty of swap, but if there's a
case in which memory is pinned, that might make a difference. I've not
heard of pinning memory in any context except AIX, so it may be
irrelevant. It means making the allocation unpageable: it never leaves
physical memory. I don't think that's even available in most unices, but
just in case, I thought I'd bring it up.
Tim Conway
tim.conway at philips.com
303.682.4917
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, n9hmg on AIM
perl -e 'print pack(nnnnnnnnnnnn,
19061,29556,8289,28271,29800,25970,8304,25970,27680,26721,25451,25970),
".\n" '
"There are some who call me.... Tim?"
Dave Dykstra <dwd at bell-labs.com>
11/05/2001 01:58 PM
To: Tim Conway/LMT/SC/PHILIPS at AMEC
cc: rsync at lists.samba.org
Subject: Re: unpredictable behaviour
Classification:
On Fri, Nov 02, 2001 at 08:55:14AM -0800, tim.conway at philips.com wrote:
> I see very odd results from rsync 2.4.7pre1, the latest cvs version
> (sept 12, i think was the last modified file).
...
> It's about 128Gb of data in about 2.8M files.
> Any idea what this randomness is? might the "Value too large for
> defined data type" be thrown if the system runs out of memory? These
> jobs get up to over a half a gig memory used.
> It was compiled (and is running) on a 64-bit machine.
I don't think it would get that message from running out of memory,
although a process size of >512MB of memory is awfully big.
The message "Value too large for defined data type" is what is printed
for an EOVERFLOW error, at least on Solaris 7. What operating system are
you using? It looks like your messages all say "readlink", which is
printed in the function make_file() in flist.c after a failed call to
readlink_stat(). readlink_stat() calls do_lstat() in syscall.c, which
calls lstat64() if HAVE_OFF64_T is defined; otherwise it calls lstat().
Check your config.h to see if HAVE_OFF64_T is defined. With that much
data I assume you've got large filesystems, and you would need the
64-bit interface.
rsync 2.4.7pre1 uses a relatively new autoconf rule for support of 64 bit
systems. You didn't happen to regenerate the configure script with
autoconf, did you? If you do, it has to be version 2.52 or later.
- Dave Dykstra