[PATCH] Modify a vfs.fruit test to work on FreeBSD with fruit:resource=stream

Ralph Böhme slow at samba.org
Sun May 27 10:21:12 UTC 2018


Hi Timur,

On Sun, May 27, 2018 at 01:16:17AM +0200, Timur I. Bakeyev wrote:
> On 26 May 2018 at 15:12, Ralph Böhme via samba-technical <
> samba-technical at lists.samba.org> wrote:
> > Attached patch modifies a vfs.fruit test to make it work against FreeBSD
> > with fruit:resource=stream.
> >
> > With fruit:resource=stream and vfs_streams_xattr stacked behind vfs_fruit,
> > the AFP_Resource stream ends up being stored in filesystem xattrs. FreeBSD
> > ZFS seems to support largish xattrs, but 1 GB seems to be too much. As a
> > realistic maximum size for AFP_Resource is 64 MB (the largest one I've
> > ever seen was 8 MB), the torture test is updated to write a 64 MB resource
> > fork instead of 1 GB.
> >
> 
> Sorry, I seem a bit lost here - what are we trying to address with this
> test?

I was running the following test

$ bin/smbtorture -U slow%x //localhost/test "vfs.fruit.resource fork IO"

...on a FreeBSD VM and noticed that smbd crashed in memcpy:

...
#9  <signal handler called>
#10 0x0000000805d6c196 in memcpy () from /lib/libc.so.7
#11 0x000000081be0e5be in streams_xattr_pwrite (handle=0x814121040, fsp=0x814070c60, data=0x814050790, n=10, offset=4294967296) at ../source3/modules/vfs_streams_xattr.c:1024
#12 0x0000000801e3b0d7 in smb_vfs_call_pwrite (handle=0x814121040, fsp=0x814070c60, data=0x814050790, n=10, offset=4294967296) at ../source3/smbd/vfs.c:1737
#13 0x0000000801e3af9a in vfs_pwrite_data (req=0x814057560, fsp=0x814070c60, buffer=0x814050790 "1234567890c", N=10, offset=4294967296) at ../source3/smbd/vfs.c:460
#14 0x0000000801d99c29 in real_write_file (req=0x814057560, fsp=0x814070c60, data=0x814050790 "1234567890c", pos=4294967296, n=10) at ../source3/smbd/fileio.c:125
#15 0x0000000801d9887e in write_file (req=0x814057560, fsp=0x814070c60, data=0x814050790 "1234567890c", pos=4294967296, n=10) at ../source3/smbd/fileio.c:377
#16 0x0000000801e9b511 in smbd_smb2_write_send (mem_ctx=0x81407df60, ev=0x814057060, smb2req=0x81407df60, fsp=0x814070c60, in_data=..., in_offset=4294967296, in_flags=0) at ../source3/smbd/smb2_write.c:368
#17 0x0000000801e9af81 in smbd_smb2_request_process_write (req=0x81407df60) at ../source3/smbd/smb2_write.c:112
#18 0x0000000801e7c724 in smbd_smb2_request_dispatch (req=0x81407df60) at ../source3/smbd/smb2_server.c:2674
#19 0x0000000801e8497a in smbd_smb2_io_handler (xconn=0x81406fd60, fde_flags=1) at ../source3/smbd/smb2_server.c:3946
#20 0x0000000801e84036 in smbd_smb2_connection_handler (ev=0x814057060, fde=0x81404d320, flags=1, private_data=0x81406fd60) at ../source3/smbd/smb2_server.c:3984
#21 0x0000000802a9beec in poll_event_loop_poll (ev=0x814057060, tvalp=0x7fffffffdf88) at ../lib/tevent/tevent_poll.c:605
#22 0x0000000802a9b336 in poll_event_loop_once (ev=0x814057060, location=0x801fedc22 "../source3/smbd/process.c:4123") at ../lib/tevent/tevent_poll.c:662
#23 0x0000000802a97868 in _tevent_loop_once (ev=0x814057060, location=0x801fedc22 "../source3/smbd/process.c:4123") at ../lib/tevent/tevent.c:725
#24 0x0000000802a9b3d8 in poll_event_loop_wait (ev=0x814057060, location=0x801fedc22 "../source3/smbd/process.c:4123") at ../lib/tevent/tevent_poll.c:678
#25 0x0000000802a97d20 in _tevent_loop_wait (ev=0x814057060, location=0x801fedc22 "../source3/smbd/process.c:4123") at ../lib/tevent/tevent.c:867
#26 0x0000000801e5df0f in smbd_process (ev_ctx=0x814057060, msg_ctx=0x81404f300, sock_fd=37, interactive=false) at ../source3/smbd/process.c:4123
#27 0x0000000001033f6f in smbd_accept_connection (ev=0x814057060, fde=0x81404d320, flags=1, private_data=0x814120000) at ../source3/smbd/server.c:1031
#28 0x0000000802a9beec in poll_event_loop_poll (ev=0x814057060, tvalp=0x7fffffffe428) at ../lib/tevent/tevent_poll.c:605
#29 0x0000000802a9b336 in poll_event_loop_once (ev=0x814057060, location=0x1038682 "../source3/smbd/server.c:1383") at ../lib/tevent/tevent_poll.c:662
#30 0x0000000802a97868 in _tevent_loop_once (ev=0x814057060, location=0x1038682 "../source3/smbd/server.c:1383") at ../lib/tevent/tevent.c:725
#31 0x0000000802a9b3d8 in poll_event_loop_wait (ev=0x814057060, location=0x1038682 "../source3/smbd/server.c:1383") at ../lib/tevent/tevent_poll.c:678
#32 0x0000000802a97d20 in _tevent_loop_wait (ev=0x814057060, location=0x1038682 "../source3/smbd/server.c:1383") at ../lib/tevent/tevent.c:867
#33 0x0000000001030b18 in smbd_parent_loop (ev_ctx=0x814057060, parent=0x81404f5a0) at ../source3/smbd/server.c:1383
#34 0x000000000102e53f in main (argc=2, argv=0x7fffffffec00) at ../source3/smbd/server.c:2148

I have no intention of investigating the root cause of why and where the test
fails when vfs_streams_xattr has to do a 1 GB allocation.
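For context on why a pwrite triggers such a large allocation: an xattr-backed
stream is a single blob, so a write at offset O has to materialize a buffer of
at least O + n bytes and store the whole thing back. Below is a rough Python
model of that read-modify-write pattern - not the actual vfs_streams_xattr
code, and the function name is purely illustrative:

```python
def xattr_pwrite(value: bytes, data: bytes, offset: int) -> bytes:
    """Model a pwrite against an xattr-backed stream: the backing xattr
    is one blob, so the write must build a buffer covering at least
    offset + len(data) bytes before storing it back in one piece."""
    needed = offset + len(data)
    # Zero-filled buffer, like the hole a sparse file write would leave.
    buf = bytearray(max(len(value), needed))
    buf[:len(value)] = value
    buf[offset:offset + len(data)] = data
    return bytes(buf)

# The backtrace above shows n=10 at offset=4294967296, so the equivalent
# buffer here would be roughly 4 GiB - hence the pain of large offsets.
grown = xattr_pwrite(b"", b"1234567890", 16)
```

With a tiny offset this is harmless; with the offsets the torture test uses,
the same logic forces a gigabyte-scale allocation inside smbd.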

> FreeBSD ATM natively supports UFS2 and ZFS for its operations. Both have
> different constraints regarding xattrs (extattrs).
> 
> UFS2 supports extattrs of up to 64K in TOTAL, i.e. the joint size of all
> extattrs must stay within the 64K limit.
> 
> ZFS, on the other hand, doesn't limit extattr size at all; they are first-class
> citizens there and have the same size limits as plain files.

So FreeBSD inherited that part from Solaris? Good.

> In theory, extattrs on ZFS may have their own extattrs and so on (We need to go
> deeper! (c)). In practice FreeBSD doesn't support that, and it also has a very
> crappy API for extattrs, where you have to write the whole data blob in one
> write, regardless of its size.

Yeah, if FreeBSD supported the Solaris attropen() etc. API, that would be
cool.

> That said, both 64 MB and 1 GB xattrs will behave the same - on UFS2 writing
> them will fail, on ZFS both will succeed.

Not quite: smbd may still crash due to the large memory allocation, see above.
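Putting the constraints from this thread together (the numbers are the ones
quoted above; the helper and its names are illustrative, not any real Samba or
FreeBSD API):

```python
UFS2_TOTAL_EXTATTR_LIMIT = 64 * 1024  # joint size of ALL extattrs on UFS2

def fork_write_outcome(fork_size: int, backend: str,
                       existing_xattr_bytes: int = 0) -> str:
    """Predict whether storing an AFP_Resource fork of fork_size bytes
    as a filesystem xattr can work on a FreeBSD filesystem, per the
    limits described in this thread."""
    if backend == "ufs2":
        # UFS2: all extattrs together must fit within 64K.
        if existing_xattr_bytes + fork_size > UFS2_TOTAL_EXTATTR_LIMIT:
            return "fails: exceeds UFS2 64K total extattr limit"
        return "ok"
    if backend == "zfs":
        # ZFS: extattrs are sized like regular files, so both 64 MB and
        # 1 GB fit on disk - but the one-shot extattr write API still
        # forces smbd to hold the full fork in one buffer.
        return "ok on disk, but needs a single fork-sized buffer in smbd"
    raise ValueError(backend)
```

For example, fork_write_outcome(64 * 1024 * 1024, "ufs2") fails the 64K check,
while the same fork on "zfs" is fine on disk yet still costs a 64 MB
allocation in smbd - which is why shrinking the test from 1 GB to 64 MB helps.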

> So, that change will possibly improve total execution time only.
> 
> As for observed ADS sizes, we've seen a 10 MB one in the wild, generated
> by Adobe software on macOS.

Oh, you won! :) Adobe Photoshop is the only remaining application writing
resource forks, although iirc they stopped doing this in the latest release.

-Ralph

-slow

-- 
Ralph Boehme, Samba Team       https://samba.org/
Samba Developer, SerNet GmbH   https://sernet.de/en/samba/
GPG Key Fingerprint:           FAE2 C608 8A24 2520 51C5
                               59E4 AA1E 9B71 2639 9E46
