serverid(s)_exist, critical path work.
Volker Lendecke
Volker.Lendecke at SerNet.DE
Thu Feb 7 09:06:06 MST 2013
On Thu, Feb 07, 2013 at 10:40:48AM -0500, simo wrote:
> Well, if you want to avoid the syscall, you need to always consider
> that rogue writes may destroy memory locations. It's either/or, so I
> guess what you need to choose is whether you can accept the risk and
> do everything in user space using shared memory or similar techniques,
> or whether the risk is too high and you need the kernel to act as
> arbiter.
>
> I wonder if there is a way to set canaries from user space to prevent
> processes from accidentally clobbering memory areas with a large
> memcpy or similar ... maybe a clever arrangement of mmap regions ...
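
(For the mmap idea, a hedged, untested sketch, nothing Samba-specific:
bracket the shared region with PROT_NONE guard pages, so a runaway
memcpy faults instead of silently trashing a neighbour.)

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch only: surround a region with inaccessible guard pages.
 * len must be a multiple of the page size. */
static void *alloc_guarded(size_t len)
{
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t total = page + len + page;       /* guard + data + guard */
        char *base;

        base = mmap(NULL, total, PROT_READ|PROT_WRITE,
                    MAP_SHARED|MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) {
                return NULL;
        }
        /* Make the first and last page inaccessible. */
        if (mprotect(base, page, PROT_NONE) != 0 ||
            mprotect(base + page + len, page, PROT_NONE) != 0) {
                munmap(base, total);
                return NULL;
        }
        return base + page;                     /* usable region */
}

int main(void)
{
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        char *buf = alloc_guarded(4 * page);

        if (buf == NULL) {
                perror("alloc_guarded");
                return 1;
        }
        buf[0] = 'x';                   /* fine */
        /* buf[4 * page] = 'x';         SIGSEGVs on the guard page */
        printf("guarded region at %p\n", (void *)buf);
        return 0;
}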
>
> I've been using memory barriers and synchronization primitives to
> implement single-writer/multiple-reader schemes without locking, and
> robust mutexes basically do the same, I think.
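
(The usual barrier-based recipe for that is a seqlock. A minimal
single-value sketch, with the fence placement following the common
pattern; the finer points of the C11 memory model are glossed over, so
treat it as an illustration, not a drop-in:)

#include <stdatomic.h>
#include <stdint.h>

struct seqval {
        atomic_uint seq;        /* even = stable, odd = writer busy */
        _Atomic uint64_t value; /* stand-in for the real record */
};

/* Single writer: bump seq to odd, write, bump seq back to even. */
static void publish(struct seqval *p, uint64_t v)
{
        unsigned s = atomic_load_explicit(&p->seq, memory_order_relaxed);

        atomic_store_explicit(&p->seq, s + 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);  /* odd before data */
        atomic_store_explicit(&p->value, v, memory_order_relaxed);
        atomic_store_explicit(&p->seq, s + 2, memory_order_release);
}

/* Any number of readers: retry while the writer is busy or raced us. */
static uint64_t read_stable(struct seqval *p)
{
        unsigned s1, s2;
        uint64_t v;

        do {
                s1 = atomic_load_explicit(&p->seq, memory_order_acquire);
                v = atomic_load_explicit(&p->value, memory_order_relaxed);
                atomic_thread_fence(memory_order_acquire);
                s2 = atomic_load_explicit(&p->seq, memory_order_relaxed);
        } while ((s1 & 1) || s1 != s2);
        return v;
}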
>
> Maybe there is a way to combine robust mutexes for performance with a
> fallback to fcntl locks for recovering from very bad situations?
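
(The robust-mutex half of that could look roughly like this; the mutex
has to live in shared memory, and fix_up_state() is a hypothetical hook
where a recovery or fallback path would sit:)

#include <pthread.h>
#include <errno.h>

static int robust_init(pthread_mutex_t *m)
{
        pthread_mutexattr_t a;
        int ret;

        pthread_mutexattr_init(&a);
        pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
        ret = pthread_mutex_init(m, &a);  /* m must be in shared memory */
        pthread_mutexattr_destroy(&a);
        return ret;
}

static int robust_lock(pthread_mutex_t *m)
{
        int ret = pthread_mutex_lock(m);

        if (ret == EOWNERDEAD) {
                /*
                 * The previous owner died holding the lock. Repair
                 * whatever it protected, then mark the mutex usable
                 * again. fix_up_state() is a placeholder for that.
                 */
                /* fix_up_state(); */
                ret = pthread_mutex_consistent(m);
        }
        return ret;
}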
Falling back to fcntl under high pressure made the situation
much worse in my tests, because a single system-wide spinlock
gates every fcntl lock operation. I made the reference to
epoll because I can imagine it is a mechanism that does not
die under 10,000 simultaneous events.
Use the mutexes, and fall back to a locking daemon that
queues requests nicely in user space and is talked to via
epoll. We just have to fill in the details ... :-)
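
(To make that concrete, the daemon's event side could be as simple as
this untested sketch; the wire format and the actual queueing behind
handle_lock_request() are left open, and all names are illustrative:)

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_EVENTS 64

/* Placeholder: parse one request, grant it or queue it behind the
 * current holder, write() the reply once the lock is granted. */
static void handle_lock_request(int fd)
{
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));

        if (n <= 0) {
                close(fd);      /* client gone; drop its queued requests */
                return;
        }
        /* ... parse buf, grant or queue, reply ... */
}

static void event_loop(int listen_fd)
{
        struct epoll_event ev, events[MAX_EVENTS];
        int epfd = epoll_create1(0);

        if (epfd < 0) {
                perror("epoll_create1");
                exit(1);
        }
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
                int i, n = epoll_wait(epfd, events, MAX_EVENTS, -1);

                for (i = 0; i < n; i++) {
                        int fd = events[i].data.fd;

                        if (fd == listen_fd) {
                                /* New client: watch its socket, too. */
                                int c = accept(listen_fd, NULL, NULL);

                                if (c < 0) {
                                        continue;
                                }
                                ev.events = EPOLLIN;
                                ev.data.fd = c;
                                epoll_ctl(epfd, EPOLL_CTL_ADD, c, &ev);
                        } else {
                                handle_lock_request(fd);
                        }
                }
        }
}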
We need to coordinate the share modes and leases of 30,000
clients connected to a cluster, all opening the share root
directory.
Volker
--
SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
http://www.sernet.de, mailto:kontakt at sernet.de