[Samba] Samba memory usage - how big is it?
Mike Gallamore
mike at mpi-cbg.de
Thu Nov 13 12:34:28 GMT 2008
How large is large for an smbd process? Does it just use whatever memory
is available? My fileserver at work (32-core SPARC T2 with 32GB RAM)
currently has 117 smbd processes running, each around 29M total and 24M
resident. My server's processes look more than twice the size of the
ones below for some reason. Is that just an architecture difference, or
does Samba allocate more space to a process when there is room for it?
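
For reference, here is roughly how I'm tallying the per-smbd numbers (a
sketch using standard Linux procps ps and awk; on my Solaris box the
equivalents would be prstat and pmap):

   # one row per smbd: PID, virtual size and resident size, in kB
   ps -C smbd -o pid,vsz,rss,comm

   # total resident set across all smbds, in kB (overcounts shared pages)
   ps -C smbd -o rss= | awk '{sum += $1} END {print sum " kB"}'

Bear in mind that summing RSS double-counts pages the smbds share (the
shared libraries showing up in top's SHR column), so "pmap -x <pid>" on a
single smbd gives a better idea of what each connection really costs.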
On Nov 13, 2008, at 1:14 PM, Stéphane PURNELLE wrote:
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 10501 root 15 0 1690m 276m 828 S 0.0 27.4 1:01.48 820
>> 12333 tarmini 25 0 31128 26m 1532 R 26.8 2.6 0:00.81 cc1
>> 12342 tarmini 25 0 28592 22m 1532 R 24.8 2.3 0:00.75 cc1
>> 2577 root 16 0 31236 5408 4136 S 0.0 0.5 0:02.73 klnagent
>> 12351 tarmini 17 0 10140 5156 1520 R 4.0 0.5 0:00.12 cc1
>> 1732 root 16 0 12612 4952 4048 S 0.0 0.5 0:01.67 smbd
>> 13725 root 16 0 12760 4920 3952 S 0.0 0.5 0:06.74 smbd
>
>
> The '820' process uses 1690m (megabytes) of virtual allocation and 276m
> of resident memory.
> smbd uses 12612 kB of virtual allocation and 4952 kB of resident memory.
>
> extract of man top:
>
>    o: VIRT -- Virtual Image (kb)
>       The total amount of virtual memory used by the task. It includes
>       all code, data and shared libraries plus pages that have been
>       swapped out. (Note: you can define the STATSIZE=1 environment
>       variable and the VIRT will be calculated from the /proc/#/state
>       VmSize field.)
>
>       VIRT = SWAP + RES.
>
>    p: SWAP -- Swapped size (kb)
>       The swapped out portion of a task's total virtual memory image.
>
>    q: RES -- Resident size (kb)
>       The non-swapped physical memory a task has used.
>
>       RES = CODE + DATA.
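
(A quick sanity check against those formulas, assuming this top still
uses the old VIRT = SWAP + RES accounting: for the smbd at PID 1732 in
the listing,

   # SWAP = VIRT - RES, in kB
   echo $((12612 - 4952))   # prints 7660

so roughly 7.6 MB of its 12 MB virtual image counts as swapped out, and
only ~5 MB is actually resident.)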
>
>
>
> -----------------------------------
> Stéphane PURNELLE stephane.purnelle at corman.be
> Service Informatique Corman S.A. Tel : 00 32 087/342467
>
> samba-bounces+stephane.purnelle=corman.be at lists.samba.org wrote on
> 13/11/2008 13:05:04:
>
>> Well, this is my current top. You can see that the smbd processes use
>> about 12 MB each on average. I'll check this '820', even though I'm
>> not really sure what it is. Thank you.
>>
>>
>> top - 18:57:30 up 4:30, 4 users, load average: 5.06, 3.36, 2.05
>> Tasks: 162 total, 6 running, 156 sleeping, 0 stopped, 0 zombie
>> Cpu(s): 91.2% us, 8.1% sy, 0.0% ni, 0.3% id, 0.2% wa, 0.1% hi, 0.0% si
>> Mem: 1034040k total, 449740k used, 584300k free, 920k buffers
>> Swap: 2031608k total, 131240k used, 1900368k free, 26144k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 10501 root 15 0 1690m 276m 828 S 0.0 27.4 1:01.48 820
>> 12333 tarmini 25 0 31128 26m 1532 R 26.8 2.6 0:00.81 cc1
>> 12342 tarmini 25 0 28592 22m 1532 R 24.8 2.3 0:00.75 cc1
>> 2577 root 16 0 31236 5408 4136 S 0.0 0.5 0:02.73 klnagent
>> 12351 tarmini 17 0 10140 5156 1520 R 4.0 0.5 0:00.12 cc1
>> 1732 root 16 0 12612 4952 4048 S 0.0 0.5 0:01.67 smbd
>> 13725 root 16 0 12760 4920 3952 S 0.0 0.5 0:06.74 smbd
>> 16248 ton.mart 15 0 12632 4848 3888 S 0.0 0.5 0:01.28 smbd
>> 8280 root 16 0 12640 4784 3920 S 0.0 0.5 0:01.51 smbd
>> 15274 root 16 0 12452 4776 3880 S 0.0 0.5 0:03.24 smbd
>> 26411 nobody 15 0 12400 4748 3988 S 1.7 0.5 0:01.32 smbd
>> 7157 root 16 0 12596 4720 3760 S 0.0 0.5 0:01.57 smbd
>> 28634 root 16 0 12356 4688 3860 S 0.0 0.5 0:00.83 smbd
>> 15270 root 16 0 12376 4620 3936 S 0.0 0.4 0:00.21 smbd
>> 15046 mayan.lo 16 0 12372 4600 3816 S 0.0 0.4 0:00.80 smbd
>> 13742 root 16 0 12412 4588 3904 S 0.0 0.4 0:00.15 smbd
>> 4737 neru.saf 16 0 12444 4576 3868 S 0.0 0.4 0:00.86 smbd
>> 13733 takeshi. 16 0 12404 4560 3892 S 0.0 0.4 0:00.92 smbd
>> 13722 root 16 0 12372 4476 3824 S 0.0 0.4 0:00.17 smbd
>> 13735 root 16 0 12396 4412 3748 S 0.0 0.4 0:00.05 smbd
>> 15859 root 16 0 12276 4404 3700 S 0.0 0.4 0:00.06 smbd
>> 5099 root 16 0 12344 4400 3756 S 0.0 0.4 0:00.15 smbd
>> 6849 root 16 0 12384 4400 3752 S 0.0 0.4 0:00.05 smbd
>> 15053 petrus.t 15 0 12236 4384 3732 S 0.0 0.4 0:00.74 smbd
>> 15278 root 16 0 12276 4384 3780 S 0.0 0.4 0:00.10 smbd
>> 4705 petrus.t 15 0 12324 4348 3672 S 0.0 0.4 0:00.19 smbd
>> 11060 root 16 0 12368 4344 3716 S 0.0 0.4 0:00.06 smbd
>> 13720 root 16 0 12356 4344 3680 S 0.0 0.4 0:00.02 smbd
>> 18499 root 16 0 12308 4316 3668 S 0.0 0.4 0:00.10 smbd
>> 13753 samsari 16 0 12428 4304 3636 S 0.0 0.4 0:00.03 smbd
>> 29261 son.murt 16 0 12352 4300 3656 S 0.0 0.4 0:00.15 smbd
>> 9134 security 16 0 12440 4292 3636 S 0.0 0.4 0:00.04 smbd
>> 22912 root 16 0 12320 4284 3648 S 0.0 0.4 0:00.09 smbd
>> 13730 root 16 0 12372 4280 3584 S 0.0 0.4 0:00.03 smbd
>> 12360 tarmini 25 0 8424 4260 1472 R 3.0 0.4 0:00.09 cc1
>> 8009 root 16 0 12404 4248 3548 S 0.0 0.4 0:00.06 smbd
>> 2957 root 16 0 8964 4240 1272 S 0.0 0.4 0:05.59 hald
>>
>>
>>
>> On Thu, Nov 13, 2008 at 6:31 PM, Volker Lendecke
>> <Volker.Lendecke at sernet.de> wrote:
>>
>>> On Thu, Nov 13, 2008 at 06:23:45PM +0700, FC Mario Patty wrote:
>>>> Hi,
>>>>
>>>
>>>
>>> Ok, the largest smbd uses 5MB of memory (that "12752" is not
>>> relevant here). But -- what is this process '820'? *That*
>>> is your culprit.
>>>
>>> Volker
>>>
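
(To pin down what that '820' process actually is, /proc on Linux is the
obvious place to look; the PID 10501 below is taken from the top listing
above:

   # which binary the process was started from
   ls -l /proc/10501/exe

   # its full command line (NUL-separated, hence the tr)
   tr '\0' ' ' < /proc/10501/cmdline; echo

If the exe link shows "(deleted)" or the name looks wrong, the binary has
been removed or renamed, which with a 1.6 GB process would be worth
investigating.)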