[Samba] Application slow after migration

Scott Lovenberg scott.lovenberg at gmail.com
Thu Feb 7 16:06:35 GMT 2008


Felipe Martinez Hermo wrote:
>
>
> Scott Lovenberg wrote:
>>
>>
>> On Feb 6, 2008 4:19 AM, Felipe Martinez Hermo <felipe at galicia.ugt.org> wrote:
>>
>>
>>
>>     Sinisa Bandin wrote:
>>     >
>>     >
>>     > Felipe Martinez Hermo wrote:
>>     >>
>>     >>>> OK, so we're apples to apples, so to speak; the servers are
>>     >>>> tuned the same.  I'll assume your disks are tuned from hdparm
>>     >>>> and up to snuff, otherwise you wouldn't be tuning sockets ;).
>>     >>>> Did your old server have samba settings for oplocks set?
>>     >>>>
>>     >>>>
>>     >>>> --
>>     >>>> Peace and Blessings,
>>     >>>> -Scott.
>>     >>>>
>>     >>>> "Of course, that's just my opinion; I could be wrong"
>>     >>>> -Dennis Miller
>>     >>> Erm, sorry, I didn't catch that you had 2 .conf files there.
>>     >>> I'm back to the drawing board.  Sorry about that.  Anyone else
>>     >>> have any ideas?
>>     >> Yes, that's what's shocking me. Apparently we're apples to
>>     >> apples. Except for the kernel (new&slow 2.6.18-4-686 vs old&fast
>>     >> 2.6.8)
>>     >>
>>     >> I've sniffed both eth0 interfaces and I've got some more
>>     >> information. When talking to the slow server, the client needs
>>     >> to send 76 "TCP segment of a reassembled PDU" packets that are
>>     >> not sent when talking to the old and fast server.
>>     >>
>>     >> How can I work around this issue? Should I lower the server's
>>     >> MTU? How much?
>>     >>
>>     >> Thank you
>>     > Do you happen to have a Realtek 8169 based gigabit ethernet in
>>     > the new server?
>>     >
>>     > If you do, I had the same problem several times last year, and
>>     > solved all of them by changing motherboards (all were integrated,
>>     > and I like them to stay that way because I can achieve full
>>     > gigabit speed with several concurrent clients)
>>     >
>>     > Best regards,
>>     > Sinisa Bandin
>>     >
>>     >
>>
>>     No, the machines are out-of-the-box HP DL servers:
>>     Ethernet controller: Broadcom Corporation NetXtreme BCM5705_2
>>     Gigabit Ethernet (rev 03)
>>
>>     I've made a spreadsheet summarizing the wireshark results and
>>     comparing both servers. You can see it here:
>>     http://spreadsheets.google.com/ccc?key=pnLL2fInqFq2YKuZIphtQdA
>>
>>     It's meaningful that the fast server makes 406 Trans2 calls, while
>>     the slow server makes 616 calls to perform the same operation. The
>>     difference is mainly in QUERY_PATH_INFO (200 vs 305) and
>>     FIND_FIRST2 (94 vs 199) calls.
>>
>>     Next try: change the ethernet cable?  :-?
>>
>>
>>     --
>>     ==============================
>>     Felipe Martínez Hermo
>>     felipe at galicia.ugt.org
>>     fmartinez at galicia.ugt.org
>>     ==============================
>>     Servicios Informáticos
>>     UGT Galicia
>>     informatica at galicia.ugt.org
>>     ugtgalicia at gmail.com
>>     ==============================
>>
>>
>>
>> Hrm, are you using SACKs or DSACKs or tcp_low_latency in
>> /proc/sys/net/ipv4?  They didn't change the congestion control default
>> in your upstream kernel, did they?  It should be "reno" by default.
>> Doing a netstat -a, do you have many packets queued in either
>> direction?  This one is puzzling me.
>> -- 
>> Peace and Blessings,
>> -Scott. 
> Apparently everything is configured the same way in /proc/sys/net
> (both sack & dsack = 1). Regarding the kernel, the old&fast kernel is
> 2.6.8 (no /proc/sys/net/ipv4/tcp_congestion_control) while the
> new&slow is 2.6.18-4-686 and congestion control is bic:
>
> ugtgalicia at max:~$ cat /proc/sys/net/ipv4/tcp_congestion_control
> bic
>
> Should I try another congestion control algorithm?
>
> I've made this rudimentary test, and the old server is a little bit
> faster, but I don't know if it is meaningful at all.
>
> felipe at nils:~$ ping -i 0.2 fast_server
> --- fast_server ping statistics ---
> 2156 packets transmitted, 2156 received, 0% packet loss, time 431208ms
> rtt min/avg/max/mdev = 0.135/0.171/0.245/0.018 ms
>
> felipe at nils:~$ ping -i 0.2 slow_server
> --- slow_server ping statistics ---
> 2146 packets transmitted, 2146 received, 0% packet loss, time 429165ms
> rtt min/avg/max/mdev = 0.152/0.179/0.333/0.021 ms
>
>
> Regards,
>
try:
echo "reno" > /proc/sys/net/ipv4/tcp_congestion_control


That'll make sure the TCP/IP stack isn't skewing the tests with BIC's
more aggressive congestion window growth and such.  OK, that's one more
variable isolated... let's see what happens.  Sorry that this is taking
so long to troubleshoot; I'm an armchair administrator.  Actually, I'm
a software development major in college, but either way, I'm a bit out
of my element compared to the professional administrators.
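
P.S.  If you want to sanity-check the Trans2 numbers in your spreadsheet
straight from the captures, something like the following should work.
Untested on my end; it assumes tshark is installed and that your
captures are saved as fast.pcap and slow.pcap (swap in your own file
names):

# TRANS2_FIND_FIRST2 is subcommand 0x01, TRANS2_QUERY_PATH_INFO is 0x05
tshark -r fast.pcap -R "smb.trans2.cmd == 0x05" | wc -l
tshark -r fast.pcap -R "smb.trans2.cmd == 0x01" | wc -l
tshark -r slow.pcap -R "smb.trans2.cmd == 0x05" | wc -l
tshark -r slow.pcap -R "smb.trans2.cmd == 0x01" | wc -l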

