Samba with multichannel and io_uring

Stefan Metzmacher metze at samba.org
Fri Oct 16 16:03:47 UTC 2020


On 16.10.20 17:57, Jens Axboe wrote:
> On 10/16/20 5:49 AM, Stefan Metzmacher wrote:
>> Hi Jens,
>>
>>> Thanks for sending this, very interesting! As per this email, I took a
>>> look at the NUMA bindings. If you can, please try this one-liner below.
>>> I'd be interested to know if that removes the fluctuations you're seeing
>>> due to bad locality.
>>>
>>> Looks like kthread_create_on_node() doesn't actually do anything (at
>>> least in terms of binding).
>>>
>>>
>>> diff --git a/fs/io-wq.c b/fs/io-wq.c
>>> index 74b84e8562fb..7bebb198b3df 100644
>>> --- a/fs/io-wq.c
>>> +++ b/fs/io-wq.c
>>> @@ -676,6 +676,7 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
>>>  		kfree(worker);
>>>  		return false;
>>>  	}
>>> +	kthread_bind_mask(worker->task, cpumask_of_node(wqe->node));
>>>  
>>>  	raw_spin_lock_irq(&wqe->lock);
>>>  	hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
>>>
>>
>> I no longer have access to that system, but I guess it will help, thanks!
> 
> I queued it up when I sent it out, and it'll go into stable as well.
> I've since verified on NUMA here that it does the right thing, and that
> things weren't affinitized properly before. So I'm pretty confident that
> it will indeed solve this issue!

Great thanks!

>> With this:
>>
>>         worker->task = kthread_create_on_node(io_wqe_worker, worker, wqe->node,
>>                                 "io_wqe_worker-%d/%d", index, wqe->node);
>>
>> I see only "io_wqe_worker-0" and "io_wqe_worker-1" in top, without the '/0' or '/1'
>> at the end; this is because set_task_comm() truncates the name to 15 characters.
>>
>> As a developer I find 'io_wqe' really confusing: just from reading it I thought it
>> meant "work queue entry", but it's actually a per-NUMA-node worker pool container...
>> 'struct io_wq_node *wqn' would be easier for me to understand...
>>
>> Would it make sense to give each io_wq a unique identifier and use names like this?
>> (The fdinfo of the io_uring fd could also include the io_wq id.)
>>
>>  "io_wq-%u-%u%c", wq->id, wqn->node, index == IO_WQ_ACCT_BOUND ? 'B' : 'U'
>>
>>  io_wq-500-M
>>  io_wq-500-0B
>>  io_wq-500-0B
>>  io_wq-500-1B
>>  io_wq-500-0U
>>  io_wq-200-M
>>  io_wq-200-0B
>>  io_wq-200-0B
>>  io_wq-200-1B
>>  io_wq-200-0U
>>
>> I'm not sure how this interacts with workers moving between bound and unbound,
>> and maybe a worker id might also be useful (or we could rely on their pid).
> 
> I don't think that's too important, as it's just a snapshot in time. So
> it'll fluctuate based on the role of the worker.
> 
>> I just found that proc_task_name() handles PF_WQ_WORKER specially, so
>> cat /proc/$pid/comm can expose something like:
>>   kworker/u17:2-btrfs-worker-high
> 
> Yep, that's how they do fancier names. It's been on my agenda for a while
> to do something about this; I'll try to cook something up for 5.11.

With a function like wq_worker_comm() being called from proc_task_name(),
you could capture the worker's current IO_WORKER_F_BOUND state and adjust
the name accordingly.
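
Roughly something like the untested sketch below. io_wq_worker_comm() is
made up here, modelled on wq_worker_comm(); it would have to live in
fs/io-wq.c where struct io_worker is visible, and proc_task_name() would
need a way to recognize io-wq workers first (e.g. via PF_IO_WORKER,
similar to the existing PF_WQ_WORKER check):

/*
 * Sketch only: an io-wq analogue of wq_worker_comm(), called from
 * proc_task_name() for io-wq worker kthreads instead of __get_task_comm().
 */
void io_wq_worker_comm(char *buf, size_t size, struct task_struct *task)
{
	/* create_io_worker() passed the io_worker as kthread data */
	struct io_worker *worker = kthread_data(task);
	int off;

	/* always show the real comm first */
	off = strscpy(buf, task->comm, size);
	if (off < 0)
		return;

	/*
	 * An unsynchronized read is good enough here; the suffix only
	 * needs to reflect the bound/unbound role at the time /proc
	 * is read.
	 */
	snprintf(buf + off, size - off, "-%s",
		 (worker->flags & IO_WORKER_F_BOUND) ? "bound" : "unbound");
}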

Please CC me on your patches in that direction.

Thanks!
metze


