CVS update: samba/source/passdb

Jim McDonough jmcd at us.ibm.com
Fri Mar 19 03:07:01 GMT 2004

>Very nice work there!
Thanks!

>My only concern is for races.  If we have a password attack aimed at us,
>will we always increment/check the counters correctly?
Well, here's the thing... this was designed to be fairly loosely fit
together, to minimize traffic.  We really don't want a 150,000-user setup
where everyone's typo causes a directory update.  So by design, someone can
get (max-1)*(num_dcs) chances, and after that the only real race I see is
that someone happens to type the correct password right before the lockout,
in which case only the local cache on that machine will be updated.  Really,
what should happen along with this is to make sure 1) that the policy
numbers are in sync, i.e. we need to move the policy to the passdb backend
so that it is replicated in the LDAP case, and 2) that the policy numbers
are valid, i.e. if you have lockout, you _must_ have "reset count minutes"
and "lockout duration" set.  I could very easily be missing some race
conditions (my head hurts after going through this so many times).
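
To make that concrete, here's roughly what each DC does on a failed
authentication.  This is just a sketch of the idea, not the actual code,
and the names (the cache structure, the helper, the threshold value) are
made up:

#include <stdbool.h>
#include <time.h>

#define LOCKOUT_THRESHOLD 5  /* "bad lockout attempt" policy value, example only */

struct bad_pw_cache_entry {
        const char *username;
        unsigned int count;          /* failures seen by THIS dc only */
        time_t last_failure;
};

/* Called on every failed authentication on this DC.  Returns true when
 * the account should be locked out and the directory updated.  Because
 * every DC counts independently, an attacker gets up to (threshold - 1)
 * silent tries per DC, i.e. roughly (max-1)*(num_dcs) attempts
 * domain-wide before anything is written to the backend. */
static bool note_bad_password(struct bad_pw_cache_entry *e)
{
        e->count++;
        e->last_failure = time(NULL);

        if (e->count < LOCKOUT_THRESHOLD) {
                /* Stay local: a stray typo causes no replication traffic. */
                return false;
        }

        /* Only now write to the passdb backend, setting the stored count
         * directly to the lockout value rather than incrementing it. */
        return true;
}

The race I mentioned is the flip side of this: a correct password typed
just before lockout only clears this local cache, not the other DCs'.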

Remember, we only update the directory at lockout time.  We don't
"increment" the counter per se, we just set it to the lockout value.

Now, one other potential problem is time skew between DCs.  The lockout
duration and the reset window are time-based, so clock differences between
DCs can cause problems there.

>I'm thinking that for this particular case, we should have a passdb
>operation (like modify etc) that is 'pdb_set_bad_password()'.
>
>This would tie in with what I've been discussing regarding the LDAP
>password policy draft, where it is suggested that for a compatible LDAP
>server, that we send a 'there was a bad password' control to that LDAP
>server (which would perform modifications etc).
Are you talking about each time?  Or more like "there were bad passwords
and now there were too many"?  I think updating on every bad password is
way too often.  Think of 150,000 users (Jerry mentioned he's talked to
someone with that many in OpenLDAP).  I'm all for better integration like
this, just concerned about the scalability (not to mention portability)
implications.
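
Just so we're talking about the same thing, the shape I picture for such an
operation is something like the following.  This is purely hypothetical (it
is not an existing passdb method, and the type and enum names are made up).
Whether it fires on every bad password or only once the threshold is
crossed is exactly the scalability question above:

#include <time.h>

struct sam_account;               /* stand-in for the passdb account handle */

enum pdb_bad_pw_event {
        PDB_BAD_PW_NOTE,          /* one more failed attempt */
        PDB_BAD_PW_LOCKOUT,       /* threshold reached, lock the account */
        PDB_BAD_PW_RESET          /* good password seen, clear the counters */
};

/* Hypothetical passdb operation: an LDAP backend that speaks the password
 * policy draft could translate this into the "bad password happened"
 * control, while other backends would just update their own counters. */
int pdb_set_bad_password(struct sam_account *account,
                         enum pdb_bad_pw_event event,
                         time_t when);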

>In the case where we have a normal LDAP server, this would allow the
>backend and cache to loop until the correct atomic increment has
>occurred (which is not normally the desired behaviour in these
>database-like backends).
What about on a slave when the master is down?  Perhaps I'm not following
what your point is here.
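
If you mean the usual delete-old-value/add-new-value modify trick to
emulate an atomic increment against a stock LDAP server, then a rough
sketch (the attribute name here is just for illustration) would be
something like:

#include <stdio.h>
#include <ldap.h>

#define BAD_PW_ATTR "sambaBadPasswordCount"  /* illustrative attribute name */

/* Emulate an atomic increment: delete exactly the value we read and add
 * the new one in a single modify.  If another writer changed the value
 * first, the delete part fails and the whole modify is rejected, so the
 * caller knows to re-read and retry. */
static int try_increment(LDAP *ld, const char *dn, long old_count)
{
        char oldval[32], newval[32];
        char *del_vals[] = { oldval, NULL };
        char *add_vals[] = { newval, NULL };
        LDAPMod del_mod = { LDAP_MOD_DELETE, BAD_PW_ATTR, { del_vals } };
        LDAPMod add_mod = { LDAP_MOD_ADD,    BAD_PW_ATTR, { add_vals } };
        LDAPMod *mods[] = { &del_mod, &add_mod, NULL };

        snprintf(oldval, sizeof(oldval), "%ld", old_count);
        snprintf(newval, sizeof(newval), "%ld", old_count + 1);

        /* Both changes apply atomically or not at all. */
        return ldap_modify_ext_s(ld, dn, mods, NULL, NULL);
}

The caller would re-read the entry and retry whenever the modify fails with
LDAP_NO_SUCH_ATTRIBUTE.  But that loop still needs a writable master, which
is my slave/master question above, and it is one write per bad password,
which is the scalability concern again.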

>What do you think of that?
I'm not completely clear on what you want, but it sounds like the
"consistent SAM" idea, which I've just about given up on in the replication
scenario.  Perhaps I'm just around too many large installs (my idea of
large is >10000).

Also, I know some folks who have been involved in the discussion on the
lists aren't thinking about the scenario with even not-so-large numbers of
clients but widely distributed DCs connected via less-than-reliable WANs
(retail customers are like this, as well as banks and insurance companies,
and there are _many_ of these).  Can you tell I work for Big Blue? :-)  Even
school districts...   I just want to make sure folks are thinking of those
scenarios (I recall someone saying something about slurpd replication in 3
seconds max).

There's definitely room for improvement...no doubt.  I'm just not exactly
sure of everything you're trying to tell me.

----------------------------
Jim McDonough
IBM Linux Technology Center
Samba Team
6 Minuteman Drive
Scarborough, ME 04074
USA

jmcd at us.ibm.com
jmcd at samba.org

Phone: (207) 885-5565
IBM tie-line: 776-9984

