[clug] PSIG last night

Michael Cohen scudette at gmail.com
Fri Nov 14 23:12:26 GMT 2008

I also think it's important for a unit test to start with a blank
slate and reach the tested condition on its own. For example, in our
app we often need to run many steps to get from a blank db to the
point we want to test. There is a temptation to chain those tests so
they all share the same setup and then just do different things. This
seems more efficient, but it leads to dependency hell in the tests,
because all of a sudden behaviour changes depending on the order the
tests run and what they do. We can't just test the total number of
rows in a table, because earlier tests may have added unrelated rows,
etc.

Even if the setup code takes a while to run, it's worth repeating it
at the start of each test, right after dropping and recreating the
test database.
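The blank-slate pattern above can be sketched with plain unittest and
an in-memory sqlite3 database (a stand-in for the real app's Django
models; the table and test names here are hypothetical):

```python
import sqlite3
import unittest

class WidgetTests(unittest.TestCase):
    def setUp(self):
        # Blank slate: recreate the schema for every test, so no test
        # depends on rows left behind by an earlier one.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)")

    def tearDown(self):
        self.db.close()

    def test_insert(self):
        self.db.execute("INSERT INTO widgets (name) VALUES ('a')")
        count = self.db.execute(
            "SELECT COUNT(*) FROM widgets").fetchone()[0]
        # Exactly one row: no leftovers from other tests to account for.
        self.assertEqual(count, 1)

    def test_starts_empty(self):
        # Passes no matter what order the tests run in, because setUp
        # rebuilt the table from scratch.
        count = self.db.execute(
            "SELECT COUNT(*) FROM widgets").fetchone()[0]
        self.assertEqual(count, 0)
```

Django's own TestCase gets the same effect by wrapping each test in a
transaction that is rolled back afterwards.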

Also important is managing a set of test data. This matters most
because you don't want anything too big (or the setup phase will take
too long), but you do want something fairly representative of real
data. We keep a set of files managed through a home-brewed updater
script: we publish a list of md5 hashes and the script re-fetches only
the files that have changed (it also uncompresses them, so we can ship
tarballs). This lets us update the test data sets efficiently, and all
developers can automatically sync with the latest data.
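The hash-checking half of such an updater script might look something
like this (a sketch, not our actual tool; the manifest format and
function names are assumptions):

```python
import hashlib
from pathlib import Path

def md5sum(path: Path) -> str:
    """Hash a file in 1 MB chunks so large test data files
    don't have to fit in memory at once."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def stale_files(manifest: dict, data_dir: Path) -> list:
    """Given a published {filename: md5} manifest, return the names
    whose local copy is missing or whose md5 differs. Only these
    need to be re-fetched."""
    stale = []
    for name, published_md5 in manifest.items():
        local = data_dir / name
        if not local.exists() or md5sum(local) != published_md5:
            stale.append(name)
    return stale
```

The fetch-and-uncompress step would then run only over the names this
returns, which is what makes the sync cheap for developers.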


On Sat, Nov 15, 2008 at 9:53 AM, David Schoen <neerolyte at gmail.com> wrote:
> I think only small (unit-level) tests make sense with unit testing. Testing
> that an object is still around in the DB in six months doesn't really help;
> if that test does fail, what will you do? Writing multiple tests that cover
> the same thing is probably what you want.
> E.g. write one test that creates a new object and test that it's immediately
> accessible as suggested, but if you have atomic commits (or if testing is
> done in a single user environment) you could also ensure that the number of
> objects in the DB is 1 higher than it was before you created the new
> object. You could also test that when you modify something pre-existing that
> the number of objects in the DB doesn't change.
> This way you've already begun to isolate the problem: if either of those
> tests fails, there should be only a small portion of the code base responsible
> for the bug, but if the object goes missing "sometime in a 6 month window"
> all you know is "we have a bug somewhere in the system", a fact that can be
> generally assumed for any reasonably large code base :).
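The count-delta checks Dave describes could be sketched like this; a
plain unittest over an in-memory sqlite3 table stands in for the real
ORM, and the table and test names are hypothetical:

```python
import sqlite3
import unittest

class ObjectCountTests(unittest.TestCase):
    def setUp(self):
        # Fresh single-user database per test, so counting rows is safe.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT)")
        self.db.execute("INSERT INTO objects (name) VALUES ('existing')")

    def count(self):
        return self.db.execute(
            "SELECT COUNT(*) FROM objects").fetchone()[0]

    def test_create_adds_exactly_one(self):
        before = self.count()
        self.db.execute("INSERT INTO objects (name) VALUES ('new')")
        # Creating an object raises the count by exactly one.
        self.assertEqual(self.count(), before + 1)

    def test_modify_leaves_count_unchanged(self):
        before = self.count()
        self.db.execute(
            "UPDATE objects SET name = 'renamed' WHERE name = 'existing'")
        # Modifying a pre-existing object must not change the count.
        self.assertEqual(self.count(), before)
```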
> I don't know anything about Django and haven't yet made it to a PSIG
> meeting, this is just my opinion, take it how you will.
> - Dave.
> On 11/15/08, Paul Wayper <paulway at mabula.net> wrote:
>> Alex Satrapa wrote:
>> | Has anyone from the PSIG (testing in Django) last night tried writing
>> | tests today?  :)
>> I did :-)
>> I'm still struggling with some of the philosophy behind testing, however.
>> For example, with Django you can have a field with a maximum length.  I'm
>> assuming that I don't have to test that it doesn't accept strings longer
>> than that (because I assume Django's testing framework already does).
>> Should I see whether it can accept unicode characters?  Should I test
>> whether it's resilient to \0 characters?  Should I do this with every
>> CharField I have in a model?  And so on with integers?  I can see that
>> those are probably tests already handled in Django - but the philosophical
>> question is whether I have to check whether Django's working or not.
>> Sometimes a subtle bug might show through one of those assumptions.
>> OTOH the tests in the framework seem strangely simple - e.g. create a new
>> object, save it in the database, and immediately retrieve it.  Surely the
>> problem I'm wanting to test for is whether that record is still in the
>> database six months from now, or that no-one can change the ID field of
>> this object through the web interface?
>> Don't get me wrong, I can see the value of unit tests.  I can see how
>> test-driven development makes a lot of sense when you're trying to make
>> an algorithm that processes data according to a variety of complex rules.
>> The testing frameworks that Paul Leopardi was talking about for testing
>> Sage are vital so that mathematicians can know that the results are
>> correct.  I hate it when I go and change something seemingly innocuous
>> and find out it had deep consequences that a testing framework could have
>> told me about right away.  I'm just struggling to get exactly how it
>> applies.
>> Anyway, have fun,
>> Paul
>> --
>> linux mailing list
>> linux at lists.samba.org
>> https://lists.samba.org/mailman/listinfo/linux
