[clug] PSIG last night

Daniel Pittman daniel at rimspace.net
Fri Nov 14 23:28:57 GMT 2008

Paul Wayper <paulway at mabula.net> writes:
> Alex Satrapa wrote:
> | Has anyone from the PSIG (testing in Django) last night tried writing
> | tests today?  :)
> I did :-)

I didn't attend the talk, what with being in another state, but I hope
y'all can forgive my adding some opinion anyhow. :)

> I'm still struggling with some of the philosophy behind testing,
> however.  For example, with Django you can have a field with a maximum
> length.  I'm assuming that I don't have to test that it doesn't accept
> strings longer than that (because I assume Django's testing framework
> already does).

In my experience, assuming that Django tests anything is bad; I have
dealt with a number of frameworks where my assumptions about what they
tested (or enforced) were wrong.

That doesn't mean that you should test that in your code, though: if
Django does get it right -- and I have no reason to think otherwise --
then you have just wasted some time.

My general approach is to make sure I have run the test suite for my
framework, at least once, and reviewed it briefly to have some idea of
what it does test (or doesn't -- I have found a couple of frameworks
with approximately zero tests in some key areas...).

> Should I see whether it can accept unicode characters?  Should I test
> whether it's resilient to \0 characters?  Should I do this with every
> CharField I have in a model?

Then, if I have a specific question, I can check the code.  However,
Unicode characters and NUL characters are a *good* thing to test for in
your own suite -- at least in an end-to-end test.

> And so on with integers?  I can see that those are probably tests
> already handled in Django - but the philosophical question is whether
> I have to check whether Django's working or not.  Sometimes a subtle
> bug might show through one of those assumptions.

*nod*  I usually only have framework-specific tests when they either
show me that a bug is resolved or involve my own code.

An example of that, from Moose, a Perl OO framework, is testing a field
where I use a custom type declaration:

    subtype 'PositiveOrZeroInt' => as 'Int' => where { $_ >= 0 };

Having at least one check that this constraint works as expected is
valuable, just in case (for example) my where clause somehow breaks
and causes the type to accept "foo" as a valid value.

> OTOH the tests in the framework seem strangely simple - e.g. create a
> new object, save it in the database, and immediately retrieve it.
> Surely the problem I'm wanting to test for is whether that record is
> still in the database six months from now,

In most cases the time involved isn't meaningful: if your database
deletes objects after a certain time, you have another problem, not a
fault in Django.

> or that no-one can change the ID field of this object through the web
> interface?

This is a meaningful test, and I would fully expect that Django would
have exhaustive testing of this as a constraint.  If it doesn't then,
yes, adding those tests would be good.

You should probably do that by contributing them to the Django authors,
though, so that everyone benefits from them. :)

> Don't get me wrong, I can see the value of unit tests.  I can see how
> test-driven development makes a lot of sense when you're trying to
> make an algorithm that processes data according to a variety of
> complex rules.  The testing frameworks that Paul Leopardi was talking
> about for testing Sage are vital so that mathematicians can know that
> the results are correct.  I hate it when I go and change something
> seemingly innocuous and find out it had deep consequences that a
> testing framework could have told me about right away.
> I'm just struggling to get exactly how it applies.

My philosophy, which the above is a practical expression of, is this:

    The best code is code that you do not have to write.

The reason I use Moose in Perl is because it does a whole huge pile of
this work for me: it provides constraints, constructors, accessors and a
whole bunch of other framework code that I don't have to write *OR*
test, because I know their test suite does the right thing.

I am, as a corollary, also reasonably slow to adopt new frameworks,
because there is a non-trivial cost: I do spend the time getting
familiar with their test suite, generated code, and so forth, so I can
be confident in my assumption that I don't need to write code to test
them, because someone else /did/.

