[clug] OT: Protesting the proposed clean feed?

Alex Satrapa grail at goldweb.com.au
Thu Oct 23 06:46:15 GMT 2008


On 23/10/2008, at 13:42, Michael Carter wrote:

> "The previous trial reported that when filters were connected to  
> the test network and actively filtering performance degradation  
> ranged from 75 per cent to a very high 98 per cent between the best  
> and worst performing filter products. In the current trial the  
> corresponding performance degradation varied across a greater range— 
> from a very low two per cent to 87 per cent between the best and  
> worst performing filter products."

I find it interesting that product Delta (a "software" filter delivered pre-installed on the vendor's choice of hardware) actually improved performance over the baseline when deployed and actively filtering content on a highly saturated network. According to the graphs in the report, product Delta also performed better when it was filtering content than when it wasn't. I'd love to know how it achieves that; I'll put that product on every network I have! It's a kind of magic, just like that energy polariser for petrol, or perpetual motion machines :)

I love how they talk: "all products blocked in excess of 86 percent of [nasty stuff]", versus "all products had an over-blocking rate of below 0.08". To quote Han Solo, "Laugh it up, fuzzball!" An over-blocking rate of 0.08 (or 8%) means that 10 in every 125 requests I make will be blocked by accident.
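
To make that concrete, here's a back-of-the-envelope sketch (the 500 requests is just my guess at a day's browsing, not a figure from the report):

    # What an over-blocking rate of 0.08 means in practice.
    rate = 0.08                # 8 in 100, or 10 in 125
    requests_per_day = 500     # hypothetical day of browsing
    print(round(rate * requests_per_day))   # 40 requests wrongly blocked per day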

Now... the question to ask yourself is this: in all the years you've  
been using the Internet, how many times have you found objectionable  
material without being aware that you were about to find it? Would 8%  
false positives be a fair price to pay for having never seen those  
pictures? What about 3%? That's the best over-blocking rate any of these filters attained.
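
And over a longer stretch, with the same notional 500 requests a day as above (my assumption, not the trial's):

    # Expected accidental blocks per year at the worst (8%) and
    # best (3%) over-blocking rates from the trial.
    requests_per_day = 500     # hypothetical, as above
    for rate in (0.08, 0.03):
        print(rate, round(rate * requests_per_day * 365))
    # prints 0.08 14600, then 0.03 5475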

For me the number of incidents is two: I have had the misfortune of accidentally viewing the goatse picture a couple of times after clicking links on sites such as Slashdot that were labelled apropos of the discussion at the time but actually pointed at copies of the picture (such booby traps are the reason Slashdot now annotates links with domain names). Would a content filter have saved me on those occasions? Perhaps, if the picture hadn't been a re-compressed copy of the original (re-compression changes the MD5 checksum), and if it had already been seen by others who had complained to the filter vendor (so that it appeared in the proscribed-images index). Would being saved from seeing the goatse picture twice in 15 years have been worth an 8% failure rate on all web requests in the meantime? Not for me. And would today's filtering technology even have prevented me from seeing it? Not likely - both copies had been edited to some degree, and both were posted only days before I stumbled upon them.
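
For anyone who hasn't played with hashes: the reason the MD5 approach is so fragile is that re-encoding an image changes its bytes, and a different byte stream gives a completely unrelated digest. A quick sketch, assuming the filter does naive whole-file MD5 matching (the simplest scheme a vendor could use):

    import hashlib

    # Stand-ins for the original picture and a re-compressed copy of it.
    # Re-encoding a JPEG alters the bytes even though the result looks
    # identical to a human viewer; here one extra byte is enough.
    original = b"original image bytes"
    recompressed = b"original image bytes "   # note the trailing byte

    print(hashlib.md5(original).hexdigest())
    print(hashlib.md5(recompressed).hexdigest())
    # The two digests are completely unrelated, so a blocklist keyed on
    # the original's MD5 never matches the re-compressed copy.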

Note that the ACMA's report into the NetAlert debacle ("kidsonline@home: Internet use in Australian homes") indicated that most parents don't have a filter installed because they either trust their children to do the right thing, or are satisfied that they have other ways of controlling what their children are reading. This is in stark contrast to the Labor Party's claim that only a third of people have installed some form of filter, due to "cost and poor computer literacy".

NetAlert only cost about $3000 per operating copy. I wonder how much  
the "clean feed" project will cost?

Alex


