[clug] October CLUG - Thursday Oct 22 - Lightning talks - short focused fun for all
paulway at mabula.net
Sun Oct 11 23:01:20 UTC 2015
On 09/10/15 18:27, Bryan Kilgallin wrote:
> Dear Bob:
>> Volkswagen are presently claiming that just a handful of
>> software developers were responsible for the debacle VW are now in.
> I have read that could not have been the case. That software changes must have
> been signed-off by senior people!
Sorry, but in no organisation I've worked in over the last, oh, five years has
any high-level manager been required to sign off code changes. The Linux
kernel and the Samba project require other people to sign off on a code
change, but a senior manager might never see any code at all. And that's the
way they work.
It's worth keeping this in perspective, I feel. VWs didn't belch smoke and
flame when they weren't being monitored - so the increase in emissions for
ordinary road users probably wasn't massive. That's the main reason this
hasn't been detected for eight years. The testing bodies that thought there
was a problem didn't want to move without good evidence, and the researchers
that tested the cars drove from Los Angeles to Seattle and back (!) just to
make sure that they weren't "repeatedly [...] making the same mistake again
and again". If it had been more obvious, it would have been noticed sooner.
>> How is it that this code has gone undetected for up to 8 years, even
>> by our trusty government regulators?
> The US regulator was not funded to test cars! I read that Aussie has the most
> permissive scheme in the developed world.
Nonsense. Companies pay fees specifically to have their cars tested. One of
the things that did in Blade Electric in Melbourne (they made electric cars
from Hyundai i20s) was that over a certain sales volume per year, they had to
give the authorities a dozen or so cars to impact-test - i.e. smash into
walls, poles, etc. All the cars are written off. The company loses those
cars, and pays for the testing as well. (Blade couldn't afford the testing
and couldn't grow without it.)
The idea that the Australian car market is "permissive" only pertains to
emissions. Compare that to Japan where every car over a number of years old
is considered to pollute more and therefore costs more to register (which is
why we have such a thriving grey import market in Australia). But that
doesn't mean that people don't check on the emissions. And you can still
report vehicles to the police if you think they're polluting too much.
>> How do we get these
>> "additional" functionalities detected and assessed?
> When I worked in academia, teaching was the go. And there wasn't serious dough
> applied to researching real stuff.
I think the more salient point here is that this was a hack that specifically
detected when it was being "tested" and behaved correctly in those
circumstances. Graphics card manufacturers wrote special bits in the code
which detected when they were being benchmarked and optimised for speed over
quality to win the benchmark wars. It's hard to detect these things when the
specific process of detection is being circumvented.
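The defeat-device pattern is uncomfortably easy to sketch, which is part of
why it's hard to catch. Here is a minimal, entirely hypothetical Python
sketch of the idea - the signal names and thresholds are invented for
illustration and have nothing to do with VW's actual code:

```python
# Hypothetical "defeat device" pattern: detect test-like conditions and
# change behaviour. All names and thresholds here are invented.

def looks_like_dyno_test(speed_kmh, steering_angle_deg, wheel_speeds_kmh):
    """Heuristic: on a dynamometer the drive wheels spin while the
    steering wheel stays centred and the non-driven wheels stay still."""
    drive_moving = max(wheel_speeds_kmh) > 20
    steering_idle = abs(steering_angle_deg) < 1.0
    return drive_moving and steering_idle

def emissions_mode(speed_kmh, steering_angle_deg, wheel_speeds_kmh):
    # Run full emissions controls only when a test is suspected;
    # otherwise favour performance.
    if looks_like_dyno_test(speed_kmh, steering_angle_deg, wheel_speeds_kmh):
        return "full-controls"
    return "performance"
```

A tester who only ever sees the "full-controls" branch has no way to know the
other branch exists, which is exactly the problem the on-road researchers ran
into.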
Let's also remember that the point of standardised testing is so that
manufacturers know that their cars are being tested fairly and without bias.
If the testing involved a real driver getting into the car and driving it
around the streets, then the emissions are going to be affected by the time of
day of the test, the traffic on the road, any detours taken, and importantly
the temperament of the tester. As we saw with Jeremy Clarkson's biased review
of the Tesla Roadster, and John Broder's biased review of the Tesla Model S
(http://www.teslamotors.com/blog/most-peculiar-test-drive), the tester can
deliberately drive the car harder or use it incorrectly in order to give a
misleading result.
So the difficulty is in finding a "real world" test that doesn't also allow
for the test to be gamed by either tester or manufacturer.
>> Are we ready for some of these safety-critical
>> systems to be tinkered with by hobbyists (aka hackers)?
> Which reminds me of the WikiLeaks saga. It's great if someone else does the
> work. But the ACT Greens wouldn't so much as donate $1 to the people who were
> doing that!
This is a fallacious argument. The ACT Greens don't donate money to causes;
they support those causes in the legislative assembly. And I note that the
Wikileaks party has a bad reputation for having opaque internal processes and
not publicising its decisions.
But Bob's point about safety-critical systems being tinkered with is the point
we should pursue, and my take on that is that the world would be much better
if the people building safety-critical systems - e.g. those putting SCADA
control devices on the internet - didn't treat security through obscurity as
a valid precaution. The Jeep hacks, for example, demonstrate that
manufacturers leave control interfaces wide open because they
believe that no-one but them will know how to operate the car and no other
devices could be plugged into the bus. (CANBUS is the worst design ever for a
communications bus that users can access, and throws away every security
feature we've learned on the Internet).
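To illustrate that point: a classic CAN 2.0 data frame carries an arbitration
ID, a data length code, and up to eight data bytes - and nothing that
identifies or authenticates the sender. A simplified hand-packed sketch (the
field layout is flattened for illustration, not the exact on-wire bit
layout):

```python
import struct

def pack_can_frame(arbitration_id, data):
    """Pack the payload-relevant fields of a classic CAN 2.0 frame:
    an 11-bit ID, a data length code (DLC), and up to 8 data bytes.
    Note what's missing: no sender identity, no MAC, no encryption -
    any node on the bus can claim any ID."""
    assert arbitration_id < 0x800   # 11-bit identifier
    assert len(data) <= 8           # classic CAN payload limit
    dlc = len(data)
    # ">HB8s": 2-byte ID, 1-byte DLC, 8 data bytes (zero-padded)
    return struct.pack(">HB8s", arbitration_id, dlc, bytes(data))

frame = pack_can_frame(0x123, b"\x01\x02")
```

Because arbitration IDs are first-come-first-trusted, a compromised device
on the bus (say, the head unit) can forge frames that the brakes or steering
controllers will happily act on.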
>> Is there a line
>> that we can draw between what is safe to tinker with and what isn't?
> People are scared to test the powerful institutions!
Mouthing platitudes about fear of authority won't help, then.
Yes, car manufacturers, and governments, are powerful institutions. But it
was the US Environmental Protection Agency that discovered this hack, not some
random hacker. Government agencies do work for us as well.
My "safe to tinker with" line is my own things. I think we have a moral
obligation to each other to look at the safety of things we own and use and
tell people about what we consider to be dangerous, but we should neither
manufacture fear nor use other people as our test subjects. There are
government and private organisations, as well as the press, to which we can
report safety problems or concerns.
And we also create new systems that are better. I know a number of people
who are building their own home automation systems rather than buying an
off-the-shelf system that doesn't do what they want, costs too much, or is of
doubtful security. The Debian reproducible build system tries to answer the
same question in software: how do we know whether the software we use doesn't
contain malicious code compiled in? We as users and programmers and
communicators can and do help make progress toward safety and security.
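The reproducible-builds check itself is conceptually simple: build the same
source on independent machines and compare the artifacts byte for byte. A
minimal sketch of the comparison step (file names are hypothetical):

```python
import hashlib

# If two independent builds of the same source produce byte-identical
# artifacts, their hashes match - and a trojaned compiler or build host
# on either machine becomes detectable as a mismatch.

def sha256_of(path):
    """Hash a file incrementally so large artifacts don't need to fit
    in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(artifact_a, artifact_b):
    return sha256_of(artifact_a) == sha256_of(artifact_b)
```

The hard part, which the Debian project actually solves, is getting the
builds byte-identical in the first place: pinning timestamps, file ordering,
build paths and the rest, so that a mismatch means tampering rather than
noise.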