[clug] How you know your Free or Open Source Software Project is doomed to FAIL

Scott Ferguson scott.ferguson.clug at gmail.com
Thu Jul 30 10:30:20 UTC 2015


On 30/07/15 15:40, Paul Harvey wrote:
> On 30 July 2015 at 14:15, Scott Ferguson <scott.ferguson.clug at gmail.com> wrote:
>>> Surely https, warts and all, allows for at least slightly better
>>> hygiene than blind faith in your physical network and http.
>>>
>>
>> If slightly better hygiene is running hot water over the scalpel instead
>> of giving it a "bit of a wipe on your sleeve" - I'm not going to let you
>> operate on me :)
> 
> Great analogy! But if only production systems were maintained with
> this level of hygiene. I think we all agree that untrusted input is
> untrusted input, whether delivered over https or not.

Yes to both points. You are right about "slightly better", though I'd
use the term "slightly less worse" (and no, dear grammar pedants, it is
valid English).
Analogies by definition have limits. If you were a medic and I was
injured on a battlefield - I'd take the "bit of a wipe on your sleeve"
over running the scalpel under hot water. That is, if you don't have a
flame handy. Damn context. :)

> 
> I'm just a little sceptical of how many production systems really are
> that "pure". Perhaps I've spent too much time in the devops crowd,
> where distro packages are actively avoided in preference to
> virtualenv, perlbrew, checkinstall, etc. Don't get me wrong, you can
> validate your sources if you do those things, but how many really do -
> at every critical step, for hundreds of dependencies! Especially when
> the shortest path is often to just give up, and pull random docker
> images built by someone else who themselves barely understand all the
> moving parts they've configured and built (from who knows how or
> where) on your behalf. Often with no communication on what validation
> or security choices/assumptions have been made beyond a Dockerfile
> which if you look carefully, has a "RUN curl | sh" line in it...

Agreed, though there are sacrifices that have to be made for
functionality. Most of what we've been discussing regarding security is
about lowering exposure to predictable risks - which is very much a
matter of probability.

Probability is just a form of generalisation. It looks good when applied
to a broad enough range (a bit like the difference between quantum
physics and "normal" physics?) - but it ceases to be an accurate
indicator when applied to a smaller range of instances. Say a 1-in-10,000
chance of an incident costing $50,000: across a large fleet that averages
out to a trivial expected loss, but if you're the one case in 10,000 you
wear the whole $50,000. Hence my emphasis previously on "how much will it
hurt if it goes wrong?".

The rules I use, which shouldn't be confused with a formally proven
correct approach, are:-
what do you want to do?
what do you hope to achieve?
*who are the stakeholders?*
what are the available resources?
*when do you need an actionable outcome?*
what are the foreseeable things that can go wrong?
what are the known incidences (for the answer to the last question)?
what is the worst thing that can go wrong?
how will you deal with the worst-case scenario?
*at what point will you abandon things?*
then it's an iterative process to determine a policy - a good policy
will include how to separate valuables.

I've oversimplified things to fit into a digestible email, and the rules
vary according to circumstance. Valuation is important, and difficult.
I've emphasised what I consider the most critical questions.

Which circles back to my original meandering. You can never identify all
the stakeholders (things that may poke you with a pointy stick), but
failure to identify the obvious ones results in pain.

The most common mistake, IMO, is that people calculate the probability
of something going wrong as if all cases were equal. e.g. if you are
known as a system administrator, or you work for HQ-Jock - or you share
a house with any of those sorts of people - then you should not base
your probabilities on events like dumb incremental-IP exploits.

Another mistake is to assume that you have nothing of value - if you
have a computer you have something of value; add the internet and you
have more of value; email - ditto; etc.

The one that bites hardest - and probably best fits the scenario you
describe - is the failure to separate. DevOps can be dicks - and I don't
mean P.I.s - that's partially because they only have so many hours in a
day to study and think. A classic instance of the failure to separate
(segregate and isolate) is "we're testing, and resetting passwords is
hard, so we disabled the 3-failure lockout" (convenience is the enemy
of security) - which leads straight back to the emphasis on "when do you
need a result?"
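
For what it's worth, re-enabling that lockout needn't make testing
painful. A minimal sketch, assuming a PAM stack that ships pam_faillock
(file locations and option names vary between distros - check yours):

    # /etc/security/faillock.conf
    deny = 3            # lock the account after 3 consecutive failures
    unlock_time = 600   # auto-unlock after 10 minutes, so testers aren't stuck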

tl;dr?
When properly analysed, some (a lot of?) things just shouldn't be done -
doing them with an appropriate level of security isn't financially or
functionally feasible (the staff or time aren't available) within the
requirements.

One measure might be - will Lloyd's insure it?



> 
> In the face of all this, given a choice of blind faith in code
> delivered over http or https, at least https makes some attempt to
> ensure that we really have fetched something from some host that
> really is who it says it is. Even if the Georgiev et. al. paper
> implies otherwise...

Agreed, it's partway towards solving a problem that can never be
completely resolved (Shamir's 2nd law says a lot about the expense).
Bruce Schneier beautifully encapsulated the trust problem in one of his
books with the analogy of paying, by cheque, the plumber who comes to
fix his washing machine.

There's probably an equation somewhere for this - time limits, need,
risk, worst-case outcome - that could be applied to calculate
segregation and when to dump.
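
Something like the sketch below, purely back-of-envelope (my invention,
nothing formal):

    expected_loss = p(incident | your profile) x worst_case_cost
    proceed iff     cost(mitigation) + cost(delay) < expected loss avoided,
                    and a result still lands before "when do you need it?"
    otherwise       segregate harder, or dump it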


> 
> Perhaps fundamentally, the means we have available to us for verifying
> software provenance just doesn't scale once you venture outside of the
> core/standard packages. And even then... So I've always liked the idea
> of application whitelisting, it's just a shame there's no open source
> kernel modules for Linux to do it (one day, I'll find the time).

SELinux and AppArmor will do that. To which people often come up with
the most convoluted excuses for why they'll just sprinkle holy water on
the computer instead :D (Ohnose! SELinux is NSA - Russell Coker is the
anti-christ.)
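
For the record, a minimal AppArmor sketch (assumes the apparmor-utils
package is installed; /usr/bin/someapp is a stand-in for whatever you
want to confine):

    sudo aa-status                    # confirm AppArmor is loaded
    sudo aa-genprof /usr/bin/someapp  # interactively build a profile from observed behaviour
    sudo aa-enforce /usr/bin/someapp  # switch the profile to enforce mode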

Don't start me on why passwords are dumb when keys should be enforced -
I get drowned in spit before I even point out that key management is no
harder than password management - let alone the limitation of the login
buffer and why fail2ban is aptly named (have you heard of proxies?)
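
For the brave, a minimal sketch of enforcing keys over passwords for SSH
(test from a second session before you lock yourself out; service and
option names vary a little between distros):

    ssh-keygen -t ed25519        # generate a key pair on the client
    ssh-copy-id user@host        # install the public key on the server
    # then, in /etc/ssh/sshd_config on the server:
    #   PasswordAuthentication no
    sudo systemctl reload sshd   # the service may be called "ssh" on Debian-ish systems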

Suggest *adding* a layer of obscurity to security and most have their
fingers in their ears, chanting "la la la", before I get to say
"security" (and they don't hear "layer" - they just smell someone asking
them to check their facts). They're all gone before I can explain that
security is about lowering a profile as well as... wait, where did
everyone go? [sigh]
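
By way of example - and only an example - moving sshd off port 22 is
obscurity that *adds* to the layers above rather than substituting for
them:

    # /etc/ssh/sshd_config
    Port 2222   # cuts log noise from dumb scanners; useless against a targeted attacker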

> It
> seems as if all the distro packaging ecosystems have nearly all the
> signatures and shasums lying around already, but then there are those
> that would argue that whitelisting *all* binaries contained in
> packages signed by package maintainers wouldn't be enough hygiene
> either.

As a sole measure - I'd agree (FWIW). That's the limitation of sole
measures. Transparent proxies, VMs, snapshots, and filesystem hash-sum
checks are all good measures for limiting damage (did I mention
segregation?).
Which is also another way of forcing dangerously naive endeavours to be
abandoned. :)   (telling people their baby is ugly is an
under-appreciated job)
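
On a Debian-ish system the hash-sum check can be as simple as the sketch
below (assumes the debsums package is installed; the baseline file is
one you'd have generated yourself beforehand):

    debsums --changed                    # list files that differ from the packaged md5sums
    sha256sum -c /root/baseline.sha256   # or verify against your own stored baseline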

> 
> I've been working on populating a graph database based on debian
> package metadata (and eventually, other sources) to figure out and
> perhaps visualize just how many "points of trust" there actually is in
> all the installed software on a standard application server...
> 
> But now I'm ranting :)

Maybe :)  I've had mine earlier.
But it does sound like a *very* good idea. There's a place for
infographics (and the club of knowledge). :D

> 
> --
> Paul
> .
> 

Thanks for more interesting ideas.

Kind regards

--
"I use readability tools, I also try and employ critical thought, and I
rely strongly on proofreaders. I'm not a professional writer. I've used
none of those things when writing this, and it only "seemed" OK after a
quick re-read - my apologies in advance for all the very likely errors."
~ standard weasel disclaimer


