View Diary: Linux Creator Admits NSA Demanded Backdoor (43 comments)

  •  and all that coding is (3+ / 0-)
    Recommended by:
    petral, Hey338Too, Deep Texan

    visible and readable by all. Some pretty savvy programmers vet the code that is compiled into the kernel. Unless you think every single person who reads it is in on the conspiracy, that code is backdoor-free.

    47 is the new 51!

    by nickrud on Mon Sep 23, 2013 at 03:44:06 PM PDT

    [ Parent ]

    •  Perhaps they can't see what is in front of them (0+ / 0-)

      You can get bugs to blend into everyday code if you're good enough.

      In other words, don't trust a damn thing, regardless of how many times people say it's 100% safe. We have been compromised from almost every angle.

      "So what if a guy threw a shoe at me!"

      by FoodChillinMFr on Mon Sep 23, 2013 at 09:51:00 PM PDT

      [ Parent ]

      •  Trusting nothing can be a useless strategy (0+ / 0-)

        It often means failing to make a chance-based risk assessment, and is effectively the same as trusting everything, i.e. treating all things as equally dangerous (or equally harmless). It's ineffective in the same way "zero tolerance" moralistic strategies lead to injustice, and it's how equivocation between corporations and government stays viable.

        Systems should be assessed and placed in classes of risk (security domains), and behavior should be adjusted according to those domains. I'd also note that moral systems regulating human behavior rely significantly on observation by many parties: Pascal Boyer, in "Religion Explained", suggests that marriage is as much about the union being observed by the community as anything else. That is something practical, not just an evolved trait, IMO. Similarly, source code that can be observed is less subject to tampering.

        As another commenter mentioned, there is the bootstrap problem: compromised hardware, firmware, chipsets, compilers, pre-compiled binaries, etc. There are cryptographic solutions to some of these, but they require care. There are also technologies that make some of them worse, the "Trusted Computing" initiative, for instance. With Digital Rights Management (which is akin to calling a prison "Personal Freedom Management"), the question is whose rights. If you are the end user of the technology, they certainly aren't yours. With Trusted Computing, the question is whose trust. Essentially, you are not being trusted to control your own computer.

        I'd say it's best to assume it's possible you're being observed, assign a probability to that, and act accordingly. After all, there are plenty of methods other than compromising the technology directly. The questions are more about who would be interested and what actions they would take (for instance, how much would they value not revealing themselves?).
