How could data be used to hurt you?
At COFES in early April, I attended a small discussion about security with a dozen folks more technical than myself.
After the session, I spent a few minutes chatting with someone about how to discuss information security with children. His rule of thumb was to suggest that every time his kids are asked to share a piece of personal information, they should briefly ask: “how could I be hurt by this information?”
This is a pretty good sentiment, because if you actually followed it, you’d be forced to treat exchanges of personal information as transactions: you give up something of value in exchange for something of value. In most cases, when I sign up for a service and hand over my cell phone number, or access to my email, I am not thinking about it as a “purchase” because I am not used to thinking of my data as a currency. This question would force me to evaluate it each time.
But the information that could be most harmful to me, in the event I am being “attacked,” is probably fake information.
If someone can hack into the server infrastructure behind Target to steal emails and passwords, then they could just as easily create a false purchase history for me. If someone can hack into a social media platform like Twitter — or for that matter, a blog where I write — they could in theory create a false history of statements made years ago; plant a thread of damaging activity or damning thought that could wind up being very difficult to disprove.
And even if I could disprove it, I probably couldn’t avoid the harm intended in the first place, because I would likely only discover it during the process of their attack.
Many of us rely on “security through obscurity” for much of our lives: the idea that because there are billions of people on the planet, the chances of being randomly hacked are still fairly low. Basically, the plan is: don’t become a target.
The best advice I’ve ever gotten on computer security is to always remember that nothing about it is binary. Instead, computer security exists on a spectrum, and moving yourself a little bit more towards “secure” can make a big difference.
If you believe that fake information is maybe more dangerous than real information, then you should be looking for ways to climb up the spectrum of security and become more immune to fake information. However, the best way to become more immune to fake information is also the very thing that will make you less secure on the scale of security through obscurity: fame. Fame can make you a target.
The more recognizable you are, the more immune you are to planted, fake information. This is because people realize you are a target, and because many people will be “on your side.” Maybe the best recent example would be the Killian documents, released to negatively affect President George W. Bush during the 2004 presidential election. Had those documents been released about a less famous person, the chances of permanent reputational harm would have been much greater.
There’s a lot I don’t understand about computer security, and I’m fairly technical compared to the average US citizen. But I wonder how much of our computer security in the future is going to rely on cultural change rather than a change in digital tools. Maybe we will all begin to adjust our expectations and norms based on the understanding that pretty much everyone could be subject to a digital attack, and when that happens, pretty much everything about them can be treated as unreliable information.