Once you find a vulnerability, how do you handle patching it? Devs have their own work to do, there are only so many man-hours in a sprint or development cycle, and patching can eat up a good majority of them if the vuln is particularly nasty.
One method is to triage your patches, and we discuss that this week with Mr. Boettcher. We also talk about how our respective companies handle patching of systems.
We also discuss what happens when compensating controls lose their effectiveness, and whether there comes a point at which they no longer 'compensate' for anything at all.
Checkbox Security... checklists that compliance people require organizations to follow, and many security people have to fall in line because they often have no choice.
But what if there was a way to use compliance requirements to get beyond the baseline of PCI/SOC 2/HIPAA and become more secure?
Megan Wu (@tottenkoph), Mr. Boettcher, and I spent a bit of time discussing just that. We discuss basic issues with compliance frameworks, how to get management buy-in for more security, and even how you can get compliance people to help without them knowing it.
After last week's discussion of end-user training in the SANS top 20 security controls, we realized that it would be great to discuss how a company involved in training does proper training.
So we hit up our sponsor at Cybrary.it to discuss their end-user security training track and how companies can use it to help their employees be more secure in the workplace.
We end the podcast with a bit of audio from the BSides Austin blue/red panel Mr. Boettcher moderated. He asked the panelists about training and its worth. The first answer, from Justin Whitehead, is telling: he believes training will fail regardless. His answer was chilling, in fact, and we hope to continue that conversation with him in the future.
For long-time listeners of the podcast: back when Brian and I decided to do the podcast, we were working at the same company, and the first episode we did was on hashes.
Bob story: Bob was getting tired of explaining what MD5, SHA-1, and SHA-2 were to developers, so as we were developing our idea for the podcast, this became our first episode. Mr. Boettcher had several ideas for podcasts prior to that.
I was actually going to go it alone, but wanted him to join me, and thankfully he broached the idea of being on the podcast. This was actually the second take: the first one was recorded in our office, and since we didn't want any legal issues from doing it at work, we trashed that one and made this version. I thought the first take was better, but what are you gonna do... :)
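Since that first episode was about hashes, here is a minimal sketch (the input string is invented for illustration) of what Bob kept explaining to developers: the same message run through MD5, SHA-1, and SHA-256 with Python's standard `hashlib`, plus the avalanche effect that makes digests useful for integrity checks.

```python
import hashlib

message = b"DrunkenSecurity"  # hypothetical sample input

# Same input, three different digest algorithms.
md5_digest = hashlib.md5(message).hexdigest()        # 128-bit; broken for collisions
sha1_digest = hashlib.sha1(message).hexdigest()      # 160-bit; also collision-broken
sha256_digest = hashlib.sha256(message).hexdigest()  # SHA-2 family; still recommended

print("MD5    :", md5_digest)
print("SHA-1  :", sha1_digest)
print("SHA-256:", sha256_digest)

# Changing a single character completely changes the digest (avalanche effect).
print(hashlib.sha256(b"DrunkenSecuritY").hexdigest() != sha256_digest)  # True
```

Note that MD5 and SHA-1 are shown only for comparison; both are considered broken for collision resistance and shouldn't be used where that matters.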
End-user training. Lots of companies need regular security training, yet many treat it as a once-a-year checkbox for compliance requirements. With the way training is carried out in many organizations, is it any wonder that phishing emails still get clicked, passwords still get compromised, and sensitive information is still leaked?
We discuss methods to make training more effective, and how to make people want to do training.
Finally, we discuss Capture-The-Flag competitions, and why it would behoove blue-team people to attempt them. They are a great barometer for understanding your shortcomings, and what you as a blue teamer might need to study up on...
Katherine Carpenter is a privacy consultant who has worked all over the world helping to develop guidelines for ethical medical research and the sharing of anonymized data, and helping companies understand privacy issues associated with storing and sharing medical data.
This week, we discuss how companies should assign value to their data, the difficulties of doing research with anonymized data, and the ramifications of research organizations that share data irresponsibly.
email contact: email@example.com
Katherine’s notes, comments, and links:
It is good to be thinking about de-identification (especially regarding health care data).
I think a better question to ask is how easy it is to re-identify information that has been de-identified. The HIPAA Privacy Rule lists 18 identifiers that count as Personally Identifiable Information (PII) or Protected Health Information (PHI), including birth date, ZIP code, and IP address. When data is collected in non-health contexts, these identifiers are not considered PII/PHI (for example, this kind of information can be used for marketing purposes or financial/credit-related purposes).
A brief history on the topic:
In 1997, a precocious grad student identified the Governor of Massachusetts by using purchased voter records to re-identify de-identified health information that had been released. (This study was one motivator for passing HIPAA.) Further research along the same lines can be summed up with a simple and scary statistic: in 2000, 87% of Americans could be uniquely identified by combining ZIP code, birthday, and sex (gender).
For this reason, health information is threatened not only by de-identification and re-identification themselves, but by the combination of health data with other types of information that are publicly available or available for purchase, which could reveal things about an individual and contribute to re-identifying that individual's health info.
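The linkage attack described above can be sketched in a few lines. This is a toy illustration with invented records, assuming both datasets happen to expose the quasi-identifier triple (ZIP code, birth date, sex):

```python
# "De-identified" health data: names stripped, but quasi-identifiers remain.
deidentified_health = [
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1971-02-14", "sex": "M", "diagnosis": "asthma"},
]

# A public voter roll: names attached to the same quasi-identifiers.
voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02144", "birth_date": "1980-11-02", "sex": "M"},
]

def reidentify(health_rows, voters):
    """Link records that share the (zip, birth_date, sex) quasi-identifier triple."""
    index = {(v["zip"], v["birth_date"], v["sex"]): v["name"] for v in voters}
    matches = []
    for row in health_rows:
        key = (row["zip"], row["birth_date"], row["sex"])
        if key in index:
            matches.append((index[key], row["diagnosis"]))
    return matches

print(reidentify(deidentified_health, voter_roll))
# -> [('Jane Doe', 'hypertension')]
```

The point of the sketch is that no single field is identifying on its own; it is the combination across datasets that leaks the diagnosis, which is exactly why the 87% statistic is so alarming.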
Here are a bunch of articles that discuss the topic from different angles.
Dwork, C. and Yekhanin, S. (2008), “New Efficient Attacks on Statistical Disclosure Control Mechanisms,” Advances in Cryptology—CRYPTO 2008, to appear, also at http://research.microsoft.com/research/sv/DatabasePrivacy/dy08.pdf
Is Deidentification Sufficient to Protect Health Privacy in Research?