Ingolf Becker, Simon Parkin and M. Angela Sasse
Background: A person’s security behaviour is driven by underlying mental constructs, perceptions and beliefs. Examination of security behaviour is often based on dialogue with users of security, which is analysed in textual form by qualitative research methods such as Qualitative Coding (QC). Yet QC has drawbacks: security issues are often time-sensitive, but QC is extremely time-consuming. QC is often carried out by a single researcher, which raises questions about the validity and repeatability of the results. Previous research has identified frequent tensions between security and other tasks, which can evoke emotional responses. Sentiment Analysis (SA) is simpler to execute and has been shown to deliver accurate and repeatable results.

Aim: By combining QC with SA, we aim to focus the analysis on areas of strongly represented sentiment. Additionally, we can analyse the variation in sentiment across populations for each of the QC codes, allowing us to identify beneficial and harmful security practices.

Method: We code QC-annotated transcripts independently for sentiment. The distribution of sentiment for each QC code is statistically tested against the distribution of sentiment of all other QC codes. Similarly, we test the sentiment of each QC code across population subsets. We compare our findings with the results of the original QC analysis. Here we analyse 21 QC-treated interviews with 9 security specialists, 9 developers and 3 usability experts at 3 large organisations claiming to develop ‘usable security products’. This combines 4983 manually annotated instances of sentiment with 3737 quotations over 76 QC codes.

Results: The methodology identified 83 statistically significant variations (with p < 0.05).
The original qualitative analysis implied that organisations considered usability only when not doing so impacted revenue; our approach finds that developers appreciate usability tools that aid the development process, but that conflicts arise due to the disconnect between customers and developers. We find organisational cultures that put security first, creating an artificial trade-off for developers between security and usability.

Conclusions: Our methodology confirmed many of the QC findings, but gave more nuanced insights. The analysis across different organisations and employees confirmed the repeatability of our approach, and provided evidence of variations that were lost in the QC findings alone. The methodology adds objectivity to QC in the form of reliable SA, but does not remove the need for interpretation. Instead, it shifts interpretation from large amounts of QC data to condensed statistical tables, which are more accessible to a wider audience not necessarily versed in QC and SA.
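The per-code statistical step described in the Method section can be sketched roughly as follows. This is an illustrative assumption, not the paper’s exact procedure: the data layout (`(qc_code, sentiment_score)` pairs), the function names, and the choice of a two-sided permutation test on mean sentiment are all hypothetical stand-ins for whichever test the authors actually applied.

```python
# Sketch: for each QC code, test whether its sentiment distribution
# differs from the pooled sentiment of all other codes.
# Test choice (permutation test on mean sentiment) is an assumption.
import random
from collections import defaultdict


def permutation_p(group, rest, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the difference in mean sentiment."""
    rng = random.Random(seed)
    pooled = group + rest
    observed = abs(sum(group) / len(group) - sum(rest) / len(rest))
    k = len(group)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        g, r = pooled[:k], pooled[k:]
        if abs(sum(g) / k - sum(r) / len(r)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)


def significant_codes(annotations, alpha=0.05):
    """annotations: iterable of (qc_code, sentiment_score) pairs.
    Returns {code: p} for codes whose sentiment distribution differs
    significantly from that of all other codes combined."""
    by_code = defaultdict(list)
    for code, score in annotations:
        by_code[code].append(score)
    results = {}
    for code, scores in by_code.items():
        rest = [s for c, ss in by_code.items() if c != code for s in ss]
        if not scores or not rest:
            continue
        p = permutation_p(scores, rest)
        if p < alpha:
            results[code] = p
    return results
```

On synthetic data, a code whose quotations carry uniformly more positive (or more negative) sentiment than the rest of the corpus is flagged, while a code matching the overall sentiment mix is not.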
Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE
Publisher URL: http://2016.laser-workshop.org/
Full Text: https://www.usenix.org/system/files/conference/laser2016/laser2016-paper-becker.pdf