Too Close for Comfort: a study of the effectiveness and acceptability of rich-media personalized advertising

Miguel Malheiros, Charlene Jennett, Snehalee Patel, Sacha Brostoff and Martina Angela Sasse 


Online display advertising is predicted to generate $29.53 billion this year. Advertisers believe targeted and personalized ads to be more effective, but many users are concerned about their privacy. We conducted a study in which 30 participants completed a simulated holiday booking task, with each page showing ads with different degrees of personalization. Participants fixated twice as long on ads that contained their photo. Participants reported being more likely to notice ads with their photo, holiday destination, and name, but also increasing levels of discomfort with increasing personalization. We conclude that greater personalization in ad content may achieve higher levels of attention, but that the most personalized ads are also the least acceptable. The noticeability benefit of using someone's photo to make them look at an ad may be offset by the privacy cost. As more personal data becomes available to advertisers, it becomes important that these trade-offs are considered.

Date: May 5, 2012
Presented: 30th ACM Conference on Human Factors in Computing Systems (CHI 2012)
Published: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2012), ACM, New York, NY, USA, pp. 579-588.
Publisher: ACM
ISBN: 978-1-4503-1015-4

Program Analysis Probably Counts

Alessandra Di Pierro, Chris Hankin and Herbert Wiklicky


Semantics-based program analysis uses an abstract semantics of programs/systems to statically determine run-time properties. Classic examples from compiler technology include analyses to support constant propagation and constant folding transformations, and the estimation of pointer values to prevent buffer overruns. More recent examples include the estimation of information flows (to enforce security constraints) and the estimation of non-functional properties such as timing (to determine worst-case execution times in hard real-time applications). The classical approaches are based on semantics involving discrete mathematics. Paralleling trends in model-checking, there have been recent moves towards using probabilistic and quantitative methods in program analysis. In this paper we start by reviewing both classical and probabilistic/quantitative approaches to program analysis. We shall provide a comparison of the two approaches. We shall use a simple information flow analysis to exemplify the classical approach. Covert information flows through timing channels are difficult to detect using classical techniques; we show how such problems can be addressed using probabilistic techniques.
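The covert timing channel mentioned in the abstract can be illustrated with a small sketch (not taken from the paper; the names and the secret value are hypothetical). No value derived from the secret ever flows into the function's result, so a classical value-based information flow analysis sees no leak; the secret nonetheless leaks through execution time, which is the kind of quantitative property probabilistic techniques aim to capture.

```python
import time

SECRET = 7  # hypothetical secret held by the program


def check(guess):
    # Classical flow analysis: the return value does not depend on SECRET...
    if guess == SECRET:
        time.sleep(0.01)  # ...but the running time does: a timing channel
    return "done"


def attacker_guess():
    # An observer who only times check() can recover SECRET,
    # even though no data flows from SECRET to any output.
    best, best_t = None, -1.0
    for g in range(10):
        t0 = time.perf_counter()
        check(g)
        dt = time.perf_counter() - t0
        if dt > best_t:
            best, best_t = g, dt
    return best
```

Calling `attacker_guess()` recovers the secret purely from observed durations, which is why timing channels call for probabilistic/quantitative analyses rather than classical discrete ones.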

Source: COMPUTER JOURNAL Volume: 53 Issue: 6 Pages: 871-880 DOI: 10.1093/comjnl/bxp033
Published: JUL 2010
Accession Number: WOS:000279185400019
Document Type: Article
Author Keywords: program analysis; semantics; abstract interpretation
Reprint Address: Hankin, C (reprint author)
Univ London Imperial Coll Sci Technol & Med, Dept Comp, 180 Queens Gate, London SW7 2AZ, England.
Univ London Imperial Coll Sci Technol & Med, Dept Comp, London SW7 2AZ, England
Univ Verona, Dept Comp Sci, I-37134 Verona, Italy
Web of Science Categories: Computer Science, Hardware & Architecture; Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods
Research Areas: Computer Science
IDS Number: 616CJ
ISSN: 0010-4620