
Unawareness

In general

Being unaware of the consequences of sharing information.

Users are often not aware of the impact of sharing data. This can be data shared with friends on Facebook, but also personal information shared with other services (e.g. loyalty cards, online services, …).

Unawareness threats differ from data minimization threats (L_DS): for data minimization it is the responsibility of the (back-end) system to minimize the data being stored, while for unawareness it is the data provider himself who is responsible and should be aware of the consequences of sharing (too much) information. Nevertheless, the system itself can support users in making privacy-aware decisions.

Ideally, all users (data providers) should be clearly informed and educated about the consequences of sharing data through (online) services. Our analysis cannot, however, influence society as a whole; these threats therefore focus only on the provisions the system itself can take to guide and educate the user about his data sharing.

This threat only applies to an entity, as other DFD elements do not input additional information into the system.

Consequences

  • Linkability/identifiability: the more information is available, the easier it can be linked (and the data subject identified)

Impacted by

  • /

 

Unawareness of entity

LINDDUN unawareness threat tree

Tree in general

The user provides more personally identifiable information than required (about himself or another data subject), which has a negative influence on all the hard privacy objectives.

Leaf nodes explanation

Providing too much information (U_1)

Unawareness often means the user provides too much (personal) information (U_1). This is especially problematic when this information is identifiable, or when it becomes identifiable once combined with already available data. This threat therefore closely relates to the identifiability threats concerning data minimization.

Some privacy-friendly techniques exist to assist the user in making informed decisions about sharing his data. When these techniques are not employed, awareness is obviously threatened. Feedback and awareness tools (feedback tools that improve the user's understanding of privacy implications [1], or tools that visualize the user's data, such as the IdentityMirror [2] and privacy mirrors [3]) give feedback on the data the user wants to share and show what it reveals when combined with already available (online) information.
Several types of nudges have also been proposed for social networks to encourage the user to reflect on the information he wants to share [4] [5]: sentiment nudges that inform the user how a certain message might be perceived, picture nudges that show the profile pictures of (some of) the users who will be able to see the post, etc.
Facebook already provides some privacy feedback, as it allows the user to view his timeline as it appears to a specific other user. This raises awareness and can help prevent the oversharing of information.
When no feedback and awareness tools (U_3) are used, the user is likely not aware of the information he is sharing, or of its impact. A minimal sketch of such an audience nudge is given below.
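As an illustration, the following sketch shows how a simple audience nudge could be implemented in a toy social-network model. All names and structures (Post, resolve_audience, audience_nudge, the visibility levels) are assumptions made for this example and do not correspond to any existing platform's API.

```python
# Sketch of an audience nudge: before a post is shared, the user is shown how
# many people can see it and a small sample of them, so he can reconsider
# oversharing. The data model below is hypothetical and purely illustrative.
import random
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    visibility: str  # "friends", "friends_of_friends", or "public"


def resolve_audience(post: Post, friends: dict[str, list[str]]) -> set[str]:
    """Return the set of users (excluding the author) who can see the post."""
    direct = set(friends.get(post.author, []))
    if post.visibility == "friends":
        audience = direct
    elif post.visibility == "friends_of_friends":
        audience = direct | {fof for f in direct for fof in friends.get(f, [])}
    else:  # "public": every user known to the system
        audience = set(friends)
    return audience - {post.author}


def audience_nudge(post: Post, friends: dict[str, list[str]], sample_size: int = 5) -> str:
    """Build the nudge message shown to the user before the post is published."""
    audience = resolve_audience(post, friends)
    shown = random.sample(sorted(audience), min(sample_size, len(audience)))
    return (f"This post will be visible to {len(audience)} people, "
            f"including: {', '.join(shown)}. Share anyway?")


if __name__ == "__main__":
    friends = {"alice": ["bob", "carol"], "bob": ["alice", "dave"],
               "carol": ["alice", "eve"], "dave": ["bob"], "eve": ["carol"]}
    post = Post(author="alice", text="On holiday all week!", visibility="friends_of_friends")
    print(audience_nudge(post, friends))
```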

Privacy support should also be user-friendly (U_4). For example, default settings (e.g. Facebook settings) should be privacy-friendly: it should be prevented that information is automatically shared with many parties, often without the user's knowledge, and privacy-friendly settings should limit the exposure of (personal) data. In addition, for users to be able to adjust privacy settings to their needs, the provided tools should be easy to use: the privacy configuration (e.g. the Facebook privacy settings) should be easy to access and manage, and should be presented in an understandable fashion.
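The sketch below illustrates what privacy-friendly defaults could look like as a configuration object; the setting names and default values are assumptions for this example, not the settings of any real platform.

```python
# Illustrative sketch of privacy-friendly default settings: a new account
# starts with restrictive values, and the user relaxes them explicitly if
# desired. Setting names and values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class PrivacySettings:
    post_visibility: str = "friends"       # not "public" by default
    profile_searchable: bool = False       # profile not indexed by search engines
    share_location: bool = False           # no automatic location tagging
    third_party_apps: list[str] = field(default_factory=list)  # no apps pre-authorized


def new_account_settings() -> PrivacySettings:
    """Every new account starts from the restrictive defaults above."""
    return PrivacySettings()
```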

Data accuracy (U_2)

Often data subjects are unaware of what data a system has actually collected and stored about them. A data subject should thus always have the possibility to review his own data (i.e. the data that has been collected about him) [6].
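As an illustration, the sketch below shows a minimal way a system could let a data subject retrieve all records stored about him; the store layout and the function name data_about_subject are assumptions made for this example.

```python
# Hypothetical sketch of a "review my data" facility: the data subject can
# retrieve, per data category, every record that refers to him.
from typing import Any


def data_about_subject(store: dict[str, list[dict[str, Any]]],
                       subject_id: str) -> dict[str, list[dict[str, Any]]]:
    """Collect, per data category, all records that refer to the given subject."""
    return {
        category: [record for record in records if record.get("subject_id") == subject_id]
        for category, records in store.items()
    }


if __name__ == "__main__":
    store = {
        "orders": [{"subject_id": "u42", "item": "book"}, {"subject_id": "u7", "item": "pen"}],
        "logins": [{"subject_id": "u42", "ip": "192.0.2.1"}],
    }
    # The subject reviews his own data; correcting it is handled elsewhere (footnote [6]).
    print(data_about_subject(store, "u42"))
```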

[1] Lederer, S., Hong, J., Dey, A., Landay, J.: Personal Privacy through Understanding and Action: Five Pitfalls for Designers. Personal and Ubiquitous Computing 8, 440--454 (2004)

[2] Liu, H., Maes, P., Davenport, G.: Unraveling the Taste Fabric of Social Networks. International Journal on Semantic Web and Information Systems (IJSWIS) 2(1)

[3] Nguyen, D., Mynatt, E.: Privacy Mirrors: Understanding and Shaping Socio-technical Ubiquitous Computing Systems. (2001)

[4] Wang, Y., Leon, P.G., Scott, K., Chen, X., Acquisti, A., Cranor, L.F.: Privacy Nudges for Social Media: An Exploratory Facebook Study. In: Proceedings of the 22nd International Conference on World Wide Web Companion (WWW '13 Companion), 763--770 (2013)

[5] Presentation by Lorrie Cranor on Privacy Nudges and Self-Censorship on Social Media (summarizing [4])

[6] Note that a data subject should even be able to request updates of the data if certain information is no longer applicable or correct. This is however included in the Non-Repudiation threat tree (NR_ds7), as it is not specific to user awareness.
