This item is a Poster.
- Lindamood, Jack - Facebook
- Heatherly, Raymond - The University of Texas at Dallas
- Kantarcioglu, Murat - The University of Texas at Dallas
- Thuraisingham, Bhavani - The University of Texas at Dallas
On-line social networks, such as Facebook, are used by an increasing number of people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is private, and it is possible that corporations could apply learning algorithms to the released data to predict undisclosed private information. In this paper, we explore how to launch inference attacks using released social networking data to predict undisclosed private information about individuals. We then explore the effectiveness of possible sanitization techniques that can be used to combat such inference attacks under different scenarios.

We show how social network data could be used to predict an individual private trait that a user is not willing to disclose (e.g., political or religious affiliation), and we explore the effect of possible data sanitization alternatives on preventing such private information leakage. To our knowledge, this is the first comprehensive paper that discusses the problem of inferring private traits using real-life social network data and possible sanitization approaches to prevent such inference. First, we present a modification of Naïve Bayes classification that is suitable for classifying large amounts of social network data. Our modified Naïve Bayes algorithm predicts privacy-sensitive trait information using both node traits and link structure. We compare the accuracy of our learning method based on link structure against the accuracy of our learning method based on node traits. Please see the extended version of this paper  for further details of our modified Naïve Bayes classifier. In order to protect privacy, we sanitize both trait details (e.g., deleting some information from a user's on-line profile) and link details (e.g., deleting links between friends) and explore the effect they have on combating possible inference attacks.
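The modified classifier itself is given only in the extended version of the paper; the following is a minimal sketch of the general idea of combining trait-based and link-based Naïve Bayes evidence under a conditional-independence assumption. All function names, probability tables, and smoothing constants here are illustrative assumptions, not the authors' actual formulation.

```python
import math

def predict_trait(user_traits, friend_labels, trait_likelihoods,
                  link_likelihoods, priors, eps=1e-6):
    """Score each candidate class (e.g., a political affiliation) by
    combining Naive-Bayes-style evidence from the user's own profile
    traits and from the labels of the user's friends.

    trait_likelihoods[(t, c)] ~ P(user has trait t | class c)
    link_likelihoods[(f, c)]  ~ P(friend has class f | user has class c)
    Both tables are assumed to be estimated from training data.
    """
    scores = {}
    for label, prior in priors.items():
        score = math.log(prior)
        for trait in user_traits:
            # node-trait evidence, with a small floor to avoid log(0)
            score += math.log(trait_likelihoods.get((trait, label), eps))
        for f in friend_labels:
            # link-structure evidence from labeled friends
            score += math.log(link_likelihoods.get((f, label), eps))
        scores[label] = score
    # return the maximum a posteriori class
    return max(scores, key=scores.get)

# Toy usage with made-up probability estimates:
priors = {"liberal": 0.5, "conservative": 0.5}
trait_lk = {("likes_group_x", "liberal"): 0.9,
            ("likes_group_x", "conservative"): 0.1}
link_lk = {("liberal", "liberal"): 0.8,
           ("liberal", "conservative"): 0.2}
prediction = predict_trait(["likes_group_x"], ["liberal"],
                           trait_lk, link_lk, priors)
```

The same function can score a user on trait evidence alone (empty `friend_labels`) or link evidence alone (empty `user_traits`), which mirrors the paper's comparison of the two sources of information.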
Our initial results indicate that sanitizing trait information alone or link information alone may not be enough to prevent inference attacks; comprehensive sanitization techniques that involve both aspects are needed in practice. Similar to our paper, in , the authors consider ways to infer private information via friendship links by creating a Bayesian network from the links inside a social network. A similar privacy problem for online social networks is discussed in . Compared to  and , we provide techniques that help in choosing the most effective traits or links that need to be removed for protecting privacy.
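The combined sanitization the results call for (deleting profile traits and friendship links together) can be sketched as a greedy procedure that removes the most predictive items first. The scoring functions and removal counts below are illustrative assumptions; the paper's actual criteria for choosing traits and links are not specified here.

```python
def sanitize(traits, links, trait_score, link_score, k_traits, k_links):
    """Remove the k_traits most predictive traits and the k_links most
    predictive links from a user's released data.

    trait_score / link_score are assumed to map each item to a
    "predictiveness" value, e.g., a likelihood ratio taken from a
    trained classifier; higher means more revealing.
    """
    # sort ascending by score and keep only the least revealing items
    kept_traits = sorted(traits, key=trait_score)[:max(0, len(traits) - k_traits)]
    kept_links = sorted(links, key=link_score)[:max(0, len(links) - k_links)]
    return kept_traits, kept_links

# Toy usage with made-up predictiveness scores:
t_scores = {"hometown": 0.1, "likes_group_x": 0.9, "favorite_band": 0.5}
l_scores = {"friend_a": 0.7, "friend_b": 0.2}
kept_traits, kept_links = sanitize(
    list(t_scores), list(l_scores),
    t_scores.get, l_scores.get, k_traits=1, k_links=1)
```

Removing only from one side (setting `k_links=0` or `k_traits=0`) reproduces the single-aspect sanitization that the initial results found insufficient on its own.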
- URI: http://eprints.rkbexplorer.com/id/www2009/eprints-153