A search engine that uses artificial intelligence (AI) to “read” through millions of online documents could help researchers find the ones related to online privacy. The researchers who designed the search engine suggest it could be an important tool for those working to design a safer internet.
In a study, the researchers said that the search engine, which they dubbed PrivaSeer, uses a type of AI called natural language processing (NLP) to identify online privacy documents, such as privacy policies, terms of service agreements, cookie policies, privacy bills and laws, regulatory guidelines and other related texts on the web.
Rather than attempting to search for privacy documents themselves, researchers could type their queries into the search engine to efficiently identify and collect the correct documentation.
Ultimately, though, the search engine could help researchers better understand online privacy in general and examine online privacy trends over time, which could one day lead to an internet that users could navigate more safely and securely, according to Wilson, assistant professor of information sciences and technology at Penn State.
“This can be a resource for researchers both in natural language processing and privacy, who are interested in this domain of text,” said Wilson. “Given large volumes of text like this, we can find ways to identify and automatically label certain data practices that people might be interested in, which then enables building tools to help users understand online privacy.”
NLP combines linguistics, computer science and AI to program computers to process and analyze large amounts of text. In this case, the researchers used NLP to gather privacy policy documents from the web, according to Mukund Srinath, a doctoral student in information sciences and technology and the first author of the study.
“The NLP approach can differentiate between the privacy policy documents and nonprivacy policy documents based on certain words that occur in the text,” said Srinath. “Intuitively, you can think that privacy policies might have certain words in them that nonprivacy policies do not, such as data protection and privacy, which are just some of the common words. With the NLP approach, you could say that the algorithm learns to recognize the difference between those two different types of documents.”
He added that searching and classifying privacy documentation without machine learning would be time consuming and difficult, if not impossible.
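For readers curious how this kind of word-based classification works in practice, the sketch below uses the scikit-learn library to train a simple classifier on a handful of invented example texts. It is not the team's actual model, only an illustration of the general approach Srinath describes.

```python
# Minimal sketch of keyword-based document classification with scikit-learn.
# This is NOT the PrivaSeer team's model; it only illustrates how a classifier
# can learn which words separate privacy policies from other web documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented for illustration): 1 = privacy policy, 0 = other text.
documents = [
    "We collect personal data and describe how your information is processed.",
    "Your privacy matters. This policy explains data protection and retention.",
    "Breaking news: the local team won the championship game last night.",
    "This recipe calls for two cups of flour and a pinch of salt.",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each document into word-frequency features; logistic regression
# then learns which words (e.g., "data", "privacy") signal a privacy policy.
classifier = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
classifier.fit(documents, labels)

print(classifier.predict(["This notice describes how we share your personal data."]))  # -> [1]
```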
Deeper insight into privacy documentation is needed because it is largely ignored by everyday users, according to Wilson.
“Most websites present you with information about their data practices and then you’re supposed to consent by actually going through and reading all of this information,” said Wilson. “But no one really does that because it’s not practical and it doesn’t fit into how people use the internet. People also typically don’t have the legal knowledge.”
The privacy policies were collected by the PrivaSeer search engine during two separate crawls of the web. A web crawl refers to systematically browsing the internet at large scale with a software program. The first crawl occurred in July 2019 and the second in February 2020.
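To make the idea of a web crawl concrete, the following sketch shows a simple crawler that starts from a seed page, fetches it, and follows the links it finds. It assumes the requests and beautifulsoup4 packages and a hypothetical seed URL; it is not the crawler used to build PrivaSeer.

```python
# Minimal sketch of a web crawl: fetch a page, collect its links, repeat.
# Assumes the requests and beautifulsoup4 packages; not the PrivaSeer crawler.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_url: str, max_pages: int = 10) -> list[str]:
    """Systematically browse pages starting from seed_url, returning visited URLs."""
    frontier = deque([seed_url])   # URLs waiting to be fetched
    visited = []                   # URLs already fetched

    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that fail to load
        visited.append(url)

        # Extract links from the page and add them to the frontier.
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            frontier.append(urljoin(url, anchor["href"]))

    return visited

# Example usage (hypothetical seed URL):
# pages = crawl("https://example.com", max_pages=5)
```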
The PrivaSeer database now consists of approximately 1.4 million English language website privacy policies.
“One thing that’s distinct about our database is we have the single largest snapshot in time of online privacy,” said Wilson.
Soundarya Nurani Sundareswara, a former graduate student in information sciences and technology who is now a software engineer at Apple, and C. Lee Giles, David Reese Professor in the College of Information Sciences and Technology, both of Penn State, worked with Wilson and Srinath on the project.
The team published their findings in the .