Pseudonymization at Scale: OLCF's Summit Usage Data Case Study

by Ketan Maheshwari, et al.

The analysis of vast amounts of data and the processing of complex computational jobs have traditionally relied upon high performance computing (HPC) systems. Understanding the needs of these analyses is paramount for designing solutions that can lead to better science, and similarly, understanding the characteristics of user behavior on those systems is important for improving user experiences on HPC systems. A common approach to gathering data about user behavior is to analyze system log data available only to system administrators. Recently at the Oak Ridge Leadership Computing Facility (OLCF), however, we unveiled user behavior on the Summit supercomputer by collecting data from a user's point of view with ordinary Unix commands. Here, we discuss the process, challenges, and lessons learned while preparing this dataset for publication and submission to an open data challenge. The original dataset contains personally identifiable information (PII) about OLCF users which needed to be masked prior to publication, and we determined that anonymization, which scrubs PII completely, destroyed too much of the structure of the data to be interesting for the data challenge. We instead chose to pseudonymize the dataset to reduce its linkability to users' identities. Pseudonymization is significantly more computationally expensive than anonymization, and the size of our dataset, approximately 175 million lines of raw text, necessitated the development of a parallelized workflow that could be reused on different HPC machines. We demonstrate the scaling behavior of the workflow on two leadership-class HPC systems at OLCF, and we show that we were able to bring the overall makespan time from an impractical 20+ hours on a single node down to around 2 hours. As a result of this work, we release the entire pseudonymized dataset and make the workflows and source code publicly available.
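The core idea behind pseudonymization, as opposed to anonymization, is that identifiers are replaced with stable tokens: the same user always maps to the same pseudonym, so the structure of the data (who did what, and how often) survives, while direct linkability to real identities is reduced. A minimal sketch of one common approach, using a keyed HMAC so the mapping cannot be reversed without the secret key, is shown below. This is an illustrative example only, not the authors' actual implementation; the key name and token format are assumptions.

```python
import hmac
import hashlib

# Assumption for illustration: a secret key held separately from the
# published dataset. Without it, pseudonyms cannot be linked back to users.
SECRET_KEY = b"store-this-key-outside-the-dataset"

def pseudonymize(username: str) -> str:
    """Deterministically map a username to a stable pseudonym.

    The same input always yields the same token, preserving the
    dataset's structure, while different users get distinct tokens.
    """
    digest = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return "user_" + digest[:12]

# Example: masking a raw log line from a hypothetical `who`-style record.
line = "jsmith   pts/0   2023-01-01 10:42"
masked = line.replace("jsmith", pseudonymize("jsmith"))
```

Because the mapping is deterministic, it can be applied independently to chunks of the 175-million-line dataset in parallel, which is what makes a parallelized workflow like the one described above feasible.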

