3D4ALL: Toward an Inclusive Pipeline to Classify 3D Contents

02/24/2021
by Nahyun Kwon, et al.

Algorithmic content moderation manages the explosive volume of user-created content shared online every day. Although a massive number of 3D designs can be freely downloaded, shared, and 3D printed by users, detecting sensitive content with transparency and fairness remains controversial. Sensitive 3D content may have a greater impact than other media because it can be reproduced and replicated without restriction, yet prevailing unawareness has led to the proliferation of sensitive 3D models online and a lack of discussion on transparent and fair 3D content moderation. Because 3D content exists on the web as a document consisting mainly of text and images, we first review existing algorithmic efforts based on text and images, as well as prior work on incorporating transparency and fairness into moderation, both of which can also be useful in the 3D printing domain. At the same time, we identify 3D-specific features that must be addressed to advance 3D-specialized algorithmic moderation. As a potential solution, we suggest a human-in-the-loop pipeline using augmented learning, powered by stakeholders with different backgrounds and perspectives on the content. Our pipeline aims to minimize personal biases by enabling diverse stakeholders to voice the various factors that shape how content is interpreted. We also present an initial proposal for redesigning the metadata of open 3D repositories, to prompt users to responsibly obtain consent from the subject before sharing content freely in public spaces.
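As a rough illustration of the human-in-the-loop flow the abstract describes, the Python sketch below (with hypothetical names such as `ModelEntry`, `text_sensitivity_score`, and the reviewer callables; the paper does not publish code) scores a 3D model's text and image metadata automatically, auto-handles clear-cut cases, and escalates ambiguous ones to a panel of reviewers whose votes are aggregated. The `subject_consent` field is an assumed stand-in for the proposed metadata redesign.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, List

# Hypothetical metadata record for an open 3D repository entry.
# `subject_consent` reflects the proposal that sharers confirm consent
# from the subject depicted by the model before publishing it.
@dataclass
class ModelEntry:
    title: str
    description: str
    image_tags: List[str]
    subject_consent: bool = False

# Placeholder automated scorers (stand-ins for text/image classifiers).
def text_sensitivity_score(entry: ModelEntry) -> float:
    flagged = {"weapon", "gun", "explicit"}
    words = set((entry.title + " " + entry.description).lower().split())
    return 1.0 if flagged & words else 0.0

def image_sensitivity_score(entry: ModelEntry) -> float:
    flagged = {"nsfw", "firearm"}
    return 1.0 if flagged & {t.lower() for t in entry.image_tags} else 0.0

def moderate(entry: ModelEntry,
             reviewers: List[Callable[[ModelEntry], float]],
             low: float = 0.2, high: float = 0.8) -> str:
    """Auto-decide clear cases; escalate ambiguous ones to human reviewers."""
    auto = mean([text_sensitivity_score(entry), image_sensitivity_score(entry)])
    if auto >= high:
        return "blocked"
    if auto <= low and entry.subject_consent:
        return "published"
    # Ambiguous score or missing consent: aggregate judgments from
    # stakeholders with different backgrounds so that no single
    # perspective dominates the decision.
    votes = [review(entry) for review in reviewers]
    return "blocked" if mean(votes) >= 0.5 else "published"

# Usage: two hypothetical reviewers with different sensitivities.
if __name__ == "__main__":
    entry = ModelEntry("toy replica", "a prop for cosplay", ["plastic"], True)
    lenient = lambda e: 0.0
    cautious = lambda e: 1.0 if "replica" in e.title else 0.0
    print(moderate(entry, [lenient, cautious]))
```

The threshold routing keeps reviewer load low, while the vote aggregation over diverse reviewers is one simple way to reflect the paper's goal of reducing individual bias; the real pipeline would need richer classifiers, reviewer weighting, and an audit trail.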

research
03/17/2019

Impact of Contextual Factors on Snapchat Public Sharing

Public sharing is integral to online platforms. This includes the popula...
research
09/11/2019

FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency

Machine learning algorithms can take important decisions, sometimes lega...
research
09/08/2022

FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems

Predictive systems, in particular machine learning algorithms, can take ...
research
03/13/2023

Addressing Biases in the Texts using an End-to-End Pipeline Approach

The concept of fairness is gaining popularity in academia and industry. ...
research
11/08/2021

Creative Compensation (CC): Future of Jobs with Creative Works in 3D Printing

With the continuous growth of online 3D printing community and the democ...
research
04/02/2021

Fairness in Network-Friendly Recommendations

As mobile traffic is dominated by content services (e.g., video), which ...
research
04/10/2017

Matching Media Contents with User Profiles by means of the Dempster-Shafer Theory

The media industry is increasingly personalizing the offering of content...
