External Stability Auditing to Test the Validity of Personality Prediction in AI Hiring

by Alene K. Rhea, et al.

Automated hiring systems are among the fastest-developing of all high-stakes AI systems. Among these are algorithmic personality tests that use insights from psychometric testing, and promise to surface personality traits indicative of future success based on job seekers' resumes or social media profiles. We interrogate the validity of such systems using the stability of the outputs they produce, noting that reliability is a necessary, but not a sufficient, condition for validity. Our approach is to (a) develop a methodology for an external audit of the stability of predictions made by algorithmic personality tests, and (b) instantiate this methodology in an audit of two systems, Humantic AI and Crystal. Crucially, rather than challenging or affirming the assumptions made in psychometric testing – that personality is a meaningful and measurable construct, and that personality traits are indicative of future success on the job – we frame our methodology around testing the underlying assumptions made by the vendors of the algorithmic personality tests themselves. In our audit of Humantic AI and Crystal, we find that both systems show substantial instability with respect to key facets of measurement, and so cannot be considered valid testing instruments. For example, Crystal frequently computes different personality scores if the same resume is given in PDF vs. in raw text format, violating the assumption that the output of an algorithmic personality test is stable across job-irrelevant variations in the input. Among other notable findings is evidence of persistent – and often incorrect – data linkage by Humantic AI.
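The PDF-vs-raw-text comparison described above can be illustrated with a minimal sketch of a facet-level stability check. The score dictionaries and the DiSC-style trait names below are hypothetical; neither Humantic AI nor Crystal exposes this interface, and the audit itself involves far more controlled data collection than shown here.

```python
def stability_report(scores_a, scores_b, tolerance=0.0):
    """Compare two sets of trait scores produced for the *same* candidate
    under a job-irrelevant input variation (e.g. PDF vs. raw-text resume).
    Returns the traits whose scores differ by more than `tolerance`,
    mapped to the pair of conflicting scores."""
    shared = scores_a.keys() & scores_b.keys()
    return {
        trait: (scores_a[trait], scores_b[trait])
        for trait in shared
        if abs(scores_a[trait] - scores_b[trait]) > tolerance
    }

# Hypothetical scores for one candidate, same resume in two formats.
pdf_scores = {"dominance": 62, "influence": 71,
              "steadiness": 40, "conscientiousness": 55}
txt_scores = {"dominance": 62, "influence": 58,
              "steadiness": 40, "conscientiousness": 49}

unstable = stability_report(pdf_scores, txt_scores)
# For a valid instrument, `unstable` should be empty: a job-irrelevant
# change of file format should not move any trait score.
```

A nonempty report on a format-only variation is exactly the kind of violation the audit documents for Crystal: the input change carries no job-relevant information, so any score movement is attributable to the instrument rather than the candidate.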

