AI tools can automate most of the steps in the recruiting and hiring process, offering several potential benefits: improving the objectivity of recruitment, filtering out irrelevant applications, avoiding overlooking qualified candidates, filling vacant positions faster, and enhancing the candidate experience and employer brand. These gains in efficiency and accuracy make AI tools a boon for employers and job candidates alike. Alongside these potential benefits, however, lies a significant risk: biased algorithms can exclude underrepresented, minority, and marginalized groups or individuals from hiring consideration. One of the most impactful new regulations is New York City Local Law 144 (NYC LL 144), which regulates the use of “automated employment decision tools” (AEDTs) in recruitment and hiring. This article explains the regulatory obligations that organizations must now meet and how they can perform the mandatory algorithmic bias audit to comply with the new law.
NYC LL 144 prohibits employers from using an AEDT to evaluate candidates for jobs or employees for promotion unless (1) an independent third party has audited the tool for bias within one year of the tool's use, (2) information about the audit is made publicly available, and (3) employees and job candidates have been informed about the use of the tool. Clarifying the law, the New York City Department of Consumer and Worker Protection (DCWP) published proposed rules stating that (4) an independent auditor is a person or group responsible for conducting a bias audit of an AEDT who is not involved in using or developing that AEDT, and (5) at a minimum the audit must include (a) a calculation of the selection rate for each race, ethnicity, and sex category; and (b) a comparison of each category's selection rate to that of the most selected category to determine an “impact ratio,” to ensure there is no adverse impact on a particular group.
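The selection-rate and impact-ratio calculation described in (5) can be sketched in Python. The function name, data schema, and sample numbers below are illustrative assumptions, not taken from the law's text:

```python
from collections import Counter

def impact_ratios(candidates):
    """Selection rate and impact ratio per category.

    `candidates` is an illustrative list of (category, selected) pairs.
    Returns {category: (selection_rate, impact_ratio)}, where the impact
    ratio divides each rate by the rate of the most selected category.
    """
    totals = Counter(cat for cat, _ in candidates)
    chosen = Counter(cat for cat, sel in candidates if sel)
    rates = {cat: chosen[cat] / totals[cat] for cat in totals}
    top_rate = max(rates.values())  # rate of the most selected category
    return {cat: (rate, rate / top_rate) for cat, rate in rates.items()}

# Illustrative data: 100 White candidates (40 selected) and
# 80 Hispanic or Latino candidates (24 selected).
data = ([("White", True)] * 40 + [("White", False)] * 60
        + [("Hispanic or Latino", True)] * 24
        + [("Hispanic or Latino", False)] * 56)
ratios = impact_ratios(data)
# With these numbers the Hispanic or Latino selection rate is 0.30 against
# a top rate of 0.40, giving an impact ratio of about 0.75.
```

Under EEOC guidance, an impact ratio below four-fifths (0.8) is commonly treated as evidence of adverse impact, so the hypothetical group above would warrant scrutiny.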
The Equal Employment Opportunity Commission (EEOC) defines adverse impact as “a substantially different rate of selection in hiring, promotion, or other employment decision which works to the disadvantage of a race, sex, or ethnic group.” NYC LL 144 mandates an assessment of adverse impact for the following race/ethnicity categories: Hispanic or Latino, White, Black or African American, Native Hawaiian or Pacific Islander, Asian, and Native American or Alaska Native. It also mandates assessment of adverse impact on the sex categories of male and female. Finally, the law requires a bias audit of intersectional categories that combine the sex and ethnicity categories, e.g., the category of Hispanic or Latino females.
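The intersectional categories can be enumerated mechanically as every sex/ethnicity combination. A minimal sketch (the variable names are illustrative):

```python
from itertools import product

SEX_CATEGORIES = ["Male", "Female"]
ETHNICITY_CATEGORIES = [
    "Hispanic or Latino", "White", "Black or African American",
    "Native Hawaiian or Pacific Islander", "Asian",
    "Native American or Alaska Native",
]

# Every sex/ethnicity combination, e.g. "Female Hispanic or Latino"
INTERSECTIONAL_CATEGORIES = [
    f"{sex} {ethnicity}"
    for sex, ethnicity in product(SEX_CATEGORIES, ETHNICITY_CATEGORIES)
]
```

Each of the twelve combined labels is then treated as its own category when computing selection rates and impact ratios, exactly as for the individual sex and ethnicity categories.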
How To Complete the Bias Audit
Trilateral Research’s Automated AI Bias Audit Interface provides clients with a tailored tool for meeting the requirements of the NYC bias audit mandate. Created as a direct solution to NYC LL 144, the Automated AI Bias Audit Interface delivers a report that is fully compliant with all aspects of the law. To use the interface, clients follow these steps:
- Register and log in to the Automated AI Bias Audit Interface.
- Input the contact details of the HR manager or other authorized recipients of the Bias Audit Report.
- Input details of the AEDT tool being used.
- Upload your HR/recruitment data into the tool as a standalone dataset or ingest data from your current platform directly into our tool via API.
After following these steps, the tool generates a Bias Audit Report, which is sent automatically to the email address registered in Step 2. The image below is an example of an AI Bias Audit Report facilitating compliance with NYC LL 144.
Figure 1: Example of an AI Bias Audit Report for NYC LL 144
Our Automated AI Bias Audit Interface calculates and identifies bias in recruitment software to help organizations achieve compliance with the latest regulatory requirements imposed by NYC LL 144. With our tool at hand, organizations can better manage their compliance risks, meet regulatory obligations, and have greater confidence in the trustworthiness of their automated and/or AI-powered recruitment systems. Please get in touch with us to find out more.