Data Ethics Compliance
NYC AI Bias Audit Support for Employers
Preparing yourself for the NYC AI Bias Mandate
Starting April 15th, 2023, employers in NYC must comply with a new law if they use “automated employment decision tools” (AEDT). Local Law 144 (LL 144) states that all companies that wish to use artificial intelligence (AI) tools in their recruitment processes or for career-progression decisions must:

Commission an independent, third-party audit of the AEDT for bias with respect to race, ethnicity, and gender, conducted no more than one year before the tool is used

Communicate the outcome of the audit in a transparent way

Be transparent with job candidates and employees about the use of AEDTs
The law provides for enforcement by the NYC Corporation Counsel, including fines up to $1,500 per violation. Each day an unaudited tool is used constitutes a separate violation.
Why this affects you
Employers who use AI tools for hiring, recruiting, or evaluating an employee’s career progression in NYC are affected by this law. The employer – not the vendor who makes or sells the tool – is responsible for having a bias audit conducted prior to its deployment.
Trilateral Research can help you understand whether you are affected, and offers a comprehensive, independent audit to help your organization achieve compliance. We have years of experience in data ethics, data protection, and compliance support, and work with organizations of all sizes.

Our Independent Bias Audit Methodology
Get to know the data
Our team of data scientists, ethicists, and subject matter experts will assess the steps taken to collect, select, prepare, and process the data used to train the AEDT. At this stage, it is essential to determine whether the datasets include any special categories of data (e.g., data about vulnerable populations, or sensitive data such as medical records). We will examine the quality of the data, including completeness, correctness, and missing variables.
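For illustration, here is a minimal sketch of the kind of completeness and correctness summary this step produces, assuming the training data can be loaded into a pandas DataFrame; the function and column handling are our own illustration, not a prescribed audit procedure:

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise per-column completeness and cardinality."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),                    # format of each variable
        "missing_pct": (df.isna().mean() * 100).round(2),  # completeness
        "n_unique": df.nunique(),                          # constant or ID-like columns stand out
    }).sort_values("missing_pct", ascending=False)
```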
Look for explicit biases
We will examine whether an algorithm’s input data contains explicit bias or any protected characteristics. This step requires a subject matter expert who can identify biases corresponding to the hiring/recruiting context as well as a data scientist to examine the “gross” or “surface” properties of the acquired data (such as format and quantity), and evaluate whether the data satisfies the relevant requirements.
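As a simple illustration of the data-science side of this step, the sketch below flags columns whose names suggest a protected characteristic appears verbatim in the input data. The keyword list is an assumption for demonstration only; in practice, a subject matter expert defines what counts as protected in the hiring context:

```python
import pandas as pd

# Illustrative keyword list only; the real list is context-specific.
PROTECTED_KEYWORDS = {"race", "ethnicity", "sex", "gender", "age", "disability"}

def flag_explicit_protected_columns(df: pd.DataFrame) -> list[str]:
    """Return columns whose names suggest a protected characteristic."""
    return [col for col in df.columns
            if any(kw in col.lower() for kw in PROTECTED_KEYWORDS)]
```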
Look for hidden biases
Since bias can be hidden behind proxies, we will also examine correlations among diverse data points to check that selected features fit the overall objective of the algorithm and do not include unwanted correlations. Our team will identify the proxies for protected characteristics that could occur in the hiring context, and focus on data mining questions that concern patterns in the data (e.g., distribution of key attributes, relationships between pairs of attributes, properties of significant sub-populations, simple statistical analyses), through queries, visualization, and reporting techniques.
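One common screening technique for such hidden correlations is to measure the statistical association between each candidate feature and a protected attribute. The sketch below uses Cramér’s V for categorical variables; the threshold and column names are illustrative assumptions, not values prescribed by LL 144:

```python
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association strength between two categorical variables, in [0, 1]."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    return (chi2 / (n * (min(table.shape) - 1))) ** 0.5

def screen_for_proxies(df: pd.DataFrame, protected_col: str,
                       candidate_cols: list[str], threshold: float = 0.3) -> dict:
    """Flag features strongly associated with a protected attribute.

    The 0.3 threshold is an assumed screening cut-off, not a legal standard.
    """
    scores = {col: cramers_v(df[col], df[protected_col]) for col in candidate_cols}
    return {col: round(v, 3) for col, v in scores.items() if v >= threshold}

# e.g. screen_for_proxies(applicants, "gender", ["zip_code", "education", "last_name"])
```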
Assess operational accuracy
We will assess the operational accuracy of the algorithm. This involves a close examination of how the algorithmic output feeds into human judgment and the employer’s decision-making processes, resulting in an assessment of whether any operational biases could emerge with the use of the tool.
Calculate selection rate and impact ratio
Finally, using the information gathered in the previous steps, we will calculate the selection rate and impact ratio for each protected category, including an assessment of intersectional impact, as required by the law.
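To make the calculation concrete: the selection rate is the proportion of candidates in a category who are selected, and the impact ratio divides each category’s selection rate by the highest rate of any category. Below is a minimal sketch, assuming one row per candidate with a binary selected column; column names are illustrative:

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, category_cols: list[str],
                  selected_col: str = "selected") -> pd.DataFrame:
    """Selection rate per category, and each rate divided by the highest rate.

    Pass two columns (e.g. sex and race/ethnicity) for the intersectional view.
    """
    rates = df.groupby(category_cols)[selected_col].mean().rename("selection_rate")
    out = rates.to_frame()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    return out.sort_values("impact_ratio")

# Per-category view:   impact_ratios(applicants, ["race_ethnicity"])
# Intersectional view: impact_ratios(applicants, ["sex", "race_ethnicity"])
```

An impact ratio well below 1.0 for a group is a common signal of possible adverse impact; the EEOC’s four-fifths rule, for example, uses 0.8 as a rule-of-thumb benchmark.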
Fulfill transparency requirements
We will issue a report and summary, as required by the law, and advise you on how to comply with its transparency and notification requirements.
FAQs
When does the law take effect?
April 15th, 2023.
How does bias become embedded in an AI tool?
Bias can become embedded in an algorithm when the model is trained on historical data that reflects past discriminatory decisions, policies, and procedures. For example, if, from 1970 to 2000, a company primarily hired men or disproportionately promoted men to senior positions, and an algorithm is trained on a dataset of the company’s past hires and promotions to predict what makes a successful candidate, then the algorithm “learns” that being a man is a qualification for hiring or promotion.
Also, datasets can include “proxies” for protected characteristics like race, ethnicity, sex, or gender. A proxy, in this sense, is a substitute for one of these categories, and the presence of proxies can be difficult to identify. Studies reveal that ZIP code, ancestry, disease predisposition, linguistic characteristics, last name, criminal record, socioeconomic status, marital status, education, and occupation can be proxies for a person’s sex, gender, race and ethnicity. If an AI tool learns that these proxies are relevant for hiring, recruiting, or promoting, then its output can lead to discriminatory decision-making.
What must a bias audit include?
Local Law 144 requires that an audit include: (1) a calculation of the selection rate for each race/ethnicity and sex category; and (2) a calculation of the “impact ratio” for each category. The proposed rules also indicate that an intersectional analysis must be conducted. This means analysing the impact ratio for ethnicity and sex combined (e.g., how African-American women are impacted compared to Hispanic men), in addition to each protected category independently.
Are there transparency and notice requirements?
Yes. The date of the bias audit and a summary of the results must be made public. Additionally, employers must provide at least 10 business days’ notice to candidates and employees who reside in New York City that an AEDT will be used in connection with a given assessment or decision.
What are the penalties for non-compliance?
Employers can be liable for a civil penalty of $500 for a first violation and for each additional violation occurring on the same day, and up to $1,500 for each subsequent violation (including failures to comply with the transparency and notice requirements).

Why Trilateral Research?

Our Data Protection and Cyber-risk services consistently receive positive evaluations from our clients. We have a 100% renewal rate on multi-year contracts, alongside repeat business and referrals. To find out more, please contact our team.
In the crowded data-protection market, it’s hard to find real experts. We provide research-driven, evidence-based advice and solutions tailored to your organization. Our team combines legal, cybersecurity, technology, and social-science experts to address both the technical and organizational aspects of compliance.
We focus on transparency and accountability to develop a human-centred, tailored approach. Our ethics experts hold PhDs in applied ethics and have more than 15 years of experience operationalising ethics.