So you’ve submitted your resume and cover letter – now what?
When a pre-employment assessment from the employer lands in your inbox, remember that many firms use hiring algorithms. You might be wondering: why would this company rather use some sort of automated screening process to review my responses? Simply put, to save both time and money.
So how are these algorithms created, and what do employers do with the data? And are they really as unbiased as vendors claim?
The words “fairness” and “unbiased” are, unfortunately, very vague. Many of the law firms you may be applying to use third-party vendors to provide their screening services, and there is no way of knowing whether your application was rejected by a biased process.
So we sit here scratching our heads and asking ourselves, “Do hiring algorithms hurt or help our chances of getting jobs?” There seems to be no clear answer.
Screening the talent pool
At the end of the day, hiring is composed of multiple steps and needs human involvement. But a hiring algorithm might be at work well before you even send an application. That’s right: the ad you clicked that notified you about the job opening might have been part of the hiring algorithm as well.
Using these services helps filter out candidates and simplifies the jobs of hiring managers. However, this artificial intelligence can also pick up patterns among applicants with similar interests or characteristics, and because of this, it may start narrowing its reach toward a specific audience. That narrowing would be bias, even if unintentional.
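To make the idea concrete, here is a minimal, purely hypothetical sketch of how an automated filter can narrow toward a specific profile. The keywords, cutoff, and applicants are invented for illustration; no vendor’s actual algorithm is this simple.

```python
# Hypothetical screening filter (illustrative only, not a real vendor's
# algorithm). It scores applicants by counting keyword matches in their
# application text and keeps only those above a cutoff.

REQUIRED_KEYWORDS = {"litigation", "contracts", "research", "negotiation"}

def score(application_text: str) -> int:
    """Count how many required keywords appear in the application."""
    words = set(application_text.lower().split())
    return len(REQUIRED_KEYWORDS & words)

def screen(applicants: dict, cutoff: int = 2) -> list:
    """Return names of applicants whose score meets the cutoff."""
    return [name for name, text in applicants.items()
            if score(text) >= cutoff]

applicants = {
    "Avery": "Experienced in litigation and contracts research",
    "Blake": "Passionate team player with strong communication",
}
print(screen(applicants))  # → ['Avery']
```

Notice that Blake is screened out not for lacking ability, but for describing it in words the filter was never built to value: a mechanical version of the narrowing described above.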
There have been times when this scenario was flipped. For example, certain companies have used this technology to uncover hiring “blind spots” or to reach passive candidates.
So as applicants start entering the funnel, employers need to filter out the strongest candidates. To do this, they often challenge the candidates with a variety of questions scored by machine learning.
Have you ever taken one of those pre-employment assessments after submitting a job application? Sometimes it asks you questions about your personality or how you would handle certain situations. It can be incredibly daunting to be assessed in such a non-human way.
Or how about a video interview hosted on a platform that uses AI to screen candidates? It is incredibly difficult to determine whether these assessments were designed to account for every race, gender, religion, disability, and age. If a candidate has a speech impediment, for example, this “service” could hurt their chances of moving forward in the hiring process.
On the other hand, it could also be perceived as the opposite. Maybe AI thrives in the areas where people have failed. AI cannot be biased by your mannerisms or appearance, and the algorithm can also detect hiring patterns at a company that depart from inclusivity and diversity. So perhaps this mysterious algorithm could help bridge the gap toward a more equitable society.
Regulating an algorithm
Although the U.S. has laws that constrain the use of predictive hiring tools, it is still difficult to determine whether these practices are fair. So can they be regulated?
Employers should not use this technology to replace human screening. Instead, it should supplement the hiring process, especially in detecting biased behavior. Many companies are not as transparent about their hiring practices as we all wish they would be. However, this should not discourage you from applying and taking those pre-employment assessments. Just keep in mind that more often than not, your qualifications are measured not by a person, but by an algorithm.