Whenever we speak of technological innovation, it is common for the conversation to become unmoored from the present reality and instead assess the technology (or really, any change) based on how close or far it comes to some perceived ideal. And that of course is an entirely unrealizable goal.
The article begins:
Artificial intelligence (AI) and face recognition technology is being used for the first time in job interviews in the UK to identify the best candidates.

Yep. Beginning to happen everywhere.
The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of facial and linguistic information compiled from previous interviews of those who have gone on to prove to be good at the job.

But then there are the concerns:
However, academics and campaigners warned that any AI or facial recognition technology would inevitably have in-built biases in its databases that could discriminate against some candidates and exclude talented applicants who might not conform to the norm.

The academics seem to have a fair concern. But do they really?
“It is going to favour people who are good at doing interviews on video and any data set will have biases in it which will rule out people who actually would have been great at the job,” said Anna Cox, professor of human-computer interaction at UCL.
Of course the AI interviewing system is going to produce biased results. It is supposed to do so. It is supposed to be identifying those who are the most knowledgeable, proficient, capable and reliable. These are not randomly and normally distributed traits. There will obviously be relative degrees of unrepresentativeness based on age, class, education attainment, IQ, religion, ethnicity, personality type, cultural origin, etc.
The article elaborates on the how of the process.
“There are 350-ish features that we look at in language: do you use passive or active words? Do you talk about ‘I’ or ‘We.’ What is the word choice or sentence length? In doctors, you might expect a good one to use more technical language,” he said.

All seems above board. It is a new field. Not all the variables will end up being predictive, some of the weighting will be poor, etc.
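The kind of linguistic features described ("I" versus "we," sentence length, word choice) are straightforward to compute from a transcript. A minimal sketch, assuming a plain-text transcript; the feature names and thresholds are illustrative, not the vendor's actual 350-feature set:

```python
import re

def extract_language_features(transcript: str) -> dict:
    """Toy transcript features of the sort the article describes.
    Feature names here are illustrative assumptions, not HireVue's."""
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    first_person = sum(w in ("i", "me", "my") for w in words)
    collective = sum(w in ("we", "us", "our") for w in words)
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "i_vs_we_ratio": first_person / max(collective, 1),
    }
```

A real system would extract hundreds of such features and learn their weights from outcome data rather than hand-pick them.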
“Then we look at the tone of voice. If someone speaks really slowly, you are probably not going to stay on the phone to buy something from them. If someone speaks at 400 words a minute, people are not going to understand them. Empathy is a piece of that.”
The company says the technology is different to facial recognition and instead analyses expressions. Facial expressions assessed by the algorithms include brow furrowing, brow raising, eye widening or closing, lip tightening, chin raising and smiling, which are important in sales or other public-facing jobs.
“Facial expressions indicate certain emotions, behaviours and personality traits,” said Nathan Mondragon, Hirevue’s chief psychologist.
“We get about 25,000 data points from 15 minutes of video per candidate. The text, the audio and the video come together to give us a very clear analysis and rich data set of how someone is responding, the emotions and cognitions they go through.”
Candidates are ranked on a scale of one to 100 against the database of traits of previous “successful” candidates, with the process taking days rather than weeks or months, says the company. It claims one firm had a 15 per cent uplift in sales.
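The ranking step, stripped to its essentials, is a similarity comparison: each candidate's feature vector is scored against a profile built from previously successful hires. A toy sketch, assuming a simple distance-based score; the feature names and scoring rule are illustrative assumptions, not the vendor's method:

```python
from math import sqrt

def score_candidate(candidate: dict, successful_profile: dict) -> int:
    """Score a candidate 1-100 by closeness to the average feature
    profile of previously 'successful' hires (toy illustration)."""
    # Euclidean distance over the features both vectors share;
    # a smaller distance yields a higher score.
    keys = sorted(set(candidate) & set(successful_profile))
    dist = sqrt(sum((candidate[k] - successful_profile[k]) ** 2 for k in keys))
    return max(1, round(100 / (1 + dist)))  # 100 at zero distance, decaying

profile = {"speech_rate_wpm": 150.0, "smile_rate": 0.3, "i_vs_we_ratio": 1.0}
print(score_candidate({"speech_rate_wpm": 150.0, "smile_rate": 0.3,
                       "i_vs_we_ratio": 1.0}, profile))  # identical -> 100
```

Note that a sketch like this makes the bias question concrete: whatever is systematically true of the "successful" training cohort is baked into the profile the score rewards.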
And it seems to work. If you get a 15% lift in sales in a competitive market, that is gold.
Now, of course they are selling something. It is a new technology so we need to anticipate a lot of overclaiming. Enthusiasm will displace the gimlet eye.
But the naysayers are perhaps even worse. Wanting to stop progress because someone, somewhere, might be upset.
There is no doubt the AI will make some bad calls, selecting someone who should not have been selected and overlooking someone who should have been. It will also be the case that there will seem to be unintentional biases because some groups, however defined, will be over- and others under-represented.
But the question is not whether AI will make mistakes. The question is whether it will make fewer mistakes than the human process. It is safe to say that, through trial and error, it will end up making fewer and different errors. It will open more opportunities to some (individuals who are fully able but carry some unconsciously discriminated-against attribute) and fewer opportunities to others (individuals possessing attributes which are socially valued but unrelated to effectiveness).
It will be messy, and it will take a while, but AI will likely end up making a positive contribution to improved hiring outcomes.
Concerns that it will have its own hidden biases betray the ideological nature of much of the nay-saying left.
It also betrays the incoherence of such thinking. You cannot reconcile a belief in diversity and multiculturalism with the outcomes of an unbiased AI system. If you value diversity and multiculturalism, you will inherently see disparate outcomes arising from unbiased selection processes. Disparate inputs will lead to disparate outputs.
If a business needs employees with exceptional attention to detail, strong consistency of performance, and high adherence to promptness, those attributes will not be equally distributed across all demographics of age, class, education attainment, IQ, religion, ethnicity, personality type, cultural origin, etc. There will be patterns of correlation.
There will be patterns of disparate outcome.
Whether you see that as a problem or not is a moral and ideological issue. It is also a question as to whether you think humans are more likely to achieve completely non-prejudicial judgments or AI systems.
“I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day,” said Mr Larsen.

Indeed. Unprejudiced AI systems should be a god-send in circumventing unconscious prejudice. But bias-free AI systems will still deliver disparate impact outcomes.