Big data, a.k.a. the analysis of very large data sets, is used to aid
communication practices every day. For example, Facebook uses
cookies and internet history to tailor ads to your specific interests, and the
Chicago police even use an algorithm to predict crime before it happens, yes… just
like Minority Report. But sometimes the use of big data can bring about an
unwanted outcome.
Employers have started prioritizing their search for new
talent, and they need to sift through thousands of resumes in a manageable
fashion. Programs are now used to build lists of potential employees from the
internet and to predict job success. This sounds well and good, but in reality
these programs have the potential to aid discrimination based on race,
sex, or other protected classes.
Bloomberg delves into this idea of formulaic discrimination, where employers focus their
criteria on job retention and performance. The problem arises when a program
is set to replicate the current workforce demographic by searching for
features that resemble those of the employer's top-rated performers. That type of search can end
up underrepresenting women, racial minorities, or other protected persons, as the
sketch below illustrates.
Suits over this kind of discrimination can be difficult. The employer may
be focused only on its efficiency criteria and may not even be aware of the
discriminatory effect. Another difficulty arises in discovery: because the
company created the program, its formulas may be shielded under trade secret
protection laws.
Programs only do what humans tell them to do, so
employers should account for the applicable laws and be clear about what they
want their programs to conclude. Using these programs to find that “perfect
match” employee should take into consideration all of the criteria relevant to hiring,
including non-discriminatory practices. One simple self-audit is sketched below.
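One benchmark an employer could apply to a program's output is the EEOC's “four-fifths rule” (29 C.F.R. § 1607.4(D)), under which a selection rate for any group that is less than 80% of the highest group's rate is generally regarded as evidence of adverse impact. The Python sketch below is a hypothetical illustration of that check, not legal advice; the group names and numbers are made up.

```python
# Hypothetical adverse-impact check using the EEOC's four-fifths rule:
# flag any group whose selection rate is under 80% of the highest group's rate.
def selection_rate(selected, applicants):
    return selected / applicants

groups = {
    # group: (number selected by the program, number of applicants)
    "group_a": (48, 100),
    "group_b": (24, 100),
}

rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
# group_b's ratio is 0.50, well under 0.8: a signal the employer should
# investigate before relying on the program's output.
```

A routine check like this costs almost nothing to run, and it addresses exactly the problem described above: an employer focused on efficiency criteria who would otherwise never notice the discriminatory effect.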