Fighting Bias With Board Games : Code Switch : NPR

Interesting and innovative approach:

Quick, think of a physicist.

If you’re anything like me, you probably didn’t have to think very hard before the names Albert Einstein and Isaac Newton popped up.

But what if I asked you to think of a female physicist? What about a black, female physicist?

You may have to think a bit harder about that. For years, mainstream accounts of history have largely ignored or forgotten the scientific contributions of women and people of color.

This is where Buffalo — a card game designed by Dartmouth College’s Tiltfactor Lab — comes in. The rules are simple. You start with two decks of cards. One deck contains adjectives like Chinese, tall or enigmatic; the other contains nouns like wizard or dancer.

Draw one card from each deck, and place them face up. And then all the players race to shout out a real person or fictional character who fits the description.

So say you draw “dashing” and “TV show character.”

You may yell out “David Hasselhoff in Knight Rider!”

“Female” and “Olympian?”

Gabby Douglas!

Female physicist?

Hmm. If everyone is stumped, or “buffaloed,” you draw another noun and adjective pair and try again. When the decks run out, the player who has made the most matches wins.

It’s the sort of game you’d pull out at dinner parties when the conversation lulls. But the game’s creators say it’s good for something else — reducing prejudice. By forcing players to think of people who buck stereotypes, Buffalo subliminally challenges those stereotypes.

“So it starts to work on a conscious level of reminding us that we don’t really know a lot of things we might want to know about the world around us,” explains Mary Flanagan, who leads Dartmouth College’s Tiltfactor Lab, which makes games designed for social change and studies their effects.

Buffalo might nudge us to get better acquainted with the work of female physicists, “but it also unconsciously starts to open up stereotypical patterns in the way we think,” Flanagan says.

In one of many tests she conducted, Flanagan rounded up about 200 college students and assigned half to play Buffalo. After one game, the Buffalo players were slightly more likely than their peers to strongly agree with statements like, “There is potential for good and evil in all of us,” and, “I can see myself fitting into many groups.”

Students who played Buffalo also scored better on a standard psychological test for tolerance. “After 20 minutes of gameplay, you’ve got some kind of measurable transformation with a player — I think that’s pretty incredible,” Flanagan says.

Buffalo isn’t Flanagan’s only bias-busting game. Tiltfactor makes two others called “Awkward Moment” and “Awkward Moment At Work.” They’re designed to reduce gender discrimination at school and in the workplace, respectively.

“I’m really wary of saying things like, ‘Games are going to save the world,’” Flanagan says. But she adds, “it’s a serious question to look at how a little game could try to address a massive, lived social problem that affects so many individuals.”


Scientists have tried all sorts of quick-fix tactics to train away racism, sexism and homophobia. In one small study, researchers at Oxford University even looked into whether Propranolol, a drug that’s normally used to reduce blood pressure, could ease away racist attitudes. Unsurprisingly, it turns out that there is no panacea capable of curing bigotry.

There are, however, good reasons to get behind the idea that games or any other sort of entertainment can change the way we think.

“People aren’t excited about showing up to diversity trainings or listening to people lecture them. People don’t generally want to be told what to think,” explains Betsy Levy Paluck, a professor of psychology at Princeton University who studies how media can change attitudes and behaviors. “But people like entertainment. So, just on a pragmatic basis, that’s one reason to use it to teach.”

There’s a long history of using literature, music and TV shows to encourage social change. In a 2009 study, Paluck found that a radio soap opera helped bridge the divides in post-genocide Rwanda. “We know that various forms of pop culture and entertainment help reduce prejudice,” Paluck says. “In terms of other types of entertainment — there’s less research. We’re still finding out whether and how something like a game can help.”

Anthony Greenwald, a psychologist at the University of Washington who has dedicated his career to studying people’s deep-seated prejudices, is skeptical. He says that, like Flanagan, several well-intentioned researchers have shown that a handful of interventions — including thought exercises, writing assignments and games — can indeed reduce prejudice for a short period of time. But, “these desired effects generally disappear rapidly. Very few studies have looked at the effects even as much as one day later.”

After all, how can 20 minutes of anything dislodge attitudes that society has pounded into our skulls over a lifetime?

Flanagan says her lab is still looking into that question, and hopes to conduct more studies in the future that track long-term effects. “We do know that people play games often. If it really is a good game, people will return to it. They’ll play it over and over again,” Flanagan says. Her philosophy: maybe a game a day can help us keep at least some of our prejudices away.

via Fighting Bias With Board Games : Code Switch : NPR


ICYMI – Black job seekers have harder time finding retail and service work than their white counterparts, study suggests | Toronto Star

Interesting study:

Black applicants may have a harder time finding an entry level service or retail job in Toronto than white applicants with a criminal record, a new study has found.

For a city that claims to be multicultural, the results were “shocking,” said Janelle Douthwright, the study’s author, who recently graduated with a Master of Arts in Criminology and Socio-Legal Studies from the University of Toronto.

Douthwright read a similar study from Milwaukee, Wis., during her undergraduate courses and she was “floored” by the findings.

“I thought there was no way this would be true here in Toronto,” she said.

She pursued her graduate studies to find out.

Douthwright created four fictional female applicants and submitted their resumes for entry level service and retail positions in Toronto over the summer.

She gave two of the applicants Black-sounding names — Khadija Nzeogwu and Tameeka Okwabi — and gave one a criminal record. The Black applicants also listed participation in a Black or African student association on their resumes.

She gave the two other applicants white-sounding names — Beth Elliot and Katie Foster — and also gave one of them a criminal record. The candidates with criminal records indicated in their cover letters that they had been convicted of summary offences, which are often less serious crimes.


Both Black applicants applied to the same 64 jobs and the white applicants applied to another 64 jobs.

Douthwright explained that she didn’t submit all four applications to the same jobs: the applications for the two candidates with criminal records, and for the two without, were almost identical apart from the elements she used to indicate race, so submitting them all for the same jobs might have aroused suspicion among the employers.

Though the resumes were nearly identical — each applicant had a high school education and experience working as a hostess and retail sales associate — the white applicant who didn’t have a criminal record received the most callbacks by far.


Of the 64 applications, the white applicant with no criminal record received 20 callbacks, a callback rate of 31.3 per cent. The white applicant with a criminal record received 12 callbacks, a callback rate of 18.8 per cent.

The Black applicant with no criminal record, meanwhile, received seven callbacks, a rate of 10.9 per cent. The Black applicant with a criminal record received just one callback out of 64 applications, a rate of 1.6 per cent.
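(An aside from me, not from the study: here is a quick sketch of how those callback rates, and the relative gaps between them, fall out of the raw counts reported above. The article rounds to one decimal place.)

```python
# Quick sketch (my own illustration, not from the study) of the callback arithmetic above.
# Counts come directly from the article: 64 applications per candidate.
APPLICATIONS = 64

callbacks = {
    "white, no record": 20,
    "white, record": 12,
    "Black, no record": 7,
    "Black, record": 1,
}

baseline = callbacks["white, no record"] / APPLICATIONS  # 31.25%

for profile, n in callbacks.items():
    rate = n / APPLICATIONS
    gap = 1 - rate / baseline  # how far below the white/no-record rate
    print(f"{profile}: {rate * 100:.2f}% callback rate, {gap * 100:.0f}% below baseline")
```

The “percent below baseline” figure is the same kind of relative comparison the Policy Options study further down uses when it reports that applicants received “X percent fewer” callbacks.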

Lorne Foster, a professor in the Department of Equity Studies at York University, said Douthwright’s study bolsters the thesis that “the workplace is discriminatory on a covert level.”

“We have a number of acts that protect us against discrimination and many people think that because of that strong infrastructure discrimination is gone,” he said.

That’s not the case. “Implicit” or unconscious bias is a persistent issue.

“All of these implicit biases are automatic, they’re ambivalent, they’re ambiguous, and they’re much more dangerous than the old-fashioned prejudices and discrimination that used to exist because they go undetected but they have an equally destructive impact on people’s lives,” Foster said.

“It’s an invisible and tasteless poison and it’s difficult to eliminate.”

Individual employers, he said, should take a proactive approach to ensure their hiring practices are inclusive, or at least adhere to the human rights code, by testing and challenging their processes to uncover any hidden prejudices.

He pointed to the Windsor Police Service, who shifted their hiring practices when they discovered their existing process was excluding women, as an example.

They were one of the first services to do a demographic scan of who works for them, said Foster, who worked on a human rights review of the service.

Through that process they realized there was a “dearth” of female officers, and that the original process, which involved a number of physical tests “where there was all this male testosterone flying around,” was deterring women from attending the sessions.

In response they organized a series of targeted recruitment sessions and were able to hire five new women at the end of that process, Foster said.

“We all need to be vigilant about our thoughts about other people, our hidden biases and images of them,” he said.

via Black job seekers have harder time finding retail and service work than their white counterparts, study suggests | Toronto Star

Bias creeps into reference checks, so is it time to ditch them?

Hadn’t thought of this aspect of bias in reference checks. When hiring in government, I was always conscious of the selection bias in the references submitted – people generally do not submit negative references! When asked if I was willing to be a reference, I would flag any issues that I would have to include in the reference:

As much as we’d like to think we’ve refined the hiring process over the years to carefully select the best candidate for the job, bias still creeps in.

Candidates who come from privileged backgrounds are more able to source impressive, well-connected referrers and this perpetuates the cycle of privilege. While the referrer’s reputation and personal clout make up one aspect of the recommendation, what they actually say – the content – completes the picture.

Research shows gender bias even invades the content of recommendations. In this study, female applicants for post-doctoral research positions in the field of geoscience were only half as likely as their male counterparts to receive excellent (as opposed to just good) endorsements from their referees. Since it’s unlikely that, across the 1,200 recommendation letters analysed, the female candidates were simply less excellent than the male candidates, something else must be going on.

A result like this may be explained by the gender-role-conforming adjectives used to describe female versus male applicants. Women are more likely to be observed and described as “nurturing” and “helpful”, whereas men are attributed with stronger, more competence-based words like “confident” and “ambitious”. This can, in turn, lead to stronger recommendations for male candidates.

Worryingly, in another study similar patterns emerged in the way black versus white, and female versus male, medical students were described in performance evaluations. These were used as input to select residents.

In both cases the members of minority groups were described using less impressive words (like “competent” versus “exceptional”), a pattern that was observed even after controlling for licensing examination scores, an objective measure of competence.

Recommendations aren’t good predictors of performance

Let’s put the concerns about bias aside for a moment while we examine an even bigger question: are recommendations actually helpful, valid indicators of future job performance or are they based on outdated traditions that we keep enforcing?

Even back in the 90s, researchers were trying to alert hiring managers to the ineffectiveness of references as a selection tool, noting some major problems.

The first problem is leniency: referees are chosen by the candidate and tend to be overly positive. The second is too little knowledge of the applicant, as referees are unlikely to see all aspects of a prospective employee’s work and personal character.

Reliability is another problem. It turns out there is higher agreement between two letters written by the same referee for different candidates, than there is for two letters (written by two different referees) for the same candidate!

There is evidence that people behave in different ways when they are in different situations at work, which would reasonably lead to different recommendations from various referees. However, the fact that there is more consistency between what referees say about different candidates than between what different referees say about the same candidate remains a problem.

The alternatives to the referee

There are a few initiatives currently being used as alternatives to standard recruitment processes. One example is gamification – where candidates play spatial awareness or other job-relevant games to demonstrate their competence. Deloitte, for example, has teamed up with software developer Arctic Shores for a fresh take on recruitment, in an attempt to move away from more traditional methods.

However, gamification is not without its flaws – these methods would certainly favour individuals who are more experienced with certain kinds of video games, and gamers are more likely to be male. So it’s a bit of a catch-22 for recruiters who are introducing bias through a process designed to try to eliminate bias.

If companies are serious about overcoming potential bias in recruitment and selection processes, they should consider addressing gender, racial, economic and other forms of inequality. One way to do this is to broaden the recruitment pool by making sure the language used in position descriptions and job ads is more inclusive. Employers can indicate that flexible work options are available and make the decision to choose minority candidates when they are just as qualified as other candidates.

Another option is to increase the diversity of the selection committee to add some new perspectives to previously homogeneous committees. Diverse selectors are more likely to speak up about and consider the importance of hiring more diverse candidates.

Job seekers could even try running a letter of reference through software, such as Textio, that reports gender bias in pieces of text and provides gender-neutral alternatives. But just as crucial is the need for human resources departments to start looking for more accurate mechanisms to evaluate candidates’ competencies.
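To make the idea concrete, here is a rough sketch of the kind of check such software performs. This is not Textio’s actual method, just a naive word-list illustration built from the gender-coded adjectives mentioned earlier (plus a few similar words of my own); real tools use much larger phrase lists and statistical models.

```python
# Naive illustration of flagging gendered language in a reference letter.
# NOT how Textio works internally; just a toy word-list check built from the
# adjectives discussed in the research above, plus a few assumed examples.
import re

COMMUNAL_WORDS = {"nurturing", "helpful", "caring", "warm"}        # stereotypically "female-coded"
AGENTIC_WORDS = {"confident", "ambitious", "assertive", "driven"}  # stereotypically "male-coded"

def flag_gendered_language(letter: str) -> dict:
    """Count communal vs. agentic adjectives found in a reference letter."""
    words = re.findall(r"[a-z']+", letter.lower())
    return {
        "communal": sorted(w for w in words if w in COMMUNAL_WORDS),
        "agentic": sorted(w for w in words if w in AGENTIC_WORDS),
    }

sample = "She is a nurturing and helpful colleague, always confident in her analysis."
print(flag_gendered_language(sample))
# {'communal': ['helpful', 'nurturing'], 'agentic': ['confident']}
```

The point is only that the check itself is mechanical; deciding what to do with the flagged wording is still up to the referee or the hiring committee.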

via Bias creeps into reference checks, so is it time to ditch them?

People Suffer at Work When They Can’t Discuss the Racial Bias They Face Outside of It

Interesting HBR-published study by Sylvia Ann Hewlett, Melinda Marshall and Trudy Bourgeois on the link between racial bias outside the workplace and the work environment:

Last month, in an unprecedented show of solidarity, 150 CEOs from the world’s leading companies banded together to advance diversity and inclusion in the workplace and, through an online platform, shared best practices for doing so. To drive home the urgency, the coalition’s website, CEOAction.com, directs visitors to research showing that diverse teams and inclusive leaders unleash innovation, eradicate groupthink, and spur market growth. But as Tim Ryan, U.S. Chair and senior partner at PwC and one of the organizers of the coalition, explains, what galvanized the group was widespread recognition that “we are living in a world of complex divisions and tensions that can have a significant impact on our work environment” — and they need to be openly addressed.

At the Center for Talent Innovation, we wanted to look into these suspicions. Do the political, racial, and social experiences that divide us outside of work undermine our contributions on the job? Our nationwide survey of 3,570 white-collar professionals (374 black, 2,258 white, 393 Asian, and 395 Hispanic) paints an unsettling landscape: For black, Asian, and Hispanic professionals, race-based discrimination is rampant outside the workplace. Black individuals are especially struggling, as fully 78% of black professionals say they’ve experienced discrimination or fear that they or their loved ones will — nearly three times as many as white professionals.

But 38% of black professionals also feel that it is never acceptable at their companies to speak out about their experiences of bias — a silence that makes them more than twice as vulnerable to feelings of isolation and alienation in the workplace. Black employees who feel muzzled are nearly three times as likely as those who don’t to have one foot out the door, and they’re 13 times as likely to be disengaged.


The response, at most organizations, is no response. Leaders don’t inquire about coworkers’ life experiences; they stay quiet when headlines blare reports of racial violence or videos capture acts of blatant discrimination. Their silence is often born of a conviction that race, like politics, is best discussed elsewhere.

But as evidenced by the formation of the coalition and the initiatives we captured in our report, that attitude is shifting. Conscious that breaking the silence begins with their own example, captains of industry are talking about race, both internally with their employees and externally with the public. After a spate of shootings of unarmed black men last summer, Ryan initiated a series of discussion days to ensure that all employees at PwC better understood the experiences of their black colleagues. Michael Roth, CEO of Interpublic Group, issued an enterprise-wide email imploring coworkers to “connect, affirm our commitment to one another, and acknowledge the pain being felt in so many of our communities.” Bernard Tyson, CEO of Kaiser Permanente, published an essay in which, in a plea for empathy, he shared his own experiences of discrimination. And in an emotional recounting of his black friend’s experience outside the office that went viral on YouTube, AT&T chairman Randall Stephenson encouraged employees to get to know each other better.

Leaders who display this kind of courage don’t always see immediate rewards, but in the long term, our research suggests that the payoff could be extraordinary. Of those who are aware of companies responding to societal incidents of racial discrimination, robust majorities of black (77%), white (65%), Hispanic (67%), and Asian (83%) professionals say they view those companies in a more positive way. Interviews with employees at firms like Ernst & Young point to stronger bonds forged between team leaders and members as a result of guidelines disseminated to managers on how to have a trust-building conversation. Town halls at New York Life with members of the C-suite and black executives have likewise paved pathways for greater understanding across racial and political divides.

Source: People Suffer at Work When They Can’t Discuss the Racial Bias They Face Outside of It

Babies show racial bias at nine months, U of T study suggests

A pair of interesting studies, with some caveats by other researchers:

Two new University of Toronto studies suggest racial bias can develop in babies at an early age — before they’ve even started walking.

Led by the school’s Ontario Institute of Child Study professor Kang Lee, in partnership with researchers from the U.S., U.K., France, and China, the studies examined how infants react to individuals of their own race, compared to individuals of another race.

“The goal of the study was to find out at which age infants begin to show racial bias,” Lee said. “With existing studies, the evidence shows that kids show bias around 3 or 4 years of age. We wanted to look younger.”

The first study looked at 193 Chinese infants from three to nine months old, recruited from a hospital in China, who hadn’t had direct contact with people of other races. The babies were then shown videos of six Asian women and six African women, paired with either happy or sad music.

The study found that infants from three to six months old didn’t associate sad or happy music with people of the same race or of other races, which indicates they “are not biologically predisposed to associate own- and other-race faces with music of different emotional valence.”

However, at around nine months old, the reactions were different.

According to the study, nine-month-old babies looked longer at own-race faces paired with happy music, and at other-race faces paired with sad music. Lee says this supports the hypothesis that infants associate people of the same race with happy music, and people of other races with sad music.

That’s not to say parents are teaching their children to discriminate against other-raced individuals, Lee says.

“We are very confident that the cause of this early racial bias is actually the lack of exposure to other-raced individuals,” he said. “It tells us that in Canada, if we introduce our kids to other-raced individuals, then we are likely to have less racial bias in our kids against other-raced people.”

Andrew Baron, an associate professor of psychology at the University of British Columbia, said while the goal of the study is “terrific,” there are many reasons infants would look for longer amounts of time at faces of different races. For example, he says an infant could spend more time looking at an own-race face because it is familiar, or at an other-race face because it is different and unexpected.

“It’s impossible to draw that conclusion about association from a single experiment when you could have half a dozen reasons why you would look longer that don’t support the conclusion that was made in that paper,” said Baron, who was not involved in the studies, but specializes in a similar field — the development of implicit associations among infants.

“There’s multiple reasons — and contradictory reasons — why we look longer at things. We look longer at things we fear, we look longer at things we like. That’s an inherent tension in how you choose to interpret the data.”

The second study took a closer look at that bias and how it affects children’s learning skills.

Researchers showed babies videos of own-race and other-race adults either looking in the direction where photos of animals appeared (indicating they were reliable) or looking away from where the animals appeared (indicating they were unreliable).

The study found that when adults were reliable and looking in the direction of the animals, the infants followed both own- and other-raced individuals equally. The same results occurred when the adults were unreliable and looking in the wrong direction.

However, when the adults’ gaze was only sometimes correct, the children were more likely to take cues provided by adults of their own race.

“In this situation, very interestingly, kids treated their own-raced individuals — who are only 50 per cent correct — as if they were 100 per cent correct,” Lee said.

“There is discrimination, but only when there is uncertainty.”

The first study was published in Developmental Science and the second was in Child Development.

The study was conducted in China, Lee says, because the researchers were able to control the exposure to other-raced individuals.

Lee said he has been trying for nearly 10 years to organize a study looking at babies born into mixed-race families. He suspects infants born into mixed-race families would show less racial bias.

When it comes to parents who want to try to eliminate racial bias from a young age, Lee says exposure is key.

“If parents want to prevent racial biases from emerging, the best thing to do is expose their kids to TV programs, books, and friends from different races,” he said.

“And the important message is they have to know them by name . . . it’s extremely important to know them as individuals.”

Source: Babies show racial bias at nine months, U of T study suggests | Toronto Star

A black woman in tech makes $79,000 for every $100,000 a white man makes – Recode

Impressive large-scale data analysis that shows the extent of bias in the hiring process:

It’s no secret that the technology field can be brutal to anyone who isn’t a white male. New data shows just how those inequalities play out in today’s tech workers’ paychecks.

Nearly two in three women receive lower salary offers than men for the same job at the same company, according to Hired, a job website that focuses on placing people in tech jobs such as software engineer, product manager or data scientist. That’s slightly better than last year, when 69 percent of women received lower offers.

Women, on average, were paid 4 percent less than men for the same kind of job, the study found.

For the study, Hired mined data from 120,000 salary offers to 27,000 candidates at 4,000 companies. In general, applicants to these tech fields skew male (75 percent), but that doesn’t account for the disparity in who gets interviewed.

Companies interviewed only men for a position 53 percent of the time; 6 percent of the time, they interviewed only women.

“Not only are women getting lower offers when they actually get offers, but a large amount of time, companies have openings and they’re not interviewing women at all,” said Jessica Kirkpatrick, Hired’s data scientist.

Hired’s data also breaks down offer salaries by race, compared with a white man in the same job. The effects of race are even more dramatic:

  • Black women are offered 79 cents to every dollar offered to a white man.
  • Black men make 88 cents.
  • Latina women make 83 cents.
  • White women make 90 cents.

Additionally, LGBTQ women and men are offered less money than their non-LGBTQ counterparts.

There are numerous reasons for this pay inequity. Part of the problem is that women, minorities and LGBTQ people ask for less than white males for the same position.

According to Kirkpatrick, these groups ask for less because people base their salary expectations on what they’re already making. For these groups, their lower pay often reflects a lot of historical inequities accrued over their careers, like being denied raises or promotions.

By not offering people comparable wages, Kirkpatrick said that companies are jeopardizing their job retention. “When people figure out what their teammates are making, it’s ultimately not good for maintaining talent and creating a collegial environment,” she said.

It also makes Silicon Valley’s already tight talent pool even smaller.

Source: A black woman in tech makes $79,000 for every $100,000 a white man makes – Recode

Applying for a job in Canada with an Asian name: Policy Options

More good work on implicit biases and their effect on discrimination in hiring by Jeffrey G. Reitz, Philip Oreopoulos, and Rupa Banerjee:

Our most recent study analyzed factors that might affect discriminatory hiring practices: the size of an employer, the skill level of the posted job and the educational level of the applicant.

First, we divided the employers into large (500 or more employees), medium-sized (50 to 499 employees) and small (less than 50 employees). We expected that large employers might treat applicants more fairly because they have greater resources devoted to recruitment and often have a more professionalized recruitment process. They also tend to have more experience with ethno-racial diversity in their workforces.

Asian-named applicants’ relative callback rates were indeed the lowest in small and medium-sized organizations, and somewhat higher in the largest employers. Compared with applicants with Anglo names, the Asian-named applicants with all-Canadian qualifications got 20 percent fewer calls from the largest organizations, but 39 percent fewer from the medium-sized organizations and 37 percent fewer from the smallest organizations. So, the disadvantage of having an Asian name is less for applicants to the large organizations, although it is still evident.

Looking at treatment of Asian-named applicants with some foreign qualifications, we found the largest organizations are generally the most likely to call these applicants for interviews. Large employers called these applicants 35 percent less often than Anglo-Canadian applicants with Canadian education and experience; medium-sized employers called 60 percent less often, and the smaller employers called 66 percent less often.

We were also interested in whether the skill level of the job affected discriminatory hiring practices and, in particular, whether Asian-named applicants faced greater barriers in higher-skill jobs, which are likely to be better paid. We found that the extent of discrimination against Asian-named applicants with all-Canadian qualifications is virtually the same for both high-skill jobs and lower-skill jobs. For the high-skill jobs, these applicants were 33 percent less likely to get a call; for the low-skill jobs, 31 percent less likely.

Skill level matters much more when Asian-named applicants have some foreign qualifications. Overall, these applicants had about a 53 percent lower chance of receiving a callback than comparable Anglo-Canadian applicants. But their rate of receiving calls is significantly lower at higher skill levels: they receive 59 percent fewer callbacks for high-skill jobs, 46 percent fewer for low-skill jobs. Employers may respond less favourably to Asian-named and foreign-qualified applicants for higher-skill positions because in those jobs, more is at stake, and assessing foreign credentials is more difficult than checking local sources. Avoiding the issue by not calling applicants to an interview is apparently viewed as the safer option.

Finally, we asked whether having a higher level of education than Anglo-Canadian-named applicants would lessen the negative effect of having an Asian name. We found that Asian-named applicants with Canadian education, including a Canadian master’s degree, were 19 percent less likely to be called in for an interview than their Anglo-Canadian counterparts holding only a bachelor’s degree. For Asian applicants with foreign qualifications and a Canadian master’s degree, the likelihood of a callback was 54 percent lower than the rate for less-educated Anglo-Canadian-named applicants. Acquiring a higher level of education in Canada did not seem to give Asian-named applicants much of an edge.

Overall, we found that employers both large and small discriminate in assessing Asian-named applicants, even when the applicants have Canadian qualifications; and they show even more reluctance to consider Asian-named applicants with foreign qualifications. These biases are particularly evident in hiring for jobs with the highest skill levels. However, there is a substantial difference between larger and smaller organizations. Larger organizations are more receptive to Asian-named applicants than smaller ones, whether or not the applicants have Canadian qualifications.

In order to fully understand the disadvantages that racial minorities experience in the Canadian labour market, it is crucial to go beyond surveys, in which discrimination may be hidden and difficult to identify. Audit studies like ours capture “direct discrimination” by observing actual employer responses to simulated resumés. This form of discrimination is particularly significant since the inability to get an interview may prevent potentially qualified job-seekers from finding appropriate work. Its effect may be compounded in promotions and other stages of the career process and in turn exacerbate ethno-racial income inequality in Canada.

Meanwhile, employers might also be unwittingly disadvantaged, because discrimination can prevent them from finding the best-qualified applicants. Small employers are particularly disadvantaged since they may lack the resources and expertise to fully tap more diverse segments of the workforce.

A number of measures may help to reduce name-based discrimination in the hiring process. First, a relatively low-cost measure would be for employers to introduce anonymized resumés. They could simply mask the names of applicants during the initial screening, and then track whether this results in more diverse hiring. Second, employers should ensure that more than one person is involved in the screening and interview process and that the process of resumé evaluation is open and transparent. Lastly, hiring managers should receive training on implicit bias and how to recognize and mitigate their own biases when recruiting job applicants.

Source: Applying for a job in Canada with an Asian name

No simple fix to weed out racial bias in the sharing economy

Two options to combat implicit bias and discrimination: less information (the blind-CV approach) or more information (expanded online reviews). The first has empirical evidence behind it; the second is exploratory at this stage:

One of the underlying flaws of any workplace is the assumption that the cream rises to the top, meaning that the best people get promoted and are given opportunities to shine.

While it’s tempting to be lulled into believing in a meritocracy, years of research on women and minorities in the work force demonstrate this is rarely the case. Fortunately, in most corporate settings, protocols exist to try to weed out discriminatory practices.

The same cannot necessarily be said for the sharing economy. While companies such as Uber and Airbnb boast transparency and even mutual reviews, they remain far from immune to discriminatory practices.

In 2014, Benjamin Edelman and Michael Luca, both associate professors of business administration at Harvard Business School, uncovered that non-black hosts can charge 12 per cent more than black hosts for a similar property. In this new economy, that simply means non-white hosts earn less for a similar service. This sounds painfully familiar to those who continue to fight this battle in the corporate world – although in this case, it occurs without the watchful eye of a human-resources division.

In the corporate world, companies have moved from focusing on overt to subconscious bias, according to Mr. Edelman and Mr. Luca, but the nature of the bias in the sharing economy remains unclear.

It’s either statistical, meaning users infer that the property is inferior based on the owner’s profile, or “taste-based,” suggesting the decision to rent comes down to user preference. To curb discriminatory practices, the authors recommend concealing basic information, such as photos and names, until a transaction is complete, as on Craigslist.

Reached by e-mail this week, Mr. Edelman stands by that approach.

“Broadly, my instinct is to conceal information that might give rise to discrimination. If we think hosts might reject guests of [a] disfavoured race, let’s not tell hosts the race of a guest when they’re deciding whether to accept. If we think drivers might reject passengers of [a] disfavoured race, again, don’t reveal the race in advance,” he advised.

While Mr. Edelman feels those really bent on discrimination will continue to do so, other, more casual discriminators will realize it’s too costly.

An Uber driver who only notices a passenger’s race at the pickup point might think to himself that he has already driven about five kilometres. If he cancels, not only will he be without a fare, but Uber might also notice and become suspicious, Mr. Edelman surmised.

Not everyone agrees that less information is the best route to take to combat discrimination in the sharing economy. In fact, more information may be the fix, according to recent research conducted by Ruomeng Cui, an assistant professor at Indiana University’s Kelley School of Business, Jun Li, an assistant professor at the University of Michigan’s Stephen M. Ross School of Business, and Dennis Zhang, an assistant professor at the John M. Olin Business School at Washington University in Saint Louis.

The trio of academics argues that rental decisions on platforms such as Airbnb are based on racial preferences only when not enough information is available. When more information is shared, specifically through peer reviews, discriminatory practices are reduced or even eliminated.

“We recommend platforms take advantage of the online reputation system to fight discrimination. This includes creating and maintaining an easy-to-use online review system, as well as encouraging users to write reviews after transactions. For example, sending multiple e-mail reminders or offering monetary incentives such as discounts or credits, especially for those relatively new users,” Dr. Li said.

“Eventually, sharing-economy platforms have to figure how to better signal user quality; nevertheless, whatever they do, concealing information will not help,” she added.

Still, others believe technology itself can offer a solution to incidents of bias in the sharing economy. Among them is Copenhagen-based Sara Green Brodersen, founder and chief executive of Deemly, which launched last October. The company’s mission is to build trust in the sharing economy through social ID verification and reputation software, which enables users to take their reputation with them across platforms. For example, if a user has ratings on Airbnb, they can collate them with their reviews on Upwork.

“Recent studies in this area suggest that ratings and reviews are what creates most trust between peers. [For example] when a user on Airbnb looks at a host, they put the most emphasis on the previous reviews from other guests more than anything else on the profile. Essentially, this means platforms could present anonymous profiles showing only the user’s reputation, but not gender, profile picture, ethnicity, name and age and, in this way, we can avoid the bias which has been presented,” Ms. Brodersen said.

Regardless of the solution, platforms and their users need to recognize that combatting discriminatory practices is their responsibility and the sharing economy, like the traditional work force, is no meritocracy.

“This issue is not going to be smaller on its own,” Ms. Brodersen warned.

Source: No simple fix to weed out racial bias in the sharing economy – The Globe and Mail

Barbara Kay: Actually, it turns out that you may be less racist than you’ve been led to believe

What Kay misses is the usefulness of the IAT for people to become more mindful of their implicit biases and, in so doing, more aware of their “thinking fast” mode, to use Kahneman’s phrase.

It is not automatic that being more mindful or aware changes behaviour but it can play a significant role (and yes, the benefits can be overstated). Having implicit biases does not necessarily mean acting on them.

Kay did not mention whether or not she took the test. Given her biases evident in her columns, it would be interesting to know whether she took the IAT and what, if anything, she learned.

I certainly found it useful, revealing and most important, discomforting as I became more aware of the gap between my policy mind and views, and what was under the surface.

Anyone can take the test on the Project Implicit Website, hosted by Harvard U. By October 2015, more than 17 million individuals had completed it (with presumably 90-95 per cent of them then self-identifying as racist). Liberal observers love the IAT. New York Times columnist Nicholas Kristof wrote in 2015, “It’s sobering to discover that whatever you believe intellectually, you’re biased about race, gender, age or disability.” Kristof’s tone is more complacent than sober, though. For progressives, the more widespread bias can be demonstrated to be, the more justifiable institutional and state intrusions into people’s minds become.

Banaji and Greenwald have themselves made far-reaching claims for the test: the “automatic White preference expressed on the Race IAT is now established as signaling discriminatory behavior. It predicts discriminatory behavior even among research participants who earnestly (and, we believe, honestly) espouse egalitarian beliefs. …. Among research participants who describe themselves as racially egalitarian, the Race IAT has been shown, reliably and repeatedly, to predict discriminatory behavior that was observed in the research.”

Problem is, none of this can be authenticated. According to Singal, a great deal of scholarly work that takes the shine off the researchers’ claims has been ignored by the media. The IAT is not verifiable and correlates weakly with actual lived outcomes. Meta-analyses cannot examine whether IAT scores predict discriminatory behaviour accurately enough for real-world application. An individual can score high for bias on the IAT and never act in a biased manner. He can take the test twice and get two wildly different scores. After almost two decades, the researchers have never posted test-retest reliability of commonly used IATs in publication.

It’s a wonder the IAT has a shred of credibility left. In 2015 Greenwald and Banaji responded to a critic that the psychometric issues with race and ethnicity IATs “render them problematic to use to classify persons as likely to engage in discrimination,” and that “attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications.” Greenwald acknowledged to Singal that “no one has yet undertaken a study of the race IAT’s test-retest reliability.” In other words, the IAT is a useless tool for measuring implicit bias.

In an interesting aside, Singal points to a 2012 study published in Psychological Science by psychologist Jacquie Vorauer. As her experiment, Vorauer set white Canadians to work with aboriginal partners. Before doing so, some of the participants took an IAT that pertained to aboriginals, some took a non-race IAT and others were asked for their explicit feelings about the group. Aboriginals in the race-IAT group subsequently reported feeling less valued by their white partners as compared to aboriginals in all of the other groups. Vorauer writes, “If completing the IAT enhances caution and inhibition, reduces self-efficacy, or primes categorical thinking, the test may instead have negative effects.” As Singal notes, this “suggests some troubling possibilities.”

The IAT has potentially misinformed millions of test-takers, who believe that they are likely to act, or are routinely acting, with bias against their fellow citizens. Harbouring biases is part of the human condition, and it is our right to hold them, especially those warranted by epidemiology and reason. Our actions are all that should concern our employers or the state’s legal apparatus. Any directive to submit to the IAT by the state or a state-sponsored entity like the CBC constitutes an undemocratic intrusion into the individual’s privacy.

Source: Barbara Kay: Actually, it turns out that you may be less racist than you’ve been led to believe | National Post

Bias Isn’t Just A Police Problem, It’s A Preschool Problem : NPR

Worth reading in terms of just how embedded implicit bias is:

New research from the Yale Child Study Center suggests that many preschool teachers look for disruptive behavior in much the same way: in just one place, waiting for it to appear.

The problem with this strategy (besides it being inefficient) is that, because of implicit bias, teachers are spending too much time watching black boys and expecting the worst.

The Study

Lead researcher Walter Gilliam knew that to get an accurate measure of implicit bias among preschool teachers, he couldn’t be fully transparent with his subjects about what, exactly, he was trying to study.

Implicit biases are just that — subtle, often subconscious stereotypes that guide our expectations and interactions with people.

“We all have them,” Gilliam says. “Implicit biases are a natural process by which we take information, and we judge people on the basis of generalizations regarding that information. We all do it.”

Even the most well-meaning teacher can harbor deep-seated biases, whether she knows it or not. So Gilliam and his team devised a remarkable — and remarkably deceptive — experiment.

At a big, annual conference for pre-K teachers, Gilliam and his team recruited 135 educators to watch a few short videos. Here’s what they told them:

“We are interested in learning about how teachers detect challenging behavior in the classroom. Sometimes this involves seeing behavior before it becomes problematic. The video segments you are about to view are of preschoolers engaging in various activities. Some clips may or may not contain challenging behaviors. Your job is to press the enter key on the external keypad every time you see a behavior that could become a potential challenge.”

Each video included four children: a black boy and girl and a white boy and girl.

Here’s the deception: There was no challenging behavior.

While the teachers watched, eye-scan technology measured the trajectory of their gaze. Gilliam wanted to know: When teachers expected bad behavior, who did they watch?

“What we found was exactly what we expected based on the rates at which children are expelled from preschool programs,” Gilliam says. “Teachers looked more at the black children than the white children, and they looked specifically more at the African-American boy.”

Indeed, according to recent data from the U.S. Department of Education, black children are 3.6 times more likely to be suspended from preschool than white children. Put another way, black children account for roughly 19 percent of all preschoolers, but nearly half of preschoolers who get suspended.

One reason that number is so high, Gilliam suggests, is that teachers spend more time focused on their black students, expecting bad behavior. “If you look for something in one place, that’s the only place you can typically find it.”

The Yale team also asked subjects to identify the child they felt required the most attention. Forty-two percent identified the black boy, 34 percent identified the white boy, while 13 percent and 10 percent identified the white and black girls respectively.

The Vignette

The Yale study had two parts. And, as compelling as the eye-scan results were, Gilliam’s most surprising takeaway came later.

He gave teachers a one-paragraph vignette to read, describing a child disrupting a class; there’s hitting, scratching, even toy-throwing. The child in the vignette was randomly assigned what researchers considered a stereotypical name (DeShawn, Latoya, Jake, Emily), and subjects were asked to rate the severity of the behavior on a scale of one to five.

White teachers consistently held black students to a lower standard, rating their behavior as less severe than the same behavior of white students.

Gilliam says this tracks with previous research around how people may shift standards and expectations of others based on stereotypes and implicit bias. In other words, if white teachers believe that black boys are more likely to behave badly, they may be less surprised by that behavior and rate it less severely.

Black teachers, on the other hand, did the opposite, holding black students to a higher standard and rating their behavior as consistently more severe than that of white students.

Here’s another key finding: Some teachers were also given information about the disruptive child’s home life, to see if it made them more empathetic:

[CHILD] lives with his/her mother, his/her 8- and 6-year old sisters, and his/her 10-month-old baby brother. His/her home life is turbulent, between having a father who has never been a constant figure in his/her life, and a mother who struggles with depression but doesn’t have the resources available to seek help. During the rare times when his/her parents are together, loud and sometimes violent disputes occur between them. In order to make ends meet, [CHILD’s] mother has taken on three different jobs, and is in a constant state of exhaustion. [CHILD] and his/her siblings are left in the care of available relatives and neighbors while their mother is at work.

Guess what happened.

Teachers who received this background did react more empathetically, lowering their rating of a behavior’s severity — but only if the teacher and student were of the same race.

Source: Bias Isn’t Just A Police Problem, It’s A Preschool Problem : NPR Ed : NPR