Dear Ditto,

How can tech organizations tackle unconscious bias?

Advice from:
Andy VanderLinde
Director of Product Management

Our reliance on technology is unavoidable. As a result, tech organizations must examine their role in shaping and perpetuating assumed cultural norms and unconscious biases. Whether intentional or involuntary, technology reinforces these behaviors and prejudices. Social impact organizations must lead the charge to improve equity and inclusion in their technology strategy. Leadership within our sector is promising, and it shows that social impact and nonprofit organizations can provide a model for others to follow.

Researchers investigating racial discrimination and residential segregation measured users’ behavior on Craigslist and found serious consequences for subsets of the population. Their study also showed that behavior on the platform echoed the role race plays in the larger context of our society and culture.

The link between technology use and how it can enable discrimination is once again making headline news with the Coronavirus. Recent reports, as well as social media posts, show that some Uber and Lyft drivers are discriminating against passengers who they believe are more likely to carry the virus, based on the passengers’ race. Uber and Lyft are under scrutiny to address their drivers’ discriminatory, bias-ridden behavior. These prejudices affect many: rideshare passengers, drivers, delivery workers, restaurants, and more.

Stories like these underscore the need for organizations to proactively and responsibly address bias, both in their technology and among their users.

Bias is a deep-seated part of the human experience that is often magnified by events and situations stemming from fear. In order to maintain a sense of control, human beings look for ways to place blame and overlay their existing stereotypes across swaths of populations. When we feel vulnerable to epidemics like Coronavirus, Ebola, SARS, or HIV, biases create a false sense of comfort and control over the situation.

This is not the first time user biases have created serious consequences for a technology organization. In 2016, Airbnb was the focus of a conversation about their users’ unconscious bias and the use of their platform. A study of racial discrimination in the sharing economy found that prospective Airbnb guests with “African American sounding names were roughly 16% less likely to be accepted than their white-sounding counterparts.” This occurred even when it left hosts with underutilized vacancies.

In response, Airbnb hired a director of diversity and belonging, rewrote their “Partners and community” policies, required users to acknowledge and comply with the Nondiscrimination Policy, and created a tool for inclusive design: Another Lens.

Tech apps in the sharing economy use personal data, including names, genders, and profile pictures, to build trust between strangers. Uber and Lyft, for example, use this data to promote passenger safety by recommending that passengers confirm the name and appearance of their drivers. However, the same data enables drivers to make biased decisions about which rides to accept, cancel, or decline, based on the other party’s appearance or name.

Uber and Lyft responded to questions about user bias by directing the community to their training and policies on unconscious bias and discrimination. They met the minimum obligation of acknowledging the existence of driver bias and stating their stance against it. But given our dependency on technology, and our inability to keep it from shaping our culture, meeting the minimum obligation of addressing bias is not enough. Whether intentional or not, benevolent or not, our interaction with technology changes our behavior.

In response to users making prejudiced decisions about who they will interact with via a platform or app, organizations can design micro-interactions that challenge users’ biases. For example, a user flow that goes beyond the minimum obligation to reduce bias could:

  • Hide the other party’s profile information until the user verifies availability for the request. 
  • Display the other party’s details and confirm the service. If the service is canceled, ask the user for the reason.
  • Follow every cancelation with a short training on customer service and bias.

An interaction like this would help bring self-awareness to users’ behavior and provide an avenue for building and strengthening the community.
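
To make the ordering of those steps concrete, here is a minimal sketch in TypeScript. It is illustrative only: every name in it (RideRequest, DriverUI, handleRideRequest, and so on) is hypothetical and not tied to Uber’s, Lyft’s, or anyone else’s actual systems. It simply encodes the three steps above: hide the profile, reveal and confirm, then train after a cancelation.

```typescript
// A sketch only; all types and functions here are hypothetical.

interface PassengerProfile {
  name: string;
  photoUrl: string;
}

interface RideRequest {
  id: string;
  pickup: string;
  dropoff: string;
  passengerProfile: PassengerProfile; // withheld from the driver until step 2
}

// UI hooks the platform would supply; declared as an interface so the sketch stands alone.
interface DriverUI {
  confirmAvailability(pickup: string, dropoff: string): boolean;
  showProfileAndCompleteRide(profile: PassengerProfile): boolean;
  askCancelReason(): string;
  assignBiasTraining(): void;
}

function handleRideRequest(request: RideRequest, ui: DriverUI): void {
  // Step 1: the driver sees only trip details, no name or photo, when deciding availability.
  if (!ui.confirmAvailability(request.pickup, request.dropoff)) {
    return; // declined before any profile information was shown
  }

  // Step 2: the profile is revealed only after the driver has committed to being available.
  const completed = ui.showProfileAndCompleteRide(request.passengerProfile);

  // Step 3: any cancelation after the reveal prompts for a reason and assigns a short training.
  if (!completed) {
    const reason = ui.askCancelReason();
    console.log(`Ride ${request.id} canceled: ${reason}`);
    ui.assignBiasTraining();
  }
}
```

The point of the structure is the ordering: the profile reveal happens only after a commitment, so a cancelation after that point is visible, can be explained, and can be followed up with training rather than passing silently.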

Organizations have the capacity to design and develop products that meet their organizational goals while also consciously and intentionally promoting healthier, more empathetic, and more inclusive users. At Echo&Co, we’re committed to working with organizations to create inclusive strategies and products. We help you identify how your product (website, application, etc.) can have the greatest positive impact on your users and your organization. We provide training and consulting services on inclusion, as well as implementation techniques. Please get in touch if you’re interested in working with us to audit your interactive digital products. Together, we can build bridges and genuine connections.

What’s next from Echo&Co on interactive digital inclusion? We start to answer: what is inclusion? Spoiler: It’s a lot of things, and we’re here to help!

Ask Ditto your questions by emailing ditto@echo.co

(Don’t worry, your questions will remain anonymous.)