I call it digital gentrification. Just as gentrification reshapes a neighborhood, displacing the culture, history, and people who built it, digital gentrification does the same to the present and the future. AI is being built without us in mind. Black people are either excluded from the data, misrepresented by it, or worse, labeled in ways that don’t accurately reflect us.
We’re not showing up in the data and images that power this AI wave. When we do show up, the portrayal is wrong, outdated, or laced with bias. That’s not a tech problem. That’s a design problem. That’s a decision-making problem. That’s a values problem.
What worries me most is that as AI gets faster, slicker, and more widespread, the gap gets wider. The folks already on the outside — Black communities — move even further to the margins. I fear that while everyone else is being ushered into the AI revolution, we’re being shown the back door—or ignored altogether.
I’m not just speaking on what I think. I’m speaking on what I’ve seen.
Last year, when Taliferro Group was asked to support the Washington State Office of Equity’s Generative AI Accountability Framework, we did not fill in a template. We looked under the hood of AI systems to identify and assess bias creep in their decision-making processes. Our internal team held lively discussions about what equity means when the machine makes the call.
Our work with the State was also about showing what serious accountability can look like: how to reach communities throughout the State, engage them, and gather feedback on current and future generative AI adoption. We helped lay policy groundwork that recognizes that biases are not always malicious.
Nevertheless, bias in all its forms is systemic and persistent, and it thrives in silence.
What we found wasn’t surprising, but it was sobering: socioeconomic equity is not a priority in AI systems, and Black people are being erased in real time.
That’s why we began using the term “digital gentrification.” Our digital future is being redeveloped without us. When we are eventually invited in, the systems do not know what to do with us. AI fails to recognize our names, misreads our tone, and lacks Black cultural knowledge.
Further, the current state of AI cannot categorize our varied individual and collective experiences. Therefore, today’s AI systems do not accurately serve Black people, and will not without our input.
We decided to do something about it.
My frustration comes from how some people frame equity as a feature to add later. Something modular. Something optional. That is not how fairness works. You don’t fix bias by slapping on a post-processing layer. You fix bias by rethinking how you build everything, from the ground up.
So while the rest of the world races to adopt AI, we ask the hard questions many do not want to hear: Who benefits? Who is left out? Who is misread? How do we fix biases at every level?
At Taliferro, we’re not only consultants. We’re builders. We bake equity into our tech at the algorithm level and have built processes to detect bias drift: processes that enrich data where marginalized groups are underrepresented or misclassified, and that reconfigure and correct fields AI tends to get wrong when answering questions about communities like ours. A simplified sketch of what that kind of drift monitoring can look like follows.
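To make the idea concrete, here is a minimal, hypothetical sketch of bias-drift monitoring in Python. The group names, counts, and the 0.05 threshold are illustrative assumptions, not our production pipeline; the core idea is simply to compare each group’s error-rate gap against a baseline audit and flag gaps that have widened.

```python
# A sketch of bias-drift monitoring. The group names, counts, and the
# 0.05 threshold are illustrative assumptions, not a production pipeline.
from dataclasses import dataclass

@dataclass
class GroupStats:
    group: str
    errors: int  # model mistakes observed for this group
    total: int   # total predictions observed for this group

    @property
    def error_rate(self) -> float:
        return self.errors / self.total if self.total else 0.0

def detect_bias_drift(baseline: list[GroupStats],
                      current: list[GroupStats],
                      max_gap_increase: float = 0.05) -> list[str]:
    """Flag groups whose error-rate gap versus the overall error rate
    has widened beyond max_gap_increase since the baseline audit."""
    def gaps(stats: list[GroupStats]) -> dict[str, float]:
        overall = sum(s.errors for s in stats) / max(sum(s.total for s in stats), 1)
        return {s.group: s.error_rate - overall for s in stats}

    base_gaps, cur_gaps = gaps(baseline), gaps(current)
    return [g for g, gap in cur_gaps.items()
            if gap - base_gaps.get(g, 0.0) > max_gap_increase]

# Example: compare an initial audit against this month's numbers.
baseline = [GroupStats("group_a", 40, 1000), GroupStats("group_b", 45, 1000)]
current = [GroupStats("group_a", 42, 1000), GroupStats("group_b", 180, 1000)]
print(detect_bias_drift(baseline, current))  # -> ['group_b']
```

The point of a check like this is that drift is relative: a system can pass a one-time fairness audit and still quietly degrade for one community, which only a baseline comparison will surface.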
We are not chasing efficiency; we seek fairness and accuracy in the data and information that help Black-led businesses and small to medium-sized teams boost overall capacity.
Taliferro created internal equity standards and terminology that guide our approach, from how we store and query data to how we validate whether outputs reflect lived experience. We call out the data that erases, reconfigure structures that obscure, and challenge AI claims that pretend to be neutral but are shaped by outdated hierarchies. One small example of that output validation is sketched below.
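To give a flavor of what that validation can look like, here is a small hypothetical sketch in Python: it runs a list of names through a recognition check and reports the ones the system misses. The name list and the stub model are placeholders, not a real system.

```python
# A sketch of one output-validation check: does the system handle names
# from many communities? The name list and the stub below are hypothetical.
from typing import Callable

def validate_name_coverage(recognizes_name: Callable[[str], bool],
                           names: list[str]) -> list[str]:
    """Return the names the system fails on, so gaps are visible, not silent."""
    return [name for name in names if not recognizes_name(name)]

# Example with a deliberately narrow stub standing in for a real model.
known = {"Emily", "Greg", "Anne"}
stub = lambda name: name in known
test_names = ["Emily", "Lakisha", "Jamal", "Aaliyah", "Greg"]
print(validate_name_coverage(stub, test_names))  # -> ['Lakisha', 'Jamal', 'Aaliyah']
```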
Making the case for anti-bias AI is a challenge. Not primarily because of opposition, but because anti-bias work is rarely treated as urgent. Invisible harm goes untracked while the erosion continues in the background. Subtle, quiet, real.
The missed contract opportunities. The job applications never reviewed by a human. The faces that do not register. The mistaken identity. The “risk scores” that go up only because of a zip code. The inability to produce images of Black adults, teens, and children at all, or only as stereotypes. Those results come from bad data, and they are what digital gentrification looks like in action.
The issue is about more than visibility. If AI is the future, then Black people deserve to be included. Fully seen, respected, and served.
A socially conscious tech company, Taliferro is not waiting for industry-wide change to begin. We do not only point out problems; we build technology tools to solve them.
So we are pushing ahead with our own protocols and building safeguards. We test every layer, from the training data to the API, because we know what happens when these systems go unchecked. A simplified example of those layered checks appears below.
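As an illustration, here is a simplified sketch of two such checks in Python, one at the data layer and one at the API layer. The field names, thresholds, and the biased model stub are hypothetical, included only to show the shape of the approach.

```python
# A sketch of layer-by-layer checks: one at the data layer, one at the
# API layer. Field names, groups, thresholds, and the model stub are
# hypothetical placeholders, not an actual test suite.
from collections import Counter
from typing import Callable

def check_training_representation(rows: list[dict], field: str = "group",
                                  min_share: float = 0.10) -> list[str]:
    """Data layer: flag any group that falls below a minimum share of the data."""
    counts = Counter(row[field] for row in rows)
    total = sum(counts.values())
    return [g for g, n in counts.items() if n / total < min_share]

def check_paired_outputs(predict: Callable[[str], float],
                         pairs: list[tuple[str, str]],
                         max_diff: float = 0.02) -> list[tuple[str, str]]:
    """API layer: the same input with only a name swapped should score the
    same; flag pairs whose scores diverge beyond max_diff."""
    return [(a, b) for a, b in pairs if abs(predict(a) - predict(b)) > max_diff]

# Example with toy data and a deliberately biased model stub.
rows = [{"group": "a"}] * 90 + [{"group": "b"}] * 10 + [{"group": "c"}] * 5
predict = lambda text: 0.6 if "Jamal" in text else 0.9
pairs = [("Resume of Greg Baker", "Resume of Jamal Baker")]
print(check_training_representation(rows))   # -> ['b', 'c']
print(check_paired_outputs(predict, pairs))  # flags the Greg/Jamal pair
```

The paired-input test is the digital version of the old resume audit: same qualifications, different name. When the scores diverge, the system has told on itself.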
Unfortunately, we know what it’s like to be miscategorized, misjudged, and misunderstood by people and now by machines.
We are taking our seat at the table — and bringing our algorithm.
The reality: Many Black businesses are resource-challenged, but effectively adopting AI will increase competitive capabilities across operations and sales while establishing a company-wide knowledge base.
Are you curious to learn how your organization can benefit regardless of size? I am happy to help you harness the power of AI.
Email: [email protected].