In a world of continuous technological progress, the business landscape is undergoing a transformative shift as artificial intelligence (AI) is integrated into day-to-day business operations, including in the retail sector. AI tools offer solutions for enhancing inventory and supply chain management, optimizing pricing strategies, improving customer service, and providing personalized product recommendations. Alongside these promises of efficiency and innovation, however, obstacles arise, one of which is the emergence of AI bias. Understanding the intricacies of this challenge is crucial for businesses seeking to realize the full potential of AI systems while mitigating the risks they may pose.
AI bias arises when algorithms, inadvertently or otherwise, reflect and perpetuate existing societal prejudices. Bias can seep into AI models through unrepresentative or incomplete training data, flawed algorithm design, or the unintentional reinforcement of human bias during the development of an AI tool. Feedback loops can then reinforce and exacerbate these biases over time, such that an AI system effectively “learns” to become more and more biased. This phenomenon can be particularly problematic in shopper profiling, where AI is used to qualify consumers for offers or promotions, as well as in product development and testing where user experience is implicated. The so-called “black box” problem, which refers to the lack of explainability in complex algorithms, causes further concern, especially in areas where transparency and accountability are important.
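To make the feedback-loop point concrete, the short Python sketch below simulates a purely hypothetical “exploit-only” product recommender. The product names, click rates, and counts are invented for illustration; the point is simply that when a system learns only from the outcomes of its own choices, a small initial preference can lock in and grow.

```python
import random

# Hypothetical simulation: two equally appealing products, but a greedy
# recommender that always shows the product with the best observed click rate.
random.seed(1)

true_ctr = {"A": 0.10, "B": 0.10}   # identical true appeal (illustrative numbers)
clicks = {"A": 0, "B": 0}
shows = {"A": 1, "B": 1}            # one neutral starting impression each

for _ in range(5000):
    # Greedy policy: show whichever product currently looks better.
    choice = max(true_ctr, key=lambda p: clicks[p] / shows[p])
    shows[choice] += 1
    clicks[choice] += random.random() < true_ctr[choice]

# The product favored first monopolizes exposure; the other never accumulates
# data that could correct its estimate: a self-reinforcing feedback loop.
print(shows)
```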
Much ink has been spilled about the potential for AI to perpetuate bias, and about the risks and regulations surrounding that potential. But the reality is that AI is not itself biased; it simply reflects the human bias we put into it, whether through the data we train it on or the assumptions we make. Far from being destined to perpetuate human bias, AI therefore presents an opportunity to flip the narrative: it can serve as a tool to overcome the very biases it helps us recognize, including in the retail sector.
One way to employ AI to this end is in detecting discrimination. Detecting discrimination in human decisions is often difficult because it is impossible to understand everything influencing a person’s decision-making process. No one wants to think of themselves as biased, and the factors that influence us are often so complex that we may not even be aware of the implicit biases we carry. Detecting bias in AI systems, by contrast, is in many ways more accessible. It is a statistical exercise, and one that benefits from the vast amounts of data on which AI is trained. By testing the AI models we use and comparing their output against statistical information about the relevant populations, people, places, or things, we can identify whether those models are, in fact, representative, or whether their training data or programming requires modification to make them so.
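As a minimal sketch of that statistical exercise, the Python snippet below compares selection rates across groups in a hypothetical log of model decisions. The column names, the sample data, and the 80% threshold (the familiar “four-fifths” rule of thumb) are all illustrative assumptions, not a prescribed methodology.

```python
import pandas as pd

# Hypothetical decision log: which shoppers the model qualified for a promotion.
decisions = pd.DataFrame({
    "group":             ["A", "A", "A", "A", "B", "B", "B", "B"],
    "offered_promotion": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = decisions.groupby("group")["offered_promotion"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Selection-rate ratio: {ratio:.2f}")

# Illustrative screening threshold: a ratio below 0.8 suggests the model's
# output (or its training data) deserves closer review.
if ratio < 0.8:
    print("Flag for review: selection rates diverge across groups.")
```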
In addition to detecting discrimination, AI may also be used to help overcome it. While counteracting bias in humans is challenging by virtue of human psychology, AI is in many ways simpler to calibrate and can serve as a check on human decisions, adjusting potentially biased human decision-making according to the unbiased metrics on which it has been trained.
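The sketch below, again using invented column names and numbers, illustrates one way such a check might work in practice: human decisions are compared against a model score, and the cases where the human overrides the model are examined to see whether they cluster in a particular group.

```python
import pandas as pd

# Hypothetical review log: a model score alongside the human decision it informed.
reviews = pd.DataFrame({
    "group":          ["A", "A", "A", "B", "B", "B"],
    "model_score":    [0.81, 0.35, 0.76, 0.78, 0.80, 0.30],
    "human_approved": [1,    0,    1,    0,    0,    0],
})

threshold = 0.7  # illustrative cutoff above which the model would approve

# A decision "diverges" when the human outcome differs from what the model implies.
reviews["divergent"] = (reviews["model_score"] >= threshold) != (reviews["human_approved"] == 1)

# If overrides of the model concentrate in one group, that pattern itself
# warrants an audit of the human process, the model, or both.
print(reviews.groupby("group")["divergent"].mean())
```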
Following this approach, the legal landscape concerning AI deployment may adapt as our understanding of AI bias deepens. Currently, regulatory frameworks primarily demand transparency, fairness, and comprehensive documentation from companies developing or using AI technologies. These standards are imperative not just for ethical reasons but also for legal compliance, aiming to protect consumer rights and prevent discriminatory practices. However, as AI systems are increasingly recognized as tools that can both perpetuate and mitigate bias, the criteria for regulatory compliance may need recalibration. For instance, companies that proactively use AI to identify and correct biases might argue for differentiated regulatory treatment, including allowances for adaptive AI algorithms that require iterative updates to improve fairness and so might not strictly adhere to traditional transparency guidelines.
Moreover, the legal system itself may need to evolve to better accommodate the nuances of AI-driven decisions. Legal standards could be recalibrated to account for the proactive measures companies take using AI to detect and counteract bias, thereby setting a precedent that encourages more responsible AI use. Such a shift would not only enhance consumer protection but also promote innovation in AI governance. By aligning legal standards with the latest technological advancements and their applications for bias mitigation, regulators can foster an environment where AI contributes positively to equitable business practices.
In this way, although bias is often framed as a risk of AI, properly used AI may in fact be a tool to mitigate it.
Far from being destined to perpetuate human bias in the retail sector, AI can be an opportunity to overcome it. Doing so requires an understanding of the origins of AI bias, as well as solutions to mitigate its emergence and impact. With careful planning, monitoring, and analysis, businesses can leverage AI systems as powerful tools to enhance objectivity and fairness within their operations and to counteract the human biases AI can help us recognize. After all, the success of AI in retail also depends on the trust and confidence of the consumers it serves. Making consumers aware of the benefits of AI in retail, emphasizing personalized experiences, enhanced convenience, and improved service quality, is therefore essential.
Authored by Lauren Cury, Sebastian Faust, Vanessa Rinus, and Jasper Siems.