Prudential Financial’s Salene Hitchcock-Gear Explains How The Company Evaluates Bias In AI


Bias is often at the root of protracted issues in society.

The insurance industry, for example, isn’t exempt from prejudicial attitudes. Salene Hitchcock-Gear, president of Prudential Individual Life Insurance at Prudential Financial, spoke at the 2023 AFROTECH™ Conference about the industry’s history of exclusion during the 1940s and 1950s.

“Way back when things were first getting started, there were a lot of exclusions on who could be covered for insurance — Black Americans were part of that exclusion,” Hitchcock-Gear said on the Learning Lab stage. “There was a lot of bias around the kind of data that we actually have to use to underwrite, including race and health. There was a lot of presumption about lifestyle. A lot of things that just did not really lend itself to treating people fairly.”

Because insurance products can be critical to financial stability and wealth building, she said Prudential Financial continues to modernize and adopt digital tools to broaden access to insurance.

However, the company is cautious about how it uses artificial intelligence (AI) and how it handles data. Hitchcock-Gear said Prudential Financial has spent years reviewing the criteria it uses for traditional underwriting.

“We’ve had to basically scrub and look for any and all types of inferences or anything that might show up in our traditional thinking so that we don’t pull any of that into our business models,” she said. “And most importantly, as we build AI tools, we don’t pull that forward either.”

One of Prudential Financial’s key practices is training its underwriters on implicit bias as the team works to build AI tools. According to Hitchcock-Gear, its core underwriting group includes people from diverse backgrounds.

During the conversation, Robert Huntsman, chief data scientist at Prudential Financial at the time, shared that the company has approximately 100 data scientists and 100 machine-learning engineers, and it works to ensure that its models don’t introduce proxy bias.

“When we think about bias in insurance and we think about bias with our AI models, what we think about is making sure that we don’t introduce any unintentional discriminatory impact on an individual,” Huntsman said. “So what our models and what our process is designed to do is to be able to provide a price for an individual who is applying for insurance, making sure that the data that we use doesn’t inadvertently introduce some type of, we call it proxy bias or proxy discrimination.”

For more sessions like this, check out AFROTECH™ Labs.


