Advances in technology offer the promise of making healthcare work better for our members. For Sven Peterson, Corporate Compliance and Ethics Officer, and Rose Riojas, Ethics Program Manager at Premera Blue Cross, these changes also raise numerous ethical, reputational, legal and regulatory concerns. We recently caught up with Sven and Rose to learn how artificial intelligence (AI) and machine learning (ML) are changing the landscape of healthcare and what Premera is doing to ensure our members and their data are protected.
Q: With advances in technology such as AI and ML, what are the top three ethical issues to consider when it comes to personal protected health information?
ML is an application of AI and both have enormous potential for improving our customers’ lives. Our goal is to make sure this potential is realized in an ethical way. Three of the most important ethical issues that arise regarding personal protected health information (PHI) are fairness, accountability and privacy.
First, we need to ensure PHI is used fairly, so that we do not discriminate against people based on race, color, national or ethnic origin, age, religion, sex, sexual orientation, disability, gender identity, or other protected classes.
One example of this work is making sure opportunities, such as access to products and services, are fairly distributed. That requires rigorous work to ensure data and conclusions are tested for bias and corrected when necessary. In a society that is structurally unjust, there can also be a trade-off between accuracy and fairness, which requires ethical judgment.
Second, we need to uphold and respect our customers’ privacy by not unnecessarily exposing their PHI. Realizing the promise and power of data and AI tools requires the use and careful sharing of more data than ever before. At Premera, privacy principles are incorporated into design, and our Chief Information Security Officer is deeply involved in governance of the use of data and AI/ML.
Lastly, we must make sure that real people retain ultimate authority over AI development. Human oversight is a crucial safeguard against bias infiltrating our automated systems. When bias is identified, it is critical that we have processes in place to immediately shut down the system and warn affected users. This not only limits the spread of biased outcomes, but also retains the trust of our customers and upholds our standing as an ethical leader in the industry.
Q: How are legislators and regulatory bodies governing emerging use of data and AI?
Legislators and regulators are taking a keen interest in emerging uses of data and AI, as they should. Their interests are ultimately the same as ours: to ensure the potential of AI and ML applied to big data is realized in a way that improves human lives ethically. Legislators and regulators are particularly concerned about fairness and privacy across many industries and applications, including health coverage. Many of these standards are still being developed, and we anticipate many new statutes and regulations will be adopted over the next few years to govern the use of data and AI.
The Equal Employment Opportunity Commission (EEOC) recently launched an initiative to prevent discrimination in AI and ML-driven hiring practices, the Federal Trade Commission (FTC) is considering rulemaking to ensure algorithmic decisions do not result in unlawful discrimination, and several bills have been proposed in Congress that would govern the use of AI and data. The Department of Justice has also increased enforcement, including entering into a settlement with Meta regarding its use of advertising algorithms.
At the state level, Colorado enacted legislation that not only bans direct, indirect, and proxy discrimination by insurers, but also requires insurers to document the steps taken to mitigate the risk of discrimination when using AI and data. Meanwhile, the National Association of Insurance Commissioners is working to develop model rules that states could adopt.
Q: What is Premera doing to mitigate risk when it comes to AI and ML?
At Premera, AI tools, which include ML, are designed to conform to societal values: transparency, fairness (avoiding unfair discrimination), privacy and security, and accountability. This includes compliance with all applicable laws and regulations. Premera has adopted a set of ethical principles to guide its use of AI/ML and has formed a cross-functional Data & AI Ethics Committee, composed of leaders from across the company, to set policy and review use cases where appropriate. In addition to these efforts, Premera has implemented a workgroup tasked with advising the committee on matters relating to policy, governance, use cases, and ethical considerations. Premera is also dedicated to holding our third parties accountable for the responsible use of AI.
Sven Peterson and Rose Riojas are the authors of How did you know that about me?, which explores ethics, best practices, and ongoing developments in legal standards for companies adopting the use of AI and ML.