If we can’t “engineer out” bias from the design and development of AI software applications, and if we can’t remove human bias from the use or application of the software, then what tools do we have to manage bias in and around AI?
How do the FDA's Good Machine Learning Practice for Medical Device Development: Guiding Principles align with NIST's AI Risk Management Framework? And how can you leverage more than one framework to build a complete lifecycle management system tailored to AI?
Interested in learning more? Click to read our latest white paper, "Managing Bias in AI."