Project No.: ISO/IEC NP TS 26312
Title:
Registration number (WIID): 93147
Scope: This document describes how to identify and address bias in AI systems for health service organizations during the procurement, implementation, and monitoring of systems used to support the delivery and allocation of care and preventative services. This includes both clinician-facing and patient-facing AI applications provided to patients by their healthcare organization.

This document focuses on bias in AI models, specifically models that exhibit decreased accuracy or cause differential impact that may unfairly disadvantage certain demographic groups (also referred to as “protected classes” in some jurisdictions). This includes algorithmic bias, data representation bias, labelling bias, and deployment-related bias. This document takes a broad view of bias, acknowledging that unfairness in AI outcomes may arise from a variety of technical and non-technical sources throughout the AI system lifecycle. (A minimal illustrative sketch of a per-group accuracy check appears after this record.)

This document is applicable to all types and sizes of health service organizations (e.g. hospitals, laboratories, pharmacies, community radiology clinics, and primary or ambulatory care services), and to stakeholders involved in regulating AI use within health service organizations.

This document does not consider AI system development or manufacturing processes, as these are expected to be addressed in separate standards (noting that some health service organizations may be directly involved in the development or manufacture of AI systems). Similarly, this document does not cover personal bias that may be exhibited by end users of AI systems (e.g. cognitive or automation bias).
Status: Under development
ICS group: Not set
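The scope above centres on models that show decreased accuracy or differential impact across demographic groups. As a purely illustrative aid, and not part of ISO/IEC NP TS 26312, the following minimal Python sketch shows one way a health service organization might compare per-group accuracy while monitoring a deployed model. All names (records, group, label, prediction) and the 0.05 threshold are hypothetical assumptions introduced here for illustration.

# Minimal sketch (illustrative only): per-group accuracy and the largest gap
# between groups. Field names and the threshold are hypothetical, not defined
# by ISO/IEC NP TS 26312.
from collections import defaultdict

def subgroup_accuracy(records):
    """Return accuracy per demographic group and the largest gap between groups.

    `records` is an iterable of dicts with keys "group", "label", "prediction".
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])

    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

if __name__ == "__main__":
    # Toy monitoring data; flag for review if the accuracy gap exceeds a
    # locally agreed threshold (the 0.05 value is illustrative only).
    sample = [
        {"group": "A", "label": 1, "prediction": 1},
        {"group": "A", "label": 0, "prediction": 0},
        {"group": "B", "label": 1, "prediction": 0},
        {"group": "B", "label": 0, "prediction": 0},
    ]
    per_group, gap = subgroup_accuracy(sample)
    print(per_group, gap, "review needed" if gap > 0.05 else "within threshold")

In practice, the choice of performance metric, the definition of the groups, and the acceptable disparity threshold would be set through the organization's own governance and monitoring processes rather than by this sketch.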