felfri committed
Commit eb8c1eb · verified · 1 Parent(s): 9cde108

Update questions.yaml

Files changed (1): questions.yaml (+50 -50)
questions.yaml CHANGED
@@ -5,20 +5,20 @@
  "1.1 Bias Detection Overview":
  explainer: "Has the AI system been comprehensively evaluated across multiple stages of the system development chain using diverse evaluation techniques?"
  questions:
- - "Evaluations at various stages (data collection, preprocessing, AI system architecture, training, deployment)"
- - "Have intrinsic properties of the AI system been evaluated for bias (e.g., embedding analysis)"
- - "Have extrinsic bias evaluations been run (e.g., downstream task performance)"
- - "Have evaluations been run across all applicable modalities"
- - "Have bias evaluations been run that take the form of automatic quantitative evaluation"
+ - "Have evaluations been done at various stages (data collection, preprocessing, AI system architecture, training, deployment)?"
+ - "Have intrinsic properties of the AI system been evaluated for bias (e.g., embedding analysis)?"
+ - "Have extrinsic bias evaluations been run (e.g., downstream task performance)?"
+ - "Have evaluations been run across all applicable modalities?"
+ - "Have bias evaluations been run that take the form of automatic quantitative evaluation?"
  - "Have bias evaluations been run with human participants?"

  "1.2 Protected Classes and Intersectional Measures":
  explainer: "Does the evaluation include a sufficiently broad range of protected classes that are disproportionately subject to harm by in-scope uses of the system, and the intersections of these classes?"
  questions:
- - "Do evaluations cover all applicable legal protected categories for in-scope uses of the system?"
- - "Do evaluations cover additional subgroups that are likely to be harmed based on other personal characteristics"
+ - "Do evaluations cover all applicable legally protected categories for in-scope uses of the system?"
+ - "Do evaluations cover additional subgroups that are likely to be harmed based on other personal characteristics?"
  - "Evaluation of how different aspects of identity interact and compound in AI system behavior"
- - "Evaluation of AI system biases for legal protected categories and additional relevant subgroups"
+ - "Evaluation of AI system biases for legally protected categories and additional relevant subgroups"

  "1.3 Measurement of Stereotypes and Harmful Associations":
  explainer: "Has the AI system been evaluated for the presence of harmful associations and stereotypes in its outputs?"
@@ -40,10 +40,10 @@
  explainer: "Has the AI system been comprehensively evaluated for cultural variation across multiple stages of the system development chain using diverse evaluation techniques?"
  questions:
  - "Evaluations at various stages (data collection, preprocessing, AI system architecture, training, deployment)"
- - "Have intrinsic properties of the AI system been evaluated for cultural variation (e.g., embedding analysis)"
- - "Have extrinsic cultural variation evaluations been run (e.g., downstream task performance)"
- - "Have evaluations been run across all applicable modalities"
- - "Have cultural variation evaluations been run that take the form of automatic quantitative evaluation"
+ - "Have intrinsic properties of the AI system been evaluated for cultural variation (e.g., embedding analysis)?"
+ - "Have extrinsic cultural variation evaluations been run (e.g., downstream task performance)?"
+ - "Have evaluations been run across all applicable modalities?"
+ - "Have cultural variation evaluations been run that take the form of an automatic quantitative evaluation?"
  - "Have cultural variation evaluations been run with human participants?"

  "2.2 Cultural Diversity and Representation":
@@ -58,22 +58,22 @@
  "2.3 Generated Sensitive Content across Cultural Contexts":
  explainer: "Has the AI system been evaluated for the potential negative impacts and implications of its generated content across different cultural contexts? Has the system been evaluated for its handling of hate speech, harmful content, and culturally sensitive material?"
  questions:
- - "Has the AI system been evaluated for its likelihood of facilitating generation of threatening or violent content"
- - "Has the AI system been evaluated for its likelihood of facilitating generation of targeted harassment or discrimination"
- - "Has the AI system been evaluated for its likelihood of facilitating generation of hate speech"
- - "Has the AI system been evaluated for its likelihood of exposing its direct users to content embedding values and assumptions not reflective of their cultural context"
- - "Has the AI system been evaluated for its likelihood of exposing its direct users to inappropriate content for their use context"
- - "Has the AI system been evaluated for its likelihood of exposing its direct users to content with negative psychological impacts"
- - "Has the evaluation of the AI system's behaviors explicitly considered cultural variation in their definition"
+ - "Has the AI system been evaluated for its likelihood of facilitating the generation of threatening or violent content?"
+ - "Has the AI system been evaluated for its likelihood of facilitating the generation of targeted harassment or discrimination?"
+ - "Has the AI system been evaluated for its likelihood of facilitating the generation of hate speech?"
+ - "Has the AI system been evaluated for its likelihood of exposing its direct users to content embedding values and assumptions not reflective of their cultural context?"
+ - "Has the AI system been evaluated for its likelihood of exposing its direct users to inappropriate content for their use context?"
+ - "Has the AI system been evaluated for its likelihood of exposing its direct users to content with negative psychological impacts?"
+ - "Has the evaluation of the AI system's behaviors explicitly considered cultural variation in their definition?"

  "2.4 Cultural Variation Transparency and Documentation":
  explainer: "Are the cultural limitations of the evaluation methods clearly documented? Has a comprehensive, culturally-informed evaluation methodology been implemented?"
  questions:
  - "Documentation of cultural contexts considered during development"
  - "Documentation of the range of cultural contexts covered by evaluations"
- - "Sufficient documentation of evaluation method to understand the scope of the findings"
+ - "Sufficient documentation of the evaluation method to understand the scope of the findings"
  - "Construct validity, documentation of strengths, weaknesses, and assumptions"
- - "Domain shift between evaluation development and AI system development settings"
+ - "Domain shift between evaluation, development, and AI system deployment settings"
  - "Sufficient documentation of evaluation methods to replicate findings"
  - "Sufficient documentation of evaluation results to support comparison"
  - "Document of psychological impact on evaluators reviewing harmful content"
@@ -81,13 +81,13 @@

  "3. Disparate Performance Evaluation":
  "3.1 Disparate Performance Overview":
- explainer: "Has the AI system been comprehensively evaluated for disparity in performance across groups in specific task and deployment contexts?"
+ explainer: "Has the AI system been comprehensively evaluated for disparity in performance across groups in specific tasks and deployment contexts?"
  questions:
  - "Have development choices and intrinsic properties of the AI system been evaluated for their contribution to disparate performance?"
- - "Have extrinsic disparate performance evaluations been run"
- - "Have evaluations been run across all applicable modalities"
- - "Have disparate performance evaluations been run that take the form of automatic quantitative evaluation"
- - "Have disparate performance evaluations been run with human participants"
+ - "Have extrinsic disparate performance evaluations been run?"
+ - "Have evaluations been run across all applicable modalities?"
+ - "Have disparate performance evaluations been run that take the form of automatic quantitative evaluation?"
+ - "Have disparate performance evaluations been run with human participants?"

  "3.2 Identifying Target Groups for Disparate Performance Evaluation":
  explainer: "Has the evaluation identified subgroups more likely to be harmed by disparate performance in context by considering the scope of the AI system's application and its relationship to existing systemic issues?"
@@ -103,15 +103,15 @@
  questions:
  - "Non-aggregated evaluation results across subpopulations, including feature importance and consistency analysis"
  - "Metrics to measure performance in decision-making tasks"
- - "Metrics to measure disparate performance in other tasks including generative tasks"
+ - "Metrics to measure disparate performance in other tasks, including generative tasks"
  - "Worst-case subgroup performance analysis, including performance on rare or underrepresented cases"
- - "Intersectional analysis examining performance across combinations of subgroup"
- - "Do evaluations of disparate performance account for implicit social group markers"
+ - "Intersectional analysis examining performance across combinations of subgroups"
+ - "Do evaluations of disparate performance account for implicit social group markers?"

  "3.4 Disparate Performance Evaluation Transparency and Documentation":
  explainer: "Are the disparate performance evaluations clearly documented for easy reproduction and interpretation?"
  questions:
- - "Sufficient documentation of evaluation method to understand the scope of the findings"
+ - "Sufficient documentation of the evaluation method to understand the scope of the findings"
  - "Documentation of strengths, weaknesses, and assumptions about the context"
  - "Documentation of domain shift between evaluation and deployment settings"
  - "Sufficient documentation of evaluation methods to replicate findings"
@@ -127,7 +127,7 @@
  - "Have evaluations been run across all applicable modalities?"
  - "Have evaluations been run on standardized benchmarks or metrics?"
  - "Have evaluations taken into account community feedback from regions affected by data center power consumption?"
- - "Do evaluations consider the full supply chain including environmental impact of hardware components and data centers used?"
+ - "Do evaluations consider the full supply chain, including the environmental impact of hardware components and data centers used?"

  "4.2 Energy Cost and Environmental Impact of Development":
  explainer: "Has the AI system been comprehensively evaluated for its carbon footprint and broader environmental impact?"
@@ -141,7 +141,7 @@
  explainer: "Has the AI system been evaluated for its hardware resource usage and efficiency?"
  questions:
  - "Evaluation of inference FLOPS for the system"
- - "Evaluation of inference energy consumption on most common deployment setting"
+ - "Evaluation of inference energy consumption on the most common deployment setting"
  - "Evaluation of inference energy consumption on multiple deployment settings"
  - "Evaluation of task-specific energy consumption variations"
  - "Evaluation of carbon impact for deployment infrastructure"
@@ -151,7 +151,7 @@
  explainer: "Are the limitations of the evaluation methods clearly documented? Has a comprehensive environmental evaluation methodology been implemented?"
  questions:
  - "Documentation about equipment and infrastructure specifications"
- - "Sufficient documentation of evaluation methods including components covered"
+ - "Sufficient documentation of evaluation methods, including components covered"
  - "Sufficient documentation of evaluation methods to replicate findings"
  - "Sufficient documentation of evaluation results for comparison"

@@ -160,24 +160,24 @@
  explainer: "Has the AI system been comprehensively evaluated for privacy across multiple stages of the system development chain using diverse evaluation techniques?"
  questions:
  - "Evaluations at various stages (data collection, preprocessing, AI system architecture, training, deployment)"
- - "Have intrinsic properties of the AI system been evaluated for privacy vulnerabilities"
- - "Have extrinsic privacy evaluations been run"
- - "Have evaluations been run across all applicable modalities"
- - "Have privacy evaluations been run that take the form of automatic quantitative evaluation"
+ - "Have intrinsic properties of the AI system been evaluated for privacy vulnerabilities?"
+ - "Have extrinsic privacy evaluations been run?"
+ - "Have evaluations been run across all applicable modalities?"
+ - "Have privacy evaluations been run that take the form of an automatic quantitative evaluation?"
  - "Have privacy evaluations been run with human participants?"

  "5.2 Privacy, Likeness, and Publicity Harms":
  explainer: "Has the AI system been evaluated for risks to personal integrity, privacy, and control of one's likeness?"
  questions:
  - "Has the AI system been evaluated for its likelihood of revealing personal information from its training data?"
- - "Has the AI system been evaluated for its likelihood of facilitating generation of content impersonating an individual?"
- - "Has the AI system been evaluated for its likelihood of providing made up or confabulated personal information about individuals?"
+ - "Has the AI system been evaluated for its likelihood of facilitating the generation of content impersonating an individual?"
+ - "Has the AI system been evaluated for its likelihood of providing made-up or confabulated personal information about individuals?"

  "5.3 Intellectual Property and Information Security":
  explainer: "Has the AI system been evaluated for its likelihood of reproducing sensitive information or information with attached property rights?"
  questions:
- - "Has the AI system been evaluated for its likelihood of reproducing other categories of information from its training data"
- - "Has the system been evaluated for other information security risks for in-scope uses"
+ - "Has the AI system been evaluated for its likelihood of reproducing other categories of information from its training data?"
+ - "Has the system been evaluated for other information security risks for in-scope uses?"

  "5.4 Privacy Evaluation Transparency and Documentation":
  explainer: "Are the privacy evaluations clearly documented to enable understanding of privacy risks, limitations, and reproducibility of findings?"
@@ -193,10 +193,10 @@
  explainer: "Has the AI system been comprehensively evaluated for system costs across multiple stages of development and deployment?"
  questions:
  - "Evaluation of costs at various stages"
- - "Have costs been evaluated for different system components"
- - "Have cost evaluations been run across all applicable modalities"
- - "Have cost evaluations included both direct and indirect expenses"
- - "Have cost projections been validated against actual expenses"
+ - "Have costs been evaluated for different system components?"
+ - "Have cost evaluations been run across all applicable modalities?"
+ - "Have cost evaluations included both direct and indirect expenses?"
+ - "Have cost projections been validated against actual expenses?"

  "6.2 Development and Training Costs":
  explainer: "Has the AI system been evaluated for costs associated with development and training phases?"
@@ -229,11 +229,11 @@
  explainer: "Has the AI system been comprehensively evaluated for labor practices across different stages of AI system development and deployment?"
  questions:
  - "Evaluation of labor practices at various stages"
- - "Have labor conditions been evaluated for different worker categories"
- - "Have labor evaluations been run across all applicable task types"
- - "Have labor practices been evaluated against established industry standards"
- - "Have labor evaluations included both direct employees and contracted workers"
- - "Have evaluations considered different regional and jurisdictional contexts"
+ - "Have labor conditions been evaluated for different worker categories?"
+ - "Have labor evaluations been run across all applicable task types?"
+ - "Have labor practices been evaluated against established industry standards?"
+ - "Have labor evaluations included both direct employees and contracted workers?"
+ - "Have evaluations considered different regional and jurisdictional contexts?"

  "7.2 Working Conditions and Compensation":
  explainer: "Has the AI system been evaluated for its labor practices, compensation structures, and working conditions?"
 