House Study Bill 294 - Introduced

HOUSE FILE _____

BY (PROPOSED COMMITTEE ON ECONOMIC GROWTH AND TECHNOLOGY BILL BY CHAIRPERSON SORENSEN)

A BILL FOR

An Act relating to artificial intelligence, including the use of artificial intelligence to create materials related to elections and protections in interactions with artificial intelligence systems, and making penalties applicable.

BE IT ENACTED BY THE GENERAL ASSEMBLY OF THE STATE OF IOWA:

LSB 1289YC (6) 91
dg/jh
DIVISION I
MATERIALS RELATED TO ELECTIONS

Section 1. Section 68A.405, Code 2025, is amended by adding the following new subsection:

NEW SUBSECTION. 5. a. Published material generated through the use of artificial intelligence and designed to expressly advocate the nomination, election, or defeat of a candidate for public office or the passage or defeat of a ballot issue must contain a disclosure on the published material that the published material was generated using artificial intelligence. The disclosure must include the words “this material was generated using artificial intelligence”.

b. For purposes of this subsection, “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

c. The board shall adopt rules for the implementation of this subsection.

d. A disclosure made in compliance with this subsection does not preclude a private right of action arising out of the publication of published material generated through the use of artificial intelligence.
DIVISION II
PROTECTIONS IN INTERACTIONS WITH ARTIFICIAL INTELLIGENCE SYSTEMS

Sec. 2. NEW SECTION. 554I.1 Definitions.

As used in this chapter:

1. a. “Algorithmic discrimination” means any use of an artificial intelligence system that results in unfavorable treatment due to an individual or group of individuals’ actual or perceived age, race, creed, color, sex, sexual orientation, national origin, religion, or disability.

b. “Algorithmic discrimination” does not include the offer, license, or use of an artificial intelligence system for the sole purpose of performing any of the following:

(1) Testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state or federal law.

(2) Expanding an applicant, customer, or participant pool to increase diversity or redress historic discrimination.

(3) Any act or omission by or on behalf of a private club or other establishment not in fact open to the public, as established in the federal Civil Rights Act of 1964, Pub. L. No. 88-352, as amended.

2. “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

3. “Consequential decision” means any decision that has a material legal or similarly significant effect on the provision or denial of any of the following to an individual:

a. A pardon, parole, probation, or release.

b. Enrollment in education or an educational opportunity.

c. Employment.

d. A financial or lending service.

e. An essential government service.

f. A health care service, as health care is defined in section 144B.1.

g. Insurance.

h. A legal service.

4. “Deployer” means a person doing business in this state that uses a high-risk artificial intelligence system.

5. “Developer” means a person doing business in this state that develops or intentionally and substantially modifies a high-risk artificial intelligence system.

6. a. “High-risk artificial intelligence system” means any artificial intelligence system that makes, or is a factor that would likely alter the outcome of, a consequential decision.

b. “High-risk artificial intelligence system” does not include any of the following:

(1) An artificial intelligence system that is only intended to do any of the following:

(a) Perform a narrow procedural task.

(b) Improve the result of a previously completed human activity.

(c) Perform a preparatory task relevant to a consequential decision.

(d) Detect any decision-making pattern or any deviation from a preexisting decision-making pattern.

(2) Antifraud technology.

(3) Antimalware technology.

(4) Antivirus technology.

(5) Calculators.

(6) Cybersecurity technology.

(7) Databases.

(8) Data storage technology.

(9) Firewalls.

(10) Internet domain registration technology.

(11) Internet website loading technology.

(12) Networking.

(13) Search engine or similar technology.

(14) Spam and robocall filtering technology.

(15) Spellchecking.

(16) Spreadsheets.

(17) Web caching technology.

(18) Web hosting technology.

(19) Any technology that communicates in natural language for the purpose of providing users with information, making referrals or recommendations, answering questions, or generating other content, and that is subject to an accepted use policy that prohibits generating content that is unlawful.

7. “Intentional and substantial modification” or “intentionally and substantially modifies” means a deliberate change made to an artificial intelligence system that materially increases the risk of algorithmic discrimination.

8. “Trade secret” means the same as defined in section 550.2.
Sec. 3. NEW SECTION. 554I.2 Algorithmic discrimination prohibited.

1. a. A developer shall use reasonable care to protect individuals from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the developer’s high-risk artificial intelligence system.

b. There is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with subsections 2 through 8 and any additional requirements established in rules adopted by the attorney general pursuant to section 554I.7.

2. A developer shall make all of the following available to a deployer or other developer that uses or intends to use the developer’s high-risk artificial intelligence system:

a. A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system.

b. Documentation for all of the following:

(1) Summaries of the types of data used to train the high-risk artificial intelligence system.

(2) The known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including but not limited to the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended use of the high-risk artificial intelligence system.

(3) The purpose and intended uses of the high-risk artificial intelligence system.

(4) The intended outputs of the high-risk artificial intelligence system and how to understand the outputs.

(5) The intended benefits of the high-risk artificial intelligence system.

(6) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was sold, leased, licensed, given, or otherwise made available to the deployer or other developer.

(7) The actions or processes the deployer implemented to ensure the quality and consistency of training datasets, the measures used to examine the suitability of data sources, the measures used to evaluate possible biases in the training datasets and data sources, and the measures used to mitigate possible biases in training datasets and data sources.

(8) The measures the developer took to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from using the high-risk artificial intelligence system.

(9) How the high-risk artificial intelligence system should be used, should not be used, and should be monitored when the high-risk artificial intelligence system is used to make, or is a factor that would likely alter the outcome of, a consequential decision.

(10) Instructions that are reasonably necessary to assist a deployer in monitoring the performance of the high-risk artificial intelligence system for any risk of algorithmic discrimination.

3. a. A developer that offers, sells, leases, licenses, gives, or otherwise makes a high-risk artificial intelligence system available to a deployer shall provide documentation and information to the deployer necessary for the deployer to complete an impact assessment under section 554I.3. Materials and data include but are not limited to model cards, dataset cards, and other impact assessments.

b. This subsection shall not apply if the deployer is affiliated with the developer that is providing the high-risk artificial intelligence system.

4. A developer shall make a statement available in a manner that is clear and readily available on the developer’s internet site or in a public use case inventory that includes a summary of all of the following:

a. The types of high-risk artificial intelligence systems the developer has developed or intentionally and substantially modified and is currently making generally available to deployers.

b. How the developer manages known or reasonably foreseeable risks of algorithmic discrimination arising from development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in paragraph “a”.

5. Each developer shall update the statement described in subsection 4 as necessary to ensure that the statement remains accurate, but no later than ninety days after the developer develops or intentionally and substantially modifies a high-risk artificial intelligence system.

6. a. A developer shall disclose to the attorney general and to all known deployers or other developers of the high-risk artificial intelligence system any known or foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system.

b. A disclosure under paragraph “a” shall be given to the attorney general no later than ninety days after the earliest of any of the following occurs:

(1) The developer discovers through the developer’s ongoing testing and analysis that the developer’s high-risk artificial intelligence system has been used and has caused or is reasonably likely to have caused algorithmic discrimination.

(2) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been used and has caused algorithmic discrimination.

7. Subsections 2 through 6 do not require a developer to disclose a trade secret, information protected under state or federal law, or other confidential or proprietary information.

8. a. The attorney general may require that a developer disclose to the attorney general any statement or documentation described in subsection 2 if the statement or documentation is relevant to an investigation conducted by the attorney general regarding a violation of this chapter.

b. To the extent that a statement or documentation requested by the attorney general pursuant to paragraph “a” includes proprietary information or a trade secret, the statement or documentation is exempt from disclosure. The developer may designate the statement or documentation as including proprietary information or a trade secret.

c. To the extent that a statement or documentation requested by the attorney general pursuant to paragraph “a” includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Sec. 4. NEW SECTION. 554I.3 Impact assessments.

1. A deployer shall use reasonable care to protect individuals from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the attorney general, there is a rebuttable presumption that a deployer used reasonable care as required under this section if the deployer complied with subsections 2 through 7 and any additional requirements established in rules adopted by the attorney general pursuant to section 554I.7.

2. a. A deployer shall implement a risk management policy and program to govern the deployer’s use of high-risk artificial intelligence systems.

b. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.

c. The risk management program shall be an iterative process that is planned, implemented, and regularly and systematically reviewed and updated for the duration of the deployer’s use of high-risk artificial intelligence systems.

d. Each risk management policy and program shall consider all of the following:

(1) The guidance and standards established by all of the following:

(a) Standards established in the most recent version of the artificial intelligence risk management framework published by the national institute of standards and technology of the United States department of commerce.

(b) Standard ISO/IEC 42001 of the international organization for standardization.

(c) A nationally or internationally recognized risk management framework for artificial intelligence systems.

(d) Standards designated by the attorney general.

(2) The size and complexity of the deployer.

(3) The nature and scope of the high-risk artificial intelligence system used by the deployer, including but not limited to the intended uses of the high-risk artificial intelligence system.

(4) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems used by the deployer.

3. A risk management policy and program implemented pursuant to this section may cover more than one high-risk artificial intelligence system.

4. a. A deployer, or a third party contracted by the deployer to use a high-risk artificial intelligence system, shall complete an impact assessment for the high-risk artificial intelligence system.

b. The impact assessment shall be completed no later than ninety days after a high-risk artificial intelligence system or an intentional and substantial modification of a high-risk artificial intelligence system is available for use.

c. Each impact assessment shall, at a minimum, include all of the following to the extent reasonably known by or available to the deployer:

(1) A statement disclosing all of the following:

(a) The purpose for using the high-risk artificial intelligence system.

(b) The context for using the high-risk artificial intelligence system.

(c) The benefits afforded by using the high-risk artificial intelligence system.

(2) An analysis of whether the use of the high-risk artificial intelligence system poses any known or foreseeable risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks.

(3) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces.

(4) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system.

(5) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as measures taken to disclose to individuals that the high-risk artificial intelligence system is in use.

(6) A description of the post-use monitoring and user safeguards provided concerning the high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from using the high-risk artificial intelligence system.

(7) If the impact statement is being made subsequent to an intentional and substantial modification to a high-risk artificial intelligence system, a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from the developer’s intended uses of the high-risk artificial intelligence system.

d. A single impact statement may address a comparable set of high-risk artificial intelligence systems used by a deployer.

5. An impact assessment completed for the purpose of complying with another applicable law or regulation shall satisfy the requirements of this section if the impact assessment is reasonably similar in scope and effect to an impact assessment that would otherwise be completed pursuant to this section.

6. A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section and relevant records supporting the impact assessment for a period of at least three years following the final use of the high-risk artificial intelligence system.

7. A deployer shall review, at least annually, the deployment of each high-risk artificial intelligence system used by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Sec. 5. NEW SECTION. 554I.4 Disclosures of artificial intelligence system.

1. A deployer shall disclose to each individual that interacts with the artificial intelligence system that the individual is interacting with an artificial intelligence system. Developers who make a high-risk artificial intelligence system available in this state shall cooperate with deployers to allow deployers to fulfill the requirements of this subsection.

2. A disclosure under subsection 1 is not required in circumstances in which it would be obvious to a reasonable individual that the individual is interacting with an artificial intelligence system.
Sec. 6. NEW SECTION. 554I.5 Exclusions.

1. This chapter shall not be construed to restrict a developer’s, deployer’s, or other person’s ability to do any of the following:

a. Comply with federal, state, or municipal law.

b. Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by a federal, state, municipal, or other governmental authority.

c. Cooperate with a law enforcement agency concerning conduct or activity that a developer, deployer, or other person reasonably and in good faith believes may violate federal, state, or municipal law.

d. Investigate, establish, exercise, prepare for, or defend legal claims.

e. Take immediate steps to protect an interest that is essential for the life or physical safety of an individual.

f. Engage in public or peer-reviewed scientific or statistical research in the public interest that adheres to all other applicable ethics and privacy laws and is conducted in accordance with 45 C.F.R. pt. 46, as amended, or other relevant requirements established by the federal food and drug administration.

g. Conduct research, testing, and development activities regarding an artificial intelligence system, other than testing conducted under real-world conditions, before the artificial intelligence system is placed on the market, used, or otherwise put into service.

h. Effectuate a product recall.

i. Identify and repair technical errors that impair existing or intended functionality in a computer system or an artificial intelligence system.

j. Assist another developer, deployer, or person with any of the requirements of this chapter.

2. This chapter does not apply to a developer, deployer, or other person if the circumstances in which the high-risk artificial intelligence system was developed, used, or intentionally and substantially modified are described by any of the following:

a. The high-risk artificial intelligence system has been approved, authorized, certified, cleared, developed, or granted by a federal agency and the deployer, developer, or other person is described by any of the following:

(1) The person is acting under the authority of the federal agency that approved, authorized, certified, cleared, developed, or granted the high-risk artificial intelligence system.

(2) The person is in compliance with standards established by a federal agency for use of high-risk artificial intelligence systems if the requirements imposed by those standards are substantially similar or more restrictive than the requirements of this chapter.

b. The developer, deployer, or other person is conducting research to support an application for approval or certification from a federal agency.

c. The developer, deployer, or other person is performing work under or in connection with a contract with the United States department of commerce, the United States department of defense, or the national aeronautics and space administration, unless the developer, deployer, or other person is performing work on a high-risk artificial intelligence system that is used to make, or is a factor that would likely alter the outcome of, a decision concerning employment or housing.

d. The developer, deployer, or other person is a covered entity within the meaning of the Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, as amended, and is providing health care recommendations that are generated by an artificial intelligence system, require a health care provider to take action to implement the recommendations, and are not likely to alter the outcome of a consequential decision.

3. This chapter does not apply to any artificial intelligence system that is acquired by or for the federal government, a federal agency, or federal department unless the artificial intelligence system is a high-risk artificial intelligence system used to make, or is a factor that would likely alter the outcome of, a decision concerning employment or housing.

4. A developer, deployer, or other person shall have the burden to prove an action qualifies for an exclusion under this section.
Sec. 7. NEW SECTION. 554I.6 Enforcement.

1. The attorney general shall have exclusive authority to enforce this chapter.

2. Prior to initiating any action for a violation of this chapter, the attorney general shall issue a notice of violation to the developer, deployer, or other person who allegedly violated the chapter. The notice shall contain all of the following:

a. A specific description of the alleged violation.

b. The actions the developer, deployer, or other person must take to cure the violation.

3. The attorney general may bring an action for the alleged violation if the alleged violation is not cured within ninety days of the date the developer, deployer, or other person received the notice of violation.

4. A violation of the requirements established in this chapter is an unlawful practice under section 714.16.

5. In any action commenced by the attorney general to enforce this chapter, it is an affirmative defense that the developer, deployer, or other person is described by all of the following:

a. The developer, deployer, or other person discovered and cured a violation of this chapter as a result of any of the following:

(1) Feedback that the developer, deployer, or other person encourages deployers or other users to provide to the developer, deployer, or other person.

(2) Adversarial testing or red teaming, as those terms are defined or used by the national institute of standards and technology.

(3) An internal review process.

b. The developer, deployer, or other person is otherwise in compliance with any of the following:

(1) The latest version of the artificial intelligence risk management framework as published by the national institute of standards and technology and standard ISO/IEC 42001 of the international organization for standardization.

(2) A nationally or internationally recognized risk management framework for artificial intelligence systems, if the requirements imposed by those standards are substantially similar or more restrictive than the requirements of this chapter.

(3) A risk management framework for artificial intelligence systems that the attorney general designated.

6. A developer, deployer, or other person bears the burden of demonstrating to the attorney general that the requirements of subsection 5 have been satisfied.

7. a. Prior to initiating any action under this chapter, the attorney general shall consult with the department of health and human services to determine whether any complaint has been filed that is founded on the same act or omission that constitutes a violation of this chapter.

b. The attorney general shall not initiate any action to enforce the provisions of this chapter if a complaint has been filed with the department of health and human services relating to the act or omission that constitutes a violation of this chapter unless the complaint has been fully adjudicated or resolved.

8. This chapter shall not preempt or otherwise affect any right, claim, remedy, presumption, or defense available at law or in equity. Any rebuttable presumption or affirmative defense established under this chapter shall apply only to an enforcement action brought by the attorney general pursuant to this chapter and shall not apply to any right, claim, remedy, presumption, or defense available at law or in equity.

9. The attorney general shall post on its internet site how to properly file a complaint for a violation under this chapter.

10. This section does not provide a basis for a private right of action for violations of this chapter or any other law.
Sec. 8. NEW SECTION. 554I.7 Attorney general rulemaking authority.

The attorney general shall adopt rules pursuant to chapter 17A to implement this chapter.

Sec. 9. Section 714.16, subsection 2, Code 2025, is amended by adding the following new paragraph:

NEW PARAGRAPH. r. It is an unlawful practice for a person to violate any of the provisions of chapter 554I.
EXPLANATION

The inclusion of this explanation does not constitute agreement with the explanation’s substance by the members of the general assembly.

This bill relates to artificial intelligence (AI), including the use of artificial intelligence to create materials related to elections and protections in interactions with AI systems.

DIVISION I —— MATERIALS RELATED TO ELECTIONS. The bill requires published material generated through the use of AI, defined in this division, and designed to expressly advocate the nomination, election, or defeat of a candidate for public office or the passage or defeat of a ballot issue to include a disclosure that the published material was generated using AI. The disclosure must include the words “this material was generated using artificial intelligence”. A disclosure made in compliance with the bill does not preclude a private right of action arising out of the publication of published material generated through the use of AI.

By operation of law, a person who willfully violates the division of the bill is guilty of a serious misdemeanor. A serious misdemeanor is punishable by confinement for no more than one year and a fine of at least $430 but not more than $2,560.
DIVISION
II
——
PROTECTIONS
IN
INTERACTIONS
WITH
ARTIFICIAL
3
INTELLIGENCE
SYSTEMS.
The
bill
defines
“algorithmic
4
discrimination”
as
any
use
of
an
AI
system
that
results
5
in
unfavorable
treatment
due
to
an
individual
or
group
of
6
individuals’
actual
or
perceived
age,
race,
creed,
color,
sex,
7
sexual
orientation,
national
origin,
religion,
or
disability.
8
The
bill
lists
several
circumstances
which
do
not
constitute
9
“algorithmic
discrimination”.
10
The
bill
defines
“artificial
intelligence
system”
as
11
any
machine-based
system
that,
for
any
explicit
or
implicit
12
objective,
infers
from
the
inputs
the
system
receives
to
13
generate
outputs,
including
but
not
limited
to
content,
14
decisions,
predictions,
or
recommendations,
that
can
influence
15
physical
or
virtual
environments.
16
The
bill
defines
“consequential
decision”
as
any
decision
17
that
has
a
material
legal
or
similarly
significant
effect
on
18
the
provision
or
denial
of
a
pardon,
parole,
probation,
or
19
release;
an
education
enrollment
or
educational
opportunity;
20
employment;
a
financial
or
lending
service;
an
essential
21
government
service;
a
health
care
service;
insurance;
or
a
22
legal
service.
23
The
bill
defines
“deployer”
as
a
person
doing
business
in
24
this
state
that
uses
a
high-risk
AI
system.
25
The
bill
defines
“developer”
as
a
person
doing
business
in
26
this
state
that
develops
or
intentionally
and
substantially
27
modifies
a
high-risk
AI
system.
28
The
bill
defines
“high-risk
artificial
intelligence
system”
29
as
any
AI
system
that
makes,
or
is
a
factor
that
would
likely
30
alter
the
outcome
of,
a
consequential
decision.
The
bill
lists
31
several
systems
that
do
not
constitute
a
“high-risk
artificial
32
intelligence
system”.
33
The
bill
defines
“intentional
and
substantial
modification”
34
or
“intentionally
and
substantially
modifies”
as
a
deliberate
35
-16-
LSB
1289YC
(6)
91
dg/jh
16/
21
H.F.
_____
change
made
to
an
AI
system
that
materially
increases
the
risk
1
of
algorithmic
discrimination.
2
The
bill
defines
“trade
secret”
as
information,
including
3
but
not
limited
to
a
formula,
pattern,
compilation,
program,
4
device,
method,
technique,
or
process
that
derives
independent
5
economic
value,
actual
or
potential,
from
not
being
generally
6
known
to,
and
not
being
readily
ascertainable
by
proper
means
7
by
a
person
able
to
obtain
economic
value
from
its
disclosure
8
or
use;
and
is
the
subject
of
efforts
that
are
reasonable
under
9
the
circumstances
to
maintain
its
secrecy.
10
The bill requires developers to use reasonable care to protect individuals from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the developer’s high-risk AI system. The bill creates a rebuttable presumption that a developer used reasonable care if the developer complied with certain requirements detailed in the bill and any additional requirements established in rules adopted by the attorney general.
The bill requires a developer to make certain statements, documentation, and summaries related to the high-risk AI system, as detailed in the bill, available to a deployer or other developer that uses or intends to use the developer’s high-risk AI system.
The bill requires a developer, if the developer offers, sells, leases, licenses, gives, or otherwise makes a high-risk AI system available to a deployer, to provide documentation and information to the deployer necessary for the deployer to complete an impact assessment. Documentation and information includes but is not limited to model cards, dataset cards, and other impact assessments. This requirement does not apply if the deployer is affiliated with the developer that is providing the high-risk AI system.
The bill requires a developer to make a statement available in a manner that is clear and readily available on the developer’s internet site or in a public use case inventory that includes a summary of the types of high-risk AI systems the developer has developed or intentionally and substantially modified and is currently making generally available to deployers, and how the developer manages known or reasonably foreseeable risks of algorithmic discrimination arising from development or intentional and substantial modification of the types of high-risk AI systems. The bill requires developers to update the statement as necessary to ensure that the statement remains accurate, but no later than 90 days after the developer develops or intentionally and substantially modifies an AI system.
The bill requires developers to disclose to the attorney general and to all known deployers or other developers of the high-risk AI system any known or foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk AI system. The disclosure must be given to the attorney general no later than 90 days after the developer discovers through the developer’s ongoing testing and analysis that the developer’s high-risk AI system has been used and has caused or is reasonably likely to have caused algorithmic discrimination, or the developer receives from a deployer a credible report that the high-risk AI system has been used and has caused algorithmic discrimination, whichever is earlier.
The bill does not require a developer to disclose a trade secret, information protected under state or federal law, or other confidential or proprietary information.
The bill authorizes the attorney general to require a developer to disclose to the attorney general any statement or documentation the bill requires the developer to make available to deployers and other developers if the statement or documentation is relevant to an investigation conducted by the attorney general regarding a violation of the bill.
The bill makes statements and documentation provided to the attorney general exempt from disclosure to the extent the statement or documentation includes any proprietary information or any trade secret. The developer may designate the statement or documentation as including proprietary information or a trade secret. Disclosures of statements and documents to the attorney general that include information subject to attorney-client privilege or work-product protection do not constitute a waiver of the privilege or protection.
The bill requires deployers to use reasonable care to protect individuals from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the attorney general, there is a rebuttable presumption that a deployer used reasonable care if the deployer implemented a risk management policy and program as detailed in the bill and complied with the bill’s requirements regarding impact statements.
The bill requires deployers, or a third party contracted by a deployer to use a high-risk AI system, to complete an impact assessment for the high-risk AI system. Requirements for when the impact assessment must be completed and the contents of the impact assessment are detailed in the bill.
The bill requires deployers to maintain the most recently completed impact assessment for a high-risk AI system and relevant records supporting the impact assessment for a period of at least three years following the final use of the high-risk AI system.
The bill requires deployers to, at least annually, review the deployment of each high-risk AI system used by the deployer to ensure that the high-risk AI system is not causing algorithmic discrimination.
The bill requires deployers to disclose to each individual that interacts with the AI system that the individual is interacting with an AI system. Developers who make a high-risk AI system available in this state must cooperate with deployers to allow deployers to fulfill the bill’s requirements. A disclosure that an individual is interacting with an AI system is not required in circumstances in which it would be obvious to a reasonable individual that the individual is interacting with an AI system.
The bill clarifies that its provisions do not restrict a developer’s, deployer’s, or other person’s abilities to perform certain actions as detailed in the bill.
The bill clarifies that its provisions do not apply to a developer, deployer, or other person that develops, uses, or intentionally and substantially modifies a high-risk AI system in certain circumstances detailed in the bill.
The bill does not apply to an AI system that is acquired by or for the federal government, a federal agency, or a federal department unless the AI system is a high-risk AI system used to make, or is a factor that would likely alter the outcome of, a decision concerning employment or housing.
The bill makes a developer, deployer, or other person responsible for proving an action qualifies for an exclusion from the bill’s provisions.
The bill provides the attorney general with exclusive authority to enforce the bill’s provisions.
The bill requires, prior to initiating any action for a violation of the bill’s provisions, the attorney general to issue a notice of violation to the developer, deployer, or other person who allegedly violated the bill’s provisions. The notice must contain a specific description of the alleged violation and the actions the developer, deployer, or other person must take to cure the violation. The bill authorizes the attorney general to bring an action for the alleged violation if the alleged violation is not cured within 90 days of the date the developer, deployer, or other person received the notice of violation.
The bill details under what circumstances a developer, deployer, or other person has an affirmative defense to an alleged violation. The developer, deployer, or other person bears the burden of demonstrating to the attorney general that the person meets the requirements for an affirmative defense.
The bill requires, prior to initiating any action under the bill’s provisions, the attorney general to consult with the department of health and human services (HHS) to determine whether any complaint has been filed that is founded on the same act or omission that constitutes a violation of the bill’s provisions. The attorney general is prohibited from initiating an action to enforce the bill’s provisions if a complaint has been filed with HHS relating to the same act or omission that constitutes a violation of the bill’s provisions unless the complaint has been fully adjudicated or resolved.
The bill does not preempt or otherwise affect any right, claim, remedy, presumption, or defense available at law or in equity. Any rebuttable presumption or affirmative defense established under the bill’s provisions applies only to an enforcement action brought by the attorney general and does not apply to any right, claim, remedy, presumption, or defense available at law or in equity.
The bill requires the attorney general to post on its internet site how to properly file a complaint for a violation of the bill’s provisions.
The bill does not provide a basis for a private right of action for violations of the bill’s provisions or any other law.
The bill requires the attorney general to adopt rules to implement the bill’s provisions.
A violation of this division of the bill is an unlawful practice under Code section 714.16. Several types of remedies are available if a court finds that a person has committed an unlawful practice, including injunctive relief, disgorgement of moneys or property, and a civil penalty not to exceed $40,000 per violation.