
The Future of AI Is in the States: The Case of Autonomous Vehicle Policies

Published online by Cambridge University Press:  31 July 2023

Daniel J. Mallinson*
Affiliation:
School of Public Affairs, Penn State Harrisburg, Middletown, PA, USA
Lauren Azevedo
Affiliation:
University of North Carolina at Charlotte, Charlotte, NC, USA
Eric Best
Affiliation:
University at Albany, Albany, NY, USA
Pedro Robles
Affiliation:
Penn State Lehigh Valley, Center Valley, PA, USA
Jue Wang
Affiliation:
Smeal College of Business, Penn State University, University Park, PA, USA
Corresponding author: Daniel J. Mallinson; Email: djm466@psu.edu

Abstract

The myriad applications of artificial intelligence (AI) by the private and public sectors have exploded in the public consciousness in the postpandemic period. However, researchers and businesses have been working on AI technology applications for decades, and in many ways, governments are rushing to catch up. This article presents an argument that the future of AI policy in the United States will be driven in large part by current and future state-level policy experiments. This argument is presented by drawing on scholarship surrounding federalism, regulatory fragmentation, and the effects of fragmentation on business and social equity. The article then presents the case of autonomous vehicle policy in the states to illustrate the degree of current fragmentation and considers the effects of layering new AI policies on top of existing rules surrounding privacy, licensing, and more. Following this consideration of existing research and its application of AI policy, the article presents a research agenda for leveraging state differences to study the effects of AI policy and develop a cohesive framework for governing AI.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Vinod K. Aggarwal

Artificial intelligence (AI) has exploded in the public consciousness in the postpandemic period. It is defined as “the science and engineering of making intelligent machines, especially intelligent computer programs that exhibit characteristics associated with intelligence in human behavior including among other faculties of reasoning, learning, goal seeking, problem-solving, and adaptability.”Footnote 1 While widespread public attention is more recent, with curiosity over applications like ChatGPT, researchers and businesses have been working on AI technology applications for decades, and in many ways, governments are rushing to catch up.Footnote 2 Such applications include AI-powered assistants (chatbots), fraud detection, text digitization, personalized learning, autonomous vehicles, spam filters, facial recognition, hiring, medical record review, robotics, drug discovery, crime detection, and much more.Footnote 3 This means that stakeholders in the technology and its regulation include businesses from numerous sectors of the economy, as well as governments, nonprofits, and the public.

Governments and governance scholars, however, have yet to establish a common and integrated framework for governing AI use by both the public and private sectors.Footnote 4 As a disruptive technology, AI will change business and government practices in ways that cannot be fully imagined and will grow the global economy in ways that are difficult to predict.Footnote 5 Thus, governments and businesses need to wrestle with how to effectively regulate rapidly emerging and disruptive technologies, and their potential social side effects, without squashing innovation.

Within the context of the United States, we argue that the future of this struggle is very much in the hands of the states. American federalism has been described as an environment in which the states act as laboratories. Driven by competition and problem-solving, states learn from each other, copy each other, and can push their ideas up to the federal government.Footnote 6 Often, however, this situation results in significant regulatory fragmentation for businesses. And growing political polarization and gridlock in the national government means that even significant pressure from the states for federal action runs into strong federal stasis.Footnote 7 While the National Highway Traffic Safety Administration (NHTSA) has issued guidance and technical assistance,Footnote 8 states still have the power to go their own way on the myriad policy choices that need to be made when enabling and regulating AI applications. Fragmentation of regulatory authority across agencies and levels of government presents significant costs for industries.Footnote 9 On the other hand, it also offers firms leverage in extracting favorable policy arrangements from states that are competing for their business.Footnote 10 Technology companies have shifted from positions of opposing government regulation to supporting regulation that strikes a balance between laissez-faire and prohibitive approaches.Footnote 11 But they have also lobbied states for industry-friendly policies, like protections against customer suits for privacy violations.Footnote 12

It is within this environment that the future of American AI policy will be worked out. Given the prominence of the United States in the global economy, and in AI patenting,Footnote 13 policy lessons learned in the states could be transferred cross-nationally. Of course, with the European Union (EU) emerging as a distinctive leader in AI governance, policy transfer may very well occur in the other direction, at least until the United States catches up. The EU's AI Act, while not yet finally adopted, is being touted as the first regulatory framework for AI and a model for countries across the world. The United Nations, the Organisation for Economic Co-operation and Development, the G7, and the G20 have all either established or debated principles of AI governance.Footnote 14

Within the United States, the states have their own economies, cultures, and political contexts that will shape their approaches to regulating and promoting AI technology. We argue that the policy innovation and diffusion that is already occurring among the states, and will continue to occur, has significant implications for businesses engaged in AI development. Our aim in this article is to make this broader argument and to flesh it out with a case study of state autonomous vehicle policy. We lay out the broader argument with reviews of three distinct yet related literatures. We begin more broadly with the nature of American federalism, policy fragmentation, and innovation. We then examine research on the effects of this fragmentation on businesses and on social equity. Having established this theoretical foundation for understanding AI fragmentation, we present the case of state autonomous vehicle policies. This AI application has seen substantial and varied state activity over the last decade. Thus, it is a useful example of how states are moving in many different directions with respect to AI business regulations and social equity concerns. We then consider how new AI policies are being built on top of existing policy regimes and the implications of policy layering for effective implementation. Finally, we bring the existing literature and the specific challenges of AI technology together to present a research agenda for studying AI policy fragmentation, how it affects business activity, and how scholars can leverage state variation in establishing and evaluating an effective AI governance framework in the United States.

Federalism and regulatory fragmentation in the states

The horizontal and vertical relationships between governments in the American federal system have evolved substantially since the founding. A degree of rigid clarity regarding the respective powers and policy monopolies of the federal government and the states has given way to waves of competitive, cooperative, fragmented, and now increasingly polarized relationships. Policy innovation and diffusion has long been regarded, however, as a significant benefit of the decentralization of policymaking authority in the United States.Footnote 15 States can take up new policy ideas, try them out, learn from each other, and address failures in their initial experiments.Footnote 16 In theory, this allows the states to “kick the tires,” so to speak, before policy is nationalized by the federal government.

The diffusion of innovations framework is used by policy process scholars to understand why some governments adopt policy innovations and others do not.Footnote 17 Problem-solving through policy learning is only one driver of the diffusion of policy innovations.Footnote 18 States are also competitive with each other. They compete using available policy tools to attract benefits, like population, economic development, and tax revenue, as well as to deter unwanted populations.Footnote 19 Granted, both individual decisions to move and state policy choices in a competitive environment are far more complicated than those envisioned by Tiebout competition.Footnote 20 However, state legislators rhetorically point to competitor states, particularly contiguous neighbors, when pushing policy innovations. This rhetoric often comes from a defensive posture recognizing that some benefit is lost by population and business movement to a state with a more advantageous policy environment.Footnote 21

States as policy laboratories are affected not only by bottom-up identification of problems and horizontal patterns of policy learning and interstate competition, but also by action from above. The federal government has used competitive programs like Race to the Top in education policy to foster competition, innovation, and diffusion among the states.Footnote 22 Additionally, the federal government can incentivize the top-down diffusion of policy innovations through coercive sticks and carrots, as well as through its own issue attention.Footnote 23 It has also formally devolved policymaking responsibility in a variety of areas to the states.Footnote 24 Over the last two decades, states have been pushed to act in a variety of policy domains—even on clear federal government issues like immigration—because of gridlock in national political institutions resulting from growing political polarization.Footnote 25 While this is not a formal devolution of power, states are filling the void left by the reduction of substantive federal policymaking activity.Footnote 26 This is no less the case for AI policy, as the federal government has established principles for its own use of AI and departments have developed recommended rules (e.g., NHTSA), but there is no uniform policy regime nationally. This means that states can serve as laboratories for exploring the best methods for using and regulating AI.

While this dynamic environment allows for substantial innovation at the state and local levels, it comes with costs. Policy innovation and diffusion can be slow and incomplete when left to the states.Footnote 27 Given the increasing friction within the federal policymaking system, it is not a given that even successful policy experiments will be nationalized.Footnote 28 There are also significant costs to businesses that must navigate different rules because of regulatory fragmentation. We turn now to considering these costs and how businesses adapt.

Fragmented regulations and ventures’ market and nonmarket strategies

When newly introduced controversial technologies emerge, such as AI, regulators tend to be uncertain about their potential benefits or risks. As a result, regulatory experiments are launched at various levels of government: city, county, state, and federal. Studies have demonstrated that fragmented regulations can increase the costs associated with commercializing technologies in two ways. First, they increase the cognitive burden placed on ventures, as firms must become familiar with all the fragmented regulations prior to starting their business. In fact, reducing regulatory variations helps increase the number of entrepreneurs.Footnote 29 Second, fragmented regulations increase the costs of compliance for firms that wish to do business in multiple areas, as they must take additional measures to meet the requirements of different regulations.Footnote 30

Fragmented regulations can have a considerable impact on the ecosystem surrounding new technologies, as investors, media, consumers, and suppliers may be reluctant to support or legitimize entrepreneurs when regulations are uncertain.Footnote 31 Faced with fragmented regulations, ventures respond in different ways: they can narrow the scope of their innovations, or they can pivot and adjust their businesses to fit the different regulatory schemes. Such fragmented regulations can be a significant drawback for investors. For example, when the United Kingdom tried to follow a different set of rules from the EU on Sustainable Finance Disclosure Regulation, investors from JPMorgan Asset Management voiced their concern that the different regulations in European markets would make it difficult for ventures to develop.Footnote 32 Similarly, as regulations on autonomous vehicles are fragmented across states in the United States, investors are concerned about the ability of newly established ventures to scale up.Footnote 33 Yet regional variation in regulations also provides more room for ventures to innovate, which may attract more equity financing.

Despite the potential downsides of fragmented regulations, ventures can also utilize nonmarket strategies to help coordinate these regulations so that they converge rather than diverge in the system. First, entrepreneurs that lobby or collaborate with regulators can push for preemption from higher levels of government to reduce conflict among local regulations.Footnote 34 In a converging regulatory structure, legally authorized institutional settings, regulatory scope, legal instruments, and procedural requirements converge into a consistent system.Footnote 35 Such a system can be gradually institutionalized as the regulatory template for new technologies or business models. For example, the Department of Transportation Act of 1966 created the Federal Railroad Administration, which helped consolidate railway regulations across different states to foster railroad development. Second, entrepreneurs can also pick locations with favorable regulations in which to develop and then push for other places to change their regulations accordingly. This can create a race to the bottom (or to the top).Footnote 36 In fact, entrepreneurial businesses, as opposed to mature firms, view regulations and policy more favorably, as they provide some control over the business environment.Footnote 37 The fragmentation of regulations in the United States not only creates challenges for firms but also creates inequalities for citizens.

Fragmentation effects on equity

Utilization of AI and the fragmentation of regulatory policies present several opportunities and challenges for public administrators. Automating decision-making presents opportunities for the public sector and can increase efficiency and responsiveness, while also raising concerns over equity and the efficacy of democracy.Footnote 38 In such instances, power is shared between bureaucrats, technocrats, and contractors with certain expertise during the creation of AI metrics. This, however, removes humanity from government transactions and limits adaptability to changes in contexts, social desirability, or the national agenda.Footnote 39 Further, the implementation of AI policies across states leads to increased inequities, as states do not always implement policies equally, even when the same laws are in place.Footnote 40 This section discusses equity aspects of the creation of AI metrics and examines how fragmentation in the implementation of AI policies leads to inequity.

Equity in the creation of AI metrics

There are several justice and equity considerations in the creation of AI metrics. These include the digital divide (or unequal access to technology); algorithmic bias and values; differences in systems based on cultural and societal norms and values; fair decision-making mechanisms; and diversity, equity, and inclusion (DEI) practices.Footnote 41 Additional questions remain over the issue of responsibility in terms of machine choices. Automating weapons and vehicles, for instance, may seem desirable or inevitable but creates critical questions on policy and ethics.Footnote 42 These vary in relevance depending on the domains under consideration—such as in public health, social welfare, policing and criminal justice, military operations, and transportation, to name a few. Discrimination and equity concerns can play out differently in various contexts and when various social groups are involved.Footnote 43

For example, the military may use AI for efficiency in war response, or more specifically in geospatial intelligence, leveraging algorithms to identify targets for drone strikes. AI is becoming paramount for battlefield efficiency, strategy, and tactics. The war in Ukraine is a timely example, with geospatial intelligence used to analyze satellite images, geolocated photos, and other information from a variety of sources and governments. This war is unique in the willingness of intelligence entities around the world to assist Ukraine, providing AI software for analyzing the war and its movements.Footnote 44 However, if a mistake or war crime were to happen because of an algorithm that led to catastrophic consequences, difficult questions would arise over how AI preempted human decisions and where responsibility should fall.Footnote 45 Additionally, AI is already used to loiter over and strike targets on current battlefields, a step before fully autonomous targeting. While the Pentagon's Joint Artificial Intelligence Center works to avoid AI accidents, particularly in crisis dynamics, escalation, and strategic stability around AI military use, immense justice and equity considerations remain for policymakers in terms of equitable uses of AI, responsibility for decision-making, and ethical outcomes.Footnote 46

Possible remedies for AI equity concerns include external audits, examination of algorithmic judgments across social categories, assessment of the legal impacts of algorithmic systems, and consideration of affected groups when examining contexts and impacts and gaining insight, in addition to attention to DEI practices and management.Footnote 47 Policies are also needed to ensure that AI will serve the public good and that there are frameworks for responsibility surrounding AI that are satisfactory to stakeholders.Footnote 48 AI policy can be defined as policy that is “aimed at research, development, and deployment of AI technologies in the economy and society”Footnote 49 and involves multiple stakeholders and multistakeholder arrangements, including governments, industries, markets, society, and other organizations.Footnote 50 AI policymaking is complex, as decisions are multilevel and often situated within policy issues, solutions, and instruments and involve various policy instruments, governance modes, and different social, political, and economic contexts.Footnote 51 AI policy creation is shaped by the design, dynamics, and interactions of political regimes and governance models, as well as by national strategies and the laws in place.
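To make the audit remedy more concrete, the following is a minimal sketch of what an external audit comparing algorithmic judgments across social categories might compute. It is illustrative only: the decision log, the group labels, and the four-fifths flagging threshold are hypothetical assumptions for this sketch, not a procedure drawn from the sources cited above.

```python
# Illustrative audit of automated decision outcomes across social categories.
# The decision log, group labels, and the 0.8 flagging threshold are hypothetical.
from collections import defaultdict

decisions = [  # (social_category, favorable_outcome)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # category -> [favorable count, total count]
for group, favorable in decisions:
    counts[group][0] += int(favorable)
    counts[group][1] += 1

rates = {group: fav / total for group, (fav, total) in counts.items()}
reference = max(rates.values())  # compare each group to the best-treated group
for group, rate in sorted(rates.items()):
    ratio = rate / reference
    flag = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: favorable rate {rate:.2f}, ratio to highest {ratio:.2f} ({flag})")
```

An audit of this kind makes disparities visible, but it does not by itself resolve the questions of responsibility and context discussed above.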

Fragmentation and implementation of policies leading to inequity

The United States and many other countries have national strategies for AI that include multistakeholder arrangements, such as vertical and horizontal governmental relationships, and generally aim to improve security and quality of life.Footnote 52 Composed of six general objectives, the US policy supports international cooperation and soft power mechanisms, and it follows OECD principles on AI.Footnote 53 State and local governments are increasingly becoming interested in AI policies and are working to fill the gaps left by limited and general federal guidance.Footnote 54 State-level policies regarding AI may offer more creativity, flexibility, or agility than policies and strategies provided at the federal level.

Many states have unique approaches that impact citizens differently based on several factors and contexts, from the creation of task forces and special commissions to notices on legal protections and privacy. For instance, California requires organizations to disclose the use of artificially intelligent bots when communicating online with citizens or consumers. The State of Washington is considering requiring governments to use algorithmic accountability reports for automated decisions.Footnote 55 Strategies and federal recommendations regarding the adoption of AI are one avenue for understanding AI policy adoption within states. While the literature generally considers the benefits of AI policies, states do not implement policies equally, even when the same laws are adopted.Footnote 56 Further, an additional equity challenge is often not what information is collected using AI, but how granular that information is and how it is used in implementation, which can vary by context.Footnote 57 These are larger issues that exist in federalism, which creates inequality between groups and unequal treatment in benefits.Footnote 58

The devolution of social policy in the United States is instructive for the social equity problems that emerge with policy fragmentation. A tangible example is algorithmic risk assessments in sentencing, where judges and courts use automation to assist human judgment through risk scores and algorithms. In Virginia, for example, an AI risk assessment tool was adopted with the aim of reducing incarceration and recidivism, yet results show that use of the tool can lead to age and racial disparities and leaves open questions about implementation, consistency in use, and the adequacy of training to use such systems.Footnote 59 Welfare devolution to the states is another example of how inequities have been created as a result of uneven responsiveness of state and local governments.Footnote 60 Domestic violence policy devolution is another, whereby state and federal factors influence the adoption of policies through political and demographic indicators.Footnote 61 Because federal domestic violence laws are designed to leave states leeway, cases are heard under state laws, which vary in terms of the definition of the crime, the resulting punishment, and the latitude given to officers on how to arrest and to prosecutors on how to classify domestic violence. Research has found clear differences in inequities and responsiveness based on context across the states.Footnote 62 Like domestic violence policies, which have a relatively short history, varying support, and inherently unequal policy effects based on many social and demographic indicators,Footnote 63 AI policy must attract more attention from policymakers and scholars to address the differences in how these laws are written and implemented across local, state, and federal governments in order to achieve equity and equality.

Divergent ways of interpreting AI policies horizontally and vertically—meaning across and among government levels and agencies—have emerged in other political contexts as well. Af Malmborg and Trondal consider different factors influencing the adoption of AI policies in the EU and the coordination of policies by member states in the Nordic countries. They find differences in the adoption of AI policies based on national organizational capacities, suggesting that the framing of policies is filtered through organizational structures and governance mechanisms across countries. Because of the different organizational capacities within these countries, there could be weaker implementation of certain policies.Footnote 64 The implications for national and even transnational policymaking and equity are abundant.

There are other personal and political costs of fragmented AI policies. Any sociotechnical system that utilizes algorithmic processes in decision-making can create discrimination. In fact, Williams, Brooks, and Shmargad find that censoring information like social category data can exacerbate discrimination.Footnote 65 Regardless, special attention must be paid to the equity implications of AI policies and their fragmentation across the states. It is difficult, if not impossible, to ensure equity across the states without federal intervention. However, it is also true that when the federal government acts, it can crowd out desirable state policy innovation as states use their resources to pursue other goals.Footnote 66 Having considered the broader opportunities and challenges of federalism for innovation, as well as the specific issues of policy fragmentation for business and social equity, we now turn to illustrating this argument with the case of state autonomous vehicle policy.

A case of regulatory fragmentation: Autonomous vehicles

Autonomous vehicle (AV) policy fragmentation is a significant challenge presented by the development and deployment of self-driving cars.Footnote 67 Because of the lack of federal regulations, states have been taking the lead in creating their own policies, resulting in a patchwork of regulations and requirements that vary from state to state. This fragmentation has created a complex regulatory landscape that can be challenging for companies working on autonomous vehicles to navigate.Footnote 68 AVs are perhaps one of the most visible applications of commercial AI technology. The Insurance Institute for Highway Safety has predicted that 3.5 million self-driving vehicles will be on American roads by 2025, and McKinsey & Company has projected $300 billion to $400 billion in global revenues from AVs by 2035.Footnote 69 Top producers of personal and commercial AV technology include Tesla, Nvidia, Waymo, and Argo.ai. The industry is much broader than fully autonomous vehicles and includes other driver-assistance systems such as cameras, lidar, radar, and sensors.Footnote 70

To better understand the current state of autonomous vehicle regulation across the United States, we use the National Conference of State Legislatures (NCSL) database on autonomous vehicle laws passed by states from 2010 to 2023 for a descriptive analysis of this fragmentation.Footnote 71 Doing so offers a clear and concise overview of the current regulatory landscape, helping identify where gaps and inconsistencies exist. Specifically, we illustrate both the business and equity challenges presented by the variation in—and, in many cases, the lack of—state policies. To this end, we focus on five business-related topics that are present in state laws—commercial, insurance and liability, licensing and regulation, operator requirements, and vehicle testing—and one equity concern: privacy of collected vehicle data. Table 1 charts which states have adopted each of these components of AV policy.

Table 1. Autonomous vehicle policy components by state.
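For readers interested in how such a component-by-state matrix can be assembled, the following is a minimal sketch, assuming a hypothetical flat-file export of the NCSL database in which each row is one enacted law coded by state and policy topic. The file name and column names are illustrative assumptions, not the NCSL schema or the authors' actual coding procedure.

```python
# Sketch of tabulating coded AV laws into a state-by-component adoption matrix.
# "ncsl_av_laws_2010_2023.csv" and its columns ("state", "policy_topic") are
# hypothetical; substitute whatever export and coding scheme is actually used.
import numpy as np
import pandas as pd

TOPICS = [
    "Commercial",
    "Insurance and liability",
    "Licensing and registration",
    "Operator requirements",
    "Vehicle testing",
    "Privacy of collected vehicle data",
]

laws = pd.read_csv("ncsl_av_laws_2010_2023.csv")      # one row per enacted law
subset = laws[laws["policy_topic"].isin(TOPICS)]      # keep the six topics examined here

# Mark a state as adopting a component if it enacted at least one law on that topic.
adoption = pd.crosstab(subset["state"], subset["policy_topic"]).gt(0)
table1 = pd.DataFrame(
    np.where(adoption, "X", ""), index=adoption.index, columns=adoption.columns
)
print(table1.to_string())
```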

There are notable regional patterns in the adoption of these components. Northeastern states have been cautious in their approach to AVs, with more laws focused on safety regulations and testing. Southern states, like Texas and Florida, have been more permissive with testing and deployment. Midwestern states like Michigan and Illinois have led in the areas of research and development. Finally, western states like California and Washington have taken more progressive approaches to regulations that encourage the adoption of AV technology. California is a clear leader in AV policy innovation, which is understandable given that the state has become synonymous with tech innovation. But it does not lead on every component of AV policy, and there are substantial differences in the approaches taken by adopting states. We now consider these differences.

Commercial policy

Autonomous vehicles stand to benefit commercial applications by reducing labor costs and improving efficiency in transportation and logistics operations.Footnote 72 Commercial laws are policies that regulate business activities within a country. In the United States, each state has its own approach to commercial laws. While commercial provisions are among the most widely adopted topics according to the NCSL, the substantial variability in what states have allowed, disallowed, or regulated becomes evident as we examine the details of these laws.

Regulations supportive of the industry include examples like Alabama's exemption for truck platoons from receiving citations for following too closely.Footnote 73 The exemption applies to platoons engaged in research for truck platooning technology. Similarly, Arkansas enacted a law that exempts truck platoons under certain conditions. The law defines a driver-assistive truck platooning system as technology that integrates sensor arrays, wireless communications, vehicle controls, and specialized software to coordinate acceleration and braking between two or more vehicles with a human operator in the lead vehicle.

Connecticut offers an example of commercial policy that includes more intrusive regulations. It provides the commissioner of transportation with the right to enter and utilize private property to correct unsafe conditions or restore the interruption of essential railroad or transit services. The commissioner is required to make a reasonable effort to notify the owner of record of such property prior to entering the property.

Other states’ commercial provisions have to do with taxation. California enacted specific tax reforms for vehicle testing in San Francisco. Indiana repealed the motor carrier surcharge tax and increased the special fuel tax. It also specified how netted International Fuel Tax Agreement Clearinghouse refunds and receipts are deposited or credited and the method for calculating the commercial vehicle excise tax rate. In sum, each state has a unique approach to commercial laws that reflects the state's economic and political interests.

Insurance and liability policy

Insurance and liability for autonomous vehicles have been significant concerns for lawmakers in multiple states.Footnote 74 Autonomous vehicle insurance and liability regulations have been enacted in several states, including California, Texas, Florida, Michigan, and New York, with the aim of ensuring that insurance coverage is in place to cover damages and that responsible parties are held liable in case of an accident involving an autonomous vehicle. For example, as early as 2012, California required manufacturers of autonomous vehicles to obtain a special permit before testing on public roads, and the state requires these vehicles to have liability insurance coverage. States like Texas, Florida, Michigan, and New York also have passed laws establishing minimum insurance and liability coverage requirements for AV testing.

In some states, like Alabama and Arkansas, laws have been more narrowly enacted to address insurance and liability concerns. Alabama established minimum liability insurance coverage requirements for autonomous commercial vehicles operated by an automated driving system and commercial motor vehicles with teleoperation systems. Arkansas established minimum liability insurance coverage requirements for motor carriers of property to ensure that sufficient insurance coverage is in place in case of an accident involving autonomous vehicles.

Other states, like Georgia, Iowa, Utah, Vermont, and West Virginia, have enacted laws that establish broader regulations for autonomous vehicles, including their operation and licensing requirements, while also requiring liability insurance coverage. For example, West Virginia passed the Fully Autonomous Vehicle Act in 2022, which provides requirements for the operation of fully autonomous vehicles without a human driver and with a human driver; provides for licensing, titling, and registration; and provides for the operation of on-demand autonomous vehicle networks and fully autonomous commercial and motor vehicle carriers, while also requiring liability insurance coverage.

Licensing and registration policy

Every state has a procedure for licensing and registering traditional motor vehicles, which serves several important purposes.Footnote 75 First, it ensures that states possess accurate records about vehicles and their owners.Footnote 76 Second, it guarantees that vehicles on the road are insured, thereby promoting financial safety for all.Footnote 77 Third, this process generates revenue for the state, which can be used to support various public services and initiatives.Footnote 78 Several states have enacted legislation to specifically license and register autonomous vehicles.Footnote 79

In New York and North Carolina, for instance, the state's department of transportation is authorized to conduct testing of autonomous vehicle technology, but operators must hold a valid driver's license for the type of vehicle being used. Georgia exempts persons operating an automated motor vehicle with the automated driving system engaged from the requirement to hold a driver's license and provides for registration requirements. Utah defines terms related to autonomous vehicles, allows the operation of a vehicle in the state by an automated driving system, and exempts a vehicle with an engaged automated driving system from licensure. Colorado allows the use of automated driving systems to control motor vehicles in compliance with state and federal laws. It also convened a stakeholder group to make recommendations for further regulations regarding autonomous commercial vehicles.

Operator requirement policy

State operator requirements regulate the use of autonomous and semiautonomous vehicles and establish the conditions and requirements for their operation. Each state has approached its operator requirements differently. Some states, like Alabama and Arkansas, have enacted laws that authorize autonomous commercial vehicles operated by an automated driving system and commercial motor vehicles with teleoperation systems. Other states, like Arizona, have enacted laws that relate specifically to autonomous vehicles but do not include operator requirements.

States like California have enacted laws that require operators of autonomous vehicles to comply with certain regulations, such as restrictions on the use of wireless communication devices. Autonomous vehicle operators are required to comply with these regulations to ensure the safety of the vehicle's passengers and other road users. Conversely, Florida has made a notable exemption for motor vehicle operators who are operating autonomous vehicles by allowing them to use wireless communication devices, which is otherwise prohibited while driving. This decision recognizes the fact that the operator of an autonomous vehicle may have less need for attention and focus on the road than traditional drivers.

Not yet decided on its regulations, Kansas has established an autonomous vehicle advisory committee to provide guidance on autonomous vehicle policy and development. This committee includes representatives from the transportation industry, academia, and government agencies. Louisiana has taken a more comprehensive approach to regulating autonomous vehicles. It has established a controlling authority for autonomous commercial motor vehicles, which outlines specific requirements for operators, establishes reporting requirements following an accident, and provides guidance on the use of remote drivers and teleoperation systems. These regulations are intended to promote safety and ensure that operators of autonomous vehicles are appropriately trained and qualified.

Vehicle testing policy

Eighteen states have passed legislation authorizing the testing of autonomous vehicles on public roads. The legislation varies from state to state but generally allows autonomous vehicles to be tested under certain conditions, such as having a human driver ready to take control if necessary. States like Arkansas, California, and Colorado have authorized testing by private companies on state roads or highways. However, some states, like Colorado, have stipulations that vehicles being tested must be equipped with technology that allows the human driver to take control at any time.

Several states have enacted their own testing programs. Connecticut and New Hampshire both adopted commissions or task forces to study and test AVs. New York has passed several testing laws. The first authorized an autonomous vehicle demonstration project to test autonomous vehicles that do not have a driver in the driver's seat and are not equipped with a steering wheel, brake pedal, or accelerator. The second extended the repeal date of provisions authorizing the department of transportation to conduct testing of technologies that enable drivers to safely operate motor vehicles with less than 100 feet between each vehicle or combination of vehicles. The third repealed a requirement that the department of motor vehicles notify the legislature of receipt of an application seeking approval to operate an autonomous vehicle capable of operating without a driver inside the vehicle on public roads. These laws demonstrate that several states are actively exploring and implementing policies and regulations to enable the safe and effective deployment of autonomous vehicles on their roads.

Vehicle data privacy policy

While each of the five preceding topics relates to the business of autonomous vehicles, the final topic, privacy, captures a facet of equity. As with other data-rich emergent technologies, like smart homes and devices, autonomous vehicles produce vast geo-location data that raise significant privacy concerns.Footnote 80 Alas, only five states have adopted laws specifically addressing the privacy of collected vehicle data. One potential explanation for the small number of states adopting privacy policy is that the policy is specific to a particular state's needs and circumstances, making it less applicable or necessary in other states.Footnote 81 For example, some states may prioritize different policy goals related to autonomous vehicles, such as safety regulations or liability laws, over privacy and data protection regulations. Another reason could be the political or cultural differences between states, which could make it more difficult to achieve consensus on certain policies across state lines.Footnote 82 Policies that require extensive time and resources to adopt and implement are less likely to be replicated by other states, as they may find it difficult to commit the same level of resources and time.Footnote 83 For example, it may be a challenge for other states to adopt privacy and protection regulations related to autonomous vehicles because the policies may require a significant investment of time, resources, and expertise to develop and implement. Additionally, existing privacy laws can be complex and vary from state to state, which could create confusion and inconsistencies if other states attempt to replicate policies that have been successful in other regions.

California, Georgia, Michigan, Nevada, and Pennsylvania are the five states that have specifically addressed privacy protection for autonomous vehicles. California has regulations in place requiring autonomous vehicle manufacturers to obtain written consent from passengers before collecting or sharing their personal information. Michigan and Nevada have also enacted laws related to the privacy of collected vehicle data, although they do not specifically mention autonomous vehicles. Policies adopted by Georgia and Pennsylvania both address the privacy of collected vehicle data. Overall, while some states have enacted laws related to the use of autonomous vehicles, there is no federal legislation in place that specifically addresses this issue, which may lead to inconsistencies in how different states regulate the use of autonomous vehicles. However, the trend toward regulating the use of autonomous vehicles and protecting privacy in relation to them is likely to continue in the coming years.

These five policy topics clearly illustrate how the regulation of autonomous vehicles varies substantially among states, including requirements for drivers, vehicles, operations, and the reporting of data.Footnote 84 The diversity of these requirements poses significant challenges for manufacturers now, as well as for fleet operators, human operators, and consumers of AVs in the future.Footnote 85 Therefore, navigating the complex regulatory landscape is crucial for successfully implementing AV technology.Footnote 86 On one hand, experimenting with these different rules offers the opportunity for policy lesson drawing and intergovernmental knowledge and policy transfer, though it is not guaranteed.Footnote 87 On the other hand, a more consistent and streamlined approach to regulating autonomous vehicles would support the growth and development of this new technology.Footnote 88

Fragmentation in new policies being adopted is not the only regulatory concern for autonomous vehicle manufacturers and eventual users. These new policies are layered on top of existing laws that regulate driving, commerce, and privacy. We now consider how administrative layering creates additional political and business challenges.Footnote 89

The consequences of layering AI policy

Often, the regulations discussed regarding autonomous vehicles are about relatively new policies and specifically targeted rules. However, when hardware, software, and communication systems are all considered, there are numerous regulatory issues involving policies that long predate autonomous vehicles or might not be relevant to vehicles at first glance, creating the possibility of burdensome administrative layering that can negatively affect policy implementation.Footnote 90 These issues include limitations on active sensor use, radio spectrum and power regulations, minimum safety and inspection standards, and required or disallowed equipment.

These issues also exist internationally, especially with differing spectrum standards and software control. For instance, a subsidiary of Baidu, a Chinese company, is testing autonomous vehicles in California, and currently proposed regulations of vital technology systems developed outside the United States are likely to have unanticipated consequences.Footnote 91 Recently, the US Congress considered banning, limiting, or requiring ownership changes for the social media platform TikTok at a national level because of issues with data security and sensitive information being sent to a Chinese firm. Multiple state and local governments have already proposed banning the platform, with Montana adopting the first statewide ban in 2023.Footnote 92 There are likely to be similar issues with regulations requiring domestic ownership of critical software tools for vehicle brands that are owned or made outside the United States. If these policies are enacted at the state level, it is possible to imagine a future in which vehicles could be banned from operation after purchase because of software and data collection features. As autonomous vehicles age, they are likely to follow the same migration patterns as conventional used vehicles, moving from the nations currently leading autonomous vehicle development to the Global South, and these powerful technological tools may end up in areas with no effective vehicle regulation or autonomous mapping or infrastructure.Footnote 93

Returning to the domestic conversation, consider current vehicular hardware laws, such as models allowed in the United States that violate more stringent emissions standards in states like California.Footnote 94 For both commercial and consumer vehicles, there are models that can be registered in some parts of the United States but not others. This precedent violates the idea that once registered, a vehicle can be used throughout the United States. With autonomous vehicles, these hardware issues have the potential to get even more complicated, as hardware changes over the life of vehicle production may force vehicles out of compliance in certain locations in ways that cannot be determined by make and model alone. A current example of this is Tesla's modifications to hardware in “autopilot” systems. Tesla shifted from relying on radar sensors for autonomous motion to visible light cameras and even disabled the radar sensors in existing vehicles.Footnote 95 This type of deprecation could make an existing vehicle noncompliant because such vehicles now lack effective obstacle detection as defined in state standards. Recently, Tesla returned to installing radar in equipped vehicles after a noticeable increase in self-driving accidents.Footnote 96

An autonomous vehicle is a combination of hardware sensors, external communication devices, and software, and it is likely not possible to know in real time whether all the systems in a vehicle are compliant with state regulations at a specific time and location. With technology companies, there is already an issue enforcing state and local regulation because compliance is too complicated, and these problems will be more severe when they are occurring in physical space instead of on screens.Footnote 97 For instance, most states require operators of photogrammetry or lidar sensors to be licensed to determine the precision location of items in space. While these regulations are not aimed at autonomous vehicles, the action performed by the operator is the same, and states could use existing regulations to limit the use of autonomous vehicles if they are considered a danger.Footnote 98 As autonomous vehicles become more common and certain technologies become clearly superior, states may also express preference for hardware standards only used by certain brands, essentially limiting how existing vehicles could function across municipalities. Administrative layering is particularly concerning due to the complexity of the technology and the potential risks associated with autonomous vehicles, including data privacy issues.Footnote 99 Smoothly functioning autonomous vehicles require collaboration between vehicle manufacturers, software developers, and regulators of transportation systems and other infrastructure. A relationship this complicated may thus require unified federal standards.
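The scale of this problem can be illustrated with a small sketch of a jurisdiction-by-equipment compliance check. The per-state rules and the vehicle configuration below are invented for illustration; they are not actual state requirements, and a real system would need to track far more dimensions (software versions, spectrum use, inspection status, and so on).

```python
# Hypothetical illustration of checking one vehicle configuration against
# divergent state-level equipment and operator rules. All rules are invented.
from dataclasses import dataclass, field

@dataclass
class VehicleConfig:
    sensors: set = field(default_factory=set)   # e.g., {"radar", "camera", "lidar"}
    software_origin: str = "domestic"            # provenance of the driving software
    human_operator_present: bool = True

# Invented per-state rules: required sensors, operator presence, software origin.
STATE_RULES = {
    "State A": {"required_sensors": {"radar"}, "requires_operator": True},
    "State B": {"required_sensors": {"lidar", "camera"}, "requires_operator": False},
    "State C": {"required_sensors": set(), "requires_operator": True,
                "software_origin": "domestic"},
}

def compliant(config: VehicleConfig, rule: dict) -> bool:
    """Return True if the configuration satisfies one state's (invented) rule set."""
    if not rule["required_sensors"].issubset(config.sensors):
        return False
    if rule.get("requires_operator") and not config.human_operator_present:
        return False
    if "software_origin" in rule and rule["software_origin"] != config.software_origin:
        return False
    return True

vehicle = VehicleConfig(sensors={"camera", "radar"})
for state, rule in STATE_RULES.items():
    status = "compliant" if compliant(vehicle, rule) else "non-compliant"
    print(f"{state}: {status}")
```

Even this toy example shows how a single hardware or software change, such as disabling a radar sensor, can silently flip a vehicle's status in some jurisdictions but not others.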

It is worth noting that while AVs are presently the most tangible and visible category of AI-enabled devices, they are far from the only category subject to these regulations. Personal and commercial unmanned aerial vehicles are increasingly controlled autonomously, and we are currently experiencing the beginning of widescale adoption of generative AI for everything from chatbots to writing code, class papers, or even legal opinions.Footnote 100 Like autonomous vehicles, these tools sometimes have disastrous results,Footnote 101 which are followed by heavy-handed regulation that leads to issues with effective enforcement and inconsistent performance across municipalities.Footnote 102 In the United States, the bizarre inconsistencies in drug and firearm policies serve as good examples of the complications with these regulations. Simply walking across a border can make an action or item a criminal offense, leading to the existence of complicated tools to predict the legality of an action or item by geography.Footnote 103 Additionally, these tools will run into subject matter regulatory issues for things like debt collection or medical discussions. This is why we use AVs as a case study but present the concepts of regulatory fragmentation and federalism as issues for AI technology, broadly speaking.

Discussion and conclusion

Federalism can be an incubator for policy innovation and learning. But it can also increase friction and fragmentation, producing dis-integrated policies that are layered on top of each other. As in so many policy domains in the United States, we argue that significant policy innovation in AI will be driven by the states. Our review of the literatures on policy fragmentation and innovation, business regulation, and social equity demonstrates that the outcomes of state-level policy experimentation are not predetermined. State competition to attract innovative AI firms could be good for entrepreneurs but problematic for national firms that must navigate a web of conflicting rules. Policy experimentation could offer best practices for addressing the myriad privacy and social equity concerns that surround different AI-driven technology applications, but fragmentation could also lead to unequal treatment and outcomes across the states. Our more focused analysis of the current state of autonomous vehicle policy in the states illustrates this fragmentation and shows how administrative layering can significantly complicate the development and deployment of emergent AI technology.

Our aim with this article is to draw attention to studying state-level policy. Most research on AI governance is focused on national governments. But political science, public administration, and policy science research offers theory and methods that will be useful for understanding AI policy and its effects on businesses and citizens. Furthermore, the states and their variation in political, social, economic, and policy contexts have much to offer for significant theoretical development and empirical testing.Footnote 104 We did not set out to answer these questions, but to raise them to set an agenda for future AI policy and governance research.

The advent of new technologies presents policymakers with a unique set of challenges when dealing with new ventures. Primarily, these challenges involve the decision-making and knowledge-creating processes. On one hand, the utilization of digital technologies, such as AI and platforms, can shift the decision-making power from humans to machines. This shift raises the question of who should be held accountable for the decisions made by AI. Further research is needed to investigate how ventures allocate their responsibilities and how regulators can intervene to affect responsibility allocation after AI is adopted. On the other hand, AI technologies involve more stakeholders in the knowledge creation process, thereby creating potential conflicts of interest between ventures and stakeholders. Further research is needed to explore how ventures can resolve these conflicts and how policymakers can help coordinate the relationship between ventures and stakeholders.

Policymakers and academicians must also continue to address equity concerns in AI policy and what such policies mean for democracy. As governments expand the use of automated decision-making mechanisms for various public functions, these systems must be regulated and closely examined for equity, while keeping changing political contexts and the national mood in mind. Future research in this context must consider how fragmentation impacts AI policy implementation, the overall governance of AI, and how the implementation of AI technology by governments and private firms results in disparate outcomes for citizens.

Researchers can also leverage the diversity in policy approaches among the states to better understand the effects of different policy designs and tools on businesses and consumers. In this respect, the potential research questions are endless, as new applications of AI will continue to rapidly emerge. Examples could include the effects of TikTok bans on social media activity, the effects of licensing and liability rules on autonomous vehicle uptake and use, the effects of algorithmic transparency and public engagement on the development of public trust in AI, and much more.

Studying the dynamic process of AI policy innovation across the states is also fruitful ground for pushing forward theories of the policy process. Policy innovation and diffusion theory is an obvious potential beneficiary. Much is left to be unpacked from Table 1, including regional patterns of policy adoption and the specific drivers of AV policy innovation. AI policy is highly technical, and thus it can be useful for expanding research on the effects of policy attributes on diffusion.Footnote 105 This will require additional data collection not only on AV policy, but also on other policies that emerge to address different AI technologies (e.g., generative AI). Data collection on state variation in policy tools, rhetoric and narratives, advocacy coalitions, administrative layering, collaborative governance, institutional grammar, policy feedback effects, and much more can help advance multiple theories.Footnote 106 Interdisciplinary research on AI policy that brings together business, political science, public administration, and public policy to study variation in state AI policy can thus yield significant dividends for both scholarly understanding and effective practice.

Competing interest

The authors declare none.

Acknowledgments

The authors are grateful to the Penn State Center for Responsible Use of Artificial Intelligence for its financial support of this work.

Footnotes

2 Robles and Mallinson (2023).

4 Wirtz and Müller (2019); Robles and Mallinson (forthcoming).

7 Brock and Mallinson (forthcoming); Mallinson and Hannah (2024); Goelzhauser and Konisky (2020); Konisky and Nolette (2021).

8 National Highway Traffic Safety Administration (2017).

9 Kalmenovitz, Lowry, and Volkova (2022).

10 Rahman and Thelen (2019).

11 de Laat (2021).

12 Edgerton (2023).

14 Cihon, Maas, and Kemp (2020).

18 Graham, Shipan, and Volden (2013).

19 Baybeck, Berry, and Siegel (2011).

20 Tiebout (1956); Volden (2002); Berry, Fording, and Hanson (2003).

21 Baybeck, Berry, and Siegel (2011).

22 Manna and Ryan (2011); Nicholson-Crotty and Staley (2012); Mallinson and Lovell (2022).

23 Welch and Thompson (1980); Karch (2012); Karch and Rosenthal (2016); Karch and Rose (2019); Baumgartner, Gray, and Lowery (2009).

24 Morehouse and Jewell (2004); Hoornbeek (2005); Gainsborough (2003).

25 Bowling and Pickerill (2013); Pickerill and Bowling (2014); Rose and Bowling (2015); Conlan and Posner (2016); Konisky and Nolette (2021); Karch (2020).

26 Hetherington and Rudolph (2015); Critchfield, Reed, and Jarmolowicz (2015).

27 Mallinson (2021b).

28 Brock and Mallinson (Under Review); Weissert and Scheller (2008).

30 Anderson (2006).

31 Burford, Shipilov, and Furr (2022).

32 Klasa and Agnew (2022).

34 Briffault (2018); Ballotpedia (2021).

35 McCubbins (1985).

37 Marcus and Cohen (2015).

40 Sidorsky and Schiller (2023).

43 Williams, Brooks, and Shmargad (2018).

44 Fontes and Kamminga (2023).

45 West and Allen (2020).

46 Tadjdeh (2021).

47 Williams, Brooks, and Shmargad (2018).

49 Filgueiras (2022, 2).

50 Raymond and DeNardis (2015).

51 Filgueiras (2022).

52 OECD (2022).

54 Electronic Privacy Information Center (2023).

58 Golembiewski and Wildavsky (1984); Kenyon and Kincaid (1991); Mettler (1998).

59 Van Dam (2019).

61 Sidorsky and Schiller (2023).

64 Af Malmborg and Trondal (2023).

65 Williams, Brooks, and Shmargad (2018).

69 Deichman et al. (2023); Insurance Information Institute (2022).

71 NCSL (2022).

72 Brinkley, Daily, and Gilbert (2019).

73 A platoon is defined as a “group of individual commercial trucks traveling in a unified manner at electronically coordinated speeds at following distances that are closer than would be reasonable and prudent without the electronic coordination” (Alabama SB 172, 2018).

75 Hubbard (2018).

78 Pütz, Murphy, and Mullins (2019).

79 NCSL (2022); Hubbard (2018).

80 Mallinson and Shafi (2022).

82 Pütz, Murphy, and Mullins (2019).

84 Hubbard (2018).

86 McAslan, Gabriele, and Miller (2021).

91 California Office of Public Affairs (2021).

95 Tesla (2023).

96 Dnistran (2023).

99 Sella-Villa and Hodgson (2023).

100 Anders (2023).

101 Weiser (2023).

102 Pollicino (2023).

104 See, for example, Lowery, Gray, and Cluverius (2013); Lowery and Gray (2007); Brasher and Lowery (2006).

105 Nicholson-Crotty (2009); Mallinson (2016); Menon and Mallinson (2022); Makse and Volden (2011).

106 Shanahan, Jones, and McBeth (2011); Michener (2019); Siddiki and Frantz (2022); Scott and Thomas (2017).

References

Adler, Jonathan H. 2010. “Cooperation, Commandeering, or Crowding Out: Federal Intervention and State Choices in Health Care Policy.” Kansas Journal of Law and Public Policy 20 (2): 199–221.
Adler, Jonathan H. 2012. “Interstate Competition and the Race to the Top.” Harvard Journal of Law & Public Policy 35 (1): 89–99.
af Malmborg, Frans, and Trondal, Jarle. 2023. “Discursive Framing and Organizational Venues: Mechanisms of Artificial Intelligence Policy Adoption.” International Review of Administrative Sciences 89 (1): 39–58. https://doi.org/10.1177/00208523211007533.
Allyn, Bobby. 2023. “Montana Becomes 1st State to Approve a Full Ban of TikTok.” National Public Radio, 14 April 2023. Accessed 21 April 2023, https://www.npr.org/2023/04/14/1170204627/montana-becomes-1st-state-to-approve-a-full-ban-of-tiktok.
Anders, Brent A. 2023. “Is Using ChatGPT Cheating, Plagiarism, Both, Neither, or Forward Thinking?” Patterns 4 (3): 100694. https://doi.org/10.1016/j.patter.2023.100694.
Anderson, Benedict. 2006. Imagined Communities: Reflections on the Origin and Spread of Nationalism. New York: Verso.
Bagloee, Saeed Asadi, Tavana, Madjid, Asadi, Mohsen, and Oliver, Tracey. 2016. “Autonomous Vehicles: Challenges, Opportunities, and Future Implications for Transportation Policies.” Journal of Modern Transportation 24 (4): 284–303. https://doi.org/10.1007/s40534-016-0117-3.
Ballotpedia. 2021. “Energy Infrastructure Preemption Conflicts Between State and Local Governments.” Ballotpedia. Accessed 20 April 2023, https://ballotpedia.org/Energy_infrastructure_preemption_conflicts_between_state_and_local_governments.
Baumgartner, Frank R., Gray, Virginia, and Lowery, David. 2009. “Federal Policy Activity and the Mobilization of State Lobbying Organizations.” Political Research Quarterly 62 (3): 552–67. https://doi.org/10.1177/1065912908322407.
Baybeck, Brady, Berry, William D., and Siegel, David A. 2011. “A Strategic Theory of Policy Diffusion via Intergovernmental Competition.” Journal of Politics 73 (1): 232–47.
Berry, Frances Stokes, and Berry, William D. 2018. “Innovation and Diffusion Models in Policy Research.” In Theories of the Policy Process, edited by Weible, Christopher M. and Sabatier, Paul, 253–97. New York: Westview Press.
Berry, William D., Fording, Richard C., and Hanson, Russell L. 2003. “Reassessing the ‘Race to the Bottom’ in State Welfare Policy.” Journal of Politics 65 (2): 327–49.
Biswal, Avijeet. 2023. “AI Applications: Top 18 Artificial Intelligence Applications in 2023.” Simplilearn, 4 April 2023. Accessed 16 June 2023, https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/artificial-intelligence-applications.
Bowling, Cynthia J., and Pickerill, J. Mitchell. 2013. “Fragmented Federalism: The State of American Federalism 2012–13.” Publius: The Journal of Federalism 43 (3): 315–46. https://doi.org/10.1093/publius/pjt022.
Brasher, Holly, and Lowery, David. 2006. “The Corporate Context of Lobbying Activity.” Business and Politics 8 (1): 1–23. https://doi.org/10.2202/1469-3569.1124.
Briffault, Richard. 2018. “The Challenge of the New Preemption.” Stanford Law Review 70: 1995–2027.
Brinkley, Julian, Daily, Shaundra B., and Gilbert, Juan E. 2019. “A Policy Proposal to Support Self-Driving Vehicle Accessibility.” Journal on Technology and Persons with Disabilities 7. http://hdl.handle.net/10211.3/210388.
Brock, Clare, and Mallinson, Daniel J. Forthcoming. “Measuring the Stasis: Punctuated Equilibrium Theory and Partisan Polarization.” Policy Studies Journal.
Brodsky, Jessica S. 2016. “Autonomous Vehicle Regulation: How an Uncertain Legal Landscape May Hit the Brakes on Self-Driving Cars.” Berkeley Technology Law Journal 31 (2): 851–78.
Brooks, Andrew. 2012. “Networks of Power and Corruption: The Trade of Japanese Used Cars to Mozambique.” The Geographical Journal 178 (1): 80–92. https://doi.org/10.1111/j.1475-4959.2011.00410.x.
Bughin, Jacques, Seong, Jeongmin, Manyika, James, Chui, Michael, and Joshi, Raoul. 2018. “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy.” Discussion paper, McKinsey Global Institute, 4 September. Accessed 9 July 2023, https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.
Burford, Natalie, Shipilov, Andrew V., and Furr, Nathan R. 2022. “How Ecosystem Structure Affects Firm Performance in Response to a Negative Shock to Interdependencies.” Strategic Management Journal 43 (1): 30–57. https://doi.org/10.1002/smj.3318.
Cachat-Rosset, Gaelle, and Klarsfeld, Alain. 2023. “Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines.” Applied Artificial Intelligence 37 (1): 2176618. https://doi.org/10.1080/08839514.2023.2176618.
California Office of Public Affairs. 2021. “DMV Authorizes Baidu to Test Driverless Vehicles in Sunnyvale.” 27 January. Accessed 9 July 2023, https://www.dmv.ca.gov/portal/news-and-media/dmv-authorizes-baidu-to-test-driverless-vehicles-in-sunnyvale/.
Calo, Ryan. 2018. “Artificial Intelligence Policy: A Primer and Roadmap.” University of Bologna Law Review 3 (2): 180–218.
Cihon, Peter, Maas, Matthijs M., and Kemp, Luke. 2020. “Fragmentation and the Future: Investigating Architectures for International AI Governance.” Global Policy 11 (5): 545–56. https://doi.org/10.1111/1758-5899.12890.
Conlan, Timothy J., and Posner, Paul L. 2016. “American Federalism in an Era of Partisan Polarization: The Intergovernmental Paradox of Obama's ‘New Nationalism.’” Publius: The Journal of Federalism 46 (3): 281–307. https://doi.org/10.1093/publius/pjw011.
Costa-Font, Joan. 2010. “Does Devolution Lead to Regional Inequalities in Welfare Activity?” Environment and Planning C: Government and Policy 28 (3): 435–49. https://doi.org/10.1068/c09156.
Critchfield, Thomas S., Reed, Derek D., and Jarmolowicz, David P. 2015. “Historically Low Productivity by the United States Congress: Snapshot of a Reinforcement-Contingency System in Transition.” The Psychological Record 65 (1): 161–76. https://doi.org/10.1007/s40732-014-0098-8.
de Laat, Paul B. 2021. “Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?” Philosophy & Technology 34 (4): 1135–93. https://doi.org/10.1007/s13347-021-00474-3.
Deichman, Johannes, Ebel, Eike, Heineke, Ruth Huess, Kellner, Martin, and Steiner, Fabian. 2023. “Autonomous Driving's Future: Convenient and Connected.” McKinsey & Company, 6 January. Accessed 9 July 2023, https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/autonomous-drivings-future-convenient-and-connected.
Dnistran, Iulian. 2023. “Elon Musk Overruled Tesla Engineers Who Said Removing Radar Would Be Problematic: Report.” InsideEVs, 22 March. Accessed 21 April 2023, https://insideevs.com/news/658439/elon-musk-overruled-tesla-autopilot-engineers-radar-removal/.
Dwivedi, Yogesh K., Hughes, Laurie, Ismagilova, Elvira, Aarts, Gert, Coombs, Crispin, Crick, Tom, Duan, Yanqing, et al. 2021. “Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy.” International Journal of Information Management 57: 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002.
Edgerton, Anna. 2023. “Tech Lobbyists Don't Want States to Let You Sue Over Privacy Violations.” Bloomberg, 20 March. Accessed 16 June 2023, https://www.bloomberg.com/news/articles/2023-03-20/big-tech-lobbyists-are-fighting-strict-data-privacy-laws-state-by-state#xj4y7vzkg.
Eesley, Charles. 2016. “Institutional Barriers to Growth: Entrepreneurship, Human Capital and Institutional Change.” Organization Science 27 (5): 1290–1306.
Ehsani, Johnathon P., Hellinger, Andrew, Stephens, Daniel K., Shin, Mi Ran, Michael, Jeffrey, Mccourt, Alexander, and Vernick, Jon. 2022. “State Laws for Autonomous Vehicle Safety, Equity, and Insurance.” Journal of Law, Medicine & Ethics 50 (3): 569–82. https://doi.org/10.1017/jme.2022.96.
Electronic Privacy Information Center. 2023. “State Artificial Intelligence Policy.” Accessed 17 April 2023, https://epic.org/state-artificial-intelligence-policy/.
Feindt, Peter H., and Flynn, Andrew. 2009. “Policy Stretching and Institutional Layering: British Food Policy between Security, Safety, Quality, Health and Climate Change.” British Politics 4 (3): 386–414. https://doi.org/10.1057/bp.2009.13.
Filgueiras, Fernando. 2022. “Artificial Intelligence Policy Regimes: Comparing Politics and Policy to National Strategies for Artificial Intelligence.” Global Perspectives 3 (1). https://doi.org/10.1525/gp.2022.32362.
Fontes, Robin, and Kamminga, Jorrit. 2023. “Ukraine a Living Lab for AI Warfare.” National Defense Magazine, 24 March. Accessed 20 April 2023, https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare.
Gainsborough, Juliet F. 2003. “To Devolve or Not to Devolve? Welfare Reform in the States.” Policy Studies Journal 31 (4): 603–23. https://doi.org/10.1111/1541-0072.00045.
Goelzhauser, Greg, and Konisky, David M. 2020. “The State of American Federalism 2019–2020: Polarized and Punitive Intergovernmental Relations.” Publius: The Journal of Federalism 50 (3): 311–43. https://doi.org/10.1093/publius/pjaa021.
Golembiewski, Robert T., and Wildavsky, Aaron B., eds. 1984. The Costs of Federalism: In Honor of James W. Fesler. New Brunswick, NJ: Transaction Books.
Gouvea, Raul, Linton, Jonathan D., Montoya, Manuel, and Walsh, Steven T. 2012. “Emerging Technologies and Ethics: A Race-to-the-Bottom or the Top?” Journal of Business Ethics 109 (4): 553–67.
Graham, Erin R., Shipan, Charles R., and Volden, Craig. 2013. “The Diffusion of Policy Diffusion Research in Political Science.” British Journal of Political Science 43 (3): 673–701.
Hacker, Jacob S. 2005. “Policy Drift: The Hidden Politics of US Welfare State Retrenchment.” In Beyond Continuity: Institutional Change in Advanced Political Economies, edited by Streek, Wolfgang and Thelen, Kathleen, 40–81. Oxford: Oxford University Press.
Hetherington, Marc J., and Rudolph, Thomas J. 2015. Why Washington Won't Work. Chicago: University of Chicago Press.
Hoornbeek, John A. 2005. “The Promises and Pitfalls of Devolution: Water Pollution Policies in the American States.” Publius: The Journal of Federalism 35 (1): 87–114. https://doi.org/10.1093/publius/pji005.
Hubbard, Sarah M. L. 2018. “Automated Vehicle Legislative Issues.” Transportation Research Record 2672 (7): 1–13.
Insurance Information Institute. 2022. “Background On: Self-Driving Cars and Insurance.” 17 August. Accessed 26 June 2023, https://www.iii.org/article/background-on-self-driving-cars-and-insurance.
Kalmenovitz, Joseph, Lowry, Michelle, and Volkova, Ekaterina. 2022. “Regulatory Fragmentation.” SSRN, 10 January. https://doi.org/10.2139/ssrn.3802888.
Karch, Andrew. 2007. Democratic Laboratories: Policy Diffusion among the American States. Ann Arbor: University of Michigan Press.
Karch, Andrew. 2012. “Vertical Diffusion and the Policy-Making Process: The Politics of Embryonic Stem Cell Research.” Political Research Quarterly 65 (1): 48–61. https://doi.org/10.1177/1065912910385252.
Karch, Andrew. 2020. “Filling a Vacuum: Subnational Governance amid National Government Inaction.” State and Local Government Review 52 (4): 232–40. https://doi.org/10.1177/01603233X21999585.
Karch, Andrew, and Rose, Shanna. 2019. Responsive States: Federalism and American Public Policy. Cambridge: Cambridge University Press.
Karch, Andrew, and Rosenthal, Aaron. 2016. “Vertical Diffusion and the Shifting Politics of Electronic Commerce.” State Politics & Policy Quarterly 16 (1): 22–43. https://doi.org/10.1177/1532440015593811.
Kenyon, Daphne A., and Kincaid, John. 1991. Competition among State and Local Governments: Efficiency and Equity in American Federalism. Washington, DC: Urban Institute.
King, David. 2018. “Putting the Reins on Autonomous Vehicle Liability: Why Horse Accidents Are the Best Common Law Analogy.” North Carolina Journal of Law & Technology 19 (4): 127–59.
Klasa, Adrienne, and Agnew, Harriet. 2022. “Fund Managers Sound Alarm over Fragmenting Regulation.” Financial Times, 10 November. Accessed 20 April 2023, https://www.ft.com/content/f8580286-82e0-403e-9828-d5c063728d1f.
Konisky, David M., and Nolette, Paul. 2021. “The State of American Federalism, 2020–2021: Deepening Partisanship amid Tumultuous Times.” Publius: The Journal of Federalism 51 (3): 327–64. https://doi.org/10.1093/publius/pjab023.
Kordzadeh, Nima, and Ghasemaghaei, Maryam. 2022. “Algorithmic Bias: Review, Synthesis, and Future Research Directions.” European Journal of Information Systems 31 (3): 388–409. https://doi.org/10.1080/0960085X.2021.1927212.
Lowery, David, and Gray, Virginia. 2007. “Understanding Interest System Diversity: Health Interest Communities in the American States.” Business and Politics 9 (2): 1–38. https://doi.org/10.2202/1469-3569.1191.
Lowery, David, Gray, Virginia, and Cluverius, John. 2013. “Economic Change and the Supply of Interest Representation in the American States.” Business and Politics 15 (1): 33–61. https://doi.org/10.1515/bap-2012-0040.
Makse, Todd, and Volden, Craig. 2011. “The Role of Policy Attributes in the Diffusion of Innovations.” Journal of Politics 73 (1): 108–24.
Mallinson, Daniel J. 2016. “Building a Better Speed Trap: Measuring Policy Adoption Speed in the American States.” State Politics & Policy Quarterly 16 (1): 98–120. https://doi.org/10.1177/1532440015596088.
Mallinson, Daniel J. 2021a. “Growth and Gaps: A Meta-Review of Policy Diffusion Studies in the American States.” Policy & Politics 49 (3): 369–89. https://doi.org/10.1332/030557321X16119271286848.
Mallinson, Daniel J. 2021b. “Policy Innovation Adoption across the Diffusion Life Course.” Policy Studies Journal 49 (2): 335–58.
Mallinson, Daniel J., and Lee Hannah, A. 2024. Green Rush: The Rise of Medical Marijuana in the American States. New York: New York University Press.
Mallinson, Daniel J., and Lovell, Darrell. 2022. “Race to the Top and the Diffusion of State Education Intervention Policy in the American States.” Politics & Policy 50 (6): 1221–40. https://doi.org/10.1111/polp.12508.
Mallinson, Daniel J., and Shafi, Saahir. 2022. “Smart Home Technology: Challenges and Opportunities for Collaborative Governance and Policy Research.” Review of Policy Research 39 (3): 330–52. https://doi.org/10.1111/ropr.12470.
Manna, Paul, and Ryan, Laura L. 2011. “Competitive Grants and Educational Federalism: President Obama's Race to the Top Program in Theory and Practice.” Publius: The Journal of Federalism 41 (3): 522–46. https://doi.org/10.1093/publius/pjr021.
Marcus, Alfred A., and Cohen, Susan K. 2015. “Public Policies in a Regulated Entrepreneurial Setting.” Business and Politics 17 (2): 221–51. https://doi.org/10.1017/S1369525800001637.
McAslan, Devon, Gabriele, Max, and Miller, Thaddeus R. 2021. “Planning and Policy Directions for Autonomous Vehicles in Metropolitan Planning Organizations (MPOs) in the United States.” Journal of Urban Technology 28 (3–4): 175–201. https://doi.org/10.1080/10630732.2021.1944751.
McCubbins, Mathew D. 1985. “The Legislative Design of Regulatory Structure.” American Journal of Political Science 29 (4): 721–48. https://doi.org/10.2307/2111178.
Menon, Aravind, and Mallinson, Daniel J. 2022. “Policy Diffusion Speed: A Replication Study Using the State Policy Innovation and Diffusion Database.” Political Studies Review 20 (4): 702–16. https://doi.org/10.1177/14789299211052828.
Mettler, Suzanne. 1998. Dividing Citizens: Gender and Federalism in New Deal Public Policy. Ithaca, NY: Cornell University Press.
Michener, Jamila. 2019. “Policy Feedback in a Racialized Polity.” Policy Studies Journal 47 (2): 423–50. https://doi.org/10.1111/psj.12328.
Monostori, Laszlo. 2014. “Artificial Intelligence.” In CIRP Encyclopedia of Production Engineering, edited by Laperrière, Luc and Reinhart, Gunther, 47–50. Berlin: Springer.
Mooney, Christopher Z. 2021. The Study of US State Policy Diffusion: What Hath Walker Wrought? Cambridge: Cambridge University Press.
Morehouse, Sarah M., and Jewell, Malcolm E. 2004. “States as Laboratories: A Reprise.” Annual Review of Political Science 7 (1): 177–203. https://doi.org/10.1146/annurev.polisci.7.012003.104913.
National Highway Traffic Safety Administration. 2017. “Automated Driving Systems: A Vision for Safety 2.0.” Accessed 9 July 2023, https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf.
NCSL (National Conference of State Legislatures). 2022. “Legislation Related to Artificial Intelligence.” Accessed 1 April 2023, https://www.ncsl.org/technology-andcommunication/legislation-related-to-artificial-intelligence.
Nicholson-Crotty, Sean. 2009. “The Politics of Diffusion: Public Policy in the American States.” Journal of Politics 71 (1): 192–205.
Nicholson-Crotty, Sean, and Staley, Tucker. 2012. “Competitive Federalism and Race to the Top Application Decisions in the American States.” Educational Policy 26 (1): 160–84. https://doi.org/10.1177/0895904811428974.
Nikolaev, A. S., Maximova, T. G., Sakhno, I. E., Antipov, A. A., and Murashova, S. V. 2023. “Facial Recognition Technologies Patent Landscape.” In Software Engineering Application in Systems Design: Proceedings of 6th Computational Methods in Systems and Software, vol. 1, edited by Silhavy, Radek, Silhavy, Petr, and Prokopova, Zdenka, 568–83. Cham: Springer International.
OECD (Organisation for Economic Co-operation and Development). 2022. “OECD AI Principles Overview.” Accessed 9 July 2023, https://oecd.ai/en/ai-principles.
Olson, Parmy. 2022. “The Way to Police Big Tech Is Through US States.” Washington Post, 22 September. Accessed 21 April 2023, https://www.washingtonpost.com/business/the-way-to-police-big-tech-is-through-us-states/2022/09/22/aab2c0de-3a7f-11ed-b8af-0a04e5dc3db6_story.html.
Pickerill, J. Mitchell, and Bowling, Cynthia J. 2014. “Polarized Parties, Politics, and Policies: Fragmented Federalism in 2013–2014.” Publius: The Journal of Federalism 44 (3): 369–98. https://doi.org/10.1093/publius/pju026.
Pollicino, Oreste. 2023. “ChatGPT: Lessons Learned from Italy's Temporary Ban of the AI Chatbot.” The Conversation, 20 April. Accessed 26 June 2023, https://theconversation.com/chatgpt-lessons-learned-from-italys-temporary-ban-of-the-ai-chatbot-203206.
Power, Donal. 2016. “Not So Fast: Feds Defer to States for Self-Driving Laws.” ReadWrite, 14 June. Accessed 20 April 2023, https://readwrite.com/us-federal-government-wont-override-state-self-driving-car-laws-tl4/.
Pütz, Fabian, Murphy, Finbarr, and Mullins, Martin. 2019. “Driving to a Future without Accidents? Connected Automated Vehicles’ Impact on Accident Frequency and Motor Insurance Risk.” Environment Systems and Decisions 39 (4): 383–95. https://doi.org/10.1007/s10669-019-09739-x.
Rahman, K. Sabeel, and Thelen, Kathleen. 2019. “The Rise of the Platform Business Model and the Transformation of Twenty-First-Century Capitalism.” Politics & Society 47 (2): 177–204. https://doi.org/10.1177/0032329219838932.
Raymond, Mark, and DeNardis, Laura. 2015. “Multistakeholderism: Anatomy of an Inchoate Global Institution.” International Theory 7 (3): 572–616. https://doi.org/10.1017/S1752971915000081.
Rayner, Jeremy, and Howlett, Michael. 2009. “Introduction: Understanding Integrated Policy Strategies and Their Evolution.” Policy and Society 28 (2): 99–109. https://doi.org/10.1016/j.polsoc.2009.05.001.
Robles, Pedro, and Mallinson, Daniel J. 2023. “Catching Up with AI: Pushing Toward a Cohesive Governance Framework.” Politics & Policy 51 (3): 355–72.
Robles, Pedro, and Mallinson, Daniel J. Forthcoming. “Advancing AI Governance with a Unified Theoretical Framework: A Systematic Review.”
Rom, Mark Carl. 2006. “Taking the Brandeis Metaphor Seriously: Policy Experimentation within a Federal System.” In Promoting the General Welfare: New Perspectives on Government Performance, edited by Gerber, Alan S. and Patashnik, Eric M., 256–81. Washington, DC: Brookings Institution Press.
Rose, Richard. 1991. “What is Lesson-Drawing?” Journal of Public Policy 11 (1): 3–30. https://doi.org/10.1017/S0143814X00004918.
Rose, Shanna, and Bowling, Cynthia J. 2015. “The State of American Federalism 2014–15: Pathways to Policy in an Era of Party Polarization.” Publius: The Journal of Federalism 45 (3): 351–79. https://doi.org/10.1093/publius/pjv028.
Scott, Tyler A., and Thomas, Craig W. 2017. “Unpacking the Collaborative Toolbox: Why and When Do Public Managers Choose Collaborative Governance Strategies?” Policy Studies Journal 45 (1): 191–214. https://doi.org/10.1111/psj.12162.
Sella-Villa, David, and Hodgson, Michael. 2023. “Privacy in the Age of Active Sensors.” SSRN, 2 February. https://ssrn.com/abstract=4346211.
Shanahan, Elizabeth A., Jones, Michael D., and McBeth, Mark K. 2011. “Policy Narratives and Policy Processes.” Policy Studies Journal 39 (3): 535–61. https://doi.org/10.1111/j.1541-0072.2011.00420.x.
Sheely, Amanda. 2012. “Devolution and Welfare Reform: Re-evaluating ‘Success.’” Social Work 57 (4): 321–31. https://doi.org/10.1093/sw/sws022.
Siddiki, Saba, and Frantz, Christopher. 2022. “The Institutional Grammar in Policy Process Research.” Policy Studies Journal 50 (2): 299–314. https://doi.org/10.1111/psj.12466.
Sidorsky, Kaitlin, and Schiller, Wendy J. 2023. Inequality across State Lines: How Policymakers Have Failed Domestic Violence Victims in the United States. Cambridge: Cambridge University Press.
Smith, Maxwell J., Axler, Renata, Bean, Sally, Rudzicz, Frank, and Shaw, James. 2020. “Four Equity Considerations for the Use of Artificial Intelligence in Public Health.” Bulletin of the World Health Organization 98 (4): 290–92.
Stivers, Camilla, Pandey, Sanjay K., DeHart-Davis, Leisha, Hall, Jeremy L., Newcomer, Kathryn, Portillo, Shannon, Sabharwal, Meghna, Strader, Eiko, and Wright, James II. 2023. “Beyond Social Equity: Talking Social Justice in Public Administration.” Public Administration Review 83 (2): 229–40. https://doi.org/10.1111/puar.13620.
Stone, Diane. 2012. “Transfer and Translation of Policy.” Policy Studies 33 (6): 483–99. https://doi.org/10.1080/01442872.2012.695933.
Tadjdeh, Yasmin. 2021. “How AI Could Go Disastrously Wrong.” National Defense Magazine, 8 September. Accessed 20 April 2023, https://www.nationaldefensemagazine.org/articles/2021/9/8/how-ai-could-go-disastrously-wrong.
Tesla. 2023. “Tesla Vision Update: Replacing Ultrasonic Sensors with Tesla Vision.” Accessed 21 April 2023, https://www.tesla.com/en_eu/support/transitioning-tesla-vision.
Thelen, Kathleen. 2003. “How Institutions Evolve: Insights from Comparative Historical Analysis.” In Comparative Historical Analysis in the Social Sciences, edited by Mahoney, James and Rueschemeyer, Dietrich, 208–40. Cambridge: Cambridge University Press.
Tiebout, Charles M. 1956. “A Pure Theory of Local Expenditures.” Journal of Political Economy 64 (5): 416–24.
Van Dam, Andrew. 2019. “Algorithms Were Supposed to Make Virginia Judges Fairer. What Happened Was Far More Complicated.” Washington Post, 19 November. Accessed 26 June 2023, https://www.washingtonpost.com/business/2019/11/19/algorithms-were-supposed-make-virginia-judges-more-fair-what-actually-happened-was-far-more-complicated/.
Vince, Joanna. 2015. “Integrated Policy Approaches and Policy Failure: The Case of Australia's Oceans Policy.” Policy Sciences 48 (2): 159–80. https://doi.org/10.1007/s11077-015-9215-z.
Volden, Craig. 2002. “The Politics of Competitive Federalism: A Race to the Bottom in Welfare Benefits?” American Journal of Political Science 46 (2): 352–63. https://doi.org/10.2307/3088381.
Walker, Jack L. 1969. “The Diffusion of Innovations among the American States.” American Political Science Review 63 (3): 880–99. https://doi.org/10.2307/1954434.
Weiser, Benjamin. 2023. “Here's What Happens When Your Lawyer Uses ChatGPT.” New York Times, 27 May. Accessed 26 June 2023, https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.
Weissert, Carol S., and Scheller, Daniel. 2008. “Learning from the States? Federalism and National Health Policy.” Public Administration Review 68 (s1): S162–74. https://doi.org/10.1111/j.1540-6210.2008.00986.x.
Welch, Susan, and Thompson, Kay. 1980. “The Impact of Federal Incentives on State Policy Innovation.” American Journal of Political Science 24 (4): 715–29. https://doi.org/10.2307/2110955.
West, Darrell M., and Allen, John R. 2020. Turning Point: Policymaking in the Era of Artificial Intelligence. Washington, DC: Brookings Institution Press.
Williams, Betsy Anne, Brooks, Catherine F., and Shmargad, Yotam. 2018. “How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications.” Journal of Information Policy 8: 78–115.
Wirtz, Bernd W., and Müller, Wilhelm M. 2019. “An Integrated Artificial Intelligence Framework for Public Management.” Public Management Review 21 (7): 1076–1100. https://doi.org/10.1080/14719037.2018.1549268.
Table 1. Autonomous vehicle policy components by state.