Oct 30, 2024 - 16:00
Election 2024: How will the candidates regulate AI?

The US presidential election is in its final stretch. Before election day on November 5, Engadget is looking at where the candidates, Kamala Harris and Donald Trump, stand on the most consequential tech issues of our day.

While it might not garner the headlines that immigration, abortion or inflation do, AI is quietly one of the more consequential issues this election season. What regulations are put in place and how forcefully those rules are enforced will have wide-ranging impacts on consumer privacy, intellectual property, the media industry and national security.

Normally, politicians lack clear or coherent policies on emerging technologies. But somewhat shockingly, both former President Donald Trump and Vice President Kamala Harris have at least some track record handling artificial intelligence. VP Harris, in particular, has been very hands-on in shaping the current administration’s approach. And Donald Trump was the first president to sign an executive order regarding AI.

That being said, neither has made AI a central component of their campaign, and we’re making some educated guesses here about how either would approach it once in the White House.

Kamala Harris

With Harris’ considerable involvement in the Biden administration’s AI efforts, it’s safe to assume she would move forward with many of those policies. While the White House started laying the groundwork for its AI initiatives in early 2021, it wasn’t until late 2023 that they kicked into high gear, and Harris has often been the public face of those efforts, including holding numerous press calls on the issue and appearing at the Global Summit on AI Safety in London. She has used these venues to draw attention to the potential pitfalls of AI, both large and small, ranging from “cyberattacks at a scale beyond anything we have seen before” to seniors being “kicked off [their] healthcare plan because of a faulty AI algorithm.”

October 2023 saw the issuance of an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This order noted the potential for AI to solve broad societal issues as well as its ability to “exacerbate societal harms, such as fraud, discrimination, bias and disinformation; displace and disempower workers; stifle competition and pose risks to national security.” It laid out eight guiding principles focused on creating standardized evaluations for AI systems, protecting workers and consumer privacy, and combating inherent bias.

It also called for agencies to name a chief AI officer (CAIO) and directed the federal government to develop policies and strategies for using and regulating AI. This included developing technologies for identifying and labeling AI-generated content and building guardrails to prevent the creation of images depicting sexual abuse and deepfake pornography.

Harris helped secure commitments from Apple, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, Stability and OpenAI to work towards the administration’s goals. She also worked to obtain endorsements from 31 nations for a declaration regarding the responsible creation and use of military AI. At this stage, the latter is merely a commitment to work together to establish rules and guidelines. But there are many absences on that list, most notably Russia, China and Israel.

Because the technology is so new, however, there are still a lot of questions about the specifics of how a Harris administration would handle AI. And without an act of Congress, the White House would be limited in how it could regulate the industry or punish those that run afoul of its policies.

On the campaign trail, Harris hasn’t said much new about the issue, outside of a brief mention at a Wall Street fundraiser, during which she said, “We will encourage innovative technologies, like AI and digital assets, while protecting our consumers and investors.” Harris does have strong ties to Silicon Valley, so it remains to be seen just how much she would try to rein in the industry. But as of now, most of her statements have focused on protecting consumers and workers.

Donald Trump

Donald Trump holds the distinction of being the first president to sign an executive order regarding AI, but his actual public statements on the matter have been limited. In February 2019, he established the American AI Initiative, which created the first national AI research institutes, called for doubling the funding of AI research and set forth broad regulatory guidance. It also called for the creation of the National Artificial Intelligence Initiative Office, which would serve as a central hub for coordinating research and policy across the government.

Unsurprisingly, the executive order signed by former President Trump and the policies set forth by his allies have focused more on encouraging private sector growth and limited government oversight. The official Republican Party platform adopted at the RNC in July called for repealing Biden’s October 2023 executive order, claiming it “hinders AI Innovation and imposes Radical Leftwing ideas on the development of this technology.” It goes on to call for the development of AI “rooted in Free Speech and Human Flourishing.”

Unfortunately, the RNC platform and Trump don’t get much more specific than that. So we’ll have to look at what the former president’s allies at the America First Policy Institute and Heritage Foundation have put forth to get a better idea of how a second Trump presidency might handle AI.

America First began drafting a document earlier this year that called for launching Manhattan Projects for military AI and for reducing regulations. (Currently, there are limited regulations in place regarding AI, as the government is largely in the information-gathering stage of policy development. Congress has yet to pass any meaningful AI legislation.)

It also called for the creation of industry-led agencies tasked with evaluating and securing American artificial intelligence technologies. This is in contrast with the Biden administration’s executive order, which put responsibility for those efforts firmly in the hands of the federal government.

The Heritage Foundation’s Project 2025 (PDF) gets into more specifics, though it is worth noting Trump has tried to distance himself somewhat from that document. Much of the discourse around AI in the 922-page tome is dedicated to China: countering its technological advancements, limiting its access to American technology and preventing it from backing joint research projects with American interests, especially on college campuses. It calls for increasing the use of AI and machine learning in intelligence gathering and analysis, while simultaneously calling for a heavier reliance on the private sector to develop and manage the technology.

The document also spends significant time discussing AI’s potential to “reduce waste, fraud and abuse,” particularly with regard to Medicare and Medicaid. However, it makes almost no mention of protecting consumer privacy, ensuring the accuracy and fairness of algorithms, or identifying abusive or misleading uses of AI, beyond combating Chinese propaganda.

Predictable broad strokes

While both candidates’ platforms lack specifics regarding the regulation of artificial intelligence, they do lay out two clearly different approaches. Kamala Harris has made consumer protections and guardrails against abuse a cornerstone of her AI policy proposals; Donald Trump has predictably focused on reducing regulation. Neither has suggested they would try to put the proverbial AI genie back in the bottle, not that such a thing would be feasible.

The big question marks are just how much of the America First Policy Institute or Project 2025 proposals a Trump administration would adopt. His own official platform mirrors many policy positions of Project 2025. While it may not reflect any of its AI proposals specifically, there’s little reason to believe his approach would differ dramatically on this issue.

This article originally appeared on Engadget at https://www.engadget.com/ai/election-2024-how-will-the-candidates-regulate-ai-133045610.html?src=rss
