AI concerns continue as governments look for the right mix of regulations and protections

Boris SV/Getty Images

The technology is starting to rack up an impressive portfolio of success stories, but there could be dangers and downsides as well.

There is little doubt that the emerging science of artificial intelligence is continuing to advance and grow, both in capabilities and the number of things that AI is being tasked with doing. And this is happening at lightning speed, despite there being little to no regulation in place to govern its use.

In the case of AI, and especially new generative AI models like GPT-4, agencies and the private sector alike are likely pressing ahead, even without guardrails, because the potential benefits of game-changing AI appear to outweigh the associated risks. AI is also starting to rack up an impressive portfolio of success stories.

For example, NASA and the National Oceanic and Atmospheric Administration recently tasked AI with predicting potentially deadly solar storms, and the AI can now provide warnings up to 30 minutes before a storm strikes Earth. And in November, emergency managers from around the country will meet to discuss tasking AI with predicting storms and other natural disasters that originate right here on Earth, potentially buying more time for evacuations and preparations and possibly saving a lot of lives. Meanwhile, the military is pairing unmanned aerial vehicles and other drones with AI to improve situational awareness, or even to fight on the battlefields of tomorrow, keeping humans out of harm's way as much as possible. The list goes on and on.

But there could be dangers and downsides to AI as well, a fact that those who work with the technology are increasingly aware of. The results of a survey of more than 600 software developers from across the public and private sectors, many of them working on projects involving AI, were released this week by Bitwarden and Propeller Insights. A full 78% of the respondents said that the use of generative AI would make security more challenging. In fact, 38% said that AI would become the top threat to cybersecurity over the next five years, making it the most common answer. The second biggest threat predicted by the developer community, ransomware, was cited by just 19% of the participants.

Self-regulation reigns

Although no laws yet exist in the United States to regulate AI, a growing number of guidelines and frameworks offer direction on how to develop so-called ethical AI. One of the most detailed was recently unveiled by the Government Accountability Office. Called the AI Accountability Framework for Federal Agencies, it provides guidance for agencies that are building, selecting or implementing AI systems.

According to the GAO and the educational institutions that helped to draft the framework, the most responsible uses of AI in government should be centered around four complementary principles: governance, data, performance and monitoring. All four are covered in detail within the GAO framework.

Another framework that has gotten a lot of attention, although it carries no legal power, is the White House Office of Science and Technology Policy's AI Bill of Rights. The framework does not give specific advice but instead lays out general rules about how AI should be employed and how its interactions with people should be allowed or restricted. For example, it states that people should not face discrimination based on the decision of an algorithm or an AI. The framework also asserts that people should know when an AI is being used to generate a decision. So, if someone is being considered for a loan, the bank they are applying to should disclose whether a human or an AI will make the final decision. And if an AI is doing it, people should be able to opt out of that process and have their application reviewed by real people instead.
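To make those principles concrete, here is a minimal sketch in Python of what a disclosure-and-opt-out workflow for a loan decision could look like. It is purely illustrative; the class and function names are invented for this example and do not come from the AI Bill of Rights or any bank's actual system.

```python
# Illustrative sketch only: a hypothetical loan-review workflow showing the
# notice and human-alternative ideas in the AI Bill of Rights. All names here
# (LoanApplication, score_with_model, route_application) are assumptions.
from dataclasses import dataclass


@dataclass
class LoanApplication:
    applicant_id: str
    amount: float
    opted_out_of_ai: bool = False  # applicant asked for a human decision instead


@dataclass
class Decision:
    applicant_id: str
    approved: bool
    decided_by: str  # "ai_model" or "human_reviewer", disclosed to the applicant


def score_with_model(app: LoanApplication) -> bool:
    # Stand-in for a real credit model; approves smaller loans for the demo.
    return app.amount <= 10_000


def review_by_human(app: LoanApplication) -> bool:
    # Placeholder: in practice the file would go to a loan officer's queue.
    return app.amount <= 10_000


def route_application(app: LoanApplication) -> Decision:
    # Honor the opt-out: route the file to a person instead of the model.
    if app.opted_out_of_ai:
        return Decision(app.applicant_id, review_by_human(app), "human_reviewer")
    return Decision(app.applicant_id, score_with_model(app), "ai_model")


if __name__ == "__main__":
    applications = [
        LoanApplication("A-1", 8_000),
        LoanApplication("A-2", 8_000, opted_out_of_ai=True),
    ]
    for app in applications:
        d = route_application(app)
        # The disclosure the framework calls for: who, or what, made the call.
        print(f"{d.applicant_id}: approved={d.approved}, decided by {d.decided_by}")
```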

Even though the AI Bill of Rights is merely a guideline, there have been calls for the government to make it binding, at least as it applies to federal agencies. In Europe, that kind of legal action may soon happen. If it does, it will affect all entities working with AI within the European Union, not just government agencies. The proposed regulations were put forward as part of the Artificial Intelligence Act, which was first introduced in April 2021.

Unlike the more high-level guidance in the AI Bill of Rights, the Artificial Intelligence Act more carefully defines which kinds of AI activities would be allowed, which would be highly regulated, and which would be banned outright for carrying an unacceptable amount of risk. For example, activities that would be illegal under the AI Act include using AI to manipulate children in harmful ways, such as an AI-powered toy that encourages dangerous behavior. Anything that uses AI to score or classify people based on personal characteristics, socio-economic status or behavior would also be illegal.

High-risk activities, like AI use in education and training, law enforcement, assistance in legal proceedings, the management of critical infrastructure and other similar areas, would be allowed but heavily regulated. There is even an entire section of the AI Act that applies to generative AI, allowing the technology but requiring that AI-generated content be disclosed as such. Model owners would also need to disclose any copyrighted materials used in a model's creation and would be prevented from generating illegal content.
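As a rough illustration of that disclosure requirement, the short Python sketch below attaches a machine-readable label to generated content. The field names and the placeholder generate_reply function are assumptions made up for the example, not anything specified by the AI Act itself.

```python
# Illustrative sketch only: one hypothetical way a service might label content
# as AI-generated. The payload fields and generate_reply are assumptions.
import json
from datetime import datetime, timezone


def generate_reply(prompt: str) -> str:
    # Stand-in for a call to a generative model.
    return f"Draft response to: {prompt}"


def with_disclosure(prompt: str) -> str:
    # Wrap the generated text with an explicit, machine-readable disclosure.
    payload = {
        "content": generate_reply(prompt),
        "ai_generated": True,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was generated by an AI system.",
    }
    return json.dumps(payload, indent=2)


if __name__ == "__main__":
    print(with_disclosure("Summarize the new permit rules."))
```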

Finding a path forward

A highly regulated approach to AI development, like the European model, could help to keep people safe, but it could also hinder innovation in countries that adopt the new standard, which EU officials have said they want in place by the end of the year. That is why many industry leaders are urging Congress to take a lighter touch with AI regulations in the United States. They argue that the United States is currently the world's leader in AI innovation, and that strict regulations would severely hinder that position.

Plus, the emerging group of AI TRiSM (trust, risk and security management) tools is just now being deployed and could help companies self-regulate their AIs. TRiSM tools do that by examining the datasets used to train AI models for signs of bias, monitoring AI responses to ensure they comply with existing regulations or guidelines, and helping to train the AI to act appropriately.
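The following Python sketch shows, in toy form, the two kinds of checks described above: a scan of training data for an obvious imbalance and a filter that flags non-compliant model responses. It is not based on any actual TRiSM product; the sample records, the "region" attribute and the banned-phrase list are assumptions invented for the example.

```python
# Illustrative sketch only: two toy checks in the spirit of AI TRiSM tooling,
# not any vendor's API. The data and policy phrases below are made up.
from collections import Counter

TRAINING_RECORDS = [
    {"region": "north", "label": 1},
    {"region": "north", "label": 0},
    {"region": "north", "label": 1},
    {"region": "south", "label": 0},
]

BANNED_PHRASES = ("social security number", "internal use only")


def scan_for_imbalance(records, attribute, threshold=0.6):
    # Flag the dataset if one group dominates the chosen attribute.
    counts = Counter(r[attribute] for r in records)
    dominant, n = counts.most_common(1)[0]
    share = n / len(records)
    return {"dominant_group": dominant, "share": round(share, 2), "flagged": share > threshold}


def check_response(text):
    # Flag model output containing phrases a policy says must not appear.
    hits = [p for p in BANNED_PHRASES if p in text.lower()]
    return {"compliant": not hits, "violations": hits}


if __name__ == "__main__":
    print(scan_for_imbalance(TRAINING_RECORDS, "region"))
    print(check_response("The applicant's Social Security number is on file."))
```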

Whether it's a strict approach to AI development like the European model, a lighter set of guidelines like those currently used in the United States, or self-regulation by the companies that are building new AIs, it's clear that some form of regulation or guidance is probably needed. Even the developers working on AI projects acknowledge that the technology could prove dangerous under certain circumstances, especially as its capabilities continue to advance over the next few years. The question then becomes: which of those approaches is best? It may be a long time before that question can be definitively answered.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys
