The power of technology is reshaping industries and revolutionizing the ways we live and work. However, along with the added productivity, convenience and power that technology brings comes a significant responsibility for industries, governments and citizens to ensure its ethical use.
While there’s justifiable excitement about the many new technologies entering the market—especially, although not limited to, artificial intelligence—it’s important to ensure they’re used wisely and for the benefit of all members of society. Below, 16 members of Forbes Technology Council share some of the current and potential ethical challenges concerning technology, as well as how all of us as a society can address them.
1. Protecting Private Information
With an exponential increase in the collection and storage of personal data, challenges have arisen regarding privacy. As we traverse the technological frontier, the issue that troubles me most is the difficulty of keeping private information confidential, given our growing dependence on digital infrastructure across almost all daily activities. Privacy is a privilege nowadays. – Mark Ruber, MTSolutions Group
2. The Rush To Deploy AI
With the advances in generative AI, companies cannot avoid the rush to capitalize on the acceleration of knowledge worker tasks. However, rushing to deploy AI creates minefields of new risks, including unintended biases and regulatory violations. The mass layoffs of in-house ethics teams at several large tech companies should serve as motivation for others to double down on their pursuit of responsible AI. – Usama Fayyad, Institute for Experiential AI, Northeastern University
3. The Proliferation Of Misinformation
One ethical crisis in technology is its ability to easily create “deep fakes” and misinformation. Technology makes it possible for the video you are watching to look accurate, even though it’s not. Or, the article you are reading may seem to be correct but is really riddled with misinformation. Now more than ever, relying on trusted sources that provide rigorously validated content is critical. – Michael Dennis, CAS, a division of the American Chemical Society
4. The Need For AI Guardrails
AI tools can be a great benefit to companies in terms of productivity and efficiency, but it’s critical that they’re supervised. Tech leaders must get ahead of ethical concerns related to data protection, security and intellectual property by introducing industrywide regulations, as well as company-level guardrails that will ensure that artificial intelligence is both safe and effective. – Marco Santos, GFT
5. The Lack Of Transparency Around Data Usage
We know that our data is being used by businesses, and we are happy when our data is used to improve service quality. However, a burgeoning issue today is the lack of transparency around how companies use personal data. Consumers need answers to the following questions: How are businesses using my data? Are they sharing it with other providers to deliver a better service? What data is being used, and where? The transparency crisis is looming. – Kiran Menon, Tydy
6. Ensuring AI Is Used Only For Good
Generative AI and large language models in particular are advancing rapidly and are becoming more and more powerful by the day. How do we ensure AI is used for good? How do we provide the necessary guardrails, privacy and security? And how do we minimize deception and, more importantly, provide transparency about generative AI? The first one concerns me the most—let’s hope regulations can come fast enough. – Lana Feng, Huma.AI, Inc.
7. Government Spyware And Zero-Day Exploit Markets
Government spyware and zero-day exploit markets present an ethical crisis. Software that’s designed to hide on your computer and steal information without your knowledge is malware, even if a “legitimate” company has sold it to a government organization. And not disclosing a zero-day vulnerability in a widely available consumer product, so the government can use it for espionage, puts us all at risk. Companies must do better. – Corey Nachreiner, WatchGuard Technologies Inc.
8. The Ease Of Access To AI
The impact artificial intelligence will have on society is similar to or greater than that of the discovery of nuclear energy, so robust ethical guidelines need to be put in place. And it’s much easier to access AI than nuclear energy, which makes AI more difficult to control. A lot of thought must go into developing ways to detect AI’s harms and correct them. We don’t currently have many effective measures for this. – Kazuhiro Gomi, NTT Research
9. Fabricated Studies Being Quoted By Generative AI
Generative AI is generating articles that quote studies by real organizations, but those studies are fabricated. So now we are being served statistics that are completely fabricated, yet attributed to analyst groups or the Big 5 consultants. The amount of disinformation being spread is alarming. We all need to be fact-checking when we use generative AI to help us create content. – Laureen Knudsen, Broadcom
10. The Lack Of Awareness About LLMs
Some entities are hiding or obscuring the fact that generative technology relies on large language models and are passing off the output as fact or truth. It is already hard to tell fact from fiction in 2023, but I suspect it is about to get a lot worse. – Elise Carmichael, Lakeside Software
11. Blockchain’s Vulnerability To Scams
I’ve seen no industry that’s more scam-prone than blockchain. Quick coin launches fuel both innovation and abuse, and major scams have occurred even on regulated exchanges such as FTX. Regulation isn’t the answer; cryptographic solutions like zero-knowledge proofs are. They can prove a smart contract’s function, validate an exchange’s claimed reserves and assure the accuracy of on-chain data. – Marlene Ronstedt, Play by Ear
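Full zero-knowledge proof systems are heavy machinery, but the simpler building block behind many proof-of-reserves schemes—a Merkle commitment that lets each customer verify their balance is included in an exchange's published total—can be sketched briefly. Everything below (the account IDs, the leaf encoding) is illustrative only, not any exchange's actual protocol:

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 as the tree's hash function.
    return hashlib.sha256(data).digest()

def leaf(account_id: str, balance: int) -> bytes:
    # Each leaf commits to one customer's balance (hypothetical encoding).
    return h(f"{account_id}:{balance}".encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root the exchange publishes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes a customer needs to check inclusion of their leaf.
    Each entry is (sibling_hash, our_node_is_left_child)."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sibling], i % 2 == 0))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf_hash: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    # Recompute the path to the root; a mismatch exposes a falsified balance.
    node = leaf_hash
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root
```

A customer who receives their proof path can confirm their balance was counted without seeing anyone else's; production schemes layer zero-knowledge range proofs on top so the summed liabilities stay private too.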
12. The Risk Of Biased Outcomes From AI
AI—especially the accelerated adoption of generative AI—comes with the ongoing risk of producing biased outcomes. AI models are trained on input data that reflects our societal biases, which can be amplified through machine learning. To develop responsible AI technologies, we have to understand what those biases are and make sure we can account for them accordingly. – Merav Yuravlivker, Data Society
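“Accounting for” bias starts with measuring it. One minimal sketch—far simpler than the metrics a real fairness audit would use—is to compare a model's accuracy across demographic groups and flag the gap; the group labels and record format here are hypothetical:

```python
from collections import defaultdict

def accuracy_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records: (group, y_true, y_pred) triples.
    Returns per-group accuracy so disparities become visible."""
    hits: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def max_accuracy_gap(records: list[tuple[str, int, int]]) -> float:
    # The spread between the best- and worst-served groups;
    # a large gap is one signal the training data needs attention.
    acc = accuracy_by_group(records)
    return max(acc.values()) - min(acc.values())
```

Richer audits compare false-positive and false-negative rates per group as well, since a model can have equal accuracy across groups while making very different kinds of errors for each.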
13. The Lack Of Consensus Over Appropriate Uses For Generative AI
Questions about ChatGPT’s ethical use are all over the news, as we’re not ready for it from an ethics standpoint. Is it okay to use ChatGPT to provide views for this discussion or to diagnose patients? We have no strategy for approaching AI. I hope big tech and government will come up with a code of ethics on how we can best use AI in our lives. It’ll take time, and now, it’s a question of personal ethics. – Nadya Knysh, a1qa
14. The Alienation Of Older Consumers
As technology advances, companies need to ensure they aren’t leaving less tech-savvy users behind. Shuttering brick-and-mortar locations in favor of ones in the metaverse may make good financial sense and appeal to younger demographics, but it could alienate older customers. – Patti Mikula, Hackworks Inc.
15. Falsified ‘Green Reports’
Technology enthusiasts enjoy the cybersecurity conversation, but that conversation falls short when environmental concerns are on the table. Sometimes, individuals will produce “green reports” in which the metrics and information are falsified to satisfy management. This can be as simple as removing printers to meet benchmarks—which still means the environment is at risk. This is unethical. Encouraging responsible ownership is key. – Dewayne Hart, SEMAIS
16. The Risk Of Autonomous, Unchecked AI Systems
If AI systems become too autonomous and operate without proper safeguards, they could make decisions that are harmful or contrary to human values, leading to unintended and uncontrollable consequences. Governments should work alongside businesses, researchers and experts to develop comprehensive governance frameworks for AI. – Fidelis Chibueze, Fixtops Technology