New York Tech Media

Navigating ethics in AI today to avoid regrets tomorrow

by New York Tech Editorial Team
October 26, 2021
in Cybersecurity

As artificial intelligence (AI) programs become more powerful and more common, organizations that use them are feeling pressure to implement ethical practices in the development of AI software. The question is whether ethical AI will become a real priority, or whether organizations will come to view these important practices as another barrier standing in the way of fast development and deployment.

A cautionary tale is the EU General Data Protection Regulation (GDPR). Enacted with good intentions and hailed as a major step toward better, more consistent privacy protections, GDPR soon became something of an albatross for organizations trying to adhere to it. GDPR and the privacy regulations that followed were often seen as just adding more work that kept teams from focusing on the projects that really mattered. Organizations that attempt to solve for each new regulation in a silo end up adding significant overhead and ceding ground to competitors in agility and cost effectiveness.

Could an emphasis on ethics in AI go the same route? Or should organizations realize the risks—as well as their responsibilities—in putting powerful AI applications into use without addressing ethical concerns? Or is there another way to deal with yet another area of quality without the excessive burden?

AI bias is human bias

AI programs are undoubtedly smart, but they are still programs; they are only as smart as the thought, and the programming, put into them. Their ability to process information and draw conclusions on their own adds layers of programming that aren't necessary in more traditional software, where accounting for the obvious factors is relatively simple.

When an insurance company determines the cost of a yearly policy for a driver, for example, it typically takes data like gender and ethnicity out of the equation before producing a quote. That's easy. But with AI, it gets complicated. You don't micro-control AI. You give it all the information, and the AI decides what to do with it. AI starts out with no understanding of the impact of factors such as race, so if programmers haven't limited how the AI can use the data, you can wind up with racial data driving decisions, thus creating AI bias.
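
As a rough illustration of the kind of guardrail programmers can put in place, here is a minimal sketch in Python, assuming the applicant data arrives as a pandas DataFrame. The column names and protected-attribute list are hypothetical, not taken from any real insurer's system:

```python
import pandas as pd

# Hypothetical list of protected attributes; a real policy would be broader
# and would also cover proxies for these fields.
PROTECTED_ATTRIBUTES = ["gender", "ethnicity", "race"]

def strip_protected_features(df: pd.DataFrame) -> pd.DataFrame:
    """Remove protected attributes before the data reaches the model,
    so the AI never has the chance to weight them."""
    present = [col for col in PROTECTED_ATTRIBUTES if col in df.columns]
    return df.drop(columns=present)

# Illustrative applicant data only.
applicants = pd.DataFrame({
    "age": [25, 40],
    "years_licensed": [7, 22],
    "prior_claims": [1, 0],
    "gender": ["F", "M"],        # protected: excluded from the model
    "ethnicity": ["A", "B"],     # protected: excluded from the model
})

model_inputs = strip_protected_features(applicants)
print(model_inputs.columns.tolist())  # ['age', 'years_licensed', 'prior_claims']
```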

There are many examples of how bias creeps into AI programs, often because of incomplete data. One of the most infamous involved the Correctional Offender Management Profiling for Alternative Sanctions, known as COMPAS, an algorithm used in some U.S. state court systems to generate sentencing recommendations. COMPAS used a regression model to predict whether someone convicted of a crime would become a repeat offender. Based on the data sets fed into the system, the model produced roughly twice as many false positives for recidivism for Black offenders as it did for white offenders.
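
A disparity like that can be surfaced with a simple per-group false positive check. The sketch below uses made-up labels and predictions purely for illustration, not the COMPAS data itself:

```python
from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, groups):
    """FPR per group: share of people who did not reoffend (label 0)
    but were still flagged as high risk (prediction 1)."""
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:                 # did not reoffend
            negatives[group] += 1
            if pred == 1:              # but was flagged as likely to
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives if negatives[g]}

# Toy data for illustration only.
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

print(false_positive_rate_by_group(y_true, y_pred, groups))
# -> roughly {'A': 0.33, 'B': 0.67}: group B's non-reoffenders are falsely
#    flagged twice as often as group A's.
```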

In another example, a health care risk-prediction algorithm used on more than 200 million U.S. patients to determine which ones needed advanced care was found to be preferential toward white patients. Race wasn’t a factor in the algorithm, but health care cost history was, and it tended to be lower for Black patients with the same conditions.
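
One way to catch that kind of proxy before a model ships is to check candidate features for correlation with protected attributes. A minimal sketch with toy, hypothetical data:

```python
import pandas as pd

# Toy data: "annual_cost" acts as a proxy for the protected attribute,
# the way cost history stood in for race in the risk-prediction algorithm.
df = pd.DataFrame({
    "protected": [0, 0, 0, 0, 1, 1, 1, 1],                    # hypothetical group label
    "annual_cost": [9.1, 8.7, 9.5, 8.9, 4.2, 5.0, 4.6, 5.3],  # spending history
    "num_conditions": [3, 2, 3, 2, 3, 2, 3, 2],               # similar health status
})

# Flag any candidate feature that is strongly correlated with group membership.
PROXY_THRESHOLD = 0.6
for feature in ["annual_cost", "num_conditions"]:
    corr = df[feature].corr(df["protected"])
    if abs(corr) > PROXY_THRESHOLD:
        print(f"warning: {feature} may be a proxy for the protected attribute (corr={corr:.2f})")
```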

Compounding the problem is that AI programs aren't good at explaining how they reached a conclusion. Whether an AI program is detecting cancer or simply recommending a restaurant, its “thought” processes are inscrutable. And that adds to the burden of programming ethics in up front.

Ethics and privacy together

Continued improvements in AI have potentially far-reaching consequences. The Department of Defense, for one, has launched a slew of AI-based initiatives and centers of excellence focused on national security. Seventy-six percent of business enterprises are prioritizing AI and machine learning in their budgeting plans, according to a recent survey.

Alongside the ethical concerns of AI’s role in decision-making is the inescapable issue of privacy. Should an AI scanning social media be able to contact authorities if it detects a pattern suggesting suicide risk? Apple, as an example, is considering a plan to scan users’ iPhone data for signs of child abuse. Given the ethical and potential legal implications, it makes sense for organizations to fold privacy and ethics into the same security process rather than treating them separately.

As these and other programs move forward, new guidelines on ethics in AI are inevitable. This will create even more work for teams trying to get new products or capabilities into production, but it also raises issues that can’t be ignored.

Successful AI ethics policies will likely depend on how well they are integrated with existing programs. Organizations’ experience with GDPR offers a good example: where it was once seen primarily as a burden, organizations that integrated it into their security processes gained considerable maturity by treating privacy and security as a single discipline.

Consider future regrets

Ultimately, it comes down to programmers baking in guidelines and rules on how to treat various types of data differently, and on making sure that data segregation is not happening. Integrating these guidelines into overall operations and software development will depend on an organization’s leaders making ethics a priority.

Enterprises should address ethics and security together, leveraging the systems and tools they already use for security to manage ethics as well. This will ensure effective management of the software development lifecycle. I would go so far as to say that ethics should be considered an essential part of the threat modeling process.

The question organizations should ask themselves is: Five years down the road, looking back at how you handled the question of ethics in AI, what could be your regrets?

Considering how the impact of other game-changing technologies (e.g., Facebook) was overlooked until legal issues arose, the biggest regret may well be not taking AI ethics seriously and not acting until it becomes a pressing priority.

People tend to address the loudest problem of the moment; the squeaky wheel gets the most attention. But that’s not the most effective way of handling things. The ethical implications of AI need to be confronted now, in tandem with security.
