
Product Managers Ensure Artificial Intelligence Is Ethical

  • 4 min read

With so much hype and fear around AI stealing jobs and transforming lives, how can we be sure that we’re creating tech for good?

Resident Senior Lecturer in Product Management for Emerging Technology, Kavita Kapoor, shares her insights about the responsibility Product Managers have in making sure that prejudice, bias and homophobia don’t creep into the products they build.

[Image: Kavita Kapoor, Product Management lecturer at CODE]

AI Is Everywhere and Everyone Is Scared

I was heading on vacation when a US immigration officer asked about my job and engaged me in a deep and thoughtful conversation about ChatGPT.

Like so many people, this trained officer was worried about the future of his job and how his children will adapt. His worry is understandable when jobs like data processing, accounting, customer service, testing and marketing might all be automated by AI (Forbes 2022).

In the first class of my Emerging Technology lecture series at CODE, I show how this fear of rampant technology is not new. Through the story of the Luddites, a secret society of workers in the Industrial Revolution who destroyed machines, we learn that it “is not what technology does that matters, but who it does it for and who it does it to.” (Doctorow 2021)

Product Managers Have to Know What AI is For

I agree with the experts: we will still need Product Managers. The rise of AI will transform our responsibilities, but it is unlikely to put us out of work (Afshar 2018).

At CODE, we train Product Managers responsible for digital product development who are accountable for innovation planning, product experience, revenue growth and compliance. Look at these roles on LinkedIn and you will see that Product Managers need excellent analytical, organizational and communication skills to work with a team and interpret large amounts of data. A Digital Product Manager oversees the entire lifecycle of digital products, from conception to launch and beyond.


In my experience, it doesn’t matter if you are helping airplane manufacturers, television companies, retailers or even the Summer Olympics; it is always exhilarating to shape, nurture and deliver new products, even more so if you demonstrate the power of new technology like AI. At the same time, we need to use AI responsibly.

What Responsible Product Management Looks Like

Having created a lot of different digital products before coming to CODE, and having many stories of the unintended consequences of my work, I am really keen to ensure that the new generation of Product Managers doesn’t make the same mistakes.

I am particularly passionate about explaining how the data being ingested into these AI products is actually a form of power that helps organizations control and shape our experiences. 

Let’s consider an AI system that replaces me here at CODE. It could create lectures or mark exams. If those exams are based on essays, then the system has to be trained on essays written by real humans. How the Product Manager chooses the essays the AI ingests can determine whether future students pass or fail.

If, for example, only native English speakers’ essays are fed into the system, then students with English as a second language might be penalized. That would be deeply unfair, especially here at CODE, where we are extremely international.
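To make this concrete, here is a minimal, purely illustrative sketch in Python of the kind of check a Product Manager could ask for before essays are fed into a grading model: how are the writers’ language backgrounds actually distributed in the training set? The data, the field name writer_background and the 30% threshold are all made up for illustration, not part of any real system at CODE.

```python
# Hypothetical audit of an essay training set: does the corpus represent
# the students who will actually be graded?
from collections import Counter

# Each record stands in for one training essay; "writer_background" is a
# made-up metadata field used only for this sketch.
training_essays = [
    {"id": 1, "writer_background": "native English"},
    {"id": 2, "writer_background": "native English"},
    {"id": 3, "writer_background": "English as a second language"},
]

counts = Counter(e["writer_background"] for e in training_essays)
total = sum(counts.values())

for background, n in counts.items():
    share = n / total
    print(f"{background}: {n} essays ({share:.0%})")
    # Flag groups that look under-represented; the cutoff is illustrative,
    # not a standard, and would depend on the real student population.
    if share < 0.30:
        print(f"  WARNING: '{background}' may be under-represented in the training data")
```

A check like this does not make the product fair on its own, but it forces the question of whose writing the system is learning from before anyone is graded by it.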

Data Feminism Aims to Create Genuinely Good Products


There are so many negative examples of badly trained AI, which is why I introduce our students to the concept of Data Feminism (D’Ignazio & Klein 2020). Data Feminism positively challenges the status quo with the aim of creating great products that are genuinely good.

The Data Feminism framework combines data science, ethics and intersectional feminism to uncover how standard practices in data science serve to reinforce existing inequalities in products across the world.

The book Data Feminism sets out seven principles: examine power, challenge power, elevate emotion and embodiment, rethink binaries and hierarchies, embrace pluralism, consider context, and make labor visible.

Through our students’ work on their own startups, we create case studies to explore these concepts and also discuss the regulations that will govern Product Management. 

Alongside all this, we use a range of international case studies based on well known companies that show how the data in our products (if used unethically) can do a lot of harm.

How Can Product Managers Save Us From Homophobic Artificial Intelligence?


In my lecture “Emerging Technologies: Ensure Your AI Product Doesn’t Become Homophobic”, we use Data Feminism to take a deep dive into equality for the LGBTQIA+ community.

By unpicking a flawed Stanford AI research project that claims to ‘identify gay faces’ (BBC 2017), we imagine how our AI-enabled products might be implemented in regions where people are killed for being part of the LGBTQIA+ community.

Even in an LGBTQIA-friendly place like Berlin, we know from supporting our students with their job interviews that AI is being used for recruitment. So we look at the biases that may creep in, including the London School of Economics study which shows there is a ‘gay jobs’ stereotype (LSE 2016).

Through our discussion, my students have made better choices about sourcing their product data and which companies they partner with in order to be more ethical. Their success in producing ethical products makes me proud.

You Can Become A Product Management Superhero


Training in Product Management in the era of AI will future-proof any career. Here at CODE University of Applied Sciences, we provide ethical and practical experiences in lean processes, marketing analytics, design and road mapping, adaptive systems, stakeholder management, and much more.

If you’re interested in how Product Managers can utilize and work alongside AI to create a better world, “you just need to give it a go”, and that is what I said to the US immigration officer.

He agreed that he would give ChatGPT a go, understand the implications and help his children.

And I headed on holiday. 
