AI doesn’t have to be inhumane

Artificial intelligence (AI) doomsayers warn us of mass extinction events, AI detonating nuclear weapons, and supervillains plotting cataclysmic disasters. But life is not a movie. As AI experts, our worst-case scenario is a technocentric world in which the blind pursuit of AI growth and optimization outweighs the imperative for human prosperity.

What does such a world look like? A technocentric world is highly optimized, so it has the appearance of productivity. People are online, on a screen, and generally “on” all the time. We put on headsets, earphones, goggles, and microphones, immersing ourselves so deeply that it’s as if we were hiding from something. Meanwhile, a polite chirping constantly corrects us or pushes us to our next task.

Yet we have no idea what we’re going to do. We live a life of disappearing hours, mindlessly consuming media that make us feel extreme but empty emotions. We’re constantly watched by systems that map our every move, feed it into an algorithm, and determine whether we’re driving safely, getting enough steps, deserving of a job, cheating on an exam, or simply somewhere we shouldn’t be. We are so overwhelmed that we feel nothing.

A technocentric world is based on the premise that humanity is imperfect and technology will save us.

A world dominated by technology that erases humanity is not far from our future. A surgeon general’s advisory warns us that social media poses a “significant risk of harm” to young people; yet 54% of teens say it would be difficult to stop using it.

Who is to blame? Bad decisions by children? Bad parenting? Or revenue-driven engagement optimization?

But social media companies don’t deserve all the finger-pointing. Algorithmic management, the use of surveillance technologies to track and monitor employees, creates situations where workers urinate into bottles because of strict time constraints (algorithms don’t need bathroom breaks). Similarly, algorithms are being used to wrongly fire hard-working Army veterans in the most soulless way possible: an automated email message. This basic lack of dignity in the workplace is an inhumane byproduct of over-optimization.

This wave of indifference isn’t limited to America. We trust that our AI-powered content will not be harmful because people in the Philippines, India, and across the African continent are paid less than $2 an hour to sanitize our experience. Content moderation, a practice common to all forms of AI-curated or AI-developed media, is known to cause PTSD in moderators. We distance ourselves from that human trauma behind bright screens.

Nor is this only a problem for low-wage workers. The first wave of layoffs attributed to AI automation hit college-educated workers, from designers to copywriters to programmers. This was predicted by OpenAI, the company that created ChatGPT. Yet all we seem able to do is wring our hands in despair.

We should be familiar with these issues; after all, these technologies simply amplify and consolidate the inequalities, prejudices and harms that already existed.

What are we doing? Why are we doing this? And most importantly, what do we do about it?

AI’s worst-case scenario isn’t about AI at all. It is about human beings who make active decisions to pursue technological growth at all costs. Both the language of AI doom and the language of AI utopia use the same sleight of hand when anthropomorphizing AI systems. This moral externalization is insidious; when we ask whether “AI will destroy/save us,” we dismiss the fact that humans create and deploy AI in the first place. Human-like interfaces and the allure of data-driven efficiencies lead us to believe that AI outputs are neutral and preordained. They are not.

Techno-exceptionalism tells us that the problems AI introduces are unprecedented and unique, and that only those who built it can tell us how it should be governed. This is simply incorrect. Most technologists are ill-equipped to deal with the ethical challenges their technology introduces. Good governance exists to empower, and we need a body that acts for the common good.

One way to head off our worst-case scenario is to invest in global governance: an independent body that works with governments, civil society, researchers, and businesses to identify and address the problems AI creates. A group like this could take on society’s greatest challenges and equip the world’s existing governance ecosystem with the wherewithal to steer the development of AI toward the public benefit.

A global governance entity should have a mission to optimize human prosperity. That doesn’t mean AI assistants or “forever” AI; it means investments in the intangibles of humanity. Humanity is not an inefficiency to be eliminated, but something to be carefully protected and nurtured. An investment in humanity is not about funneling billions more to the developers of these technologies and their investors: it is an investment in ensuring that society thrives in a way that respects democratic values and human rights for all.

A mission of human prosperity may seem vague, nebulous, and far-fetched, but isn’t that a fair match for AI companies’ equally far-fetched goal of artificial general intelligence? Our efforts to preserve humanity must be on par with the investment and ambition poured into artificial intelligence.

Rumman Chowdhury is the Responsible AI Fellow at Harvard University’s Berkman Klein Center for Internet and Society.

Sue Hendrickson is executive director of the Berkman Klein Center for Internet and Society at Harvard University.

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, transmitted, rewritten or redistributed.
