AI and society
By Natasha Joshi (Associate Director, Rohini Nilekani Philanthropies)
The discourse around AI is filled with excitement and velocity. Horizons and predictions change every week as the tech evolves at an increasing rate. Almost no one in these conversations disagrees that AI has tremendous potential for improving the status quo. At the same time, concerns are raised, and debates rage over the true harms of an AI-enabled society.
On a recent visit to Mumbai, I interacted with community youth leaders and decided to ask them how they felt about AI, given that its deployment will affect their lives the most. A bit about these community youth leaders: they’re between 18–25 years of age, they exist on all social media platforms, are adept at making and distributing content, and are digitally “savvy” in every respect young people today can be. This is why it surprised me that twenty young boys and girls stared back blankly when I asked, “What do you think of ChatGPT and AI?”
After repeating this process informally with different groups — biodiversity and conservation organisations, mental health organisations, people working in performing arts, people managing supply-chain logistics, those working in food production, and others working in government — I realized that my perception of AI as a big, hot topic was shaped by my own interest in the subject and by the access to information that comes with the philanthropic universe I am a part of. In most cases, much as Dolly the sheep excited people for a moment and was quickly supplanted by more pressing daily issues, AI and its possibilities have engaged the interest of the average person only so much. After the brief flurry that ChatGPT’s release stirred up, it’s back to business as usual. People aren’t quite sure how AI is affecting their work and lives right now.
Even when we are incorporating AI tools into our workplaces, there is still a fair bit we don’t understand — both about how this technology works and about how it will evolve. Reliability is one concern for sure. Can you depend on AI-generated responses to be true, not just factually but also contextually and ethically? Can the machine behave morally? If yes, will it? And do users understand and adjust to the limits of the machine? At the Singularity University India conference last November, Kris Ostergaard asked the audience if they would like their human manager or team leader to be replaced by an AI-enabled bot. The room was split, with those in favor of the bot citing its neutrality as the upside. But we know AI isn’t free of bias.
The challenges with AI and bias are well established now — with real-world downstream consequences. This is unsurprising, as AI is trained on human inputs, and those inputs carry bias. Moreover, “traditional software is designed to operate on data that’s unambiguous. If you ask a computer to compute ‘2 + 3,’ there’s no ambiguity about what 2, +, or 3 means. But natural language is full of ambiguities that go beyond homonyms and polysemy,” so the scope for bias and error is higher[1]. Add to that the fact that many world languages are structurally different from English — different Indian languages have structures dissimilar to English and to one another! One workaround is to have AI train on AI-generated outputs alone, but this quickly leads to model collapse (here).
On bias, there is another angle. While technologists are keen to address the issue of bias in the model, social scientists are interested in how algorithms govern and what types of bias they might have. An essay by Henry Farrell and Marion Fourcade[2] explains that “both bureaucracy and computation enable an important form of social power: the power to classify. Bureaucracy deploys filing cabinets and memorandums to organize the world and make it ‘legible,’ …turning rich but ambiguous social relationships into thin but tractable information… Computational algorithms — especially machine learning algorithms — perform similar functions.” In other words, how data is tagged and classified influences the ultimate outputs of the algorithm.
This raises an important question about transparency and co-creation. AI as a technology can make systems more transparent, but AI as a tool wielded by humans can do the opposite, and do it more effectively. More than the technology, it’s the people driving it that matter, and presently a small number of Global North organizations are investing in the development and diffusion of products, spurring what Tristan Harris and Aza Raskin describe as a “race to the bottom of the brain stem”.
Now is a good time to ask how the development of use cases and the public deployment of AI can be done in a way that is more consultative, open, and bottom-up.
Some of this work has picked up pace. Philanthropic organisations are supporting research and development with the aim of improving the technical performance of AI models. Additionally, grants have been announced globally to examine the larger risks of AI. For example, ARC Evals (a project of the non-profit Alignment Research Center) is working on assessing whether cutting-edge AI systems could pose catastrophic risks to civilization. They are currently partnering with Anthropic and OpenAI to evaluate their AI systems. AI Village recently announced the largest-ever public Generative AI Red Team. According to Sven Cattell, the founder of AI Village, “Traditionally, companies have solved this problem with specialized red teams. However, this work has largely happened in private. The diverse issues with these models will not be resolved until more people know how to red team and assess them.” Many others — including the Collective Intelligence Project — are trying to facilitate the integration of democratic inputs (feedback from communities) into existing AI models.
In India, a team of social practitioners and technologists have developed Jugalbandi — a grant-funded free and open platform that combines the power of Large Language Models such as ChatGPT and Indian language translation models such as those under the Government of India’s Bhashini mission to power conversational AI solutions in any domain. On the harms side, we have self-organised groups like the Misinformation Combat Alliance, among others.
Yet, foundations can find more ways to engage diverse people in dialogue, test implementation in various contexts, and ensure strong feedback loops so that technology can serve the needs of the most disadvantaged. Can technologists help NGOs working with migrants and itinerant labor? How can the vast community of paralegal and legal aid organizations use AI to enable better justice delivery? Can foundations supporting work on women’s land rights help partner NGOs build better processes? The benefits of figuring this out will be doubly significant for foundations and NGOs active in the Global South, where the potential for creating and using AI tools is high but the availability of capital and opportunity is low.
We know this process of co-creation will not be easy. It will require incentives at many levels, along with some suspension of power and personal ambition. But the ensuing debate around AI (and its regulation) — limited as it may be in reach and inclusion — has already indicated what is truly at the heart of this conversation: an examination of what makes us human.
We (in collaboration with People+ai) learned this firsthand from a small meeting we curated, where a few foundations, researchers, and non-profits came together to discuss the implications of generative AI on their areas of work[3]. There was some clarity around productive uses of generative AI, but most of the meeting was dominated by questions: Does AI simply amplify the existing misalignments in society? How do we think more deeply about “productivity” and “efficiency”? Will AI create a class of elite humans served by technology that is itself served by an underclass of humans? Is AI a tool or a product? Which part is a product (owned by or benefiting a company), and which part is a public good?
Even harder are the questions around human aspirations. The reality is that AI — like most technology — will add something by taking something away. I realized this at a grassroots conference, where Dalit and Adivasi groups were talking about the need to recognize their local practices, oral traditions, and lived experiences as actual “knowledge” — something that has been historically denied by mainstream society and established academic institutions. The talk veered towards enduring barriers to formal education. At that point, I shared that today’s technology could level that playing field. Soon, with audio-to-text becoming more sophisticated, video taking over (especially in India) as a platform for the dissemination of ideas, and LLMs being built in Indian languages, disadvantaged groups will have new ways to supplant the old. At this, one of the attendees, Sapna, responded, “but why should we not enjoy old privilege? After fighting for it all these years. I don’t want open source this or that. I want a degree. I want a seat in a college, no online course. I want a (hard copy) book published with my name on it.” Sapna, whose name means “dream” in Hindi, had a dream. And it got me wondering: what technology will help her meet her goal?
How can we better animate AI with human intent, which is dynamic and influenced by emotions, ethics, cultural norms, and intuition? Typically, technologists aren’t versed in the social sciences, and social scientists don’t understand technology. But now might be the time to meet on that bridge and think together — and for capital to bring everyone along. AI will radically change the world, no doubt. But for that change to be positive, artificial intelligence might need to be steered by collective intelligence.
[1] For a wonderful explanation of exactly how LLMs predict and think, read this Understanding AI piece.
[2] Margaret Levi and Henry Farrell. Creating a New Moral Political Economy. Daedalus, 2023. <https://www.amacad.org/daedalus/creating-new-moral-political-economy>
[3] This meet-up was jointly hosted by Rohini Nilekani Philanthropies and People+ai. Attendees included: Aishani Rai & Vinay Narayan of Aapti Institute, Praveen Khanghta of The Convergence Foundation, Safiya Husain of Karya, Tanuj Bhojwani of People+ai, Yashwin Iddya of Fields of View, Kaustabh Khare of Ashoka, the Rohini Nilekani Philanthropies team, and Satish Sangameswaran of Microsoft Research.