
Ignore The AI Utopists And Doomers: The Need For Louder AI Pragmatists



This article was originally published on Forbes.


Social media algorithms and editorial outlets often promote the loudest and most controversial voices. As a result, AI news lately seems to fall into one of two camps. On one side are the ‘AI Doomers,’ who warn of imminent existential risks and advocate aggressive action to curtail the development of AI. On the other side are the ‘AI Utopists,’ who think AI will help fix most of society’s problems and therefore advocate increased AI adoption and investment. One group wants the default to be ‘no’ until AI is proven safe; the other is willing to default to ‘yes’ and run any AI experiment for the sake of learning and ‘failing fast.’ The debate between these groups makes for great headlines, but it also distracts us from having more nuanced conversations about AI. What’s missing from the debate is the loud voice of the AI Pragmatist.


What is an AI Pragmatist?


AI Pragmatists believe in embracing the ‘genius of the AND’ instead of succumbing to the ‘tyranny of the OR’. An AI Pragmatist understands that the adoption of AI across all aspects of life is inevitable, yet wants regulation and market pressures to ensure those systems are developed safely and responsibly. They believe that some uses of AI should be banned outright, that most should be fully disclosed, and that others can simply replace existing software experiences. They are excited by AI’s potential to deliver benefits in healthcare and education, but also want to ensure those advancements don’t leave people behind. Similarly, they know that biases exist in every dataset and model because of pre-existing human biases, yet they want to find ways to reduce the impact of those biases on AI outcomes.


Let’s take an example of how an AI Pragmatist might evaluate an AI use case. Multiple studies have shown that AI can be more accurate than humans at predicting breast cancer. AI Pragmatists would promote any tool that allows us to save lives and improve care. However, they would also argue that these applications of AI must go through robust safety evaluations before use in clinical settings. This includes ensuring that the training data is representative of diverse populations, that the system accounts for outliers and model drift, and that the right privacy protections are in place to preserve patient confidentiality. AI Pragmatists don’t just want to say ‘no’ outright; rather, they try to solve the problem of how to integrate AI into our lives in ways that balance its risks against its benefits.


Why do AI Pragmatists need to be louder?


We need louder AI Pragmatist voices to ensure informed discussions about how AI should be used and about the ethical rules that should guide its implementation. Everyone, across all sectors, needs to learn more about AI to ensure responsible use without falling into fear or blind optimism. AI Pragmatists are best suited to create a balanced environment for this learning. Once we understand AI better, we can have a meaningful conversation about how to regulate it effectively.


There is ongoing debate surrounding the regulation of AI use across all levels of society. That debate needs to be constructive, based on factual information about AI benefits, and grounded in shared ethical and philosophical principles. The majority of AI practitioners are committed to being responsible and ethical in their work. By providing them with the necessary tools and incentives, we can encourage responsible practices.


Some AI Doomers may argue against involving AI builders in shaping the regulatory framework, seeing this as big business lobbying for looser laws that would let it engage in ‘ethics washing’. However, excluding builders would result in weak adherence to regulations and mere token efforts at compliance. We witnessed a similar issue with the European Union’s General Data Protection Regulation (GDPR), which failed to foster a genuine culture of privacy and data governance. The law became more of a checklist exercise for many organizations to meet minimum compliance requirements, and regulatory fines were simply calculated into business models. AI Pragmatist involvement in crafting public policy can help create clear, innovation-friendly criteria for low-risk AI applications, and then help establish the roadmap and criteria for trustworthy and responsible high-risk AI use cases.


One of the few constants in life is change, and AI is bringing, and will continue to bring, rapid change. There are certain areas where we should be open to experimentation, but in others it is essential for AI to prove its safety before moving forward. The challenges of collective action among individuals, companies, and countries make it nearly impossible to completely halt AI development. While the intention behind such efforts is commendable, they are likely to be unsuccessful. Instead, we need stronger voices that can help facilitate learning about AI, and then establish the conditions under which risky AI applications can be deployed responsibly.







