Ethics in Programming: When Your IDE Has Ideology

Cover image by Fotis Fotopoulos

Our digital tools actively redesign society through unintended consequences cloaked as progress. Artificial intelligence sorts human experiences into engagement-optimised echo chambers, trading human curiosity for infinite scrolls that fracture attention spans into ever-shrinking intervals. Delivery platforms replace community storefronts with gig-worker warehouses, eroding urban diversity while trapping labourers in wage arbitrage. Résumé-scanning algorithms cement historical prejudices under the guise of “data-driven objectivity,” codifying last century’s discrimination into tomorrow’s opportunities. Before running another sprint retrospective, we might ask: when did we outsource our moral agency to the Jira board?

From my years working on commercial apps, I’ve watched countless launch cycles where process consistently drowned out considerations of societal cost. Skilled teams chase performance metrics dictated by executives, optimising conversion funnels while avoiding questions about who benefits and who gets left behind. We prototype without asking what real value we can add, following roadmaps designed by those prioritising profit. These questions fall beyond our scope of work, for we have become the doers, no longer the thinkers. This learned helplessness reveals the crisis of modern technical work.

Thinkers like Aristotle observed that every craft carries hidden intentions—a fishing net designed to harvest the sea also reshapes our relationship with marine life. Today’s programmers inherit priorities baked into our tools long before we opened our laptops. The database schemas we define using rigid logical frameworks descend from philosophers who believed the universe could be neatly categorised. The way we split app development into isolated teams (frontend vs. backend vs. devops) echoes industrial assembly lines divorcing workers from finished products. Even machine learning’s fixation on correlations over root causes mirrors the scientific method’s limitations when confronting human complexity.

But tools don’t just reflect old ideas—they train their users. Anthropologist David Graeber showed how repetitive use of systems moulds behaviours and beliefs. For programmers, reviewing each other’s code under strict guidelines gradually aligns personal judgment with group policies. Consider the automation of university admissions using standardised test scores: technical perfection at the cost of cementing social injustice. This embodies Marx’s critique of Entfremdung—alienation through tools that convert human potential into abstracted, dehumanised outputs. Like factory machinery estranging workers from their craft, admission algorithms reduce students’ lived complexity to quantifiable scores, perpetuating class hierarchies under the guise of meritocratic neutrality. The algorithm may work as designed, but the design often fails to account for privilege or discrimination.

History shows we create systems using tools inherited from previous generations. Our modern tech infrastructure—from automated pipelines to app features—carries this forward daily. To align technical work with human values, we could start by examining the hidden beliefs baked into our tools. The clean separation between software components mirrors historical debates about mind-body duality; the factory assembly line emerged from military logistics; early internet protocols assumed trustworthy institutional users; and our modern codebases carry the markers of these origins. The way we separate code into loosely coupled services echoes free-market theories from economists who prized individual choice above collective welfare, and algorithmic “fairness” solutions often employ 19th-century utilitarianism—maximising good for the majority while accepting minority harm as statistical necessity. Until we recognise these inherited assumptions, we’ll keep embedding old inequities into new infrastructure.

However, change requires practical steps, and recognising these origins builds accountability. We can examine our tools to uncover the implicit biases and assumptions embedded in their design. That Python package saving dev hours—what productivity philosophy does it embody? That A/B testing framework optimising engagement—what definitions of human flourishing does it prioritise? These frameworks operationalise Shoshana Zuboff’s “surveillance capitalism,” where human experience is “unilaterally claimed as free raw material for translation into behavioural data”—a digital enclosure movement privatising social life itself.

We could track ethical challenges like we track technical debt—documenting who’s responsible for certain impacts, why compromises were made, and how to fix them later. Imagine dashboards showing unresolved privacy concerns or accessibility gaps alongside bug counts. For example, after adding facial recognition to school cafeteria payments, a team might log “System struggles with dark skin tones—awaiting diversity training and camera upgrades before scaling”. Or a logged entry might note: “Our recommendation algorithm currently favours urban users at rural users’ expense; need geographic fairness review Q3.”
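To make this concrete, here is a minimal sketch in TypeScript of what such a register might look like if a team kept it in version control alongside the backlog. Every name in it (EthicalDebtItem, severity, remediation, the ETH- ticket keys) is illustrative rather than drawn from any existing tool.

```typescript
// A minimal sketch of an "ethical debt" register kept next to the technical backlog.
// All names here (EthicalDebtItem, Severity, etc.) are illustrative, not an existing tool's API.

type Severity = "low" | "medium" | "high";

interface EthicalDebtItem {
  id: string;              // stable reference, like a ticket key
  impact: string;          // who is affected and how
  compromise: string;      // why the trade-off was accepted at the time
  owner: string;           // person or team accountable for follow-up
  severity: Severity;
  remediation: string;     // what "paying the debt down" would look like
  targetQuarter?: string;  // optional deadline, surfaced on the dashboard
}

// Entries mirroring the examples above: facial-recognition bias and geographic unfairness.
const ethicalDebt: EthicalDebtItem[] = [
  {
    id: "ETH-12",
    impact: "Cafeteria face-recognition payments misidentify students with dark skin tones",
    compromise: "Shipped to a single pilot school to meet the launch date",
    owner: "payments-team",
    severity: "high",
    remediation: "Diverse training data and camera upgrades before scaling",
  },
  {
    id: "ETH-13",
    impact: "Recommendation algorithm favours urban users at rural users' expense",
    compromise: "Training data skewed toward metropolitan traffic",
    owner: "recsys-team",
    severity: "medium",
    remediation: "Geographic fairness review",
    targetQuarter: "Q3",
  },
];

// A dashboard could surface unresolved high-severity items next to open bug counts.
const unresolvedHighSeverity = ethicalDebt.filter((item) => item.severity === "high");
console.log(`${unresolvedHighSeverity.length} high-severity ethical debt item(s) still open`);
```

The point is not the data structure but the discipline: once an item like ETH-12 exists, it can be triaged, owned, and surfaced on the same dashboards that already track defects, rather than evaporating after the retrospective.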

Technical education could combine coding with ethical training, helping us connect everyday tools to bigger ideas. Nadezhda Krupskaya argued education must arm students with methods to analyse reality critically. Her vision aligns with curricula that teach coders to view schema design as an act of social accountability—not just technical optimisation. Picture a computer science curriculum where creating relational databases requires debating GDPR implications. A frontend course could treat accessible dropdown menus not as edge cases, but as foundational skills akin to responsive grids, where semantic HTML becomes an act of neurodiversity advocacy. The React developer optimising component reusability would simultaneously learn how template choices might exclude screen reader users, transforming pull requests into instruments of inclusive design policy. By anchoring technical decisions to their human consequences, we stop treating “best practices” as neutral truths. By viewing tech choices as active value statements rather than neutral tools, we gain agency to shape more thoughtful systems.
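A small sketch of how that lesson might land in practice, assuming a React/TypeScript course; both components are hypothetical and deliberately simplified. The first renders a clickable div that a mouse user will never notice is broken, while the second leans on a native label and select so screen readers, keyboards, and focus handling work without any extra code.

```tsx
import React from "react";

// The "works on my machine" version: mouse-only, unlabelled,
// invisible to screen readers and unreachable by keyboard.
const InaccessibleSort = ({ onChange }: { onChange: (value: string) => void }) => (
  <div className="dropdown" onClick={() => onChange("price")}>
    Sort by price
  </div>
);

// The semantic version: a real <label> and <select> give screen-reader announcement,
// keyboard navigation, and focus handling for free.
const AccessibleSort = ({ onChange }: { onChange: (value: string) => void }) => (
  <label>
    Sort results
    <select onChange={(event) => onChange(event.target.value)}>
      <option value="relevance">Relevance</option>
      <option value="price">Price</option>
      <option value="rating">Rating</option>
    </select>
  </label>
);

export { InaccessibleSort, AccessibleSort };
```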

When we peer into our pull requests, we see more than code—every code review carries invisible blueprints for human experience. Within the layers of abstraction, one can almost sense the spectral presence of Wittgenstein contemplating the boundaries of language, Fanon dissecting the mechanisms of colonial power, and Gibson charting the evolving contours of cyberspace. Technical work gains human depth when guided by Socrates’ challenge: what is the eudaimonia—the good life—we’re engineering toward?

A ticket titled “Optimise password reset UX” might improve metrics while ignoring how stricter security excludes elderly users reliant on legacy devices. A sprint retrospective celebrating reduced API latency won’t capture the climate cost of added server farms needed to sustain millisecond improvements. Standard tools focus teams on answering “Did we build it right?” while avoiding “Did we build the right thing—for whom?” Rosa Luxemburg’s assertion that “those who do not move, do not notice their chains” points to this phenomenon. Engineers risk internalising corporate priorities as natural law unless we actively interrogate how ticketing systems reduce ethical urgency to backlog items. The true challenge emerges when ethical questions resist ticketing systems’ formats—no priority level captures the moral weight of excluding sign language avatars from metaverse workplaces, no story points quantify the generational harm of biased criminal risk-assessment algorithms.

Every day we face a choice: optimise for capital or care. Philosophical literacy allows us to reframe the problem holistically rather than merely technically. Tools become either extensions of human will or autonomous systems governing their creators. Thoughtless labour creates systemic dehumanisation—a prophecy fulfilled when Instagram’s engagement algorithms promoted eating disorder content to teenagers, amplifying material the systems themselves could never comprehend. But another path exists. Early internet pioneers designed TCP/IP with open access and error correction baked into its foundations. The choice remains ours: build tools as mirrors reflecting humanity back to itself, or as slot machines addicting users to profit-driven dopamine drips.

The terminal screen holds no inherent ethics—only the humans programming it do. Our keystrokes shape societal ground truth. Every merge request commits not just code but a moral stance. Let’s develop with that weight—and possibility—in mind.
