Performing tech for good
a guide by Jessi
I have had the privilege of working as a software developer at nonprofit organizations for nearly 20 years, and I hope to continue. I call it a privilege because various privileges (confidence, an elite education, the economic security not to have to maximize my salary) have made this path available to me. While the day-to-day work of writing code is quite similar to the work at a for-profit, there is the advantage of being able to sleep at night, without feeling complicit in surveillance capitalism and the myriad other ethical problems of big tech, and without the day-to-day shittiness of working in a hierarchical, profit-driven organization. But nonprofit tech is not all rosy either, in terms of either ethical purity or day-to-day reality.
“Nonprofit” is a US-specific term for organizations with 501(c)(3) tax status, meaning that donations to them are tax-deductible. It’s a category of organization designed to preserve the status quo, with small tweaks here and there. Nonprofits are generally funded by foundations, smaller grassroots individual donations, and possibly contracts. It is not a system designed for radical change, and there are many reasons to critique it. In aiming to provide staff a (hopefully) livable wage, organizations can get caught in a cycle where fundraising becomes the main priority, and what they do has to change to fit what funders want (or what leadership thinks funders want).
In my work at equity-driven nonprofits, I’ve seen a range of how tech is done, and why. By “tech” I mean software development specifically, as that is what I do, and so I’ve worked at places that have that need. Nearly all organizations use technology in some broad sense, but actually writing code is less common. I’ve often found myself working in the social services sector, a domain where there is real need for custom software.
Various books aimed at nonprofit leadership, like The Smart Nonprofit: Staying Human-Centered in an Automated World and The Tech That Comes Next: How Changemakers, Philanthropists, and Technologists Can Build an Equitable World, talk about how technology can benefit nonprofits. They give many examples of nonprofits that develop software. While some of these organizations started developing software because of a need they felt could be better met by custom, in-house-written software, others were tech-driven from the start. In the latter case, someone with a tech background, likely motivated by a combination of techno-utopianism and savior complex, starts a nonprofit with the idea that a tech-driven organization can solve a problem that others have failed to solve before.
The Smart Nonprofit enumerates reasons that a nonprofit might develop what it considers “smart tech” (pg 125). “Smart tech” is a broad term that includes anything requiring software or data engineering: bots, custom software, data visualizations, algorithms to prioritize or recommend, and so on. The authors imply it is about automation, although many of the examples they give (like custom software and using data to educate) are not what is generally considered automation. The listed reasons for developing technology include creating capacity, overcoming service barriers, crisis intervention, removing accessibility barriers for those with disabilities, and advocacy (meaning using data to teach people about issues). These reasons did not strike me as particularly distinct, but they might indicate how non-technical nonprofit leaders categorize technology. Other than the advocacy example, which is distinct and more about using data to educate, the underlying theme of the other reasons is doing more with fewer resources. Custom software can allow nonprofits to try to serve more people more effectively with fewer staff. Software is a tool that can help, but is not itself a solution. Many of the software needs are also based not on the needs of communities, but on the needs of government or other funders, who require quantitative data to measure effectiveness and have other demands.
I’ve spent many years working on variations of case management software for social service and legal providers. In these cases, part of the reason for developing software in-house was that it was cheaper and more customizable, allowing the direct service providers to be more effective. Organizations could also feel more comfortable that they had control over the data and its use when the software was managed in-house. This software was not “smart” in the sense of using machine learning or automated decision-making; it was software that helped staff at an organization do their work. There can be a tendency to try to sound impressive and act like the software provides sophisticated algorithms that do the work in a qualitatively different way, but there is also a need for software that simply and efficiently does what somebody could do with a lot of files.
Many of the software projects that I have found most fulfilling developed out of an organization’s needs: an organization doing justice-oriented work cobbled its systems together, and the custom software grew organically from there. Projects like this can be messy to work on, as there is no big-picture design or goal, just an effort to build tools that meet what folks are actually trying to do. Despite the mess, I’ve always found such projects the most rewarding. Other projects are more top-down: somebody had an idea of a problem that tech could solve, and an organization was formed to develop that software. Such tech-driven organizations might have a Silicon Valley attitude of disrupting others doing this work. They might view the technology as a solution, rather than a tool. While I don’t think there is a clean binary between nonprofit-tech-companies and nonprofits-with-some-tech-staff, I think the gradation is useful to consider.
Software can support radical projects, in the sense of getting at the root of the problem, as those doing radical work might need software to communicate securely or make decisions effectively, for example. Most radical software I know of is about building alternative infrastructure. Some such infrastructure could be inherently progressive, like tools for making decisions democratically. Other technical infrastructure, like tools for fundraising, for secure or mass communication, or for mobilizing people to attend a protest, vote, or engage politically in another way, is useful to those across the political spectrum. Infrastructure can be developed by nonprofits or for-profits, and I don’t think infrastructure developed by nonprofits is inherently more ethical. With either legal structure, there needs to be a source of income in order to pay staff, and thus compromises can be made in order to keep funding stable.
In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks examines specific algorithms that aim to solve social problems but in fact do harm. While algorithms can theoretically avoid human bias, they likely have biases built into them. She critiques “systems engineering approaches to social problems” (pg 124), which assume that we can solve problems just by having all the information. The inequalities of late-stage capitalism are not going to be solved by building an algorithm to gather and parse information. Eubanks offers some checks one can apply to see if an automated tool aimed at low-income individuals meets some minimal necessary but not sufficient conditions---a sort of Bechdel test---for being ethical: “Does the tool increase the self-determination and agency of the poor? Would the tool be tolerated if it was targeted at non-poor people?” She also presents an “Oath of Non-Harm for an Age of Big Data” that tech workers can consider when working on a project supposedly intended to do good (pg 212).
Eubanks’s minimal conditions and oath resonate with me as somebody who does tech work at nonprofits, as ways to evaluate whether I am willing to work on a project and ways to push on projects I’m already working on. The question of whether a technological tool would be tolerated by those with privilege is measured against a constantly moving norm. We are like frogs in warming water when it comes to judging the freedoms we give up to technology, whether for convenience or simply because it has become the default expectation. Like many “worst practices”, invasions of privacy and losses of autonomy are first forced on those without the power to object, and then become the norm. Thus the bar of what is tolerated is a low one, given how much people are willing to accept for convenience. Eubanks’s oath talks about the tension between consent and convenience in developing a tool, and how informed consent should be the higher priority. She talks about the importance of acknowledging the history of structural oppression in our society, and of not being “complicit in the ‘unintended’ but terribly predictable consequences that arise when equity and good intentions are assumed as initial conditions.” (pg 213) Automated tools designed to improve social services will not necessarily do so, and there is huge risk in not challenging them at every step. This does not mean such tools cannot be valuable! But they will not be unless the team developing the tool stays committed to ensuring that equity is the top priority.
There are day-to-day aspects of nonprofit work that I personally like, but that might be major downsides to others: these are scrappier organizations without the resources and perks of a corporate tech job. I want to use a Linux computer, have minimal (ideally zero) travel expectations, and eat a home-prepped lunch. I enjoy chasing down bugs in an old code base written by a mix of staff and volunteers, and don’t expect best practices across the board. I have no interest in paying for third-party services, expensive training, licenses, and the like, so their absence doesn’t even feel like a downside to me. But to those used to a highly resourced job, these things might be disappointments. While nonprofit pay varies hugely, tech salaries at nonprofits are generally significantly lower than corporate salaries, and without the stock-option gamble that for-profits might offer.
One nonprofit worker talked about how there is less of a clear enemy in the workplace, which can make it harder to build solidarity among workers. At a traditional tech job, the hierarchy is clearer, and that can help workers connect. At scrappier jobs, the hierarchies (which still exist) can be less clear and more challenging to navigate, with vaguer policies in place, which allows those with more entitlement to take advantage of the vagueness, compounding the structural inequities within the workplace.
Nonprofit organizations will claim to prioritize hiring tech workers who are committed to the work, and ideally members of the communities served, but how that plays out varies widely. I’ve seen nonprofit leadership and hiring teams claim to prioritize applicants who are committed to the mission, but not actually care. It’s easy for anybody to say they want to “do good”---who doesn’t, if “good” isn’t defined?---but it is much rarer to actually check whether there is a political match. Given how high corporate tech salaries are, there can be insecurity when hiring for a nonprofit, and a scarcity mindset that it will be hard to find somebody able and willing to work for a lower salary. As with any hiring process, there is also always a stated goal of hiring diverse candidates, but not necessarily a commitment to follow through. There is a tension here: those with a resume full of relevant volunteer experience are often not the ones with the lived experience that an organization might claim to prioritize in its staff.
For tech workers considering working at a nonprofit, the questions interviewers ask about your knowledge of the domain, and about what you think of the organization’s strategies and theory of change, are likely indicative of how much they do or don’t want you to engage with the organization itself. Organizations that view tech workers as siloed engineers are less likely to listen to critiques of how something should be done. It can be hard to predict, as liberal nonprofits (and even apolitical corporations) use the language of DEI, “doing good”, and so on, and that language says little about who actually wants to hear feedback. I’ve been surprised at which workplaces listened to my feedback, and which pretty much told me it was my responsibility to implement the product as others had defined it, with hand-waving about the edge cases I would bring up.
Nonprofits do not necessarily treat workers better than profit-driven corporations do. While unionization is relatively common among government employees, there is less of a history of unionization at nonprofit organizations. Certainly there is a history---unionized Red Cross employees in LA went on strike in 1997---and unionization of nonprofit staff has grown in the past few years. The number of Nonprofit Professional Employees Union (NPEU) members grew from about 70 in 2010 to about 1,500, at about 50 organizations, in 2022. While management at some nonprofits voluntarily recognizes a union, that is not always the case. The opposition from management at a nonprofit often looks quite different from that at a for-profit company---management might question why workers are supposedly putting their needs before those of the mission, and claim that unionization will lead to cuts in services. Nonprofit leaders who identify as progressive and supportive of workers’ rights can have a NIMBY attitude toward workers at their own organizations wanting the same rights the leaders claim to support for others. Many nonprofit union drives are about giving workers a voice, and about making sure there is follow-through when the organization espouses its support for equity, and nonprofit leadership can react defensively to this.
While it is somewhat cynical, my first priority is to do no harm, and not to assume that harm is impossible. In The Tech That Comes Next: How Changemakers, Philanthropists, and Technologists Can Build an Equitable World, there is a recognition that well-meaning tech can indeed do harm. These harms include tech that is extractive, that does not acknowledge identities, or that is not accessible. In addition to these, I worry about leaking data (particularly data of people who are vulnerable and might be compelled to share it to access services), about technology that is patronizing, triggering, or time-wasting, and about diverting energy from more radical change. While these are risks of all nonprofit work, whether technology is involved or not, technology can obscure some of them, and as tech workers it is important that we not let an organization absolve itself of considering such risks by having us do the dirty work. We need to push back to make sure the work is done ethically, and use our voice as workers to refuse to go along with technical decisions that we think could be made more ethically. In my experience, these tensions have most often come up around storing data on gender, race, and other identities. In particular, there can be a push to keep the data simple, or to map it onto government or other databases, at the expense of storing people’s identities as they want them stored. And once data is stored, the bar for security must be significantly higher than what is legally required or explained in a Terms of Service that nobody will read. As a tech worker, I see it as my responsibility to continually push to make sure personal data is stored only when there is a reason, to consider corner cases that could lead to data leaks, and to ensure the data is fully removed as soon as possible.
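To make that concrete, here is a minimal sketch, in Python, of the kind of approach I push for. Everything in it is hypothetical (the Person record, the FUNDER_CATEGORIES set, the three-year retention period); it is not from any real case management system, and it is meant only to illustrate two habits: storing people’s identities in their own words while collapsing to a funder’s categories only at report time, and removing personal data once there is no longer a reason to keep it.

```python
# Hypothetical sketch, not a real system: keep a person's self-described
# gender verbatim, derive a funder's fixed category only when reporting,
# and purge identity data once it is no longer needed. All names and the
# retention period are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Assumed funder-required reporting buckets (what a grant report asks for).
FUNDER_CATEGORIES = {"woman", "man", "nonbinary", "not_reported"}


@dataclass
class Person:
    # The person's own words are the primary record.
    self_described_gender: Optional[str] = None
    # The coarse category lives alongside, never replacing, the above.
    reporting_category: str = "not_reported"
    last_service_date: Optional[date] = None


def reporting_bucket(person: Person) -> str:
    """Return a funder category for reports without touching the person's own description."""
    if person.reporting_category in FUNDER_CATEGORIES:
        return person.reporting_category
    return "not_reported"


def purge_if_stale(person: Person, retention: timedelta = timedelta(days=3 * 365)) -> None:
    """Remove identity data once there is no longer a reason to keep it."""
    if person.last_service_date is None:
        return
    if date.today() - person.last_service_date > retention:
        person.self_described_gender = None
        person.reporting_category = "not_reported"
```

The specific fields don’t matter; the point is the separation. The system can satisfy a funder’s reporting requirements without forcing people’s identities into those categories, and deleting data becomes a routine operation rather than an afterthought.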
As with any organization, the power dynamics at a nonprofit determine much about its mission, how it enacts that mission, and what it’s like to work there. And certainly an organization’s professed decision-making structure, hierarchy, and theory of change might be very different from how it actually operates. But for nonprofits using software to enact their mission, tech workers on staff are in a unique position to push the organization to stay true to that mission, and to push on ethical questions (like security) that might not initially be on the radar of organizational leadership.