5 Best Practices for Nonprofit Data Management | by Kaleb Nyquist | Jun, 2022

Illustration AI-generated by Midjourney using author’s prompt “Data Tips for Nonprofits”. Licensed under Midjourney’s commercial use agreement.

A few years ago when I pivoted from a career in communications to data analytics, I sought to remain in the nonprofit and social enterprise sector. I saw low-hanging fruit that could help these resourceful and mission-driven organizations lower expenses and maximize impact through developing a data strategy. However, as I began delving in, I quickly learned the hard way that many nonprofit organizations have poorly managed data, making it nearly impossible to perform any meaningful analysis. As the saying goes: “garbage in, garbage out.”

To save aspiring data scientists seeking opportunities in the nonprofit sector and social entrepreneurs hoping to build a robust data strategy from the same headaches I endured, I have developed 5 best practices. They represent the lessons I learned during my career pivot, organized from easy to hard: two “don’ts”, two “dos”, and one “dream big!”

Admittedly, we have all done it: set a cell’s font color to red to mark a problem, highlighted a cell green to indicate completion, or color-coded any other rainbow of possibilities into an Excel or Google spreadsheet as a shortcut to store and convey information. While color-coding seems innocuous enough for making spreadsheets human-readable, in the long term, this technique is a recipe for data chaos.

Why chaos? Color-coded data is almost always implicit data. Unless you are tracking data specifically about color (such as paint pigments), there will be no obvious connection between the color code and the data’s meaning. Although a “legend” or “key” can help clarify meanings, they often get lost or cropped out, and are useless when printed in grayscale. And for sophisticated data management and analysis, it is infuriatingly tedious to import color-coded data into any of the data structures used in languages like R and Python.
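To see just how awkward that import becomes, consider the digging required to recover meaning from cell colors. The sketch below is hypothetical (the filename and column position are placeholders) and assumes the openpyxl library: it merely prints each status cell’s value alongside its fill color, which is the kind of low-level inspection a color-coded spreadsheet forces on you.

```python
from openpyxl import load_workbook

# Hypothetical color-coded tracker; column B holds the implicitly color-coded status.
wb = load_workbook("color_coded_tracker.xlsx")
ws = wb.active

for row in ws.iter_rows(min_row=2):  # skip the header row
    status_cell = row[1]             # column B
    # The cell's meaning lives in its fill color, not its value,
    # so we are reduced to reading raw hex codes like "FFFF0000".
    print(status_cell.value, status_cell.fill.start_color.rgb)
```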

Illustration by author using Microsoft Excel.

Last year I contracted with a small nonprofit doing a rapid outreach campaign in response to a salient social crisis. Their sole staffer was on paternity leave, and it fell on me to dig through spreadsheets to determine which contacts needed to be engaged during the next 48 hours. However, each spreadsheet was a kaleidoscope of color codes, most of which meant something significant to their sole staffer, but took multiple phone calls and precious hours for me to decipher.

Through the creation of new columns of explicit data we were able to make a shared meaning of the data. Explicit data is data that represents exactly what it means. For example, instead of the color green to represent “complete”, explicit data could be a value that simply says the word complete, or a “complete?” column with yes and no values underneath. Explicit data is simultaneously both more human-readable and computer-readable. For example: once we had set up explicit data within our rapid outreach campaign, we were able to use it to subset mail merges that sent the right message to the appropriate audience.
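As a rough sketch of that workflow (the filenames and column name here are hypothetical), explicit data makes the subsetting a one-liner in pandas:

```python
import pandas as pd

# Hypothetical export of the outreach tracker, with an explicit "complete?"
# column holding yes/no values instead of green/red cell colors.
contacts = pd.read_csv("outreach_contacts.csv")

# Everyone not yet contacted goes into the next mail merge.
pending = contacts[contacts["complete?"] == "no"]
pending.to_csv("mail_merge_pending.csv", index=False)

print(f"{len(pending)} contacts still need to be engaged in the next 48 hours.")
```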

Explicit data opens up the possibility of conditional formatting as a feasible alternative to color-coding. Available in Google Sheets, Microsoft Excel, Apple Numbers and more, conditional formatting is the creation of rules that add color and other visual flair based on cell values. For example, instead of manually coloring cells green to indicate a row is “complete”, a cell in a “complete?” column could be marked with a value of yes and the conditional formatting rule will automatically assign the row a green color. The have-your-cake-and-eat-it-too advantage of conditional formatting is that it retains the visual highlighting of color-coding, while also allowing for explicit data that more closely represents what it means.
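Conditional formatting is usually configured through the spreadsheet interface itself, but it can also be scripted. Here is a minimal sketch using openpyxl, with invented sheet contents: a rule watches the “complete?” column and turns matching cells green automatically.

```python
from openpyxl import Workbook
from openpyxl.styles import PatternFill
from openpyxl.formatting.rule import CellIsRule

wb = Workbook()
ws = wb.active
ws.append(["contact", "complete?"])
ws.append(["Ada Lovelace", "yes"])
ws.append(["Grace Hopper", "no"])

# Any "yes" in column B gets a green fill automatically; the explicit value
# drives the color, rather than the color standing in for the value.
green = PatternFill(start_color="C6EFCE", end_color="C6EFCE", fill_type="solid")
ws.conditional_formatting.add(
    "B2:B1000",
    CellIsRule(operator="equal", formula=['"yes"'], fill=green),
)

wb.save("outreach_tracker.xlsx")
```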

By not tying the meaning of the data to an arbitrary color scheme, organizational data becomes more resilient and can handle challenges including employee turnover, software upgrades, and knowledge exchanges.

I recently consulted for a nonprofit organization rebuilding their website from the ground up. This organization’s history goes back over a century, and much of their data reflected that legacy — including a timeline of significant milestones, a list of influential people over the decades, notable public statements, and a media archive. However, all this historical data was scattered across the organization’s increasingly out-of-date and bug-ridden website, which was sometimes the only place this valuable data existed at all. The weight of the data hardcoded into HTML not only held back critical website upgrades but also made it difficult to query the data for even the most basic analyses.

Illustration by author using good ol’ fashioned HTML.

To illuminate the underlying issue, it is helpful to know the technical term “system of record”: the information storage system that is the authoritative data source for a given piece of data. In my client’s example above, individual webpages were used as the system of record. This is a natural temptation insofar as the website is a sort of open conversation between leadership and audience. As the organization discovers its niche, it updates the website with new data to answer important questions: Who are we connected to? What are we capable of? Where are we operating?

However, the average nonprofit website is trying to achieve goals better suited to the “system of engagement” paradigm: providing users with an accessible means of viewing, querying, and interacting with information stored within a system of record centralized elsewhere. Web frameworks are simply not designed to operate as a system of record.

Database management systems, on the other hand, are designed to fit the “system of record” paradigm. By focusing on the data itself, stripped of front-end visual considerations, a well-maintained database naturally becomes the system of record for an organization’s most valuable data. Furthermore, most database management systems can be viewed, queried, or called by the organization’s website. This means data can still be available via the website without being stuck on the website. Although it takes a bit of upfront work to set up a database separate from the website, doing so allows for more data resilience and web agility.

A simple exercise to implement a “system of record” is to search your website for bullet points or tables that aggregate interesting data. Migrate that data into a no-code database like Airtable, and embed an Airtable “view” back into your website. The advantage of this method is that Airtable views let you filter data based on what you want to show. For example, you can create views separating public data from confidential data, while still having all your data conveniently gathered together in a single database!
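As a sketch of what that looks like programmatically, the same base can also be queried from a script or a website backend, so the database remains the system of record while the site displays only a slice of it. This example assumes the third-party pyairtable client (whose interface may differ by version), and the credentials, view name, and field names are placeholders.

```python
from pyairtable import Table

# Placeholder credentials and identifiers; substitute your own Airtable values.
table = Table("YOUR_API_KEY", "appXXXXXXXXXXXXXX", "Milestones")

# Fetch only the records exposed by a public-facing view, leaving
# internal-only records safely behind in the same base.
for record in table.all(view="Public website"):
    fields = record["fields"]
    print(fields.get("Year"), fields.get("Milestone"))
```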

While spreadsheets are useful for “quick-and-dirty” data gathering and analysis, a growing organization will inevitably find their data trapped behind the “prison bars” of two-dimensional spreadsheets. Beyond concerns about size and data integrity when deployed across multiple users, spreadsheets isolate each data point in an individual “cell”, which makes it difficult to connect data for bulk updates or sophisticated analysis.

Illustration by author using LucidChart

Contrast this with relational data, most commonly found in relational database management systems (RDBMS). Relational data models allow for data to be linked across tables through sets of primary keys (a column of unique IDs for each row in a table) and foreign keys (a column referring to the primary keys of a “foreign” table, allowing for another row’s data to be referenced within a row of the current table).
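For a concrete (if toy) illustration of primary and foreign keys, here is a minimal sketch using Python’s built-in sqlite3 module with invented table and column names: each student row points to a parent row by ID, rather than duplicating the parent’s contact details.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
    CREATE TABLE parents (
        parent_id INTEGER PRIMARY KEY,   -- primary key: unique ID per parent
        name      TEXT,
        email     TEXT
    );
    CREATE TABLE students (
        student_id INTEGER PRIMARY KEY,
        name       TEXT,
        parent_id  INTEGER REFERENCES parents(parent_id)  -- foreign key
    );
""")

conn.execute("INSERT INTO parents VALUES (1, 'Jordan Rivera', 'jordan@example.org')")
conn.execute("INSERT INTO students VALUES (10, 'Sam Rivera', 1)")
conn.execute("INSERT INTO students VALUES (11, 'Alex Rivera', 1)")

# One join recovers every student together with their parent's contact info,
# while the parent's details are stored exactly once.
for row in conn.execute("""
    SELECT s.name, p.name, p.email
    FROM students s
    JOIN parents p ON s.parent_id = p.parent_id
"""):
    print(row)
```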

Paradoxically, although social enterprises are inherently relational endeavors, organizations pursuing the social good often do not utilize a relational data model for their most mission-critical data. This oversight may be because they are unaware of the value of relational data, or lack the technical capabilities to store and access it.

For organizations wanting to make the leap to relational data, a no-code solution like the aforementioned Airtable is an excellent gateway. Nonprofits with more advanced data needs and a bit more tech-savvy will find a SQL database the best way to go. Finally, although they aren’t “relational” in a technical sense, cutting-edge graph database solutions like Neo4j store not only information about interrelated entities but also information about the relationships themselves.

After you have harnessed the power of relational data even just once, you are bound to see the potential of implementing relational data everywhere. Here are some examples from my own career:

  • Youth-facing Service Program: I harnessed relational data to skillfully manage data on students and their parents without resorting to clunky column names like “Parent #2 Contact Info” that cluttered our old spreadsheet.
  • Political Advocacy: I created a relational data model for power-mapping exercises to determine the shortest path of influence over a key decision-maker.
  • Wildlife Conservation Network: To make our directory easy to navigate, I used a relational database to set up distinct-but-connected tables for endangered species, potential donors, global conservation organizations, and local wildlife refuges.

Unlike their for-profit counterparts, nonprofit organizations operate in a world largely devoid of profit margins and price signals. To paraphrase the economist Friedrich Hayek, “price” itself is an abstracted data point that stands in for information from across global markets and then, in a quasi-magical sleight-of-the-invisible-hand, coordinates profit-seeking economic activity. Because nonprofits are not governed by a profit motive, they lose much of this price-ordained sense of direction and find that many of their projects cannot be distilled down to a quantifiable bottom line. To report “growth” or progress toward other goals, nonprofits must fill the vacuum left behind by revenue numbers with real-world data.

Crucially, this real-world data frequently is in the form of qualitative data. Whereas quantitative data “measures” and is numbers-based, qualitative data “describes” and is usually text-based. Qualitative data can include stakeholder surveys, interview transcripts, focus group recordings, and volunteer observations. Nonprofits live and breathe on qualitative data — even if the “data” is simply scribbles being passed around on post-it notes!

Illustration by author

Frustratingly, funding agencies often ask for reports with quantitative data rather than qualitative data, even though nonprofits will likely have less of the former and an abundance of the latter. In their defense, the funding agencies are quite literally trying to hold recipients ac-count-able for the funds they receive. Furthermore, many foundation officers are increasingly aware that quantitative data alone leaves out important context. A mixed-methods approach that combines both quantitative and qualitative data is often the most persuasive for telling a story about social impact and change.

Categorical data is the nexus where quantitative data meets qualitative data. A category set usually consists of words or short phrases that can be used to categorize a record. For example, a volunteer management database might have a field for a volunteer’s “motivation” where the options include skill_development, required_service_hours, personal_story, or member_of_group. Categorical data can then be used to subset data for questions such as “how many hours does the average volunteer work if their motivation is skill development versus having required service hours?” as well as more sophisticated research techniques such as regression analysis.
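Answering that kind of question becomes a one-liner once the motivation field exists as categorical data. A minimal sketch in pandas, with invented names and numbers:

```python
import pandas as pd

# Hypothetical volunteer log with a categorical "motivation" field.
volunteers = pd.DataFrame({
    "volunteer":  ["Aisha", "Ben", "Carla", "Devi", "Eli"],
    "motivation": ["skill_development", "required_service_hours",
                   "skill_development", "member_of_group",
                   "required_service_hours"],
    "hours":      [12, 4, 20, 8, 6],
})

# Average hours worked, broken out by motivation category.
print(volunteers.groupby("motivation")["hours"].mean())
```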

Categories can be imposed on the data or can emerge from analysis of the data. An example of emergent categorization came from a summit of climate action organizations my team convened in New York City. Through dialogue and an increasingly cluttered whiteboard, we began to identify the diverse types of climate action being pursued: “advocacy”, “carbon reductions”, and “disaster preparedness”. However, this one-day summit was insufficient to complete what realistically had to be an iterative process. Through follow-up conversations, we developed a process inspired by “grounded theory”: open coding (free association of data), axial coding (grouping data together), and selective coding (fitting groups together into categories like jigsaw pieces).

Depending on the problem you are trying to solve, creating robust categories for your data can be a labor-intensive and intellectually demanding process. If your organization truly wants to embrace qualitative data, it is worth embedding within the data team an ethnographer (if trying to understand a problem with no clear solution) or a program evaluator (if trying to prove a tentative solution). Having a qualitative data expert proficient in “coding categories” work alongside a quantitative data expert skilled in “coding computers” is a powerful one-two punch for any ambitious organization seeking to solve urgent social problems through an evidence-based approach.

Synthetic data is “fake data” generated by an algorithm and designed to replicate an actual dataset. Depending on the sophistication of the algorithm, the synthetic dataset will not only replicate the structure of the data and the range of values each field can take, but also reproduce the correlations, standard deviations, and other statistical patterns. This extra level of sophistication means that a statistical analysis of the synthetic dataset will be virtually identical to a statistical analysis of the corresponding “real data”.
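Production synthesizers rely on far more sophisticated generative models, but a toy sketch conveys the core idea: fit a distribution to the real data (invented here), then sample new rows that reproduce its means, standard deviations, and correlations without reproducing any actual individual.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in "real" data: hours volunteered and donations received (invented).
hours = rng.normal(10, 3, size=500)
real = pd.DataFrame({
    "hours": hours,
    "donation_usd": 20 + 3 * hours + rng.normal(0, 10, size=500),
})

# Toy synthesizer: sample from a multivariate normal fitted to the real data.
# No synthetic row corresponds to a real record, yet summary statistics match.
synthetic = pd.DataFrame(
    rng.multivariate_normal(real.mean().values, real.cov().values, size=500),
    columns=real.columns,
)

print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```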

Illustration by author using JetBrains Mono, a specialized font for programming.

Once created, synthetic data can be leveraged in a number of helpful ways. For nonprofit organizations that provide services to vulnerable populations, synthetic data allows for program evaluation through direct analysis of the service data without compromising the privacy of any individual service recipient. Although nonprofit organizations should still be explicit about how someone’s data will be used and shared, the hyper-anonymous nature of synthetic data is a worthwhile safeguard when collaborating with external researchers and partner organizations.

Synthetic data can be a strategic on-ramp for introducing artificial intelligence into a nonprofit’s operations. First, synthetic data is almost always generated through some form of artificial intelligence, and thus learning how to create synthetic data can serve as a crash course in AI techniques and terminology. Second, because synthetic data safeguards privacy, generating synthetic datasets opens up the possibility for nonprofits to run their own “data science competition” on platforms such as Kaggle or DrivenData. Finally, it is believed by some that synthetic data can help correct for “AI bias” by supplementing real data on underrepresented populations with synthetic data, so that the machine doesn’t internalize underrepresentation as “less-likely-to-have-existence”. While promising, remember that AI bias is really just human bias taught to the machine through inputted data. Synthetic data may help correct some AI bias, but the real solution is to address the bias at the human source.

The field of synthetic data is rapidly evolving, but I found Gretel’s platform relatively easy to learn and use. Notably, they have published and presented on the value of their technology specifically for nonprofits, an all-too-rare marketing decision in the data science field. Some tech savvy is required to set up and tweak a Gretel model: roughly the equivalent of a one-semester course in machine learning (as opposed to a full-fledged computer science degree). In light of the other tips shared earlier in this article, it is worth highlighting that Gretel can work with relational data and text-based data (i.e., qualitative data). However, Gretel does not work with color-coded spreadsheets, being optimized instead for file formats such as .csv and .json.

Data is becoming exponentially cheaper to collect, store, and analyze. These advances also have a shadowy side, evidenced by rising concerns over privacy violations and AI bias. Nonprofits have an opportunity to ride the data wave to more effectively realize their missions, but it takes work to adapt data practices to the idiosyncrasies of the nonprofit sector without letting data’s shadowy side sneak in the back door. My hope is that through these best practices in nonprofit data management you are able to multiply your organization’s impact to do more good in the world while also minimizing the bad.

That said, making sense of data for nonprofits is an ongoing process! Let me know which advice you found most useful in the poll below. Your responses help guide my thinking about if and when to do a deeper dive into any of these topics.

If you like this article, you might also find my tutorials for using Airtable with Python to be helpful. Feel free to follow me here on Medium or on Twitter (where I talk about a lot more than just data, but also some data). If you would like to support more writing like this, you can buy me a coffee or (if you haven’t already) become a Medium subscriber using my referral link. Cheers!
