Evaluation is necessary for any healthy group or project and is critical for grant writing. Most grant applications will require the organization or individual to describe the proposed process for evaluating the success of the project.
Developing a habit of effective evaluation will benefit your work in many ways. Evaluation aids in discovery and helps you avoid pitfalls; it shifts organizations from a culture of reactivity to one of thriving productivity. Additionally, recent trends show grantors increasingly emphasizing impact philanthropy and measuring the effect of their work on society. In this chapter, you will learn how to create evaluation practices that will impress grantors. You will learn about impact philanthropy, how to evaluate programs effectively, and how to develop a culture of evaluation among your teams.
Impact philanthropy is the process of measuring the success of donated or grant-awarded funds. Thanks to technological advancements, organizations now have the tools necessary to track philanthropic impacts, and grantors and donors value that data. Impact philanthropy means that instead of stating, “XYZ Foundation donated $1 million to local charities,” grantors can report, “XYZ Foundation served 142,857 meals to over 35,500 families and provided workforce development training to 330 adults.”
Grantors want to know about the impact of their dollars for many reasons. First, measurement reporting is a form of checks and balances; grantors can verify that the money they awarded made a difference and that the organization handled the funds wisely. Also, data on awarded funds give grantors the ability to share their story and impact in a way that is more likely to convince today’s donors to give. Additionally, data enable grantors to check that they are meeting their mission and awarding dollars as effectively as possible. Similarly, tracking outcome data gives organizations and individuals the opportunity to promote the success of their work and helps leverage future funds through a record of success.
Shifting towards a culture of evaluation and measurement is wise, but be aware that focusing on the data at the expense of your organization’s mission may harm your project. As an illustration, it is advisable to gather data on the individuals you serve and the results of their relationship with your project, but it is unwise to provide unnecessary services for the sake of recording data.
When handled correctly, impact philanthropy benefits both the grantor and the project. Collecting data may mean more work, but in the end, the payoff is greater transparency, increased stakeholder support, and more effective program management.
Collecting and Understanding Data
To report effectively on the impact of your grant funds, you must understand the different types of data in which grantors are interested. Mainly, grantors want to know about your project’s inputs, outputs, outcomes, and impacts.
Inputs: The resources used to operate your project. Resources can include staff, money, and materials.
- Example: An emergency assistance program utilizes staff, money to pay for utilities, and office supplies for filling out forms.
Outputs: The immediate results of the project.
- Example: A bike giveaway program that gives away 200 bikes. The donated bikes are the output in this scenario.
Outcomes: The likely short-term results of a project.
- Example: An individual trains 100 youth in graphic design. The outcome is that 100 teenagers now have a new employable skill.
Impacts: The likely long-term results of a project.
- Example: A social enterprise business employs ex-felons to make household items. The impact is that those employees can re-enter the workforce and thus make more money and avoid future offenses.
A clear understanding of these terms will help you when you write the evaluation section of your grant proposal. A failure to understand these terms leads to mistrust with grantors, so it is essential to educate yourself. Writing that “XYZ will measure the inputs of the project to understand its long-term benefits” leads grantors to wonder whether you will be able to evaluate your project at all. However, reporting that “XYZ will track inputs, outputs, and outcomes to understand program efficacy,” and then expanding upon your plan to realize that statement, will instill confidence that you are equipped to achieve your goals.
After you determine what type of information you want to collect, you next need to figure out how you will gather the data. When collecting data, you must understand the difference between quantitative and qualitative data. Some types of evaluation lean more toward one or the other. For instance, inputs and outputs are often measured using quantitative data, whereas impacts often require qualitative measurements.
Quantitative: Data that is collected numerically. Quantitative data is less subjective and more precise.
- Example: the number of volunteers, workshops provided, and customers served.
Qualitative: Data that is not collected numerically. Qualitative data is subject to opinion.
- Example: satisfaction surveys, participant categories (three blue-eyed staff members vs. eight brown-eyed staff members), testimonials.
The chart below outlines different examples by data type and sector:
| | Individual: Art Fellowship Applicant | Nonprofit: Soup Kitchen Applicant | Business: Tax Advisor Applicant |
| --- | --- | --- | --- |
| Inputs | Hours spent working on artwork (quantitative) | Amount of food and supplies (quantitative) | Hours spent with customers, office supplies and utilities (quantitative) |
| Outputs | Number of pieces of art (quantitative) | Number of meals served (quantitative) | Number of customers served (quantitative) |
| Outcomes | Promotion of diverse artists (qualitative) | Reduced hunger (qualitative) | Increased IRS compliance (qualitative) |
| Impacts | Increased awareness and appreciation of diverse artists (qualitative) | Increased quality of life (qualitative) | Increased financial confidence (qualitative) |
There are different tools for tracking data. Below is a list of standard tools that organizations and individuals use to keep track of their work.
Microsoft Excel or Google Sheets:
If you need an easy and inexpensive tool for data collection, a spreadsheet is a great option. Set up a spreadsheet for your project and update it regularly with data such as the number of clients served, the dollars spent on supplies, and the number of supplies required to produce a program.
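For teams comfortable with a little scripting, the same spreadsheet data can be summarized programmatically. The sketch below is a minimal illustration, not a prescribed tool: the CSV columns (`date`, `clients_served`, `supply_cost`) and the figures in it are hypothetical, and a spreadsheet’s built-in SUM functions accomplish the same thing.

```python
import csv
import io

# Hypothetical spreadsheet export: one row of tracking data per program day.
data = """date,clients_served,supply_cost
2024-01-05,32,118.40
2024-01-12,41,97.25
2024-01-19,38,105.10
"""

total_clients = 0   # an output: people served
total_cost = 0.0    # an input: dollars spent on supplies

for row in csv.DictReader(io.StringIO(data)):
    total_clients += int(row["clients_served"])
    total_cost += float(row["supply_cost"])

print(f"Clients served (output): {total_clients}")
print(f"Supply spending (input): ${total_cost:.2f}")
```

Totals like these map directly onto the inputs and outputs a grantor will ask about in a report.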
Charity Tracker:
Charity Tracker is case management software meant to help communities streamline service provision and decrease redundancies. Charity Tracker’s reporting feature allows you to track inputs and outputs.
Social Solutions:
Social Solutions “helps nonprofits get to the heart of their data so they can understand the full impact of their work.” It is an online data management tool that offers the ability to create and track evaluative criteria to prove organizational efficacy.
Surveys:
Surveys are an excellent way to track both qualitative and quantitative data. They can be produced informally using a word processor or with web-based tools such as surveymonkey.com. Use surveys to conduct a S.W.O.T. analysis after programs or to collect information on customer opinions. You can later compile the data for grantors to make it easier to understand. For example, “Forty percent of survey participants felt more secure at home after completing XYZ’s safety course.”
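Compiling a headline figure like that is simple arithmetic: count the responses of interest and divide by the total. As a rough sketch, assuming a hypothetical yes/no survey question and made-up responses:

```python
# Hypothetical responses to "Do you feel more secure at home after the course?"
responses = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "no", "no"]

# Count the affirmative answers and convert to a percentage of all respondents.
felt_more_secure = sum(1 for r in responses if r == "yes")
percent = 100 * felt_more_secure / len(responses)

print(f"{percent:.0f}% of survey participants felt more secure at home.")
```

The same count-and-divide step works whether you do it by hand, in a spreadsheet, or in a survey tool’s built-in report.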
Focus Groups:
Sometimes, you may want to dive deeper with a small group of targeted individuals to understand the benefits of a project. Conducting a focus group and recording participants’ opinions is another method of collecting qualitative data.
Create a Culture of Effective Evaluation
Integrating evaluation practices into your project’s routine encourages staff and volunteers to participate in data collection and increases the accuracy of that collection. If you are an Executive Director of a nonprofit or the owner of a business, it is much easier to obtain accurate evaluation data if all staff members commit to the process. To gain buy-in from employees and volunteers, you must explain the benefits of data collection, provide easy-to-use tools, and offer motivation.
- Explain the Benefits:
Collecting data takes time, and employees may view data collection as another pointless form to fill out or a tedious survey to conduct. Explaining the benefits of evaluation in staff meetings and asking for staff input on how to make evaluation efforts easy and successful will empower team members to participate meaningfully. Forgoing the education process and neglecting to encourage buy-in may lead to staff who neglect to collect data regularly or, worse, manipulate data to meet a requirement.
- Provide Tools:
Don’t expect volunteers or staff members to create their own tools for data collection; doing so leads to multiple forms of data and confusing results. Instead, research the appropriate data collection tool for your project and make sure it is easy to use so that no time is wasted.
- Offer Motivation:
Lastly, if your team is reluctant to collect data, create motivational rewards: “When we collectively complete our goal of collecting 100 surveys, the boss will buy everyone lunch!” Then use that data to inspire staff and volunteers during staff meetings and in other communications: “Last month, because of your hard work, we graduated 25 individuals from our education program.”
Describing Evaluation Practices in Grant Proposals
The evaluation section of a grant proposal asks for information concerning how you plan to evaluate the project you seek to fund. Do not describe general organizational evaluation practices; you already wrote about your overall goals and objectives in the Organizational Information section. Instead, define your project goals and the metrics you will use to evaluate success.
Pick three to five goals that you are confident you can achieve with the awarded funds, and remember to make your goals S.M.A.R.T. (Specific, Measurable, Attainable, Relevant, Time-Sensitive) in nature. Lastly, as always, ensure your goals match the grantor’s priorities.
To evaluate the results of the Seed Grant Program, Disability Connection, Inc. will use quantitative and qualitative data on output, outcome, and impact results.
- Within 6 months, DC will administer twenty $7,500 grants to partnering institutions previously unable to implement improvement plans.
- Metric: number of grants provided.
- DC will also provide additional educational accommodations for 1,250 students with disabilities within 12 months.
- Metric: number of increased accommodations.
- DC will increase the number of partnering institutions able to implement improvement plans by 20 within 6 months, which will decrease DC’s rate of failed plan implementation by 10%.
- Metric: Number of partnering institutions able to implement improvement plans.
- DC will provide educational accommodations for 1,250 additional students with disabilities within 12 months.
- Metric: number of students who receive additional accommodations.
- DC will increase external support for 20 partnering institutions within 12 months.
- Metric: Number of additional funding sources for each institution from outside stakeholders.
- DC will increase the quality of life for 1,250 students within 12 months.
- Metric: Pre & post-program evaluation of parent and student input conducted at each partnering institution.
Evaluative practices help organizations achieve more significant results. Grantors require evaluation information to better understand your ability to think proactively, to develop a feel for the impact of their potential investment, and to verify that you used the funds correctly. To develop a robust evaluation plan, start with a S.M.A.R.T. goal, and then determine the appropriate metrics to measure the success of that goal. Keep in mind the difference between quantitative and qualitative data when developing metrics, and make sure to describe a balance of inputs, outputs, outcomes, and impacts.