Tech Impact’s Idealware is, at heart, a research organization. Every article or report we write that includes reviews of software products follows a specific research methodology. In general, our articles follow our “Few Good Tools” methodology, while longer reports use our “Consumers Guide” methodology, both described below.
We keep a database of products and reach out to vendors for information about specific products. If you’d like us to add your product to our database, email overview information to chris@techimpact.org.
Few Good Tools Methodology
Few Good Tools articles are short round-ups intended to give nonprofits an overview of some of the major software products in a particular space. They’re not intended to be comprehensive, but rather to summarize a small set of proven options as a starting point.
These articles are typically researched and written by an Idealware staff member, but occasionally we’ll hire a contractor—usually a technology consultant who specializes in nonprofit software selection. No writer will ever have paid ties to a vendor or specialize in implementing a particular software package. However, writers will frequently have more personal experience with some products in the article than with others—for instance, they may use, or have implemented, some of them but not others. The only alternative would be to hire researchers with no software or nonprofit experience, resulting in articles poorly grounded in the realities of nonprofit technology. We’ve defined a process, outlined below, to minimize any potential biases this might produce.
For each Few Good Tools article, we:
- Identify five to ten nonprofit staff members and consultants who have experience evaluating more than one software product in the area. We typically recruit these contributors through our own database of several hundred potential nonprofit experts, and through posts to public email discussion lists, like Progressive Exchange and NTEN Discuss. We’re always looking for those willing to contribute knowledge to articles—email chris@idealware.org if you’d like to be added to our database for future articles.
- Once we identify the contributors, we interview each by phone to ask what factors are important to consider when looking at software in the area, what software packages they have found to be high quality and a good value in their own work, and how these packages compare.
- Based on the interviews, we write an article summarizing the considerations and the products mentioned by at least two of the contributors. We do not review software ourselves for these articles, but rather summarize high-level information relayed to us by multiple contributors. If a number of contributors have negative opinions of a product, we leave it out of the article—Few Good Tools articles only include software products that we believe to be a solid choice for a number of organizations.
- We then send the draft article to the contributors, asking them to flag any inaccuracies or concerns, and revise the article accordingly.
- Each article includes a list of contributors. Every article published after July 2009 also includes a thumbnail summary of each contributor’s experience with the software area, including what products they have direct experience with, to allow the reader to assess their perspectives for themselves.
There’s a downside to this style of article—it doesn’t help organizations find excellent but lesser-known tools. These shorter articles are intended to provide a summary level of information, while our more detailed reports cover particular areas comprehensively. For those reports, we more carefully define the specific criteria by which software will be included, and reach out to vendors and the community to attempt to find all applicable tools.
Consumers Guide Methodology
Methodology tends to vary a bit from report to report, depending on the precise goals of the research, so it’s worth consulting the methodology section in the report itself to find details for a specific project. However, we do have a general process that we use as a baseline for all our detailed comparative reports.
- Define the Core Team. A large report typically has a team of several people, including a lead researcher, a subject matter expert and potentially a research assistant and writer. No member of the core team will ever have paid ties to a vendor or specialize in implementing a particular software package. However, as with our Few Good Tools articles, team members will frequently have more personal experience with some products in the report than with others—for instance, they may use, or have implemented, some of them but not others. We’ve defined a process, outlined below, to minimize any potential biases this might produce.
- Create the List of Tools to be Reviewed. The team begins by carefully defining the area the report will cover. What audiences and business issues will it target? What basic criteria must the tools meet to address these business needs? Through Internet research, searching our database, and/or informal polls of the nonprofit community, the team generates a laundry list of tools that meet these high-level criteria. For some reports, we’ll only be able to review a limited number of tools, and we’ll need to refine the business criteria to pare the list down to a number we can reasonably compare. Idealware never uses a “Pay to Play” model, where vendors must pay a fee to be included.
- Create the Criteria for Review. Many organizations’ needs are quite similar, which makes it possible to define a common set of review criteria. To understand these needs, the team talks to a set of consultants and organizations who have worked with relevant tools. The authors then pull out the list of features and qualities that will address most of the needs of the majority of target organizations. These needs form the basis of the software evaluations.
- Conduct Summary Reviews. For some reports, we work with a representative for each product to see a quick demo over the web—if the vendor doesn’t have a web demo tool, Idealware provides one. These demos are tightly focused on a particular set of questions and have a specific time limit—often a half-hour—to allow us to review many tools in a short period. The information we gather from these quick reviews is not sufficient for detailed comparisons, but it gives us a useful high-level sense of each product’s strengths and weaknesses, and enough to create thumbnail reviews.
- Conduct Detailed Reviews. To perform thorough comparisons of products, we set up long demos of each software package—often two to three hours—with product representatives. We ask them to demonstrate each of the key features over the web, which lets the team evaluate the true nature of each feature, follow up with probing questions and consider product usability. The team writes up the findings for each tool in a standardized product summary template.
- Gather User Feedback. Some traits—especially the quality of a vendor’s customer service—are hard to measure without actually using each product, so the team gathers this type of data from real users. Product users are not always easy to find—we pull them from those who have volunteered to take part in Idealware surveys, clients the team can find via the web and, if necessary, clients provided by vendors. The report team collects user opinions through surveys, interviews or both.
- Define Rubric and Guidelines. The vast amount of data collected has to be analyzed so that others can make sense of it. The team works together to create a rubric, or grading scheme, to allow them to compare the different aspects of the tools. (For instance, what are the qualities that define excellent reporting functionality and unusually poor reporting functionality?) As the authors go through this process, they also define the key differences between the tools for use in the “How to Choose” section and comparison charts.
- Write the Report. The team writes a summary of the typical features and functions of the reviewed software tools, a set of guidelines for how to choose the right tool, detailed reviews of each product, and comparison charts. To create recommendations, they carefully pull out the tools that best meet the needs of each of the “typical” organization profiles.
- Review, Revise and Publish Report. The authors then distribute a strong draft to a wide set of reviewers in the target audience—some with experience in the report area, some without—and revise the draft based on their comments. Finally, the finished report is posted and distributed.