Staying on Track: Evaluating Relevancy Efforts

The Relevancy Roadmap originated at a July 2018 meeting where a group of conservation leaders and experts drafted a conceptual map of the barriers inhibiting conservation agencies from serving and engaging broader constituencies. In preparation for that meeting, attendees who were involved with prior related efforts (e.g., Wildlife Governance Principles, Agency Transformation, public trust doctrine, R3) conducted a literature review of published research and summaries of work groups and think tanks that had previously explored the challenges of increasing fish and wildlife relevancy. Absent from these summaries were both a clear description of the ultimate outcome(s) of increased relevancy and guidance or best practices on how organizations could identify and measure the success of the ultimate or mid-term outcomes of their efforts.

This lack of outcome-based approaches to relevancy, and the consequent lack of data on the mid-term and ultimate effectiveness of those efforts, proved a challenge to those drafting the strategies and tactics identified in this document. In contrast to ecological aspects of natural resource management, work done by agencies and organizations related to public engagement has been largely developed, conducted, and replicated without the benefit of a well-designed adaptive management framework that measures and documents outcomes. That is, these programs or efforts are often designed, implemented, avoided, or discarded on the basis of staff opinion or impression rather than on evidence of effectiveness gleaned from a carefully constructed pilot effort, focus group, or hypothesis test.

Therefore, the editors of, and collaborators on, the Relevancy Roadmap chose to present the challenges to serving and engaging broader constituencies in a way that is easily integrated into an actionable, adaptive approach. Barriers to engaging and serving broader constituencies were organized into five categories. Specific strategies to address each barrier included a series of incremental changes or steps that must be completed to reach an outcome that reduces or eliminates the barrier. Finally, each step within a strategy included tactics that could be used to accomplish that step. The tactics are in no way a comprehensive list of actions, but rather the current “best advice” of the editors and contributors of this document. By framing the problems related to relevancy in this way, the Roadmap provides both a starting point for applying each strategy and an assessable framework for implementing it.

Each strategy is constructed as a series of steps. These steps should be viewed as sequential or intermediate results that must be achieved to produce a desired outcome. It may be useful to think of these steps as changes that occur in an “if, then” logic flow: each change must occur before the subsequent change can fully occur. If one of the changes is not made or is not fully accomplished in a strategy’s implementation, then the changes that follow are less likely to be achieved, and the ultimate outcome of engaging and serving broader constituencies may remain unrealized.
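This “if, then” dependency between steps can be sketched in a few lines of code. The sketch below is purely illustrative (the step names and completion states are invented, not drawn from the Roadmap): it walks an ordered list of steps and reports the first unaccomplished step, on which all subsequent steps depend.

```python
# Hypothetical sketch: a strategy as an ordered list of steps, where each
# step must be accomplished before the next can fully occur.
steps = [
    ("Step 1: commit resources for social science research", True),
    ("Step 2: gather data on broader constituents", False),
    ("Step 3: design engagement programs from those data", False),
]

def first_blocked_step(steps):
    """Return the first incomplete step; every later step depends on it."""
    for name, accomplished in steps:
        if not accomplished:
            return name
    return None  # every step accomplished; the outcome can be realized

print(first_blocked_step(steps))  # the strategy drifted off course at Step 2
```

The point of the sketch is simply that evaluation happens step by step: finding the first blocked step tells a practitioner where an effort got off track, which mirrors the diagnostic role of mid-term outcomes described below.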

In planning for the implementation of a strategy, it is important to review the primary types of information needed to understand the effectiveness of a strategy’s implementation. Program or process evaluation should generate two types of information: 1) information capable of indicating that the effort produced the desired effect (ultimate outcomes), and 2) information capable of identifying where an effort may have gotten off track if its ultimate outcomes were poorly achieved (mid-term outcomes).

It is beyond the scope of this document to identify all the ways these two types of information should be collected (i.e., the specific evaluation tools used to impartially document the success of strategy or tactic implementation) in the course of implementing strategies to increase relevancy. The variety of strategies and tactics contained in the Roadmap, and their variance in scope and scale, simply cannot be evaluated in a one-size-fits-all fashion. For each strategy, a combination of evaluation techniques will likely need to be applied. Examples could include, but are not limited to, the following:

  1. Internal or external surveys to document the values, beliefs, and interests of various groups, audiences, or demographics.

  2. Third-party assessments of processes, procedures, and policies to determine their alignment (or lack of alignment) with broader constituent engagement.

  3. Analysis of databases (e.g., license/access pass holders) or survey data to identify changes in outdoor user demographics.

  4. Pre/post-strategy assessments of organizational capacity in public engagement expertise.

The most effective evaluation tools and techniques for each strategic step or tactic will need to be determined by those who create an implementation plan for that strategy. Ideally, human dimension researchers or social scientists will be integrally involved in selecting the type of evaluation tools needed to prove and improve a program or process. Just as wildlife or fisheries biologists are employed by agencies to develop, implement, and monitor natural resource management efforts, so too should social scientists be enlisted to assist in the development, implementation, and evaluation of public engagement efforts. If organizations wishing to apply strategies or recommendations within the Roadmap do not have access to human dimension expertise within their organization, it is strongly encouraged that these agencies secure that expertise (via new staff hires or external consultants) as a first step in implementing nearly all the strategies listed in this document.


© Florida Fish and Wildlife Conservation Commission

When evaluating relevancy efforts, agencies must engage in social science. This includes analysis of changes in outdoor user demographics.

The Relevancy Roadmap as a Foundation for Evaluation

To understand how practitioners can integrate evaluation into the implementation of strategies presented in the Relevancy Roadmap, consider, as an example, the strategy addressing Barrier 2 of the Agency Capacity section. This strategy includes six steps (changes) that should occur to address the barrier of an agency’s lack of capacity to identify, understand, engage with, and serve the needs of broader constituencies. Like the steps within it, the strategy itself is written as a high-level change that must occur to reduce the barrier. In this case, that change is “increase capacity to identify and engage with broader constituencies”. The simple logic is that if an organization does not have the resources and expertise required to engage and serve broader constituencies, it should acquire that capacity before implementing efforts, programs, or practices targeted at those constituents. If this is not done first, the agency’s relevancy efforts will be limited in their design, delivery, and effectiveness. To remove the barrier, the change that must occur in the system is that the agency increases its capacity.

By considering the strategy and the steps within it as statements of change, practitioners are presented with the opportunity to view each step as a hypothesis that must be validated. For example, consider Step 1, “Commit existing or acquire new resources to gather social science data or conduct new research to identify and better understand agency constituents’ interests.” This step not only informs the practitioner of what should be done first to implement the strategy, but also identifies what must be accomplished before Step 2 is taken. The degree to which this first step has been accomplished should be monitored and evaluated. During implementation, if the agency measures whether the needed capacity actually changed, it will have data to determine whether the strategy is on track or drifted off course at this particular step.

To evaluate Step 1 in this strategy, the practitioner must specifically determine to what degree “new resources” have been committed and whether they were sufficient to move forward at the time the step was implemented. An objective for this step could be written as “by fiscal year 2021, 100% of existing or new resources needed to conduct social science research have been acquired, committed, or allocated”. This statement denotes the indicator that can be tracked to determine the degree to which this step was completed. In this case, that indicator is the actual percent of resources that were acquired, committed, or allocated. If only 50% of the needed resources were acquired, subsequent steps in the strategy will likely be impacted. The limited success of this step may prove to be a critical factor in the degree to which the entire strategy was, or was not, effective. But if 100% of the needed resources were acquired after Step 1 and the strategy still only achieved limited success, then adjustments may be needed elsewhere in the strategy.
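The objective-and-indicator logic above can be expressed as a small tracking calculation. This is a hypothetical sketch: the dollar figures and the 100% target are invented for illustration, not taken from any agency’s plan.

```python
# Hypothetical sketch: tracking the Step 1 indicator -- the percent of
# needed resources actually acquired, committed, or allocated.
def indicator_percent(acquired, needed):
    """The indicator: percent of needed resources secured."""
    return 100.0 * acquired / needed

def step_accomplished(acquired, needed, target_percent=100.0):
    """The objective is met only if the indicator reaches the target."""
    return indicator_percent(acquired, needed) >= target_percent

# Illustration: only $50,000 of the $100,000 needed was committed.
pct = indicator_percent(50_000, 100_000)
print(f"Indicator: {pct:.0f}% of needed resources committed")
print("Step 1 accomplished:", step_accomplished(50_000, 100_000))
```

Separating the indicator (what is measured) from the objective (the target it is measured against) is the design choice that makes a step evaluable: the same indicator can be re-checked at each reporting interval without redefining the objective.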

Those using the Roadmap will need to draft an implementation plan for their selected strategies. While drafting the plan, measurable objectives and their associated indicators should be identified. The plan should also lay out how monitoring of the objectives and indicators will be performed. Human dimension researchers or those trained in the social sciences should be integrally involved in this process.

When implementing the Roadmap, practitioners should think of their work as a pilot test. The approach used will likely need to be adjusted in real time to adapt to the current fiscal, political, cultural, and agency environments. Evaluating progress in a strategy’s implementation at each step will allow practitioners to determine accomplishments and identify where an effort may have gotten off track. These data will help an agency make informed decisions on how to modify, scale, replicate, and increase the effectiveness of its efforts. If the natural resource management community commits to outcome-based application of its relevancy efforts, the community as a whole may begin to refine its approaches into effective strategies that engage and serve broader constituencies. A key premise of the Roadmap is that agencies will share successes, challenges, and mistakes with the community. Systematic and effective evaluation of implementation processes and outcomes will be critical to the success of agencies engaging and serving broader constituencies. If this occurs, the next iteration of the Roadmap will be less a compendium of hypotheses and more a catalogue of proven strategies.
