In its simplest form, fidelity assessment helps to answer three questions: Are you doing what you intend to do? How do you know? And, does it make a difference?
If the intention is to use an innovation, is it being used in practice as intended? How do you know? Is there a way to assess the presence and strength of the innovation for every practitioner who is using it? And does use of the innovation as intended make a difference in terms of outcomes; that is, do high fidelity scores correspond to excellent outcomes and low fidelity scores to poor outcomes? Answers to these questions are critical to implementation, improvement, scaling, organization change, and system improvement. In this section the focus is on the use of fidelity to assess the quality of implementation supports and to guide improvement.
A practical measure of fidelity highly correlated with outcomes is part of each Usable Innovation. With a strong correlation (e.g. 0.70 or better), if fidelity is known then future outcomes can be predicted. Instead of lamenting poor outcomes a year from now, fidelity brings the (predictable) future into the present where there is an opportunity to create a better future by improving fidelity. If you know fidelity (high or low), you have a good idea of what the outcomes will be (excellent or poor). If a measure is not available, or the measure is not practical to use in the daily practice environment, it is up to an Implementation Team to create a fidelity measure and improve it with experience. Having no way to assess fidelity is not an option in implementation practice or science.
Fidelity criteria are minimum criteria: that is, if a practitioner is doing these things with this level of quality, then recipient benefits should be apparent. As practitioners continue to meet fidelity criteria they begin using the innovation in nuanced ways that go beyond the minimum criteria and provide excellent examples of the innovation done with real expertise and decidedly improved outcomes. With the advice of competent coaches, skilled practitioners eventually can deviate from the basic fidelity criteria in ways that maintain the integrity of the innovation while adding value (Szulanski & Jensen, 2006). A person who has learned to play the scales and read music meets minimum fidelity criteria for a piano player but is not yet an accomplished pianist. Like pianists, experienced high fidelity practitioners are artists whose work can be studied to expand knowledge of the innovation and supports for its use in practice (Wolf et al., 1995).
There is a continuing discussion about “fidelity first” or “adapt to adopt.” Active Implementation focuses on fidelity first. Given the need to improve service outcomes, it is assumed that initially there will be a poor fit between an interaction-based innovation and any existing staff group, organization, and system. It is assumed that the existing organization structures, roles, and functions that support existing (less than desirable) outcomes will need to change so that newly designed organization structures, roles, and functions will support an innovation that produces better outcomes. Thus, when attempting to use an innovation, the goal is to use the essential components of a Usable Innovation with high fidelity and change the organization to fit the innovation. The status quo is powerful; fidelity is a lever for change.
The results of performance assessments have many practical uses. Coaches can use the information to sharpen their professional development agendas with practitioners. Administrators can use the information to assess the quality of training and coaching. Implementation Teams can use the information as a guide for implementation at the practice and program development levels. And researchers can use the information as an outcome measure in some studies and as an independent variable in others. Fidelity is not a characteristic of practitioners, nor is it a characteristic of organizations. It is something to work for every day.
A strong variable in implementation is staff selection. Advice from business leaders is that high functioning organizations should be difficult to get into and easy to get out of. Once a person is hired or promoted or reassigned then the organization has an obligation to that person. If the decision to hire the person was a poor one, the person consumes considerable time and effort as supervisors, managers, human resources staff, administrative staff, and others try to help the person do the intended work in spite of the person’s apparent lack of skill or motivation. The best way to solve these problems is to prevent them with a good staff selection process. The process applies to people being hired as new practitioners and to people already employed in an organization who are being reassigned to begin using an innovation. The goal is to have practitioners who are ready, willing, and able to do what the innovation requires and who are enthusiastic about participating in the process of learning and using the innovation.
Much has been written about the readiness of practitioners to use an innovation and the importance of the fit between the innovation and practitioner values, beliefs, psychological states, and so on (Aarons et al., 2011; Proctor et al., 2009; Stetler, McQueen, Demakis, & Mittman, 2008). In Active Implementation, these factors are assessed and created in the selection process. If newly hired or reassigned staff are “resistant” to change or are a poor fit with doing the work required by an innovation or are reluctant to be coached or refuse to participate in fidelity evaluations, then the selection process needs to be reconsidered and improved. In this sense “resistance” is not the fault of the individual but does reflect the need to improve the use of implementation supports.
Staff selection is not discussed often and is rarely evaluated in human service programs. Nevertheless, selection is a key ingredient of implementation at every level:
- selection of practitioners,
- selection of organization staff (administrators, managers, leaders), and
- selection of Implementation Team members (trainers, coaches, evaluators, change agents).
Selection of staff is important to having effective practitioners, excellent trainers, effective coaches, skilled evaluators, facilitative administrators, or effective purveyors. Not everyone is suited to each role. For example, people who are outgoing and decisive may make good practitioners or Implementation Team members. People who are methodical and comfortable making judgments based on specified criteria may make better evaluators. People who are more comfortable with public speaking and “performing” might make better trainers or directors. With respect to given evidence-based practices or programs, the extent of knowledge and direct experience in the specific program or practice might be more critical for some positions than others.
Beyond academic qualifications or experience factors, certain practitioner characteristics are difficult to teach in training sessions and so must be part of the selection criteria (e.g. knowledge of the field, common sense, social justice, ethics, willingness to learn, willingness to intervene, good judgment). In research, these skills were the best predictors of future job performance and were best assessed with work sample measures (e.g. role plays) in the context of a structured interview process. Some programs are purposefully designed to minimize the need for careful selection. For example, the SMART program for tutoring reading was designed to accept any adult volunteer who could read and was willing to spend 2 days a week tutoring a child (Baker, Gersten, & Keating, 2000). Others have specific requirements for practitioner qualifications (e.g., Chamberlain, 2003; Phillips, Burns, & Edgar, 2001; Schoenwald, Brown, & Henggeler, 2000) and competencies (e.g., Blase et al., 1984; Maloney, Phillips, Fixsen & Wolf, 1975; Reiter-Lavery, 2004).
The Implementation Team is responsible for assuring the availability of competent interviewers (selection, training, and coaching of interviewers) and establishing recruitment and selection plans. A specific person is responsible for coordinating the quality and timeliness of interview processes for practitioners using an innovation.
General human resources staff can do advertising and can screen applicants for basic qualifications. Interviews are conducted by those who have a stake in hiring the person; that is, those who will “live with” the decision to hire. Typically, interviewers include a coach or trainer who knows the innovation well and has a “feel for” successful (and not successful) practitioners, a manager or director responsible for the use of an innovation, and a practitioner who has met fidelity criteria.
Staff selection also sits at the intersection of a variety of larger contextual variables. General workforce development issues, the overall economy, organizational financing, the demands of the evidence-based program in terms of time and skill, and so on all affect the availability of staff for human service programs.
Training for new hires or reassigned staff who successfully complete the selection process is a critical step toward using an innovation as intended. No practitioner should be asked to use an innovation without first completing training. This means that training is done as needed, and not on some fixed schedule – a challenge for some organizations and a topic for leadership and facilitative administration. Thus, a practitioner position stays “open” until a practitioner completes training. Untrained staff are not asked to “fill in.”
For implementation purposes, the goal of training is to teach practitioners the knowledge, skills, and abilities required to begin using an innovation. The training content is informed by the information developed to meet the Usable Innovation criteria. However, even then, there are choices to be made about what foundation information should be presented and practiced in training and what can safely be left for coaches to teach on the job. Philosophy, values, inclusion-exclusion criteria, essential components, what to do and say to do the innovation (from practice profiles), and the methods and content of fidelity assessments all inform the content of training. While the content is innovation specific, the methods of training are universal and apply across all innovations that require more than a little behavior change on the part of practitioners.
A member of the Implementation Team is accountable for assuring that training is timely, done as intended, and improved over time. Highly competent individuals provide training (e.g. trainers who have deep content knowledge and effective training skills). Trainers who teach sections of a pre-service workshop may be coaches, managers, fidelity assessors, practitioners who have met fidelity criteria, and others who are immersed in doing the work of assuring the full and effective use of an innovation in an organization.
Training is skill-based and includes opportunities for practice in the form of behavior rehearsals for essential skills, along with both positive and constructive feedback to participants. The behavior rehearsals in training are similar to those used in the selection interview process. The difference is that, in training, the skills are practiced and re-practiced until criteria for competence are reached (similar to the criteria for competence used in fidelity assessments). As Active Implementation trainers say, "Day 1 of training is done when Day 1 skills are learned" (sometimes at 10 at night for a given trainee).
Given the emphasis on skill development, a training session may have readings that can be completed by trainees prior to a training session followed by a brief introduction to the skills to be learned and why they are important. Perhaps a third of training time is spent in behavior rehearsals to begin teaching and learning the innovation-related skills. Discussion time interspersed with behavior rehearsals allows time to revisit what was provided in the readings and introductory information. This process helps to develop common concepts and common language for describing the essential components of the innovation as the skills are being taught and learned. The skill required of trainers cannot be emphasized enough. Trainers need to be content experts and skilled teachers. The preparation of trainers is an important step in the implementation and scaling processes.
Coaching and fidelity assessment are the eyes and ears of the implementation of any innovation. Fidelity is quality assurance: are we doing what we intend to do? How do we know? Does it make a difference? Fidelity sets a minimum standard for using an innovation. Coaching is the way to see the details of how an innovation is used and to help practitioners initially meet and then routinely exceed the fidelity standard. High expectations are set by fidelity standards; low tolerance for error is set by coaching.
With the exception of coaching, the competency drivers are episodic. Practitioners encounter selection processes once during the hiring process, participate in training once as they anticipate using an innovation, and participate in fidelity assessments a few times each year. Coaching is constant. If a practitioner is using an innovation, then scheduled and on-demand coaching support is being provided according to the coaching service delivery plan.
Coaching is essential because most skills needed by successful practitioners can be assessed during selection and introduced in training but really are learned on the job with the help of a coach. An effective coach provides “craft” information along with advice, encouragement, and opportunities to practice and use skills specific to the innovation (e.g. engagement, using an innovation in the setting, clinical judgment). The full and effective use of innovations requires behavior change at the practitioner, supervisory, and administrative support levels. Training and coaching are the principal implementation methods by which behavior change is brought about for carefully selected staff in the beginning stages of implementation and throughout the life of evidence‐based practices and programs and other innovations. Organizations make use of data to establish and improve coaching methods.
In human services, practitioners are the intervention. Evidence-based practices and programs inform where, when, and how they interact with recipients and stakeholders but it is the person (the practitioner) who delivers the intervention through his or her words and actions. In the transactional interplay between practitioner and recipient, each affects the other in complex ways. To help practitioners use an innovation as intended in this complex environment, coaching needs to be work based, opportunistic, readily available, and reflective (e.g. debriefing discussions). Spouse (2001) described four main roles of a coach:
- Supervision
- Teaching while engaged in practice activities
- Assessment and feedback
- Provision of emotional support
After a few decades of research on training teachers, Joyce & Showers (2002) began to think of training and coaching as one continuous set of operations designed to produce actual changes in the classroom behavior of teachers. One without the other is insufficient. With newly learned behavior there are several simultaneous problems that must be faced:
- Newly learned behavior is crude compared to the performance of a master practitioner.
- Newly learned behavior is fragile and needs to be supported in the face of reactions from recipients and others in the service setting.
- Newly learned behavior is incomplete and will need to be shaped to be most functional in a given service setting.
The selection and development of coaches is a critical role for Implementation Teams. In more mature organizations, coaches are selected from the ranks of high fidelity practitioners who are experts in the innovation and its craft knowledge. During start-up, in the absence of high fidelity practitioners, coaches need to learn the innovation and learn coaching skills simultaneously, a more difficult task for all concerned.