Organizational studies don’t have a very good track record for making a difference in organizations. There are lots of reasons for this. Sometimes the findings offend a high-level official, so the study never sees the light of day. Other times the client who asked for the study just isn’t in a position to do anything about the findings. Sometimes the findings are not credible, or are too abstract. Lots of reasons, but in the end, all too frequently, the recommendations are never carried out.
I am using the term "study" here, but you could substitute the terms "lessons learned," "evaluation," or "assessment" and see the same absence of results. In most situations in which an organization chooses, or is required, to take an in-depth, objective look at an issue, the study report is more likely to be the end rather than the beginning of change.
Participatory Action Research is a way to conduct an organizational study that has a very different outcome, that is, change starts to happen while the study is still being conducted. By the time the study is completed there is already a buzz in the organization about the issues, and people are trying out changes. There are a number of ways to design Participatory Action Research and I’ll write about other designs in upcoming posts. But central to all Participatory Action Research are four basic principles:
1. employ a social scientist to design the study to ensure the reliability and validity of the findings
2. involve a team of internal employees (who will be the beneficiaries of the findings) in all phases of the research, that is, data collection, analysis, recommendations and implementation
3. design organization-wide conversations about the findings and recommendations
4. assess the results of the implementation as input to continuing the organization’s learning about the issue being studied.
I’ve just completed a study for a government agency that I want to use here to illustrate how Participatory Action Research works and why it makes a difference when other studies don’t.
The Situation: For some time this agency has questioned the effectiveness of its longstanding process intended to improve the quality of its reports for clients. That process involves submitting the reports to a series of hierarchical reviews before they are released. My task was to take a systematic look at the hierarchical review process, to accurately describe how it works and, more importantly, to learn what impact the review process has on the quality of outcomes. And I had a very important second task: to conduct the study in such a way that the findings wouldn’t just sit on the shelf in some decision maker’s office. The goal was to embed the findings in the minds of organizational members.
The Participatory Action Research study I designed to address this issue used semi-structured interviews. That means there was a standard set of questions, but the interviewers were instructed to encourage interviewees to raise issues the interview team had not yet thought of and to follow those trails when they seemed promising. In a study like this one, where the issue is not well understood, having that flexibility is essential. Had interviewers just stuck with the original interview questions, a great number of critical insights would have been lost. Using semi-structured interviews and multiple interviewers required adding four research practices to the four principles listed above:
5. ensure consistency across multiple interviewers
6. take verbatim notes to reduce interviewer bias in the data
7. analyze interview data using qualitative data analysis software in order to compare the data from differing perspectives
8. ensure interviewee anonymity and keep faith with those who have offered their insights
The first principle is to have as a principal investigator a social scientist who is knowledgeable about how to conduct qualitative research, to ensure both the reliability and validity of the study. In this study I served in that role of principal investigator. But with Participatory Action Research the social scientist doesn’t conduct the study alone, which leads to the second principle.
The second principle is to involve an internal team of people who will be the beneficiaries of the findings. The beneficiary is not the person who sponsored the study; rather it is those whose work could change based on the findings – often those on the frontline. Participatory Action Research involves this internal team in all phases of the study. Team members become joint researchers, collecting data, analyzing the data, and constructing recommendations.
Identifying Interviewees: In consultation with the agency sponsor, four departments were selected to be involved in the study, making sure a diversity of types of work was included. I met with each department head to outline the parameters of the study and to offer anonymity, both for the department and for the interviewees.
Each department was asked to identify eight interviewees, including both frontline workers who produce the products and those in the hierarchy who conduct the reviews, taking into account a distribution of level and experience. Each department was also asked to provide a two-person interview team made up of one frontline worker and one person in the hierarchy.
Preparing the Interview Team: I developed a set of interview questions and a protocol for conducting the interviews, which I had tested earlier and revised based on the responses. A protocol is a step-by-step guide of how to conduct the interviews. It includes how to introduce the study to the interviewee, issues of confidentiality, how to probe responses, how to take notes, etc. The protocol satisfied the research practice of ensuring consistency across multiple interviewers.
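To make that concrete, here is a minimal sketch of what such a protocol might look like if captured as structured data; the steps and wording below are hypothetical, not the agency’s actual protocol.

```python
# A minimal, hypothetical interview protocol expressed as structured data.
# Each step pairs a stage of the interview with guidance for the interviewers.
INTERVIEW_PROTOCOL = [
    ("introduce", "Explain the purpose of the study and who will see the findings."),
    ("confidentiality", "Promise anonymity and describe how quotes will be sanitized."),
    ("core questions", "Ask every standard question in the same order and wording."),
    ("probe", "Follow promising trails the interviewee raises, e.g. 'Can you say more about that?'"),
    ("notes", "One team member takes verbatim notes; never paraphrase."),
    ("close", "Describe next steps: draft review, quote check, final report."),
]

for stage, guidance in INTERVIEW_PROTOCOL:
    print(f"{stage:15s} {guidance}")
```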
The interview teams from the four departments came together for the first of three meetings. In this first meeting I demonstrated the interview protocol and the interviewer group made revisions to the questions. Involving the interviewers in refining the questions was important for two reasons: 1) interviewers were able to alter the language of the questions so they would have more meaning to their colleagues, as well as add new questions that were important to the interviewers, and 2) the discussion and language changes helped the interviewers to “own” the questions – the start of making it their study.
Collecting Data: I assigned each interview team to a department other than their own to conduct their interviews. This not only reduced interviewer bias, but served to broaden the interviewers’ understanding of the review process.
The interviewers found conducting the interviews informative: they 1) discovered practices that would be useful in their own department, 2) empathized with the difficulties faced in departments other than their own, and 3) gained a clearer understanding of their own department’s process by seeing it through the eyes of others. After interviewing they reported:
“What is illuminating is seeing other processes.”
“It was very helpful. You don’t realize that things could be better or worse in other offices. We asked interviewees what things needed to be fixed and we thought we could profit from those fixes too.”
“It helped us look at our own process. We’ve revamped a number of things.”
“It was also revealing to us that we’re not doing that bad.”
The research practice of using verbatim notes to reduce interviewer bias in the data was employed. Taking down what an interviewee actually said rather than paraphrasing, which is the interviewer’s interpretation of what the interviewee meant, is critical to the validity of the study. Having a team conduct the interviews allowed one person to concentrate on the notes and the other to focus on asking the questions and follow-up probes. Verbatim notes are also useful in writing the report because interviewees’ actual words help to substantiate and bring the findings to life.
The teams conducted half of the interviews and I, as principal researcher, conducted the other half. The comparison between the content of my interviews and the interview teams’ interviews provided a reliability check.
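One simple way to operationalize that kind of check, assuming the interviews have already been coded with themes (as described in the next section), is to compare how often each theme appears in the two sets of interviews; large divergences would flag interviewer effects worth investigating. The sketch below is illustrative, with invented data structures and theme names.

```python
from collections import Counter

def theme_frequencies(interviews):
    """Return each theme's share of all theme codes across a set of interviews.

    `interviews` is a list of interviews, each a list of theme codes
    (a deliberately simplified stand-in for real coded interview data)."""
    counts = Counter()
    for themes in interviews:
        counts.update(themes)
    total = sum(counts.values())
    return {theme: n / total for theme, n in counts.items()}

# Hypothetical coded interviews from the principal investigator and the teams.
pi_freq = theme_frequencies([["rework", "delay"], ["delay", "clarity"]])
team_freq = theme_frequencies([["delay", "rework"], ["clarity", "delay"]])

for theme in sorted(set(pi_freq) | set(team_freq)):
    print(f"{theme:10s} PI: {pi_freq.get(theme, 0):.2f}  teams: {team_freq.get(theme, 0):.2f}")
```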
Analyzing the Interview Data: After the interviews had been completed, the interview team met again to report to each other on what they had learned. Out of those reports and the discussion that followed, the team identified common themes across all four departments. This was an important meeting that served several purposes: 1) it provided the initial themes used to code the data in a qualitative database, 2) it broadened the interviewers’ understanding of how the review process is conducted in different offices, and 3) the insights of this group provided additional data for the study.
A new group of frontline workers and members of the hierarchy was then brought together as a focus group. I provided them with a brief written description of each of the themes the interview team had developed, which they then discussed at length. This meeting served both to validate and to amplify the themes with new insights.
The research practice of analyzing interview data using qualitative data analysis software was particularly critical in this study because of the large number of interviews. This type of software makes it possible to sort unstructured data and then examine it through multiple lenses (e.g., role, theme, tenure). It makes it possible to organize and manage thousands of pieces of related information, explore themes, and make comparisons.
I entered all of the verbatim interview notes into a qualitative database and coded the data with the themes identified by the interviewers, as well as a number of additional themes that emerged from the lengthy task of coding. Coding the data entails assigning codes to every sentence of every interview and, as new themes emerge, returning to already-coded interviews to re-assign codes. Based on this analysis I then wrote a first draft of the report.
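For readers curious about what such software is doing conceptually, the sketch below shows the core idea in miniature: every coded excerpt carries attributes (role, department) plus a set of theme codes, and any combination of those can serve as a lens for retrieval. The excerpts, codes, and field names are invented for illustration; real packages (e.g., NVivo or ATLAS.ti) store far richer records.

```python
from dataclasses import dataclass, field

@dataclass
class Excerpt:
    """One coded piece of an interview; the fields here are illustrative."""
    text: str
    role: str          # e.g. "frontline" or "reviewer"
    department: str    # anonymized department label
    codes: set = field(default_factory=set)

# A toy database of coded excerpts (invented quotes).
database = [
    Excerpt("Reviews catch real errors but add weeks.", "frontline", "D1", {"delay", "quality"}),
    Excerpt("I rewrite sections a reviewer already approved.", "frontline", "D2", {"rework"}),
    Excerpt("I rarely learn why my edits were rejected.", "reviewer", "D3", {"feedback"}),
]

def filter_by(db, code=None, role=None):
    """Examine the data through a chosen lens: a theme code, a role, or both."""
    return [e for e in db
            if (code is None or code in e.codes)
            and (role is None or e.role == role)]

# e.g. every excerpt from frontline workers coded with "delay"
for e in filter_by(database, code="delay", role="frontline"):
    print(e.department, "-", e.text)
```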
The draft was sent to each of the interviewers to read and revise. The interview team came together for a third meeting to talk through the draft report. They brought with them their corrected copies of the draft. Each team was also asked to bring a thought leader from their own department to the meeting to participate in the discussion. In this review meeting the interview team made revisions to the draft and validated the findings. The team then turned their attention to constructing recommendations based on the findings, which then became a part of the final report.
By the third meeting, change related to the study issue had already begun to happen. There was now in each department a group of 5-6 employees (the interview team, the focus group members, and the thought leader) who had a thorough understanding of the findings and a vested interest in using them to make change.
The research practice of ensuring interviewee anonymity and keeping faith with those who have offered their insights was followed. Each person who had been interviewed and promised anonymity received a draft of the report and was asked to read through it to make sure none of the quotes could identify them. Before sending them the draft I had taken care to sanitize it as thoroughly as I could, but even so, a handful of interviewees asked for changes to their quotes to further ensure they would not be recognized.
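A first automated pass at that kind of sanitization can be as simple as substituting neutral placeholders for identifying terms, as in the sketch below; the names and terms are invented, and, as the quote-check step above shows, no automated pass substitutes for review by the interviewees themselves.

```python
import re

# Hypothetical map of identifying terms to neutral placeholders.
REPLACEMENTS = {
    r"\bMaria\b": "[name]",
    r"\bBudget Office\b": "[department]",
    r"\bsenior cartographer\b": "[role]",
}

def sanitize(quote: str) -> str:
    """Replace identifying terms in a verbatim quote with neutral placeholders."""
    for pattern, placeholder in REPLACEMENTS.items():
        quote = re.sub(pattern, placeholder, quote, flags=re.IGNORECASE)
    return quote

print(sanitize("Maria in the Budget Office flagged this first."))
# -> "[name] in the [department] flagged this first."
```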
By the time the final report was ready to send to the sponsor, sixty employees spread across the four departments had thoroughly read the draft, which meant there was already an organizational buzz about the findings of the report.
I now return to the third principle of Participatory Action Research: designing organization-wide conversations about the findings and recommendations. To produce change in an organization, the findings of the study have to become a part of the organizational conversation. With Participatory Action Research those conversations begin to happen during the study, but that alone isn’t adequate to bring about change. It is critical to convene small and large group conversations across the organization to provide employees the opportunity to process the findings for themselves and to think through what the findings mean for how their work needs to change. It is necessary to have such conversations among high-level decision makers, but equally critical that those whose work will change hold such conversations and make recommendations specific to their own departments.
Participatory Action Research is, at its heart, a conversational process, not an impersonal survey, nor an experimental design with “subjects”; rather, it is a conversation between organizational members. That conversation takes place between interviewer and interviewee, between members of the interview team, and between organizational members in the conversations set up to discuss the findings. All of the conversations are a part of the change. In the end all change is a result of organizational conversation – the only question is who is invited into the conversation. Participatory Action Research is a methodology to focus the organization on conversations that matter and to invite into the conversation those whose work will change.
For the agency study I have used as the example in this post, we are now poised at the step of designing the organization-wide conversations. So stay tuned.