By: Lillian Pierson
I’ve recently been tasked with building and managing a Monitoring and Evaluation Team for an organization within the digital humanitarian response community. Although many groups have produced after-action reports since at least 2010, Monitoring and Evaluation (M&E) is a relatively novel, and absolutely vital, function in the digital humanitarian response space. We need to monitor and evaluate how our work products perform on the ground so that we can optimize our workflows and increase the effectiveness of our products. In digital humanitarian response, the main goal is to provide timely and accurate information to field humanitarians (or affected populations) who are responding to save lives or aid crisis-affected communities in an emergency. Within this core function, we must also ensure that any information we release adheres to the “do no harm” principle.
To monitor and evaluate the effectiveness of our workflows and product utilization, we must start by taking a broad view of the humanitarian response ecosystem and probing to discover our relative position within it. For a visual overview, please see the infographic posted below. We need to ask questions about on-the-ground information product usage (by field response organizations), information product utilization and tracking (by other digital humanitarian response organizations), information product monitoring (i.e., quality assurance and quality control), and M&E team function and efficiency (a project management and implementation task).
Beyond asking the right questions and evaluating the responses, as part of building an M&E team I have also researched the categorical functions of sub-teams within a typical monitoring and evaluation implementation. As stated above, M&E is a novel function in the digital humanitarian space, and the technical nature of digital humanitarian response cannot be omitted from consideration when building a team. Borrowing from and adapting UNDP’s Handbook on Monitoring and Evaluation for Results, my initial recommendation is that an M&E Team in digital humanitarian response should have the following four functional groups:
- Outcome Monitoring Team (An Implementation Role)
- Outcome Evaluation Team (An Implementation Role)
- Partner Engagement Team (An Implementation Role)
- Monitoring Tools and Mechanisms Team (A Technical Research and Development Role)
From there, I have formulated a basic set of questions whose answers can be assessed to form some basic plans for how an M&E Team might best operate within the disaster response framework.