It is important to first define the key terms in order to understand the question and how best to respond. I will begin with Data-Driven Decision-Making, as it is the easier of the two terms to pin down. Data-Driven Decision-Making is the act of using the wealth of data the educational system collects about each student to guide educators toward effective methods they can apply to the student’s learning environment to effect meaningful positive change. The simple explanation of Data-Driven Decision-Making is to call it educated guesswork based on information that is flawed by necessity. Evaluation may seem simple, but it is a complicated process of trying to balance complacency, fear of loss, hidden biases, and ignorance. Evaluating data only seems simple until the educator factors in all the flaws inherent in the data. The educator will then evaluate the evaluation process itself out of a kind of redundant masochism and fall into a trap familiar from the old trope: they know that I know that they know that I know.
The data involved in Data-Driven Decision-Making comes in various forms and represents various aspects of the student’s educational ecosystem. The simplest is what appears to be empirical data; numbers on a page are believed to be sacred. They are not. The amount of numerical data needed to accurately represent a student’s ability would take so long to procure that the student would, of necessity, cease to learn. The first flaw of empirical data, therefore, is that educators must extrapolate from insufficient data. The second flaw is that the very nature of gathering the data tests only one form of student understanding. A student may excel at taking tests while not understanding the material being tested. An example of this can be found in the Defense Language Aptitude Battery exam offered by the military. Test-takers are asked to learn a language that is entirely made up. They are given an incomplete and simplistic set of rules for this language and are then examined on their understanding of it. The test can be passed, but what type of expertise is it measuring? The third flaw of empirical data is the inherent bias in questioning. It is difficult to write a test free of such bias: it may be a question about country music on an educator’s licensure exam, or an example item the student has never encountered. Cultural testing bias is difficult to overcome and can have a serious impact on the data gathered. In the end, how reliable can this empirical data be? Are hard numbers as pure as we believe? All of this is further compounded by the mood of the student: did the student put in their full effort?
That leads to the flaws of socio-emotional data gathering within Data-Driven Decision-Making. As stated above, cultural differences may skew the data, since the student and the evaluator may not share cultural norms or understandings. How is a shrug of the shoulders scored? What does it mean when a student does not look you in the eyes? Does the student see you as a safe adult or as a threat? These questions inevitably lead to suggestions of broad-spectrum or blind testing, but do those methods account for self-reporting bias, or will the data be diluted by a lack of specificity? Furthermore, schools are asked to gather socio-emotional data while facing a shortage of qualified professionals trained to administer the various forms of socio-emotional assessment. With all of these monumental challenges, educators are asked to evaluate the data gathered and apply the best practices of Data-Driven Decision-Making to make decisions with far-reaching implications for every student’s life.
Though the data may be flawed, the evaluators should be able to overcome those flaws with experience and expertise. This would be the case if every teacher were not facing a deluge of data and extra responsibilities. The market for education-related services is fierce and growing every day. Each provider offers more data, more metrics, and more multi-colored bar graphs. School districts snap up these new tools, train their staff, deploy the tools, and gather the data. This shrinks the time educators have to evaluate the data, deploy changes for students, and adapt to new information. Teachers begin to cull the herd of students who would require their precious attention. The first batch culled is those who are doing great in class and thus appear to have nothing to gain from time spent evaluating their data. This is a mistake: the data offered by those students may offer a glimpse into what made them successful, but alas, they were cut first. The second group culled is those students who are passing or hiding in the shadows. No parents are blasting emails, no report is required to be filed about their performance, and there is no reason to add them to a team meeting schedule. In some cases, the teacher may not even be able to identify such a student outside of school. This too is a mistake: these students may have simple answers that could make a monumental difference in their educational experience and improve their learning in ways that could change the future. Again, there is just not enough time to help them. The last group culled is the lost causes: the students who will not try, those with just enough respect to show up, and those who stand challengingly eye-to-eye with their teacher. These students have been judged beyond redemption. This is the most egregious sacrifice on the altar of time. These students are waiting for someone to break into their world and show them the path back to being a part of their peer group. When these students see that they have been given up on, they will gird themselves with that feeling of loss and fight the system that has judged them of lowest value. The teacher has snuffed out a light rather than raising it to shine with all the others.
These evaluations leave a manageable number of students who can be reported on, and emails can be sent showing progress. Spreadsheets can be filled in. Parents can be consoled that there is progress, and community stakeholders can be told their money was well spent because success has been proven. In the quiet moments at the end of the day, an educator may admit to themselves that they could have done more. They wrap themselves in the laurels of success while ignoring those who have fallen to the evaluations of time and worth. Over the years, the educational system comes to look like a robot that has been programmed to climb mountains. It finds stability before it raises the next leg to the next highest ledge. The robot’s progress is not straight or ideal, but it is progress, and it reaches the top eventually. If the programmers add more data-gathering methods, they could improve the robot’s progress, or they could bog it down with indecision as it tries to process all the information and is paralyzed by the added levels of complexity.
In the end, Data-Driven Decision-Making and evaluation are subjective. They are fundamentally flawed, and all those involved know this at some level of consciousness. This is not to discount the value of Data-Driven Decision-Making or evaluation. The educators who rely on them are doing their best to find incremental means to improve the lives of those few students they can affect. Over the course of a student’s time in school, the process must be restarted repeatedly, as data is rarely passed on or experiences codified into longitudinal methods. The greatest example of this is the use of student numbers to replace the student’s identity. There are privacy reasons for this, but the dehumanizing effect can be felt by all. There is no scale at the end of each educator’s career to weigh the successes against the failures. There is no leaderboard for staying up late trying to understand how to engage one student in a room of thirty or more. Data-Driven Decision-Making uses imperfect data, drives incomplete decisions, and relies on an evaluator who is trying to do their best despite all that is being thrown at them. The system is not perfect; it cannot be perfect. Every day educators face decisions that are impossible to make; each choice takes a piece of their heart, and they hope that they chose the path that takes the least while returning a glimmer of hope.

Hope and heart are the unquantifiable elements of evaluation; they are the means by which expert educators fill in the gaps in the data. Educators may call it experience, and they may be asked to pass on their methods, but hope and heart transcend any system. No district has a plan-on-a-page that lists hope and heart. Data-Driven Decision-Making is not entirely new to education. It is the system by which educators do their best every day to make incremental improvements in the lives of their students, based on evaluating data that strips the humanity out of the student. Educators humanize that data; they hear the voice in the numbers; they go beyond the statistical information and look for the outliers. When the lights go out at the end of the day, it is the educator who carries home the choices they made and asks how they will do better tomorrow. Data-Driven Decision-Making and evaluation are much like Winston Churchill’s view of democracy: “No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”