Monday, April 6, 2015

Why Go Big or Go Home Doesn't Work for Professional Development








We've all been to a "launch" - a large scale, whole group PD where the "answer" is provided for us, where we are told that if everyone does just these one or two things, we will see transformation in our schools. Some time goes by, and then we are invited to the next "launch".
Whole group PD has a purpose, but teaching and learning are not it. Whole group PD is a great way to set and agree to common expectations, develop implementation plans, and to align practice with a vision. What it cannot and will not do is change practice.
We often resort to this type of PD because it is efficient. But, efficient and effective are two different words. Even if every participant marked "strongly agree" on the evaluations and the staff was highly engaged, just as with teachers' lessons, we can't stop at modeling.
Professional development is not an event; it is a process fostered by collegial relationships, and it can only be effective when framed and guided with that mindset.
Other considerations. Too often, we leave whole group PD without some basic structures in place. We need to ask and answer:
1. How will we make this practice routine? This often requires a cultural shift, meaning that it is important to identify frontrunners and success stories early and often.
It also requires that EVERYONE at some point look at the implementation and make adjustments accordingly. Are we still working with the same understanding? Perhaps, one of the steps doesn't really fit some classes or is too cumbersome. Perhaps, someone has figured out how to do it better.
2. How much disruption/failure is acceptable? When the staff walks out, are you asking them to drop everything they are doing/have planned and to focus on this strategy? Is it o.k. to focus on specific elements until all are mastered?
Have we communicated that there might be implementation dips and given supports to overcome those dips? People often panic when things aren't perfect right away and go back to doing what they were doing even if others are successful.
Additionally, we have to have realistic expectations based on each individual staff member's capacity. Sure, there are teachers who can walk right out of a PD and implement everything they learned, but, depending on your staff, there are some teachers who might actually take a whole year or even two to be able to maximize a strategy (think about your teachers who are struggling with classroom management, basic planning, or who are just getting foundational skills).
This also means that you, as the leader, are going to have to be prepared to answer questions about implementation effectiveness when pressed about why "everyone" isn't great yet.
3. How will the leaders continue to communicate/emphasize expectations and hold staff accountable? We often roll out PD, but don't give anyone specifics on this. And, it causes a panic when it seems that the leaders are all of a sudden asking about something from 2 months ago.
4. How will the professional development reach the individual staff member? After the whole group PD, how will leaders ensure that there is continuous support and feedback?
This is one of the most difficult aspects of PD, especially if you have a larger staff. The onus of this work has to fall on the leaders at the beginning and shift to peers later on; but, the key is that it can't wait until the next PD day - it has to be built into the actual work day (teacher meetings, observations, peer learning groups, etc.). This means that the work of changing practice has to be narrow enough that you can manage it, but not so narrow that it dampens innovations that can occur while everyone is getting their "sea legs".
Even though I know all of these things, I still have to keep going back to remind myself about the importance of each consideration. Like everyone else, I'm busy and trying to juggle everything. But, the nagging question of effectiveness is always in my mind...and, regardless of any other task, I believe that building the capacity of my staff is the most important, most lasting work that I do. And, I want to be able to say that I have been able to positively affect my teachers' careers and improve student learning every day.
What PD advice do you share with other administrators?

Sunday, March 29, 2015

How Prototyping is Changing My Leadership







This year, two of my teachers attended the Chicago Public Education Fund Innovation workshops. The workshops encourage school teams to create prototype projects to improve education that are then funded based on merit.
As a result, we introduced the idea of prototyping to our staff this year through an activity called the Marshmallow Project, where teachers had to build towers with spaghetti and marshmallows. The most successful teams were those who built as they planned (they were able to build the highest towers) versus those who spent most of their time planning (they built smaller towers IF they were able to complete a tower at all).
We emphasized that teaching should be like this - quick cycles of problem solving and evaluation. I really enjoyed the project, and I decided to apply the strategy to my own leadership, which has meant selecting one thing to really focus on and building quicker cycles into it.
My prototype cycle has gone like this:
1. Present/plan expectations with staff
2. Provide coaching to staff through various venues (coaches, e-mails, feedback on initial tries, etc.)
3. Meet with each staff member to look at evidence of implementation
4. Follow up with written feedback and observation cycles
I used the Domain 4 rubric for my individual staff meetings, which helped the teachers frame their work as "professional practice". Domain 4 of the Danielson rubric covers what most of us would deem administrative expectations.
These meetings were a great opportunity to make sure that every staff member really understood expectations, to get detailed feedback on school implementation, and to see where both the administration and teachers needed to focus to take our school to the next level.
Our school, as a whole, learned something. For instance, we found that teachers made phone calls, but that the calls were about academic failure and behavior. So, we had very few teachers contacting parents to share what was being taught or how they might support their students. Definitely something to think about!
The meetings with the individual teachers:
*Resulted in immediate teacher collaboration to address areas that needed improvement - each grade level immediately added Domain 4 to their weekly meetings, including helping each other identify and review evidence
*Generated innovative ideas to address the gaps that were identified - grade level teams and teachers began to design/update their websites, one grade level is sponsoring a monthly academic scavenger hunt for parents/students, and all levels began planning a grade level presentation including schoolwide community service
*Provided clear direction for both the administration and teachers in day to day and school planning - the curriculum scope and sequence is now emphasized in e-mails with the actual standards/objectives listed, rather than just hyperlinks, and these are also the topical foci in grade level meetings; the result has been more focused academic collaboration
Even though it took 3 weeks to complete all of the individual meetings, the impact of those three weeks has been really powerful in moving teachers to action as professionals.
We are now moving to the observation and feedback phase of our prototyping cycle, and we are excited to see what changes we will see when we go into classrooms specifically for this purpose.
So, what I have learned so far is:
1. Face to face, structured, individual meetings are more powerful than any other form of communication, especially when there is common understanding - evaluation/observation meetings are not enough
2. Grouped meetings/feedback with teachers - rather than spread-out meetings - help them to collaborate more meaningfully. Teachers are able to help others understand their feedback, especially if they, themselves, received similar feedback.
3. The individual meetings are a great way to collect detailed data on school implementation - quantitatively and qualitatively.
This experience is shifting the way that I organize my work, and I am excited to start next school year, when the administrative team will be able to put even more structure into our process and hopefully speed up our cycles.
But, in the meantime, I thought I'd share and see if others are having similar experiences.
I am new to blogging, and I am starting a Google+ Blogger, Instructional Principal. I'd love to get feedback and have a dialogue with other ed professionals, so check it out at http://principalinstruction.blogspot.com/. Or, converse with me @AzizSims on Twitter.

Thursday, February 19, 2015

Quick Feedback for Teacher Assessments





Working with teachers on assessment is one of the best ways to improve curriculum and instruction practices. Teacher assessment provides a reflective space where there are no political or philosophical arguments - just student data that results from a teacher's design and decision making. This provides a great space to talk to teachers about how they understand what the curriculum is, their approaches to teaching, and the results that they get as a direct result of the relationship between curriculum and instruction. Furthermore, if a teacher has really been paying attention to his or her own assessment practices, it makes data analysis of external assessments more personal and valuable as a source of information.
Reviewing assessments can assist you as the principal with really understanding curriculum and instruction implementation in your school. Of course, you don't want to spend enormous amounts of time going through each individual question, but there are some quick points that can be valuable to both you and the teacher.
1. Is the time assigned to the test/task reasonable? Many teachers make the mistake of trying to test fluency (how automatic a skill is) before they test mastery. This happens when you try to test a large amount of items in a short time or require a complex task to be completed in a short amount of time.
I tell teachers to use the time and a half rule. If it would take you as an expert an hour to do this test/task, then expect it to take students at least 1 1/2 times that amount of time. For instance, a teacher has a 45-minute class period, but the test has 30 questions attached to two reading passages plus 2 short answer questions. If a student were able to finish this test, it would mean that either the questions are very low level or the students are very fast, expert readers who have already mastered the skills in the exam. Neither situation is ideal for assessment, and either demonstrates that the assessment probably is not going to be worth the class time being used.
I encourage teachers to use fewer items that are of higher quality and have different levels of complexity. They get better information about student performance this way. The purpose of assessment is not to fill up the class period; it is to find out if students have mastered the skills and content taught.
TIP: Make this the first thing that you look at. The size of the task v. the time given will probably help you to develop some very pointed questions that will help you to zero in on feedback for the specific teacher.
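For those who like to see the arithmetic, the time and a half rule can be sketched as a tiny script. This is only a minimal illustration; the function names and the sample numbers are my own invention, not part of any real assessment tool:

```python
# A minimal sketch of the "time and a half" rule: students should get
# at least 1.5x the time an expert would need for the same test/task.

def estimated_student_minutes(expert_minutes):
    """Estimate how long students need, given an expert's completion time."""
    return expert_minutes * 1.5

def fits_class_period(expert_minutes, period_minutes):
    """Return True if the estimated student time fits within the period."""
    return estimated_student_minutes(expert_minutes) <= period_minutes

# A task an expert finishes in 35 minutes needs about 52.5 student minutes,
# so it will not fit a 45-minute class period.
print(estimated_student_minutes(35))   # 52.5
print(fits_class_period(35, 45))       # False
```

The point of the sketch is simply that the check is mechanical: compare 1.5 times the expert's time against the class period before worrying about anything else on the test.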
2. Does the question/prompt reflect the standards? I encourage teachers to explicitly write the standard/objectives on their assessment. This serves three purposes: one, it helps the teacher to focus on making sure that (s)he is aligning the assessment item to the standard; two, it helps the student know what is being evaluated during the assessment (really important in complex tasks like projects or essays); and, three, it serves as valuable information in data analysis later on. As the administrator, you can also use it to understand how standards are being implemented in your school.
If the standard says to explain but the assessment has the student selecting an answer or giving an opinion, this is not aligned. The result will be that the assessment item isn't that valuable for understanding the effectiveness of instruction or if students are learning.
TIP: Review the teacher's unit/lesson plan before you look at the assessment to make sure you know the standards/objectives, and then check to see if the items match.
3. Is the assessment organized by standard/objective and then by difficulty level? Assessments that are measuring student performance should be organized first by the standard/objective; within that, the assessment should be organized from the easiest skill/task to the most difficult.
This allows the student to progress through levels of mastery, keeping their attention focused on a specific objective rather than having to sporadically change gears throughout the test (which adds a cognitive skill outside of what is being tested). It also allows the teacher to see the depth of mastery very quickly (hmm, most students score low at the 3rd level of this objective....).
If the teacher is using multiple anchors (reading passages, graphs, etc.) and testing multiple objectives, they should follow almost a protocol for each anchor, going from easiest to most difficult while testing the same skills.
TIP: This is something that you can see fairly quickly and if you use something like Bloom's or Costa's, then you can easily give feedback on the organization and difficulty levels.
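To make the ordering concrete, here is a hypothetical sketch of sorting assessment items first by standard/objective and then by difficulty within each objective. The item fields, the objective codes, and the 1-6 difficulty scale are all invented for illustration:

```python
# Hypothetical assessment items: each has an objective and a difficulty
# level (1 = easiest, on an invented Bloom's-style scale).
items = [
    {"objective": "RI.7.2", "difficulty": 2, "prompt": "..."},
    {"objective": "RI.7.1", "difficulty": 3, "prompt": "..."},
    {"objective": "RI.7.1", "difficulty": 1, "prompt": "..."},
    {"objective": "RI.7.2", "difficulty": 1, "prompt": "..."},
]

# Sort by objective first, then easiest-to-hardest within each objective.
ordered = sorted(items, key=lambda item: (item["objective"], item["difficulty"]))

for item in ordered:
    print(item["objective"], item["difficulty"])
```

Run on the sample above, this prints the RI.7.1 items (difficulty 1, then 3) before the RI.7.2 items (1, then 2) - exactly the "grouped by objective, easiest to hardest" layout described in point 3.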
Of course, this isn't everything to look for in an assessment, but it is a good start, and will be a strong conversation starter with your teachers. Some have never thought about these issues and others don't know where to start. But, in my experience, the more teachers understand about the assessments they create, the better they get at data analysis and responding to student data. Furthermore, the data begins to have an intrinsic value to the teachers rather than the stench of compliance.


Sunday, February 8, 2015

How Do You Define Struggling?



It is now en vogue to use the word "struggling" - it is, in fact, the nice way to say that someone is in need of intervention or has a number of "growth opportunities".  The issue with this is that struggling is not necessarily a bad thing - most of us struggle before we have breakthroughs.  But, how do we define the difference between those who are in the natural progression of learning versus those who need intervention?

I have been grappling with this concept over the past year as I look at and process where my school goes next, but the one thing that persistently comes up in this internal conversation is the fact that everyone can't be struggling.  Everyone can't be in need of intervention.  And, everyone cannot be at the remedial level.

This particular thought is very important because I think that it will define who I am as an administrator as well as the identity of our school.  What are the specific criteria for struggling or in need of intervention versus the need for support and encouragement?  These are two very different strategies, and, to get results, they have to be applied judiciously.

Has it been taught?  In education, I find that we make a lot of assumptions about both adults and students.  Is it really fair to label someone as struggling when they actually haven't been taught?  I'm sure that I would be a struggling neurosurgeon, having taken no medical courses and never having spent time in an operating room, but in this case would struggling be the right classification?  I love the teaching profession.  I believe that teachers and instruction make a difference.  Part of truly believing in the power of education is believing that instruction occurs before intervention or remediation.

This happens to students when schools and teachers assume that they should just know how to read, how to write, how to do math, etc. when in fact a large group of them do not and have never had the instruction to know it.  We act surprised when students fail to excel on exams when we have no evidence that the curriculum or instruction ever addressed the material.  The students are then blamed for not knowing something that they have never been taught.

This happens to teachers when schools and administrators assume that teachers should be able to implement every best practice and have data analysis skills without any reflection on when teachers became certified or what their schools of education focused on, or, even closer to home, the type of professional development that teachers have had access to.  The teachers are then blamed for not knowing something that they have not been taught.  I don't say this to mean that teachers should not continually seek professional development, but to point out that if you change the expectations for teaching, then you should expect for there to be gaps in teacher knowledge and practice.

The struggling/intervention label should not be used for people who have not actually received instruction in the skills that they are being critiqued on.

What is the mindset?  An administrator shared a great way that she learned about her staff - she said she started every conversation trying to figure out if teachers "can't" or "won't".  This definitely speaks to making sure that people have had access to instruction, but it also points to another important issue - mindset.

Can we define people as "struggling" if they are not putting forth effort?  It seems that this drains resources and energy rather than solving the issue of performance.  When we encounter students who "don't/won't do school", are we pouring money and time into academic interventions that most likely will have little impact unless the mindset is changed?  Are we doing the same with teachers or other staff members in our building who have already made up their minds about what they are or are not going to do?

The struggling/intervention label should not be used for people who we need to have difficult conversations with (and, I do believe that students and their parents can fall into this category as well - everyone is responsible for learning).

What are the resources?  I'm always hesitant to believe everyone is struggling unless I see concrete proof.  The biggest reason is that it is my job, as the administrator, to make sure that the resources that we have are appropriately distributed and targeted to help everyone in my building (child and adults) reach their maximum potential.

If everyone is struggling, then there are only two real options.  One, redirect most of your resources to intervention (and some will have to be here regardless of what your data is).  Or, two, rethink how you work and approach the issue, which means that you have to look at how you allocate resources to: one, ensure initial instruction and programming are effective for the majority of your stakeholders; two, allocate resources to those who need additional instruction and support; and, three, allocate time and resources to foster the environment to have the difficult conversations that will move the organization forward.

This seems to be the real theory behind Response to Intervention systems, rather than the everyone-needs-some-type-of-intervention approach that seems to be popular in some schools and places across the country.

I am o.k. with people struggling - this is a natural state for everyone.  I struggle every day, and I'm sure that I'm not alone.  But, the question is, do we need intervention or do we need a different approach?  And, that is something that everyone in education needs to look at very closely.

Monday, January 19, 2015

What We're Not Getting About Tech Integration



Technology integration will not save schools.  It will not make teachers into super-teachers.  It will not raise the IQ points of students or necessarily their test scores.  These are feats created by human expertise and interaction.  But, what it will do is help maximize the potential of every person in the building with access, and that is why it is important for our schools.

At every turn in history, whenever advancement comes, it is pooh-poohed or treated as charlatanism.  Reading and writing evolved from being evil, to being useless, to being revered.  And, we can see the same pattern emerge over time with many other things that we now overlook as "normal" in our every day lives.  Yet, here we are again, facing another innovation with the same dread with which school officials looked upon the introduction of pens and paper to classrooms, and we ask if technology is necessary.

Technology integration provides opportunities that did not exist in previous times and has the potential to create real innovation in our concepts of instruction and schooling.  Unfortunately, we seem to lack the imagination to really embrace this.

Time.  It is the most valuable thing in schools, and there isn't enough of it.  You can't get more time, but you can free more of it up.  Technology integration has the ability to automate processes that take time away from administrators, teachers, and students.

File sharing instead of copying.  Auto-reviewing instead of personal review.  Data dashboards instead of data review and collation.  Feedback systems.  Responsive systems that create individualized paths.  Videoconferencing instead of driving to meetings.  Collaborative documents instead of downloading, meeting, uploading, and repeat.

And, what can you do with the time?  Anything that you would like.  Additional time has the potential to improve student achievement, and it is the number one thing that everyone says they need to improve student outcomes.

Enhancement and augmentation.  There is more than one way to skin a cat, and individual differences matter.  We talk about literacy and numeracy, but we deny many of the aids available by skipping the technology.  Interactive texts, multiple representations, and responsive platforms are available but not in use.  Each of these innovations has some type of research base that demonstrates that they improve student learning and can serve as effective interventions, but we're not using them because we don't have the technology.

Even more interesting is that these aids are available to ALL students and not just students with disabilities, which opens us up to the possibility that literacy and numeracy as we know them will continue to evolve, with people using a variety of strategies that maximize their own skills and abilities.  We can expect that this will become a challenge to "traditional" notions of literacy and numeracy as well as to assessment.

Will colleges or workplaces care if students prefer to listen to a text, read a text as it is highlighted, or watch a video version if the student can demonstrate comprehension?  So, why do we?  Will colleges or workplaces care if students write texts using combinations of speech-to-text applications and grammar correction apps if the product is good?  So, why do we?

Technology integration makes it possible for everyone to rely more on their strengths and to supplement their weaknesses.  Hence the catchphrase, "there's an app for that".

Hybridization of the school-world and the real-world.  Technology allows us to reach beyond our immediate spaces.  We can find information.  We can connect with others.  We can recognize shared problems and solve those problems together.

This isn't to say that our teachers and students should be engaged online at all times, but to point out the vast number of prospects that are available simply by having the technology available.  In the past few years, we have already seen the ability of young people to contribute innovation to different fields (http://www.oddee.com/item_99064.aspx).  We see heartwarming stories of students able to reach out to their real life mentors and celebrities who make a difference in their lives, and, more importantly, to interact with those individuals academically, enhancing and solidifying their academic experiences.

We preach that we want this to be a norm, but we aren't necessarily providing the tools to make it a reality.

As important as technology integration is, we can't overlook people.  The people's use of technology in our schools is what defines the value of the integration; however, without the technology, how will schools ever advance?  We are in a learning phase in schools - a phase that unfortunately lags behind the world that we are trying to "prepare" our students to enter.  Colleges and workplaces are technology integrated already, and the innovations that occur there happen quickly and frequently.  Colleges and workplaces are not static and are being defined by adaptability - an adaptability that we are not preparing teachers or students for.  And, among the least prepared will be our low-income students whose families can't afford home technology or continuous, uninterrupted access to smartphones, which are looked upon by many as a "staple" possession.

So, we may not know everything there is to know about technology integration or its impact on school outcomes, but we do know that the world will keep progressing technologically, while our schools may or may not.

As an administrator, I've been questioned about my focus on technology in schools, and I can share this with everyone.  This year, my school went 1:1, and I have seen a slow evolution building in our classrooms - I see more students engaged, I see teachers changing their approaches to teaching (because they can), and I see collaboration, not only among the adults, not only among the students, but also between the adults and the students.  It starts off small and then it starts to spread.

From what I have seen, I do not believe that technology integration, by itself, will improve my school's outcomes, but I do believe that the people, in my building, using that technology, will.  And, that their success will come as a result of me believing in them and providing them with the tools and space to innovate.  Every day that I walk into the building, I am just beginning to see what is possible and that is what many people do not get about technology integration.




Friday, January 9, 2015

Assessment's Missing Link: Success Criteria



There's a missing link in assessment.  That's why everyone is talking about it.  We have standards.  We have assessments. But, they aren't connecting, and they aren't helping students.  Many of us are missing a step - we don't have success criteria.

Success criteria are called by many names - mastery levels, student level objectives, etc. - but many of these are not equivalent to the actual definition.  Some of you are thinking: but we have rubrics, interim exams, and test banks.  We have common objectives.  We have mastery targets (80% of the questions).  You may have all of these things, but these are components of testing, not success criteria.

Success criteria are the defined levels of student performance that articulate what student performance actually looks like.  Success criteria answer the question, "How do we know that the students have mastered the objective?".  Oftentimes, teachers begin to tell me how they will test students (bellringers, discussions) or the frequency of answers (they can answer 6 of the 10 questions).

This is the catch: success criteria are created before any actual assessment tool is created (rubrics, exams, projects, etc.).  This is often overlooked when we talk about assessment.  

Professional test developers:

1.  Define performance levels
2.  Sketch or blueprint assessments
3.  Create assessments

But, in schools, we tend to simply create assessments.  The most common error that we make is that we equate rubrics or percentages correct with success criteria.  Rubrics/percentages are used to evaluate products; success criteria describe objective performance.

Here's an example.

A teacher has decided that she is going to assign an essay (assessment tool) to her class.  She is doing this to test how well her students can use supporting details to support a claim (objective).  

What is on her rubric?  She has 5 categories: Main Idea, Supporting Details, Grammar, Neatness, and Outline.  For each of the categories, she creates 4 levels of description.  Sounds great, right?  Except that she has been teaching supporting details, not the 4 other categories.  

Depending upon how she writes the descriptions, she may or may not describe mastery. Additionally, it is possible for a student to get a grade that does not necessarily reflect mastery of their ability to use supporting details since this category is conflated with four other categories (so, a student could have a very neat paper, with an outline, a clear main idea, and great grammar, and STILL do well even though they haven't mastered supporting details).
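The conflation problem is easy to see with a little arithmetic. In this hypothetical scoring of the rubric described above (the category names mirror the teacher's rubric; the scores are invented), the student earns a 1 on the one category that was actually taught, yet still averages out to a respectable grade:

```python
# Invented rubric scores on a 1-4 scale; only "Supporting Details"
# reflects the objective that was actually taught.
scores = {
    "Main Idea": 4,
    "Supporting Details": 1,   # the target skill - not mastered
    "Grammar": 4,
    "Neatness": 4,
    "Outline": 4,
}

# A simple unweighted average across categories
overall = sum(scores.values()) / len(scores)
print(overall)  # 3.4 -- a "proficient"-looking grade despite the 1
```

A neat, well-outlined paper with good grammar washes out the one score that measures the objective, which is exactly why the rubric grade cannot stand in for success criteria.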

Let's look at the difference between a rubric and success criteria.  

Rubric Example:  

At a level 4 on the rubric, a student provides 6-8 supporting details for the main idea in each paragraph.  The details are clear and support the main idea.

At a level 3, a student provides 4-6 supporting details for the main idea in each paragraph.  The details are overall clear and support the main idea with one or two exceptions.

Success Criteria Example:

 At a level 4, the student is able to provide explicit and inferential details to support the main idea.  The student uses transition words and gives explanations that clearly articulate the relationship between the details and the main idea to create a logical text.  The details are a mix of direct quotes, paraphrases, and the student's interpretation.  

At a level 3, the student is able to provide explicit details to support the main idea.  The student mostly uses transition words and explanations that clearly articulate the relationship between the details and the main idea to create a logical text.  The details are a mix of direct quotes, paraphrases, and the student's interpretation.

As administrators, we really want to hear the success criteria, but we often get the rubric instead.  Note that the success criteria can be used in conjunction with the rubric.  The rubric can adopt the success criteria as its descriptions, but the rubric CANNOT replace the success criteria.

Success criteria are about learning - they help both teachers and students to identify the gaps and the possible next steps for instruction.  Rubrics, multiple choice questions, etc., ON THEIR OWN, are very limited in their ability to do this because they are designed for specific testing events, not student learning; whereas success criteria can be used continually regardless of activity, test, or context.  Success criteria can also do something assessment tools by themselves cannot do - guide the alignment of instruction, activities, and TEACHER FEEDBACK (what is outlined in the success criteria should be what you hear and see in classroom/assignment feedback) and prevent classes from falling into the abyss of confusion (what were we learning today?).

If we take more time and create viable success criteria, they can be used to build common understanding of standards implementation and student mastery across classrooms and disciplines, not just common testing.

This is not a quick and easy process - it's not one or two sit-down meetings.  This is meaningful work that develops over time from looking at student work and assessment results, but it is work worth doing when everyone understands what is supposed to be going on rather than relying on their individual interpretations.

NOTE: PARCC actually provides its Common Core standards interpretations.  In fact, all standardized exams provide their interpretations of standards (some may create their own standards); these can be found on their websites in their test blueprint areas.

If you are interested in PARCC, I have written a blog about the particular site page that you may want to check out.  http://principalinstruction.blogspot.com/2014/12/the-mecca-of-parcc-assessment-page-you.html

Thanks for giving me a few minutes of your time.  Looking forward to your comments.



Thursday, January 1, 2015

A Single Point of Data: Avoiding the Telephone Game


Cartoon courtesy of thadguy.com

Over the last couple of years, I have seen the following quote used to bolster arguments against data: "There are lies, damned lies, and statistics."  The quote is often attributed to Mark Twain; however, the original source is contested.  What Twain actually wrote in "Chapters from My Autobiography" (1906) was: "Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: 'There are three kinds of lies: lies, damned lies, and statistics.'"

Like this quote, we often depend on only part of the data when we make interpretations, leading to unnecessary misunderstanding and caustic arguments.  The simple fact is that no single data point can tell a story on its own.  Some people will say that the argument is about the numbers and will follow up with several anecdotes; the issue with this is that anecdotes are data themselves.

As administrators, our role is to create a story about our school's performance using multiple data points, and to ensure that those data points accurately portray our schools.  As we build our stories, we should be clear about:

1.  The meaning of the data point.  Whenever data points are published, a definition is also published.  The interpretation of that data point is limited to that definition.  A single data point can contribute to an evaluation, but cannot serve as an evaluation itself.

Our role as administrators is to make sure that everyone understands the definitions of the data points that are used.

2.  Clusters of data tell stories, individual data points do not.  Summative data is a great starting point for understanding your school, but it most likely does not tell the whole story.  Use summative data as a starting point to find out your school's story.

I used to crunch data for a group of schools, and you would be surprised how far off base people's beliefs were about the very school they were sitting in, based on the summative data they received.  They accepted the summative data even though their day-to-day experiences contradicted it.

Is your attendance really low, or are there data entry errors?  Do you have a large number of students cutting, or are your offices forgetting to submit attendance for them?  

How many of your students were within 1-2 questions of meeting proficiency? 

What percentage of your students have been disciplined and what are they being disciplined for? 
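Questions like these become answerable once you get past the summary number and into the raw records.  As a minimal sketch - the student names, score values, proficiency cutoff, and points-per-question figure below are all invented assumptions, not from any real assessment - a few lines of Python can flag the "bubble" students who missed proficiency by only one or two questions:

```python
# Hypothetical student records: (name, raw score). All values are invented.
PROFICIENCY_CUTOFF = 30   # assumed raw score needed to be "proficient"
POINTS_PER_QUESTION = 1   # assumed: each question is worth one point

students = [
    ("Student A", 31),
    ("Student B", 29),
    ("Student C", 28),
    ("Student D", 22),
]

def bubble_students(records, cutoff, per_question, questions_away=2):
    """Return students below the cutoff by no more than `questions_away` questions."""
    floor = cutoff - questions_away * per_question
    return [name for name, score in records if floor <= score < cutoff]

near_misses = bubble_students(students, PROFICIENCY_CUTOFF, POINTS_PER_QUESTION)
print(near_misses)  # Students B and C are within 1-2 questions of proficiency
```

The same pattern - define the question precisely, then filter the individual records - applies to checking attendance entry errors or discipline categories.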

Our role as the administrator is to tell a story with the data, not just to report out what is given to us.

3.  Alignment: Systems, resources/training, processes, THEN people.  Summative data is OUTCOME data.  It is a reflection of SYSTEMS, not people.  This is why the role of the principal is so important - we lead the design of the systems.   

It is important to make sure that the data points you use align to the appropriate level (e.g., standardized test scores can be used to identify curriculum issues (system), but not resource or teaching issues), so that your strategies make sense to people when you explain them and to your staff as they work day to day.  Weaknesses in resources/training, processes, or people all point to some system flaw.  Correcting at any of these levels may create short-term gains, but only system changes create long-term gains.  Strategies based on changing or replacing people are the riskiest and can cost you the most - that's why it's important that data is used not to blame people but to correct structures.

Failure to ensure the correct alignment leads to an overdependence on individual data points and to misinterpretation.

Our role as the administrator is to make sure that the main thing is actually the main thing.

Many people are intimidated by statistics, and this can lead to a multitude of issues.  As administrators, becoming data proficient can be a big support to our stakeholders and can help everyone stay focused on improvement rather than blame.


If you're new to working with data, you may want to check out my post about managing your school data, "Getting Muddy: Personalizing Your School's Data":
http://principalinstruction.blogspot.com/2014/12/how-to-personalize-your-school-data-get.html