This blog provides instructional support for education leaders. It offers specific, suggested instructional practices for building leaders and shares lessons learned from an experienced educator.
Monday, January 19, 2015
What We're Not Getting About Tech Integration
Technology integration will not save schools. It will not make teachers into super-teachers. It will not raise students' IQ points or necessarily their test scores. Those feats come from human expertise and interaction. What technology integration will do is help maximize the potential of every person in the building who has access to it, and that is why it is important for our schools.
At every turn in history, whenever an advancement arrives, it is pooh-poohed or treated as charlatanism. Reading and writing evolved from being evil, to being useless, to being revered. We can see the same pattern emerge over time with many other things that we now overlook as "normal" in our everyday lives. Yet here we are again, greeting another innovation with the same skepticism that school officials showed at the introduction of pens and paper to classrooms, and asking whether technology is necessary.
Technology integration provides opportunities that did not exist previously and has the potential to create real innovation in our concepts of instruction and schooling. Unfortunately, we seem to lack the imagination to really embrace this.
Time. It is the most valuable thing in schools, and there isn't enough of it. You can't make more time, but you can free more of it up. Technology integration can automate processes that take time away from administrators, teachers, and students.
File sharing instead of copying. Auto-reviewing instead of personal review. Data dashboards instead of data review and collation. Feedback systems. Responsive systems that create individualized paths. Videoconferencing instead of driving to meetings. Collaborative documents instead of downloading, meeting, uploading, and repeat.
And, what can you do with the time? Anything that you would like. Additional time has the potential to improve student achievement, and it is the number one thing that everyone says they need to improve student outcomes.
Enhancement and augmentation. There is more than one way to skin a cat, and individual differences matter. We talk about literacy and numeracy, but we deny students many of the available aids by skipping the technology. Interactive texts, multiple representations, and responsive platforms exist but are not in use. Each of these innovations has some type of research base demonstrating that it improves student learning and can serve as an effective intervention, yet we're not using them because we don't have the technology.
Even more interesting is that these aids are available to ALL students, not just students with disabilities, which opens up the possibility that literacy and numeracy as we know them will continue to evolve, with people using a variety of strategies that maximize their own skills and abilities. We can expect this to challenge "traditional" notions of literacy and numeracy as well as assessment.
Will colleges or workplaces care whether students prefer to listen to a text, read a text as it is highlighted, or watch a video version if the student can demonstrate comprehension? So, why do we? Will colleges or workplaces care whether students write texts using combinations of speech-to-text applications and grammar-correction apps if the product is good? So, why do we?
Technology integration makes it possible for everyone to rely more on their strengths and to supplement their weaknesses. Hence the catchphrase, "there's an app for that".
Hybridization of the school-world and the real-world. Technology allows us to reach beyond our immediate spaces. We can find information. We can connect with others. We can recognize shared problems and solve those problems together.
This isn't to say that our teachers and students should be engaged online at all times, but to point out the vast number of opportunities that become available simply by having the technology on hand. In the past few years, we have already seen young people contribute innovations to different fields (http://www.oddee.com/item_99064.aspx). We see heartwarming stories of students reaching out to real-life mentors and celebrities who make a difference in their lives and, more importantly, interacting with those individuals academically, enhancing and solidifying their academic experiences.
We preach that we want this to be a norm, but we aren't necessarily providing the tools to make it a reality.
As important as technology integration is, we can't overlook people. How the people in our schools use technology is what defines the value of the integration; however, without the technology, how will schools ever advance? We are in a learning phase in schools - a phase that unfortunately lags behind the world that we are trying to "prepare" our students to enter. Colleges and workplaces are already technology integrated, and innovation there happens quickly and frequently. Colleges and workplaces are not static; they are being defined by adaptability - an adaptability that we are not preparing teachers or students for. And among the least prepared will be our low-income students, whose families can't afford home technology or continuous, uninterrupted access to smartphones, which many look upon as a "staple" possession.
So, we may not know everything there is to know about technology integration or its impact on school outcomes, but we do know that the world will keep progressing technologically, while our schools may or may not.
As an administrator, I've been questioned about my focus on technology in schools, and I can share this with everyone. This year, my school went 1:1, and I have seen a slow evolution building in our classrooms - I see more students engaged, I see teachers changing their approaches to teaching (because they can), and I see collaboration, not only among the adults, not only among the students, but also between the adults and the students. It starts off small and then it starts to spread.
From what I have seen, I do not believe that technology integration, by itself, will improve my school's outcomes, but I do believe that the people in my building using that technology will. And their success will come as a result of my believing in them and providing them with the tools and space to innovate. Every day that I walk into the building, I am just beginning to see what is possible, and that is what many people do not get about technology integration.
Friday, January 9, 2015
Assessment's Missing Link: Success Criteria
There's a missing link in assessment. That's why everyone is talking about it. We have standards. We have assessments. But, they aren't connecting, and they aren't helping students. Many of us are missing a step - we don't have success criteria.
Success criteria go by many names - mastery levels, student-level objectives, etc. - but many of these are not equivalent to the actual definition. Some of you are thinking, but we have rubrics, interim exams, and test banks. We have common objectives. We have mastery targets (80% of the questions). You may have all of these things, but these are components of testing, not success criteria.
Success criteria are the defined levels of student performance that articulate what mastery actually looks like. They answer the question, "How do we know that the students have mastered the objective?" Oftentimes, when I ask, teachers begin to tell me how they will test students (bellringers, discussions) or how many answers students will get right (they can answer 6 of the 10 questions).
This is the catch: success criteria are created before any actual assessment tool is created (rubrics, exams, projects, etc.). This is often overlooked when we talk about assessment.
Professional test developers:
1. Define performance levels
2. Sketch or blueprint assessments
3. Create assessments
But, in schools, we tend to simply create assessments. The most common error we make is equating rubrics or percentages correct with success criteria. Rubrics and percentages are used to evaluate products; success criteria describe performance on the objective.
Here's an example.
A teacher has decided that she is going to assign an essay (assessment tool) to her class. She is doing this to test how well her students can use supporting details to support a claim (objective).
What is on her rubric? She has 5 categories: Main Idea, Supporting Details, Grammar, Neatness, and Outline. For each of the categories, she creates 4 levels of description. Sounds great, right? Except that she has been teaching supporting details, not the 4 other categories.
Depending upon how she writes the descriptions, she may or may not describe mastery. Additionally, it is possible for a student to get a grade that does not necessarily reflect mastery of their ability to use supporting details since this category is conflated with four other categories (so, a student could have a very neat paper, with an outline, a clear main idea, and great grammar, and STILL do well even though they haven't mastered supporting details).
Let's look at the difference between a rubric and success criteria.
Rubric Example:
At a level 4 on the rubric, a student provides 6-8 supporting details for the main idea in each paragraph. The details are clear and support the main idea.
At a level 3, a student provides 4-6 supporting details for the main idea in each paragraph. The details are overall clear and support the main idea with one or two exceptions.
Success Criteria Example:
At a level 4, the student is able to provide explicit and inferential details to support the main idea. The student uses transition words and gives explanations that clearly articulate the relationship between the details and the main idea to create a logical text. The details are a mix of direct quotes, paraphrases, and the student's interpretation.
At a level 3, the student is able to provide explicit details to support the main idea. The student mostly uses transition words and explanations that clearly articulate the relationship between the details and the main idea to create a logical text. The details are a mix of direct quotes, paraphrases, and the student's interpretation.
As administrators, we really want to hear the success criteria, but we often get the rubric instead. Note that success criteria can be used in conjunction with a rubric. The rubric can adopt the success criteria as its descriptions, but the rubric CANNOT replace the success criteria.
Success criteria are about learning - they help both teachers and students identify the gaps and the possible next steps for instruction. Rubrics, multiple-choice questions, etc., ON THEIR OWN are very limited in their ability to do this because they are designed for specific testing events, not student learning; success criteria, on the other hand, can be used continually regardless of activity, test, or context. Success criteria can also do something assessment tools by themselves cannot do - guide the alignment of instruction, activities, and TEACHER FEEDBACK (what is outlined in the success criteria should be what you hear and see in classroom/assignment feedback) and prevent classes from falling into the abyss of confusion (what were we learning today?).
If we take more time and create viable success criteria, they can be used to build common understanding of standards implementation and student mastery across classrooms and disciplines, not just common testing.
This is not a quick and easy process - it's not one or two sit-down meetings. It is meaningful work that develops over time from looking at student work and assessment results, but it is work worth doing when everyone understands what is supposed to be going on rather than relying on their individual interpretations.
NOTE: PARCC actually provides its interpretations of the Common Core standards. In fact, standardized exams in general provide their interpretations of standards (some create their own standards), and these can be found on their websites in their test blueprint areas.
If you are interested in PARCC, I have written a blog about the particular site page that you may want to check out. http://principalinstruction.blogspot.com/2014/12/the-mecca-of-parcc-assessment-page-you.html
Thanks for giving me a few minutes of your time. Looking forward to your comments.
Thursday, January 1, 2015
A Single Point of Data: Avoiding the Telephone Game
Cartoon courtesy of thadguy.com
Over the last couple of years, I have seen the following quote used to bolster arguments against data: "There are lies, damned lies, and statistics." The quote is often attributed to Mark Twain; however, the original source is contested. What Mark Twain actually wrote in "Chapters from My Autobiography" (1906) was, "Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: 'There are three kinds of lies: lies, damned lies, and statistics.'"
Like this quote, we often depend on only part of the data when we make interpretations, leading to unnecessary misunderstanding and caustic arguments. The simple fact is that no single data point can tell a story on its own. Some people will say that the argument is about the numbers and then follow up with several anecdotes; the issue with this is that anecdotes are data themselves.
As administrators, our role is to create a story about our school's performance using multiple data points and to ensure that those data points accurately portray our schools. As we build our stories, we should be clear about:
1. The meaning of the data point. Whenever data points are published, a definition is also published. The interpretation of that data point is limited to that definition. A single data point can contribute to an evaluation, but cannot serve as an evaluation itself.
Our role as administrators is to make sure that everyone understands the definitions of the data points that are used.
2. Clusters of data tell stories; individual data points do not. Summative data is a great starting point for understanding your school, but it most likely does not tell the whole story. Use it as a jumping-off point for finding out your school's story.
I used to crunch data for a group of schools, and you would be surprised how far off base people's beliefs were about the school they were sitting in, based on the summative data they received. They accepted the summative data even though their day-to-day experiences contradicted it.
Is your attendance really low, or are there data entry errors? Do you have a large number of students cutting, or are your offices forgetting to submit attendance for them?
How many of your students were within 1-2 questions of meeting proficiency?
What percentage of your students have been disciplined and what are they being disciplined for?
Our role as the administrator is to tell a story with the data, not just to report out what is given to us.
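If you like to dig into the raw exports yourself, a few lines of Python can answer a question like the proficiency one above. This is only a rough sketch, not a prescribed tool: the file name, column names, and cut-score field are hypothetical, so adjust them to whatever your assessment vendor actually exports.

import pandas as pd

# Hypothetical export of interim exam results; the file and column names are assumptions.
# Expected columns: student_id, questions_correct, proficiency_cut
scores = pd.read_csv("interim_exam_results.csv")

# Students who missed the proficiency cut by only one or two questions.
near_miss = scores[
    (scores["questions_correct"] < scores["proficiency_cut"])
    & (scores["questions_correct"] >= scores["proficiency_cut"] - 2)
]

print(f"{len(near_miss)} students were within 1-2 questions of proficiency.")
print(near_miss[["student_id", "questions_correct", "proficiency_cut"]].to_string(index=False))

The same pattern works for attendance or discipline exports - filter first, count second - and it will often surface the data entry errors before it surfaces the "problem."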
3. Alignment: Systems, resources/training, processes, THEN people. Summative data is OUTCOME data. It is a reflection of SYSTEMS, not people. This is why the role of the principal is so important - we lead the design of the systems.
It is important to make sure that the data points you use align to the appropriate level (e.g., standardized test scores can be used to identify curriculum issues - a system issue - but not resource or teaching issues), so that your strategies make sense to people when you explain them and to your staff as they work day to day. Weaknesses in resources/training, processes, or people all point to some system flaw. Correcting at any of these levels may create short-term gains, but only system changes create long-term gains. Strategies based on changing or replacing people are the riskiest and can cost you the most - that's why it's important that data is used not to blame people but to correct structures.
Failure to ensure the correct alignment leads to an over dependence on individual data points and misinterpretation.
Our role as the administrator is to make sure that the main thing is actually the main thing.
Many people are intimidated by statistics, and this can lead to a multitude of issues. As administrators, becoming data proficient allows us to be a big support to our stakeholders and helps everyone stay focused on improvement rather than blame.
If you're new to working with data, you may want to check out my post about managing your school data, "Getting Muddy: Personalizing Your School's Data":
http://principalinstruction.blogspot.com/2014/12/how-to-personalize-your-school-data-get.html