An important aspect of the credential project is establishing how we intend to measure its impact. We have identified two main fronts on which we wish to formally assess that impact: one is students' curricular achievement, and the other is the domain of agency. This blog entry explains the three elements of agency we have chosen to concentrate on, and the measures we will use to determine any effect once micro-credentials are introduced to the classroom.
Three dimensions of agency
1. The student contributes thoughtful ideas and/or questions
We will consider our project to be successful if it stimulates in our learners a belief that their contributions are of value, and supports their development of higher-order thinking when addressing complex problems.
How will this be measured?
We had a strong desire to avoid defaulting to the typical way of measuring impacts that the established subject curriculum cannot capture: asking the students themselves to 'self-report' changes in their dispositions and actions. The assumption we have made in devising a measure for this contributory form of agency is that if a student contributes to a learning event, they must be experiencing a meaningful degree of agency.
We have decided to film classes during an interactive learning process. From this footage, a panel will isolate all examples of student contributions and classify them against a (yet to be devised) taxonomy that allows us to make an objective judgement of the level of reasoning each contribution represents.
We will compare this baseline to the level of reasoning we observe after 6+ months of operating under the micro-credential regime.
A class that is not exposed to this regime, but that has the same age and attainment profile and is completing the same body of work with the same teacher, will also be included as a control.
2. The student voluntarily tries again in a new way
Brought into the mainstream by the work of Carol Dweck and her notion of "Growth Mindset", this form of agency is closely allied to the idea of resilience.
If our project is successful, a significant impact must be that it promotes a degree of perseverance in our learners, so that they continue to attempt to master an element of their learning even after an initial setback.
How will this be measured?
We propose to measure this form of tenacity from within the micro-credential system itself. We will track the frequency with which students return to credentials that they did not unlock at the first attempt, try again, and subsequently unlock the credential.
Elements of the credentials' design borrow from game theory and are intended to promote this behaviour in learners, so a measure of the frequency of this 'try again' action would be of great value to our overall impact assessment.
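To make this concrete, here is a minimal sketch of how such a 're-attempt' rate might be computed from attempt records exported by the credential system. Everything here is an assumption for illustration only: the field names (student, credential, unlocked, timestamp) and the example data are placeholders, and the real platform export will almost certainly look different.

```python
# Minimal sketch: proportion of initially failed (student, credential) pairs
# that were later re-attempted and unlocked. Field names and data are
# hypothetical placeholders for whatever the credential platform exports.
from collections import defaultdict
from datetime import datetime

attempts = [
    # (student_id, credential_id, unlocked, timestamp)
    ("s01", "resilience-1", False, datetime(2024, 3, 1)),
    ("s01", "resilience-1", True,  datetime(2024, 3, 15)),
    ("s02", "resilience-1", False, datetime(2024, 3, 2)),
]

def try_again_rate(records):
    """Of the credentials a student failed to unlock at first attempt,
    what proportion were later re-attempted and unlocked?"""
    history = defaultdict(list)
    for student, credential, unlocked, timestamp in records:
        history[(student, credential)].append((timestamp, unlocked))

    failed_first, recovered = 0, 0
    for outcomes in history.values():
        outcomes.sort()                         # order attempts by time
        if not outcomes[0][1]:                  # first attempt did not unlock
            failed_first += 1
            if any(unlocked for _, unlocked in outcomes[1:]):
                recovered += 1                  # a later attempt succeeded
    return recovered / failed_first if failed_first else 0.0

print(f"'Try again' rate: {try_again_rate(attempts):.0%}")  # 50% in this toy data
```

Comparing this rate term by term (or against a class not using the scheme) would give us a simple, low-overhead indicator of whether the 'try again' behaviour is actually increasing.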
3. The student voluntarily goes further than the basics
This aspect of student agency may also be assessed by tracking something inherent to the micro-credential system itself: the opportunities students take to submit work for, and unlock, credentials that have not been specifically covered in class or assigned to them.
We propose to track how prevalent this is, in the hope that the practice becomes more common over time.
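Again, a rough sketch of what this tracking could look like, under the assumption that we can export unlock records and maintain a list of credentials that were assigned or covered in class. The set names, record fields and terms below are purely illustrative, not the platform's actual schema.

```python
# Minimal sketch: count unlocks of credentials that were never assigned or
# covered in class, grouped by term, to see whether 'going further than the
# basics' becomes more prevalent over time. All names are hypothetical.
assigned_in_class = {"fractions-1", "fractions-2"}

unlocks = [
    # (student_id, credential_id, term)
    ("s01", "fractions-1", "T1"),
    ("s01", "geometry-extension", "T1"),    # not assigned: voluntary
    ("s02", "statistics-extension", "T2"),  # not assigned: voluntary
]

def voluntary_unlocks_by_term(records, assigned):
    """Number of unassigned-credential unlocks per term."""
    counts = {}
    for _, credential, term in records:
        if credential not in assigned:
            counts[term] = counts.get(term, 0) + 1
    return dict(sorted(counts.items()))

print(voluntary_unlocks_by_term(unlocks, assigned_in_class))  # {'T1': 1, 'T2': 1}
```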
I think your measurement system is starting to take shape. A couple of thoughts I have reflecting on the above. For the first item – contributing thoughtful ideas/questions – I think that there is a risk that if you only focus on in-class discussions you may miss the opportunity to capture this in all the different ways it may manifest. For instance, what if a student comes to you after a class and engages in a thoughtful discussion? What if a student sends you a particularly thoughtful email? People like to engage with new ideas in different ways. Some are more comfortable contributing to a discussion than others. Also, some people like to take time to gather their thoughts and then to respond (I often fall into this category). It seems a shame not to capture this form of agency (and engagement).
I know the focus for the project is very much centred on credentials, but I also think it would be interesting to explore changes in student behaviour that are separate from the assessment system. That is, does incorporating micro-credentialing into a subject have flow-on consequences for student behaviour (and more specifically agency) in the subject, separate from the micro-credentials themselves? For instance, do students 'try again' or 'voluntarily go further' at times when a micro-credential is not involved?
Hi Nina, I couldn’t agree more with both of these observations. My only concern is that, standing at the front right now, I feel it’s important that we strike a balance between the ideal and the achievable. Any thoughts on how these additional indicators of agency might be measured without too much overhead?
I’m going to add two thoughts into the conversation at this point. I agree with Nina – you could miss lots of interesting opportunities if you rely on data from such a formal process. Plus it will be a lot of additional work for the people on the panel. It could be very rewarding but do you really have the time or would it be better spent elsewhere? (Think about priorities and workload.)
As I was reading your post Chris I found myself wondering about use of the term “measure” when in essence we are looking for spontaneous indicators of thoughtfulness. I wonder how our thinking might shift gears if we chose a verb such as “notice” instead? Could the sorts of opportunities Nina added be “noted” without being measured as such? What would be lost and what would be gained? What would you need to do to ensure consistency across the wider team?
I am interested in the dearth of research in the area of what is called "performance-based assessment" when all around the world people are calling for a greater focus on "competencies" that can't necessarily be readily measured. That is exactly our dilemma here. What's more, the issues confronted must be essentially the same in many different contexts and for many different task types. How do we rebuild trust in teachers' professional judgments if we don't give "better noticing" a fair go?
Hi Rosemary,
Thanks so much for taking the time to look at this initial work. I’m torn in two directions with this. On one hand, I feel a strong affiliation with the desire to strengthen our trust in and reliance on teachers’ professional judgement. On the other hand, when you recognise that this whole project has been designed to bring to the surface specific indicators of the very competencies to which you refer, it feels contradictory to build a means of measuring the impact of this that doesn’t follow a similar form.
I wonder what your thoughts might be on the idea of choosing a verb metaphor like 'see': "How might this be seen?" The more important thing for me here is that we define with more clarity what the "this" is in these phrases. How do we define areas of student development and performance like "thoughtfulness", "volition" and "resilience" in such a way that what we notice can be classified using some form of taxonomy?
I guess the other important aspect of this measurement of the project’s impact is the one I still feel attracted to due to its sheer elegance – and that’s the fact that since the micro-credential scheme has been developed with these competencies in mind, surely we might be able to develop credentials within the scheme that will allow us very simply to evaluate its impact. If we have credentials in the scheme for thoughtfulness, levels of volition, resilience etc, then perhaps all we need to do is count how many students unlock them over time?
Looping back to the initial point about trusting the professionalism of teachers, I'd like to re-emphasise an important aspect of this assessment scheme: it is devised entirely by teachers themselves. At the same time, its design is intended to create a bridge between that work of the teacher and the experiences of students, families and the community. I want anything we do to be transparent and contestable.
In summary: could we use the micro-credentials themselves (perhaps a small, defined sub-set of them) to measure the impact of the project?
I’d love to know what everyone reading this thinks of this idea!
C
Hi everyone!
I am so enjoying following this conversation, it is fascinating stuff!
I agree with much of what has been said by you all. Something big for me is the workload side of things. Teaching a full load means I have little time in my day to stop, so whatever we decide to do as a means of keeping a record needs to be manageable within my daily structure (I feel confident this applies to most teachers).
I do think that Chris' idea of tracking the behaviours we would expect to see in an agentic student is worth pursuing a little more. For instance, I think there could be a way to write credentials that ask students to do things 'voluntarily' in order to identify this aspect of agency. As we are all aware, the wording of the credentials is essential, so thinking about language that captures agentic behaviours is possibly an avenue to discuss further. Another thing we could build into the system that might make for some interesting data is credentials that a student can achieve more than once. It could be argued that a student who is awarded a 'try again' badge on more than one occasion is demonstrating higher levels of agency than a student who achieves the credential once, moves on, and never attempts it again.
Nina, I would like to raise a thought around one of the barriers you have suggested: when and where we aim our observation of the agentic behaviours. The behaviour does have to occur in a setting in which we can see it happen for it to be noted, and the easiest place to do this is in our classrooms (to me, this is also the most likely place for it to occur). I agree with you when you say some people like to consider things and then contribute their ideas (I am one of these people), but I also think we have to consider how often we currently get students emailing us about a concept presented in class, or even speaking with us at the end of the lesson. If we use the present day as a baseline measure, the answer is 'hardly ever'. I feel that if there were an increase in these behaviours, it would be easy to 'see' and record using the same taxonomy that Chris is suggesting we develop for observing discussions, given how rarely they happen now.
Happy Wednesday!
Renee