Digital Badges as “Transformative Assessment”

By Dan Hickey
The MacArthur Foundation's Badges for Lifelong Learning competition generated immense interest in using digital badges to motivate and acknowledge informal and formal learning. The 366 proposals submitted in the first round presented a diverse array of functions for digital badges. As elaborated in a prior post, the various proposals used badges to accomplish one or more of the following assessment functions:

Traditional summative functions. These use badges to indicate that the earner previously did something or knows something. This is what the educational assessment community calls assessment of learning.

Newer formative functions. These use badges to enhance motivation, feedback, and discourse for individual badge earners and broader communities of earners. This is what is often labeled assessment for learning.

Groundbreaking transformative functions. These use badges to transform existing learning ecosystems or to allow new ones to be created. Such assessment functions impact both badge earners and badge issuers, and may be intentional or incidental. I believe we should label this assessment as learning.

This diversity of assessment functions persisted among the 22 badge content awardees who were ultimately funded to develop content and issue badges, as well as the various entities associated with the Hive collectives in New York and Chicago, which were funded outside of the competition to help their members develop and issue badges. These awardees will work with one of the three badging platform awardees, who are responsible for creating open (i.e., freely available) systems for issuing digital badges.
Along the way, the Badges competition attracted considerable attention. It certainly raised some eyebrows that the modestly funded program (initially just $2M) was announced by a cabinet-level official at a kickoff meeting attended by the heads of numerous other federal agencies. The competition and the idea of digital badges were mentioned in articles in the Wall Street Journal, the New York Times, and The Chronicle of Higher Education. This attention in turn generated additional interest and helped rekindle the simmering debate over extrinsic incentives. It also led many observers to ask the obvious question: "Will it work?"
This post reviews the reasons why I think the various awardees are going to succeed in their stated goals for using digital badges to assess learning.  In doing so I want to unpack what “success” means and suggest that the initiative will provide a useful new definition of “success” for learning initiatives.  I will conclude by suggesting that the initiative has already succeeded because it has fostered broader appreciation of the transformative functions of assessment.



Success Factors for the DML Badges Initiative
The deep pool of submissions, a rigorous review process, and the infusion of additional resources from the Gates Foundation make it likely that most of the funded initiatives will succeed at their stated goals.  Other success factors include the following:

Networked learning communities at HASTAC and Mozilla that have been set up to support badging communities;

The open source ethos of sharing and collaborating that permeates the initiative, particularly the placement of the Mozilla Foundation at the center of the efforts to define the Open Badge Infrastructure (OBI) protocols (a rough sketch of an OBI-style badge assertion appears after this list);

The inclusion of entrepreneurs, with awards to start-ups both for issuing badges and for building platforms on which to issue them;

New educational frameworks to inform and justify awardees' efforts, including Henry Jenkins' ideas about participatory learning and Mimi Ito's ideas about connected learning;

New collaborative typologies for badges, including the roles for badges led by Philipp Schmidt and Erin Knight at P2PU, the badging systems led by Barry Joseph from Global Kids, and the badge lexicon led by Carla Casilli at Mozilla[1].
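Because the OBI sits at the center of these efforts, it is worth pausing on what the protocol actually passes around. At its heart is a badge "assertion": a structured record, hosted by the issuer, that identifies the earner, describes the badge, and points to the criteria and evidence behind it. The sketch below is illustrative rather than normative; the field names follow the general shape of Mozilla's early assertion drafts, but the schema is still evolving as of this writing, so every name, type, and URL here should be read as an assumption, not as the official specification.

```typescript
// A minimal, illustrative model of an OBI-style badge assertion.
// Field names approximate Mozilla's early drafts; the actual schema
// was still in flux at the time of writing, so treat this as a
// sketch rather than the normative definition.

interface BadgeIssuer {
  name: string;    // human-readable issuer name
  origin: string;  // issuer's web origin, where the assertion is hosted
}

interface BadgeDefinition {
  name: string;         // human-readable badge name
  description: string;  // what the badge represents
  image: string;        // URL of the badge image
  criteria: string;     // URL describing what the earner had to do
  issuer: BadgeIssuer;
}

interface BadgeAssertion {
  recipient: string;   // earner identifier (early drafts used a hashed email)
  badge: BadgeDefinition;
  evidence?: string;   // optional URL pointing at the earner's work
  issued_on?: string;  // ISO date the badge was awarded
}

// A hypothetical assertion an issuer might host for verification;
// all names and URLs below are invented for illustration.
const example: BadgeAssertion = {
  recipient: "sha256$c7ef...",  // hashed, so the earner's email stays private
  badge: {
    name: "Peer Mentor",
    description: "Mentored three newcomers through their first project",
    image: "https://issuer.example.org/badges/peer-mentor.png",
    criteria: "https://issuer.example.org/badges/peer-mentor",
    issuer: { name: "Example Issuer", origin: "https://issuer.example.org" },
  },
  evidence: "https://issuer.example.org/portfolios/earner-123",
  issued_on: "2012-06-15",
};
```

This design choice matters for the argument of this post: because an assertion links out to criteria and evidence rather than simply recording a score, the same badge can do summative, formative, or transformative work depending on how issuers and earners use those links.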

In sum, these factors should allow most of the awardees to succeed in using digital badges to assess learning.  More broadly speaking, their shared success should leave behind a community that is bound together by an emerging set of principles and examples for issuing and awarding badges.

What Will It Really Mean to "Succeed"?
The Digital Media and Learning 2012 request for proposals for badge content and programs included no requirement for documenting learning outcomes. Since I have spent much of my career helping educational innovators meet the stringent expectations for program evaluation at the NSF, DOE, and most other agencies and foundations, this omission was noteworthy.  Proposers were only asked to describe outcomes, including  “learning content, programs, or activities that will be supported by badges,” “skills, competencies, and achievements badges will validate,” “specific identities or roles” that learners might assume, and “opportunities or privileges” that the badges would confer. 
While the RFP asked proposers to describe “existing assessments, if any” that could be used for “tracking or measuring performance,” there was no requirement that formal outcome assessments be included.  Examination of the funded proposals [2] reveals little in the way of formally articulated plans for documenting (a) the learning associated with earning a badge, (b) the learning enhanced by offering badges, or (c) the transformations caused by the introduction of badges.  And none of the proposals even began to consider the evidential, consequential, and systemic validity issues associated with any of the evidence that might be used to document summative, formative, and transformative impact. 
While some observers must have been surprised by the lack of formal evaluation requirements, it made a lot of sense to me.  The actual grants were quite small, and the initiative was quite groundbreaking and practically-oriented.  More importantly, stipulations for formal outcome evaluations would have likely focused on the more salient and measurable summative outcomes, and the corresponding evidential validity issues.  From my perspective, this would have come at the expense of the relatively elusive formative outcomes. This is partly because documenting formative outcomes raises complex issues of consequential validity that assessment scholars like Pamela Moss, Lorrie Shepard, and Samuel Messick have struggled with for decades. Because it is so hard to document formative impact, awardees who tried to evaluate those outcomes would likely have fallen short. What little anecdotal evidence they might have obtained would end up being dismissed by skeptics, allowing critics to argue that their program, and by extension, the larger initiative, had failed.
My point here is that requiring formal evaluation of learning outcomes might have squelched the badges movement before it actually got underway. As everyone agrees, there are currently no “best practices” for issuing and awarding digital badges for learning.  While we all point to initial examples like Global Kids, Stackoverflow.com, and Wikipedia Barnstars, there is currently little actual evidence of learning outcomes associated with these examples.  While some of the DML 2012 Research awardees are likely to generate new evidence of the impact of badges, it is simply too early for most of the DML 2012 Badge Content awardees to formally document summative and formative outcomes, or to grapple with the corresponding evidential and consequential validity issues. 
Some content awardees will soon enough be required to formally document learning outcomes when they seek subsequent funding from more conventional agencies and programs. I hope the assessment community can help prepare them for that time. In the meantime, I am concerned that some content awardees will be pushed to formally evaluate outcomes earlier than they expected, or earlier than they should. For example, it seems possible that the introduction of digital badges by more established awardees (e.g., Girl Scouts or 4-H) may cause those organizations to wrestle with some enduring tensions in their own learning ecosystems. This might lead some stakeholders in those organizations to ask for early evidence that digital badges are worth the trouble. The obvious concern is that premature efforts to formally document summative and formative learning outcomes will draw resources away from, or interfere with, the efforts to obtain those same outcomes.
This raises an even broader concern: that rigorous evaluations of learning outcomes will be carried out in precisely the badging contexts that are, by virtue of those evaluations, least likely to obtain those outcomes. This effect is rooted in the tensions between formative and summative assessment functions. Think of it as an inversion of the well-known "Hawthorne Effect," where outcomes are generated by virtue of the efforts to document them. Essentially, the effort to formally document summative assessment functions transforms the learning ecosystem in ways that are hard to anticipate or even recognize. This interferes with the formative functions that are needed to increase the learning outcomes associated with badges, while undermining the validity of the very evidence that might be obtained to document those outcomes.

Evaluating Transformative Outcomes of Badges
To keep this discussion from getting too complicated, I have so far focused only on evaluating summative and formative learning outcomes. Things get a lot more complicated when considering how assessing learning with badges might be used to transform existing ecosystems or create new ones. This is because the "learning" associated with transformative assessment defies conventional characterizations of learning. The learning associated with transformative outcomes is really systemic change. Such learning is highly contextualized within the social and technological practices that collectively define a specific learning ecosystem. It is certainly possible to use interpretive methods like ethnography and discourse analysis to obtain rigorous evidence of these transformations. It is also difficult. In my experience, the naturalistic orientation of most interpretive educational researchers can obscure the more interventionist goals of transformative assessment functions. Even when systemic transformative outcomes are documented, one is still left with the challenge of linking those transformative outcomes to individual formative and summative outcomes.[3]
This challenge of documenting the systemic impact of assessment practices and linking it to individual outcomes is what Allan Collins and John Frederiksen began exploring in a groundbreaking 1989 paper that introduced the notion of systemic validity. In the ensuing two decades, the primary response to this challenge has consisted of what proponents labeled design-based research (DBR); most recently, William Penuel, Barry Fishman, et al. have introduced a helpful new variant called design-based implementation research (DBIR). In subsequent posts, I will argue that DBR, and particularly DBIR, are ideal for both documenting and enhancing the entire range of learning outcomes associated with digital badges. In short, I believe that design-based educational research methods can provide individual awardees and the broader community with a framework for documenting general and specific principles for designing and awarding badges, while gradually increasing broad learning outcomes and the evidence of those outcomes. But doing so will have to wait until the infrastructure for issuing and awarding badges is established. In the meantime, I hope that badge developers and issuers will consider the positive ways in which their emerging practices might be transforming their learning ecosystems, and search for potentially unexpected (and possibly undesirable) transformations as well.

New Appreciation of Transformative Assessment
As badge developers eventually seek additional support from foundations or investors, they will likely need to craft well-articulated plans for formally documenting the impact of that investment on learning outcomes. Thanks to the continuing efforts of others in the DML community (most notably Penuel and Fishman), it is possible that these funders will be more sensitive to the challenges and risks associated with formal evaluation of learning outcomes. More fundamentally, I predict that the collective efforts of the awardees will result in a coherent set of principles and practices for issuing and awarding digital badges. Accompanying these principles and practices will be self-evident examples of learning ecosystems that have been fundamentally transformed or entirely created by digital badges. This should provide a broadly meaningful way of helping others appreciate the transformative functions of learning assessment. Specific examples are likely to help others appreciate that not all transformative functions can be anticipated, and that some transformations are undesirable. Hopefully, other examples will emerge in which DBR methods have been systematically applied from the very start.
In conclusion, I believe that the Badges initiative has already succeeded, because it has provided a broadly meaningful context for appreciating the transformative functions of assessment.  While I don’t expect that all badge developers will find this information immediately useful, I believe that this post illustrates how the DML initiative has moved this issue beyond the small subset of assessment and measurement scholars who have been grappling with it.  I hope that this information will help awardees who are not specialists in assessment or validity (or even learning) to anticipate and appreciate the complex issues that they will encounter as they go forward, and begin addressing them.  I also hope that readers will send along suggestions for doing so.

Next Up: Before writing about DBR and DBIR, I plan another post about the likely impact of the Badges initiative beyond the awardees, which will introduce the notion of transcendent assessment.




[1] Case in point: Carla Casilli's deliberations over the badging lexicon led to the helpful distinction between learners and earners, which both raises and addresses a huge issue in assessment nomenclature. The more agnostic term earner is more precise; whether an earner has actually learned something is bound up in the social contract between the earner and the issuer.
[2] The winners are listed in this announcement; their original proposals are linked to the summaries of the larger set of Stage 2 winners.
[3] With all due respect, I acknowledge the 2008 book Transformative Assessment by the esteemed assessment scholar Jim Popham.  I have found it useful for thinking about the broader consequences of summative and formative assessment, and I think many awardees will as well.  But the vision of transformative assessment I have in mind goes quite a bit farther than the one outlined in this book.
