In case you haven't noticed, there are hundreds of thousands of apps and instructional videos available on the App Store, Google Play, and YouTube. The Apple App Store alone claims to "featur[e] over 80,000 education apps — designed especially for the iPad — that cover a wide range of subjects for every grade level and learning style."
A search for “addition” in the App Store back in April 2015 yielded 2,197 apps. Seven months later, the same search returns over 3,000 results. How exactly are teachers supposed to find effective and useful digital learning content for each student in a classroom? Keeping up with this constantly evolving ecosystem is exhausting even for advanced technology integrators.
Teachers and districts would need a limitless budget and ample time to sift through every video and app to determine its efficacy. Lacking both, teachers often resort to gut instinct to decide whether an app will be effective and stimulating for their students. Many people assume that teacher intuition and data are at odds; we reject that perspective and advocate for a system that blends a data-driven approach with teacher expertise.
Let’s take a step back and clarify what we mean. According to Dictionary.com, efficacy is the "capacity for producing a desired result or effect." In this context, efficacy in education means examining both the academic effectiveness of digital learning content and how well it engages students. At eSpark, our mission is to deliver best-of-breed digital content aligned to rigorous learning standards. We are charged with evaluating the efficacy of the third-party content our students use to ensure engaging, academically relevant classroom experiences. We'd like to share our approach in hopes that these tips might make your own content search and evaluation a little easier.
To curate digital learning activities, eSpark’s Learning Design team unpacks state standards; sources apps, videos, and assessments aligned to those standards; and evaluates the content against a rubric. The factors on our evaluation rubric include alignment to standards, risk factors, scaffolding of learning, intuitiveness, student engagement, and text complexity.
First, we verify that the content is aligned to the Common Core and that there is no risk of ads popping up while students engage with it. Next, scaffolding of learning and intuitiveness ask whether students can learn from their mistakes, receive feedback, and quickly grasp how to use the activity so they make the most of their time. Finally, student engagement and text complexity ask whether students can have fun and stay engaged with what they’re doing, and whether the app and its text match the grade level at which each student is performing.
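To make the rubric concrete, here is a minimal sketch of how an evaluation like the one above could be recorded and aggregated. The six criteria names come from the post; the 1–5 scale, the function name, and the example ratings are all hypothetical illustrations, not eSpark's actual tooling.

```python
# Criteria taken from the rubric described above; the 1-5 scoring
# scale and everything else here is a hypothetical sketch.
RUBRIC_CRITERIA = [
    "alignment_to_standards",
    "risk_factors",        # e.g. no ads appearing during use
    "scaffolding",
    "intuitiveness",
    "student_engagement",
    "text_complexity",
]

def score_content(ratings: dict) -> float:
    """Average the 1-5 ratings across all rubric criteria.

    Looking up every criterion ensures each piece of content is
    judged on the full rubric (a missing one raises KeyError).
    """
    return sum(ratings[c] for c in RUBRIC_CRITERIA) / len(RUBRIC_CRITERIA)

# Hypothetical example: an app that is well aligned and ad-free
# but weakly scaffolded.
example = {
    "alignment_to_standards": 5,
    "risk_factors": 5,
    "scaffolding": 2,
    "intuitiveness": 4,
    "student_engagement": 4,
    "text_complexity": 3,
}
print(score_content(example))  # averages the six criteria, here ~3.83
```

An average is the simplest possible aggregation; a real rubric might instead treat some criteria (like risk factors) as hard pass/fail gates.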
After identifying high-quality content and incorporating it into our personalized iPad curriculum modules, our team collects quantitative data about student learning at the app and video level using a pre-quiz and a post-quiz. In the eSpark experience, a student takes a pre-quiz and then watches videos, completes activities, and synthesizes learning in performance tasks—all aligned to a particular standard where the student needs targeted practice.
How exactly does this work?
After each activity and quiz, students see a prompt on their iPads asking them to rate the activity with a thumbs up or thumbs down to indicate engagement. Students then take a post-quiz at the end of each quest. Because the quiz measurements occur around each standard, the feedback loop runs one to three weeks and continues throughout the school year. Once we have a sufficient sample size, we can examine aggregate data on how the quest’s mix of apps and videos affected learning growth, alongside the engagement ratings from each activity.
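The feedback loop above can be sketched in a few lines: pre/post quiz scores yield a growth measure per student per standard, and thumbs-up ratings yield an engagement rate. The record fields and numbers below are hypothetical placeholders, not eSpark's data model.

```python
# Hypothetical sketch of the pre/post-quiz feedback loop described
# above. Each record is one student's quest on a single standard.
from statistics import mean

records = [
    {"pre": 40, "post": 75, "thumbs_up": 4, "ratings": 5},
    {"pre": 55, "post": 70, "thumbs_up": 2, "ratings": 4},
    {"pre": 30, "post": 80, "thumbs_up": 5, "ratings": 5},
]

def learning_growth(rec):
    # Growth on the standard: post-quiz score minus pre-quiz score.
    return rec["post"] - rec["pre"]

def engagement_rate(rec):
    # Fraction of the student's activity ratings that were thumbs up.
    return rec["thumbs_up"] / rec["ratings"]

avg_growth = mean(learning_growth(r) for r in records)
avg_engagement = mean(engagement_rate(r) for r in records)
print(avg_growth, round(avg_engagement, 2))
```

With a sufficient sample size, aggregates like these can be compared across different mixes of apps and videos to see which composition drives the most growth.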
The data collected offers insight into which content engages students, which moves the needle academically, and which needs improvement. By forming hypotheses and applying data to restructure strands of curriculum, we’re constantly working to improve the student learning experience and have the greatest impact possible.
It is our hope that finding answers about efficacy in education and disseminating information across the education technology space will have a profound impact on student learning, achievement, and engagement in classrooms across the nation. We are excited for a future where teachers can have insights into the efficacy of individual apps and videos, can confidently use digital resources in their classrooms to improve student learning, and can feel a sense of security in the way that the efficacy of these digital learning materials is measured.
This post was updated on November 10, 2015.