Is Interactivity Worth the Cost of Flashy Technology?
ED 650 Current Issue Paper 4
One of my major long-term goals as an instructional design librarian is to lead an ongoing project to develop tutorials for library users on a variety of topics (such as searching the library catalog, searching particular databases, and placing holds or interlibrary loan requests). Over the past 15 years or so, library tutorials have taken many forms, from screenshots with text, to screencasts, to interactive tutorials. The technology used to create them runs the gamut as well: content management systems, simple screencasting software like Jing or Screencast-o-matic, robust screencasting tools, interactive tutorial creators like Captivate and Storyline, and more. New technologies and updated versions of existing ones frequently emerge. With all of these options and evolving tools, how do I choose which approach to take? Anecdotal evidence suggests that interactive tutorials are superior to passive tutorials in terms of both student learning and student preference, but interactivity comes at a high price: software licensing costs and extensive staff time for development (and frequent revision). This paper briefly examines recent research to ask whether that high cost is worth it.
A key problem I discovered in the research on this topic is the lack of a clear definition of what exactly counts as interactive. Some articles considered a tutorial interactive if students merely had to click forward at their own pace to see the next page of text and screenshots. When Craig and Friehs (2013) assert in their abstract that interactivity in tutorials fails to significantly impact student learning compared with a screencast tutorial, they are referring to this click-through text-and-image model. I have real concerns with this definition, and apparently their students did too: “Comments suggest some of the students did not perceive the HTML tutorial as being especially interactive” (p. 300). Only at the end of the article does the reader discover that the authors mean interactivity only in the sense that a click-through web site is more interactive than a screencast video. They cite other research showing that even simply controlling the pace of a tutorial by clicking next can be “enough to increase user attention and is better than passively watching a streaming video” (p. 295). I am not at all surprised that students in this study appeared to learn more effectively from a screencast, which presents information in audio and visual formats simultaneously; dual-coding and cognitive load theories suggest that this sort of learning is ideal.
While I would not argue that a screencast is interactive, I would assert that the interactivity of a click-through text-and-screenshot tutorial is so minimal that no claims about interactivity should be based on that format alone. I consider interactivity to mean active learning built around real-life scenarios: for a tutorial to be truly interactive, students must apply what it is teaching them. For example, if the tutorial demonstrates developing search terms, I want the student to pull search terms out of a research question. Similarly, if the tutorial demonstrates searching a database, I want the student to enter something into the search box and see results.
In their review of tutorial types and best practices, Martin and Martin (2015) state that interactivity involves student “[participation] with the material itself” (p. 47). These authors seem to share my view of interactivity, as do Anderson and Wilson (2009). In their research, students strongly preferred interactive tutorials (ones that include “typing searches, answering questions, etc.”) over passive ones (78% to 22%) and performed marginally better with interactive tutorials when the content was otherwise the same (p. 11). The passive tutorial in that experiment was a click-through text-and-screenshot guide similar to the “interactive” tutorial in Craig and Friehs. It would be interesting to see similar research comparing a truly interactive tutorial with a screencast, which has no interactivity at all where the click-through tutorial has at least the bare minimum. Perhaps the difference in outcomes would be more significant.
It seems clear from the research I read that students prefer truly interactive tutorials over more passive types. However, more research is needed before conclusive statements can be made about whether interactive tutorials are significantly more effective for student learning. Other research has shown a “disconnect between satisfaction and actual learning,” meaning that liking a tutorial does not guarantee that learning is taking place (as described in Craig & Friehs, 2013, p. 294). Until stronger research establishes the impact of interactivity on learning, I believe strongly in the theory of active learning, and I think it is worth the time and effort to incorporate active learning into tutorials as far as the technology available to me allows.
Anderson, R. P., & Wilson, S. P. (2009). Quantifying the effectiveness of interactive tutorials in medical library instruction. Medical Reference Services Quarterly, 28(1), 10-21.
Craig, C. L., & Friehs, C. G. (2013). Video and HTML: Testing online tutorial formats with biology students. Journal of Web Librarianship, 7(3), 292-304.
Martin, N. A., & Martin, R. (2015). Would you watch it? Creating effective and engaging video tutorials. Journal of Library and Information Science in Distance Learning, 9(1-2), 40-56.