A Component and Process Analysis of the Impact of Enhanced Self-Instructional Packets on Behavioral Programming
Advisor: Hayes, Steven C.
Staff training is widely understood to be of great importance for any ABA agency, but it is often difficult to find the resources needed for sufficient staff training (LeBlanc, Gravina, & Carr, 2009). Frequent and effective training is viewed as an instrumental component for achieving better outcomes and for fostering staff retention (Nosik, Williams, Garrido, & Lee, 2013), and as key to better staff morale, productivity, and efficiency (Carr et al., 2013). These demands only add to the challenges of delivering proper training to staff across the range of different ABA clinics, settings, and cultures. Research on staff training has focused on experimentally comparing the effect of each component of a training package in an effort to identify the components most responsible for behavior change (Cooper, Heron, & Heward, 2007; Ward-Horner & Sturmey, 2010). The aim of the current research was to extend the existing literature on staff training by investigating whether an enhanced self-instructional training packet would increase participants' procedural fidelity in applying taught skills, replicating Al-Nasser et al. (2018). Eighty-eight undergraduate students at the University of Nevada, Reno (UNR) with no behavior analysis background participated in this study.

Results showed that the enhanced training packet combining pictures and simplified language produced the highest procedural fidelity, with 91% of participants (20 out of 22) scoring at an accuracy of 70% or higher. The simplified-language packet was second, with 63.6% (14 out of 22) scoring at 70% or higher. The pictures-only packet was third (41%; 9 out of 22 scoring at 70% or higher), and the standard training packet was fourth (41%; 9 out of 22). To account for any differences that might be found in knowledge processes, the extent, flexibility, and speed of verbal knowledge were assessed.
Results showed that the MT-IRAP was successful in measuring flexibility when participants deliberately responded incorrectly to Dunce items and correctly to Einstein items. A secondary question investigated in this study was whether implicit knowledge relates to overt behavior above and beyond the extent of knowledge. This was assessed by examining whether the interference effects measured by the MT-IRAP (the difference in latency between deliberately correct and deliberately incorrect answers, expressed as the D score) relate to procedural fidelity scores and, if so, whether they do so over and above untimed tests of knowledge. Results showed no indication that implicit knowledge was impacted by training.