The Multimedia Battery for Assessment of General-Domain and Specific-Domain Skills in Reading is a reliable and valid multimedia battery designed to assess cognitive and basic reading skills. It enables the generation of a comprehensive cognitive and reading performance profile, which is particularly beneficial for children with dyslexia.
The acquisition of reading skills is an intricate process that demands the cultivation of various domain-general and domain-specific abilities. It is therefore unsurprising that many children struggle to read at grade level, particularly when they face challenges spanning multiple abilities across both domains, as is observed in individuals with reading difficulties. Strikingly, although reading difficulties are among the most prevalent neurodevelopmental disorders affecting school-aged children, most available diagnostic tools lack a comprehensive framework for assessing the full spectrum of cognitive skills linked to dyslexia, and computerized options are scarce. Notably, few tools with these features are currently available for Spanish-speaking children. The aim of this study was to delineate the protocol for diagnosing Spanish-speaking children with reading difficulties using the Sicole-R multimedia battery. This tool for elementary grades evaluates the cognitive abilities that the scientific literature has linked to dyslexia, reflecting the observation that individuals with dyslexia typically exhibit deficits in several of the cognitive areas the tool assesses. The robust internal consistency and multidimensional internal structure of the battery were demonstrated. The battery has proven to be a fitting tool for diagnosing children with reading difficulties in primary education, offering a comprehensive cognitive profile that is valuable not only for diagnostic purposes but also for tailoring individualized instructional plans.
Dyslexia is a neurodevelopmental disorder characterized by difficulties in accurate and/or fluent word recognition and by poor spelling and decoding abilities; it manifests as an unexpected and persistent difficulty in acquiring efficient reading skills despite conventional instruction, adequate intelligence, and sociocultural opportunity1. This neurobiological disorder often presents as challenges in reading, spelling, and writing, primarily due to phonological deficits2,3. The importance of early identification of dyslexia cannot be overstated, as it allows for timely intervention and support4,5. When a student does not progress beyond Tier 3 in a response-to-intervention model, it becomes essential to conduct a more comprehensive assessment of both domain-general and domain-specific abilities associated with dyslexia, as highlighted by the scientific literature. The development of the technique presented here is grounded in the necessity of conducting thorough evaluations to ensure that appropriate interventions and support are provided. Moreover, previous studies underscore the utility of technology-based screening tools, such as web applications and computer games, in facilitating effective screening processes6,7. These studies collectively highlight the multifaceted nature of dyslexia, emphasizing the need for comprehensive assessment and intervention strategies to address the diverse cognitive profiles of individuals with dyslexia. Despite the prevalence of dyslexia among school-aged children, most available diagnostic tools lack a framework that comprehensively assesses both domain-general and domain-specific skills. Moreover, there are few computerized options, particularly for Spanish-speaking populations. This multimedia battery addresses these gaps by leveraging technology to facilitate a detailed assessment of the cognitive skills linked to dyslexia.
Theoretical perspectives and cognitive deficits in Dyslexia
Various theoretical models, including phonological, rapid auditory processing, visual, magnocellular, and cerebellar theories, aim to explain the causes of dyslexia and inform interventions (see8 for a review). The phonological theory attributes dyslexia to difficulties in processing language sounds9, while the rapid auditory processing theory links dyslexia to deficits in perceiving rapidly changing sounds10. The visual theory highlights the visual aspects of reading difficulties, and the magnocellular theory points to impairments in visual and auditory processing pathways11. The cerebellar theory suggests that dyslexia arises from cerebellar impairments affecting motor control and cognitive functions12. Nicolson and Fawcett's Delayed Neural Commitment (DNC) framework posits that slower skill acquisition and delayed neural network development are central to dyslexia. Recent models, such as the multiple deficit model, propose that dyslexia is a complex disorder influenced by genetic, cognitive, and environmental factors13,14,15. For instance, Ring and Black14 support the multiple deficit model, showing that both phonological and cognitive processing deficits contribute to the heterogeneity of dyslexia. Soriano-Ferrer et al.15 conducted a study with Spanish-speaking children with developmental dyslexia (DD) and found significant impairments in naming speed, verbal working memory, and phonological awareness (PA). Similarly, Zygouris et al.16 and Rauschenberger et al.6 underscore the importance of cognitive screening tools in identifying these deficits, with dyslexic individuals consistently scoring lower than typically achieving peers.
Examining technological approaches in Dyslexia screening: Insights from research studies
Research on dyslexia screening has evolved along three main approaches: early detection strategies, multifaceted screening methods combining various assessments, and the integration of technology for enhanced efficiency17. Politi-Georgousi's18 recent systematic review highlights a shift toward applications for intervening in dyslexia symptoms rather than for screening, in line with the use of technology to improve reading skills in dyslexic students. Various tools exist, such as the Dyslexia Early Screening Test (DEST) by Fawcett and Nicolson, which assesses speed, phonological skills, motor skills, cerebellar function, and knowledge19. Computer-based tools have advanced, including a web application assessing reading and cognitive skills in Greek children20 and tools by Hautala et al.21 and Rauschenberger et al.6 that use gaming and machine learning for the early identification of dyslexia. Ahmad et al. integrated gaming with neural networks, achieving 95% accuracy in detection22. Studies across different orthographies underscore the importance of phonological awareness and rapid automatized naming in dyslexia identification23,24.
Insights into Dyslexia among Spanish-speaking children
The study of dyslexia in Spanish-speaking children has been significantly advanced through the use of Sicole-R technology. Jiménez et al. demonstrated its effectiveness in assessing dyslexia across age groups, particularly in distinguishing between dyslexic and typically achieving readers based on phonological and syntactic processing during early elementary years25. Guzmán et al. investigated naming speed deficits in dyslexic children with phonological challenges, highlighting interactions between dyslexia and naming speed measured through tasks such as letter-RAN and number-RAN26. Further studies by Jiménez et al. explored phonological awareness deficits across different syllable structures27, while Ortiz et al. investigated speech perception deficits among Spanish children with dyslexia, revealing impairments in speech perception development regardless of phonetic contrast or linguistic unit28,29. Jiménez et al. investigated the double-deficit hypothesis of dyslexia30, followed by analyses of cognitive processes and gender-related disparities in dyslexia prevalence31,32. Rodrigo et al. explored lexical access among Spanish dyslexic children33, and Jiménez et al. scrutinized syntactic processing deficits34. Finally, Jiménez et al. studied phonological and orthographical processes in dyslexic subtypes, highlighting differences in orthographic route efficiency35. These studies collectively enhance our understanding of the cognitive and linguistic challenges of dyslexia in Spanish-speaking populations.
These studies share several common characteristics in terms of the age and background of the participating children. The children ranged in age from 7 to 14 years. Most studies focused on primary school children aged 7 to 12 years, except those that included children up to 14 years old, providing a sample that spans from the early school years to preadolescence31,32. The participating children were primarily from the Canary Islands in Spain, although some studies included samples from other regions of Spain and from Guatemala31,32. Participants were recruited from both public and private schools in urban and suburban areas, and the socioeconomic levels represented ranged from lower-middle and working class to middle class.
Together, these inquiries significantly advance our understanding of dyslexia's complexities, contributing to the field of dyslexia research. Adapted for use across multiple Ibero-American countries, including Spain, Guatemala, Chile, and Mexico, the tool facilitates the assessment of diagnostic accuracy and precision in a diverse Spanish-speaking sample for this study.
This study aimed to delineate a protocol for diagnosing Spanish-speaking children with reading difficulties using a specialized multimedia battery. The primary goal is to provide a comprehensive assessment tool that evaluates both domain-general and domain-specific skills associated with dyslexia.
Experimental setup overview
The SICOLE-R was programmed in the Java 2 Platform Standard Edition (J2SE) and uses the HSQL database engine as its database. The software includes six main modules to be evaluated: 1) perceptual processing, which includes voicing, place of articulation, and manner of articulation tasks; 2) phonological processing, which includes phoneme isolation, phoneme deletion, phoneme segmentation, and phoneme blending tasks; 3) naming speed, which includes naming speed tasks for numbers, letters, colors, and pictures; 4) orthographic processing, which includes morphological comprehension of lexemes and suffixes and homophone comprehension tasks; 5) syntactic processing, including gender, number, function word, and grammatical structure tasks; and 6) semantic processing, which includes reading comprehension tasks based on informative and narrative texts. Instructions for each task, accompanied by one or two trials (depending on the task) and a demonstration, are delivered by a pedagogical agent prior to the initiation of the testing phase. The application protocol for each task is illustrated here.
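The six-module structure described above can be sketched as a simple mapping, which is convenient when organizing exported scores for analysis. This is an illustrative sketch only: the module and task labels below are hypothetical identifiers, not the names used inside the actual software.

```python
# Sketch of the SICOLE-R module/task structure described in the text.
# All identifiers are illustrative labels, not the software's own names.
SICOLE_R_MODULES = {
    "perceptual_processing": ["voicing", "place_of_articulation",
                              "manner_of_articulation"],
    "phonological_processing": ["phoneme_isolation", "phoneme_deletion",
                                "phoneme_segmentation", "phoneme_blending"],
    "naming_speed": ["number_ran", "letter_ran", "color_ran", "picture_ran"],
    "orthographic_processing": ["morphological_comprehension",
                                "homophone_comprehension"],
    "syntactic_processing": ["gender", "number", "function_words",
                             "grammatical_structure"],
    "semantic_processing": ["informative_text", "narrative_text"],
}

def tasks_in(module):
    """Return the task list for one of the six evaluation modules."""
    return SICOLE_R_MODULES[module]
```

A structure like this makes it straightforward to iterate over all tasks when aggregating per-module scores from the exported database.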
Prior to administering the multimedia battery to the study sample, adaptations were made to the Spanish language modality of each country (i.e., Mexico, Guatemala, Ecuador, and Chile), including adjustments to vocabulary, images, and other relevant content. The administration conditions were the same across all Latin American countries. The administration environment within the school had to be quiet and free from noise, distractions, and interruptions. Administration of the multimedia battery comprised three to four sessions of 30 min each, depending on the student's ability and age. Because the database is compatible with most spreadsheet and statistical data processing systems, the evaluator can analyze the results of each child on each task. Concerning data collection, two distinct task types were employed: 1) tasks in which the examiner records students' oral performance, noting successes and errors using an external mouse, and 2) tasks requiring students to independently select options by clicking on them.
This protocol was conducted in accordance with the guidelines provided by the Comité de Ética de la Investigación y Bienestar Animal (Research Ethics and Animal Welfare Committee, CEIBA) at Universidad de La Laguna (ULL). The data were collected at different times according to the curriculum of each country, capturing information exclusively from students whose educational administrations, schools, and parents provided consent. The test battery used in this study is registered as intellectual property and can be accessed through a transfer agreement with the ULL. For more information on how to obtain the test battery, interested parties can contact the Office of Knowledge Transfer (OTRI) at ULL.
1. SICOLE-R installation and preparation
2. SICOLE-R tasks
3. Data analysis
Sample study
The sample included 881 participants from Spain (N = 325), Mexico (N = 169), Guatemala (N = 227), and Chile (N = 160), all of whom were native Spanish speakers. The sample was divided into two groups: 451 in the reading disability (RD) group and 430 in the normally achieving readers (NAR) group. Children with special educational needs (those requiring support and specific educational attention due to sensory impairments, neurological issues, or other conditions) were excluded, because such factors are typically used as exclusionary criteria for learning disabilities or severe behavioral disorders, either temporarily or for the duration of schooling. Participants in the RD group were selected based on criteria such as an IQ equal to or greater than 80 combined with a word reading time above the 75th percentile, a pseudoword reading time above the 75th percentile, or a pseudoword reading accuracy below the 25th percentile. The NAR group was selected based on comparable criteria, with the addition of a PROLEC36 comprehension task score above the 25th percentile. All percentiles were determined according to participants' grade levels. The classification of participants into these groups relied on word reading and pseudoword reading tasks, each administered as a separate block. In both blocks, participants were instructed to read aloud the presented verbal stimuli as quickly as possible. The word reading block included 32 stimuli, and the pseudoword reading block contained 48 stimuli. Word familiarity was controlled using the criteria outlined by Guzmán and Jiménez37. Measures of hits, errors, and latency times were recorded, with latency measured from the appearance of each item to the onset of the student's vocal response.
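The RD inclusion rule described above can be expressed as a small screening function. This is a sketch under stated assumptions: the argument names are hypothetical, the percentiles are assumed to be grade-normed as in the study, and the NAR rule (comparable criteria plus the PROLEC comprehension cutoff) is not reproduced here.

```python
def is_rd_candidate(iq, word_rt_pct, pseudo_rt_pct, pseudo_acc_pct):
    """Sketch of the RD group inclusion rule described in the text.

    Arguments are grade-normed percentiles (hypothetical names):
    reading-time percentiles (higher = slower) for words and pseudowords,
    and the pseudoword reading-accuracy percentile (lower = less accurate).
    """
    if iq < 80:                        # an IQ of 80 or above was required
        return False
    return (word_rt_pct > 75           # slow word reading, or
            or pseudo_rt_pct > 75      # slow pseudoword reading, or
            or pseudo_acc_pct < 25)    # inaccurate pseudoword reading
```

For example, a child with adequate IQ whose pseudoword accuracy falls below the 25th percentile would meet the rule even with typical reading speed.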
The word reading block demonstrated high internal consistency (α = .89), and the pseudoword reading block was even higher (α = .91). The distribution of NAR and RD students across grade levels was as follows: second grade, 30 NAR and 27 RD; third grade, 100 NAR and 88 RD; fourth grade, 100 NAR and 116 RD; fifth grade, 100 NAR and 116 RD; and sixth grade, 100 NAR and 104 RD. To assess age differences in the total sample and within each grade level, one-way ANOVA tests were conducted; the results revealed no significant differences, with all p values exceeding 0.05. Similarly, the distribution of the gender variable was examined using the chi-square test, with all p values also exceeding 0.05.
Descriptive statistics
Table 1 presents the descriptive statistics (mean and standard deviation) for age and the assessment measures. The data are categorized by gender, distinguishing between normally achieving readers and those with reading disabilities across the various countries. The results showed positive and statistically significant correlations among a large number of assessment measures (see Table 2). The correlations spanned from weak to strong: correlations below 0.3 are considered weak, those between 0.3 and 0.5 moderate, and those of 0.5 or above strong. For example, a weak correlation (r < 0.3) was observed between deletion and homophone comprehension (r = 0.270). Moderate correlations (0.3 ≤ r < 0.5) included blending-segmentation (r = 0.354), grammatical structure-number (r = 0.463), and picture RAN-color RAN (r = 0.442). Strong correlations (r ≥ 0.5) included functional words-grammatical structure (r = 0.512) and number-gender (r = 0.642).
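The strength bands used above can be captured in a small helper function (a minimal sketch; the band labels simply follow the thresholds stated in the text):

```python
def correlation_strength(r):
    """Classify a correlation coefficient by the bands used in the text:
    weak (|r| < 0.3), moderate (0.3 <= |r| < 0.5), strong (|r| >= 0.5)."""
    r = abs(r)
    if r < 0.3:
        return "weak"
    if r < 0.5:
        return "moderate"
    return "strong"
```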
Table 1: Descriptive statistics of the measures. This table provides the mean, standard deviation, and other descriptive statistics for the various measures included in the digital tool. AGE: Age; NAR: Normally Achieving Readers; RD: Reading Disability; SEG: Segmentation; BLN: Blending; ISO: Phoneme isolation; DEL: Phoneme deletion; HOM: Homophone comprehension; R&S: Root & Suffixes; GEN: Gender; NUM: Number; FW: Functional words; GRS: Grammatical structure; EXT: Expositive text; NAT: Narrative text; VOC: Voicing of articulation; MOA: Manner of articulation; POA: Place of articulation; DIG: Digit RAN; LET: Letter RAN; COL: Color RAN; PIC: Picture RAN. Please click here to download this Table.
Table 2: Correlation coefficients of the measures. This table shows the correlation coefficients between different measures of the digital tool, indicating the strength and direction of the relationships between these measures. SEG: Segmentation; BLN: Blending; ISO: Phoneme isolation; DEL: Phoneme deletion; HOM: Homophone comprehension; R&S: Root & Suffixes; GEN: Gender; NUM: Number; GRS: Grammatical structure; FW: Functional words; EXT: Expositive text; NAT: Narrative text; VOC: Voicing of articulation; MOA: Manner of articulation; POA: Place of articulation; DIG: Digit RAN; LET: Letter RAN; COL: Color RAN; PIC: Picture RAN. Please click here to download this Table.
Confirmatory factor analysis
CFA was conducted to assess the proposed factor structure of the multimedia battery. The model includes one second-order factor and six latent variables, each representing a distinct module of the multimedia battery. These constructs are phonological awareness (phonemic segmentation, isolation, blending, and deletion tasks), the morpho-orthographic module (root and suffixes and homophone comprehension tasks), syntax (gender, number, grammatical structure, and functional word tasks), speech perception (voicing, manner of articulation, and place of articulation tasks), reading comprehension (expositive text and narrative text comprehension tasks), and rapid automatized naming (letter, color, digit, and picture RAN tasks). To place all tasks in the multimedia battery on a common scale (ranging from 0 to 1), we used proportion of maximum scaling (POMS)38. POMS scores were also computed for the time-based tasks (roots and suffixes).
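Proportion of maximum scaling rescales each task to the interval [0, 1] by dividing each score's distance from the observed minimum by the observed range. The study applied this transformation within its R workflow; the standalone function below is an illustrative sketch of the computation itself.

```python
def poms(scores):
    """Proportion of maximum scaling: (x - min) / (max - min) per score,
    mapping a task's scores onto a common 0-1 range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                 # degenerate case: no variability in the task
        return [0.0 for _ in scores]
    return [(x - lo) / (hi - lo) for x in scores]
```

For example, raw scores of 2, 4, and 6 rescale to 0, 0.5, and 1, so accuracy-based and time-based tasks become directly comparable in the factor model.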
Statistical analyses and graphical presentations were performed using R version 4.3.139, employing the lavaan40, semTools41, and ggplot242 packages. Model fit was assessed using various goodness-of-fit indices. Although the chi-square goodness of fit was significant, χ2(df) = 632.01, p < .001, which suggests a discrepancy between the hypothesized model and the observed data, it is important to note that the chi-square test is sensitive to sample size43. Therefore, additional fit indices were considered.
The comparative fit index (CFI) yielded a value of .961, exceeding the commonly accepted threshold of .95 and indicating a good fit to the data. The root mean square error of approximation (RMSEA) was .038, indicating a close fit of the model to the data. The standardized root mean square residual (SRMR) was .034, below the recommended value of .05, also indicating a good fit. Factor loadings for the items ranged from .36 to .81, with all items loading significantly on their respective factors. The total average variance extracted (AVE) exceeded .50, suggesting adequate convergent validity, and the total composite reliability (CR) was above .80, indicating good internal consistency reliability.
In conclusion, the results of the CFA supported the proposed factor structure, demonstrating good model fit, convergent validity, and reliability. Please refer to the graphical representation of the model in Figure 1.
Figure 1: Confirmatory factor analysis. This figure illustrates the results of the confirmatory factor analysis performed on the digital tool, highlighting the factor loadings and the relationships between different cognitive processing measures. Abbreviations: SIC= Sicole-R; PA= Phonological awareness; SEG= Segmentation; BLN= Blending; ISO= Isolation; DEL= Deletion; MO= Morphological processing; HOM= Homophone comprehension; R&S= Root and suffixes; SYN= Syntactic processing; GEN= Gender; NUM= Number; GRS= Grammatical structure; FW= Functional words; RC= Reading comprehension; EXT= Expositive text; NAT= Narrative text; SP= Speech perception; VOC= Voicing of articulation; MOA= Manner of articulation; POA= Place of articulation; RAN= Rapid Automatized Naming; NUM= Number RAN; LET= Letter RAN; COL= Color RAN; PIC= Picture RAN. Please click here to view a larger version of this figure.
Measurement invariance analyses were used to test whether the factor structure of the multimedia battery was stable across genders. Testing for measurement invariance consists of a series of model comparisons that impose increasingly stringent equality constraints44. We carried out the statistical analyses with the same software and applied the same model fit criteria as in the previous CFA. We successively constrained the parameters representing the configural, metric (loadings), scalar (intercepts), and strict (residuals) structures45. A poor fit in any of these models suggests that the aspect being constrained does not operate consistently across groups. Invariance was determined jointly when the χ2 difference test (χ2D) was nonsignificant (p > 0.05) and ΔCFI was < 0.01. A summary of the fit indices of the models and the differences between them is shown in Table 3.
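The joint decision rule stated above can be written out explicitly. This is a sketch of the criterion only, not of the lavaan/semTools model fitting; in practice, ΔCFI is often given more weight with large samples because the χ2 difference test, like χ2 itself, is sensitive to sample size.

```python
def invariance_supported(chisq_diff_p, delta_cfi,
                         alpha=0.05, cfi_criterion=0.01):
    """Joint criterion described in the text: the chi-square difference
    test is nonsignificant (p > alpha) AND the change in CFI between the
    nested models is smaller than the cutoff (default .01)."""
    return chisq_diff_p > alpha and abs(delta_cfi) < cfi_criterion
```

Applied to the comparisons reported below, the metric (p = .09, ΔCFI = .001) and scalar (p = .369, ΔCFI = .001) steps satisfy both parts of the joint rule, whereas the strict step satisfies only the ΔCFI part.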
Table 3: Invariance model fit indices. This table presents the fit indices for testing the gender invariance of the model, assessing how well the digital tool maintains consistency in its structure and measurement properties across male and female groups. These indices ensure the tool's reliability and validity by confirming its ability to measure cognitive processes consistently across diverse gender groups. Please click here to download this Table.
The configural model, which constrained only the relative configuration of variables in the model to be the same in both groups, had an adequate fit to the data: χ2(292) = 745.970, p < .001, CFI = .964, SRMR = .034, RMSEA = .036, 90% CI (.033, .038). The metric invariance model constrained the configuration of variables and all factor loadings to be constant across groups. The fit indices were comparable to those of the configural model: χ2(310) = 768.56, p < .001, CFI = .963, SRMR = .037, RMSEA = .036, 90% CI (.033, .038). The invariance of the factor loadings was supported by the nonsignificant difference tests that assessed model similarity: χ2D(18) = 26.30, p = .09; ΔCFI = .001. In the scalar invariance model, the configuration, factor loadings, and indicator means/intercepts were constrained to be the same for each group. The fit indices remained adequate: χ2(322) = 787.50, p < .001, CFI = .963, SRMR = .038, RMSEA = .035 (90% CI = .032, .038). The difference tests that evaluated model similarity suggested factorial invariance: χ2D(12) = 13.00, p = .369; ΔCFI = .001. Finally, in the strict invariance model, the configuration, factor loadings, indicator means/intercepts, and residuals were constrained to be the same for each group. The fit indices remained adequate: χ2(341) = 798.20, p < .001, CFI = .961, SRMR = .039, RMSEA = .035 (90% CI = .032, .038). Although the χ2 difference test was significant, χ2D(11) = 25.81, p < .001, the ΔCFI of .002 remained below the .01 criterion, supporting strict invariance.
Diagnostic accuracy
To evaluate the accuracy and discriminative capacity of the multimedia battery, receiver operating characteristic (ROC) analyses were conducted. ROC analysis aids in determining the optimal threshold (cutoff value) for a continuous-scale assessment test, balancing sensitivity (the ability to correctly identify true positives) and specificity (the ability to correctly identify true negatives). Additionally, ROC analysis was used to assess the ability of the test to discriminate between the RD and NAR groups. To carry out the analysis, z scores were calculated per grade for each task of the multimedia battery. The omnibus score consisted of the sum of the z scores. Statistical analyses and graphical presentations were carried out using R version 4.3.139, with the pROC46 and ggplot242 packages. In terms of diagnostic accuracy, the multimedia battery exhibited an area under the curve (AUC) of 94.8% [95% CI: 93.31%-96.24% (DeLong)] and a sensitivity of .91 (Figure 2). The ROC curves by grade showed the following indices: 2nd grade, AUC = 96.1% [95% CI: 91.54%-100% (DeLong)], Se = .96, Sp = .90; 3rd grade, AUC = 95.3% [95% CI: 92.43%-98.18% (DeLong)], Se = .76, Sp = .72; 4th grade, AUC = 93.4% [95% CI: 90.2%-96.66% (DeLong)], Se = .92, Sp = .84; 5th grade, AUC = 95.9% [95% CI: 93.11%-98.75% (DeLong)], Se = .90, Sp = .95; 6th grade, AUC = 94.4% [95% CI: 91.11%-97.69% (DeLong)], Se = .92, Sp = .91.
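The quantities computed by pROC can be illustrated with a self-contained sketch: AUC via the pairwise-comparison (Mann-Whitney) formulation, and an optimal cutoff via Youden's J statistic (sensitivity + specificity − 1). This assumes that a higher omnibus z score indicates greater reading difficulty (label 1 = RD); it is an illustration of the concepts, not the pROC/DeLong implementation used in the study.

```python
def auc(scores, labels):
    """AUC as the probability that a randomly chosen positive case scores
    above a randomly chosen negative case (ties count as one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    total = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5
    return total / (len(pos) * len(neg))

def youden_threshold(scores, labels):
    """Return (threshold, J): the cutoff maximizing Se + Sp - 1,
    classifying a case as positive when its score >= threshold."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s >= t)
        fn = sum(1 for s, l in zip(scores, labels) if l == 1 and s < t)
        tn = sum(1 for s, l in zip(scores, labels) if l == 0 and s < t)
        fp = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

On a toy set of omnibus scores with perfect separation between NAR (label 0) and RD (label 1), the AUC is 1.0 and the Youden-optimal cutoff sits at the lowest RD score.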
Figure 2: ROC curve analysis. This figure presents the ROC curve analysis, which shows the diagnostic accuracy of the digital tool by plotting the true positive rate against the false positive rate at various threshold settings. The x-axis of the ROC curve shows specificity, and the y-axis shows sensitivity. Abbreviations: ROC= receiver operating characteristic curve; AUC= area under the curve. Please click here to view a larger version of this figure.
In this study, confirmatory factor analysis (CFA) was employed to evaluate the factor structure of the Sicole-R battery, comprising one second-order factor and six latent variables representing different modules. The results indicated good model fit, convergent validity, and reliability, confirming the efficacy of the battery in assessing a comprehensive set of cognitive and reading skills that are critical for individuals with dyslexia. Importantly, the consistent performance of the digital tool across diverse demographic groups within Spain, Mexico, Guatemala, and Chile suggests its potential utility in educational and clinical settings across various Spanish-speaking regions.
This study contributes to the literature by demonstrating how multimedia batteries can be used to identify the specific cognitive deficits associated with dyslexia. The battery's ability to pinpoint specific areas of difficulty, such as phonological awareness, speech perception, naming speed, orthographic processing, syntactic processing, and reading comprehension, enables tailored interventions that address individual student needs. Moreover, the demonstrated consistency of the battery across genders underscores its utility in promoting equitable assessment practices, thereby supporting inclusive education initiatives aimed at providing equal opportunities for all students.
Although previous tools with similar characteristics were designed for use in the English language, such as the Dyslexia Early Screening Test (DEST) proposed by Fawcett and Nicolson17, which assesses multiple domains, this study demonstrated the applicability and effectiveness of the multimedia battery in a different linguistic context. Throughout the analysis, several critical steps were undertaken within the protocol to ensure the accuracy and validity of the results. These steps included data preprocessing, model specification, and parameter estimation, aligning with studies such as Hautala et al.19 that have demonstrated the reliability of computer-based tools in identifying students with reading difficulties. Additionally, other studies, such as those by Protopapas et al.20 and Pennington et al.12, have explored the implementation of computer-based screening tools for dyslexia identification, underscoring the importance of harnessing technology to enhance assessment strategies. Furthermore, the current study aligns with recent advancements in dyslexia research and psychometric assessment, emphasizing the relevance of the battery in contemporary contexts. Studies conducted by Jiménez et al.23 and Guzmán et al.24 have highlighted the critical need for robust assessment tools tailored to diverse linguistic backgrounds, underscoring Sicole-R's efficacy in filling this gap for Spanish-speaking populations. Moreover, the integration of digital technologies in dyslexia assessment, as discussed by Rauschenberger et al.6, reinforces the methodological rigor demonstrated in this study's approach to factor analysis and validation.
Despite the methodological rigor employed in this study, it is important to acknowledge certain limitations inherent in the analysis. One limitation concerns the generalizability of the findings, as the sample consisted of a specific demographic group, potentially limiting the applicability of the results to broader populations. Additionally, while efforts were made to ensure measurement invariance across genders, other demographic factors, such as age or socioeconomic status, were not explicitly accounted for in the analysis, which may have influenced the results.
Future research could explore the battery's adaptation to different Spanish dialects and its integration into broader educational frameworks to enhance identification strategies. Future work should also address the limitations identified in this study, such as testing the multimedia battery in more diverse populations and investigating its longitudinal and predictive validity for long-term educational outcomes. Additionally, a promising line of research has been highlighted in a recent review by Jin et al.47, who conducted a comprehensive review of scientific databases covering studies published between 2018 and 2023; their findings suggest that neurobiological technology assessment is emerging as a promising trend in advancing the diagnosis of dyslexia.
In conclusion, the findings of this study highlight the multimedia battery as a valuable tool for educators, ultimately facilitating more effective support and improved outcomes for students with dyslexia. By incorporating these contemporary insights, the findings contribute to the evolving discourse on dyslexia assessment, offering practitioners and researchers updated perspectives on effective diagnostic frameworks. To our knowledge, this study is among the first to validate a comprehensive multimedia battery tailored specifically for Spanish-speaking populations, filling a critical gap in the diagnostic tools currently available for dyslexia.
The authors have nothing to disclose.
We gratefully acknowledge the support provided by the Programa de la Agencia Española de Cooperación con Iberoamérica (AECI), enabling the adaptation of the technological tool Sicole-R-Primaria to the Spanish language variant of different countries within the Ibero-American space through the projects Evaluación de procesos cognitivos en la lectura mediante ayuda asistida a través de ordenador en población escolar de educación primaria (Assessment of Cognitive Processes in Reading through Computer-Assisted Aid in Primary School Student Population) in Guatemala (ref.: A/3877/05), Ecuador (ref.: C/030692/10), México (ref.: A/013941/07), and Chile (ref.: A/7548/07). Additionally, we would like to express our sincere gratitude to the Inter-American Development Bank (IDB) for their financial support toward the Ministry of Education (MEDUCA) of Panamá, with the Organization of Ibero-American States for Education, Science and Culture (OEI) acting as an intermediary. This funding has enabled the adaptation of the Sicole-R for use on both computers and tablets. We are also grateful for the support provided within the framework of Program PN-L1143; 4357/OC-PN, particularly the Technical Support for Facilitator Training and Review of Educational Resources. Additionally, we extend our appreciation for the External Products and Services Contract (PEC), which is aimed at offering specialized training to facilitate the detection, identification, and early intervention of Panamanian students who may be at risk of experiencing difficulties in reading, writing, and mathematics. For all the projects mentioned above, the first author served as the principal investigator.