What are Scored Labs?
(And Why Do They Matter for Skill Validation?)
Scored labs are hands-on, virtual training labs in which a learner’s work is graded. They give you the ability to objectively validate a learner’s actions and skills against real on-the-job tasks and outcomes.
With this item type, you can task learners to “show what they know” in a live environment that measures and scores their actions against desired outcomes.
Traditionally, grading has been a time-intensive manual process, but with the growing accessibility of powerful scoring engines, organizations are automating it. As a result, learning and development stakeholders can now more quickly and effectively gauge a learner’s skill level, provide additional coaching and determine whether they are job ready.
There are also performance labs, a rapidly growing hands-on item type used in high-stakes skills validation and performance-based testing (PBT). We’ll cover these in a separate upcoming article.
Why are scored labs a force multiplier for learning and development?
The COVID-19 pandemic accelerated an already urgent learning and development (L&D) challenge: building the right capabilities for the future of the workforce. According to the World Economic Forum, more than one billion people will need significant digital reskilling by 2030. This is driving organizations worldwide that want to thrive to find more effective methods of assessing, developing and verifying employees’ technical skills.
To do this, organizations are looking to revamp their training programs, supplementing traditional approaches to training (i.e., PowerPoint presentations, video courses, lectures, etc.) with hands-on methods such as sandboxes, simulations and virtual training labs. Hands-on experiential learning is becoming increasingly popular because it enables users to practice skills in a safe, live environment designed to build job readiness.
Virtual labs provide invaluable hands-on practice; yet assessing and verifying digital technology skills cannot be accomplished through hands-on practice alone. Adding scoring to hands-on experiences enables L&D teams to validate that learners can properly apply what they’ve learned in training. This provides a more quantifiable approach to workforce skills development, leads to more intentional and adaptive training programs, and sets learners up for success.
End-of-lab score reports provide learner-specific feedback and recommended areas for improvement while L&D teams can analyze organization-specific skill data at scale.
This is just one example of what’s possible on Skillable’s platform.
Scored lab use cases for assessing and validating skill development
Item types such as scored labs can be used in a variety of ways, including:
Activity-based Assessments (ABAs)
End-of-Module or End-of-Course Assessments
Each lab is powered by a robust scoring engine that tracks the changes users make within the environment. Advanced yet accessible scripts automatically check a learner’s work against specific outcomes and return a score based on their efforts. Automating grading in this way lets L&D teams spend more time supporting learners and improving training materials.
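To make the idea concrete, here is a minimal sketch of how an automated scoring check can work. This is a hypothetical illustration, not Skillable’s actual engine or API: the `Check` class, `score_lab` function and the sample lab-state dictionary are all invented for this example.

```python
# Hypothetical sketch of automated lab scoring (NOT Skillable's real engine):
# each check inspects captured lab state and awards points if the learner
# achieved the desired outcome.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str                        # task the learner was asked to perform
    points: int                      # weight of this outcome in the total score
    passed: Callable[[dict], bool]   # predicate run against the lab state

def score_lab(lab_state: dict, checks: list[Check]) -> dict:
    """Run every check against the lab state and build a score report."""
    results = [(c.name, c.points, c.passed(lab_state)) for c in checks]
    earned = sum(points for _, points, ok in results if ok)
    total = sum(c.points for c in checks)
    return {
        "score_pct": round(100 * earned / total),
        # Failed checks become learner-specific feedback / areas to improve.
        "feedback": [name for name, _, ok in results if not ok],
    }

# Example: did the learner create the service account and open port 443?
checks = [
    Check("Create service account 'svc-web'", 50,
          lambda s: "svc-web" in s.get("accounts", [])),
    Check("Allow inbound HTTPS (port 443)", 50,
          lambda s: 443 in s.get("open_ports", [])),
]
report = score_lab({"accounts": ["svc-web"], "open_ports": [80]}, checks)
# Here the learner earns 50% and is told to revisit the HTTPS task.
```

In a real scoring engine the predicates would query the live environment (VM configuration, cloud resources, application state) rather than a dictionary, but the pattern is the same: outcome-based checks produce both a score and targeted feedback.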
Incorporating scored labs helps you break free from relying solely on multiple choice questions (MCQs) and true/false questions, resulting in a more accurate skills assessment.
The strategic benefits of using scored labs in training curricula and learning paths
The use of traditional labs (also called step-by-step labs) provides learners with a limited environment or simulation to practice their skills. While users do get the chance to practice these skills hands-on, scored labs can accelerate the pace of skilling by continuously assessing, developing and validating learners’ skills. This results in several benefits, including:
Helps gauge training program efficacy
The results provided by scored labs highlight how learners are progressing through the learning lifecycle. L&D leaders can identify where improvements need to be made to training materials, which learners need additional instruction and whether they need to adjust future skilling plans. This insight is provided automatically via a score report, enabling organizations to adjust training focus and needs.
Enhances abilities for targeted skill development
Using data from hands-on assessments, L&D teams can make real-time, proactive decisions about where additional skilling is needed. They can then create more targeted, organization-specific learning to more effectively close skills gaps and improve learners’ overall skills.
Improves the hiring and screening process
Bad hires happen (approximately 80% of employee turnover is due to poor hiring decisions), and replacing bad hires is time-consuming and expensive.
Asking candidates to show that they have your “must-have skills” in a live lab environment such as Skillable Challenges verifies that individuals have the skills they say they do. This gives hiring managers more time to focus on other elements such as culture and team fit.
Automates lab grading and scoring
Say goodbye to time-intensive manual grading! By automating scoring, instructors, training managers and technical trainers have more time to help learners develop skills, coach them through skills gaps and improve training materials. This leads to a more adaptable, agile, tailored and robust skilling program.
Who is using scored labs?
Microsoft, AWS and CompTIA are examples of organizations incorporating scored labs into their certification exams and training programs. This move to performance-based testing (PBT) should not come as a surprise.
A certification represents that an individual has proven their knowledge, skills and abilities (KSAs) in the subject matter. Vendors want their certifications to be viewed as valuable, well regarded and, above all, credible. If a technology professional earned a certification by finding an exam dump or cramming but cannot actually apply those KSAs on the job, it reflects poorly on the credentialing organization.