UNCLASSIFIED - NO CUI


Detect Actual vs Intended Execution - Eval

Description

After we get back information from the eval surveys (https://gitlab.com/90cos/public/training-roadmaps/-/issues/14), how do we capture discrepancies in that information when there is a potential conflict with the provider?

Workflow:

  1. A training source or test bank claims coverage
  2. We survey the course (#47)
  3. The survey results disagree with the mapping
  4. How do we "correct" this discrepancy (not by correcting the vendor's materials, which is their job, but by correcting the mapped level information in our coverage statistics)?
  • Option A: We have a "validated" field we use as authoritative.
  • Option B: We control the mapped level directly (this approach stores less data).
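Option A can be sketched as a mapping record that keeps both the vendor's claimed level and our survey-derived level, with a flag marking when the survey result becomes authoritative. This is a minimal illustration; all field and class names here are hypothetical, not part of any existing 90COS schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoverageMapping:
    """One vendor claim about a training objective (all names hypothetical)."""
    course_id: str
    objective_id: str
    claimed_level: str              # level the training source / test bank claims
    surveyed_level: Optional[str] = None  # level observed via the eval survey, if any
    validated: bool = False         # Option A: True once survey data is authoritative

    def effective_level(self) -> str:
        """Level used in coverage statistics: the survey wins once validated."""
        if self.validated and self.surveyed_level is not None:
            return self.surveyed_level
        return self.claimed_level
```

Under this sketch, `CoverageMapping("course-101", "obj-1", "A", "B", validated=True).effective_level()` yields `"B"`, while an unvalidated record falls back to the vendor's claimed `"A"`. Option B would collapse the two level fields into one, discarding the vendor's original claim.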

Determining true level/mapping with multiple data points

How do we (90COS) determine the aggregated mapped level, especially since time and instance matter? A course could be given four times: the first two times it was level A, then once at level B, then most recently at level C. What do we say the level is?
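Two candidate aggregation policies for the scenario above can be sketched as follows. Neither is an agreed-upon 90COS rule; the data and function names are illustrative only.

```python
from collections import Counter
from datetime import date

# Hypothetical survey results for four deliveries of the same course:
# (delivery date, observed level)
instances = [
    (date(2020, 1, 15), "A"),
    (date(2020, 6, 10), "A"),
    (date(2021, 2, 20), "B"),
    (date(2021, 9, 5),  "C"),
]

def latest_level(instances):
    """Recency policy: the most recent delivery is authoritative,
    on the theory that it best reflects the course as currently taught."""
    return max(instances, key=lambda inst: inst[0])[1]

def modal_level(instances):
    """Modal policy: the level observed most often across deliveries."""
    return Counter(level for _, level in instances).most_common(1)[0][0]

print(latest_level(instances))  # "C" under the recency policy
print(modal_level(instances))   # "A" under the modal policy
```

The two policies disagree on this example (C vs. A), which is exactly the decision the issue is asking us to make explicit.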

Additional Concerns

We may have to deal with fallout from vendors' perceptions:

  1. They (90COS) are publishing wrong information about our course! Where are they getting their information?!
  2. How do I correct this?!

How can we overcome the above?

  1. Be transparent about how we get our information. Consider releasing survey information.
  2. Write clear documentation addressed to vendors as the intended audience.

Potential Blockers

  1. Do we have a way to track training instances and associate them with survey data, so we know the last class was X but the class before was Y? We need some form of tracking system...
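A minimal sketch of the tracking system described above, assuming each delivery of a course gets its own record and survey results attach to a specific delivery rather than to the course as a whole (all identifiers and field names hypothetical):

```python
# instance_id -> {"course_id": ..., "date": ..., "survey": ...}
training_instances = {}

def record_instance(instance_id, course_id, date):
    """Register one delivery of a course."""
    training_instances[instance_id] = {
        "course_id": course_id, "date": date, "survey": None,
    }

def attach_survey(instance_id, survey_results):
    """Attach eval-survey results to a specific delivery."""
    training_instances[instance_id]["survey"] = survey_results

def history(course_id):
    """Chronological survey history for one course, so we can say
    'the last class was X, but the class before was Y'."""
    rows = [r for r in training_instances.values() if r["course_id"] == course_id]
    return sorted(rows, key=lambda r: r["date"])

record_instance("inst-1", "course-101", "2021-02-20")
record_instance("inst-2", "course-101", "2021-09-05")
attach_survey("inst-1", {"level": "B"})
attach_survey("inst-2", {"level": "C"})
print([r["survey"]["level"] for r in history("course-101")])  # ['B', 'C']
```

In practice this would live in whatever issue tracker or database 90COS already uses; the point of the sketch is only that the survey data must key on the delivery instance, not just the course.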

Acceptance Criteria

  • AC 1 - We have documented how we use training information to improve our training
  • AC 2 - We've written documentation to address potential vendor concerns
  • AC 3 -