From 119fb648fcd94f9d9667b54ed14dd02f33be018e Mon Sep 17 00:00:00 2001 From: Travis Smith Date: Mon, 12 Apr 2021 13:44:00 -0600 Subject: [PATCH] otguide2.6 to master --- .../2-INTRODUCTION/index.html | 7 +- .../5-STRUCTURE-OF-AN-EVALUATION/index.html | 2 +- .../6-RATING-TENETS-AND-PRINCIPLES/index.html | 58 ++ .../index.html | 18 + .../8-RATING-MEASURES/index.html | 362 ++++++++++++ .../index.html | 525 ++++++++++++++++++ sites/oteguide/index.html | 2 +- .../1-5-PROCESS/index.html | 2 +- .../index.html | 141 +---- .../1-7-Observations-Review/index.html | 19 + .../1-8-Dissemination/index.html | 19 + .../index.html | 13 + .../2-1-CONCLUSION/index.html | 19 + .../2-2-OBSERVATION-SHEET/index.html | 15 + .../2-3-INTERVIEW-SHEET/index.html | 20 + .../oteguide/lessons_learned_guide/index.html | 2 +- .../measures_guide/2-INTRODUCTION/index.html | 2 +- .../index.html | 4 +- .../index.html | 213 ++++--- .../index.html | 99 ++++ .../index.html | 13 +- .../index.html | 15 +- .../index.html | 10 + .../index.html | 4 +- sites/oteguide/measures_guide/index.html | 2 +- .../1-1-introduction/page-data.json | 2 +- .../1-3-fielding-combat-capes/page-data.json | 2 +- .../1-5-acquisition-pathways/page-data.json | 2 +- .../1-6-art-principles/page-data.json | 2 +- .../2-test-lifecycle-overview/page-data.json | 2 +- .../3-1-involvement-decision/page-data.json | 2 +- .../3-2-standing-activities/page-data.json | 2 +- .../3-3-test-strategy/page-data.json | 2 +- .../4-1-levels-of-test/page-data.json | 2 +- .../page-data.json | 2 +- .../4-3-OT-construct/page-data.json | 2 +- .../page-data.json | 2 +- .../5-test-lifecycle-exit/page-data.json | 2 +- .../6-0-ot-business-practices/page-data.json | 2 +- .../OT&E_Guide/6-1-1-tasking/page-data.json | 2 +- .../6-1-2-team-standup/page-data.json | 2 +- .../6-1-3-team-training/page-data.json | 2 +- .../6-2-risk-management/page-data.json | 2 +- .../OT&E_Guide/6-3-resources/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- 
.../OT&E_Guide/6-6-acap/page-data.json | 2 +- .../OT&E_Guide/6-7-case-study/page-data.json | 2 +- .../page-data/OT&E_Guide/page-data.json | 2 +- .../1-2-INTRODUCTION/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../2-1-Test-Design/page-data.json | 2 +- .../2-2-Measures-Development/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../1-2-INTRODUCTION/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../1-7-Accreditation-Concepts/page-data.json | 2 +- .../1-8-Risk-Matrix:-Example/page-data.json | 2 +- .../1-9-Previous-Accreditation/page-data.json | 2 +- .../2-1-Accreditation-Process/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../2-7-Best-Practices/page-data.json | 2 +- .../2-8-Summary/page-data.json | 2 +- .../page-data.json | 2 +- .../3-1-Governance/page-data.json | 2 +- .../3-2-Points-of-Contact/page-data.json | 2 +- .../accreditation_guide/page-data.json | 2 +- .../page-data/acronyms/page-data.json | 2 +- .../1-2-OVERVIEW/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../1-6-Characterize/page-data.json | 2 +- .../1-7-Compare/page-data.json | 2 +- .../1-8-Problem-Identification/page-data.json | 2 +- .../1-9-TYPES-OF-TEST-DESIGNS/page-data.json | 2 +- .../2-1-Factorials/page-data.json | 2 +- .../page-data.json | 2 +- .../2-3-Augmented-Designs/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../2-8-Demonstrations/page-data.json | 2 +- .../2-9-Cases/page-data.json | 2 +- .../3-1-Observational-Studies/page-data.json | 2 +- .../page-data.json | 2 +- 
.../3-3-Power/page-data.json | 2 +- .../3-4-Optimality-Efficiency/page-data.json | 2 +- .../page-data.json | 2 +- .../3-6-Prediction-Variance/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../3-9-Randomization/page-data.json | 2 +- .../4-1-Replication/page-data.json | 2 +- .../4-2-Blocking/page-data.json | 2 +- .../4-3-TYPICAL-SITUATIONS/page-data.json | 2 +- .../page-data.json | 2 +- .../4-5-Appendices/page-data.json | 2 +- .../4-6-Test-Design-Guidance/page-data.json | 2 +- .../page-data.json | 2 +- .../4-8-Footnotes/page-data.json | 2 +- .../analyst_test_design_guide/page-data.json | 2 +- .../1-2-PURPOSE/page-data.json | 2 +- .../page-data.json | 2 +- .../1-4-Consistency-is-Key/page-data.json | 2 +- .../1-5-One-Voice/page-data.json | 2 +- .../1-6-Templates/page-data.json | 2 +- .../1-7-Microsoft-Word-Options/page-data.json | 2 +- .../1-8-Focus-Areas/page-data.json | 2 +- .../1-9-Styles/page-data.json | 2 +- .../2-1-Acronyms/page-data.json | 2 +- .../2-2-Commas/page-data.json | 2 +- .../page-data.json | 2 +- .../2-4-Headers-and-Footers/page-data.json | 2 +- .../2-5-Page-Numbering/page-data.json | 2 +- .../2-6-Blank-Pages/page-data.json | 2 +- .../2-7-Capitalization/page-data.json | 2 +- .../page-data.json | 2 +- .../2-9-Vertical-Lists/page-data.json | 2 +- .../page-data.json | 2 +- .../3-2-Tables/page-data.json | 2 +- .../3-3-Figures/page-data.json | 2 +- .../3-4-Passive-Voice/page-data.json | 2 +- .../page-data.json | 2 +- .../3-6-Wordiness/page-data.json | 2 +- .../page-data.json | 2 +- .../3-8-Apostrophes/page-data.json | 2 +- .../3-9-Choppy-Sentences/page-data.json | 2 +- .../4-1-Numbers/page-data.json | 2 +- .../4-2-Dates/page-data.json | 2 +- .../4-3-Abbreviations/page-data.json | 2 +- .../4-4-Terms/page-data.json | 2 +- .../page-data.json | 2 +- .../4-6-Verb-Tenses/page-data.json | 2 +- .../4-7-Embedded-Files/page-data.json | 2 +- .../4-8-Converting-Pictures/page-data.json | 2 +- .../page-data.json | 2 +- 
.../5-1-Patches/page-data.json | 2 +- .../page-data.json | 2 +- .../5-3-Distribution-List/page-data.json | 2 +- .../page-data.json | 2 +- .../5-5-Spell-Check/page-data.json | 2 +- .../5-6-Configuration-Control/page-data.json | 2 +- .../5-7-Table-of-Contents/page-data.json | 2 +- .../page-data.json | 2 +- .../editing_reports_guide/page-data.json | 2 +- .../2-INTRODUCTION/page-data.json | 2 +- .../3-TEST-ACTIVITY-SEQUENCE/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 1 + .../page-data.json | 1 + .../8-RATING-MEASURES/page-data.json | 1 + .../page-data.json | 1 + .../page-data.json | 2 +- .../page-data/fn_guides/page-data.json | 2 +- .../page-data/glossary/page-data.json | 2 +- sites/oteguide/page-data/index/page-data.json | 2 +- .../page-data.json | 2 +- .../page-data.json | 2 +- .../1-4-Policy-Guidance/page-data.json | 2 +- .../1-5-PROCESS/page-data.json | 2 +- .../page-data.json | 1 + .../1-7-Observations-Review/page-data.json | 1 + .../1-8-Dissemination/page-data.json | 1 + .../page-data.json | 1 + .../2-1-CONCLUSION/page-data.json | 1 + .../2-2-OBSERVATION-SHEET/page-data.json | 1 + .../2-3-INTERVIEW-SHEET/page-data.json | 1 + .../lessons_learned_guide/page-data.json | 2 +- .../2-INTRODUCTION/page-data.json | 2 +- .../page-data.json | 1 - .../page-data.json | 1 + .../page-data.json | 1 - .../page-data.json | 1 + .../page-data.json | 1 - .../page-data.json | 1 + .../page-data.json | 1 - .../page-data.json | 1 + .../page-data.json | 1 - .../7-Continuous-Measures/page-data.json | 1 + .../page-data.json | 1 + .../page-data.json | 1 - .../page-data.json | 1 + .../page-data/measures_guide/page-data.json | 2 +- .../safety_risk_mgmt_guide/page-data.json | 2 +- sites/oteguide/page-data/sq/d/1928916242.json | 2 +- sites/oteguide/page-data/sq/d/2619113677.json | 2 +- .../page-data.json | 2 +- .../test_capability_guide/page-data.json | 2 +- .../safety_risk_mgmt_guide/index.html | 2 +- sites/oteguide/sitemap.xml | 156 +++--- 
.../6b9fd/image11.png | Bin 0 -> 16652 bytes .../a2ead/image11.png | Bin 0 -> 7359 bytes .../efc66/image11.png | Bin 0 -> 18808 bytes .../6b9fd/image2.png | Bin 30006 -> 0 bytes .../a2ead/image2.png | Bin 12532 -> 0 bytes .../bd9eb/image2.png | Bin 51927 -> 0 bytes .../e3189/image2.png | Bin 74671 -> 0 bytes .../69476/image5.png | Bin 0 -> 26301 bytes .../6b9fd/image5.png | Bin 0 -> 24154 bytes .../a2ead/image5.png | Bin 0 -> 8275 bytes .../6b9fd/image12.png | Bin 0 -> 9280 bytes .../a2ead/image12.png | Bin 0 -> 4630 bytes .../c56af/image12.png | Bin 0 -> 15564 bytes .../e3189/image12.png | Bin 0 -> 20381 bytes .../62da8/image13.png | Bin 0 -> 23812 bytes .../6b9fd/image13.png | Bin 0 -> 13809 bytes .../a2ead/image13.png | Bin 0 -> 6055 bytes .../e3189/image13.png | Bin 0 -> 30950 bytes .../66caf/image9.png | Bin 0 -> 18293 bytes .../6b9fd/image9.png | Bin 0 -> 12561 bytes .../a2ead/image9.png | Bin 0 -> 5159 bytes .../6b9fd/image4.png | Bin 0 -> 8685 bytes .../98314/image4.png | Bin 0 -> 13252 bytes .../a2ead/image4.png | Bin 0 -> 4038 bytes .../6b9fd/image8.png | Bin 0 -> 18715 bytes .../a2ead/image8.png | Bin 0 -> 8539 bytes .../c61d0/image8.png | Bin 0 -> 26535 bytes .../e3189/image8.png | Bin 0 -> 43428 bytes .../6b9fd/image6.png | Bin 0 -> 6596 bytes .../6da96/image6.png | Bin 0 -> 9761 bytes .../a2ead/image6.png | Bin 0 -> 1627 bytes .../2f6f6/image15.png | Bin 0 -> 35331 bytes .../6b9fd/image15.png | Bin 0 -> 20734 bytes .../a2ead/image15.png | Bin 0 -> 8321 bytes .../e3189/image15.png | Bin 0 -> 49450 bytes .../29c1d/image14.png | Bin 0 -> 54293 bytes .../6b9fd/image14.png | Bin 0 -> 31632 bytes .../a2ead/image14.png | Bin 0 -> 10942 bytes .../e3189/image14.png | Bin 0 -> 79673 bytes .../669eb/image10.png | Bin 0 -> 19310 bytes .../6b9fd/image10.png | Bin 0 -> 11231 bytes .../a2ead/image10.png | Bin 0 -> 5128 bytes .../e3189/image10.png | Bin 0 -> 26462 bytes .../6b9fd/image6.png | Bin 0 -> 16115 bytes .../78a22/image6.png | Bin 0 -> 10613 bytes 
.../a2ead/image6.png | Bin 0 -> 7051 bytes .../5ecaa/image7.png | Bin 0 -> 608 bytes .../5c98f/image16.png | Bin 0 -> 17571 bytes .../6b9fd/image16.png | Bin 0 -> 9959 bytes .../a2ead/image16.png | Bin 0 -> 4795 bytes .../e3189/image16.png | Bin 0 -> 23021 bytes 259 files changed, 1603 insertions(+), 479 deletions(-) create mode 100644 sites/oteguide/evaluation_and_reporting_guide/6-RATING-TENETS-AND-PRINCIPLES/index.html create mode 100644 sites/oteguide/evaluation_and_reporting_guide/7-Ratings-for-Continuous-and-Cumulative-Feedback/index.html create mode 100644 sites/oteguide/evaluation_and_reporting_guide/8-RATING-MEASURES/index.html create mode 100644 sites/oteguide/evaluation_and_reporting_guide/9-RATING-CRITICAL-OPERATIONAL-ISSUES/index.html rename sites/oteguide/{measures_guide/4-SECTION-2-MEASURES-DEVELOPMENT => lessons_learned_guide/1-6-Milestone-and-Event-Triggers}/index.html (84%) create mode 100644 sites/oteguide/lessons_learned_guide/1-7-Observations-Review/index.html create mode 100644 sites/oteguide/lessons_learned_guide/1-8-Dissemination/index.html create mode 100644 sites/oteguide/lessons_learned_guide/1-9-Lessons-Learned-(Resolution)/index.html create mode 100644 sites/oteguide/lessons_learned_guide/2-1-CONCLUSION/index.html create mode 100644 sites/oteguide/lessons_learned_guide/2-2-OBSERVATION-SHEET/index.html create mode 100644 sites/oteguide/lessons_learned_guide/2-3-INTERVIEW-SHEET/index.html rename sites/oteguide/measures_guide/{3-SECTION-1-FUNDAMENTALS-OF-MEASUREMENT => 3-SECTION-1:-FUNDAMENTALS-OF}/index.html (84%) rename sites/oteguide/measures_guide/{5-SECTION-3-MEASURE-DOCUMENTATION => 4-SECTION-2:-MEASURES-DEVELOPMENT}/index.html (85%) create mode 100644 sites/oteguide/measures_guide/5-SECTION-3:-MEASURE-DOCUMENTATION/index.html rename sites/oteguide/measures_guide/{7-APPENDIX-B-MEASURE-REVIEW-QUICK-REFERENCE => 6-APPENDIX-A:-CATEGORIES-OF-MEASURES}/index.html (80%) rename 
sites/oteguide/measures_guide/{6-APPENDIX-A-CATEGORIES-OF-MEASURES => 7-Continuous-Measures}/index.html (87%) create mode 100644 sites/oteguide/measures_guide/8-APPENDIX-B:-MEASURE-REVIEW-QUICK-REFERENCE/index.html rename sites/oteguide/measures_guide/{8-APPENDIX-C-CYBER-RELATED-MEASURES => 9-APPENDIX-C:-CYBER-RELATED-MEASURES}/index.html (88%) create mode 100644 sites/oteguide/page-data/evaluation_and_reporting_guide/6-RATING-TENETS-AND-PRINCIPLES/page-data.json create mode 100644 sites/oteguide/page-data/evaluation_and_reporting_guide/7-Ratings-for-Continuous-and-Cumulative-Feedback/page-data.json create mode 100644 sites/oteguide/page-data/evaluation_and_reporting_guide/8-RATING-MEASURES/page-data.json create mode 100644 sites/oteguide/page-data/evaluation_and_reporting_guide/9-RATING-CRITICAL-OPERATIONAL-ISSUES/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/1-6-Milestone-and-Event-Triggers/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/1-7-Observations-Review/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/1-8-Dissemination/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/1-9-Lessons-Learned-(Resolution)/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/2-1-CONCLUSION/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/2-2-OBSERVATION-SHEET/page-data.json create mode 100644 sites/oteguide/page-data/lessons_learned_guide/2-3-INTERVIEW-SHEET/page-data.json delete mode 100644 sites/oteguide/page-data/measures_guide/3-SECTION-1-FUNDAMENTALS-OF-MEASUREMENT/page-data.json create mode 100644 sites/oteguide/page-data/measures_guide/3-SECTION-1:-FUNDAMENTALS-OF/page-data.json delete mode 100644 sites/oteguide/page-data/measures_guide/4-SECTION-2-MEASURES-DEVELOPMENT/page-data.json create mode 100644 
sites/oteguide/page-data/measures_guide/4-SECTION-2:-MEASURES-DEVELOPMENT/page-data.json delete mode 100644 sites/oteguide/page-data/measures_guide/5-SECTION-3-MEASURE-DOCUMENTATION/page-data.json create mode 100644 sites/oteguide/page-data/measures_guide/5-SECTION-3:-MEASURE-DOCUMENTATION/page-data.json delete mode 100644 sites/oteguide/page-data/measures_guide/6-APPENDIX-A-CATEGORIES-OF-MEASURES/page-data.json create mode 100644 sites/oteguide/page-data/measures_guide/6-APPENDIX-A:-CATEGORIES-OF-MEASURES/page-data.json delete mode 100644 sites/oteguide/page-data/measures_guide/7-APPENDIX-B-MEASURE-REVIEW-QUICK-REFERENCE/page-data.json create mode 100644 sites/oteguide/page-data/measures_guide/7-Continuous-Measures/page-data.json create mode 100644 sites/oteguide/page-data/measures_guide/8-APPENDIX-B:-MEASURE-REVIEW-QUICK-REFERENCE/page-data.json delete mode 100644 sites/oteguide/page-data/measures_guide/8-APPENDIX-C-CYBER-RELATED-MEASURES/page-data.json create mode 100644 sites/oteguide/page-data/measures_guide/9-APPENDIX-C:-CYBER-RELATED-MEASURES/page-data.json create mode 100644 sites/oteguide/static/146a07273349dbc36954330a63b5388e/6b9fd/image11.png create mode 100644 sites/oteguide/static/146a07273349dbc36954330a63b5388e/a2ead/image11.png create mode 100644 sites/oteguide/static/146a07273349dbc36954330a63b5388e/efc66/image11.png delete mode 100644 sites/oteguide/static/15c01df8a9fa6f626567e43efe3eaffe/6b9fd/image2.png delete mode 100644 sites/oteguide/static/15c01df8a9fa6f626567e43efe3eaffe/a2ead/image2.png delete mode 100644 sites/oteguide/static/15c01df8a9fa6f626567e43efe3eaffe/bd9eb/image2.png delete mode 100644 sites/oteguide/static/15c01df8a9fa6f626567e43efe3eaffe/e3189/image2.png create mode 100644 sites/oteguide/static/1882a4460923a55391e4af55449a1048/69476/image5.png create mode 100644 sites/oteguide/static/1882a4460923a55391e4af55449a1048/6b9fd/image5.png create mode 100644 sites/oteguide/static/1882a4460923a55391e4af55449a1048/a2ead/image5.png 
create mode 100644 sites/oteguide/static/1eed82d67c46689b62a7e6728fc45a05/6b9fd/image12.png create mode 100644 sites/oteguide/static/1eed82d67c46689b62a7e6728fc45a05/a2ead/image12.png create mode 100644 sites/oteguide/static/1eed82d67c46689b62a7e6728fc45a05/c56af/image12.png create mode 100644 sites/oteguide/static/1eed82d67c46689b62a7e6728fc45a05/e3189/image12.png create mode 100644 sites/oteguide/static/494a1319872ca8029c3f59b6fb1b72d2/62da8/image13.png create mode 100644 sites/oteguide/static/494a1319872ca8029c3f59b6fb1b72d2/6b9fd/image13.png create mode 100644 sites/oteguide/static/494a1319872ca8029c3f59b6fb1b72d2/a2ead/image13.png create mode 100644 sites/oteguide/static/494a1319872ca8029c3f59b6fb1b72d2/e3189/image13.png create mode 100644 sites/oteguide/static/67a8e7809655d9884d926e93f455b88a/66caf/image9.png create mode 100644 sites/oteguide/static/67a8e7809655d9884d926e93f455b88a/6b9fd/image9.png create mode 100644 sites/oteguide/static/67a8e7809655d9884d926e93f455b88a/a2ead/image9.png create mode 100644 sites/oteguide/static/688af7e1bde67ae8e5b5f8c57a84b5f2/6b9fd/image4.png create mode 100644 sites/oteguide/static/688af7e1bde67ae8e5b5f8c57a84b5f2/98314/image4.png create mode 100644 sites/oteguide/static/688af7e1bde67ae8e5b5f8c57a84b5f2/a2ead/image4.png create mode 100644 sites/oteguide/static/6b9833f68a1524a0b0cf7cfa443edbc8/6b9fd/image8.png create mode 100644 sites/oteguide/static/6b9833f68a1524a0b0cf7cfa443edbc8/a2ead/image8.png create mode 100644 sites/oteguide/static/6b9833f68a1524a0b0cf7cfa443edbc8/c61d0/image8.png create mode 100644 sites/oteguide/static/6b9833f68a1524a0b0cf7cfa443edbc8/e3189/image8.png create mode 100644 sites/oteguide/static/854f02679aa6ba16edc99c0cc9bd8035/6b9fd/image6.png create mode 100644 sites/oteguide/static/854f02679aa6ba16edc99c0cc9bd8035/6da96/image6.png create mode 100644 sites/oteguide/static/854f02679aa6ba16edc99c0cc9bd8035/a2ead/image6.png create mode 100644 
sites/oteguide/static/85d2923d3fca34d2087f79a66d925e25/2f6f6/image15.png create mode 100644 sites/oteguide/static/85d2923d3fca34d2087f79a66d925e25/6b9fd/image15.png create mode 100644 sites/oteguide/static/85d2923d3fca34d2087f79a66d925e25/a2ead/image15.png create mode 100644 sites/oteguide/static/85d2923d3fca34d2087f79a66d925e25/e3189/image15.png create mode 100644 sites/oteguide/static/8888bba70268037c536a218fe81ca688/29c1d/image14.png create mode 100644 sites/oteguide/static/8888bba70268037c536a218fe81ca688/6b9fd/image14.png create mode 100644 sites/oteguide/static/8888bba70268037c536a218fe81ca688/a2ead/image14.png create mode 100644 sites/oteguide/static/8888bba70268037c536a218fe81ca688/e3189/image14.png create mode 100644 sites/oteguide/static/bae1e4e348a78fabaf5658337ed25708/669eb/image10.png create mode 100644 sites/oteguide/static/bae1e4e348a78fabaf5658337ed25708/6b9fd/image10.png create mode 100644 sites/oteguide/static/bae1e4e348a78fabaf5658337ed25708/a2ead/image10.png create mode 100644 sites/oteguide/static/bae1e4e348a78fabaf5658337ed25708/e3189/image10.png create mode 100644 sites/oteguide/static/c63b7dbae6812abcf7b6b5d97ebbb864/6b9fd/image6.png create mode 100644 sites/oteguide/static/c63b7dbae6812abcf7b6b5d97ebbb864/78a22/image6.png create mode 100644 sites/oteguide/static/c63b7dbae6812abcf7b6b5d97ebbb864/a2ead/image6.png create mode 100644 sites/oteguide/static/e246abddf2a51a9e75b101fe3824e795/5ecaa/image7.png create mode 100644 sites/oteguide/static/f58c73453be1fe2eb373edf77fa2e353/5c98f/image16.png create mode 100644 sites/oteguide/static/f58c73453be1fe2eb373edf77fa2e353/6b9fd/image16.png create mode 100644 sites/oteguide/static/f58c73453be1fe2eb373edf77fa2e353/a2ead/image16.png create mode 100644 sites/oteguide/static/f58c73453be1fe2eb373edf77fa2e353/e3189/image16.png diff --git a/sites/oteguide/evaluation_and_reporting_guide/2-INTRODUCTION/index.html b/sites/oteguide/evaluation_and_reporting_guide/2-INTRODUCTION/index.html index 
fd0f9072..a405b33f 100644 --- a/sites/oteguide/evaluation_and_reporting_guide/2-INTRODUCTION/index.html +++ b/sites/oteguide/evaluation_and_reporting_guide/2-INTRODUCTION/index.html @@ -1,9 +1,4 @@ -INTRODUCTION

INTRODUCTION

- image2

This guide focuses on the analysis, evaluation, and reporting of AFOTEC +INTRODUCTION

INTRODUCTION

This guide focuses on the analysis, evaluation, and reporting of AFOTEC +OT&E activities. It covers the products and processes you must follow to +complete this final phase of test execution successfully. Submit +suggested changes to: AFOTEC.A3.Workflow@us.af.mil.

\ No newline at end of file diff --git a/sites/oteguide/evaluation_and_reporting_guide/7-Ratings-for-Continuous-and-Cumulative-Feedback/index.html b/sites/oteguide/evaluation_and_reporting_guide/7-Ratings-for-Continuous-and-Cumulative-Feedback/index.html new file mode 100644 index 00000000..c33d07f4 --- /dev/null +++ b/sites/oteguide/evaluation_and_reporting_guide/7-Ratings-for-Continuous-and-Cumulative-Feedback/index.html @@ -0,0 +1,18 @@ +Ratings for Continuous and Cumulative Feedback

Ratings for Continuous and Cumulative Feedback

The majority of AFOTEC programs require continuous feedback during +system development due to agile software development approaches with +rapid release and fielding cycles. However, AFOTEC provides some degree +of continuous feedback on all programs to inform development activities. +In these cases, AFOTEC provides MOE impacts when practical, but rarely +provides COI and E&S ratings. Reporting results in a timely manner is +critical and limits the amount of analysis and results AFOTEC can +provide beyond the MOP/MOS level. Furthermore, stakeholders may not need +feedback beyond the MOP/MOS level at that time.

\ No newline at end of file diff --git a/sites/oteguide/evaluation_and_reporting_guide/8-RATING-MEASURES/index.html b/sites/oteguide/evaluation_and_reporting_guide/8-RATING-MEASURES/index.html new file mode 100644 index 00000000..fdd96982 --- /dev/null +++ b/sites/oteguide/evaluation_and_reporting_guide/8-RATING-MEASURES/index.html @@ -0,0 +1,362 @@ +RATING MEASURES

RATING MEASURES

Rating measures consists of analyzing, evaluating, and reporting +results. In most cases, analysis should be concurrent with test +execution. During execution, test teams should verify continuously that +they are associating data with the correct measures or supporting +measures, and that analytical techniques and data management procedures +are working.

General Methods for Rating Measures

The test team must synthesize ratings of individual MOEs in order to +rate COIs. Accordingly, rating MOEs may depend on rating MOPs and MOSs. +Measures form a credible basis for rating COIs, effectiveness, and +suitability. The two primary methods for analyzing and rating measures +are direct measurement and reasonable inference.

Direct Measurement

Direct measurement is evidence observed during testing that supports the +conclusion that the SUT will be similarly successful when employed by +the warfighter in an operational environment. MOEs document operational +effects that can be summarized as a proportion, percentage, or rate of +success. In other words, if 80% of targets are destroyed, the overall +determination of operational impact (defined by the COI) is likely +positive. In this case, we have direct evidence of the SUT's (mission) +effectiveness. However, if the test is small and 4 out of 5 targets are +destroyed, the test team should be more cautious when rating the COI. +For example, 4 or more successes in 5 trials will occur 19% of the time if +the actual success rate is 50%. When the sample size is small, it is +important to use MOPs and MOSs as clues to determine how +unrepresentative (lucky or unlucky) the MOE data might be.
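The 19% figure above is a binomial tail probability; a minimal sketch of the computation (function name is illustrative, not from the guide):

```python
from math import comb

def binom_tail(k_min, n, p):
    """Probability of observing at least k_min successes in n trials
    when each trial succeeds independently with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# 4 or more successes in 5 trials when the true success rate is only 50%
print(round(binom_tail(4, 5, 0.5), 4))  # → 0.1875, i.e., about 19%
```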

Reasonable Inference

Reasonable inference is evidence that supports the logical conclusion +that the system will successfully (or unsuccessfully) achieve desired +effects and accomplish the mission in an operational environment. +Reasonable inference depends on the degree to which the system satisfies +requirements and performs necessary MOP/MOS operations/tasks. MOPs and +MOSs are often formally stated or derived from requirements documents, +and therefore have associated performance criteria that must be met by +the SUT's developers. Test results for a MOP/MOS can be compared to the +performance criteria, and the test team can infer that the SUT will or +will not be successful (relative to what the MOP/MOS actually +measures) in the actual operational environment.

One drawback of using MOP/MOS for an evaluation based on reasonable +inference is that requirement performance thresholds (criteria), the +guiding star for reasonable inference, may not accurately represent the +performance required for desired effects. For example, a requirement may +state that the average miss distance of a weapon (an MOP measure) must +be less than 30 meters. During the test, the average miss distance was +35 meters, but 80% of targets were destroyed, demonstrating the +requirement was overly conservative. To overcome the uncertainty +concerning the validity of the stated requirements, a well-designed test +should explicitly plan for the collection and evaluation of effects +(MOEs).

Types of Measures

AFOTEC uses three types of measures---MOE, MOP, and MOS. They are the +data (sets) used to rate mission effects or operational impact, and are +evidence of the system's positive performance characteristics, +capabilities, and deficiencies. MOPs and MOSs indicate specific +strengths and weaknesses of a system, mitigate statistical issues +associated with the typically small sample size of MOE data, and support +or refute MOE conclusions. MOPs and MOSs are useful for quantifying SUT +performance and, as they are often continuous data, can be used to +create analysis of variance or regression models of SUT performance +across variable battlespace conditions. In addition, MOPs and MOSs are +often used as evidence to validate Deficiency Reports (DR) at a review +board. A common analogy is that MOPs/MOSs provide a perspective on +individual trees, but MOEs provide a perspective on the forest.

Measures Worksheet

The measures worksheet is used to document and organize details of a +measure. Measures worksheets contain significant detail about a measure +and how the data will be collected. For example, the scope section +defines and qualifies the measurement attribute of interest. +Understanding the data in the header of a measures worksheet will be +sufficient for purposes of this guide, but a complete discussion of the +contents of a measures worksheet can be found in the AFOTEC Measures +Guide.

The header of the measures worksheet contains a concise summary of the +important information regarding a measure. It states the attribute of +the SUT that the test team is measuring (operational capability), a +short title (measure statement) describing what is being measured that +conveys information about the attribute, how it will be computed (the +metric), and the basis for judging the metric (the threshold, if +available). Table 1 is an example of a measures worksheet header within +an AFOTEC report.

Table 1. Standard Measures Worksheet Header

+ Operational Capability: Accuracy
+
+ | Measure | Metric | Threshold | Results | Rating |
+ |---|---|---|---|---|
+ | MOP 2.1: Miss Distance | Mean | ≤20 meters | 20.5 Meters | image7 |
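The worksheet header fields described above can be modeled as a simple record; this is a hypothetical sketch (class and field names are illustrative, not AFOTEC-defined):

```python
from dataclasses import dataclass

@dataclass
class MeasureHeader:
    """Hypothetical record mirroring a measures worksheet header."""
    operational_capability: str  # attribute of the SUT being measured
    measure: str                 # short title describing what is measured
    metric: str                  # how the measure is computed
    threshold: str               # basis for judging the metric, if available
    results: str
    rating: str

hdr = MeasureHeader("Accuracy", "MOP 2.1: Miss Distance", "Mean",
                    "≤20 meters", "20.5 Meters", "Shortfall, Not Significant")
print(hdr.measure)  # → MOP 2.1: Miss Distance
```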



Rating Measures of Effectiveness

MOEs measure the degree to which the SUT can directly achieve desired +operational effects. MOEs communicate operational effects to +stakeholders (e.g., SPO, warfighter) and how successful the SUT is at +conducting its mission. The operations and associated MOEs are +considered anchors for the OT&E construct and the evaluation structure. +When rating MOEs, the test team must consider the significance of the +effect demonstrated, through direct measurement, as well as test +conditions, DRs, and application of operations experience and judgment. +The test team might have to use reasonable inference to rate +MOP/MOS performance.

Figure 5 depicts the rating logic for MOEs. MOEs may or may not have +associated criteria (or an identified standard). If there is a criteria/standard, +determine whether it was met or there was a capability shortfall. This +is a mechanical step and the outcome may not be operationally +significant, but external agencies (e.g., SPO and developer) are +interested in knowing whether user criteria was satisfied. If there is +no associated criteria/standard, document the demonstrated effects +without comparing to a criteria/standard. If there were no shortfalls or +negative effects, rate the MOE as "No Shortfall" or "No Negative +Effect." If there was a capability shortfall or negative effect that did +not impact mission accomplishment, rate the MOE as "Shortfall, Not +Significant" or "Negative Effects, Not Significant." If a shortfall or +negative effect impacted mission accomplishment, rate the MOE as +"Shortfall, Significant" or "Negative Effects, Significant."
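The branching above can be sketched as a small function; this is an illustrative encoding, not an AFOTEC-defined implementation (the color notes in comments reflect the visual indicators discussed later):

```python
def rate_moe(has_criteria: bool, shortfall_or_negative_effect: bool,
             impacts_mission: bool) -> str:
    """Return an MOE rating per the logic sketched above.
    'Shortfall' wording applies when criteria/standard exists;
    otherwise the 'Negative Effect' wording applies."""
    term = "Shortfall" if has_criteria else "Negative Effect"
    if not shortfall_or_negative_effect:
        return f"No {term}"                  # Green indicator
    if impacts_mission:
        return f"{term}, Significant"        # Red indicator
    return f"{term}, Not Significant"        # Yellow indicator

print(rate_moe(True, True, False))  # → Shortfall, Not Significant
```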

Figure 5. MOE Rating Logic

+ image8

Figure 6 contains MOE ratings and visual indicators, and includes an +"Inconclusive" rating, which should be used only when analysis does not +allow a credible effects determination. Notable findings for MOEs are +addressed later in this section.

Figure 6. MOE Ratings and Visual Indicators

+ image9

Basis for MOE Evaluation

Test teams use the following considerations when evaluating MOEs:

Effect Criteria or Identified Standard

When a threshold (criteria or identified standard) for an effect is +provided, it should be the basis of a test team's evaluation of the MOE. +Test team judgment is still important because some thresholds of +performance are overly conservative while others underestimate the +required effectiveness.

A significant challenge in evaluating MOEs is that they may lack an +authoritative threshold on which to base a rating of the measure data. +In other words, if the success rate during a test was 75%, there is +seldom a requirement that states the minimum acceptable success rate is +80%. In addition, MAJCOMs are unaccustomed to providing an identified +standard such as "the success rate should be at least 80%." Test team +judgment and expertise are required to rate an MOE without an +authoritative threshold.

If a SUT is conducting the same mission as a legacy system, then attempt +to derive MOE effect thresholds from the relevant operator community.

Feedback from Test Participants

Test participant feedback on usability of the SUT should contribute +significantly to an MOE rating. An operator is trained to perform system +tasks in order to achieve operational objectives, and if the SUT is +usable and effective at enabling them to perform system tasks, the test +participant feedback should be very positive and the MOE rating should +be No Shortfall or Negative Effect (Green). If participants struggle to +use the system to complete the test scenarios, the rating should be +Shortfall or Negative Effect, Not Significant (Yellow). If participants +are unable to complete the mission during testing, the rating should be +Shortfall or Negative Effect, Significant (Red).

Problems/Deficiencies

The number and significance of problems and deficiencies discovered +during testing should contribute to the MOE's rating. Systems that +accomplish the mission but have problems that must be addressed by the +developer are likely not worthy of a "Green" MOE rating.

Required Workarounds

Although OT should be conducted per established employment and support +concepts, TT&Ps and tech data workarounds are common. The significance +of workarounds to accomplish the mission should factor into the rating. +Mission effects may not tell the whole story, because operators and +maintainers are conditioned to accomplish the mission by overcoming +adversity, including system constraints. Workarounds can affect test +scenarios, performance results, and safety, and can skew operational effects +during OT. Workarounds may also be reasonable as a short-term fix during +combat operations, but are unacceptable during ongoing peacetime +operations.

Statistical Confidence

It takes a relatively large amount of test data to be statistically confident that the point estimate of the proportion of successes and failures (from the test of a system with variable performance) is close to the true success rate that will be observed in the field. For example, suppose the true success rate of the system is 50% and the test has 5 test events.

There is a 0.62 probability (0.31 + 0.31) that the test results will be within 10% of the true success rate of 50% (i.e., two or three successes). Furthermore, it is impossible with a sample size of 5 to accurately estimate a success rate of 50%; the closest possible outcomes are 2 successes out of 5 (40%) or 3 successes out of 5 (60%). To have a 0.80 chance that the test results are within the range 40-60%, the test would require 30 pass/fail events. If the sample of mission successes and failures is small, other sources of data become much more important in the evaluation of an MOE.
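The probabilities cited above can be reproduced with a short binomial calculation. This sketch assumes independent pass/fail events with a constant 50% true success rate:

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """Probability of exactly k successes in n independent pass/fail events."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# With 5 test events and a true success rate of 50%, the probability the
# observed rate lands within 10 points of the truth (2 or 3 successes):
p_small = binom_pmf(5, 2, 0.5) + binom_pmf(5, 3, 0.5)
print(round(p_small, 3))  # 0.625 (the 0.62 cited above)

# With 30 events, the probability of observing 40-60% (12 to 18 successes):
p_large = sum(binom_pmf(30, k, 0.5) for k in range(12, 19))
print(round(p_large, 2))  # 0.8
```

The jump from 5 to 30 events to reach a 0.80 chance of landing in a modest 40-60% window illustrates why small-sample point estimates must be supplemented with other sources of evidence.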

Reasonable Inference of MOP and MOS Impacts

Test teams may use reasonable inference to develop an estimate of results for an MOE (i.e., when the MOE is not directly measurable) or to bolster results for an MOE (i.e., when the sample size is small). A variety of techniques are available for this, such as rational argument, weighting schemes (discouraged), algorithms/formulas, models, and simulations. This is sometimes referred to as "rolling up" or aggregating to an MOE or possibly a COI. The preferred approach is to use direct measurement; aggregating can raise questions about the validity of the conclusion.

Key Performance Parameters

Key performance parameters (KPP) are not a significant consideration when rating MOEs. While they highlight important characteristics of SUT performance, the test team should place them into the context of the operation to evaluate operational effect no differently than any other capability requirement. If a KPP threshold appears contrary to operational effects (or the MOE rating), the test team should recognize and document the differences and communicate the rationale.

MOE Notable Findings

The test team may document notable findings at the MOE level of the evaluation in two potential situations. First, an additional observation was made on one or more MOPs/MOSs that had an operational effect measured by an MOE. Second, an operational effect was recorded, but there was no corresponding MOE. MOE notable findings are included along with the MOE as part of the process of evaluating a COI.

Rating MOPs and MOSs

Typically, test teams analyze MOPs and MOSs by summarizing the measure data (with an average, percentile, median, user rating, etc.) and then comparing that value to a threshold value (provided by the operational user as criteria in capability requirements documents) or an AFOTEC-formulated identified standard. Rating an MOP/MOS is a two-step process (Figure 7).

Figure 7. MOP/MOS Rating Logic

[image10]

Step 1 is somewhat mechanical and documents whether the system performance met the threshold. The program office relies on AFOTEC to discern whether the benchmark for the rating is user-provided criteria or a standard AFOTEC developed for test planning (the "AFOTEC Measures Guide" contains more information on thresholds). For the purposes of this guide, criteria and identified standards are used synonymously with threshold unless specifically stated.

When SUT performance meets user-established criteria, the MOP/MOS rating will be "Met Threshold." It is not operationally important whether or not a SUT meets a threshold; AFOTEC's focus is performance-driven mission effects. However, this information is provided because it is important to external agencies such as the users and program offices. When a measure does not meet criteria, the system is said to have a shortfall. The test team must determine whether the shortfall is operationally significant to the user community.

Step 2 of the MOP/MOS rating logic addresses the operational significance of performance shortfalls. Operational significance is the consequence or impact on an operation's outcome if a performance threshold is not met. An MOP/MOS can fail to meet a threshold but not have a significant operational impact. Ultimately, the operational impact on an MOE determines whether a shortfall is significant. A shortfall having little to no impact on the MOE is rated "Did Not Meet Criteria, Not Significant." A shortfall that has mission effect is rated "Did Not Meet Criteria, Significant." Operational significance is the primary technique for rating the MOP/MOS; however, the test team should also consider the operational significance of other factors, such as DRs, and apply operational experience/judgment to determine significance. The following paragraphs present additional considerations in determining operational significance.
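The two-step logic can be sketched as a simple decision procedure. This is a minimal illustration only; the hypothetical boolean inputs stand in for what is, in practice, test team analysis and judgment:

```python
def rate_mop_mos(met_threshold: bool, operationally_significant: bool) -> str:
    """Sketch of the two-step MOP/MOS rating logic described above.

    Step 1 documents whether performance met the threshold; Step 2
    applies only to shortfalls and judges their operational significance.
    """
    if met_threshold:
        return "Met Threshold"
    if operationally_significant:
        return "Did Not Meet Criteria, Significant"
    return "Did Not Meet Criteria, Not Significant"

# A shortfall judged to have little to no impact on the MOE:
print(rate_mop_mos(met_threshold=False, operationally_significant=False))
```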

Test Conditions

Performance measures may meet criteria under some conditions but not all, or a system may only be tested in a small set of conditions. In these cases, team judgment should be applied when determining/projecting the operational significance. If the shortfall only appears under certain operational conditions (e.g., certain altitudes, specific weather phenomena, cyber environment), the test team uses available information such as CONOPS and OPLANS together with operational experience/judgment to determine its impact. If the mission effects described by the MOE were significant and substantially impacted the operation (described by the COI), the measure may be rated "Did Not Meet Criteria, Significant Shortfall." Conversely, if the system was tested across a wide variety of conditions and the impact on the MOE was not substantial, even if it sometimes did not meet the threshold criteria, the variation may justify a Not Significant determination.

Deficiency Reports

At times, DRs are associated with specific measures. These DRs may or may not align with performance outcomes and effects on the MOEs/COIs. When the demonstrated effects of performance shortfalls differ from the expected effects documented in a DR, the team must apply experience/judgment, but will likely rate the measure to be consistent with the MOP/MOS rating. For more information on the DR process, see TO 00-35D-54 and the AFOTEC Deficiency Reporting Tool in the AFOTEC intranet Library.

Beyond the Point Estimate

When analyzing a MOP/MOS that involves interval or ratio data (e.g., time, temperature, distance), it is important to consider more than just the summarized value, known to statisticians as a point estimate, which should be identified as the metric in the measures worksheet. Point estimates provide the best guess about some population parameter, such as the overall performance average. However, the dispersion of data around the point estimate, or variability, also provides useful information to the test team and the warfighter.

While variance and standard deviation are the statisticians' metrics of choice for summarizing how data are dispersed around an average, a more interpretable metric is the range of the data, i.e., the difference between the highest and lowest data values. When the data are normally distributed (and therefore the average is at the center of the distribution), one can estimate that roughly 99% of the data will fall within plus or minus three standard deviations of the average. However, it is not easy to heuristically translate a standard deviation into a range when the average is not near the center of the distribution. Hence the range is a more general and straightforward way of describing the degree of dispersion in the data.

Using both the average and the range can facilitate a more thorough assessment of the system. Suppose a system and its users take an average of 10 minutes to accomplish a task and the range is 2 minutes. Then the warfighter can be told to expect the system to consistently require around 10 minutes for task completion (9 to 11 minutes if the average is centered in the range). However, suppose the average is 10 minutes and the range is 12 minutes. Furthermore, suppose the average is not centered in the range and the test data show the task takes between 8 and 20 minutes. Then the warfighter may need a warning about the variability in performance; it may be that waiting 10 extra minutes is the difference between life and death. The evaluation of the measure should take the range of the data into account.
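The task-time example can be illustrated with a minimal sketch; the data below are hypothetical, chosen only to match the averages and ranges discussed above:

```python
def summarize(times_minutes):
    """Report the point estimate (average) and the dispersion (range = max - min)."""
    avg = sum(times_minutes) / len(times_minutes)
    spread = max(times_minutes) - min(times_minutes)
    return avg, spread

# Hypothetical task-completion times (minutes) for the two cases in the text.
consistent = [9.0, 9.5, 10.0, 10.5, 11.0]   # average 10, range 2
variable = [8.0, 8.0, 8.0, 8.0, 8.0, 20.0]  # average 10, range 12, not centered

print(summarize(consistent))  # (10.0, 2.0)
print(summarize(variable))    # (10.0, 12.0)
```

Both data sets share the same point estimate; only the second set, reported with its range, warns the warfighter that the task can occasionally take twice the average time.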

Operational Experience/Judgement

The test team's experience/judgment is the cornerstone of credible OT analysis, evaluation, and reporting. It will be the deciding factor when determining operational significance throughout the rating levels. In particular, test teams leverage their operational experience/judgment when they determine whether a shortfall is operationally significant. Test team judgment should be informed by credible and, preferably, verifiable information. Examples of different information types are overall performance of the SUT during test scenarios, recorded feedback and ratings provided by users during testing, modeling and simulation results, the system CONOPS, knowledge of legacy system performance (if applicable), and DRs.

Key Performance Parameters

KPPs are not significant considerations when rating MOPs/MOSs. While they highlight specific system performance, and meeting or not meeting a threshold criterion is of great interest to the user and SPO, they are treated no differently than any other capability requirement. It is not uncommon for a SUT to fail to meet a KPP criterion with little to no operational impact. Test teams evaluate KPP-based operational effects in the context of various operations and operational conditions, no differently than other capability requirements. The KPP performance is documented in MOP/MOS ratings, identified as KPPs, and rated based on the analysis of significance. If a KPP threshold appears contrary to operational effects (or the MOE rating), the test team should recognize and document these differences and communicate the rationale.

Additional Observations

Sometimes teams learn noteworthy information while evaluating a measure that was not previously considered during test planning (e.g., unplanned data collection, interviews, user comments, or DRs), nor within the scope of the measure. While additional observations may not affect the rating of the measure, they are included in the narrative of the test report. The test team might identify a shortfall if they measure an aspect of the SUT not addressed by the criteria. For example, suppose a requirement states that a system must have an average assembly time of 20 minutes. The test team realizes that disassembly time is also operationally relevant and not likely to be the same as the assembly time, so they create a separate measure without a threshold. They can determine whether a shortfall exists by observing distinct steps of the disassembly process and the level of difficulty operators experience while disassembling the system. In addition, when the test team talks to operators during testing, they can place the average disassembly time in context and determine if there are negative impacts to the mission. Figure 8 shows MOP/MOS ratings and visual indicators, including the "Inconclusive" rating, which should only be used when analysis does not allow a credible performance determination.

Figure 8. MOP/MOS Ratings and Visual Indicators

[image11]


RATING CRITICAL OPERATIONAL ISSUES

The measures, notably MOEs, form the basis for evaluating and rating a COI. Test team judgment is key to rating a COI and is based on test team members' prior operational experience, military training, and experiences during test events. However, it is also informed by battlespace development, test design, measurement, analysis using formal statistics, measurement and evaluation logic, etc.

COI evaluation determines the impact that effects had on the operations defined by the COIs. After MOEs are rated individually, the evaluation shifts to collective effects (or operational impacts) on operations. Primarily, COI ratings are based on evaluating the MOEs mapped to each COI and on the judgment of the test team. The COI rating taxonomy has four tiers, and each tier addresses a different operational impact. In addition to the effects addressed in MOEs, the test team should consider the impact of notable findings for each MOE.

Each COI rating must be supported by a defensible narrative and an associated visual indicator. When there are no negative effects resulting in operational impacts, the COI is rated "No E&S Shortfalls." When there are operational impacts, the three potential operational ratings are Minimal, Substantial, and Severe. Figure 9 depicts the COI rating logic.

Figure 9. COI Rating Logic

[image12]

Use the following as a general guideline for rating COIs:

  • No E&S Shortfalls, No impact. There were no negative MOE effects resulting in operational impact.

  • E&S Shortfalls, Minimal impact. Use this rating if the MOE effects indicate operational personnel will be able to accomplish the operation defined by the COI with no significant impact. Generally, this means that all operational tasks can be accomplished without any (or with minimal) limitations and workarounds.

  • E&S Shortfalls, Substantial impact. Use this rating if the MOE effects indicate operational personnel will be able to accomplish the operation, but with significant operational impact. This may mean the majority or all operational tasks can be accomplished, but with significant limitations or operational workarounds. Examples of substantial impact would be operations accomplished with a manageable reduction in accuracy; an acceptable increase in time, assets, and manpower; or minor changes to tactics/training.

  • E&S Shortfalls, Severe impact. Use this rating if the MOE effects indicate operational personnel will not be able to accomplish the operation in the projected operational environment. This may mean the majority or all operational tasks cannot be accomplished, even with significant limitations or operational workarounds. Examples of severe impact would be unacceptable lethality or reliability; an unacceptable increase in time, assets, and manpower; or unacceptable cybersecurity.

Figure 10 illustrates COI ratings and visual indicators, including the "Inconclusive" rating, which should only be used when analysis does not allow a credible determination of operational impact at the COI level. COI notable findings are addressed later in this section.

Figure 10. COI Ratings and Visual Indicators

[image13]

Basis for Critical Operational Issue Evaluations

The following paragraphs capture elements test teams must consider when +determining operational impact.

Preponderance of Measures Meeting Criteria

MOEs are indicators of whether and how well a COI can be achieved. If numerous measures indicate success, the operation defined by the COI is likely to be achieved. However, avoid making the determination with a simple count, such as noting that 4 out of 5 MOEs have no significant effect on the operation. Take advantage of the operational experience and judgment of the test team.

Prominent Measures

There may be one or more measures that have a greater influence and impact on a COI. While these measures are usually identified prior to testing, particularly as a desired effect, test results sometimes reveal the unexpected importance of a particular measure. For example, even if a preponderance of MOEs reflect positive effects, a single prominent measure (effect) could indicate a substantial/severe operational impact.

Operational Conditions

It is not uncommon for a system to perform well under some conditions, but poorly under others. Test results combined with conditions provide an indication of system robustness. A limitation or poor performance under certain conditions may justify a severe operational impact, but if the likelihood of employing/sustaining the SUT under those conditions is rare, it may warrant a rating of substantial, rather than severe, impact.

Cyber Effects

Cyber threats are not more important than any other condition or threat, but are differentiated and highlighted for two reasons. First, and most importantly, cyber threats are always present and active during ongoing operations, including during peacetime. Second, cyber receives a disproportionate level of external attention, which warrants special emphasis and highlighting in test reports. Using cyber threat conditions, test teams estimate cybersecurity protection, detection, and restoration risks at the MOP/MOS level, and confidentiality, integrity, and availability effects at the MOE level. Then, they determine the operational impacts of cyber effects on the various operations defined by COIs.

Interaction of Results

Occasionally, prominent measures may have interdependencies that make it difficult to draw conclusions regarding a specific COI (e.g., "You can plan a mission quickly, but it isn't complete," or "You can create a complete mission plan, but it takes an unacceptable amount of time."). Such interactions must be understood and discussed to determine their significance and operational impact.

Holistic Considerations

Test teams must consider all MOEs under a COI. In conjunction with MOEs that express desired effects, there may be MOEs or notable findings that express undesirable effects. Collectively, the operational impact of these effects will determine the consequent rating of the COI. In addition, some MOPs/MOSs may have additional observations and DRs to consider when determining operational impact. For example, consider an overly complex and time-consuming maintenance action to remove and replace a component. This alone could result in negative operational effects, but if the reliability of the component was well beyond the predicted reliability, and the component would rarely need to be changed, this could mitigate the negative operational effects. The team would have to consider other factors in the aggregation, such as confidence in the reliability predictions, criticality of the component, and operational tempo.

COI Notable Findings

Under rare circumstances, the team may identify a significant shortfall (additional observation) and/or significant effect (MOE notable finding) that was not addressed in the original OT&E construct. If the shortfall has a substantial/severe impact on the operational outcome, it is identified as a Notable Finding and considered when determining the COI rating.

RATING OPERATIONAL EFFECTIVENESS AND SUITABILITY

At the conclusion of OT&E (initial OT&E, qualification OT&E, follow-on OT&E, and multiservice OT&E), an AFOTEC report documents the operational effectiveness and operational suitability (E&S) determination for the SUT. The definitions of effectiveness and suitability are fairly specific, but generally foreign to those without an acquisition and/or test background. Although the terms come across as acquisition jargon, the definitions are warfighter-centric and succinctly address the major operational warfighting considerations. Figure 11 contains the E&S definitions. Effectiveness is synonymous with mission accomplishment and is defined by the mission statement, documented early in the planning process. It subsumes all operations defined by COIs and factors in employment concepts and operational conditions. Suitability is defined by support concepts, operational conditions, and COI context.

Figure 11. Effectiveness and Suitability Definitions

[image14]

Rating Effectiveness and Suitability

By now, it should be clear that determining a SUT's E&S begins at the measures level and carries through to the COIs. After the test team rates the COIs individually, they determine the E&S rating. The E&S rating is based on the collective (or aggregate) operational impact of the COI ratings on mission accomplishment, and on whether the SUT can be fielded and sustained. The E&S rating taxonomy has four tiers, and each tier addresses a different operational impact. The E&S ratings (or determination) should be supported by a defensible narrative and assigned an associated visual indicator. Figure 12 depicts the E&S rating logic. Use the following guidelines to rate E&S:

  • Rating: Fully Effective. Applied if all aspects of the mission can be accomplished in the intended operational environment

  • Rating: Effective. Applied if all aspects of the mission can be accomplished, but with increased operational costs

  • Rating: Partially Effective. Applied if only some aspects of the mission can be accomplished AND/OR the mission can only be accomplished under some operational conditions

  • Rating: Not Effective. Applied if the mission cannot be satisfactorily accomplished

  • Rating: Fully Suitable. The system can be fielded/sustained in the intended operational environment

  • Rating: Suitable. The system can be fielded/sustained, but with increased operational costs

  • Rating: Partially Suitable. Only some aspects of system fielding/sustainment can be accomplished AND/OR fielding/sustainment can only be accomplished under some operational conditions

  • Rating: Not Suitable. Fielding/sustainment cannot be satisfactorily accomplished

Figure 12. Effectiveness and Suitability Rating Logic

[image15]

Mission Capability Determination (Optional)

An overall mission capability (MC) rating may accompany the E&S rating in the final OT&E report. The MC determination is an optional rating for AFOTEC test teams that is intended to provide a determination of the SUT's overall capability to execute or support tasked missions. The MC rating must consider operational costs (manpower, time, ease of use, supplies, workarounds, and risks), limitations associated with aspects or portions of the mission, and mission accomplishment across a variety of operational conditions. Like the E&S determination, it uses a four-tier rating scale (see Figure 13). The MC determination merges E&S conclusions into a single rating. If the team applies the optional MC rating, it is in addition to, not in lieu of, the E&S rating. The MC determination uses the same analysis and evaluation information used to rate the E&S, and the following definitions apply:

  • Rating: Fully Mission Capable. Applied if all aspects of the mission can be accomplished in the intended operational environment

  • Rating: Mission Capable. Applied if all aspects of the mission can be accomplished, but with increased operational costs

  • Rating: Partially Mission Capable. Applied if only some aspects of the mission can be accomplished AND/OR the mission can only be accomplished under some operational conditions

  • Rating: Not Mission Capable. Applied if the mission cannot be satisfactorily accomplished

Figure 13. Mission Capability Rating Logic

[image16]

TEST REPORTING

AFOTEC's test report communicates the results of our operational assessment/evaluation and the impact of those results to the warfighter (the "so what"). The test team reports test findings/results in various formats and on different timelines depending on the test activity, test purpose, and external needs. Depending on the nature of the program, reports inform specified milestones, releases of capability to the field, progress or status to the program office, or feedback to developers and testers. Test reporting is the primary enabler for continuous and cumulative feedback. The OT&E reporting strategy should be tailored to the acquisition strategy and adapted during OT&E, if warranted.

OT&E reporting has evolved and taken on various formats. With an increased emphasis on early and continuous OT feedback, reports are smaller, document less analysis, are generated more quickly, are approved at lower levels, and may not inform defined decisions. There are numerous types of OT&E reports. Some have established content expectations with existing templates, while others are heavily tailored to specific situations. Some reports are written at the end of test, some immediately after test as quick interim summaries, and some are provided during test execution. OT&E reports can be generated to report anything from problems or progress up to a final determination of effectiveness and suitability. Some reports also include briefings to communicate results to various stakeholders.

Provide feedback in writing. In the past, AFOTEC test teams provided continuous relevant operational feedback during various functions/forums, but delivered limited written feedback/information. Valuable information is easily forgotten, lost, or dismissed if not provided in writing.

Although reporting choices depend on numerous factors, all AFOTEC reports should adhere to some fundamental OT&E tenets. They should be accurate, balanced, and timely, and they should be operationally sufficient, technically adequate, and credible.

  • Accurate: Information will be precise in analysis and evaluation to ensure the narrative and ratings are consistent and the conclusions are defendable.

  • Balanced: Balanced reports highlight system capabilities and limitations (strengths and weaknesses). This can be especially challenging when focusing on shortfalls.

  • Timely: Timely reporting falls into two categories: first, delivering reports to decision makers with time to review and understand the results prior to decision dates; second, reporting operationally relevant information to stakeholders in time to affect capability development.

  • Operational Sufficiency: The breadth of the operations addressed while employing new or modified capabilities within the context of representative employment and support concepts. The evaluation is considered operationally sufficient if it provides the decision maker and the warfighter with results from test events executed across sufficient operational conditions to identify the capabilities and limitations associated with employment as well as to determine the effectiveness and suitability.

  • Technical Adequacy: Addresses the relevance of the technical information produced by the test in relation to the purpose of the test (i.e., the operationally relevant questions being addressed by the test activity). A test is technically adequate if the test data evaluation provides the acquisition decision maker with the information to inform the appropriate decision.

  • Technical Credibility: Addresses the depth and accuracy of the technical information produced by the test. AFOTEC products must be credible, and the Commander must have confidence in the conclusions when reporting OT&E. The report addresses risk with an acceptable degree of certainty in test results. Acceptable certainty starts with sound, defendable processes from ITD through OT&E analysis and reporting.

Test reporting is broken into two broad categories: continuous reporting +and final reporting.

Continuous Reporting

Continuous and cumulative feedback ensures AFOTEC provides timely feedback throughout the life of a program, to include early acquisition stages. Test directors should continuously deliver timely, relevant written information/findings to stakeholders (e.g., developers and program offices) as soon as they are available. Test teams provide continuous feedback in small, tailorable reports (e.g., status reports, significant event reports, periodic reports, observation reports, and quick look briefs). TDs may also provide more structured progress reports (e.g., operational assessments) at specified milestones, depending on the acquisition strategy. Reporting deliverables may be cumulative, containing all OT feedback captured prior to a particular milestone or decision point. Continuous feedback shifts the impression of an OT "final exam" to one of continuous OT, collaborative discovery, and information exchange. Thus, AFOTEC is better integrated as a partner in system development and in delivering combat capability through timely independent feedback.

Test directors should be acutely aware of their responsibility to keep their leadership (Det/CCs) apprised of test progress, status, and any potential issues. Detachment commanders should ensure AFOTEC senior leadership is aware of any potential controversy that could surface during reporting. As the staff POC, A-3 provides feedback and recommended courses of action to Detachments and the Command Section during test execution and reporting. There are several recognized formats for providing continuous written feedback. Although continuous reporting fundamentals are common across the various AFOTEC programs, test teams exercise significant latitude in reporting. Report titles vary from Detachment to Detachment and contain subtle differences in internal format. Teams are not limited to the currently recognized reporting templates.

Operational Assessment Reports

EOAs/OAs analyze and assess progress toward E&S and delivering combat capability. EOAs/OAs are most commonly used for Major Capability Acquisitions (see DoDI 5000.02), but may be suited for some Middle Tier Acquisitions (see DoDI 5000.80). Assessment areas are structured around AFMAN 63-119 templates that address acquisition, development, and testing. The "AFOTEC OT&E Guide" and corresponding functional guides (e.g., Test Execution Guide, Cybersecurity Guide) contain guidance on EOAs/OAs. These assessment reports contain results for two distinct areas. Area 1 assesses progress toward operational capabilities (E&S). Area 2 assesses progress toward delivery of combat capability in three areas: test planning and documentation, system design and performance, and test assets and support. EOA/OA tools and report templates are available in the AFOTEC Intranet Library. Approval authority is carried out in accordance with (IAW) the signature delegation memorandum on the AFOTEC Intranet Library.

Status Reports

Status reports provide updates and important test findings during OT&E. Status reports take the form of a letter, are normally very short (no more than several pages), and are not intended to be a mini final report. Status reports may be periodic (monthly, quarterly, or as required), associated with specific (planned) test events, or in response to an external organization's request for test status. Optimally, the OT&E plan documents the intent to produce a status report, and how often. A status report template is provided as a starting point and is posted in the AFOTEC Intranet Library. Approval authority is carried out IAW the signature delegation memorandum on the AFOTEC Intranet Library.

Periodic Reporting

Periodic reporting allows frequent reporting on a pre-planned, continuous interval. These reports are generally small and quick to produce given that the content is limited. They are generally written in a memo format, and the content and level of detail are tailored to the program needs. Briefings are by exception. Currently, there is no periodic report template; contact AFOTEC A-3 for the latest guidance and examples of periodic reports. Approval authority is carried out IAW the signature delegation memorandum on the AFOTEC Intranet Library.

Observation Reports

Observation reports provide OT&E observations of key acquisition activities and test events, which do not have to be integrated. Observation reports are usually associated with an observation plan, but not always. The report summarizes the scope of the observation, to include the acquisition activities and developmental testing observed. It describes key activities observed (e.g., development laboratory testing, modeling and simulation events) and applied data collection methods (e.g., direct observation, questionnaires, interviews, and research). It also provides notable observations (significant positive and negative observations) of interest to decision makers and developers, and makes recommendations to address those observations. Observation report briefings are provided by exception, if requested/directed. Observation reports communicate the status (acquisition or performance), operational relevance, and results of significant activities/events. See the observation report template on the AFOTEC Intranet Library. Approval authority is carried out IAW the signature delegation memorandum on the AFOTEC Intranet Library.

Significant Event Reports

Significant event reports briefly describe results of significant test events, as determined by the TD and Det/CC, during OT activities. They are usually not provided in pre-planned intervals. The Det/CC submits these reports via email to the AFOTEC/CC. The AFOTEC/A-3 receives these reports and submits them, as appropriate, to the PM, HQ USAF/TE, PEM, PEO, LDTO, PTOs, operational MAJCOM, and others, within 24 hours of any significant test event, as described in the test plan.

Operational Test Situation Reports

OT SITREPs are used to keep leadership informed of test execution progress, test team status, and upcoming events. Typically, they are not provided for continuous reporting, but test teams should be aware of this type of report. Although uncommon, some teams have used a modified SITREP format as a method of providing continuous feedback for exceptionally fast-paced development efforts. Usually, SITREPs are signed out by the TD, but it is important to elevate approval authority when SITREPs provide cumulative results for later OT buy-down or draw conclusions that inform a fielding decision. See the AFOTEC OT&E Guide, the AFOTEC Test Execution Guide, and the SITREP templates (to include SAP) for more information.

Final Reporting

The final report is the culmination of the OT&E process and is often considered the most important product AFOTEC produces. The final report provides a determination of effectiveness and suitability for formal acquisitions (and operational utility determinations for technology demonstrations; see the OUA tool in the AFOTEC Intranet Library). Final reports include a briefing, a general officer-level summary of the test report. A final OT&E report is a cumulative document that contains operationally relevant feedback (periodic reporting) from all previous OT. It includes a determination of the SUT's operational effectiveness and suitability while it is operated and maintained in a realistic operational environment (including exposure to representative threats) by typical operators and maintainers. Evaluations require operational experience to apply judgment and determine system performance and operational impact in the context of intended operations. Various report templates are available in the AFOTEC Intranet Library to aid teams when writing final reports (e.g., OUE, M/I/Q/FOT&E, and OUA). Report templates are a guide to ensure important content is included. The instructional text describes expectations for content and format. Approval authority is carried out IAW the signature delegation memorandum on the AFOTEC Intranet Library.

Effectiveness and Suitability Final Reporting

E&S is determined and documented in one of the following final reports.

  • Operational utility evaluation (OUE) report

  • Initial operational test and evaluation (IOT&E) report

  • Qualification operational test and evaluation (QOT&E) report

  • Follow-on operational test and evaluation (FOT&E) report

  • Multiservice operational test and evaluation (MOT&E) report

OUE Final Reporting

Although OUEs occur continuously during the OT&E timeframe, they are intended to provide an E&S determination. They are conducted to demonstrate or validate new operational concepts or capabilities, upgraded components, or expanded mission/capabilities of existing or modified systems. OUEs may support operational decisions (i.e., fielding a system with less than full capability, including but not limited to integrated testing of releases and increments of capabilities) or acquisition-related decisions, whenever appropriate throughout the system's lifecycle. The OUE M/I/Q/FOT&E report and classified report annex templates posted in the AFOTEC Intranet Library contain more information. Also see the "AFOTEC OT&E Guide," 11th ed. for staffing guidelines.

IOT&E Final Reporting

IOT&E is the final dedicated phase of OT&E, which precedes a full-rate production (FRP) decision. It is the final evaluation that entails dedicated operational testing of production representative test articles and uses typical operational scenarios that are as realistic as possible. IOT&E is conducted by an operational test agency (OTA) independent of the contractor, program management office, or developing agency. AFOTEC commonly conducts IOT&Es to inform the FRP/initial operational capability (IOC)/full operational capability (FOC) and fielding decisions. IOT&E will likely include developmental (contractor or government) test and integrated test results deemed operationally relevant. The OUE M/I/Q/FOT&E report and classified report annex templates posted on the AFOTEC Intranet Library contain more information. Also see the "AFOTEC OT&E Guide," 11th ed. for staffing guidelines.

QOT&E Final Reporting

QOT&E is conducted by an operational test agency (OTA) independent of the contractor, program management office, or developing agency. When programs lack RDT&E funding (e.g., commercial off-the-shelf, nondevelopmental items, and government furnished equipment), AFOTEC performs a QOT&E. QOT&Es are independent and dedicated operational tests that AFOTEC conducts in as realistic an operational environment as possible to evaluate a system's operational capability as determined by its effectiveness and suitability in its intended operational environment. QOT&E is the final dedicated phase of OT&E preceding a full-rate production (FRP) decision. AFOTEC can conduct QOT&Es to inform the FRP/initial operational capability (IOC)/full operational capability (FOC) and fielding decisions. The OUE M/I/Q/FOT&E report and classified report annex templates posted on the AFOTEC Intranet Library contain more information. Also see the "AFOTEC OT&E Guide," 11th ed. for staffing guidelines.

FOT&E Final Reporting

FOT&E, a continuation of IOT&E or QOT&E, answers specific questions about unresolved COIs and test issues, verifies resolution of deficiencies determined to have substantial or severe impact on mission operations, or completes areas not finished during the I/QOT&E. Requirements for FOT&E are documented in an approved AFOTEC OT&E report prior to planning the FOT&E. The OUE M/I/Q/FOT&E report and classified report annex templates posted on the AFOTEC Intranet Library contain more information. Also see the "AFOTEC OT&E Guide," 11th ed. for staffing guidelines. In addition, DoDI 5000.02 and AFI 99-103 contain details on FOT&Es.

MOT&E Final Reporting

MOT&Es are essentially OT&Es involving more than one military Service. A standing MOT&E MOA, signed by the four Service OTAs and the Joint Interoperability Test Command (JITC), outlines roles, responsibilities, and event-driven deliverables for participating Service OTAs. Test team members should review the MOA prior to MOT&E reporting activities. MOT&E report coordination is generally more challenging and takes longer. Per the MOA, supporting OTAs will use the lead OTA's processes and products during the OT&E activities. However, even in a supporting role, AFOTEC TDs still have the responsibility to know, understand, and be able to present overall test results. The Multiservice MOA, the OUE M/I/Q/FOT&E report, and classified report annex templates posted on the AFOTEC Intranet Library contain more information. Also see the "AFOTEC OT&E Guide," 11th ed. for staffing guidelines.

Interim Summary Report

An ISR is provided to the appropriate AFOTEC signature authority within 10 calendar days of completion of the final phase of OT&E (OUE, IOT&E, etc.). An ISR is a time-sensitive alternative to a final report (with an expedited review cycle) used to meet a decision date that occurs before AFOTEC can complete its final report. It is a short (15 pages maximum) summary that provides preliminary results of the test team's significant findings to credibly inform the decision maker about the system's effectiveness and suitability. Distribution is limited to essential addressees; however, expanded external circulation may be appropriate for programs with high interest or upon request or direction. ISRs are staffed following the product review procedures in the "AFOTEC OT&E Guide," 11th ed.

Operational Utility Assessment Final Reporting

An operational utility assessment (OUA) is a judgment of a system's military utility when exposed to representative threats while operated/maintained in realistic operational environments by typical operators/maintainers. The AFOTEC OUA tool and OUA report template posted to the AFOTEC Intranet Library contain more information.

COORDINATION AND PUBLISHING

With the ever-evolving acquisition environment, reduced resources, and expanding AFOTEC roles, it is imperative that AFOTEC optimize its final report routing process while maintaining both technical and administrative quality. AFOTEC leadership expects the Det technical advisor (TA) to ensure that each report is technically accurate and supportable and that it was reviewed by the Det's technical editor (TE) prior to forwarding it to HQ for coordination. Emphasis is placed on using final report templates to establish the foundation of the final report prior to test start. A strawman of the final report should be ready for review at the TRR briefing. Remaining sections of the final report should be written as test events are completed to meet the timeline from the last test event to submission of the final report for staffing and signature. The Det test director, Det TA, and Det/CC are responsible for ensuring that all reports (e.g., SITREP, status, and significant event) are ready for signature when they enter 2-letter and/or CS coordination. See the AFOTEC OT&E Guide for coordination details.

Controversial Information

Potentially controversial information must be passed up the AFOTEC chain of command before it is shared outside of AFOTEC, and preferably before official coordination begins. Delaying discussion of issues found early in the evaluation and reporting phase can limit AFOTEC's options to respond to those issues.

Signature/Approval Levels

The approval levels are documented in the AFOTEC delegation memorandum posted in the AFOTEC Intranet Library. See this site for current delegation guidance. All reports should be signed by the Det test director, Det TA, and Det/CC.

Coordination Method

The AFOTEC Collaborative Action Platform (ACAP) system is the preferred coordination tool. AFOTEC A-3 (A-3Z for SAP) is the staff lead for coordinating all test reports. However, in certain time-sensitive cases, email or other methods may be used. Discuss coordination methods with A-3 as soon as possible if ACAP will not be used to coordinate. All reports should be posted on the program's test management page.

History Office

The Det TE or HQ TE will transmit a copy of all reports they distribute to the HO.

Technical Editor Role

The Detachment technical editor (TE) works with their leadership to ensure the final report complies with current AFOTEC template, writing, formatting, and style requirements. The TE maintains configuration control of the draft final report and provides guidance to test teams. At a minimum, the TE is involved in three stages of the writing process: providing the test team instruction and current template guidance prior to writing; formatting and reviewing the final report before or after AO coordination; and performing a final edit prior to submission into 2-ltr coordination. Once the final report enters CS coordination, the A-3 TE maintains configuration control. The OA/OUA/OUE/OT&E report templates reside on the AFOTEC SharePoint.

Report Medium

Reports are coordinated using ACAP, unless the Det obtains CS approval to use another method.

Report Timeline Exceptions

Coordination could take longer on some programs (e.g., classified programs) or require an expedited process due to shortened time between completion of testing (e.g., an LTE) and the decision date. If the timelines outlined in Attachment C need to be modified to meet the requirements of a particular reporting situation, contact A-3 early in the reporting process to develop an agreeable alternative.

Final Report Publishing

The TD is responsible for ensuring the distribution list documented in the final report contains accurate and up-to-date NIPRNet and SIPRNet addresses for program-specific personnel (individuals, not workflow addresses). Once the AFOTEC/CC has signed the final product (keeping in mind the timeline for staff coordination outlined in Attachment C), the following procedures are followed to publish the final report:

  • If the TD wishes to include additional files (briefings, videos/photos, interviews, related reports, etc.) with the final report, the additional files will be included as supporting documentation during ACAP 2-ltr coordination. The Det TE ensures these additional files are in a format compatible with the final report file.

  • Following final review and approval/signature by the AFOTEC/CC, the report and all attachments, annexes, and appendices are considered final; no further changes are authorized without the AFOTEC/CC's approval.

  • The A-3 TE converts the master report file into an electronic ".pdf" file, including additional files as requested by the TD.

  • The A-3 TE transmits the file to individuals indicated on the document's distribution list. If the file is classified, or if there is a classified annex to the file, the A-3 TE provides NIPRNet notification to the same individuals stating the file was distributed via SIPRNet.

  • The A-3 PM posts these files to the program's NIPRNet test management page (and SIPRNet, if applicable) and removes all previous versions.

  • The A-3 TE transmits a copy of the file to DTIC for processing.

Report Briefings

Report briefings should be in the approved format found on the AFOTEC SharePoint. If the template formally changes after the TRR briefing, the previous template can be used unless the AFOTEC/CC directs significant changes requiring use of a draft template. Additionally, the command section may require modified, new, or deleted information that requires updating slides in the existing template. Briefings to OSD and agencies outside the Air Force are normally presented to AF/TE first. The user and program office are normally represented. The briefing is reviewed and coordinated, as documented in Attachment C, and externals (program office, AFMC, operating command, HQ USAF) are pre-briefed, as appropriate.

The report briefing is staffed with the written report as a package. An electronic copy of the briefing is maintained on the program's test management page. Detailed backup slides are not transmitted outside of AFOTEC. The AFOTEC/CC approves briefings required by OSD or other external agencies before they are presented outside of AFOTEC.

The briefing trail begins with internal briefings that lead to an AFOTEC/CC-approved and coordinated presentation. Consistent with the AFOTEC/CC's "no surprises" policy, the test team should encourage participation in the AFOTEC/CC presentation by personally inviting the user and developer communities. Prior to inviting any "externals" to the CC's briefing, the proposed attendees must be approved by the command section. Include proposed external invitees on the briefing request form. Every attempt should be made to include the user and program office via video-teleconference or telephone conference. Attachment C contains the product review process for staffing the report briefing.

Persistent communication and coordination with our mission partners and our chain of command are needed to achieve an uneventful final report briefing to DOT&E.

  • Flexibility and prior preparation are keys to a successful final report briefing. Know your audience and anticipate their concerns. Work closely with your DOT&E action officer to know what they will brief to the Director and ask for feedback after the briefing.

  • Prior coordination is key to final preparations. After the AFOTEC/CC approves, ensure briefings are reviewed and coordinated with our mission partners and our chain of command. We strive to achieve a unified Air Force position with no surprises.

  • Coordinate responses to anticipated questions prior to the briefing. You should answer only questions pertaining to OT&E results or observations made during testing. It is imperative to have the right people (normally the user/program office) available to address questions outside of our lane.

  • Coordinate with the using MAJCOM, program office, SAF/AQ and others to establish a clear understanding of each participant's responsibilities. Questions about CONOPS and operational impact should go to the user representative. Questions concerning funding, continued development, or plans for correcting deficiencies should be answered by the program office.

  • Have contingency plans and additional briefings by the proper attendee ready to address issues you know are important to your audience or to facilitate discussions. For example, the program office may need to prepare a briefing describing their plan for correcting deficiencies following OT&E. These special briefings should also be coordinated at the GO level well before the briefing. The additional briefs may not be needed, but you should have them available in case they are required.

  • Always stay in your lane. Do not answer rhetorical questions or offer opinions unless specifically asked, and remember you are speaking for the AFOTEC Commander.

\ No newline at end of file
diff --git a/sites/oteguide/index.html b/sites/oteguide/index.html
index 414876ce..38c7b9e8 100644
--- a/sites/oteguide/index.html
+++ b/sites/oteguide/index.html
@@ -1,4 +1,4 @@

Operational Test & Evaluation Guide

Chrome or Firefox recommended for best experience

Most Guides have an expandable folder you can open to see their contents, or you can simply search for what you're after.

On mobile?

  • Click the 'hamburger' icon to access sections
  • Save to your home screen using the options in your browser of choice

About

AFOTEC Team,

Last Spring, we undertook an effort to evolve our OT&E guidance to be less prescriptive, match what we were doing, and provide the tools to test teams to deliver combat capability. At the core of this effort was enabling the six principles of Adaptive, Relevant Testing. We have been on a journey to transform our core documentation, the OT&E Guide, into a mobile friendly app that makes use of the best the web has to offer.

This site is still in beta, but is regularly updated with all the same data from the AFOTEC SharePoint site. While we're still experimenting with the format, if there's a discrepancy between the two sites, SharePoint is the authoritative source.

Application features include:

  • Mobile and desktop views
  • Searchable across all documents with a scrollable table of contents that is always visible (desktop view)
  • Pop-up definitions of acronyms (desktop view)
  • Dark/Light reading experiences

The new guide is a living and breathing document. Rather than performing an update every other year, the new guide will incorporate updates at any point. To keep the Guide up to date and meaningful, we need content from you.

You can submit content through email to the Guide Team and App Team, such as:

  • Information for specific guides related to your job duties to create living continuity constructs
  • Tools and how-to methods for achieving certain aspects of your job
  • Templates for reports and other deliverables, along with step-by-step instructions
  • Contacts related to your job, to populate an always-current list
  • Web Application features you’d like to see

This is but one small part of transforming AFOTEC for relevance in 2030. The new tools will allow AFOTEC’s best ideas and most current information to move through the organization in an always current, dynamic fashion.

App Notes

8 Apr 2021

  • decomposed many of the lengthier guides into sub-pages, to help with mobile viewing and quickly finding search results

7 Apr 2021

  • added several guide documents
    • updated utility that converts from Word Doc to Markdown
  • general improvements to links and descriptions
  • modified search
  • added acronyms section
  • IL-4 version in work: tests ~77% complete, CMS ~90% complete

11 Mar 2021

  • Onboarded Lt Zink and MSgt Farr (in training)
  • Created a collapsible folder view on left TOC
  • Link fixes
  • Working on CMS
  • Working on P1 pipeline

4 Mar 2021

  • Onboarded Lt Peterson
  • P1 IL-4 setup
  • Link updates
  • Mobile scroll up minor fix
  • Color theme modification

19 Feb 2021

  • updated contact email
  • added mobile 'scroll up' button

4 Feb 2021

  • Added 404 page
  • Created internal links possibility for content pages
  • updated links to internal pages and docs on the SharePoint, minor fixes to content

27 Jan 2021

  • minor functional change to search (click away focus)
  • added links
  • added intro content

20 Jan 2021

  • modified search
  • added 6 Functional Guides (see bottom of the TOC)

Credits

The OT&E Guide Team is:

  • Mr. Dan Telford
  • Mr. Russ Foos
  • Mr. Payton Akin
  • Mr. David Ray
  • Col Kevin "Maddog" Madrigal
  • Col Matthew "Mags" Magness
  • (others)
  • You, if you want to add content (email the team)

The App Team is:

  • Lt William Zink
  • Lt Jacob Peterson
  • Capt Andrew "Auto" Risse
  • Maj Travis "Agent" Smith
\ No newline at end of file
diff --git a/sites/oteguide/lessons_learned_guide/1-8-Dissemination/index.html b/sites/oteguide/lessons_learned_guide/1-8-Dissemination/index.html
new file mode 100644
index 00000000..10fb041f
--- /dev/null
+++ b/sites/oteguide/lessons_learned_guide/1-8-Dissemination/index.html
@@ -0,0 +1,19 @@

Dissemination

Once a "Lesson Identified" (LI) is found by the LL team, it must be assigned an action officer. This action officer is given the task of developing the actions necessary to incorporate the LI into the organization. These actions are highly dependent on the individual LI and will have to adjust to the unique requirements of each LI.

Once implementation actions are identified, the action officer will need to regularly report on progress towards completion and keep the LL manager apprised of changes. The final goal of this process will be to have each of these LL incorporated into training, the OT&E guide, or OTIFs as appropriate; they will not exist within some stand-alone database of test advice.

\ No newline at end of file
diff --git a/sites/oteguide/lessons_learned_guide/1-9-Lessons-Learned-(Resolution)/index.html b/sites/oteguide/lessons_learned_guide/1-9-Lessons-Learned-(Resolution)/index.html
new file mode 100644
index 00000000..97b161e5
--- /dev/null
+++ b/sites/oteguide/lessons_learned_guide/1-9-Lessons-Learned-(Resolution)/index.html
@@ -0,0 +1,13 @@

Lessons Learned (Resolution)

The final step is resolution, where the LI has reached a successful implementation and dissemination. The LI has improved how the organization performs the mission, and the LI is changed to an LL. The implementation plan is closed and the LL is catalogued for reference.

\ No newline at end of file
diff --git a/sites/oteguide/lessons_learned_guide/2-1-CONCLUSION/index.html b/sites/oteguide/lessons_learned_guide/2-1-CONCLUSION/index.html
new file mode 100644
index 00000000..b7a4e06f
--- /dev/null
+++ b/sites/oteguide/lessons_learned_guide/2-1-CONCLUSION/index.html
@@ -0,0 +1,19 @@

CONCLUSION

It is the place of every member of our organization to ensure AFOTEC continues to learn and adapt. We must give our organization every opportunity to respond to new challenges in the DoD acquisition environment however they arise, be it policy, funding, or global situations. This process is one of the steps being taken to support our combined efforts to face and learn from the challenges of our mission, and it depends on our partners across the AFOTEC enterprise and their willingness to participate in this endeavor.

The AFOTEC Lessons Learned process is owned by A-2/9, and staffed by members of A-9O. Any questions, comments, or suggestions for improvement can be directed to this office.

\ No newline at end of file
diff --git a/sites/oteguide/lessons_learned_guide/2-2-OBSERVATION-SHEET/index.html b/sites/oteguide/lessons_learned_guide/2-2-OBSERVATION-SHEET/index.html
new file mode 100644
index 00000000..2a6e8639
--- /dev/null
+++ b/sites/oteguide/lessons_learned_guide/2-2-OBSERVATION-SHEET/index.html
@@ -0,0 +1,15 @@

OBSERVATION SHEET

\ No newline at end of file
diff --git a/sites/oteguide/lessons_learned_guide/2-3-INTERVIEW-SHEET/index.html b/sites/oteguide/lessons_learned_guide/2-3-INTERVIEW-SHEET/index.html
new file mode 100644
index 00000000..5ba7da56
--- /dev/null
+++ b/sites/oteguide/lessons_learned_guide/2-3-INTERVIEW-SHEET/index.html
@@ -0,0 +1,20 @@

INTERVIEW SHEET

\ No newline at end of file
diff --git a/sites/oteguide/lessons_learned_guide/index.html b/sites/oteguide/lessons_learned_guide/index.html
index 64492999..602e1c4e 100644
--- a/sites/oteguide/lessons_learned_guide/index.html
+++ b/sites/oteguide/lessons_learned_guide/index.html
@@ -1,4 +1,4 @@

Lessons Learned Guide

Last Updated/Reviewed: 08 December 2020

Originating Office: AFOTEC A-2/9

\ No newline at end of file
diff --git a/sites/oteguide/measures_guide/7-APPENDIX-B-MEASURE-REVIEW-QUICK-REFERENCE/index.html b/sites/oteguide/measures_guide/6-APPENDIX-A:-CATEGORIES-OF-MEASURES/index.html
similarity index 80%
rename from sites/oteguide/measures_guide/7-APPENDIX-B-MEASURE-REVIEW-QUICK-REFERENCE/index.html
rename to sites/oteguide/measures_guide/6-APPENDIX-A:-CATEGORIES-OF-MEASURES/index.html
index f47402f2..ae6eeda5 100644
--- a/sites/oteguide/measures_guide/7-APPENDIX-B-MEASURE-REVIEW-QUICK-REFERENCE/index.html
+++ b/sites/oteguide/measures_guide/6-APPENDIX-A:-CATEGORIES-OF-MEASURES/index.html
@@ -1,4 +1,15 @@

APPENDIX B MEASURE REVIEW QUICK REFERENCE

AFOTEC Keys to Mean Measures

Writing:

  • Does the threshold include an inequality, either ≥ or ≤?

  • Does the threshold include the units of measure?

Example:

  • ???

Planning for Analysis:

  • Keep in mind that the arithmetic mean can be sensitive to every value in the data set, including extremes. This effect is especially noticeable when the sample size is small.
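This sensitivity is easy to demonstrate with a quick sketch. The numbers below are purely illustrative (not from any program): one extreme value in a small sample pulls the arithmetic mean substantially, while the median barely moves.

```python
from statistics import mean, median

# Hypothetical detection times in seconds, n = 5
times = [4.1, 3.8, 4.4, 4.0, 3.9]
with_outlier = times + [21.0]  # one extreme run added

print(round(mean(times), 2))           # 4.04 -- mean of the clean sample
print(round(mean(with_outlier), 2))    # 6.87 -- one outlier shifts the mean
print(round(median(with_outlier), 2))  # 4.05 -- the median barely moves
```

Reporting the mean alongside the median (or noting outliers explicitly) helps decision makers see when a small sample's mean is being driven by a single event.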

AFOTEC Keys to Error Probable Measures

Writing:

  • Does the threshold include an inequality, either ≥ or ≤?

  • Does the threshold include the units of measure?

  • Does the metric include the probability?

Example:

  • ???

Planning for Analysis:

  • Are the underlying assumptions about the metric realistic for the operational scenario?

AFOTEC Keys to Percentile Measures

Writing:

  • Does the threshold include an inequality, either ≥ or ≤?

  • Does the threshold include the units of measure?

  • Does the metric include the percentile value (not just the word “Percentile”)?

Example:

  • ???

Planning for Analysis:

  • Is a percentile applicable to the requirement or is there another metric more appropriate?

  • Does the team intend to report the percentile of the empirical data or of a fitted distribution?

  • Is the sample size sufficient for a percentile to be meaningful?
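The empirical-versus-fitted question above matters because the two approaches can give different answers for small samples. A minimal sketch (illustrative data, Python standard library only) comparing an interpolated empirical 90th percentile with the 90th percentile of a normal distribution fitted to the same sample:

```python
from statistics import NormalDist, quantiles

# Hypothetical response times in seconds (not real test data)
data = [11.8, 12.1, 12.2, 12.5, 12.6, 12.9, 13.0, 13.4, 13.7, 14.2]

# Empirical 90th percentile, interpolated from the sample itself
empirical_p90 = quantiles(data, n=10)[-1]   # 14.15

# 90th percentile of a normal distribution fitted to the sample
fitted_p90 = NormalDist.from_samples(data).inv_cdf(0.90)

print(round(empirical_p90, 2), round(fitted_p90, 2))
```

The fitted value leans on a distributional assumption the team must justify; the empirical value is driven entirely by the few largest observations. Documenting which approach will be reported, before test execution, avoids the appearance of choosing the friendlier number afterward.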

AFOTEC Keys to Probability Measures

Writing:

  • Is the threshold a value between 0 and 1?

  • Does the threshold include an inequality, either ≥ or ≤?

  • Are the metric and threshold cells merged?

  • Does the measure name include the phrase “Probability of”?

Example:

  • ???

Planning for Analysis:

  • Is the sample size sufficient to determine a probability with a reasonable confidence interval?

  • Is a probability meaningful for the operational scenario?
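One common way to answer the sample-size question is to look at the width of the confidence interval a given sample supports. The sketch below uses illustrative counts and the standard Wilson score interval (a textbook method, not AFOTEC-specific guidance) for a binomial proportion:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> ~95% CI)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Hypothetical: 17 successful engagements out of 20 trials
low, high = wilson_interval(17, 20)
print(f"observed 0.85, 95% CI ({low:.2f}, {high:.2f})")  # roughly (0.64, 0.95)
```

With only 20 trials the interval spans roughly 0.3, which may or may not be precise enough to compare against the threshold; running this arithmetic during planning shows whether the expected sample size can actually resolve the requirement.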

AFOTEC Keys to Continuous Minimum/Maximum Measures

Writing:

  • Does the threshold include an inequality or equal sign, either ≥, ≤, or =?

  • Is the sign appropriate for the metric (“≤” for a Maximum, “≥” for a Minimum, or an equals sign “=” in certain cases)?

  • Is the metric spelled out completely (no “Min” or “Max”)?

Example:

  • ???

Planning for Analysis:

  • Is a minimum/maximum metric operationally relevant?

AFOTEC Keys to Discrete Minimum/Maximum Measures

Writing:

  • Does the threshold include an inequality or equal sign, either ≥, ≤, or =?

  • Is the sign appropriate for the metric (“≤” for a Maximum, “≥” for a Minimum, or an equals sign “=” in certain cases)?

  • Is the metric spelled out completely (no “Min” or “Max”)?

Example:

  • ???

Planning for Analysis:

  • Is a minimum/maximum metric/threshold operationally relevant?

  • Can the measure be made continuous?

AFOTEC Keys to Proportion Measures

Writing:

  • Does the measure clearly express a relationship between the “whole” and the “part”?

  • Is the “whole” listed first and the “part” listed second?

  • Are the words “that are” used to convey the relationship?

  • Are both parts of the relationship clearly and concisely expressed with no double meaning?

  • Are the words “out of” used to convey the relationship in the threshold?

  • Alternatively, a division symbol can be used (3/5)

Example:

  • ???

Planning for Analysis:

  • Can the measure be made continuous?

  • Does the expected sample size support the decision to use a proportion metric?

  • Do not report proportion measure results in decimal form alone. If the requirement is given in a decimal or percentage, report the result as a proportion and annotate the decimal/percentage notation for comparison to the threshold of the requirement.

  • Do not report results in vague quantitative terms such as “a majority of targets were correctly identified.” Always include the actual quantitative values.

AFOTEC Keys to Percentage Measures

Writing:

  • Does the measure clearly express a relationship between the “whole” and the “part”?

  • Is the “whole” listed first and the “part” listed second?

  • Are the words “that are” used to convey the relationship?

  • Are both parts of the relationship clearly and concisely expressed with no double meaning?

  • Does the threshold include an inequality or equal sign, either ≥ or ≤?

Planning for Analysis:

  • Is there a continuous threshold hidden in the requirement that can be used to convert the Percentage measure into a continuous Percentile measure?

  • Can a continuous identified standard be determined for the requirement?

  • Does the expected sample size support the decision to use a percentage metric?

  • Do not report results in vague quantitative terms such as “a majority of targets were correctly identified.” Always include the actual quantitative values.
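One way to picture the "hidden continuous threshold" check: a requirement phrased as a percentage against a time standard can often be re-expressed as a percentile of the continuous quantity. The 10-second standard and the delivery times below are hypothetical, chosen only to show the two forms side by side:

```python
import statistics

# Hypothetical delivery times (seconds); the requirement is assumed to be
# "90% of deliveries complete within 10 seconds".
times = [4.2, 5.1, 6.0, 6.8, 7.3, 8.1, 8.9, 9.4, 11.2, 12.5]

# Percentage form: a pass/fail count against the 10-second standard.
pct_within = sum(t <= 10 for t in times) / len(times)

# Continuous percentile form: the 90th percentile of the observed times,
# compared directly to the 10-second threshold.
p90 = statistics.quantiles(times, n=100)[89]

print(f"{pct_within:.0%} within 10 s; 90th percentile = {p90:.1f} s")
```

Both forms show the requirement unmet here, but the percentile form also shows by how much, which supports the continuous analysis the checklist asks for.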

AFOTEC Keys to Rating Measures

Writing:

  • Is the measure written to evaluate some attribute of the human interaction with the system?

  • Is the correct format used for the specific attribute the measure is concerned with?

  • Is the “user” clearly defined in the measure title and/or Scope section of the worksheet?

  • The measure title should never start with “Test Team Rating of…”

  • Test team judgment is already accounted for in the AFOTEC rating taxonomy

  • Are the Metric and Threshold cells merged, while maintaining the header cells?

  • Is the criteria statement formatted correctly?

  • Should the criteria statement be preceded with the identified standard symbol (^)?

  • Does the measure fall into one of the pitfalls described in the Rating Measures section?

  • Measure should not be a “Catch-all” or used to “Cover all the bases”

  • Measure should not attempt to evaluate something that can be answered with objectively collected quantitative data

  • Measure should not be so broad as to evaluate the entire system or COI

  • Measure should not attempt to roll up other measures in the measure set

Planning for Analysis:

  • Are there any additional data elements that can be added to aid in the evaluation of the measure?

  • Plan to look for a convergence or harmony of responses when evaluating these types of measures.
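Planning to "look for a convergence or harmony of responses" can be as simple as tabulating the distribution of ratings and their spread. The 1–5 scale, the one-point spread rule, and the data below are illustrative assumptions only; they are not the AFOTEC rating taxonomy:

```python
from collections import Counter

# Hypothetical ratings from eight operators on a notional 1-5 scale.
ratings = [4, 4, 5, 4, 4, 4, 4, 5]

dist = Counter(ratings)
spread = max(ratings) - min(ratings)

# A tight spread suggests the responses converge; a wide spread is a cue
# to dig into the interview data behind the outlying ratings.
converged = spread <= 1

print(dict(sorted(dist.items())), "converged" if converged else "divergent")
```

Divergent responses are not a result in themselves; they point the analyst back to the qualitative data to explain the disagreement.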

\ No newline at end of file
diff --git a/sites/oteguide/measures_guide/8-APPENDIX-C-CYBER-RELATED-MEASURES/index.html b/sites/oteguide/measures_guide/9-APPENDIX-C:-CYBER-RELATED-MEASURES/index.html
similarity index 88%
rename from sites/oteguide/measures_guide/8-APPENDIX-C-CYBER-RELATED-MEASURES/index.html
rename to sites/oteguide/measures_guide/9-APPENDIX-C:-CYBER-RELATED-MEASURES/index.html
index 925f45f1..fcc9a0d7 100644
--- a/sites/oteguide/measures_guide/8-APPENDIX-C-CYBER-RELATED-MEASURES/index.html
+++ b/sites/oteguide/measures_guide/9-APPENDIX-C:-CYBER-RELATED-MEASURES/index.html
@@ -1,4 +1,4 @@
-APPENDIX C CYBER-RELATED MEASURES

APPENDIX C CYBER-RELATED MEASURES

[1]

Cybersecurity T&E is used to describe the activities that encompass all
+APPENDIX C: CYBER-RELATED MEASURES

APPENDIX C: CYBER-RELATED MEASURES

Cybersecurity T&E is used to describe the activities that encompass all cybersecurity test and evaluation activities, including vulnerability assessments, security controls testing, penetration testing, adversarial testing, and cybersecurity testing related to a system’s operational
@@ -312,7 +312,7 @@ than this AFOTEC<
1, 10 Feb 20

[3] NIST SP 800-12, “An Introduction to Information Security,” Rev. 1, Jun 17

[4] System Survivability Key Performance Parameter (SS KPP) Pillars are defined in the JCS Cyber Survivability Endorsement Implementation Guide, -Version 1.01