..
   # *******************************************************************************
   # Copyright (c) 2024 Contributors to the Eclipse Foundation
   #
   # See the NOTICE file(s) distributed with this work for additional
   # information regarding copyright ownership.
   #
   # This program and the accompanying materials are made available under the
   # terms of the Apache License Version 2.0 which is available at
   # https://www.apache.org/licenses/LICENSE-2.0
   #
   # SPDX-License-Identifier: Apache-2.0
   # *******************************************************************************

:orphan:

.. document:: Persistency Software Verification Plan
   :id: doc__persistency_software_verification_plan
   :status: valid
   :security: NO
   :safety: QM
   :realizes: S-CORE_PROCESS_wp__verification__plan

Software Verification Plan
**************************
.. This document provides a template for a software verification plan.
.. It should be adapted to the specific needs of the project.

Persistency Key-Value Storage (KVS) Software Verification Plan

Purpose
=======
.. This section should briefly describe the overall goal of the verification plan.
.. It should state the plan's intended audience and the information it aims to provide.
.. This might include clarifying the scope of the verification activities and linking to
.. other relevant documents.

This document describes the test plan for the Persistency Key-Value Storage (KVS) module. The plan outlines the scope, approach, resources, and schedule of all testing activities.
The goal is to verify that the KVS implementation meets the specified feature and module-level requirements and is robust for use in an automotive context.

Objectives and Scope
====================

Objectives
----------
.. This section outlines the key objectives of the software verification effort.
.. Examples include correctness, completeness, reliability, performance, maintainability,
.. compliance, and traceability. Each objective should be clearly defined and measurable.

The primary objectives of this verification plan are to ensure that the Key-Value Storage (KVS) module is fit for its intended purpose in a safety-critical environment. Each objective is defined as follows:

* **Correctness**: To verify that the KVS module's implementation strictly adheres to the `feature requirements`_ and `module requirements`_. All API functions must behave as specified under all documented conditions.
* **Completeness**: To ensure that all specified requirements for the KVS module have corresponding verification tests.
* **Reliability**: To demonstrate that the KVS module operates dependably over extended periods and recovers gracefully from faults, ensuring data integrity and system consistency.
* **Performance**: To validate that the KVS module meets the timing and resource-usage constraints specified in the non-functional requirements, ensuring it does not negatively impact overall system performance.
* **Maintainability**: To ensure that the test suite is well documented, automated, and easy to modify, facilitating future updates and regression testing.
* **Compliance**: To ensure that the verification process and its work products adhere to the standards outlined in the project's quality plan.
* **Traceability**: To maintain a clear and auditable link between requirements, design documents, source code, and verification test cases.

.. LINKS
.. _feature requirements: https://eclipse-score.github.io/score/main/features/persistency/kvs/requirements/index.html
.. _module requirements: https://eclipse-score.github.io/score/main/modules/persistency/kvs/docs/requirements/index.html

Verification Scope and Constraints
----------------------------------
.. This section details what software components and functionalities are included in the
.. verification process. It should also clearly specify any limitations or exclusions.
.. This section should address external dependencies and integrations.

This plan details the verification of the KVS software module.

In Scope:

* **Components**: The KVS module source code, including all public API functions and internal logic.
* **Functionalities**:

  * All Create, Read, Update, Delete (CRUD) operations.
  * Error handling and reporting mechanisms.
  * Data persistence across program cycles.
  * Atomic write operations.

* **External Dependencies**: The interaction between the KVS module and the underlying file system is in scope, but only to the extent that it affects KVS functionality.
  Verification will ensure that the KVS module's assumptions about the storage layer are valid.

Out of Scope (Exclusions):

* **Hardware-Specific Validation**: Testing of the physical storage medium's endurance, wear-leveling, or low-level hardware error correction is not covered.
* **Underlying System Verification**: The operating system, its file system implementation, and device drivers are considered pre-validated and are not part of this verification effort.
* **End-to-End Application Logic**: System-level tests that validate the business logic of applications using the KVS for storage are out of scope.

Risks and Mitigation
--------------------
.. This section identifies potential risks associated with the verification activities and outlines
.. strategies to mitigate those risks. This may involve referencing the :need:`wp__platform_mgmt`.

Potential risks arising from the verification activities, and their respective mitigation measures, are assessed based on the Risk Management Matrix and handled by
the project management. Risks are classified by their likelihood of occurrence and severity of impact, as shown in the table below.

+------------+------------+------------+------------+------------+------------+
| Likelihood | Severity 1 | Severity 2 | Severity 3 | Severity 4 | Severity 5 |
+============+============+============+============+============+============+
| Very High  | Low        | Medium     | High       | Very High  | Very High  |
+------------+------------+------------+------------+------------+------------+
| High       | Low        | Medium     | High       | High       | Very High  |
+------------+------------+------------+------------+------------+------------+
| Medium     | Low        | Medium     | Medium     | High       | High       |
+------------+------------+------------+------------+------------+------------+
| Low        | Very Low   | Low        | Medium     | Medium     | High       |
+------------+------------+------------+------------+------------+------------+
| Very Low   | Very Low   | Very Low   | Low        | Low        | Medium     |
+------------+------------+------------+------------+------------+------------+

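Where tooling needs to apply this classification automatically, for example to label defect reports, the matrix can be encoded as a lookup. This sketch mirrors the table above exactly; the function name is illustrative.

```python
# Risk classification per the matrix above: columns are Severity 1..5.
_MATRIX = {
    "Very High": ["Low", "Medium", "High", "Very High", "Very High"],
    "High":      ["Low", "Medium", "High", "High", "Very High"],
    "Medium":    ["Low", "Medium", "Medium", "High", "High"],
    "Low":       ["Very Low", "Low", "Medium", "Medium", "High"],
    "Very Low":  ["Very Low", "Very Low", "Low", "Low", "Medium"],
}

def classify_risk(likelihood, severity):
    """Return the risk class for a likelihood row name and severity 1..5."""
    if likelihood not in _MATRIX or severity not in range(1, 6):
        raise ValueError("likelihood must name a matrix row, severity must be 1..5")
    return _MATRIX[likelihood][severity - 1]
```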
Schedules
---------
.. This section defines the timeline for different verification activities.
.. It might include milestones, deadlines, and dependencies between tasks.

The verification activities are planned as follows. This schedule assumes that the entry criteria are met on time.
| 121 | + |
| 122 | +.. list-table:: Verification Schedule |
| 123 | + :header-rows: 1 |
| 124 | + |
| 125 | + * - Activity |
| 126 | + - Start Date |
| 127 | + - End Date |
| 128 | + * - Verification Plan Finalized |
| 129 | + - |
| 130 | + - |
| 131 | + * - Finalized Requirements |
| 132 | + - |
| 133 | + - |
| 134 | + * - Test Environment Ready |
| 135 | + - |
| 136 | + - |
| 137 | + * - Unit Test Suite Complete |
| 138 | + - |
| 139 | + - |
| 140 | + * - Code complete for KVS module |
| 141 | + - |
| 142 | + - |
| 143 | + * - Component Integration Test Suite Complete |
| 144 | + - |
| 145 | + - |
| 146 | + * - Feature Integration Test Suite Complete |
| 147 | + - |
| 148 | + - |
| 149 | + * - Platform Test Suite Complete |
| 150 | + - |
| 151 | + - |
| 152 | + * - Reference Hardware Available |
| 153 | + - |
| 154 | + - |
| 155 | + * - Final Verification Report |
| 156 | + - |
| 157 | + - |
| 158 | + |
| 159 | + |
| 160 | + |
Approach
========

General Approach
----------------
.. This section provides a high-level overview of the verification strategy.
.. It should describe the overall methodology (e.g., Continuous Integration),
.. approaches used, and rationale behind the choices made.

The verification strategy employs a Continuous Integration (CI) methodology. Every code change pushed to the version control system automatically triggers a build and the execution of a suite of automated tests.
This provides rapid feedback to developers and ensures that regressions are caught early. The approach is layered, starting from developer-led unit tests and progressing to component integration tests maintained by test engineers.

Software Integration
--------------------
.. This section details how software components are integrated into the system.
.. It should describe the integration process, including procedures for handling new features,
.. bug fixes, and code changes.

The integration process follows a structured approach:

#. **Feature Branching**: Developers create feature branches for new functionality or bug fixes. This isolates changes until they are ready for integration.
#. **Code Reviews**: Before changes are merged into the main branch, code reviews are conducted to ensure quality and adherence to coding standards.
#. **Automated Testing**: Each integration is accompanied by automated tests that validate the changes. This includes unit tests and component integration tests.
#. **Continuous Integration**: The CI system automatically builds the code and runs tests on each commit. This provides immediate feedback on the impact of changes.
#. **Staging Environment**: Once changes pass the automated tests and are approved by reviewers, they can be merged into the main branch.
#. **S-CORE Reference**: After a successful merge in the feature repository, changes can be introduced into the reference repository as a new reference version. After successful verification, including review and automatically executed feature integration tests, the changes become part of S-CORE.

Levels of Integration and Verification
--------------------------------------
.. This section defines the different levels of integration and verification that will be performed
.. (e.g., unit, component, system). Each level should be clearly defined, with associated criteria
.. for successful completion.

#. **Unit/Module Level**: Focuses on testing individual functions of the KVS module in isolation from the rest of the system. The goal is to verify the logical correctness of the code.
#. **Component Integration Level**: Tests the KVS module together with the storage driver or file system it directly depends on. The goal is to verify the interfaces and interactions between these closely coupled components.
#. **Platform Level**: Tests the fully integrated KVS module on the reference hardware. The goal is to verify non-functional requirements such as performance and robustness in a production-representative environment.

Verification Methods
--------------------
.. This section lists the specific verification methods used, such as static analysis,
.. dynamic testing, reviews, and inspections. Each method should be briefly described,
.. including its purpose and applicability at different levels of verification.
.. Reference tables can list methods, identifiers, applicable levels and ASIL relevance.

The verification methods are selected from the S-CORE process guidance:
https://eclipse-score.github.io/process_description/main/process_areas/verification/guidance/verification_methods.html

#. Fault Injection
#. Interface Test
#. Structural Coverage

Test Derivation Methods
^^^^^^^^^^^^^^^^^^^^^^^
.. This section details the techniques used to derive test cases (e.g., boundary value analysis,
.. equivalence partitioning, requirements tracing). It should clarify which techniques are used
.. at each level of testing and for different ASIL levels. Again, a reference table is recommended.

#. Analysis of requirements (requirements-analysis)
#. Analysis of design (design-analysis)
#. Analysis of boundary values (boundary-values)
#. Analysis of equivalence classes (equivalence-classes)
#. Fuzz testing (fuzz-testing)
#. Error guessing based on knowledge or experience (error-guessing)
#. Exploratory testing (explorative-testing)

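As an illustration of boundary-value and equivalence-class derivation, assume a hypothetical requirement that keys be 1 to 64 characters long (the limit is invented for this example). The derived cases sit at and just outside each boundary, plus one representative per equivalence class:

```python
# Hypothetical requirement: keys must be 1..64 characters long.
MAX_KEY_LEN = 64

def is_valid_key(key):
    return 1 <= len(key) <= MAX_KEY_LEN

# Boundary values: at and just outside each limit.
boundary_cases = {
    "": False,                       # below lower bound
    "a": True,                       # lower bound
    "k" * MAX_KEY_LEN: True,         # upper bound
    "k" * (MAX_KEY_LEN + 1): False,  # just above upper bound
}

# Equivalence classes: one representative per class is sufficient.
equivalence_cases = {
    "typical_key": True,             # valid class
    "x" * 200: False,                # invalid class (far too long)
}

def run_derived_cases():
    for key, expected in {**boundary_cases, **equivalence_cases}.items():
        assert is_valid_key(key) == expected, repr(key[:10])
    return True
```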
Quality Criteria
----------------
.. This section specifies the quality criteria that must be met for successful verification.
.. These criteria might include code coverage metrics, defect density, or other relevant measures.
.. The criteria should be defined with quantifiable goals for different ASIL levels.

.. list-table:: Quality Criteria
   :header-rows: 1

   * - Quality Criterion
     - Target Value
   * - Code Coverage (Line)
     - > 98%
   * - Code Coverage (Branch)
     - > 95%
   * - Static Analysis Defects
     - Zero warnings of any level
   * - Requirements Coverage
     - 100% of requirements covered by test cases
   * - Test Pass Rate (for release)
     - 100% of planned tests executed, 100% pass rate

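These targets can be checked mechanically as a release gate. The sketch below assumes the metrics have already been extracted from the coverage and test tooling; the function name is illustrative, and the thresholds are those from the table.

```python
def meets_quality_criteria(line_cov, branch_cov, static_warnings,
                           req_coverage, tests_planned, tests_passed):
    """Evaluate the release gate against the targets in the table above.

    Coverage values are percentages; tests_passed == tests_planned implies
    100% of planned tests were executed with a 100% pass rate.
    """
    return (line_cov > 98.0
            and branch_cov > 95.0
            and static_warnings == 0
            and req_coverage == 100.0
            and tests_passed == tests_planned)
```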
Test Development
----------------
.. This section describes the process for developing and maintaining test cases.
.. It should cover aspects such as test automation, test data management, and version control.

Test cases will be developed in parallel with the software development. All test cases and test scenarios (if applicable) will be stored in the project's version control system (Git) alongside the source code.
Test automation is mandatory for all unit and integration tests.

.. Pre-existing test cases
.. -----------------------
.. This section describes how pre-existing test cases are handled which are e.g. available
.. from an OSS component. It should be stated how they are reviewed, integrated, extended
.. (e.g. with respective documentation), and adopted to the needs described in the project
.. (e.g. usage of documentation templates and traceability)
..
.. N/A

Test Execution and Result Analysis
----------------------------------
.. This section describes how tests will be executed and the procedures for analyzing the results.
.. It should outline the tools and processes used for test execution and reporting.

Tests will be executed automatically via the CI pipeline for every commit.
Test results will be collected automatically and made visible directly in the pull request.
The results of integrated S-CORE components will be published to a Sphinx-needs dashboard showing metrics and bidirectional traceability between requirements and tests.

Test Selection and Regression Testing
-------------------------------------
.. This section describes the approach to selecting test cases for execution and the strategy for
.. regression testing to ensure that new changes don't introduce regressions.

* **CI Builds**: An automated subset of tests (unit tests and component integration tests) runs on every commit.
* **Nightly Builds**: The full suite of unit, component integration, feature integration, and platform tests is executed in a loop to catch sporadic errors.
* **Release Candidates**: The goal is for any version integrated into S-CORE to be a release candidate, verified by the full suite of unit, component integration, feature integration, and platform tests.
* **Regression Strategy**: The approach is to re-run all tests that verify functionality that could reasonably be affected by a code change.

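The regression strategy of re-running everything a change could reasonably affect can be approximated by tagging each test with the modules it exercises and selecting tests whose tags intersect the changed modules. All module and test names below are illustrative:

```python
# Illustrative mapping of test cases to the modules they exercise.
TEST_TAGS = {
    "test_crud_roundtrip": {"kvs_core"},
    "test_atomic_commit": {"kvs_core", "storage_backend"},
    "test_json_encoding": {"serializer"},
}

def select_regression_tests(changed_modules):
    """Select every test that touches at least one changed module."""
    return {name for name, tags in TEST_TAGS.items()
            if tags & set(changed_modules)}
```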
Work Products and Traceability
------------------------------
.. This section lists all the key deliverables related to the verification process.
.. It should also describe how traceability between requirements, design, code, and test
.. cases is maintained.

.. list-table:: Work Products and Traceability
   :header-rows: 1

   * - Work Product
     - Description
     - Location
   * - Verification Plan
     - This document.
     - GitHub Pages
   * - Test Specification
     - Detailed descriptions of all test cases.
     - GitHub Pages, GitHub Repository
   * - Test Scripts
     - The automated test code.
     - GitHub Repository
   * - Test Reports
     - The results of each test execution cycle.
     - GitHub Actions, GitHub Pages
   * - Defect Reports
     - Detailed reports for each failed test or identified issue.
     - GitHub Issues

Environments and Resources
==========================

Roles
-----
.. This section defines the roles and responsibilities of individuals involved in the
.. verification process. It can refer to, and should be based on, the definitions in the
.. verification process :ref:`verification_roles`.

.. list-table:: Roles and Responsibilities
   :header-rows: 1

   * - Role
     - Responsibility
   * - Test Lead
     - Owns the verification plan, coordinates all testing activities, and is responsible for the final test report.
   * - Test Engineer
     - Designs, develops, and maintains automated test cases; executes system and robustness tests; analyzes results.
   * - Developer
     - Develops and executes unit tests for their own code; performs code reviews; fixes defects found during verification.

Tools
-----
.. This section lists the tools used for verification, including build systems, test frameworks,
.. static analysis tools, and other relevant software.

List of tools used in the verification process:

#. Sphinx-needs
#. Bazel
#. Cargo Test
#. Google Test
#. Python Pytest

.. #. S-CORE ITF

Verification Setups and Variants
--------------------------------
.. This section describes the different test environments and configurations used for verification.

#. **Developer PC**: Windows/Linux PC with development tools and simulators for local testing and debugging.
#. **CI Environment**: A dedicated CI server that runs automated tests on every commit, ensuring that the codebase remains stable and functional.
#. **Reference Hardware**: External hardware board connected to a PC via debugging probes and a controllable power supply for automated system and robustness testing.

Test Execution Environment and Reference Hardware
-------------------------------------------------
.. This section describes the hardware and software environments used for test execution.
.. It should include information about any specific hardware platforms or simulators used.
.. It should also define how the verification environment interacts with the CI system, including
.. access control and maintenance.

| **CI Runner image**: https://github.com/qorix-group/gh_action_runner_image
| **HW**: Qualcomm SA8650 running QNX 7.1 / QNX 8.0