Connected Vehicle Pilot Deployment Program yields program management best practices for integrating and testing large disparate systems.

Systems engineering best practices to reduce risks, minimize schedule delays, and avoid cost overruns.

Date Posted
09/13/2018
Identifier
2018-L00834

Lessons Learned

Each of the three Connected Vehicle Pilot Deployment sites (Wyoming, Tampa, NYC), while unique in the size, features, and functionality to be deployed, is an example of a large disparate system, or system of systems. Large disparate system deployments naturally entail complexity in design, procurement, specification, build, integration, and testing. One desired outcome of the CV Pilot program is that lessons learned will benefit the other CV Pilot deployers and all future deployers of CV technology. This article presents lessons learned relating to integrating and testing large disparate systems. They are presented as high-level information from a program management perspective and therefore do not include details, specific processes, or approaches. That more detailed information is planned to be provided at a later date by the Connected Vehicle Deployment Technical Assistance (CVDTA) Program.

  1. Experience with product development is not a substitute for the systems engineering skills needed to reduce risks on large disparate systems. Deployment teams should have deep knowledge and understanding of systems engineering concepts. Requiring the team to include a qualified systems engineer (e.g., a Certified Systems Engineering Professional (CSEP) or someone with an advanced degree in systems engineering) to work with and manage the subject matter experts (SMEs) helps other team members understand the life-cycle and systems engineering management and technical methodologies, what is expected in terms of documentation quality, and what is expected during the integration and testing of large systems, and it helps tailor systems engineering concepts to meet project needs and reduce risks.
  2. Willingness to learn and apply systems engineering concepts decreases the amount of documentation iteration required to implement the robust verification and validation methods these systems demand. It also reduces the risk that significant issues are first discovered during the integration and testing stages, where they are more expensive to correct. Early in the lifecycle, educate your team on the benefits of systems engineering and the importance of tailoring the concepts to the risks and goals of the project.
  3. A Systems Engineering Management Plan (SEMP) is needed to help the team understand what it will do technically to execute the project successfully. On large disparate systems, the team needs to determine how the project will be executed at each stage of integration. This needs to be done early in the project (during the planning stage) to reduce risks in later stages. Requiring your project team to develop a SEMP and to follow it will provide insight into the testing process. A SEMP should be tailored to meet the needs of the specific project; for example, a full-blown SEMP is not needed for a small, simple project or for adding a feature to an existing system. Rather, focus on the important aspects of what the project is to do technically and address those.
  4. Having a team conduct requirements traceability separately from the development and engineering teams introduces traceability issues, largely because of the communication gaps this separation creates. Holding in-person reviews using the consistency check methodology significantly improves traceability and communication. Documented traceability of requirements through architecture, design, testing, and physical implementation is critical for maintaining a large disparate system (a minimal traceability check sketch follows this list).
  5. Not having requirements specifications for software, firmware and hardware that will be procured outside the development team adds cost and schedule risks. Having clearly defined and documented requirements identifies who is responsible for what technical aspects of the project. There could be legal ramifications if the outside vendor does not perform and deliver according to the specifications.
  6. Testing concepts based on individual products are no substitute for testing concepts for large systems. Not addressing overall integration and acceptance testing concepts in the planning stage will force rework of test plans and other testing documentation later. To avoid rework, develop and document a formal testing strategy early in the project lifecycle. The test strategy should describe how the system will be built and tested incrementally. Comprehensive testing documentation should address unit/component testing, subsystem testing, and system-level acceptance testing. A good practice before starting a project is to ensure your team has a solid understanding of testing principles (e.g., based on IEEE 829 or ISO/IEC/IEEE 29119-3).
  7. Testing late in the project lifecycle is time-consuming and more expensive because of impacts on other parts of the system. Testing should occur earlier in the lifecycle, when it is less expensive. For example, as soon as a prototype is available, execute repeatable test cases and test procedures and document the test results. A solid baseline set of unit tests can serve as a regression test suite to ensure that updates to software or devices do not break previously working functionality as new versions are released and as subsystem and system integration are performed (a short regression test sketch follows this list).
  8. Not following the best practice of documenting as-built versions of the implemented system will negatively impact successful operation of the system and increase maintenance costs. Often in the ITS community, once the system is deployed, the development team has moved on to another project, leaving the agency or a support contractor to operate and maintain the system. Without accurate as-built documentation of the implemented system, the agency cannot operate it efficiently and effectively, and maintenance could become cost- and schedule-prohibitive.
  9. Using an independent verification and validation (IV&V) team can greatly improve documentation quality. Just as a transportation agency uses quality inspectors on civil engineering projects to ensure that a bridge or other infrastructure component is built correctly and meets inspection standards, similar IV&V techniques and approaches should be applied on large disparate systems projects (particularly software integration) to ensure the system components are built and integrated according to the requirements and design.
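
To illustrate the kind of traceability documentation described in item 4 above, the following minimal Python sketch checks a simple requirements traceability matrix for requirements that lack a linked design element or test case. The matrix structure and the field names (req_id, design_refs, test_refs) are illustrative assumptions for this sketch, not artifacts from any CV Pilot site.

    # Minimal sketch of an automated traceability consistency check.
    # Assumes each requirement is recorded with links to design elements
    # and test cases; the structure and field names are illustrative only.

    requirements = [
        {"req_id": "REQ-001", "design_refs": ["DES-010"], "test_refs": ["TC-101"]},
        {"req_id": "REQ-002", "design_refs": [], "test_refs": ["TC-102"]},
        {"req_id": "REQ-003", "design_refs": ["DES-011"], "test_refs": []},
    ]

    def find_traceability_gaps(matrix):
        """Return (requirement, issue) pairs for requirements missing a linkage."""
        gaps = []
        for req in matrix:
            if not req["design_refs"]:
                gaps.append((req["req_id"], "no linked design element"))
            if not req["test_refs"]:
                gaps.append((req["req_id"], "no linked test case"))
        return gaps

    if __name__ == "__main__":
        for req_id, issue in find_traceability_gaps(requirements):
            print(f"{req_id}: {issue}")

Running a check like this as part of each in-person review helps keep the traceability matrix synchronized with the documents it links.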
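
Item 7's point about a baseline regression suite can be illustrated with a short unit-test sketch. The message-encoding function, device identifiers, and expected values below are hypothetical; the point is that the same repeatable tests are re-run whenever a new software or firmware version is integrated.

    import unittest

    # Hypothetical unit under test: a status-message encoder such as a
    # roadside or onboard software component might provide. The function
    # and its expected behavior are illustrative assumptions for this sketch.
    def encode_status_message(device_id, status):
        if status not in ("OK", "FAULT"):
            raise ValueError(f"unknown status: {status}")
        return f"{device_id}:{status}"

    class StatusMessageRegressionTests(unittest.TestCase):
        """Baseline unit tests kept as a regression suite and re-run whenever
        a new software or device firmware version is integrated."""

        def test_ok_status_encodes_expected_format(self):
            self.assertEqual(encode_status_message("RSU-42", "OK"), "RSU-42:OK")

        def test_fault_status_encodes_expected_format(self):
            self.assertEqual(encode_status_message("RSU-42", "FAULT"), "RSU-42:FAULT")

        def test_unknown_status_is_rejected(self):
            with self.assertRaises(ValueError):
                encode_status_message("RSU-42", "DEGRADED")

    if __name__ == "__main__":
        unittest.main()

Because the suite is repeatable and documented, its results can also feed the subsystem and system acceptance testing described in item 6.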



In summary, employ a qualified systems engineer (or systems engineering team) and follow systems engineering best practices to reduce risks, minimize schedule delays, and avoid cost overruns. Remember to tailor the systems engineering process to meet the needs of the project. The work required in the early stages will result in project resource savings over the life of the system.

Connected Vehicle Deployment Technical Assistance: Roadside Unit (RSU) Lessons Learned and Best Practices

Source Publication Date
05/01/2020
Author
Schneeberger, J.D.; Amy O’Hara; Kellen Shain; Linda Nana; David Benevelli; Tony English; Steve Johnson; Steve Novosad; and Bob Rausch
Publisher
USDOT Federal Highway Administration
Other Reference Number
FHWA-JPO-20-804