This FTA guide informs transit agency officials on how to design an appropriate evaluation of transit-automation technologies while remaining aware of the constraints agencies face.

Design and implement an effective evaluation of a transit bus automation project to measure its impacts and record key lessons learned.

Date Posted
04/27/2020
Identifier
2020-L00961

Considerations for Evaluating Automated Transit Bus Programs

Summary Information

Given the potential of transit bus automation, it is critical to evaluate the benefits and challenges of early implementations. A well-designed evaluation can quantify such societal benefits as improved travel times, increased mobility, and higher transit ridership. This guide aims to assist transit stakeholders in designing and implementing evaluations of automated transit bus programs. In designing evaluations, transit agencies and other stakeholders should identify program goals and the audiences affected by the technology; develop a logic model that maps project inputs, activities, and outcomes; choose an appropriate evaluation design; and collect and analyze data on key performance indicators tied to their program goals.

Lessons Learned

FTA recommends the following steps for designing and implementing an evaluation of a transit-automation project.

  • Identify program goals and audience. It is critical to identify transit program goals for deployment of automated transit buses. Such goals illustrate what a transit agency aims to accomplish and why the program is needed. Some goals for deploying an automated transit bus technology could include improving the operator’s experience, enhancing mobility, and increasing safety. In addition to goals, agencies should identify the audiences who will be impacted by a project. Those impacted could include riders, persons with disabilities, motorists, agency staff, and local businesses.
  • Develop logic model. After identifying program goals, it is helpful for agencies to develop a logic model. As described in this report, logic models summarize how a program’s inputs and activities achieve intended goals. In addition to creating a logic model, agencies should also consider external factors that may affect a technology’s deployment or observed outcomes, such as changes in legislation or declines in the broader economy. (A minimal logic-model sketch appears after this list.)
  • Choose evaluation design. Program goals and the logic model inform the questions that an evaluation seeks to answer. Evaluation questions should be clear and specific, and the terms used in the questions should be readily defined and measurable. An evaluation design is the overall strategy used to answer evaluation questions. Case-study designs allow evaluators to explore issues in depth and are suitable for both qualitative and quantitative data gathering, but they are typically limited to small sample sizes. Statistical-analysis designs offer a variety of quantitative methods for identifying the ways in which a program led to its observed outcomes; however, care must be taken to explain the causal relationships (why did X lead to Y?) that inform statistical results. (A sketch of a simple before/after statistical comparison appears after this list.)
  • Collect and analyze data. Once an evaluation design is selected, evaluators should choose appropriate qualitative and quantitative methods for collecting and analyzing data. Such methods could include administering surveys and questionnaires, deploying roadside and in-vehicle sensors, examining agency records, and conducting interviews and focus groups. (A sketch computing a key performance indicator from agency records appears after this list.)
  • Perform periodic data validation. Regardless of whether the data collected are quantitative or qualitative, evaluation teams should periodically analyze samples of their data during data collection to assess the extent of any missing or corrupted data. If such issues are caught early, data collection can be restarted or revised with little impact on the ultimate evaluation analysis; waiting until the end of data collection to analyze all data risks surfacing unforeseen problems that can undermine the analysis. Further, agencies should continuously monitor the status of key equipment to ensure that data are captured correctly and that equipment remains in good operating condition and is repaired or replaced as necessary. Evaluation teams should also develop a risk matrix of potential challenges to data collection, along with appropriate mitigation strategies (e.g., collect qualitative data when quantitative data are unavailable). (A sketch of a periodic validation check appears after this list.)
  • Negotiate a data-sharing protocol. Given that transit demonstrations of automated driver assistance systems and automated vehicles involve private-sector partners, transit agencies should negotiate a data-sharing protocol. Private-sector partners are likely to be collecting data through sensors and other means, and such data might have useful applications for transit agencies. However, private partners may view data as proprietary, and the release of data could put them at a competitive disadvantage relative to other companies operating in the automated transit space. As such, transit agencies should identify their data needs early on and discuss with private partners how those data can be shared, with appropriate protections for the private partner. (One way to structure such an agreement is sketched after this list.)
  • Protect data that are gathered and used. In addition to establishing a data-sharing protocol, transit agencies should protect the data they gather and use. With surveys, interviews, and camera- and sensor-based data, participants may have concerns about personally identifiable information (PII): Who can access these data? How will identities be protected? To ensure reliable participation in data collection, transit agencies must demonstrate to their audiences that data will be kept confidential, such as through randomized identification numbers (to anonymize PII) and firewalled servers. Unauthorized data releases could present safety and security risks for a given project and harm an agency’s reputation. It is essential for project and evaluation teams to be aware of any regulations that might pertain to data gathering, including the need for non-disclosure agreements, institutional review board (IRB) review, data agreements, protected data storage, PII protections, and so forth. (An anonymization sketch appears after this list.)
  • Maintain a clear record of project updates. In an ideal evaluation, the technology being evaluated would not change during the course of the intervention, since changes to a technology complicate the causal chain of how an intervention achieves its goals and impacts society. Evaluators recognize that such an assumption is not always feasible in real-world environments: should a program learn of critical safety or operational improvements over the course of a pilot, a transit agency would be obliged to update its program to safeguard the public. For the purpose of an evaluation, however, program managers should maintain clear records of when hardware and software are updated, or operational or other practices changed, during a demonstration, so that evaluators can identify which outcomes could be attributed to those program changes. Such records provide essential qualitative information that allows evaluators to create a clear picture of how a technology, and updates to that technology, achieved an agency’s goals. (A minimal change-log sketch appears after this list.)
  • Prepare a communications plan for evaluation results. Ultimately, an evaluation is only as good as its distribution. Evaluations provide lessons learned, and while those lessons do not necessarily have to be advertised to the public, they should be presented to key decision-makers to improve a program. Evaluations also generate important information for peer entities, and sharing that knowledge can advance technological innovation around the world. Finally, the information generated by an evaluation is important for the public. With the rapid pace of technological change in transportation, many members of the public are curious, excited, and apprehensive about changes to the status quo. Evaluation results, if well presented, can demonstrate potential benefits and invite further public engagement to improve the transportation enterprise.
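
The sketches below illustrate several of the steps above. All names, field layouts, file paths, and values in them are hypothetical and would need to be adapted to an actual program.

For the logic model step, one minimal way to capture a model's elements is a simple Python structure. The inputs, activities, and outcomes fields mirror the framing described above; the outputs field is a common additional logic-model element, included here for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Maps how a program's inputs and activities lead to intended outcomes."""
    goal: str
    inputs: list[str] = field(default_factory=list)         # resources invested
    activities: list[str] = field(default_factory=list)     # what the program does
    outputs: list[str] = field(default_factory=list)        # direct products
    outcomes: list[str] = field(default_factory=list)       # intended changes
    external_factors: list[str] = field(default_factory=list)  # outside influences

# Hypothetical model for an automated shuttle pilot.
model = LogicModel(
    goal="Enhance mobility on a fixed downtown circulator route",
    inputs=["grant funding", "two automated shuttles", "trained onboard operators"],
    activities=["operate daily service", "survey riders", "log vehicle telemetry"],
    outputs=["revenue service hours", "completed rider surveys"],
    outcomes=["increased ridership", "improved rider-reported mobility"],
    external_factors=["changes in state AV legislation", "regional economic decline"],
)
```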
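
For the statistical-analysis design, the sketch below shows one simple quantitative method: a before/after comparison of corridor travel times using Welch's two-sample t-test from SciPy. The numbers are invented for illustration, and a statistically significant difference would still require the causal explanation the step above calls for:

```python
from scipy import stats

# Hypothetical end-to-end travel times (minutes) on the study corridor,
# sampled before the pilot and during automated operation.
before = [32.5, 31.0, 34.2, 29.8, 33.1, 30.7, 32.0, 31.4]
during = [29.9, 28.4, 30.1, 27.6, 29.2, 28.8, 30.5, 28.0]

# Welch's t-test: do mean travel times differ between the two periods?
t_stat, p_value = stats.ttest_ind(before, during, equal_var=False)

print(f"Mean before: {sum(before) / len(before):.1f} min")
print(f"Mean during: {sum(during) / len(during):.1f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference is unlikely to be chance alone,
# but it does not by itself explain why X led to Y.
```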
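
For the data collection and analysis step, one concrete example is computing a key performance indicator from agency records. The sketch below assumes trip records arrive as a CSV file with a hypothetical schedule_deviation_min column:

```python
import csv

def on_time_performance(path, threshold_min=5.0):
    """Share of trips departing within threshold_min minutes of schedule."""
    on_time = total = 0
    with open(path, newline="") as f:
        for trip in csv.DictReader(f):
            total += 1
            # Hypothetical column: deviation from schedule, in minutes.
            if abs(float(trip["schedule_deviation_min"])) <= threshold_min:
                on_time += 1
    return on_time / total if total else 0.0

print(f"On-time performance: {on_time_performance('trips.csv'):.1%}")
```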
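
For periodic data validation, a lightweight automated check can surface missing or corrupted records while collection is still under way. This sketch assumes vehicle telemetry arrives as CSV rows with hypothetical field names and plausibility bounds:

```python
import csv

REQUIRED_FIELDS = ["timestamp", "vehicle_id", "speed_mph"]  # hypothetical schema

def validate_sample(path, max_speed=80.0):
    """Scan a sample of collected records and report missing or corrupt values."""
    issues = []
    with open(path, newline="") as f:
        # start=2 because row 1 of the file is the header.
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            for name in REQUIRED_FIELDS:
                if not row.get(name):
                    issues.append(f"row {row_num}: missing {name}")
            raw_speed = row.get("speed_mph")
            if raw_speed:
                try:
                    speed = float(raw_speed)
                    if not 0.0 <= speed <= max_speed:
                        issues.append(f"row {row_num}: implausible speed {speed}")
                except ValueError:
                    issues.append(f"row {row_num}: non-numeric speed {raw_speed!r}")
    return issues

# Run against a daily sample so problems surface during collection, not after it.
for problem in validate_sample("telemetry_sample.csv"):
    print(problem)
```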
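
A data-sharing protocol becomes easier to negotiate once the agency enumerates, element by element, who owns the data, what is shared, how often, and under what protections. The structure below is one hypothetical way to record those terms; it is not a template prescribed by FTA:

```python
# Hypothetical data-sharing terms negotiated with a private-sector partner.
DATA_SHARING_PROTOCOL = [
    {
        "element": "automation disengagement events",
        "owner": "vendor",
        "shared_with": "transit agency",
        "frequency": "weekly batch export",
        "protections": ["aggregate counts only", "no raw sensor video"],
    },
    {
        "element": "rider survey responses",
        "owner": "transit agency",
        "shared_with": "vendor",
        "frequency": "end of pilot",
        "protections": ["PII removed", "non-disclosure agreement in force"],
    },
]
```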
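
For protecting PII, one common approach consistent with the step above is to replace direct identifiers with randomized identification numbers and keep the lookup table separate on an access-controlled (e.g., firewalled) server. A minimal sketch:

```python
import secrets

# Lookup table mapping identities to random IDs. In practice this table must
# be stored separately from the survey data, behind strict access controls.
_id_map = {}

def anonymize(participant_name):
    """Return a stable random ID for a participant, creating one if needed."""
    if participant_name not in _id_map:
        _id_map[participant_name] = f"P-{secrets.token_hex(4)}"
    return _id_map[participant_name]

# Survey records are stored and analyzed using only the random IDs.
record = {"respondent": anonymize("Jane Rider"), "overall_rating": 4}
print(record)  # e.g., {'respondent': 'P-a1b2c3d4', 'overall_rating': 4}
```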
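
Finally, for maintaining a clear record of project updates, an append-only change log lets evaluators line up shifts in outcomes against hardware, software, or operational changes. A minimal sketch with invented entries:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ProjectUpdate:
    when: date
    component: str    # e.g., "software", "hardware", "operations"
    description: str

# Append-only log of changes made during the demonstration (hypothetical).
change_log = [
    ProjectUpdate(date(2020, 3, 2), "software", "perception stack updated to v2.1"),
    ProjectUpdate(date(2020, 3, 16), "operations", "route shortened after safety review"),
]

# Before attributing a shift in a key performance indicator to the technology,
# evaluators can check whether it coincides with a logged program change.
for update in change_log:
    print(update.when.isoformat(), update.component, "-", update.description)
```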