Network Audits and Peer Review

From Traffic Analysis and Microsimulation

Status: Draft

Introduction

A microsimulation model audit (also called a peer review) is a structured process for reviewing the model to assure that it conforms to good modeling practices. For simplicity, in this document we use the word "audit" to refer to the peer review/auditing process and the word "reviewer" to refer to the person (or group of people) who performs the audit.

The audit is not a one-time event; models should be reviewed at each major milestone, such as:

  • Completion of the base model representing the existing conditions.
  • Completion of the design year DO MINIMUM model.
  • Completion of each "do-something" or BUILD scenario.

The primary goal of the audit process is to protect the public's interests by verifying the integrity of the model. Fundamentally, this means affirming that the model is a true and accurate representation of the traffic conditions. In an ideal situation, conclusions drawn from an audited model will not be misleading in any way. When this ideal cannot be achieved it is essential for the reviewer to draw the client's attention to the model's limitations so that the client can avoid drawing erroneous conclusions from it.

A secondary benefit of the audit process is to provide a "second set of eyes" that can bring new insights to the modeling process. In this way, knowledge is shared amongst the members of the modeling community, successful techniques are identified and perpetuated, software problems are rooted out, and the state of the art is advanced. Consequently, individuals with a high level of microsimulation experience and willingness to share that experience are desirable as reviewers, while individuals (or firms) that treat their modeling knowledge as proprietary are far less desirable.

A key concept of the audit process is whether the model has been implemented in a way that is suitable for meeting the goals and objectives of the study for which it is being built. As a shorthand, in this document we refer to a model that meets this requirement as being "fit-for-purpose."

If there is any ambiguity, confusion or disagreement about the purpose of the model, the Model Scoping guideline should be reviewed and the Traffic Analysis Scoping Record form should be updated before starting the audit process.


Roles & Responsibilities

The independence and impartiality of the model reviewer is an important aspect of the auditing process. The reviewer should be given a high degree of autonomy, similar to what one would expect in an accounting audit of a company's financial records or a peer review of an academic research report. Time and resources sufficient for the audit should be included in the project schedule and budget. Generally this is most easily accomplished by identifying the peer reviewer at the beginning of the project and contracting separately for the audit. If relevant auditing expertise exists within the client's own organization, it is generally helpful to confirm the reviewer's availability before the project gets underway.

The client's project manager should make sure that the roles and responsibilities of the modeller and the reviewer are clearly understood (especially important if the audit starts after the modeling work is already underway). In general the job of the reviewer is to identify problems and communicate them to the modeller and the client's project manager. Normally it is not the reviewer's job to fix the model. In some instances it may be expeditious for the reviewer to make minor changes to the model, but these should always be done with the full knowledge and consent of the modeller and the client's project manager.

As part of the scoping of the audit work, the client's project manager should set up a schedule for check-in meetings. The number and timing of these client-modeler-reviewer meetings will vary depending on the project complexity, but could include a hand-off meeting when the model is ready to go to the reviewer, a preliminary findings meeting when the reviewer has completed the initial review, a second findings meeting when the reviewer has completed the audit, and a response meeting when the modeler has addressed the comments raised by the reviewer.


Format of the Audit / Peer Review Report

The October 2011 version of the blank Wisconsin DOT Microsimulation Model Audit / Peer Review report form can be found at this link. The compressed file is in Microsoft Word .docx format. To uncompress the document you will need an unzip utility such as 7-Zip.

WisDOT's audit report uses a two column format. The left side of the form provides a quick "report card" style indication of the model's integrity and performance; this side of the form is designed to be easy for non-technical readers to understand. The right side of the form provides space for detailed technical comments including reviewer-to-modeller communications that might be difficult for a layperson to understand. The form should be augmented with any additional sketches, screen shots, calculations, or other information that will assist the modeller in understanding the problems identified in the model. For example, if the traffic throughput on a particular part of the model is unreasonably low, it may be helpful to attach comparative calculations from the Highway Capacity Software.

Where relevant, suggested techniques for improving the model may be included in the audit report. While the report is intended to document the audit findings, it should not serve as the sole means of communication between the reviewer and the modeller. Phone calls, web conferences, or meetings are often the most effective way to resolve complicated technical issues, and the client's project manager should monitor the process to assure that information is being communicated efficiently.

The final section of the WisDOT audit report is the reviewer's sign-off. In this section the reviewer should unequivocally inform the client whether the model is (or is not) suitable for the intended purpose.


Problem Rating Scale

When describing the problems observed in the audit, the following scale should be used:

  1. Minor
  2. Moderate
  3. Serious
  4. Severe
  5. Critical

This rating system is similar to the Abbreviated Injury Scale (AIS) used in the medical community.


Auditing and the Project Life Cycle

Getting Started: Scope Affirmation. Since the objective of the audit process is to confirm that the model is suitable for its intended purpose, the reviewer should always begin by getting a clear understanding of the project goals. This includes a careful reading of the Traffic Analysis / Model Scope report and other model scoping documents. It may also be appropriate to discuss the project goals with the modeling team and the client's project manager, especially if the reviewer did not participate in the model scoping meetings. It is particularly important to affirm that everyone shares the same understanding of the project intent, and that there has not been any undocumented change of scope or "mission creep." It is also important to make sure the project scope is stable and unambiguous: it will be difficult to assess the model's "fitness for purpose" if the purpose itself is subject to change over the duration of the project.

The focus of the audit depends on the status of the model. Typically, most projects will be audited at three milestones: completion of the base model (existing conditions), completion of the design year DO MINIMUM model, and completion of the BUILD scenarios (these stages are discussed in more detail in the scoping and forking guidelines). The emphasis of the audit is somewhat different at each stage:

  • First Audit. It is likely that the first audit will need to look carefully at the basic structure of the network and the associated trip generation. This means evaluating the way the model was constructed and the degree to which the modeller has been successful in calibrating the model to reproduce the existing traffic conditions. The reviewer should also look carefully at any network features or coding work-arounds which might be difficult to replicate in the forthcoming "do-minimum" and "build" scenarios. For example, the use of link cost factors to control routing could become problematic if the links in question will be demolished or altered in other scenarios.
  • Second Audit. Typically the second audit will occur at the time the design year DO MINIMUM model has been completed. At this stage the reviewer should examine the overall model coding to affirm that it is substantially similar to the base model, with an emphasis on any features that have been changed to create the do-minimum scenario. Particular attention may be directed to the question of whether the model properly meets the definition of a DO MINIMUM scenario. For example, some network errors or coding problems (such as merging problems or weaknesses in the origin-destination matrix) that were present in the existing-conditions model but masked by low traffic volumes may become evident under increased traffic loading; in many cases it will be appropriate to go back and fix these issues before moving on to other scenarios. Conversely, some changes may have been made to address capacity issues in the DO MINIMUM model, which in fact exceed the definition of a "do minimum" scenario and deserve to be considered as "build" scenarios in their own right.
    Typically, the second audit will also check the model calibration report to verify that the traffic flows under the DO MINIMUM conditions match any externally prepared forecasts. Occasionally, an exception to the volume calibration requirement may be made in cases where the congestion is so severe that even after the DO MINIMUM improvements have been made it is impossible for all vehicles to reach their intended destination. The "fitness for purpose" concept is important in this context; for example, if the project objective is to evaluate the design of a freeway, the reviewer generally should not accept a DO MINIMUM model where vehicles are blocked from entering the freeway by a capacity problem on local streets.
  • Third Audit. The third set of audits typically occurs at the time each of the BUILD scenarios is delivered. At this stage the auditor will typically focus on assuring that the "build" models are consistent with their predecessors, checking the way redesigned elements of the model have been coded, making sure each build scenario is consistent with the corresponding design plans, and reviewing the model calibration report to verify that the modeled traffic volumes are consistent with any externally-prepared traffic forecasts. The reviewer should also pay attention to any new features that have been added to the model as part of the scenario; for example if the new design involves creating special-purpose lanes, the reviewer should verify that these lanes and the associated vehicle restrictions have been implemented correctly.


Auditing Checklist

The list that follows is intended to serve as a starting point and "memory aid" for reviewers. The list is not all-inclusive and it is possible for models to "pass" on all points listed below yet still not be fit-for-purpose. An example would be a model that functions correctly, but relies on features or controls that will be difficult to sustain in future phases of the project.

The list is based on WisDOT's experience with Quadstone Paramics software, but the general idea can be carried over to other software packages. SMALL CAPITAL LETTERS identify terminology that is specific to Paramics; when auditing models prepared with other software, the relevant terms for that software package should be substituted.

The audit process should always take into consideration the current capabilities and limitations of the software package and version that was used to prepare the model (new software features are seldom foolproof).


Network Coding & Geometry

Network Coding establishes the horizontal and vertical geometry of the network, including placement and interconnection of nodes, links, curb points, curves, turn lanes, merge points, stop bars, signposts, and other network infrastructure. It also includes the appropriate use of settings such as link free-flow speed. Some issues to consider include:

  • Error Messages: Are there network errors or warnings in the INFO BROWSER window?
  • Nodes: Are nodes placed correctly? Are there unused/unconnected nodes? Are there unnecessary nodes?
  • Links: Are the link free-flow speed settings correct? Have any very short links been used? Are there anomalies or inconsistencies?
  • Junctions: Are intersections operating correctly? Are there permitted or prohibited movements which have not been represented correctly?
  • KERBS: Are the curb points set correctly? Are vehicle turning radii realistic?
  • STOPLINES: Do the modelled stop bar positions correspond to the field condition? Do vehicles appear to skid diagonally through intersections?
  • NEXTLANES: Is lane continuity correctly coded at intersections, lane drops, and lane additions? Do vehicles merge and yield correctly at these locations? Is there forced merging in the model which does not occur in the field?
  • SIGNPOSTING and SIGNRANGE: Do vehicles respond to downstream hazards (such as a lane drop) at realistic locations?
  • Ramps: Have freeway ramps been coded using the RAMP function or as ordinary junctions? Is the merging behavior realistic? Are the MINIMUM RAMP TIME, ramp headway target, ramp reaction time, and RAMP AWARE DISTANCE consistent throughout the model?
  • Annotation: Are there street name annotations or other landmarks to orient viewers?
  • 3D Viewing: If three-dimensional modeling is in the project scope, do the levels appear to be correctly coded at grade-separated intersections and interchanges? Are there any "flying vehicles" or other anomalies when viewing in 3D?
  • General "Look and Feel": When running the model, do things visually look correct? Are there otherwise-benign problems that would undermine the model's credibility if it were shown at a public meeting?


Intersection Traffic Control & Ramp Metering

Intersection Controls are devices that regulate traffic flow at intersections, such as signals, roundabouts, and priorities at stop-controlled intersections (priority junctions). Ramp meters control the rate of entry to a freeway. A complete review includes both the physical aspects of the intersection control and the timing plans. Examples of some things to check are:

  • Signals: How were the signals modelled: actuated, semi-actuated, or fixed-time? Are the phases, timings, splits and offsets broadly consistent with the field? If the signals are actuated or semi-actuated in the field but modelled as fixed-time, has a reasonable simplification of the timing plan been implemented? Have pedestrian crossing times been taken into account?
  • Unsignalized Intersections: Are vehicles yielding as they should? Is the MAJOR/MINOR hierarchy correct? Have 4-way stops been modelled properly?
  • Roundabouts: Are lane utilization and yielding in roundabouts representative of the field conditions?
  • Ramp Meters: Have ramp meters been included in the model? Based on the purpose of the model, should they be? If fixed-time meters are being used, are the fixed-time plans a reasonable simplification of the actual timing plan? If meter actuation is being modelled, are the actuation loops correctly positioned to match the field locations? If the field ramp meters release vehicles alternately from each lane, how has this been modelled?


Zone Structure

Zone structure relates to the placement of the zones representing the locations where traffic enters or leaves the network. Observations related to sectors and zone connectors should be included in this section. If the microsimulation model zones are derived from a travel demand model, auditors should note any issues related to data interchange and consistency. Some of the things to consider include:

  • Zone Numbering: Are the zones numbered in a logical sequence that will be easy to maintain? Does the zone numbering system allow for new zones to represent future development?
  • Zone Placement: Are the zone boundaries correctly positioned over the intended loading links? Is the model free of overlapping zone boundaries? Is there more than one link within a zone boundary?
  • Loading Links: Do the loading links have enough length and capacity to handle the traffic volume that is being generated? Is there sufficient distance for traffic flow to stabilize between the loading link and the next downstream intersection?
  • SECTORS and CAR PARKS: Have ZONES been aggregated into SECTORS? Are CAR PARKS being used to control the locations where vehicles appear/disappear? If so, have these features been implemented correctly?
  • Relationship to Other Models: Were the zone boundaries imported from a travel demand model? If so, do the boundaries match correctly? Have the demand model's zones been split or combined to form the microsimulation zones? Was this done in a way that allows updating if the travel demand forecast changes?


O-D Matrices, Demand Profiles, and Time Periods

Origin-Destination matrices contain the network demand patterns (number of trips between each pair of zones). Time Periods and Demand Profiles control the timing of the release of the trips into the network. In some cases multiple matrices are used (for example separate matrices for cars and heavy trucks). Some issues to consider include:

  • Origin-Destination Matrices: Has the DEMANDS file been split into separate time periods or vehicle types, and is this appropriate based on the study goals? Are there illogical entries in the OD matrix (e.g. vehicles that must go backwards on a freeway)?
  • PROFILES: Does the within-hour variation in travel demand correspond to the Peak Hour Factor that would be expected in this location?
  • Time Periods: Are the time periods set up to meet the project requirements? Is there sufficient loading time? At minimum, the loading period must be long enough for vehicles to traverse the longest route in the model.
  • Special-Purpose OD Matrices: Are there (or will there be) any special-purpose demand matrices for managed lanes (e.g. HOV or toll lanes) or any other features that are restricted to specific categories? If so, have the proportions of vehicles that are eligible to use these facilities been computed correctly?
  • Relationships to Other Models or Forecasts: Are the matrices tied to another data source, such as a regional travel demand forecasting model? If so how was the conversion made? Do the growth rates match?
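
When checking the PROFILES against expected peaking, the standard Highway Capacity Manual definition of the Peak Hour Factor can be applied directly to the four 15-minute counts within the peak hour. A minimal Python sketch (the function name and example counts are illustrative):

```python
# Peak Hour Factor (PHF), per the standard Highway Capacity Manual definition:
#   PHF = hourly volume / (4 x highest 15-minute volume)
# Values near 1.0 indicate a flat within-hour profile; lower values a sharp peak.
def peak_hour_factor(counts_15min):
    """counts_15min: the four 15-minute vehicle counts within the peak hour."""
    return sum(counts_15min) / (4 * max(counts_15min))

# Example: counts of 210, 260, 240, 190 veh give PHF = 900 / 1040
print(round(peak_hour_factor([210, 260, 240, 190]), 3))  # -> 0.865
```

The modelled profile's implied PHF can then be compared against counts or typical values for the facility type.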


Core Simulation Parameters

Core simulation parameters (such as mean target headway, mean target reaction time, perturbation, driver familiarity, timesteps, speed memory, and matrix tuning) affect fundamental aspects of vehicle behavior in the network, such as driver aggressiveness and the willingness to merge into small gaps. Considerations include:

  • MEAN TARGET HEADWAY: Does the selected value fall in the normal range (typically 0.80 to 1.00 for Quadstone Paramics)?
  • MEAN TARGET REACTION TIME: Does the selected value fall in the normal range (typically near the default value of 1.00 for Quadstone Paramics)?
  • TIME STEPS: The number of timesteps per second affects merging opportunities at ramps and roundabouts. For Paramics, has the modeller deviated from the recommended range of 2 to 8 (usually 4 to 6) timesteps per second?

Note: INTER TIME STEP PAUSE should be used to adjust animation speed; TIME STEPS should not be used for that purpose.
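
The ranges cited above can be collected into a simple pre-audit screening check. This is only a sketch: the function name and the tolerance on reaction time are assumptions for illustration, and the ranges apply to Quadstone Paramics as noted above.

```python
# Screen core Paramics parameters against the ranges cited in the checklist.
# The 0.2 tolerance on reaction time is an assumed interpretation of
# "near the default value of 1.00".
def check_core_parameters(headway, reaction, timesteps):
    issues = []
    if not 0.80 <= headway <= 1.00:
        issues.append(f"mean target headway {headway} outside 0.80-1.00")
    if abs(reaction - 1.00) > 0.2:
        issues.append(f"mean target reaction time {reaction} far from default 1.00")
    if not 2 <= timesteps <= 8:
        issues.append(f"{timesteps} timesteps/sec outside recommended 2-8")
    return issues

print(check_core_parameters(0.90, 1.00, 5))   # typical values -> no issues
print(check_core_parameters(1.50, 1.00, 10))  # flags headway and timesteps
```

A value outside these ranges is not automatically wrong, but it should be documented and justified by the modeller.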


Closures, Restrictions & Incidents

Closures represent links or lanes that are temporarily or permanently closed to traffic. Restrictions represent links or lanes that are temporarily or permanently closed to specific types of vehicles. Incidents include simulated vehicle break-downs, etc.

  • Closures & Restrictions: The CLOSURES and RESTRICTIONS functions are generally used in combination to close or limit access to certain lanes. Has this been implemented correctly?
  • Vehicle Pre-Positioning: Have closures or restrictions been used to pre-position vehicles in advance of a fork in the road? If so, is this appropriate or should it be done using a different method such as ROUTE CHOICES?
  • Incidents: If an incident scenario is being modelled, is it realistic?
  • Work Zones: Have appropriate link categories been created to allow target headways (and hence capacity) to be reduced on links representing work zones?


Visibility & Aware Distance

Visibility and Ramp Aware Distance influence yielding at intersections and ramps.

  • Visibility: Has VISIBILITY been adjusted to account for any obstructions that limit sight distance in the field, such as walls, columns, abutments, or grade differentials?
  • Ramp Aware Distance: Has AWARE DISTANCE (also called move over distance) been adjusted to account for any obstructions that limit the ability of freeway mainline vehicles to see entering vehicles?


Routing Parameters

Routing parameters (such as cost factors, turn penalties, modification of the link type hierarchy, and waypoints) override the default routing behavior and profoundly influence the route choice in the network. They are occasionally used to increase or decrease the traffic volume on specific links. If used improperly, these controls can cause unrealistic or erratic routing. (Note that these settings are only relevant if there is more than one route that a driver can take to get from an origin zone to a destination zone.)

  • Link Hierarchy vs Link-By-Link Overrides: Has the modeller selected an appropriate set of link categories to represent the variations in speed and functional classification (MAJOR/MINOR)? Have speeds and cost factors been overridden on a link-by-link basis when a link type hierarchy should have been used instead?
  • Link Cost Factors: Have link cost factors been used to adjust the vehicle routing? Are they necessary? Is the number of instances excessive? Will the same links exist (and be exactly the same length) in future scenarios?
  • Traffic Assignment: Has the appropriate method been used to assign traffic to the network: ALL OR NOTHING or PERTURBATION? This will strongly influence all other routing-related settings.
  • Driver Familiarity: If PERTURBATION is enabled, is there an appropriate proportion of familiar drivers (drivers who would be willing to consider second-best routes)? Generally familiarity will be highest in urban areas and lowest in tourist/recreational areas.
  • PERTURBATION: As the PERTURBATION value increases, the number of vehicles using second-best routes will increase. Is the amount appropriate for this location? Are vehicles making illogical loops or U-turns due to excessive perturbation?
  • DYNAMIC FEEDBACK: If this setting is enabled, some vehicles will re-consider their routes while they are travelling. The frequency of the updates will vary depending on the level of real-time traffic information that drivers in the study area can receive. Are the feedback settings appropriate, considering the amount of real-time traffic info available in this location? Have the settings been adjusted to prevent oscillation? Are the settings contributing to logical vehicle routing in the network?
  • Waypoints: These points represent locations that all vehicles travelling between certain OD pairs must pass through. Their use should be reviewed carefully.


Lane Use Parameters

Lane use parameters control the amount and/or destination of the traffic using each lane. A typical application of these parameters is to pre-position vehicles in advance of a fork in the road.

  • Lane Utilization: Are vehicles using the correct lanes at intersections and on the highway/freeway mainline? Is the distribution of vehicles between the lanes similar to the field conditions? Is there unrealistic congestion due to excessive vehicles in one lane?
  • LANE CHOICES and ROUTE CHOICES: Have these settings been used correctly?


Vehicle Types & Proportions

The proportion of heavy vehicles (trucks and buses) influences the overall performance of each part of the network.

  • Vehicle Types: Are the vehicle types in the VEHICLES file consistent with those found in the study area? Are there unnecessary/unused vehicle types? For example, urban transit buses may not be needed for a freeway study.
  • Vehicle Proportions: Are the truck percentages consistent with those found in the study area? Does the vehicle type distribution make sense and meet the needs of the project for both the base and future models?
  • Special Vehicles: Is there a need for any special-purpose vehicles that have different dynamics or obey different rules than ordinary cars and trucks? Possible examples include a low-speed sightseeing shuttle in a tourist area, taxis or carpools that are allowed in specific lanes, or freight trains.


Unreleased Vehicles & Stuck/Stalled Vehicles

The reviewer should note any problems with unreleased, stuck or stalled vehicles (including intermittent problems). Unreleased vehicles are vehicles that are unable to enter the network due to congestion in the links close to the zone where they are created; since these vehicles never enter the network the conditions downstream typically appear to be better than they should be. Stuck/stalled vehicles are ones that unexpectedly slow or stop partway through their route (which can cause backups that do not exist in the field). The reviewer should also audit the use of stalled vehicle removal tools.

  • Unreleased Vehicles: Are all generated vehicles able to enter the network? (Be sure to turn the RELEASE BLOCKING VALUES on and check all zones during times with the most congestion).
  • Stuck Vehicles: Are vehicles getting stuck somewhere in the network, for example at a lane drop or signalized intersection?
  • Traffic Flow: Are stuck vehicles causing unreasonable traffic flow in portions of the network? Is there unreasonably high congestion upstream of a place where vehicles get stuck, or unreasonably good traffic flow downstream of that location?
  • Stuck Vehicle Removal Tools: Are more than 10 vehicles per hour being "destroyed" or removed from the network?
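
The 10-vehicles-per-hour removal threshold above reduces to simple arithmetic; a sketch (the function name and threshold default are taken from the checklist, not from any software):

```python
# Flag excessive use of stuck-vehicle removal tools, using the
# 10-vehicles-per-hour threshold from the checklist above.
def removal_rate_acceptable(vehicles_removed, simulated_hours, limit=10):
    """Return True if removals stay at or below `limit` per simulated hour."""
    return vehicles_removed / simulated_hours <= limit

print(removal_rate_acceptable(8, 1))   # 8/hr  -> True
print(removal_rate_acceptable(25, 2))  # 12.5/hr -> False
```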


Advanced Model Features

Special features include site- or study-specific items such as the use of detectors, car parks, variable message signs, special purpose lanes, speed harmonization, public transit routes, toll lanes, toll plazas, pedestrian modeling, special graphics, Application Programming Interfaces (APIs), etc.

  • Actuated Signals: If actuated signals are being used, does the logic match what is programmed into the field controller?
  • APIs and "Plug-Ins": Have any "plug-ins" been used to override the standard program logic by means of the Application Programming Interface (API)? If so, are these plug-ins supported by the software vendor? Are the APIs necessary to meet the project objectives? Are they sustainable throughout the duration of the project? Have they been used correctly?
  • Pedestrians: Are pedestrians modelled? If so, how has it been done?
  • PMX Models: Have high-resolution graphics been used in the model to represent cars, buildings, etc.? Is their use appropriate with respect to the study goals? Does it add to or detract from the credibility of the model?
  • Special-Purpose Lanes: Are special-purpose lanes being used in any of the scenarios? Examples include carpool lanes, toll lanes, bus-only lanes, truck-only lanes, taxi lanes, etc. If so, have these lanes and all associated infrastructure been implemented correctly? (Associated infrastructure includes things like toll plazas).
  • Railroads & Public Transportation: Are railroad crossings, trolley/light rail lines, or bus routes part of the model? If so, are the stop locations, routes, vehicle speeds, and schedules reasonable?
  • Variable Message Signs: Are VMS Beacons being used to influence traffic routing? If so, how was the logic established and is it reasonable?


Model Calibration

Calibration refers to the accuracy of the model’s representation of the real-world traffic conditions, including traffic volumes, speed, travel time, trip-making patterns, and congested areas (hot spots). For details see the Model Calibration guidelines.

  • Model Calibration Report: Does the calibration report show the model to be in conformance with the Wisconsin Calibration Guidelines? If not, why? Are there roughly equal numbers of links that are above and below calibration targets, or does the report show that the model is "just barely meeting" targets in some systematic way (for instance all volumes are low)? Would similar results be reported if different random seeds were used?
  • Queues: Are the modeled queues representative of field conditions? If there are queues blocking upstream lanes or intersections, does this happen in the field?
  • Routing: Are the vehicles using reasonable routes? Are there any problems with unrealistic U-turns, vehicles doubling-back to reach their destinations, excessive PERTURBATION, etc.?
  • Congestion: Does the location, duration, and severity of congestion in the model match the field conditions? Is the model accurately depicting the underlying causes of the congestion?
  • Traffic Problems Extending Beyond the Modelled Area: Are there any situations where congestion or other traffic problems are masked because the modelled area is too small to show the problem?
  • Stability: Is the model able to accommodate fluctuations in demand without unexpected traffic flow break-downs? Is the network on the verge of locking up if demands are increased modestly? Does congestion correctly build, plateau, and start to clear during the modelled period?
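
Link-by-link volume comparisons in calibration reports are commonly summarized with the GEH statistic; whether the Wisconsin Calibration Guidelines use GEH or another measure is not stated here, so the GEH < 5 target mentioned in the comment below is an assumption based on common practice. A minimal sketch:

```python
import math

# GEH statistic comparing a modelled and a counted hourly volume.
# Many calibration guidelines target GEH < 5 on most links (assumed here).
def geh(modelled, counted):
    return math.sqrt(2 * (modelled - counted) ** 2 / (modelled + counted))

# Example: modelled 1050 veh/h vs counted 1000 veh/h
print(round(geh(1050, 1000), 2))  # -> 1.56, well within a GEH < 5 target
```

A reviewer can also check the sign of (modelled - counted) across all links to detect the systematic bias described above (e.g. all volumes low).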


Consistency with Related Models

Modeling studies often involve a series of related models (base model, future no-build, and build alternatives, different times of day, etc.). To assure the integrity of the study as a whole, these models must be consistent. An especially high level of consistency is required if delay computations from the models will be used to compute project benefit/cost ratios, or if economic data derived from the models will be used to justify project expenditures, investments, public-private partnerships, etc.

  • Time-of-Day Modelling Methods: Are all temporal periods (AM peak, PM peak, mid-day, weekend, etc.) modelled using the same basic network and the same calibration parameters? Usually the only differences between temporal periods should be the DEMANDS file and features that change by time of day, such as signal timing plans and peak-hour parking restrictions.
  • Network Coding: Have any new modeling methods or techniques been introduced in DO MINIMUM or BUILD models that did not exist in the BASE YEAR model? If so, how do these affect traffic flow and routing? Has the network coding from the DO MINIMUM been carried through into the BUILD scenarios (with the exception of any links that are affected by the proposed project)?
  • Core Simulation Parameters: Have core simulation parameters such as MEAN TARGET HEADWAY been kept constant throughout all models, time periods, and scenarios?
  • OD Matrices / Demands: Have the future-year OD matrices and temporal profiles introduced in the DO MINIMUM model remained constant through all BUILD scenarios?
  • Random SEED Values: Have the same SEEDS been used for all time periods and scenarios?
  • Signals & Intersection Control: Have the same methods been used to model the signals in all of the scenarios (e.g. DO MINIMUM vs BUILD)? For example, if actuated control is used in the BUILD it should also be used in the DO MINIMUM. If not, it will be difficult to make a direct comparison of the delay and other performance measures.
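
One way to screen for the consistency issues above is to tabulate the core settings for each scenario and flag anything that differs from the base model. The data layout below is an assumption for illustration; a real check would extract these values from each scenario's configuration files.

```python
# Flag core parameters that differ between scenarios. The first scenario
# in the dict is treated as the baseline for comparison.
def parameter_differences(scenarios):
    """scenarios: dict of scenario name -> dict of parameter values."""
    names = list(scenarios)
    baseline = scenarios[names[0]]
    diffs = []
    for name in names[1:]:
        for param, value in scenarios[name].items():
            if baseline.get(param) != value:
                diffs.append((name, param, baseline.get(param), value))
    return diffs

# Hypothetical extracted settings: BUILD_A has drifted from the base headway.
settings = {
    "BASE":       {"mean_target_headway": 0.90, "timesteps": 5, "seed": 42},
    "DO_MINIMUM": {"mean_target_headway": 0.90, "timesteps": 5, "seed": 42},
    "BUILD_A":    {"mean_target_headway": 1.10, "timesteps": 5, "seed": 42},
}
for scenario, param, base_val, val in parameter_differences(settings):
    print(f"{scenario}: {param} = {val} (base model uses {base_val})")
```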


Documentation

Proper documentation of modeling methods and assumptions establishes accountability and facilitates efficient revision, updating, and follow-up.

  • Completeness: Is the model documentation sufficient for hand-off to another experienced modeller? Have all assumptions and special features been documented?
  • File Naming System: Has a logical file naming system been used so that it is clear which file represents each scenario?


Do Minimum Model

  • Growth Rates: Has the existing OD matrix structure been preserved? How was the traffic growth determined? Were the appropriate growth rates applied? Are the growth rates consistent with any external forecasts? Does the model reveal possible errors in the forecast?
  • Network Changes: Have all committed projects been added? Are the improvements coded in the DO MINIMUM justified? Are there "DO MINIMUM" improvements that ought to be evaluated as BUILD alternatives in their own right?
  • Calibration: Does the traffic volume match external forecasts? If external forecasts are unavailable, is the growth reasonable? Are the modelled speeds and route choice reasonable given the new conditions?
  • General Issues: Does the increase in traffic reveal weaknesses in the model coding, OD, or other core elements of the model that need to be corrected before going further?
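
Where a single compound annual growth rate has been applied to the existing OD matrix, the reviewer can reproduce the arithmetic directly. The sketch below assumes uniform growth across all zone pairs, which is a simplification; real forecasts often vary growth by zone pair or movement.

```python
# Apply uniform compound annual growth to a base-year OD matrix
# (represented here as a list of rows).
def grow_matrix(base_matrix, annual_rate, years):
    factor = (1 + annual_rate) ** years
    return [[trips * factor for trips in row] for row in base_matrix]

# Example: 1.5% annual growth over 20 years scales demand by (1.015)^20,
# about a 35% increase.
future = grow_matrix([[0, 120], [95, 0]], 0.015, 20)
print(round(future[0][1], 1))
```

Comparing the implied overall growth factor against any external forecast is a quick first check on the DO MINIMUM demand.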


Design Year / Build Scenarios

  • Design Year OD Matrices: Are the matrices the same as the DO MINIMUM?
  • Network Configuration Changes: Are there any changes in the configuration and if so are they justified?
  • Network Coding: Have scenarios been properly coded? Are there any changes to the network that are not supposed to be part of the scenario?
  • Performance Measures: Have the performance measures (MOEs) been correctly calculated?
  • Calibration: Does the traffic volume match external forecasts? If external forecasts are unavailable, is the growth reasonable? Are the modelled speeds and route choice reasonable given the new conditions?
  • Model Conclusions: Do the conclusions make sense?