Verification
A high-level language design and modeling environment enables thorough multi-level verification.
At Dillon Engineering, we understand that most of the effort in a logic design is spent on verification. Any streamlining of verification comes directly off the schedule's bottom line, and any bolstering of verification quality reduces design risk. Here is a look at some of the ways we use our high-level modeling capabilities as the basis for our verification environment.
Commonality between model and HDL verification
We believe a solid FPGA or ASIC verification flow should span all of the project's design hierarchy, from an original algorithm description, to a high-level language model, and down to the hardware description language (HDL) code. Verifying all of these design representations against the same verification criteria and data ensures the hardware will meet the top-down intent.
The model must first be verified against the original algorithm. Of course, the format and maturity of the algorithm determines what is available as a verification "gold standard", whether it's a combination of sketchy code, pseudocode, and flow diagrams that only performs with one or two data sets, or whether it's a meticulously documented code repository with a full test suite. Regardless, the model structure must be flexible enough to scale up to meet higher-level algorithm structures, and also to scale down to allow lower-level HDL verification. Using the right modeling environment (such as Python!) is key to enabling this flexibility.
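As a rough illustration, the sketch below checks a hypothetical model function against a reference implementation over shared random stimulus, with numpy's FFT standing in for whatever form the algorithm "gold standard" takes; the function names and tolerance are assumptions for this example only.

import numpy as np

def check_model_against_reference(model_fn, reference_fn, n_points=1024,
                                  trials=10, tol=1e-3):
    """Run both implementations on the same stimulus and compare results."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        stimulus = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)
        model_out = model_fn(stimulus)
        ref_out = reference_fn(stimulus)
        # A quantized model is expected to be close, not bit-exact.
        assert np.allclose(model_out, ref_out, atol=tol), "model diverges from reference"

# For example, a floating-point model stage checked against numpy's FFT:
# check_model_against_reference(my_fft_model.process, np.fft.fft)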
Unit Testing
Unit testing is commonly skipped in HDL design flows, since it takes extra effort to build and maintain additional testbenches and to keep unit-level module interfaces concise and stable. However, fully verifying a complex design with an exponential number of state combinations requires intensive unit-level testing. Our high-level language capabilities lessen that burden, allowing us to develop comprehensive object-oriented test environments alongside the synthesizable RTL.
Consequently, much as an algorithm or software program is developed, we implement smaller units of HDL stepwise and rigorously test them against the model, then build up through higher levels of integration until we reach the top of the design. The ability to control and observe inside the model at any unit-sized black box is the key to this progressive HDL verification.
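The sketch below illustrates the idea with a hypothetical pipeline model that records every intermediate stage, so the simulation dump from each HDL unit can be checked against the matching model boundary; the stage names and the plain-text dump format are assumptions, not our actual class library.

import numpy as np

class PipelineModel:
    """Model that records the output at every internal stage boundary."""
    def __init__(self):
        self.stage_outputs = {}

    def run(self, samples):
        x = np.asarray(samples, dtype=float)
        x = x * np.hanning(len(x))            # hypothetical windowing stage
        self.stage_outputs["window"] = x
        x = np.cumsum(x)                      # hypothetical accumulate stage
        self.stage_outputs["accumulate"] = x
        return x

def check_unit_against_model(stage_name, hdl_dump_file, model, stimulus, tol=1e-6):
    """Compare an HDL unit's simulation dump with the matching model stage."""
    model.run(stimulus)
    hdl_out = np.loadtxt(hdl_dump_file)       # one value per line from the unit testbench
    expected = model.stage_outputs[stage_name]
    assert np.allclose(hdl_out, expected, atol=tol)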
Robust data generation
Data generation is another verification necessity that a powerful high-level language unleashes. Complex designs require robust stimulus: ramps, random and directed-random patterns, decaying waveforms, trig functions, and so on. When data must survive operations such as filters or correlations to remain meaningful later in the pipe, the data generator must scale easily to provide data that anticipates such staging of effects. Our Python modeling environment provides these advantages; its reduced semantic burden and wealth of built-in and extension functions enable intuitive, nearly limitless data creation.
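As a rough illustration (the generator names and parameters below are invented for this example, not a published API), a few lines of Python and numpy cover the kinds of stimulus described above:

import numpy as np

def ramp(n, start=0.0, step=1.0):
    """Linearly ramping stimulus."""
    return start + step * np.arange(n)

def decaying_tone(n, freq=0.01, decay=0.001, amplitude=1.0):
    """Exponentially decaying sinusoid."""
    t = np.arange(n)
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

def directed_random(n, lo, hi, seed=None):
    """Uniform random samples constrained to a directed range of interest."""
    rng = np.random.default_rng(seed)
    return rng.uniform(lo, hi, n)

# Stimuli combine so that meaningful content survives later pipeline stages
# such as filters or correlators, e.g. a decaying tone buried in noise:
stimulus = decaying_tone(4096) + 0.1 * directed_random(4096, -1.0, 1.0, seed=42)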
Another stimulus/response format issue is the oft-overlooked file I/O. When a model is operating from canned data sets, or providing data for another process such as an HDL simulation, the ability to adapt between data formats such as hex, float, and complex makes simulation development a whole lot easier. Versatile numeric formats and ease of type conversion are again some of Python's strengths here, and the DE modeling environment and class library take full advantage of them.
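As one example of this kind of format adaptation, the sketch below quantizes float or complex model data into two's-complement hex words that a $readmemh-style Verilog testbench could consume; the word width, scaling, and file layout are assumptions for illustration.

def float_to_hex(value, frac_bits=15, width_bits=16):
    """Quantize a float to two's-complement fixed point and format it as hex.
    Assumes the value is already within range (no saturation)."""
    scaled = int(round(value * (1 << frac_bits)))
    mask = (1 << width_bits) - 1
    return format(scaled & mask, "0{}x".format(width_bits // 4))

def write_stimulus(samples, path, frac_bits=15, width_bits=16):
    """Write real or complex samples as one hex word (or real/imag pair) per line."""
    with open(path, "w") as f:
        for s in samples:
            if isinstance(s, complex):
                f.write(float_to_hex(s.real, frac_bits, width_bits) + " " +
                        float_to_hex(s.imag, frac_bits, width_bits) + "\n")
            else:
                f.write(float_to_hex(s, frac_bits, width_bits) + "\n")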
Model/HDL Co-simulation
While file I/O is quite often an acceptable data exchange mechanism between model and HDL simulations, we also employ co-simulation as another powerful verification technique. Co-simulation is enabled on the HDL side by Verilog PLI, and on the Python modeling side via generator objects and the fabulous work done on the MyHDL project. This connectivity allows both model and HDL to interact at any or all steps of the design.
Co-simulation provides some obvious advantages over file I/O, such as piecemeal data generation, run-time checking, and minimized file storage and maintenance. MyHDL, developed for RTL and testbench generation from Python source, includes the hooks to support PLI data exchange and hardware concurrency modeling. We have built upon these capabilities to link our model execution and HDL simulation under a single-command umbrella.
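The sketch below shows the general shape of such a co-simulation through MyHDL's Cosimulation object; the Icarus Verilog command line, VPI module path, DUT port names, and zero-latency response check are assumptions for illustration rather than a drop-in recipe.

from myhdl import (Signal, intbv, delay, always, instance,
                   Simulation, Cosimulation, StopSimulation)

def bench(model_fn, stimulus_words):
    clk = Signal(bool(0))
    din = Signal(intbv(0)[16:])
    dout = Signal(intbv(0)[16:])

    # Launch the compiled Verilog DUT under Icarus with the MyHDL VPI module.
    dut = Cosimulation("vvp -m ./myhdl.vpi dut.out", clk=clk, din=din, dout=dout)

    @always(delay(5))
    def clockgen():
        clk.next = not clk

    @instance
    def stimulus_and_check():
        for word in stimulus_words:
            din.next = word
            yield clk.posedge
            # Assumes a single-cycle DUT; a pipelined design would need
            # latency alignment between model and HDL responses.
            assert int(dout) == model_fn(word), "HDL/model mismatch"
        raise StopSimulation

    return dut, clockgen, stimulus_and_check

# Simulation(*bench(my_model.step, test_vectors)).run()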
Run-time configuration
When a design has a lot of configurability, it adds to the state explosion problem: verification must now consider exponentially more state combinations beyond just different stimulus patterns. The modeling environment should be able to mimic the HDL's run-time configuration settings in conjunction with data stimulus to make verification complete. Whether these settings are passed in on the command line, embedded in data I/O, or read from configuration files, the DE modeling system handles run-time and dynamic configuration to match what would be the control path in the hardware design. The run-time speedups achieved with model simulation become even more important as the number of test vectors grows.
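As a simple illustration, a single parsed configuration can drive the model run and be re-emitted as simulator plusargs so the HDL simulation sees identical settings; the parameter names below are invented for the example.

import argparse

def parse_config(argv=None):
    """One configuration description shared by model and HDL simulation."""
    parser = argparse.ArgumentParser(description="Model/HDL shared configuration")
    parser.add_argument("--fft-size", type=int, default=1024)
    parser.add_argument("--input-width", type=int, default=16)
    parser.add_argument("--scaling", choices=["none", "block", "stage"], default="stage")
    return parser.parse_args(argv)

def as_plusargs(cfg):
    """Render the same settings as Verilog plusargs for the HDL testbench."""
    return " ".join("+{}={}".format(name, value) for name, value in vars(cfg).items())

cfg = parse_config(["--fft-size", "2048"])
# e.g. run_model(cfg) and launch the simulator with as_plusargs(cfg) appended.
print(as_plusargs(cfg))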
---
Dillon Engineering's ingenuity in verification points right back to the efficient environment in which we model in a high-level language. While conventional building blocks such as testbenches, bus functional models, checkers, and file I/O are necessary to perform HDL simulation, we believe the direct correlation between HDL and model, including data generation, co-simulation, and configurability, is the key to robust verification.