Don Y
2024-11-08 04:10:45 UTC
In *regulated* industries (FDA, aviation, etc.), products are
validated (hardware and software) in their "as sold" configurations.
This adds constraints to what can be tested, and how. E.g.,
invariants in code need to remain in the production configuration
if relied upon during validation.
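As a concrete illustration (a minimal sketch in C, all names
hypothetical), an invariant macro that survives -DNDEBUG, so the
check exercised during validation is still present in the shipped
image:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical fault handler -- a real system would latch the
       fault and drive the device to its safe state. */
    static void invariant_failed(const char *file, int line)
    {
        fprintf(stderr, "invariant violated at %s:%d\n", file, line);
        abort();
    }

    /* Unlike assert(), this is NOT compiled out by NDEBUG, so the
       validated configuration and the production configuration
       run the same check. */
    #define INVARIANT(cond) \
        do { if (!(cond)) invariant_failed(__FILE__, __LINE__); } while (0)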
But, *testing* (as distinct from validation) is usually more
thorough and benefits from test-specific changes to the
hardware and software. These changes allow for fault injection
and observation.
In *unregulated* industries (common in the US but not so abroad),
how much of a stickler is the validation process for this level
of "purity"?
E.g., I have "test" hardware that I use to exercise the algorithms
in my code to verify they operate as intended and detect the
faults against which they are designed to protect. So, I can inject
EDAC errors in my memory interface, SEUs, multiple row/column
faults, read/write disturb errors, pin/pad driver faults, etc.
These are useful (essential?) to proving the software can
detect these faults -- without having to wait for a "natural
occurrence". But, because they are verified/validated on
non-production hardware, they wouldn't "fly" in regulated
markets.
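The general shape of such an injection hook, as a sketch (C, with
hypothetical names; real injection would have to bypass the EDAC
encoder, e.g., through a raw-access window or test mux):

    #include <stdint.h>
    #include <stdio.h>

    /* Test-build-only: flip bits in a RAM word to emulate a single-
       or multi-bit upset, then read back through the EDAC path and
       check that the error is reported (and corrected, if a single
       bit). The plain XOR here only shows the shape of the test. */
    #ifdef FAULT_INJECTION
    static void inject_upset(volatile uint32_t *addr, uint32_t bits)
    {
        *addr ^= bits;
    }
    #endif

    int main(void)
    {
        volatile uint32_t word = 0xA5A5A5A5u;

    #ifdef FAULT_INJECTION
        inject_upset(&word, 1u << 7);                /* SEU        */
        inject_upset(&word, (1u << 3) | (1u << 19)); /* double-bit */
    #endif

        printf("word = 0x%08lX\n", (unsigned long)word);
        return 0;
    }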
Do you "assume" your production hardware/software mimics
the "test" configuration, just by a thought exercise
governing the differences between the two situations?
Without specialty devices (e.g., bond-outs), how can you
address these issues, realistically?