Three scientists, all named Mohammed (M. Zulkernine, M. Raihan, and M. Uddin), presented "Towards Model-Based Automatic Testing of Attack Scenarios." I kept wondering why they didn't mention the Achilles system, which is the de facto standard for such testing.
Several scientists from the Norwegian University of Science and Technology described "CRIOP: A Human Factors Verification and Validation Methodology that Works in an Industrial Setting." This was a survey-based methodology for determining whether a human-machine interface (HMI) works well.
Bruce McMillin of Missouri University of Science and Technology talked about the advanced electric power/smart grid as a cyber-physical system, and about the reliability of such a system. FACTS (Flexible AC Transmission System) devices control power flow in the grid. They can fail in a number of ways: hardware failures as well as software failures. McMillin and his team investigated the effect of software failures in FACTS devices on the operation of the system.
If a FACTS device fails silently, the system behaves as if the device never existed. Adding FACTS devices increases system reliability only if the devices themselves are highly reliable. Other failure modes include erroneously setting the flow to 80% of the correct value and erroneously setting the flow to line capacity. This is somewhat disappointing: even though we have a fabulously complex cyber system, from a reliability point of view it might be better to leave the grid alone.
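To make that trade-off concrete, here is a minimal back-of-the-envelope sketch. It is my own illustration with made-up probabilities, not McMillin's model: the device improves a contingency's survival odds when it works, changes nothing when it fails silently, and hurts when it mis-sets the flow.

```python
# Toy numbers, purely illustrative: not from McMillin's study.
P_SURVIVE_BASELINE = 0.90  # contingency survival with no FACTS device (assumed)
P_SURVIVE_WORKING  = 0.95  # device working correctly (assumed)
P_SURVIVE_BAD_FAIL = 0.40  # device mis-sets flow to 80% or to line capacity (assumed)

def survival_with_facts(r_device: float, frac_bad: float = 0.5) -> float:
    """Contingency survival probability with a FACTS device of reliability
    r_device. frac_bad is the share of device failures that actively mis-set
    the flow; the rest are fail-silent (as if the device never existed)."""
    p_fail = 1.0 - r_device
    return (r_device * P_SURVIVE_WORKING
            + p_fail * (1.0 - frac_bad) * P_SURVIVE_BASELINE
            + p_fail * frac_bad * P_SURVIVE_BAD_FAIL)

for r in (0.70, 0.80, 0.90, 0.99):
    print(f"device reliability {r:.2f}: survival {survival_with_facts(r):.3f} "
          f"(baseline {P_SURVIVE_BASELINE:.2f})")
```

With these made-up numbers, the device helps only once its own reliability clears roughly 0.83; below that, the grid really is better off without it.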
McMillin also described the NSF-funded FREEDM project, an effort to bring alternative energy down to the residential level.
"Just because you can do something, doesn't mean it will be that much better."
Thomas Steffen of Loughborough University presented "Increasing Reliability by Means of Efficient Configurations for High Redundancy Actuators."
He compared standard actuation to parallel redundancy, noting that the latter is not fault tolerant to lock-up, but is simple (though expensive), provides more force, and is tolerant to loose faults. When you move to serial redundancy, you are tolerant to lock-up and you have more travel, but it is not tolerant to loose faults and it is complex.
So the answer is high redundancy, which is complex, but is fault tolerant to both fault modes.
This model uses a high number of smaller elements, inspired by musculature: many small cells.
Many problems have the same basic structure: two fault modes and series/parallel aggregation. Examples include high-voltage switches, rotary motion with torque- or velocity-adding gears, and physical transport, though not information systems.
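To see why the two architectures complement each other, here is a minimal sketch of those aggregation rules. This is my own formalization with assumed fault probabilities, not Steffen's data: a parallel pair jams if either element locks but shrugs off a loose element, while a series pair goes loose if either element does but can still move past a locked one.

```python
from itertools import product

OK, LOCKED, LOOSE = "ok", "locked", "loose"

# Assumed per-element fault probabilities (illustrative only):
# loose faults taken to be four times as likely as lock-ups.
P = {OK: 0.90, LOCKED: 0.02, LOOSE: 0.08}

def parallel(*states):
    """Elements side by side: forces add, travel is shared.
    Any locked element jams the pair; it is loose only if every element is."""
    if LOCKED in states:
        return LOCKED
    if all(s == LOOSE for s in states):
        return LOOSE
    return OK

def series(*states):
    """Elements end to end: travels add, force passes through the chain.
    Any loose element breaks the chain; it locks only if every element locks."""
    if LOOSE in states:
        return LOOSE
    if all(s == LOCKED for s in states):
        return LOCKED
    return OK

def reliability(aggregate, n=2):
    """P(the n-element aggregate is neither locked nor loose), by enumeration."""
    total = 0.0
    for states in product(P, repeat=n):
        prob = 1.0
        for s in states:
            prob *= P[s]
        if aggregate(*states) == OK:
            total += prob
    return total

print(f"single element: {P[OK]:.4f}")
print(f"parallel pair:  {reliability(parallel):.4f}")  # masks loose faults
print(f"series pair:    {reliability(series):.4f}")    # masks lock-ups
```

With loose faults assumed more likely than lock-ups, the parallel pair comes out ahead; flip the probabilities and series wins.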
Steffen looked at fault trees for various configurations. He noted that these were systems of systems, with complex configurations made from aggregations of simple ones. He looked at the limiting capabilities of multistate systems, but he also shared a newly derived additive capability rule. With it, he noted, he can describe the reliability of force and the reliability of travel completely independently of each other.
He showed unreliability results by configuration, where PPSS is worst and SSPP is best. But he noted that each configuration may be the most reliable in special circumstances.
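I did not capture the exact meaning of Steffen's PPSS/SSPP notation, but here is a self-contained sketch (same aggregation rules and assumed probabilities as above, repeated so it runs on its own) comparing two plausible four-element arrangements by exhaustive enumeration:

```python
from itertools import product

OK, LOCKED, LOOSE = "ok", "locked", "loose"
P = {OK: 0.90, LOCKED: 0.02, LOOSE: 0.08}  # assumed, as in the sketch above

def parallel(*s):
    if LOCKED in s:
        return LOCKED
    return LOOSE if all(x == LOOSE for x in s) else OK

def series(*s):
    if LOOSE in s:
        return LOOSE
    return LOCKED if all(x == LOCKED for x in s) else OK

# Two four-element arrangements built from the pair rules above.
def series_of_parallel_pairs(e):
    return series(parallel(e[0], e[1]), parallel(e[2], e[3]))

def parallel_of_series_pairs(e):
    return parallel(series(e[0], e[1]), series(e[2], e[3]))

def unreliability(arrangement):
    """P(a four-element arrangement is locked or loose), by enumeration."""
    bad = 0.0
    for elems in product(P, repeat=4):
        prob = 1.0
        for x in elems:
            prob *= P[x]
        if arrangement(elems) != OK:
            bad += prob
    return bad

print(f"series of parallel pairs: {unreliability(series_of_parallel_pairs):.4f}")
print(f"parallel of series pairs: {unreliability(parallel_of_series_pairs):.4f}")
```

Swap which fault mode is more probable and the ranking reverses, which is exactly his point: each configuration can be the most reliable in special circumstances.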
Finally, Ute Schiffel discussed the perils and pitfalls of building safety-critical systems with commodity hardware, showing an arithmetic encoder for safety-critical systems.
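The classic family of arithmetic codes is the AN code, and the sketch below illustrates the idea (a generic textbook AN code, not necessarily Schiffel's exact scheme; the constant A is just an assumed example): every value x is carried as A*x, arithmetic runs on the encoded values, and any result not divisible by A reveals a hardware-induced error.

```python
# Illustrative AN-code sketch (assumed scheme, not necessarily Schiffel's):
# a value x is carried as A * x. Random bit flips almost never land on
# another multiple of A, so a divisibility check detects them.

A = 58659  # assumed example constant; large odd As with no small factors
           # are the usual choice because they catch more bit-flip patterns

def encode(x: int) -> int:
    return A * x

def decode(xc: int) -> int:
    if xc % A != 0:
        raise RuntimeError("arithmetic error detected: not a multiple of A")
    return xc // A

def add_encoded(xc: int, yc: int) -> int:
    # Addition works directly on encoded values: A*x + A*y == A*(x + y).
    return xc + yc

a, b = encode(20), encode(22)
print(decode(add_encoded(a, b)))          # 42

corrupted = add_encoded(a, b) ^ (1 << 7)  # simulate a memory bit flip
try:
    decode(corrupted)
except RuntimeError as err:
    print(err)                            # the flip is caught
```

The appeal on commodity hardware is that error detection needs no special circuitry, only the divisibility check, at the cost of wider arithmetic and slower operations.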