Return of Proprietary?
I've been catching up on my reading and caught your article on open-systems complexity in the August 2007 issue of Control ("Pain Relief for Open-Systems Complexity").
Regarding Bruce Turrie's comment on virtualization: of course, there are many platforms to pick from and at least two different types. Each is also an OS supplied by yet another vendor, another OS to keep track of and to ensure security for, etc. There is no free lunch there.
The software-as-a-service model also brings no relief. Imagine being connected to the web and getting HMI updates every day with interesting changes for the operator to deal with. (Today we already offer Internet software updates, but we do not automatically update or reboot operator consoles.) This is not the web paradigm.
Perhaps the move should be back to proprietary. Gosh, things were good then. But the sad reality is that nobody is prepared to pay for those pleasures. (A console of the '80s cost much more than our consoles of today.)
So our customers enjoy the pleasure of much lower capital costs on their projects, but get to trade that for larger life-cycle costs and anxieties such as Turrie is witnessing. The vendor is stuck between a rock and a hard place. PCs by Dell, switches by anyone, OS by Microsoft, etc., etc.
A tough time. I am not sure where the answer is, but it has occurred to me more than once: is it time to return to proprietary?
Vice President, Marketing
Emerson Process Management
Dan Hebert wrote a very good article ("How Good Is OPC UA, Really?"). Thanks for taking the time to do some research behind the scenes to figure out what we're trying to do.
It really is about getting multiple vendors and world competitors to understand the requirements of the end users and put together the right specifications, technology and processes necessary to achieve and exceed end users' expectations for secure, reliable interoperability. This is a complicated problem, because we're taking all the OPC specifications that we've developed over the last 13 years, migrating all those interfaces into a generic set of services, and standardizing on the wire protocol.
What we discovered was that the real difference among many of the OPC specifications was the information we were trying to standardize.
OPC UA will define a generic set of services for things like Query, Read and Write, which is what we call a service-oriented architecture. The infrastructure underneath is all about the protocol for interoperability, but the services above are very generic, allowing client and server applications secure interoperability and, ultimately, transactional capability.
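The idea of a small generic service set sitting above many information models can be sketched in a few lines of code. This is an illustrative toy, not the actual OPC UA service definitions: the class names, node identifiers and method signatures below are invented for the example, and a real UA stack adds security, sessions and a standardized wire protocol underneath.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class GenericServices(ABC):
    """One generic service surface (Read, Write, Query) that any
    information model can sit behind -- the SOA idea in miniature."""

    @abstractmethod
    def read(self, node_id: str) -> Any: ...

    @abstractmethod
    def write(self, node_id: str, value: Any) -> None: ...

    @abstractmethod
    def query(self, prefix: str) -> List[str]: ...

class InMemoryServer(GenericServices):
    """Toy server backed by a dict; a real server would expose process
    data, alarms or historical records through the same three calls."""

    def __init__(self) -> None:
        self._nodes: Dict[str, Any] = {}

    def read(self, node_id: str) -> Any:
        return self._nodes[node_id]

    def write(self, node_id: str, value: Any) -> None:
        self._nodes[node_id] = value

    def query(self, prefix: str) -> List[str]:
        # Browse for node ids under a given path prefix.
        return sorted(n for n in self._nodes if n.startswith(prefix))

server = InMemoryServer()
server.write("Plant/Pump1/Speed", 1450)
server.write("Plant/Pump2/Speed", 1200)
print(server.read("Plant/Pump1/Speed"))  # 1450
print(server.query("Plant/"))
```

The point of the sketch is that a client written against `GenericServices` never needs to know whether the data behind a node is live process data or something else; that separation of generic services from specific information is what the letter describes.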
Thanks again for your diligence in writing an excellent article. I look forward to future correspondence and future opportunities to provide the right conduit to help end users and vendors achieve the interoperability in automation that we've come to expect from the consumer electronics world.
Thomas J. Burke,
President and Executive Director
OPC Foundation