Control Sheet No. 7

In this issue:

On Creativity in Control System Design

According to Wikipedia, »creativity is the ability to generate innovative ideas and manifest them from thought into reality; the process involves original thinking and then producing.«

Given this definition, control systems of large experimental physics facilities provide an excellent breeding ground for creative endeavours. The facilities themselves are designed to transcend the state of the art of technology, and they frequently run into difficult engineering challenges that require out-of-the-box thinking to solve. At the same time, the perceived cost of »thinking and producing« in the field of control systems is particularly low: »just sit down and code« (with the »sitting down« part being optional).

Apart from the functionality that a control system needs to provide (i.e., the ability to monitor and control process variables), control systems of large experimental physics facilities also need to be scalable (with the number of process variables on the order of 10⁵ to 10⁶) and long-lasting (a decade of construction followed by a couple of decades of operation). Since the capacity of an individual control system engineer is limited, it is inevitable that the work of many will ultimately need to be integrated into a coherent whole.

Imagine the process of integration if each of the engineers used a different technique to control a subsystem. The first subsystem using PLCs of one vendor, the second PLCs of another. One subsystem running the VxWorks operating system, another Linux, a third also Linux, but with a slightly patched kernel. One subsystem using Windows for the operator interfaces, the second MacOS. One subsystem relying on the EPICS control system framework, the second on TANGO. To make such subsystems exchange information, gateways would need to be built – a significant development effort that might also have implications for performance and availability, while requiring configuration and maintenance. And integrators would need very wide and very deep knowledge of the plethora of technologies they'd be troubleshooting. Given that integration activity cannot be parallelized much (i.e., it won't progress much faster if more people are assigned to it), any unforeseen events could postpone the completion date of the overall project.

And unless constrained, subsystem control engineers will apply creativity to solve their problems: pick the technologies they are most comfortable with (or like the most), and invent solutions to those problems for which they are unaware that solutions exist (or against which they have a bias).

A significant relief to the integrators would be to ensure that all subsystems are built to adhere to common design principles, and to standardize the interfaces through which the subsystems interact. Thus, the solution is to constrain the subsystem control design to preempt creativity where creativity is not only unnecessary, but could lead to »impedance mismatches« during integration. There are two things one can do:

  • Document the guidelines for the design of subsystems, to which subsystem engineers will need to conform. Among other things, the guidelines specify which equipment is to be chosen, which naming conventions apply, and which communication protocols are to be used.
  • Package a development environment and tools that enforce conformance to the prescribed guidelines. In return for compliance, the development environment should improve the productivity of subsystem control engineers.
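As a toy illustration of how packaged tooling can enforce such guidelines, the Python sketch below checks process variable names against a made-up »system-subsystem:device:signal« pattern. The regular expression is purely our assumption for illustration, not the actual ITER naming convention.

```python
import re

# Hypothetical naming rule, loosely in the spirit of a
# "SYSTEM-SUBSYSTEM:Device:Signal" process-variable convention.
PV_NAME = re.compile(r"^[A-Z]+(-[A-Z0-9]+)*:[A-Za-z0-9]+:[A-Za-z0-9]+$")

def check_pv_names(names):
    """Return the names that violate the (hypothetical) convention."""
    return [n for n in names if not PV_NAME.match(n)]
```

A tool like this, run automatically in the packaged development environment, rejects non-conforming names long before integration time.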

However, these need to be done skilfully. Providing unrealistic, hard-to-follow, or too-lengthy-to-read guidelines could result in them not being applied. And delivering an archaic development environment that does not address the actual needs of development would result in it not being used; subsystem engineers would then find their own solutions to their problems.

The above approach has been adopted at ITER, where a detailed Plant Control Design Handbook has been prepared and will be used for procurement of subsystems. In addition, EPICS has been packaged for automated installation on the Red Hat Enterprise Linux operating system, which allows subsystem controls engineers to set up their development environment within minutes in a manner that fully conforms to ITER's standards.

It appears that the work done by ITER can be leveraged to a significant extent elsewhere as well. For example, the European Spallation Source (ESS) is also considering a similar approach, and in the absence of additional constraints (e.g., on which control system framework or operating system to use), the decisions made by ITER are as good as any – in spite of the fact that ITER is a fusion facility, whereas ESS is a spallation source.


Figure 1: Split of responsibility between a subsystem and the central system. The subsystem interacts with other systems and with the central system through a few well-defined interfaces.


Introducing Visual DCT3

Cosylab is proud to announce that the new version of Visual DCT, version 3.0, is available as a beta and can be downloaded from SourceForge (visualdct). As opposed to previous releases of VDCT, which consisted of upgrades and maintenance releases, version 3 is a complete rewrite, practically from scratch. VDCT3 comes with dramatic changes in its architecture and design, supporting new EPICS features and allowing a richer user experience. We would like to thank our partners at Brookhaven National Lab for their support and collaboration on the project.

VDCT2 was based on a nine-year-old architecture that, although it had seen many changes over that time, made adding new features increasingly difficult. In particular, dramatic changes such as additional options for loading and saving databases (IRMIS) and the introduction of new data models (pvData) could no longer be “squeezed in”.

The main improvement is definitely the architecture itself. It supports different data models; currently implemented are EPICSv3 and pvData. With this, VDCT can work with data sets that include information beyond the EPICS database (e.g., driver-specific information or higher-level object definitions), making it the tool of choice for designing EPICS control systems.

The modular architecture also allows for integration with other tools. With IRMIS, CSS (Control System Studio) and SDD (Self-Description Data at ITER) becoming more and more widely used, it wouldn't make sense to continue with a version of VDCT that couldn't work with them.

In the new version, the database model and the GUI are cleanly separated, and the graphics are based on proven technologies that did not exist nine years ago. This allows us to leverage third-party development rather than doing everything (e.g., link routing) from scratch, as we previously had to.

VDCT3 is based on Java 1.6 and on the NetBeans Visual Library as the drawing framework. We compared the NetBeans Visual Library with the Eclipse GEF (Graphical Editing Framework), but in the end decided that the Visual Library was the better option, since it is based on plain Java and Swing and does not require the native libraries bundled with Eclipse RCP. This makes the framework more flexible and more open to users and developers. The visual part can also be cleanly rewritten in GEF (to integrate with CSS) without affecting other parts of the application.

As VDCT evolves to better meet the needs of the EPICS community, we would like the EPICS community to reach out as well and support VDCT development. We encourage everyone to use it and give us feedback. Any collaborators and funders are welcome to join us in providing the VDCT that the EPICS community deserves.

Figure 1: Snapshot of a sample database as seen in the graphical perspective of VDCT3

Figure 2: Snapshot of a sample database as seen in the hierarchical perspective of VDCT3


Fiber-optic communication on low-end FPGAs in hard real time

Fiber-optic communication is becoming more and more popular: it offers very high data throughput, spans large distances, and provides galvanic isolation and immunity to electromagnetic interference. Its drawbacks, on the other hand, are a higher cost compared to copper wire, the mechanical weakness of the cable, and the fact that it cannot carry electrical power.

Normally, to design a fiber-optic link one would use either dedicated communication components or FPGAs with high-speed serial interface support, allowing speeds of 5 Gb/s or faster. The role of these components is to serialize and de-serialize data (parallel to serial and vice versa) and to apply an encoding/decoding scheme suitable for a fiber link, all in real time. A minor drawback, however, is that these devices require more complex FPGAs and board routing than the application itself would otherwise need.

But what about a basic FPGA design with modest application and PCB complexity? If one still needs all the benefits of fiber-optic communication and can afford to compromise on bandwidth down to “only” a couple of hundred Mb/s per link, then such a fiber link can also be implemented on low-end (and low-cost) FPGAs that do not otherwise provide high-speed serial links. Moreover, one can have an arbitrary number of links per FPGA.

Figure 1 shows a block diagram of a typical fiber-link communication scheme that is also suitable for low-end FPGAs. A small form-factor pluggable (SFP) transceiver is used to bridge optical and electrical signals. A clock-data recovery (CDR) component recovers the clock and data used by the receiving logic inside the FPGA. Logic inside the FPGA handles the serializing, de-serializing, encoding and decoding of data provided to and from the main FPGA application (user logic). With such a design, the recovered clock can also be used for synchronous operation of one or more (slave) FPGAs; the receiver (slave) then “breathes” with the same clock as the master.

The next thing to choose is the protocol. First, to allow clock recovery and optimal data-transfer conditions, the encoding must feature frequent signal transitions, regardless of the actual data being transferred. Second, we have to look at the contents. If a digital square-wave signal is to be transferred, then bi-phase mark (or similar) encoding is probably the best solution. However, when data is being transferred, mimicking a UART-like data transfer combined with bi-phase mark encoding does not prove efficient. For data transfer over the fiber link, the 8b/10b encoding scheme is the natural choice, and it can be implemented fairly easily in an FPGA.
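To illustrate why bi-phase mark coding suits clock recovery, here is a minimal Python sketch (the function names are ours, not from any FPGA toolchain): the line level toggles at every bit boundary, and a '1' adds a mid-bit transition, so the signal transitions regularly no matter what data is sent. In an FPGA this would of course be a few lines of HDL rather than software.

```python
def biphase_mark_encode(bits, level=0):
    """Bi-phase mark coding: each data bit becomes two half-bit
    cells; the level always toggles at the start of a bit, and
    toggles again mid-bit to encode a '1'."""
    out = []
    for b in bits:
        level ^= 1            # guaranteed transition at every bit boundary
        out.append(level)
        if b:
            level ^= 1        # mid-bit transition encodes a '1'
        out.append(level)
    return out

def biphase_mark_decode(cells):
    """Recover bits: differing half-cells within a bit mean '1'."""
    return [1 if cells[i] != cells[i + 1] else 0
            for i in range(0, len(cells), 2)]
```

Because a transition is guaranteed every bit period even for long runs of zeros, the receiver's CDR circuit always has edges to lock onto.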

Up to this level the link features hard real-time performance (well below the ns scale), but it still lacks a higher-level interface or protocol definition (software people might say: raw data only, no API). An application communication interface is therefore added on top of the 8b/10b encoding scheme. It can, for example, be designed so that certain data is delivered with hard real-time guarantees while other data is delivered on a best-effort basis (thus fully utilizing the remaining bandwidth). This is needed, for example, with multiple slave FPGA applications, where some data (from a centralized source) must be delivered to a multitude of devices at the same moment (~ns) while lower-priority communication must still be possible over the same link.
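One possible shape for such a prioritizing interface, sketched here in Python with hypothetical names (an FPGA would implement this as a multiplexer in logic, not software): each transmit slot carries a hard real-time word if one is pending, otherwise a best-effort word, otherwise an idle symbol such as the 8b/10b comma character.

```python
from collections import deque

IDLE = "K28.5"  # 8b/10b comma/idle symbol keeps the link alive and aligned

class LinkScheduler:
    """Fill each fixed-rate transmit slot: hard real-time words
    always win; best-effort words use whatever bandwidth remains."""

    def __init__(self):
        self.rt = deque()   # hard real-time queue (bounded latency)
        self.be = deque()   # best-effort queue (remaining bandwidth)

    def next_word(self):
        if self.rt:
            return ("RT", self.rt.popleft())
        if self.be:
            return ("BE", self.be.popleft())
        return ("IDLE", IDLE)
```

Because the slot rate is fixed by the link clock, a real-time word queued by the master is always on the wire within one slot, while best-effort traffic can never delay it.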

When deciding on a protocol, note that common multi-layer communication protocols (such as the ubiquitous TCP/IP) do not provide any latency guarantees between the transmitter and the receiver. Their latency depends greatly on current bandwidth usage and network topology (switches, routers…), making them unusable for applications where hard real-time performance of the link is needed. In such cases, designing a custom protocol is the only way to go.

Please contact us if you have any questions regarding Spartan 3/6 based implementation (8b/10b, hard real-time data + lower-priority user data communication link, SFP @ 200 MHz).


Figure 1 – Block diagram of a typical SFP interfacing circuit. 

