Control Sheet No. 5

  • Steering development of software and hardware projects used and funded by several laboratories
  • The Extreme Power of Developing With FPGAs: Has Digital Electronics Become a Simple Programming Exercise?

In this issue:

Steering development of software and hardware projects used and funded by several laboratories

Very often the accelerator control system community finds itself solving a similar problem many times independently, with every lab implementing its own solution. People working on these projects of course see this happening and are motivated to collaborate – they share code, split the development effort for new functionality among several labs, etc. However, such collaborations face many challenges; to name just a few:

Who will be committed to drive the development of the common project, not just at the beginning, when the enthusiasm is high, but also later, in the dull bug fixing phase?

This person needs a lot of experience in the domain in order to be able to steer the project in the direction that is most beneficial for the whole community of users.

How can the responsible person influence developers from sites other than her own to always work in the best interest of the project as a whole?

For example, consider the following situation: at site A, there is a pressing need to add something to the common code that is relevant only to site A, and the developers from this site do not want to go through the burdensome process of getting the OK from the community. So they just add “this one feature”.

Who will be in charge of getting additional funding for the project and of taking care of the necessary accounting?

For these and many more reasons, such collaborations carry a large risk of failure. In this article we provide two examples of collaborations that are succeeding. One is a software product called Visual DCT, which has been going strong for eight years. The other is the development of White Rabbit, a multi-lab, multi-company effort to come up with an Ethernet-based network that can guarantee sub-ns synchronization in more than 2000 stations with typical fiber lengths on the order of 10 km. To make this kind of open hardware development easier, the Open Hardware Repository (OHR) project was created.

Visual DCT

Visual DCT (Database Configuration Tool) provides visual composition of new EPICS databases as well as graphical representation of already existing ones. Today, it is the most widely used tool for visual composition of EPICS databases.

Before Visual DCT, the most widely used tool was CapFast, a tool originally developed for designing electronic circuits. Only later was it adapted so that it could be used to design EPICS databases. This development path brought several drawbacks: some features of EPICS databases were hard to fit into CapFast in the first place, further customizations of the tool came slowly, and finally the pricing and support scheme of CapFast was not attractive for the accelerator community, which is used to working with open source tools. Nevertheless, CapFast provided a robust, professional-grade visual editor that was hard to replace. The investment into a new tool that would provide all the functionality of CapFast was too large for any of the individual laboratories to take on.

Therefore, a gradual approach was taken: Steve Hunt, then the leader of the control group at SLS, funded the development of the first version of Visual DCT and outsourced the development to Cosylab. The scope of the project was to provide a simple GUI builder for EPICS records, with far fewer features than CapFast provided. However, Steve already had a vision of how the tool could be expanded further.

After that, other labs saw the potential of Visual DCT to become the standard for building EPICS databases visually. The development packages were:

  • Visual DCT v1, SLS, 2000
  • Visual DCT v2, SLS, 2001
  • Package A, ANL, 2002
  • Package B, DLS, 2002
  • Debug Package, PSI, 2003
  • CapFast to Visual DCT Converter, JLAB, 2003
  • Package D1, ORNL, 2004
  • Package D2, DLS, 2004
  • Design study of CSS and Visual DCT integration, DESY, 2006
  • Support package, ORNL, 2006
  • Visual DCT Package E – Spreadsheet view, SLAC, 2007
  • Support package, SLAC, 2008
  • Package F, BNL, 2009
  • Visual DCT for EPICS v4, BNL, 2009

During the lifetime of Visual DCT, several people have been in charge of steering its development. All are experienced and respected members of the EPICS community, representing sites that use Visual DCT in everyday operation. At first, the responsible person was Steve Hunt from SLS, later John Maclean from APS, and then Nick Rees and Emma Shepherd from DLS. The role of the representative from the EPICS community is to make sure that new developments of Visual DCT are steered in the right direction. They are responsible for collecting and prioritizing requirements submitted by members of the EPICS community. For each development package, part of the development budget went to address the funding lab's specific needs in Visual DCT, and another part (usually the larger one) went into developments that were useful for all sites that use Visual DCT. This part also covered ongoing support and bug fixing.

Open Hardware Repository and White Rabbit

In the case of hardware, duplication of effort in different labs - and even inside the same lab - is an even bigger problem than in the case of software. One possible cause is that schematic diagrams and PCB layouts are inherently harder to share than code. The non-open nature of many commercial developments does not help either, and even open designs suffer from the closed schematic and PCB file formats of commercial design tools.

In order to overcome these problems and pave the way for a fruitful open hardware scene, CERN's Hardware and Timing section teamed up with Cosylab to develop the Open Hardware Repository* (OHR). The main aims of OHR are:

  • To avoid duplication of efforts by publishing designs in an easily exportable way.
  • To improve quality by the well-known mechanism of peer-review.
  • To rationalize work split: do what you do best, let others do the rest.
  • To make communication among labs and companies easier, especially in the frame of complex collaboration projects.

The first project to use OHR's services was White Rabbit. CERN, GSI and Cosylab - among others - are involved in this project, and already today one can browse the OHR and find schematic diagrams, HDL and firmware for a proof-of-concept version of the White Rabbit Switch. Mailing list archives are also there to be consulted, making the incorporation of new team members a painless process.

One very important part of the OHR philosophy concerns the role of companies. One can pay them to collaborate during the design stage, and they are certainly key partners when it comes to producing, testing, selling and supporting hardware. The results of this collaboration should be open for everyone - including companies themselves - to benefit from.

Conclusion

For the case of Visual DCT we can answer the questions from the introduction:

Who will be committed to drive the development…? In the case of Visual DCT these were the responsible persons from the EPICS community. They are experienced, motivated and able to get agreement in the community.

How can the responsible person influence developers from sites other than her own …? For Visual DCT the answer was that Cosylab, a commercial company, was in charge of the development of most packages. In this manner, the responsible person had a strong influence on the development, since Cosylab would not get paid if the deliverables were not acceptable.

Who will be in charge of getting additional funding for the project and of taking care of the necessary accounting? For Visual DCT, both the community responsible and Cosylab were motivated to get the funding. The responsible persons had good contacts with the relevant decision makers in the community.

Open hardware collaborations have the same risks as their software counterparts, and the measures that increase the chances of success are very similar: motivated labs and companies, a design of general enough interest, and a well-thought-out business model, not only for the design phase but also for production, testing and commercialization.

Despite the challenges, collaborative effort is possible if managed well. The results are well worth the risks, since complex, well-tested and widely used products developed in this manner are much better than anything a single site could produce on its own.

*Open Hardware Repository: OHR can be used freely for collaborative hardware projects. Go to the site, read the manifesto, and contact Javier Serrano if you want to share your project in OHR. 

 

The Extreme Power of Developing With FPGAs: Has Digital Electronics Become a Simple Programming Exercise?

The programmability (or rather configurability) of FPGAs brings extreme power. But there is also an extreme lure: they make people think that anybody with a little programming experience can develop complex digital electronics. In a way, they are similar to sports cars: the more power they give you, the less forgiving they are of even small mistakes. And just like sports cars, FPGAs are not meant for the average programmer (driver) to develop fast digital electronics (drive races), but are a tool for experienced electronics developers. Or, in an analogy the SW people will understand: the difference between switching some flip-flops with VHDL and developing a real-world FPGA application is like the difference between making a nice Web page with Flash and a full-fledged, high-performance and scalable eCommerce service. No matter how easy the initial example is, the learning curve for FPGA development quickly becomes very steep.

Here are some typical problems that require a lot of experience - experience that software people and physicists, both untrained in electronics, just don't have:

Dealing with signals, not values: in SW, a zero is a zero and a one is a one, unmistakably. In FPGAs, there are setup and hold times, etc., all of which have to be taken into account not only in the design of the electronics board, but also in the programming of the FPGA, in particular when crossing clock domains.
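
As a hedged illustration of the last point, here is a minimal VHDL sketch of the classic two-flip-flop synchronizer used when a single-bit signal crosses from one clock domain to another; the entity and signal names are our own, chosen only for this example.

  library ieee;
  use ieee.std_logic_1164.all;

  -- Minimal two-flip-flop synchronizer: a single-bit signal coming from
  -- another clock domain is registered twice in the destination domain,
  -- so a metastable first stage has a full clock period to settle before
  -- the rest of the logic sees it.
  entity sync_2ff is
    port (
      clk_dst  : in  std_logic;  -- destination-domain clock
      async_in : in  std_logic;  -- signal generated in another clock domain
      sync_out : out std_logic   -- safe to use in the clk_dst domain
    );
  end entity sync_2ff;

  architecture rtl of sync_2ff is
    signal stage1, stage2 : std_logic := '0';
  begin
    process (clk_dst)
    begin
      if rising_edge(clk_dst) then
        stage1 <= async_in;  -- may go metastable here
        stage2 <= stage1;    -- has settled by the time it reaches stage2
      end if;
    end process;
    sync_out <= stage2;
  end architecture rtl;

Note that this only works for single-bit, slowly changing signals; multi-bit data needs a different treatment, as discussed further below.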

Time dependence: SW is a series of logically connected actions, without the need for exact relative timing. This is because the CPU clock is much faster than the typical timescale of software events. In FPGAs, correct timing is the most essential aspect, which results in a series of issues:

  • difficult debugging: a tracer destroys the original time structure, therefore debugging is like an observer in quantum physics - it changes the outcome
  • actions must be precisely timed
  • communication between modules that are physically distant on the silicon may take longer than a clock period, destroying synchronization (a common mitigation is sketched right after this list)
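
A common cure for the last point, sketched below purely as an illustration (the entity, the data widths and the two-stage split are our own assumptions), is to break a long combinational path into pipeline stages so that every register-to-register hop fits within one clock period:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  -- Illustrative only: a multiply-accumulate path split into two register
  -- stages, so each stage has to meet timing on its own.
  entity mac_pipelined is
    port (
      clk    : in  std_logic;
      a, b   : in  unsigned(15 downto 0);
      c      : in  unsigned(31 downto 0);
      result : out unsigned(31 downto 0)
    );
  end entity mac_pipelined;

  architecture rtl of mac_pipelined is
    signal product : unsigned(31 downto 0) := (others => '0');
    signal c_dly   : unsigned(31 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        -- stage 1: the multiplier alone
        product <= a * b;
        c_dly   <= c;            -- keep c aligned with the delayed product
        -- stage 2: the adder alone, one clock cycle later
        result  <= product + c_dly;
      end if;
    end process;
  end architecture rtl;

The price is an extra clock cycle of latency, which in turn changes the relative timing of everything downstream - precisely the kind of side effect that has no counterpart in ordinary software.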

Interdependence of logic blocks: in SW, it is easy to add another "elseif" statement to an already existing "if". In an FPGA, a few additional logic statements can completely alter the original functionality:

  • adding an independent block may disrupt the timing constraints in an otherwise fully functional block, due to the changed timing and placement on silicon
  • as a consequence, it is not enough to just write a few test cases; one must test all possible corner cases (see the testbench sketch right after this list)
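
Exhaustive or near-exhaustive testbenches therefore pay off much sooner in FPGA work than in software. The toy sketch below (the add4 entity and all names are invented for this illustration) drives a small block through every input combination and checks each result:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  -- Tiny combinational adder, defined here only to keep the testbench
  -- self-contained.
  entity add4 is
    port (
      a, b : in  unsigned(3 downto 0);
      s    : out unsigned(4 downto 0)
    );
  end entity add4;

  architecture rtl of add4 is
  begin
    s <= resize(a, 5) + resize(b, 5);
  end architecture rtl;

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  -- Testbench that walks through every input combination instead of a few
  -- hand-picked cases.
  entity add4_tb is
  end entity add4_tb;

  architecture sim of add4_tb is
    signal a, b : unsigned(3 downto 0) := (others => '0');
    signal s    : unsigned(4 downto 0);
  begin
    dut : entity work.add4 port map (a => a, b => b, s => s);

    stimulus : process
    begin
      for i in 0 to 15 loop
        for j in 0 to 15 loop
          a <= to_unsigned(i, 4);
          b <= to_unsigned(j, 4);
          wait for 10 ns;
          assert to_integer(s) = i + j
            report "mismatch for " & integer'image(i) & " + " & integer'image(j)
            severity error;
        end loop;
      end loop;
      report "exhaustive test finished" severity note;
      wait;  -- stop the simulation
    end process;
  end architecture sim;

For real designs the input space is of course too large to enumerate, but the spirit is the same: cover the corner cases, not just the happy path.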

Interfaces to other electronics: typically, 90% of the time in FPGA development is spent on high-speed interfaces to PCI, Ethernet, RAM, etc. - the logic itself (even the DSP) is the easy part. As this logic is the only part that SW-trained people consider, they are easily lured into a false belief of simplicity. Typical challenges with interfaces are, for example:

  • manipulation of data driven by different clocks from different sources is very tricky, e.g. a fast serial line receiving data
  • shifting data between differently clocked structures (say, from a 10 MHz to a 15 MHz bus) may result in setup and hold time violations of flip-flops, i.e. flip-flops becoming metastable at irregular intervals. Now debug this! (One common mitigation is sketched right after this list.)
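
One widely used answer to this, shown here only as a sketch (the entity, the generic and the assumption that transfers are infrequent are ours), is to keep the data word registered in the source domain and pass a single-bit request toggle through a two-flip-flop synchronizer; the destination domain samples the word only after the toggle has safely arrived:

  library ieee;
  use ieee.std_logic_1164.all;

  -- Hedged sketch of a toggle handshake for moving a data word between two
  -- unrelated clocks. It assumes transfers are infrequent compared to both
  -- clocks, so data_reg is guaranteed stable while the toggle crosses.
  entity cdc_handshake is
    generic (WIDTH : positive := 8);
    port (
      clk_src   : in  std_logic;
      src_data  : in  std_logic_vector(WIDTH - 1 downto 0);
      src_valid : in  std_logic;   -- pulse: capture src_data
      clk_dst   : in  std_logic;
      dst_data  : out std_logic_vector(WIDTH - 1 downto 0);
      dst_valid : out std_logic    -- pulse: dst_data holds a new word
    );
  end entity cdc_handshake;

  architecture rtl of cdc_handshake is
    signal data_reg             : std_logic_vector(WIDTH - 1 downto 0) := (others => '0');
    signal req_toggle           : std_logic := '0';  -- flips once per transfer
    signal req_sync1, req_sync2 : std_logic := '0';  -- two-flip-flop synchronizer
    signal req_sync3            : std_logic := '0';  -- extra stage for edge detection
  begin
    -- Source domain: capture the word and flip the request toggle.
    src_side : process (clk_src)
    begin
      if rising_edge(clk_src) then
        if src_valid = '1' then
          data_reg   <= src_data;
          req_toggle <= not req_toggle;
        end if;
      end if;
    end process;

    -- Destination domain: synchronize the toggle and detect its edges.
    dst_side : process (clk_dst)
    begin
      if rising_edge(clk_dst) then
        req_sync1 <= req_toggle;
        req_sync2 <= req_sync1;
        req_sync3 <= req_sync2;
        dst_valid <= req_sync2 xor req_sync3;  -- one-cycle pulse per transfer
        if (req_sync2 xor req_sync3) = '1' then
          dst_data <= data_reg;                -- data_reg is stable by now
        end if;
      end if;
    end process;
  end architecture rtl;

For continuous streams or when back pressure is needed, a dual-clock FIFO is the usual choice, but the underlying rule is the same: never sample multi-bit data directly across clock domains.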

Some easy rules of thumb to remember when planning to develop with FPGAs are:

  • once the usage of silicon gets above 20-30%, the constraints get much tougher and the complexity gets much worse
  • the faster the FPGA clock, the more one has to optimize
  • increase the amount of testing: in SW there should typically be one tester per four programmers; for FPGAs, the ratio should be 1:1, and for those who want to develop ASICs, the ratio is four testers per developer
  • don't rely just on high-level tools like Matlab or LabVIEW FPGA, but use them for simple jobs: while they provide blocks for most interfaces and logic operations for proofs of concept, their performance and in particular their memory usage are far from optimal - we are talking about factors of 10-100, just like in the compilation of high-level languages. But as opposed to computer RAM, the available silicon is, at least for now, still a precious commodity.
