Architecture Series, Part 8: Process and Tools
By Russell Goyder PhD | January 13, 2013

I think this will be the final post in this series. I have covered some important but somewhat dry aspects of designing an analytics library, but have said very little about finance and it's high time I did! It has been all about constructing the pipe, not deciding what flows through it. The next series of posts will be about precisely this - the information flow needed for a generic approach to calculating price and exposure of derivatives.

But before that, I'd like to end with some miscellaneous musings on what constitutes "best practice" in software development. This also gives you an idea of how we go about the business of delivering production code here at FINCAD.

Test early and often

If I had to have a mantra, it would be this. It applies at all scales of activity, from checking that the few lines of code you just wrote compile, to tracer bullets for big systems. Here are some examples:

  • Write the interface and documentation first and apply some form of review. (There are plenty of good code review tools that support remote, asynchronous reviewing, automatic addition of reviewers and the like.)
  • Make it a "one-click" operation to build everything on all platforms. Make your build farm large enough to support the throughput that arises from your quants/developers building a lot.
  • The build fails if the test suites (unit, functional, product examples, etc.) don't pass.
  • Warnings are errors in all compilations.
  • Commit to revision control frequently.
  • Don't think of review as being only for code, and don't leave it until the end. There should be a process of feedback on code, documentation, testing, examples, interface and usability aspects and so on, from the appropriate roles and on an appropriate timescale - which boils down to roughly whenever a significant change is made.

Incidentally, branch-based development with routine use of a multi-platform build farm means that you can take a relaxed view of certain topics which often give rise to unnecessary friction and noise, such as which editors, IDEs, operating systems and so on people use to develop software. The fundamental requirement is that all builds pass on all platforms in the farm before a branch is merged into the main line/trunk.

Automate aggressively

Fundamentally, this is about writing code that writes and runs code. Examples include:

  • specifying a library's API and reference documentation as metadata, implementing it in C++, then generating everything else that allows it to be called from various platforms and environments, such as Excel, MATLAB, C, C++, Java, C#, Python, etc. (see the sketch after this list).
  • the automation of coding conventions as pre-commit hooks, most just at the level of syntax (grep and friends), the rest with a tool like clang.
  • the auto-generation of IDE project files (Visual Studio for example) as a build product.
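
To make the first bullet concrete, here is a toy sketch of what "code that writes code" can look like: a function's interface is described once as data, and wrappers for other environments are emitted from it. The metadata layout, the bond_price example and the register_function call are invented for illustration only; they are not FINCAD's actual tooling or API.

```cpp
// Toy metadata-driven code generation: describe an interface once,
// then emit wrapper code for other environments from that description.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Param {
    std::string type;  // C type exposed at the boundary
    std::string name;
    std::string doc;
};

struct FunctionSpec {
    std::string name;
    std::string doc;
    std::vector<Param> params;
};

// Emit a plain C declaration for the given spec.
void emit_c_header(const FunctionSpec& f, std::ostream& out) {
    out << "/* " << f.doc << " */\n";
    out << "double " << f.name << "(";
    for (std::size_t i = 0; i < f.params.size(); ++i) {
        if (i) out << ", ";
        out << f.params[i].type << " " << f.params[i].name;
    }
    out << ");\n";
}

// Emit a (hypothetical) registration snippet for a spreadsheet add-in.
void emit_addin_registration(const FunctionSpec& f, std::ostream& out) {
    out << "register_function(\"" << f.name << "\", \"" << f.doc << "\", "
        << f.params.size() << " /* arguments */);\n";
}

int main() {
    FunctionSpec spec{
        "bond_price",
        "Price a fixed-coupon bond from a discount curve.",
        {{"double", "coupon", "Annual coupon rate"},
         {"double", "maturity", "Time to maturity in years"},
         {"double", "flat_rate", "Flat discount rate"}}};

    emit_c_header(spec, std::cout);
    emit_addin_registration(spec, std::cout);
}
```

Because every target environment is generated from the same specification, supporting a new platform means writing one more emitter, not re-wrapping and re-documenting every function by hand.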

Part of a well-automated system is to treat code written for the build (and tests) as "first class" code that requires careful design consideration, not an afterthought.

Keep it DRY

Don't repeat yourself - or SPOT (single point of truth). This is "software 101", but it bears repeating. One inevitable consequence is that any well-designed library in C++ makes extensive use of template meta-programming.
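
As a minimal illustration of the single-point-of-truth idea (plain generic programming rather than full metaprogramming, and not code from any particular library), a single function template below replaces what would otherwise be near-identical copies of a present-value calculation for each numeric type. The same definition could, for example, be instantiated with an automatic-differentiation type to obtain sensitivities without duplicating the pricing logic.

```cpp
// One definition serves every numeric type: a single point of truth
// instead of near-identical copies per type.
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

// Present value of (time, amount) cash flows under a flat
// continuously-compounded rate. Real can be double, long double, or a
// user-defined numeric type that provides exp and arithmetic operators.
template <typename Real>
Real present_value(const std::vector<std::pair<Real, Real>>& cashflows,
                   Real rate) {
    using std::exp;  // let ADL find exp for user-defined numeric types
    Real pv{0};
    for (const auto& cf : cashflows) {
        pv += cf.second * exp(-rate * cf.first);
    }
    return pv;
}

int main() {
    // Two cash flows: 5 at t = 1 year and 105 at t = 2 years, 3% flat rate.
    std::vector<std::pair<double, double>> cashflows{{1.0, 5.0}, {2.0, 105.0}};
    std::cout << present_value(cashflows, 0.03) << "\n";
}
```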

Agile, but not too agile

"Agile" is a buzzword that can mean quite a few things to different people. In my experience,  the manner in which one plans and organizes the development of production code does not stem from some external "theory" or "system", but arises from capable individuals making sensible decisions appropriate to the problem at hand. This results in agility, but not necessarily Agility! I suspect there is quite a lot of money to be made by bundling up a lot of sensible ideas and lending it some mystique by means of some jargon.

In the extreme, Agile has no long-term vision; there is just the backlog and the next iteration. It is in part a reaction to the other end of the spectrum - rigid big-release planning and waterfall development. In practice, businesses need a delivery commitment on some timescale that runs into months, if not years. But we must also react to the issues of the day. The fundamental constraint from physics is that systems should be observed and modified on their characteristic timescale for change.

Some general considerations apply. Communication is key, so use a good tool for it, like JIRA. Snap decisions can hurt you in the long run, so take the time to give design the thought it deserves. Above all, relax, breathe deeply and trust in your ability to exercise common sense. Educate yourself on what the software-process "revolutions" of today and yesterday have to say, but be mindful of the distinction between them and truly original ideas.

About the author
Russell Goyder PhD
Director of Quantitative Research and Development | FINCAD

Russell Goyder, PhD, is the Director of Quantitative Research and Development at FINCAD. Before joining FINCAD’s quant team in 2006, he worked as a consultant at The MathWorks, solving a wide range of problems in various industries, particularly in the financial industry. In his current role, Russell manages FINCAD’s quant team and oversees the delivery of analytics functionality in FINCAD’s products, from initial research to the deployment of production code. Russell holds a PhD in Physics from the University of Cambridge.