FasterTrading 2008 and the fragmentation of the UK and European Trade Markets

After a busy day at work in our City Office last Tuesday (the 4th) I was able to get along to Intel’s FasterTrading 2008 event, hosted by an ex-Sun guy, Nigel Woodward, over at the HQ of the IET at the magnificent Savoy Place.

I really enjoyed the event, finding it one of the best vendor run, market facing, events I’ve yet gone to. Nigel and the Intel team had secured some top-notch speakers who really knew their stuff, and it was a pleasure to listen to them.

I’ve broken down what I captured against each speaker and their pitches below (as a reference, here’s more about the speakers and their individual pitches too), and I’ve used this post as an opportunity to write up an overview of the current state of the European and UK Finance and Market Trading industry, as well as the topic of the event itself: the current technology ‘Arms Race’ around increasing Trade speeds, and the impact of increased Trade speeds on trading itself.

The post is divided into the following sections, which mainly follow the course of the event itself (apart from the last section, which contains my reflections and thoughts about the event and its content).

  • A Market Overview
  • The New Market(place) Maker
  • The Standard(s) Bearer
  • The Bank Chief Architect
  • The Vendor
  • The Sponsors Panel
  • My Thoughts

A Market Overview

George Andreadis, Head of AES Liquidity Strategy at Credit Suisse, presented “Challenges of the trading arms race”.

Our first speaker covered the fragmentation in the trading markets across the UK and Europe, and the reasons behind it, giving an exemplary overview of both.

For a variety of reasons, the number of Market Trading systems and Marketplaces across Europe (not just in the UK) has been rapidly expanding, fragmenting the European trading landscape and leading to greater competition. He went on to talk about:

  • Markets in Financial Instruments Directive (MiFID)
  • Dark Liquidity and Dark Pools
  • Low Latency and Low Latency Trading
  • Smart Order Routing (SOR) versus Smart Order Execution (SOE)

Markets in Financial Instruments Directive (MiFID)

At a very basic level, the Markets in Financial Instruments Directive (MiFID) is a legal agreement between the European Economic Area nations to manage and regulate trading in an open manner. It has led to the ‘opening up’ of the European Trading Markets, and ultimately to the current fragmentation seen in those markets.

Dark Liquidity and Dark Pools

Dark Liquidity generally means liquidity which is not revealed; when collected together it forms Dark Pools. These are off-market collections of Dark Liquidity where there is a desire to trade without display on order books. Because the liquidity is not revealed, potential market participants cannot determine market depth. This is useful for traders moving large numbers of shares without revealing themselves to the open market, and is generally used to try and reduce market impact when trading large orders (that would otherwise move the market if known).
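To make the mechanics concrete, here is a minimal sketch of a dark pool as a matching engine that never publishes its book and crosses orders at the lit-market midpoint. All class and field names are illustrative, not any real venue’s design.

```python
# Minimal dark-pool sketch: resting orders are never published, and
# crosses execute at the lit-market midpoint. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    qty: int

class DarkPool:
    def __init__(self):
        self._book = []            # hidden: never exposed to participants

    def submit(self, order, lit_bid, lit_ask):
        """Try to cross against resting contra orders at the midpoint."""
        mid = (lit_bid + lit_ask) / 2
        fills = []
        for resting in list(self._book):
            if resting.side != order.side and order.qty > 0:
                traded = min(order.qty, resting.qty)
                fills.append((traded, mid))
                order.qty -= traded
                resting.qty -= traded
                if resting.qty == 0:
                    self._book.remove(resting)
        if order.qty > 0:
            self._book.append(order)   # rests hidden; no market-depth signal
        return fills

pool = DarkPool()
pool.submit(Order("sell", 100_000), lit_bid=99.0, lit_ask=101.0)  # rests unseen
fills = pool.submit(Order("buy", 40_000), lit_bid=99.0, lit_ask=101.0)
print(fills)   # [(40000, 100.0)] - a partial cross at the midpoint
```

The point of the sketch is the last line of `submit`: the unfilled remainder rests in a book no participant can see, which is exactly why market depth cannot be determined.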

Low Latency and Low Latency Trading

Low Latency is currently a very ‘hot’ topic for Traders, as “a 1-millisecond advantage in trading applications can be worth $100 million a year”. It is focused on driving down Trade transaction latency as much as possible.

Low Latency Trading is about utilising that reduction in transaction time to gain financial advantage. The tremendous differences in transaction latency across the world have led to demand for dealing with systems that offer the lowest latency possible. In fact, one of the most striking comparisons he gave was of latency across the world: latency in the US was as good as 5 ms, the UK was circa 20 ms, whilst the European average was as poor as 40 ms.
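As a rough illustration of how such round-trip figures are gathered, here is a sketch that times a simulated order submission per venue; the venue names, latency figures, and `send_order` stand-in are purely illustrative.

```python
# Sketch of measuring order round-trip latency per venue.
# send_order() is a stand-in for a real submission; the delays simulate
# the network + matching figures quoted above.
import time
import statistics

def send_order(venue):
    time.sleep(venue["latency_s"])   # simulated network + matching delay
    return "ACK"

venues = {"US": {"latency_s": 0.005},
          "UK": {"latency_s": 0.020},
          "EU": {"latency_s": 0.040}}

for name, venue in venues.items():
    samples = []
    for _ in range(5):
        t0 = time.perf_counter()
        send_order(venue)
        samples.append((time.perf_counter() - t0) * 1000)  # milliseconds
    print(f"{name}: median round-trip {statistics.median(samples):.1f} ms")
```

In practice venues are benchmarked with hardware timestamping rather than wall-clock timers, but the shape of the measurement (many samples, report a median rather than a single reading) is the same.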

Smart Order Routing (SOR) versus Smart Order Execution (SOE)

He also spoke about Smart Order Routing (SOR) as opposed to Smart Order Execution (SOE).

In the past, Trading was focused very much on the quality of the trade, and this is commonly called Smart Order Execution (SOE). When you have only one market to go to, such as the London Stock Exchange (LSE), you need to extract as much financial value as possible out of getting the trade ‘right’.

With the emergence of new Trading Markets, financial value can be delivered by understanding which trading marketplace will give you the best reward for your trade. This is very much about who to trade with, and is typically called Smart Order Routing (SOR).

SOR looks to answer the question: “Who do I send my trade to so as to make the most profitable trade?”

SOE answers the question: “How and when do I position and transact my trade to make the most profitable trade?”

Obviously Low Latency Trading impacts SOR, and the two are currently in a symbiotic relationship.
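That symbiosis can be sketched as a toy router: it picks the venue maximising expected proceeds after fees, penalising slower venues because the quoted price may drift before the order arrives. The venue figures and the drift model are illustrative assumptions, not real market data.

```python
# Smart Order Routing sketch: choose the venue that maximises expected
# proceeds after fees, penalising slow venues for price drift in transit.
# All venue figures and the drift rate are illustrative.

def route_sell(qty, venues, drift_per_ms=0.001):
    def expected_proceeds(v):
        # The quoted bid may have drifted by the time the order arrives.
        effective_bid = v["bid"] - drift_per_ms * v["latency_ms"]
        return qty * effective_bid - v["fee"]
    return max(venues, key=expected_proceeds)

venues = [
    {"name": "LSE",   "bid": 100.02, "latency_ms": 20, "fee": 3.0},
    {"name": "Chi-X", "bid": 100.01, "latency_ms": 7,  "fee": 1.0},
]
best = route_sell(10_000, venues)
print(best["name"])   # Chi-X: the slightly worse quote wins on latency
```

Note how latency enters the routing decision directly: with a high enough drift rate, the faster venue beats one showing a nominally better price, which is the symbiosis described above.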

The New Market(place) Maker

Peter Randall, CEO of Chi-X, presented: “New market trading models & technology”.

Our next speaker gave us an overview of why latency was so poor in the UK (and across Europe) compared with the trading latency found in the US. His opinion was that we in the UK had computerised legacy, human-focused business processes, and never really moved away from supporting that model, whereas the Americans had delivered computerised trading systems which did not rely upon emulating human / organic processing.

The new exchanges like Chi-X took the American approach and started with processes designed for electronic systems from the outset. This was a principal reason why Chi-X’s trading latency was as low as 7 ms: just 2 ms off the American (NASDAQ) average, a whole 13 ms faster than the UK (LSE) average, and a whopping 33 ms faster than the European average!

Beyond fulfilling the desire to trade in Dark Pools, Peter posed the rhetorical question “What has enabled the transformation of the European markets and the growth of these Market trading systems?”, answering it:

  1. The Markets in Financial Instruments Directive (MiFID) – an agreement across Europe for co-ordinated and harmonised regulation in the financial industry
  2. Financial Information eXchange (FIX) Protocol – common interface standard
  3. Market Data – available and free
  4. Rebating – low transaction costs pass on financial gains to both the buyers and sellers – the “maker and taker” benefit
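The “maker and taker” rebating model in point 4 can be shown with some simple arithmetic: the venue charges the taker of liquidity a fee and passes part of it back to the maker who posted the resting order. The rates below are illustrative, not any real venue’s schedule.

```python
# "Maker and taker" fee sketch: the taker of liquidity pays a fee, the
# maker who posted the resting order earns a rebate, and the venue keeps
# the difference. Rates are illustrative (pence per share).

TAKER_FEE    = 0.30   # charged to the aggressive order
MAKER_REBATE = 0.20   # paid to the resting order

def venue_economics(shares):
    taker_pays  = shares * TAKER_FEE / 100
    maker_earns = shares * MAKER_REBATE / 100
    venue_keeps = taker_pays - maker_earns
    return taker_pays, maker_earns, venue_keeps

print(venue_economics(100_000))   # (300.0, 200.0, 100.0)
```

The design choice is the point: because posting liquidity earns money rather than costing it, both sides of the trade have an incentive to route flow to the new venue, which is how rebating helped these marketplaces grow.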

I think that there is a lot here for the other (data) trading systems to take on board, and it points to the attributes any such system needs to have to be successful, apart from the obvious requirement for a genuine “Business Need”:

  1. Regulation – agree boundaries and demarcation (i.e. who is responsible for what, when)
  2. Open Standards – well regulated, maintained and managed, with an inclusive approach to defining the standard
  3. Shared Status – where the current status is known and there is a common benchmark to judge against
  4. Good Value for Money – for all concerned

The Standard(s) Bearer

Kevin Houstoun, CTO of BidRoute, presented: “Winning the race to liquidity & new services”.

Kevin is a popular figure in the Finance industry, acting as an evangelist for the FIX protocol; he is in an ideal position to do so as the co-lead of the FIX Protocol’s Global Technical Committee, and he also leads the repository working group and the web services working group.

He gave an overview of BidRoute as a supplier of SOR based solutions, explaining more about SOR architectures.

An audience member representing the Securities Technology Analysis Center (STAC) brought up some good points about the move towards standardised SOR based trading architectures and the increase in data available around the performance of trading systems (which, of course, STAC is driving).

The Bank Chief Architect

Tony Bishop, former Chief Architect Wachovia IT (sponsored jointly by Verari and Wachovia), presented: “High-speed trading: specialized solutions”.

Tony gave possibly one of the best overviews I’ve seen of matching functional requirements (in this instance those of trades) to technology. It was excellent as it worked on two levels: the overview of the relationship between functional requirements and technology was good, and so was the match of the technology to those business requirements. He’d managed this in one slide, and frankly I was very impressed.

He continued on to talk about the techniques he implemented at Wachovia to achieve their dominance in terms of trade speeds in the US, which included:

  1. Standardised trade system design focused around ‘Pods’
    • Proximity of the components making up a discrete functional component matters, because it affects speed
    • Speed of roll out / deployment matters, to allow exponential growth to be managed
  2. Standardised Data Centre design focused around Shipping Containers (such as Sun’s Modular Data Center, aka project ‘Black Box’)
    • Another mechanism to speed up roll out / deployment
    • A high level of flexibility, not being encumbered by having to build a new DC (and get permission to do so)
  3. Avoid full ISO protocol stacks; use technology which talks directly to the hardware if and where possible
    • Offload standard processing to a Daughter Board chipset
    • Put specific, proprietary bank algorithms on Daughter Board chips
  4. Use Java to rapidly deliver functionality, setting up an environment where it ran quicker than natively compiled C++
    • Put everything Java based in memory, each device having 80 GB of memory
    • Turn off Deterministic Garbage Collection (GC), allowing the process to build up uncollected memory for as long as the 80 GB ‘pool’ is enough to get it through the trade cycle, forcing Garbage Collection only once the cycle is done
  5. Move storage to online and solid state mediums
    • Tony suggested that circa 30% of all DC floor space is now taken up by storage, and that it is imperative to shrink this footprint so as to maximise the compute power available.
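
The GC technique above (no collection during the latency-sensitive window, one forced collection between cycles) translates directly to Python’s `gc` module, so here is a sketch of the pattern. Tony’s setup was Java; this only illustrates the shape of the technique, and `run_trade_cycle` is a hypothetical name.

```python
# Python analogue of the "no GC during the trade cycle" technique:
# disable automatic collection while latency matters, then pay the GC
# cost once, in the quiet window between cycles.
import gc

def run_trade_cycle(orders):
    gc.disable()                 # no collector pauses mid-cycle
    try:
        results = [order * 2 for order in orders]   # stand-in for real work
    finally:
        gc.enable()
        freed = gc.collect()     # one forced full collection, off the hot path
    return results, freed

results, freed = run_trade_cycle(range(3))
print(results)   # [0, 2, 4]
```

The trade-off is the same as in the 80 GB Java heap case: memory grows unchecked during the cycle, so the approach only works if the pool is provably large enough to last until the forced collection.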

I suppose, unsurprisingly, I enjoyed Tony’s presentation the most, having spent much of my career in IT Architecture roles, with one of my main interests being how technology is implemented to meet genuine business needs.

In conversation with Tony after the event, he made these points:

  • Measurement mattered, to justify belief and sponsorship. They used OpsTier at Wachovia to provide a vertical view through their (multiple) n-tier architectures and applications, and a horizontal view across the applications, infrastructure, network and storage.
  • Sun is one of the greatest Software Companies in the World, and rather than give away our software and charge for our hardware, we should do the opposite: charge for our software and give our hardware away for free (I have to admit I’ve heard this before, but usually it’s said quite ‘tongue in cheek’, which I didn’t feel with Tony’s version).

The Vendor

Pavel Yegerov, Chief Architect Financial Services at Intel, presented: “Acceleration techniques for the front-office infrastructure”.

Basically this was an overview of the Intel roadmap, well presented and put together. I found some of the joint work that the Intel Finance team are doing within the industry interesting (probably Nigel to thank for at least some of that; he championed the early work around FIX at Sun circa six or seven years ago).

The Sponsors Panel

There was representation from the following companies (sadly I didn’t catch the names, just who they represented):

  • Fujitsu-Siemens
  • BEA (now an Oracle subsidiary of course)
  • Merrill Lynch
  • Goldman Sachs
  • Fidelity

I don’t recall much from this part of the event, although the chap from BEA did speak about Complex Event Processing (CEP), suggesting that the WYSIWYG tools for Business Process Engineering (BPE, and its close relative, Business Process Re-engineering, or BPR) could be easily enabled by the use of his company’s toolset (I presume this would be what was called WebLogic Integration, or WLI, running over the WebLogic Business Process Manager, or BPM, which itself runs over the BEA J2EE Application Server).

He suggested that this would become an acceptable technology due to the move of ‘everything’ to memory.

Frankly I thought this extremely questionable, as that tool, like many of its ilk, abstracts away so much technical complexity in a bid to be simple enough for Business Analysts to use that non-functional capabilities such as speed, performance, and reliability are impacted detrimentally.

I’ve come across this issue before, and it is specifically to do with over-abstraction of the solution, to the point that it ignores, or rather does not take into consideration, the technical constraints.

It can be most obviously demonstrated by comparing two sequence diagrams: one showing how the Business Analyst / Designer thinks it works, based on the Functional calls it makes, and one showing how it actually works once the Non-Functional calls are added. Even if you don’t understand sequence diagrams, the contrast should at least show that there is a lot more going on than the person designing these processes is aware of.

Here is an example Sequence Diagram, seen at the Functional Level.

Here is the same Sequence Diagram, seen at the Non-Functional Level.

As you can plainly see there is a lot going on ‘under the hood’, and this should not be dismissed purely to gain reductions in delivery / implementation time. I will do a post just on this subject in the near future.
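The same functional-versus-actual contrast can be shown in a few lines of code: one “functional” call triggers several non-functional calls the process designer never drew. All of the function names here are hypothetical, invented purely for the illustration.

```python
# The functional view vs. what actually runs: a single call to
# place_trade() also triggers non-functional calls (security, compliance)
# that never appear on the Business Analyst's diagram.
calls = []

def traced(fn):
    """Record every call, so we can see the full 'sequence diagram'."""
    def wrapper(*args, **kwargs):
        calls.append(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@traced
def authenticate(user):
    return True                       # non-functional: security

@traced
def audit_log(event):
    pass                              # non-functional: compliance

@traced
def place_trade(user, symbol, qty):
    authenticate(user)
    audit_log(("order", symbol, qty))
    return f"filled {qty} {symbol}"   # the only step on the functional diagram

place_trade("alice", "VOD.L", 500)
print(calls)   # ['place_trade', 'authenticate', 'audit_log']
```

The Business Analyst’s view is one arrow (`place_trade`); the trace shows three, and in a real system the hidden calls (marshalling, persistence, retries) would outnumber the functional one many times over.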

My Thoughts

Here’s a selection of the things that came to mind during and after the event…

Some thoughts on HPC across differing Industries

Don Grantham, EVP for our Global Sales and Services (GSS) Organisation (basically Don is the head of the entire ‘Field’, or ‘Customer’, engineering) said in his keynote to the UK organisation: “measure what you care for, because you’ll come to care for what you measure”. This underlies my thoughts on the different HPC systems across the world, and across the differing industries.

I find that the different HPC systems across differing industries are defined by the measurements attributed to them. For HPC systems in education and research this is typically teraflops and the like, the stuff of the Top500; HPC systems in Finance are typically measured in trade latency, transactional fulfilment and algorithmic speed; whilst for Google and the other Internet based ‘cloud compute’ HPC grids the focus is successful responses in a given time period.

As the functional requirements are different, even though they share the same sort of non-functional scale and performance requirements, the architecture and topology across HPC systems is different, leading to Industry Specific Grids and HPC.

I suppose this is pretty obvious, but I find it’s the obvious, core attribute stuff that people lose sight of and seemingly forget. The Infrastructure between the differing HPC types, as delineated by industry, can have a lot of similarities at the infrastructure layer (speed, performance, technology components, network, topology, i.e. how it’s fitted together), however they differ immensely when it comes to the application / logical and functional layers.

Some thoughts on messaging systems across differing Industries

I’m also looking to expand my series of messaging system overviews beyond the ones on messaging systems in Government, to cover other industries, including: trade land (the stock exchanges and their ilk, the Market Data trading systems, FIX, FIXml, Swift and SwiftNet, etc.), utility land (Moresco, the DTC, etc.), energy land (gas, oil, and other energy and resource trading exchanges), retail (B2B, etc.), manufacturing (supply and demand), media (news, and market data too if you’re a Reuters), and Telco (customer and service transfer and transactions).

What these messaging systems point out is that everyone is sharing to enable business, and that inclusion and participation matters.

Some thoughts on speeding up trading systems Straight Through Processing (STP)

As well as re-engineering the trading marketplaces to avoid legacy Business Processes, another way to improve speed would be to retire the local data dictionaries currently used during trade related Straight Through Processing (STP).

Don’t translate into and out of local proprietary data dictionaries; instead, use FIX as the native data dictionary throughout a trade transaction.
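
As a sketch of what “FIX as the native data dictionary” means in practice, here is a parser that works on FIX tag=value messages directly, with no remapping into a local schema. The sample message is simplified and illustrative (no header, body length, or checksum fields).

```python
# Sketch of operating on FIX tag=value messages directly, rather than
# translating into and out of a local proprietary data dictionary.
SOH = "\x01"   # the FIX field delimiter

def parse_fix(raw):
    """Parse a FIX message into a {tag: value} dict - no local remapping."""
    return dict(field.split("=", 1) for field in raw.strip(SOH).split(SOH))

# A simplified, illustrative New Order Single:
# 35=MsgType (D), 55=Symbol, 54=Side (1 = buy), 38=OrderQty
msg = SOH.join(["35=D", "55=VOD.L", "54=1", "38=500"]) + SOH
order = parse_fix(msg)
print(order["55"], order["38"])   # VOD.L 500
```

Downstream logic then reads FIX tags (55, 38, 54) end to end; the translate-in / translate-out steps, and the latency they add at every hop of the STP chain, simply disappear.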