Everybody else is speculating, so maybe I should do the same and add one more possibility to the pile.
A server failover could possibly have started all of this on Wall Street; then of course panic set in and the algorithms made it worse.
If this happened to coincide with the high trading volumes that day, an interruption in the failover procedure could have delayed data getting over to the mirrored secondary server, and with the speed at which stocks are traded today, things got mucky. From what I have read, it appears that once everything loaded back into memory, stocks ended up back where they were. The automated trades that had been configured simply worked with whatever was in memory at the time, too - a mathematical mess.
I would not think it could have been the trading software, but rather a data transfer problem once the server failover kicked in. You see it now and then on the web, if you know what you are looking for: an occasional blink on a web site, which may be a sign a server failover has taken place. Darn servers go down all the time (grin).
Again, it just looks like something interrupted getting the cached data over - and who knows how many clusters were involved here.
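To make the stale-data idea above concrete, here is a little toy sketch (all names and numbers are made up for illustration - this is not how any real exchange works). A primary keeps the live prices, a mirrored secondary trails it by a couple of updates, and when the primary dies mid-stream, whatever reads the secondary sees an old quote:

```python
# Hypothetical sketch: a primary server streams price updates to a mirrored
# secondary that runs a few updates behind. If failover happens mid-stream,
# anything reading the secondary trades on a stale quote.

class MirroredStore:
    def __init__(self, lag=2):
        self.primary = {}
        self.secondary = {}
        self.lag = lag        # how many updates the secondary runs behind
        self.pending = []     # updates written but not yet replicated

    def write(self, symbol, price):
        self.primary[symbol] = price
        self.pending.append((symbol, price))
        # replicate only updates older than `lag` writes
        while len(self.pending) > self.lag:
            sym, px = self.pending.pop(0)
            self.secondary[sym] = px

    def fail_over(self):
        # primary dies before the pending updates ship; they are lost
        self.pending.clear()
        return self.secondary

store = MirroredStore(lag=2)
for px in (40.00, 39.50, 0.01):   # the last two trades never reach the mirror
    store.write("ACME", px)

live = store.fail_over()
print(live["ACME"])  # -> 40.0, while the dead primary last saw 0.01
```

The point of the toy is just the gap: the primary and the mirror disagree, and automated logic that only ever sees one of them has no way to know.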
I keep reading that they are looking at their trading software, and I don’t think that’s where the answer lies; the fat-finger theory sounds kind of silly. At least, that was what was said in the Wall Street Journal yesterday.
Also recently in the news was an upgrade to some new processors, the Intel Xeon 7500 series, which unleashes some real awesome power. As you can see from the clips below, additional algorithms can be hosted on a single box. Intel is putting out really great computing power these days!
“The Intel Xeon 5600 processor is the successor to the Intel Xeon processor 5500 series introduced in early 2009, and can deliver up to 60% performance improvement over the Xeon 5500. The extra speed comes mainly thanks to Intel's industry-leading manufacturing technology which enables smaller (32 nanometres in length), faster and more energy-efficient transistors (the building blocks of processors) thus improving performance and reducing energy consumption of the chip overall. This makes them ideal for deployment in trading environments as the faster processing speeds will feed directly into the demands of latency sensitive trading activities.
Anthony Warden, ED Global Head of Algorithmic Trading and Quant Prime Broker Technology, Nomura, said: "Handling bursts in trade traffic is a key factor in DMA and low latency, high throughput trading platforms. By enabling us to spread processing over more cores, the Intel Xeon processor 7500 series will allow us to handle larger peaks more efficiently without any increase in response times. The increased core count of the 7500 series also allows us to host more trading algorithms on a single box which can communicate through local memory, thereby reducing latency."
Those clients who have quickly utilized the features of these technologies are gaining immediate commercial benefits in trading and the high data processing capability over multicore feeds through to improved risk management and analytics at the enterprise level which, as a result of greater supervisory scrutiny, will see increasing focus across the market in the coming years."
Failover is used on servers the way RAID mirroring is used on PCs for backup: when one goes down, the other steps in without missing a beat.
“Automatic fall-over is a term associated with content management and contingency planning for system disaster and recovery. It is an automated process where in case of a system or server failure, the control of data and applications will "automatically fall over" to a secondary system or server. Automatic fall over procedures results in less costly downtime while eliminating the possibility of inducing system failure during recovery, and usually require very little user input.”
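The "automatically fall over to a secondary" idea in the quote above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration - the server names, the health flag, and the routing logic are all made up, and real systems use heartbeats and consensus rather than a try/except:

```python
# Minimal sketch of automatic failover: route requests to the primary,
# and drop to the secondary the moment the primary stops responding.

class Server:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def fail_over(primary, secondary, request):
    try:
        return primary.handle(request)    # normal path
    except ConnectionError:
        return secondary.handle(request)  # automatic failover

primary = Server("primary", healthy=False)  # simulate the crash
secondary = Server("secondary")
print(fail_over(primary, secondary, "GET /quote"))
# -> secondary served GET /quote
```

Note that the switch itself is instant here; in the real world, the delay the quote alludes to comes from detecting the failure and getting the secondary's data current - which is exactly the window I am speculating about.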
In short: is trading speed outpacing what can be accomplished when a server failover needs to take place? And with new Xeons in place, did we have a little competition for processor threads if not everyone was running the same processor power? If all were not running the same processors (the new Xeons), we may have had perfect-storm potential. Yes, the processors are supposed to improve latency, but when they are combined with existing technology and design, sometimes nobody knows what could happen. We know what is supposed to happen and what is not supposed to happen, but computer science is not 100%, much as we want it to be.
Data center design is definitely getting more complex everywhere today, and specs on hardware/software configurations can be detailed and tedious. So there’s my feeble 2-cent speculation; hopefully the forensic tech folks on Wall Street will iron it all out for us soon, and if nothing else, I got some traffic to the blog today (grin). BD