
When every failed trade will have a price tag


The avoidance of failed trades is now business critical, not a nice-to-have, writes Paul Bowen, corfinancial.

Every failed trade will have a price tag from February 2021.

Although the Central Securities Depositories Regulation (the “CSDR”) came into effect on 17 September 2014, its operational impacts on buy- and sell-side firms are only now coming into focus. In particular, it is the Settlement Discipline Regime (SDR) element of CSDR that will have the most significant impact on market participants.

The SDR reform stipulates that trading venues and investment firms must implement measures to prevent and address failures in the settlement process. Every failed trade will cost businesses. Where a settlement fail does occur, CSDs must impose cash penalties on failing participants. The size of the penalty is determined by the number of business days beyond settlement date that a transaction remains unsettled. Over and above this, there will also be a mandatory buy-in process for failed trades, with the associated costs passed on to the defaulting party.
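The mechanics of a daily-accruing cash penalty can be sketched in a few lines. Note that the daily rate below is purely illustrative — the actual CSDR penalty rates vary by instrument type and are set out in the regulation's delegated acts — and the function name is an assumption for illustration only.

```python
# Illustrative sketch of a daily cash-penalty calculation for a failed trade.
# The 0.0001 (1 basis point) daily rate is an ASSUMPTION for illustration;
# real CSDR penalty rates depend on the asset class of the instrument.

ILLUSTRATIVE_DAILY_RATE = 0.0001  # 1 bp per business day (assumed)

def settlement_penalty(trade_value: float, business_days_late: int,
                       daily_rate: float = ILLUSTRATIVE_DAILY_RATE) -> float:
    """Penalty accrues for each business day the trade remains unsettled."""
    if business_days_late <= 0:
        return 0.0
    return round(trade_value * daily_rate * business_days_late, 2)

# A 5m trade settling 3 business days late accrues 1,500 at the assumed rate:
penalty = settlement_penalty(5_000_000, 3)
```

The key point the sketch captures is that the charge is linear in both trade value and days outstanding, so large positions left unsettled quickly become expensive.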

In other words, increased settlement discipline and the avoidance of failed trades is now business critical, not a nice-to-have.

Slipped through the net

One may ponder, when so much automation has been successfully introduced into middle and back office processes over the years, how it is that failed trades have slipped through the net? How have failed trades become the last bastion of non-automation?

It could be argued that current penalties are not significantly punitive or material to attract focus and investment in this process. With the introduction of the new penalty structure, however, non-compliance could result in significant monetary and reputational cost. In a sense, therefore, failed trades were not the highest priority; SDR will have a substantial impact as it formalises the settlement process and gives failures added significance.

Another factor is SWIFT messaging. This communication method has been available for many years yet has not been adopted in its entirety. The custodians instead have often provided failed trade reports, either through portals or daily spreadsheets. The problem here is that all those portals and spreadsheets are different, with the result that the buy- and sell-sides would assign multiple resources to manually process numerous failed trade reports and rationalise them as best they could. The custodians had little incentive to introduce standardisation, hence manual workarounds were commonplace.
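The manual rationalisation problem described above can be made concrete with a sketch: each custodian reports the same kind of fail under a different column layout, and operations teams must map every layout into one canonical record before they can work a single fails queue. All field names below are hypothetical.

```python
# Hypothetical sketch: two custodians report failed trades with different
# field names; a per-custodian mapping normalises each row into one
# canonical record so fails can be processed in a single workflow.

CUSTODIAN_FIELD_MAPS = {
    "custodian_a": {"TradeRef": "trade_id", "SetDate": "settlement_date",
                    "Ctpy": "counterparty"},
    "custodian_b": {"reference": "trade_id", "value_date": "settlement_date",
                    "broker": "counterparty"},
}

def normalise(source: str, row: dict) -> dict:
    """Translate one custodian-specific row into the canonical record."""
    mapping = CUSTODIAN_FIELD_MAPS[source]
    return {canonical: row[raw] for raw, canonical in mapping.items()}

a = normalise("custodian_a",
              {"TradeRef": "T1", "SetDate": "2021-02-03", "Ctpy": "BRK1"})
b = normalise("custodian_b",
              {"reference": "T2", "value_date": "2021-02-03", "broker": "BRK2"})
```

In practice each new custodian means a new mapping to maintain — which is exactly the standardisation burden that SWIFT messaging was meant to remove.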

Operational challenges

In operational terms, there are several obstacles that the industry must overcome in order to effectively deal with SDR.

Firstly, the industry must minimise the cost impact of buy-ins. Trade failures will often occur in illiquid markets where there is a shortage of stock. If a firm is receiving buy-ins in illiquid markets, there could potentially be large price differentials at a ‘buy-in auction’ at the end of the day. In these circumstances, the premiums levied by empowered sellers are generally significant, leaving the counterparty at fault with a painful price variance.

Secondly, firms must reduce the manual processing stemming from SDR. This labour-intensive administration is likely to include extensive effort associated with buy-ins. It’s not just a case of sending an email; asset managers, for instance, may need to start cancelling trades, rebooking trades, pursuing the brokers for all the fines and so on. The introduction of SDR will mean that businesses will have to deal with far more manual workarounds.

Thirdly, operational teams must prove that they are in control of the settlement process. These teams will now need to report in more detail to senior management on unsettled trades and counterparty exposure. One of the key observations from the Lehman collapse was the lack of information regarding consolidated counterparty exposures. The new SDR regime, while imposing penalties, has the benefit of reducing settlement exposure and cash management for all parties involved in the trade cycle.
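The consolidated counterparty view that operations teams now need to report can be sketched as a simple aggregation over unsettled trades — the kind of figure that, as noted above, was missing during the Lehman collapse. The trade records here are hypothetical.

```python
# Sketch: consolidating gross exposure on unsettled trades by counterparty.
# Trade records are invented for illustration.

from collections import defaultdict

def counterparty_exposure(unsettled_trades):
    """Sum the value of unsettled trades per counterparty."""
    exposure = defaultdict(float)
    for trade in unsettled_trades:
        exposure[trade["counterparty"]] += trade["value"]
    return dict(exposure)

trades = [
    {"counterparty": "BRK1", "value": 2_000_000.0},
    {"counterparty": "BRK2", "value": 500_000.0},
    {"counterparty": "BRK1", "value": 1_250_000.0},
]
exposure = counterparty_exposure(trades)  # BRK1: 3,250,000; BRK2: 500,000
```

The aggregation itself is trivial; the hard part, as the article argues, is having the unsettled-trade data in one place and close to real time.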

In summary, the most significant changes being made through SDR are fining firms, introducing rigour around buy-ins and reporting the worst offenders. All firms need to proactively prevent trade failures, understand their exposure to unsettled trades and protect their company’s reputation. SDR means asset managers and brokers must move nearer to real-time monitoring, compelling them to move up the settlement cycle and adopt a pre-settlement mentality.

Just looking at trade fails is not solving the problem.

Paul Bowen
Senior Executive – Operations corfinancial

Post trade transparency: shining a light into dark corners


An effective post-trade programme is now business critical, writes David Veal, Senior Executive - Client Solutions at corfinancial.

Although the financial crisis is now well over ten years behind us, many people still vividly remember its immediate chaos. Firms were scrambling to understand counterparty exposures and settlement risk, along with a key requirement to know the exact state of asset and cash positions. Executed transactions sitting between trade date and settlement date fell into deep voids where the status, even post-settlement date, was not absolutely clear. It took many firms days, sometimes even weeks, to piece together a conclusive picture of the actual situation.

With multiple industry utilities, a plethora of systems and transactions recorded in multiple mediums (including paper tickets ‘enhanced’ with coloured marker pens, faxes and spreadsheets), it swiftly became clear that such an environment only works when the outside world cooperates. Post-crisis, sanctions were introduced to impose responsibilities and liabilities upon firms, with the aim of giving firms more control over their transaction data. Equally, lucidity in post-trade processes supports the maintenance of IBOR platforms, which also require near real-time position data. Can a company therefore survive without a transparent post-trade system? The regulators would say ‘absolutely not’.


The upcoming enhancements to the Central Securities Depositories Regulation (CSDR), which must be implemented by February 2021, push these responsibilities even further. In particular, the Settlement Discipline Regime (SDR) within CSDR means that where a settlement fail does occur, CSDs must impose cash penalties on failing participants, as well as compulsory buy-ins after a short time. This change will only add to the reputational damage for parties that are unable to apply effective measures and controls.

I would argue that a better level of post-trade transparency brings challenges but also opportunities for the industry as a whole. Depending on the definition of transparency, additional controls and processes improve the ability to monitor the settlement status of a transaction and reduce exposure to settlement risk.

It’s time to shine a light into the dark corners.

Can APIs save legacy systems?

corfinancial Opinion

Legacy systems are a familiar conundrum for the buy-side industry. Firms have to weigh up the perils and potential costs associated with attempting to remove ageing architecture (with inherent operational risks, limited scalability and restricted interoperability) against the potential benefits of having a more modern, efficient application.

More often than not, the pain of replacing obsolete yet pervasive software is too much to bear. Any firm that wants to supplant a large legacy system will probably take three years to do so and spend an inordinate amount of money – with the ever-present danger that by the time it has been substituted, the new system will have become almost a legacy in itself, as business needs and market drivers evolve.

Furthermore, who defines what is a ‘legacy system’? Technologists will assert that you should be using the latest and greatest application on the market, with the most up-to-date language. Meanwhile systems administrators might imply that anything with 32-bit architecture is now redundant. Risk professionals might suggest that some old code that is not patched with the latest security updates should be replaced. There are also resource-restricted legacy systems: for example, where it is exceptionally difficult (or expensive) to find a programmer for dated languages such as COBOL. With little agreement on what even constitutes legacy software, what then is the way forward for buy-side firms? Could APIs provide an answer?

The primary purpose of an effective API is to get software systems talking to other systems. The process involves technology-agnostic messages written in an open standard (like XML) that most systems can readily consume or interrogate. A good example of APIs in everyday life is the Citymapper app. When a user enters a target destination in, say, London, the app will call the Transport for London API as well as others, such as Uber’s, aggregating data from multiple sources into one user interface. Money comparison sites operate in much the same way.
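The aggregation pattern described above can be sketched as follows: one user request fans out to several provider APIs, and the results are merged into a single comparable list. The provider functions here are stubs standing in for real HTTP calls (for example, to the Transport for London or Uber APIs); their shapes are assumptions for illustration.

```python
# Sketch of the aggregation pattern: one request fans out to multiple
# provider APIs and the results are merged into one comparable list.
# The provider functions are stubs standing in for real HTTP calls.

def tfl_routes(origin, destination):
    return [{"provider": "TfL", "mode": "tube", "minutes": 28}]

def uber_routes(origin, destination):
    return [{"provider": "Uber", "mode": "car", "minutes": 22}]

def aggregate_routes(origin, destination):
    """Call every provider and return one list sorted by journey time."""
    providers = [tfl_routes, uber_routes]
    results = []
    for provider in providers:
        results.extend(provider(origin, destination))
    return sorted(results, key=lambda r: r["minutes"])

options = aggregate_routes("Bank", "Paddington")
```

The design point is that the aggregator never cares how each provider is implemented — only that each returns data in an agreed, consumable shape, which is exactly the promise (and, as the next section argues, the unsolved standardisation problem) of APIs.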


Yet within asset management an API is a very loose term. Different perspectives on the same data mean that finding consistency in APIs is almost impossible. APIs can be technology-agnostic, but they can also be either proprietary or open source. The way firms communicate within the protocol can also vary according to different vendors. This lack of standardisation in APIs is certainly one issue that the software industry needs to overcome if APIs are to reach their potential.

For most asset management firms, data is king; the ability to acquire data in as close to real-time as possible is where the power lies. The method by which asset managers retrieve their data therefore has to be the easiest, quickest and most consistent way possible. For this reason they are increasingly turning to APIs at every opportunity, rather than continually building local interfaces between systems. The asset management industry is therefore moving ahead in its development and utilisation of APIs.

Within our own post-trade processing system (Salerio), we have used APIs to help our clients overcome legacy issues in order to present information regarding the meaningful events within the trade lifecycle. Asset managers want to capture when a trade transitions through its key states, allowing them to report to investors/clients or conduct further analytics as close to real-time as possible.

Furthermore, an outsourcer (securities services/investor services company) might just want to pull data out of a post-trade system to use within a client portal service, enabling its clients to view what is happening with their trades in near real-time. By utilising an API, the outsourcer can access the data in a consistent manner without being concerned about data extracts or interface delays.
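The outsourcer scenario can be sketched as a simple query against a consistent lifecycle feed. To keep the example self-contained, an in-memory list stands in for a real HTTP endpoint; the event names and fields are assumptions for illustration, not Salerio's actual API.

```python
# Hypothetical sketch: a portal queries one consistent API for trade
# lifecycle status instead of parsing data extracts. The in-memory list
# stands in for a real HTTP endpoint; event names are assumed.

LIFECYCLE_EVENTS = [
    {"trade_id": "T1", "status": "matched", "timestamp": "2021-02-01T10:02:00Z"},
    {"trade_id": "T1", "status": "settled", "timestamp": "2021-02-03T09:15:00Z"},
    {"trade_id": "T2", "status": "failed",  "timestamp": "2021-02-03T16:00:00Z"},
]

def latest_status(trade_id):
    """Return the most recent lifecycle event for a trade, or None."""
    events = [e for e in LIFECYCLE_EVENTS if e["trade_id"] == trade_id]
    return max(events, key=lambda e: e["timestamp"]) if events else None

status = latest_status("T1")  # the 'settled' event
```

Because every consumer asks the same question the same way, the outsourcer's portal, a client dashboard and an analytics process can all share one access path to the data.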


On the other hand, an asset manager may wish to analyse the data more closely, perhaps to automate processes using AI to minimise user interactions associated with trade exceptions. With a consistent API and data set, buy-side companies can concentrate resources on the ‘value add’ rather than the mechanics of accessing the data in the first place.

Will APIs provide the means for buy-side companies to finally ‘kill off’ their tired, old technology? I would argue that APIs enable them to ‘insulate’ legacy systems, rather than incurring the enormous cost and disruption of trying to replace them. If a buy-side firm can utilise APIs between legacy systems and the outside world, this will remove much of the pain and risk associated with dated technology. Through the effective deployment of APIs, companies can isolate systems while exposing the data. They can have easier access to the data without having to worry about the nature of the underlying technology, because whatever consumes or interacts with the data is technology-agnostic with respect to other systems.
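The ‘insulation’ idea can be sketched as a thin translation layer: an API facade reads the legacy system's native format and exposes it as a plain, technology-agnostic structure, so consumers never touch the legacy format at all. The fixed-width record layout below is invented for illustration.

```python
# Sketch of an API facade over a legacy system: a fixed-width record
# (layout invented for illustration) is translated into a plain dict,
# insulating every consumer from the legacy format.

LEGACY_LAYOUT = [("trade_id", 0, 8), ("isin", 8, 20), ("quantity", 20, 30)]

def legacy_record_to_dict(record: str) -> dict:
    """Expose one legacy fixed-width record as a plain dictionary."""
    out = {}
    for field, start, end in LEGACY_LAYOUT:
        out[field] = record[start:end].strip()
    out["quantity"] = int(out["quantity"])
    return out

raw = "T0000001GB0002634946      5000"
trade = legacy_record_to_dict(raw)
```

If the legacy system is ever replaced, only this translation layer changes; everything built against the facade keeps working — which is precisely the life-extending effect argued for above.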

Rather than sounding the death knell for legacy systems, APIs can extend their life-span. In uncertain days such as these, it may be the pragmatic way forward for many institutions with ageing architecture.

It’s not cloud solutions or nothing


Cloud-based applications are being touted as the catch-all solutions for firms with software infrastructure problems. You’ll see experts – be they software vendors, consultants or anyone else with a vested interest – lining up to sing sweetly about the benefits of moving everything to cloud-based computing. Mention that there are degrees of choice, and that chorus becomes a cacophony of “Nos.” Common opinion seems to be that if you’re looking for buy-side fintech, you’re only looking for something hosted by a vendor in the cloud – but the choice may not be quite so cut-and-dried. What works for one financial institution doesn’t always work for all of them. Each has to find the solution that benefits its business the most.

Option one: continue to retain or deploy platforms on internal networks

This might mean keeping away from the cloud altogether – investing in locally installed servers might be the preference for firms still wary of cloud computing, whether over concerns about security or a desire to retain greater control over their systems. Admittedly, this stance requires substantial investment in physical network architecture (and that hardware needs to be replicated offsite for the added security of back-up and disaster recovery), but even so, it is not necessarily the wrong choice.

Option two: move to a centralised private hosting service provider

An alternative would be a private cloud service – essentially a reserved area of a cloud service provider’s environment, or even directly owned datacentres. Many of the new systems being used by a buy-side firm could be installed in this private, cloud-based virtual machine world. Operating in such an environment provides benefits in terms of centralisation, management, monitoring, upkeep, maintenance and connectivity. It’s a solution that is proving popular with many buy-side companies updating their IT infrastructure, as it enables many of their key operational platforms to be hosted and managed in the same cloud environment – provided the vendor has software solutions that can be deployed in these virtual locations. Never forget that in complex system-based infrastructures, the platforms have to be able to connect and communicate with each other.

Option three: use cloud hosting services from the system provider

The third way is for vendors to offer their own cloud-based hosting service. Using cloud computing services such as Microsoft Azure or Amazon Web Services (or even their own private cloud services), they can offer a virtual machine environment to buy-side firms. This can be tailored to the needs of the buy-side companies in terms of software, system architecture, and services.

This option shifts the onus for managing and monitoring the system to the vendor, with many problems for the buy-side firm being handled by the vendor through its direct relationship with the cloud computing service provider. Additional benefits include client systems being directly managed by the vendor’s experts, plus greater opportunities to stay current with system enhancements and upgrades. These kinds of systems do not take care of themselves, so simply setting one up and walking away doesn’t work. But there is also an opportunity to reflect the cost of the provision and upkeep of cloud-based software infrastructure in pricing models that have a more defined service element.

It is certainly possible that this third way will become more and more popular with buy-side firms, even though it is not necessarily the cheapest option. The appeal lies in its simplicity as part of a long-term strategy to move away from supporting in-house technology as cloud-based services flourish.

For most buy-side firms, a greater shift towards the cloud is inevitable. But a singular option is never going to be a winner. Quite simply, people need choice. Some will only want to dip a toe in the water, while others will dive right in. Some will still want to wait and see – they need to be sure that any troubled waters they perceive have calmed before they take the plunge.

No matter which decision they take, buy-side firms should be demanding flexibility, as their business models are likely to evolve as quickly as the technology that underpins them. Solution vendors are putting themselves in a position to provide cloud-based services that they may never have offered before to help facilitate this change. Buy-side firms will eventually be looking to extract maximum value from their software vendors – and that means having, as one of their primary choices, the advantages of cloud-based fintech without the overheads of running and maintaining such a system.