Thought Leadership

Evolve or die: Entering the next era of compliance


Manual approaches insufficient for Transparency Directive compliance

Vuyelwa Gongxeka – Business Analyst

Research has found that far too many companies still conduct compliance monitoring manually, which naturally increases the risk of error and reduces efficiency. The situation is further exacerbated by the need to comply with a wide range of global financial regulations across different regions, such as Europe’s Transparency Directive and Undertakings for Collective Investment in Transferable Securities (UCITS), Canada’s National Instrument 81-102 (Investment Funds) and the US Investment Company Act of 1940, to name but a few.

By its very nature, compliance legislation is complicated to interpret, more so when myriad legislative rules must be monitored manually across multiple spreadsheets and embedded macros. Manual compliance monitoring is not only time-consuming and prone to error; it also lacks sufficient audit trails and, crucially, may not provide adequate warning of potential breaches.

Technology has fundamentally changed how business is conducted, and with stricter global compliance requirements it has become an essential tool for ensuring post trade compliance.

By way of example, the Transparency Directive (TD), issued in 2004 and revised in 2013, aims to ensure transparency of information for investors through regular disclosures and the ongoing dissemination of information to the public. Central to the directive is the requirement for shareholders to notify competent authorities and issuers of major holdings of voting rights.

A shareholder acquiring or selling shares must notify the issuer as soon as the acquisition or disposal results in a holding of voting rights that reaches, exceeds or falls below a notification threshold. For example, the German authority BaFin (the Federal Financial Supervisory Authority) sets the thresholds at 3%, 5%, 10%, 15%, 20%, 25%, 30%, 50% and 75%.

In the South African context, the Transparency Directive’s voting-notification requirements can be compared to the Companies Act (2008). Section 122 requires notification when a ‘beneficial interest in sufficient securities of a class’ is acquired or disposed of. In this case, the thresholds are 5%, 10%, 15% and every further whole multiple of 5%.
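
To illustrate the monitoring task, the following is a minimal sketch of how threshold crossings might be detected automatically. The function name, the simplified percentage inputs and the example holding are assumptions for illustration; the threshold lists simply mirror the examples above and are not an authoritative rule book.

```python
# Hypothetical sketch: detect notifiable threshold crossings when a holding of
# voting rights changes. Threshold lists mirror the examples in the text above.

BAFIN_THRESHOLDS = [3, 5, 10, 15, 20, 25, 30, 50, 75]        # % of voting rights
SA_COMPANIES_ACT_THRESHOLDS = [5 * i for i in range(1, 21)]  # 5%, 10%, 15% ... whole multiples of 5%

def crossed_thresholds(previous_pct, current_pct, thresholds):
    """Return the thresholds that were reached, exceeded or fallen below
    when a holding moves from previous_pct to current_pct."""
    crossed = []
    for threshold in thresholds:
        reached_or_exceeded = previous_pct < threshold <= current_pct
        fell_below = current_pct < threshold <= previous_pct
        if reached_or_exceeded or fell_below:
            crossed.append(threshold)
    return crossed

# Example: increasing a holding from 4.2% to 10.5% of voting rights crosses the
# 5% and 10% marks, so a notification would be triggered.
print(crossed_thresholds(4.2, 10.5, BAFIN_THRESHOLDS))  # -> [5, 10]
```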

Because notifiable limits are complex by their nature, compliance demands highly accurate, ongoing interventions – well beyond the capacity of most organisations if approached manually.

To address this challenge, StatPro developed a comprehensive global Transparency Directive rule book, which enables effective and efficient monitoring of thresholds to ensure that risks are minimised and managed in a holistic manner. This pre-packaged rule book, SPC Transparency Directive, caters for the majority of European Union member states and clearly stipulates the limits imposed in each country. Furthermore, the rule book supports the entire compliance life cycle for an organisation, providing automated alerts to manage possible contraventions, with thresholds or breach limits customisable to any percentage of tolerance required. Similar rule books have been developed to address the compliance challenges in other regions and to enable the rapid deployment of an effective post trade compliance solution.

Automation and advanced post trade solutions are essential for supporting compliance, as well as for improving efficiency by removing the mundane, time-consuming and error-prone manual activities that can hamper innovation and growth. In line with the saying ‘If you don’t evolve, you will die’, it has become imperative for businesses to learn to harness technology to underpin their compliance function, in order to remain relevant and competitive.

Data should not be garbage


Sherwin Manjo – Product Owner

The saying “you can’t make a silk purse out of a sow’s ear” has an IT equivalent: “garbage in, garbage out”, or “GIGO” for short. Put simply, GIGO states that the output of any system or analysis is a product of its inputs. This logic applies to a number of fields. In mathematics, for example, if one adds x apples to y apples, the only possible result is x + y apples. Expecting the equation to produce any other fruit is illogical, yet this flawed logic is often visible in the business domain, where we wonder why systems and processes cannot provide high-quality outputs when the quality of the data input is poor, weak or just plain wrong.

The meaning of data

Data, according to information scientists, are symbols that represent properties of objects, events and their environments. In other words, data represents units of measurement without context or reference and is the product of observation. Almost all business organisations collect and store data to support business processes, make decisions and satisfy stakeholder requirements. Raw data increases in value once it has been processed and transformed into meaningful information. In the case of an asset manager, a set of data contains a collection of numbers and letters received from its various source systems. This base data has little to no intrinsic value to the organisation until it has context and meaning. For example, a list of share prices has little value until an investment manager knows which shares they relate to, on what date and time, and in what currency. In this way, information created from processed data is able to answer specific business questions. Poor-quality data, when processed, will produce incorrect or partial information, which is not only of little value but can represent a significant risk with serious consequences. Recent history, such as the Brexit referendum and the US presidential election, has highlighted the dangers of poor-quality data, with almost every pre-election poll forecasting the wrong result.

From GIGO to GIQO

The majority of computer systems used in today’s investment management organisations rely on data to perform key functions. One example is the performance measurement and attribution function. To calculate the performance of a security or portfolio, performance measurement systems use a variety of data, and incorrect data in a performance calculation can lead to a poor investment decision. For a financial instrument such as a security, for example, if the instrument’s price increases from 100 to 150 over a certain period, the simple price performance of that instrument over the period is 50% (price performance for the period = [end price – start price] / start price). If either (or both) of the prices were incorrect, the performance result would also be incorrect. Regardless of the level of sophistication of the performance measurement system, GIGO will apply.

The counterpart of GIGO is “quality in, quality out”, or “QIQO”. Although the relationship between quality inputs and quality outputs is not as strong as in GIGO (a poor process or system can still produce garbage outputs from quality inputs), there is a relationship nonetheless. However, as anyone who has worked in IT for any period will confirm, data is often less than perfect. This begs the question: can computer systems, such as the performance calculation above, be designed to deal elegantly with the inevitable occurrence of poor data? One possible answer involves the use of support tools, controls and processes to handle imperfect inputs. Using the performance measurement example, a practical implementation of this approach could be that when the performance measurement system observes a price move of greater than 20%, a preconfigured rule highlights potentially incorrect data that must be reviewed before further processing can take place. A computer system that can take garbage in, perform or support validation of the data, highlight questionable data, and then allow users to verify, correct and authorise that data is a system that can take garbage in and generate quality out or, to coin yet another phrase, “GIQO”.
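
As a rough illustration of the preconfigured rule described above, the sketch below holds a suspect price move for review before the performance figure is used. The names and the 20% tolerance are taken from the example and are illustrative only, not a description of any particular system.

```python
# Minimal GIQO-style sketch: validate a price observation before using it in a
# simple price-performance calculation. The 20% tolerance is the example above.
TOLERANCE = 0.20

def price_performance(start_price, end_price):
    """Simple price performance for the period: (end price - start price) / start price."""
    return (end_price - start_price) / start_price

def within_tolerance(start_price, end_price):
    """True if the price move is within tolerance; otherwise the data should be
    reviewed, corrected and authorised before further processing."""
    return abs(price_performance(start_price, end_price)) <= TOLERANCE

start, end = 100.0, 150.0
if within_tolerance(start, end):
    print(f"Performance for the period: {price_performance(start, end):.0%}")
else:
    print("Price move exceeds the 20% tolerance - hold for review before further processing.")
```

In this example the 50% move from 100 to 150 would be held for review rather than flowing straight into downstream calculations.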

The technology “Silver Bullet”

Large data sets and data storage issues are typical examples of the data problems organisations face. To solve these, technology solutions such as relational databases, data warehouses, cloud computing and data lakes are promoted as “the answer”. In reality, technology will only ever be part of the solution. Take the case of Big Data. The underlying promise of Big Data is that if an organisation can collect, store and effectively analyse this data, it will translate into better decisions. However, the sheer scale of these large datasets has challenged traditional database technology, with organisations now required to invest in cloud data storage and data lake technology. With the data storage issue resolved, a new problem arises: finding computer systems that can analyse such large datasets. Then, assuming both of these problems have been addressed, there is the subsequent challenge of finding adequately skilled people to mine the data, identify patterns and glean insights from the information. Much like the mythical hydra, as one problem is seemingly “solved”, new problems manifest. Fortunately, and as is typical in technology, a new solution has been offered: artificial intelligence, and machine learning in particular.

Machines need data too

Machine learning, in simple terms, is the science of designing a computer system in such a way that the program is able to improve its performance on a specific task, based on the data presented to it, without being explicitly programmed to do so. In other words, these systems can “learn” how to analyse large sets of data based on patterns identified in that data. Then, when these systems process new data, they are able to adapt to it based on the patterns previously “learnt”.
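
As a loose illustration of the idea, the sketch below uses scikit-learn’s IsolationForest (one possible technique among many, chosen purely for illustration) to “learn” what normal daily price moves look like from made-up historical data and to flag new observations that do not fit the pattern.

```python
# Illustrative only: train a simple anomaly detector on simulated historical daily
# returns and use it to flag new, questionable data points. Data and settings are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
historical_returns = rng.normal(0.0, 0.01, size=(1000, 1))  # roughly 1% daily volatility

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical_returns)

new_returns = np.array([[0.004], [-0.008], [0.35]])  # the 35% move looks like bad data
flags = model.predict(new_returns)                   # -1 = anomaly, 1 = normal
for r, flag in zip(new_returns.ravel(), flags):
    print(f"return {r:+.3f}: {'review' if flag == -1 else 'ok'}")
```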

A good example of machine learning is self-driving or autonomous vehicles. These vehicles make navigational decisions without human input by observing objects within the vehicle’s environment. Objects are categorised and decisions are made about how they are likely to behave, based on large volumes of historical data previously processed. In this way, autonomous cars depend both on the data previously processed and on the data processed in real time, and the consequences of bad or garbage data leading to an incorrect decision could be disastrous.

Data quality: a moving target?

Data quality comprises a number of key aspects, such as accuracy, validity, reliability, completeness, relevance and availability. Under different circumstances or scenarios, these characteristics may have different tolerance levels. With reference to the performance example above, a treasury bond with a 50% price performance over a year is questionable and the data should be investigated, whereas a 50% price change on an unlisted or exotic security over the same period could be seen as acceptable. This highlights the need for a flexible data quality engine or framework that enables the selection and application of the most appropriate rules under different scenarios. Only with this capability in place is GIQO a realistic option.
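
A minimal sketch of this kind of flexible rule configuration is shown below; the instrument types and tolerance values are assumptions for the example, not a prescribed rule set.

```python
# Illustrative tolerance configuration: the acceptable price move depends on the
# type of instrument being checked. All values are assumptions for the sketch.
PRICE_MOVE_TOLERANCES = {
    "treasury_bond": 0.10,        # a 50% move on a treasury bond should be questioned
    "listed_equity": 0.25,
    "unlisted_or_exotic": 0.60,   # a 50% move here may be acceptable
}
DEFAULT_TOLERANCE = 0.20

def needs_review(instrument_type, price_performance):
    tolerance = PRICE_MOVE_TOLERANCES.get(instrument_type, DEFAULT_TOLERANCE)
    return abs(price_performance) > tolerance

print(needs_review("treasury_bond", 0.50))        # True  - investigate the data
print(needs_review("unlisted_or_exotic", 0.50))   # False - within tolerance
```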

Houston, we have a problem!

Once identified, data quality issues can be resolved in a number of ways. Ideally, corrections should occur at the source of the data, although often this is not as easy as it sounds. External data providers are reluctant to make changes that affect multiple parties and in some cases contest that there is a data issue at all. Another option is to cater for the poor-quality data in code. However, the number of permutations required can be exponentially large, and new instances invariably occur.

An effective solution involves a hybrid of these approaches. In this approach, data from source systems is reviewed and corrected before it can negatively impact downstream processes. To achieve this, data is loaded into a staging area or data warehouse prior to further processing, and a flexible data quality rules engine applies the relevant validation rules. This two-stage model allows data cleansing and approval to take place, mitigating the effects of incorrect data. The model depends on data stewards or data owners to review the data, and on a clearly documented process to correct it. The level of control and oversight required is a function of the value of the data to the organisation and the risk that bad data poses to downstream systems. By adopting this approach, technology can enable and support business processes to achieve quality data outputs.
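
A rough sketch of the two-stage model described above follows; the staging structure, rule names and statuses are assumptions for illustration rather than a description of any particular product.

```python
# Illustrative two-stage model: data lands in a staging area, validation rules
# are applied, and records that fail are routed to a data steward for review
# and correction before they reach downstream systems.

def load_to_staging(raw_records):
    return [dict(record, status="pending") for record in raw_records]

def apply_validation_rules(staged_records, rules):
    for record in staged_records:
        failures = [name for name, rule in rules.items() if not rule(record)]
        record["status"] = "needs_review" if failures else "approved"
        record["failed_rules"] = failures
    return staged_records

# Example rules - assumptions for the sketch, not a prescribed rule set.
rules = {
    "price_present": lambda r: r.get("price") is not None,
    "price_positive": lambda r: (r.get("price") or 0) > 0,
}

staged = load_to_staging([{"id": "SEC001", "price": 101.5},
                          {"id": "SEC002", "price": None}])
for record in apply_validation_rules(staged, rules):
    print(record["id"], record["status"], record["failed_rules"])
```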

In conclusion

Our reliance on data cannot be overstated. In 2012, participants at the World Economic Forum in Davos, Switzerland, went as far as to declare data a new class of economic asset, similar to gold or currency. The Economist referred to data as the “new oil”. However, just like gold and oil, data requires a process of refinement and review to achieve a level of quality that can be relied upon. In our opinion, pure technology solutions are only part of the answer to the data quality challenge. A viable solution requires an approach that allows for contextual data validation, staged data review, defined data quality procedures and accountable data owners. The days of GIGO should be numbered; garbage out should never be an acceptable option for any serious system or business organisation.

Put pragmatic governance in place to capitalise on data assets


Without effective data management processes in place, organisations face unnecessary costs, risk and delays, says Infovest.

By Michael Willemse – Head of Technical Implementation at Infovest

Now more than ever before, organisations need timely, accurate data to support everyday operations and business growth. An inability to capitalise on data assets could have far-reaching effects, such as unnecessary costs, operational delays, or exposure to risks.

But not all organisations have the capacity to tackle large data governance initiatives; they face constraints in terms of organisational size, team structures, information system architecture, timelines or directives.

In these cases, a pragmatic approach to data governance is needed. Organisations need to consider which processes and tools are crucial, how to embark on a culture of continuous improvement, and what needs to be done to build a platform for future governance.

Start with strategy

The first step is to constitute a data governance team with a clearly defined strategy to provide the stimulus and direction for the change required.  This strategy should define the vision, the skills and resources needed to achieve the goals outlined in the vision, the reasons why the change must take place, and an action plan.

Let risk and issues drive priority through the improvement cycle

With the strategy in place, the organisation should draft the data governance policies with high-level statements of intent relating to the functioning and management of data.  A useful approach is to first focus on the areas of difficulty or risk that the organisation is experiencing, and then to prioritise and evaluate the courses of action that will address these.

The workflows, data steward behaviours, and flow of data throughout an enterprise will have the most significant impact on data quality.  It is vital to understand and document the flow of data, and any processes that act on data.  Detailed system architecture diagrams enrich the understanding of workflows.

With an understanding of the workflows, business domains and processes that act on data, the Data Governance Lead is able to identify the key stewardship roles, including the operational data stewards and data owners.

Only now do we recommend starting to work with the data itself. It is important to understand what the primary sources of Master Data are, and how that data is impacted. Governance controls are essential to ensure the quality of data when enrichment has taken place – through external sources, derived data, or manually modified data.

The Governance Lead or governance working group can now begin to analyse the system and processes to address threats or weaknesses; capitalise on opportunities; or leverage strengths within the existing processes.

Implement governance controls

The following processes and controls should be considered to improve the quality of data and processes:

  • Data Management Interfaces – Facilities for the management of data need to be understood and reviewed.
  • Approvals – Clear accountability is vital to the approval process. Visual representations of data through reports or dashboards enable the accountable user to quickly confirm the data quality.
  • Issue Escalation and Resolution – If a data steward becomes aware of data or process risk, they must be able to reliably log the error for investigation and resolution, or for escalation.
  • Logging – Without a record of what was modified, by whom and how, any effort to improve the data and related processes is undermined (see the sketch after this list).
  • Decision logic and controls – Workflows by their nature may have conditionality, divergence and convergence, and decision points.  The implementation of governance controls will result in additional control points in order to evaluate outcomes and state, or to facilitate manual intervention.
  • Automated Reporting – Automated reporting can be an exceptionally effective tool to monitor system states and inform the governance team about risk events.
  • Data Matrices and Data Profiles – Data matrices are invaluable references when analysing data for subject areas, business domains, imports and exports. Data profiles add further detail relating to the context of the data.
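
As a minimal sketch of the logging control referred to in the list above, the example below records who changed a governed data item, when, and what changed. The structure and field names are assumptions for illustration.

```python
# Illustrative logging control: every modification to a governed data item is
# recorded with the user, timestamp, field, old value and new value.
from datetime import datetime, timezone

audit_log = []

def update_field(record, field, new_value, user):
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "record_id": record.get("id"),
        "field": field,
        "old_value": record.get(field),
        "new_value": new_value,
    })
    record[field] = new_value

security = {"id": "SEC001", "price": 101.5}
update_field(security, "price", 102.0, user="data.steward")
print(audit_log[-1])
```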

Plan, do, refine, repeat

After each iteration or at defined intervals, key lessons are noted and refinements made.  This constant feedback loop will allow for quick wins, and ensure that the data governance programme considers organisational changes and remains aligned with stakeholder expectations.

The case for a dedicated investment compliance system


By Jenine Ellappen – Compliance Product Manager at Infovest

Consolidation is often perceived as a positive goal. Software companies are continuously making acquisitions and trying to merge products and services to create an ‘all-inclusive’ package. Is this dream really possible though? Is it feasible for one solution to be the best at everything? The investment industry is enormous and has many components including portfolio management, performance, risk and compliance to name a few. For a single product to master all of these requires extreme flexibility, domain expertise and a well-designed architecture. Therefore the question is: given the choice, should one choose an all-in-one solution or a specialised product?

In investment compliance, this choice gives rise to heated debate between the business and compliance teams. While integrated order management systems seem to offer an all-in-one solution that includes pre and post trade compliance, there are limitations on the post trade functionality because the system was not specifically designed for post trade compliance. Consequently, there are many arguments to support the case for a dedicated investment compliance system.

One of the main reasons that compliance teams need a dedicated compliance system is to give them compliance-centric data which is independent of other business units.

Data is vast and in every organisation the instrument universe, data attributes and data needs are constantly growing and evolving. In addition, compliance regulation is also evolving and becoming ever more demanding.

There are various regulations and investment policies in place to protect investors. These stipulate investment restrictions across various data attributes such as asset type, sector, country and issuer. The difficulty is not in obtaining data but in obtaining clean and consistent data. A prime example is finding clean issuer data, as there is no international naming convention. If a fund has exposure to an issuer named ‘US Treasury N/B’ and another named ‘US Treasury FRN’, the system will identify them as different issuers, will not aggregate the exposures and, as a result, may fail to report a valid breach. This is a major risk. In addition, compliance may require additional data on unlisted instruments or alternative asset type classifications. Certain regulations refer to the term ‘transferable securities’ instead of ‘bonds’ or ‘equities’. With a compliance-centric database, the compliance team is able to clean, classify and supplement data without being concerned about the impact on other business units.
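
A minimal sketch of the issuer aggregation problem described above follows. The alias mapping, limit and holdings are made-up assumptions; in practice such a mapping would itself be maintained and governed by the compliance team.

```python
# Illustrative only: without a common issuer identifier, exposures to the same
# issuer recorded under different names will not aggregate, and a breach can be missed.
from collections import defaultdict

ISSUER_ALIASES = {                      # assumed, compliance-maintained mapping
    "US Treasury N/B": "US Treasury",
    "US Treasury FRN": "US Treasury",
}

holdings = [
    {"issuer": "US Treasury N/B", "exposure_pct": 6.0},
    {"issuer": "US Treasury FRN", "exposure_pct": 5.5},
]

exposure_by_issuer = defaultdict(float)
for holding in holdings:
    issuer = ISSUER_ALIASES.get(holding["issuer"], holding["issuer"])
    exposure_by_issuer[issuer] += holding["exposure_pct"]

ISSUER_LIMIT_PCT = 10.0                 # assumed limit for the sketch
for issuer, exposure in exposure_by_issuer.items():
    if exposure > ISSUER_LIMIT_PCT:
        print(f"Breach: {issuer} exposure {exposure:.1f}% exceeds the {ISSUER_LIMIT_PCT:.0f}% limit")
```

Without the alias mapping, neither holding alone exceeds the limit and the combined 11.5% exposure would go unreported.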

Investment compliance rules can be complex and a flexible rule builder is essential. While many systems have rule templates, one size never fits all.

Best of breed systems have a combination of templates and base rule components, such as calculators and aggregators, to allow the compliance team to create their own custom rules via a rule wizard. This provides them with flexible tools and insight into the detail of the rule calculation. It also empowers the compliance team to take full ownership of their rulebooks.

Consider the Undertakings for Collective Investment in Transferable Securities (UCITS) Directive, which provides investment guidelines for Collective Investment Schemes (also known as Mutual Funds) and includes the 5/10/40 rule: investments in transferable securities or money market instruments issued by the same body are restricted to 5% of the fund. This 5% limit is, however, raised to 10% provided that the sum of all exposures greater than 5% is less than 40%. The rule may appear simple, but it requires conditional logic and a mix of aggregators. In South Africa, the Collective Investment Schemes Control Act (CISCA) Board Notice 52: Determination on the Requirements for Hedge Funds contains a similar rule. This complexity demonstrates the importance of a flexible rule-building platform.
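
To illustrate the conditional logic involved, the following is a minimal sketch of a 5/10/40 check. The portfolio data and function are assumptions for the example; a production rule would be built from the system’s own calculators and aggregators.

```python
# Illustrative 5/10/40 check: per-issuer exposure may not exceed 5% of the fund,
# or 10% provided the sum of all exposures above 5% stays below 40%.

def check_5_10_40(exposures_by_issuer):
    """exposures_by_issuer: {issuer: % of fund}. Returns a list of breach messages."""
    breaches = []
    over_five = {i: e for i, e in exposures_by_issuer.items() if e > 5.0}
    for issuer, exposure in over_five.items():
        if exposure > 10.0:
            breaches.append(f"{issuer}: {exposure:.1f}% exceeds the 10% issuer limit")
    total_over_five = sum(over_five.values())
    if total_over_five > 40.0:
        breaches.append(f"Sum of exposures above 5% is {total_over_five:.1f}%, exceeding the 40% limit")
    return breaches

portfolio = {"Issuer A": 9.5, "Issuer B": 8.0, "Issuer C": 7.5,
             "Issuer D": 9.0, "Issuer E": 7.0, "Issuer F": 4.0}
for breach in check_5_10_40(portfolio):
    print(breach)
```

In this example no single issuer exceeds 10%, but the exposures above 5% sum to 41%, so the aggregate condition is breached.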

Breach management is also a significant component of any compliance system. Once compliance breaches are identified by the system, the compliance team requires tools to investigate and manage them.

Breaches may occur for a host of different reasons such as market value movements, cash investments or disinvestments, purchases or sales of instruments or portfolio transitioning. The compliance team may choose to take different approaches depending on the underlying cause of the breach. In most cases though, the portfolio manager will need to be contacted with the details and calculation of the breach.

This communication should be executed from the compliance system and all comments should be saved for auditing purposes. Should a fund incur major losses and the regulator seek out those accountable, the compliance team needs the fund’s compliance history and comments at its disposal.

The system should also be able to automatically send daily compliance reports and escalation emails. If the age of a breach exceeds a certain number of days, the system should highlight this by sending a notification to the compliance users, the risk manager and the portfolio manager.
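
A minimal sketch of this kind of age-based escalation follows; the recipients, the five-day threshold and the notification helper are assumptions for illustration.

```python
# Illustrative escalation rule: breaches open longer than a configured number of
# days trigger a notification. The send_notification helper stands in for a real
# email or alerting integration.
from datetime import date

ESCALATION_AGE_DAYS = 5
RECIPIENTS = ["compliance team", "risk manager", "portfolio manager"]

def send_notification(recipients, message):
    print(f"To {', '.join(recipients)}: {message}")

open_breaches = [
    {"fund": "Fund A", "rule": "Issuer limit 10%", "opened": date(2024, 1, 2)},
]

today = date(2024, 1, 10)  # fixed for the example
for breach in open_breaches:
    age = (today - breach["opened"]).days
    if age > ESCALATION_AGE_DAYS:
        send_notification(RECIPIENTS,
                          f"{breach['fund']} breach of '{breach['rule']}' has been open for {age} days")
```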

In summary, to operate effectively, the compliance team requires a dedicated investment compliance system with all the necessary tools to fulfil its mandate. The team needs a system that provides compliance-centric data, a flexible and simple rule wizard, sophisticated breach management and automated reporting. Settling for anything less will compromise compliance capability, with far-reaching consequences for investors and the firm itself.