This text was submitted in response to the UK Financial Conduct Authority's Call for Input on RegTech. We are also publishing it here on our blog to foster wider discussion.
Based on our extensive experience in this field, we have been discussing several RegTech-related ideas here at Alyne (you can find further thinking on this in our blog).
In the tradition of other “-Techs” (such as “FinTech”), we understand RegTech as the digitisation of regulatory compliance processes. Digitisation is of course a buzzword itself, but it helps to frame it in the context of automation: in other industries and topic areas, digitisation is envisioned to eventually substitute manual processes entirely, or at least to augment human capabilities and capacity so that the output of manual processes can be rapidly scaled and leveraged.
It should be noted that RegTech should not be seen solely in the context of the Financial Services industry. All organisations are subject to some form of regulation, and the basic management methods and technical-organisational controls for operational risk are usually similar.
We therefore recommend that the FCA align and network with regulators of other industries within the UK, the EU and internationally, in order to address the RegTech concept broadly.
One of the big obstacles to a larger degree of automation in regulatory compliance lies in the widely varying level of detail, type of language, verbosity and format of the actual regulation. This makes it necessary to “parse” each regulation into internal controls, policies, standards and guidelines, and then implement these within one's own organisation.
What if there were some kind of standardised formal language for regulations? One of the core disciplines of computer science is the definition of formal languages and context-free grammars as the basis for programming languages. Extending this thought into the realm of regulations and policies, it should be possible to define a format which acts as a sort of programming language for organisations, processes and systems, and which defines the basic conditions, requirements, mandates, limitations and consequences. This format should be machine-readable (e.g. XML or JSON) so that it can be used directly by other programs.
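To make this concrete, here is a minimal sketch of what such a machine-readable rule might look like and how a program could evaluate it. All field names, the rule identifier and the threshold are purely illustrative assumptions, not taken from any real regulation or standard.

```python
import json

# Hypothetical machine-readable rule: every field name and value here is
# an illustrative assumption, not drawn from any actual regulation.
RULE = json.loads("""
{
  "rule_id": "LIQ-001",
  "description": "Liquidity coverage ratio must stay at or above 100%",
  "subject": "liquidity_coverage_ratio",
  "condition": {"operator": ">=", "threshold": 1.0},
  "consequence": "report_breach_to_regulator"
}
""")

# Map the rule's declarative operators onto executable checks.
OPERATORS = {
    ">=": lambda value, threshold: value >= threshold,
    "<=": lambda value, threshold: value <= threshold,
}

def evaluate(rule, observed_value):
    """Apply a parsed rule to an observed metric and return compliance."""
    cond = rule["condition"]
    return OPERATORS[cond["operator"]](observed_value, cond["threshold"])

print(evaluate(RULE, 1.12))  # ratio above threshold: compliant
print(evaluate(RULE, 0.87))  # ratio below threshold: breach
```

The point is not this particular schema, but that once a rule is expressed declaratively, the same evaluation engine can be reused across regulations and organisations.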
The FCA is already doing this to some extent with the GABRIEL service, which also accepts XML and XBRL data. This should be further expanded and encouraged, e.g. by mandating and/or offering incentives for the use of such APIs, especially in light of extended data collection in the future.
This would in turn further encourage the use of modern accounting software tools which support such interfacing.
As an example, consider the German “eBilanz” mandate, which requires organisations to submit tax-relevant business data using an XBRL taxonomy. There is also evidence that early adopters of such electronic submissions have higher overall corporate governance maturity.
One of the big economic trends is a reduction in manufacturing depth (vertical integration): modern car manufacturers, for example, directly produce less than 10% of today's cars themselves (with the side effect that some of their outsourcing suppliers are now much bigger than the car manufacturers).
This trend can increasingly be observed in the financial services industry as well, in the form of the recent FinTech explosion. Most of these FinTech firms bundle customer-facing activities and sales into a product but rely on partner banks for the actual service delivery. One of the main reasons is that they sidestep regulatory responsibility this way. However, this approach may not be sustainable and can create uncontrolled risks through economic externalities.
If we look at regulation in other topic areas (e.g. data protection), the concepts are applied somewhat differently: there is always an accountable “Data Controller” at the end of the chain, who is also mandated to ensure proper handling by their commissioned data processing suppliers. Some of this can be transferred via certifications of suppliers (e.g. PCI DSS for payment processors), but it remains the ultimate task of the accountable party to obtain assurance on the proper functioning of controls. This creates strong demand for the skills and resources needed to perform supplier assessments and checks. Given the limited supply of truly skilled and experienced resources in this space, and the often very inefficient manual approaches currently in use, much value can be generated by creating semi-automated governance, risk and compliance collaboration platforms that augment and scale human-powered oversight of the extended enterprise.
In combination with the machine-readable regulation described above, systems should be policy-aware and able to apply the regulatory “program code” to the core processes they serve.
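As a toy illustration of a policy-aware process, consider a transfer function that consults a loaded policy before acting. The policy fields, the limit and the blocked country codes are all hypothetical placeholders, not real regulatory parameters.

```python
# Hypothetical policy loaded from some machine-readable source; the
# limit and the placeholder country codes are illustrative assumptions.
POLICY = {
    "max_single_transfer": 10_000,
    "blocked_jurisdictions": {"XX", "YY"},  # placeholder codes
}

def execute_transfer(amount, destination_country, policy=POLICY):
    """Run a core process only after checking it against the policy."""
    if amount > policy["max_single_transfer"]:
        return "rejected: amount exceeds policy limit"
    if destination_country in policy["blocked_jurisdictions"]:
        return "rejected: destination blocked by policy"
    return "executed"

print(execute_transfer(5_000, "DE"))
print(execute_transfer(20_000, "DE"))
```

Because the policy is data rather than hard-coded logic, a regulatory change becomes a configuration update instead of a software release.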
Modern software engineering and development practice includes an approach called Continuous Integration. This entails constantly testing the code against defined expected and acceptable output values and ranges, as well as checking for undesired regressions when making changes. Naturally, this is only possible with a very high degree of automation in the testing approach.
We propose looking at this automated testing paradigm as a way to obtain assurance on the effective operation of credit, market and operational risk controls in the financial sector. Take, for example, the increasingly popular stress tests regularly run by regulators against banks and other systemically important financial actors. This scenario-based approach requires very high effort to prepare the scenarios and to execute the simulation against the banks' balance sheets using data analytics tools and highly experienced analysts (see a pattern here?). The quality of the results of such tests depends strongly on the quality and appropriateness of the inputs derived from the chosen scenario (and there have been numerous examples of stress tests failing to show significant impact, only to be overtaken by developments in reality).
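A minimal sketch of what CI-style assurance on a control could look like: a hypothetical single-counterparty exposure control, exercised by regression-style assertions that could run automatically on every change. The limit and the control logic are illustrative assumptions, not any actual bank's rules.

```python
# Hypothetical credit exposure control under test; the limit and the
# logic are illustrative assumptions for the sketch.
SINGLE_COUNTERPARTY_LIMIT = 1_000_000

def exposure_control(current_exposure, proposed_trade):
    """Approve a trade only if total exposure stays within the limit."""
    return current_exposure + proposed_trade <= SINGLE_COUNTERPARTY_LIMIT

def test_control_blocks_breach():
    # Regression-style checks, run automatically on every change.
    assert exposure_control(900_000, 50_000)               # within limit
    assert not exposure_control(900_000, 200_000)          # would breach
    assert exposure_control(0, SINGLE_COUNTERPARTY_LIMIT)  # boundary case

test_control_blocks_breach()
print("all control tests passed")
```

Run continuously, such tests would give ongoing evidence that the control operates effectively, rather than a point-in-time snapshot from a manually prepared scenario.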
If you look at the history of the stress test concept, remember that it originally comes from the (hardware) engineering world, where equipment is subjected to loads at and above the limits of its specified operational range in order to verify the implemented controls (e.g. bending the wings of an airplane until they break).
This concept has also found its way firmly into the software engineering world, where automated test scripts can be executed to simulate extremely high loads or even security attack scenarios on an IT system, observing output behaviour and any failure points under extreme edge-case conditions.
The current state of the art in security stress testing even includes a technique called “fuzzing”, in which the system is bombarded with a wide range of automatically generated input permutations to identify crashes or other undesired behaviour, even in situations not envisioned by the designers or specified in the business requirements.
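The fuzzing idea can be sketched in a few lines: generate many random inputs, feed them to the component under test, and record any exception class that is not an expected, handled rejection. The toy amount parser here is a stand-in assumption, not a real system's input handler.

```python
import random

def parse_amount(raw):
    """Toy input handler under test; illustrative, not a real parser."""
    value = float(raw)
    if value < 0:
        raise ValueError("negative amount")
    return round(value, 2)

def fuzz(iterations=1_000, seed=42):
    """Bombard the parser with generated inputs, collecting crash classes."""
    rng = random.Random(seed)
    alphabet = "0123456789.-e$ "
    crashes = set()
    for _ in range(iterations):
        raw = "".join(rng.choice(alphabet)
                      for _ in range(rng.randint(0, 8)))
        try:
            parse_amount(raw)
        except ValueError:
            pass  # expected, handled rejection of bad input
        except Exception as exc:  # anything else is undesired behaviour
            crashes.add(type(exc).__name__)
    return crashes

print(fuzz())  # empty set means no unexpected crash classes found
```

The same pattern scales up with proper fuzzing engines, which mutate inputs based on code coverage instead of picking them blindly.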
All of these approaches are also candidates for application in the RegTech arena.
As a regulatory agency, the FCA's role lies primarily in creating a suitable environment and level playing field for all market players. However, RegTech is a special case in terms of a regulator's traditional role, as the technologies, tools and services will probably be used not only by the regulated entities but also by the regulator itself.
We therefore recommend also putting a priority on formulating specific requirements and standards to be considered by RegTech providers.
Based on the description in answer one, three topics currently stand out as foundational building blocks to create a rich environment for a RegTech services ecosystem:
The current regulatory environment is a direct consequence of the prior global financial crisis and primarily aims to reduce systemic risk back to acceptable levels. There has so far been little focus on the “implementability” of rules in the form of automated controls.
In this context, it becomes understandable that financial services organisations could only react by massively increasing manual processes to achieve compliance, i.e. to provide the required data to regulators. However, this heavy focus on mass data analysis and stress testing might be short-sighted: it is not clear whether these approaches accurately model the underlying risks, or whether they leave other areas under-supervised.
For example, it might be redundant to analyse large amounts of transactional data if automated controls are in place and their proper functioning is assured. This also means that greater emphasis is required on the operational risk of these automated control systems.
In line with our previous recommendations, we see opportunities in providing additional policies and guidance around:
As RegTech is primarily about automation (see the introduction), we need to identify processes and reporting obligations which are suitable for automation, which currently require high-effort manual processes, and which can be automated with proven technology at reasonable cost.
We should also keep in mind the target areas offering the greatest overall risk reduction relative to the required investment.
These are some of the topics where we currently see the greatest return on investment:
This post originally appeared on the Alyned Thinking blog.
“Supporting the development and adoption of RegTech” was originally published in RegTech Forum on Medium.