An unfiltered view of filters
With regulators placing ever greater demands on institutions in the area of sanctions, it is essential that institutions screen their customers and transactions against relevant sanctions lists. Regulators are showing less tolerance for errors than in the past, and fines have been imposed not only on banks but also on payments companies and casinos.
Specifically, regulators are increasingly focused on making sure that sanctions filters are used correctly, and are looking at three key areas:
- The data – is both internal data (customer data, transaction data) and externally sourced data (sanctions lists, PEP lists, etc) accurate and complete?
- The filter itself – is it identifying potential matches appropriately?
- The quality of the output – how is it being investigated and monitored?
Know your filter
In the past, institutions would typically set the parameters of their sanctions filters in accordance with instructions from the vendor. Today, this approach is no longer seen as acceptable: regulators expect institutions to know how their filters operate rather than relying on vendors. This has implications both for banks, which need to increase their knowledge of these systems, and for vendors, which need to open up their systems so that clients can understand them better. This trend is likely to continue.
Filter effectiveness and model validation
Testing can provide assurance that a screening tool is working today as well as it did yesterday and that it is providing the necessary protection. This includes making sure that there are no unintentional effects as a result of upgrades, system patches or environmental changes between a user acceptance testing (UAT) environment and production.
When assessing the effectiveness of a filter, institutions should document the functionalities of the model or tools used for screening. Functionalities used to suppress alerts should be reviewed especially rigorously by compliance teams, as these attract close regulatory scrutiny. Suppression logic that takes effect after filtering should also be included in the review, as it affects the output of the model itself.
It is important to understand whether list, customer and transaction data is accurate and up to date, and to make sure that the settings on the sanctions system are tailored to address any potential weaknesses. Organisations should consider performing a data quality assessment, whereby data elements that are critical for the screening process are identified and rules are created for those data elements – such as not having the word “corporation” in a field for individuals. Such rules can then be automated to trigger a process for data improvement.
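The kind of automated data-quality rule described above can be sketched as follows. This is a minimal illustration, not a production implementation: the field names, customer-type values and list of corporate terms are all assumptions made for the example.

```python
# Illustrative data-quality rule: flag individual-customer records whose
# name field contains corporate terms (e.g. "corporation"), so that they
# can be routed into a data-improvement process. All field names and the
# term list below are assumptions for this sketch.

CORPORATE_TERMS = {"corporation", "corp", "ltd", "llc", "inc"}

def flag_individual_records(records):
    """Return the records that breach the rule."""
    flagged = []
    for rec in records:
        if rec.get("customer_type") == "individual":
            tokens = {t.strip(".,").lower() for t in rec.get("name", "").split()}
            if tokens & CORPORATE_TERMS:
                flagged.append(rec)
    return flagged

customers = [
    {"id": 1, "customer_type": "individual", "name": "Jane Smith"},
    {"id": 2, "customer_type": "individual", "name": "Acme Corporation"},
    {"id": 3, "customer_type": "entity", "name": "Acme Corporation"},
]
print([r["id"] for r in flag_individual_records(customers)])  # → [2]
```

Record 3 is not flagged because the rule only applies to individuals; a record like record 2 would trigger the data-improvement process.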
Model validation is used to check that changes to filter configuration will deliver predictable – and correct – results. This involves the following steps:
- Have a well-defined validation plan with step-by-step, repeatable processes.
- Make sure that sample sizes are large enough to provide statistical significance.
- Ensure that customers and transactions are properly segmented by geography.
- Provide evidential support of all the analysis that has taken place.
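The sample-size step above can be illustrated with the standard normal-approximation formula for estimating a proportion. The confidence level, margin of error and worst-case proportion below are assumptions for the sketch, not prescribed values.

```python
# Sketch of a sample-size check for filter validation: how many test cases
# are needed so that a measured match rate is within a chosen margin of
# error at a given confidence level. z = 1.96 corresponds to 95% confidence;
# p = 0.5 is the worst case. These parameters are illustrative assumptions.

import math

def required_sample_size(p=0.5, margin=0.05, z=1.96):
    """Minimum sample size for estimating a proportion p
    to within +/- margin at the confidence implied by z."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

print(required_sample_size())             # worst case, ±5%: → 385
print(required_sample_size(margin=0.02))  # a tighter margin needs more cases
```

A tighter margin of error grows the required sample quadratically, which is why statistically significant validation sets can become large.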
Technology plays an important role in helping institutions test the effectiveness of their filters. Organisations can use standardised testing platforms to carry out regular validations. The platforms will generate a set of test cases that can be run through the filter. The results are then fed back to the test platform, which performs an analysis of the results and indicates the effectiveness of the filter.
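The analysis step such a platform performs can be sketched as a comparison of the alerts the filter raised against a labelled set of test cases. The data structures and metric names below are assumptions for illustration, not the interface of any particular testing platform.

```python
# Minimal sketch of filter-effectiveness analysis: compare raised alerts
# against labelled test cases and report detection and false-positive
# figures. Names and structures are illustrative assumptions.

def filter_effectiveness(test_cases, alerts):
    """test_cases: {case_id: should_alert}; alerts: set of alerted case_ids."""
    true_positives = sum(1 for cid, hit in test_cases.items() if hit and cid in alerts)
    false_positives = sum(1 for cid, hit in test_cases.items() if not hit and cid in alerts)
    missed = sum(1 for cid, hit in test_cases.items() if hit and cid not in alerts)
    return {
        "detection_rate": true_positives / (true_positives + missed),
        "false_positives": false_positives,
        "missed": missed,
    }

cases = {"t1": True, "t2": True, "t3": False, "t4": False}
result = filter_effectiveness(cases, alerts={"t1", "t3"})
print(result)  # t2 was missed; t3 is a false positive
```

A missed case ("t2" here) is the critical finding: it indicates the filter is not providing the protection the test set expects.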
Resources and support
Resources are often tight, and sanctions compliance is a niche speciality. It can be helpful for institutions to build internal training programmes so that subject matter experts can pass on their knowledge to others within the organisation. By institutionalising knowledge in this way, organisations can make sure that relevant information is retained despite staff turnover.
If an institution does not have the in-house expertise needed to create tests, it may be advisable to retain a third party to carry out the initial validation. In addition to meeting requirements, this can provide a valuable opportunity for people within the organisation to educate themselves about the test scenarios being used in the market today.
Peer banks can also be a useful resource. Even if the peer bank is working with a different vendor, it can be helpful to understand how the bank in question approaches the testing process. Having open discussions – without sharing sensitive information – can be beneficial for both parties.
Tuning and optimisation
Large numbers of false positive alerts divert precious resources from investigating genuine alerts and may indicate that the sanctions filter needs to be optimised. Other factors can also make it necessary to tune the filter, such as the introduction of a new business line or the implementation of a new watch list.
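A simple way to monitor the signal described above is to track the share of alerts that close as false positives and flag when it crosses a review threshold. The threshold and figures below are illustrative assumptions, not regulatory benchmarks.

```python
# Hypothetical tuning trigger: flag a review when the false-positive share
# of alerts exceeds a chosen threshold. The 0.95 threshold and the sample
# figures are assumptions for illustration only.

def needs_tuning_review(alerts_raised, true_matches, fp_threshold=0.95):
    """Return True when the false-positive share of alerts is too high."""
    if alerts_raised == 0:
        return False
    fp_rate = (alerts_raised - true_matches) / alerts_raised
    return fp_rate > fp_threshold

print(needs_tuning_review(alerts_raised=1000, true_matches=20))   # 0.98 → True
print(needs_tuning_review(alerts_raised=1000, true_matches=100))  # 0.90 → False
```

In practice such a metric would be tracked per period and per business line, so that a new business line or watch list that shifts the rate becomes visible quickly.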
Institutions may also need to re-examine the way they use lists, in terms of scope, choice of lists and so on. In practice, some institutions may be screening against lists that are not really applicable to their businesses. It is therefore important to evaluate what the organisation is screening against, to ensure that list entities and their attributes match the firm’s risk appetite and that appropriate regulatory expectations are being met.
In today’s regulatory environment, filter testing, assurance and optimisation are becoming a normal part of doing business. Institutions can draw upon technology to support internal assurance measures and gain the level of understanding increasingly sought by regulators.
By John Pattenden, product manager, Swift’s Sanctions Testing Services (part of the co-operative’s Financial Crime Compliance unit)