Most of us are building systems that sit on top of other third-party systems. This is common in the FinTech ecosystem I am currently involved with. Most neo-banks are built on modern core banking systems like Mambu, Thought Machine, etc. But core banking systems are not the only third-party systems you need to build a modern bank. You also need a CRM, a lending management system, payment switches, an AML/fraud-prevention system, an engagement platform, a CMS, KYC, and a few others. Once you have selected all the ecosystem partners, you have to do custom software development to build new and innovative customer journeys and to integrate these systems into a working neo-bank. In this post, I will cover the important factors you should consider when architecting systems that are powered by third-party systems.
#1. Deployment model
There are three popular deployment models – SaaS, Cloud Prem, and on-prem.
- SaaS: The third-party provider offers a multi-tenant solution that you consume over an API and pay for per usage.
- Cloud Prem: This involves setting up an instance of the software in a cloud VPC. The third-party provider manages it and you consume it via an API. This model usually gives you better performance and security guarantees. The payment model is a yearly subscription fee.
- On-prem: This is similar to Cloud Prem, the only difference being that it runs in your own datacenter. There is one catch here: any modern third-party system will be using cloud-native technologies, so if your on-prem datacenter is not ready for them, you will not be able to use the software.
Most FinTechs and neo-banks that I have interacted with over the last couple of years use one of the big three cloud providers, so the choice mainly boils down to the SaaS and Cloud Prem deployment models.
The Cloud Prem model is preferred for systems that are core to the problem domain, for performance, security, and data-privacy reasons. In the context of neo-banks, core banking systems are mostly deployed in Cloud Prem mode. Another example is the engagement layer.
The SaaS model is preferred for ancillary systems like CRM. They are also important, but customers are fine paying extra dollars for reduced maintenance and overhead. It also boils down to the fact that a CRM is not a differentiator for your product, so the SaaS model works fine.
#2. Technology coherence
This is related to the deployment model covered in the previous point, but it is important enough to deserve its own point. When selecting a third-party provider, you should understand their technology stack, specifically their choice of database, cache, and deployment platform. If you are building a Kubernetes-based platform, check whether the solution can be packaged as a container image and run on your platform just like any other service you build. The programming language matters less to me here, as you are not expected to change the code, but you still might have to manage and run it. So the database and cache become important, since operational expertise for them is limited in most organizations. Prefer technologies that you can actually run.
#3. NFRs and SLAs
It is important to discuss what SLAs and NFRs the third-party system will guarantee, and the penalty clauses in case they fail to meet them. SLA stands for service-level agreement; availability is usually its headline metric, measured as uptime like 99%, 99.9%, or 99.99%. NFR stands for non-functional requirements; here I am mainly interested in latency at the desired throughput, and in resource utilization. You also need to understand the scaling characteristics of the third-party solution.
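Those uptime percentages look close together but translate into very different downtime budgets. A small sketch (assuming a 30-day month) makes the difference concrete:

```python
def downtime_budget_minutes(uptime_pct: float, period_days: int = 30) -> float:
    """Minutes of downtime allowed per period at the given uptime percentage."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {downtime_budget_minutes(sla):.1f} min/month allowed downtime")
```

At 99% uptime the provider may be down for over seven hours a month and still be within SLA; at 99.99% the budget shrinks to a few minutes. That gap is worth pricing into the penalty clauses.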
For the Cloud Prem model, you should also request a performance-testing report from the provider and review evidence of the scalability and robustness of their APIs, ideally produced through some sort of automated testing. You also need to know how SLAs and NFRs will be made visible to you: does the third party provide an operational dashboard where this information is published for you to consume?
On multiple occasions I have found that a third-party provider's pre-sales consultants had no clue about their own product's NFRs and SLAs.
#4. Infrastructure and hardware needs
You need to understand the third-party provider's infrastructure and hardware needs. In the cloud world, it is important to understand which instance types they prefer, what database configuration they need, and so on. You don't want vendor lock-in to specific hardware or to a cloud provider.
#5. API style and documentation
You are going to consume a third-party system through its API, so you should spend time understanding its API style, the quality of its API documentation, and the developer sandbox environment. A provider may support multiple API styles, such as both REST and GraphQL, so understand which one is the primary interface. These days I also check whether they support gRPC, for efficiency reasons.
In an API I look for consistency in concepts and conventions, domain modeling, extension points (via webhooks or an events endpoint), documentation depth, versioning strategy, quality of SDKs, and the quality of documentation search results.
I have seen some third parties provide GitHub repositories with samples, examples, and Postman collections. This can really help your team get started quickly and deliver faster.
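Extension points like webhooks usually come with a signing scheme so you can verify that a callback really came from the provider. The header name and algorithm vary by vendor; the sketch below assumes a hypothetical provider that sends a hex HMAC-SHA256 digest of the raw request body in a signature header:

```python
import hmac
import hashlib

def verify_webhook(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Verify a hypothetical signature header: hex HMAC-SHA256 of the raw body."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(expected, signature_header)

# Example: a valid signature verifies, a tampered body does not.
secret = "whsec_demo"  # hypothetical shared secret from the provider's dashboard
body = b'{"event": "account.created"}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(secret, body, sig))               # True
print(verify_webhook(secret, b'{"tampered": 1}', sig))  # False
```

Check the provider's documentation for the exact header, encoding, and whether a timestamp is included in the signed payload to prevent replay attacks.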
#6. Idempotent APIs
This is an extension of point 5 on API style, but it is such an important concept that it deserves its own point. I am a big fan of idempotency. As per Wikipedia:
> Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application.
You can safely retry idempotent API operations. This is critical, since API calls can and will fail, and you should know which operations you can safely retry. Idempotency is typically implemented via idempotency keys: when a client retries, it sends the same idempotency key it used in the previous request, and the server uses that key to recognize the duplicate and return the result of the earlier request instead of performing the operation again.
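To make the mechanics concrete, here is a minimal server-side sketch (in-memory store, hypothetical payment operation) of how an idempotency key turns a retried request into a safe no-op that returns the original result:

```python
import uuid

# Results keyed by idempotency key. A real service would use a durable
# store (database, Redis) with a TTL rather than a process-local dict.
_results: dict[str, dict] = {}

def create_payment(idempotency_key: str, amount: int) -> dict:
    """Process a payment at most once per idempotency key."""
    if idempotency_key in _results:
        # Retry detected: return the original response without charging again.
        return _results[idempotency_key]
    result = {"payment_id": str(uuid.uuid4()), "amount": amount, "status": "accepted"}
    _results[idempotency_key] = result
    return result

key = str(uuid.uuid4())           # client generates one key per logical request
first = create_payment(key, 100)
retry = create_payment(key, 100)  # e.g. retried after a network timeout
print(first == retry)             # True: same payment, no duplicate charge
```

Without the key, the server cannot tell a retry apart from a genuinely new request, which is exactly why non-idempotent operations like "create payment" need this mechanism.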
#7. Getting data out of the third party provider
The data stored in the third-party system is your data. You might need to move it to a data warehouse for analytics, or power other capabilities from it. So you need to understand the options available for moving data out of the third-party system. I have seen three main solutions:
- Database archive dump as CSV or other format
- Events published on Kafka
- Direct access to the database, so you can either do change data capture (CDC) or take database dumps
I prefer real-time event streams published on a message bus like Kafka, since they let us build streaming pipelines and power real-time use cases.
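With the event-stream or CDC options, what you consume is typically a change event describing what happened to a row. As an illustration (hypothetical field names, loosely modeled on Debezium-style change events), here is how you might flatten such an event into a warehouse-ready record:

```python
import json

def flatten_change_event(raw: str) -> dict:
    """Flatten a CDC-style change event into a record for the warehouse.

    Assumes a hypothetical envelope with 'op' (c/u/d), 'after'
    (row state after the change), and 'ts_ms' (event timestamp).
    """
    event = json.loads(raw)
    row = event.get("after") or {}  # 'after' is null for deletes
    return {
        **row,
        "_op": {"c": "insert", "u": "update", "d": "delete"}.get(event["op"], "unknown"),
        "_event_ts_ms": event["ts_ms"],
    }

raw = json.dumps({
    "op": "u",
    "after": {"account_id": "A1", "balance": 250},
    "ts_ms": 1700000000000,
})
print(flatten_change_event(raw))
```

The actual envelope format depends entirely on the provider, so verify the schema (and how deletes and schema changes are represented) before building pipelines on top of it.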
#8. Community
The last factor you should consider is the community around the third-party provider. Do they have active forums where developers can ask questions? Are there enough developers in your region with the relevant skills? Who else is using it? Can you easily train your developers? Does the provider offer training?
It is an interesting time to build software. At times it feels like we are connecting Lego blocks. It is also easy to fall for a bad third-party provider, so weigh these factors carefully before finalizing one.