Full-Stack Serverless Application with Vendor Lock-in Prevention


by Lijoy George

Serverless architecture is quickly gaining popularity with the tech community as it offers several advantages, such as greater scalability, more flexibility, and quicker release time, all coming at a lower cost. This opens up a wide range of possibilities, making serverless architecture a real boon to the industry.

Serverless computing is the right choice for projects with unpredictable traffic, and from a business point of view, it helps cut operational costs tremendously.

But to our surprise, a brief study of serverless computing showed wide variation in how it is adopted across the industry. Given its many advantages, it should be an obvious choice for enterprises, regardless of their size or revenue.

Fig 1-Usage Trends of Serverless Computing

Although serverless computing looked like the perfect choice for our project requirements, the varying trends in its usage created an environment of skepticism. We conducted further research to understand why serverless computing showed such varying trends and how its drawbacks could be addressed efficiently.

This article talks about how we tackled the challenges and leveraged the advantages of serverless computing.

Project requirements and evaluation of serverless computing

Listed below are the project requirements:

  1. Web application with a future requirement for a mobile application.
  2. Little or no information about the anticipated level of traffic.
  3. Should be scalable based on market response.
  4. Regional isolation should be possible.
  5. Revenue returns were unpredictable; hence investment had to be minimized.

Given the uncertainty surrounding serverless computing, we also explored microservices and monolith architecture to see whether a solution could be created that negates its drawbacks. A tailor-made solution had to be formulated to meet the project requirements.

An evaluation was conducted to understand the features of each architecture or service.

Fig 2- Microservices Architecture Pros and Cons

Next, exploring the advantages and disadvantages of the monolith architecture, we understood the following.

Fig 3-Monolith Architecture-Pros and Cons

Lastly, serverless computing itself was evaluated to derive a constructive solution and to address the disadvantages that led the industry and tech community to doubt its usage. The inputs from this evaluation helped design a solution that met the project's requirements.

Fig 4- Serverless Computing Pros and Cons

After this research on serverless computing, we had a clear understanding of why the industry and tech community showed such a mixed trend. Among the reasons listed above, vendor lock-in was the primary reason behind the varying usage of serverless computing.

What is Vendor lock-in?

Vendor lock-in is the main issue most people encounter when adopting serverless. It creates a tight coupling between your application code and the serverless vendor, which can make it hard to move the code to another platform and can result in higher expenditure, as dictated by the vendor. It happens in three significant ways:

1. Data lock-in

Data lock-in occurs when an application is built by relying completely on the service provider's managed data services. Part of the data effectively ends up with the service provider, so data that is key to the business is held by the vendor.

Case example,
Lambda provides a ton of value to developers by integrating natively with AWS-specific services such as Amazon Cognito, which gives you robust user authentication out of the box, or DynamoDB. But this results in user data getting stuck in a closed system: being locked into these services and their ecosystems is a by-product of critical data being locked into them.

2. Resource lock-in

Resource lock-in is another by-product of vendor lock-in. By choosing a vendor, you may end up with their resources tightly coupled to your serverless application.

For example,
Lambda integrates easily with AWS S3, SNS, and other services. As a result, these resources become locked to the service provider, and it becomes tough to move to another provider in the future.

3. API level lock-in

An API is the interface between the customer's code and the serverless vendor's infrastructure, and vendor lock-in can also occur at this level. Coupling your code too tightly to a serverless vendor's APIs can make it hard to move the code to a different platform.

The hybrid architecture solution

A final decision was taken to design a hybrid architecture that uses the advantages of microservices and monolith architecture to mitigate the drawbacks of serverless computing. Those same advantages would also serve us during implementation.

Before moving into how the architecture was designed, essential questions had to be answered.

Is serverless computing a software architecture?

Serverless computing is commonly referred to as serverless architecture. Strictly speaking, however, it is not an architecture; it is better described as a methodology for avoiding the management of software infrastructure while developing applications and services.

How were the drawbacks of serverless computing mitigated using the advantages of microservices and monolith architecture?

The figure below shows serverless computing used to design an architecture around AWS Amplify and other serverless components, with multi-tenancy support.

Fig 5-Serverless architecture for Amplify and other serverless components with multi-tenancy support

The above architecture shows how the project or application could be pulled into a vendor lock-in situation. This can be avoided by using AWS Lambda for the different services, following a microservices approach. However, microservices architecture typically requires a separate database for each service, which increases costs. As mentioned earlier, the investment could not be increased because the revenue returns from the project were unpredictable.

A solution was devised by using PostgreSQL schemas to fragment the existing database into a section for each service. The hybrid architecture largely aligns with microservices here, using its advantages to mitigate the drawbacks of serverless computing, such as resource lock-in and data lock-in, while still leveraging the benefits of the contributing architectures.
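As a rough illustration, the sketch below shows how a single PostgreSQL instance might be split into one schema per service using the node-postgres client. The schema and service names are hypothetical and only demonstrate the idea, not the project's actual layout.

```typescript
// Sketch: one PostgreSQL instance, one schema per logical service.
// Schema/service names ("users", "orders", "billing") are illustrative only.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Each service gets its own schema instead of its own database instance.
async function createServiceSchemas(): Promise<void> {
  for (const schema of ["users", "orders", "billing"]) {
    await pool.query(`CREATE SCHEMA IF NOT EXISTS ${schema}`);
  }
}

// A service-scoped query pins search_path to that service's schema,
// so each service's code stays unaware of the other schemas.
async function queryAs(service: string, sql: string, params: unknown[] = []) {
  const client = await pool.connect();
  try {
    await client.query(`SET search_path TO ${service}`);
    return await client.query(sql, params);
  } finally {
    client.release();
  }
}
```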

AWS Amplify was used as a complete package for simple development and testing, to utilize the advantages of monolith architecture. It was easy to deploy and, above all, provided horizontal scalability.

The following steps should be followed carefully to avoid vendor lock-in and to design an architecture that is highly scalable and isolated. Such an architecture can also be converted easily into either microservices or monolith architecture if there is a demand for it in the future.

1. Select the right database provider

To avoid data lock-in, it is very important to select the right database provider for the application. When choosing a database, prefer an open-source engine and avoid vendor-specific managed databases such as Amazon Aurora. A few good choices are PostgreSQL and MySQL, but this can vary based on the needs of the application. It is always advisable to select the database in a well-researched manner.

2. Isolate Data

Another thing to consider when tackling data lock-in is keeping your data fully isolated from vendor services.

For example,
If AWS Lambda is used with a Cognito user pool or identity pool, your data can end up scattered across the vendor service, where it cannot easily be retrieved or migrated to another system. To avoid this, always manage your data within the application using a dedicated database. In this case, a user table and a session table can take the place of the user pool.
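A minimal sketch of that idea follows, assuming a PostgreSQL database and the node-postgres client; the table and column names are illustrative, not the actual schema used in the project.

```typescript
// Sketch: user and session data kept in the application's own PostgreSQL
// database rather than in a vendor user pool. Table and column names are
// hypothetical. Assumes PostgreSQL 13+ for the built-in gen_random_uuid().
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function ensureAuthTables(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS app_user (
      id            BIGSERIAL PRIMARY KEY,
      email         TEXT UNIQUE NOT NULL,
      password_hash TEXT NOT NULL,
      role          TEXT NOT NULL DEFAULT 'member'
    );
    CREATE TABLE IF NOT EXISTS app_session (
      id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
      user_id    BIGINT NOT NULL REFERENCES app_user(id),
      expires_at TIMESTAMPTZ NOT NULL
    );
  `);
}
```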

3. Isolated authentication and authorization layers

This is the classical problem of vendor lock-in, where you fall into data lock-in, resource lock-in, and API lock-in together.

For example,
Consider an application using AWS Cognito as the user pool, with the same pool serving as the authenticator and authorizer at the API level. In this setup, the data is locked in and tightly coupled with Cognito, and the API gateway is also dependent on that resource.

To avoid this situation, it is better to construct a custom authenticator with the help of open-source tooling such as JSON Web Tokens (JWT). Authorization can be managed by keeping a dedicated user table with roles, as mentioned in the previous section. This authenticator can then be attached to the respective gateway to avoid API-level lock-in, as sketched below.
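Here is a rough sketch of such a custom authorizer, assuming the jsonwebtoken package and a Lambda token authorizer attached to API Gateway. The secret source and claim names are assumptions for illustration, not the project's actual implementation.

```typescript
// Sketch: a custom Lambda authorizer that verifies a JWT issued by the
// application itself, so authentication is not tied to a vendor user pool.
import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from "aws-lambda";
import jwt from "jsonwebtoken";

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  try {
    // Strip an optional "Bearer " prefix from the incoming token.
    const token = (event.authorizationToken ?? "").replace(/^Bearer\s+/i, "");
    // JWT_SECRET and the sub/role claims are hypothetical choices.
    const claims = jwt.verify(token, process.env.JWT_SECRET!) as { sub: string; role: string };

    return {
      principalId: claims.sub,
      policyDocument: {
        Version: "2012-10-17",
        Statement: [{ Action: "execute-api:Invoke", Effect: "Allow", Resource: event.methodArn }],
      },
      // The role comes from the application's own user table, not a vendor pool.
      context: { role: claims.role },
    };
  } catch {
    throw new Error("Unauthorized"); // API Gateway maps this to a 401 response.
  }
};
```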

4. Pluggable resource layer

Resource lock-in can be avoided by preparing the application architecture with a separate resource plugin layer. This layer manages the resources used in the application along with their configurations, and is capable of adding and modifying resources and their configurations.

For example,
If Amazon S3 is used as the object store, an intermediate layer can be created between the application and the resource: the S3 configuration is isolated into a separate file, and a resource handler manages the resource connections using that configuration.
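The sketch below illustrates one way such a pluggable layer could look, assuming the AWS SDK v3 S3 client; the interface, environment variables, and factory function are hypothetical names, not the project's actual implementation.

```typescript
// Sketch: a pluggable object-store layer. Application code depends only on
// the ObjectStore interface; the S3 adapter and its configuration live in one
// place and can be swapped for another provider without touching callers.
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

export interface ObjectStore {
  put(key: string, body: Buffer): Promise<void>;
  get(key: string): Promise<Buffer>;
}

class S3ObjectStore implements ObjectStore {
  private client = new S3Client({ region: process.env.AWS_REGION });
  constructor(private bucket: string) {}

  async put(key: string, body: Buffer): Promise<void> {
    await this.client.send(new PutObjectCommand({ Bucket: this.bucket, Key: key, Body: body }));
  }

  async get(key: string): Promise<Buffer> {
    const res = await this.client.send(new GetObjectCommand({ Bucket: this.bucket, Key: key }));
    // Newer SDK v3 versions expose transformToByteArray() on the response body stream.
    return Buffer.from(await res.Body!.transformToByteArray());
  }
}

// The rest of the application asks the factory for "an object store",
// never for "S3", so moving providers only touches this file.
export function objectStore(): ObjectStore {
  return new S3ObjectStore(process.env.OBJECT_STORE_BUCKET ?? "my-app-bucket");
}
```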

5. Isolate application context

Depending on, or directly accessing, the vendor's application context (such as the Lambda event and context objects) within the application will lead to vendor lock-in. This can easily be avoided by using an open-source plugin like serverless express, which lets an ordinary Express.js application run on AWS Lambda without the application code depending directly on the Lambda context.
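A minimal sketch of this pattern is shown below, assuming the @vendia/serverless-express package as one common implementation of serverless express; the route is illustrative.

```typescript
// Sketch: an ordinary Express app wrapped with @vendia/serverless-express,
// so route handlers never read Lambda's event/context directly. The same
// `app` can be served locally with app.listen() or moved off Lambda later.
import express from "express";
import serverlessExpress from "@vendia/serverless-express";

const app = express();
app.use(express.json());

app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

// Lambda entry point: only this line knows about the vendor runtime.
export const handler = serverlessExpress({ app });
```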

6. Manage API routes

API routes must be managed separately and should not depend wholly on vendor-provided gateway services. Try to have a single gateway that forwards requests to the application, and handle the routes within the application layer. This can be achieved by constructing a route handler with Express.js, as sketched below.
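The following sketch shows the idea, assuming Express routers sitting behind a single gateway proxy route; the route paths and handlers are placeholders.

```typescript
// Sketch: all routes handled inside the application with Express routers,
// behind a single API Gateway proxy route (e.g. ANY /{proxy+}), so the
// gateway never encodes individual routes. Paths are illustrative.
import express, { Router } from "express";

const users = Router();
users.get("/", (_req, res) => res.json([]));                     // GET /users
users.get("/:id", (req, res) => res.json({ id: req.params.id })); // GET /users/:id

const orders = Router();
orders.get("/", (_req, res) => res.json([]));                     // GET /orders

export const app = express();
app.use("/users", users);
app.use("/orders", orders);
```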

7. Use REST API endpoints directly

Vendors provide numerous plugins to easily connect their resources to your application, but relying on these can create vendor lock-in.

For example,
Suppose one is creating a backend API service with AWS API Gateway and a Lambda function. This API can easily be integrated into the application's frontend with a vendor plugin like the AWS SDK.

Instead, one can call the gateway endpoints directly through the mapped custom domain; in the frontend layer, either the native fetch method or a third-party library such as Axios can be used.
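A small sketch of this approach is shown below; the custom domain and endpoint path are placeholders for whatever the gateway's domain mapping exposes.

```typescript
// Sketch: the frontend calls the API through its mapped custom domain with
// plain fetch, rather than through a vendor SDK. Domain and path are
// hypothetical placeholders.
const API_BASE = "https://api.example.com";

export async function fetchOrders(token: string): Promise<unknown[]> {
  const res = await fetch(`${API_BASE}/orders`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json();
}
```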

The final hybrid architecture would look as depicted in the figure below.

Fig 6- Serverless computing methodology — Hybrid Architecture Design.

The hybrid architecture is designed to be highly scalable and isolated, keeping the project's future requirements in mind. It meets all the client-provided requirements at a lower cost, and the application successfully cut costs while meeting all of its needs efficiently.

The scope of this hybrid architecture is broad, as it can easily be converted into microservices or monolith architecture if required in the future. There is also scope for exploring cross-cloud architectures, where various cross-cloud elements can be used in a plug-in/plug-out model to take maximum advantage of each cloud platform without becoming locked in with any single service provider.

Fig 7- Cross Cloud

Actual results of the hybrid architecture

Adopting the hybrid architecture resulted in substantial cost savings on the operations side. Any of the other architectures would have resulted in an approximate monthly bill of $1,000; in contrast, the hybrid architecture returned a bill of approximately $580 per month, saving 42% in operational costs.

The attempt to mitigate the drawbacks of serverless computing using the advantages of microservices and monolith architectures yielded the intended results: we could leverage the benefits of serverless computing to meet the project requirements without its drawbacks impacting the project or application.

Besides cutting operational costs, the hybrid architecture was appreciated by the client as an innovative solution tailored precisely to the requirements. Above all, an application with unpredictable returns on investment turned out to be a success and enhanced business opportunities for the client.

