Decomposition Tool

Business Purpose

The decomposition tool helps software developers find the optimal decomposition for an application based on the microservices architectural style and the serverless FaaS paradigm. It is a standalone tool that follows the TOSCA standard and extends the RADON framework. The typical usage scenarios of the tool are:

  • Architecture Decomposition: It can be used to generate a coarse-grained or fine-grained TOSCA model from a monolithic one, achieved by analyzing functional dependencies among the interfaces and applying appropriate architectural design patterns.
  • Deployment Optimization: In this scenario, it is used to obtain the optimal deployment scheme for either a platform-independent or platform-specific TOSCA model, minimizing operating costs on the target cloud platform while satisfying the performance requirements.
  • Accuracy Enhancement: It can also be used to enhance the accuracy of performance annotations in a TOSCA model according to runtime monitoring data so that a better decomposition or optimization result may be achieved afterwards. This enables an iterative lifecycle for developing the application.
  • Assignment/Consolidation: Additionally, it is used to consolidate the assignment of container applications in a TOSCA model by learning from runtime experiment data, thus minimizing job interference among the component microservices on the available compute nodes.

Technical Details

A prototype of the decomposition tool with an initial deployment optimization capability has been implemented. The implementation is based on a set of data structures and external tools as illustrated in the following figure. Given a TOSCA model, the tool uses a built-in YAML processor to import the service template into MATLAB and generates a so-called topology graph through model-to-model transformation. This topology graph embeds a layered queueing network for performance prediction. An optimization problem is then created from the topology graph and solved by invoking the GA solver and the LINE engine. When the optimal solution is found, the tool writes the result back into the service template.

[Figure: overall approach of the decomposition tool (_images/overall_approach.png)]

Getting Started

The decomposition tool has been made available on an Amazon EC2 instance with a RESTful API, which can be accessed through the URL: http://ec2-108-128-104-167.eu-west-1.compute.amazonaws.com:9000. The following list summarizes the RESTful endpoints exposed by this public server.

  • POST /file/(unknown): upload a file to the server. Body: file (octet-stream).
  • GET /file/(unknown): download a file from the server.
  • DELETE /file/(unknown): delete a file from the server.
  • PATCH /dec-tool/decompose: decompose the architecture of a RADON model. Parameters: model_filename (string).
  • PATCH /dec-tool/optimize: optimize the deployment of a RADON model. Parameters: model_filename (string). Response: total_cost (number), measures (object array).
  • PATCH /dec-tool/enhance: enhance the accuracy of a RADON model. Parameters: model_filename (string), data_filename (string).
  • PATCH /dec-tool/consolidate: consolidate the assignment of a RADON model. Parameters: model_filename (string), data_filename (string).
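
For scripted use, these endpoints can be assembled into request URLs. The sketch below is a hypothetical helper, not part of the tool, and it only builds (method, URL) pairs; it follows the curl examples later on this page, where the file endpoints take the target filename as the final path segment of /files/ and /dec-tool/optimize takes a filename query parameter.

```python
# Hypothetical URL builder for the public server; actually sending the
# requests (e.g. with urllib or curl) is left out. Paths follow the
# curl examples in the Getting Started steps.
BASE = "http://ec2-108-128-104-167.eu-west-1.compute.amazonaws.com:9000"

def file_url(name):
    """URL for uploading (POST), downloading (GET) or deleting (DELETE) a file."""
    return f"{BASE}/files/{name}"

def optimize_request(model_filename):
    """Method and URL for triggering deployment optimization on a model."""
    return "PATCH", f"{BASE}/dec-tool/optimize?filename={model_filename}"
```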

A demo application example (thumbnail generation) based on the definitions of TOSCA types specific to the decomposition tool is provided in the demo-app directory of the tool's repository. Two models are included in this example, one with an open workload and the other with a closed workload. To try the decomposition tool on the former, perform the following steps:

  1. Clone the repository and enter the model directory:
git clone https://github.com/radon-h2020/radon-decomposition-tool.git && cd radon-decomposition-tool/demo-app
  2. Upload the original model to the server:
curl -X POST http://ec2-108-128-104-167.eu-west-1.compute.amazonaws.com:9000/files/model.tosca -F 'file=@open_model.tosca'
  3. Optimize the deployment of the model:
curl -X PATCH http://ec2-108-128-104-167.eu-west-1.compute.amazonaws.com:9000/dec-tool/optimize?filename=model.tosca
  4. Back up the original model locally (the next step overwrites it):
cp open_model.tosca open_model.tosca.bkp
  5. Download the resultant model from the server:
curl -X GET http://ec2-108-128-104-167.eu-west-1.compute.amazonaws.com:9000/files/model.tosca -o open_model.tosca

Additional information about the resultant model is reported upon completion of deployment optimization (step 3), including predictions of the total operating cost and of the performance measures under consideration. In this example, the decomposition tool returns the mean as well as the 90th, 95th and 99th percentiles of the predicted response time distribution for the AwsLambdaFunction_0 node, since a MeanResponseTime policy is attached to the execute entry of that node.
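
As a quick reminder of what such percentiles mean, the snippet below evaluates them for an exponentially distributed response time, whose p-th percentile is -ln(1 - p)/mu for mean 1/mu. This is purely illustrative: the 20 ms mean is made up, and the tool computes its predictions from the layered queueing network, not from a closed-form distribution.

```python
import math

# Illustrative only: percentiles of an exponential response-time
# distribution with an assumed mean of 20 ms (not actual tool output).
MEAN_RT = 0.020  # seconds

def percentile(p):
    """Inverse CDF of Exp(rate = 1/MEAN_RT): t_p = -MEAN_RT * ln(1 - p)."""
    return -MEAN_RT * math.log(1.0 - p)

for p in (0.90, 0.95, 0.99):
    print(f"p{int(p * 100)} = {percentile(p) * 1000:.1f} ms")
```

Note how the tail percentiles far exceed the mean: here p99 is roughly 92 ms against a 20 ms mean, which is why the tool reports the tail of the distribution and not just its average.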

It is HIGHLY RECOMMENDED to use a different filename for each uploaded model, e.g. by putting a UUID into the filename (model_5da82fdc-ae4c-48c4-ab5f-369a9a4fdee3.tosca), so as to minimize the possibility of collisions between concurrent requests. MOST IMPORTANTLY, do not upload a model containing any sensitive information, e.g. an AWS access key ID and secret access key: we currently cannot prevent other users from accessing your models on the server.
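
A unique filename of the recommended form can be generated like this (a trivial sketch; the helper name is ours, only the standard uuid module is assumed):

```python
import uuid

def unique_model_name(base="model", ext=".tosca"):
    """Build a filename like model_<uuid4>.tosca to avoid upload collisions."""
    return f"{base}_{uuid.uuid4()}{ext}"

# e.g. model_5da82fdc-ae4c-48c4-ab5f-369a9a4fdee3.tosca
print(unique_model_name())
```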

Additional Information

References

  • Alim Ul Gias, André van Hoorn, Lulai Zhu, Giuliano Casale, Thomas F. Düllmann, Michael Wurster: Performance Engineering for Microservices and Serverless Applications: The RADON Approach. ICPE Companion 2020: 46-49 https://doi.org/10.1145/3375555.3383120

Contact

Acknowledgments

This work is being supported by the European Union’s Horizon 2020 research and innovation programme (Grant No. 825040, RADON).