CloudScale Method

The key to developing scalable cloud applications is an appropriate engineering method for scalability. Initially, a service provider has an idea for an application that they want to run in the cloud because of its cost-efficient management and virtually unlimited hardware resources. The service provider wants to ensure that the application scales cost-effectively, i.e., that it always copes with its workload using the minimum amount of cloud resources (measured in terms of costs paid for those resources). A sustainable engineering method for scalable cloud applications must support the complete application life-cycle. The proposed CloudScale Method therefore builds on an overall system life-cycle process, from initial requirements collection through to operation and monitoring.

 

CloudScale Method Scenarios and Basic Elements

Figure: CloudScale Method overview (method_v2.png)

CloudScale provides an engineering method for building (evolving) and adapting scalable cloud applications and services. We focus on two core scenarios:

  • Development: Enabling software engineers to develop scalable applications and services for a cloud computing platform. We extend existing capacity-planning tools for scalability analysis and introduce ScaleDL Usage Evolution to describe, compare, and compose the scalability of services. We also provide a set of best practices and design patterns for building scalable software systems.
  • Evolution: Enabling software engineers to evolve an existing software system or service into a solution that scales in cloud computing environments. We introduce a novel approach for the scalability evaluation of existing systems through systematic experimentation, using existing load-testing tools or running experiments on models. This approach allows scalability anti-patterns to be detected in an existing application and scalability models to be extracted.

Furthermore, the CloudScale Method enables a combination of both scenarios. The scalability evaluation of existing systems yields scalability models that allow for scalability redesign and the evaluation of "what-if" scenarios, which can be combined with measured behaviour from the deployed system. The integrated view of scalability that the CloudScale Method provides allows software engineers to address scalability in all life-cycle phases of their application with minimal effort.

The figure above presents an overview of the CloudScale Method; a legend at the bottom of the figure explains the notation. It is important to form clear process steps that cover the essential service life-cycle steps (Requirements, Design, Realisation, Operation, and Monitoring) and to show the data and control flow between them. The most important life-cycle steps are further elaborated with supporting tools and intermediate documents. To enable experimentation and iterative construction-and-analysis cycles, several decision points for control and data flow are introduced (e.g., a decision step that determines whether the scalability requirements are satisfied after system modelling and analysis). Additionally, the method enables a solution check after each service life-cycle step, with a return to the analysis step to optimise the constructed service.

The basic processes in the CloudScale Method are standard software development processes; we only rename some of them to focus on the specific needs of the method. The first process is Requirements Identification, because the CloudScale Method deals specifically with scalability issues in system development or adaptation. It is meant to be integrated with a generally accepted requirements engineering method, but it may also be executed independently. This process exists because we must always annotate and define scalability requirements for the analysed system. During Requirements Identification, the main focus is on describing the evolution of load and work. The quality metric describes what system quality is acceptable to the users, e.g., a particular response time; for example, we may require that the service responds in less than one second. The process results in a requirements specification document, the ScaleDL Usage Evolution Specification, which serves as the main input to the System Construction and Analysis process.
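To make the idea concrete, a scalability requirement can be thought of as a quality threshold paired with an expected load evolution. The following sketch is purely illustrative (the class and field names are our own invention, not part of ScaleDL); it shows a requirement of "response time under one second" together with a simple projection of load growth:

```python
from dataclasses import dataclass


@dataclass
class ScalabilityRequirement:
    """Illustrative requirement: a quality threshold plus expected load evolution."""
    metric: str               # quality metric name, e.g. response time
    threshold_seconds: float  # acceptable upper bound for the metric
    peak_users: int           # current expected peak load (users per hour)
    growth_per_month: float   # expected load growth rate per month

    def load_after(self, months: int) -> int:
        """Project the peak load after the given number of months."""
        return round(self.peak_users * (1 + self.growth_per_month) ** months)


# A service must answer in under one second; load starts at 10,000 users
# per hour and is expected to grow by 10% per month.
req = ScalabilityRequirement("response_time", 1.0, 10_000, 0.10)
print(req.load_after(12))  # projected peak load after one year: 31384
```

A specification like this is what the analysis step later checks the system model against: does the modelled system still meet the threshold under the projected load?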

Based on this requirements specification, the System Construction and Analysis process starts, in which we use a model to specify the system. We can do this in two ways:

  1. By reverse engineering an existing code base to create an initial system model or adapt an existing one. This process is driven by our Extractor tool.

  2. By (re-)designing a system on the model level using our Analyser tool. For specifying this model, we use ScaleDL Usage Evolution and the Palladio Component Model (PCM); see D1.1 Section 7.

We guide our design decisions along the requirements specification and support them with known patterns for scalable architectures in cloud environments by using ScaleDL Architectural Templates. Whether extracted by the Extractor or newly designed, the result is a unified System Model: a ScaleDL Instance that serves as the basic input for the Analyser and Static Spotter tools.

The next step is to analyse the modelled system to check whether it meets the identified requirements. This process is driven by our Analyser and Static Spotter tools, as described in D1.1 Section 3.4.3, and is repeated with different system alternatives until the requirements are met. We combine these two tools because they perform logically similar tasks; both are still at an early stage of development, and once their inputs and outputs are clearly defined we will update the CloudScale Method definition accordingly. If the first model does not satisfy the requirements, we apply the Spotter to compare the output parameters produced by the Analyser against the anti-pattern and solution knowledge base. The Spotter's results are compared with the available solutions; if a proposed solution is satisfactory, we apply it to create a new ScaleDL Instance of the system model and run the Analyser on the new model. If no solution is available, we return to the construction and analysis process to improve the existing system model. This construction-and-analysis loop stops when the system architect is satisfied with the system behaviour, i.e., when the requirements are met and the results show satisfactory service scalability behaviour.
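The control flow of this loop can be sketched as follows. This is only an illustration of the iteration structure, not the actual tool interface: `analyse_fn`, `detect_anti_patterns`, and `known_solutions` are hypothetical stand-ins for the Analyser, the Spotter, and the anti-pattern/solution knowledge base.

```python
def construction_and_analysis(model, requirements, analyse_fn,
                              detect_anti_patterns, known_solutions,
                              max_iterations=10):
    """Iterate analysis and anti-pattern-driven redesign until the
    requirements are met or no known solution applies (names illustrative)."""
    for _ in range(max_iterations):
        results = analyse_fn(model)
        if all(req(results) for req in requirements):
            return model, True               # requirements met: loop stops
        for pattern in detect_anti_patterns(model, results):
            fix = known_solutions.get(pattern)
            if fix is not None:
                model = fix(model)           # apply solution, re-analyse new model
                break
        else:
            return model, False              # no known solution: manual redesign
    return model, False


# Toy usage: a "single instance" anti-pattern is fixed by doubling replicas.
model = {"replicas": 1}
analyse = lambda m: {"response_time": 2.0 / m["replicas"]}
requirements = [lambda r: r["response_time"] <= 1.0]
detect = lambda m, r: ["single_instance"] if r["response_time"] > 1.0 else []
solutions = {"single_instance": lambda m: {**m, "replicas": m["replicas"] * 2}}
final_model, ok = construction_and_analysis(model, requirements, analyse,
                                            detect, solutions)
```

The two exit paths mirror the text: success when the analysis results satisfy every requirement, and a fall-back to manual construction when the Spotter finds no applicable solution.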

We may also find that our scalability requirements are infeasible, in which case we have to modify them, e.g., by reducing the complexity (and consequently the work) of the services offered during high load.

The two main outputs of the System Construction and Analysis process are the Realisation Directive and the Deployment Directive. These two directives form the basis for the next steps in the application life-cycle: Realisation and Deployment.

The Realisation Directive can be used for automatic code generation in the Realisation process, either fully based on a developed system model or as a guide for semi-automatic code generation. The Deployment Directive contains the essential requirements and parameters for service deployment that are needed to achieve the required system behaviour. In some situations, the output of the System Construction and Analysis process may be only a Deployment Directive, e.g., when we use only existing, already developed service components and merely define parameter reconfigurations for each component, which determine the deployment process.
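As a rough illustration of the reconfiguration-only case, a Deployment Directive can be pictured as a set of per-component deployment parameters derived from the analysis. All component names, parameters, and VM sizes below are hypothetical; the actual directive format is defined by the CloudScale tooling, not by this sketch.

```python
# Hypothetical shape of a Deployment Directive: parameter reconfigurations
# per existing component, with no new code to realise.
deployment_directive = {
    "frontend": {"instances": 3, "vm_type": "medium", "autoscale_max": 10},
    "database": {"instances": 1, "vm_type": "large", "autoscale_max": 1},
}


def minimum_cores(directive, vm_cores={"medium": 2, "large": 8}):
    """Sum the minimum CPU cores the directive requests across components,
    using an illustrative VM-size-to-cores mapping."""
    return sum(component["instances"] * vm_cores[component["vm_type"]]
               for component in directive.values())
```

Such a directive gives the operator both the initial provisioning (instances and VM types) and the bounds within which the platform may scale each component.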

Based on the Realisation Directive, we start to implement the system. The output of system realisation is a Realised System, ready for deployment.

When the Realisation process step is finished, we move to Deployment. In this step, the application is deployed according to the Deployment Directive and put into operation in a cloud computing environment. For operation, it is important to specify well-defined resource requirements that enable the cloud computing provider to provision cloud resources efficiently and to fulfil the load, work, and quality requirements of the deployed application as it evolves.

Monitoring is a further process step that remains active during system operation and enables control of system behaviour; collecting measurements for scalability parameters also belongs to this step. Based on the operational parameters and system quality metrics, monitoring can reveal changed system requirements and trigger a rerun of the process cycle (the adaptation loop).
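A minimal sketch of the decision that closes the adaptation loop, assuming response-time measurements are collected against the quality metric from the requirements (the function name and violation-ratio threshold are illustrative, not part of the CloudScale tooling):

```python
def needs_adaptation(measurements, quality_threshold, violation_ratio=0.05):
    """Trigger a rerun of the process cycle when more than violation_ratio
    of the collected measurements violate the agreed quality metric
    (e.g. a response-time bound from the SLA)."""
    violations = sum(1 for m in measurements if m > quality_threshold)
    return violations / len(measurements) > violation_ratio
```

If the check fires and fine-tuning of operation parameters does not help, the system engineer returns to System Construction and Analysis with updated requirements.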

Based on systematic experiments, enabled by the SpottingByMeasuring process, software engineers receive the information needed to assess costs, identify scalability problems, and address such problems accordingly. This covers both deployment evolution and architectural evolution. Regarding the identification of scalability anti-patterns, CloudScale will identify and formalise scalability anti-patterns for cloud applications and provide a tool-supported method to detect these patterns in existing code bases.

 

CloudScale Method Roles and Stakeholders

Figure: CloudScale Method roles (method_roles_v2.png)

The overall CloudScale Method description above mainly used the generic role of software engineer, but it is important to emphasise that the proposed method envisions four basic roles for the software engineer:

  • Product manager: Responsible for discussions with the customer of the service, identifying initial system requirements, and defining development goals, especially from the business perspective. The product manager is always involved in decisions regarding requirements fulfilment and the business potential of the solution, i.e., in the Requirements Identification and System Construction and Analysis process steps.

  • System architect: Responsible for Requirements Identification and the main driver of System Construction and Analysis. This role cooperates with the product manager and the service developer; its main responsibilities are defining the architecture and proposing the main service components.

  • Service developer: Responsible for service realisation (both development and test) and for preparing the system deployment process. This role cooperates with the system architect in checking the realised services and with the system engineer in preparing system deployment.

  • System engineer: Responsible for service deployment and for monitoring the system in operation. Based on monitoring results, the system engineer optimises system operation parameters or reruns the System Construction and Analysis process step if the system cannot be fixed by fine-tuning. The system engineer cooperates with all other roles during the system life-cycle.

From the perspective of CloudScale Method usage, we can define four basic system stakeholders:

  • Service consumer uses the supplied services for their own purposes, according to their business needs and an agreed SLA.

  • Service provider is responsible for fulfilling the SLA and other requirements towards the service consumer (depending on the cloud service, this can be an IaaS, PaaS, or SaaS provider). The provider prepares service requirements, interacts with the system builder to enable an appropriate service, operates the system during its life-cycle, and drives the need for system adaptation.

  • System builder is responsible for all activities that transform requirements into system realisation and deployment.

  • Service developer develops the modelled service and cooperates with the system builder.