How does Cloud affect Testing Parameters?


Cloud testing and its characteristics are a big area of testing in themselves. Let's discuss multi-tenancy testing.

First we have to know what types of multi-tenancy architecture there are.

There are mainly two types of architecture for multi-tenancy, and they are very much associated with cost, security & isolation.

A. Database Level Multi-tenancy:

This is further divided into three types based on their implementation architecture.

1. Shared DB & Shared Schema (Cost - Low, Security/Isolation - Low) - see the sketch below
2. Shared DB & Separate Schema (Cost - Medium, Security/Isolation - Medium)
3. Separate DB (Cost - High, Security/Isolation - High)

B. White Labelling Multi-tenancy.
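
As a rough illustration of option A.1 above (Shared DB & Shared Schema), all tenants live in the same tables and are separated only by a tenant discriminator column, which is exactly why this option scores lowest on isolation. The sketch below uses Python with an in-memory SQLite database; the orders table, columns and tenant names are hypothetical, purely for illustration.

    import sqlite3

    # Sketch of "Shared DB & Shared Schema": every tenant's rows live in one table,
    # separated only by a tenant_id column. Table and data are hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (tenant_id TEXT, order_id INTEGER, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [("tenant_a", 1, 10.0), ("tenant_b", 2, 99.0)],
    )

    def orders_for(tenant_id):
        # Every query must filter on tenant_id; forgetting that filter is the
        # classic data-leak bug this low-isolation architecture is prone to.
        return conn.execute(
            "SELECT order_id, amount FROM orders WHERE tenant_id = ?", (tenant_id,)
        ).fetchall()

    print(orders_for("tenant_a"))   # [(1, 10.0)] - tenant_b's row stays invisible

In the "Shared DB & Separate Schema" and "Separate DB" options the same query would instead be routed to the tenant's own schema or database, trading cost for stronger isolation.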


Only after understanding the above implementation architecture can we plan our test scenarios. To my knowledge, there is no specific tool for multi-tenancy testing.

The skill set required for multi-tenancy testing is knowledge of the cloud computing business model, cloud categories, types of cloud, Unix/shell scripting, and databases (RDBMS/NoSQL).
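
Since there is no dedicated tool, multi-tenancy test scenarios usually end up as ordinary automated tests against the application's data layer. The following is a hedged sketch of one such scenario - a data-isolation check - written as a Python unittest; the schema and tenant names are hypothetical.

    import sqlite3
    import unittest

    class TenantIsolationTest(unittest.TestCase):
        """Hedged sketch of a multi-tenancy isolation scenario. The schema and
        tenant names are hypothetical; the point is the assertion at the end:
        one tenant must never be able to read another tenant's rows."""

        def setUp(self):
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE orders (tenant_id TEXT, order_id INTEGER)")
            self.conn.executemany(
                "INSERT INTO orders VALUES (?, ?)",
                [("tenant_a", 1), ("tenant_b", 2)],
            )

        def fetch_orders(self, tenant_id):
            return self.conn.execute(
                "SELECT order_id FROM orders WHERE tenant_id = ?", (tenant_id,)
            ).fetchall()

        def test_tenant_a_cannot_see_tenant_b(self):
            rows = self.fetch_orders("tenant_a")
            self.assertEqual(rows, [(1,)])     # only tenant_a's own order comes back
            self.assertNotIn((2,), rows)       # tenant_b's order must stay invisible

    if __name__ == "__main__":
        unittest.main()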


A cloud application is just an application, so traditional testing tools apply (e.g. a vulnerability scanning engine, a load generator, etc.).
  • Testing the cloud environment (not the application) is a different problem. My experience is that each cloud platform has homegrown testing tools for its own platform that it uses to perform these tests.

  • For load testing, try http://blazemeter.com/ or https://www.blitz.io/.

  • For continuous integration, look into Travis CI, Continuous Delivery with Jenkins, or one of the hosted continuous integration and deployment platforms.

 Single Operating System Instance / Resource Allocation

A core resource on any physical computer is memory. Thus, memory management seems like a great place to start the discussion.     
  • Memory as a resource is made virtual on modern computers and allocated in 4K pages. These pages are read and processed by the CPU. Only a small set of pages is ever required to actually be in physical memory at any one time. Thus, an operating system applies an algorithm to choose which pages need to be paged or swapped between disk and physical memory (Ref: Peter J. Denning, "Virtual Memory").
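
To make the paging point concrete, the snippet below asks the operating system for its page size and for this process's resident (in physical memory) pages. It assumes a Unix-like system, and Linux specifically for the /proc part; 4096 bytes is typical but not guaranteed.

    import resource

    # Page size is a property of the platform; 4096 bytes is typical but not universal.
    page_size = resource.getpagesize()     # equivalent to os.sysconf('SC_PAGE_SIZE')
    print("page size:", page_size, "bytes")

    # On Linux, /proc/self/statm reports total and resident pages for this process;
    # only the resident pages occupy physical memory right now, the rest may be paged out.
    try:
        with open("/proc/self/statm") as f:
            total_pages, resident_pages = map(int, f.read().split()[:2])
        print("virtual pages:", total_pages, "resident pages:", resident_pages)
    except FileNotFoundError:
        print("/proc is not available on this platform")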


   

Resource Failure

This leads to the question of what to do when a resource allocation fails. Assume failure will occur and program accordingly.        

This really follows from the first point. If you use up all physical memory, the operating system will use an algorithm to choose which pages to swap to disk. But if all the physical memory is used and the disk fills up, what happens at that juncture?

In the case of a single computer instance, it will generally crash. Caveat: there are ways to mitigate this potential event...
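
"Assume failure will occur and program accordingly" can be as simple as treating a failed allocation as an expected outcome rather than a fatal one. The sketch below (Python, with made-up sizes) halves the requested buffer and retries instead of letting the process die on a MemoryError; it is one possible mitigation, not the only one.

    def allocate_buffer(preferred_bytes, minimum_bytes):
        """Try to allocate a working buffer, degrading instead of crashing.
        Sizes are illustrative; the real limits depend on the host and its swap."""
        size = preferred_bytes
        while size >= minimum_bytes:
            try:
                return bytearray(size)      # may raise MemoryError under memory pressure
            except MemoryError:
                size //= 2                  # back off and retry with half the size
        raise RuntimeError("could not allocate even the minimum buffer")

    # Example: prefer 1 GiB, but accept anything down to 64 MiB.
    buf = allocate_buffer(1 * 1024**3, 64 * 1024**2)
    print("allocated", len(buf), "bytes")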

Multiple Computer Availability


  Let us consider failure and fault tolerance in the context of cloud computing.

  •  [Caveat] Let me first say that today there is a strong prevalence of thought towards the belief that a single operating system instance cannot be as available as a group of physical computers each running its own instance of an operating system. There is subtlety in what I just said - I referred to a single operating system, not a single physical computer. A single operating system can run across multiple physical computers.
  •  A key point is that there really are multiple forms of availability / fault tolerance in a cloud context and each applies different algorithms to address particular needs:

    Local Availability

  • Component [Dual Power Supplies, Multi Path IO etc]
  • System [Clustering, Storage Arrays etc]
  • Data [Data Duplication, Snapshots, Backup etc]
  • Application [Application Resiliency etc - see the sketch after this list]

    Site Availability

  • Site Duplication
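
Application resiliency in the Local Availability list above typically boils down to patterns such as timeouts, failover to a replica, and retries with backoff. As a small, hedged illustration, the sketch below retries a flaky operation with exponential backoff and jitter; the operation and the retry limits are hypothetical.

    import random
    import time

    def call_with_retries(operation, attempts=5, base_delay=0.5):
        """Retry a failing operation with exponential backoff and jitter.
        'operation' is any zero-argument callable; the limits are illustrative."""
        for attempt in range(attempts):
            try:
                return operation()
            except OSError:                    # e.g. a transient network error
                if attempt == attempts - 1:
                    raise                      # out of retries: surface the failure
                delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
                time.sleep(delay)

    # A deliberately flaky operation to exercise the retry loop.
    def flaky():
        if random.random() < 0.5:
            raise OSError("transient failure")
        return "ok"

    print(call_with_retries(flaky))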

Resource allocation strategies


What resource allocation strategies are used for VMs by leading cloud service providers?

  • Cloud service providers (CSPs) buy hardware which is then available to be broken into smaller chunks, known as virtual machines, or VMs. The profitability of the CSP, once it gets to moderate scale, is mainly driven by the utilisation of that already-purchased hardware. So most successful CSPs who have survived for a while (implying they weren't just lucky) have developed pricing models that help to maximise the utilisation of their hardware.

  • AWS as an example - Reserved Instances. These are discount vouchers allowing you to pay a fixed low hourly price where you have committed to a long term (12 or 36 months) of continuous usage. You get a further discount if you make an upfront payment. The key aspect of the Reserved Instance pricing structure, from the CSP's perspective, is that there is a transfer of utilisation risk from the CSP (who picked up the risk when it bought the hardware) to the customer; a break-even sketch follows after this list. In today's world of falling cloud prices it is also a valuable price hedge for the CSP.

  • Google's Sustained Use Discounts are often compared to Reserved Instances, but really shouldn't be, as there is no transfer of utilisation risk: the discount is applied automatically based on how much of the billing month an instance actually runs, with no upfront commitment from the customer.
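
To make the Reserved Instance trade-off concrete, the sketch below compares one year of on-demand usage against a 12-month reservation at different utilisation levels. All prices and the discount are made-up numbers for illustration, not actual AWS rates.

    def yearly_cost(on_demand_hourly, reserved_hourly, upfront, utilisation):
        """Compare one year of on-demand usage with a 12-month reservation.
        'utilisation' is the fraction of the year the instance actually runs.
        All rates are hypothetical, not real AWS prices."""
        hours = 365 * 24
        on_demand = on_demand_hourly * hours * utilisation
        # The reservation is paid for every hour whether the instance runs or not:
        # this is the utilisation risk transferred from the CSP to the customer.
        reserved = upfront + reserved_hourly * hours
        return on_demand, reserved

    for utilisation in (0.3, 0.6, 0.9):
        od, ri = yearly_cost(on_demand_hourly=0.10, reserved_hourly=0.06,
                             upfront=100.0, utilisation=utilisation)
        better = "reserved" if ri < od else "on-demand"
        print(f"utilisation {utilisation:.0%}: on-demand ${od:.0f} vs reserved ${ri:.0f} -> {better}")

With these illustrative numbers the reservation only wins at high utilisation, which is exactly the risk the customer takes on in exchange for the discount.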