Most cloud services reflect a common arrangement of components and capabilities. While these arrangements can still host traditional web servers, modern cloud architecture is optimized for scalability and reliability. A cloud service is like any third-party internet service: we establish an account with the provider, configure everything through that account, and then install and configure the software and services we want to run.

We establish our master, or root, cloud service account by giving the cloud provider a way to receive payment for the services we use. The account gives us the mechanism we need to configure services and install software. As with Linux systems, the root account is too powerful for day-to-day use, so cloud providers recommend that consumers establish sub-accounts for administrative and operational purposes. These sub-accounts can in turn create specific accounts to control the resources belonging to deployed projects and services. For example, an administrator can associate public-key credentials with internet servers.

Cloud consumers often control their resources through dashboards implemented as web pages or mobile apps. Consumers may also use command-line utilities via a secure shell, and cloud providers generally support secure versions of file transfer protocols for uploading resources to the cloud.

When we design a service for cloud deployment, we try to achieve certain properties. First of all, we try to meet the requirements of a RESTful service, where REST is the acronym for Representational State Transfer. In particular, we try to build a stateless interface with no race conditions. We also want to save service results in network caches wherever possible. When the service starts, it inherits access rights from the cloud consumer's account; this may be a sub-account established specifically for running the service. To keep up with increasing load, we start additional server processes by cloning an image customized to implement our service.
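The stateless, cache-friendly design can be sketched as a handler that depends only on the incoming request, never on stored session state. This is a minimal illustration, not any provider's API: the names `handle_request` and `GREETINGS` are assumptions made up for the example.

```python
# Illustrative sketch of a stateless, cache-friendly request handler.
# All names here are invented for the example, not a real cloud API.

GREETINGS = {"en": "Hello", "fr": "Bonjour"}  # read-only reference data

def handle_request(path: str, params: dict) -> dict:
    """Handle one request using only its own inputs (no session state),
    so any cloned server instance can serve it, there are no race
    conditions on shared state, and a network cache can safely store
    the result."""
    if path == "/greet":
        lang = params.get("lang", "en")
        name = params.get("name", "world")
        body = f"{GREETINGS.get(lang, GREETINGS['en'])}, {name}"
        return {
            "status": 200,
            # Cache-Control invites intermediate caches to reuse the result.
            "headers": {"Cache-Control": "public, max-age=3600"},
            "body": body,
        }
    return {"status": 404, "headers": {"Cache-Control": "no-store"}, "body": "not found"}
```

Because the handler keeps no state between calls, identical requests always yield identical responses, which is exactly what makes the result safe to serve from a cache or from any clone of the server image.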
Typical cloud services collect requests in a load balancer, which in turn distributes those requests to the running server processes. When the load grows, the load balancer clones additional servers: the more load, the more server instances we deploy. As the load drops off, we shed the server processes we no longer need. The server instances provide the internet-accessible interface to the back-end processes that actually perform the requested tasks. Back-end servers are often database managers, which provide well-understood services via the Structured Query Language (SQL), as well as scalability and distributed service support. Here is an example of the hybrid deployment model: this enterprise hosts its back-end service on a private on-premises server. [MUSIC]
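The scale-up, scale-down behavior described above can be sketched in a few lines. This is a toy model under stated assumptions, not a real load balancer: the class name, the per-server capacity, and the min/max instance limits are all illustrative choices.

```python
# Toy model of a load balancer that distributes requests round-robin
# and clones or sheds server instances as the offered load changes.
# All names and thresholds are illustrative assumptions.

class LoadBalancer:
    def __init__(self, min_servers=1, max_servers=8, per_server_capacity=10):
        self.min_servers = min_servers
        self.max_servers = max_servers
        self.per_server_capacity = per_server_capacity
        self.servers = min_servers   # current number of cloned instances
        self._next = 0               # round-robin cursor

    def scale(self, pending_requests: int) -> int:
        """Clone or shed server instances to match the offered load."""
        needed = -(-pending_requests // self.per_server_capacity)  # ceiling division
        self.servers = min(self.max_servers, max(self.min_servers, needed))
        return self.servers

    def route(self) -> int:
        """Distribute the next request round-robin across instances."""
        target = self._next % self.servers
        self._next += 1
        return target
```

For example, 35 pending requests at a capacity of 10 per server scales the pool to 4 instances, and successive calls to `route()` then cycle through instances 0, 1, 2, 3. Real load balancers make the same two decisions, distribution and scaling, just with richer signals such as CPU utilization and health checks.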