Now that you've documented the data characteristics of your services, let's talk about how to select Google Cloud storage and database solutions. The Google Cloud storage and database portfolio covers relational, NoSQL, object, data warehouse, and in-memory stores, as shown in this table. Let's discuss each service from left to right.

Cloud SQL is a fixed-schema datastore with a storage limit of 30 TB. It is offered using MySQL, PostgreSQL, and SQL Server. These services are good for web applications such as CMS or e-commerce. Cloud Spanner is also relational and fixed schema, but it scales infinitely and can be regional or multi-regional. Example use cases include scalable relational databases larger than 30 TB that need high availability and global accessibility, such as supply chain management and manufacturing.

Google Cloud's NoSQL datastores are schemaless. Firestore is a completely managed document datastore with a maximum document size of 1 MB. It is useful for hierarchical data; for example, game state or user profiles. Cloud Bigtable is also a NoSQL datastore that scales infinitely. It is good for heavy read and write events, with use cases including financial services, Internet of Things, and digital ad streams.

For object storage, Google Cloud offers Cloud Storage. Cloud Storage is schemaless and completely managed, with infinite scale. It stores binary object data, so it's good for storing images, media serving, and backups. Data warehousing is provided by BigQuery. The storage uses a fixed schema and supports completely managed SQL analysis of the data stored. It is excellent for performing analytics and powering business intelligence dashboards. For in-memory storage, Memorystore provides a schemaless, managed Redis database. It is excellent for caching for web and mobile apps and for providing fast access to state in microservice architectures.

If you prefer flowcharts, leverage this chart when selecting a storage or database service.
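The portfolio just described can be condensed into a quick-reference lookup. This is only an illustrative sketch in Python: the service names and characteristics come from this section, but the dictionary structure and the helper function are my own.

```python
# Quick-reference summary of the Google Cloud storage and database
# portfolio described above. Illustrative only -- not an API.
PORTFOLIO = {
    "Cloud SQL":      {"model": "relational",        "schema": "fixed",      "good_for": "web apps (CMS, e-commerce)"},
    "Cloud Spanner":  {"model": "relational",        "schema": "fixed",      "good_for": "global, >30 TB relational data"},
    "Firestore":      {"model": "NoSQL document",    "schema": "schemaless", "good_for": "hierarchical data (game state, user profiles)"},
    "Cloud Bigtable": {"model": "NoSQL wide-column", "schema": "schemaless", "good_for": "heavy read/write (IoT, finance, ad streams)"},
    "Cloud Storage":  {"model": "object",            "schema": "schemaless", "good_for": "images, media serving, backups"},
    "BigQuery":       {"model": "data warehouse",    "schema": "fixed",      "good_for": "analytics, BI dashboards"},
    "Memorystore":    {"model": "in-memory (Redis)", "schema": "schemaless", "good_for": "caching, fast microservice state"},
}

def services_for(model: str) -> list[str]:
    """Return the services whose storage model starts with `model`."""
    return [name for name, info in PORTFOLIO.items()
            if info["model"].startswith(model)]
```

For instance, `services_for("relational")` returns the two relational options, `["Cloud SQL", "Cloud Spanner"]`.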
First, ask yourself whether your data is structured. If it isn't, you will want to choose Persistent Disk or Cloud Storage, depending on whether you need a file system. If your data is structured, ask yourself whether your workload focuses on analytics. If it does, you will want to choose Cloud Bigtable or BigQuery, depending on your latency and update needs. Otherwise, check whether your data is relational. If it's not relational, choose Firestore or Memorystore, depending on whether your data is short-lived. If your data is relational, you will want to choose Cloud SQL or Cloud Spanner, depending on your need for horizontal scalability.

In general, choosing a datastore is about trade-offs. Ideally, there would be low-cost, globally scalable, low-latency, strongly consistent databases. In the real world, trade-offs must be made, and this flowchart helps you decide on those trade-offs and how they map to a solution.

You might also want to consider how to transfer data into Google Cloud. A number of factors must be considered, including cost, time, offline versus online transfer options, and security. While transfer into Cloud Storage is free, there will be costs for storing the data, and maybe even appliance costs if a Transfer Appliance is used, or egress costs if transferring from another cloud provider. If you have huge datasets, the time required for transfer across a network may be unrealistic. Even if it is realistic, the effects on your organization's infrastructure may be damaging while the transfer is taking place.

This table shows the challenge of moving large datasets. For example, if you have 1 TB of data to transfer over a 100 Mbps connection, it will take about 30 hours. The color-coded cells highlight unrealistic timelines that require alternative solutions. Let's go over online and offline data transfer options.
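The flowchart can be captured as a small decision function. This is a sketch in Python; the parameter names are my own invention, but the branching mirrors the chart just described.

```python
def choose_datastore(structured: bool,
                     needs_filesystem: bool = False,
                     analytics: bool = False,
                     low_latency_updates: bool = False,
                     relational: bool = False,
                     short_lived: bool = False,
                     horizontal_scaling: bool = False) -> str:
    """Walk the storage-selection flowchart described above.

    Branches: unstructured -> Persistent Disk / Cloud Storage;
    analytics -> Cloud Bigtable / BigQuery;
    non-relational -> Memorystore / Firestore;
    relational -> Cloud Spanner / Cloud SQL.
    """
    if not structured:
        # Unstructured data: file system -> Persistent Disk, else object store.
        return "Persistent Disk" if needs_filesystem else "Cloud Storage"
    if analytics:
        # Bigtable for low-latency reads/writes; BigQuery for SQL analytics.
        return "Cloud Bigtable" if low_latency_updates else "BigQuery"
    if not relational:
        # Short-lived, cache-like data suits an in-memory store.
        return "Memorystore" if short_lived else "Firestore"
    # Relational: horizontal scalability is the deciding question.
    return "Cloud Spanner" if horizontal_scaling else "Cloud SQL"
```

For example, `choose_datastore(structured=True, relational=True, horizontal_scaling=True)` returns `"Cloud Spanner"`, while the same call with `horizontal_scaling=False` returns `"Cloud SQL"`.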
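The arithmetic behind the transfer-time table is easy to reproduce. Here is a sketch that computes the ideal (best-case) time; real transfers add protocol overhead and network contention, which is why planning figures run higher than the raw math.

```python
def transfer_hours(data_bytes: float, link_mbps: float) -> float:
    """Ideal transfer time in hours for data_bytes over a link_mbps link."""
    bits = data_bytes * 8
    seconds = bits / (link_mbps * 1_000_000)
    return seconds / 3600

ONE_TB = 10**12  # bytes

# 1 TB over 100 Mbps: ~22 hours ideal; with real-world overhead,
# plan on roughly 30 hours.
print(f"{transfer_hours(ONE_TB, 100):.1f} h")

# 100 TB over the same link is roughly three months --
# the kind of timeline where an offline appliance makes sense.
print(f"{transfer_hours(100 * ONE_TB, 100) / 24:.0f} days")
```

This is the kind of back-of-the-envelope check that tells you whether an online transfer is realistic at all.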
For smaller or scheduled data uploads, use the Storage Transfer Service, which enables you to move or back up data to a Cloud Storage bucket from other cloud storage providers such as Amazon S3, from your on-premises storage, or from any HTTP or HTTPS location. You can also move data from one Cloud Storage bucket to another so that it is available to different groups of users or applications, or periodically move data as part of a data processing pipeline or analytical workflow.

Storage Transfer Service provides options that make data transfer and synchronization easier. For example, you can schedule one-time or recurring transfer operations, delete existing objects in the destination bucket if they don't have a corresponding object in the source, delete source objects after transferring them, and schedule periodic synchronizations from a data source to a data sink with advanced filters based on file creation dates, filename filters, and the times of day you prefer to import data.

Use Storage Transfer Service for on-premises data for large-scale uploads from your data center. Storage Transfer Service for on-premises data allows large-scale online data transfers from on-premises storage to Cloud Storage. With this service, data validation, encryption, error retries, and fault tolerance are built in. On-premises software is installed on your servers: the agent comes as a Docker container, and a connection to Google Cloud is set up. Directories to be transferred to Cloud Storage are selected in the Cloud Console. Once data transfer begins, the service parallelizes the transfer across many agents, supporting scale to billions of files and hundreds of terabytes. Via the Cloud Console, a user can view detailed transfer logs and also create, manage, and monitor transfer jobs. To use Storage Transfer Service for on-premises data, a POSIX-compliant source is required, along with a network connection of at least 300 Mbps.
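The advanced filters mentioned above (creation dates and filename filters) can be made concrete with a small local sketch. The Storage Transfer Service applies these conditions on the server side; this hypothetical Python function only illustrates the selection semantics, and its name and signature are my own.

```python
from datetime import datetime, timezone

def select_objects(objects, created_after=None, include_prefixes=()):
    """Illustrate Storage Transfer Service-style filters on a list of
    (name, created) pairs: keep objects created after a cutoff and/or
    whose names start with one of the given prefixes."""
    selected = []
    for name, created in objects:
        if created_after is not None and created <= created_after:
            continue  # too old: fails the creation-date filter
        if include_prefixes and not any(name.startswith(p) for p in include_prefixes):
            continue  # fails the filename-prefix filter
        selected.append(name)
    return selected

objs = [
    ("logs/2024-01-01.txt", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("logs/2024-06-01.txt", datetime(2024, 6, 1, tzinfo=timezone.utc)),
    ("images/cat.png",      datetime(2024, 6, 2, tzinfo=timezone.utc)),
]
cutoff = datetime(2024, 3, 1, tzinfo=timezone.utc)
print(select_objects(objs, created_after=cutoff, include_prefixes=("logs/",)))
# ['logs/2024-06-01.txt']
```

Combining both filters, as in this example, lets a recurring transfer pick up only new files from a particular directory.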
Also, a Docker-supported Linux server that can access the data to be transferred is required, with ports 80 and 443 open for outbound connections. The use case is on-premises transfer of data whose size is more than 1 TB.

For large amounts of on-premises data that would take too long to upload, use Transfer Appliance. Transfer Appliance is a secure, rackable, high-capacity storage server that you set up in your data center. You fill it with data and ship it to an ingest location, where the data is uploaded to Google. The data is secure: you control the encryption key, and Google erases the appliance after the transfer is complete. The process for using a Transfer Appliance is as follows: you request an appliance, and it is shipped in a tamper-evident case; data is transferred to the appliance; the appliance is shipped back to Google; data is loaded into Cloud Storage; and you are notified that it is available. Google uses tamper-evident seals on the shipping cases to and from the data ingest site. Data is encrypted to the AES-256 standard at the moment of capture. Once the transfer is complete, the appliance is erased per NIST 800-88 standards. You decrypt the data when you want to use it.

There's also a transfer service for BigQuery. The BigQuery Data Transfer Service automates data movement from SaaS applications to BigQuery on a scheduled, managed basis. The Data Transfer Service initially supports Google application sources like Google Ads, Campaign Manager, Google Ad Manager, and YouTube. There are also data connectors that allow easy data transfer from Teradata, Amazon Redshift, and Amazon S3 to BigQuery. The screenshots on the slide show that a source type is selected for a transfer, a schedule is configured, and a data destination is selected. For the transfer, the data formats are also configured.