You can export DB cluster snapshot data to an Amazon S3 bucket. The export process runs in the background and doesn't affect the performance of your active DB cluster.

When you export a DB cluster snapshot, Amazon Aurora extracts data from the snapshot and stores it in an Amazon S3 bucket. You can export manual snapshots and automated system snapshots. By default, all data in the snapshot is exported, but you can choose to export specific sets of databases, schemas, or tables.

The data is stored in an Apache Parquet format that is compressed and consistent. Individual Parquet files are usually about 20 GB in size.

After the data is exported, you can analyze it directly through tools like Amazon Athena or Amazon Redshift Spectrum. For more information on using Athena to read Parquet data, see Parquet SerDe in the Amazon Athena User Guide. For more information on using Redshift Spectrum to read Parquet data, see COPY from columnar data formats in the Amazon Redshift Database Developer Guide.

Feature availability and support vary across specific versions of each database engine and across AWS Regions. For more information on version and Region availability of exporting DB cluster snapshot data to S3, see Exporting snapshot data to Amazon S3.

Keep in mind the following limitations:

- Tables with slashes (/) in their names are skipped during export.
- Aurora PostgreSQL temporary and unlogged tables are skipped during export.
- If the data contains a large object, such as a BLOB or CLOB, that is close to or greater than 500 MB, the export fails.
- If a table contains a large row that is close to or greater than 2 GB, the table is skipped during export.

You can delete a snapshot while you're exporting its data to S3, but you're still charged for the storage costs for that snapshot until the export task has completed. You can't restore exported snapshot data from S3 to a new DB cluster.

We strongly recommend that you use a unique name for each export task. If you don't use a unique task name, you might receive the following error message:

ExportTaskAlreadyExistsFault: An error occurred (ExportTaskAlreadyExists) when calling the StartExportTask operation: The export task with the ID xxxxx already exists.
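Because a reused task name triggers the error above, it can help to check whether an identifier is already in use before starting a new export. The following is a minimal sketch using the AWS CLI's describe-export-tasks command; the identifier my-snapshot-export is a hypothetical placeholder.

```bash
# List any existing export task that already uses this identifier
# (my-snapshot-export is a hypothetical name).
aws rds describe-export-tasks \
  --export-task-identifier my-snapshot-export \
  --query 'ExportTasks[].{ID:ExportTaskIdentifier,Status:Status}'
```

An empty result means the identifier is free to use. The same call is also a convenient way to poll a task's Status while the export runs in the background.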
You use the following process to export DB snapshot data to an Amazon S3 bucket. For more details, see the following sections.

1. Use an existing automated or manual snapshot, or create a manual snapshot of a DB cluster.
2. Identify the S3 bucket where the snapshot is to be exported to, and provide the information needed to access the bucket. A bucket is a container for Amazon S3 objects or files. For more information, see Identifying the Amazon S3 bucket for export.
3. Create an AWS Identity and Access Management (IAM) role that grants the snapshot export task access to the S3 bucket. For more information, see Providing access to an Amazon S3 bucket using an IAM role.
4. Create a symmetric encryption AWS KMS key for the server-side encryption. The KMS key is used by the snapshot export task to set up AWS KMS server-side encryption when writing the export data to S3. The KMS key policy must include both the kms:Encrypt and kms:Decrypt permissions. If you have a deny statement in your KMS key policy, make sure to explicitly exclude the AWS service principal export.rds.amazonaws.com. For more information on using KMS keys in Amazon Aurora, see AWS KMS key management. You can use a KMS key within your AWS account, or you can use a cross-account KMS key. For more information, see Using a cross-account AWS KMS key.
5. Export the snapshot to Amazon S3 using the console or the start-export-task CLI command, as shown in the sketch after this list. For more information, see Exporting a snapshot to an Amazon S3 bucket.
6. To access your exported data in the Amazon S3 bucket, see Uploading, downloading, and managing objects in the Amazon Simple Storage Service User Guide.
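The following is a minimal end-to-end sketch of steps 3 and 5 using the AWS CLI. Every name in it is a hypothetical placeholder: the role rds-s3-export-role, the bucket amzn-s3-demo-bucket, the account ID, the Region, and the snapshot and KMS key ARNs. Adjust them, and the exact S3 actions your policy grants, to your environment.

```bash
# Step 3: create a role that the snapshot export service can assume.
aws iam create-role \
  --role-name rds-s3-export-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "export.rds.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Grant the role access to the target bucket and its objects.
aws iam put-role-policy \
  --role-name rds-s3-export-role \
  --policy-name rds-s3-export-policy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "s3:PutObject*",
        "s3:ListBucket",
        "s3:GetObject*",
        "s3:DeleteObject*",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ]
    }]
  }'

# Step 5: start the export. The task identifier must be unique.
aws rds start-export-task \
  --export-task-identifier my-snapshot-export \
  --source-arn arn:aws:rds:us-west-2:123456789012:cluster-snapshot:my-db-cluster-snapshot \
  --s3-bucket-name amzn-s3-demo-bucket \
  --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
  --kms-key-id arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab
```

If you only want specific sets of data rather than the whole snapshot, start-export-task also accepts an --export-only list of database, schema, or table identifiers (for example, --export-only mydatabase.mytable).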