
Amazon Web Services - Data Pipeline



AWS Data Pipeline is a web service designed to make it easier for users to integrate data spread across multiple AWS services and analyze it from a single location.

Using AWS Data Pipeline, data can be accessed from the source, processed, and then the results can be efficiently transferred to the respective AWS services.

How to Set Up a Data Pipeline?

Following are the steps to set up a data pipeline (a boto3 sketch of the same flow is given after the steps) −

Step 1 − Create the Pipeline using the following steps.

    Sign in to your AWS account.

    Use this link to open the AWS Data Pipeline console − https://console.aws.amazon.com/datapipeline/

    Select the region in the navigation bar.

    Click the Create New Pipeline button.

    Fill in the required details in the respective fields.

      In the Source field, choose Build using a template and then select this template − Getting Started using ShellCommandActivity.


      The Parameters section opens only when the template is selected. Leave the S3 input folder and Shell command to run with their default values. Click the folder icon next to S3 output folder and select the bucket.

      In Schedule, leave the values as default.

      In Pipeline Configuration, leave logging enabled. Click the folder icon under S3 location for logs and select the bucket.

      In Security/Access, leave the IAM roles set to default.

      Click the Activate button.
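
The same flow can also be driven from code. A minimal sketch follows, assuming boto3 (the AWS SDK for Python): it creates a pipeline, attaches a simplified ShellCommandActivity definition, and activates it. The bucket paths, role names, object ids, and shell command are illustrative placeholders, not values taken from the template above.

import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

# 1. Create an empty pipeline; uniqueId makes the call idempotent.
created = client.create_pipeline(
    name="GettingStartedShellCommand",
    uniqueId="getting-started-shell-command-001",
    description="Runs a shell command and stages its output to S3",
)
pipeline_id = created["pipelineId"]

# 2. Attach a simplified definition: a Default object, an EC2 resource to run
#    on, an S3 output data node, and the ShellCommandActivity itself.
definition = [
    {
        "id": "Default",
        "name": "Default",
        "fields": [
            {"key": "scheduleType", "stringValue": "ondemand"},
            {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            {"key": "pipelineLogUri", "stringValue": "s3://my-log-bucket/logs/"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        ],
    },
    {
        "id": "MyEc2Resource",
        "name": "MyEc2Resource",
        "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "terminateAfter", "stringValue": "30 Minutes"},
        ],
    },
    {
        "id": "S3Output",
        "name": "S3Output",
        "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://my-output-bucket/output/"},
        ],
    },
    {
        "id": "ShellCommandActivityObj",
        "name": "ShellCommandActivityObj",
        "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "stage", "stringValue": "true"},
            {"key": "command", "stringValue": "echo 'hello' > ${OUTPUT1_STAGING_DIR}/hello.txt"},
            {"key": "output", "refValue": "S3Output"},
            {"key": "runsOn", "refValue": "MyEc2Resource"},
        ],
    },
]

result = client.put_pipeline_definition(
    pipelineId=pipeline_id, pipelineObjects=definition
)
if result.get("errored"):
    raise RuntimeError(f"Invalid definition: {result['validationErrors']}")

# 3. Activate the pipeline (the equivalent of clicking the Activate button).
client.activate_pipeline(pipelineId=pipeline_id)
print(f"Activated pipeline {pipeline_id}")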

How to Delete a Pipeline?

Deleting the pipeline will also delete all associated objects.

Step 1 − Select the pipeline from the list of pipelines.

Step 2 − Click the Actions button and then choose Delete.


Step 3 − A confirmation prompt window opens. Click Delete.
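
The same deletion can be performed through the API. Below is a minimal sketch, again assuming boto3, that looks the pipeline up by name and deletes it; the pipeline name here is the placeholder used in the creation sketch above.

import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

# Equivalent of selecting the pipeline from the list of pipelines.
# (Only the first page of results is checked in this sketch.)
pipelines = client.list_pipelines()["pipelineIdList"]
target = next(
    (p for p in pipelines if p["name"] == "GettingStartedShellCommand"), None
)
if target is None:
    raise SystemExit("Pipeline not found")

# Equivalent of Actions -> Delete followed by the confirmation prompt.
# This also deletes all objects associated with the pipeline.
client.delete_pipeline(pipelineId=target["id"])
print(f"Deleted pipeline {target['id']}")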

Features of AWS Data Pipeline

Simple and cost-efficient − Its drag-and-drop features make it easy to create a pipeline on the console. Its visual pipeline creator provides a library of pipeline templates. These templates make it easier to create pipelines for tasks like processing log files, archiving data to Amazon S3, etc.

Reliable − Its infrastructure is designed for fault-tolerant execution of activities. If failures occur in the activity logic or data sources, AWS Data Pipeline automatically retries the activity. If the failure persists, it sends a failure notification. These notification alerts can even be configured for situations like successful runs, failures, or delays in activities (a configuration sketch follows this list).

Flexible − AWS Data Pipeline provides various features like scheduling, tracking, and error handling. It can be configured to take actions like running Amazon EMR jobs, executing SQL queries directly against databases, or executing custom applications running on Amazon EC2.
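
As an illustration of how such alerts and schedules can be configured, the sketch below shows an Amazon SNS alarm and a daily schedule expressed as pipeline objects, in the same key/stringValue/refValue format used in the creation sketch earlier. The topic ARN, role name, and object ids are placeholders.

# An SNS alarm object; an activity triggers it through an "onFail" (or
# "onSuccess" / "onLateAction") reference.
failure_alarm = {
    "id": "FailureAlarm",
    "name": "FailureAlarm",
    "fields": [
        {"key": "type", "stringValue": "SnsAlarm"},
        {"key": "topicArn", "stringValue": "arn:aws:sns:us-east-1:123456789012:pipeline-alerts"},
        {"key": "subject", "stringValue": "Data Pipeline activity failed"},
        {"key": "message", "stringValue": "An activity in the pipeline reported a failure."},
        {"key": "role", "stringValue": "DataPipelineDefaultRole"},
    ],
}

# A daily schedule object; activities and data nodes point at it with a
# "schedule" reference when the Default object uses scheduleType "cron".
daily_schedule = {
    "id": "DailySchedule",
    "name": "DailySchedule",
    "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "period", "stringValue": "1 days"},
        {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
    ],
}

# An activity can then opt in to both by carrying extra fields such as:
#   {"key": "onFail",   "refValue": "FailureAlarm"}
#   {"key": "schedule", "refValue": "DailySchedule"}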
