your code in one of two ways: • Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml file: mvn package -Dflink.version=1.8.2 • Use your development environment. See your development environment documentation for details. Note The provided source code relies on libraries from Java 1.8. Ensure that your project's Java version is 1.8. You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP). 2. If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set. If the application compiles successfully, the following file is created: target/aws-kinesis-analytics-java-apps-1.0.jar Upload the Apache Flink streaming Java code In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code. Getting started: Flink 1.8.2 - deprecating 176 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To upload the application code 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose Create bucket. 3. 4. 5. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next. In the Configure options step, keep the settings as they are, and choose Next. In the Set permissions step, keep the settings as they are, and choose Next. 6. Choose Create bucket. 7. 8. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. Choose Next. 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI. Note When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately. Topics • Create and run the application (console) • Create and run the application (AWS CLI) Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Getting started: Flink 1.8.2 - deprecating 177 Managed Service for Apache Flink Create the application Managed Service for Apache Flink Developer Guide 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My java test app. • For Runtime, choose Apache Flink. • Leave the version pulldown as Apache Flink 1.8 (Recommended Version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. 
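If you prefer to script the bucket creation and code upload performed earlier in this section instead of using the console, the AWS CLI can do the same work. This is a minimal sketch under the tutorial's assumptions (the ka-app-code-<username> bucket name, the us-west-2 Region, and the JAR built in the previous step); adjust the names for your account:

$ aws s3api create-bucket \
      --bucket ka-app-code-<username> \
      --region us-west-2 \
      --create-bucket-configuration LocationConstraint=us-west-2
$ aws s3 cp target/aws-kinesis-analytics-java-apps-1.0.jar \
      s3://ka-app-code-<username>/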
Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. Getting started: Flink 1.8.2 - deprecating 178 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/aws-kinesis-analytics-java- apps-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ Getting started: Flink 1.8.2 - deprecating 179 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream"
{ "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/aws-kinesis-analytics-java- apps-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ Getting started: Flink 1.8.2 - deprecating 179 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Enter the following application properties and values: Group ID Key Value ProducerConfigProp flink.inputstream. LATEST erties initpos ProducerConfigProp aws.region us-west-2 erties Getting started: Flink 1.8.2 - deprecating 180 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key Value ProducerConfigProp AggregationEnabled false erties 5. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 6. For CloudWatch logging, select the Enable check box. 7. Choose Update. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application 1. On the MyApplication page, choose Run. Confirm the action. 2. When the application is running, refresh the page. The console shows the Application graph. Stop the application On the MyApplication page, choose Stop. Confirm the action. Update the application Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code. On the MyApplication page, choose Configure. Update the application settings and choose Update. Getting started: Flink 1.8.2 - deprecating 181 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create and run the application (AWS CLI) In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. 
Create a Permissions Policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "S3", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": ["arn:aws:s3:::ka-app-code-username", "arn:aws:s3:::ka-app-code-username/*" ] }, { Getting started: Flink 1.8.2 - deprecating 182 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Note To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what
the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. Getting started: Flink 1.8.2 - deprecating 183 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. In the navigation pane, choose Roles, Create Role. 3. Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics. Choose Next: Permissions. 4. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 5. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role. 6. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, the section called “Create a Permissions Policy”. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the Managed Service for Apache Flink application 1. Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix Getting started: Flink 1.8.2 - deprecating 184 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. 
{ "ApplicationName": "test", "ApplicationDescription": "my java test app", "RuntimeEnvironment": "FLINK-1_8", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "aws-kinesis-analytics-java-apps-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2", "AggregationEnabled" : "false" } }, { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2" } } ] } } } 2. Execute the CreateApplication action with the preceding request to create the application: Getting started: Flink 1.8.2 - deprecating 185 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide aws kinesisanalyticsv2 create-application --cli-input-json file:// create_request.json The application is now created. You start the application in the next step. Start the application In this section, you use the StartApplication action to start the application. To start the application 1. Save the following JSON code to a file named start_request.json. { "ApplicationName": "test", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { Getting started: Flink 1.8.2 - deprecating 186 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationName": "test" } 2. Execute the StopApplication action with the following request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”. Update environment
following JSON code to a file named stop_request.json. { Getting started: Flink 1.8.2 - deprecating 186 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationName": "test" } 2. Execute the StopApplication action with the following request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. {"ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2", "AggregationEnabled" : "false" } }, { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2" Getting started: Flink 1.8.2 - deprecating 187 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } ] } } } 2. Execute the UpdateApplication action with the preceding request to update environment properties: aws kinesisanalyticsv2 update-application --cli-input-json file:// update_properties_request.json Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action. Note To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the the section called “Create two Amazon Kinesis data streams” section. 
{ "ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { Getting started: Flink 1.8.2 - deprecating 188 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username", "FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar", "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU" } } } } } Next step Step 4: Clean up AWS resources Step 4: Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. In the Managed Service for Apache Flink panel, choose MyApplication. 3. Choose Configure. 4. 5. In the Snapshots section, choose Disable and then choose Update. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink Getting started: Flink 1.8.2 - deprecating 189 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Getting started: Flink 1.8.2 - deprecating 190 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Getting started: Flink 1.6.2 - deprecating Note Apache Flink versions 1.6, 1.8, and 1.11 have
the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Getting started: Flink 1.8.2 - deprecating 190 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Getting started: Flink 1.6.2 - deprecating Note Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We plan to deprecate these versions in Amazon Managed Service for Apache Flink on November 5, 2024. Starting from this date, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. You can upgrade your applications statefully using the in-place version upgrades feature in Amazon Managed Service for Apache Flink For more information, see Use in-place version upgrades for Apache Flink. This topic contains a version of the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial that uses Apache Flink 1.6.2. Topics • Components of a Managed Service for Apache Flink application • Prerequisites for completing the exercises • Step 1: Set up an AWS account and create an administrator user • Step 2: Set up the AWS Command Line Interface (AWS CLI) • Step 3: Create and run a Managed Service for Apache Flink application • Step 4: Clean up AWS resources Components of a Managed Service for Apache Flink application To process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime. a Managed Service for Apache Flink has the following components: • Runtime properties: You can use runtime properties to configure your application without recompiling your application code. • Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources. Getting started: Flink 1.6.2 - deprecating 191 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators. • Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks. After you create, compile, and package your application, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data. Prerequisites for completing the exercises To complete the steps in this guide, you must have the following: • Java Development Kit (JDK) version 8. 
Set the JAVA_HOME environment variable to point to your JDK install location. • We recommend that you use a development environment (such as Eclipse Java Neon or IntelliJ Idea) to develop and compile your application. • Git Client. Install the Git client if you haven't already. • Apache Maven Compiler Plugin. Maven must be in your working path. To test your Apache Maven installation, enter the following: $ mvn -version To get started, go to Step 1: Set up an AWS account and create an administrator user. Step 1: Set up an AWS account and create an administrator user Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. Getting started: Flink 1.6.2 - deprecating 192 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a
call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. Getting started: Flink 1.6.2 - deprecating 193 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least- privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. Which user needs programmatic access? To By Workforce identity (Users managed in IAM Identity Center) Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or Following the instructions for the interface that you want to use. AWS APIs. Getting started: Flink 1.6.2 - deprecating 194 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Which user needs programmatic access? To By • For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide. • For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide. 
IAM: To use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, follow the instructions in Using temporary credentials with AWS resources in the IAM User Guide. IAM (Not recommended): To use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs, follow the instructions for the interface that you want to use. • For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide. • For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide. • For AWS APIs, see Managing access keys for IAM users in the IAM User Guide. Step 2: Set up the AWS Command Line Interface (AWS CLI) In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink. Note The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations.
IAM user credentials in the AWS Command Line Interface User Guide. • For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide. • For AWS APIs, see Managing access keys for IAM users in the IAM User Guide. Step 2: Set up the AWS Command Line Interface (AWS CLI) In this step, you download and configure the AWS CLI to use with a Managed Service for Apache Flink. Note The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations. Note If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in Getting started: Flink 1.6.2 - deprecating 196 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command: aws --version The exercises in this tutorial require the following AWS CLI version or later: aws-cli/1.16.63 To set up the AWS CLI 1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide: • Installing the AWS Command Line Interface • Configuring the AWS CLI 2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide. [profile adminuser] aws_access_key_id = adminuser access key ID aws_secret_access_key = adminuser secret access key region = aws-region For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference. Note The example code and commands in this tutorial use the US West (Oregon) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use. 3. Verify the setup by entering the following help command at the command prompt: Getting started: Flink 1.6.2 - deprecating 197 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide aws help After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup. Next step Step 3: Create and run a Managed Service for Apache Flink application Step 3: Create and run a Managed Service for Apache Flink application In this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink. This section contains the following steps: • Create two Amazon Kinesis data streams • Write sample records to the input stream • Download and examine the Apache Flink streaming Java code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application Create two Amazon Kinesis data streams Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams. You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. To create the data streams (AWS CLI) 1. 
To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command. Getting started: Flink 1.6.2 - deprecating 198 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide $ aws kinesis create-stream \ --stream-name ExampleInputStream \ --shard-count 1 \ --region us-west-2 \ --profile adminuser 2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream. $ aws kinesis create-stream \ --stream-name ExampleOutputStream \ --shard-count 1 \ --region us-west-2 \ --profile adminuser Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { "EVENT_TIME": datetime.datetime.now().isoformat(), "TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]), Getting started: Flink 1.6.2 - deprecating 199 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "PRICE": round(random.random() * 100, 2), } def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey" ) if __name__ == "__main__": generate(STREAM_NAME, boto3.client("kinesis")) 2. Later in the tutorial, you run the stock.py script to send
Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { "EVENT_TIME": datetime.datetime.now().isoformat(), "TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]), Getting started: Flink 1.6.2 - deprecating 199 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "PRICE": round(random.random() * 100, 2), } def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey" ) if __name__ == "__main__": generate(STREAM_NAME, boto3.client("kinesis")) 2. Later in the tutorial, you run the stock.py script to send data to the application. $ python stock.py Download and examine the Apache Flink streaming Java code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Clone the remote repository using the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 2. Navigate to the amazon-kinesis-data-analytics-java-examples/ GettingStarted_1_6 directory. Note the following about the application code: • A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the a Managed Service for Apache Flink libraries. • The BasicStreamingJob.java file contains the main method that defines the application's functionality. Getting started: Flink 1.6.2 - deprecating 200 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); • Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object. • The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties and createSinkFromApplicationProperties methods to create the connectors. These methods read the application's properties to configure the connectors. For more information about runtime properties, see Use runtime properties. Compile the application code In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Prerequisites for completing the exercises. Note In order to use the Kinesis connector with versions of Apache Flink prior to 1.11, you need to download the source code for the connector and build it as described in the Apache Flink documentation. To compile the application code 1. To use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways: • Use the command-line Maven tool. 
Create your JAR file by running the following command in the directory that contains the pom.xml file: mvn package Getting started: Flink 1.6.2 - deprecating 201 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The -Dflink.version parameter is not required for Managed Service for Apache Flink Runtime version 1.0.1; it is only required for version 1.1.0 and later. For more information, see the section called “Specify your application's Apache Flink version”. • Use your development environment. See your development environment documentation for details. You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP). 2. If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set. If the application compiles successfully, the following file is created: target/aws-kinesis-analytics-java-apps-1.0.jar Upload the Apache Flink streaming Java code In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code. To upload the application code 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose Create bucket. 3. 4. 5. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next. In the Configure options step, keep the settings as they are, and choose Next. In the Set permissions step, keep the settings as they are, and choose Next. 6. Choose Create bucket. 7. 8. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. Choose Next. Getting started: Flink 1.6.2 - deprecating 202 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 9. In the Set permissions step, keep the settings as they are. Choose Next. 10. In the Set properties step, keep the settings as they are. Choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application You can create and run a Managed Service for Apache Flink application
to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. Choose Next. Getting started: Flink 1.6.2 - deprecating 202 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 9. In the Set permissions step, keep the settings as they are. Choose Next. 10. In the Set properties step, keep the settings as they are. Choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI. Note When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately. Topics • Create and run the application (console) • Create and run the application (AWS CLI) Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My java test app. • For Runtime, choose Apache Flink. Getting started: Flink 1.6.2 - deprecating 203 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note Managed Service for Apache Flink uses Apache Flink version 1.8.2 or 1.6.2. • Change the version pulldown to Apache Flink 1.6. 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", Getting started: Flink 1.6.2 - deprecating 204 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/java-getting-started-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", Getting started: Flink 1.6.2 - deprecating 205 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter java-getting-started-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Enter the following application properties and values: Group ID Key Value ProducerConfigProp flink.inputstream. LATEST erties initpos ProducerConfigProp aws.region us-west-2 erties ProducerConfigProp AggregationEnabled false erties 5. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 6. For CloudWatch logging, select the Enable check box. 7. Choose Update. Getting started: Flink 1.6.2 - deprecating 206 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application 1. On the MyApplication page, choose Run. Confirm the action. 2. When the application is running, refresh the page. The console shows the Application graph. Stop the application On the MyApplication page, choose Stop. Confirm the action. Update the application Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application
Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application 1. On the MyApplication page, choose Run. Confirm the action. 2. When the application is running, refresh the page. The console shows the Application graph. Stop the application On the MyApplication page, choose Stop. Confirm the action. Update the application Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code. On the MyApplication page, choose Configure. Update the application settings and choose Update. Create and run the application (AWS CLI) In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a permissions policy First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Getting started: Flink 1.6.2 - deprecating 207 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "S3", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": ["arn:aws:s3:::ka-app-code-username", "arn:aws:s3:::ka-app-code-username/*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Getting started: Flink 1.6.2 - deprecating 208 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. 
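The procedure below creates this role in the console, which sets up the trust policy for you. If you prefer the AWS CLI, a role with an equivalent trust policy can be created roughly as in the following sketch; the kinesisanalytics.amazonaws.com service principal is an assumption based on the service's legacy Kinesis Data Analytics name, so verify it before using it:

# NOTE: the service principal below is an assumption (legacy Kinesis Data Analytics name); verify it for your setup.
$ aws iam create-role \
      --role-name MF-stream-rw-role \
      --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "Service": "kinesisanalytics.amazonaws.com" },
            "Action": "sts:AssumeRole"
          }
        ]
      }'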
The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles, Create Role. 3. Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics. Choose Next: Permissions. 4. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 5. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role. 6. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data Getting started: Flink 1.6.2 - deprecating 209 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide stream. So you attach the policy that you created in the previous step, the section called “Create a permissions policy”. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application uses to access resources. Make a note of the ARN
Kinesis data Getting started: Flink 1.6.2 - deprecating 209 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide stream. So you attach the policy that you created in the previous step, the section called “Create a permissions policy”. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the Managed Service for Apache Flink application 1. Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. { "ApplicationName": "test", "ApplicationDescription": "my java test app", "RuntimeEnvironment": "FLINK-1_6", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "java-getting-started-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ Getting started: Flink 1.6.2 - deprecating 210 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2", "AggregationEnabled" : "false" } }, { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2" } } ] } } } 2. Execute the CreateApplication action with the preceding request to create the application: aws kinesisanalyticsv2 create-application --cli-input-json file:// create_request.json The application is now created. You start the application in the next step. Start the application In this section, you use the StartApplication action to start the application. To start the application 1. Save the following JSON code to a file named start_request.json. { "ApplicationName": "test", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } Getting started: Flink 1.6.2 - deprecating 211 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "test" } 2. Execute the StopApplication action with the following request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. 
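To confirm the result of the start and stop calls, you can list your applications and check their status. A minimal sketch, assuming the same Region and credentials you configured for the AWS CLI earlier:

$ aws kinesisanalyticsv2 list-applications

The response contains an ApplicationSummaries array with each application's ApplicationStatus and ApplicationVersionId.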
Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. Getting started: Flink 1.6.2 - deprecating 212 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. {"ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "flink.stream.initpos" : "LATEST", "aws.region" : "us-west-2", "AggregationEnabled" : "false" } }, { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2" } } ] } } } 2. Execute the UpdateApplication action with the preceding request to update environment properties: aws kinesisanalyticsv2 update-application --cli-input-json file:// update_properties_request.json Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action. Getting started: Flink 1.6.2 - deprecating 213 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the the section called “Create two Amazon Kinesis data streams” section. { "ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate":
package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the the section called “Create two Amazon Kinesis data streams” section. { "ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username", "FileKeyUpdate": "java-getting-started-1.0.jar" } } } } } Step 4: Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Getting started: Flink 1.6.2 - deprecating 214 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Managed Service for Apache Flink application 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. In the Managed Service for Apache Flink panel, choose MyApplication. 3. Choose Configure. 4. 5. In the Snapshots section, choose Disable and then choose Update. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Getting started: Flink 1.6.2 - deprecating 215 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Earlier version (legacy) examples for Managed Service for Apache Flink Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. 
This section provides examples of creating and working with applications in Managed Service for Apache Flink. They include example code and step-by-step instructions to help you create Managed Service for Apache Flink applications and test your results. Before you explore these examples, we recommend that you first review the following: • How it works • Tutorial: Get started using the DataStream API in Managed Service for Apache Flink Note These examples assume that you are using the US West (Oregon) Region (us-west-2). If you are using a different Region, update your application code, commands, and IAM roles appropriately. Topics • DataStream API examples • Python examples • Scala examples Legacy examples 216 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide DataStream API examples The following examples demonstrate how to create applications using the Apache Flink DataStream API. Topics • Example: Tumbling window • Example: Sliding window • Example: Writing to an Amazon S3 bucket • Tutorial: Using a Managed Service for Apache Flink application to replicate data from one topic in an MSK cluster to another in a VPC • Example: Use an EFO consumer with a Kinesis data stream • Example: Writing to Firehose • Example: Read from a Kinesis stream in a different account • Tutorial: Using a custom truststore with Amazon MSK Example: Tumbling window Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. In this exercise, you create a Managed Service for Apache Flink application that aggregates data using a tumbling window. Aggregration is enabled by default in Flink. To disable it, use the following: sink.producer.aggregation-enabled' = 'false' Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. Legacy examples 217 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide This topic contains the following sections: • Create dependent resources • Write
Examples for creating and working with Managed Service for Apache Flink applications. In this exercise, you create a Managed Service for Apache Flink application that aggregates data using a tumbling window. Aggregration is enabled by default in Flink. To disable it, use the following: sink.producer.aggregation-enabled' = 'false' Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. Legacy examples 217 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams (ExampleInputStream and ExampleOutputStream) • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). Legacy examples 218 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Java application code for this example is available from GitHub. To download the application code, do the following: Legacy examples 219 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/TumblingWindow directory. The application code is located in the TumblingWindowStreamingJob.java file. 
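While the stock.py script from the previous section is running, you can optionally confirm that records are reaching the input stream before you examine the application code. The following AWS CLI sketch assumes a single-shard stream; shardId-000000000000 is the default ID of the first shard and might differ in your account:

# Get an iterator that starts at the oldest available record in the shard.
SHARD_ITERATOR=$(aws kinesis get-shard-iterator \
    --stream-name ExampleInputStream \
    --shard-id shardId-000000000000 \
    --shard-iterator-type TRIM_HORIZON \
    --query 'ShardIterator' --output text)

# Read a few records; the Data field in the response is Base64-encoded JSON.
aws kinesis get-records --shard-iterator "$SHARD_ITERATOR" --limit 5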
Note the following about the application code: • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); • Add the following import statement: import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows; // flink 1.13 onward • The application uses the timeWindow operator to find the count of values for each stock symbol over a 5-second tumbling window. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink: input.flatMap(new Tokenizer()) // Tokenizer for generating words .keyBy(0) // Logically partition the stream for each word .window(TumblingProcessingTimeWindows.of(Time.seconds(5))) // Flink 1.13 onward .sum(1) // Sum the number of words per partition .map(value -> value.f0 + "," + value.f1.toString() + "\n") .addSink(createSinkFromStaticConfig()); Compile the application code To compile the application, do the following: Legacy examples 220 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. Compile the application with the following command: mvn package -Dflink.version=1.15.3 Note The provided source code relies on libraries from Java 11. Compiling the application creates the application JAR file (target/aws-kinesis-analytics- java-apps-1.0.jar). Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure,
your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: Legacy examples 221 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. Note Managed Service for Apache Flink uses Apache Flink version 1.15.2. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. 
{ "Version": "2012-10-17", Legacy examples 222 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/aws-kinesis-analytics-java- apps-1.0.jar" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", Legacy examples 223 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 5. For CloudWatch logging, select the Enable check box. 6. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Legacy examples 224 Managed Service for Apache Flink Run the application Managed Service for Apache Flink Developer Guide 1. On the MyApplication page, choose Run. Leave the Run without snapshot option selected, and confirm the action. 2. When the application is running, refresh the page. The console shows the Application graph. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. 
Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose
the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Legacy examples 225 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Sliding window Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. Legacy examples 226 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams (ExampleInputStream and ExampleOutputStream). • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. 
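If you prefer the command line to the console, you can create the same dependent resources with the AWS CLI. The following is a minimal sketch that assumes the us-west-2 Region used elsewhere in this tutorial; replace <username> with your bucket name suffix:

# Create the input and output streams with one shard each.
aws kinesis create-stream --stream-name ExampleInputStream --shard-count 1 --region us-west-2
aws kinesis create-stream --stream-name ExampleOutputStream --shard-count 1 --region us-west-2

# Create the application code bucket (bucket names must be globally unique).
aws s3api create-bucket --bucket ka-app-code-<username> --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2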
Legacy examples 227 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { "EVENT_TIME": datetime.datetime.now().isoformat(), "TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]), "PRICE": round(random.random() * 100, 2), } def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey" ) if __name__ == "__main__": generate(STREAM_NAME, boto3.client("kinesis")) 2. Run the stock.py script: Legacy examples 228 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/SlidingWindow directory. The application code is located in the SlidingWindowStreamingJobWithParallelism.java file. Note the following about the application code: • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); • The application uses the timeWindow operator to find the minimum value for each stock symbol over a 10-second window that slides by 5 seconds. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink: • Add the following import statement: import
clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/SlidingWindow directory. The application code is located in the SlidingWindowStreamingJobWithParallelism.java file. Note the following about the application code: • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); • The application uses the timeWindow operator to find the minimum value for each stock symbol over a 10-second window that slides by 5 seconds. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink: • Add the following import statement: import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows; // flink 1.13 onward Legacy examples 229 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • The application uses the timeWindow operator to find the count of values for each stock symbol over a 5-second tumbling window. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink: input.flatMap(new Tokenizer()) // Tokenizer for generating words .keyBy(0) // Logically partition the stream for each word .window(TumblingProcessingTimeWindows.of(Time.seconds(5))) //Flink 1.13 onward .sum(1) // Sum the number of words per partition .map(value -> value.f0 + "," + value.f1.toString() + "\n") .addSink(createSinkFromStaticConfig()); Compile the application code To compile the application, do the following: 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. Compile the application with the following command: mvn package -Dflink.version=1.15.3 Note The provided source code relies on libraries from Java 11. Compiling the application creates the application JAR file (target/aws-kinesis-analytics- java-apps-1.0.jar). Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket that you created in the Create dependent resources section. 1. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and then choose Upload. Legacy examples 230 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. 
For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Legacy examples 231 Managed Service for Apache Flink Edit the IAM policy Managed Service for Apache Flink Developer Guide Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/aws-kinesis-analytics-java- apps-1.0.jar" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, Legacy examples 232 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under
Flink Managed Service for Apache Flink Developer Guide { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 5. For CloudWatch logging, select the Enable check box. 6. Choose Update. Legacy examples 233 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Configure the application parallelism This application example uses parallel execution of tasks. The following application code sets the parallelism of the min operator: .setParallelism(3) // Set parallelism for the min operator The application parallelism can't be greater than the provisioned parallelism, which has a default of 1. To increase your application's parallelism, use the following AWS CLI action: aws kinesisanalyticsv2 update-application --application-name MyApplication --current-application-version-id <VersionId> --application-configuration-update "{\"FlinkApplicationConfigurationUpdate \": { \"ParallelismConfigurationUpdate\": {\"ParallelismUpdate\": 5, \"ConfigurationTypeUpdate\": \"CUSTOM\" }}}" You can retrieve the current application version ID using the DescribeApplication or ListApplications actions. Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Legacy examples 234 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Sliding Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. In the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. 
Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Legacy examples 235 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Writing to an Amazon S3 bucket In this exercise, you create a Managed Service for Apache Flink that has a Kinesis data stream as a source and an Amazon S3 bucket as a sink. Using the sink, you can verify the output of the application in the Amazon S3 console. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code Legacy examples 236 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Modify the application code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Verify the application output • Optional: Customize the source and
for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code Legacy examples 236 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Modify the application code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Verify the application output • Optional: Customize the source and sink • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink for this exercise, you create the following dependent resources: • A Kinesis data stream (ExampleInputStream). • An Amazon S3 bucket to store the application's code and output (ka-app-code-<username>) Note Managed Service for Apache Flink cannot write data to Amazon S3 with server-side encryption enabled on Managed Service for Apache Flink. You can create the Kinesis stream and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Create two folders (code and data) in the Amazon S3 bucket. The application creates the following CloudWatch resources if they don't already exist: • A log group called /AWS/KinesisAnalytics-java/MyApplication. • A log stream called kinesis-analytics-log-stream. Legacy examples 237 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: Legacy examples 238 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/S3Sink directory. The application code is located in the S3StreamingSinkJob.java file. 
Note the following about the application code: • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); • You need to add the following import statement: import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows; • The application uses an Apache Flink S3 sink to write to Amazon S3. The sink reads messages in a tumbling window, encodes messages into S3 bucket objects, and sends the encoded objects to the S3 sink. The following code encodes objects for sending to Amazon S3: input.map(value -> { // Parse the JSON JsonNode jsonNode = jsonParser.readValue(value, JsonNode.class); Legacy examples 239 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide return new Tuple2<>(jsonNode.get("ticker").toString(), 1); }).returns(Types.TUPLE(Types.STRING, Types.INT)) .keyBy(v -> v.f0) // Logically partition the stream for each word .window(TumblingProcessingTimeWindows.of(Time.minutes(1))) .sum(1) // Count the appearances by ticker per partition .map(value -> value.f0 + " count: " + value.f1.toString() + "\n") .addSink(createS3SinkFromStaticConfig()); Note The application uses a Flink StreamingFileSink object to write to Amazon S3. For more information about the StreamingFileSink, see StreamingFileSink in the Apache Flink documentation. Modify the application code In this section, you modify the application code to write output to your Amazon S3 bucket. Update the following line with your user name to specify the application's output location: private static final String s3SinkPath = "s3a://ka-app-code-<username>/data"; Compile the application code To compile the application, do the following: 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. Compile the application with the following command: mvn package -Dflink.version=1.15.3 Compiling the application creates the application JAR file (target/aws-kinesis-analytics- java-apps-1.0.jar). Legacy examples 240 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The provided source code relies
name to specify the application's output location: private static final String s3SinkPath = "s3a://ka-app-code-<username>/data"; Compile the application code To compile the application, do the following: 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. Compile the application with the following command: mvn package -Dflink.version=1.15.3 Compiling the application creates the application JAR file (target/aws-kinesis-analytics- java-apps-1.0.jar). Legacy examples 240 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The provided source code relies on libraries from Java 11. Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, navigate to the code folder, and choose Upload. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Legacy examples 241 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. • Leave the version as Apache Flink version 1.15.2 (Recommended version). 6. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 7. Choose Create application. Note When you create a Managed Service for Apache Flink using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data stream. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. 
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. Replace <username> with your user name. Legacy examples 242 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Sid": "S3", "Effect": "Allow", "Action": [ "s3:Abort*", "s3:DeleteObject*", "s3:GetObject*", "s3:GetBucket*", "s3:List*", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::ka-app-code-<username>", "arn:aws:s3:::ka-app-code-<username>/*" ] }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:region:account-id:log-group:*" ] }, { "Sid": "ListCloudwatchLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:region:account-id:log-group:%LOG_GROUP_PLACEHOLDER %:log-stream:*" ] }, { "Sid": "PutCloudwatchLogs", "Effect": "Allow", "Action": [ "logs:PutLogEvents" Legacy examples 243 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide ], "Resource": [ "arn:aws:logs:region:account-id:log-group:%LOG_GROUP_PLACEHOLDER %:log-stream:%LOG_STREAM_PLACEHOLDER%" ] } , { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter code/aws-kinesis-analytics-java- apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 5. For CloudWatch logging, select the Enable check box. 6. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication Legacy examples 244 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide
• For Path to Amazon S3 object, enter code/aws-kinesis-analytics-java- apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 5. For CloudWatch logging, select the Enable check box. 6. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication Legacy examples 244 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Run the application 1. On the MyApplication page, choose Run. Leave the Run without snapshot option selected, and confirm the action. 2. When the application is running, refresh the page. The console shows the Application graph. Verify the application output In the Amazon S3 console, open the data folder in your S3 bucket. After a few minutes, objects containing aggregated data from the application will appear. Note Aggregration is enabled by default in Flink. To disable it, use the following: sink.producer.aggregation-enabled' = 'false' Optional: Customize the source and sink In this section, you customize settings on the source and sink objects. Note After changing the code sections described in the sections following, do the following to reload the application code: • Repeat the steps in the the section called “Compile the application code” section to compile the updated application code. • Repeat the steps in the the section called “Upload the Apache Flink streaming Java code” section to upload the updated application code. Legacy examples 245 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • On the application's page in the console, choose Configure and then choose Update to reload the updated application code into your application. This section contains the following sections: • Configure data partitioning • Configure read frequency • Configure write buffering Configure data partitioning In this section, you configure the names of the folders that the streaming file sink creates in the S3 bucket. You do this by adding a bucket assigner to the streaming file sink. To customize the folder names created in the S3 bucket, do the following: 1. Add the following import statements to the beginning of the S3StreamingSinkJob.java file: import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy; import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner; 2. 
Update the createS3SinkFromStaticConfig() method in the code to look like the following: private static StreamingFileSink<String> createS3SinkFromStaticConfig() { final StreamingFileSink<String> sink = StreamingFileSink .forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder<String>("UTF-8")) .withBucketAssigner(new DateTimeBucketAssigner("yyyy-MM-dd--HH")) .withRollingPolicy(DefaultRollingPolicy.create().build()) .build(); return sink; } Legacy examples 246 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The preceding code example uses the DateTimeBucketAssigner with a custom date format to create folders in the S3 bucket. The DateTimeBucketAssigner uses the current system time to create bucket names. If you want to create a custom bucket assigner to further customize the created folder names, you can create a class that implements BucketAssigner. You implement your custom logic by using the getBucketId method. A custom implementation of BucketAssigner can use the Context parameter to obtain more information about a record in order to determine its destination folder. Configure read frequency In this section, you configure the frequency of reads on the source stream. The Kinesis Streams consumer reads from the source stream five times per second by default. This frequency will cause issues if there is more than one client reading from the stream, or if the application needs to retry reading a record. You can avoid these issues by setting the read frequency of the consumer. To set the read frequency of the Kinesis consumer, you set the SHARD_GETRECORDS_INTERVAL_MILLIS setting. The following code example sets the SHARD_GETRECORDS_INTERVAL_MILLIS setting to one second: kinesisConsumerConfig.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "1000"); Configure write buffering In this section, you configure the write frequency and other settings of the sink. By default, the application writes to the destination bucket every minute. You can change this interval and other settings by configuring the DefaultRollingPolicy object. Note The Apache Flink streaming file sink writes to its output bucket every time the application creates a checkpoint. The application creates a checkpoint every minute by default. To increase the write interval of the S3 sink, you must also increase the checkpoint interval. Legacy examples 247 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To configure the DefaultRollingPolicy object, do the following: 1. Increase the application's CheckpointInterval setting. The following input for the UpdateApplication action sets the checkpoint interval to 10 minutes: { "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "CheckpointConfigurationUpdate": { "ConfigurationTypeUpdate" : "CUSTOM", "CheckpointIntervalUpdate": 600000 } } }, "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 5 } To use the preceding code, specify
time the application creates a checkpoint. The application creates a checkpoint every minute by default. To increase the write interval of the S3 sink, you must also increase the checkpoint interval. Legacy examples 247 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To configure the DefaultRollingPolicy object, do the following: 1. Increase the application's CheckpointInterval setting. The following input for the UpdateApplication action sets the checkpoint interval to 10 minutes: { "ApplicationConfigurationUpdate": { "FlinkApplicationConfigurationUpdate": { "CheckpointConfigurationUpdate": { "ConfigurationTypeUpdate" : "CUSTOM", "CheckpointIntervalUpdate": 600000 } } }, "ApplicationName": "MyApplication", "CurrentApplicationVersionId": 5 } To use the preceding code, specify the current application version. You can retrieve the application version by using the ListApplications action. 2. Add the following import statement to the beginning of the S3StreamingSinkJob.java file: import java.util.concurrent.TimeUnit; 3. Update the createS3SinkFromStaticConfig method in the S3StreamingSinkJob.java file to look like the following: private static StreamingFileSink<String> createS3SinkFromStaticConfig() { final StreamingFileSink<String> sink = StreamingFileSink .forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder<String>("UTF-8")) .withBucketAssigner(new DateTimeBucketAssigner("yyyy-MM-dd--HH")) .withRollingPolicy( DefaultRollingPolicy.create() .withRolloverInterval(TimeUnit.MINUTES.toMillis(8)) .withInactivityInterval(TimeUnit.MINUTES.toMillis(5)) .withMaxPartSize(1024 * 1024 * 1024) .build()) .build(); Legacy examples 248 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide return sink; } The preceding code example sets the frequency of writes to the Amazon S3 bucket to 8 minutes. For more information about configuring the Apache Flink streaming file sink, see Row-encoded Formats in the Apache Flink documentation. Clean up AWS resources This section includes procedures for cleaning up AWS resources that you created in the Amazon S3 tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data stream • Delete your Amazon S3 objects and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. In the Managed Service for Apache Flink panel, choose MyApplication. 3. On the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data stream 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. In the Kinesis Data Streams panel, choose ExampleInputStream. 3. On the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. Legacy examples 249 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Amazon S3 objects and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. On the navigation bar, choose Policies. 3. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. 
Choose Policy Actions and then choose Delete. 6. On the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. On the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Tutorial: Using a Managed Service for Apache Flink application to replicate data from one topic in an MSK cluster to another in a VPC Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. Legacy examples 250 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The following tutorial demonstrates how to create an Amazon VPC with an Amazon MSK cluster and two topics, and how to create a Managed Service for Apache Flink application that reads from one Amazon MSK topic and writes to another. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This tutorial contains the following sections: • Create an Amazon VPC with an Amazon MSK cluster • Create the application code • Upload the Apache Flink streaming Java code • Create the application • Configure the application • Run the application • Test the application Create an Amazon VPC with an Amazon MSK cluster To create a sample VPC and Amazon MSK cluster to access from a Managed Service for Apache Flink application, follow the Getting Started Using Amazon MSK tutorial. When completing the tutorial, note the following: • In Step 3: Create a Topic, repeat the kafka-topics.sh --create command to create a destination topic named AWSKafkaTutorialTopicDestination: bin/kafka-topics.sh --create --zookeeper ZooKeeperConnectionString --replication- factor 3 --partitions 1 --topic AWSKafkaTutorialTopicDestination • Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn with the ARN of your MSK cluster): aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn {... Legacy examples 251 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094"
Started Using Amazon MSK tutorial. When completing the tutorial, note the following: • In Step 3: Create a Topic, repeat the kafka-topics.sh --create command to create a destination topic named AWSKafkaTutorialTopicDestination: bin/kafka-topics.sh --create --zookeeper ZooKeeperConnectionString --replication- factor 3 --partitions 1 --topic AWSKafkaTutorialTopicDestination • Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn with the ARN of your MSK cluster): aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn {... Legacy examples 251 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094" } • When following the steps in the tutorials, be sure to use your selected AWS Region in your code, commands, and console entries. Create the application code In this section, you'll download and compile the application JAR file. We recommend using Java 11. The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. The application code is located in the amazon-kinesis-data-analytics-java- examples/KafkaConnectors/KafkaGettingStartedJob.java file. You can examine the code to familiarize yourself with the structure of Managed Service for Apache Flink application code. 4. Use either the command-line Maven tool or your preferred development environment to create the JAR file. To compile the JAR file using the command-line Maven tool, enter the following: mvn package -Dflink.version=1.15.3 If the build is successful, the following file is created: target/KafkaGettingStartedJob-1.0.jar Legacy examples 252 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The provided source code relies on libraries from Java 11. If you are using a development environment, Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket you created in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. Note If you deleted the Amazon S3 bucket from the Getting Started tutorial, follow the the section called “Upload the application code JAR file” step again. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the KafkaGettingStartedJob-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink. 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. 
• For Runtime, choose Apache Flink version 1.15.2. 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Legacy examples 253 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter KafkaGettingStartedJob-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. Note When you specify application resources using the console (such as CloudWatch Logs or an Amazon VPC), the console modifies your application execution role to grant permission to access those resources. 4. Under Properties, choose Add Group. Enter the following properties: Group ID KafkaSource KafkaSource Key topic Value AWSKafkaTutorialTopic bootstrap.servers The bootstrap server list you saved previously Legacy examples 254 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key KafkaSource security.protocol KafkaSource ssl.truststore.location Value SSL /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts KafkaSource ssl.truststore.password changeit Note The ssl.truststore.password for the default certificate is "changeit"; you do not need to change this value if you are using the default certificate. Choose Add Group again. Enter the following properties: Group ID KafkaSink Key topic Value AWSKafkaTutorialTo picDestination KafkaSink bootstrap.servers The bootstrap server KafkaSink KafkaSink KafkaSink KafkaSink list you saved previously security.protocol SSL ssl.truststore.location /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts ssl.truststore.password changeit transaction.timeout.ms 1000 Legacy examples 255 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The application code reads the
Developer Guide Group ID Key KafkaSource security.protocol KafkaSource ssl.truststore.location Value SSL /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts KafkaSource ssl.truststore.password changeit Note The ssl.truststore.password for the default certificate is "changeit"; you do not need to change this value if you are using the default certificate. Choose Add Group again. Enter the following properties: Group ID KafkaSink Key topic Value AWSKafkaTutorialTo picDestination KafkaSink bootstrap.servers The bootstrap server KafkaSink KafkaSink KafkaSink KafkaSink list you saved previously security.protocol SSL ssl.truststore.location /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts ssl.truststore.password changeit transaction.timeout.ms 1000 Legacy examples 255 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The application code reads the above application properties to configure the source and sink used to interact with your VPC and Amazon MSK cluster. For more information about using properties, see Use runtime properties. 5. Under Snapshots, choose Disable. This will make it easier to update the application without loading invalid application state data. 6. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 7. 8. For CloudWatch logging, choose the Enable check box. In the Virtual Private Cloud (VPC) section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources. 9. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Test the application In this section, you write records to the source topic. The application reads records from the source topic and writes them to the destination topic. You verify the application is working by writing records to the source topic and reading records from the destination topic. To write and read records from the topics, follow the steps in Step 6: Produce and Consume Data in the Getting Started Using Amazon MSK tutorial. Legacy examples 256 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To read from the destination topic, use the destination topic name instead of the source topic in your second connection to the cluster: bin/kafka-console-consumer.sh --bootstrap-server BootstrapBrokerString -- consumer.config client.properties --topic AWSKafkaTutorialTopicDestination --from- beginning If no records appear in the destination topic, see the Cannot access resources in a VPC section in the Troubleshoot Managed Service for Apache Flink topic. Example: Use an EFO consumer with a Kinesis data stream Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. In this exercise, you create a Managed Service for Apache Flink application that reads from a Kinesis data stream using an Enhanced Fan-Out (EFO) consumer. 
If a Kinesis consumer uses EFO, the Kinesis Data Streams service gives it its own dedicated bandwidth, rather than having the consumer share the fixed bandwidth of the stream with the other consumers reading from the stream. For more information about using EFO with the Kinesis consumer, see FLIP-128: Enhanced Fan Out for Kinesis Consumers. The application you create in this example uses AWS Kinesis connector (flink-connector-kinesis) 1.15.3. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources Legacy examples 257 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Write sample records to the input stream • Download and examine the application code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams (ExampleInputStream and ExampleOutputStream) • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for
For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents: Legacy examples 258 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. Legacy examples 259 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/EfoConsumer directory. The application code is located in the EfoApplication.java file. Note the following about the application code: • You enable the EFO consumer by setting the following parameters on the Kinesis consumer: • RECORD_PUBLISHER_TYPE: Set this parameter to EFO for your application to use an EFO consumer to access the Kinesis Data Stream data. • EFO_CONSUMER_NAME: Set this parameter to a string value that is unique among the consumers of this stream. Re-using a consumer name in the same Kinesis Data Stream will cause the previous consumer using that name to be terminated. • The following code example demonstrates how to assign values to the consumer configuration properties to use an EFO consumer to read from the source stream: consumerConfig.putIfAbsent(RECORD_PUBLISHER_TYPE, "EFO"); consumerConfig.putIfAbsent(EFO_CONSUMER_NAME, "basic-efo-flink-app"); Compile the application code To compile the application, do the following: 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. Compile the application with the following command: mvn package -Dflink.version=1.15.3 Note The provided source code relies on libraries from Java 11. 
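The two EFO settings described above (RECORD_PUBLISHER_TYPE and EFO_CONSUMER_NAME) are ordinary entries in the consumer's Properties object. The following minimal sketch shows one way they could be wired into a FlinkKinesisConsumer. It is an illustration only, not the EfoApplication.java code from the repository; the class name EfoConsumerSketch is hypothetical, and the stream name, Region, and consumer name simply reuse the values from this example.

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class EfoConsumerSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consumer configuration; in the sample application these values arrive
        // through runtime properties instead of being hard-coded.
        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-west-2");
        consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        // Enable enhanced fan-out and register a consumer name that is unique
        // among the consumers of this stream.
        consumerConfig.setProperty(ConsumerConfigConstants.RECORD_PUBLISHER_TYPE, "EFO");
        consumerConfig.setProperty(ConsumerConfigConstants.EFO_CONSUMER_NAME, "basic-efo-flink-app");

        DataStream<String> input = env.addSource(
                new FlinkKinesisConsumer<>("ExampleInputStream", new SimpleStringSchema(), consumerConfig));

        input.print(); // placeholder sink for the sketch
        env.execute("EFO consumer sketch");
    }
}

In the sample application the same values are supplied through the ConsumerConfigProperties runtime property group that you configure later in the console, which is why the repository code reads them from runtime properties rather than literals.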
Legacy examples 260 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Compiling the application creates the application JAR file (target/aws-kinesis-analytics- java-apps-1.0.jar). Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java- apps-1.0.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. Note Managed Service for Apache Flink uses Apache Flink version 1.15.2. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Legacy examples 261 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the
Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. Note These permissions grant the application the ability to access the EFO consumer. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], "Resource": [ Legacy examples 262 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/aws-kinesis-analytics-java- apps-1.0.jar" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "AllStreams", "Effect": "Allow", "Action": [ "kinesis:ListShards", "kinesis:ListStreamConsumers", "kinesis:DescribeStreamSummary" ], "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/*" }, { "Sid": "Stream", "Effect": "Allow", "Action": [ "kinesis:DescribeStream", Legacy examples 263 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "kinesis:RegisterStreamConsumer", "kinesis:DeregisterStreamConsumer" ], "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" }, { "Sid": "Consumer", "Effect": "Allow", "Action": [ "kinesis:DescribeStreamConsumer", "kinesis:SubscribeToShard" ], "Resource": [ "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream/ consumer/my-efo-flink-app", "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream/ consumer/my-efo-flink-app:*" ] } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. 
Under Properties, choose Create Group. Legacy examples 264 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 5. Enter the following application properties and values: Group ID Key Value ConsumerConfigProp flink.stream.recor EFO erties dpublisher ConsumerConfigProp flink.stream.efo.c basic-efo-flink-app erties onsumername ConsumerConfigProp INPUT_STREAM ExampleInputStream erties ConsumerConfigProp flink.inputstream. LATEST erties initpos ConsumerConfigProp AWS_REGION us-west-2 erties 6. Under Properties, choose Create Group. 7. Enter the following application properties and values: Group ID Key Value ProducerConfigProp OUTPUT_STREAM ExampleOutputStream erties ProducerConfigProp AWS_REGION us-west-2 erties ProducerConfigProp AggregationEnabled false erties 8. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 9. For CloudWatch logging, select the Enable check box. 10. Choose Update. Legacy examples 265 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. You can also check the Kinesis Data Streams console, in the data stream's Enhanced fan-out tab, for the name of your consumer (basic-efo-flink-app). Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the efo Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete Your Amazon S3 Object and Bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. in the Managed Service for Apache Flink panel, choose MyApplication. Legacy examples 266 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete Your Amazon S3 Object and Bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Apache Flink panel, choose MyApplication. Legacy examples 266 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete Your Amazon S3 Object and Bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. Legacy examples 267 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 4. Choose Delete Log Group and then confirm the deletion. Example: Writing to Firehose Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. In this exercise, you create a Managed Service for Apache Flink application that has a Kinesis data stream as a source and a Firehose stream as a sink. Using the sink, you can verify the output of the application in an Amazon S3 bucket. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. This section contains the following steps: • Create dependent resources • Write sample records to the input stream • Download and examine the Apache Flink streaming Java code • Compile the application code • Upload the Apache Flink streaming Java code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink for this exercise, you create the following dependent resources: • A Kinesis data stream (ExampleInputStream) • A Firehose stream that the application writes output to (ExampleDeliveryStream). Legacy examples 268 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis stream, Amazon S3 buckets, and Firehose stream using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream. • Creating an Amazon Kinesis Data Firehose Delivery Stream in the Amazon Data Firehose Developer Guide. Name your Firehose stream ExampleDeliveryStream. When you create the Firehose stream, also create the Firehose stream's S3 destination and IAM role. 
• How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} Legacy examples 269 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the Apache Flink streaming Java code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 2. Navigate to the amazon-kinesis-data-analytics-java-examples/FirehoseSink directory. The application code is located in the FirehoseSinkStreamingJob.java file. Note the following about the application code: • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, Legacy examples 270 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide new SimpleStringSchema(), inputProperties)); • The application uses a Firehose sink to write data to a Firehose stream. The following
GitHub. To download the application code, do the following: 1. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 2. Navigate to the amazon-kinesis-data-analytics-java-examples/FirehoseSink directory. The application code is located in the FirehoseSinkStreamingJob.java file. Note the following about the application code: • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, Legacy examples 270 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide new SimpleStringSchema(), inputProperties)); • The application uses a Firehose sink to write data to a Firehose stream. The following snippet creates the Firehose sink: private static KinesisFirehoseSink<String> createFirehoseSinkFromStaticConfig() { Properties sinkProperties = new Properties(); sinkProperties.setProperty(AWS_REGION, region); return KinesisFirehoseSink.<String>builder() .setFirehoseClientProperties(sinkProperties) .setSerializationSchema(new SimpleStringSchema()) .setDeliveryStreamName(outputDeliveryStreamName) .build(); } Compile the application code To compile the application, do the following: 1. Install Java and Maven if you haven't already. For more information, see Complete the required prerequisites in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. 2. In order to use the Kinesis connector for the following application, you need to download, build, and install Apache Maven. For more information, see the section called “Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions”. 3. Compile the application with the following command: mvn package -Dflink.version=1.15.3 Note The provided source code relies on libraries from Java 11. Compiling the application creates the application JAR file (target/aws-kinesis-analytics- java-apps-1.0.jar). Legacy examples 271 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket that you created in the Create dependent resources section. To upload the application code 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. 3. In the console, choose the ka-app-code-<username> bucket, and then choose Upload. In the Select files step, choose Add files. Navigate to the java-getting-started-1.0.jar file that you created in the previous step. 4. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI. Note When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately. Topics • Create and run the application (console) • Create and run the application (AWS CLI) Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the application 1. 
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. Legacy examples 272 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My java test app. • For Runtime, choose Apache Flink. Note Managed Service for Apache Flink uses Apache Flink version 1.15.2. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create the application using the console, you have the option of having an IAM role and policy created for your application. The application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data stream and Firehose stream. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace all the instances of the sample account IDs (012345678901) with your account ID. Legacy examples 273 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/java-getting-started-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource":
Add the highlighted section of the following policy example to the policy. Replace all the instances of the sample account IDs (012345678901) with your account ID. Legacy examples 273 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/java-getting-started-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ Legacy examples 274 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteDeliveryStream", "Effect": "Allow", "Action": "firehose:*", "Resource": "arn:aws:firehose:us-west-2:012345678901:deliverystream/ ExampleDeliveryStream" } ] } Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter java-getting-started-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 5. For CloudWatch logging, select the Enable check box. 6. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: Legacy examples 275 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Stop the application On the MyApplication page, choose Stop. Confirm the action. Update the application Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. On the MyApplication page, choose Configure. Update the application settings and choose Update. Note To update the application's code on the console, you must either change the object name of the JAR, use a different S3 bucket, or use the AWS CLI as described in the the section called “Update the application code” section. If the file name or the bucket does not change, the application code is not reloaded when you choose Update on the Configure page. Create and run the application (AWS CLI) In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. 
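Before walking through the CLI steps, it can help to see how the source and sink snippets shown earlier fit together in one job. The following minimal sketch combines them; it is an illustration only, not the repository's FirehoseSinkStreamingJob.java, and the class name FirehoseSinkSketch is hypothetical. The stream names and Region reuse the values from this example.

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.aws.config.AWSConfigConstants;
import org.apache.flink.connector.firehose.sink.KinesisFirehoseSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class FirehoseSinkSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kinesis source that reads from the input stream.
        Properties inputProperties = new Properties();
        inputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, "us-west-2");
        inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        DataStream<String> input = env.addSource(
                new FlinkKinesisConsumer<>("ExampleInputStream", new SimpleStringSchema(), inputProperties));

        // Firehose sink, built the same way as the snippet shown earlier.
        Properties sinkProperties = new Properties();
        sinkProperties.setProperty(AWSConfigConstants.AWS_REGION, "us-west-2");

        KinesisFirehoseSink<String> sink = KinesisFirehoseSink.<String>builder()
                .setFirehoseClientProperties(sinkProperties)
                .setSerializationSchema(new SimpleStringSchema())
                .setDeliveryStreamName("ExampleDeliveryStream")
                .build();

        input.sinkTo(sink);
        env.execute("Firehose sink sketch");
    }
}

The CLI steps that follow provision a job of this shape: the permissions policy grants the read and write access that the source and sink need, and the CreateApplication request points at the JAR that contains the job.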
Create a permissions policy First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on Legacy examples 276 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you will use to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "S3", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": ["arn:aws:s3:::ka-app-code-username", "arn:aws:s3:::ka-app-code-username/*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteDeliveryStream", "Effect": "Allow", "Action": "firehose:*", "Resource": "arn:aws:firehose:us-west-2:012345678901:deliverystream/ ExampleDeliveryStream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Legacy examples 277 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream if it doesn't have permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role. The permissions policy determines what Managed Service for Apache Flink can do
role that is associated with your application. No additional steps are needed. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream if it doesn't have permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role. The permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles, Create Role. 3. Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics. Choose Next: Permissions. 4. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 5. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role. 6. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data Legacy examples 278 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide stream. So you attach the policy that you created in the previous step, the section called “Create a permissions policy”. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application will use to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the Managed Service for Apache Flink application 1. Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix with the suffix that you chose in the the section called “Create dependent resources” section (ka-app-code-<username>.) Replace the sample account ID (012345678901) in the service execution role with your account ID. { "ApplicationName": "test", "ApplicationDescription": "my java test app", "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "java-getting-started-1.0.jar" } }, "CodeContentType": "ZIPFILE" } Legacy examples 279 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } } 2. 
Execute the CreateApplication action with the preceding request to create the application: aws kinesisanalyticsv2 create-application --cli-input-json file:// create_request.json The application is now created. You start the application in the next step. Start the application In this section, you use the StartApplication action to start the application. To start the application 1. Save the following JSON code to a file named start_request.json. { "ApplicationName": "test", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. Legacy examples 280 Managed Service for Apache Flink To stop the application Managed Service for Apache Flink Developer Guide 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "test" } 2. Execute the StopApplication action with the following request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”. Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name. The following sample request for the UpdateApplication action reloads
to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see the section called “Set up application logging in Managed Service for Apache Flink”. Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix you chose in the the section called “Create dependent resources” section. { "ApplicationName": "test", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { Legacy examples 281 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username", "FileKeyUpdate": "java-getting-started-1.0.jar" } } } } } Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data stream • Delete your Firehose stream • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. In the Managed Service for Apache Flink panel, choose MyApplication. 3. Choose Configure. 4. 5. In the Snapshots section, choose Disable and then choose Update. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data stream 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. In the Kinesis Data Streams panel, choose ExampleInputStream. Legacy examples 282 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. Delete your Firehose stream 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. In the Firehose panel, choose ExampleDeliveryStream. In the ExampleDeliveryStream page, choose Delete Firehose stream and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. 4. If you created an Amazon S3 bucket for your Firehose stream's destination, delete that bucket too. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. 7. If you created a new policy for your Firehose stream, delete that policy too. In the navigation bar, choose Roles. 8. 
Choose the kinesis-analytics-MyApplication-us-west-2 role. 9. Choose Delete role and then confirm the deletion. 10. If you created a new role for your Firehose stream, delete that role too. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. Legacy examples 283 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Read from a Kinesis stream in a different account Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. This example demonstrates how to create an Managed Service for Apache Flink application that reads data from a Kinesis stream in a different account. In this example, you will use one account for the source Kinesis stream, and a second account for the Managed Service for Apache Flink application and sink Kinesis stream. This topic contains the following sections: • Prerequisites • Setup • Create source Kinesis stream • Create and update IAM roles and policies • Update the Python script • Update the Java application • Build, upload, and run the application Prerequisites • In this tutorial, you modify the Getting Started example to read data from a Kinesis stream in a different account. Complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial before proceeding. • You need two AWS accounts to complete this tutorial: one for the source stream, and one for the application and the sink stream. Use the AWS account you used for the Getting Started tutorial for the application and sink stream. Use a different AWS account for the source stream. Legacy examples 284 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Setup You
the Getting Started example to read data from a Kinesis stream in a different account. Complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial before proceeding. • You need two AWS accounts to complete this tutorial: one for the source stream, and one for the application and the sink stream. Use the AWS account you used for the Getting Started tutorial for the application and sink stream. Use a different AWS account for the source stream. Legacy examples 284 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Setup You will access your two AWS accounts by using named profiles. Modify your AWS credentials and configuration files to include two profiles that contain the region and connection information for your two accounts. The following example credential file contains two named profiles, ka-source-stream- account-profile and ka-sink-stream-account-profile. Use the account you used for the Getting Started tutorial for the sink stream account. [ka-source-stream-account-profile] aws_access_key_id=AKIAIOSFODNN7EXAMPLE aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY [ka-sink-stream-account-profile] aws_access_key_id=AKIAI44QH8DHBEXAMPLE aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY The following example configuration file contains the same named profiles with region and output format information. [profile ka-source-stream-account-profile] region=us-west-2 output=json [profile ka-sink-stream-account-profile] region=us-west-2 output=json Note This tutorial does not use the ka-sink-stream-account-profile. It is included as an example of how to access two different AWS accounts using profiles. For more information on using named profiles with the AWS CLI, see Named Profiles in the AWS Command Line Interface documentation. Create source Kinesis stream In this section, you will create the Kinesis stream in the source account. Legacy examples 285 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Enter the following command to create the Kinesis stream that the application will use for input. Note that the --profile parameter specifies which account profile to use. $ aws kinesis create-stream \ --stream-name SourceAccountExampleInputStream \ --shard-count 1 \ --profile ka-source-stream-account-profile Create and update IAM roles and policies To allow object access across AWS accounts, you create an IAM role and policy in the source account. Then, you modify the IAM policy in the sink account. For information about creating IAM roles and policies, see the following topics in the AWS Identity and Access Management User Guide: • Creating IAM Roles • Creating IAM Policies Sink account roles and policies 1. Edit the kinesis-analytics-service-MyApplication-us-west-2 policy from the Getting Started tutorial. This policy allows the role in the source account to be assumed in order to read the source stream. Note When you use the console to create your application, the console creates a policy called kinesis-analytics-service-<application name>-<application region>, and a role called kinesisanalytics-<application name>-<application region>. Add the highlighted section below to the policy. Replace the sample account ID (SOURCE01234567) with the ID of the account you will use for the source stream. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AssumeRoleInSourceAccount", Legacy examples 286 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::SOURCE01234567:role/KA-Source-Stream-Role" }, { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/aws-kinesis-analytics-java- apps-1.0.jar" ] }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:SINK012345678:log-group:*" ] }, { "Sid": "ListCloudwatchLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:SINK012345678:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutCloudwatchLogs", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ Legacy examples 287 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:logs:us-west-2:SINK012345678:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] } ] } 2. Open the kinesis-analytics-MyApplication-us-west-2 role, and make a note of its Amazon Resource Name (ARN). You will need it in the next section. The role ARN looks like the following. arn:aws:iam::SINK012345678:role/service-role/kinesis-analytics-MyApplication-us- west-2 Source account roles and policies 1. Create a policy in the source account called KA-Source-Stream-Policy. Use the following JSON for the policy. Replace the sample account number with the account number of the source account. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadInputStream", "Effect": "Allow", "Action": [ "kinesis:DescribeStream", "kinesis:GetRecords", "kinesis:GetShardIterator", "kinesis:ListShards" ], "Resource": "arn:aws:kinesis:us-west-2:SOURCE123456784:stream/ SourceAccountExampleInputStream" } ] } 2. Create a role in the source account called MF-Source-Stream-Role. Do the following to create the role using the Managed Flink use case: Legacy examples 288 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. In the IAM Management Console, choose Create Role. 2. On the Create Role page, choose AWS Service. In the service list, choose Kinesis. 3. In the Select your use case section, choose Managed Service for Apache Flink. 4. Choose Next: Permissions. 5. Add the KA-Source-Stream-Policy permissions policy you created in the previous step. Choose Next:Tags. 6. Choose Next: Review. 7. Name the role KA-Source-Stream-Role. Your application will use this role to access the source stream. 3. Add the kinesis-analytics-MyApplication-us-west-2 ARN from the sink account to the trust relationship of the KA-Source-Stream-Role role in the source account: 1. Open the KA-Source-Stream-Role in the IAM console. 2. Choose
Create Role. 2. On the Create Role page, choose AWS Service. In the service list, choose Kinesis. 3. In the Select your use case section, choose Managed Service for Apache Flink. 4. Choose Next: Permissions. 5. Add the KA-Source-Stream-Policy permissions policy you created in the previous step. Choose Next:Tags. 6. Choose Next: Review. 7. Name the role KA-Source-Stream-Role. Your application will use this role to access the source stream. 3. Add the kinesis-analytics-MyApplication-us-west-2 ARN from the sink account to the trust relationship of the KA-Source-Stream-Role role in the source account: 1. Open the KA-Source-Stream-Role in the IAM console. 2. Choose the Trust Relationships tab. 3. Choose Edit trust relationship. 4. Use the following code for the trust relationship. Replace the sample account ID (SINK012345678) with your sink account ID. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::SINK012345678:role/service-role/kinesis-analytics- MyApplication-us-west-2" }, "Action": "sts:AssumeRole" } ] } Update the Python script In this section, you update the Python script that generates sample data to use the source account profile. Legacy examples 289 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Update the stock.py script with the following highlighted changes. import json import boto3 import random import datetime import os os.environ['AWS_PROFILE'] ='ka-source-stream-account-profile' os.environ['AWS_DEFAULT_REGION'] = 'us-west-2' kinesis = boto3.client('kinesis') def getReferrer(): data = {} now = datetime.datetime.now() str_now = now.isoformat() data['event_time'] = str_now data['ticker'] = random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']) price = random.random() * 100 data['price'] = round(price, 2) return data while True: data = json.dumps(getReferrer()) print(data) kinesis.put_record( StreamName="SourceAccountExampleInputStream", Data=data, PartitionKey="partitionkey") Update the Java application In this section, you update the Java application code to assume the source account role when reading from the source stream. Make the following changes to the BasicStreamingJob.java file. Replace the example source account number (SOURCE01234567) with your source account number. package com.amazonaws.services.managed-flink; import com.amazonaws.services.managed-flink.runtime.KinesisAnalyticsRuntime; import org.apache.flink.api.common.serialization.SimpleStringSchema; Legacy examples 290 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide import org.apache.flink.streaming.api.datastream.DataStream; import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer; import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer; import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants; import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants; import java.io.IOException; import java.util.Map; import java.util.Properties; /** * A basic Managed Service for Apache Flink for Java application with Kinesis data streams * as source and sink. 
*/
public class BasicStreamingJob {
    private static final String region = "us-west-2";
    private static final String inputStreamName = "SourceAccountExampleInputStream";
    private static final String outputStreamName = "ExampleOutputStream";
    private static final String roleArn = "arn:aws:iam::SOURCE01234567:role/KA-Source-Stream-Role";
    private static final String roleSessionName = "ksassumedrolesession";

    private static DataStream<String> createSourceFromStaticConfig(StreamExecutionEnvironment env) {
        // Configure the consumer to assume the cross-account role when reading the source stream.
        Properties inputProperties = new Properties();
        inputProperties.setProperty(AWSConfigConstants.AWS_CREDENTIALS_PROVIDER, "ASSUME_ROLE");
        inputProperties.setProperty(AWSConfigConstants.AWS_ROLE_ARN, roleArn);
        inputProperties.setProperty(AWSConfigConstants.AWS_ROLE_SESSION_NAME, roleSessionName);
        inputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);
        inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties));
    }

    private static KinesisStreamsSink<String> createSinkFromStaticConfig() {
        Properties outputProperties = new Properties();
        outputProperties.setProperty(AWSConfigConstants.AWS_REGION, region);

        return KinesisStreamsSink.<String>builder()
                .setKinesisClientProperties(outputProperties)
                .setSerializationSchema(new SimpleStringSchema())
                .setStreamName(outputProperties.getProperty("OUTPUT_STREAM", "ExampleOutputStream"))
                .setPartitionKeyGenerator(element -> String.valueOf(element.hashCode()))
                .build();
    }

    public static void main(String[] args) throws Exception {
        // Set up the streaming execution environment.
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> input = createSourceFromStaticConfig(env);

        input.sinkTo(createSinkFromStaticConfig());

        env.execute("Flink Streaming Java API Skeleton");
    }
}

Build, upload, and run the application

Do the following to update and run the application:

1. Build the application again by running the following command in the directory with the pom.xml file.

   mvn package -Dflink.version=1.15.3

2. Delete the previous JAR file from your Amazon Simple Storage Service (Amazon S3) bucket, and then upload the new aws-kinesis-analytics-java-apps-1.0.jar file to the S3 bucket.

3. On the application's page in the Managed Service for Apache Flink console, choose Configure, and then choose Update to reload the application JAR file.

4. Run the stock.py script to send data to the source stream.

   python stock.py

The application now reads data from the Kinesis stream in the other account. You can verify that the application is working by checking the PutRecords.Bytes metric of the ExampleOutputStream stream. If there is activity in the output stream, the application is functioning properly.

Tutorial: Using a custom truststore with Amazon MSK

Note
For current examples, see Examples for creating and working with Managed Service for Apache Flink applications.

Current data source APIs
If you are using the current data source APIs, your application can leverage the Amazon MSK Config
Providers utility described here. This allows your KafkaSource function to access your keystore and truststore for mutual TLS in Amazon S3. ... // define names of config providers: builder.setProperty("config.providers", "secretsmanager,s3import"); // provide implementation classes for each provider: builder.setProperty("config.providers.secretsmanager.class", "com.amazonaws.kafka.config.providers.SecretsManagerConfigProvider"); builder.setProperty("config.providers.s3import.class", "com.amazonaws.kafka.config.providers.S3ImportConfigProvider"); String region = appProperties.get(Helpers.S3_BUCKET_REGION_KEY).toString(); String keystoreS3Bucket = appProperties.get(Helpers.KEYSTORE_S3_BUCKET_KEY).toString(); String keystoreS3Path = appProperties.get(Helpers.KEYSTORE_S3_PATH_KEY).toString(); String truststoreS3Bucket = appProperties.get(Helpers.TRUSTSTORE_S3_BUCKET_KEY).toString(); String truststoreS3Path = appProperties.get(Helpers.TRUSTSTORE_S3_PATH_KEY).toString(); String keystorePassSecret = appProperties.get(Helpers.KEYSTORE_PASS_SECRET_KEY).toString(); String keystorePassSecretField = appProperties.get(Helpers.KEYSTORE_PASS_SECRET_FIELD_KEY).toString(); Legacy examples 293 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide // region, etc.. builder.setProperty("config.providers.s3import.param.region", region); // properties builder.setProperty("ssl.truststore.location", "${s3import:" + region + ":" + truststoreS3Bucket + "/" + truststoreS3Path + "}"); builder.setProperty("ssl.keystore.type", "PKCS12"); builder.setProperty("ssl.keystore.location", "${s3import:" + region + ":" + keystoreS3Bucket + "/" + keystoreS3Path + "}"); builder.setProperty("ssl.keystore.password", "${secretsmanager:" + keystorePassSecret + ":" + keystorePassSecretField + "}"); builder.setProperty("ssl.key.password", "${secretsmanager:" + keystorePassSecret + ":" + keystorePassSecretField + "}"); ... More details and a walkthrough can be found here. Legacy SourceFunction APIs If you are using the legacy SourceFunction APIs, your application will use custom serialization and deserialization schemas that override the open method to load the custom truststore. This makes the truststore available to the application after the application restarts or replaces threads. The custom truststore is retrieved and stored using the following code: public static void initializeKafkaTruststore() { ClassLoader classLoader = Thread.currentThread().getContextClassLoader(); URL inputUrl = classLoader.getResource("kafka.client.truststore.jks"); File dest = new File("/tmp/kafka.client.truststore.jks"); try { FileUtils.copyURLToFile(inputUrl, dest); } catch (Exception ex) { throw new FlinkRuntimeException("Failed to initialize Kakfa truststore", ex); } } Note Apache Flink requires the truststore to be in JKS format. Legacy examples 294 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note To set up the required prerequisites for this exercise, first complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink exercise. The following tutorial demonstrates how to securely connect (encryption in transit) to a Kafka Cluster that uses server certificates issued by a custom, private or even self-hosted Certificate Authority (CA). 
For connecting any Kafka Client securely over TLS to a Kafka Cluster, the Kafka Client (like the example Flink application) must trust the complete chain of trust presented by the Kafka Cluster's server certificates (from the Issuing CA up to the Root-Level CA). As an example for a custom truststore, we will use an Amazon MSK cluster with Mutual TLS (MTLS) Authentication enabled. This implies that the MSK cluster nodes use server certificates that are issued by an AWS Certificate Manager Private Certificate Authority (ACM Private CA) that is private to your account and Region and therefore not trusted by the default truststore of the Java Virtual Machine (JVM) executing the Flink application. Note • A keystore is used to store private key and identity certificates an application should present to both server or client for verification. • A truststore is used to store certificates from Certified Authorities (CA) that verify the certificate presented by the server in an SSL connection. You can also use the technique in this tutorial for interactions between a Managed Service for Apache Flink application and other Apache Kafka sources, such as: • A custom Apache Kafka cluster hosted in AWS (Amazon EC2 or Amazon EKS) • A Confluent Kafka cluster hosted in AWS • An on-premises Kafka cluster accessed through AWS Direct Connect or VPN This tutorial contains the following sections: • Create a VPC with an Amazon MSK cluster Legacy examples 295 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Create a custom truststore and apply it to your cluster • Create the application code • Upload the Apache Flink streaming Java code • Create the application • Configure the application • Run the application • Test the application Create a VPC with an Amazon MSK cluster To create a sample VPC and Amazon MSK cluster to access from a Managed Service for Apache Flink application, follow the Getting Started Using Amazon MSK tutorial. When completing the tutorial, also do the following: • In Step 3: Create a Topic, repeat the kafka-topics.sh --create command to create a destination topic named AWSKafkaTutorialTopicDestination: bin/kafka-topics.sh --create --bootstrap-server ZooKeeperConnectionString -- replication-factor 3 --partitions 1 --topic AWSKafkaTutorialTopicDestination Note If the kafka-topics.sh command returns a ZooKeeperClientTimeoutException, verify that the Kafka cluster's security group has an inbound rule to allow all traffic from the client instance's private IP address. • Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn with the ARN of your MSK cluster): aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn {... "BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094" } Legacy examples 296 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • When following the steps in this tutorial and the prerequisite tutorials, be sure to use your selected AWS Region
that the Kafka cluster's security group has an inbound rule to allow all traffic from the client instance's private IP address. • Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn with the ARN of your MSK cluster): aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn {... "BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us- west-2.amazonaws.com:9094" } Legacy examples 296 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • When following the steps in this tutorial and the prerequisite tutorials, be sure to use your selected AWS Region in your code, commands, and console entries. Create a custom truststore and apply it to your cluster In this section, you create a custom certificate authority (CA), use it to generate a custom truststore, and apply it to your MSK cluster. To create and apply your custom truststore, follow the Client Authentication tutorial in the Amazon Managed Streaming for Apache Kafka Developer Guide. Create the application code In this section, you download and compile the application JAR file. The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. The application code is located in the amazon-kinesis-data-analytics-java- examples/CustomKeystore. You can examine the code to familiarize yourself with the structure of Managed Service for Apache Flink code. 4. Use either the command line Maven tool or your preferred development environment to create the JAR file. To compile the JAR file using the command line Maven tool, enter the following: mvn package -Dflink.version=1.15.3 If the build is successful, the following file is created: target/flink-app-1.0-SNAPSHOT.jar Note The provided source code relies on libraries from Java 11. Legacy examples 297 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Upload the Apache Flink streaming Java code In this section, you upload your application code to the Amazon S3 bucket that you created in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial. Note If you deleted the Amazon S3 bucket from the Getting Started tutorial, follow the the section called “Upload the application code JAR file” step again. 1. 2. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the flink-app-1.0-SNAPSHOT.jar file that you created in the previous step. 3. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. 
• For Runtime, choose Apache Flink version 1.15.2. 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: Legacy examples 298 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter flink-app-1.0-SNAPSHOT.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. Note When you specify application resources using the console (such as logs or a VPC), the console modifies your application execution role to grant permission to access those resources. 4. Under Properties, choose Add Group. Enter the following properties: Group ID KafkaSource KafkaSource Key topic Value AWSKafkaTutorialTopic bootstrap.servers The bootstrap server list you saved previously KafkaSource security.protocol SSL KafkaSource ssl.truststore.location /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts KafkaSource ssl.truststore.password changeit Legacy examples 299 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The ssl.truststore.password for the default certificate is "changeit"—you don't need to change this value if you're using the default certificate. Choose Add Group again. Enter the following properties: Group ID KafkaSink Key topic Value AWSKafkaTutorialTo picDestination KafkaSink bootstrap.servers The bootstrap server KafkaSink KafkaSink KafkaSink KafkaSink list you saved previously security.protocol SSL ssl.truststore.location /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts ssl.truststore.password changeit transaction.timeout.ms 1000 The application
Key topic Value AWSKafkaTutorialTopic bootstrap.servers The bootstrap server list you saved previously KafkaSource security.protocol SSL KafkaSource ssl.truststore.location /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts KafkaSource ssl.truststore.password changeit Legacy examples 299 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The ssl.truststore.password for the default certificate is "changeit"—you don't need to change this value if you're using the default certificate. Choose Add Group again. Enter the following properties: Group ID KafkaSink Key topic Value AWSKafkaTutorialTo picDestination KafkaSink bootstrap.servers The bootstrap server KafkaSink KafkaSink KafkaSink KafkaSink list you saved previously security.protocol SSL ssl.truststore.location /usr/lib/jvm/java-11- amazon-corretto/lib/secu rity/cacerts ssl.truststore.password changeit transaction.timeout.ms 1000 The application code reads the above application properties to configure the source and sink used to interact with your VPC and Amazon MSK cluster. For more information about using properties, see Use runtime properties. 5. Under Snapshots, choose Disable. This will make it easier to update the application without loading invalid application state data. 6. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 7. For CloudWatch logging, choose the Enable check box. Legacy examples 300 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 8. In the Virtual Private Cloud (VPC) section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources. 9. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Test the application In this section, you write records to the source topic. The application reads records from the source topic and writes them to the destination topic. You verify that the application is working by writing records to the source topic and reading records from the destination topic. To write and read records from the topics, follow the steps in Step 6: Produce and Consume Data in the Getting Started Using Amazon MSK tutorial. To read from the destination topic, use the destination topic name instead of the source topic in your second connection to the cluster: bin/kafka-console-consumer.sh --bootstrap-server BootstrapBrokerString -- consumer.config client.properties --topic AWSKafkaTutorialTopicDestination --from- beginning If no records appear in the destination topic, see the Cannot access resources in a VPC section in the Troubleshoot Managed Service for Apache Flink topic. Legacy examples 301 Managed Service for Apache Flink Python examples Managed Service for Apache Flink Developer Guide The following examples demonstrate how to create applications using Python with the Apache Flink Table API. 
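Each of these examples follows the same Table API pattern: the application creates a table environment, registers a Kinesis-backed source table with a SQL statement, runs a windowed query, and writes the results to a sink. The following is a simplified sketch of that pattern, not the exact example code; the stream name, region, and column definitions are placeholders, and the individual examples may create the environment slightly differently.

from pyflink.table import EnvironmentSettings, TableEnvironment

# Create a Table API environment in streaming mode.
table_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())


def create_input_table(table_name, stream_name, region, stream_initpos):
    # Define a table backed by the Kinesis SQL connector.
    return """CREATE TABLE {0} (
                ticker VARCHAR(6),
                price DOUBLE,
                event_time TIMESTAMP(3),
                WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
              )
              WITH (
                'connector' = 'kinesis',
                'stream' = '{1}',
                'aws.region' = '{2}',
                'scan.stream.initpos' = '{3}',
                'format' = 'json',
                'json.timestamp-format.standard' = 'ISO-8601'
              )""".format(table_name, stream_name, region, stream_initpos)


def main():
    # Register the source table, then query it.
    table_env.execute_sql(
        create_input_table("input_table", "ExampleInputStream", "us-west-2", "LATEST"))
    input_table = table_env.from_path("input_table")
    # Each example applies a tumbling or sliding window to input_table here and
    # inserts the result into an output table (Kinesis or Amazon S3).


if __name__ == "__main__":
    main()

The examples that follow fill in this skeleton with a tumbling window, a sliding window, and an Amazon S3 sink, respectively.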
Topics • Example: Creating a tumbling window in Python • Example: Creating a sliding window in Python • Example: Send streaming data to Amazon S3 in Python Example: Creating a tumbling window in Python Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. In this exercise, you create a Python Managed Service for Apache Flink application that aggregates data using a tumbling window. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using Python in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compress and upload the Apache Flink streaming Python code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Legacy examples 302 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams (ExampleInputStream and ExampleOutputStream) • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). Note The Python script in this section uses the AWS CLI. You must configure your
the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). Note The Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following: aws configure 1. Create a file named stock.py with the following contents: Legacy examples 303 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. Legacy examples 304 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/python/ TumblingWindow directory. The application code is located in the tumbling-windows.py file. Note the following about the application code: • The application uses a Kinesis table source to read from the source stream. 
The following snippet calls the create_table function to create the Kinesis table source: table_env.execute_sql( create_input_table(input_table_name, input_stream, input_region, stream_initpos) ) The create_table function uses a SQL command to create a table that is backed by the streaming source: def create_input_table(table_name, stream_name, region, stream_initpos): return """ CREATE TABLE {0} ( ticker VARCHAR(6), price DOUBLE, event_time TIMESTAMP(3), WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND ) PARTITIONED BY (ticker) WITH ( 'connector' = 'kinesis', 'stream' = '{1}', 'aws.region' = '{2}', 'scan.stream.initpos' = '{3}', 'format' = 'json', 'json.timestamp-format.standard' = 'ISO-8601' ) """.format(table_name, stream_name, region, stream_initpos) • The application uses the Tumble operator to aggregate records within a specified tumbling window, and return the aggregated records as a table object: Legacy examples 305 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide tumbling_window_table = ( input_table.window( Tumble.over("10.seconds").on("event_time").alias("ten_second_window") ) .group_by("ticker, ten_second_window") .select("ticker, price.min as price, to_string(ten_second_window.end) as event_time") • The application uses the Kinesis Flink connector, from the flink-sql-connector- kinesis-1.15.2.jar . Compress and upload the Apache Flink streaming Python code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. Use your preferred compression application to compress the tumbling-windows.py and flink-sql-connector-kinesis-1.15.2.jar files. Name the archive myapp.zip. 2. 3. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the myapp.zip file that you created in the previous step. 4. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. Legacy examples 306 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note Managed Service for Apache Flink uses Apache Flink version 1.15.2. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application 1. 
pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter myapp.zip. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following: Group ID Key Value consumer.config.0 input.stream.name ExampleInputStream Legacy examples 307 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key Value consumer.config.0 aws.region us-west-2 consumer.config.0 scan.stream.initpos LATEST Choose Save. 6. Under Properties, choose Add group again. 7. Enter the following: Group ID Key Value producer.config.0 output.stream.name ExampleOutputStream producer.config.0 aws.region us-west-2 producer.config.0 shard.count 1 8. Under Properties, choose Add group again. For Group ID, enter kinesis.analytics.flink.run.options. This special property group tells your application where to find its code resources. For more information, see Specify your code files. 9. Enter the following: Group ID Key Value kinesis.analytics. python tumbling-windows.py flink.run.options kinesis.analytics. jarfile flink.run.options flink-sql-connecto r-kinesis-1.15.2.j ar 10. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 11. For CloudWatch logging, select the Enable check box. 12. Choose Update. Legacy examples 308 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/myapp.zip" ] }, Legacy examples 309 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Legacy examples 310 Managed Service for Apache Flink Run the application Managed Service for Apache Flink Developer Guide The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Legacy examples 311 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at
at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Legacy examples 311 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Creating a sliding window in Python Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. Legacy examples 312 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using Python in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compress and upload the Apache Flink streaming Python code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams (ExampleInputStream and ExampleOutputStream) • An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Legacy examples 313 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note This section requires the AWS SDK for Python (Boto). Note The Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following: aws configure 1. 
Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") Legacy examples 314 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/>amazon-kinesis-data-analytics-java- examples 3. Navigate to the amazon-kinesis-data-analytics-java-examples/python/ SlidingWindow directory. The application code is located in the sliding-windows.py file. Note the following about the application code: • The application uses a Kinesis table source to read from the source stream. The following snippet calls the create_input_table function to create the Kinesis table source: table_env.execute_sql( create_input_table(input_table_name, input_stream, input_region, stream_initpos) ) The create_input_table function uses a SQL command to create a table that is backed by the streaming source: def create_input_table(table_name, stream_name, region, stream_initpos): Legacy examples 315 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide return """ CREATE TABLE {0} ( ticker VARCHAR(6), price DOUBLE, event_time TIMESTAMP(3), WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND ) PARTITIONED BY (ticker) WITH ( 'connector' = 'kinesis', 'stream' = '{1}', 'aws.region' = '{2}', 'scan.stream.initpos' = '{3}', 'format' = 'json', 'json.timestamp-format.standard' = 'ISO-8601' ) """.format(table_name, stream_name, region, stream_initpos) } • The application uses the Slide operator
input_stream, input_region, stream_initpos) ) The create_input_table function uses a SQL command to create a table that is backed by the streaming source: def create_input_table(table_name, stream_name, region, stream_initpos): Legacy examples 315 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide return """ CREATE TABLE {0} ( ticker VARCHAR(6), price DOUBLE, event_time TIMESTAMP(3), WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND ) PARTITIONED BY (ticker) WITH ( 'connector' = 'kinesis', 'stream' = '{1}', 'aws.region' = '{2}', 'scan.stream.initpos' = '{3}', 'format' = 'json', 'json.timestamp-format.standard' = 'ISO-8601' ) """.format(table_name, stream_name, region, stream_initpos) } • The application uses the Slide operator to aggregate records within a specified sliding window, and return the aggregated records as a table object: sliding_window_table = ( input_table .window( Slide.over("10.seconds") .every("5.seconds") .on("event_time") .alias("ten_second_window") ) .group_by("ticker, ten_second_window") .select("ticker, price.min as price, to_string(ten_second_window.end) as event_time") ) • The application uses the Kinesis Flink connector, from the flink-sql-connector-kinesis-1.15.2.jar file. Compress and upload the Apache Flink streaming Python code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. This section describes how to package your Python application. Legacy examples 316 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. Use your preferred compression application to compress the sliding-windows.py and flink-sql-connector-kinesis-1.15.2.jar files. Name the archive myapp.zip. 2. 3. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the myapp.zip file that you created in the previous step. 4. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. Note Managed Service for Apache Flink uses Apache Flink version 1.15.2. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your Legacy examples 317 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide application uses this role and policy to access its dependent resources. 
These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter myapp.zip. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following application properties and values: Group ID Key Value consumer.config.0 input.stream.name ExampleInputStream consumer.config.0 aws.region us-west-2 consumer.config.0 scan.stream.initpos LATEST Choose Save. 6. Under Properties, choose Add group again. 7. Enter the following application properties and values: Group ID Key Value producer.config.0 output.stream.name ExampleOutputStream Legacy examples 318 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key Value producer.config.0 aws.region us-west-2 producer.config.0 shard.count 1 8. Under Properties, choose Add group again. For Group ID, enter kinesis.analytics.flink.run.options. This special property group tells your application where to find its code resources. For more information, see Specify your code files. 9. Enter the following application properties and values: Group ID Key Value kinesis.analytics. python sliding-windows.py flink.run.options kinesis.analytics. jarfile flink.run.options flink-sql-connecto r-kinesis_1.15.2.j ar 10. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 11. For CloudWatch logging, select the Enable check box. 12. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Legacy examples 319 Managed Service for Apache Flink Edit the IAM policy Managed Service for Apache Flink Developer Guide Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console
for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Legacy examples 319 Managed Service for Apache Flink Edit the IAM policy Managed Service for Apache Flink Developer Guide Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/myapp.zip" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, { Legacy examples 320 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Sliding Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams Legacy examples 321 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. 
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. Legacy examples 322 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Send streaming data to Amazon S3 in Python Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. In this exercise, you create a Python Managed Service for Apache Flink application that streams data to an Amazon Simple Storage Service sink. Note To set up required prerequisites for this exercise, first complete the Tutorial: Get started using Python in Managed Service for Apache Flink exercise. This topic contains the following sections: • Create dependent resources • Write sample records to the input stream • Download and examine the application code • Compress and upload the Apache Flink streaming Python code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Legacy examples 323 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • A Kinesis data stream (ExampleInputStream) • An Amazon S3 bucket to store the application's code and output (ka-app-code-<username>) Note Managed Service for Apache Flink cannot write data to Amazon S3 with server-side encryption enabled on Managed Service for Apache Flink. You can create the Kinesis
code • Create and run the Managed Service for Apache Flink application • Clean up AWS resources Legacy examples 323 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • A Kinesis data stream (ExampleInputStream) • An Amazon S3 bucket to store the application's code and output (ka-app-code-<username>) Note Managed Service for Apache Flink cannot write data to Amazon S3 with server-side encryption enabled on Managed Service for Apache Flink. You can create the Kinesis stream and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: • Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream. • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app- code-<username>. Write sample records to the input stream In this section, you use a Python script to write sample records to the stream for the application to process. Note This section requires the AWS SDK for Python (Boto). Note The Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following: Legacy examples 324 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide aws configure 1. Create a file named stock.py with the following contents: import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) 2. Run the stock.py script: $ python stock.py Keep the script running while completing the rest of the tutorial. Legacy examples 325 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/python/S3Sink directory. The application code is located in the streaming-file-sink.py file. Note the following about the application code: • The application uses a Kinesis table source to read from the source stream. 
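Such a table source is declared in PyFlink with a SQL CREATE TABLE statement that uses the kinesis connector. The statement itself lives inside the create_source_table function of streaming-file-sink.py; the sketch below is only an approximation of that function (the column types and connector option values are assumptions based on the records that stock.py produces), included here so the call in the next snippet is easier to follow:

def create_source_table(table_name, stream_name, region, stream_initpos):
    # Declares a table backed by the Kinesis stream; each JSON record maps to one row.
    # Column names match the fields written by stock.py (event_time, ticker, price).
    return """ CREATE TABLE {0} (
                   ticker VARCHAR(6),
                   price DOUBLE,
                   event_time VARCHAR(64)
               )
               WITH (
                   'connector' = 'kinesis',
                   'stream' = '{1}',
                   'aws.region' = '{2}',
                   'scan.stream.initpos' = '{3}',
                   'format' = 'json'
               ) """.format(table_name, stream_name, region, stream_initpos)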
The following snippet calls the create_source_table function to create the Kinesis table source: table_env.execute_sql( create_source_table(input_table_name, input_stream, input_region, stream_initpos) ) The create_source_table function uses a SQL command to create a table that is backed by the streaming source import datetime import json import random import boto3 STREAM_NAME = "ExampleInputStream" def get_data(): return { 'event_time': datetime.datetime.now().isoformat(), 'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']), 'price': round(random.random() * 100, 2)} Legacy examples 326 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide def generate(stream_name, kinesis_client): while True: data = get_data() print(data) kinesis_client.put_record( StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey") if __name__ == '__main__': generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2')) • The application uses the filesystem connector to send records to an Amazon S3 bucket: def create_sink_table(table_name, bucket_name): return """ CREATE TABLE {0} ( ticker VARCHAR(6), price DOUBLE, event_time VARCHAR(64) ) PARTITIONED BY (ticker) WITH ( 'connector'='filesystem', 'path'='s3a://{1}/', 'format'='json', 'sink.partition-commit.policy.kind'='success-file', 'sink.partition-commit.delay' = '1 min' ) """.format(table_name, bucket_name) • The application uses the Kinesis Flink connector, from the flink-sql-connector-kinesis-1.15.2.jar file. Compress and upload the Apache Flink streaming Python code In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. 1. Use your preferred compression application to compress the streaming-file-sink.py and flink-sql-connector-kinesis-1.15.2.jar files. Name the archive myapp.zip. Legacy examples 327 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. 3. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the myapp.zip file that you created in the previous step. 4. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter
of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the Managed Service for Apache Flink application Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Runtime, choose Apache Flink. Note Managed Service for Apache Flink uses Apache Flink version 1.15.2. • Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: Legacy examples 328 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter myapp.zip. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following application properties and values: Group ID Key Value consumer.config.0 input.stream.name ExampleInputStream consumer.config.0 aws.region us-west-2 consumer.config.0 scan.stream.initpos LATEST Choose Save. 6. Under Properties, choose Add group again. For Group ID, enter kinesis.analytics.flink.run.options. This special property group tells your application where to find its code resources. For more information, see Specify your code files. 7. Enter the following application properties and values: Group ID Key Value kinesis.analytics. python streaming-file-sin flink.run.options k.py Legacy examples 329 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key Value kinesis.analytics. jarfile flink.run.options S3Sink/lib/flink-s ql-connector-kines is-1.15.2.jar 8. Under Properties, choose Add group again. For Group ID, enter sink.config.0. This special property group tells your application where to find its code resources. For more information, see Specify your code files. 9. Enter the following application properties and values: (replace bucket-name with the actual name of your Amazon S3 bucket.) Group ID Key Value sink.config.0 output.bucket.name bucket-name 10. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 11. For CloudWatch logging, select the Enable check box. 12. Choose Update. Note When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. 
The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. Edit the IAM policy Edit the IAM policy to add permissions to access the Kinesis data streams. 1. Open the IAM console at https://console.aws.amazon.com/iam/. Legacy examples 330 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "logs:DescribeLogGroups", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*", "arn:aws:s3:::ka-app-code-<username>/myapp.zip" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": "logs:DescribeLogStreams", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:*" }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/ kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream" }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], Legacy examples 331 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteObjects", "Effect": "Allow", "Action": [ "s3:Abort*", "s3:DeleteObject*", "s3:GetObject*", "s3:GetBucket*", "s3:List*", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::ka-app-code-<username>", "arn:aws:s3:::ka-app-code-<username>/*" ] } ] } Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Sliding Window tutorial. Legacy examples 332 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide
"Effect": "Allow", "Action": [ "s3:Abort*", "s3:DeleteObject*", "s3:GetObject*", "s3:GetBucket*", "s3:List*", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::ka-app-code-<username>", "arn:aws:s3:::ka-app-code-<username>/*" ] } ] } Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Sliding Window tutorial. Legacy examples 332 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data stream • Delete your Amazon S3 objects and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data stream 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. Delete your Amazon S3 objects and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. Legacy examples 333 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Scala examples The following examples demonstrate how to create applications using Scala with Apache Flink. Topics • Example: Creating a tumbling window in Scala • Example: Creating a sliding window in Scala • Example: Send streaming data to Amazon S3 in Scala Example: Creating a tumbling window in Scala Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. Note Starting from version 1.15 Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally but doesn't expose Scala into the user code classloader. Because of that, users need to add Scala dependencies into their jar-archives. For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen. 
Legacy examples 334 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide In this exercise, you will create a simple streaming application which uses Scala 3.2.0 and Flink's Java DataStream API. The application reads data from Kinesis stream, aggregates it using sliding windows and writes results to output Kinesis stream. Note To set up required prerequisites for this exercise, first complete the Getting Started (Scala) exercise. This topic contains the following sections: • Download and examine the application code • Compile and upload the application code • Create and run the application (console) • Create and run the application (CLI) • Update the application code • Clean up AWS resources Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/scala/ TumblingWindow directory. Note the following about the application code: • A build.sbt file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries. Legacy examples 335 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • The BasicStreamingJob.scala file contains the main method that defines the application's functionality. • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: private def createSource: FlinkKinesisConsumer[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val inputProperties = applicationProperties.get("ConsumerConfigProperties") new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName), new SimpleStringSchema, inputProperties) } The application also uses a Kinesis sink to write into the result stream. The following snippet creates the Kinesis sink: private def createSink: KinesisStreamsSink[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val outputProperties = applicationProperties.get("ProducerConfigProperties") KinesisStreamsSink.builder[String] .setKinesisClientProperties(outputProperties) .setSerializationSchema(new SimpleStringSchema) .setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName)) .setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode))
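Because Flink 1.15 no longer bundles the Scala standard library (see the note at the beginning of this example), this build.sbt also has to pull in Scala and the connector dependencies itself so that sbt-assembly can package them into the application JAR. The following is only a sketch of the relevant settings; the artifact names and versions shown are assumptions, so refer to the build.sbt in the cloned repository for the exact list:

// build.sbt (sketch; assumes the sbt-assembly plugin is declared in project/plugins.sbt)
scalaVersion := "3.2.0"

val flinkVersion = "1.15.2"

libraryDependencies ++= Seq(
  // The core DataStream API is provided by the Managed Service for Apache Flink runtime.
  "org.apache.flink" % "flink-streaming-java" % flinkVersion % "provided",
  // The Kinesis consumer and sink connectors travel inside the application JAR.
  "org.apache.flink" % "flink-connector-kinesis" % flinkVersion,
  "org.apache.flink" % "flink-connector-aws-kinesis-streams" % flinkVersion,
  // Reads the runtime property groups configured for the application.
  "com.amazonaws" % "aws-kinesisanalytics-runtime" % "1.2.0"
)

// Because Flink no longer ships Scala, the Scala 3 library is bundled by the assembly step.
assembly / assemblyJarName := "tumbling-window-scala-1.0.jar"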
Developer Guide • The BasicStreamingJob.scala file contains the main method that defines the application's functionality. • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: private def createSource: FlinkKinesisConsumer[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val inputProperties = applicationProperties.get("ConsumerConfigProperties") new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName), new SimpleStringSchema, inputProperties) } The application also uses a Kinesis sink to write into the result stream. The following snippet creates the Kinesis sink: private def createSink: KinesisStreamsSink[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val outputProperties = applicationProperties.get("ProducerConfigProperties") KinesisStreamsSink.builder[String] .setKinesisClientProperties(outputProperties) .setSerializationSchema(new SimpleStringSchema) .setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName)) .setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode)) .build } • The application uses the window operator to find the count of values for each stock symbol over a 5-seconds tumbling window. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink: environment.addSource(createSource) .map { value => val jsonNode = jsonParser.readValue(value, classOf[JsonNode]) new Tuple2[String, Int](jsonNode.get("ticker").toString, 1) } .returns(Types.TUPLE(Types.STRING, Types.INT)) .keyBy(v => v.f0) // Logically partition the stream for each ticker Legacy examples 336 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide .window(TumblingProcessingTimeWindows.of(Time.seconds(10))) .sum(1) // Sum the number of tickers per partition .map { value => value.f0 + "," + value.f1.toString + "\n" } .sinkTo(createSink) • The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object. • The application creates source and sink connectors using dynamic application properties. Runtime application's properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties. Compile and upload the application code In this section, you compile and upload your application code to an Amazon S3 bucket. Compile the Application Code Use the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises. 1. To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT: sbt assembly 2. If the application compiles successfully, the following file is created: target/scala-3.2.0/tumbling-window-scala-1.0.jar Upload the Apache Flink Streaming Scala Code In this section, you create an Amazon S3 bucket and upload your application code. 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose Create bucket 3. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next. 4. In Configure options, keep the settings as they are, and choose Next. 
Legacy examples 337 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 5. In Set permissions, keep the settings as they are, and choose Next. 6. Choose Create bucket. 7. Choose the ka-app-code-<username> bucket, and then choose Upload. 8. In the Select files step, choose Add files. Navigate to the tumbling-window- scala-1.0.jar file that you created in the previous step. 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My Scala test app. • For Runtime, choose Apache Flink. • Leave the version as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 Legacy examples 338 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application Use the following procedure to configure the application. To configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter tumbling-window-scala-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following: Group ID Key Value ConsumerConfigProp input.stream.name ExampleInputStream erties ConsumerConfigProp aws.region us-west-2 erties ConsumerConfigProp flink.stream.initp LATEST
Flink Managed Service for Apache Flink Developer Guide • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application Use the following procedure to configure the application. To configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter tumbling-window-scala-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following: Group ID Key Value ConsumerConfigProp input.stream.name ExampleInputStream erties ConsumerConfigProp aws.region us-west-2 erties ConsumerConfigProp flink.stream.initp LATEST erties os Choose Save. 6. Under Properties, choose Add group again. 7. Enter the following: Legacy examples 339 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Group ID Key Value ProducerConfigProp output.stream.name ExampleOutputStream erties ProducerConfigProp aws.region us-west-2 erties 8. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 9. For CloudWatch logging, choose the Enable check box. 10. Choose Update. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Edit the IAM policy Edit the IAM policy to add permissions to access the Amazon S3 bucket. To edit the IAM policy to add S3 bucket permissions 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. 
{ Legacy examples 340 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/tumbling-window-scala-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] Legacy examples 341 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } Run the application The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. Stop the application To stop the application, on the MyApplication page, choose Stop. Confirm the action. Create and run the application (CLI) In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a permissions policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. Legacy examples 342 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. The MF-stream-rw-role service execution role should be tailored to the customer-specfic role. 
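The permissions policy itself is a standard IAM policy document with two statements, while the JSON request shown after this sketch configures the application itself and is used later, when you create the application. A minimal sketch of the policy, mirroring the ReadInputStream and WriteOutputStream statements used elsewhere in this guide and using the sample stream names and account ID from this tutorial, looks like the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
        }
    ]
}

If you save this policy document to a local file (the file name below is only an example), you can also create the policy from the AWS CLI instead of the IAM console:

aws iam create-policy --policy-name AKReadSourceStreamWriteSinkStream --policy-document file://AKReadSourceStreamWriteSinkStream.json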
{ "ApplicationName": "tumbling_window", "ApplicationDescription": "Scala tumbling window application", "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "tumbling-window-scala-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" Legacy examples 343 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } ] } }, "CloudWatchLoggingOptions": [ { "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log- group:MyApplication:log-stream:kinesis-analytics-log-stream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Create an IAM role In this section, you create
"tumbling-window-scala-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" Legacy examples 343 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } ] } }, "CloudWatchLoggingOptions": [ { "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log- group:MyApplication:log-stream:kinesis-analytics-log-stream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles and then Create Role. 3. Under Select type of trusted identity, choose AWS Service 4. Under Choose the service that will use this role, choose Kinesis. 5. Under Select your use case, choose Managed Service for Apache Flink. 6. Choose Next: Permissions. 7. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 8. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Legacy examples 344 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role 9. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy. You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role. For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide. Create the application Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. 
The ServiceExecutionRole should include the IAM user role you created in the previous section. "ApplicationName": "tumbling_window", "ApplicationDescription": "Scala getting started application", "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { Legacy examples 345 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "tumbling-window-scala-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" } } ] } }, "CloudWatchLoggingOptions": [ { "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log- group:MyApplication:log-stream:kinesis-analytics-log-stream" } ] } Execute the CreateApplication with the following request to create the application: aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json The application is now created. You start the application in the next step. Legacy examples 346 Managed Service for Apache Flink Start the application Managed Service for Apache Flink Developer Guide In this section, you use the StartApplication action to start the application. To start the application 1. Save the following JSON code to a file named start_request.json. { "ApplicationName": "tumbling_window", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "tumbling_window" } 2. Execute the StopApplication action with the preceding request to
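For example, one quick way to look up the current version from the AWS CLI (assuming the application name used in this example) is:

aws kinesisanalyticsv2 describe-application --application-name tumbling_window --query 'ApplicationDetail.ApplicationVersionId'

Use the value it returns as the CurrentApplicationVersionId in the request below.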
"ApplicationName": "tumbling_window", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "tumbling_window" } 2. Execute the StopApplication action with the preceding request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json Legacy examples 347 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Setting Up Application Logging. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. {"ApplicationName": "tumbling_window", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" } } ] } } } Legacy examples 348 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Execute the UpdateApplication action with the preceding request to update environment properties: aws kinesisanalyticsv2 update-application --cli-input-json file:// update_properties_request.json Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action. Note To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section. 
{ "ApplicationName": "tumbling_window", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username", "FileKeyUpdate": "tumbling-window-scala-1.0.jar", "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU" } Legacy examples 349 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } } } Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. Legacy examples 350 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Creating a sliding window in Scala Note For
IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Example: Creating a sliding window in Scala Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. Note Starting from version 1.15 Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally but doesn't expose Scala into the user code classloader. Because of that, users need to add Scala dependencies into their jar-archives. Legacy examples 351 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen. In this exercise, you will create a simple streaming application which uses Scala 3.2.0 and Flink's Java DataStream API. The application reads data from Kinesis stream, aggregates it using sliding windows and writes results to output Kinesis stream. Note To set up required prerequisites for this exercise, first complete the Getting Started (Scala) exercise. This topic contains the following sections: • Download and examine the application code • Compile and upload the application code • Create and run the application (console) • Create and run the application (CLI) • Update the application code • Clean up AWS resources Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/scala/ SlidingWindow directory. Note the following about the application code: Legacy examples 352 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • A build.sbt file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries. • The BasicStreamingJob.scala file contains the main method that defines the application's functionality. • The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source: private def createSource: FlinkKinesisConsumer[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val inputProperties = applicationProperties.get("ConsumerConfigProperties") new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName), new SimpleStringSchema, inputProperties) } The application also uses a Kinesis sink to write into the result stream. 
The following snippet creates the Kinesis sink: private def createSink: KinesisStreamsSink[String] = { val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties val outputProperties = applicationProperties.get("ProducerConfigProperties") KinesisStreamsSink.builder[String] .setKinesisClientProperties(outputProperties) .setSerializationSchema(new SimpleStringSchema) .setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName)) .setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode)) .build } • The application uses the window operator to find the count of values for each stock symbol over a 10-seconds window that slides by 5 seconds. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink: environment.addSource(createSource) .map { value => val jsonNode = jsonParser.readValue(value, classOf[JsonNode]) Legacy examples 353 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide new Tuple2[String, Double](jsonNode.get("ticker").toString, jsonNode.get("price").asDouble) } .returns(Types.TUPLE(Types.STRING, Types.DOUBLE)) .keyBy(v => v.f0) // Logically partition the stream for each word .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5))) .min(1) // Calculate minimum price per ticker over the window .map { value => value.f0 + String.format(",%.2f", value.f1) + "\n" } .sinkTo(createSink) • The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object. • The application creates source and sink connectors using dynamic application properties. Runtime application's properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties. Compile and upload the application code In this section, you compile and upload your application code to an Amazon S3 bucket. Compile the Application Code Use the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises. 1. To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT: sbt assembly 2. If the application compiles successfully, the following file is created: target/scala-3.2.0/sliding-window-scala-1.0.jar Upload the Apache Flink Streaming Scala Code In this section, you create an Amazon S3 bucket and upload your application code. 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Legacy examples 354 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Choose Create
also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises. 1. To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT: sbt assembly 2. If the application compiles successfully, the following file is created: target/scala-3.2.0/sliding-window-scala-1.0.jar Upload the Apache Flink Streaming Scala Code In this section, you create an Amazon S3 bucket and upload your application code. 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Legacy examples 354 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Choose Create bucket 3. 4. 5. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next. In Configure options, keep the settings as they are, and choose Next. In Set permissions, keep the settings as they are, and choose Next. 6. Choose Create bucket. 7. Choose the ka-app-code-<username> bucket, and then choose Upload. 8. In the Select files step, choose Add files. Navigate to the sliding-window-scala-1.0.jar file that you created in the previous step. 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My Scala test app. • For Runtime, choose Apache Flink. • Leave the version as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Legacy examples 355 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application Use the following procedure to configure the application. To configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter sliding-window-scala-1.0.jar.. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. 
Enter the following: Group ID Key Value ConsumerConfigProp input.stream.name ExampleInputStream erties ConsumerConfigProp aws.region us-west-2 erties ConsumerConfigProp flink.stream.initp LATEST erties os Legacy examples 356 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Choose Save. 6. Under Properties, choose Add group again. 7. Enter the following: Group ID Key Value ProducerConfigProp output.stream.name ExampleOutputStream erties ProducerConfigProp aws.region us-west-2 erties 8. Under Monitoring, ensure that the Monitoring metrics level is set to Application. 9. For CloudWatch logging, choose the Enable check box. 10. Choose Update. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Edit the IAM policy Edit the IAM policy to add permissions to access the Amazon S3 bucket. To edit the IAM policy to add S3 bucket permissions 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. Legacy examples 357 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. On the Summary page, choose Edit policy. Choose the JSON tab. 4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/sliding-window-scala-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ Legacy examples 358 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action":
"kinesis:*",
      "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
    }
  ]
}

Run the application

To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.

Stop the application

To stop the application, on the MyApplication page, choose Stop. Confirm the action.

Create and run the application (CLI)

In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.

Create a permissions policy

Note
You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.

First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). When Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.

Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{ "ApplicationName": "sliding_window", "ApplicationDescription": "Scala sliding window application", "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "sliding-window-scala-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", Legacy examples 360 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" } } ] } }, "CloudWatchLoggingOptions": [ { "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log- group:MyApplication:log-stream:kinesis-analytics-log-stream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role. You attach the permissions policy that you created in the preceding section to this role. To create an IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, choose Roles and then Create Role. 3. Under Select type of trusted identity, choose AWS Service 4. Under Choose the service that will use this role, choose Kinesis. Legacy examples 361 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 5. Under Select your use case, choose Managed Service for Apache Flink. 6. Choose Next: Permissions. 7. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role. 8. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role. Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role 9. Attach the permissions policy to the role. Note For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy. a. On the Summary page, choose the Permissions tab. b. Choose Attach Policies. c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section). d. Choose the AKReadSourceStreamWriteSinkStream policy,
and choose Attach policy.

You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.

For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.

Create the application

Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.

{
  "ApplicationName": "sliding_window",
  "ApplicationDescription": "Scala sliding_window application",
  "RuntimeEnvironment": "FLINK-1_15",
  "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
  "ApplicationConfiguration": {
    "ApplicationCodeConfiguration": {
      "CodeContent": {
        "S3ContentLocation": {
          "BucketARN": "arn:aws:s3:::ka-app-code-username",
          "FileKey": "sliding-window-scala-1.0.jar"
        }
      },
      "CodeContentType": "ZIPFILE"
    },
    "EnvironmentProperties": {
      "PropertyGroups": [
        {
          "PropertyGroupId": "ConsumerConfigProperties",
          "PropertyMap": {
            "aws.region": "us-west-2",
            "stream.name": "ExampleInputStream",
            "flink.stream.initpos": "LATEST"
          }
        },
        {
          "PropertyGroupId": "ProducerConfigProperties",
          "PropertyMap": {
            "aws.region": "us-west-2",
            "stream.name": "ExampleOutputStream"
          }
        }
      ]
    }
  },
  "CloudWatchLoggingOptions": [
    {
      "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log-group:MyApplication:log-stream:kinesis-analytics-log-stream"
    }
  ]
}

Execute the CreateApplication action with the following request to create the application:

aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json

The application is now created. You start the application in the next step.

Start the application

In this section, you use the StartApplication action to start the application.

To start the application
1. Save the following JSON code to a file named start_request.json.

{
  "ApplicationName": "sliding_window",
  "RunConfiguration": {
    "ApplicationRestoreConfiguration": {
      "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
    }
  }
}

2. Execute the StartApplication action with the preceding request to start the application:

aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json

The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.

Stop the application

In this section, you use the StopApplication action to stop the application.

To stop the application
1.
Save the following JSON code to a file named stop_request.json. { "ApplicationName": "sliding_window" } Legacy examples 364 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Execute the StopApplication action with the preceding request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Setting Up Application Logging. Update environment properties In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams. To update environment properties for the application 1. Save the following JSON code to a file named update_properties_request.json. {"ApplicationName": "sliding_window", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "EnvironmentPropertyUpdates": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleOutputStream" } } Legacy examples 365 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide ] } } } 2. Execute the UpdateApplication action with the preceding request to update environment properties: aws kinesisanalyticsv2 update-application --cli-input-json file:// update_properties_request.json Update the application code When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action. Note To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning. To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package. The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You
can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section.

{
  "ApplicationName": "sliding_window",
  "CurrentApplicationVersionId": 1,
  "ApplicationConfigurationUpdate": {
    "ApplicationCodeConfigurationUpdate": {
      "CodeContentUpdate": {
        "S3ContentLocationUpdate": {
          "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username",
          "FileKeyUpdate": "sliding-window-scala-1.0.jar",
          "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU"
        }
      }
    }
  }
}

Clean up AWS resources

This section includes procedures for cleaning up AWS resources created in the sliding window tutorial.

This topic contains the following sections:
• Delete your Managed Service for Apache Flink application
• Delete your Kinesis data streams
• Delete your Amazon S3 object and bucket
• Delete your IAM resources
• Delete your CloudWatch resources

Delete your Managed Service for Apache Flink application
1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
2. In the Managed Service for Apache Flink panel, choose MyApplication.
3. In the application's page, choose Delete and then confirm the deletion.

Delete your Kinesis data streams
1. Open the Kinesis console at https://console.aws.amazon.com/kinesis.
2. In the Kinesis Data Streams panel, choose ExampleInputStream.
3. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
4. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.

Delete your Amazon S3 object and bucket
1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose the ka-app-code-<username> bucket.
3. Choose Delete and then enter the bucket name to confirm deletion.

Delete your IAM resources
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation bar, choose Policies.
3. In the filter control, enter kinesis.
4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
5. Choose Policy Actions and then choose Delete.
6. In the navigation bar, choose Roles.
7. Choose the kinesis-analytics-MyApplication-us-west-2 role.
8. Choose Delete role and then confirm the deletion.

Delete your CloudWatch resources
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation bar, choose Logs.
3. Choose the /aws/kinesis-analytics/MyApplication log group.
4. Choose Delete Log Group and then confirm the deletion.
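If you prefer to script the cleanup, the same resources can be removed with the AWS CLI. The following sketch is not part of the original tutorial; it assumes the resource names and the us-west-2 Region used in this exercise, and the <account-id>, <username>, and <timestamp> placeholders must be replaced with your own values. Note that delete-application requires the application's creation timestamp, and an IAM policy must be detached before the role that uses it can be deleted.

# Look up the application's creation timestamp, then delete the application
aws kinesisanalyticsv2 describe-application --application-name MyApplication --query 'ApplicationDetail.CreateTimestamp'
aws kinesisanalyticsv2 delete-application --application-name MyApplication --create-timestamp <timestamp>

# Delete the Kinesis data streams
aws kinesis delete-stream --stream-name ExampleInputStream
aws kinesis delete-stream --stream-name ExampleOutputStream

# Empty and remove the Amazon S3 bucket
aws s3 rb s3://ka-app-code-<username> --force

# Detach and delete the IAM policy, then delete the role
aws iam detach-role-policy --role-name kinesis-analytics-MyApplication-us-west-2 --policy-arn arn:aws:iam::<account-id>:policy/kinesis-analytics-service-MyApplication-us-west-2
aws iam delete-policy --policy-arn arn:aws:iam::<account-id>:policy/kinesis-analytics-service-MyApplication-us-west-2
aws iam delete-role --role-name kinesis-analytics-MyApplication-us-west-2

# Delete the CloudWatch log group
aws logs delete-log-group --log-group-name /aws/kinesis-analytics/MyApplication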
Example: Send streaming data to Amazon S3 in Scala Note For current examples, see Examples for creating and working with Managed Service for Apache Flink applications. Legacy examples 368 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note Starting from version 1.15 Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally but doesn't expose Scala into the user code classloader. Because of that, users need to add Scala dependencies into their jar-archives. For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen. In this exercise, you will create a simple streaming application which uses Scala 3.2.0 and Flink's Java DataStream API. The application reads data from Kinesis stream, aggregates it using sliding windows and writes results to S3. Note To set up required prerequisites for this exercise, first complete the Getting Started (Scala) exercise. You only need to create an additional folder data/ in the Amazon S3 bucket ka- app-code-<username>. This topic contains the following sections: • Download and examine the application code • Compile and upload the application code • Create and run the application (console) • Create and run the application (CLI) • Update the application code • Clean up AWS resources Download and examine the application code The Python application code for this example is available from GitHub. To download the application code, do the following: 1. Install the Git client if you haven't already. For more information, see Installing Git. 2. Clone the remote repository with the following command: Legacy examples 369 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git 3. Navigate to the amazon-kinesis-data-analytics-java-examples/scala/S3Sink directory. Note the following about the application code: • A build.sbt file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries. • The BasicStreamingJob.scala file contains the main method that defines the application's functionality. • The application uses a Kinesis
source to read from the source stream. The following snippet creates the Kinesis source:

private def createSource: FlinkKinesisConsumer[String] = {
  val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
  val inputProperties = applicationProperties.get("ConsumerConfigProperties")

  new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName),
    new SimpleStringSchema, inputProperties)
}

The application also uses a StreamingFileSink to write to an Amazon S3 bucket:

def createSink: StreamingFileSink[String] = {
  val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
  val s3SinkPath = applicationProperties.get("ProducerConfigProperties").getProperty("s3.sink.path")

  StreamingFileSink
    .forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder[String]("UTF-8"))
    .build()
}

• The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
• The application creates source and sink connectors using dynamic application properties. The application's runtime properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties.

Compile and upload the application code

In this section, you compile and upload your application code to an Amazon S3 bucket.

Compile the Application Code

Use the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises.
1. To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT:

   sbt assembly

2. If the application compiles successfully, the following file is created:

   target/scala-3.2.0/s3-sink-scala-1.0.jar

Upload the Apache Flink Streaming Scala Code

In this section, you create an Amazon S3 bucket and upload your application code.
1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
4. In Configure options, keep the settings as they are, and choose Next.
5. In Set permissions, keep the settings as they are, and choose Next.
6. Choose Create bucket.
7. Choose the ka-app-code-<username> bucket, and then choose Upload.
8. In the Select files step, choose Add files. Navigate to the s3-sink-scala-1.0.jar file that you created in the previous step.
Legacy examples 371 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 9. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Create and run the application (console) Follow these steps to create, configure, update, and run the application using the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. On the Managed Service for Apache Flink dashboard, choose Create analytics application. 3. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: • For Application name, enter MyApplication. • For Description, enter My java test app. • For Runtime, choose Apache Flink. • Leave the version as Apache Flink version 1.15.2 (Recommended version). 4. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-west-2. 5. Choose Create application. Note When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: • Policy: kinesis-analytics-service-MyApplication-us-west-2 • Role: kinesisanalytics-MyApplication-us-west-2 Configure the application Use the following procedure to configure the application. Legacy examples 372 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To configure the application 1. On the MyApplication page, choose Configure. 2. On the Configure application page, provide the Code location: • For Amazon S3 bucket, enter ka-app-code-<username>. • For Path to Amazon S3 object, enter s3-sink-scala-1.0.jar. 3. Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. 4. Under Properties, choose Add group. 5. Enter the following: Group ID Key Value ConsumerConfigProp input.stream.name ExampleInputStream erties ConsumerConfigProp aws.region us-west-2 erties ConsumerConfigProp flink.stream.initp LATEST erties os Choose Save. 6. Under Properties, choose Add group. 7. Enter the following: Group ID Key Value ProducerConfigProp s3.sink.path erties s3a://ka-app-code- <user-name> /data 8. Under
Monitoring, ensure that the Monitoring metrics level is set to Application.
9. For CloudWatch logging, choose the Enable check box.
10. Choose Update.

Note
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
• Log group: /aws/kinesis-analytics/MyApplication
• Log stream: kinesis-analytics-log-stream

Edit the IAM policy

Edit the IAM policy to add permissions to access the Amazon S3 bucket.

To edit the IAM policy to add S3 bucket permissions
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section.
3. On the Summary page, choose Edit policy. Choose the JSON tab.
4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadCode",
      "Effect": "Allow",
      "Action": [
        "s3:Abort*",
        "s3:DeleteObject*",
        "s3:GetObject*",
        "s3:GetBucket*",
        "s3:List*",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::ka-app-code-<username>",
        "arn:aws:s3:::ka-app-code-<username>/*"
      ]
    },
    {
      "Sid": "DescribeLogGroups",
      "Effect": "Allow",
      "Action": ["logs:DescribeLogGroups"],
      "Resource": ["arn:aws:logs:us-west-2:012345678901:log-group:*"]
    },
    {
      "Sid": "DescribeLogStreams",
      "Effect": "Allow",
      "Action": ["logs:DescribeLogStreams"],
      "Resource": ["arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"]
    },
    {
      "Sid": "PutLogEvents",
      "Effect": "Allow",
      "Action": ["logs:PutLogEvents"],
      "Resource": ["arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"]
    },
    {
      "Sid": "ReadInputStream",
      "Effect": "Allow",
      "Action": "kinesis:*",
      "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
    }
  ]
}

Run the application

To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.

Stop the application

To stop the application, on the MyApplication page, choose Stop. Confirm the action.
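After the application has been running for a while, you can confirm that the sink is delivering data by listing the objects under the data/ prefix that you configured in s3.sink.path. The following AWS CLI command is a quick check rather than part of the original procedure; replace <username> with your bucket suffix, and allow a few minutes for the first part files to appear, because the StreamingFileSink generally finalizes part files when a checkpoint completes.

# List the objects written by the application's S3 sink
aws s3 ls s3://ka-app-code-<username>/data/ --recursive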
Create and run the application (CLI) In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications. Create a permissions policy Note You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams. First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream. Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", Legacy examples 376 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::ka-app-code-username/getting-started-scala-1.0.jar" ] }, { "Sid": "DescribeLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:*" ] }, { "Sid": "DescribeLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/ MyApplication:log-stream:*" ] }, { "Sid": "PutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/ MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", Legacy examples 377 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ ExampleOutputStream" } ] } For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide. Create an IAM role In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream. Managed Service for Apache Flink cannot access your stream without
permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.

You attach the permissions policy that you created in the preceding section to this role.

To create an IAM role
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Roles and then Create Role.
3. Under Select type of trusted identity, choose AWS Service.
4. Under Choose the service that will use this role, choose Kinesis.
5. Under Select your use case, choose Managed Service for Apache Flink.
6. Choose Next: Permissions.
7. On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
8. On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role.
   Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role.
9. Attach the permissions policy to the role.
   Note
   For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a permissions policy.
   a. On the Summary page, choose the Permissions tab.
   b. Choose Attach Policies.
   c. In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section).
   d. Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.

You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.

For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.

Create the application

Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{ "ApplicationName": "s3_sink", "ApplicationDescription": "Scala tumbling window application", "RuntimeEnvironment": "FLINK-1_15", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role", Legacy examples 379 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "ApplicationConfiguration": { "ApplicationCodeConfiguration": { "CodeContent": { "S3ContentLocation": { "BucketARN": "arn:aws:s3:::ka-app-code-username", "FileKey": "s3-sink-scala-1.0.jar" } }, "CodeContentType": "ZIPFILE" }, "EnvironmentProperties": { "PropertyGroups": [ { "PropertyGroupId": "ConsumerConfigProperties", "PropertyMap" : { "aws.region" : "us-west-2", "stream.name" : "ExampleInputStream", "flink.stream.initpos" : "LATEST" } }, { "PropertyGroupId": "ProducerConfigProperties", "PropertyMap" : { "s3.sink.path" : "s3a://ka-app-code-<username>/data" } } ] } }, "CloudWatchLoggingOptions": [ { "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log- group:MyApplication:log-stream:kinesis-analytics-log-stream" } ] } Execute the CreateApplication with the following request to create the application: aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json The application is now created. You start the application in the next step. Legacy examples 380 Managed Service for Apache Flink Start the application Managed Service for Apache Flink Developer Guide In this section, you use the StartApplication action to start the application. To start the application 1. Save the following JSON code to a file named start_request.json. {{ "ApplicationName": "s3_sink", "RunConfiguration": { "ApplicationRestoreConfiguration": { "ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT" } } } 2. Execute the StartApplication action with the preceding request to start the application: aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working. Stop the application In this section, you use the StopApplication action to stop the application. To stop the application 1. Save the following JSON code to a file named stop_request.json. { "ApplicationName": "s3_sink" } 2. Execute the StopApplication action with the preceding request to stop the application: aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json Legacy examples 381 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide The application is now stopped. Add a CloudWatch logging option You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For
information about using CloudWatch Logs with your application, see Setting Up Application Logging.

Update environment properties

In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.

To update environment properties for the application
1. Save the following JSON code to a file named update_properties_request.json.

{
  "ApplicationName": "s3_sink",
  "CurrentApplicationVersionId": 1,
  "ApplicationConfigurationUpdate": {
    "EnvironmentPropertyUpdates": {
      "PropertyGroups": [
        {
          "PropertyGroupId": "ConsumerConfigProperties",
          "PropertyMap": {
            "aws.region": "us-west-2",
            "stream.name": "ExampleInputStream",
            "flink.stream.initpos": "LATEST"
          }
        },
        {
          "PropertyGroupId": "ProducerConfigProperties",
          "PropertyMap": {
            "s3.sink.path": "s3a://ka-app-code-<username>/data"
          }
        }
      ]
    }
  }
}

2. Execute the UpdateApplication action with the preceding request to update environment properties:

aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json

Update the application code

When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action.

Note
To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.

To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.

The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section.
{ "ApplicationName": "s3_sink", "CurrentApplicationVersionId": 1, "ApplicationConfigurationUpdate": { "ApplicationCodeConfigurationUpdate": { "CodeContentUpdate": { "S3ContentLocationUpdate": { "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username", "FileKeyUpdate": "s3-sink-scala-1.0.jar", "ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU" } Legacy examples 383 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide } } } } Clean up AWS resources This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. This topic contains the following sections: • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 object and bucket • Delete your IAM resources • Delete your CloudWatch resources Delete your Managed Service for Apache Flink application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. 3. in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. 4. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Delete your Amazon S3 object and bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the ka-app-code-<username> bucket. Legacy examples 384 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. Choose Delete and then enter the bucket name to confirm deletion. Delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-west-2 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Legacy examples 385 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use a Studio notebook with Managed Service for Apache Flink Studio notebooks for Managed Service for Apache Flink allows you to interactively query data streams in real time, and easily build and run stream processing applications using standard SQL, Python, and Scala. With a few clicks in
the AWS Management Console, you can launch a serverless notebook to query data streams and get results in seconds.

A notebook is a web-based development environment. With notebooks, you get a simple interactive development experience combined with the advanced capabilities provided by Apache Flink. Studio notebooks are powered by Apache Zeppelin and use Apache Flink as the stream processing engine. Studio notebooks seamlessly combine these technologies to make advanced analytics on data streams accessible to developers of all skill sets.

Apache Zeppelin provides your Studio notebooks with a complete suite of analytics tools, including the following:
• Data Visualization
• Exporting data to files
• Controlling the output format for easier analysis

To get started using Managed Service for Apache Flink and Apache Zeppelin, see Tutorial: Create a Studio notebook in Managed Service for Apache Flink. For more information about Apache Zeppelin, see the Apache Zeppelin documentation.

With a notebook, you model queries using the Apache Flink Table API & SQL in SQL, Python, or Scala, or the DataStream API in Scala. With a few clicks, you can then promote the Studio notebook to a continuously running, non-interactive Managed Service for Apache Flink stream-processing application for your production workloads.

This topic contains the following sections:
• Use the correct Studio notebook Runtime version
• Create a Studio notebook
• Perform an interactive analysis of streaming data
• Deploy as an application with durable state
• Review IAM permissions for Studio notebooks
• Use connectors and dependencies
• Implement user-defined functions
• Enable checkpointing
• Upgrade Studio Runtime
• Work with AWS Glue
• Examples and tutorials for Studio notebooks in Managed Service for Apache Flink
• Troubleshoot Studio notebooks for Managed Service for Apache Flink
• Create custom IAM policies for Managed Service for Apache Flink Studio notebooks

Use the correct Studio notebook Runtime version

With Amazon Managed Service for Apache Flink Studio, you can query data streams in real time and build and run stream processing applications using standard SQL, Python, and Scala in an interactive notebook. Studio notebooks are powered by Apache Zeppelin and use Apache Flink as the stream processing engine.

Note
We will deprecate Studio Runtime with Apache Flink version 1.11 on November 5, 2024. Starting from this date, you will not be able to run new notebooks or create new applications using this version. We recommend that you upgrade to the latest runtime (Apache Flink 1.15 and Apache Zeppelin 0.10) before that time. For guidance on how to upgrade your notebook, see Upgrade Studio Runtime.
Studio Runtime

Apache Flink version | Apache Zeppelin version | Python version | Status
1.15 | 0.10 | 3.8 | Recommended
1.13 | 0.9 | 3.8 | Supported until October 16, 2024
1.11 | 0.9 | 3.7 | Deprecating on February 24, 2025

Create a Studio notebook

A Studio notebook contains queries or programs written in SQL, Python, or Scala that run on streaming data and return analytic results. You create your application using either the console or the CLI, and provide queries for analyzing the data from your data source.

Your application has the following components:
• A data source, such as an Amazon MSK cluster, a Kinesis data stream, or an Amazon S3 bucket.
• An AWS Glue database. This database contains tables, which store your data source and destination schemas and endpoints. For more information, see Work with AWS Glue.
• Your application code. Your code implements your analytics query or program.
• Your application settings and runtime properties. For information about application settings and runtime properties, see the following topics in the Developer Guide for Apache Flink Applications:
  • Application Parallelism and Scaling: You use your application's Parallelism setting to control the number of queries that your application can execute simultaneously. Your queries can also take advantage of increased parallelism if they have multiple paths of execution, such as in the following circumstances:
    • When processing multiple shards of a Kinesis data stream
    • When partitioning data using the KeyBy operator
    • When using multiple window operators
  For more information about application scaling, see Application Scaling in Managed Service for Apache Flink.
  • Logging and Monitoring: For information about application logging and monitoring, see Logging and Monitoring in Amazon Managed Service for Apache Flink.
• Your application uses checkpoints and savepoints for fault tolerance. Checkpoints and savepoints are not enabled by default for Studio notebooks.

You can create your Studio notebook using either the AWS Management Console or the AWS CLI. When creating the application from the console, you have the following options:
• In the Amazon MSK console, choose your cluster, then choose Process data in real time.
• In the Kinesis Data Streams console, choose your data stream, then on the Applications tab choose Process data in real time.
• In the Managed Service for Apache Flink console, choose the Studio tab, then choose Create Studio notebook.

For a tutorial, see Event Detection with Managed Service for Apache Flink. For an example of a more advanced Studio notebook solution, see Apache Flink on Amazon Managed Service for Apache Flink Studio.

Perform an interactive analysis of streaming data

You use a serverless notebook powered by Apache Zeppelin to interact with your streaming data. Your notebook can have multiple notes, and each note can have one or more paragraphs where you can write your code.

The following example SQL query shows how to retrieve data from a data source:

%flink.ssql(type=update)
select * from stock;

For more examples of Flink Streaming SQL queries, see Examples and tutorials for Studio notebooks in Managed Service for Apache Flink following, and Queries in the Apache Flink documentation.

You can use Flink SQL queries in the Studio notebook to query streaming data. You can also use Python (Table API) and Scala (Table and DataStream APIs) to write programs to query your streaming data interactively. You can view the results of your queries or programs, update them in seconds, and re-run them to view updated results.

Flink interpreters

You specify which language Managed Service for Apache Flink uses to run your application by using an interpreter.
You can use the following interpreters with Managed Service for Apache Flink: Perform an interactive analysis of streaming data 389 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Name %flink Class Description FlinkInterpreter %flink.pyflink PyFlinkInterpreter %flink.ipyflink IPyFlinkInterpreter %flink.ssql FlinkStreamSqlInterpreter %flink.bsql FlinkBatchSqlInterpreter Creates ExecutionEnvironme nt/StreamExecution Environment/BatchTableEnvir onment/StreamTable Environment and provides a Scala environment Provides a python environme nt Provides an ipython environment Provides a stream sql environment Provides a batch sql environment For more information about Flink interpreters, see Flink interpreter for Apache Zeppelin. If you are using %flink.pyflink or %flink.ipyflink as your interpreters, you will need to use the ZeppelinContext to visualize the results within the notebook. For more PyFlink specific examples, see Query your data streams interactively using Managed Service for Apache Flink Studio and Python. Apache Flink table environment variables Apache Zeppelin provides access to table environment resources using environment variables. You access Scala table environment resources with the following variables: Variable senv Resource StreamExecutionEnvironment Apache Flink table environment variables 390 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Variable stenv Resource StreamTableEnvironment for blink planner You access Python table environment resources with the following variables: Variable s_env st_env Resource StreamExecutionEnvironment StreamTableEnvironment for blink planner For more information about using table environments, see Concepts and Common API in the Apache Flink documentation. Deploy as an application with durable state You can build your code and export it to Amazon S3. You can promote the code that you wrote in your note to a continuously running stream processing application. There are two modes of running an Apache Flink application on Managed Service for Apache Flink: With a Studio notebook, you have the ability to develop your code interactively, view results of your code in real time, and visualize it within your note. After you deploy a note to run in streaming mode, Managed Service for Apache Flink creates an application for you that runs continuously, reads data from your sources, writes to your destinations, maintains long-running application
state, and autoscales automatically based on the throughput of your source streams.

Note
The S3 bucket to which you export your application code must be in the same Region as your Studio notebook.

You can only deploy a note from your Studio notebook if it meets the following criteria:
• Paragraphs must be ordered sequentially. When you deploy your application, all paragraphs within a note will be executed sequentially (left-to-right, top-to-bottom) as they appear in your note. You can check this order by choosing Run All Paragraphs in your note.
• Your code is a combination of Python and SQL or Scala and SQL. We do not support Python and Scala together at this time for deploy-as-application.
• Your note should have only the following interpreters: %flink, %flink.ssql, %flink.pyflink, %flink.ipyflink, %md.
• The use of the Zeppelin context object z is not supported. Methods that return nothing will do nothing except log a warning. Other methods will raise Python exceptions or fail to compile in Scala.
• A note must result in a single Apache Flink job.
• Notes with dynamic forms are unsupported for deploying as an application.
• %md (Markdown) paragraphs will be skipped in deploying as an application, as these are expected to contain human-readable documentation that is unsuitable for running as part of the resulting application.
• Paragraphs disabled for running within Zeppelin will be skipped in deploying as an application. Even if a disabled paragraph uses an incompatible interpreter, for example, %flink.ipyflink in a note with %flink and %flink.ssql interpreters, it will be skipped while deploying the note as an application, and will not result in an error.
• There must be at least one paragraph present with source code (Flink SQL, PyFlink or Flink Scala) that is enabled for running for the application deployment to succeed.
• Setting parallelism in the interpreter directive within a paragraph (for example, %flink.ssql(parallelism=32)) will be ignored in applications deployed from a note. Instead, you can update the deployed application through the AWS Management Console, AWS Command Line Interface or AWS API to change the Parallelism and/or ParallelismPerKPU settings according to the level of parallelism your application requires, or you can enable autoscaling for your deployed application. A CLI sketch follows this list.
• If you are deploying as an application with durable state, your VPC must have internet access. If your VPC does not have internet access, see Deploy as an application with durable state in a VPC with no internet access.
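As noted in the parallelism criterion above, you adjust parallelism on the deployed application rather than in the note. The following AWS CLI call is a rough sketch of one way to do that with UpdateApplication; the application name, version ID, and parallelism values are illustrative placeholders, not values from this guide.

# Update the parallelism settings of an already deployed application
aws kinesisanalyticsv2 update-application \
    --application-name MyDeployedNoteApplication \
    --current-application-version-id 3 \
    --application-configuration-update '{
      "FlinkApplicationConfigurationUpdate": {
        "ParallelismConfigurationUpdate": {
          "ConfigurationTypeUpdate": "CUSTOM",
          "ParallelismUpdate": 4,
          "ParallelismPerKPUUpdate": 1,
          "AutoScalingEnabledUpdate": true
        }
      }
    }'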
Deploy as an application with durable state 392 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Scala/Python criteria • In your Scala or Python code, use the Blink planner (senv, stenv for Scala; s_env, st_env for Python) and not the older "Flink" planner (stenv_2 for Scala, st_env_2 for Python). The Apache Flink project recommends the use of the Blink planner for production use cases, and this is the default planner in Zeppelin and in Flink. • Your Python paragraphs must not use shell invocations/assignments using ! or IPython magic commands like %timeit or %conda in notes meant to be deployed as applications. • You can't use Scala case classes as parameters of functions passed to higher-order dataflow operators like map and filter. For information about Scala case classes, see CASE CLASSES in the Scala documentation. SQL criteria • Simple SELECT statements are not permitted, as there’s nowhere equivalent to a paragraph’s output section where the data can be delivered. • In any given paragraph, DDL statements (USE, CREATE, ALTER, DROP, SET, RESET) must precede DML (INSERT) statements. This is because DML statements in a paragraph must be submitted together as a single Flink job. • There should be at most one paragraph that has DML statements in it. This is because, for the deploy-as-application feature, we only support submitting a single job to Flink. For more information and an example, see Translate, redact and analyze streaming data using SQL functions with Amazon Managed Service for Apache Flink, Amazon Translate, and Amazon Comprehend. Review IAM permissions for Studio notebooks Managed Service for Apache Flink creates an IAM role for you
DROP, SET, RESET) must precede DML (INSERT) statements. This is because DML statements in a paragraph must be submitted together as a single Flink job. • There should be at most one paragraph that has DML statements in it. This is because, for the deploy-as-application feature, we only support submitting a single job to Flink. For more information and an example, see Translate, redact and analyze streaming data using SQL functions with Amazon Managed Service for Apache Flink, Amazon Translate, and Amazon Comprehend. Review IAM permissions for Studio notebooks Managed Service for Apache Flink creates an IAM role for you when you create a Studio notebook through the AWS Management Console. It also associates with that role a policy that allows the following access: Service CloudWatch Logs Scala/Python criteria Access List 393 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Service Amazon EC2 AWS Glue Managed Service for Apache Flink Managed Service for Apache Flink V2 Access List Read, Write Read Read Amazon S3 Read, Write Use connectors and dependencies Connectors enable you to read and write data across various technologies. Managed Service for Apache Flink bundles three default connectors with your Studio notebook. You can also use custom connectors. For more information about connectors, see Table & SQL Connectors in the Apache Flink documentation. Default connectors If you use the AWS Management Console to create your Studio notebook, Managed Service for Apache Flink includes the following custom connectors by default: flink-sql-connector- kinesis, flink-connector-kafka_2.12 and aws-msk-iam-auth. To create a Studio notebook through the console without these custom connectors, choose the Create with custom settings option. Then, when you get to the Configurations page, clear the checkboxes next to the two connectors. If you use the CreateApplication API to create your Studio notebook, the flink-sql-connector- flink and flink-connector-kafka connectors aren't included by default. To add them, specify them as a MavenReference in the CustomArtifactsConfiguration data type as shown in the following examples. The aws-msk-iam-auth connector is the connector to use with Amazon MSK that includes the feature to automatically authenticate with IAM. Use connectors and dependencies 394 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note The connector versions shown in the following example are the only versions that we support. For the Kinesis connector: "CustomArtifactsConfiguration": [{ "ArtifactType": "DEPENDENCY_JAR", "MavenReference": { "GroupId": "org.apache.flink", "ArtifactId": "flink-sql-connector-kinesis", "Version": "1.15.4" } }] For authenticating with AWS MSK through AWS IAM: "CustomArtifactsConfiguration": [{ "ArtifactType": "DEPENDENCY_JAR", "MavenReference": { "GroupId": "software.amazon.msk", "ArtifactId": "aws-msk-iam-auth", "Version": "1.1.6" } }] For the Apache Kafka connector: "CustomArtifactsConfiguration": [{ "ArtifactType": "DEPENDENCY_JAR", "MavenReference": { "GroupId": "org.apache.flink", "ArtifactId": "flink-connector-kafka", "Version": "1.15.4" } }] Default connectors 395 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To add these connectors to an existing notebook, use the UpdateApplication API operation and specify them as a MavenReference in the CustomArtifactsConfigurationUpdate data type. 
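As a minimal sketch, assuming a notebook named MyNotebook, the same update can be made with the AWS SDK for Python (Boto3):

import boto3

client = boto3.client("kinesisanalyticsv2", region_name="us-east-1")

# UpdateApplication requires the notebook's current version ID.
app = client.describe_application(ApplicationName="MyNotebook")
current_version = app["ApplicationDetail"]["ApplicationVersionId"]

# Add the Kinesis SQL connector as a Maven dependency.
client.update_application(
    ApplicationName="MyNotebook",
    CurrentApplicationVersionId=current_version,
    ApplicationConfigurationUpdate={
        "ZeppelinApplicationConfigurationUpdate": {
            "CustomArtifactsConfigurationUpdate": [
                {
                    "ArtifactType": "DEPENDENCY_JAR",
                    "MavenReference": {
                        "GroupId": "org.apache.flink",
                        "ArtifactId": "flink-sql-connector-kinesis",
                        "Version": "1.15.4",
                    },
                }
            ]
        }
    },
)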
Note You can set failOnError to true for the flink-sql-connector-kinesis connector in the table API. Add dependencies and custom connectors To use the AWS Management Console to add a dependency or a custom connector to your Studio notebook, follow these steps: 1. Upload your custom connector's file to Amazon S3. 2. 3. 4. 5. In the AWS Management Console, choose the Custom create option for creating your Studio notebook. Follow the Studio notebook creation workflow until you get to the Configurations step. In the Custom connectors section, choose Add custom connector. Specify the Amazon S3 location of the dependency or the custom connector. 6. Choose Save changes. To add a dependency JAR or a custom connector when you create a new Studio notebook using the CreateApplication API, specify the Amazon S3 location of the dependency JAR or the custom connector in the CustomArtifactsConfiguration data type. To add a dependency or a custom connector to an existing Studio notebook, invoke the UpdateApplication API operation and specify the Amazon S3 location of the dependency JAR or the custom connector in the CustomArtifactsConfigurationUpdate data type. Note When you include a dependency or a custom connector, you must also include all its transitive dependencies that aren't bundled within it. Add dependencies and custom connectors 396 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Implement user-defined functions User-defined functions (UDFs) are extension points that allow you to call frequently-used logic or custom logic that can't be expressed otherwise in queries. You can use Python or a JVM language like Java or Scala to implement your UDFs in paragraphs inside your Studio notebook. You can also add to your Studio notebook external JAR files that contain UDFs implemented in a JVM language. When implementing JARs that register abstract classes that subclass UserDefinedFunction (or your own abstract classes), use provided scope in Apache Maven, compileOnly dependency declarations in Gradle, provided scope in SBT, or an equivalent directive
user-defined functions User-defined functions (UDFs) are extension points that allow you to call frequently-used logic or custom logic that can't be expressed otherwise in queries. You can use Python or a JVM language like Java or Scala to implement your UDFs in paragraphs inside your Studio notebook. You can also add to your Studio notebook external JAR files that contain UDFs implemented in a JVM language. When implementing JARs that register abstract classes that subclass UserDefinedFunction (or your own abstract classes), use provided scope in Apache Maven, compileOnly dependency declarations in Gradle, provided scope in SBT, or an equivalent directive in your UDF project build configuration. This allows the UDF source code to compile against the Flink APIs, but the Flink API classes are not themselves included in the build artifacts. Refer to this pom from the UDF jar example which adheres to such prerequisite on a Maven project. Note For an example setup, see Translate, redact and analyze streaming data using SQL functions with Amazon Managed Service for Apache Flink, Amazon Translate, and Amazon Comprehend on the AWS Machine Learning Blog. To use the console to add UDF JAR files to your Studio notebook, follow these steps: 1. Upload your UDF JAR file to Amazon S3. 2. 3. 4. 5. In the AWS Management Console, choose the Custom create option for creating your Studio notebook. Follow the Studio notebook creation workflow until you get to the Configurations step. In the User-defined functions section, choose Add user-defined function. Specify the Amazon S3 location of the JAR file or the ZIP file that has the implementation of your UDF. 6. Choose Save changes. To add a UDF JAR when you create a new Studio notebook using the CreateApplication API, specify the JAR location in the CustomArtifactConfiguration data type. To add a UDF JAR to an existing Studio notebook, invoke the UpdateApplication API operation and specify the JAR location User-defined functions 397 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide in the CustomArtifactsConfigurationUpdate data type. Alternatively, you can use the AWS Management Console to add UDF JAR files to you Studio notebook. Considerations with user-defined functions • Managed Service for Apache Flink Studio uses the Apache Zeppelin terminology wherein a notebook is a Zeppelin instance that can contain multiple notes. Each note can then contain multiple paragraphs. With Managed Service for Apache Flink Studio the interpreter process is shared across all the notes in the notebook. So if you perform an explicit function registration using createTemporarySystemFunction in one note, the same can be referenced as-is in another note of same notebook. The Deploy as application operation however works on an individual note and not all notes in the notebook. When you perform deploy as application, only active note's contents are used to generate the application. Any explicit function registration performed in other notebooks are not part of the generated application dependencies. Additionally, during Deploy as application option an implicit function registration occurs by converting the main class name of JAR to a lowercase string. For example, if TextAnalyticsUDF is the main class for UDF JAR, then an implicit registration will result in function name textanalyticsudf. 
For example, if an explicit function registration in note 1 of Studio occurs as follows, all other notes in that notebook (say, note 2) can refer to the function by the name myNewFuncNameForClass because of the shared interpreter:

stenv.createTemporarySystemFunction("myNewFuncNameForClass", new TextAnalyticsUDF())

However, during the deploy as application operation on note 2, this explicit registration is not included in the dependencies, so the deployed application does not perform as expected. Because of the implicit registration, by default all references to this function are expected to use textanalyticsudf, not myNewFuncNameForClass. If you need to register a custom function name, note 2 itself must contain another paragraph that performs the explicit registration again, as follows:

%flink(parallelism=1)
import com.amazonaws.kinesis.udf.textanalytics.TextAnalyticsUDF
// re-register the JAR for UDF with custom name
stenv.createTemporarySystemFunction("myNewFuncNameForClass", new TextAnalyticsUDF())

%flink.ssql(type=update, parallelism=1)
INSERT INTO table2 SELECT myNewFuncNameForClass(column_name) FROM table1;

• If your UDF JAR includes Flink SDKs, then configure your Java project
so that the UDF source code can compile against the Flink SDKs, but the Flink SDK classes are not themselves included in the build artifact, for example the JAR. You can use provided scope in Apache Maven, compileOnly dependency declarations in Gradle, provided scope in SBT, or equivalent directive in their UDF project build configuration. You can refer to this pom from the UDF jar example, which adheres to such a prerequisite on a maven project. For a complete step-by-step tutorial, see this Translate, redact and analyze streaming data using SQL functions with Amazon Managed Service for Apache Flink, Amazon Translate, and Amazon Comprehend. Enable checkpointing You enable checkpointing by using environment settings. For information about checkpointing, see Fault Tolerance in the Managed Service for Apache Flink Developer Guide. Set the checkpointing interval The following Scala code example sets your application's checkpoint interval to one minute: // start a checkpoint every 1 minute stenv.enableCheckpointing(60000) The following Python code example sets your application's checkpoint interval to one minute: st_env.get_config().get_configuration().set_string( "execution.checkpointing.interval", "1min" ) Enable checkpointing 399 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Set the checkpointing type The following Scala code example sets your application's checkpoint mode to EXACTLY_ONCE (the default): // set mode to exactly-once (this is the default) stenv.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE) The following Python code example sets your application's checkpoint mode to EXACTLY_ONCE (the default): st_env.get_config().get_configuration().set_string( "execution.checkpointing.mode", "EXACTLY_ONCE" ) Upgrade Studio Runtime This section contains information about how to upgrade your Studio notebook Runtime. We recommend that you always upgrade to the latest supported Studio Runtime. Upgrade your notebook to a new Studio Runtime Depending on how you use Studio, the steps to upgrade your Runtime differ. Select the option that fits your use case. SQL queries or Python code with no external dependencies If you are using SQL or Python without any external dependencies, use the following Runtime upgrade process. We recommend that you upgrade to the latest Runtime version. The upgrade process is the same, reardless of the Runtime version you are upgrading from. 1. Create a new Studio notebook using the latest Runtime. 2. Copy and paste the code of every note from the old notebook to the new notebook. 3. In the new notebook, adjust the code to make it compatible with any Apache Flink feature that has changed from the previous version. • Run the new notebook. Open the notebook and run it note by note, in sequence, and test if it works. Set the checkpointing type 400 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Make any required changes to the code. • Stop the new notebook. 4. If you had deployed the old notebook as application: • Deploy the new notebook as a separate, new application. • Stop the old application. • Run the new application without snapshot. 5. Stop the old notebook if it's running. Start the new notebook, as required, for interactive use. 
Process flow for upgrading without external dependencies Upgrade your notebook to a new Studio Runtime 401 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Upgrade your notebook to a new Studio Runtime 402 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide SQL queries or Python code with external dependencies Follow this process if you are using SQL or Python and using external dependencies such as connectors or custom artifacts, like user-defined functions implemented in Python or Java. We recommend that you upgrade to the latest Runtime. The process is the same, regardless of the Runtime version that you are upgrading from. 1. Create a new Studio notebook using the latest Runtime. 2. Copy and paste the code of every note from the old notebook to the new notebook. 3. Update the external dependencies and custom artifacts. • Look for new connectors compatible with the Apache Flink version of the new Runtime. Refer to Table & SQL Connectors in the Apache Flink documentation to find the correct connectors for the Flink version. • Update the code of user-defined functions to match changes in the Apache Flink API, and any Python or JAR dependencies used by the user-defined functions. Re-package your updated custom artifact. • Add these new connectors and artifacts to the new notebook. 4. In the new notebook, adjust the code to make it compatible with any Apache Flink feature that has changed from the previous version. • Run the new notebook. Open the notebook and run it note by note, in sequence, and test if it works. • Make any required changes to the code. • Stop the new notebook. 5. If you had deployed the old notebook as application: • Deploy the new notebook as a separate, new application. • Stop the old application. • Run the new application without
• Add these new connectors and artifacts to the new notebook. 4. In the new notebook, adjust the code to make it compatible with any Apache Flink feature that has changed from the previous version. • Run the new notebook. Open the notebook and run it note by note, in sequence, and test if it works. • Make any required changes to the code. • Stop the new notebook. 5. If you had deployed the old notebook as application: • Deploy the new notebook as a separate, new application. • Stop the old application. • Run the new application without snapshot. 6. Stop the old notebook if it's running. Start the new notebook, as required, for interactive use. Process flow for upgrading with external dependencies Upgrade your notebook to a new Studio Runtime 403 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Upgrade your notebook to a new Studio Runtime 404 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Work with AWS Glue Your Studio notebook stores and gets information about its data sources and sinks from AWS Glue. When you create your Studio notebook, you specify the AWS Glue database that contains your connection information. When you access your data sources and sinks, you specify AWS Glue tables contained in the database. Your AWS Glue tables provide access to the AWS Glue connections that define the locations, schemas, and parameters of your data sources and destinations. Studio notebooks use table properties to store application-specific data. For more information, see Table properties. For an example of how to set up a AWS Glue connection, database, and table for use with Studio notebooks, see Create an AWS Glue database in the Tutorial: Create a Studio notebook in Managed Service for Apache Flink tutorial. Table properties In addition to data fields, your AWS Glue tables provide other information to your Studio notebook using table properties. Managed Service for Apache Flink uses the following AWS Glue table properties: • Define Apache Flink time values: These properties define how Managed Service for Apache Flink emits Apache Flink internal data processing time values. • Use Flink connector and format properties: These properties provide information about your data streams. To add a property to an AWS Glue table, do the following: 1. Sign in to the AWS Management Console and open the AWS Glue console at https:// console.aws.amazon.com/glue/. 2. From the list of tables, choose the table that your application uses to store its data connection information. Choose Action, Edit table details. 3. Under Table Properties, enter managed-flink.proctime for key and user_action_time for Value. Work with AWS Glue 405 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Define Apache Flink time values Apache Flink provides time values that describe when stream processing events occured, such as Processing Time and Event Time. To include these values in your application output, you define properties on your AWS Glue table that tell the Managed Service for Apache Flink runtime to emit these values into the specified fields. The keys and values you use in your table properties are as follows: Timestamp Type Key Value Processing Time managed-flink.proctime Event Time managed-flink.rowtime managed-flink.wate rmark.column_na me .milliseconds The column name that AWS Glue will use to expose the value. This column name does not correspond to an existing table column. 
The column name that AWS Glue will use to expose the value. This column name corresponds to an existing table column. The watermark interval in milliseconds Use Flink connector and format properties You provide information about your data sources to your application's Flink connectors using AWS Glue table properties. Some examples of the properties that Managed Service for Apache Flink uses for connectors are as follows: Connector Type Kafka Key format Value The format used to deseriali ze and serialize Kafka messages, e.g. json or csv. Table properties 406 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Connector Type Key Value scan.startup.mode Kinesis format aws.region S3 (Filesystem) format path The startup mode for the Kafka consumer, e.g. earliest-offset or timestamp . The format used to deseriali ze and serialize Kinesis data stream records, e.g. json or csv. The AWS region where the stream is defined. The format used to deseriali ze and serialize files, e.g. json or csv. The Amazon S3 path, e.g. s3://mybucket/ . For more information about other connectors besides Kinesis and Apache Kafka, see your connector's documentation. Examples and tutorials for Studio notebooks in Managed Service for Apache Flink Topics • Tutorial: Create a Studio notebook in Managed Service for Apache Flink • Tutorial: Deploy a Studio notebook as a Managed Service for Apache Flink application with durable state • View example queries to analyza data in a Studio notebook Examples and tutorials
csv. The AWS region where the stream is defined. The format used to deseriali ze and serialize files, e.g. json or csv. The Amazon S3 path, e.g. s3://mybucket/ . For more information about other connectors besides Kinesis and Apache Kafka, see your connector's documentation. Examples and tutorials for Studio notebooks in Managed Service for Apache Flink Topics • Tutorial: Create a Studio notebook in Managed Service for Apache Flink • Tutorial: Deploy a Studio notebook as a Managed Service for Apache Flink application with durable state • View example queries to analyza data in a Studio notebook Examples and tutorials for Studio notebooks in Managed Service for Apache Flink 407 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Tutorial: Create a Studio notebook in Managed Service for Apache Flink The following tutorial demonstrates how to create a Studio notebook that reads data from a Kinesis data stream or an Amazon MSK cluster. This tutorial contains the following sections: • Complete the prerequisites • Create an AWS Glue database • Next steps: Create a Studio notebook with Kinesis Data Streams or Amazon MSK • Create a Studio notebook with Kinesis Data Streams • Create a Studio notebook with Amazon MSK • Clean up your application and dependent resources Complete the prerequisites Make sure that your AWS CLI is version 2 or later. To install the latest AWS CLI, see Installing, updating, and uninstalling the AWS CLI version 2. Create an AWS Glue database Your Studio notebook uses an AWS Glue database for metadata about your Amazon MSK data source. Create an AWS Glue Database 1. Open the AWS Glue console at https://console.aws.amazon.com/glue/. 2. Choose Add database. In the Add database window, enter default for Database name. Choose Create. Next steps: Create a Studio notebook with Kinesis Data Streams or Amazon MSK With this tutorial, you can create a Studio notebook that uses either Kinesis Data Streams or Amazon MSK: • Create a Studio notebook with Kinesis Data Streams : With Kinesis Data Streams, you quickly create an application that uses a Kinesis data stream as a source. You only need to create a Kinesis data stream as a dependent resource. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 408 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Create a Studio notebook with Amazon MSK : With Amazon MSK, you create an application that uses a Amazon MSK cluster as a source. You need to create an Amazon VPC, an Amazon EC2 client instance, and an Amazon MSK cluster as dependent resources. Create a Studio notebook with Kinesis Data Streams This tutorial describes how to create a Studio notebook that uses a Kinesis data stream as a source. This tutorial contains the following sections: • Complete the prerequisites • Create an AWS Glue table • Create a Studio notebook with Kinesis Data Streams • Send data to your Kinesis data stream • Test your Studio notebook Complete the prerequisites Before you create a Studio notebook, create a Kinesis data stream (ExampleInputStream). Your application uses this stream for the application source. You can create this stream using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name the stream ExampleInputStream and set the Number of open shards to 1. 
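If you script your setup in Python instead of using the console or the AWS CLI, the following is a minimal sketch of creating the same stream with the AWS SDK for Python (Boto3):

import boto3

# Create the single-shard input stream used in this tutorial.
kinesis = boto3.client("kinesis", region_name="us-east-1")
kinesis.create_stream(StreamName="ExampleInputStream", ShardCount=1)

# Stream creation is asynchronous; wait until the stream is ACTIVE before using it.
kinesis.get_waiter("stream_exists").wait(StreamName="ExampleInputStream")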
To create the stream (ExampleInputStream) using the AWS CLI, use the following Amazon Kinesis create-stream AWS CLI command. $ aws kinesis create-stream \ --stream-name ExampleInputStream \ --shard-count 1 \ --region us-east-1 \ --profile adminuser Create an AWS Glue table Your Studio notebook uses an AWS Glue database for metadata about your Kinesis Data Streams data source. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 409 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note You can either manually create the database first or you can let Managed Service for Apache Flink create it for you when you create the notebook. Similarly, you can either manually create the table as described in this section, or you can use the create table connector code for Managed Service for Apache Flink in your notebook within Apache Zeppelin to create your table via a DDL statement. You can then check in AWS Glue to make sure the table was correctly created. Create a Table 1. Sign in to the AWS Management Console and open the AWS Glue console at https:// console.aws.amazon.com/glue/. 2. 3. 4. 5. 6. 7. If you don't already have a AWS Glue database, choose Databases from the left navigation bar. Choose Add Database. In the Add database window, enter default for Database name. Choose Create. In the left navigation bar, choose Tables. In the Tables page, choose
Apache Flink in your notebook within Apache Zeppelin to create your table via a DDL statement. You can then check in AWS Glue to make sure the table was correctly created. Create a Table 1. Sign in to the AWS Management Console and open the AWS Glue console at https:// console.aws.amazon.com/glue/. 2. 3. 4. 5. 6. 7. If you don't already have a AWS Glue database, choose Databases from the left navigation bar. Choose Add Database. In the Add database window, enter default for Database name. Choose Create. In the left navigation bar, choose Tables. In the Tables page, choose Add tables, Add table manually. In the Set up your table's properties page, enter stock for the Table name. Make sure you select the database you created previously. Choose Next. In the Add a data store page, choose Kinesis. For the Stream name, enter ExampleInputStream. For Kinesis source URL, choose enter https://kinesis.us- east-1.amazonaws.com. If you copy and paste the Kinesis source URL, be sure to delete any leading or trailing spaces. Choose Next. In the Classification page, choose JSON. Choose Next. In the Define a Schema page, choose Add Column to add a column. Add columns with the following properties: Column name ticker price Data type string double Tutorial: Create a Studio notebook in Managed Service for Apache Flink 410 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Choose Next. 8. On the next page, verify your settings, and choose Finish. 9. Choose your newly created table from the list of tables. 10. Choose Edit table and add a property with the key managed-flink.proctime and the value proctime. 11. Choose Apply. Create a Studio notebook with Kinesis Data Streams Now that you have created the resources your application uses, you create your Studio notebook. To create your application, you can use either the AWS Management Console or the AWS CLI. • Create a Studio notebook using the AWS Management Console • Create a Studio notebook using the AWS CLI Create a Studio notebook using the AWS Management Console 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/ managed-flink/home?region=us-east-1#/applications/dashboard. 2. In the Managed Service for Apache Flink applications page, choose the Studio tab. Choose Create Studio notebook. Note You can also create a Studio notebook from the Amazon MSK or Kinesis Data Streams consoles by selecting your input Amazon MSK cluster or Kinesis data stream, and choosing Process data in real time. 3. In the Create Studio notebook page, provide the following information: • Enter MyNotebook for the name of the notebook. • Choose default for AWS Glue database. Choose Create Studio notebook. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 411 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 4. In the MyNotebook page, choose Run. Wait for the Status to show Running. Charges apply when the notebook is running. Create a Studio notebook using the AWS CLI To create your Studio notebook using the AWS CLI, do the following: 1. Verify your account ID. You need this value to create your application. 2. Create the role arn:aws:iam::AccountID:role/ZeppelinRole and add the following permissions to the auto-created role by console. "kinesis:GetShardIterator", "kinesis:GetRecords", "kinesis:ListShards" 3. Create a file called create.json with the following contents. Replace the placeholder values with your information. 
{ "ApplicationName": "MyNotebook", "RuntimeEnvironment": "ZEPPELIN-FLINK-3_0", "ApplicationMode": "INTERACTIVE", "ServiceExecutionRole": "arn:aws:iam::AccountID:role/ZeppelinRole", "ApplicationConfiguration": { "ApplicationSnapshotConfiguration": { "SnapshotsEnabled": false }, "ZeppelinApplicationConfiguration": { "CatalogConfiguration": { "GlueDataCatalogConfiguration": { "DatabaseARN": "arn:aws:glue:us-east-1:AccountID:database/ default" } } } } } 4. Run the following command to create your application: Tutorial: Create a Studio notebook in Managed Service for Apache Flink 412 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide aws kinesisanalyticsv2 create-application --cli-input-json file://create.json 5. When the command completes, you see output that shows the details for your new Studio notebook. The following is an example of the output. { "ApplicationDetail": { "ApplicationARN": "arn:aws:kinesisanalyticsus- east-1:012345678901:application/MyNotebook", "ApplicationName": "MyNotebook", "RuntimeEnvironment": "ZEPPELIN-FLINK-3_0", "ApplicationMode": "INTERACTIVE", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/ZeppelinRole", ... 6. Run the following command to start your application. Replace the sample value with your account ID. aws kinesisanalyticsv2 start-application --application-arn arn:aws:kinesisanalyticsus-east-1:012345678901:application/MyNotebook\ Send data to your Kinesis data stream To send test data to your Kinesis data stream, do the following: 1. Open the Kinesis Data Generator. 2. Choose Create a Cognito User with CloudFormation. 3. 4. 5. 6. The AWS CloudFormation console opens with the Kinesis Data Generator template. Choose Next. In the Specify stack details page, enter a username and password for your Cognito user. Choose Next. In the Configure stack options page, choose Next. In the Review Kinesis-Data-Generator-Cognito-User page, choose the I acknowledge that AWS CloudFormation might create IAM resources. checkbox. Choose Create Stack. 7. Wait for the AWS CloudFormation stack to finish being created. After the stack is complete, open the Kinesis-Data-Generator-Cognito-User stack in the AWS
following: 1. Open the Kinesis Data Generator. 2. Choose Create a Cognito User with CloudFormation. 3. 4. 5. 6. The AWS CloudFormation console opens with the Kinesis Data Generator template. Choose Next. In the Specify stack details page, enter a username and password for your Cognito user. Choose Next. In the Configure stack options page, choose Next. In the Review Kinesis-Data-Generator-Cognito-User page, choose the I acknowledge that AWS CloudFormation might create IAM resources. checkbox. Choose Create Stack. 7. Wait for the AWS CloudFormation stack to finish being created. After the stack is complete, open the Kinesis-Data-Generator-Cognito-User stack in the AWS CloudFormation console, Tutorial: Create a Studio notebook in Managed Service for Apache Flink 413 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide and choose the Outputs tab. Open the URL listed for the KinesisDataGeneratorUrl output value. 8. In the Amazon Kinesis Data Generator page, log in with the credentials you created in step 4. 9. On the next page, provide the following values: Region us-east-1 Stream/Firehose stream ExampleInputStream Records per second 1 For Record Template, paste the following code: { "ticker": "{{random.arrayElement( ["AMZN","MSFT","GOOG"] )}}", "price": {{random.number( { "min":10, "max":150 } )}} } 10. Choose Send data. 11. The generator will send data to your Kinesis data stream. Leave the generator running while you complete the next section. Test your Studio notebook In this section, you use your Studio notebook to query data from your Kinesis data stream. 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/ managed-flink/home?region=us-east-1#/applications/dashboard. 2. On the Managed Service for Apache Flink applications page, choose the Studio notebook tab. Choose MyNotebook. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 414 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 3. In the MyNotebook page, choose Open in Apache Zeppelin. The Apache Zeppelin interface opens in a new tab. In the Welcome to Zeppelin! page, choose Zeppelin Note. In the Zeppelin Note page, enter the following query into a new note: 4. 5. %flink.ssql(type=update) select * from stock Choose the run icon. After a short time, the note displays data from the Kinesis data stream. To open the Apache Flink Dashboard for your application to view operational aspects, choose FLINK JOB. For more information about the Flink Dashboard, see Apache Flink Dashboard in the Managed Service for Apache Flink Developer Guide. For more examples of Flink Streaming SQL queries, see Queries in the Apache Flink documentation. Create a Studio notebook with Amazon MSK This tutorial describes how to create a Studio notebook that uses an Amazon MSK cluster as a source. This tutorial contains the following sections: • Set up an Amazon MSK cluster • Add a NAT gateway to your VPC • Create an AWS Glue connection and table • Create a Studio notebook with Amazon MSK • Send data to your Amazon MSK cluster • Test your Studio notebook Set up an Amazon MSK cluster For this tutorial, you need an Amazon MSK cluster that allows plaintext access. If you don't have an Amazon MSK cluster set up already, follow the Getting Started Using Amazon MSK tutorial to create an Amazon VPC, an Amazon MSK cluster, a topic, and an Amazon EC2 client instance. 
Tutorial: Create a Studio notebook in Managed Service for Apache Flink 415 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide When following the tutorial, do the following: • In Step 3: Create an Amazon MSK Cluster, on step 4, change the ClientBroker value from TLS to PLAINTEXT. Add a NAT gateway to your VPC If you created an Amazon MSK cluster by following the Getting Started Using Amazon MSK tutorial, or if your existing Amazon VPC does not already have a NAT gateway for its private subnets, you must add a NAT Gateway to your Amazon VPC. The following diagram shows the architecture. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 416 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To create a NAT gateway for your Amazon VPC, do the following: 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose NAT Gateways from the left navigation bar. 3. On the NAT Gateways page, choose Create NAT Gateway. 4. On the Create NAT Gateway page, provide the following values: Tutorial: Create a Studio notebook in Managed Service for Apache Flink 417 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Name - optional Subnet Elastic IP allocation ID ZeppelinGateway AWSKafkaTutorialSubnet1 Choose an available Elastic IP. If there are no Elastic IPs available, choose Allocate Elastic IP, and then choose the Elasic IP that the console creates. Choose Create NAT Gateway. 5. On the left navigation bar, choose Route Tables. 6. Choose Create Route Table. 7.
3. On the NAT Gateways page, choose Create NAT Gateway. 4. On the Create NAT Gateway page, provide the following values: Tutorial: Create a Studio notebook in Managed Service for Apache Flink 417 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Name - optional Subnet Elastic IP allocation ID ZeppelinGateway AWSKafkaTutorialSubnet1 Choose an available Elastic IP. If there are no Elastic IPs available, choose Allocate Elastic IP, and then choose the Elasic IP that the console creates. Choose Create NAT Gateway. 5. On the left navigation bar, choose Route Tables. 6. Choose Create Route Table. 7. On the Create route table page, provide the following information: • Name tag: ZeppelinRouteTable • VPC: Choose your VPC (e.g. AWSKafkaTutorialVPC). Choose Create. 8. In the list of route tables, choose ZeppelinRouteTable. Choose the Routes tab, and choose Edit routes. 9. In the Edit Routes page, choose Add route. 10. In the For Destination, enter 0.0.0.0/0. For Target, choose NAT Gateway, ZeppelinGateway. Choose Save Routes. Choose Close. 11. On the Route Tables page, with ZeppelinRouteTable selected, choose the Subnet associations tab. Choose Edit subnet associations. 12. In the Edit subnet associations page, choose AWSKafkaTutorialSubnet2 and AWSKafkaTutorialSubnet3. Choose Save. Create an AWS Glue connection and table Your Studio notebook uses an AWS Glue database for metadata about your Amazon MSK data source. In this section, you create an AWS Glue connection that describes how to access your Tutorial: Create a Studio notebook in Managed Service for Apache Flink 418 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Amazon MSK cluster, and an AWS Glue table that describes how to present the data in your data source to clients such as your Studio notebook. Create a Connection 1. Sign in to the AWS Management Console and open the AWS Glue console at https:// console.aws.amazon.com/glue/. 2. If you don't already have a AWS Glue database, choose Databases from the left navigation bar. Choose Add Database. In the Add database window, enter default for Database name. Choose Create. 3. Choose Connections from the left navigation bar. Choose Add Connection. 4. In the Add Connection window, provide the following values: • For Connection name, enter ZeppelinConnection. • For Connection type, choose Kafka. • For Kafka bootstrap server URLs, provide the bootstrap broker string for your cluster. You can get the bootstrap brokers from either the MSK console, or by entering the following CLI command: aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn ClusterArn • Uncheck the Require SSL connection checkbox. Choose Next. 5. In the VPC page, provide the following values: • For VPC, choose the name of your VPC (e.g. AWSKafkaTutorialVPC.) • For Subnet, choose AWSKafkaTutorialSubnet2. • For Security groups, choose all available groups. Choose Next. 6. In the Connection properties / Connection access page, choose Finish. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 419 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create a Table Note You can either manually create the table as described in the following steps, or you can use the create table connector code for Managed Service for Apache Flink in your notebook within Apache Zeppelin to create your table via a DDL statement. You can then check in AWS Glue to make sure the table was correctly created. 1. 2. 3. 4. 5. 
In the left navigation bar, choose Tables. In the Tables page, choose Add tables, Add table manually. In the Set up your table's properties page, enter stock for the Table name. Make sure you select the database you created previously. Choose Next. In the Add a data store page, choose Kafka. For the Topic name, enter your topic name (e.g. AWSKafkaTutorialTopic). For Connection, choose ZeppelinConnection. In the Classification page, choose JSON. Choose Next. In the Define a Schema page, choose Add Column to add a column. Add columns with the following properties: Column name ticker price Choose Next. Data type string double 6. On the next page, verify your settings, and choose Finish. 7. Choose your newly created table from the list of tables. 8. Choose Edit table and add the following properties: • key: managed-flink.proctime, value: proctime • key: flink.properties.group.id, value: test-consumer-group • key: flink.properties.auto.offset.reset, value: latest • key: classification, value: json Tutorial: Create a Studio notebook in Managed Service for Apache Flink 420 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Without these key/value pairs, the Flink notebook runs into an error. 9. Choose Apply. Create a Studio notebook with Amazon MSK Now that you have created the resources your application uses, you create your Studio notebook. You can create your application using either the AWS Management Console or the AWS CLI. • Create a Studio notebook using the AWS Management Console • Create a Studio notebook using the
flink.properties.auto.offset.reset, value: latest • key: classification, value: json Tutorial: Create a Studio notebook in Managed Service for Apache Flink 420 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Without these key/value pairs, the Flink notebook runs into an error. 9. Choose Apply. Create a Studio notebook with Amazon MSK Now that you have created the resources your application uses, you create your Studio notebook. You can create your application using either the AWS Management Console or the AWS CLI. • Create a Studio notebook using the AWS Management Console • Create a Studio notebook using the AWS CLI Note You can also create a Studio notebook from the Amazon MSK console by choosing an existing cluster, then choosing Process data in real time. Create a Studio notebook using the AWS Management Console 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/ managed-flink/home?region=us-east-1#/applications/dashboard. 2. In the Managed Service for Apache Flink applications page, choose the Studio tab. Choose Create Studio notebook. Note To create a Studio notebook from the Amazon MSK or Kinesis Data Streams consoles, select your input Amazon MSK cluster or Kinesis data stream, then choose Process data in real time. 3. In the Create Studio notebook page, provide the following information: • Enter MyNotebook for Studio notebook Name. • Choose default for AWS Glue database. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 421 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Choose Create Studio notebook. 4. 5. In the MyNotebook page, choose the Configuration tab. In the Networking section, choose Edit. In the Edit networking for MyNotebook page, choose VPC configuration based on Amazon MSK cluster. Choose your Amazon MSK cluster for Amazon MSK Cluster. Choose Save changes. 6. In the MyNotebook page, choose Run. Wait for the Status to show Running. Create a Studio notebook using the AWS CLI To create your Studio notebook by using the AWS CLI, do the following: 1. Verify that you have the following information. You need these values to create your application. • Your account ID. • The subnet IDs and security group ID for the Amazon VPC that contains your Amazon MSK cluster. 2. Create a file called create.json with the following contents. Replace the placeholder values with your information. { "ApplicationName": "MyNotebook", "RuntimeEnvironment": "ZEPPELIN-FLINK-3_0", "ApplicationMode": "INTERACTIVE", "ServiceExecutionRole": "arn:aws:iam::AccountID:role/ZeppelinRole", "ApplicationConfiguration": { "ApplicationSnapshotConfiguration": { "SnapshotsEnabled": false }, "VpcConfigurations": [ { "SubnetIds": [ "SubnetID 1", "SubnetID 2", "SubnetID 3" ], "SecurityGroupIds": [ Tutorial: Create a Studio notebook in Managed Service for Apache Flink 422 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "VPC Security Group ID" ] } ], "ZeppelinApplicationConfiguration": { "CatalogConfiguration": { "GlueDataCatalogConfiguration": { "DatabaseARN": "arn:aws:glue:us-east-1:AccountID:database/ default" } } } } } 3. Run the following command to create your application: aws kinesisanalyticsv2 create-application --cli-input-json file://create.json 4. 
When the command completes, you should see output similar to the following, showing the details for your new Studio notebook: { "ApplicationDetail": { "ApplicationARN": "arn:aws:kinesisanalyticsus- east-1:012345678901:application/MyNotebook", "ApplicationName": "MyNotebook", "RuntimeEnvironment": "ZEPPELIN-FLINK-3_0", "ApplicationMode": "INTERACTIVE", "ServiceExecutionRole": "arn:aws:iam::012345678901:role/ZeppelinRole", ... 5. Run the following command to start your application. Replace the sample value with your account ID. aws kinesisanalyticsv2 start-application --application-arn arn:aws:kinesisanalyticsus-east-1:012345678901:application/MyNotebook\ Tutorial: Create a Studio notebook in Managed Service for Apache Flink 423 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Send data to your Amazon MSK cluster In this section, you run a Python script in your Amazon EC2 client to send data to your Amazon MSK data source. 1. Connect to your Amazon EC2 client. 2. Run the following commands to install Python version 3, Pip, and the Kafka for Python package, and confirm the actions: sudo yum install python37 curl -O https://bootstrap.pypa.io/get-pip.py python3 get-pip.py --user pip install kafka-python 3. Configure the AWS CLI on your client machine by entering the following command: aws configure Provide your account credentials, and us-east-1 for the region. 4. Create a file called stock.py with the following contents. Replace the sample value with your Amazon MSK cluster's Bootstrap Brokers string, and update the topic name if your topic is not AWSKafkaTutorialTopic: from kafka import KafkaProducer import json import random from datetime import datetime BROKERS = "<<Bootstrap Broker List>>" producer = KafkaProducer( bootstrap_servers=BROKERS, value_serializer=lambda v: json.dumps(v).encode('utf-8'), retry_backoff_ms=500, request_timeout_ms=20000, security_protocol='PLAINTEXT') def getStock(): data = {} now = datetime.now() str_now = now.strftime("%Y-%m-%d %H:%M:%S") Tutorial: Create a Studio notebook in Managed Service for Apache Flink 424 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide data['event_time'] = str_now data['ticker'] = random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']) price = random.random() * 100 data['price'] = round(price, 2) return data while True: data =getStock() # print(data) try: future = producer.send("AWSKafkaTutorialTopic", value=data) producer.flush() record_metadata = future.get(timeout=10) print("sent event to Kafka! topic {} partition {} offset {}".format(record_metadata.topic, record_metadata.partition, record_metadata.offset)) except Exception as e: print(e.with_traceback()) 5.
producer = KafkaProducer( bootstrap_servers=BROKERS, value_serializer=lambda v: json.dumps(v).encode('utf-8'), retry_backoff_ms=500, request_timeout_ms=20000, security_protocol='PLAINTEXT') def getStock(): data = {} now = datetime.now() str_now = now.strftime("%Y-%m-%d %H:%M:%S") Tutorial: Create a Studio notebook in Managed Service for Apache Flink 424 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide data['event_time'] = str_now data['ticker'] = random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']) price = random.random() * 100 data['price'] = round(price, 2) return data while True: data =getStock() # print(data) try: future = producer.send("AWSKafkaTutorialTopic", value=data) producer.flush() record_metadata = future.get(timeout=10) print("sent event to Kafka! topic {} partition {} offset {}".format(record_metadata.topic, record_metadata.partition, record_metadata.offset)) except Exception as e: print(e.with_traceback()) 5. Run the script with the following command: $ python3 stock.py 6. Leave the script running while you complete the following section. Test your Studio notebook In this section, you use your Studio notebook to query data from your Amazon MSK cluster. 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/ managed-flink/home?region=us-east-1#/applications/dashboard. 2. On the Managed Service for Apache Flink applications page, choose the Studio notebook tab. Choose MyNotebook. 3. In the MyNotebook page, choose Open in Apache Zeppelin. The Apache Zeppelin interface opens in a new tab. 4. 5. In the Welcome to Zeppelin! page, choose Zeppelin new note. In the Zeppelin Note page, enter the following query into a new note: %flink.ssql(type=update) Tutorial: Create a Studio notebook in Managed Service for Apache Flink 425 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide select * from stock Choose the run icon. The application displays data from the Amazon MSK cluster. To open the Apache Flink Dashboard for your application to view operational aspects, choose FLINK JOB. For more information about the Flink Dashboard, see Apache Flink Dashboard in the Managed Service for Apache Flink Developer Guide. For more examples of Flink Streaming SQL queries, see Queries in the Apache Flink documentation. Clean up your application and dependent resources Delete your Studio notebook 1. Open the Managed Service for Apache Flink console. 2. Choose MyNotebook. 3. Choose Actions, then Delete. Delete your AWS Glue database and connection 1. Open the AWS Glue console at https://console.aws.amazon.com/glue/. 2. Choose Databases from the left navigation bar. Check the checkbox next to Default to select it. Choose Action, Delete Database. Confirm your selection. 3. Choose Connections from the left navigation bar. Check the checkbox next to ZeppelinConnection to select it. Choose Action, Delete Connection. Confirm your selection. Delete your IAM role and policy 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Roles from the left navigation bar. 3. Use the search bar to search for the ZeppelinRole role. 4. Choose the ZeppelinRole role. Choose Delete Role. Confirm the deletion. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 426 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Delete your CloudWatch log group The console creates a CloudWatch Logs group and log stream for you when you create your application using the console. 
You do not have a log group and stream if you created your application using the AWS CLI. 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. Choose Log groups from the left navigation bar. 3. Choose the /AWS/KinesisAnalytics/MyNotebook log group. 4. Choose Actions, Delete log group(s). Confirm the deletion. Clean up Kinesis Data Streams resources To delete your Kinesis stream, open the Kinesis Data Streams console, select your Kinesis stream, and choose Actions, Delete. Clean up MSK resources Follow the steps in this section if you created an Amazon MSK cluster for this tutorial. This section has directions for cleaning up your Amazon EC2 client instance, Amazon VPC, and Amazon MSK cluster. Delete your Amazon MSK cluster Follow these steps if you created an Amazon MSK cluster for this tutorial. 1. Open the Amazon MSK console at https://console.aws.amazon.com/msk/home?region=us- east-1#/home/. 2. Choose AWSKafkaTutorialCluster. Choose Delete. Enter delete in the window that appears, and confirm your selection. Terminate your client instance Follow these steps if you created an Amazon EC2 client instance for this tutorial. 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. Choose Instances from the left navigation bar. 3. Choose the checkbox next to ZeppelinClient to select it. Tutorial: Create a Studio notebook in Managed Service for Apache Flink 427 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 4. Choose Instance State, Terminate Instance. Delete your Amazon VPC Follow these steps if you created an Amazon VPC for this tutorial. 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. Choose Network Interfaces from the left navigation bar. 3. 4. Enter your VPC ID in the search bar and press enter to search. Select the checkbox in the table header to select all the displayed network interfaces. 5. Choose Actions, Detach. In the window that appears, choose Enable under Force detachment. Choose Detach, and wait for
427 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 4. Choose Instance State, Terminate Instance. Delete your Amazon VPC Follow these steps if you created an Amazon VPC for this tutorial. 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. Choose Network Interfaces from the left navigation bar. 3. 4. Enter your VPC ID in the search bar and press enter to search. Select the checkbox in the table header to select all the displayed network interfaces. 5. Choose Actions, Detach. In the window that appears, choose Enable under Force detachment. Choose Detach, and wait for all of the network interfaces to reach the Available status. 6. Select the checkbox in the table header to select all the displayed network interfaces again. 7. Choose Actions, Delete. Confirm the action. 8. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 9. Select AWSKafkaTutorialVPC. Choose Actions, Delete VPC. Enter delete and confirm the deletion. Tutorial: Deploy a Studio notebook as a Managed Service for Apache Flink application with durable state The following tutorial demonstrates how to deploy a Studio notebook as a Managed Service for Apache Flink application with durable state. This tutorial contains the following sections: • Complete prerequisites • Deploy an application with durable state using the AWS Management Console • Deploy an application with durable state using the AWS CLI Complete prerequisites Create a new Studio notebook by following the Tutorial: Create a Studio notebook in Managed Service for Apache Flink, using either Kinesis Data Streams or Amazon MSK. Name the Studio notebook ExampleTestDeploy. Tutorial: Deploy a Studio notebook as a Managed Service for Apache Flink application with durable state 428 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Deploy an application with durable state using the AWS Management Console 1. Add an S3 bucket location where you want the packaged code to be stored under Application code location - optional in the console. This enables the steps to deploy and run your application directly from the notebook. 2. Add required permissions to the application role to enable the role you are using to read and write to an Amazon S3 bucket, and to launch a Managed Service for Apache Flink application: • AmazonS3FullAccess • Amazonmanaged-flinkFullAccess • Access to your sources, destinations, and VPCs as applicable. For more information, see Review IAM permissions for Studio notebooks. 3. Use the following sample code: %flink.ssql(type=update) CREATE TABLE exampleoutput ( 'ticket' VARCHAR, 'price' DOUBLE ) WITH ( 'connector' = 'kinesis', 'stream' = 'ExampleOutputStream', 'aws.region' = 'us-east-1', 'scan.stream.initpos' = 'LATEST', 'format' = 'json' ); INSERT INTO exampleoutput SELECT ticker, price FROM exampleinputstream 4. With this feature launch, you will see a new dropdown on the right top corner of each note in your notebook with the name of the notebook. You can do the following: • View the Studio notebook settings in the AWS Management Console. • Build your Zeppelin Note and export it to Amazon S3. At this point, provide a name for your application and choose Build and Export. You will get a notification when the export completes. • If you need to, you can view and run any additional tests on the executable in Amazon S3. • Once the build is complete, you will be able to deploy your code as a Kinesis streaming application with durable state and autoscaling. 
   • Use the dropdown and choose Deploy Zeppelin Note as Kinesis streaming application. Review the application name and choose Deploy via AWS Console.
   • This leads you to the AWS Management Console page for creating a Managed Service for Apache Flink application. Note that the application name, parallelism, code location, default AWS Glue database, VPC (if applicable), and IAM roles have been pre-populated. Validate that the IAM roles have the required permissions to your sources and destinations. Snapshots are enabled by default for durable application state management.
   • Choose Create application.
   • You can choose Configure to modify any settings, and then choose Run to start your streaming application.

Deploy an application with durable state using the AWS CLI

To deploy an application using the AWS CLI, you must update your AWS CLI so that it uses the latest service model. For information about how to use the updated service model, see Complete the prerequisites.

The following example code creates a new Studio notebook:

aws kinesisanalyticsv2 create-application \
    --application-name <app-name> \
    --runtime-environment ZEPPELIN-FLINK-3_0 \
    --application-mode INTERACTIVE \
    --service-execution-role <iam-role> \
    --application-configuration '{
      "ZeppelinApplicationConfiguration": {
        "CatalogConfiguration": {
          "GlueDataCatalogConfiguration": {
            "DatabaseARN": "arn:aws:glue:us-east-1:<account>:database/<glue-database-name>"
          }
        }
      },
      "FlinkApplicationConfiguration": {
        "ParallelismConfiguration": {
          "ConfigurationType": "CUSTOM",
          "Parallelism": 4,
          "ParallelismPerKPU": 4
        }
      },
      "DeployAsApplicationConfiguration": {
        "S3ContentLocation": {
          "BucketARN": "arn:aws:s3:::<s3bucket>",
          "BasePath": "/something/"
        }
      },
      "VpcConfigurations": [
        {
          "SecurityGroupIds": [ "<security-group>" ],
          "SubnetIds": [ "<subnet-1>", "<subnet-2>" ]
        }
      ]
    }' \
    --region us-east-1

The following code example starts a Studio notebook:

aws kinesisanalyticsv2 start-application \
    --application-name <app-name> \
    --region us-east-1 \
    --no-verify-ssl

The following code returns the URL for an application's Apache Zeppelin notebook page:

aws kinesisanalyticsv2 create-application-presigned-url \
    --application-name <app-name> \
    --url-type ZEPPELIN_UI_URL \
    --region us-east-1 \
    --no-verify-ssl

View example queries to analyze data in a Studio notebook

The following example queries demonstrate how to analyze data using window queries in a Studio notebook.

• Create tables with Amazon MSK/Apache Kafka
• Create tables with Kinesis
• Query a tumbling window
• Query a sliding window
• Use interactive SQL
• Use the BlackHole SQL connector
• Use Scala to generate sample data
• Use interactive Scala
• Use interactive Python
• Use a combination of interactive Python, SQL, and Scala
• Use a cross-account Kinesis data stream

For information about Apache Flink SQL query settings, see Flink on Zeppelin Notebooks for Interactive Data Analysis. To view your application in the Apache Flink dashboard, choose FLINK JOB in your application's Zeppelin Note page.

For more information about window queries, see Windows in the Apache Flink documentation. For more examples of Apache Flink Streaming SQL queries, see Queries in the Apache Flink documentation.

Create tables with Amazon MSK/Apache Kafka

You can use the Amazon MSK Flink connector with Managed Service for Apache Flink Studio to authenticate your connection with Plaintext, SSL, or IAM authentication. Create your tables using the specific properties per your requirements.
-- Plaintext connection CREATE TABLE your_table ( `column1` STRING, `column2` BIGINT ) WITH ( 'connector' = 'kafka', 'topic' = 'your_topic', 'properties.bootstrap.servers' = '<bootstrap servers>', View example queries to analyza data in a Studio notebook 432 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 'scan.startup.mode' = 'earliest-offset', 'format' = 'json' ); -- SSL connection CREATE TABLE your_table ( `column1` STRING, `column2` BIGINT ) WITH ( 'connector' = 'kafka', 'topic' = 'your_topic', 'properties.bootstrap.servers' = '<bootstrap servers>', 'properties.security.protocol' = 'SSL', 'properties.ssl.truststore.location' = '/usr/lib/jvm/java-11-amazon-corretto/lib/ security/cacerts', 'properties.ssl.truststore.password' = 'changeit', 'properties.group.id' = 'myGroup', 'scan.startup.mode' = 'earliest-offset', 'format' = 'json' ); -- IAM connection (or for MSK Serverless) CREATE TABLE your_table ( `column1` STRING, `column2` BIGINT ) WITH ( 'connector' = 'kafka', 'topic' = 'your_topic', 'properties.bootstrap.servers' = '<bootstrap servers>', 'properties.security.protocol' = 'SASL_SSL', 'properties.sasl.mechanism' = 'AWS_MSK_IAM', 'properties.sasl.jaas.config' = 'software.amazon.msk.auth.iam.IAMLoginModule required;', 'properties.sasl.client.callback.handler.class' = 'software.amazon.msk.auth.iam.IAMClientCallbackHandler', 'properties.group.id' = 'myGroup', 'scan.startup.mode' = 'earliest-offset', 'format' = 'json' ); You can combine these with other properties at Apache Kafka SQL Connector. View example queries to analyza data in a Studio notebook 433 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Create tables with Kinesis In the following example, you create a table using Kinesis: CREATE TABLE KinesisTable ( `column1` BIGINT, `column2` BIGINT, `column3` BIGINT, `column4` STRING, `ts` TIMESTAMP(3) ) PARTITIONED BY (column1, column2) WITH ( 'connector' = 'kinesis', 'stream' = 'test_stream', 'aws.region' = '<region>', 'scan.stream.initpos' = 'LATEST', 'format' = 'csv' ); For more information on other properties you can use, see Amazon Kinesis Data Streams SQL Connector. Query a tumbling window The following Flink Streaming SQL query selects the highest price in each five-second tumbling window from the ZeppelinTopic table: %flink.ssql(type=update) SELECT TUMBLE_END(event_time, INTERVAL '5' SECOND) as winend, MAX(price) as five_second_high, ticker FROM ZeppelinTopic GROUP BY ticker, TUMBLE(event_time, INTERVAL '5' SECOND) Query a sliding window The following Apache Flink Streaming SQL query selects the highest price in each five-second sliding window from the ZeppelinTopic table: %flink.ssql(type=update) View example queries to analyza data in a Studio notebook 434 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide SELECT HOP_END(event_time, INTERVAL '3' SECOND, INTERVAL '5' SECOND) AS winend, MAX(price) AS sliding_five_second_max FROM ZeppelinTopic//or your table name in AWS Glue GROUP BY HOP(event_time, INTERVAL '3' SECOND, INTERVAL '5' SECOND) Use interactive SQL This example prints the max of event time and processing time and
GROUP BY ticker, TUMBLE(event_time, INTERVAL '5' SECOND) Query a sliding window The following Apache Flink Streaming SQL query selects the highest price in each five-second sliding window from the ZeppelinTopic table: %flink.ssql(type=update) View example queries to analyza data in a Studio notebook 434 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide SELECT HOP_END(event_time, INTERVAL '3' SECOND, INTERVAL '5' SECOND) AS winend, MAX(price) AS sliding_five_second_max FROM ZeppelinTopic//or your table name in AWS Glue GROUP BY HOP(event_time, INTERVAL '3' SECOND, INTERVAL '5' SECOND) Use interactive SQL This example prints the max of event time and processing time and the sum of values from the key-values table. Ensure that you have the sample data generation script from the the section called “Use Scala to generate sample data” running. To try other SQL queries such as filtering and joins in your Studio notebook, see the Apache Flink documentation: Queries in the Apache Flink documentation. %flink.ssql(type=single, parallelism=4, refreshInterval=1000, template=<h1>{2}</h1> records seen until <h1>Processing Time: {1}</h1> and <h1>Event Time: {0}</h1>) -- An interactive query prints how many records from the `key-value-stream` we have seen so far, along with the current processing and event time. SELECT MAX(`et`) as `et`, MAX(`pt`) as `pt`, SUM(`value`) as `sum` FROM `key-values` %flink.ssql(type=update, parallelism=4, refreshInterval=1000) -- An interactive tumbling window query that displays the number of records observed per (event time) second. -- Browse through the chart views to see different visualizations of the streaming result. SELECT TUMBLE_START(`et`, INTERVAL '1' SECONDS) as `window`, `key`, SUM(`value`) as `sum` FROM `key-values` GROUP BY TUMBLE(`et`, INTERVAL '1' SECONDS), `key`; View example queries to analyza data in a Studio notebook 435 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use the BlackHole SQL connector The BlackHole SQL connector doesn't require that you create a Kinesis data stream or an Amazon MSK cluster to test your queries. For information about the BlackHole SQL connector, see BlackHole SQL Connector in the Apache Flink documentation. In this example, the default catalog is an in-memory catalog. %flink.ssql CREATE TABLE default_catalog.default_database.blackhole_table ( `key` BIGINT, `value` BIGINT, `et` TIMESTAMP(3) ) WITH ( 'connector' = 'blackhole' ) %flink.ssql(parallelism=1) INSERT INTO `test-target` SELECT `key`, `value`, `et` FROM `test-source` WHERE `key` > 3 %flink.ssql(parallelism=2) INSERT INTO `default_catalog`.`default_database`.`blackhole_table` SELECT `key`, `value`, `et` FROM `test-target` WHERE `key` > 7 View example queries to analyza data in a Studio notebook 436 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use Scala to generate sample data This example uses Scala to generate sample data. You can use this sample data to test various queries. Use the create table statement to create the key-values table. 
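The create table statement for the key-values table isn't shown in this section. The following paragraph is a minimal sketch of what it might look like, assuming a Kinesis-backed table with the key, value, and et columns that the queries in this topic reference; the pt processing-time column, the watermark, the stream name, and the Region are assumptions for illustration only. Adjust the connector properties to match your environment. The Scala data generator that follows writes into this table.

%flink.ssql

-- Hypothetical definition of the `key-values` table used by the examples in this topic.
-- The stream name and Region are placeholders; replace them with your own values.
CREATE TABLE `key-values` (
  `key` BIGINT,
  `value` BIGINT,
  `et` TIMESTAMP(3),
  `pt` AS PROCTIME(),                                  -- processing time, used by the interactive queries
  WATERMARK FOR `et` AS `et` - INTERVAL '5' SECOND      -- event-time watermark for the windowed queries
) WITH (
  'connector' = 'kinesis',
  'stream' = 'key-values-stream',
  'aws.region' = 'us-east-1',
  'scan.stream.initpos' = 'LATEST',
  'format' = 'json'
);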
import org.apache.flink.streaming.api.functions.source.datagen.DataGeneratorSource import org.apache.flink.streaming.api.functions.source.datagen.RandomGenerator import org.apache.flink.streaming.api.scala.DataStream import java.sql.Timestamp // ad-hoc convenience methods to be defined on Table implicit class TableOps[T](table: DataStream[T]) { def asView(name: String): DataStream[T] = { if (stenv.listTemporaryViews.contains(name)) { stenv.dropTemporaryView("`" + name + "`") } stenv.createTemporaryView("`" + name + "`", table) return table; } } %flink(parallelism=4) val stream = senv .addSource(new DataGeneratorSource(RandomGenerator.intGenerator(1, 10), 1000)) .map(key => (key, 1, new Timestamp(System.currentTimeMillis))) .asView("key-values-data-generator") %flink.ssql(parallelism=4) -- no need to define the paragraph type with explicit parallelism (such as "%flink.ssql(parallelism=2)") -- in this case the INSERT query will inherit the parallelism of the of the above paragraph INSERT INTO `key-values` SELECT `_1` as `key`, `_2` as `value`, `_3` as `et` FROM `key-values-data-generator` View example queries to analyza data in a Studio notebook 437 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Use interactive Scala This is the Scala translation of the the section called “Use interactive SQL”. For more Scala examples, see Table API in the Apache Flink documentation. %flink import org.apache.flink.api.scala._ import org.apache.flink.table.api._ import org.apache.flink.table.api.bridge.scala._ // ad-hoc convenience methods to be defined on Table implicit class TableOps(table: Table) { def asView(name: String): Table = { if (stenv.listTemporaryViews.contains(name)) { stenv.dropTemporaryView(name) } stenv.createTemporaryView(name, table) return table; } } %flink(parallelism=4) // A view that computes many records from the `key-values` we have seen so far, along with the current processing and event time. val query01 = stenv .from("`key-values`") .select( $"et".max().as("et"), $"pt".max().as("pt"), $"value".sum().as("sum") ).asView("query01") %flink.ssql(type=single, parallelism=16, refreshInterval=1000, template=<h1>{2}</h1> records seen until <h1>Processing Time: {1}</h1> and <h1>Event Time: {0}</h1>) -- An interactive query prints the query01 output. SELECT * FROM query01 %flink(parallelism=4) View example queries to analyza data in a Studio notebook 438 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide // An tumbling window view that displays the number of records observed per (event time) second. val query02 = stenv .from("`key-values`") .window(Tumble over 1.seconds on $"et" as $"w") .groupBy($"w", $"key") .select( $"w".start.as("window"), $"key", $"value".sum().as("sum") ).asView("query02") %flink.ssql(type=update, parallelism=4, refreshInterval=1000) -- An interactive query prints the query02 output. -- Browse through the chart views to see different visualizations of the streaming result. SELECT * FROM `query02` Use interactive Python This is the Python translation of the the section called “Use interactive SQL”.
to analyza data in a Studio notebook 438 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide // An tumbling window view that displays the number of records observed per (event time) second. val query02 = stenv .from("`key-values`") .window(Tumble over 1.seconds on $"et" as $"w") .groupBy($"w", $"key") .select( $"w".start.as("window"), $"key", $"value".sum().as("sum") ).asView("query02") %flink.ssql(type=update, parallelism=4, refreshInterval=1000) -- An interactive query prints the query02 output. -- Browse through the chart views to see different visualizations of the streaming result. SELECT * FROM `query02` Use interactive Python This is the Python translation of the the section called “Use interactive SQL”. For more Python examples, see Table API in the Apache Flink documentation. %flink.pyflink from pyflink.table.table import Table def as_view(table, name): if (name in st_env.list_temporary_views()): st_env.drop_temporary_view(name) st_env.create_temporary_view(name, table) return table Table.as_view = as_view %flink.pyflink(parallelism=16) # A view that computes many records from the `key-values` we have seen so far, along with the current processing and event time st_env \ .from_path("`keyvalues`") \ View example queries to analyza data in a Studio notebook 439 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide .select(", ".join([ "max(et) as et", "max(pt) as pt", "sum(value) as sum" ])) \ .as_view("query01") %flink.ssql(type=single, parallelism=16, refreshInterval=1000, template=<h1>{2}</h1> records seen until <h1>Processing Time: {1}</h1> and <h1>Event Time: {0}</h1>) -- An interactive query prints the query01 output. SELECT * FROM query01 %flink.pyflink(parallelism=16) # A view that computes many records from the `key-values` we have seen so far, along with the current processing and event time st_env \ .from_path("`key-values`") \ .window(Tumble.over("1.seconds").on("et").alias("w")) \ .group_by("w, key") \ .select(", ".join([ "w.start as window", "key", "sum(value) as sum" ])) \ .as_view("query02") %flink.ssql(type=update, parallelism=16, refreshInterval=1000) -- An interactive query prints the query02 output. -- Browse through the chart views to see different visualizations of the streaming result. SELECT * FROM `query02` Use a combination of interactive Python, SQL, and Scala You can use any combination of SQL, Python, and Scala in your notebook for interactive analysis. In a Studio notebook that you plan to deploy as an application with durable state, you can use a combination of SQL and Scala. This example shows you the sections that are ignored and those that get deployed in the application with durable state. 
View example queries to analyza data in a Studio notebook 440 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide %flink.ssql CREATE TABLE `default_catalog`.`default_database`.`my-test-source` ( `key` BIGINT NOT NULL, `value` BIGINT NOT NULL, `et` TIMESTAMP(3) NOT NULL, `pt` AS PROCTIME(), WATERMARK FOR `et` AS `et` - INTERVAL '5' SECOND ) WITH ( 'connector' = 'kinesis', 'stream' = 'kda-notebook-example-test-source-stream', 'aws.region' = 'eu-west-1', 'scan.stream.initpos' = 'LATEST', 'format' = 'json', 'json.timestamp-format.standard' = 'ISO-8601' ) %flink.ssql CREATE TABLE `default_catalog`.`default_database`.`my-test-target` ( `key` BIGINT NOT NULL, `value` BIGINT NOT NULL, `et` TIMESTAMP(3) NOT NULL, `pt` AS PROCTIME(), WATERMARK FOR `et` AS `et` - INTERVAL '5' SECOND ) WITH ( 'connector' = 'kinesis', 'stream' = 'kda-notebook-example-test-target-stream', 'aws.region' = 'eu-west-1', 'scan.stream.initpos' = 'LATEST', 'format' = 'json', 'json.timestamp-format.standard' = 'ISO-8601' ) %flink() // ad-hoc convenience methods to be defined on Table implicit class TableOps(table: Table) { def asView(name: String): Table = { if (stenv.listTemporaryViews.contains(name)) { stenv.dropTemporaryView(name) } View example queries to analyza data in a Studio notebook 441 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide stenv.createTemporaryView(name, table) return table; } } %flink(parallelism=1) val table = stenv .from("`default_catalog`.`default_database`.`my-test-source`") .select($"key", $"value", $"et") .filter($"key" > 10) .asView("query01") %flink.ssql(parallelism=1) -- forward data INSERT INTO `default_catalog`.`default_database`.`my-test-target` SELECT * FROM `query01` %flink.ssql(type=update, parallelism=1, refreshInterval=1000) -- forward data to local stream (ignored when deployed as application) SELECT * FROM `query01` %flink // tell me the meaning of life (ignored when deployed as application!) print("42!") Use a cross-account Kinesis data stream To use a Kinesis data stream that's in an account other than the account that has your Studio notebook, create a service execution role in the account where your Studio notebook is running and a role trust policy in the account that has the data stream. Use aws.credentials.provider, aws.credentials.role.arn, and aws.credentials.role.sessionName in the Kinesis connector in your create table DDL statement to create a table against the data stream. Use the following service execution role for the Studio notebook account. { View example queries to analyza data in a Studio notebook 442 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Sid": "AllowNotebookToAssumeRole", "Effect": "Allow", "Action": "sts:AssumeRole" "Resource": "*" } Use the AmazonKinesisFullAccess policy and the following role trust policy for the data stream account. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<accountID>:root" }, "Action": "sts:AssumeRole", "Condition": {} } ] } Use the following paragraph for the create table statement. 
%flink.ssql CREATE TABLE test1 ( name VARCHAR, age BIGINT ) WITH ( 'connector' = 'kinesis', 'stream' = 'stream-assume-role-test', 'aws.region' = 'us-east-1', 'aws.credentials.provider' = 'ASSUME_ROLE', 'aws.credentials.role.arn' = 'arn:aws:iam::<accountID>:role/stream-assume-role-test- role', 'aws.credentials.role.sessionName' = 'stream-assume-role-test-session', 'scan.stream.initpos' = 'TRIM_HORIZON', 'format'
Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Sid": "AllowNotebookToAssumeRole", "Effect": "Allow", "Action": "sts:AssumeRole" "Resource": "*" } Use the AmazonKinesisFullAccess policy and the following role trust policy for the data stream account. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<accountID>:root" }, "Action": "sts:AssumeRole", "Condition": {} } ] } Use the following paragraph for the create table statement. %flink.ssql CREATE TABLE test1 ( name VARCHAR, age BIGINT ) WITH ( 'connector' = 'kinesis', 'stream' = 'stream-assume-role-test', 'aws.region' = 'us-east-1', 'aws.credentials.provider' = 'ASSUME_ROLE', 'aws.credentials.role.arn' = 'arn:aws:iam::<accountID>:role/stream-assume-role-test- role', 'aws.credentials.role.sessionName' = 'stream-assume-role-test-session', 'scan.stream.initpos' = 'TRIM_HORIZON', 'format' = 'json' ) View example queries to analyza data in a Studio notebook 443 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Troubleshoot Studio notebooks for Managed Service for Apache Flink This section contains troubleshooting information for Studio notebooks. Stop a stuck application To stop an application that is stuck in a transient state, call the StopApplication action with the Force parameter set to true. For more information, see Running Applications in the Managed Service for Apache Flink Developer Guide. Deploy as an application with durable state in a VPC with no internet access The Managed Service for Apache Flink Studio deploy-as-application function does not support VPC applications without internet access. We recommend that you build your application in Studio, and then use Managed Service for Apache Flink to manually create a Flink application and select the zip file you built in your Notebook. The following steps outline this approach: 1. Build and export your Studio application to Amazon S3. This should be a zip file. 2. Create a Managed Service for Apache Flink application manually with code path referencing the zip file location in Amazon S3. In addition, you will need to configure the application with the following env variables (2 groupID, 3 var in total): 3. kinesis.analytics.flink.run.options a. b. python: source/note.py jarfile: lib/PythonApplicationDependencies.jar 4. managed.deploy_as_app.options • DatabaseARN: <glue database ARN (Amazon Resource Name)> 5. You may need to give permissions to the Managed Service for Apache Flink Studio and Managed Service for Apache Flink IAM roles for the services your application uses. You can use the same IAM role for both apps. Troubleshoot Studio notebooks for Managed Service for Apache Flink 444 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Deploy-as-app size and build time reduction Studio deploy-as-app for Python applications packages everything available in the Python environment because we cannot determine which libraries you need. This may result in a larger- than necessary deploy-as-app size. The following procedure demonstrates how to reduce the size of the deploy-as-app Python application size by uninstalling dependencies. If you’re building a Python application with deploy-as-app feature from Studio, you might consider removing pre-installed Python packages from the system if your applications are not depending on. 
This will not only help to reduce the final artifact size to avoid breaching the service limit for application size, but also improve the build time of applications with the deploy-as-app feature.

You can execute the following command to list all installed Python packages with their respective installed sizes and selectively remove packages of significant size.

%flink.pyflink
!pip list --format freeze | awk -F = {'print $1'} | xargs pip show | grep -E 'Location:|Name:' | cut -d ' ' -f 2 | paste -d ' ' - - | awk '{gsub("-","_",$1); print $2 "/" tolower($1)}' | xargs du -sh 2> /dev/null | sort -hr

Note
apache-beam is required by Flink Python to operate. You should never remove this package or its dependencies.

The following is the list of pre-installed Python packages in Studio V2 that you can consider for removal:

scipy
statsmodels
plotnine
seaborn
llvmlite
bokeh
pandas
matplotlib
botocore
boto3
numba

To remove a Python package from the Zeppelin notebook:

1. Check if your application depends on the package, or any of its consuming packages, before removing it. You can identify dependents of a package using pipdeptree.
2. Execute the following command to remove a package:

%flink.pyflink
!pip uninstall -y <package-to-remove>

3. If you need to restore a package that you removed by mistake, execute the following command:

%flink.pyflink
!pip install <package-to-install>

Example: Remove the scipy package before deploying your Python application with the deploy-as-app feature.

1. Use pipdeptree to discover all scipy consumers and verify whether you can safely remove scipy.

   • Install the tool through the notebook:

%flink.pyflink
!pip install pipdeptree

   • Get the reversed dependency tree of scipy by running:

%flink.pyflink
!pipdeptree -r -p scipy

   You should see output similar to the following (condensed for brevity):

...
------------------------------------------------------------------------
scipy==1.8.0
### plotnine==0.5.1 [requires: scipy>=1.0.0]
### seaborn==0.9.0 [requires: scipy>=0.14.0]
### statsmodels==0.12.2 [requires: scipy>=1.1]
    ### plotnine==0.5.1 [requires: statsmodels>=0.8.0]

2. Carefully inspect the usage of seaborn, statsmodels, and plotnine in your applications. If your applications do not depend on any of scipy, seaborn, statsmodels, or plotnine, you can remove all of these packages, or only the ones that your applications don't need.

3. Remove the package by running:

!pip uninstall -y scipy plotnine seaborn statsmodels

Cancel jobs

This section shows you how to cancel Apache Flink jobs that you can't get to from Apache Zeppelin. If you want to cancel such a job, go to the Apache Flink dashboard, copy the job ID, and then use it in one of the following examples.

To cancel a single job:

%flink.pyflink
import requests

requests.patch("https://zeppelin-flink:8082/jobs/[job_id]", verify=False)

To cancel all running jobs:

%flink.pyflink
import requests

r = requests.get("https://zeppelin-flink:8082/jobs", verify=False)
jobs = r.json()['jobs']
for job in jobs:
    if (job["status"] == "RUNNING"):
        print(requests.patch("https://zeppelin-flink:8082/jobs/{}".format(job["id"]), verify=False))

To cancel all jobs:

%flink.pyflink
import requests

r = requests.get("https://zeppelin-flink:8082/jobs", verify=False)
jobs = r.json()['jobs']
for job in jobs:
    requests.patch("https://zeppelin-flink:8082/jobs/{}".format(job["id"]), verify=False)

Restart the Apache Flink interpreter

To restart the Apache Flink interpreter within your Studio notebook

1. Choose Configuration near the top right corner of the screen.
2. Choose Interpreter.
3. Choose restart and then OK.

Create custom IAM policies for Managed Service for Apache Flink Studio notebooks

You normally use managed IAM policies to allow your application to access dependent resources. If you need finer control over your application's permissions, you can use a custom IAM policy. This section contains examples of custom IAM policies.

Note
In the following policy examples, replace the placeholder text with your application's values.

This topic contains the following sections:
• AWS Glue
• CloudWatch Logs
• Kinesis streams
• Amazon MSK clusters

AWS Glue

The following example policy grants permissions to access an AWS Glue database.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "GlueTable", "Effect": "Allow", "Action": [ "glue:GetConnection", "glue:GetTable", "glue:GetTables", "glue:GetDatabase", "glue:CreateTable", "glue:UpdateTable" ], "Resource": [ "arn:aws:glue:<region>:<accountId>:connection/*", "arn:aws:glue:<region>:<accountId>:table/<database-name>/*", "arn:aws:glue:<region>:<accountId>:database/<database-name>", "arn:aws:glue:<region>:<accountId>:database/hive", "arn:aws:glue:<region>:<accountId>:catalog" ] }, { "Sid": "GlueDatabase", "Effect": "Allow", "Action": "glue:GetDatabases", "Resource": "*" } ] } CloudWatch Logs The following policy grants permissions to access CloudWatch Logs: { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", AWS Glue 449 Managed Service for Apache Flink "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:<region>:<accountId>:log-group:*" Managed Service for Apache Flink Developer Guide ] }, { "Sid": "ListCloudwatchLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "<logGroupArn>:log-stream:*" ] }, { "Sid": "PutCloudwatchLogs", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "<logStreamArn>" ] } Note If you create your application using the console, the console adds the necessary policies to access CloudWatch Logs to your application role. Kinesis streams Your application can use a Kinesis Stream for a source or a destination. Your application needs read permissions to read from a source stream, and write permissions to write to a destination stream. The following policy grants permissions to read from a Kinesis Stream used as a source: { Kinesis streams 450 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Version": "2012-10-17", "Statement": [ { "Sid": "KinesisShardDiscovery", "Effect": "Allow", "Action": "kinesis:ListShards", "Resource": "*" }, { "Sid": "KinesisShardConsumption", "Effect": "Allow", "Action": [ "kinesis:GetShardIterator", "kinesis:GetRecords", "kinesis:DescribeStream", "kinesis:DescribeStreamSummary", "kinesis:RegisterStreamConsumer", "kinesis:DeregisterStreamConsumer" ], "Resource": "arn:aws:kinesis:<region>:<accountId>:stream/<stream-name>" }, { "Sid": "KinesisEfoConsumer", "Effect": "Allow", "Action": [ "kinesis:DescribeStreamConsumer", "kinesis:SubscribeToShard" ], "Resource": "arn:aws:kinesis:<region>:<account>:stream/<stream-name>/consumer/*" } ] } The following policy grants permissions to write to a Kinesis Stream used as a destination: { "Version": "2012-10-17", "Statement": [ { "Sid": "KinesisStreamSink", "Effect": "Allow", "Action": [ "kinesis:PutRecord", Kinesis streams 451 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "kinesis:PutRecords", "kinesis:DescribeStreamSummary", "kinesis:DescribeStream" ], "Resource": "arn:aws:kinesis:<region>:<accountId>:stream/<stream-name>" } ] } If your application accesses an encypted Kinesis stream, you must grant additional permissions to access the stream and the stream's encryption key. The following policy grants permissions to access an encrypted source stream and the stream's encryption key: { "Sid": "ReadEncryptedKinesisStreamSource", "Effect": "Allow", "Action": [ "kms:Decrypt" ], "Resource": [ "<inputStreamKeyArn>" ] } , The following policy grants permissions to access an encrypted destination stream and the stream's encryption key: {
"KinesisStreamSink", "Effect": "Allow", "Action": [ "kinesis:PutRecord", Kinesis streams 451 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "kinesis:PutRecords", "kinesis:DescribeStreamSummary", "kinesis:DescribeStream" ], "Resource": "arn:aws:kinesis:<region>:<accountId>:stream/<stream-name>" } ] } If your application accesses an encypted Kinesis stream, you must grant additional permissions to access the stream and the stream's encryption key. The following policy grants permissions to access an encrypted source stream and the stream's encryption key: { "Sid": "ReadEncryptedKinesisStreamSource", "Effect": "Allow", "Action": [ "kms:Decrypt" ], "Resource": [ "<inputStreamKeyArn>" ] } , The following policy grants permissions to access an encrypted destination stream and the stream's encryption key: { "Sid": "WriteEncryptedKinesisStreamSink", "Effect": "Allow", "Action": [ "kms:GenerateDataKey" ], "Resource": [ "<outputStreamKeyArn>" ] } Kinesis streams 452 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Amazon MSK clusters To grant access to an Amazon MSK cluster, you grant access to the cluster's VPC. For policy examples for accessing an Amazon VPC, see VPC Application Permissions. Amazon MSK clusters 453 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Get started with Amazon Managed Service for Apache Flink (DataStream API) This section introduces you to the fundamental concepts of Managed Service for Apache Flink and implementing an application in Java using the DataStream API. It describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application. Topics • Review the components of the Managed Service for Apache Flink application • Fulfill the prerequisites for completing the exercises • Set up an AWS account and create an administrator user • Set up the AWS Command Line Interface (AWS CLI) • Create and run a Managed Service for Apache Flink application • Clean up AWS resources • Explore additional resources Review the components of the Managed Service for Apache Flink application Note Amazon Managed Service for Apache Flink supports all Apache Flink APIs and potentially all JVM languages. For more information, see Flink's APIs. Depending on the API you choose, the structure of the application and the implementation is slightly different. This Getting Started tutorial covers the implementation of the applications using the DataStream API in Java. To process data, your Managed Service for Apache Flink application uses a Java application that processes input and produces output using the Apache Flink runtime. A typical Managed Service for Apache Flink application has the following components: Review application components 454 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Runtime properties: You can use runtime properties to pass configuration parameters to your application to change them without modifying and republishing the code. • Sources: The application consumes data from one or more sources. A source uses a connector to read data from an external system, such as a Kinesis data stream, or a Kafka bucket. For more information, see Add streaming data sources. • Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. 
For more information, see Operators. • Sinks: The application sends data to external sources through sinks. A sink uses a connectorv to send data to a Kinesis data stream, a Kafka topic, Amazon S3, or a relational database. You can also use a special connector to print the output for development purposes only. For more information, see Write data using sinks. Your application requires some external dependencies, such as the Flink connectors that your application uses, or potentially a Java library. To run in Amazon Managed Service for Apache Flink, the application must be packaged along with dependencies in a fat-jar and uploaded to an Amazon S3 bucket. You then create a Managed Service for Apache Flink application. You pass the location of the code package, along with any other runtime configuration parameter. This tutorial demonstrates how to use Apache Maven to package the application, and how to run the application locally in the IDE of your choice. Fulfill the prerequisites for completing the exercises To complete the steps in this guide, you must have the following: • Git client. Install the Git client, if you haven't already. • Java Development Kit (JDK) version 11 . Install a Java JDK 11 and set the JAVA_HOME environment variable to point to your JDK install location. If you don't have a JDK 11, you can use Amazon Coretto 11 or any other standard JDK of your choice. • To verify that you have the JDK installed correctly, run the following command. The output will be different if you are using a JDK other than Amazon Corretto. Make sure that the version is 11.x. $ java --version openjdk 11.0.23 2024-04-16 LTS Complete the required prerequisites 455 Managed Service for Apache Flink Managed Service for
Kit (JDK) version 11 . Install a Java JDK 11 and set the JAVA_HOME environment variable to point to your JDK install location. If you don't have a JDK 11, you can use Amazon Coretto 11 or any other standard JDK of your choice. • To verify that you have the JDK installed correctly, run the following command. The output will be different if you are using a JDK other than Amazon Corretto. Make sure that the version is 11.x. $ java --version openjdk 11.0.23 2024-04-16 LTS Complete the required prerequisites 455 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide OpenJDK Runtime Environment Corretto-11.0.23.9.1 (build 11.0.23+9-LTS) OpenJDK 64-Bit Server VM Corretto-11.0.23.9.1 (build 11.0.23+9-LTS, mixed mode) • Apache Maven. Install Apache Maven if you haven't already. To learn how to install it, see Installing Apache Maven. • To test your Apache Maven installation, enter the following: $ mvn -version • IDE for local development. We recommend that you use a development environment such as Eclipse Java Neon or IntelliJ IDEA to develop and compile your application. • To test your Apache Maven installation, enter the following: $ mvn -version To get started, go to Set up an AWS account and create an administrator user. Set up an AWS account and create an administrator user Before you use Managed Service for Apache Flink for the first time, complete the following tasks: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. Set up an account 456 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. 
Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. Create a user with administrative access 457 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least- privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. Which user needs programmatic access? To By Workforce identity (Users managed in IAM Identity Center)
a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Grant programmatic access Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS. To grant users programmatic access, choose one of the following options. Which user needs programmatic access? To By Workforce identity (Users managed in IAM Identity Center) Use temporary credentials to sign programmatic requests Following the instructions for the interface that you want to to the AWS CLI, AWS SDKs, or use. AWS APIs. • For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide. • For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in Grant programmatic access 458 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Which user needs programmatic access? To By IAM IAM the AWS SDKs and Tools Reference Guide. Use temporary credentials to sign programmatic requests Following the instructions in Using temporary credentia to the AWS CLI, AWS SDKs, or ls with AWS resources in the AWS APIs. IAM User Guide. (Not recommended) Use long-term credentials to Following the instructions for the interface that you want to sign programmatic requests to the AWS CLI, AWS SDKs, or use. AWS APIs. • For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide. • For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide. • For AWS APIs, see Managing access keys for IAM users in the IAM User Guide. Next Step Set up the AWS Command Line Interface (AWS CLI) Next Step 459 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Set up the AWS Command Line Interface (AWS CLI) In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink. Note The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations. Note If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command: aws --version The exercises in this tutorial require the following AWS CLI version or later: aws-cli/1.16.63 To set up the AWS CLI 1. Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide: • Installing the AWS Command Line Interface • Configuring the AWS CLI 2. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide. [profile adminuser] aws_access_key_id = adminuser access key ID Set up the AWS CLI 460 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide aws_secret_access_key = adminuser secret access key region = aws-region For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference. 
Note The example code and commands in this tutorial use the us-east-1 US East (N. Virginia) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use. 3. Verify the setup by entering the following help command at the command prompt: aws help After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup. Next step Create and run a Managed Service for Apache Flink application Create and run a Managed Service for Apache Flink application In this step, you create a Managed Service for Apache Flink application with Kinesis data streams as a source and a sink. This section contains the following steps: • Create dependent resources • Set up your local development environment • Download and examine the Apache Flink streaming Java code • Write sample records to the input stream • Run your application locally • Observe input and output data in Kinesis streams • Stop your application running locally Next step 461 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Compile and package your application code • Upload the application code JAR file
Managed Service for Apache Flink application with Kinesis data streams as a source and a sink. This section contains the following steps: • Create dependent resources • Set up your local development environment • Download and examine the Apache Flink streaming Java code • Write sample records to the input stream • Run your application locally • Observe input and output data in Kinesis streams • Stop your application running locally Next step 461 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Compile and package your application code • Upload the application code JAR file • Create and configure the Managed Service for Apache Flink application • Next step Create dependent resources Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: • Two Kinesis data streams for input and output • An Amazon S3 bucket to store the application's code Note This tutorial assumes that you are deploying your application in the us-east-1 US East (N. Virginia) Region. If you use another Region, adapt all steps accordingly. Create two Amazon Kinesis data streams Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams. You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. To create the streams using the AWS CLI, use the following commands, adjusting to the Region you use for your application. To create the data streams (AWS CLI) 1. To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command: $ aws kinesis create-stream \ --stream-name ExampleInputStream \ --shard-count 1 \ Create dependent resources 462 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide --region us-east-1 \ 2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream: $ aws kinesis create-stream \ --stream-name ExampleOutputStream \ --shard-count 1 \ --region us-east-1 \ Create an Amazon S3 bucket for the application code You can create the Amazon S3 bucket using the console. To learn how to create an Amazon S3 bucket using the console, see Creating a bucket in the Amazon S3 User Guide. Name the Amazon S3 bucket using a globally unique name, for example by appending your login name. Note Make sure that you create the bucket in the Region you use for this tutorial (us-east-1). Other resources When you create your application, Managed Service for Apache Flink automatically creates the following Amazon CloudWatch resources if they don't already exist: • A log group called /AWS/KinesisAnalytics-java/<my-application> • A log stream called kinesis-analytics-log-stream Set up your local development environment For development and debugging, you can run the Apache Flink application on your machine directly from your IDE of choice. Any Apache Flink dependencies are handled like regular Java dependencies using Apache Maven. 
Set up your local development environment 463 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note On your development machine, you must have Java JDK 11, Maven, and Git installed. We recommend that you use a development environment such as Eclipse Java Neon or IntelliJ IDEA. To verify that you meet all prerequisites, see Fulfill the prerequisites for completing the exercises. You do not need to install an Apache Flink cluster on your machine. Authenticate your AWS session The application uses Kinesis data streams to publish data. When running locally, you must have a valid AWS authenticated session with permissions to write to the Kinesis data stream. Use the following steps to authenticate your session: 1. If you don't have the AWS CLI and a named profile with valid credential configured, see Set up the AWS Command Line Interface (AWS CLI). 2. Verify that your AWS CLI is correctly configured and your users have permissions to write to the Kinesis data stream by publishing the following test record: $ aws kinesis put-record --stream-name ExampleOutputStream --data TEST --partition- key TEST 3. If your IDE has a plugin to integrate with AWS, you can use it to pass the credentials to the application running in the IDE. For more information, see AWS Toolkit for IntelliJ IDEA and AWS Toolkit for Eclipse. Download and examine the Apache Flink streaming Java code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Clone the remote repository using the following command: git clone https://github.com/aws-samples/amazon-managed-service-for-apache-flink- examples.git 2. Navigate to the amazon-managed-service-for-apache-flink-examples/tree/main/ java/GettingStarted directory. Download and examine the Apache Flink streaming Java code 464 Managed Service for
If your IDE has a plugin to integrate with AWS, you can use it to pass the credentials to the application running in the IDE. For more information, see AWS Toolkit for IntelliJ IDEA and AWS Toolkit for Eclipse. Download and examine the Apache Flink streaming Java code The Java application code for this example is available from GitHub. To download the application code, do the following: 1. Clone the remote repository using the following command: git clone https://github.com/aws-samples/amazon-managed-service-for-apache-flink- examples.git 2. Navigate to the amazon-managed-service-for-apache-flink-examples/tree/main/ java/GettingStarted directory. Download and examine the Apache Flink streaming Java code 464 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Review application components The application is entirely implemented in the com.amazonaws.services.msf.BasicStreamingJob class. The main() method defines the data flow to process the streaming data and to run it. Note For an optimized developer experience, the application is designed to run without any code changes both on Amazon Managed Service for Apache Flink and locally, for development in your IDE. • To read the runtime configuration so it will work when running in Amazon Managed Service for Apache Flink and in your IDE, the application automatically detects if it's running standalone locally in the IDE. In that case, the application loads the runtime configuration differently: 1. When the application detects that it's running in standalone mode in your IDE, form the application_properties.json file included in the resources folder of the project. The content of the file follows. 2. When the application runs in Amazon Managed Service for Apache Flink, the default behavior loads the application configuration from the runtime properties you will define in the Amazon Managed Service for Apache Flink application. See Create and configure the Managed Service for Apache Flink application. private static Map<String, Properties> loadApplicationProperties(StreamExecutionEnvironment env) throws IOException { if (env instanceof LocalStreamEnvironment) { LOGGER.info("Loading application properties from '{}'", LOCAL_APPLICATION_PROPERTIES_RESOURCE); return KinesisAnalyticsRuntime.getApplicationProperties( BasicStreamingJob.class.getClassLoader() .getResource(LOCAL_APPLICATION_PROPERTIES_RESOURCE).getPath()); } else { LOGGER.info("Loading application properties from Amazon Managed Service for Apache Flink"); return KinesisAnalyticsRuntime.getApplicationProperties(); } } Download and examine the Apache Flink streaming Java code 465 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • The main() method defines the application data flow and runs it. • Initializes the default streaming environments. In this example, we show how to create both the StreamExecutionEnvironment to be used with the DataSteam API and the StreamTableEnvironment to be used with SQL and the Table API. The two environment objects are two separate references to the same runtime environment, to use different APIs. StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); • Load the application configuration parameters. 
This will automatically load them from the correct place, depending on where the application is running: Map<String, Properties> applicationParameters = loadApplicationProperties(env); • The application defines a source using the Kinesis Consumer connector to read data from the input stream. The configuration of the input stream is defined in the PropertyGroupId=InputStream0. The name and Region of the stream are in the properties named stream.name and aws.region respectively. For simplicity, this source reads the records as a string. private static FlinkKinesisConsumer<String> createSource(Properties inputProperties) { String inputStreamName = inputProperties.getProperty("stream.name"); return new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties); } ... public static void main(String[] args) throws Exception { ... SourceFunction<String> source = createSource(applicationParameters.get("InputStream0")); DataStream<String> input = env.addSource(source, "Kinesis Source"); ... } • The application then defines a sink using the Kinesis Streams Sink connector to send data to the output stream. Output stream name and Region are defined in the PropertyGroupId=OutputStream0, similar to the input stream. The sink is connected Download and examine the Apache Flink streaming Java code 466 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide directly to the internal DataStream that is getting data from the source. In a real application, you have some transformation between source and sink. private static KinesisStreamsSink<String> createSink(Properties outputProperties) { String outputStreamName = outputProperties.getProperty("stream.name"); return KinesisStreamsSink.<String>builder() .setKinesisClientProperties(outputProperties) .setSerializationSchema(new SimpleStringSchema()) .setStreamName(outputStreamName) .setPartitionKeyGenerator(element -> String.valueOf(element.hashCode())) .build(); } ... public static void main(String[] args) throws Exception { ... Sink<String> sink = createSink(applicationParameters.get("OutputStream0")); input.sinkTo(sink); ... } • Finally, you run the data flow that you just defined. This must be the last instruction of the main() method, after you defined all the operators the data flow requires: env.execute("Flink streaming Java API skeleton"); Use the pom.xml file The pom.xml file defines all dependencies required by the application and sets up the Maven Shade plugin to build the fat-jar that contains all dependencies required by Flink. • Some dependencies have provided scope. These dependencies are automatically available when the application runs in Amazon Managed Service for Apache Flink. They are required to compile the application, or to run the application locally in your IDE. For more information, see Run your application locally. Make sure that you are using the same Flink version as the runtime you will use in Amazon Managed
Java API skeleton"); Use the pom.xml file The pom.xml file defines all dependencies required by the application and sets up the Maven Shade plugin to build the fat-jar that contains all dependencies required by Flink. • Some dependencies have provided scope. These dependencies are automatically available when the application runs in Amazon Managed Service for Apache Flink. They are required to compile the application, or to run the application locally in your IDE. For more information, see Run your application locally. Make sure that you are using the same Flink version as the runtime you will use in Amazon Managed Service for Apache Flink. <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-clients</artifactId> Download and examine the Apache Flink streaming Java code 467 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide <version>${flink.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-streaming-java</artifactId> <version>${flink.version}</version> <scope>provided</scope> </dependency> • You must add additional Apache Flink dependencies to the pom with the default scope, such as the Kinesis connector used by this application. For more information, see Use Apache Flink connectors. You can also add any additional Java dependencies required by your application. <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> <version>${aws.connector.version}</version> </dependency> • The Maven Java Compiler plugin makes sure that the code is compiled against Java 11, the JDK version currently supported by Apache Flink. • The Maven Shade plugin packages the fat-jar, excluding some libraries that are provided by the runtime. It also specifies two transformers: ServicesResourceTransformer and ManifestResourceTransformer. The latter configures the class containing the main method to start the application. If you rename the main class, don't forget to update this transformer. • <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> ... <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>com.amazonaws.services.msf.BasicStreamingJob</mainClass> </transformer> ... </plugin> Download and examine the Apache Flink streaming Java code 468 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Write sample records to the input stream In this section, you will send sample records to the stream for the application to process. You have two options for generating sample data, either using a Python script or the Kinesis Data Generator. Generate sample data using a Python script You can use a Python script to send sample records to the stream. Note To run this Python script, you must use Python 3.x and have the AWS SDK for Python (Boto) library installed. To start sending test data to the Kinesis input stream: 1. Download the data generator stock.py Python script from the Data generator GitHub repository. 2. Run the stock.py script: $ python stock.py Keep the script running while you complete the rest of the tutorial. You can now run your Apache Flink application. Generate sample data using Kinesis Data Generator Alternatively to using the Python script, you can use Kinesis Data Generator, also available in a hosted version, to send random sample data to the stream. 
Kinesis Data Generator runs in your browser, and you don't need to install anything on your machine.

To set up and run Kinesis Data Generator:

1. Follow the instructions in the Kinesis Data Generator documentation to set up access to the tool. You will run an AWS CloudFormation template that sets up a user and password.
2. Access Kinesis Data Generator through the URL generated by the CloudFormation template. You can find the URL in the Outputs tab after the CloudFormation stack is complete.
3. Configure the data generator:
• Region: Select the Region that you are using for this tutorial: us-east-1
• Stream/delivery stream: Select the input stream that the application will use: ExampleInputStream
• Records per second: 100
• Record template: Copy and paste the following template:

{
    "event_time" : "{{date.now("YYYY-MM-DDTkk:mm:ss.SSSSS")}}",
    "ticker" : "{{random.arrayElement( ["AAPL", "AMZN", "MSFT", "INTC", "TBV"] )}}",
    "price" : {{random.number(100)}}
}

4. Test the template: Choose Test template and verify that the generated record is similar to the following:

{
    "event_time" : "2024-06-12T15:08:32.04800",
    "ticker" : "INTC",
    "price" : 7
}

5. Start the data generator: Choose Send data.

Kinesis Data Generator is now sending data to the ExampleInputStream.

Run your application locally

You can run and debug your Flink application locally in your IDE.

Note
Before you continue, verify that the input and output streams are available. See Create two Amazon Kinesis data streams. Also, verify that you have permission to read and write from both streams. See Authenticate your AWS session.

Setting up the local development environment requires Java 11 JDK, Apache Maven, and an IDE for Java development. Verify that you meet the required prerequisites. See Fulfill the prerequisites for completing the exercises.

Import the Java project into your IDE

To start working on the application in your IDE, you must import it as a
Java project. The repository you cloned contains multiple examples. Each example is a separate project. For this tutorial, import the content in the ./java/GettingStarted subdirectory into your IDE. Import the code as an existing Java project using Maven.

Note
The exact process to import a new Java project varies depending on the IDE you are using.

Check the local application configuration

When running locally, the application uses the configuration in the application_properties.json file in the resources folder of the project under ./src/main/resources. You can edit this file to use different Kinesis stream names or Regions.

[
  {
    "PropertyGroupId": "InputStream0",
    "PropertyMap": {
      "stream.name": "ExampleInputStream",
      "flink.stream.initpos": "LATEST",
      "aws.region": "us-east-1"
    }
  },
  {
    "PropertyGroupId": "OutputStream0",
    "PropertyMap": {
      "stream.name": "ExampleOutputStream",
      "aws.region": "us-east-1"
    }
  }
]

Set up your IDE run configuration

You can run and debug the Flink application from your IDE directly by running the main class com.amazonaws.services.msf.BasicStreamingJob, as you would run any Java application.

Before running the application, you must set up the Run configuration. The setup depends on the IDE you are using. For example, see Run/debug configurations in the IntelliJ IDEA documentation. In particular, you must set up the following:

1. Add the provided dependencies to the classpath. This is required to make sure that the dependencies with provided scope are passed to the application when running locally. Without this set up, the application displays a class not found error immediately.
2. Pass the AWS credentials to access the Kinesis streams to the application. The fastest way is to use AWS Toolkit for IntelliJ IDEA. Using this IDE plugin in the Run configuration, you can select a specific AWS profile. AWS authentication happens using this profile. You don't need to pass AWS credentials directly.
3. Verify that the IDE runs the application using JDK 11.

Run the application in your IDE

After you set up the Run configuration for the BasicStreamingJob, you can run or debug it like a regular Java application.

Note
You can't run the fat-jar generated by Maven directly with java -jar ... from the command line. This jar does not contain the Flink core dependencies required to run the application standalone.

When the application starts successfully, it logs some information about the standalone minicluster and the initialization of the connectors. This is followed by a number of INFO and some WARN logs that Flink normally emits when the application starts.
13:43:31,405 INFO com.amazonaws.services.msf.BasicStreamingJob [] - Loading application properties from 'flink-application-properties-dev.json' 13:43:31,549 INFO org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Flink Kinesis Consumer is going to read the following streams: ExampleInputStream, 13:43:31,676 INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The configuration option taskmanager.cpu.cores required for local execution is not set, setting it to the maximal possible value. Run your application locally 472 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 13:43:31,676 INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The configuration option taskmanager.memory.task.heap.size required for local execution is not set, setting it to the maximal possible value. 13:43:31,676 INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The configuration option taskmanager.memory.task.off-heap.size required for local execution is not set, setting it to the maximal possible value. 13:43:31,676 INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The configuration option taskmanager.memory.network.min required for local execution is not set, setting it to its default value 64 mb. 13:43:31,676 INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The configuration option taskmanager.memory.network.max required for local execution is not set, setting it to its default value 64 mb. 13:43:31,676 INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The configuration option taskmanager.memory.managed.size required for local execution is not set, setting it to its default value 128 mb. 13:43:31,677 INFO org.apache.flink.runtime.minicluster.MiniCluster [] - Starting Flink Mini Cluster .... After the initialization is complete, the application doesn't emit any further log entries. While data is flowing, no log is emitted. To verify if the application is correctly processing data, you can inspect the input and output Kinesis streams, as described in the following section. Note Not emitting logs about flowing data is the normal behavior for a Flink application. Emitting logs on every record might be convenient for debugging, but can add considerable overhead when running in production. Observe input and output data in Kinesis streams You can observe records sent to the
input stream by the Python script or the Kinesis Data Generator by using the Data Viewer in the Amazon Kinesis console. If you prefer to check the stream contents from code, see the sketch at the end of this section.

To observe records

1. Open the Kinesis console at https://console.aws.amazon.com/kinesis.
2. Verify that the Region is the same where you are running this tutorial, which is us-east-1 US East (N. Virginia) by default. Change the Region if it does not match.
3. Choose Data Streams.
4. Select the stream that you want to observe, either ExampleInputStream or ExampleOutputStream.
5. Choose the Data viewer tab.
6. Choose any Shard, keep Latest as Starting position, and then choose Get records. You might see a "No record found for this request" error. If so, choose Retry getting records. The newest records published to the stream are displayed.
7. Choose the value in the Data column to inspect the content of the record in JSON format.

Stop your application running locally

Stop the application running in your IDE. The IDE usually provides a "stop" option. The exact location and method depends on the IDE you're using.

Compile and package your application code

In this section, you use Apache Maven to compile the Java code and package it into a JAR file. You can compile and package your code using the Maven command line tool or your IDE.

To compile and package using the Maven command line:

Move to the directory containing the Java GettingStarted project and run the following command:

$ mvn package

To compile and package using your IDE:

Run mvn package from your IDE Maven integration.

In both cases, the following JAR file is created: target/amazon-msf-java-stream-app-1.0.jar.

Note
Running a "build project" from your IDE might not create the JAR file.

Upload the application code JAR file

In this section, you upload the JAR file you created in the previous section to the Amazon Simple Storage Service (Amazon S3) bucket you created at the beginning of this tutorial. If you have not completed this step, see the earlier section of this tutorial where you created the Amazon S3 bucket.

To upload the application code JAR file

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose the bucket you previously created for the application code.
3. Choose Upload.
4. Choose Add files.
5. Navigate to the JAR file generated in the previous step: target/amazon-msf-java-stream-app-1.0.jar.
6. Choose Upload without changing any other settings.

Warning
Make sure that you select the correct JAR file in <repo-dir>/java/GettingStarted/target/amazon-msf-java-stream-app-1.0.jar. The target directory also contains other JAR files that you don't need to upload.
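As referenced above, if you prefer to verify the stream contents from code instead of the console Data Viewer, the following is a minimal sketch using the AWS SDK for Java 2.x. It is not part of the tutorial application; the stream name and Region are assumptions that match this tutorial's defaults, and the sketch only reads the first shard of the stream.

import java.nio.charset.StandardCharsets;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.GetRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.GetRecordsResponse;
import software.amazon.awssdk.services.kinesis.model.GetShardIteratorRequest;
import software.amazon.awssdk.services.kinesis.model.ListShardsRequest;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;

public class StreamChecker {
    public static void main(String[] args) {
        // Assumption: the default credential chain can read the stream, and the
        // stream name and Region below match your setup.
        String streamName = "ExampleOutputStream";
        try (KinesisClient kinesis = KinesisClient.builder().region(Region.US_EAST_1).build()) {
            // Use the first shard only; a stream with multiple shards would need one iterator per shard.
            String shardId = kinesis.listShards(ListShardsRequest.builder()
                    .streamName(streamName).build()).shards().get(0).shardId();
            // Start from the oldest available record so the call returns data even if nothing is flowing right now.
            String iterator = kinesis.getShardIterator(GetShardIteratorRequest.builder()
                    .streamName(streamName)
                    .shardId(shardId)
                    .shardIteratorType(ShardIteratorType.TRIM_HORIZON)
                    .build()).shardIterator();
            GetRecordsResponse response = kinesis.getRecords(GetRecordsRequest.builder()
                    .shardIterator(iterator).limit(10).build());
            // Print each record payload as a UTF-8 string; you may need to follow
            // response.nextShardIterator() if this first batch is empty.
            response.records().forEach(r ->
                    System.out.println(new String(r.data().asByteArray(), StandardCharsets.UTF_8)));
        }
    }
}

Run the sketch with the same AWS profile you use for the tutorial; it prints up to 10 record payloads from the output stream.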
Create and configure the Managed Service for Apache Flink application You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI. For this tutorial, you will use the console. Note When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately. Topics Upload the application code JAR file 475 Managed Service for Apache Flink Developer Guide Managed Service for Apache Flink • Create the application • Edit the IAM policy • Configure the application • Run the application • Observe the metrics of the running application • Observe output data in Kinesis streams • Stop the application Create the application To create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. Verify that the correct Region is selected: us-east-1 US East (N. Virginia) 3. Open the menu on the right and choose Apache Flink applications and then Create streaming application. Alternatively, choose Create streaming application in the Get started container of the initial page. 4. On the Create streaming application page: • Choose a method to set up the stream processing application: choose Create from scratch. • Apache Flink configuration, Application Flink version: choose Apache Flink 1.20. 5. Configure your application • Application name: enter MyApplication. • Description: enter My java test app. • Access to application resources: choose Create / update
IAM role kinesis-analytics-MyApplication-us-east-1 with required policies.
6. Configure your Template for application settings
• Templates: choose Development.
7. Choose Create streaming application at the bottom of the page.

Note
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
• Policy: kinesis-analytics-service-MyApplication-us-east-1
• Role: kinesis-analytics-MyApplication-us-east-1
Amazon Managed Service for Apache Flink was formerly known as Kinesis Data Analytics. The name of the resources that are automatically created is prefixed with kinesis-analytics- for backward compatibility.

Edit the IAM policy

Edit the IAM policy to add permissions to access the Kinesis data streams.

To edit the policy

1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-east-1 policy that the console created for you in the previous section.
3. Choose Edit and then choose the JSON tab.
4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ Create and configure the Managed Service for Apache Flink application 477 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "arn:aws:s3:::my-bucket/kinesis-analytics-placeholder-s3-object" ] }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-east-1:012345678901:log-group:*" ] }, { "Sid": "ListCloudwatchLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-east-1:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutCloudwatchLogs", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-east-1:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-east-1:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", Create and configure the Managed Service for Apache Flink application 478 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-east-1:012345678901:stream/ ExampleOutputStream" } ] } 5. Choose Next at the bottom of the page and then choose Save changes. Configure the application Edit the application configuration to set the application code artifact. To edit the configuration 1. On the MyApplication page, choose Configure. 2. In the Application code location section: • For Amazon S3 bucket, select the bucket you previously created for the application code. Choose Browse and select the correct bucket, and then select Choose. Do not click on the bucket name. • For Path to Amazon S3 object, enter amazon-msf-java-stream-app-1.0.jar. 3. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-east-1 with required policies. 4. In the Runtime properties section, add the following properties. 5. Choose Add new item and add each of the following parameters: Group ID Key Value InputStream0 stream.name ExampleInputStream InputStream0 aws.region us-east-1 OutputStream0 stream.name ExampleOutputStream OutputStream0 aws.region us-east-1 6. Do not modify any of the other sections. Create and configure the Managed Service for Apache Flink application 479 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 7. Choose Save changes. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application The application is now configured and ready to run. To run the application 1. On the console for Amazon Managed Service for Apache Flink, choose My Application and choose Run. 2. On the next page, the Application restore configuration page, choose Run with latest snapshot and then choose Run. The Status in Application details transitions from Ready to Starting and then to Running when the application has started. When the application is in the Running status, you can now open the Flink dashboard. To open the dashboard 1. Choose Open Apache Flink dashboard. 
The dashboard opens on a new page.
2. In the Running jobs list, choose the single job that you can see.

Note
If you set the Runtime properties or edited the IAM policies incorrectly, the application
status might turn into Running, but the Flink dashboard shows that the job is continuously restarting. This is a common failure scenario if the application is misconfigured or lacks permissions to access the external resources. When this happens, check the Exceptions tab in the Flink dashboard to see the cause of the problem.

Observe the metrics of the running application

On the MyApplication page, in the Amazon CloudWatch metrics section, you can see some of the fundamental metrics from the running application.

To view the metrics

1. Next to the Refresh button, select 10 seconds from the dropdown list.
2. When the application is running and healthy, you can see the uptime metric continuously increasing.
3. The fullrestarts metric should be zero. If it is increasing, the configuration might have issues. To investigate the issue, review the Exceptions tab on the Flink dashboard.
4. The Number of failed checkpoints metric should be zero in a healthy application.

Note
This dashboard displays a fixed set of metrics with a granularity of 5 minutes. You can create a custom application dashboard with any metrics in the CloudWatch dashboard. If you prefer to read these metrics from code, see the sketch at the end of this section.

Observe output data in Kinesis streams

Make sure you are still publishing data to the input, either using the Python script or the Kinesis Data Generator. You can now observe the output of the application running on Managed Service for Apache Flink by using the Data Viewer in the Kinesis console (https://console.aws.amazon.com/kinesis/), similarly to what you already did earlier.

To view the output

1. Open the Kinesis console at https://console.aws.amazon.com/kinesis.
2. Verify that the Region is the same as the one you are using to run this tutorial. By default, it is us-east-1 US East (N. Virginia). Change the Region if necessary.
3. Choose Data Streams.
4. Select the stream that you want to observe. For this tutorial, use ExampleOutputStream.
5. Choose the Data viewer tab.
6. Select any Shard, keep Latest as Starting position, and then choose Get records. You might see a "no record found for this request" error. If so, choose Retry getting records. The newest records published to the stream are displayed.
7. Select the value in the Data column to inspect the content of the record in JSON format.

Stop the application

To stop the application, go to the console page of the Managed Service for Apache Flink application named MyApplication.

To stop the application

1. From the Actions dropdown list, choose Stop.
2. The Status in Application details transitions from Running to Stopping, and then to Ready when the application is completely stopped.
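As referenced in the metrics note above, you can also read the application metrics from code rather than the console. The following is a minimal sketch using the AWS SDK for Java 2.x CloudWatch client; it is not part of the tutorial application. The namespace, metric name, and Application dimension are assumptions based on the metrics named above, and the application name and Region match this tutorial's defaults.

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsRequest;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsResponse;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class MetricChecker {
    public static void main(String[] args) {
        // Assumption: "AWS/KinesisAnalytics" namespace, "fullRestarts" metric, and the
        // "Application" dimension are the names used for this service's CloudWatch metrics.
        try (CloudWatchClient cloudWatch = CloudWatchClient.builder().region(Region.US_EAST_1).build()) {
            GetMetricStatisticsResponse stats = cloudWatch.getMetricStatistics(GetMetricStatisticsRequest.builder()
                    .namespace("AWS/KinesisAnalytics")
                    .metricName("fullRestarts")
                    .dimensions(Dimension.builder().name("Application").value("MyApplication").build())
                    .startTime(Instant.now().minus(1, ChronoUnit.HOURS))
                    .endTime(Instant.now())
                    .period(300)
                    .statistics(Statistic.MAXIMUM)
                    .build());
            // Print one data point per 5-minute period for the last hour.
            stats.datapoints().forEach(dp ->
                    System.out.println(dp.timestamp() + " fullRestarts=" + dp.maximum()));
        }
    }
}

In a healthy application, the printed values should stay at zero, matching what you observed in the console.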
Note Don't forget to also stop sending data to the input stream from the Python script or the Kinesis Data Generator. Next step Clean up AWS resources Clean up AWS resources This section includes procedures for cleaning up AWS resources created in this Getting Started (DataStream API) tutorial. This topic contains the following sections: Next step 482 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Delete your Managed Service for Apache Flink application • Delete your Kinesis data streams • Delete your Amazon S3 objects and bucket • Delete your IAM resources • Delete your CloudWatch resources • Explore additional resources for Apache Flink Delete your Managed Service for Apache Flink application Use the following procedure to delete the application. 1. Open the Kinesis console at https://console.aws.amazon.com/kinesis. 2. 3. In the Managed Service for Apache Flink panel, choose MyApplication. From the Actions dropdown list, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink. 2. Choose Data streams. 3. Select the two streams that you created, ExampleInputStream and ExampleOutputStream. 4. From the Actions dropdown list, choose Delete, and then confirm the deletion. Delete your Amazon S3 objects and bucket Use the following procedures to delete your Amazon S3 objects and bucket. To delete the object from the S3 bucket 1. Open the Amazon S3 console
3. In the Managed Service for Apache Flink panel, choose MyApplication. From the Actions dropdown list, choose Delete and then confirm the deletion. Delete your Kinesis data streams 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink. 2. Choose Data streams. 3. Select the two streams that you created, ExampleInputStream and ExampleOutputStream. 4. From the Actions dropdown list, choose Delete, and then confirm the deletion. Delete your Amazon S3 objects and bucket Use the following procedures to delete your Amazon S3 objects and bucket. To delete the object from the S3 bucket 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. 3. Select the S3 bucket that you created for the application artifact. Select the application artifact you uploaded, named amazon-msf-java-stream- app-1.0.jar. 4. Choose Delete and confirm the deletion. Delete your Managed Service for Apache Flink application 483 Managed Service for Apache Flink To delete the S3 bucket Managed Service for Apache Flink Developer Guide 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Select the bucket that you created for the artifacts. 3. Choose Delete and confirm the deletion. Note The S3 bucket must be empty to delete it. Delete your IAM resources To delete your IAM resources 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. 3. In the navigation bar, choose Policies. In the filter control, enter kinesis. 4. Choose the kinesis-analytics-service-MyApplication-us-east-1 policy. 5. Choose Policy Actions and then choose Delete. 6. In the navigation bar, choose Roles. 7. Choose the kinesis-analytics-MyApplication-us-east-1 role. 8. Choose Delete role and then confirm the deletion. Delete your CloudWatch resources 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. In the navigation bar, choose Logs. 3. Choose the /aws/kinesis-analytics/MyApplication log group. 4. Choose Delete Log Group and then confirm the deletion. Explore additional resources for Apache Flink Explore additional resources Delete your IAM resources 484 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Explore additional resources Now that you've created and run a basic Managed Service for Apache Flink application, see the following resources for more advanced Managed Service for Apache Flink solutions. • Amazon Managed Service for Apache Flink Workshop: In this workshop, you build an end-to- end streaming architecture to ingest, analyze, and visualize streaming data in near real-time. You set out to improve the operations of a taxi company in New York City. You analyze the telemetry data of a taxi fleet in New York City in near real-time to optimize their fleet operations. • Examples for creating and working with Managed Service for Apache Flink applications: This section of this Developer Guide provides examples of creating and working with applications in Managed Service for Apache Flink. They include example code and step-by-step instructions to help you create Managed Service for Apache Flink applications and test your results. • Learn Flink: Hands On Training: Offical introductory Apache Flink training that gets you started writing scalable streaming ETL, analytics, and event-driven applications. 
Explore additional resources 485 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Get started with Amazon Managed Service for Apache Flink (Table API) This section introduces you to the fundamental concepts of Managed Service for Apache Flink and implementing an application in Java using the Table API and SQL. It demonstrates how to switch between different APIs within the same application, and it describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application. Topics • Review the components of the Managed Service for Apache Flink application • Complete the required prerequisites • Create and run a Managed Service for Apache Flink application • Next step • Clean up AWS resources • Explore additional resources Review the components of the Managed Service for Apache Flink application Note Managed Service for Apache Flink supports all Apache Flink APIs and potentially all JVM languages. Depending on the API you choose, the structure of the application and the implementation is slightly different. This tutorial covers the implementation of applications using the Table API and SQL, and the integration with the DataStream API, implemented in Java. To process data, your Managed Service for Apache Flink application uses a Java application that processes input and produces output using the Apache Flink runtime. A typical Apache Flink application has the following components: Review application components 486 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • Runtime properties: You can use runtime properties to pass configuration parameters to your application without modifying and republishing the code. • Sources: The application consumes data from one or more sources. A source uses a connector to read data from and external system, such as a Kinesis data stream or an Amazon MSK topic. For
development or testing, you can also have sources randomly generate test data. For more information, see Add streaming data sources to Managed Service for Apache Flink. With SQL or Table API, sources are defined as source tables.
• Transformations: The application processes data through one or more transformations that can filter, enrich, or aggregate data. When using SQL or Table API, transformations are defined as queries over tables or views.
• Sinks: The application sends data to external systems through sinks. A sink uses a connector to send data to an external system, such as a Kinesis data stream, an Amazon MSK topic, an Amazon S3 bucket, or a relational database. You can also use a special connector to print the output for development purposes only. When using SQL or Table API, sinks are defined as sink tables where you will insert results. For more information, see Write data using sinks in Managed Service for Apache Flink.

Your application requires some external dependencies, such as the Flink connectors your application uses, or potentially a Java library. To run in Amazon Managed Service for Apache Flink, you must package the application along with dependencies in a fat-JAR and upload it to an Amazon S3 bucket. You then create a Managed Service for Apache Flink application. You pass the code package location, along with other runtime configuration parameters.

This tutorial demonstrates how to use Apache Maven to package the application and how to run the application locally in the IDE of your choice.

Complete the required prerequisites

Before starting this tutorial, complete the first two steps of the Get started with Amazon Managed Service for Apache Flink (DataStream API):
• Fulfill the prerequisites for completing the exercises
• Set up the AWS Command Line Interface (AWS CLI)

To get started, see Create an application.

Create and run a Managed Service for Apache Flink application

In this exercise, you create a Managed Service for Apache Flink application that uses a data generator as a source and an Amazon S3 bucket as a sink. This section contains the following steps.
• Create dependent resources
• Set up your local development environment
• Download and examine the Apache Flink streaming Java code
• Run your application locally
• Observe the application writing data to an S3 bucket
• Stop your application running locally
• Compile and package your application code
• Upload the application code JAR file
• Create and configure the Managed Service for Apache Flink application

Create dependent resources

Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
• An Amazon S3 bucket to store the application's code and to write the application output.
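The console steps that follow show how to create this bucket manually. If you prefer to create it programmatically, the following is a minimal sketch using the AWS SDK for Java 2.x; the bucket name is a placeholder that you must replace with a globally unique name, and the Region matches this tutorial's default.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;

public class CreateTutorialBucket {
    public static void main(String[] args) {
        // Replace with a globally unique bucket name, for example by appending your user name.
        String bucketName = "msf-table-tutorial-<username>";
        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            // In us-east-1 no location constraint is required when creating a bucket.
            s3.createBucket(CreateBucketRequest.builder().bucket(bucketName).build());
            System.out.println("Created bucket: " + bucketName);
        }
    }
}

The same bucket is used later in this tutorial for both the application code and the application output.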
Note This tutorial assumes that you are deploying your application in the us-east-1 Region. If you use another Region, you must adapt all steps accordingly. Create an Amazon S3 bucket You can create the Amazon S3 bucket using the console. For instructions for creating this resource, see the following topics: • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name. Create an application 488 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note Make sure that you create the bucket in the Region you use for this tutorial. The default for the tutorial is us-east-1. Other resources When you create your application, Managed Service for Apache Flink creates the following Amazon CloudWatch resources if they don't already exist: • A log group called /AWS/KinesisAnalytics-java/<my-application>. • A log stream called kinesis-analytics-log-stream. Set up your local development environment For development and debugging, you can run the Apache Flink application on your machine, directly from your IDE of choice. Any Apache Flink dependencies are handled as normal Java dependencies using Maven. Note On your development machine, you must have Java JDK 11, Maven, and Git installed. We recommend that you use a development environment such as Eclipse Java Neon or IntelliJ IDEA. To verify that you meet all prerequisites, see Fulfill the prerequisites for completing the exercises. You do not
need to install an Apache Flink cluster on your machine.

Authenticate your AWS session

The application uses Kinesis data streams to publish data. When running locally, you must have a valid AWS authenticated session with permissions to write to the Kinesis data stream. Use the following steps to authenticate your session:

1. If you don't have the AWS CLI and a named profile with valid credentials configured, see Set up the AWS Command Line Interface (AWS CLI).
2. If your IDE has a plugin to integrate with AWS, you can use it to pass the credentials to the application running in the IDE. For more information, see AWS Toolkit for IntelliJ IDEA and AWS Toolkit for Eclipse.

Download and examine the Apache Flink streaming Java code

The application code for this example is available from GitHub.

To download the Java application code

1. Clone the remote repository using the following command:

git clone https://github.com/aws-samples/amazon-managed-service-for-apache-flink-examples.git

2. Navigate to the ./java/GettingStartedTable directory.

Review application components

The application is entirely implemented in the com.amazonaws.services.msf.BasicTableJob class. The main() method defines sources, transformations, and sinks. The execution is initiated by an execution statement at the end of this method.

Note
For an optimal developer experience, the application is designed to run without any code changes both on Amazon Managed Service for Apache Flink and locally, for development in your IDE.

• To read the runtime configuration so that it works both in Amazon Managed Service for Apache Flink and in your IDE, the application automatically detects whether it's running standalone locally in the IDE. In that case, the application loads the runtime configuration differently:
  1. When the application detects that it's running in standalone mode in your IDE, it loads the runtime configuration from the application_properties.json file included in the resources folder of the project. The content of this file is shown later in this tutorial.
  2. When the application runs in Amazon Managed Service for Apache Flink, the default behavior loads the application configuration from the runtime properties you will define in the Amazon Managed Service for Apache Flink application. See Create and configure the Managed Service for Apache Flink application.
private static Map<String, Properties> loadApplicationProperties(StreamExecutionEnvironment env) throws IOException { if (env instanceof LocalStreamEnvironment) { LOGGER.info("Loading application properties from '{}'", LOCAL_APPLICATION_PROPERTIES_RESOURCE); return KinesisAnalyticsRuntime.getApplicationProperties( BasicStreamingJob.class.getClassLoader() .getResource(LOCAL_APPLICATION_PROPERTIES_RESOURCE).getPath()); } else { LOGGER.info("Loading application properties from Amazon Managed Service for Apache Flink"); return KinesisAnalyticsRuntime.getApplicationProperties(); } } • The main() method defines the application data flow and runs it. • Initializes the default streaming environments. In this example, we show how to create both the StreamExecutionEnvironment to use with the DataStream API, and the StreamTableEnvironment to use with SQL and the Table API. The two environment objects are two separate references to the same runtime environment, to use different APIs. StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, EnvironmentSettings.newInstance().build()); • Load the application configuration parameters. This will automatically load them from the correct place, depending on where the application is running: Map<String, Properties> applicationParameters = loadApplicationProperties(env); • The FileSystem sink connector that the application uses to write results to Amazon S3 output files when Flink completes a checkpoint. You must enable checkpoints to write files to the Download and examine the Apache Flink streaming Java code 491 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide destination. When the application is running in Amazon Managed Service for Apache Flink, the application configuration controls the checkpoint and enables it by default. Conversely, when running locally, checkpoints are disabled by default. The application detects that it runs locally and configures checkpointing every 5,000 ms. if (env instanceof LocalStreamEnvironment) { env.enableCheckpointing(5000); } • This application does not receive data from an actual external source. It generates random data to process through the DataGen connector. This connector is available for DataStream API, SQL, and Table API. To demonstrate the integration between APIs, the application uses the DataStram API version because it provides more flexibility. Each record is generated by a generator
function called StockPriceGeneratorFunction in this case, where you can put custom logic.

DataGeneratorSource<StockPrice> source = new DataGeneratorSource<>(
        new StockPriceGeneratorFunction(),
        Long.MAX_VALUE,
        RateLimiterStrategy.perSecond(recordPerSecond),
        TypeInformation.of(StockPrice.class));

• In the DataStream API, records can have custom classes. Classes must follow specific rules so that Flink can use them as records. For more information, see Supported Data Types. In this example, the StockPrice class is a POJO.
• The source is then attached to the execution environment, generating a DataStream of StockPrice. This application doesn't use event-time semantics and doesn't generate a watermark. Run the DataGenerator source with a parallelism of 1, independent of the parallelism of the rest of the application.

DataStream<StockPrice> stockPrices = env.fromSource(
        source,
        WatermarkStrategy.noWatermarks(),
        "data-generator"
).setParallelism(1);

• What follows in the data processing flow is defined using the Table API and SQL. To do so, we convert the DataStream of StockPrices into a table. The schema of the table is automatically inferred from the StockPrice class.

Table stockPricesTable = tableEnv.fromDataStream(stockPrices);

• The following snippet of code shows how to define a view and a query using the programmatic Table API:

Table filteredStockPricesTable = stockPricesTable
        .select(
                $("eventTime").as("event_time"),
                $("ticker"),
                $("price"),
                dateFormat($("eventTime"), "yyyy-MM-dd").as("dt"),
                dateFormat($("eventTime"), "HH").as("hr")
        ).where($("price").isGreater(50));
tableEnv.createTemporaryView("filtered_stock_prices", filteredStockPricesTable);

• A sink table is defined to write the results to an Amazon S3 bucket as JSON files. To illustrate the difference with defining a view programmatically, with the Table API the sink table is defined using SQL.

tableEnv.executeSql("CREATE TABLE s3_sink (" +
        "eventTime TIMESTAMP(3)," +
        "ticker STRING," +
        "price DOUBLE," +
        "dt STRING," +
        "hr STRING" +
        ") PARTITIONED BY ( dt, hr ) WITH (" +
        "'connector' = 'filesystem'," +
        "'format' = 'json'," +
        "'path' = 's3a://" + s3Path + "'" +
        ")");

• The last step of the data flow is an executeInsert() that inserts the filtered stock prices view into the sink table. This method initiates the execution of the data flow we have defined so far.
filteredStockPricesTable.executeInsert("s3_sink");

Use the pom.xml file

The pom.xml file defines all dependencies required by the application and sets up the Maven Shade plugin to build the fat-jar that contains all dependencies required by Flink.

• Some dependencies have provided scope. These dependencies are automatically available when the application runs in Amazon Managed Service for Apache Flink. They are required to compile the application, or to run the application locally in your IDE. For more information, see Run your application locally. Make sure that you are using the same Flink version as the runtime you will use in Amazon Managed Service for Apache Flink. To use the Table API and SQL, you must include the flink-table-planner-loader and flink-table-runtime dependencies, both with provided scope.

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-loader</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-runtime</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>

• You must add additional Apache Flink dependencies to the pom with the default scope. For example, the DataGen connector, the FileSystem SQL connector, and the JSON format.

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-datagen</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-files</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>${flink.version}</version>
</dependency>

• To write to Amazon S3 when running locally, the S3 Hadoop File System is also included with provided scope.

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-s3-fs-hadoop</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>

• The Maven Java Compiler plugin makes sure that the code is compiled against Java 11, the JDK version currently supported by Apache Flink.
• The Maven Shade plugin packages the fat-jar, excluding some libraries that are provided by the runtime. It also specifies two transformers: ServicesResourceTransformer and ManifestResourceTransformer. The latter configures the class containing the main method to start the application. If you rename the main class, don't forget to update this transformer.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    ...
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>com.amazonaws.services.msf.BasicStreamingJob</mainClass> Download and examine the Apache Flink streaming Java code 495 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide </transformer> ... </plugin> Run your application
locally

You can run and debug your Flink application locally in your IDE.

Note
Before you continue, verify that the input and output streams are available. See Create two Amazon Kinesis data streams. Also, verify that you have permission to read and write from both streams. See Authenticate your AWS session.

Setting up the local development environment requires Java 11 JDK, Apache Maven, and an IDE for Java development. Verify that you meet the required prerequisites. See Fulfill the prerequisites for completing the exercises.

Import the Java project into your IDE

To start working on the application in your IDE, you must import it as a Java project. The repository you cloned contains multiple examples. Each example is a separate project. For this tutorial, import the content in the ./java/GettingStartedTable subdirectory into your IDE. Import the code as an existing Java project using Maven.

Note
The exact process to import a new Java project varies depending on the IDE you are using.

Modify the local application configuration

When running locally, the application uses the configuration in the application_properties.json file in the resources folder of the project under ./src/main/resources. For this tutorial application, the configuration parameters are the name of the bucket and the path where the data will be written.

Edit the configuration and modify the name of the Amazon S3 bucket to match the bucket that you created at the beginning of this tutorial.

[
  {
    "PropertyGroupId": "bucket",
    "PropertyMap": {
      "name": "<bucket-name>",
      "path": "output"
    }
  }
]

Note
The configuration property name must contain only the bucket name, for example my-bucket-name. Don't include any prefix such as s3:// or a trailing slash. If you modify the path, omit any leading or trailing slashes.

Set up your IDE run configuration

You can run and debug the Flink application from your IDE directly by running the main class com.amazonaws.services.msf.BasicTableJob, as you would run any Java application.

Before running the application, you must set up the Run configuration. The setup depends on the IDE that you are using. For example, see Run/debug configurations in the IntelliJ IDEA documentation. In particular, you must set up the following:

1. Add the provided dependencies to the classpath. This is required to make sure that the dependencies with provided scope are passed to the application when running locally. Without this set up, the application displays a class not found error immediately.
2.
Pass the AWS credentials to access the Kinesis streams to the application. The fastest way is to use AWS Toolkit for IntelliJ IDEA. Using this IDE plugin in the Run configuration, you can select a specific AWS profile. AWS authentication happens using this profile. You don't need to pass AWS credentials directly. 3. Verify that the IDE runs the application using JDK 11. Run your application locally 497 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Run the application in your IDE After you set up the Run configuration for the BasicTableJob, you can run or debug it like a regular Java application. Note You can't run the fat-jar generated by Maven directly with java -jar ... from the command line. This jar does not contain the Flink core dependencies required to run the application standalone. When the application starts successfully, it logs some information about the standalone minicluster and the initialization of the connectors. This is followed by a number of INFO and some WARN logs that Flink normally emits when the application starts. 21:28:34,982 INFO com.amazonaws.services.msf.BasicTableJob [] - Loading application properties from 'flink-application-properties- dev.json' 21:28:35,149 INFO com.amazonaws.services.msf.BasicTableJob [] - s3Path is ExampleBucket/my-output-bucket ... After the initialization is complete, the application doesn't emit any further log entries. While data is flowing, no log is emitted. To verify if the application is correctly processing data, you can inspect the content of the output bucket, as described in the following section. Note Not emitting logs about flowing data is the normal behavior for a Flink application. Emitting logs on every record might be convenient for debugging, but can add considerable overhead when running in production. Observe the application writing data to an
S3 bucket

This example application generates random data internally and writes this data to the destination S3 bucket you configured. Unless you modified the default configuration path, the data will be written to the output path followed by date and hour partitioning, in the format ./output/<yyyy-MM-dd>/<HH>.

The FileSystem sink connector creates new files on the Flink checkpoint. When running locally, the application runs a checkpoint every 5 seconds (5,000 milliseconds), as specified in the code.

if (env instanceof LocalStreamEnvironment) {
    env.enableCheckpointing(5000);
}

To browse the S3 bucket and observe the file written by the application

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose the bucket you previously created.
3. Navigate to the output path, and then to the date and hour folders that correspond to the current time in the UTC time zone.
4. Periodically refresh to observe new files appearing every 5 seconds.
5. Select and download one file to observe the content.

Note
By default, the files have no extensions. The content is formatted as JSON. You can open the files with any text editor to inspect the content.

Stop your application running locally

Stop the application running in your IDE. The IDE usually provides a "stop" option. The exact location and method depends on the IDE.

Compile and package your application code

In this section, you use Apache Maven to compile the Java code and package it into a JAR file. You can compile and package your code using the Maven command line tool or your IDE.

To compile and package using the Maven command line

Move to the directory that contains the Java GettingStartedTable project and run the following command:

$ mvn package

To compile and package using your IDE

Run mvn package from your IDE Maven integration.

In both cases, the JAR file target/amazon-msf-java-table-app-1.0.jar is created.

Note
Running a build project from your IDE might not create the JAR file.

Upload the application code JAR file

In this section, you upload the JAR file you created in the previous section to the Amazon S3 bucket you created at the beginning of this tutorial. If you have not done it yet, complete Create an Amazon S3 bucket.

To upload the application code

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose the bucket you previously created for the application code.
3. Choose Upload.
4. Choose Add files.
5. Navigate to the JAR file generated in the previous section: target/amazon-msf-java-table-app-1.0.jar.
6.
Choose Upload without changing any other settings.

Warning
Make sure that you select the correct JAR file, <repo-dir>/java/GettingStarted/target/amazon-msf-java-table-app-1.0.jar. The target directory also contains other JAR files that you don't need to upload.

Create and configure the Managed Service for Apache Flink application

You can create and configure a Managed Service for Apache Flink application using either the console or the AWS CLI. For this tutorial, you will use the console.

Note
When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you must create these resources separately.

Create the application

1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
2. Verify that the correct Region is selected: US East (N. Virginia) us-east-1.
3. On the right menu, choose Apache Flink applications and then choose Create streaming application. Alternatively, choose Create streaming application in the Get started section of the initial page.
4. On the Create streaming application page, complete the following:
   • For Choose a method to set up the stream processing application, choose Create from scratch.
   • For Apache Flink configuration, Application Flink version, choose Apache Flink 1.19.
   • In the Application configuration section, complete the following:
     • For Application name, enter MyApplication.
     • For Description, enter My Java Table API test app.
     • For Access to application resources, choose Create / update IAM role kinesis-analytics-MyApplication-us-east-1 with required policies.
   • In Template for application settings, complete the following:
     • For Templates, choose Development.
5. Choose Create streaming application.

Note
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
• Policy: kinesis-analytics-service-MyApplication-us-east-1
• Role: kinesis-analytics-MyApplication-us-east-1

Edit the IAM policy

Edit the IAM policy to add permissions to access the Amazon S3 bucket.

To edit the IAM policy to add S3 bucket permissions

1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-east-1 policy that the console created for you in the previous section.
3. Choose Edit and then choose the JSON tab.
4. Add the highlighted section of the following policy example to the policy. Replace the sample account ID (012345678901) with your account ID and the bucket name (my-bucket) with the name of the S3 bucket that you created.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCode",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/kinesis-analytics-placeholder-s3-object"
            ]
        },
        {
            "Sid": "ListCloudwatchLogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:012345678901:log-group:*"
            ]
        },
        {
            "Sid": "ListCloudwatchLogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
            ]
        },
        {
            "Sid": "PutCloudwatchLogs",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
            ]
        },
        {
            "Sid": "WriteOutputBucket",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket"
            ]
        }
    ]
}

5. Choose Next and then choose Save changes.

Configure the application

Edit the application to set the application code artifact.
To configure the application

1. On the MyApplication page, choose Configure.
2. In the Application code location section, complete the following:
   • For Amazon S3 bucket, select the bucket you previously created for the application code. Choose Browse and select the correct bucket, and then choose Choose. Don't click on the bucket name.
   • For Path to Amazon S3 object, enter amazon-msf-java-table-app-1.0.jar.
3. For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-east-1.
4. In the Runtime properties section, add the following properties.
5. Choose Add new item and add each of the following parameters:

Group ID | Key | Value
bucket | name | your-bucket-name
bucket | path | output

6. Don't modify any other setting.
7. Choose Save changes.

Note
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
• Log group: /aws/kinesis-analytics/MyApplication
• Log stream: kinesis-analytics-log-stream

Run the application

The application is now configured and ready to run.

To run the application

1. Return to the console page in Amazon Managed Service for Apache Flink and choose MyApplication.
2. Choose Run to start the application.
3. On the Application restore configuration page, choose Run with latest snapshot.
4. Choose Run.
5. The Status in Application details transitions from Ready to Starting and then to Running after the application has started.

When the application is in Running status, you can open the Flink dashboard.

To open the dashboard and view the job

1. Choose Open Apache Flink dashboard. The dashboard opens in a new page.
2. In the Running Jobs list, choose the single job you can see.

Note
If you set the runtime properties or edited the IAM policies incorrectly, the application status might change to Running, but the Flink dashboard shows the job continuously restarting. This is a common failure scenario when the application is misconfigured or lacks the permissions to
access the external resources. When this happens, check the Exceptions tab in the Flink dashboard to investigate the cause of the problem.

Observe the metrics of the running application

On the MyApplication page, in the Amazon CloudWatch metrics section, you can see some of the fundamental metrics from the running application.

To view the metrics

1. Next to the Refresh button, select 10 seconds from the dropdown list.
2. When the application is running and healthy, you can see the uptime metric continuously increasing.
3. The fullrestarts metric should be zero. If it is increasing, the configuration might have issues. Review the Exceptions tab on the Flink dashboard to investigate the issue.
4. The Number of failed checkpoints metric should be zero in a healthy application.

Note
This dashboard displays a fixed set of metrics with a granularity of 5 minutes. You can create a custom application dashboard with any metrics in the CloudWatch dashboard.

Observe the application writing data to the destination bucket

You can now observe the application running in Amazon Managed Service for Apache Flink writing files to Amazon S3. To observe the files, follow the same process you used to check the files being written when the application was running locally. See Observe the application writing data to an S3 bucket.

Remember that the application writes new files on the Flink checkpoint. When running on Amazon Managed Service for Apache Flink, checkpoints are enabled by default and run every 60 seconds. The application creates new files approximately every 1 minute.

Stop the application

To stop the application, go to the console page of the Managed Service for Apache Flink application named MyApplication.

To stop the application

1. From the Action dropdown list, choose Stop.
2. The Status in Application details transitions from Running to Stopping, and then to Ready when the application is completely stopped.

Note
Don't forget to also stop sending data to the input stream from the Python script or the Kinesis Data Generator.

Next step

Clean up AWS resources

Clean up AWS resources

This section includes procedures for cleaning up AWS resources created in the Getting Started (Table API) tutorial. This topic contains the following sections.

• Delete your Managed Service for Apache Flink application
• Delete your Amazon S3 objects and bucket
• Delete your IAM resources
• Delete your CloudWatch resources
• Next step

Delete your Managed Service for Apache Flink application

Use the following procedure to delete the application.

To delete the application

1.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
2. In the Managed Service for Apache Flink panel, choose MyApplication.
3. From the Actions dropdown list, choose Delete and then confirm the deletion.

Delete your Amazon S3 objects and bucket

Use the following procedure to delete your S3 objects and bucket.

To delete the application object from the S3 bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Select the S3 bucket that you created.
3. Select the application artifact that you uploaded named amazon-msf-java-table-app-1.0.jar, choose Delete, and then confirm the deletion.

To delete all output files written by the application

1. Choose the output folder.
2. Choose Delete.
3. Confirm that you want to permanently delete the content.

To delete the S3 bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Select the S3 bucket you created.
3. Choose Delete and confirm the deletion.

Delete your IAM resources

Use the following procedure to delete your IAM resources.

To delete your IAM resources

1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation bar, choose Policies.
3. In the filter control, enter kinesis.
4. Choose the kinesis-analytics-service-MyApplication-us-east-1 policy.
5. Choose Policy Actions and then choose Delete.
6. In the navigation bar, choose Roles.
7. Choose the kinesis-analytics-MyApplication-us-east-1 role.
8. Choose Delete role and then confirm the deletion.
Delete your CloudWatch resources

Use the following procedure to delete your CloudWatch resources.

To delete your CloudWatch resources

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation bar, choose Logs.
3. Choose the /aws/kinesis-analytics/MyApplication log group.
4. Choose Delete Log Group and then confirm the deletion.

Next step

Explore additional resources

Explore additional resources

Now that you've created and run a Managed Service for Apache Flink application that uses the Table API, see Explore additional resources in the Get started with Amazon Managed Service for Apache Flink (DataStream API).

Get started with Amazon Managed Service for Apache Flink for Python

This section introduces you to the fundamental concepts of Managed Service for Apache Flink using Python and the Table API. It describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application.

Topics
• Review the components of a Managed Service for Apache Flink application
• Fulfill the prerequisites
• Create and run a Managed Service for Apache Flink for Python application
• Clean up AWS resources

Review the components of a Managed Service for Apache Flink application

Note
Amazon Managed Service for Apache Flink supports all Apache Flink APIs. Depending on the API you choose, the structure of the application is slightly different.

One popular approach when developing an Apache Flink application in Python is to define the application flow using SQL embedded in Python code. This is the approach that we follow in the following Getting Started tutorial.

To process data, your Managed Service for Apache Flink application uses a Python script to define the data flow that processes input and produces output using the Apache Flink runtime.

A typical Managed Service for Apache Flink application has the following components:

• Runtime properties: You can use runtime properties to configure your application without recompiling your application code.
• Sources: The application consumes data from one or more sources. A source uses a connector to read data from an external system such as a Kinesis data stream or an Amazon MSK topic. You
• Transformations: The application processes data by using one or more transformations that can filter, enrich, or aggregate data. When you use SQL, the application defines transformations as SQL queries. • Sinks: The application sends data to external sources through sinks. A sink uses a connector to send data to an external system such as a Kinesis data stream, an Amazon MSK topic, an Amazon S3 bucket, or a relational database. You can also use a special connector to print the output for development purposes. When you use SQL, the application defines sinks as sink tables into which you insert results. For more information, see Write data using sinks in Managed Service for Apache Flink. Your Python application might also require external dependencies, such as additional Python libraries or any Flink connector your application uses. When you package your application, you must include every dependency that your application requires. This tutorial demonstrates how to include connector dependencies and how to package the application for deployment on Amazon Managed Service for Apache Flink. Fulfill the prerequisites To complete this tutorial, you must have the following: • Python 3.11, preferably using a standalone environment like VirtualEnv (venv), Conda, or Miniconda. • Git client - install the Git client if you have not already. • Java Development Kit (JDK) version 11 - install a Java JDK 11 and set the JAVA_HOME environment variable to point to your install location. If you don't have a JDK 11, you can use Amazon Corretto or any standard JDK of our choice. • To verify that you have the JDK correctly installed, run the following command. The output will be different if you
complete this tutorial, you must have the following: • Python 3.11, preferably using a standalone environment like VirtualEnv (venv), Conda, or Miniconda. • Git client - install the Git client if you have not already. • Java Development Kit (JDK) version 11 - install a Java JDK 11 and set the JAVA_HOME environment variable to point to your install location. If you don't have a JDK 11, you can use Amazon Corretto or any standard JDK of our choice. • To verify that you have the JDK correctly installed, run the following command. The output will be different if you are using a JDK other than Amazon Corretto 11. Make sure that the version is 11.x. $ java --version openjdk 11.0.23 2024-04-16 LTS OpenJDK Runtime Environment Corretto-11.0.23.9.1 (build 11.0.23+9-LTS) Fulfill the prerequisites 511 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide OpenJDK 64-Bit Server VM Corretto-11.0.23.9.1 (build 11.0.23+9-LTS, mixed mode) • Apache Maven - install Apache Maven if you have not done so already. For more information, see Installing Apache Maven. • To test your Apache Maven installation, use the following command: $ mvn -version Note Although your application is written in Python, Apache Flink runs in the Java Virtual Machine (JVM). It distributes most of the dependencies, such as the Kinesis connector, as JAR files. To manage these dependencies and to package the application in a ZIP file, use Apache Maven. This tutorial explains how to do so. Warning We recommend that you use Python 3.11 for local development. This is the same Python version used by Amazon Managed Service for Apache Flink with the Flink runtime 1.19. Installing the Python Flink library 1.19 on Python 3.12 might fail. If you have another Python version installed by default on your machine, we recommend that you create a standalone environment such as VirtualEnv using Python 3.11. IDE for local development We recommend that you use a development environment such as PyCharm or Visual Studio Code to develop and compile your application. Then, complete the first two steps of the Get started with Amazon Managed Service for Apache Flink (DataStream API): • Set up an AWS account and create an administrator user • Set up the AWS Command Line Interface (AWS CLI) Fulfill the prerequisites 512 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide To get started, see Create an application. Create and run a Managed Service for Apache Flink for Python application In this section, you create a Managed Service for Apache Flink application for Python application with a Kinesis stream as a source and a sink. This section contains the following steps. • Create dependent resources • Set up your local development environment • Download and examine the Apache Flink streaming Python code • Manage JAR dependencies • Write sample records to the input stream • Run your application locally • Observe input and output data in Kinesis streams • Stop your application running locally • Package your application code • Upload the application package to an Amazon S3 bucket • Create and configure the Managed Service for Apache Flink application • Next step Create dependent resources Before you create a Managed Service for Apache Flink for this exercise, you create the following dependent resources: • Two Kinesis streams for input and output. • An Amazon S3 bucket to store the application's code. 
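The following sections show how to create these resources using the console or the AWS CLI. If you prefer to create the code bucket from the AWS CLI as well, a command like the following works for us-east-1; the bucket name shown here is only a placeholder, because S3 bucket names must be globally unique:

$ aws s3 mb s3://my-msf-python-app-code-<username> --region us-east-1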
Create an application 513 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Note This tutorial assumes that you are deploying your application in the us-east-1 Region. If you use another Region, you must adapt all steps accordingly. Create two Kinesis streams Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream) in the same Region you will use to deploy your application (us-east-1 in this example). Your application uses these streams for the application source and destination streams. You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. To create the data streams (AWS CLI) 1. To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command. $ aws kinesis create-stream \ --stream-name ExampleInputStream \ --shard-count 1 \ --region us-east-1 2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream. $ aws kinesis create-stream \ --stream-name ExampleOutputStream \ --shard-count 1 \ --region us-east-1 Create an Amazon S3 bucket You can create the Amazon S3 bucket using the console. For instructions for creating this resource, see the following topics: Create dependent resources 514 Managed Service for Apache Flink Managed Service for Apache Flink
first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command. $ aws kinesis create-stream \ --stream-name ExampleInputStream \ --shard-count 1 \ --region us-east-1 2. To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream. $ aws kinesis create-stream \ --stream-name ExampleOutputStream \ --shard-count 1 \ --region us-east-1 Create an Amazon S3 bucket You can create the Amazon S3 bucket using the console. For instructions for creating this resource, see the following topics: Create dependent resources 514 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name, for example by appending your login name. Note Make sure that you create the S3 bucket in the Region you use for this tutorial (us- east-1). Other resources When you create your application, Managed Service for Apache Flink creates the following Amazon CloudWatch resources if they don't already exist: • A log group called /AWS/KinesisAnalytics-java/<my-application>. • A log stream called kinesis-analytics-log-stream. Set up your local development environment For development and debugging, you can run the Python Flink application on your machine. You can start the application from the command line with python main.py or in a Python IDE of your choice. Note On your development machine, you must have Python 3.10 or 3.11, Java 11, Apache Maven, and Git installed. We recommend that you use an IDE such as PyCharm or Visual Studio Code. To verify that you meet all prerequisites, see Fulfill the prerequisites for completing the exercises before you proceed. Install the PyFlink library To develop your application and run it locally, you must install the Flink Python library. 1. Create a standalone Python environment using VirtualEnv, Conda, or any similar Python tool. Set up your local development environment 515 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 2. Install the PyFlink library in that environment. Use the same Apache Flink runtime version that you will use in Amazon Managed Service for Apache Flink. Currently, the recommended runtime is 1.19.1. $ pip install apache-flink==1.19.1 3. Make sure that the environment is active when you run your application. If you run the application in the IDE, make sure that the IDE is using the environment as runtime. The process depends on the IDE that you are using. Note You only need to install the PyFlink library. You do not need to install an Apache Flink cluster on your machine. Authenticate your AWS session The application uses Kinesis data streams to publish data. When running locally, you must have a valid AWS authenticated session with permissions to write to the Kinesis data stream. Use the following steps to authenticate your session: 1. If you don't have the AWS CLI and a named profile with valid credential configured, see Set up the AWS Command Line Interface (AWS CLI). 2. Verify that your AWS CLI is correctly configured and your users have permissions to write to the Kinesis data stream by publishing the following test record: $ aws kinesis put-record --stream-name ExampleOutputStream --data TEST --partition- key TEST 3. If your IDE has a plugin to integrate with AWS, you can use it to pass the credentials to the application running in the IDE. 
For more information, see AWS Toolkit for PyCharm, AWS Toolkit for Visual Studio Code, and AWS Toolkit for IntelliJ IDEA. Download and examine the Apache Flink streaming Python code The Python application code for this example is available from GitHub. To download the application code, do the following: Download and examine the Apache Flink streaming Python code 516 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. Clone the remote repository using the following command: git clone https://github.com/aws-samples/amazon-managed-service-for-apache-flink- examples.git 2. Navigate to the ./python/GettingStarted directory. Review application components The application code is located in main.py. We use SQL embedded in Python to define the flow of the application. Note For an optimized developer experience, the application is designed to run without any code changes both on Amazon Managed Service for Apache Flink and locally, for development on your machine. The application uses the environment variable IS_LOCAL = true to detect when it is running locally. You must set the environment variable IS_LOCAL = true either on your shell or in the run configuration of your IDE. • The application sets up the execution environment and reads the runtime configuration. To work both on Amazon Managed Service for Apache Flink and locally, the application checks the IS_LOCAL variable. • The following is the default behavior when the application runs in Amazon Managed Service for Apache Flink: 1. Load dependencies packaged with the application. For more
information, see (link).
2. Load the configuration from the Runtime properties you define in the Amazon Managed Service for Apache Flink application. For more information, see (link).
• When the application detects IS_LOCAL = true because you are running it locally, it does the following:
1. Loads external dependencies from the project.
2. Loads the configuration from the application_properties.json file included in the project.

...
APPLICATION_PROPERTIES_FILE_PATH = "/etc/flink/application_properties.json"
...
is_local = (
    True if os.environ.get("IS_LOCAL") else False
)
...
if is_local:
    APPLICATION_PROPERTIES_FILE_PATH = "application_properties.json"
    CURRENT_DIR = os.path.dirname(os.path.realpath(__file__))
    table_env.get_config().get_configuration().set_string(
        "pipeline.jars",
        "file:///" + CURRENT_DIR + "/target/pyflink-dependencies.jar",
    )

• The application defines a source table with a CREATE TABLE statement, using the Kinesis connector. This table reads data from the input Kinesis stream. The application takes the name of the stream, the Region, and the initial position from the runtime configuration.

table_env.execute_sql(f"""
    CREATE TABLE prices (
        ticker VARCHAR(6),
        price DOUBLE,
        event_time TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    )
    PARTITIONED BY (ticker)
    WITH (
        'connector' = 'kinesis',
        'stream' = '{input_stream_name}',
        'aws.region' = '{input_stream_region}',
        'format' = 'json',
        'json.timestamp-format.standard' = 'ISO-8601'
    ) """)

• The application also defines a sink table using the Kinesis connector in this example. This table sends data to the output Kinesis stream.

table_env.execute_sql(f"""
    CREATE TABLE output (
        ticker VARCHAR(6),
        price DOUBLE,
        event_time TIMESTAMP(3)
    )
    PARTITIONED BY (ticker)
    WITH (
        'connector' = 'kinesis',
        'stream' = '{output_stream_name}',
        'aws.region' = '{output_stream_region}',
        'sink.partitioner-field-delimiter' = ';',
        'sink.batch.max-size' = '100',
        'format' = 'json',
        'json.timestamp-format.standard' = 'ISO-8601'
    )""")

• Finally, the application executes an INSERT INTO... SQL statement that writes from the source table into the sink table. In a more complex application, you likely have additional steps transforming data before writing to the sink.

table_result = table_env.execute_sql("""INSERT INTO output SELECT ticker, price, event_time FROM prices""")

• You must add another step at the end of the main() function to run the application locally:

if is_local:
    table_result.wait()

Without this statement, the application terminates immediately when you run it locally. You must not execute this statement when you run your application in Amazon Managed Service for Apache Flink.
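The snippets above don't show the helper that turns the property-group JSON into values the rest of the code can use. The following is a minimal sketch of such a helper, assuming the file layout shown earlier; the function name and the lack of error handling are illustrative only and not the exact code in the GettingStarted project.

import json

def read_application_properties(path):
    # Load the property-group JSON used both locally and on
    # Amazon Managed Service for Apache Flink.
    with open(path, "r") as properties_file:
        property_groups = json.load(properties_file)
    # Index each PropertyMap by its PropertyGroupId, for example
    # properties["InputStream0"]["stream.name"].
    return {group["PropertyGroupId"]: group["PropertyMap"] for group in property_groups}

properties = read_application_properties(APPLICATION_PROPERTIES_FILE_PATH)
input_stream_name = properties["InputStream0"]["stream.name"]
input_stream_region = properties["InputStream0"]["aws.region"]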
Manage JAR dependencies A PyFlink application usually requires one or more connectors. The application in this tutorial uses the Kinesis Connector. Because Apache Flink runs in the Java JVM, connectors are distributed as JAR files, regardless if you implement your application in Python. You must package these dependencies with the application when you deploy it on Amazon Managed Service for Apache Flink. In this example, we show how to use Apache Maven to fetch the dependencies and package the application to run on Managed Service for Apache Flink. Note There are alternative ways to fetch and package dependencies. This example demonstrates a method that works correctly with one or more connectors. It also lets you run the Manage JAR dependencies 519 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide application locally, for development, and on Managed Service for Apache Flink without code changes. Use the pom.xml file Apache Maven uses the pom.xml file to control dependencies and application packaging. Any JAR dependencies are specified in the pom.xml file in the <dependencies>...</ dependencies> block. <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/ xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> ... <dependencies> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-kinesis</artifactId> <version>4.3.0-1.19</version> </dependency> </dependencies> ... To find the correct artifact and version of connector to use, see Use Apache Flink connectors with Managed Service for Apache Flink. Make sure that you refer to the version of Apache Flink you are using. For this example, we use the Kinesis connector. For Apache Flink 1.19, the connector version is 4.3.0-1.19. Note If you are using Apache Flink 1.19, there is no connector version released specifically for this version. Use the connectors released for 1.18. Download and package dependencies Use Maven to download the dependencies defined in the pom.xml file and package them for the Python Flink application. Manage JAR dependencies 520 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 1. Navigate to the directory that contains the Python Getting Started project
called python/GettingStarted.
2. Run the following command:

$ mvn package

Maven creates a new file called ./target/pyflink-dependencies.jar. When you are developing locally on your machine, the Python application looks for this file.

Note
If you forget to run this command, when you try to run your application, it will fail with the error: Could not find any factory for identifier "kinesis".

Write sample records to the input stream

In this section, you will send sample records to the stream for the application to process. You have two options for generating sample data: using a Python script or the Kinesis Data Generator.

Generate sample data using a Python script

You can use a Python script to send sample records to the stream.

Note
To run this Python script, you must use Python 3.x and have the AWS SDK for Python (Boto) library installed.

To start sending test data to the Kinesis input stream:

1. Download the data generator stock.py Python script from the Data generator GitHub repository.
2. Run the stock.py script:

$ python stock.py

Keep the script running while you complete the rest of the tutorial. You can now run your Apache Flink application.

Generate sample data using Kinesis Data Generator

As an alternative to the Python script, you can use Kinesis Data Generator, also available in a hosted version, to send random sample data to the stream. Kinesis Data Generator runs in your browser, and you don't need to install anything on your machine.

To set up and run Kinesis Data Generator:

1. Follow the instructions in the Kinesis Data Generator documentation to set up access to the tool. You will run an AWS CloudFormation template that sets up a user and password.
2. Access Kinesis Data Generator through the URL generated by the CloudFormation template. You can find the URL in the Output tab after the CloudFormation template is completed.
3. Configure the data generator:
   • Region: Select the Region that you are using for this tutorial: us-east-1
   • Stream/delivery stream: Select the input stream that the application will use: ExampleInputStream
   • Records per second: 100
   • Record template: Copy and paste the following template:

{
    "event_time" : "{{date.now("YYYY-MM-DDTkk:mm:ss.SSSSS")}}",
    "ticker" : "{{random.arrayElement( ["AAPL", "AMZN", "MSFT", "INTC", "TBV"] )}}",
    "price" : {{random.number(100)}}
}

4. Test the template: Choose Test template and verify that the generated record is similar to the following:

{
    "event_time" : "2024-06-12T15:08:32.04800",
    "ticker" : "INTC",
    "price" : 7
}

5. Start the data generator: Choose Send Data.

Kinesis Data Generator is now sending data to the ExampleInputStream.
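If you cannot use the stock.py script or the Kinesis Data Generator, the following is a minimal sketch of an equivalent generator written with the AWS SDK for Python (Boto3). It is illustrative only, not the stock.py script itself, and it assumes your authenticated AWS session can write to ExampleInputStream in us-east-1.

import datetime
import json
import random
import time

import boto3

# Illustrative stand-in for stock.py: writes one random price record
# to the input stream roughly every 100 milliseconds.
kinesis = boto3.client("kinesis", region_name="us-east-1")

while True:
    record = {
        "event_time": datetime.datetime.utcnow().isoformat(),
        "ticker": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
        "price": round(random.uniform(0, 100), 2),
    }
    kinesis.put_record(
        StreamName="ExampleInputStream",
        Data=json.dumps(record),
        PartitionKey=record["ticker"],
    )
    time.sleep(0.1)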
Write sample records to the input stream 522 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide Run your application locally You can test the application locally, running from the command line with python main.py or from your IDE. To run your application locally, you must have the correct version of the PyFlink library installed as described in the previous section. For more information, see (link) Note Before you continue, verify that the input and output streams are available. See Create two Amazon Kinesis data streams. Also, verify that you have permission to read and write from both streams. See Authenticate your AWS session. Import the Python project into your IDE To start working on the application in your IDE, you must import it as a Python project. The repository you cloned contains multiple examples. Each example is a separate project. For this tutorial, import the content in the ./python/GettingStarted subdirectory into your IDE. Import the code as an existing Python project. Note The exact process to import a new Python project varies depending on the IDE you are using. Check the local application configuration When running locally, the application uses the configuration in the application_properties.json file in the resources folder of the project under ./src/main/ resources. You can edit this file to use different Kinesis stream names or Regions. [ { "PropertyGroupId": "InputStream0", Run your application locally 523 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide "PropertyMap": { "stream.name": "ExampleInputStream", "flink.stream.initpos": "LATEST", "aws.region": "us-east-1" } }, { "PropertyGroupId": "OutputStream0",
  "PropertyMap": {
      "stream.name": "ExampleOutputStream",
      "aws.region": "us-east-1"
    }
  }
]

Run your Python application locally

You can run your application locally, either from the command line as a regular Python script, or from the IDE.

To run your application from the command line

1. Make sure that the standalone Python environment such as Conda or VirtualEnv where you installed the Python Flink library is currently active.
2. Make sure that you ran mvn package at least one time.
3. Set the IS_LOCAL = true environment variable:

$ export IS_LOCAL=true

4. Run the application as a regular Python script.

$ python main.py

To run the application from within the IDE

1. Configure your IDE to run the main.py script with the following configuration:
   1. Use the standalone Python environment such as Conda or VirtualEnv where you installed the PyFlink library.
   2. Use the AWS credentials to access the input and output Kinesis data streams.
   3. Set IS_LOCAL = true.
2. The exact process to set the run configuration depends on your IDE.
3. When you have set up your IDE, run the Python script and use the tooling provided by your IDE while the application is running.

Inspect application logs locally

When running locally, the application does not show any log in the console, aside from a few lines printed and displayed when the application starts. PyFlink writes logs to a file in the directory where the Python Flink library is installed. The application prints the location of the logs when it starts. You can also run the following command to find the logs:

$ python -c "import pyflink;import os;print(os.path.dirname(os.path.abspath(pyflink.__file__))+'/log')"

1. List the files in the logging directory. You usually find a single .log file.
2. Tail the file while the application is running: tail -f <log-path>/<log-file>.log.

Observe input and output data in Kinesis streams

You can observe records sent to the input stream by the sample data generator (the Python script or the Kinesis Data Generator) by using the Data Viewer in the Amazon Kinesis console. To observe records, open the input or output stream in the Amazon Kinesis console and choose the Data viewer tab.

Stop your application running locally

Stop the application running in your IDE. The IDE usually provides a "stop" option. The exact location and method depend on the IDE.

Package your application code

In this section, you use Apache Maven to package the application code and all required dependencies in a .zip file. Run the Maven package command again:

$ mvn package

This command generates the file target/managed-flink-pyflink-getting-started-1.0.0.zip.
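Before uploading the package, you can optionally check its layout. The archive must contain main.py at the root and lib/pyflink-dependencies.jar, because those are the paths you will reference later in the kinesis.analytics.flink.run.options runtime properties. For example:

$ unzip -l target/managed-flink-pyflink-getting-started-1.0.0.zip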
Upload the application package to an Amazon S3 bucket In this section, you upload the .zip file you created in the previous section to the Amazon Simple Storage Service (Amazon S3) bucket you created at the beginning of this tutorial. If you have not completed this step, see (link). To upload the application code JAR file 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 2. Choose the bucket you previously created for the application code. 3. Choose Upload. 4. Choose Add files. 5. Navigate to the .zip file generated in the previous step: target/managed-flink-pyflink- getting-started-1.0.0.zip. 6. Choose Upload without changing any other settings. Create and configure the Managed Service for Apache Flink application You can create and configure a Managed Service for Apache Flink application using either the console or the AWS CLI. For this tutorial, we will use the console. Create the application 1. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink 2. Verify that the correct Region is selected: US East (N. Virginia)us-east-1. 3. Open the right-side menu and choose Apache Flink applications and then Create streaming application. Alternatively, choose Create streaming application from the Get started section of the initial page. 4. On the Create streaming applications page: • For Chose a method to set up the stream processing application, choose Create from scratch. Upload the application package to an Amazon S3 bucket 526 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide • For Apache Flink configuration, Application Flink version, choose Apache Flink 1.19. • For Application configuration:
   • For Application name, enter MyApplication.
   • For Description, enter My Python test app.
   • In Access to application resources, choose Create / update IAM role kinesis-analytics-MyApplication-us-east-1 with required policies.
   • For Template for application settings:
     • For Templates, choose Development.
• Choose Create streaming application.

Note
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
• Policy: kinesis-analytics-service-MyApplication-us-east-1
• Role: kinesis-analytics-MyApplication-us-east-1
Amazon Managed Service for Apache Flink was formerly known as Kinesis Data Analytics. The name of the resources that are generated automatically is prefixed with kinesis-analytics for backward compatibility.

Edit the IAM policy

Edit the IAM policy to add permissions to access the Amazon S3 bucket.

To edit the IAM policy to add S3 bucket permissions

1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-east-1 policy that the console created for you in the previous section.
3. Choose Edit and then choose the JSON tab.
4. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "ReadCode", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::my-bucket/kinesis-analytics-placeholder-s3-object" ] }, { "Sid": "ListCloudwatchLogGroups", "Effect": "Allow", "Action": [ "logs:DescribeLogGroups" ], "Resource": [ "arn:aws:logs:us-east-1:012345678901:log-group:*" ] }, { "Sid": "ListCloudwatchLogStreams", "Effect": "Allow", "Action": [ "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:us-east-1:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:*" ] }, { "Sid": "PutCloudwatchLogs", "Effect": "Allow", "Action": [ "logs:PutLogEvents" Create and configure the Managed Service for Apache Flink application 528 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide ], "Resource": [ "arn:aws:logs:us-east-1:012345678901:log-group:/aws/kinesis- analytics/MyApplication:log-stream:kinesis-analytics-log-stream" ] }, { "Sid": "ReadInputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-east-1:012345678901:stream/ ExampleInputStream" }, { "Sid": "WriteOutputStream", "Effect": "Allow", "Action": "kinesis:*", "Resource": "arn:aws:kinesis:us-east-1:012345678901:stream/ ExampleOutputStream" } ] } 5. Choose Next and then choose Save changes. Configure the application Edit the application configuration to set the application code artifact. To configure the application 1. On the MyApplication page, choose Configure. 2. In the Application code location section: • For Amazon S3 bucket, select the bucket you previously created for the application code. Choose Browse and select the correct bucket, then choose Choose. Don't select on the bucket name. • For Path to Amazon S3 object, enter managed-flink-pyflink-getting- started-1.0.0.zip. 3. For Access permissions, choose Create / update IAM role kinesis-analytics- MyApplication-us-east-1 with required policies. Create and configure the Managed Service for Apache Flink application 529 Managed Service for Apache Flink Managed Service for Apache Flink Developer Guide 4. Move to Runtime properties and keep the default values for all other settings. 5. Choose Add new item and add each of the following parameters: Group ID Key Value InputStream0 stream.name ExampleInputStream InputStream0 flink.stream.initp LATEST os InputStream0 aws.region us-east-1 OutputStream0 stream.name ExampleOutputStream OutputStream0 aws.region us-east-1 kinesis.analytics. python main.py flink.run.options kinesis.analytics. jarfile lib/pyflink-depend flink.run.options encies.jar 6. Do not modify any of the other sections and choose Save changes. Note When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: • Log group: /aws/kinesis-analytics/MyApplication • Log stream: kinesis-analytics-log-stream Run the application The application is now configured and ready to run. Create and configure the Managed Service for Apache Flink application 530 Managed Service for Apache Flink To run the application Managed Service for Apache Flink Developer Guide 1. On the console for Amazon Managed Service for Apache Flink, choose My Application and choose Run. 2. On the next page, the Application restore configuration page, choose Run with latest snapshot and then choose Run. The Status in Application details transitions from Ready to Starting and then to Running when the application has started. 
When the application is in the Running status, you can now open the Flink dashboard. To open the dashboard 1. Choose Open Apache Flink dashboard. The dashboard