Amazon Kinesis: The core of real-time streaming!

Yash Bindlish
6 min read · Apr 9, 2020

Businesses today receive massive amounts of data from various sources that continuously generate streams of data. Enterprises are eager to extract precise insights from this data in real time or near real time to deliver value and growth to their business.

Unstructured data can come from any source available to the business, such as application logs, telemetry, social feeds, and Internet of Things (IoT) devices. Businesses are eager to mine valuable information from this data to learn more about their customers, so that they can deliver long-term value to them.

Amazon Kinesis is the central suite of services in this architecture, addressing real-time data ingestion needs.

AWS Kinesis — High Level Architecture

Kinesis Data Streams:

Kinesis Data Streams (KDS), often referred to as Amazon Kinesis or Amazon Kinesis Streams, is a managed service from Amazon Web Services that provides a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from multiple data sources, and the collected data is available within milliseconds to enable real-time analytics.

Enterprises can create data-processing applications known as Kinesis Data Streams applications. A typical streaming application reads data from a data stream as data records.

A KDS stream is a group of shards, where each shard contains a sequence of data records. Each data record has a sequence number that is assigned by Kinesis Data Streams.

AWS — Kinesis Data Stream Logical Building Block
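To make the record and sequence-number relationship concrete, here is a minimal sketch using boto3 (the AWS SDK for Python); the stream name, region, and payload are illustrative assumptions, not from this article:

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Create a stream with 2 shards and wait for it to become ACTIVE.
kinesis.create_stream(StreamName="demo-stream", ShardCount=2)
kinesis.get_waiter("stream_exists").wait(StreamName="demo-stream")

# Put a single record; the partition key determines which shard receives it.
response = kinesis.put_record(
    StreamName="demo-stream",
    Data=b'{"event": "page_view", "user": "u-123"}',
    PartitionKey="u-123",
)

# Kinesis assigns the sequence number and reports the target shard.
print(response["ShardId"], response["SequenceNumber"])
```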

Shard:

A shard is a uniquely identified sequence of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity. Data retention is 24 hours by default and can be extended to 7 days. Multiple applications can also consume the same data independently of one another. Once data has been inserted into Kinesis, it cannot be deleted; records expire only when the retention period lapses.
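As a small illustration, extending retention to the 7-day maximum is a single SDK call; this boto3 sketch assumes a stream named demo-stream:

```python
import boto3

kinesis = boto3.client("kinesis")

# Retention is specified in hours: 7 days = 168 hours.
kinesis.increase_stream_retention_period(
    StreamName="demo-stream",
    RetentionPeriodHours=168,
)
```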

- Each shard can support up to 5 transactions per second for reads

- up to a maximum total data read rate of 2 MB per second

- up to 1,000 records per second for writes

- up to a maximum total write rate of 1 MB per second (including partition keys)
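Based on these per-shard limits, a rough shard count can be estimated for a given workload. The workload numbers in this Python sketch are illustrative assumptions:

```python
import math

# Assumed peak workload figures (illustrative only).
write_mb_per_sec = 4.5    # peak incoming data volume
records_per_sec = 3_000   # peak incoming record rate
read_mb_per_sec = 9.0     # total read demand across consumers

shards_for_writes = max(
    math.ceil(write_mb_per_sec / 1.0),    # 1 MB/s write limit per shard
    math.ceil(records_per_sec / 1_000),   # 1,000 records/s limit per shard
)
shards_for_reads = math.ceil(read_mb_per_sec / 2.0)  # 2 MB/s read limit

print(max(shards_for_writes, shards_for_reads))  # -> 5 shards needed
```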

Scaling for Shards:

Shards are not scaled automatically; resharding is a planned, manual operation. Shard splitting is used to increase streaming capacity, since each shard accepts up to 1 MB/s of incoming data.

Shard Splitting
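A hedged boto3 sketch of a shard split, choosing the midpoint of the parent shard's hash key range so each child covers half of it (the stream name is an assumption):

```python
import boto3

kinesis = boto3.client("kinesis")

# Inspect the first shard's hash key range.
shard = kinesis.list_shards(StreamName="demo-stream")["Shards"][0]
lo = int(shard["HashKeyRange"]["StartingHashKey"])
hi = int(shard["HashKeyRange"]["EndingHashKey"])

# Split at the midpoint of the range, creating two child shards.
kinesis.split_shard(
    StreamName="demo-stream",
    ShardToSplit=shard["ShardId"],
    NewStartingHashKey=str((lo + hi) // 2),
)
```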

Similar to shard splitting, one can plan and merge shards to decrease stream capacity and save cost.

Shard Merging
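And the corresponding merge, again as a boto3 sketch, under the assumption that the first two listed shards are adjacent in the hash key space:

```python
import boto3

kinesis = boto3.client("kinesis")
shards = kinesis.list_shards(StreamName="demo-stream")["Shards"]

# Merge two adjacent shards back into a single shard.
kinesis.merge_shards(
    StreamName="demo-stream",
    ShardToMerge=shards[0]["ShardId"],
    AdjacentShardToMerge=shards[1]["ShardId"],
)
```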

Producers

An Amazon Kinesis Data Streams producer is any application that puts user data records into a Kinesis data stream (also called data ingestion). The following are the different ways in which data can be written to a stream.

The Amazon Kinesis Producer Library (KPL) simplifies producer application development, allowing developers to achieve high write throughput to a Kinesis data stream. The KPL helps build high-performance producers by handling multi-threading, batching, and retry logic, with de-aggregation performed on the consumer side. The KPL core is built as a C++ module and can be compiled to work on any platform with a recent C++ compiler.

The Kinesis Data Streams API in the AWS SDKs allows users to add multiple records (up to 500 per call) with PutRecords and a single record with PutRecord. The KPL is asynchronous by design and introduces additional processing delay, so applications that cannot tolerate this delay may need to use the AWS SDK directly.
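A minimal boto3 sketch of the batch path; the stream name and payloads are assumptions:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Build a batch of records; a single PutRecords call accepts up to 500.
records = [
    {
        "Data": json.dumps({"event_id": i}).encode(),
        "PartitionKey": str(i),   # spreads records across shards
    }
    for i in range(100)
]

response = kinesis.put_records(StreamName="demo-stream", Records=records)

# PutRecords is not all-or-nothing: check per-record failures and retry them.
print("failed:", response["FailedRecordCount"])
```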

The Kinesis Agent can be installed on application, web, or database servers and configured by specifying the files to monitor and the streams to which the data should be sent. AWS IoT also integrates directly with Amazon Kinesis.

Storage

Producers ingest real-time data such as log files, streams (from devices or social media), or transactional data. The recommended storage for each type is listed in the table below:

Consumers

The Kinesis Client Library (KCL) acts as an intermediary between your record-processing logic and the stream. When a KCL application starts, it calls the KCL to instantiate a worker with configuration information such as the stream name and AWS credentials. The KCL is compiled into the application to enable fault-tolerant consumption of data from the stream.

Applications using the KCL or SDK APIs can leverage Spot Instances to save money, since Streams guarantees delivery of data. Event-driven processing can be accomplished using KCL-enabled Kinesis applications, Spark Streaming, Storm, or AWS Lambda. Kinesis can receive stream input from producers on Amazon EC2 or from other sources via the APIs.
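The KCL itself is a separate library (its core is Java-based), so as an illustration here is the equivalent low-level polling loop written directly against the SDK with boto3; the stream name is an assumption:

```python
import time
import boto3

kinesis = boto3.client("kinesis")

# Start reading the first shard from its oldest available record.
shard = kinesis.list_shards(StreamName="demo-stream")["Shards"][0]
iterator = kinesis.get_shard_iterator(
    StreamName="demo-stream",
    ShardId=shard["ShardId"],
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

while iterator:
    result = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in result["Records"]:
        print(record["SequenceNumber"], record["Data"])
    iterator = result["NextShardIterator"]
    time.sleep(1)   # stay under the 5 reads/second per-shard limit
```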

A consumer is an application that processes data from a Kinesis data stream. With enhanced fan-out, each consumer registered to use it receives its own 2 MiB/sec of read throughput per shard, independent of other consumers.
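Registering an enhanced fan-out consumer is a single API call; this boto3 sketch uses an assumed stream ARN and consumer name:

```python
import boto3

kinesis = boto3.client("kinesis")

consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/demo-stream",
    ConsumerName="analytics-app",
)["Consumer"]

# The returned ConsumerARN is then passed to SubscribeToShard, which pushes
# records to the consumer over HTTP/2 at a dedicated 2 MiB/sec per shard.
print(consumer["ConsumerARN"], consumer["ConsumerStatus"])
```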

It is equally important for enterprises to choose the right architectural pattern when designing the blueprint for deriving value from data in real time or near real time.

AWS Kinesis vs. Kinesis Data Firehose

Amazon Kinesis Data Analytics

- Kinesis Data Analytics applications continuously read and process streaming data from Kinesis Data Streams or Kinesis Data Firehose in real time. In the input configuration, you map the streaming source to an in-application input stream.

- You write application code using SQL to process the incoming streaming data and produce output. You can write SQL statements against in-application streams and reference tables, and you can write JOIN queries to combine data from both of these sources.

- Kinesis Data Analytics then writes the output to a configured destination. External destinations can be a Kinesis Data Firehose delivery stream or a Kinesis data stream. You can configure a Kinesis Data Firehose delivery stream to write results to Amazon S3, Redshift, or Elasticsearch Service (ES), or specify a Kinesis data stream as the destination and use AWS Lambda to poll the stream and forward records to your custom destination. You can also define an in-application error stream, to which all captured errors are pushed, and configure a Firehose delivery stream to deliver those errors to an Amazon S3 bucket or a Redshift table for further analysis.

Image Source: AWS
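To ground the SQL pattern above, here is a sketch of typical application code: an in-application output stream plus a pump with a one-minute tumbling window. The stream and column names are assumptions, and the SQL is shown as a Python string since it would be supplied as the application code when creating the Kinesis Data Analytics application:

```python
# Hedged sketch of Kinesis Data Analytics application code (SQL), held in a
# Python string; "SOURCE_SQL_STREAM_001" is the default mapped input stream.
application_code = """
-- In-application output stream that results are pumped into.
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    ticker_symbol VARCHAR(8),
    avg_price     DOUBLE
);

-- A pump continuously inserts query results into the output stream.
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
    INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM ticker_symbol, AVG(price)
    FROM "SOURCE_SQL_STREAM_001"
    GROUP BY ticker_symbol,
             STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);
"""
```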

Amazon Kinesis Data Firehose

The simpler approach: Firehose handles loading data streams directly into AWS products for processing. Scaling is handled automatically, up to gigabytes per second, with support for batching, encryption, and compression. Firehose can stream into S3, Elasticsearch Service, or Redshift, where the data can be copied for processing by additional services.

- Data producers send records to Kinesis Data Firehose delivery streams for near-real-time requirements.

- The underlying entity of Kinesis Data Firehose is the delivery stream. It automatically delivers the data to the destination you specify (e.g., S3, Redshift, Elasticsearch Service, or Splunk).

- You can also configure Kinesis Data Firehose to transform your data before delivering it. Enable data transformation when you create your delivery stream; Kinesis Data Firehose then invokes your Lambda function to transform the incoming source data and delivers the transformed data to the destinations.
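The transformation Lambda follows a fixed contract: it receives base64-encoded records and must return each one with its recordId, a result status, and the re-encoded data. A minimal Python sketch (the transformation itself is an illustrative assumption):

```python
import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        # Decode the incoming record, apply an illustrative transformation.
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",   # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```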

Conclusion

Amazon Kinesis Data Streams (KDS), often referred to as Amazon Kinesis or Amazon Kinesis Streams, can continuously capture gigabytes of data per second from sources such as mobile clients, website clickstreams, social media feeds, logs (application, network, and other types), and events of all kinds. Many real-time solutions aspire to millisecond response rates, and KDS makes that possible by making the collected data available just that quickly. Amazon Kinesis services enable enterprises to focus on the applications that drive time-sensitive business decisions, rather than on deploying and managing infrastructure, treating data as an asset rather than a commodity.


Yash Bindlish

Principal Solution Architect with over 14 years of extensive IT architecture experience who shares an enthusiasm for exploiting technology to create business value.