DataDotz Bigdata Weekly

Apache Spark
==========

Since Apache Spark 2.0, Structured Streaming has supported joins (inner joins and some types of outer joins) between a streaming and a static DataFrame/Dataset. With the release of Apache Spark 2.3.0, now available in Databricks Runtime 4.0 as part of the Databricks Unified Analytics Platform, stream-stream joins are supported as well. The post walks through a canonical use case for stream-stream joins, ad monetization, along with the challenges that had to be resolved and the kinds of workloads the feature enables.
https://databricks.com/blog/2018/03/13/introducing-stream-stream-joins-in-apache-spark-2-3.html
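
The shape of such a join is roughly as follows; this is a hedged sketch in the spirit of the post's ad-monetization example, not its exact code (topic names, watermark durations, and the one-hour match window are all illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

// Requires the spark-sql-kafka-0-10 artifact on the classpath.
val spark = SparkSession.builder.appName("ad-monetization-sketch").getOrCreate()

// Hypothetical sources: ad impressions and the clicks they later produce.
val impressions = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "impressions") // placeholder topic
  .load()
  .selectExpr("CAST(key AS STRING) AS impressionAdId", "timestamp AS impressionTime")

val clicks = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "clicks") // placeholder topic
  .load()
  .selectExpr("CAST(key AS STRING) AS clickAdId", "timestamp AS clickTime")

// Watermarks plus the event-time condition let Spark bound the state the
// join must keep: a click only matches an impression seen within an hour.
val joined = impressions
  .withWatermark("impressionTime", "10 minutes")
  .join(
    clicks.withWatermark("clickTime", "20 minutes"),
    expr("""clickAdId = impressionAdId AND
            clickTime >= impressionTime AND
            clickTime <= impressionTime + interval 1 hour"""))

joined.writeStream.format("console").start().awaitTermination()
```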

Apache Kafka
==========

Apache Kafka is a distributed streaming platform; as well as powering a large number of mission-critical, stream-based systems around the world, it has a huge role to play in data integration. Back in 2016 Neha Narkhede wrote that ETL Is Dead, Long Live Streams, and since then more and more companies have adopted Apache Kafka as the backbone of their architectures.

https://rmoff.net/2018/03/06/why-do-we-need-streaming-etl/
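
The post makes the case in prose, but the core idea, transforming data continuously as it moves through Kafka rather than in nightly batches, is easy to sketch. Below is a minimal, hypothetical Kafka Streams job in Scala; the topic names, the cleanup step, and the configuration are illustrative, not taken from the post:

```scala
import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}
import org.apache.kafka.streams.kstream.ValueMapper

object StreamingEtlSketch extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streaming-etl-sketch") // hypothetical app id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
  props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)

  val builder = new StreamsBuilder()
  // The "transform" of ETL happens in flight: each raw event is
  // normalised as it streams past, with no staging area or batch window.
  builder
    .stream[String, String]("raw-events") // hypothetical source topic
    .mapValues(new ValueMapper[String, String] {
      override def apply(v: String): String = v.trim.toLowerCase
    })
    .to("clean-events") // hypothetical sink topic

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```
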
Keystone
==========

Netflix's streaming pipeline, Keystone, processes 2 trillion messages per day at peak, handling 3 PB/day of incoming data and 7 PB/day of outgoing data. Built 100% on AWS, Keystone is a self-serve platform that allows multiple teams to publish, process, and consume events. There are lessons from Netflix's operational experience that every enterprise can leverage.

https://quickbooks-engineering.intuit.com/lessons-learnt-from-netflix-keystone-pipeline-with-trillions-of-daily-messages-64cc91b3c8ea

Apache Cassandra
==========

Criteo has open-sourced a new Apache Cassandra backend for Graphite called BigGraphite. With it, they store over a million metrics per second, taking advantage of Cassandra's secondary indexes to support the different types of Graphite queries. The presentation goes into further detail on the new backend and links out to the design document and code on GitHub.

https://docs.google.com/presentation/d/17opE2U2aIe1TFYJgr0Z8Dd2fJhy2x0YbReCVeCz6uhA/edit
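
For a rough feel of the secondary-index idea, here is a hypothetical sketch using the DataStax Java driver from Scala. The keyspace, schema, and index below are illustrative only, not BigGraphite's actual data model (see the linked design document for that):

```scala
import com.datastax.oss.driver.api.core.CqlSession

// Connects to localhost:9042 by default.
val session = CqlSession.builder().build()

session.execute(
  "CREATE KEYSPACE IF NOT EXISTS graphite WITH replication = " +
    "{'class': 'SimpleStrategy', 'replication_factor': 1}")

// Hypothetical metrics table, partitioned by metric name.
session.execute(
  """CREATE TABLE IF NOT EXISTS graphite.metrics (
    |  name text,
    |  time timestamp,
    |  value double,
    |  component_0 text,  -- first dotted segment of the name, e.g. 'servers'
    |  PRIMARY KEY (name, time)
    |)""".stripMargin)

// A secondary index lets Graphite-style queries filter on a column
// that is not part of the partition key.
session.execute("CREATE INDEX IF NOT EXISTS ON graphite.metrics (component_0)")

val rows = session.execute(
  "SELECT name, time, value FROM graphite.metrics WHERE component_0 = 'servers'")
rows.forEach(r => println(s"${r.getString("name")} ${r.getDouble("value")}"))
```
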
Apache Spark
==========

Apache Spark 2.3.0 adds experimental support for a continuous processing mode in Structured Streaming. Compared to micro-batch latencies of hundreds of milliseconds, continuous mode uses long-running tasks that can take a record end to end in tens of milliseconds.

https://databricks.com/blog/2018/03/20/low-latency-continuous-processing-mode-in-structured-streaming-in-apache-spark-2-3-0.html
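
Switching an existing query to continuous mode is essentially a change of trigger. A minimal sketch follows; the topic name and checkpoint path are placeholders, and note that as of 2.3.0 only map-like operations (projections and filters) are supported in this mode:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

// Requires the spark-sql-kafka-0-10 artifact on the classpath.
val spark = SparkSession.builder.appName("continuous-mode-sketch").getOrCreate()

spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events") // placeholder topic
  .load()
  .selectExpr("CAST(value AS STRING)") // map-like operations only
  .writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/continuous-checkpoint") // placeholder path
  .trigger(Trigger.Continuous("1 second")) // checkpoint interval, not a batch interval
  .start()
```
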
Apache Kafka
==========

A tutorial on the Confluent blog covers loading data into Kafka using Kafka Connect, both from MySQL (via Debezium for change data capture) and from CSV files, querying and joining those two streams with KSQL, and writing the results out to Amazon S3 using the Kafka Connect S3 sink.

https://www.confluent.io/blog/ksql-in-action-enriching-csv-events-with-data-from-rdbms-into-AWS/