Rowlock Product Fundamentals

What is Kafka?

At a high level, Kafka is a distributed streaming platform designed to handle large volumes of data in real time. It provides a publish-subscribe model in which producers write data to topics and consumers subscribe to those topics to receive the data. Kafka enables reliable, scalable, and fault-tolerant data streaming between systems and applications.

Kafka Connectors are plugins that facilitate integration between Kafka and other data systems, allowing you to stream data to and from external systems. These connectors act as bridges, handling the ingestion and extraction of data from various sources and sinks, such as databases, messaging systems, and cloud services. They provide a standardized way to connect Kafka with external systems, making it easier to build robust and flexible data pipelines.
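
To make the publish-subscribe model concrete, here is a minimal Java sketch using the standard Kafka client library. The broker address (localhost:9092), topic name (orders), and consumer group ID are placeholder assumptions, not details from this document:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PubSubSketch {
        public static void main(String[] args) {
            // Producer side: publish one record to the "orders" topic.
            Properties prod = new Properties();
            prod.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
            prod.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            prod.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(prod)) {
                producer.send(new ProducerRecord<>("orders", "order-1", "{\"amount\": 42}"));
            }

            // Consumer side: subscribe to the same topic and poll for records.
            Properties cons = new Properties();
            cons.put("bootstrap.servers", "localhost:9092");
            cons.put("group.id", "orders-readers");           // placeholder consumer group
            cons.put("auto.offset.reset", "earliest");        // read from the beginning on first run
            cons.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            cons.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cons)) {
                consumer.subscribe(List.of("orders"));
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }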

What is Kafka Connect?

Kafka Connect is a distributed, scalable framework for connecting Apache Kafka with external systems such as databases, storage systems, and messaging systems. It is designed to simplify the integration of data pipelines, making it easy to move data into and out of Kafka.

Here are some key features and concepts associated with Kafka Connect:

  1. Connectors: Kafka Connectors are plugins that define how data is moved between Kafka and external systems. Connectors are available for a wide range of systems, including databases like MySQL and PostgreSQL, storage systems like HDFS and S3, and messaging systems like RabbitMQ and MQTT.

  2. Source Connectors: These connectors ingest data from external systems into Kafka topics. For example, a source connector for a database can capture changes in the database and publish them to Kafka topics (a configuration sketch follows this list).

  3. Sink Connectors: Sink connectors consume data from Kafka topics and write it to external systems. For instance, a sink connector for Elasticsearch can take data from Kafka and index it in Elasticsearch.

  4. Transforms: Kafka Connect allows you to apply transformations to the data as it flows through the connectors. This enables you to modify or enrich the data before it reaches its destination.

  5. Scalability: Kafka Connect is designed to scale horizontally, allowing you to add more worker nodes to handle larger workloads. This makes it suitable for managing large-scale data integration tasks.
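
To illustrate how connectors are configured in practice, here is a hedged Java sketch that registers a source connector through Kafka Connect's REST API (which listens on port 8083 by default). The connector name, database coordinates, and the Confluent JDBC connector class used here are illustrative assumptions; substitute whichever connector plugin you actually install:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegisterConnector {
        public static void main(String[] args) throws Exception {
            // Connector definition: a JDBC source connector that polls a Postgres
            // table on an incrementing ID column and publishes new rows to Kafka
            // topics prefixed with "pg.". All names and credentials are placeholders.
            String connectorJson = """
                {
                  "name": "pg-orders-source",
                  "config": {
                    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                    "connection.url": "jdbc:postgresql://db-host:5432/shop",
                    "connection.user": "replicator",
                    "connection.password": "secret",
                    "mode": "incrementing",
                    "incrementing.column.name": "id",
                    "topic.prefix": "pg."
                  }
                }
                """;

            // POST /connectors creates and starts the connector on the Connect cluster.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8083/connectors"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

The same REST API also exposes endpoints for checking connector status and pausing or deleting connectors.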

Here are a few useful URLs for documentation and tools related to Kafka Connect:

  1. Official Documentation: The Kafka Connect section of the Apache Kafka documentation (https://kafka.apache.org/documentation/#connect) covers core concepts, worker configuration, and the REST API.

  2. Confluent Hub: A repository of pre-built connectors that you can use with Kafka Connect (https://www.confluent.io/hub/).

  3. GitHub Repository: Kafka Connect ships as part of the Apache Kafka codebase (https://github.com/apache/kafka).
These resources should provide a good starting point for understanding and working with Kafka Connect.

What is stream processing?

Stream processing is a computing paradigm that involves processing and analyzing continuous streams of data in real time. It allows organizations to derive insights, make decisions, and take actions as events occur. Two popular tools for stream processing are ksqlDB and Apache Flink.

ksqlDB

ksqlDB is a streaming SQL engine for Apache Kafka (source-available under the Confluent Community License). It simplifies stream processing by providing a familiar SQL interface for interacting with Kafka topics. Here are some key aspects of ksqlDB:

  1. Declarative SQL Syntax: ksqlDB allows you to express stream processing operations using SQL-like statements. This makes it accessible to a broader audience, including those with SQL expertise.

  2. Real-time Data Processing: It enables real-time processing of streaming data, allowing you to perform operations like filtering, aggregating, and joining data on the fly.

  3. Integration with Kafka: ksqlDB seamlessly integrates with Apache Kafka, allowing you to use Kafka topics as input and output for your streaming queries.

  4. Materialized Views: ksqlDB supports the creation of materialized views, which are continuously updated views of your data that you can query in real time (a sketch follows this list).
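
To give a flavor of the declarative SQL syntax and materialized views described above, here is a hedged Java sketch that submits two ksqlDB statements over ksqlDB's REST API (the /ksql endpoint, port 8088 by default). The topic, stream, and table names are illustrative assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class KsqlSketch {
        public static void main(String[] args) throws Exception {
            // Two statements: declare a stream over an existing Kafka topic, then
            // build a continuously updated (materialized) aggregate table from it.
            String body = """
                {
                  "ksql": "CREATE STREAM pageviews (user_id VARCHAR, url VARCHAR) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON'); CREATE TABLE views_per_user AS SELECT user_id, COUNT(*) AS views FROM pageviews GROUP BY user_id EMIT CHANGES;",
                  "streamsProperties": {}
                }
                """;

            // The /ksql endpoint runs DDL/DML statements on the ksqlDB server.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8088/ksql"))
                    .header("Content-Type", "application/vnd.ksql.v1+json; charset=utf-8")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Once created, the views_per_user table can be queried with pull queries, returning the current aggregate for a key on demand.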

Here are some useful URLs for ksqlDB:

  • ksqlDB Documentation (https://docs.ksqldb.io/): The official documentation provides a comprehensive guide to getting started with ksqlDB, including installation, query syntax, and examples.

  • ksqlDB GitHub Repository (https://github.com/confluentinc/ksql): The GitHub repository contains the source code, issues, and community contributions for ksqlDB.

  • Introduction to ksqlDB: This blog post introduces ksqlDB's features and capabilities.

  • A great animated primer: This blog post goes a bit deeper on ksqlDB for stream processing.

Apache Flink

Apache Flink is a powerful, open-source stream processing framework. It provides a runtime and libraries for building scalable, distributed stream processing applications. Here are some key features of Apache Flink:

  1. Event Time Processing: Flink supports event time processing, allowing you to handle and process events based on the time they occurred rather than when they are processed.

  2. Stateful Processing: Flink allows you to maintain state across event streams, enabling complex event-driven applications.

  3. Fault Tolerance: Flink provides fault-tolerance mechanisms, ensuring that processing can continue even in the presence of failures.

  4. Rich Set of APIs and Connectors: Flink supports multiple APIs, including the DataStream API for Java and Scala and the Table API for SQL-like queries. It also provides connectors for various data sources and sinks (a minimal DataStream sketch follows this list).
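
As a minimal sketch of Flink's DataStream API, the following Java program counts events per key, a simple instance of the stateful processing described above. The element values and job name are illustrative placeholders:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FlinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // A bounded in-memory source stands in for a real stream (e.g. a Kafka topic).
            env.fromElements("checkout", "search", "checkout")
               // Pair each event with a count of 1; returns(...) supplies the type
               // information that Java lambdas erase.
               .map(event -> Tuple2.of(event, 1))
               .returns(Types.TUPLE(Types.STRING, Types.INT))
               // keyBy partitions the stream by event name; sum maintains per-key
               // state across events, a simple form of Flink's stateful processing.
               .keyBy(pair -> pair.f0)
               .sum(1)
               .print();

            env.execute("event-count-sketch");
        }
    }

In a real pipeline, the bounded fromElements source would typically be replaced with Flink's Kafka connector or another streaming source.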

Useful URLs for Apache Flink:

  • Apache Flink Website: https://flink.apache.org/

  • Flink Documentation: https://nightlies.apache.org/flink/flink-docs-stable/

  • Flink GitHub Repository: https://github.com/apache/flink

These resources should help you get started with ksqlDB and Apache Flink, whether you are a beginner or an experienced stream processing practitioner.

Data replication without Kafka

Not on Kafka? No Problem!

The links above are great, but deploying and managing Kafka does add engineering overhead.

It's quite possible that:

  • Your company isn't ready for Kafka

  • Your use cases don't require Kafka

  • Kafka isn't able to service your specific data replication needs

This is why we have also developed our source-to-sink connectors.

What are source-to-sink connectors?

Our source-to-sink connectors are containerized services for data replication, and they boast proprietary features you won't find anywhere else.

  • Replicate data from the most popular SQL databases to the most popular OLAP platforms

    • Full load, ongoing change replication, or full load plus ongoing change replication

  • Complete configurability means the right data arrives right when it's needed

    • Optimize your integrations for cost and speed: near real-time delivery isn't always needed, but our connectors can provide it when it is

  • The connectors aren't SaaS; they run on your infrastructure, so no data leaves your VPC

  • Handle replication of large column values from the source system to the sink system

  • Horizontal scaling to handle instances with high IOPS

  • Complete monitoring and logging capabilities for seamless integration with any observability provider