The IoT data river: processing the flood with Apache Kafka Streams - Paolo Patierno

Air time: 16/12 15:50

Watch this and other videos about IoT, technology, and innovation on Jorge Maia's channel.

The IoT data river: processing the flood with Apache Kafka Streams

Tons of devices supervising the environment connected to the cloud, tons of data to ingest like a river in flood, tons of messages to process to extract information from them. This is where IoT data streaming comes into play, with the need to analyze this unbounded dataset in real time and react to events. But how do you handle it all to reach the highest point of wisdom? Apache Kafka and its Streams API together are a dream team for making this a reality. During this session, we'll see how to use them to develop a complete pipeline for IoT data processing.
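To give a flavor of the kind of pipeline the session covers, here is a minimal Kafka Streams sketch: it reads device telemetry from one topic, filters readings above a threshold, and writes alerts to another topic. The topic names (`iot-telemetry`, `iot-alerts`), the temperature threshold, and the string-encoded values are illustrative assumptions, not details from the talk; a real deployment would also need a running Kafka cluster and the `kafka-streams` dependency on the classpath.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class IotPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "iot-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read raw telemetry keyed by device id; values are assumed to be
        // plain numeric strings (e.g. one temperature reading per message).
        KStream<String, String> telemetry = builder.stream("iot-telemetry");

        // Keep only readings above a (hypothetical) threshold and
        // forward them to an alerts topic for downstream consumers.
        telemetry
            .filter((deviceId, value) -> Double.parseDouble(value) > 25.0)
            .mapValues(value -> "ALERT: temperature " + value)
            .to("iot-alerts");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the topology cleanly on JVM shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The same builder can be extended with windowed aggregations or joins against a `KTable` of device metadata, which is the kind of step-by-step enrichment a complete IoT pipeline typically adds.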

Paolo Patierno


Paolo is a Principal Software Engineer working for Red Hat on the messaging and IoT team. He has worked on different integration projects bringing AMQP together with Apache Kafka and Spark, and on the EnMasse messaging-as-a-service project on its integration with MQTT. Currently, he is focusing on Apache Kafka and how to deploy and run it on Kubernetes and OpenShift. In the IoT space, he takes part in the definition of the API for the Eclipse Hono project, being one of its leads, and he is also a committer for Eclipse Paho and a maintainer of different IoT-related components in Eclipse Vert.x.