This is Volume 2 of the programming news feed. If you haven’t read Volume 1, you can find it here; it runs to approx. 5k words.

To learn more about this feed, the what & the why, visit here. It also serves as the index of all the programming news & latest software technology volumes. If you like the feed, don’t forget to share it with your folks.

To stay notified of new content, you can follow 8bitmen on Twitter & Facebook.

 

TDEngine – An Open Source Big Data Framework Written Specifically for IoT Use Cases

TDEngine is an open-source Big Data framework written specifically for processing data streaming in from IoT devices. Some of the common use cases are processing data from self-driving vehicles, industrial sensors, smart homes, smart grids, online services infrastructure monitoring etc.

Why the Need for a Big Data Framework Dedicated to IoT Use Cases?

There are several reasons –

The architecture of a system designed for Big Data processing typically involves message queues, in-memory caching & stream processing components. All of these are implemented with the help of tech like Kafka, Storm, Spark, Redis etc.

Every tech has a learning curve. Plugging in all the components & making them work in conjunction adds to the management complexity & the development costs.

TDEngine is kind of a full stack for IoT data processing. It has inbuilt caching, message queuing & stream processing, which averts the need for the other tech typically required when designing a Big Data processing system.

Also, since the data from IoT devices is time-series & largely structured in nature, the framework delivers 10x the performance of the commonly used Big Data tech in the industry, as it is built to handle scenarios specific to IoT data.

Here is a performance comparison report against Cassandra, InfluxDB & MySQL.

TDEngine is built around the common characteristics of IoT data: it’s time-series & structured in nature, it’s rarely updated, there is generally only a single data source per device & the read/write ratio is small.
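To get a feel for it, here is a rough sketch, in Java over JDBC, of what writing device data into TDEngine looks like. The driver class, connection URL, credentials & schema below follow TDEngine’s documented JDBC & SQL interface as best I recall, so treat them as assumptions rather than gospel.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// A rough sketch only. The driver class, URL, credentials & schema are assumptions
// based on TDEngine's documented JDBC/SQL interface, not verified against a live install.
public class TdEngineSketch {

    public static void main(String[] args) throws Exception {
        Class.forName("com.taosdata.jdbc.TSDBDriver"); // assumed driver class

        try (Connection conn = DriverManager.getConnection(
                "jdbc:TAOS://localhost:6030/", "root", "taosdata"); // assumed URL & default credentials
             Statement stmt = conn.createStatement()) {

            stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS iot");
            stmt.executeUpdate("USE iot");

            // One "super table" per device type, one table per device (assumed schema).
            stmt.executeUpdate("CREATE STABLE IF NOT EXISTS meters "
                    + "(ts TIMESTAMP, current FLOAT, voltage INT) TAGS (location BINARY(64))");

            // Time-series writes are plain SQL inserts; no separate queue or cache to wire up.
            stmt.executeUpdate("INSERT INTO d1001 USING meters TAGS ('plant-7') VALUES (NOW, 10.2, 219)");
        }
    }
}
```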

For more insight, here is a good read on why general Big Data frameworks don’t work best for IoT data processing use cases.

Here is the GitHub repository for it.

 

Cutting Down the Load On Our APIs With WebHooks

If the consumers of your API poll it too often to check for the availability of new information or to check if an event has occurred, WebHooks can save the day & cut down the unwanted load on your servers by notches.

REST APIs are request-response driven. To share a service with the world, we expose an API. Consumers of the API hit it with a request & get a response.

But what if the new information isn’t available on the server yet, or an event hasn’t occurred yet? There is no way for consumers to know. They will keep hitting the API with requests, which piles unwanted load on the server & could eventually bring it down.

To avoid this embarrassment, we can use WebHooks when designing our APIs. WebHooks are more like callbacks: I’ll call you when a piece of new information is available; you carry on with your work.

WebHooks enable communication between two services without middleware. The mechanism is event-based.

So, how do they work?

To use webhooks, consumers register an HTTP endpoint with the service. It’s like leaving a phone number: call me on this number when the event occurs, & I won’t keep calling you to check.

Whenever new information is available on the backend, the server fires an HTTP event to all the registered consumer endpoints, notifying them of the update. Browser notifications are a good example of the same idea.

Instead of visiting the websites every now and then for new info, the websites notify us when they publish new content.
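To make this concrete, here is a minimal sketch of the provider side in Java: consumers register a callback URL once, & the server POSTs to every registered URL whenever an event occurs. The class & method names are made up purely for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A minimal sketch of a webhook provider; names are hypothetical, not a specific library's API.
public class WebhookPublisher {

    private final List<String> registeredEndpoints = new CopyOnWriteArrayList<>();
    private final HttpClient httpClient = HttpClient.newHttpClient();

    // Consumers call this once instead of polling the API repeatedly.
    public void register(String callbackUrl) {
        registeredEndpoints.add(callbackUrl);
    }

    // When new information is available, push it to every registered endpoint.
    public void publish(String eventJson) {
        for (String url : registeredEndpoints) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                    .build();
            // Fire-and-forget here; a real implementation would add retries & request signing.
            httpClient.sendAsync(request, HttpResponse.BodyHandlers.discarding());
        }
    }
}
```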

Here is a good read on the do’s & don’ts of using WebHooks with APIs.

Zapier engineering writes about how to use webhooks the right way.

 

Micronaut – A JVM-Based Framework for Microservices & Reactive Programming

I recently came across Micronaut, a JVM-based framework for writing microservices, serverless apps & features based on reactive programming. Since Micronaut is JVM-based, it supports languages like Java, Groovy & Kotlin.

We were already writing microservices with Spring & reactive programming features with Spring Reactor, Play & Akka. So, what’s new & different about Micronaut?

Micronaut injects dependencies at compile time as opposed to doing it at runtime. Spring injects dependencies at runtime via reflection.

Injecting dependencies at compile time naturally speeds up the loading of the program: application startup is fast & the memory footprint is small.
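As a rough sketch of what that looks like in code, here is a minimal Micronaut controller with constructor injection. The class & route names are made up for illustration; the point is that the wiring is generated by Micronaut’s annotation processor at compile time (newer Micronaut versions use jakarta.inject, older ones javax.inject).

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import jakarta.inject.Singleton;

// Hypothetical names; a minimal sketch of Micronaut's compile-time dependency injection.
@Singleton
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@Controller("/hello")
public class HelloController {

    private final GreetingService greetingService;

    // The wiring for this constructor is generated at compile time by
    // Micronaut's annotation processor; there is no runtime reflection scan.
    public HelloController(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    @Get("/{name}")
    public String hello(String name) {
        return greetingService.greet(name);
    }
}
```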

It also has native support for cloud-native applications, with features like distributed tracing. I’ve already talked about distributed tracing here; have a read.

Non-blocking applications are easier to write with Micronaut. These are apps which establish a persistent connection between the client & the server with the help of technologies like WebSockets, Comet, long polling, message queues, in-memory caches etc. This is how we implement streaming APIs.

I’ve written more about it here.

REST API vs Streaming API – persistent connection
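As a sketch of the persistent-connection idea, here is roughly what a WebSocket endpoint looks like in Micronaut. The endpoint path & class name are made up for illustration.

```java
import io.micronaut.websocket.WebSocketBroadcaster;
import io.micronaut.websocket.WebSocketSession;
import io.micronaut.websocket.annotation.OnMessage;
import io.micronaut.websocket.annotation.OnOpen;
import io.micronaut.websocket.annotation.ServerWebSocket;

// Hypothetical endpoint; a minimal sketch of a persistent WebSocket connection in Micronaut.
@ServerWebSocket("/ws/updates")
public class UpdatesSocket {

    private final WebSocketBroadcaster broadcaster;

    public UpdatesSocket(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster;
    }

    @OnOpen
    public void onOpen(WebSocketSession session) {
        // The connection stays open; the server can push data at any time.
        session.sendSync("connected");
    }

    @OnMessage
    public void onMessage(String message, WebSocketSession session) {
        // Push every incoming message to all connected clients instead of
        // waiting for them to poll for it.
        broadcaster.broadcastSync(message);
    }
}
```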

Here is a DZone article on building microservices with Micronaut.

In comparison to Spring, Micronaut has just arrived; it’s a relatively new framework. Besides, Spring is a mature, production-tested tech. Getting hyped about new tech is one thing & successfully running it in production is another.

Go through the framework, check out its pros & cons, the support for third-party tech etc. Have a thorough understanding of your use case & then decide if you want to build your product with it.

Here is a performance comparison report between Spring Boot & Micronaut.

 

Polyglot Virtual Machine – What is it?

It’s a virtual machine which enables developers to run applications written in different languages like JavaScript, Python, Ruby, Java, C, C++ etc. on a single runtime.

GraalVM is one such polyglot virtual machine; it removes the isolation between different languages & facilitates a shared runtime.
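As a quick illustration, here is a minimal sketch using GraalVM’s polyglot API: Java code evaluating a snippet of JavaScript inside the same runtime. The class name is made up for illustration.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

// A minimal sketch of GraalVM's polyglot API: Java evaluating JavaScript in a shared runtime.
public class PolyglotDemo {

    public static void main(String[] args) {
        try (Context context = Context.create()) {
            // Run a JavaScript expression and read the result back as a Java int.
            Value result = context.eval("js", "[1, 2, 3].map(x => x * 2).reduce((a, b) => a + b)");
            System.out.println("JS result computed on the JVM: " + result.asInt());
        }
    }
}
```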

 

Vaadin – A UI Framework for Java Web Apps – Why Use it?

Vaadin is a UI framework specifically built for writing Java web applications. The UI code is written in Java & executed on the server, while the UI itself is rendered as HTML5 in the browser.

The framework takes care of the communication between the client and the server.

Why use Vaadin when we have popular frameworks like React, Angular & Vue?

Vaadin comes in handy for Java developers who do not have much hands-on experience with web development, specifically front-end code.

With Vaadin, they can easily create a UI by writing code in Java. Vaadin also provides a large set of reusable UI components which covers practically all the common use cases.
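Here is a minimal sketch of a Vaadin Flow view written entirely in Java; the class & route name are made up for illustration, & the framework takes care of rendering it as HTML in the browser.

```java
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

// Hypothetical view; a minimal Vaadin Flow UI built purely in Java, no HTML or JS written by hand.
@Route("hello")
public class HelloView extends VerticalLayout {

    public HelloView() {
        TextField name = new TextField("Your name");
        // The click listener runs on the server; Vaadin handles the client-server round trip.
        Button greet = new Button("Greet", click ->
                Notification.show("Hello, " + name.getValue()));
        add(name, greet);
    }
}
```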

The framework is optimized for writing data-intensive single-page apps & has built-in support for the Spring framework. Vaadin reminded me of a tiny web app I wrote years back using GWT (Google Web Toolkit).

What really surprised me is that the framework has over a freakin’ 100M end users. Woah!!

Here is the GitHub repo for it.

 

More On the Blog

How Hotstar scaled with 10.3 million concurrent users – An architectural insight

How Evernote migrated & scaled their cloud with Google Cloud Platform

How to Pick the Right Architecture For Your App – Explained With Use Cases

How to Pick the Technology Stack For Your Million $ App/Startup

A Super Helpful Guide to Avoiding Cloud Vendor Lock-In When Running Your Service on Cloud