Azure IoT evaluation - what's the benefit


#1

I’ve been trying to get my head around the potential of using Azure IoT Hub as a complement to the Imp architecture in our product, and I’m wondering what the true differentiating benefit would be that justifies the extra cost.

My setup is quite particular, but not unusual:

  • host uC (Cortex M4F) on a custom HW board running the bulk of the real-time processing.
  • Imp (originally IMP002, now Imp004m and potentially moving to ImpC001) connected to the host via a serial bus, optimised for speed and reliability with HW handshaking.
  • Squirrel code that maintains a synced copy of the host configuration in both the device and the agent, and that also receives regular real-time sensor data reports from the host and stores them.
  • Squirrel code that receives commands from external apps and passes them on to the host for remote control (RPC approach)
  • Elaborate custom agent REST API using the Rocky lib (a rough sketch of these last three pieces follows this list)
  • Android & iOS apps that use the REST API to interact with the device
  • back-end database storing the device list, connection params and config data, as well as real-time data logs.
  • a few hundred to maybe one day a few thousand end-user devices, not millions
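To make that concrete, the config mirror, RPC pass-through and Rocky API look roughly like this. This is a stripped-down sketch rather than our real code: the event names, routes and Rocky version are placeholders, and the UART framing to the host is left out.

```
// ---------- device (Squirrel) ----------
// Device-side copy of the host configuration, refreshed whenever the
// host uC pushes a new config blob over the UART link (framing not shown)
config <- {};

function onConfigFromHost(cfg) {
    config = cfg;                          // update the local mirror
    agent.send("config.update", config);   // keep the agent's mirror in sync
}

// Remote-control path: agent -> device -> host (RPC-style)
agent.on("host.command", function(cmd) {
    // serialise cmd and write it to the host over the UART (not shown)
});

// ---------- agent (Squirrel) ----------
#require "Rocky.class.nut:2.0.2"   // assumed Rocky version

config <- {};                      // agent-side mirror of the host configuration
app <- Rocky();

// Keep the agent mirror in sync with the device
device.on("config.update", function(cfg) {
    config = cfg;
});

// REST API consumed by the Android/iOS apps
app.get("/config", function(context) {
    context.send(200, http.jsonencode(config));
});

// RPC pass-through: app -> agent -> device -> host
app.post("/command", function(context) {
    device.send("host.command", context.req.body);
    context.send(202, http.jsonencode({ status = "queued" }));
});
```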

I’ve been reading up on Azure IoT Hub with its Device Twin capability, and I’m wondering what extra functionality/security a setup built on this framework would bring that justifies the additional cost of all the Azure subscriptions and usage charges.

What I’ve listed so far (besides the obvious one of not having to run/lease my own servers):

  • potential to have a continuous, bidirectional, socket-like connection between the host/imp and the external apps/web apps, using MQTT on the imp side and SignalR on the mobile app/web app side, versus polling-mode HTTP REST. Creating a REST API on the Azure IoT Hub side, fed by the MQTT connection, seems like overlapping functionality with the Imp Agent REST API capability and not necessarily easier to implement. The main advantage would be lower complexity in the Squirrel code, which would no longer have to maintain the host configuration mirror in the device and agent, and lower complexity in the app, since no ‘polling management’ needs to be performed (I’m not using long polling yet; see the sketch after this list for what that polling management looks like today).
  • easier storage facilities for config & historical data
  • (maybe) some advanced analytics on the historical data
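On the ‘polling management’ point: today the app has to poll, and the cheapest way to keep that from wasting data is something like a revision counter on the agent, so the app only fetches the full config when it has actually changed. A hypothetical sketch, building on the previous one (the /config/rev route and rev field are made up):

```
// ---------- agent (Squirrel) ----------
// Builds on the previous sketch's Rocky app and config mirror
configRev <- 0;                    // bumped every time the host config changes

device.on("config.update", function(cfg) {
    config = cfg;
    configRev++;                   // lets the apps detect a change cheaply
});

// Tiny endpoint the app can poll frequently; the payload is a few bytes,
// so frequent polling wastes far less data than re-fetching the full config
app.get("/config/rev", function(context) {
    context.send(200, http.jsonencode({ rev = configRev }));
});
// The app only GETs /config (previous sketch) when rev has actually changed
```

With a push path (MQTT + SignalR, or websockets), both the rev bookkeeping on the agent and the poll-compare-fetch loop in the app disappear, which is the complexity reduction the first bullet is about.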

All insights or experiences are more than welcome. For now I don’t really see the big benefit, but at the same time I see EImp pushing this quite strongly (with the addition of MQTT, the creation of the Azure IoT Hub lib and Hugo appearing on Channel9), so I wonder what I’m missing …


#2

What we’ve found is that every single one of the big-company IoT systems (AWS, Azure, Google, etc.) has a pretty similar approach, based on the classical models of IoT: there’s a message broker. There’s a way to do some sort of stream processing (or at least ingest data into existing stream-processing systems). There’s some concept of a virtualized model of the end device. There’s a provisioning service.

In reality, whilst these architectures look great on paper, real applications tend to have awkward edge cases which are not well addressed by these systems, and often dealing with these messy bits severely delays an IoT deployment. This is where imp agents come in, handily covering those gaps and allowing customers to get their products to market - and keep them maintainable for years - whilst still retaining full access to these big IoT services. The big services are often required for their data analytics pipes and enterprise integrations, or just because the rest of the customer’s backend services are in the Azure/AWS/Google world and they want IoT to be in there too.

Historically - and remember, imp agents have existed a lot longer than any of these big cloud offerings - agents provided a lot of “glue” to hold IoT apps together in a very flexible way: in effect, a fully flexible virtual device model, an on-ramp to non-IoT-specific data pipes like Kinesis or destinations like Firebase, and flexible provisioning flows. As your application was built on the imp architecture originally, it’s not surprising that you’re looking at these other services and thinking “yes, but my app does this already”.

As IoT becomes more pervasive, many people are coming to IoT with a preference for a certain data service or provider and then are working from the cloud down towards the device - these are the people we target with the new partners and integrations. We’re really good at secure comms and end device management, and though you can use agents for a lot more, a lot of the new customers are using them for light (but absolutely essential) plumbing work.

In terms of the additional use cases, agents will be getting websocket functionality in 2019, which will give your web/mobile apps a lower latency path to an agent - but as before, we are categorically not in the data storage or analytics space and instead ensure that our customers can work with their choice of partners for that.


#3

Thanks a lot, Hugo.
This is basically what I was thinking myself, but I wasn’t sure whether I’d missed something.
Our main ‘ask’ is the websocket functionality, so we can ‘set and forget’ a connection between the app and the agent for as long as the client app is active (most sessions are no more than 15 mins). For now we’ll do some experimenting with long polling. Our main problem so far has been the latency before host config changes are reflected in the mobile app or website. You can make that latency very short with frequent polling, but as these settings may change only once a day or even once a week, that’s a lot of wasted data being sent around for no use.
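In case it’s useful to anyone else, the long-polling experiment we have in mind looks roughly like this: the agent parks the incoming HTTP request and only answers it when the config actually changes, or when a timeout expires. A minimal sketch, assuming a bare http.onrequest handler instead of Rocky and a made-up 30-second window (the window has to stay well inside whatever limit the agent puts on unanswered requests):

```
// ---------- agent (Squirrel) ----------
config  <- {};     // mirror of the host configuration (as before)
waiters <- [];     // response objects of clients currently long-polling

// Answer every parked client as soon as the device reports a config change
device.on("config.update", function(cfg) {
    config = cfg;
    foreach (res in waiters) res.send(200, http.jsonencode(config));
    waiters = [];
});

http.onrequest(function(req, res) {
    if (req.path == "/config/wait") {
        waiters.append(res);                    // park the request
        imp.wakeup(30, function() {             // assumed 30 s long-poll window
            local idx = waiters.find(res);
            if (idx != null) {                  // still unanswered after 30 s:
                waiters.remove(idx);            // tell the app to re-poll
                res.send(204, "");
            }
        });
    } else {
        // plain GET of the current config for clients that don't long-poll
        res.send(200, http.jsonencode(config));
    }
});
```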

As far as analytics are concerned, our needs are quite simple, with very low data volumes coming in, so the big analytics engines in Azure etc. are already overkill… we can easily do that with a simple web worker or a VM running some code somewhere in the cloud.
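For completeness, the ‘simple’ path is really just the agent forwarding each report to our own backend, which then does whatever lightweight aggregation we need. A hedged sketch; the URL, header and event name are placeholders, not our real endpoint:

```
// ---------- agent (Squirrel) ----------
// Forward each sensor report from the device to our own backend/VM
const LOG_ENDPOINT = "https://example.com/api/datalogs";   // placeholder URL
const LOG_API_KEY  = "REPLACE_ME";                          // placeholder key

device.on("sensor.report", function(report) {
    local headers = {
        "Content-Type" : "application/json",
        "X-Api-Key"    : LOG_API_KEY
    };
    local body = http.jsonencode({
        agent     = http.agenturl(),   // identifies which device this came from
        timestamp = time(),
        report    = report
    });

    http.post(LOG_ENDPOINT, headers, body).sendasync(function(resp) {
        if (resp.statuscode >= 300) server.error("Log upload failed: " + resp.statuscode);
    });
});
```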