Common Sense Connected Intelligence for 2020

Connected Intelligence

When it comes to Connected Intelligence technologies like #Mobile, #5G and the #InternetOfThings, I’m all about moving the “Value Needle” as quickly, easily and sustainably as possible. #IoT #IIoT

This means following the same strategy you use when taking a big test. Do the easy stuff first and leave the hard things for later. So how does that apply to connecting your organization’s people and things to drive additional value? Read on as I illustrate some simple examples to get you started this year:

Remote Knowledge

People doing remote work in the field still capture information with a paper and pencil. Then they drive back to the office at the end of the workday to transcribe their scribbled data into their organization’s back office computer system along with a dose of human error. I know you’re thinking this is impossible in the 21st century but I see it all the time. Thanks to the magic of cellular data networks combined with smartphones, tablets and smart, connected IoT devices, this inefficient activity can come to an end.

If the remote activity requires dynamic, person-to-person interaction, data should be captured and validated by a smartphone or tablet app and wirelessly transmitted back to the organization. If the remote activity is an inspection that consists of taking an analog reading, drop the clipboard and utilize one or more sensors to convert analog values to digital equivalents and wirelessly transmit the data back to the organization. While newer machines may include the built-in compute, networking, storage and sensors to get the job done, most of the world is filled with older machines that must be retrofitted with these capabilities. If retrofitting isn’t possible, a visual inspection of an asset can be conducted by a fixed or mobile camera, where the photo is transmitted back to the organization and computer vision converts the analog image to digital values.

How does this move the Value Needle?

This is the simplest first step you can take in augmenting and/or replacing an expensive activity with connected intelligence. Without employing analytics, you cut costs by reducing or eliminating remote activities that include travel, vehicles, and fuel expenses just to name a few. You lower your risk and liability by reducing the need to put people in vehicles or having them perform inspections in precarious or otherwise dangerous situations. You gain speed and agility through the instant availability of data allowing your organization to respond to problems and opportunities more quickly.

Connecting Old Things

While we’re all excited about what the future holds, it’s important to interface with the world as it exists in the present. The overwhelming majority of the Things that are all around us have existed for years or even decades. In order to derive valuable insights from these operational technology (OT) machines and environmental systems, you’ll have to connect to them in often unfamiliar ways and learn how to speak their language. Achieving broad success requires you to be comfortable with brownfield projects. In many scenarios you’ll find yourself using low-speed serial cables and communicating via 100+ wire protocols and data formats.

You better have your Rosetta Stone handy. Don’t be surprised if you must interface with a programmable logic controller (PLC) instead of connecting to a machine directly. Oftentimes, the OT folks in factories won’t let you get near their machines. If you want to be successful, you’ll have to make peace with this OT/IT reality.

How does this move the Value Needle?

Many of these existing systems are either unconnected or connected to a closed system, often built by the same manufacturer. Imagine dozens of machines on the shop floor individually connected to their own, proprietary analytics systems. By freeing the data found in these systems, you go from having countless data silos to achieving a blend of machine, environmental, organizational and 3rd party data. This delivers much needed context and allows you to see the big picture across systems of systems to make better decisions.

Bootstrapping New Things 

You know how all the industry analysts throw darts at a board and tell you about the tens of billions of connected devices we’ll have in the coming years? They keep having to backtrack on these predictions because of one very important reason. Bringing the Internet of Things to life is still largely a very manual process of configuring devices, networks, code, platforms and security. IoT today operates like one big aftermarket car stereo store. For the Internet of Things to be the huge success we all want it to be, we have to remove many of these manual processes and leave custom work to those targeting specific use cases.

It must all start when Things are created on an assembly line by original equipment manufacturers (OEMs). The smart, connected machine must have compute, storage, networking, sensors and actuators baked-in from the very beginning. In other words, a microcontroller, cellular module, software, a trusted platform module (TPM) and associated security tokens or certificates from the get-go. This new product will be created in one country, possibly shipped to a distributor in another country, and purchased by a customer in yet another country.

When this smart, connected product wakes up somewhere in the world, it must first make an automatic connection to a local mobile network operator via its cellular module and associated connection management capabilities. Next, it must use that connectivity to access the OEM’s globally-available service and pass along its identity and security credentials. This service will determine what and where the product is, who bought it, and ultimately what IoT platform it should send its telemetry to and receive commands from.

How does this move the Value Needle?

By following this process, much of the friction that’s holding the Internet of Things back is removed. Time can be better spent targeting specific use cases, allowing customers to get to value more quickly at a lower cost. This better use of human and machine resources will exponentially accelerate the rising tide of IoT that lifts all boats.

Many Edges

I know you’ve heard a lot about Edge Computing over the last several years. As it relates to the Internet of Things, the Edge just means moving compute and associated data filtering, aggregation and analytics closer to the machines and environmental systems that actually create the data. When the IoT megatrend heated up ten or so years ago, companies that weren’t familiar with the decades of capturing telemetry and controlling systems via M2M and SCADA assumed IoT data must go to one of the many public clouds. Unacceptable latency, high broadband costs and data governance issues gave rise to concepts like the Fog and Edge to mitigate cloud shortcomings. Since I’ve spent most of my time in the industrial space, this has typically meant placing edge compute near machines on the factory floor, or even on a bullet train where decisions could be made in milliseconds. Performing computing tasks at the Edge has also worked well for discrete and process manufacturers who say, “the data doesn’t leave my factory.”

More recently, the telecom industry has thrown its hat into the ring by placing distributed compute infrastructure and resources at the edge of service provider networks where “last mile” content and applications are delivered. While this particular Edge isn’t on-prem, the concepts of supporting low latency, data intensive applications still apply, since it runs at the edge of cellular networks and is significantly closer to the source of IoT data than distant public clouds. It also provides the benefit of reducing congestion and signal load on the core network, so applications and analytics perform better. This architecture is sometimes referred to by the acronym MEC, which can mean either mobile edge computing or multi-access edge computing. Expect to see this type of Edge compute used heavily in smart cities, public infrastructure, video games with reduced ping times and vehicle-to-everything (V2X) scenarios.

How does this move the Value Needle?

Moving compute resources to the Edge benefits myriad IoT use cases including split-second application responsiveness, reduction in bandwidth costs and congestion, plus the granular data governance needed to meet local, city, state/province, and country security + privacy requirements. Public clouds still have their strengths. As always, just use the right tool for the job.

Sustainable Side Effects

When you don’t put a person in a car, truck or plane to perform a remote inspection, you’re not burning fuel or congesting freeways. Think of wireless data networks as your replacement for travel. Using smart, connected, new machines and retrofitting older machines ensures you always know about their health and performance so they can operate more cleanly and efficiently. Edge computing allows you to address problems more quickly while alleviating network congestion. As always, think of the Internet of Things as your early warning system to detect pollution, fires, water leakage, unsafe machinery, excessive electricity usage, deforestation, water contamination, and thousands of other important use cases.

Summary

As you can see, none of the topics I discussed required Machine Learning, Deep Learning, Neural Networks, or any kind of Artificial Intelligence to move the Value Needle. Throughout 2020 I want you to avoid the hype that bombards us from every direction and focus on specific problems that can be solved in your organization through the use of Connected Intelligence. Steer clear of esoteric technologies and concepts that you and your colleagues struggle to wrap your head around.

If you start small, keep things simple, and iterate steadily throughout the year, I know you can knock it out of the park and derive tremendous value for your organization while being sustainable.

Monetizing the Industrial Internet of Things

Mobile Future Forward

I was privileged to moderate a panel discussion on monetizing Industrial #IoT at the Mobile Future Forward conference in Seattle. This year’s event focused on Connected Intelligence and the intersection of Man, Machines and Platforms.

The session, Monetizing and Scaling the Industrial Internet of Things (IIoT) Wave, featured a distinguished panel of guests including:

  • Allen Proithis – President at Sigfox
  • John Aisien – CEO at Bluecedar
  • Russ Green – CTO and Head of Products at SAP Digital Interconnect
  • Adam Hertz – Vice President of Engineering at Comcast

In front of a large audience we discussed a variety of important topics including:

  • Why is industrial IoT moving faster than consumer?
  • As the next generation of intelligent endpoints, how are the Mobile and IoT ecosystems blurring?
  • How do the various types of wireless connectivity options fit into IIoT solutions?
  • How do companies get IoT platforms integrated with their existing systems of record?
  • What should organizations be doing to secure their IoT infrastructure?
  • What are different ways companies can monetize IIoT?

We had a lively discussion with great questions from the audience. Chetan Sharma, the number one name in Mobile, knows how to put on a top tier conference and his insights were invaluable.

ROBTIFFANY.COM Named in the Top 100 Websites for IoT Industry Professionals

Top IoT Website

Thrilled to see robtiffany.com included in this distinguished group of the world’s top Internet of Things companies, news sites and individual #IoT influencers and luminaries like Rob van Kranenburg, Peggy Smedley, Stacey Higginbotham and Scott Amyx.

Years of innovating in the IoT, M2M, cloud and mobile industries combined with sharing my knowledge and opinions on https://robtiffany.com has been really rewarding for me.

Rob Tiffany Blog

Check it out at: http://blog.feedspot.com/iot_blogs/

Rob Tiffany Named a Top 100 M2M Influencer

M2M Influencer

In Onalytica’s 2016 analysis and ranking of individuals and brands in the Machine to Machine space, Rob was ranked a top 100 #M2M influencer. #IoT

For those of you who are unfamiliar with the term, Machine to Machine (M2M) refers to the direct communication between devices using a variety of communications channels, including wired and wireless. Many of you will think this is the same or similar to the Internet of Things and you wouldn’t necessarily be wrong. I started my career in the M2M space connecting unintelligent vending machines to primitive wireless networks to derive value from remotely monitoring them. Needless to say, a lot has changed since then.

Analytic M2M

In modern terms, traditional M2M is often expressed as the Industrial Internet of Things (IIoT) or Industrie 4.0. Imagine the value to be derived from connecting, analyzing and acting on data from industries such as healthcare, automotive, oil and gas, agriculture, government, smart cities, manufacturing, and public utilities. It’s an exciting space to be in and it’s rapidly transforming our world.

Check it out at http://www.onalytica.com/blog/posts/M2M-2016-Top-100-Influencers-Brands/

Internet of Things Thought Leaders to Watch In the Next 4 Years

IoT Thought Leaders

I’m thrilled and humbled to be named one of the world’s 29 top Internet of Things thought leaders. #IoT

This distinguished list compiled by DADO Labs includes the following IoT luminaries:

Check it out at: http://dadolabs.com/iot_thought_leaders/

Speaking at VSLive! Redmond on Azure IoT

VSLive

If you’re attending Visual Studio Live! Redmond 2015, come check out my session on Making the Internet of Things Real with Azure #IoT Services.

At Microsoft, we believe that the Internet of Things starts with your things by building on the infrastructure you already have and using the devices you already own. Microsoft has played a central role in facilitating Internet of Things (IoT) forerunners including SCADA (Supervisory Control and Data Acquisition) and M2M (Machine to Machine) since the 1990s. We’ve provided real-time, embedded platforms to power sensors that no one ever sees plus advanced robotics, medical devices and human machine interfaces (HMI) just to name a few. Today, Microsoft Azure IoT services delivers hyper-scale telemetry ingestion, streaming analytics, machine learning and other components to unlock insights from the Internet of Your Things in order to transform your business.

Hope to see you there!

Getting Started with Azure IoT services: Event Hubs

Event Hub Graphic

Microsoft Azure Event Hubs is a managed platform component of Azure #IoT services that provides telemetry data ingestion at cloud scale with low latency and high reliability.

For your Internet of Things (IoT) scenarios, you can think of Event Hubs as the loosely-coupled beginning of an event pipeline that sits between event publishers like sensors and event consumers like Azure Stream Analytics. With industry analysts predicting tens of billions of “Things” sending telemetry over the Internet in the coming years, most data ingestion solutions won’t be able to handle the onslaught of information. Event Hubs and Azure are designed for this very scenario. Unlike queues, Event Hubs implement partitions (shards) to support massive horizontal scale, processing a million events per second. Consumer Groups provide consuming applications with independent views of the Event Hub from which to read the telemetry streams, feeding complex event processing, storage or other downstream services.

Event Hub Graphic

Now that you have a brief summary of this event ingestion technology, it’s time to step through the creation of your own Event Hub so you can start bringing your IoT scenarios to life.

Go to your Azure Portal and click the Service Bus icon on the left side of the page as shown below:

Create Service Bus Namespace

If you have an existing Service Bus namespace, then you can reuse it. Otherwise, click Create a New Namespace.

The Create a Namespace dialog will pop up on your screen as shown below:

Create Namespace Dialog

In this dialog you will enter a unique Namespace Name, select a Region, select a Subscription to bill against, choose Messaging as the Type in order to support Event Hubs and choose Standard as the Messaging Tier. This allows you to support a sufficient number of Brokered connections (AMQP) into the Event Hub and up to 20 consumer groups leading out of the Event Hub.  Click the checkbox when you’re done.

With your Service Bus namespace created, click on the appropriate highlighted row as shown below:

Service Bus Created

Click Event Hubs from one of the choices across the top of the page to bring up the page shown below:

Create Event Hub

Click Create a New Event Hub.

Select Quick Create, which should be sufficient for most IoT scenarios.

Create Event Hub Quick Create

Enter a unique Event Hub Name, select the same Region as your Service Bus Namespace, select a Subscription to bill against, select the Service Bus Namespace you previously created and then click the Create a New Event Hub checkbox.

With your Event Hub created, click on the appropriate highlighted row as shown below:

Event Hub Created

Click Configure from one of the choices across the top of the page to bring up the page shown below:

Event Hub Configure

The Message Retention text box allows you to configure the number of days you’d like messages retained in the Event Hub, with a default of one day. The Event Hub State combo box allows you to enable or disable your Event Hub. Following the Quick Create path gave you a Partition Count of 16. This value can’t be changed once it’s been set, so consider a Custom Create of your Event Hub if you need a different value. Partitions provide the horizontal scale for parallel consumption, while throughput units are the bandwidth scale unit: each one supports message ingress of 1 MB/sec and egress of 2 MB/sec. You can set the number of Event Hub throughput units on your Service Bus Scale page. The default value is set to one.

In your next configuration step, you will create two shared access policies to facilitate security on your message ingress and egress as shown below:

SharedAccessPolicies

Click into the Name textbox and enter an ingress name then select the Permissions combo box and select Send.  Repeat the process on the newly created row below by adding an egress name and then select Manage, Send, and Listen from the combo box.  Click the Save icon at the bottom of the page and then you’ll notice that shared access keys are generated for both your message ingress and egress policies.  Those keys will be used to create the connection strings used by your IoT devices, gateways and event consumers like Azure Stream Analytics.

To view and use those connection strings, click Dashboard at the top of the page and then click the Connection Information key icon at the bottom of the page to bring up the Access connection information dialog as shown below:

Connection Strings

This is where you will go to copy the Shared Access Signature (SAS) key connection strings into your code to authenticate access to entities within the namespace. The authentication and security model ensures that only devices that present valid credentials can send data to an Event Hub. It also ensures that one device cannot impersonate another device. Lastly, it prevents a rogue device from sending data to an Event Hub by blocking it. Of course, all communication between devices and Event Hubs occurs over TLS.

To wrap things up, click Consumer Groups from one of the choices across the top of the page to bring up the page shown below:

Consumer Groups

Rather than using the $Default Consumer Group, it’s a good idea to create one or more of them yourself to provide views of the Event Hub for consumers like Stream Analytics. This is a simple process that starts with clicking the + Create icon at the bottom of the page.

The Create a Consumer Group dialog will pop up on your screen as shown below:

Parking Group

Type in a meaningful name in the Consumer Group Name textbox and then click the checkbox to save and exit.

Some of you may be wondering why you need Event Hubs for event ingestion when you’ve been uploading data from disparate clients to servers using SOAP + XML and REST + JSON for more than a decade. The answer has to do with wire protocol efficiency and reliability. By default, Event Hubs use the Advanced Message Queuing Protocol (AMQP), an OASIS standard. This is a binary, peer-to-peer wire protocol designed for the efficient, reliable exchange of business messages that got its start on Wall Street. If it’s good enough for the critical financial transactions between the world’s largest investment banks and stock exchanges, I’m pretty sure it’s good enough for the rest of us.

At this point, your Event Hub should be up and running. The next step is to get a device sending telemetry into your Event Hub so you can see it working. To test this out, I’ll walk you through the creation of a simple Windows console application.

To get started, create a new C# Console Application in Visual Studio 2013 and call it ContosoIoTConsole as shown below:

NewProject

In the Solution Explorer, right-click on References and select Manage NuGet Packages…

In the Search Online box type Azure Service Bus.

NuGet

Install Microsoft Azure Service Bus version 2.6.1 or later.

After that, right-click on References again and add a reference to System.Configuration so your application can read from configuration files.

In the Solution Explorer, open the App.config file. You’ll notice that it’s already filled with various Service Bus extensions. I want you to scroll down to the appSettings section at the bottom where you’ll see the beginnings of a Service Bus connection string waiting to be filled-in with your specific Event Hub data as shown below:

AppSettings

Replace [your namespace] with the name of the Service Bus Namespace you created in the Azure portal. I called my namespace ContosoIoT.

As you slide across to the right, you’ll see SharedAccessKeyName=. I want you to replace RootManageSharedAccessKey with the name of the data ingress shared access policy you created in your Event Hub. I named mine TelemetrySender.

In order to replace [your secret] with the correct value, go to the Dashboard page of your Event Hub and click the Connection Information key at the bottom of the page. A dialog containing access connection information with connection strings will appear. Copy the connection string from the data ingress shared access policy you created and paste it into Notepad, since it contains more than just the value you need. Then copy just the SharedAccessKey value at the end of the connection string into [your secret], and save and close the file.

Hopefully along the way you noticed that you can also paste the entire connection string into the value to get the same result as the directions above.
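Putting those pieces together, the finished appSettings entry should look something like the sketch below. It uses the ContosoIoT namespace and TelemetrySender policy names from this walkthrough, assumes the default key name the NuGet package adds to App.config, and leaves the secret as a placeholder:

```xml
<appSettings>
  <!-- Service Bus connection string for the ContosoIoT namespace using the -->
  <!-- TelemetrySender ingress policy; [your secret] remains a placeholder -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://ContosoIoT.servicebus.windows.net/;SharedAccessKeyName=TelemetrySender;SharedAccessKey=[your secret]" />
</appSettings>
```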

Keep in mind that when you deploy your individual devices to production, they won’t all be using this same key like you’re doing now for this test scenario. SAS tokens based on the shared access policies must be created and used by each device sending data to Event Hubs.
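As a sketch of what per-device tokens look like with this generation of the Service Bus SDK, the SharedAccessSignatureTokenProvider class can mint a SAS token from a policy’s key. The resource URI, policy name and lifetime below are illustrative assumptions, not values from this walkthrough:

```csharp
using System;
using Microsoft.ServiceBus;

// Mint a SAS token for one device from the ingress policy key.
// All values below are placeholders, not real credentials.
string deviceToken = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
    "TelemetrySender",                                      // shared access policy name
    "[your secret]",                                        // that policy's key
    "sb://ContosoIoT.servicebus.windows.net/youreventhub",  // resource the token is scoped to
    TimeSpan.FromDays(30));                                 // token lifetime
```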

Now it’s time to jump in and write some code. Open Program.cs and add:

using Microsoft.ServiceBus.Messaging;
using System.Configuration;

with all the other using statements found above the namespace.

The actual three lines of code needed to send the IoT equivalent of “Hello World” to your Event Hub are shown below:

Code

First you grab the connection string you created in App.config. Next, you create an EventHubClient based on the connection string and the name of your Event Hub. Lastly, you call the Send method to pass along encoded event data as AMQP. In this scenario you’re only sending a simple string but you can send classes as well.
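For reference, those three lines look roughly like this sketch. The event hub name is a placeholder for whatever you entered in the portal, the config key assumes the default one the NuGet package adds to App.config, and you’ll also want using System.Text for Encoding:

```csharp
// Grab the Service Bus connection string you filled in within App.config
string connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];

// Create a client bound to your Event Hub (use the name you chose in the portal)
EventHubClient client = EventHubClient.CreateFromConnectionString(connectionString, "youreventhub");

// Send the UTF-8 encoded "Hello IoT" event over AMQP
client.Send(new EventData(Encoding.UTF8.GetBytes("Hello IoT")));
```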

Run this console app several times and then wait a few minutes before checking the Event Hub dashboard in your browser since it doesn’t update in real time. Verify that your “Hello IoT” messages made it to their destination. Congratulations!

IncomingMessages

You’re now up and running with the basics of high-speed, high-scale telemetry ingestion in Azure for all your IoT and M2M scenarios. Now it’s time to move from a simple “Hello IoT” example to something more real-world like a street parking scenario found in a smart city.

One feature of Smart Cities is to help drivers find free parking spaces on city streets using their smartphones, tablets or in-car navigation apps. This is accomplished by embedding low-power, magnetic sensors in the streets near the curbs where free or metered parking spots are available. These sensors detect the absence or presence of a large metal object above them and relay this Boolean (Yes/No) state via a low-power, 6LoWPAN mesh network to a nearby field gateway that’s probably mounted on a street light.

Modeling this data via your existing console app is trivial and only requires the addition of a class + minimal code to hydrate an object with data and serialize it for transport. To get started, return to your ContosoIoTConsole solution in Visual Studio, right-click on References and add a reference to Newtonsoft.Json to support serializing your new class as JSON.

Next up, right-click on your existing ContosoIoTConsole project and add a public class called StreetParking that looks like the code shown below:

For this example, you’re just going to model a single street block and GPS coordinates with four available parking spaces to choose from.
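A minimal version of that class might look like the following sketch, where the property names are illustrative guesses at the shape described: one street block, GPS coordinates and four parking spaces:

```csharp
// Models parking availability for a single street block in a smart city
public class StreetParking
{
    public string BlockId { get; set; }        // identifies the street block
    public double Latitude { get; set; }       // GPS coordinates of the block
    public double Longitude { get; set; }
    public bool Space1Occupied { get; set; }   // Boolean state reported by the
    public bool Space2Occupied { get; set; }   // magnetic sensor in each spot
    public bool Space3Occupied { get; set; }
    public bool Space4Occupied { get; set; }
}
```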

Jumping back to the Program class from the previous Hello World example, you’ll be re-using the connectionString and EventHubClient code at the top and bottom of Main() below:

Since you’ll be serializing your StreetParking class as JSON, add using Newtonsoft.Json; above the namespace with all the other using statements.

The new code you’ll add above includes instantiating a new StreetParking object, hydrating all its properties with data, serializing the object as a JSON string and then sending the data to your Event Hub. With these code additions made, run your console app a few times to verify that your street parking event arrived in the Event Hub.
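Assuming a StreetParking class shaped along the lines just described, those additions might look like this sketch. The property names and sample values are illustrative, and connectionString and client carry over from the Hello IoT code:

```csharp
// Hydrate a StreetParking object with sample sensor data
StreetParking block = new StreetParking
{
    BlockId = "Block-400",
    Latitude = 47.6101,
    Longitude = -122.3421,
    Space1Occupied = true,
    Space2Occupied = false,
    Space3Occupied = true,
    Space4Occupied = false
};

// Serialize the object as JSON and send it to the Event Hub over AMQP
string json = JsonConvert.SerializeObject(block);
client.Send(new EventData(Encoding.UTF8.GetBytes(json)));
```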

Sharing my knowledge and helping others never stops, so connect with me on my blog at https://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

 

Seize the Opportunity of the Internet of Things

VendLink

There are a lot of newcomers to the Internet of Things and Machine to Machine space lately. Many of them love to speak authoritatively and often use vending machines as their favorite example use case to illustrate the value of #IoT.

When you see me use vending machines in a similar fashion, it’s not because of an article I read, a slide deck I copied, or a bandwagon I jumped on. It’s because I actually built this stuff twenty years ago with a group of visionaries and the best engineers I’ve ever worked with in my career.

We didn’t wait until vending machines became intelligent and wireless technologies became pervasive. We took the overwhelming population of unintelligent, fully mechanical vending machines and made them intelligent with our embedded technologies to unlock their insights. Wireless data coverage was a nightmare and the cost per byte would seem insane by today’s standards, but we weren’t going to force route drivers to visit and plug in to vending machines to find out what was going on. We created tiny, bit-encoded data packets on null-modem cables that we brought to a multitude of wireless technologies in order to create cost-effective coverage in the markets we served. Oftentimes, we created our own modems to bounce packets off business radio towers. Yes, we realized that giving each machine an antenna in a bank of vending machines was inefficient, so we created gateway technology. As our software analyzed the telemetry we streamed from thousands of vending machines, we brought to life the game-changing insights I see companies “discovering” today. Our company was called Real Time Data and we brought things like real time inventory management, dynamic routing, predictive failure analysis, intelligent merchandising, revenue forecasting, theft alerts and many other insights to an industry run on quarters and dimes. We didn’t have the Internet to connect our “things” to. We either used or created our own private data networks.

These days when I meet around a campfire with the wireless telemetry pioneers I worked with all those years ago, we often laugh about how easy it would be to recreate these solutions today. Machines and sensors are now intelligent, wireless data networks are cheap and pervasive, IPv6 means we can connect almost anything, off-the-shelf analytics tools abound, machine learning is here, and cloud computing power is almost limitless. We used to call some of this stuff SCADA, but you can call this combination of streaming telemetry plus command and control the Internet of Things. Now is the time to seize the opportunity right there in front of you to revolutionize your business. It’s all about reducing expenses, boosting customer satisfaction and increasing revenue.