Welcome to IoT Coffee Talk🎙️ Episode 12, where we chat about Digital #Tech #Analytics #Automation #IoT #DigitalTwins #Edge #Cloud #DigitalTransformation #5G #AI #Data #Industry40 & #Sustainability over a cup of coffee.
Grab a cup and settle in with some of the industry’s leading business minds and technology thought leaders for a lively, irreverent, and informative discussion about IoT in a totally unscripted, organic format.
On this week’s episode, Rick Bullotta (IoT legend), Stephanie Atkinson (Compass Intelligence), David Vasquez (Verizon), Leonard Lee (neXt Curve) and Rob Tiffany (Ericsson) nerd out on IoT architectures and the principles of scale-out design and deployment.
Click below to check out IoT Coffee Talk wherever you get your podcasts:
Thanks for listening to us! Watch episodes at http://iotcoffeetalk.com/. Your hosts include Leonard Lee, Stephanie Atkinson, Marc Pous, David Vasquez, Rob Tiffany, Bill Pugh, Rick Bullotta and special guests.
We support Elevate Our Kids to bridge the digital divide by bringing K-12 computing devices and connectivity to support kids’ education in under-resourced communities. Please donate.
Today I thought I’d cause a little controversy by comparing my First Principles (things I know to be true) for building high-performance, scalable systems (software, compute, storage, and networking) against current conventional wisdom.
Among most architects today, conventional wisdom states that if you want a high-performance, scalable system, your architecture must follow a microservices pattern, use containers like Docker, capture data in an event-streaming platform like Kafka, persist it in a NoSQL database, be orchestrated by something like Kubernetes or Mesosphere, and run on Linux.
So what is something I know to be true from the dot com era?
Back then I was fortunate enough to be part of a team that built an energy trading platform that let multiple counterparties buy and sell financial instruments (think NASDAQ). The platform ran on 32-bit Windows 2000 Server with Intel Pentium-class CPUs. Classic Active Server Pages sent and received XML fragments over HTTPS to and from brokers at every major energy company, plus NYMEX and the Intercontinental Exchange (ICE), following a RESTful API pattern. Incoming data was queued in MSMQ, and free-threaded objects pooled inside Microsoft Transaction Server (MTS) moved that data in and out of 32-bit SQL Server 2000. The databases were clustered; Windows got 1 GB of RAM while SQL Server got most of the remaining 3 GB. The Internet Information Services (IIS) servers running ASP 3.0 were load balanced with Windows’ built-in Network Load Balancing (NLB) service. When we needed to update the software, we simply took the servers out of the cluster one by one to apply the update. No biggie.
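The heart of that design is a pattern that still works today: the web tier does nothing but enqueue incoming work, and a small, fixed pool of workers drains the queue into the database. Here is a minimal Python sketch of that shape, with a `queue.Queue` standing in for MSMQ, a thread pool standing in for MTS object pooling, and a plain list standing in for SQL Server. All names (`worker`, `orders_db`, the XML fragments) are illustrative, not the real system.

```python
import queue
import threading

incoming = queue.Queue()   # stands in for MSMQ
orders_db = []             # stands in for the clustered SQL Server database
db_lock = threading.Lock()

def worker():
    """Pooled worker: pull a queued XML fragment and persist it."""
    while True:
        fragment = incoming.get()
        if fragment is None:   # shutdown sentinel
            incoming.task_done()
            break
        with db_lock:
            orders_db.append(fragment)
        incoming.task_done()

# A fixed pool, like MTS object pooling, bounds concurrency instead of
# spinning up a new thread (or container) per request.
pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()

# Brokers POST XML fragments; the web tier just enqueues and returns.
for i in range(100):
    incoming.put(f"<order id='{i}'><side>buy</side></order>")

incoming.join()            # wait for the pool to drain the queue
for _ in pool:
    incoming.put(None)     # one sentinel per worker
for t in pool:
    t.join()

print(len(orders_db))      # all 100 fragments persisted
```

The key design choice is that the request path stays constant-cost (enqueue and return) while the bounded pool absorbs bursts, which is exactly why the system could be updated one node at a time without drama.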
So what’s the takeaway from all this? As someone who has played a prominent role in the Internet of Things space over the years, I struggle to find modern systems that have to deal with the transactional and analytical load that I witnessed with the system I helped build 20 years ago. In spite of all the promise and talk, I’ve yet to see any IoT system that deals with a fraction of the load the world’s financial systems handle effortlessly with their antiquated architectures and relational databases.
Why were we able to do so much more with so much less back then?
Remember folks, we’re just flowing current through a gate to establish a high or low voltage at a particular point in a circuit. The farther you abstract yourself away from this, the more resources your system will require and the slower it gets. Don’t be impressed by architectural diagrams with hundreds of lines, boxes, arrows, and triangles going every which way. Complexity kills. New programming languages, frameworks, and architectural patterns come along all the time. Use your best judgement and fall back on your own first principles before deciding to jump on the next bandwagon.
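You can see the cost of abstraction in miniature with a toy experiment: sum the same numbers directly, then again through several do-nothing wrapper layers (stand-ins for framework indirection). The wrappers change nothing about the answer; they only add call overhead. This is a contrived sketch, not a benchmark of any particular framework.

```python
import timeit

data = list(range(1000))

def raw_sum(xs):
    """The 'close to the metal' version: one plain loop."""
    total = 0
    for x in xs:
        total += x
    return total

def layer(f):
    """A do-nothing abstraction layer that just forwards the call."""
    def wrapped(xs):
        return f(xs)
    return wrapped

# Stack five layers of indirection around the same loop.
abstracted_sum = raw_sum
for _ in range(5):
    abstracted_sum = layer(abstracted_sum)

# Same answer either way; the layers add only overhead.
assert raw_sum(data) == abstracted_sum(data) == 499500

direct = timeit.timeit(lambda: raw_sum(data), number=2000)
layered = timeit.timeit(lambda: abstracted_sum(data), number=2000)
print(f"direct:  {direct:.4f}s")
print(f"layered: {layered:.4f}s")
```

Run it and the layered version is measurably slower despite computing the identical result. Multiply that effect across serialization hops, container boundaries, and network calls and you get the resource bloat described above.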