Claudio Caballero, Director, Technology and Advanced Products, Firemark Labs, Singapore
Whatever is driving your IoT strategy, there are myriad challenges to executing successfully, many of which have already been ably addressed in these pages, such as secure development and use of public clouds.
One frequently overlooked topic I would like to address is the question of what to test and optimize for at what time during the development cycle.
In some areas, you can never test and optimize soon enough. Connectivity is a prime example here. Whatever connectivity technology you are using, there are bound to be hiccups you didn’t anticipate if you don’t test adequately.
SIM cards from a particular carrier may work fine in one part of your target geography but fail abysmally in another. WiFi spectrum congestion (which, remember, affects Bluetooth as well) is generally not an issue for rural applications, but if you happen to deploy next to even one apartment block, you may be in for a lot of trouble. Finally, newer standards like LoRaWAN come with all of the usual interoperability and other headaches that are natural in the early days. Trust me on this: I deployed WiFi in the late 90s.
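Since field conditions will surprise you no matter how much you test, it also pays to make device software tolerant of transient link failures from day one. Here is a minimal sketch of retry with exponential backoff and jitter; the names are illustrative, and a generic `OSError` stands in for whatever exception your network stack actually raises:

```python
import random
import time

def with_backoff(op, attempts=5, base=0.5, cap=30.0):
    """Call op(); on transient failure, retry with exponentially growing,
    jittered delays. Re-raises the last exception if all attempts fail."""
    for i in range(attempts):
        try:
            return op()
        except OSError:
            if i == attempts - 1:
                raise
            # Full jitter: random delay in [0, min(cap, base * 2^i)]
            time.sleep(min(cap, base * 2 ** i) * random.random())
```

The jitter matters at fleet scale: without it, a brief outage makes every device retry in lockstep and hammer your backend at the same instant.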
What I’d really like to address here, however, is the flip side of “test early, test often”, which is the principle of avoiding premature optimization.
To introduce the topic, I’d like to first explore what I believe is an easier-to-understand application of this principle: premature scaling in startups. Read any literature on the common factors behind why startups survive or die, and you will come across this topic. Startups, eager to be ready in case their product or service becomes a breakout hit, frequently commit resources to scalability long before they actually need them.
One form this can take is locking in contracts with service providers that offer attractive unit rates but require volume commitments that haven’t materialized yet, so the overall cost is far higher (I once saw a CDN contract with monthly transfer quotas the startup wasn’t yet hitting even one percent of).
Another example is hiring support staff before you have enough customers to need them, and then scrambling to find other work for those folks to do, all while incurring the cost of these talented people being underutilized.
In the IoT domain, one common example is locking your software development to a particular target hardware platform before the software is mature enough to really know what the hardware requirements will be.
If, for example, you are deploying an ML model on an edge/IoT device, it is very tempting to proceed in parallel, picking a hardware platform based on factors like cost and power consumption, even before you have settled on the architecture, size, and complexity (integer or floating-point, for example) of the inference model you will wind up deploying.
In this scenario, I would urge you to delay the hardware choice as long as possible, focus your efforts on making the inference model as robust, accurate, and well tested as you need, and only then choose your deployment hardware.
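To make the integer-versus-floating-point decision concrete, here is a minimal sketch of symmetric quantization, the kind of transformation a conversion toolchain applies when preparing a float model for integer-only hardware. All names are illustrative, not from any particular framework; a real deployment would use your framework’s own conversion tooling:

```python
def quantize(weights, bits=8):
    """Map float weights onto a symmetric integer grid (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from integer codes."""
    return [x * scale for x in q]

weights = [0.82, -1.27, 0.03, 0.55, -0.94]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # integer codes in [-127, 127]
print(max_err)  # error bounded by half the scale step
```

Measuring this kind of accuracy loss on your actual model, before committing to integer-only silicon, is exactly the sort of work that should precede the hardware decision rather than follow it.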
I especially urge you to keep deployment hardware out of your developers’ iteration cycle until the software is largely complete. Hardware in the loop can slow things down tremendously (both directly and by distracting your engineers), which is a prime reason the embedded-systems and mobile-app-development worlds have long relied on cross-compilers and emulators.
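The same idea applies inside your own codebase: put the device interface behind an abstraction so the bulk of the software can be developed and unit-tested on developer machines. A sketch with hypothetical names:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Hardware abstraction: application code depends only on this interface."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class FakeSensor(Sensor):
    """Host-side stand-in; replaced by a real driver once hardware arrives."""
    def __init__(self, readings):
        self._readings = iter(readings)
    def read_celsius(self) -> float:
        return next(self._readings)

def moving_average(sensor: Sensor, n: int) -> float:
    """Application logic that is testable without any physical device."""
    return sum(sensor.read_celsius() for _ in range(n)) / n

print(moving_average(FakeSensor([20.0, 21.0, 22.0]), 3))  # prints 21.0
```

When the target board finally shows up, only the thin driver class behind the interface changes; the logic above it has already been exercised for months.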
Since you are going to have to do real-world testing on a small scale anyway, parallelize the software and hardware efforts at this point only. You will almost certainly have time and budget to throw hardware at the problem for your prototype/testing phase (even if this early hardware doesn’t meet all the ultimate power and size requirements), and you can turn your engineers loose on optimizing the software for a particular hardware platform once the test results are coming in.
My favorite phrase for the overarching principle here is a quote from the English philosopher Francis Bacon: “Nature, to be commanded, must be obeyed.” Test and optimize early and often for those aspects of your IoT solution that are likely to remain unchanged from the beginning. Avoid premature optimization for items that could change significantly while you are still refining your solution and discovering the right product/market fit.