For the most part, cloud architecture is not that exciting. By now we know basically what works, what does not, and the process for getting to the right target architecture. That covers both the meta, or logical, architecture and the technology choices that turn it into a physical architecture.
Although we know the best patterns for most of what cloud architecture requires, some problems are still being debated. No de facto solution or best practice has emerged yet. Here are my top three:
First, what goes on the edge? Edge computing has benefits, such as placing data processing closer to the source of the data. However, the question remains: How does one partition data and processes between a cloud-based server and an edge computer?
Many architects push as much as they can to the edge, but doing so moves you away from a centralized system (the public cloud) to many decentralized systems (the edge devices or servers). You must maintain those edge systems, and they are much more difficult to monitor, govern, secure, update, and configure than a centralized cloud. Multiply that effort by hundreds of edge computing devices and you have an operational nightmare.
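One way to make the edge-versus-cloud partitioning question concrete is to write the decision down as an explicit heuristic. The sketch below is illustrative only; the thresholds and the `Workload` fields are my own hypothetical examples, not an established rule.

```python
# Illustrative sketch: a rule-of-thumb for placing a workload at the edge
# or in the central cloud. All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Workload:
    max_latency_ms: float      # hardest real-time requirement
    data_rate_mbps: float      # raw data produced at the source
    needs_global_state: bool   # must see data from all sites at once

def place(w: Workload, uplink_mbps: float = 100.0) -> str:
    """Return 'edge' or 'cloud' for a workload, using a simple heuristic."""
    if w.needs_global_state:
        return "cloud"    # cross-site aggregation has to be central
    if w.max_latency_ms < 20:
        return "edge"     # a round trip to the cloud won't make the deadline
    if w.data_rate_mbps > uplink_mbps:
        return "edge"     # cheaper to filter/reduce the data at the source
    return "cloud"        # default: keep operations centralized

# A 5 ms control loop stays at the edge; a fleet-wide report stays in the cloud.
print(place(Workload(max_latency_ms=5, data_rate_mbps=2, needs_global_state=False)))
print(place(Workload(max_latency_ms=500, data_rate_mbps=1, needs_global_state=True)))
```

Even a toy function like this forces the team to name the criteria (latency, bandwidth, state) instead of defaulting everything to the edge.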
Second, what to containerize? Many enterprises say containers are their strategy, not just an enabling technology. This almost religious belief in the power of containers has pushed many an application into the cloud in a container, whether or not that was the right move for the business.
The issue is that there are no hard and fast rules as to what can—and should—exist in a container. Legacy applications that will take a great deal of effort to refactor (rewrite) for containers are not likely candidates; however, in many instances, the cloud migration team attempts to move them first.
This means that enterprises will fail to find value in containers for some of the applications they move to the cloud. It’s a million-dollar mistake that a good number of cloud architects will make.
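The container question can also be turned into an explicit screen. This is a minimal sketch, assuming a hypothetical `App` record and my own rough cutoffs; it is a way to structure the conversation, not a definitive rule.

```python
# Illustrative sketch: a coarse screen for whether an application is a
# reasonable container candidate. Fields and cutoffs are hypothetical.
from dataclasses import dataclass

@dataclass
class App:
    stateless: bool         # no local state baked into the process
    os_portable: bool       # runs on a mainstream Linux base image
    refactor_months: int    # estimated effort to refactor for containers

def container_candidate(app: App) -> bool:
    """True if the app is worth containerizing without major rework."""
    if not app.os_portable:
        return False        # legacy platform dependencies rule it out
    if app.refactor_months > 6:
        return False        # refactoring cost likely exceeds the value
    return app.stateless    # stateful monoliths go last, if at all

# A portable, stateless service passes; a heavy legacy refactor does not.
print(container_candidate(App(stateless=True, os_portable=True, refactor_months=1)))
print(container_candidate(App(stateless=False, os_portable=True, refactor_months=12)))
```

The point of the screen is the order of the checks: the legacy applications that would take the most refactoring effort are filtered out first, rather than moved first.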
Finally, what applications do we AI-enable? Machine learning is cheap in the cloud and much easier to use than it once was. This has led to many instances where enterprise IT AI-enabled an application when the use of cognitive systems as an application component was ultimately contraindicated.
Much like the container trade-off outlined above, there is no definitive rule for when and how to use machine learning within existing or net-new applications.
A few things work against the use of machine learning, including the fact that you must refactor the application to take advantage of machine learning at all. The larger issue, however, is whether AI is even needed in the first place. Many never ask that question.
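Asking "is AI even needed?" can be made mechanical: compare a simple baseline against the target before committing to a refactor. A minimal sketch, assuming hypothetical accuracy and cost numbers:

```python
# Illustrative sketch: decide whether AI-enabling an app is justified.
# All parameters and thresholds are hypothetical examples.
def needs_ml(baseline_accuracy: float, target_accuracy: float,
             refactor_cost_usd: float, budget_usd: float) -> bool:
    """Ask the question many teams skip: is ML actually needed here?"""
    if baseline_accuracy >= target_accuracy:
        return False    # a simple rule or heuristic already does the job
    if refactor_cost_usd > budget_usd:
        return False    # the required refactoring won't pay for itself
    return True

# A 95%-accurate rule against a 90% target means no ML is needed.
print(needs_ml(0.95, 0.90, 50_000, 200_000))
```

The first check is the one most often skipped: measure the non-ML baseline before assuming a cognitive component is the answer.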
We’ll always have topics that are not easy to resolve. What’s most productive is that we’re talking about them.