Submitted by Dan Skatov
A software vendor’s primary objectives are to attract new users, adapt to the expectations of existing users, and prevent user outflow, all crucial to maintaining the agility, competitiveness, and profitability of the business. Every time the user base grows, the developer is put under pressure, tasked with scaling the entire user platform to accommodate increased workloads while keeping the user experience (UX) response time within a fraction of a second. This is no easy feat, considering that the development stacks commonly used today present significant trade-offs between system complexity and scalability. This imbalance can be resolved by an emerging kind of software technology and architecture.
The Trade-off Between System Complexity and Scalability
In the context of today’s conventional development platforms, the performance capacity of a single server runs out fast. Hence, scaling operations often means scaling out: adding more machines, and more layers of complexity, to a multi-tiered network of different kinds of servers, along with performance enhancers like multi-node data redundancy, caches, and data grids. Growth in the user base leads to an uptick in both the number of tiers and the number of machines.
This scale-out architecture comes with weighty drawbacks, including higher costs for hardware, software, and maintenance; increased complexity of development and system control; higher disaster risks due to more potential points of failure; greater risk of bottlenecks and implementation bugs; difficulty identifying the root cause of problems and applying subsequent fixes; and the need for costly and mistake-prone inter-module integration and configuration.
Together, these drawbacks affect time-to-market for new releases, user satisfaction, and, as a result, the vitality of the business. Adopting a scale-out architecture can lead to a drop in performance, reliability, and data consistency, issues impossible to solve by just adding more hardware. Even if a scaled-out solution contains no complex logic or heavy computations, the amount of hardware required to run it with an affordable UX response time might become ridiculously large. Unfortunately, such a state of affairs reveals not the immaturity of systems implementers, as one might reasonably suspect, but rather a considerable fault in the approach itself.
Multi-tier, scale-out, data-excessive architectures can be seen as a tool that, in popular cases, treats an effect rather than the cause. They work well in circumstances like social networks, historical data storage, or overnight business intelligence, but fall short in applications involving the management of valuable resources, including enterprise resource planning and line-of-business applications.
Collapsing the Stack: Solving the Problem
Luckily, instead of treating the effect, it is possible to address the root cause: the software development platform itself. Rebuilding the platform on updated fundamentals enables a new breed of applications, globally optimized for simplicity, performance, and modularity, as opposed to leveraging multiple separate tiers for local gains. This vision is summarized by the concept of collapsing the stack. For software development, it means making the code concise, eliminating the glue code, simplifying the system architecture, and increasing the flexibility and scalability of the resulting solution. In business terms, it translates to improved agility, greater competitiveness, and reduced total cost of ownership.
Implementing “the collapsed stack” means a shift from engineering the tiers and their integration to a focused application platform. All features of the tiers, like network communication, data persistence, or failover, remain available as before, while the tiers themselves are either virtualized or replaced by simpler facilities. The shift does not mean a move from the modularity of layers to a monolithic product. Instead, the key differentiator is facilitating highly modular solutions without sacrificing performance, simplicity, or cost. Viewed alongside the recent microservices movement, a collapsed stack offers a high-performing, open-ended node running a set of uploadable micro applications. Data integration, which is considered challenging for microservices, can be solved for micro applications through efficient in-memory data sharing.
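As a hypothetical sketch, not any vendor’s actual API, the idea of several micro applications integrating through shared in-memory data on one node can be pictured like this (all names here are illustrative):

```python
# Hypothetical sketch: one node hosts several "micro applications" that
# integrate by sharing the same in-memory data, not by exchanging messages.

class Node:
    def __init__(self):
        self.data = {"orders": []}   # shared in-memory data layer
        self.apps = {}

    def upload(self, name, app):
        # Micro applications are uploadable modules, not separate servers.
        self.apps[name] = app

    def call(self, name, *args):
        # Every app operates on the very same in-memory data instances.
        return self.apps[name](self.data, *args)

node = Node()
node.upload("intake", lambda data, item: data["orders"].append(item))
node.upload("report", lambda data: len(data["orders"]))

node.call("intake", {"sku": "A-1"})
node.call("intake", {"sku": "B-2"})
print(node.call("report"))  # the two apps integrated with zero glue code
```

Because both apps see the same live objects, no serialization, messaging, or data-synchronization glue is needed between them.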
A new breed of in-memory technology is a primary component of this shift. Simply put, if the application and database are two disjoint entities, they must constantly signal each other to exchange data. It takes eight minutes for light to travel from the Sun to Earth, and about 130 ms for a signal to travel from Australia to the U.S. over wires. Even if the database and application run on the same machine, they still communicate, and thus signal, on a silicon chip. The laws of physics define a strict upper bound on the performance of multi-tier architectures. The solution is to minimize signaling by placing the parties as close together as possible.
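To make the arithmetic concrete, here is a minimal latency model (the numbers are illustrative assumptions, not measurements) showing how round trips between separated tiers dominate a request’s time budget:

```python
# Illustrative model: every database round trip adds signaling latency,
# so a chatty multi-tier request quickly exhausts a sub-second UX budget.

def request_latency_ms(round_trips: int, one_way_ms: float) -> float:
    """Lower bound on request time from signaling alone (there and back)."""
    return 2 * one_way_ms * round_trips

# A request issuing 50 queries over a link with 5 ms one-way latency:
remote = request_latency_ms(round_trips=50, one_way_ms=5.0)    # 500 ms

# The same logic co-located with its data (assume ~0.001 ms signaling):
local = request_latency_ms(round_trips=50, one_way_ms=0.001)   # ~0.1 ms

print(f"separate tiers: {remote:.1f} ms, collapsed stack: {local:.3f} ms")
```

Under these assumed numbers, collapsing the distance between logic and data cuts the signaling floor by several orders of magnitude; no amount of extra hardware in the remote configuration can get below its physical bound.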
First-generation in-memory databases shifted from operating on data on disk to operating in memory while securing a log to disk, which is multiple orders of magnitude faster. Software platforms with a collapsed stack demonstrate a leap beyond that: shrinking the database and application tiers into a single layer. Today, stored procedures are often used to put chunks of code closer to the database for performance. By collapsing the stack, it is no longer necessary to split logic between database and application code, since all applications running on the platform operate on physically the same data instances that the database owns. Thus, delivery of data from the database to the application is not needed. The real-world consequence is millions of fully ACID transactions per second on a modest server.
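As a toy illustration of this single-layer principle (again a sketch under simplifying assumptions, not any product’s actual API), business logic below runs in the same process as the data it manages: a “query” is a direct access to live objects, durability is a log append, and atomicity comes from rolling back to a snapshot on failure:

```python
# Toy collapsed-stack store: application code and "database" share one
# process, so logic operates directly on the data instances the store owns.
import json

class Store:
    def __init__(self):
        self.objects = {}   # live, in-memory data instances
        self.log = []       # durability: committed states secured to a log

    def transact(self, fn):
        # Snapshot for rollback, giving all-or-nothing (atomic) semantics.
        snapshot = {k: dict(v) for k, v in self.objects.items()}
        try:
            result = fn(self.objects)   # logic touches the data directly
            self.log.append(json.dumps(self.objects))  # stand-in for a disk log
            return result
        except Exception:
            self.objects = snapshot     # abort: restore pre-transaction state
            raise

store = Store()
store.objects["acct:1"] = {"balance": 100}
store.objects["acct:2"] = {"balance": 0}

def transfer(db):
    # No query language, no result-set delivery: just the objects themselves.
    db["acct:1"]["balance"] -= 30
    db["acct:2"]["balance"] += 30

store.transact(transfer)
print(store.objects["acct:1"]["balance"], store.objects["acct:2"]["balance"])
```

Real platforms add concurrency control and crash recovery on top of this idea, but the core point survives the simplification: because the application and the store share one memory space, no data is ever “delivered” from one tier to another.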
The same shrinking principle benefits other parts of the collapsed stack as well, eliminating “middlemen” such as message delivery from web server to application server, inter-process communication, and data redundancy. In addition, the glue code that bound the layers together goes away, leaving pure business logic expressed in concise code.
Dealing with Legacy Code
Legacy code stands out as an obstacle for those looking to adopt the collapsed-stack application approach. Depending on the application platform chosen, the code must either be rewritten against a different API or language, or simplified; in both cases it gains clarity and shrinks in size. This is an important investment in a company’s future: clean, concise code is easier to change and support. As Ken Thompson, designer of Unix, says, “One of my most productive days was throwing away 1,000 lines of code.”
Any data- or performance-critical business is a strong candidate for collapsing the stack and adopting an in-memory platform, as is any business demanding agility and performance. Such a platform can be used in any vertical, but is most likely to appear in industries including banking, finance, retail, internet, telecommunications, and gambling/gaming.
Benefits of Collapsing the Stack
Depending on the platform chosen, the gains are a subset of the following:
- a fast, responsive, and multiplatform GUI, and improved business agility;
- a faster deployment cycle and instant module/application integration;
- a better technology learning curve: clean, concise code with no stack-related (glue) code;
- reduced development and maintenance complexity, and significantly lower implementation risks;
- strong security and data-integrity guarantees from in-memory technology and thin clients;
- less hardware (demands on memory space) and more performance;
- improved reliability through the elimination of single points of failure;
- improved data ownership through shared in-memory data; and
- rich integration capabilities and freedom from vendor lock-in, with multiplatform support.
Getting Ready for the Future
The approaching Internet of Things era makes the use of in-memory platforms and collapsed stacks unavoidable. To put things in perspective, moving from around seven billion humans to 60 billion users and devices means we can expect at least a sevenfold increase in transaction load. Now is the time to break through complexity and get up to speed with the future. SW
Dan Skatov is the head of development for Starcounter. Over the past 10 years he has served start-ups in the fields of high performance, natural language processing, and artificial intelligence in the roles of co-founder, R&D head, researcher, and public speaker.
Sources
http://www.infoq.com/news/2014/04/bitcoin-banking-mongodb
http://www.nytimes.com/2013/05/10/nyregion/eight-charged-in-45-million-global-cyber-bank-thefts.html
http://www.gigaspaces.com/xap-in-memory-computing-event-processing/Meet-XAP
http://nginx.com/blog/microservices-at-netflix-architectural-best-practices/
http://www.starcounter.com/in-memory-applicationlication-platform/starcounter-performance/
Deeper overview in: Crowther, Paul, and Peter Lake, Concise Guide to Databases. London: Springer, 2013.
Use case on 100x reduced memory footprint: http://hana.sap.com/content/dam/website/saphana/en_us/S4%20HANA/Final_Launch_S4HANA_Plattner.pdf