Evolution of Service Architecture

Assembly: the only way

In the beginning there was only one language: assembly, the machine's own. Hardware engineer and software engineer were a single role.

The person who could design a chip was the same person who could program it.

Then came Fortran (1957), the first high-level language with a compiler.

C: the “write once” of the ’70s

In 1972 the C language was born as the child of Unix, driven by the need to port Unix to different hardware. The widespread adoption of Unix across hardware platforms is the secret (and the sign) of C's success.

Around 1980, chips from the MOS 6502 family were everywhere: the 6507 inside the Atari 2600, the 6502 inside Apple computers, and later the 6510 inside the Commodore 64. Bill Gates' BASIC was written in 6502 assembly. And guess what? The 6502 was a horrible target for the C language, because of its lack of registers and its tiny, slow stack. So C was not an option.

An entire generation of game programmers coded in assembly for years.

And Microsoft's initial success was built on software written in assembly!


The Network is the computer?

By 1995 a lot of computers were interconnected, and the World Wide Web was the newborn of a new Internet era. In that period Sun was building huge multiprocessor machines (UltraSPARC) and gave away for free a language, Java, which reduced common C pitfalls (raw pointers, manual memory management) and shipped with concurrency primitives.

In 2008 I read an article about eBay's scaling best practices: it was hard to scale a huge, complex J2EE application. They seemed heroes to me.

In those days I was at a huge bank that was defining the second generation of its banking framework: XFrame. XFrame replaced U-Frame (Uniform Frame, I guess), the previous, old framework they had decided to kill.

XFrame had to be the same version across the entire bank, to keep deployments and rules uniform.

XFrame was the perfect fit for developing internal banking applications and the other portlets UniCredit needed. It provided Struts + Hibernate + single sign-on and a strong default security model.

XFrame could call legacy COBOL, CICS services and so on via SOA. Deployment was automated and even had a name (Gandalf).

Also, to reduce memory usage, all its libraries lived in the WebSphere static classpath (!). You got them for free, and the resulting EARs were light and “fast” to deploy. It could have seemed a good idea, after all.

There was only one little problem: the framework could not evolve. Once you have 500 applications in production across 1,500 environments, nobody can ask for a full regression test just to push a new Hibernate version, and so on.

The bank required a long-term commitment to the XFrame technology stack.

XFrame was a huge, frozen beast. And remember, software moves fast: in less than 36 months your Java version is the grandfather of the shiny new one, and your APIs have first been deprecated and then removed.

After five years your code is so old that you need to hire costly specialists, because young interns cannot understand it. And interns work for free, while experienced developers want a lot of money (and Bitcoins, in the future…).

Who said concurrency? Er…

With Scala (2004) and Akka, the approach gradually shifted to a “share nothing” architecture: avoid synchronized Java blocks and deploy your application on a cluster of J2EE machines with no shared state between them.

I have rarely seen a J2EE cluster of more than 8 nodes; 2 or 4 is the average. The reason is simple: with more nodes, contention on J2EE-wide mutexes (JMS, distributed transactions and so on) becomes difficult to manage, and you can run into deadlocks if your code is not well written.

And guess what? Writing good code is difficult, because race conditions and deadlocks are hard to model and analyze.
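As a minimal Java sketch of why deadlocks are so easy to create, and of the classic mitigation: if two threads transfer money in opposite directions and each locks its own “from” account first, they can deadlock; acquiring both monitors in a stable global order (here, by account id) removes the cycle. The `Account` and `transfer` names are illustrative, not taken from the system described above.

```java
// Sketch: avoiding deadlock with a global lock order (illustrative names).
public class LockOrdering {
    static final class Account {
        final int id;
        long balance;
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    // Always lock the account with the lower id first: every thread
    // acquires the two monitors in the same order, so no deadlock cycle
    // can form. The naive "lock from, then to" version can hang.
    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance   += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account x = new Account(1, 100);
        Account y = new Account(2, 100);
        // Two threads transferring in opposite directions at the same time.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) transfer(x, y, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) transfer(y, x, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // The transfers cancel out, so both balances are back to 100.
        System.out.println(x.balance + " " + y.balance);
    }
}
```

The point is not the three lines of ordering logic; it is that this discipline must hold across an entire cluster-wide codebase, which is exactly what made those J2EE systems fragile.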

In 2014 I was called (not alone) to fix one of those monsters: a huge payment solution running on an 8-machine cluster. In the “development” environment it took 30 minutes to get WebSphere up and running on my Linux machine (with plenty of RAM).

The system had to push millions of SEPA Direct Debit transactions, so it was a batch system with an online query interface.

Easier than a webmail, right?

Not exactly. The code was a nightmare to tune and optimize to scale to millions of records. We had Oracle Exadata, and even that seemed not enough. You got only two tests a day, because bringing up the environment and running a test could take 2 to 3 hours.

SOA vs microservices

Service Oriented Architecture (SOA) was born to help us break up complex systems and to ease integration between monoliths.

SOA relies on an Enterprise Service Bus (ESB), an integration service that is often an evolution of an application server, which in turn is an evolution of a transaction monitor.

This leads to ever-increasing license costs. A lot of finance customers are switching from the costly WebLogic and WebSphere to JBoss servers.

The RedHat pricing model is so strong that they even provide a calculator for it. Even with an 80% discount from your Oracle dealer, JBoss is hard to beat on the license side. The reason is simple: the JBoss core code is open source, and its development costs are shared with the open source community.

RedHat provides patches and support, and helps optimize the installation.

Over the years, cloud computing has become stronger and stronger. Nowadays it is normal to deploy on multiple machines, and horizontal scaling is only a matter of how much money (how many instances) you are willing to pay. On the other hand, after Snowden's NSA revelations and Meltdown, the public cloud may end up being a poor man's option.

Those who care will install their own private cloud, because flexible tools exist nowadays.

Kubernetes is a tool to federate and manage systems across multiple cloud providers… a very powerful deployment system indeed.
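As a minimal sketch of why horizontal scaling becomes “only a matter of money” here: a Kubernetes Deployment declares the desired number of replicas and the cluster converges to it (the `payments` name and the container image below are hypothetical, not a real service).

```yaml
# Minimal Deployment sketch: scaling out is a one-line change to "replicas".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments              # hypothetical service name
spec:
  replicas: 4                 # desired instance count; bump it to scale out
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        image: registry.example.com/payments:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Compare this with the 8-node WebSphere cluster above, where adding a node was a project, not a parameter.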

Service Oriented Architecture (SOA) needs to evolve, and the microservices architecture promises to free us from the scaling hassle.

The software needs to be atomized into smaller and smaller services so that each one can be scaled independently.

Functions, i.e. serverless architecture, are the most radical approach.

Will microservices save us from the frozen monolith (and let us hire only cheap software interns)?

Who knows… follow us in the next blog post!


