We have already discussed queue management solutions in the past, and I am always happy to write about them. Queue managers are not easy to implement, and there is a reason why IBM MQ Series is still a successful product. Some months ago, a big bank customer asked me to provide a small queue implementation to speed up the asynchronous internal processing of our payment solution.
The project had very strong constraints: I could not use an existing queue system because none was yet available in that environment, and I needed to support microservice parallelism in a cloud environment.
I had very little time to deliver a solid solution, and re-inventing the wheel was not an option. Performance was important, but we planned for a manageable number of transactions per second, far below the capacity of a modern cloud database.
Challenge accepted.
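The post does not show the implementation here, but the general idea of a database-backed queue can be sketched. Below is a minimal, hypothetical sketch in Python using SQLite: jobs live in a single table, and a worker claims the oldest pending job with an optimistic UPDATE so that two concurrent workers never process the same item. The table and column names are illustrative assumptions, not the actual schema used in the project.

```python
import sqlite3

# Hypothetical sketch of a database-backed queue (not the project's actual code).
# A single table holds the jobs; workers claim them with an optimistic UPDATE.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE job_queue (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT NOT NULL,
        status  TEXT NOT NULL DEFAULT 'pending'   -- 'pending' | 'taken' | 'done'
    )
""")

def enqueue(payload):
    with conn:
        conn.execute("INSERT INTO job_queue (payload) VALUES (?)", (payload,))

def dequeue():
    """Claim the oldest pending job, or return None if nothing is available."""
    with conn:
        row = conn.execute(
            "SELECT id, payload FROM job_queue WHERE status = 'pending' "
            "ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        claimed = conn.execute(
            "UPDATE job_queue SET status = 'taken' "
            "WHERE id = ? AND status = 'pending'",
            (row[0],),
        )
        # If another worker claimed it first, rowcount is 0: caller retries.
        return row if claimed.rowcount == 1 else None

enqueue("payment-123")
print(dequeue())   # -> (1, 'payment-123')
```

The same claim-then-check pattern scales to server databases; on PostgreSQL one would typically use a single UPDATE with SKIP LOCKED instead of the optimistic retry.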
Read More -
Abstract: Designing a database that can store the history of its modifications is often considered a “secondary” activity, but bad design leads to databases that are difficult to optimize and often hard to understand. In this article we illustrate a simple method that respects the dictates of relational theory and is easy to understand. As a bonus, we demonstrate it on SQLite, a small but powerful database system.
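The abstract does not spell the method out here; as a hedged illustration of one common historization pattern, the sketch below keeps a validity interval (valid_from / valid_to) on every row, so the current version of a record is the one whose valid_to is NULL. The schema and column names are assumptions for illustration, not necessarily the ones used in the article.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative historization pattern: every change closes the current row's
# validity interval and inserts a new row. Schema names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer_history (
        customer_id INTEGER NOT NULL,
        name        TEXT    NOT NULL,
        valid_from  TEXT    NOT NULL,
        valid_to    TEXT              -- NULL means "current version"
    )
""")

def upsert_customer(customer_id, name):
    now = datetime.now(timezone.utc).isoformat()
    with conn:
        # Close the currently valid version, if any...
        conn.execute(
            "UPDATE customer_history SET valid_to = ? "
            "WHERE customer_id = ? AND valid_to IS NULL",
            (now, customer_id),
        )
        # ...and insert the new version as the current one.
        conn.execute(
            "INSERT INTO customer_history (customer_id, name, valid_from) "
            "VALUES (?, ?, ?)",
            (customer_id, name, now),
        )

upsert_customer(1, "ACME Srl")
upsert_customer(1, "ACME S.p.A.")

# The current version is the row with no valid_to.
print(conn.execute(
    "SELECT name FROM customer_history WHERE customer_id = 1 AND valid_to IS NULL"
).fetchone())   # -> ('ACME S.p.A.',)
```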
Read More -
Abstract: Historicizing data is often considered a “secondary” activity, but a bad design leads to databases that are difficult to optimize and often hard to understand. In this article we illustrate a simple method that respects the dictates of relational theory, is easy to understand, and is actively used in production.
Read More -
Some co-workers have started using Apache Kafka with a number of our customers.
Apache Kafka is a community distributed event streaming platform capable of handling trillions of events a day. Initially conceived as a messaging queue, Kafka is based on an abstraction of a distributed commit log[*].
To achieve this goal, Apache Kafka needs a complex server setup, and an even more complex one if you want certification from the producing company (Confluent). So, if you are planning to use Kafka as a simple Java Message Service (JMS) implementation, think twice before going down this route. PostgreSQL 12 offers a fair (and open source) partitioning implementation, whereas if money is not a problem, Oracle 12c can happily scale to billions of records before running into trouble (and ExaData can scale even further).
PostgreSQL and Oracle offer optimizations for partitioned data, called “partition pruning” in PostgreSQL terminology:
With partition pruning enabled, the planner will examine the definition of each partition and prove that the partition need not be scanned because it could not contain any rows meeting the query's WHERE clause. When the planner can prove this, it excludes (prunes) the partition from the query plan.
This feature is relatively new (it appeared in PostgreSQL 11), but it is essential to a successful partitioning strategy. Before this feature, partitioning was a black art. Now it is much simpler to manage.
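As a hedged illustration of what partition pruning buys you, the sketch below creates a range-partitioned table on a PostgreSQL 11+ instance and asks the planner for the plan of a query that touches a single partition; with pruning enabled, the other partitions simply do not appear in the plan. The connection string, table and partition names are assumptions for illustration only.

```python
import psycopg2

# Assumed local PostgreSQL 11+ instance; adjust the DSN to your environment.
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

# A range-partitioned table with one partition per year (illustrative schema).
cur.execute("""
    CREATE TABLE payments (
        id         bigint  NOT NULL,
        created_at date    NOT NULL,
        amount     numeric NOT NULL
    ) PARTITION BY RANGE (created_at);
""")
cur.execute("""
    CREATE TABLE payments_2019 PARTITION OF payments
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
""")
cur.execute("""
    CREATE TABLE payments_2020 PARTITION OF payments
        FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
""")

# With partition pruning enabled (the default), the planner should scan only
# payments_2020 for this query and prune payments_2019 from the plan.
cur.execute("SET enable_partition_pruning = on;")
cur.execute("EXPLAIN SELECT * FROM payments WHERE created_at = '2020-06-15';")
for (line,) in cur.fetchall():
    print(line)

conn.rollback()  # discard the example objects
```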
Read More