-
We have already discussed queue management solutions in the past, and I am always happy to write about them. Queue managers are not easy to implement, and there is a reason why IBM MQ Series is still a successful product. Some months ago, a big bank customer asked me to provide a small queue implementation to speed up the asynchronous internal processing of our payment solution.
The project had very strong constraints: I could not use an existing queue system, because none was available yet, and I needed to support microservice parallelism in a cloud environment.
I had very little time to deliver a solid solution, and re-inventing the wheel was not an option. Performance was important, but we planned for a manageable number of transactions per second, far below the capacity of a modern cloud database.
Challenge accepted.
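The excerpt does not show the schema, but to give an idea of the kind of approach involved, here is a minimal sketch of a database-backed queue, assuming PostgreSQL and made-up table and column names (not the actual implementation): several microservice workers can poll it in parallel thanks to SKIP LOCKED.

[sql]
-- Hypothetical queue table.
CREATE TABLE message_queue (
  id         BIGSERIAL PRIMARY KEY,
  payload    TEXT      NOT NULL,
  status     TEXT      NOT NULL DEFAULT 'READY',
  created_at TIMESTAMP NOT NULL DEFAULT now()
);

-- Each worker claims the oldest READY message; SKIP LOCKED prevents
-- concurrent workers from blocking on (or double-processing) the same row.
UPDATE message_queue
   SET status = 'PROCESSING'
 WHERE id = (SELECT id
               FROM message_queue
              WHERE status = 'READY'
              ORDER BY id
              LIMIT 1
              FOR UPDATE SKIP LOCKED)
RETURNING id, payload;
[/sql]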
Read More -
Abstract: Making a database able to store historic modifications is often considered a “secondary” activity, but bad design leads to databases that are difficult to optimize and often hard to understand. In this article we illustrate a simple method that respects the dictates of relational theory and is easy to understand. As a plus, we will show it on SQLite, a small but powerful database system.
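The article's actual schema may differ; as a rough sketch of the general idea on SQLite (table and column names are made up), every version of a row is kept with a validity interval, and the current version is the one whose interval is still open:

[sql]
-- Hypothetical historicized table: one row per version.
CREATE TABLE customer_history (
  customer_id INTEGER NOT NULL,
  name        TEXT    NOT NULL,
  valid_from  TEXT    NOT NULL,   -- ISO-8601 timestamp
  valid_to    TEXT,               -- NULL = current version
  PRIMARY KEY (customer_id, valid_from)
);

-- "Updating" a customer means closing the current version...
UPDATE customer_history
   SET valid_to = datetime('now')
 WHERE customer_id = 42 AND valid_to IS NULL;

-- ...and inserting the new one.
INSERT INTO customer_history (customer_id, name, valid_from)
VALUES (42, 'New name', datetime('now'));

-- The current state is simply the set of open intervals.
SELECT customer_id, name
  FROM customer_history
 WHERE valid_to IS NULL;
[/sql]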
Read More -
Abstract: Historicizing data is often considered a “secondary” activity, but bad design leads to databases that are difficult to optimize and often hard to understand. In this article we illustrate a simple method that respects the dictates of relational theory, is easy to understand and is actively used in production.
Read More -
Some co-workers started using Apache Kafka with a bunch of our customers.
Apache Kafka is a community distributed event streaming platform capable of handling trillions of events a day. Initially conceived as a messaging queue, Kafka is based on an abstraction of a distributed commit log[*].
To reach this goal, Apache Kafka needs a complex server setup, even more complex if you want the certification from the producing company (Confluent). So, if you are planning to use Kafka as a simple Java Message Service (JMS) replacement, think twice before going down this route. PostgreSQL 12 offers a fair (and open source) partitioning implementation, whereas if money is not a problem, Oracle 12c can happily scale to billions of records before running into trouble (and ExaData can scale even more).
PostgreSQL and Oracle offer optimizations for partitioned data, called “Partition Pruning” in PostgreSQL terminology:
With partition pruning enabled, the planner will examine the definition of each partition and prove that the partition need not be scanned because it could not contain any rows meeting the query's WHERE clause. When the planner can prove this, it excludes (prunes) the partition from the query plan.
This feature is quite new (it appeared in PostgreSQL 11), but it is essential to a successful partitioning strategy. Before this feature, partitioning was a black art. Now it is simpler to manage.
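As a quick illustration (table names are made up), this is what declarative partitioning and pruning look like on PostgreSQL 11 or later:

[sql]
-- A range-partitioned table with one partition per year.
CREATE TABLE events (
  event_id   BIGINT      NOT NULL,
  created_at TIMESTAMPTZ NOT NULL,
  payload    TEXT
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2019 PARTITION OF events
  FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
CREATE TABLE events_2020 PARTITION OF events
  FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

-- With pruning enabled (the default), the plan scans only events_2020,
-- because events_2019 cannot contain rows matching the WHERE clause.
SET enable_partition_pruning = on;
EXPLAIN SELECT * FROM events WHERE created_at >= '2020-06-01';
[/sql]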
Read More -
I admit it. I suffered from an “algebra narcoleptic syndrome” during my relational database lessons at university (circa 1996).
Read More -
For a complete description see https://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins
All the Joins you want -
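For a quick textual recap of what the linked article illustrates visually (tables and columns here are invented):

[sql]
-- Only customers that have at least one matching order.
SELECT c.name, o.total
  FROM customers c
 INNER JOIN orders o ON o.customer_id = c.id;

-- Every customer, with NULL order columns when there is no match.
SELECT c.name, o.total
  FROM customers c
  LEFT JOIN orders o ON o.customer_id = c.id;

-- Everything from both sides, matched where possible.
SELECT c.name, o.total
  FROM customers c
  FULL JOIN orders o ON o.customer_id = c.id;
[/sql]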
The Sqlite Oracle Compatibility Functions project is an experimental compatibility layer that brings Oracle functions to SQLite, written in Python 3.
Read More -
Sometimes you need to remove nasty duplicates from a table, based on a subset of its columns. Every big database has something called a “rowid”, which can be used to identify a row in a unique way. On PostgreSQL it is called ctid, as we shall see:
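As a sketch of the PostgreSQL flavour of the trick (table and column names are made up), ctid lets you tell physically distinct rows apart even when all their “business” columns are identical:

[sql]
-- Keep one row per (col_a, col_b) pair and delete the other physical copies.
DELETE FROM my_table
 WHERE ctid IN (SELECT ctid
                  FROM (SELECT ctid,
                               ROW_NUMBER() OVER (PARTITION BY col_a, col_b) AS rn
                          FROM my_table) d
                 WHERE d.rn > 1);
[/sql]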
Read More -
Oracle SQL Developer is full of nice features, damned by an overwhelming options pane, like the one I will describe to you right now.
Even if Oracle databases (<12c) do not support auto-increment columns, you can easily ask SQL Data Modeler to generate a sequence and a trigger for you automatically.
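The generated DDL looks roughly like the classic pattern below (object names are hypothetical; the exact output of the tool may differ):

[sql]
CREATE TABLE MY_TABLE (
  ID   NUMBER       NOT NULL PRIMARY KEY,
  NAME VARCHAR2(50)
);

CREATE SEQUENCE MY_TABLE_SEQ;

-- Before-insert trigger that emulates auto increment on pre-12c databases.
CREATE OR REPLACE TRIGGER MY_TABLE_BI
BEFORE INSERT ON MY_TABLE
FOR EACH ROW
BEGIN
  IF :NEW.ID IS NULL THEN
    SELECT MY_TABLE_SEQ.NEXTVAL INTO :NEW.ID FROM DUAL;
  END IF;
END;
/
[/sql]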
Read More -
Oracle SQL Developer is full of nice features, damned by an overwhelming options pane, like the one I will describe to you right now.
Read More -
I stumbled upon a very brain-f**k error on Oracle 10g these days.
Context: the following query

[sql]SELECT * FROM (
  SELECT TO_NUMBER(CUSTOMER_ID) AS SNDG FROM BAD_CODES_TABLE
  WHERE I_LIKE=UPPER('STATIC_CONDITION') AND CUSTOMER_ID NOT LIKE '%P%'
) S WHERE TO_NUMBER(S.SNDG) > 2000[/sql]

could trigger an "invalid number" error if the CUSTOMER_ID column contains invalid numbers. Why?
Well… if you ask for an “explain plan”, you will get something like:
- a table full scan
- Filter Predicates (AND):
  - I_LIKE=UPPER('STATIC_CONDITION')
  - TO_NUMBER(S.SNDG) > 2000
  - CUSTOMER_ID NOT LIKE '%P%'

Oracle is free to evaluate those filter predicates in whatever order it prefers, so TO_NUMBER(S.SNDG) > 2000 can be applied to a row before CUSTOMER_ID NOT LIKE '%P%' has excluded the non-numeric values.
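The excerpt stops here, but as a sketch of one possible workaround (not necessarily the one the full post proposes), you can make the conversion itself defensive, so the evaluation order chosen by the optimizer no longer matters:

[sql]
-- Convert only values that are purely numeric; everything else becomes NULL,
-- and NULL > 2000 simply filters the row out.
SELECT *
  FROM (SELECT TO_NUMBER(CASE
                           WHEN REGEXP_LIKE(CUSTOMER_ID, '^[0-9]+$')
                           THEN CUSTOMER_ID
                         END) AS SNDG
          FROM BAD_CODES_TABLE
         WHERE I_LIKE = UPPER('STATIC_CONDITION')
           AND CUSTOMER_ID NOT LIKE '%P%') S
 WHERE S.SNDG > 2000
[/sql]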
Read More -
-
How to avoid SQL injection: the SQL Server side (SP_EXECUTESQL)
In general, writing dynamic SQL queries must be avoided at all costs. That is, on the SQL Server side, avoid the use of sp_executesql and EXEC.
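As an illustration of the difference (object names are invented), the concatenated form below is the pattern being warned against, while the parameterized call keeps user input as data rather than code:

[sql]
DECLARE @userInput NVARCHAR(50) = N'O''Brien';

-- Vulnerable: the input becomes part of the SQL text itself.
EXEC (N'SELECT * FROM Customers WHERE Name = ''' + @userInput + N'''');

-- If dynamic SQL really cannot be avoided, sp_executesql with a typed
-- parameter at least keeps the value out of the statement text.
EXEC sp_executesql
     N'SELECT * FROM Customers WHERE Name = @name',
     N'@name NVARCHAR(50)',
     @name = @userInput;
[/sql]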
Read More -
It is easy to make a backup with SQL Server: just right-click a database and select Tasks >> Backup. But how do you restore it?… Here is a simple script that clarifies the matter (not always obvious from the restore wizard…):

[sql]
-- Use the following command to retrieve the source logical file names to use in the MOVE clause.
-- In our case they will be MY_BACKUP and MY_BACKUP_log.
RESTORE FILELISTONLY FROM DISK = N'C:\TEMP\MY_BACKUP\Backup.bak';
[/sql]
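The restore itself then looks something like the statement below, using the logical names returned by FILELISTONLY; the target paths here are hypothetical:

[sql]
RESTORE DATABASE MY_BACKUP
FROM DISK = N'C:\TEMP\MY_BACKUP\Backup.bak'
WITH MOVE N'MY_BACKUP'     TO N'C:\TEMP\MY_BACKUP\MY_BACKUP.mdf',
     MOVE N'MY_BACKUP_log' TO N'C:\TEMP\MY_BACKUP\MY_BACKUP_log.ldf',
     REPLACE;
[/sql]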
Read More
