!Support Ukraine!

Posted in English Content, Italian Content | Leave a comment

HarborMaster!

This entry is part 6 of 10 in the series Unix swissknife

After creating Misterio, I found this alternative called HarborMaster:

Harbormaster is a small utility that lets you easily deploy multiple Docker-Compose applications on a single host.

It does this by taking a list of git repository URLs that contain Docker Compose files and running the Compose apps they contain. It will also handle updating/restarting the apps when the repositories change.

….

Do you have a work server that you want to run a few small services on, but don’t want to have to manually manage it? Do you find it tidier to have every deployment action live in a git repo?

Harbormaster is for you.

If you need something more resource-friendly than k8s, Harbormaster is worth a try: it offers a good feature set and can be a winner.

Compared to Misterio it adds a few more dependencies (Misterio only needs bash, an ssh daemon and docker on the target machine), so stick with whichever you prefer.

 

Posted in DevOps, English Content, Python | Tagged | Comments Off on HarborMaster!

Erlang & resilience with pattern matching and processes

What does Erlang teach us about resilience, in a period when we need it so much?

Erlang is an open source language born in 1986 at Ericsson: functional, concurrent, dynamically typed, general purpose and, above all, built to be "resilient" from its very first days.

It is the engine behind RabbitMQ and CouchDB, and its latest release ships a new Just In Time (JIT) compiler that improves performance.

Continue reading

Posted in Erlang, Italian Content | Tagged | Comments Off on Erlang & resilience with pattern matching and processes

Simple method to add historic feature to SQLite

Abstract: Making a database able to store historic modifications is often considered a “secondary” activity, but bad design leads to databases that are difficult to optimize and often hard to understand. In this article we illustrate a simple method that respects the dictates of relational theory and is easy to understand. As a plus, we will show it on SQLite, a small but powerful database system.

Authors: Ernesto Ghetti (idea), Giovanni Giorgi (write-up and demo triggers)

Let’s suppose we have a table called “CUSTOMER” and we want to log any changes made to its fields, for example on a daily basis.

It is usually tempting to add a couple of START & END DATE columns, only to realize that querying this table to find the current data becomes a nightmare, especially if there are other related tables. Another approach is a separate CUSTOMER_HISTORY table, but that means duplicating every table and ends up in a cumbersome scheme for updating a group of related tables.

Let's start with an E-R model like this:

 

We are going to create a CUSTOMER table with a logical primary key NDG and a physical primary key XD. Then we define a relationship with the same table using XDS (iDentifier hiStory) as a foreign key. Let's add some data (like the customer name) and a couple of dates to register when the record is inserted / updated, as well as the author (editor) of the change in question:

create table CUSTOMER(
    xd number not null, 
    NDG number not null,
    xds number,
    d_start timestamp,
    d_end timestamp,
    editor varchar2(40),
    name varchar2(80),
    Primary key (xd),
 CONSTRAINT FK_HISTORY_CUSTOMER
    FOREIGN KEY (XDS)
    REFERENCES CUSTOMER (XD)
);

-- Forbids different CUSTOMERS with same NDG:
create unique index CUSTOMER_UQ_NDG ON CUSTOMER( case when xd=xds then NDG END );

-- View on active data
create view VCUSTOMER as 
 select * from CUSTOMER where xd=xds;

Then let’s define a couple of rules:

  1. The current record is the one with XD = XDS.
    It could have END_DATE = null
  2. A historicized record always has XD > XDS and must have END_DATE (d_end) set
  3. Also, we want to prevent different customers from having the same logical primary key NDG (enforced by the CUSTOMER_UQ_NDG index above).

As you can see, it is possible to enforce logical constraint (3) without much hassle: we used “indexes on expressions”, available in SQLite since 2015.

Creating a new record is easy:

-- Insert record example (unique id are fixed for simplicity)
INSERT INTO CUSTOMER(XD,XDS,d_start,d_end,ndg,name,editor) values(1 ,1,date('now'),null,100,'Zeno Parisi','BOT');

select * from customer order by XD desc;
+----+-----+-----+------------+-------+--------+-------------+
| xd | NDG | xds |  d_start   | d_end | editor |    name     |
+----+-----+-----+------------+-------+--------+-------------+
| 1  | 100 | 1   | 2021-03-31 |       | BOT    | Zeno Parisi |
+----+-----+-----+------------+-------+--------+-------------+
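
Rule (3) above is enforced right away by the expression index: a second active record with the same NDG is rejected, while archived copies are ignored. A quick check (the id 99 is just an arbitrary free value picked for the test):

-- A second active record (XD = XDS) with NDG = 100 violates CUSTOMER_UQ_NDG
INSERT INTO CUSTOMER(XD,XDS,d_start,d_end,ndg,name,editor) values(99,99,date('now'),null,100,'Duplicate Zeno','BOT');
-- ...fails with a UNIQUE constraint error on CUSTOMER_UQ_NDG,
-- while historicized copies (XD > XDS) of the same NDG are still allowed.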

Now the tricky part: how do we update a record?

We use a “shrimp” strategy (it moves backwards): first of all we create a copy of the current record, and the copy keeps pointing to the current one through its XDS column.

-- Copy old record XD > XDS
-- On SQLite every row has a special unique rowid column: we use it here to build
-- the id of the copy
INSERT INTO CUSTOMER(XD,XDS,d_start,d_end,editor,name,ndg)
select rowid+1 /*trick to get a new one */, XDS,d_start,date('now'),editor,name,ndg
from CUSTOMER where xd=xds and NDG=100;

-- Update XD=XDS
update CUSTOMER set name='Zeno Paris', editor='GG', d_start=date('now') where xd=xds and ndg=100;

Then we update the current record in place with the new values we want to store (the UPDATE statement above).
The result is below:

-- After update
select * from customer order by XD desc;
+----+-----+-----+------------+------------+--------+-------------+
| xd | NDG | xds |  d_start   |   d_end    | editor |    name     |
+----+-----+-----+------------+------------+--------+-------------+
| 2  | 100 | 1   | 2021-03-31 | 2021-03-31 | BOT    | Zeno Parisi |
| 1  | 100 | 1   | 2021-03-31 |            | GG     | Zeno Paris  |
+----+-----+-----+------------+------------+--------+-------------+
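
Reading the data back stays simple: the active version is in the VCUSTOMER view, and the full history of a customer is one plain query away. A minimal sketch against the table above:

-- Full history of customer 100: the active row (xd = xds) plus its archived copies
select xd, xds, d_start, d_end, editor, name
from CUSTOMER
where NDG = 100
order by xd desc;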

External relation support

This strategy supports external relations too, as you can see in the OWNED_SERVICE table model sketched below.
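
A minimal sketch of such a table, following the same XD/XDS convention plus a foreign key to the customer's physical id (the exact columns here are an assumption, kept consistent with the rest of the example):

-- Sketch: a dependent table historicized with the same XD/XDS trick.
-- customer_xd points at the physical id of the owning CUSTOMER row.
create table OWNED_SERVICE(
    xd number not null,
    xds number,
    customer_xd number not null,
    service_name varchar2(80),
    d_start timestamp,
    d_end timestamp,
    Primary key (xd),
 CONSTRAINT FK_HISTORY_OWNED_SERVICE
    FOREIGN KEY (XDS)
    REFERENCES OWNED_SERVICE (XD),
 CONSTRAINT FK_OWNED_SERVICE_CUSTOMER
    FOREIGN KEY (customer_xd)
    REFERENCES CUSTOMER (XD)
);

-- Only one active row per (customer, service) pair
create unique index OWNED_SERVICE_UQ ON OWNED_SERVICE(
    case when xd=xds then customer_xd END,
    case when xd=xds then service_name END);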

 

Trigger support

The whole dance can be made transparent with a couple of INSTEAD OF triggers on the VCUSTOMER view: an UPDATE on the view archives the old row and then modifies the current one in place, while an INSERT takes care of assigning the ids.

-- New record (see below)
INSERT INTO customer(XD,XDS,d_start,d_end,ndg,name,editor) 
select max(rowid)+1,max(rowid)+1, date('now'),null,200,'Scott Tiger','BOT'
from customer;


/* Trigger support for transparent updates on vcustomer
 * 
 */
create trigger auto_history_customer_update
instead of update on vcustomer
FOR EACH ROW
begin
 INSERT INTO CUSTOMER(XD,XDS,d_start,d_end,editor,name,ndg)
    select max(rowid)+1 /*trick to get a new one */, XDS,d_start,date('now'),editor,name,ndg
    from CUSTOMER where xd=xds and NDG =OLD.ndg;
 update CUSTOMER set name=NEW.name, editor=NEW.editor, d_start=date('now') where xd=xds and xd=OLD.XD;
end;

/* This trigger only manages the XD/XDS assignment; the other columns come from the NEW values */

create trigger auto_history_customer_insert
instead of insert on vcustomer
FOR EACH ROW
begin
 INSERT INTO CUSTOMER(XD,XDS,          d_start,     d_end,    editor,     name,     ndg)
  select    max(rowid)+1,max(rowid)+1, NEW.d_start, NEW.d_end,NEW.editor, NEW.name, NEW.NDG
  from customer;
end;



-- Trigger support example

update vcustomer  set editor='Changed via trigger' where NDG=200;
-- select * from customer order by xd desc;
select * from  vcustomer;
+----+-----+-----+------------+-------+---------------------+-------------+
| xd | NDG | xds |  d_start   | d_end |       editor        |    name     |
+----+-----+-----+------------+-------+---------------------+-------------+
| 1  | 100 | 1   | 2021-03-31 |       | GG                  | Zeno Paris  |
| 3  | 200 | 3   | 2021-03-31 |       | Changed via trigger | Scott Tiger |
+----+-----+-----+------------+-------+---------------------+-------------+
Real table data:
+----+-----+-----+------------+------------+---------------------+-------------+
| xd | NDG | xds |  d_start   |   d_end    |       editor        |    name     |
+----+-----+-----+------------+------------+---------------------+-------------+
| 1  | 100 | 1   | 2021-03-31 |            | GG                  | Zeno Paris  |
| 2  | 100 | 1   | 2021-03-31 | 2021-03-31 | BOT                 | Zeno Parisi |
| 3  | 200 | 3   | 2021-03-31 |            | Changed via trigger | Scott Tiger |
| 4  | 200 | 3   | 2021-03-31 | 2021-03-31 | BOT                 | Scott Tiger |
+----+-----+-----+------------+------------+---------------------+-------------+

 

Posted in English Content, IT Featured, Knowledgebase, Structured Query Language (SQL) | Tagged , | Comments Off on Simple method to add historic feature to SQLite

Simple method to historicize data in a database

Abstract: Historicizing data is often considered a “secondary” activity, but bad design leads to databases that are difficult to optimize and often hard to understand. In this article we illustrate a simple method that respects the dictates of relational theory, is easy to understand and is actively used in production.

Authors: Ernesto Ghetti (idea), Giovanni Giorgi (write-up and demo triggers)

Let's suppose we have a table called “CUSTOMER” and we want to historicize every change made to its fields, for example on a daily basis. It is usually tempting to add a couple of START & END DATE columns, only to realize that querying this table to find the current data becomes a nightmare, especially if there are other related tables. Another approach is a separate CUSTOMER_HISTORY table, but that means duplicating every table and ends up in a cumbersome scheme for updating a group of related tables.

Let's start from a different approach, using Oracle as the reference database (in upcoming articles we will see how to extend it to other, less celebrated databases).

We define a CUSTOMER table with a logical primary key NDG and a physical primary key XD. Then we define a relationship with the same table using XDS (iDentifier hiStory) as a foreign key. Let's add some data (say, the name of this customer…) and a couple of dates to register when the record is inserted/updated, as well as the author (editor) of the change in question:

CREATE SEQUENCE ID_GEN INCREMENT BY 1 START WITH 1 MINVALUE 1 CACHE 5;

create table CUSTOMER(
    xd number not null,
    NDG number not null,
    xds number,
    d_start timestamp,
    d_end timestamp,
    editor varchar2(40),
    name varchar2(80),
    Primary key (xd),
 CONSTRAINT FK_HISTORY_CUSTOMER
    FOREIGN KEY (XDS)
    REFERENCES CUSTOMER (XD)
);



Then let's define a couple of rules:

  1. The current record is the one with XD = XDS.
    It can have END_DATE = null or an "infinite" value (e.g. year 9999)
  2. A historicized record always has XD > XDS and must have END_DATE set

How to manipulate the data

Creating new records is easy: we use a sequence and we respect rule (1):

INSERT INTO CUSTOMER(XD,XDS,d_start,d_end,ndg,name,editor) values(id_gen.nextval ,id_gen.nextval,sysdate,null,100,'Zeno Parisi','BOT');

To update an existing record we use a “shrimp” (walk-backwards) approach: first make a copy of the data, then modify the “current” record:

-- Copy old record XD > XDS
INSERT INTO CUSTOMER(XD,XDS,d_start,d_end,editor,name,ndg)
select id_gen.nextval,XDS,d_start,sysdate,editor,name,ndg
from CUSTOMER where xd=xds and NDG=100;
-- Update XD=XDS
update CUSTOMER set name='Zeno Paris', editor='GG', d_start=sysdate where xd=xds and ndg=100;

Give yourself a minute to “digest” the idea.

To always see the active records, we can create a supporting view:

-- View on active data
create view VCUSTOMER as 
 select * from CUSTOMER where xd=xds;

The benefits of this strategy are huge: it is easy to find all the historic records (using the XDS foreign key pointing to the table itself) and you can have stable external relations (more on that later).
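
For example, here is a minimal sketch of a query that walks the self-referencing foreign key to list every archived copy of the current record:

-- All archived copies of the current customer 100:
-- each copy points back to the active row through XDS
select h.xd, h.d_start, h.d_end, h.editor, h.name
from CUSTOMER c
join CUSTOMER h on h.xds = c.xd and h.xd > h.xds
where c.xd = c.xds and c.ndg = 100
order by h.xd desc;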

To demonstrate the soundness of this approach, let's see how to define the logical primary key NDG.

We will use a “function-based index”, which luckily is available in many modern databases:

-- Forbids different CUSTOMERS with same NDG:
create unique index CUSTOMER_UQ_NDG ON CUSTOMER( case when xd=xds then NDG END );

This index guarantees that two different active records (XD=XDS) cannot share the same NDG, while it ignores historic records, which by construction satisfy the constraint at the moment they are created.

External relations

This method is interesting because it allows adding external relations in a consistent and independent way.
To show this, let's introduce the OWNED_SERVICE relation, a many-to-one relationship in which a customer owns zero or more services, with each service appearing only once per customer:

create table OWNED_SERVICE(
    customer_xd number not null,
    service_name varchar2(80),
    xd number not null,    
    xds number,
    d_start timestamp,
    d_end timestamp,
    
    Primary key (xd),
 CONSTRAINT FK_HISTORY_OWNED_SERVICE
    FOREIGN KEY (XDS)
    REFERENCES OWNED_SERVICE (XD),
 CONSTRAINT FK_OWNED_SERVICE_CUSTOMER
    FOREIGN KEY (customer_xd)
    REFERENCES CUSTOMER (XD)
);
-- Dependency relation uniqueness
create unique index OWNED_SERVICE_UQ ON OWNED_SERVICE(
    case when xd=xds then customer_xd END,
    case when xd=xds then service_name END);

In particular, if the CUSTOMER's services do not change, the external key keeps pointing to the “parent” record through the CUSTOMER_XD foreign key.

A historic record can explore the relation from CUSTOMER by taking an archived record and navigating XDS -> XD <-> CUSTOMER_XD to find the OWNED_SERVICE that was valid at the time of historicization, possibly cross-checking the validity dates of the two tables.

The following example query computes the “drift”, i.e. the difference between the updates of a service and those of its customer:

-- When the service was registered, what was the customer's name?
select c.ndg, c.name, c.d_start, s.service_name, s.d_start as service_start,  (c.d_start - s.d_start) as drift
    from customer c, owned_service s
    where ndg=100 and
    ((c.xd=c.xds and s.customer_xd =c.xd)
     or  (s.customer_xd=c.xds ) )
order by drift desc   ;


100 Zeno Cosimo 06-MAR-21 14:20:36,000000000 AbbonamentoCinema 06-MAR-21 14:12:05,000000000 +00 00:08:31.000000 
^^^^ The customer was modified after the service was created
100 Zeno Parisi 06-MAR-21 14:09:50,000000000 AbbonamentoCinema 06-MAR-21 14:12:05,000000000 -00 00:02:15.000000
^^^ This customer record existed before the service was created (two minutes before, to be exact)

This query tells us that Zeno Parisi bought an AbbonamentoCinema (a cinema subscription), and then changed his name to Zeno Cosimo eight minutes after the purchase (and maybe he should renew it…).

If the services change, we can historicize only the OWNED_SERVICE table, keeping the two pieces of information independent (as it should be). A sketch of this move follows.
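
As a sketch (the renamed service is made up for the example, and we assume the AbbonamentoCinema row from the drift query above), the very same “shrimp” move applies to OWNED_SERVICE; CUSTOMER_XD is left untouched, so the link to the customer stays stable:

-- Archive the current service row, then change it in place (hypothetical rename)
INSERT INTO OWNED_SERVICE(XD,XDS,customer_xd,service_name,d_start,d_end)
select id_gen.nextval, XDS, customer_xd, service_name, d_start, sysdate
from OWNED_SERVICE where xd=xds and service_name='AbbonamentoCinema';

update OWNED_SERVICE set service_name='AbbonamentoCinemaPlus', d_start=sysdate
where xd=xds and service_name='AbbonamentoCinema';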

PRE-UPDATE Trigger

The following trigger (Oracle 12+ syntax) is a simple example of handling updates automatically and creating the historicized copy.

NB: a trigger of this kind has to keep the affected records in memory to work, so it is a strategy to evaluate carefully when massive updates are needed.

In such contexts, prefer a simpler, “trigger-free” implementation.

create or replace TRIGGER AUTO_HISTORY_Customer
FOR UPDATE
ON Customer
COMPOUND TRIGGER  
   -- See https://www.oracletutorial.com/plsql-tutorial/mutating-table-error-in-oracle/
   TYPE t_customer_type IS TABLE OF Customer%ROWTYPE  
        INDEX BY PLS_INTEGER;   
   v_data t_customer_type ;

  BEFORE EACH ROW IS

  BEGIN
   IF UPDATING THEN
        -- Before update load old data and store it    
        DBMS_OUTPUT.PUT_LINE('BEFORE UPDATE');   

        if :old.xds != :old.xd then       
            RAISE_APPLICATION_ERROR (-20002, '!! HISTORICAL/INACTIVE RECORDS CANNOT BE UPDATED', TRUE);
        END IF;


        v_data(v_data.COUNT+1).xd  := :old.xd;
        v_data(v_data.COUNT).xds := :old.xds;
        v_data(v_data.COUNT).d_start := :old.d_start;

        v_data(v_data.COUNT).ndg := :old.ndg;
        v_data(v_data.COUNT).editor := :old.editor;
        v_data(v_data.COUNT).name := :old.name;


   END IF;
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    if UPDATING then
         -- Store copy now

         DBMS_OUTPUT.PUT_LINE('AFTER UPDATE');

        FOR i IN 1 .. v_data.COUNT  
        LOOP
            v_data(i).xd     :=id_gen.nextval;
            v_data(i).d_end  :=sysdate;

            insert into customer values v_data(i) ;     
        END LOOP;

    end if;
  END AFTER STATEMENT;
END AUTO_HISTORY_Customer;
/

Finally, for more information on the concept of normalization, refer to the Wikipedia article on database normalization.

Posted in Italian Content, Structured Query Language (SQL) | Tagged , | Comments Off on Simple method to historicize data in a database

runif

This entry is part 8 of 10 in the series Unix swissknife

Idempotent and minimal Python 3 library for rapid scripting.
It provides support for creating files, adding data to them, patching them and so on.

Why?

(Ba)sh scripting is very easy to set up, so we end up using it all the time for simple procedural scripts.

Sometimes it is useful to have an idempotent script which, as Ansible and SaltStack teach us, only performs an action if needed.

I had this need for a complex set of migration procedures.
I was unable to do it in bash.
Using Java was overkill.

So runif popped out.

Try the examples by running them from the root directory.

The run() function is very handy to fire a direct command, just like you would do in a bash script (running git pull and so on).

Note: runif is NOT a replacement for Gradle, GNU Make, Maven, etc.

Launch example

Install the package with

python setup.py install

Here is an example of what happens if you run the same script twice:

$ python examples/stepByStep.py
[INFO] runif.py.run_if_missed demo ===> step1
[INFO] runif.py.run_if_missed demo/demofile.txt ===> step2
[INFO] runif.py.run_if_missed demo/demofile2.c ===> step2
[INFO] runif.py.run_if_unmarked demo/demofile.txt ===> Step3
[INFO] runif.py.run_if_present demo/demofile.txt ===>
demo/demofile.txt present!
[INFO] runif.py.run_each demo\demofile2.c ===>
** demo\demofile2.c

$ python examples/stepByStep.py
[INFO] runif.py.run_if_present demo/demofile.txt ===>
demo/demofile.txt present!
[INFO] runif.py.run_each demo\demofile2.c ===>
** demo\demofile2.c

https://github.com/daitangio/runif

Posted in English Content, Gio's Software Projects, Knowledgebase, Projects, Python, Software | Tagged , | Comments Off on runif

Pybash

This entry is part 4 of 10 in the series Unix swissknife

The project has been renamed to “runif”.
This page is left here only for Google Search happiness.

Pybash is a Python library to make Python scripting as easy as the Bourne Again Shell (Bash).

Bash scripting is very easy to set up, so we end up using it all the time for simple procedural scripts.

Sometimes it is useful to have an idempotent script which, as Ansible and SaltStack teach us, only performs an action if needed.

Some time ago I had the need to convert a bunch of projects from ant (2004 stuff) to gradle. It was very hard work, and simple bash scripting was not up to the task.

The Python run, fix, rerun cycle was faster than with Groovy, and I know Python way better.
There are other projects like doit, but mine is easier to use and follows a minimalistic approach: only one file is needed to do everything.

Idempotent python library for rapid scripting
https://github.com/daitangio/runif
Posted in English Content, Knowledgebase, Python, Software | Tagged , | Comments Off on Pybash

Avoid Kafka if unsure (think twice series)

Some co-workers started using Apache Kafka with a bunch of our customers.

Apache Kafka is a community distributed event streaming platform capable of handling trillions of events a day. Initially conceived as a messaging queue, Kafka is based on an abstraction of a distributed commit log[*].

To achieve this goal, Apache Kafka needs a complex server setup, even more complex if you want it certified by the producing company (Confluent). Now, if you are planning to use Kafka as a simple Java Message Service (JMS) implementation, think twice before going down this route.

PostgreSQL 12 offers a fair (and open source) partitioning implementation, whereas if money is not a problem, Oracle 12c can happily scale to billions of records before running into trouble (and Exadata can scale even more).

PostgreSQL and Oracle offer optimizations for partitioned data, called “partition pruning” in PostgreSQL terminology:

With partition pruning enabled, the planner will examine the definition of each partition and prove that the partition need not be scanned because it could not contain any rows meeting the query’s WHERE clause. When the planner can prove this, it excludes (prunes) the partition from the query plan.

This feature is quite new (it appeared in PostgreSQL 11) but it is essential to a successful partitioning strategy. Before it, partitioning was a black-magic art; now it is simpler to manage.
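
As a rough sketch of what this buys you (table and column names are made up, PostgreSQL 11+ syntax), a range-partitioned table lets the planner skip every partition it can prove irrelevant to the WHERE clause:

-- Hypothetical range-partitioned event table
create table events (
    event_id   bigint      not null,
    created_at timestamptz not null,
    payload    text
) partition by range (created_at);

create table events_2021_03 partition of events
    for values from ('2021-03-01') to ('2021-04-01');
create table events_2021_04 partition of events
    for values from ('2021-04-01') to ('2021-05-01');

-- With partition pruning enabled (the default), the plan touches only events_2021_03
explain select count(*) from events
 where created_at >= '2021-03-10' and created_at < '2021-03-20';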

Continue reading

Posted in English Content, IT Featured, Knowledgebase, Structured Query Language (SQL) | Tagged , , | Comments Off on Avoid Kafka if unsure (think twice series)