Pesky Remote Meetings Take 2

There is a funny article called “I’m OK; The Bull Is Dead” that I want to use as an opening for this post.

Last week I attended a meeting about flipping a BUY/SELL direction involving three services, A -> B -> C. The initial B spec was right, but it was assumed to be wrong: someone on A asked for a change, and that change raised a complaint on C. The entire meeting was a circular discussion about how we could fix it (just a revert of the fix?), why it was done that way, and so on.

A 15-minute call would have been enough, but we wasted about an hour.
So, following the article above, the best approach is to report project status in this way:

1. Punch line: The facts; no adjectives, adverbs or modifiers. “Milestone 4 wasn’t hit on time, and we didn’t start Task 8 as planned.” Or, “Received charter approval as planned.”

2. Current status: How the punch-line statement affects the project. “Because of the missed milestone, the critical path has been delayed five days.”

[…]

3. Next steps: The solution, if any. “I will be able to make up three days during the next two weeks but will still be behind by two days.”

4. Explanation: The reason behind the punch line. “Two of the five days’ delay is due to late discovery of a hardware interface problem, and the remaining three days’ delay is due to being called to help the customer support staff for a production problem.”


Examples:

  1. Punch line: UAT tests will be delayed by one week.
  2. Current status: Because we found a critical wipe bug.
  3. Next steps: Prevent the wipe bug from deleting the entire production database.
  4. Explanation: One of our developers is a sub-optimal developer (a.k.a. stupid).


  1. Punch line: The system went offline during peak hours at 10:00 AM because the Exadata system shut down improperly.
  2. Current status: Transaction status is dirty. Branch offices are offline and unable to look up SSD transactions.
  3. Next steps: We are recovering the database; we estimate we will be back online tomorrow.
  4. Explanation: Database queries were very slow, the operating system started thrashing and was unable to write to the watchdog disk. The cluster watchdog thought the node was not responding because of a hardware failure, so it issued a reboot without shutting down Exadata properly. The load moved to the second server, which eventually crashed too.


  1. Punch line: The production database was restored from a one-week-old backup because it was unreadable and encrypted.
  2. Current status: We are losing 10% of our customers per day.
  3. Next steps: We are trying to decrypt the database and calm customers down.
  4. Explanation: The fired data architect, Smith, encrypted the database out of revenge.

One of these examples is real.
