A blog by Juri Urbainczyk

Thursday, June 7, 2018

The "Sleeping beauty"-effect in agile software architecture


Agile projects, especially those with multiple Scrum teams, need a moderated architecture process in order to facilitate architectural decisions. It is advantageous to decide as late as possible; nevertheless, the right point in time should be chosen deliberately. Otherwise, important architecture decisions may be overlooked or made too late.

The promise
In the ideal agile world, architecture grows by evolution (“no big upfront design”). The architecture is not sketched out in great detail before development starts; instead, it is created step by step during the work in the sprints. Decisions are made when triggered by work on a current user story. Until then, the team has time to gather know-how which will come in handy for evaluating architectural options. Since there is no “big upfront design” that has to be adjusted along the way, new requirements can be accepted more easily and integrated into development more quickly. As a consequence, risk will be lower and effort will decrease – that’s the hope, at least. Even quality should increase, because decisions can be made with more experience. All this rests on the implicit assumption that all knowledge needed to make a decision will be accessible to everyone involved at the right point in time.

The reality
This assumption is – as we will see – incorrect, which leads to some negative effects. Some preliminary work on the architecture already helps at the very start of an agile project: the question of which part of the system a Scrum team should work on can only be addressed sensibly when there is at least a rough breakdown of the “system”.

Basically, deciding late is not a bad idea. In fact, there are many architecture questions which can be postponed. For example, you don’t need to decide on a browser as long as you don’t use browser-specific functions, and you don’t need to talk about the implementation details of microservices that will only be accessed via a web service API. To decide late indeed means to decide with greater knowledge and therefore with greater quality. Thus, the focus can be shifted to the current and immediate problems.


Picture 1: Sleeping beauty phase

Yet this is also the catch: the “immediate” problems tend to push the postponed decisions off the agenda. They enter a “sleeping beauty” mode, and waking them up is hard. Somewhere in the project wiki they are documented, but who keeps track? All of a sudden the team faces the problem that a new component has to be implemented in the next sprint – but the decision about its internal technology has not been made. Important business and conceptual information needed for the decision is missing, and it takes time to acquire. As we can see, the assumption of “always available information” does not hold.

Due to the “sleeping beauty” effect, the project falls behind and misses its schedule – which could have been prevented by some preliminary work. The same effect also casts a shadow on the hopes for better quality, since decisions made under time pressure tend to be suboptimal. Furthermore, non-functional requirements also fall prey to this effect, which can lead to even uglier consequences.

Best practices
But there is an effective remedy against the “sleeping beauty” effect: somebody has to watch the “sleeping” decisions and ensure that the necessary knowledge is acquired early enough. To that end, architecture decisions must be moderated across all teams on the project. The moderating body has to keep track of postponed decisions and is responsible for getting them back on the agenda right on time – early enough to carry out important preliminary work, e.g. to clarify business-relevant questions. Some decisions shouldn’t be postponed at all (e.g. because of dependencies on other topics). This weighing-up should also be done by the architecture body (you might call it a “guild”).
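One lightweight way to make this tangible is an explicit log of the sleeping decisions, each with a “wake-up” date that the guild reviews every sprint. A purely illustrative sketch (all names and fields are made up):

// purely illustrative: a log of postponed decisions with wake-up dates
var decisions = [
    { topic: 'internal technology of component X', status: 'postponed', wakeUp: '2018-07-01' },
    { topic: 'supported browser versions', status: 'postponed', wakeUp: '2018-08-15' }
];

// list every sleeping decision that must be woken before the given date
function dueDecisions(nextSprintStart) {
    return decisions.filter(function (d) {
        return d.status === 'postponed' && d.wakeUp <= nextSprintStart;
    });
}

console.log(dueDecisions('2018-07-10')); // -> component X is due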

Sometimes there are dependencies between decisions which, at first sight, seem to belong to different teams. Those overarching dependencies must be identified, tracked and resolved in order to achieve overall consistency. Experience shows that it’s a good idea to transfer this responsibility to a dedicated group of people. This enables evolutionary development of the architecture without running the risk of the “sleeping beauty” effect.

Sunday, November 22, 2015

node.js and MIDI

jazz-midi (www.thejazzpage.de) is a node.js module which allows access to MIDI interfaces. MIDI (Musical Instrument Digital Interface) is an international standard for communication between electronic music devices which was invented in the early 1980s. Recently, I have been playing around with jazz-midi quite a bit and I figured I should share my experiences here. First of all, opening MIDI devices is a breeze:

// load jazz-midi first (the constructor pattern below is how the module
// is typically instantiated; adjust if your version differs)
var jazz = require('jazz-midi');
var MIDI = new jazz.MIDI();

// try to open MIDI-Out port number 2
var outName = MIDI.MidiOutOpen(2);
if (outName) {
  console.log('Opened MIDI-Out port: ', outName);
} else {
  console.log('Cannot open MIDI-Out port!');
}


This example opens MIDI out port number 2 (if it is available). To get a list of available MIDI ports, just use MIDI.MidiOutList() or MIDI.MidiInList() respectively. As easy as this is, I quickly found out that you cannot open multiple ports and then, for instance, receive events from all of them simultaneously. I tried different ways to work around this issue (e.g. calling MidiOutOpen() multiple times), but nothing really worked. That's quite a limitation.
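A quick way to find valid port numbers on your machine is to print both lists first (assuming, as above, that MidiOutList() and MidiInList() return plain arrays of port names):

// print all available MIDI ports with their indices
var outs = MIDI.MidiOutList();
var ins = MIDI.MidiInList();
for (var i = 0; i < outs.length; i++) console.log('Out ' + i + ': ' + outs[i]);
for (var j = 0; j < ins.length; j++) console.log('In ' + j + ': ' + ins[j]);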

On to the next step, receiving and sending MIDI events:

// open MIDI-In port 1; the callback receives a timestamp and the raw message
var inName = MIDI.MidiInOpen( 1, function(t, msg) {
    // ignore "active sensing" (254 = 0xFE) and empty messages
    if ((msg[0] != 254) && (msg[0] != undefined)) {
        // forward the event, re-addressed to the global midiChannel (1-16)
        MIDI.MidiOut(msg[0]+midiChannel-1, msg[1], msg[2]);
    }
});


The above example opens MIDI in port 1. Whenever a MIDI event comes along, it triggers the anonymous callback function, which in turn forwards the event to the MIDI out port we just opened. The if-statement filters out all the MIDI "active sensing" events, which many devices generate just to tell the world that they are alive. Also, take a look at the MidiOut statement. It includes a global variable (bah!) called "midiChannel", which can hold values 1 to 16 and indicates the MIDI channel to send the data to. A MIDI device, such as a synthesizer, will only process the event if it listens on the corresponding channel.
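A short aside on the channel arithmetic: a MIDI status byte carries the message type in its upper four bits and the channel (0-15) in its lower four, which is why adding channel-1 to a base status byte re-addresses an event:

// MIDI status bytes: message type in the high nibble, channel in the low nibble
var NOTE_ON        = 144; // 0x90, note-on on channel 1
var NOTE_OFF       = 128; // 0x80, note-off on channel 1
var CONTROL_CHANGE = 176; // 0xB0, control change on channel 1

// e.g. a note-on status byte for channel 3 (channels are 1-16 in this post)
var status = NOTE_ON + 3 - 1; // 146 = 0x92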

The next step, obviously, is to play a note. In MIDI each note is defined by two MIDI events called 'note-on' and 'note-off'. A MIDI note-on must be parametrized with a key value (which defines the pitch) and the key attack velocity (which usually defines the volume); each can have values between 0 and 127. A note-off also carries a key value plus a release velocity (which is ignored by most MIDI devices). To play a note, you have to combine a note-on with a corresponding note-off event; the time difference between them defines the duration of the note. All this is accomplished by the following piece of code:

function playNote( note, velocity, length, channel ) {
    // 144 = 0x90 = note-on on channel 1, so 144+channel-1 addresses 'channel'
    MIDI.MidiOut(144+channel-1, note, velocity);
    setTimeout( function() {
            // a note-on with velocity 0 counts as note-off; send an explicit
            // note-off (128 = 0x80) as well, to be on the safe side
            MIDI.MidiOut(144+channel-1, note, 0);
            MIDI.MidiOut(128+channel-1, note, 64);
        },
        length );
}


This sends a note-on on the given MIDI channel (that's why the status byte is 144+channel-1) and then creates a timeout which sends the corresponding note-off 'length' milliseconds after the note-on.
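For example, with the out port opened earlier, middle C for half a second on channel 1 looks like this:

playNote(60, 100, 500, 1); // key 60 = middle C, velocity 100, 500 ms, channel 1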

Great, now we can play music on our devices triggered by JavaScript. But what happens if something goes awry and notes keep playing on the MIDI device forever? For this scenario the MIDI standard provides the 'all-notes-off' command, which tells the device to just shut up. Of course, we can send that from JavaScript as well, as the following example shows:

function sendAllNotesOff() {
    // loop over all 16 MIDI channels (176 = 0xB0 = control change)
    for (var i=0; i<16; i++) {
        MIDI.MidiOut( 176+i, 123, 0 );   // controller 123: all notes off
        MIDI.MidiOut( 176+i, 121, 0 );   // controller 121: reset all controllers
    }
}
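A handy place to call this is a shutdown hook, so that an interrupted script doesn't leave notes hanging (a small sketch using node's process events):

// silence all channels before exiting on Ctrl+C
process.on('SIGINT', function() {
    sendAllNotesOff();
    process.exit(0);
});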


The following screenshot shows the MIDI events captured by the node.js program when playing on the MIDI keyboard.

MIDI Events
Have fun.

Wednesday, April 8, 2015

Implementing a REST API with node.js

Every so often there is a need to implement some mock-up web services, or maybe even a prototype which offers its functionality via web services. This happened to me once again in March 2015 when preparing the API for a hackathon. To be precise, there already was an API, but it did not really fit the requirements of the hackathon’s sponsor. Therefore, we decided to implement a second API (as a mock API) on top of the existing one; our second API would add functionality to the existing services.
It was quite clear that the API should be REST with JSON as the transport format. Unfortunately, we had less than four weeks to design, implement and test the mock API, so only a very lightweight technology like node.js could make this possible.
Being JavaScript on the server, node.js is indeed a very lightweight technology. In its most basic configuration it requires you to write only one JavaScript file, which is then read and executed by node. Furthermore, it comes with a module concept which lets you include existing modules in your code with the “require” statement. Node also brings a package manager (named “npm”) which lets you easily install all those modules into your project’s file system. In our mock API we used the following imports:
var http = require('http');
var express = require('express');
var url = require('url');
var path = require('path');
var fs = require('fs');
var queryString = require('querystring');

For our purposes, the “express” module is very important: it enables us to write REST services like the following example. This short piece of code implements a web service, using the HTTP “GET” method, which echoes all input back in the response.

var api = express();

api.get('/echo', function(req, res) {

    var myobj = "";

    // parse the request url
    var theUrl = url.parse( req.url );

    // check if there is a query
    if ((theUrl.query != null) && (theUrl.query != "")) {

        // get the query part of the URL and parse it, creating an object
        var queryObj = queryString.parse( theUrl.query );

        // queryObj contains the data of the query as an object
        // and jsonData is a property of it
        myobj = JSON.parse( queryObj.jsonData );
    }

    res.json( myobj );
});

As this example shows, it’s really quite simple. For the response, you create a JavaScript object (which might be quite a big one, with nested arrays and so on…) and transform it to JSON with “res.json”. This data is then returned to the client. There is a catch, though: with an HTTP “GET” method the input parameters are encoded in the URL, and therefore there is an upper limit on the amount of data which can be passed to the service (around 8 kB, depending on the browser and other factors). So, in order to potentially pass more data into the web service, a “POST” method should be used, as in the next example. This one also shows how to extract parameters from the request header and do some basic authentication. Anyway, please bear in mind that it's still only prototype code (e.g. logging, validation and error handling are merely stubbed out).

api.post('/api/booking', function (request, response) {

    // read the body (this requires a body-parsing middleware,
    // e.g. api.use(require('body-parser').json()))
    var jsonData = request.body.requestPayload;

    // read the headers
    var requestID = request.header('requestId');
    var userName = request.header('userName');
    var userPassword = request.header('password');
    var timeStamp = request.header('timeStamp');

    // basic authentication (helper not shown here)
    if (!checkAuthWithResponse(userName, userPassword, response)) return;

    var resultAsString = '';

    // header validation (helper not shown here)
    var err = oVali.runHeaderValidation(request);
    if (err.length > 0) {

        // some logging and error handling here
        response.status(400).send(resultAsString);
        return;
    }

    // some business logic here

    response.send(resultAsString);
})
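By the way, neither snippet actually starts the server yet; for that, the express app has to be bound to a port. A minimal sketch (the port number is arbitrary):

// start the HTTP server; port 3000 is just an example
api.listen(3000, function () {
    console.log('Mock API listening on port 3000');
});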

Our mock API was intended to implement additional business logic on top of the existing API. The client would then call the mock API’s services to execute this business logic. Because the logic depended only on the values of the services’ input parameters, there was no need for additional data storage or caching. Nevertheless, it would be quite easy to integrate a NoSQL database into node.js as well, for example.
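As a sketch of what such an integration could look like with the official mongodb driver of that era (assuming "npm install mongodb" and a local MongoDB instance; the database and collection names are made up):

// minimal sketch: store a booking in MongoDB
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mockapi', function (err, db) {
    if (err) throw err;
    db.collection('bookings').insertOne({ created: new Date() }, function (insertErr) {
        if (insertErr) console.log('insert failed: ', insertErr);
        db.close();
    });
});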
Our mock API based on node.js was implemented just in time for the hackathon and was used there with very good results. We experienced no outages and had no performance problems either.

As a summary, what are the advantages of using node.js for REST API development? We can name the following:

  1. As a developer, you are up and running very quickly. Install node, download some modules, write ONE file, and you are done.
  2. You don’t need to learn a lot of tools and IDEs (low overhead).
  3. The feedback cycles are very short, enabling you to develop with high speed.
  4. Integration with file system and databases is easy.
  5. Many standard problems can be solved out-of-the-box with only a few lines of code.
  6. There is a big community, helping you out if you run into problems. The online documentation and FAQs are huge.
  7. It’s very easy to transport node projects. Just zip and unzip the files and you are done.
  8. There are a lot of possibilities to deploy and run node projects on the web. Pricing starts very low as well.

The bottom line is: node.js is a perfect choice for implementing prototype REST services. To what extent this can be extended to full-blown operational services – that’s another matter.