searchzen - combating complexity so you do not have to

Code all the things

Much to my delight - it seems that the dramatic increase in new JavaScript MVC frameworks has stopped. Efforts like “TodoMVC” and “Realworld” have served to highlight the practical problems of using the web as a delivery platform, while also highlighting the already existing, well-crafted solutions to the known problems. It also seems that people have started “second guessing the modern web” - wondering if there is more to it than just optimizing with server-side rendering and data fetching.

Some time ago, I received a good piece of advice: “Do not think too much about the frontend, it will be redone in 5 years”. My experience so far has shown that this advice was solid - most of the choices made 5 years ago about encapsulating the variances in internet browsers are largely irrelevant today. I think it is safe to say that my earlier JavaScript framework favourites “Prototype”, “jQuery” and “ExtJS” have come and gone. Having acknowledged that I need to move on, I also realized that most of my architectural decisions have been formed by what I learned during these times - and to some extent my thinking has been formed by the words I learned to use in this period.

“There is more to life than Java applets”

Now that the years have passed, I also understand that my eagerness and preference for modelling everything using object constructs during my web programming days in the 00’s and the early 10’s were founded in what I had read in books and used in my early Java applet programming days. My intuitive fondness for what I saw in “ExtJS” came from how easily I could relate it to how I programmed solutions using Java and “AWT” - while paying little or no attention to the web as a delivery platform.

Modern web development, while still being centered around “SPA’s”, is now progressing more along the lines of reactive programming than the traditional MVC/MVVM approach - so in that sense web programming is evolving away from the traditional desktop/server metaphor and is now embracing concepts like reactive programming and immutable datastores. By employing immutable data stores, we now have a way to reason easily about what parts of the user interface should be updated, even allowing it to be done stepwise - in a coordinated manner. The concepts from “Reframe” now have widespread applications in the industry - e.g. “Redux” and “NGRX”. In addition, we now have the support of “map” and “reduce” constructs from “ES6” and a more declarative style of programming via “rxjs”.
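To make the immutable-store idea concrete, here is a minimal, framework-free sketch of the reducer style popularized by “Reframe”, “Redux” and “NGRX” - the action types and the todo-list domain are just illustrations:

// The state is never mutated; every action produces a new value,
// which makes it easy to reason about which parts of the UI must update.
function todosReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.text, done: false }];
    case 'TOGGLE_TODO':
      return state.map((todo, index) =>
        index === action.index ? { ...todo, done: !todo.done } : todo
      );
    default:
      return state;
  }
}

// Each call returns a fresh state value; the previous one is left untouched.
const s1 = todosReducer(undefined, { type: 'ADD_TODO', text: 'write post' });
const s2 = todosReducer(s1, { type: 'TOGGLE_TODO', index: 0 });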

But before progressing further along the lines of reactive programming and leaving traditional MVC style programming behind completely, I thought it would be worthwhile to stop and pause for a while to reflect on my route ahead.

“Hic Sunt Dracones”

In my daily practice I have seen good software development as something rooted in use cases - and I have made most of my practical implementation choices based on the availability of supporting tools. It is hard to start using a tool that you do not have - but when you get new tools, they usually build upon the learnings that were required to build them. My learnings so far have been rooted in experiences gained by trying to improve my existing practice using existing tools or processes, not by contemplating how to improve my practice with no reference to tools at all.

So how do we form a good practice for software development that is not rooted in tooling or use case analysis, but more in the line of captured experiences and less friction when developing? Thinking along these lines, I came to remember one afternoon where I visited “The Royal Geographic Society” and saw a “sextant” for the first time in my life. The sextant has come to me as a symbol of a tool that is now outdated, but has formed a line of new and updated tools that serve the same purpose. Tools can be refined, but their original intent and the continued mindset will endure. Similar tools exist in the software world, e.g. “LISP” and “Haskell” - these languages have defined whole families of ideas and served as a hallmark for industrial strength programming. But in themselves, these tools cannot carry the definitions of systems - they need earlier experiences to be captured in writing. For the sextant, nautical charts and maps are required. For software development we have developed the tradition of “design patterns” that can be used to navigate and discuss software architectures.

But still - as in the early days of exploration - effective use of a sextant and maps was not always enough; there still had to be an idea for the journey, as well as the passing along of experiences from earlier explorers. In the early days of exploration there was a tradition of marking dangerous or unexplored waters with “Hic Sunt Dracones”. This sentence could serve both as a warning and an invitation to later explorers.

In my experience, the dangerous waters in software development tend to be related to discrepancies between formal methods and the actual lived experiences of human beings. Formal methods, being based on logic or mathematical descriptions, will at some point fail to capture the essence of the problem at hand when the system being developed is supposed to assist human beings. Or to put it another way - a strictly systematic approach such as traditional “Software engineering” will fail to capture the elusive nature of human life. We need a fresh way to capture the ever changing conditions of human life in code in a rapid manner. In essence - we need to consider web development as a craft, not a formal discipline.

“Code all the things”

So how do we escape the formalism of Software engineering and focus more on the lived experiences of actual human beings? The answer to that question might seem a bit counter-intuitive. I think we need to code all the things!

Code all the things

I think that decisions or design considerations relevant to features should be made by the teams implementing them whenever possible. In my experience, code that captures or tries to mimic human experience will be more successfully implemented when the teams implementing it have a better understanding of the context (I tried to capture my thoughts on this as “Mindful software”). Given that features will be implemented using code, it would be beneficial if the features are described in a manner that uses code whenever possible. Concretely, “code all the things” would mean the following to me in the context of implementing features for an organisation in 2020:

What it specifically does NOT mean is:

  • Encapsulate and code everything in your frontend MVC framework. Some things are better done on the server side.
  • Refer to general descriptions on post-its captured during “wave planning”. Agility is not being vague.
  • Continuously refer to “what we agreed upon at standup”. Consensus is not always optimal.

My point here is that the more technical context a description can provide, the better. If the description is in an already running form, then that is optimal. Maybe we can find a way to perform system development where we code all the things and embrace change faster in this way?

MVC considered harmful

Having spent considerable amounts of time looking at large javascript and HTML related codebases I have come to one conclusion: most attempts at structuring frontend code or anything UI related in a team setting will fail if the attempts do not consider the long traditions of software development - and understand the mental models that have been formed so far. Or as George Santayana put it: “Those who cannot remember the past are condemned to repeat it”.

To better understand the historical context I think all web developers should be at least familiar with a basic description of the MVC pattern and with the existing frameworks that are already available.

A detour around Xerox PARC

Prior to the internet, programming mostly involved a workstation and in some cases a central server with a central datastore. Programming was mainly about data entry and computation, and how to share the results in a meaningful manner. Great care and effort was put into meeting the immediate expectations of the users, and programming was typically done using a structured plan - something that we know today as the waterfall model. Software modelling was (and still is) done by using metaphors to map user needs into software and carefully designing the software using object oriented constructs. One prevalent approach was to apply the MVC pattern invented by Trygve Reenskaug in 1978 while visiting Xerox PARC.

At the core of the MVC paradigm are:

  • Model - Encapsulating the data and the domain logic
  • View - Presenting data to the user
  • Controller - Responsible for controlling the flow in the application and sending data back and forth between the model and the view.

The MVC pattern has been the foundation for many of the modern user interfaces and frameworks we know today. Examples include Java Swing, Ruby on Rails and asp.net MVC.
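To make the division of responsibilities concrete, here is a minimal sketch in plain javascript - it is not tied to any of the frameworks above, and the counter domain is only an illustration:

// Model - holds the data and the domain logic, knows nothing about the DOM.
function CounterModel() {
  this.count = 0;
}
CounterModel.prototype.increment = function () {
  this.count += 1;
};

// View - only presents the data it is given.
function CounterView(element) {
  this.element = element;
}
CounterView.prototype.render = function (count) {
  this.element.textContent = 'Count: ' + count;
};

// Controller - controls the flow, moving data between the model and the view.
function CounterController(model, view) {
  this.model = model;
  this.view = view;
}
CounterController.prototype.onIncrementClicked = function () {
  this.model.increment();
  this.view.render(this.model.count);
};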

Several implementations of javascript MVC frameworks already exist. You can compare the different implementations via TodoMVC. As you might have noticed there are already a lot of implementations readily available, so there is no need to go about implementing your own.

Some of the currently most publicly visible projects are:

It should be noted that most of the listed frameworks have now implemented MVVM as an adaptation of MVC. Microsoft introduced MVVM as an adaptation of MVC with considerations about the costs of exposing the entire model to the view. This was originally available via Silverlight and WPF, and to some extent Adobe Flex - but has now been widely implemented by other frameworks.

Using MV* javascript frameworks is not a quick fix

With the advent of javascript MV* frameworks we should now have a shared vocabulary for programming, but for many practical use cases regarding web applications it is typically disregarded. Why is that?

One might argue that the bravado of your average-day web hacker resembles that of a hired gun in the wild west. It is very easy to reach small implementation goals fast using HTML, css and javascript. Javascript was introduced during the early days of the web to introduce more advanced ways of interaction in a web application - not as a means of structuring applications per se. The early version of javascript was implemented by Brendan Eich in 10 days and contains shortcomings that have led to the use of linters like jslint. Popular javascript libraries like prototypejs (which was popular during the rise of ruby on rails) and jquery tried to address some of the shortcomings and the disparities in DOM manipulation across the various browsers. Somewhat deservingly, javascript gained a reputation as a toy language among professional programmers in its early days - this situation has changed with the introduction of tools like jslint and the ability to focus on the good parts of javascript. And with the advent of javascript on the server side it is possible to build end-to-end solutions for clean room MVC - but in most practical scenarios, where the server implementation will be in something other than javascript, there will be some kind of duplication of logic on the server and the client. Or as Xzibit could have said:

Yo Dawg I hurd you like MVC so I put a MVC in your MVC

Jokes aside… It is important to understand that taking the MVC paradigm and applying it to web application programming in the same manner as you would have applied it to application programming in the PC era could be considered harmful. The web has an entirely different delivery mechanism than your local PC had.

Understanding the web as a delivery platform

The main difference between a local pc application and a web application today is typically that the web application involves data available from another machine. This should not pose a noticeable problem when the rendering logic is expressed on the server - e.g. when the whole view is rendered via one call to the server. But if the view is constructed on the client side and requires several calls to fetch data, e.g. for a compound view, then this could pose a problem performance-wise. It should be immediately obvious that a solution where all data comes prerendered will be faster than a solution that requires data to be assembled via several calls. When you also consider the added complexity of tackling layout changes as a result of unpredictable responses, then you could start looking for ways to improve performance. So - one obvious choice would be to minimize the number of calls needed to the server.
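As a sketch of the difference, a compound view assembled on the client needs several round trips before anything meaningful can be shown - the endpoints and the render function below are made up for illustration:

// Three calls before the compound view can be rendered (endpoints are hypothetical).
Promise.all([
  fetch('/api/user').then(function (res) { return res.json(); }),
  fetch('/api/orders').then(function (res) { return res.json(); }),
  fetch('/api/recommendations').then(function (res) { return res.json(); })
]).then(function (parts) {
  renderCompoundView(parts[0], parts[1], parts[2]); // hypothetical render function
});

// A server-rendered page (or a single aggregated endpoint such as '/api/dashboard')
// delivers the same view in one response instead of three.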

Where to go from here?

First of all - you should think very hard about what you are doing if you are planning to roll your own MVC framework. If you are going to do it anyway, I hope you at least browse through the already available frameworks, familiarize yourself with the basics of MV* and understand the performance penalties of clean room MVC.

One way to minimize the number of calls to the server would be to set up a fullstack solution - where javascript is used both on the server and on the client side. This could be done e.g. with react.js, which has introduced the use of a virtual DOM so that you can reason about the entire page before starting to render on the client side (here is an example to get started: react server example). react.js could be used for the V in MV*. For the M you could use Backbone.js or vanilla javascript constructs.
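A minimal sketch of the server side of such a setup, assuming the react and react-dom packages are installed - the Greeting component is only an illustration:

var React = require('react');
var ReactDOMServer = require('react-dom/server');

// A component can be rendered to an HTML string on the server,
// and the client can later take over the same markup.
function Greeting(props) {
  return React.createElement('h1', null, 'Hello, ' + props.name);
}

var html = ReactDOMServer.renderToString(React.createElement(Greeting, { name: 'world' }));
console.log(html); // an HTML string that can be sent to the browser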

If you are thinking about refactoring an existing codebase to support better separation of concerns, then I suggest that you consider a temporary goal before switching your entire codebase to e.g. angular or ember.js. If you are in a situation where views are constructed via javascript or some custom constructs on the server side, then you could consider whether you could express the same logic via an existing templating engine. E.g. if you are considering a switch to ember.js then it could be worthwhile to try to port some of your view logic to handlebars first. If your existing codebase was constructed sometime after 2000, then chances are that the data retrieval/manipulation parts are already expressed in a manner that is free of rendering logic - if not, then you could refactor them to be so.
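As a sketch of what such a port could look like, assuming the handlebars package is installed - the template and the data shape are made up for illustration:

var Handlebars = require('handlebars');

// The view logic lives in the template; the calling code only supplies data.
var source = '<ul>{{#each videos}}<li>{{title}} ({{year}})</li>{{/each}}</ul>';
var template = Handlebars.compile(source);

var html = template({
  videos: [
    { title: 'First video', year: 2013 },
    { title: 'Second video', year: 2014 }
  ]
});
// html now contains plain markup, free of application logic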

Web development and the order of things

Las Meninas, Picasso

I got the chance to visit the Picasso museum in Barcelona in April 2014. Here I saw the collection of Picasso’s interpretations of Velasquez’ “Las Meninas” that he completed during the later stages of his career.

During my visit to the museum I saw some Dutch high school students who had been given the task of sketching their own version of Picasso’s version of Velasquez’ work. It was fascinating to watch - it looked like some of them were approaching the task with a high level of energy, while some of them seemed quite indifferent to it.

Sitting there, watching students create an interpretation of an interpretation, I realized that I lack ways to describe how software development ideas are formed.

Software development, like art, is underpinned by tradition. The software development community has developed a tradition of capturing and sharing ideas via “Software patterns” - some of the most influential being captured by “The gang of four” in “Design Patterns: Elements of Reusable Object-Oriented Software” and “Patterns of Enterprise Application Architecture” by Martin Fowler.

But I have come to think about the limitations of design patterns. The most striking way I can describe the limitations is to describe how hard it is to express how the visual vocabulary from Velasquez’ “Las Meninas” is carried over into the Picasso version. These are two completely different works of art, but it is obvious to the onlooker that the scene is the same and the ideas are the same, even though they are expressed in different ways. It seems impossible to form an exhaustive list of patterns describing Velasquez’ “Las Meninas” that can also be used to describe the Picasso version. But when you stand there and watch, it seems obvious.

Another way to describe the limitations of design patterns would be to point out how hard it is to describe the success of Ruby on Rails and how it has influenced modern web development. Rails encompasses most of the popular enterprise design patterns, Active Record in particular. Rails stands as a cornerstone in the formation of ideas for modern web development frameworks today - I would consider it the “Las Meninas” of web development. I see the inspiration from Rails in most of the other web frameworks I use today. Granted, some of the ideas expressed in Rails can only be expressed by a combination of tools - e.g. I would choose a combination of express and yeoman as the “Picasso version”. Today, Rails itself and the frameworks inspired by it are hardly comparable - but when you sit there and watch, the feeling is the same.

Every now and then we talk of a “paradigm” shift in software development. I do not think that the word “paradigm” is adequate to describe the plethora of tools inspired by Rails that we have available today. I think a word that could be used is “Episteme”, as used by Foucault in “The order of things”. In my understanding of the word, it better expresses the unconscious choices we make due to our cultural settings and influences (our “epoch”).

I am just starting out reading the works of Michel Foucault. The introduction to “The order of things”, where he describes “Las Meninas” in great detail, fascinates me. I look forward to the journey. I hope to find new ways to describe how software ideas are formed.

Using qunit from grunt

When poking around in the jquery and jquery ui codebases I noticed an extensive use of qunit from grunt. I get that the jquery guys also did qunit - so this makes sense. But why grunt? I remember that the earlier versions just used makefiles.

Grunt and npm replace make, autoconf and apt for javascript projects

I remember Makefiles as simple - just as long as you got the use of tabs right and kept them simple. It was just “the other tools” that made the experience bad for me. Remember automake and m4 scripting? Not to mention cmake. Tools like make or ant are simple in themselves - they started getting complicated when the tradition of using associated tools arose: Makefiles tend to assume that dependencies are handled using automake and m4 scripts, and ant buildfiles retrieve dependencies with tools like ivy. When I consider the complexity I know from existing build toolchains on linux, grunt starts to look simple. The simplicity of grunt comes from how easy it is to combine grunt plugins, compared to how complex the build situation using other tools has become. Using grunt it is now possible to build using only javascript based tools.

Installing grunt and npm.

To be able to use grunt you will need nodejs and npm. You can find a nodejs installer for most platforms at nodejs.org - this will include npm. When you install you should make sure that the “node” and “npm” commands are available on your commandline via your “PATH” environment variable. (For my less commandline-savvy friends there are some detailed instructions for windows 7 here)

You can install grunt globally from the commandline like this:

npm install -g grunt-cli

To take your newly installed grunt for a spin, you could try it out by building jquery-ui. To check out jquery-ui you could do this:

git clone http://github.com/jquery/jquery-ui
cd jquery-ui
npm install
grunt --force

This should take you through an example of using grunt on an existing project. If all goes well, all the tests should pass and a new jquery-ui build should be available to you inside the “dist” folder.

Scaffolding a grunt project that supports qunit

The major strength of grunt is its strong tradition for plugins - but when starting up it can also be a major drawback: you need to set up some plugins to be able to start working. It does not help that the grunt-init command has been separated out into a plugin in version 0.4 (most of the existing blog entries just refer to it as being inside grunt). See the Project scaffolding section in the docs for more information.

To be able to run grunt-init you need to install the grunt-init plugin and grab a working template to start from:

npm install -g grunt-init
cd c:\users\jacob\.grunt-init
git clone https://github.com/gruntjs/grunt-init-gruntfile.git gruntfile

Note that my username is “jacob” and that I am on windows here. You will probably have to use another directory.

Now I could run “grunt-init gruntfile” and answer a couple of questions:

D:\Sites\2>grunt-init gruntfile --force
Running "init:gruntfile" (init) task
This task will create one or more files in the current directory, based on the
environment and the answers to a few questions. Note that answering "?" to any
question will show question-specific help and answering "none" to most questions
will leave its value blank.

Warning: Existing files may be overwritten! Used --force, continuing.

"gruntfile" template notes:
This template tries to guess file and directory paths, but you will most likely
need to edit the generated Gruntfile.js file before running grunt. If you run
grunt after generating the Gruntfile, and it exits with errors, edit the file!

Please answer the following:
[?] Is the DOM involved in ANY way? (Y/n)
[?] Will files be concatenated or minified? (Y/n)
[?] Will you have a package.json file? (Y/n)

After hitting Enter three times I got this file “Gruntfile.js”:

/*global module:false*/
module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    // Metadata.
    pkg: grunt.file.readJSON('package.json'),
    banner: '/*! <%= pkg.title || pkg.name %> - v<%= pkg.version %> - ' +
      '<%= grunt.template.today("yyyy-mm-dd") %>\n' +
      '<%= pkg.homepage ? "* " + pkg.homepage + "\\n" : "" %>' +
      '* Copyright (c) <%= grunt.template.today("yyyy") %> <%= pkg.author.name %>;' +
      ' Licensed <%= _.pluck(pkg.licenses, "type").join(", ") %> */\n',
    // Task configuration.
    concat: {
      options: {
        banner: '<%= banner %>',
        stripBanners: true
      },
      dist: {
        src: ['lib/<%= pkg.name %>.js'],
        dest: 'dist/<%= pkg.name %>.js'
      }
    },
    uglify: {
      options: {
        banner: '<%= banner %>'
      },
      dist: {
        src: '<%= concat.dist.dest %>',
        dest: 'dist/<%= pkg.name %>.min.js'
      }
    },
    jshint: {
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        unused: true,
        boss: true,
        eqnull: true,
        browser: true,
        globals: {}
      },
      gruntfile: {
        src: 'Gruntfile.js'
      },
      lib_test: {
        src: ['lib/**/*.js', 'test/**/*.js']
      }
    },
    qunit: {
      files: ['test/**/*.html']
    },
    watch: {
      gruntfile: {
        files: '<%= jshint.gruntfile.src %>',
        tasks: ['jshint:gruntfile']
      },
      lib_test: {
        files: '<%= jshint.lib_test.src %>',
        tasks: ['jshint:lib_test', 'qunit']
      }
    }
  });

  // These plugins provide necessary tasks.
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  // Default task.
  grunt.registerTask('default', ['jshint', 'qunit', 'concat', 'uglify']);

};

To be able to run this Gruntfile you need to install the necessary plugins. If you append “--save-dev” to the installation commands, then the installation info will be inserted into package.json:

npm install grunt-contrib-jshint --save-dev
npm install grunt-contrib-qunit --save-dev
npm install grunt-contrib-watch --save-dev
npm install grunt-contrib-concat --save-dev
npm install grunt-contrib-uglify --save-dev

This will retrieve the plugins, place them inside node_modules and add them to the devDependencies section of package.json. As you might have noticed, Grunt files are a bit verbose, but they support easy composition of plugins - e.g. if you would like to add another target using another plugin, you could do this pretty easily. The grunt files are written in javascript, so if you wish to insert custom logic in the build files, it should be pretty easy to do so (without worrying about tabs and spaces), as the sketch below shows.
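As a sketch of that kind of custom logic, a task is just a registered javascript function inside the Gruntfile - the task name “stamp” and the output path are made up for illustration:

// Inside module.exports = function(grunt) { ... } in Gruntfile.js
grunt.registerTask('stamp', 'Write a build timestamp to a file', function () {
  grunt.file.write('dist/build-info.txt', 'Built: ' + new Date().toISOString());
});

// It composes with the existing tasks like any plugin-provided task:
// grunt.registerTask('release', ['jshint', 'qunit', 'concat', 'uglify', 'stamp']);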

Note that once you have created a package.json describing your dependencies, you can simply run “npm install” to install them. There is no need to store the “node_modules” folder in your version control system.

Qunit replaces junit and phpunit on the frontend.

After installing grunt-contrib-qunit and enabling it in your gruntfile you now have the option of writing automated qunit tests that can be run directly from grunt. grunt-contrib-qunit uses phantomjs behind the scenes to enable you to run your tests directly from grunt (without opening a browser). This should make it easier to automate your tests.

I think the best way to learn qunit is to look at existing tests. The button core test in jquery ui is a good place to start.

The essential functionality in qunit is:

  • ok( truthy [, message] )
  • equal( actual, expected [, message] )
  • expect( number of assertions )

Combining these lets you write a test like the check for #7534 in jquery ui:

test( "#7534 - Button label selector works for ids with \":\"", function() {
  expect( 1 );
  var group = $( "<span><input type='checkbox' id='check:7534'> <label for='check:7534'>Label</label></span>" );
  group.find( "input" ).button();
  ok( group.find( "label" ).is( ".ui-button" ), "Found an id with a :" );
});

Here we expect 1 assertion to be run.

There are more advanced options in qunit that you can explore - feel free to take a look at the documentation on qunitjs.com, or be inspired by the existing tests in jquery-ui.

Building postgres on windows

So. Lately I have been advocating a switch to postgres from various BigCo databases. I have been implying that “postgres” is “just better”. But basically I don’t have a clue. I am not a database expert - and I did not take the advanced database classes in school. So - why am I doing this?

Beating myself on the head with a wooden stick

I believe in having the source available for all my tools. This is just a personal preference of mine… I like to poke around to learn new stuff… discover new ways and combine projects in new ways… but mostly I like to learn from the insights of others. See implementation tidbits buried deep down. My list of projects to poke around in is long. Postgres is just one of the projects I’d like to poke around in. So - now I am going to poke around in postgres. Let’s see if we can compile it on windows 7. I’ll just write down notes as I go along. Are you ready? Bring forward your wooden stick!

Checking out postgres on windows

So - the plan is to compile C/C++ code on windows. To do this you need a C/C++ compiler. I’ll just take the easy route here on windows and download Visual Studio Express 2012 for Windows Desktop. Note that you need to register to do this. Downloading and installing Visual Studio Express can take a while.

Then you need git. If you have been living under a rock for the last decade, then git is a distributed version control system - popularized by the linux kernel and github.com. You can grab a windows installer on git-scm.com. Go ahead and install it if you do not have it yet. You’ll be glad you did.

After installing git you can clone the postgres code base like this:

mkdir build
cd build
git clone git://git.postgresql.org/git/postgresql.git

After a while you should have the source code available. If you are like me, then you’ll probably hurry into \build\postgresql\src and notice win32.mak. Maybe this will work?

nmake /f win32.mak

No luck! It fails with:

        link.exe -lib @C:\Users\Jacob\AppData\Local\Temp\nm3F8F.tmp
NMAKE : fatal error U1073: don't know how to make 'libpq-dist.rc'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\BIN\nmake.EXE"' : return code '0x2'
Stop.
D:\Build\postgresql\src>

Luckily there is \build\postgresql\src\tools\msvc\build.pl .

Huh? What are *.pl files? That’s perl. If you don’t know what perl is, then you are in for a treat. Grab ActiveState Perl and install it if you don’t have it yet, so you will be able to process the file.

Now. After installing perl, let’s cross our fingers and type

perl build.pl

No?? msbuild throws up with:

D:\Build\postgresql\src\tools\msvc>perl build.pl
Detected hardware platform: Win32
Microsoft (R) Build Engine version 4.0.30319.17929
[Microsoft .NET Framework, version 4.0.30319.18052]
Copyright (C) Microsoft Corporation. All rights reserved.

Building the projects in this solution one at a time. To enable parallel build,
please add the "/m" switch.
Build started 06-10-2013 17:58:55.
Project "D:\Build\postgresql\pgsql.sln" on node 1 (default targets).
Building with tools version "2.0".
Target "ValidateSolutionConfiguration" in file "D:\Build\postgresql\pgsql.sln.m
etaproj" from project "D:\Build\postgresql\pgsql.sln" (entry point):
Using "Error" task from assembly "Microsoft.Build.Tasks.v4.0, Version=4.0.0.0,
Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".
Task "Error"
D:\Build\postgresql\pgsql.sln.metaproj : error MSB4126: The specified solution
configuration "Release|MCD" is invalid. Please specify a valid solution configu
ration using the Configuration and Platform properties (e.g. MSBuild.exe Soluti
on.sln /p:Configuration=Debug /p:Platform="Any CPU") or leave those properties
blank to use the default solution configuration. [D:\Build\postgresql\pgsql.sln
]
Done executing task "Error" -- FAILED.
Done building target "ValidateSolutionConfiguration" in project "pgsql.sln" --
FAILED.
Done Building Project "D:\Build\postgresql\pgsql.sln" (default targets) -- FAIL
ED.

Build FAILED.

Oh, you probably spotted it too: “Detected hardware platform: Win32”. I ran this using the “Developer Command Prompt for VS2012” - maybe this targets Win32 by default? If I select “Microsoft Visual Studio 2012” > “Visual Studio Tools” > “VS2012 x64 Cross Tools Command Prompt” and execute “build” again - then it works!

After the compilation finished I typed:

mkdir c:\postgres
install c:\postgres

Now I can use postgres from c:\postgres !

And now for something completely different

After finishing what I did above I threw out my custom compile and started using the postgres zip archive again. I kept the code locally though. Right now I am poking around in the source code using “Run Source code analysis on solution”. This gives me the lowdown on what Microsoft thinks could be improved in the code. Let’s see an example:

C6001 - Using uninitialized memory: "Using uninitialized memory 'replace_val'." (libpgtypes, timestamp.c, line 845)
  line 388: 'replace_val' is not initialized
  line 394: Enter this loop, (assume '*p')
  line 396: Enter this branch, (assume '*p==37')
  line 401: Assume switch ( '*p' ) resolves to case 99:
  line 845: 'replace_val' is an In/Out argument to 'pgtypes_fmt_replace' (declared at d:\build\postgresql\src\interfaces\ecpg\pgtypeslib\extern.h:37)
  line 845: 'replace_val' is used, but may not have been initialized

Note that this is a random example. Right now I have a limited understanding of the postgresql codebase, so following the hardening guidelines from OWASP seems like a good idea.

But wait! There’s more

I like what I see. Looks like there is an active community for developers here. And, oh, here is the official “Installation From Source on Windows” section in the documentation. It looks solid. I’ll go check that out now :P

installing geoserver on debian

I just installed geoserver on debian using apache 2.2. Here’s what I did:

First of all I installed jetty using “sudo aptitude install jetty”, then I grabbed the geoserver source from http://svn.codehaus.org/geoserver/trunk/ and compiled it using openjdk-6 and maven 2.2 (it looks like the build fails using the standard maven in debian, so I grabbed a version of maven from ftp://mirrors.sunsite.dk).

After compiling, I copied geoserver/src/web/app/target/geoserver.war to /usr/share/jetty/webapps/ and restarted jetty using /etc/init.d/jetty restart .

I can only access port 80 on my webhost, and I need apache 2 for other purposes, so I had to configure mod_proxy. I set up a virtual host in /etc/apache2/sites-available/geo.searchzen.org and symlinked it to /etc/apache2/sites-enabled/geo.searchzen.org. To enable mod_proxy, I created symlinks for /etc/apache2/mods-available/proxy.load and /etc/apache2/mods-available/proxy_http.load in /etc/apache2/mods-enabled. (mod_proxy fails without the symlink to proxy_http.load)

Here are the relevant parts of my mod_proxy configuration in /etc/apache2/sites-enabled/geo.searchzen.org:
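A minimal virtual host along those lines - assuming jetty serves geoserver on its default port 8080 under the /geoserver context path - looks roughly like this:

<VirtualHost *:80>
    ServerName geo.searchzen.org

    # Forward requests to the geoserver webapp running in jetty
    # (the port and context path are the jetty defaults and may differ).
    ProxyPreserveHost On
    ProxyPass /geoserver http://localhost:8080/geoserver
    ProxyPassReverse /geoserver http://localhost:8080/geoserver
</VirtualHost>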

Mindful software

I have spent considerable amounts of time thinking about the concept of information and how to convey it in software. Some years ago I came to the conclusion that I want to present information in context, e.g. not present crude extracts from databases, but adapt it to the presentation context, with the user in mind.

Some useful contexts could be “location” or “social networks” - the context should be varied depending on the type of information - e.g. information about abstract concepts has no use for location information. The use of context should increase the likelihood of the information being conveyed to the user in an understandable manner.

When I observe users, I rarely see one user who uses only one tool to achieve her goal. Information gathering is usually done using a variety of sources - so a good system design principle could be to know where the system should stop - and how to present the information in such a manner that supplemental or related information can be retrieved from another system.

So, to me, presenting information in context is also about presenting the least amount of “friction” in the system. Here I consider any obstacles hindering information flow in and out of the system as “friction”. By minimizing the “friction” we make it easier to present information in context by connecting data between systems.

This has led me to think that good system design should focus on how information is shared between users via connected systems. By turning the attention to how information is shared between users via connected systems, we obtain an understanding of what the data is (since we need to be able to share it).

BDD is not about testing

When talking to people about BDD and my lame example using Paris Hilton, I got the question: “it was really interesting reading about Paris Hilton… but what is BDD really about?”.

The central insight of BDD is that TDD is really computer aided specification of the executable behaviour of your system. BDD tries to express this using a Domain Specific Language.

Back in 2006, Dave Astels described how to ascend from the focus on 1-1 testing of production code to using the test process as a way to describe how you want your system to behave.

In the video Dave makes the valid point that “The words you use shape how you think”. So we should move away from constantly thinking about “testing” and instead think about describing how you want your system to work.

So to me, BDD sounds like a kickstart to being productive in TDD - and doing it well from the start. So if you are starting out on TDD, you should really start out doing BDD.

Paris Hilton and Behaviour Driven Development

Recently, I have been giving Behaviour Driven Development some thought.

Let’s take an example of how to develop and test a music video search and storage system. A traditional way of developing this would require formulating an object oriented system architecture, thinking about streaming and metadata enabled search. The system architecture could consist of a well chosen database server, a streaming server and a metadata enabled search engine - combining these technologies with a modern UI and encapsulating them in carefully designed object oriented structures. During all these important choices, and all during development, we would make sure to write tests before we write a single line of code.

All these things put together would lead to a well thought out system architecture, but all the effort put into the system architecture can be in vain - if we don’t have a solid business understanding of what a video storage system should do. What will the users expect?

While browsing facebook the other day I found “The Paris Hilton & Jacques Derrida Appreciation Society” - this group explores the connections between the works of Paris Hilton and Jacques Derrida. When we deconstruct the “pretty blonde” facade of Paris Hilton, you can actually find some deep insights. Take “Nothing in this world” for instance:

Take the phrase “when you are with somebody else, that’s me in your eye”. There is the obvious interpretation of the sentence. But thinking about that sentence also lets you reflect on the real meaning. When you look at Paris Hilton in this video, what do you see? Do you see the pretty blonde or the millionaire, hard working young girl? In this video I am seeing the image of the pretty blonde - but I am also thinking about the millions of dollars she is earning portraying herself in this way. So in a sense - I am reading the original message out of context. I am admiring what Paris Hilton does in some other way than she intended - the original meaning of the words seems to have disappeared - but my understanding of the sentence is more useful to me. I wish I could do what Paris Hilton does - but in a way that would make sense in my world.

The producers of the “Nothing in this world” video are not likely to convey information about the business empire of Paris Hilton in the metadata supplied for the video. So a system formulated as a “video storage system” would not let me exploit the information I found in the facebook group.

BDD introduces the use of a Domain Specific Language to express the users’ expectations in a manner more directly focused on the behavioural aspects of the system. This lifts the clouds from the system aspects and focuses on intent.

A better way to formulate my expectations for the system would be:

Describe the music video storage system:
  I should be able to search for videos using metadata
  it should play videos  in my browser
  I should be able to query facebook for information about it

If I had those expectations formulated for me, I would choose to implement this system as a mashup between youtube and facebook, as a facebook application. This would be a radically different system architecture than the one described above.

Furthermore, by leveraging one of the several BDD test frameworks available, the expectations could be formulated in a way that can be used as tests.
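As a sketch, in a Jasmine/Mocha style javascript framework the expectations above could look something like this - the functions under test (searchVideos, facebookInfoFor) are hypothetical:

describe('the music video storage system', function () {
  it('lets me search for videos using metadata', function () {
    var results = searchVideos({ artist: 'Paris Hilton' }); // hypothetical search API
    expect(results.length).toBeGreaterThan(0);
  });

  it('lets me query facebook for information about a video', function () {
    var info = facebookInfoFor('Nothing in this world'); // hypothetical facebook lookup
    expect(info.relatedGroups).toContain('The Paris Hilton & Jacques Derrida Appreciation Society');
  });
});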

what is information?

Listed as one of the five deep questions in computing, this question stands out to me as the one we have to answer before we can answer questions like “what is computable?” and “(how) can we build complex systems simply?”.

To me the concept of “information” only makes sense if it can be extracted or related using well-known techniques. In this sense “information” is put in the context of a “subject” and an “object”, e.g. the information identifies facts about the “object” in a manner that is understandable for the “subject”. When information should always occur “in context”, then it should be clear that the information should be codified in a manner that is understandable by the “subject”.

This “codification process” is successful when the information about the “object” is conveyed to the “subject” in a manner that is easily understood - so in my interpretation “information” can be expressed in many ways, and still be intended as the same information about the same “object”.

In most concrete circumstances involving human communication, “codification” will mean “telling” somebody “else”. E.g. saying “I am hungry”… or “do you mind passing me the water”. Information is revealed through the use of language in a sentence and placed in the context of the situation where the sentence is spoken.

In most traditional computing software, information is stored to examine facts about phenomena or physical items. The wealth of information in context has been lost in the process of gathering these facts. Take a traditional supply store: it is not very common for the desk clerk to capture the facial expression of the customers while entering the items bought in the cash register. So in this way, the information from the facial expression is available to the desk clerk but not to the store manager.

In our current computing systems it is not easy to compute “the mood of the customers” - but if we understand the wealth of information lost in the “codification process”, then we can exploit this for a better understanding of the market situation. But we will still not be able to compute the “mood of the customers”.

So - to me information can only be understood as “information in context”. I will try to design complex systems from this principle where it makes sense.