Thursday, July 2, 2009

Debugging revisited

There appear to be distinct methods of debugging, some more successful than others. When a project has a deadline, it is imperative that every detail is predefined and then cleanly implemented from the inside out. Analyse-as-you-go development is bug prone and should be avoided at all costs, no matter how much one thinks one "can be more creative", or can defy logic by "writing clean code" when that code has to be constantly revised.

Most systems stem from a database table relations guide. A recommended tool is Visio (which Microsoft bought into their stable), but I prefer the free Linux offering, Dia.

It allows you to outline class structures, draw lines of relation fairly well, and then flesh out the objects with their bits and pieces. Very good. Visio does it better, but you do not want to be distracted by "features" of the interface when designing a system.

So, once the system is defined, how you approach its implementation becomes increasingly important. Most programmers start by defining an interface to the database (or adopting one already defined) and creating interfaces for the screens the designed system requires. Web designers are different: a page can be realised in HTML progressively. This, of course, may add functionality or features you did not account for in your database design. So the tendency is to start from the screen displays and then work out the database from there.

This sounds like a dilemma, but it is not, really. Make the database a little more fluid by implementing a database tier that is a little "soft" (i.e. can be latently defined). There are numerous ways to achieve this, including attribute tables, linked lists and queues as well as traditional data tables.
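One common way to make a database tier "soft" is an attribute table: each record's extra fields live as rows, so a new screen requirement does not force a schema change. A minimal sketch in Python with sqlite3 - the table and column names here are hypothetical, purely for illustration:

```python
import sqlite3

# A fixed core table plus a "soft" attribute table. New fields can be added
# per-record as rows, without altering the schema. Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE entity_attr (
    entity_id INTEGER REFERENCES entity(id),
    attr_name TEXT,
    attr_value TEXT)""")

conn.execute("INSERT INTO entity (id, name) VALUES (1, 'customer')")
# A screen design later demands a 'fax' field: no schema change needed.
conn.execute("INSERT INTO entity_attr VALUES (1, 'fax', '555-0100')")

def get_attr(entity_id, attr_name):
    """Fetch a latently defined attribute, or None if it was never set."""
    row = conn.execute(
        "SELECT attr_value FROM entity_attr WHERE entity_id=? AND attr_name=?",
        (entity_id, attr_name)).fetchone()
    return row[0] if row else None
```

The trade-off is that such attributes are weakly typed and harder to query, which is why the core, well-understood fields stay in traditional data tables.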

A real n-tier system leverages the database against changing needs and provides middleware with an interface that invites programmers to use it.

But still, bugs will occur, especially when adding new interfaces such as Ajax-based systems - things become less clear when the back end starts serving event-driven JavaScript actions that in turn call back to the server. These interactions must therefore be mapped, or at least implemented via a framework.
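The simplest form of such a map is a single dispatch table listing every Ajax action the back end serves, so each front/back interaction is declared in one place. A sketch in Python, with entirely hypothetical action and handler names:

```python
# Hypothetical sketch: one dispatch table mapping Ajax action names to
# back-end handlers, so every front/back interaction is declared in one place.
def save_record(params):
    return {"status": "ok", "saved": params}

def fetch_record(params):
    return {"status": "ok", "record": {"id": params.get("id")}}

AJAX_ROUTES = {
    "save": save_record,
    "fetch": fetch_record,
}

def dispatch(action, params):
    """Route an incoming Ajax action to its handler, or report it unknown."""
    handler = AJAX_ROUTES.get(action)
    if handler is None:
        return {"status": "error", "message": "unknown action: %s" % action}
    return handler(params)
```

When a bug appears in a front/back interaction, this table is the map: every possible entry point is enumerated, so the faulty tier can be isolated quickly.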

Now to the debugging. When a bug can come from any tier or front or back end interaction it pays to a) have a map, and b) isolate the bug accurately.

When you have a well-defined structure, it is important not to change anything but the actual problem causing the issue. Before the bug is understood there may be a period of familiarisation with the segment of code; getting to know everything you can about a few lines of code may tell you more than testing ideas.

The bug is often not where it seems to be. Conditional logging (so you can easily switch it all off) with debug levels enables code analysis. Take a leaf out of syslog's book - use a level indicator, e.g.

logger = new Logger();
if ( debug >= 3 ) {
    logger.log("this is a severe error");
} else if ( debug >= 1 ) {
    logger.log("not so bad");
}

Of course it is better to leave the logger calls in your code, so long as they can be turned off. But once a piece of code is "clean", it is better for the distractions of the debugging phase to be removed.
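Python's standard logging module implements exactly this idea: the calls stay in the code permanently, and a single level setting switches whole classes of messages on or off. A minimal sketch (the logger name "myapp" is just a placeholder):

```python
import logging

# syslog-style severity levels; one setting turns verbose output on or off.
# Raise to logging.DEBUG while bug-hunting, drop back for clean runs.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("myapp")

log.error("this is a severe error")   # emitted: ERROR is above WARNING
log.info("not so bad")                # suppressed at the WARNING level
```

This keeps the debug scaffolding in place without the distraction: nothing below the configured level reaches the output, and no code has to be deleted between debugging phases.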

Use of svn or cvs makes this delightfully feasible.

Friday, June 5, 2009

Tracking a debug

There is a bug in my code. Not released code - this is code under development - and the bug is interesting, as it refuses to submit to logical analysis. Tracking which module is replacing text is important; understanding why it is occurring may be the only way to fix what is happening, as opposed to fixing what I think is happening.

The final law of debugging may apply. When all logic seems to fail, the final law of debugging is this:

if you do not know what is causing the problem, you are not looking at the problem

One examines the results of each debugging iteration for clues to the logic. Post-analysis of what a program did is often better than trying to "watch it".

Walk through step by step if necessary.

Dedicated testers improve software development by a significant degree. Programmers make terrible testers, but this is simply due to the kind of thinking involved. Testing does not make terrible programmers, however.

Programming is best done from established rules and structures that provide a sound basis for proceeding. Without that solid analysis, assumptions start to creep in.

Programmers often create bugs by taking shortcuts or neglecting to complete processes. If every piece of code were diagrammed and written with exact attention to detail, bugs might still exist - but only the difficult ones.

Finding all bugs and correcting ONLY them is absolutely vital. Programmers, of course, must test their work before sending it to be tested by others. But if programmers are to alpha test software while also wearing the analysis hat, they must take strict care that regression testing does not introduce new ideas into that release iteration.

But sometimes a programmer faces a bug that seems impossible. This is a trap. Looking at test data, and verifying it against raw data, seems a natural and necessary testing activity for validation.

The programmer looks at the log and the database, and notices that the log seems to reflect the database, but the display is doing something else. So the bug must be in the code. The programmer stares at the code in disbelief and swears: "there is nothing wrong with my code, is there!?"

The code looks simple; there is simply nothing wrong there. But the programmer curses and runs it again, not believing the evidence - "you see?" - as the same failing code is run once more, just in case the test was not to be believed either.

The programmer is reacting to the discrepancy and not trusting instinct. Think again - nothing is wrong with the code. The problem must be elsewhere. Take a thirty-second break from looking at it, stop trying to defend your code - it is right - and look in another module of your code. Probably earlier in the flow, but not always.

The scientific method of debugging relies upon nothing other than simple observation of what is.

Tuesday, April 14, 2009

Debugging: attitude

Debugging can be onerous - programmers testing their own code naturally have a bias towards making things work. Training oneself to be dispassionate about bug fixing seems against the tide, but it is not.

What you have to do is find and isolate bugs. Highlight them. Make them stand out.

Difficulties:

There is a definite sense of defeat when encountering a bug. For me this was due to years of negotiating as a contractor, and a moral obligation to have no bugs (as if they were a secret ingredient you had simply forgotten to add).

In fact, writing software without bugs requires very exact procedures. Writing software is also very creatively demanding.

The lack of decent specification before writing is the cause of bugs. Encouraging inventive creativity in the coding process creates a conflict of interests for the programmer.

Thursday, March 5, 2009

Google Chrome: Missing Link

Google Chrome - the world's best browser by a long shot

Idea to improve browser semantics

The target in links: when a link opens a Chrome window, the window has a hoverplate that shows the named target.

This encourages developers to use named targets:

breakdown:
<a href="url" target="name">

tab name is a hover plate

a hover link is a link that actions on hover
a hover tab is a distinctly styled tab that overlays the title of a browser tab
a hover plate is a set of hover tabs that sits above the tabs, showing the target that derived the link - i.e. which page spawned it, and the target name. As the target is usually just (_blank), the plates are sparsely populated.

This becomes useful because Chrome is so efficient - users end up with hundreds of windows open as cache only. Rapid inter-page navigation is strongly facilitated by hover links.

Copyright © 2009 by Nicholas Alexander.

Tuesday, December 2, 2008

Real Privacy

Forward-looking governments, like this one in Massachusetts, are passing laws that forbid the transmission of personal information over the net unencrypted. SFSW has been saying that for years: until we start encrypting normal communication, there is no concept of privacy in transmission. If it is http: traffic it may generally be intended for public consumption, but businesses who transmit their customer mailing lists are publishing them for the world to see. It is a) unnecessary, and b) a decrease in the value of email. When the protocol is SMTP, the details should not be public by default. You should not transmit your credit card or bank details using email (or publish them on a website).

Saturday, October 25, 2008

Blogger back

It has been months since I could access my Blogger site. I trust this means that Google will come out with some extremely good anti-spam systems. All strength to Google for recognising that a human writes these blogs.

Wednesday, August 6, 2008

Infrastructure

In one framework, called Struts, one can imagine virtual girders supporting platforms of functionality, with all sorts of connections between levels. Important packages - like people - tend to move via elevators (slow, safe, secure, private), whereas documents can be seen as you walk through offices - the intelligence of running an office has little actual cost as it appears and scrolls on screens (fast, public, usually accurate but sometimes breaks). The office models why structure is important in software. You want to apply appropriate rules to different objects depending on what floor they originate from and what their purpose is. The number of rules and attributes attached to a single document (entity) increases as new routes and other people's requirements are added. The superhighway, with its public nature, sounds simpler, but in fact has to accommodate millions of users - therefore it has to have far more going on to be able to cope. Elevators, in contrast, have both imposed and real limits on capacity and rate of transmission.

In an MVC environment there is a greater rigour to the separation between the display and the detail. The flash and the function are created by opposing skill sets - it is only natural that MVC and OO evolved the way they have. Are they the best idea yet? Maybe.