Pseudocode and agile modeling

The purpose of this article is not to be a comprehensive guide to pseudocode but a starting point for a discussion about its possible uses and the benefits we are experiencing from pseudocode during our development process.
We are so enthusiastic about pseudocode that we are developing an Eclipse plug-in to create effective pseudocode models; more information about the plug-in (currently in beta) is available here

What is pseudocode
Pseudocode is a compact and informal description of an algorithm or of an entire program, and it can also be used to improve communication between technical and non-technical people, and between experienced and less experienced programmers.
Pseudocode usually omits details that are not essential for human understanding and doesn’t use any language-specific syntax; the intent is to make it easier to exchange ideas about the key principles of an algorithm or, if you are not working in a team, to quickly figure out what the algorithm has to implement.
Several books and courses use pseudocode as a way to teach programming to students. Students often start writing code as soon as they receive a problem statement, but we believe that the right approach (not only for students but also in real life) is to “design first”, and that pseudocode is a great way to design code thanks to its simplicity.
Pseudocode should not contain too many details and has to be written in informal English, because otherwise people may feel they are learning a new programming language and start thinking it’s a waste of time.
A snippet of pseudocode may look like this

INIT the hours worked by an employee
READ maximum allowed hours FROM web service
IF workedHours > maxAllowed THEN
    SHOW overtime message
ELSE
    SHOW regular time message
ENDIF
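Refined into real code, the snippet above might look like the following Java sketch; the class, the method names, and the hard-coded stand-in for the web-service call are hypothetical, added only for illustration.

```java
// Hypothetical Java refinement of the pseudocode above.
class OvertimeChecker {

    // Stand-in for "READ maximum allowed hours FROM web service":
    // a real implementation would call the service instead.
    static int readMaxAllowedHours() {
        return 40; // assumed fixed value for illustration
    }

    static String checkHours(int workedHours) {
        int maxAllowed = readMaxAllowedHours();
        if (workedHours > maxAllowed) {
            return "overtime";     // SHOW overtime message
        } else {
            return "regular time"; // SHOW regular time message
        }
    }
}
```

Note how each pseudocode line maps onto one or two statements, which is exactly the level of detail the article recommends.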

As you can see, it is very simple to read and understand, even for non-technical people, and the logic is clear to everyone.

Best practices
While understanding pseudocode is usually not difficult, writing it can be a challenge, especially because it’s very easy to be too detailed or too tied to a specific programming language.
Pseudocode strikes a precarious balance between the understandability and informality of English and the precision of code. If we write an algorithm in English, the description may be at so high a level that it is difficult to analyze the algorithm and transform it into code. If instead we write the algorithm in code, we have invested a lot of time in determining the details of an algorithm we may not choose to implement, or that doesn’t completely fit the requirements of the software we are working on.
The goal of writing pseudocode, then, is to provide a high-level description of an algorithm that facilitates analysis, eventual coding and the production of documentation. The boundaries outlined by the words “high-level” depend on the audience: algorithms written for different audiences have to be written with different levels of detail.
For this reason we strongly encourage ignoring unnecessary details while always keeping the pseudocode logically grouped and indented, in order to improve readability and focus attention on the logic of the algorithm.
To keep your readers’ attention alive, avoid belaboring the obvious: it’s not critical to specify the data type of a variable or to set up the counter used in a loop when you are writing pseudocode.
An easy way to keep your algorithms short and concise is to use the English keywords that are standard in most programming languages: if, then, else, etc.
Another good habit is to consider the context and avoid redundancy: if an algorithm deals with quicksort, writing something like “use quicksort to sort the values” is too high-level on its own, but if your model has already defined a good quicksort algorithm, there is no point in repeating its logic; a simple reference is enough.
It’s very hard to keep pseudocode easy to read precisely because it is plain text. To improve readability, a tree structure can be used: the indentation will help you group the pseudocode logically and avoid confusion and misunderstanding.

Pseudocode as an iterative modeling tool
The construction of a class or of an algorithm is usually an iterative process that starts with the definition of the responsibilities of a class and moves on to the production of a general design and the enumeration of the specific routines within the class.
Once the routines have been defined, discussed and refined, the process usually moves on to the next step: the definition of the details of each routine.
It’s very important to write pseudocode at the level of intent, describing the meaning of the approach rather than how the approach will be implemented in the target language. This way, building code around it will be nearly automatic, and the pseudocode will quickly turn into programming-language comments.
Pseudocode supports the idea of iterative refinement. You start with a high-level design, refine the design to pseudocode, and then refine the pseudocode to source code. This successive refinement in small steps allows you to check your design as you drive it to lower levels of detail. The result is that you catch high-level errors at the highest level, mid-level errors at the middle level, and low-level errors at the lowest level before any of them becomes a problem or contaminates work at more detailed levels.
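As a sketch of this refinement, here is a hypothetical routine in which each pseudocode statement from the design phase survives as a comment above the code that implements it (the payroll routine and all its names are invented for illustration):

```java
// Hypothetical example: pseudocode statements refined into code as comments.
class PayrollStep {

    static double grossPay(double hours, double rate, double maxRegular) {
        // COMPUTE the regular portion of the pay
        double regularHours = Math.min(hours, maxRegular);
        double pay = regularHours * rate;
        // ADD overtime at 1.5x the rate for hours beyond the allowed maximum
        double overtimeHours = Math.max(0, hours - maxRegular);
        pay += overtimeHours * rate * 1.5;
        // RETURN the total gross pay
        return pay;
    }
}
```

Each comment is a design-phase statement; if a comment needs several screens of code beneath it, that is usually a sign the design should be refined one more time.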
The continuous iteration over this process is a great way to do iterative modeling. Each step involves several sub-steps; let’s focus our attention on the definition of a routine’s details using pseudocode.
The first step is to clearly define the problem the routine will solve, with enough detail to allow the definition of the steps involved in the routine. The information you need includes the inputs the routine requires, the output it will produce, and what the routine will hide or, even better, do behind the scenes.
Once this information is clear, you can start to define a name for the routine. Naming a routine might seem a trivial task, but it isn’t; remember that meaningful, not-too-long names are one of the building blocks of good-quality code.
Before writing down the details of the routine, we strongly encourage doing a little research in the organization’s code base in order to avoid redefining the obvious (i.e. something that is already designed and working). If you find something in the code base, it’s enough to refer to that routine and keep your pseudocode shorter.
A routine can often generate an error: think of all the things that can go wrong in the routine (e.g. bad input values, invalid returned values, etc.) and define a strategy to handle the errors gracefully.
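The steps above can be sketched as a small routine skeleton; everything here (the lookup, the exception, the names) is a hypothetical placeholder meant only to show where the inputs, the output and the error strategy end up:

```java
// Hypothetical routine skeleton: clear input, clear output, explicit error strategy.
class EmployeeLookup {

    // Part of the error-handling strategy: a dedicated exception for bad input.
    static class InvalidBadgeException extends Exception {
        InvalidBadgeException(String message) { super(message); }
    }

    // Input: a badge number. Output: the employee's display name.
    // Behind the scenes a real routine would hide the storage details.
    static String findEmployeeName(int badgeNumber) throws InvalidBadgeException {
        // bad input values: fail early with a clear error
        if (badgeNumber <= 0) {
            throw new InvalidBadgeException("badge number must be positive");
        }
        // placeholder lookup; a real routine would query a repository
        return "employee-" + badgeNumber;
    }
}
```

The defining decision is made before any logic is written: what comes in, what goes out, and what happens when the input is bad.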
After completing the preceding steps you are essentially done with your design and should have a diagram like the following one, but it’s a good habit to go deeper into the modeling of core components.

Especially with core components it’s very hard to summarize in a few lines what a routine is supposed to do; for this reason you can go deeper, writing down the logic of the routine, how data is retrieved and manipulated, how the app will react to user input, etc.
When you believe a pseudocode snippet is complete, it’s time to ask someone else to read it. You don’t need a technical leader or an architect; ask any of the stakeholders involved in the project to read it, and you will be surprised how easy it is to catch high-level or logical errors before starting to write the code.
The general idea is to iterate on the routine until the pseudocode statements are simple enough that you can use each of them as a comment in the code you will write. Keeping the pseudocode updated and refining it continuously may seem like a waste of time, but it helps a lot; this is one of the main reasons why we started to work on APDT.

Pros and cons
The introduction of pseudocode in our organization is improving our production process because it impacts the following areas

•    Reviews
•    Documentation
•    Adaptability to change

Pseudocode makes reviews easier. We regularly do code reviews, and often it’s hard for the reviewer to jump into the logic of a group of classes and state whether the code fits the overall architecture. Pseudocode lets us split the review into two phases: the first happens at the beginning of the production process, when people can discuss and review the logic of the algorithm; the second happens when the code is done, but at this point the logical issues have already been addressed and the review can focus on speed, efficiency, etc.
Through pseudocode we get documentation that is really easy to maintain, rather than large diagrams; even better, we use it as comments in the code in order to quickly produce reasonably refined JavaDoc for each class we write.
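As an illustration of this habit, the design-phase pseudocode can be kept verbatim in the class-level JavaDoc; the class below and its sorting routine are hypothetical:

```java
/**
 * Sorts invoice due dates before they are shown to the user.
 *
 * Pseudocode kept from the design phase:
 *   READ the due dates for the current customer
 *   SORT them in ascending order
 *   RETURN the sorted list
 */
class InvoiceSorter {

    static java.util.List<Integer> sortDueDates(java.util.List<Integer> dueDates) {
        // work on a copy so the caller's list is left untouched
        java.util.List<Integer> sorted = new java.util.ArrayList<>(dueDates);
        java.util.Collections.sort(sorted);
        return sorted;
    }
}
```

The generated JavaDoc then documents the intent of the class with no extra writing effort.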
A few lines of pseudocode are easier to change than a page of code, so your program can be changed easily during the modeling phase and discussed very quickly during production, so that architects can provide their advice in time.
There are also some cons to the usage of pseudocode: for instance, developers sometimes waste time on it because they add too many details and tend to write actual code. We strongly encourage keeping the pseudocode at a high level (until a good reason to go deeper comes up), avoiding coding details that only developers can understand.
Another issue we face is that it’s very hard to test the logic of complex routines defined through pseudocode; for this reason we are working on a component for APDT to help us test the logic.
The biggest issue we see is that a high-level language limits the programmer’s flexibility. Additionally, the programmer can fail to distinguish between the analyst’s coding technique and the analyst’s design, and the result may lead to the rejection of a perfectly good design based on inappropriate criteria.

Pseudocode Syntax
As the name itself suggests, pseudocode generally does not follow any specific syntax rules; details like opening a file or initializing a counter are written out explicitly, but language-dependent details are ignored.
There are several documents on the web that define different syntaxes for writing pseudocode (our organization also defined its basic rules here), but again the overall concept is to describe a routine in a form that a developer can easily translate into code and that analysts can quickly check. One of the most complete syntax definitions was made by Stuart Garner, who created a library for NoteTab for writing pseudocode.
Our recommendation is to keep it clear, logically grouped and as short as possible (while still understandable), so that you can really improve your organization’s code-writing process.

The usage of pseudocode is usually associated with students, but we strongly believe that it is a way to improve the quality of code across an entire organization. Its major field of application is distributed teams, where communication can be a challenge; in this scenario a simple, easily exchangeable file that shows the routines to work on is more valuable than hours of online meetings.

Posted in Agile, Best Practices | 2 Comments

2010 in review

The stats helper monkeys mulled over how this blog did in 2010, and here’s a high-level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads This blog is doing awesome!.

Crunchy numbers


A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 2,200 times in 2010. That’s about 5 full 747s.


In 2010, there were 5 new posts, growing the total archive of this blog to 37 posts.

The busiest day of the year was July 29th with 64 views. The most popular post that day was Code review, user stories and sustainable software.

Where did they come from?


Some visitors came searching, mostly for intelligere scs, flex mvp, mvp flex, intelligere, and giorgio natili.

Attractions in 2010

These are the posts and pages that got the most views in 2010.


Code review, user stories and sustainable software May 2010


De architectura: MVP in Flex / AIR components October 2009


Intelligere SCS RC1 May 2009


Helper Classes January 2010

Posted in general | Leave a comment

Communication challenges with distributed teams

Agile development is based on an easy and informal workflow of communication between customers and the software house. One of the greatest challenges is to keep communication fluent within the development team as well.
Recently we took part in a simulation during which two different teams, each split into two sub-teams living in different time zones, had to complete a “project” in 3 iterations.
Let’s recap in detail the structure of one of the two teams

Team A
•    Country 1 sub team
o    1 business analyst
o    1 project manager
o    1 technical leader
o    2 developers
•    Country 2 sub team
o    1 business analyst
o    1 project manager
o    1 technical leader
o    2 developers

During iteration 0 the business analysts had the opportunity to take a look at the overall project and collect the requirements (actually, the simulation asked the teams to redraw an abstract picture).
Each iteration was 20 minutes long, and during the iteration each sub-team could work only 10 minutes and had 2 minutes for a face-to-face discussion.

After 3 iterations “Team A” and “Team B” got very different results: the first completed 80% of the picture, but without exactly reflecting the original specification; the second was able to deliver only 50% of the picture, but it was much more accurate.

Instead of focusing on which result is best for the customer, what is interesting is the way the two teams organized communication.
After 3 iterations “Team A” had used only one sheet of the dashboard to put some communication between the two sub-teams in place; on the other side, “Team B” had arranged a wiki, a message board with tags, a defects list with priorities, status, descriptions, unique IDs, etc.
The lesson learned here is that a good communication workflow has to be adaptive: during the simulation, such a well-organized communication setup was impressive but not useful at all.
Another interesting aspect of this simulation is seeing how team members worked together: in “Team B” the developers started to work together with the technical leader only during the last iteration, while the business analysts, project managers and tech leads started working together immediately.
The lesson learned here is that people have different communication skills, and developers in particular need time to start understanding each other; so don’t expect great velocity from the beginning, and focus your attention on the way people work together during the first iterations of a project.

The overall lesson is that agile with distributed teams is possible but more complex, because you have to avoid concentrating too much on tools and documentation (otherwise it becomes waterfall).
Moreover, you have to keep in mind that communication between two different teams takes more than 10% of the iteration time, so reserve at least 25% of each developer’s time to let them communicate and understand each other.

Posted in Agile | Leave a comment

Code Quality through reviews and pseudo code

Code quality is one of our main concerns here at GNstudio; our aim is to prevent the process of scaling up our organization from negatively impacting the quality of our code.
The aim of this short post is to discuss the building blocks of the process we use to keep quality standards high; this process is essentially based on the following items

•    Architectural reviews
•    Pseudo code
•    Code reviews

All these items help us keep all the stakeholders of a project on the same page, also from a technical point of view.
When a new project starts, we dedicate some time to “iteration 0” in order to give ourselves a chance to gather all the requirements and to share our customer’s vision of the project. At the same time we have a chance to start discussing with our developers the building blocks of the architecture we will put in place for the project.
A software architecture is more than just a technical blueprint of a complex software-intensive system. In addition to its technical functions, a software architecture has important social, organizational, managerial, and business implications.
What we do during “iteration 0” is define just enough pieces of the architecture to start up the project; the layers involved in this phase are

•    The application domain
•    The subsystems (and their main interfaces)
•    The communication domain

The definition of the application domain is critical to us because we also use it to start defining a common terminology for the entities involved in the application.
We strongly believe that the quality of code can also be measured by its understandability; for this reason we review the application domain, explaining it in simple words to all the stakeholders involved in the project.
At the end of this phase our domain has usually changed, both in its terms and in its relationships. We get this result because, in order to explain a domain to non-technical people, we have to walk them through some use cases that involve the product they own.
Use-case exploration drives us to validate the domain, because we make empirical tests of the entities and relationships we have to put in place to keep our customer satisfied; moreover, we believe that a good domain is understandable, and the use-case approach immediately highlights the terms or relationships that are not easy to understand or (worst-case scenario) that are totally wrong.
During “iteration 0” we also identify the subsystems we have to deal with (the ones we have to write ourselves and the ones that already exist and can be reused) and the communication domain.
When explaining these building blocks to the stakeholders, the benefit is gathering past experience and different points of view; diversity and experience are the building blocks of any successful project.
We strongly believe that agile and architecture can coexist and live by the same iterative principle, so the architecture we define during “iteration 0” should be seen not as immutable but as an asset to re-evaluate at each iteration, in close collaboration between architects, developers and stakeholders.
The continuous review of an application’s architecture is very important, because this way nobody in the team can forget or misunderstand the “idea” behind the technical strategy adopted for a specific project.

When development starts, a good practice is to write down an informal “schema” of the code that the core components of the application have to implement. This practice is called pseudocode: pseudocode is a compact and informal high-level description of a computer programming algorithm that uses the structural conventions of a programming language but is intended for human reading rather than machine reading.
Usually the technical leader of a project also defines the core components of a system; through pseudocode he can share and discuss his vision with the whole team. The greatest benefit we get from this practice is that the internal layers of an application can also be validated and improved through discussion; moreover, following this practice we prevent developers from going in the wrong direction during development, and we prevent pieces from getting lost during long projects. To help us during the pseudocode phase we built our own custom Eclipse plug-in, named Intelligere Agile Modeler; for more details take a look here

The next step is the implementation of the pseudocode models, so it’s at a later stage that we actually start to write code; when the code is in place, we start reviewing each single class involved in the system. Code reviews are very important to us, and we have already described here the way we do them. Code reviews without architecture reviews and pseudocode are still useful, but they can only help you remove bugs from the system, increase performance, etc. Using code reviews as the checkpoint of the implementation of your architecture is the way to use them as a means of keeping high-quality code in place.

Posted in Agile, Best Practices | Leave a comment

Code review, user stories and sustainable software

The purpose of this short post is to identify the building blocks of an effective code review. The suggestions you will find here come out of our experience and are especially meaningful for our code review team.

We believe that the use of code review is one way to create sustainable software that’s open to changes but closed to modification. By linking each code review to a story we also believe that it’s possible to communicate the effort needed for each piece of the software to all the stakeholders of a project.

We strongly believe that the code review has to be informal because it’s not a way to check if a developer is good or bad at his job, it’s the only way in which an organization can keep a reasonable code quality in place and improve the knowledge of all of its team members.
In an ideal scenario, a code review can be performed sitting together in front of a monitor, with the author driving the review by sitting at the keyboard and mouse, opening various files, pointing out the changes since the previous release and explaining why it was done this way. This approach is great and can bring enormous benefits because it is simple to do: anyone can do it without any training, and it doesn’t involve impersonal interaction with e-mail or instant messaging systems.
We have some concerns regarding this approach, however, because we have identified some recurring pitfalls:

•    The author often forgets to present something to the reviewer
•    The author can explain something to the reviewer, but without an output from the conversation the next one that looks at the code won’t have the benefit of the explanation
•    Some change requests made by the reviewer can be forgotten
•    There is no way to track the issues in order to create a check list for all team members to follow as a “Best practices” guide

In our opinion, the major drawback of this approach is that sometimes code review is postponed because two people have to arrange a meeting (not always so easy) and there is no way to check that the changes will actually be made by the author.

After some failures in our code review practices we have outlined our procedure…

First of all we ask the whole team to prepare a code review for each piece of code they commit into our source control system so that everyone will be responsible for readability without any additional requests.
A code review is connected to a user story, so all of the code review will be defined through a meaningful title and a note that points the reviewer to the requirements the author worked on. The great advantage of this approach is that the reviewer can be aware of the feature that has been developed and can therefore have the right approach to the inspection of a specific piece of code.
Every author has to attach a file to a code review (the format is up to the author) that summarizes the workflow he followed to develop that piece of software: in this way, first of all the reviewing team’s job will be faster and secondly, the author will recap what he did and probably find and / or fix some issues before submitting the review.

The building blocks of a code review can be summarized as follows:

  • Title: Usually made up of an ID and something that is meaningful
  • Description: A very short description of the feature developed
  • Stories involved: A list of backlog items involved in the review
  • Workflow: A file that summarizes the workflow of this part of the software
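Sketched as a data structure, the building blocks above might look like this minimal value class (the class and its fields are hypothetical, named after the list above):

```java
// Hypothetical value class mirroring the building blocks of a code review.
class CodeReview {
    final String title;                   // an ID plus something meaningful
    final String description;             // very short description of the feature
    final java.util.List<String> stories; // backlog items involved in the review
    final String workflowFile;            // attached file summarizing the workflow

    CodeReview(String title, String description,
               java.util.List<String> stories, String workflowFile) {
        this.title = title;
        this.description = description;
        this.stories = stories;
        this.workflowFile = workflowFile;
    }

    // A review is ready to submit only when every building block is present.
    boolean isReadyForSubmission() {
        return !title.isEmpty() && !description.isEmpty()
                && !stories.isEmpty() && !workflowFile.isEmpty();
    }
}
```

Making the structure explicit is what allows a tool (or a simple check like `isReadyForSubmission`) to reject incomplete reviews before a reviewer wastes time on them.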

Ok, it sounds like a wonderful scenario but how can a developer put a code review in place quickly and easily? And how do we avoid having meetings postponed or time wasted on both sides? How do we keep track of the workflow of the review?

We tried several different ways and at last we identified an online tool that can help us in this process. With this tool, each developer can attach files through a file upload or through the source control system so that, depending on the kind of feature he is working on, he can submit the code and then start the code review process, or he can upload the source code without using the source control.
The advantage of a code review over a piece of code that is already in the source control system is that if there are dependencies between the stories to which the code is related and other requirements, the team will not be blocked and development can continue in an iterative way. Furthermore, if a feature is going to be changed dramatically, keeping this code outside the source control system until it’s fully reviewed and refined makes life easier for the other team members and requires them to change their code only once during the development cycle.
In both scenarios, the time the review team spends on a review doesn’t impact the overall speed of the team.
Another benefit of using an online tool is that we can keep track of the defects opened by the review team and periodically create checklists for all team members.
In this scenario, each developer can perform the following steps before submitting a code review:

1.    Identify the code needed to create a code review (usually no more than 600 lines of code)
2.    Iterate through the checklists, making an initial review to avoid common mistakes
3.    Make a recap of the workflow of this feature in order to double-check the logic and gradually produce the documentation of the software
4.    Choose a meaningful and appropriate title for the review
5.    Identify the stories to which the review is related
6.    Create the review

As for the reviewer, his job is not only to ask for changes but to provide enough detail about each finding that anyone can

•    understand the vulnerability
•    understand possible attack scenarios
•    know the key factors driving likelihood and impact

There is value in both assigning a qualitative value to each finding and further discussing why this value was assigned. Some possible risk ratings are

•    Critical
•    High
•    Moderate
•    Low

Justifying the assigned risk ratings is very important. This will allow stakeholders (especially non-technical ones) to gain a better understanding of the issue at hand. Two key points to identify are

•    Likelihood (ease of discovery and execution)
•    Business/Technical impact

Each code reviewer has to suggest remedies to a defect so that alternatives can be clear enough for the authors to provide (when requested) a reliable estimate of the effort required.

Another practice we follow is to provide authors with a reference to help them understand how to avoid this kind of defect: we call this “source” and the value assigned to it can be

•    Check list
•    Systematic
•    Use case

This is usually very helpful in preventing the need to resubmit a code review with common mistakes.

In order to make the code review a practice aimed at helping our team to grow, we periodically create reports that show not the number of defects or their authors (this is the way formal reviews usually take place), but rather the defects themselves, and we ask the team which ones can become part of our checklist.

This post describes only part of our process, but we believe that these building blocks can be the right way to start improving the quality of the software that an organization is working on.

Posted in Agile, Best Practices | 5 Comments

360|Flex – Agile Practices and Flex Developers

As promised attached to this post you can find the slides of the talk about Agile and Flex developers.

The aim of this talk was to explore the benefits of an agile approach during the development of a Flex application. The session was very practical and showed how to start gracefully with agile without breaking too much of the confidence that developers have with the waterfall approach. During the session, some coding approaches to keep your code and development workflow very agile were explored.

The slides are available for download.

Everything you get and find in the slides is based on our experience: we have been following the agile principles for a year and a half across more than ten different projects, so what you will get from the slides is not only theory but what we put into practice day by day.

Posted in Agile | Leave a comment

Helper Classes

On our blog we spent a few words on the architecture of a component in Flex / AIR applications, identifying the building blocks of a well-organized component.
It’s time to clarify the nature of the classes that we expect to find in the helpers package.
Traditionally a helper class is a class made up of static methods used to isolate useful algorithms; the way we see helper classes in a Flex / AIR application is a little bit different, because we really believe that a class filled with static methods is an example of laziness and a way to break down any pattern.
We are not saying that helper classes are not useful; our aim is to enforce their usage while avoiding common mistakes.

The first mistake we usually see in a helper class filled with static methods is the violation of the Single Responsibility Principle. This principle states that every object should have a single responsibility, and that this responsibility should be entirely encapsulated by the class. During development, especially under time constraints, the main temptation is to use static classes to group together algorithms that do similar things but that probably mix different responsibilities.
Making a helper class non-static helps us think about whether it’s correct to add a method to the class or whether a new helper class is needed.
Moreover, a class with static methods makes the identification of problems very difficult if you do test-driven development, so we strongly encourage avoiding them.
A helper class written without static methods can be easily instantiated, used and destroyed from any component, so it takes part in the overall application architecture and doesn’t act as a garbage disposal for methods that a developer doesn’t know where to put in the application packaging.
We also encourage the whole team to consider that a helper class should respect the Open Closed Principle (software entities should be open for extension, but closed for modification); for this reason a helper class that is not filled with static methods can be easily extended, avoiding code smells like duplication and cross-package references.
Our feeling is that a well-designed helper class (good design starts from small classes) can be useful and productive in an enterprise application, and we encourage using them with the SOLID principles in mind

SRP     The Single Responsibility Principle
OCP     The Open Closed Principle
LSP     The Liskov Substitution Principle
ISP     The Interface Segregation Principle
DIP     The Dependency Inversion Principle

And to write them in the right package in order to keep the packaging flexible or, even better, maintainable.
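As a minimal sketch of this idea (the blog’s examples are Flex / ActionScript, but Java is used here for illustration, and all the names are hypothetical): a non-static helper can be instantiated, tested and, respecting the Open Closed Principle, extended without modifying the original class.

```java
// Hypothetical non-static helper: instantiable, testable, and open for extension.
class SortHelper {
    // Ascending sort on a defensive copy; subclasses may refine the behavior.
    public int[] sort(int[] values) {
        int[] copy = values.clone();
        java.util.Arrays.sort(copy);
        return copy;
    }
}

// Open for extension, closed for modification: descending order is added
// by a subclass instead of editing SortHelper itself.
class DescendingSortHelper extends SortHelper {
    @Override
    public int[] sort(int[] values) {
        int[] ascending = super.sort(values);
        int[] descending = new int[ascending.length];
        for (int i = 0; i < ascending.length; i++) {
            descending[i] = ascending[ascending.length - 1 - i];
        }
        return descending;
    }
}
```

A static-method helper would force you to edit the class (or duplicate code) for every variation; the instance-based version makes the extension point explicit and keeps it easy to replace in test-driven development.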

Posted in Best Practices, RIA | 1 Comment