Naveen's Weblog

Bridge to future

Sensible Code Part III – What’s in your toolkit??

Posted by codingsense on April 6, 2014

In the previous articles we saw how software degrades over time and what software quality means. Let's now see whether there are best practices that can help us keep the code in better shape and give early warnings before the code rots.

I have come across a couple of tools that give us early warnings when we sidetrack from our quality path. Start filling your toolkit with the tools below.

Coding guidelines:
All developers think differently. Some follow coding standards, some don't. How do we make sure everyone writes code in a similar style, following the agreed coding standards, so that code in different modules written by different developers looks uniform and is quicker for others to understand? Here comes the first tool in your toolkit: Resharper.

Resharper is a productivity tool for .Net. It is a plugin that integrates with Visual Studio and is very handy during coding. Apart from its sensible default settings, it has options to customize the settings to match our company standards, and it helps keep the code up to those standards. As soon as we break any rule in the set, the offending code is underlined with a warning, and we can ask Resharper to fix it for us. When all developers share the same rule set, the code looks consistent across all the different modules.

This has lots of other benefits too that are listed here and here.

Resharper cheat sheet is available here.

While working on a file, Resharper shows a summary of all the coding-guideline warnings and errors violated in that file, as an icon in the top right corner of the editor. The team's responsibility is then to fix all the warnings and errors Resharper reports in the file before check-in. If the top right corner shows a green check icon, you are at the required standard.

Quality Parameters:
We saw in the previous post which parameters define software quality. Now the questions come popping up:

  • “How the heck can I know where the quality is poor in my project, with hundreds of DLLs and millions of lines of code??”
  • “I am interested, but where do I start??”
  • “I don’t want to waste more time; is there a tool which will give me the results quickly??”
  • “How often should I run it?? Can I automate it??”

There are a couple of tools that can help monitor quality parameters; some of them are FxCop, Sonar and NDepend. I have used NDepend extensively for .Net, since it is very easy to use, fast, has excellent documentation on its site, has a great support team, and has a sister product called JArchitect for Java. So once you have rules configured for .Net products, the same rules can be used for Java products too, which is a wow factor.

When run on a project, this tool lists all the quality-parameter violations: cyclic dependencies, unused code, abstractness, cyclomatic complexity, the number of variables and fields in methods and classes, even code that breaks the company coding standards, performance issues, and much more. It also gives you the flexibility to write custom rules in a SQL-like format called CQL (Code Query Language) in older versions and CQLinq in newer versions.

For eg: CQLinq to list all methods that are larger than 30 lines of code:

  warnif count > 0
  from m in Application.Methods
  where m.NbLinesOfCode > 30
  select m

NDepend can also compare two versions of the code. For example, if you ran the tool in the first week and again in the second week, you can compare the two builds and check how the quality changed, which is a good metric for how we are improving.

This can also be integrated with continuous integration tools, to run automatically during daily builds and monitor the quality every single day. For critical rules we can even break the build and email the team about recently checked-in code that is not up to quality.

Duplicate code:
Those days were so good, when I used to love copying and pasting code. I was so productive; delivering similar features was a day's or a week's job for me, and my managers used to praise me a lot, LOL.

But now, when I see duplicate code my blood boils. What the heck happened to me? Is duplicate code good or bad?

Duplicate code is like a cancer: as it grows it starts showing how bad its presence is and how hard it is to cure, and you need to invest huge amounts of money and time to cure it.

But why is duplicate code bad??

  • A fix in one piece of duplicated code forces us to make the same change at every place where we copy-pasted it. If we miss one, God help the customers.
  • The person who copy-pasted is the only one who knows where all the other jewels (clones) are. What if a new guy comes in to fix an issue in that section?
  • It breaks core design principles such as DRY, SRP and OCP.
  • It increases the LOC: more code to maintain and bigger assemblies.
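To make the DRY point concrete, here is a minimal C# sketch (all class and method names are hypothetical) in which a rule that might otherwise be copy-pasted into billing and shipping code lives in exactly one place, so a fix is made only once:

```csharp
using System;

// Hypothetical example: the same "valid quantity" rule was once copy-pasted
// into both billing and shipping code. Extracting it into one helper means
// a fix to the rule happens in exactly one place (DRY).
public static class OrderRules
{
    // The single authoritative copy of the rule.
    public static bool IsValidQuantity(int quantity)
    {
        return quantity > 0 && quantity <= 100;
    }
}

public static class Billing
{
    public static bool CanInvoice(int quantity)
    {
        // Reuses the shared rule instead of a pasted copy.
        return OrderRules.IsValidQuantity(quantity);
    }
}

public static class Shipping
{
    public static bool CanShip(int quantity)
    {
        return OrderRules.IsValidQuantity(quantity);
    }
}
```

If the rule changes, only OrderRules.IsValidQuantity is touched, and every caller picks up the fix automatically.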

Hmm, interesting. Now let's see whether there is a tool to help us get rid of it. I used a couple of tools and found Simian (command console) and Atomiq (GUI) very good at identifying the locations of clones.

Simian is really fast and has options to include and exclude files and folders, set a minimum duplicate line count based on our own standard, and emit output in different formats like XML, CSV etc. I prefer XML, with which I can write my own tool to read the output and show the duplicates in a variety of styles for better analysis.
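As a sketch of such a tool, here is a small C# reader. The XML shape it expects (set elements carrying a lineCount attribute, each containing block elements) is an assumption made for illustration; verify it against the report your Simian version actually emits:

```csharp
using System;
using System.Xml.Linq;

public static class DuplicateReportReader
{
    // Parses a Simian-style XML report and prints each duplicate set.
    // NOTE: the element and attribute names below are assumed for
    // illustration; check them against your Simian version's output.
    public static int CountDuplicateSets(string xml)
    {
        XDocument doc = XDocument.Parse(xml);
        int sets = 0;
        foreach (XElement set in doc.Descendants("set"))
        {
            sets++;
            Console.WriteLine("Duplicate of {0} lines:", (string)set.Attribute("lineCount"));
            foreach (XElement block in set.Elements("block"))
            {
                Console.WriteLine("  {0} ({1}-{2})",
                    (string)block.Attribute("sourceFile"),
                    (string)block.Attribute("startLineNumber"),
                    (string)block.Attribute("endLineNumber"));
            }
        }
        return sets;
    }
}
```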

Continuous integration:
As said in wiki : Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.

Why do we have to do it so frequently?
In one of my old projects we used to generate a build only at the end of the iteration or cycle, which was once a month, and that day would be like a festival for us. No work; the whole team in one cubicle surrounding the person who builds the product (he would be totally freaked out); we would go home really, really late; all the managers staring at us as if we had committed some crime; postponing the delivery was a habit.

But what was the problem?

  • Taking the latest copy from the repository, with a month of changes from all the developers, and building it: guess what, hundreds of errors, and fights between developers over why interfaces were changed without intimation.
  • Many files were referenced locally but never checked into the repository.
  • After fixing the build issues and generating the build, we would find missing functionality at the boundaries, with everyone assuming someone else would handle it.
  • Rework, and nights at the office.

So generating builds frequently and running integration tests regularly ensures that on the final night we are at home, sleeping peacefully in bed.

There are a couple of tools to help us resolve this. CruiseControl.NET and FinalBuilder are at the top of the list. These tools can be scheduled to take the latest copy of your code from the repository automatically, build the entire project/product, and send notification mails on failure or success of the builds.

Finally, what do we get from all these??
Using all these tools together we can improve the overall quality of the software and keep monitoring it… Ask me how??
Use the continuous integration tools to build the assemblies, run tools like NDepend/Simian to check the quality parameters, and report any breakages to the team through email.

By having such systems in place we get early notification and can take corrective action as and when issues are introduced. Imagine our lead coming and telling us, “yesterday you checked in a class with a method named I_AM_BIG_METHOD that is too big; please make it small or move part of it to another class to be SRP compliant”, the day after we checked it in, while it is fresh in our mind, rather than making us sit and fix it on the final day of the release, or after a few months when we have totally forgotten the code we ourselves wrote.

As it's said, an apple a day keeps the doctor away; for programmers it is: using these tools every day keeps bad code away.

Clean and happy coding,
Codingsense 🙂

Posted in Uncategorized

Sensible code Part II – What is software quality??

Posted by codingsense on March 31, 2014

Before getting to software quality, let's spend some time seeing how software degrades.

Software degradation doesn't happen in a day or a week; it takes a couple of months or even years for us to realize that the software quality has degraded, and it takes even more time after that to bring it back on track. It starts from the day we begin rushing code in, without giving ourselves enough time to think before implementing features or fixing defects; we always tend to think the code we write is good.

And if we do not concentrate on quality at every step of development, we end up as I mentioned in my last post here.

Imagine how it would be if we could track the degradation at the moment we insert bad code. It really needs good observation and practices to achieve, but in the real world, since we are so busy writing new code and delivering quickly, we hardly bother to look back at what we have done wrong. And since we are always proud of our code, how can we check whether we are at our best?

There are a few good practices that will really help us avoid sidelining from our product's quality path. Using these practices at every step ensures that we get early warnings and keep our product lively.

Let's see which quality parameters really matter for product health.

  1. Cyclic dependencies: This is the most important of all; this parameter targets types and namespaces. If there are cyclic dependencies, most of the principles are broken, and once we introduce them it is really, really hard to maintain that piece of code. To describe a cyclic dependency, let's take an example: if a type/namespace A uses a type/namespace B, and B in turn uses A, we call that a cyclic dependency. In this case A and B are tightly coupled, and any change in A or B impacts both of them. Writing unit tests for such dependent types is also hard, since they are difficult to mock or stub.
  2. Afferent coupling (Ca): This parameter helps us monitor the single responsibility principle (SRP). It applies to methods, types (classes), namespaces (packages) and even assemblies. For example, if 3 other types depend on a type t1, then the afferent coupling of t1 is 3. If many types refer to one type, it shows that the type is taking on too much responsibility and should be broken into multiple types.
  3. Cyclomatic complexity (CC): This parameter counts the number of execution paths. For example, if there is no condition in your method, its CC is 1; with one if condition it is 2. So every condition in your method increases the CC by 1. All the following keywords increase CC by 1: if, while, for, foreach, case, default, continue, goto, ||, &&, catch, ?:, ?? etc.
  4. Interfaces: Following the dependency inversion principle (DIP) and the interface segregation principle (ISP) gives us a lot of flexibility for future modification.
    DIP says classes should not be accessed directly but through an interface. For example, imagine a type A wants to use B. Instead of referring to B directly in A, create an IB interface and use IB in A; now A depends on the signature, not the behaviour. If a new type B1 comes along, we don't have to change anything in A; we simply pass an instance of B1 instead of B.
    ISP says don't clutter your interface with all the methods, but segregate it using SRP. For example, if we are writing a Person class, don't create one IPerson interface with methods walk, run, stop, sit, stand, eat, sleep, meditate, exercise etc. Instead separate it into behavioural interfaces: an IMove interface can hold walk, run and stop, and IPerson, or better the Person class, can then inherit from IMove.
  5. New keyword: To create any object we have to use the new keyword, but why is it called bad? new is fine in the right places; it should not be scattered throughout the code, and the less it is used the better. We can decrease its usage with factory methods, dependency injection or MEF. But how does that help us?? Imagine a logger class: today the requirement says the logger should log only to a text file, so we implement it and use new across our code wherever we need logging. Later, when the requirement says we should also provide an option to log to the event log, we start writing if-else inside our logger class. Instead, that is a new responsibility and should be handled by a new class; if the creation of the logger were in a factory, we could just write a new implementation and return it.
  6. Comments and names: Comments, as Uncle Bob suggests in his book “Clean Code”, are ugly. He suggests that if variables, methods, types, namespaces and assemblies are named descriptively, there is no need for a comment. Reading a line of code should feel like reading an English sentence. Always make a class a noun and a method a verb; e.g. a class Cycle can have methods like Ride(), Start(), Stop().
  7. Number of lines: This parameter depends on the programming language used. For .Net and Java, 15 lines is an ideal count for a method and 200 for a type. There are cases where you cannot write a logical flow within 15 lines in a method, but ideally it's good to keep methods and types as short as possible.
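To make points 4 and 5 above concrete, here is a minimal C# sketch of the logger example (all names are hypothetical, and the loggers write to in-memory lists so the sketch stays self-contained). Callers depend on an ILogger interface and ask a factory for the instance, so adding the event-log variant later means one new class and one new line in the factory, with no change at the call sites:

```csharp
using System;
using System.Collections.Generic;

// DIP: callers depend on this signature, never on a concrete logger.
public interface ILogger
{
    void Log(string message);
}

// Today's requirement: log to a text file (simulated with a list here
// so the sketch needs no file I/O).
public class FileLogger : ILogger
{
    public List<string> Lines = new List<string>();
    public void Log(string message) { Lines.Add("file: " + message); }
}

// Tomorrow's requirement arrives as a NEW class,
// not an if-else inside the old one.
public class EventLogger : ILogger
{
    public List<string> Entries = new List<string>();
    public void Log(string message) { Entries.Add("event: " + message); }
}

// Factory: the single place where new is used for loggers.
public static class LoggerFactory
{
    public static ILogger Create(string kind)
    {
        if (kind == "event") return new EventLogger();
        return new FileLogger();
    }
}
```

Call sites stay unchanged: `ILogger log = LoggerFactory.Create("event"); log.Log("started");`.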

There are many more quality parameters that need to be considered, but let's focus on the above, since they are the really important ones to take care of. In the next post let's see how to get early warnings on these parameters as soon as we break the law.
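As a quick illustration of the counting rule in point 3 above, here is a made-up C# method whose cyclomatic complexity can be counted by hand:

```csharp
using System;

public static class ShippingCost
{
    // Cyclomatic complexity = 4:
    // 1 for the method itself, plus 1 for each of the three `if` conditions.
    public static decimal Calculate(decimal weightKg, bool express, bool international)
    {
        decimal cost = 5m;               // base rate
        if (weightKg > 10m) cost += 4m;  // +1
        if (express) cost *= 2m;         // +1
        if (international) cost += 15m;  // +1
        return cost;
    }
}
```

Starting from 1 for the method itself, each if adds one possible path, giving CC = 4; a method full of nested conditions quickly climbs past any reasonable threshold.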

Let me know if there are any important parameters that I have missed.

Happy Learning 🙂

Posted in Uncategorized

Sensible coding part I – Howz your product???

Posted by codingsense on March 26, 2014

After a few years of development experience across various projects and technologies, I started reflecting on my past, on what went right and what went wrong in terms of development, for self-appraisal.

While analyzing, I found a couple of common things that happen in the lifetime of a product and of the people working on it.

Kick off a project – Happy birthday to our new project (Year 0):
The product is getting born, very much in a versatile state. Let's name it Mr Perfect. Management blesses Mr Perfect: he will be built with world-class code, will sustain any change in the requirements, will live long in good health and will bring a very good name to the company.
The most competent people are selected for the team. Everyone in the team is excited about the new deliverables that are planned; we can see the excitement and energy in the team. All are busy learning a new technology, or a new version of one, and people come up with different approaches to any given problem. Working with such a team seems wonderful.

First release of Mr Perfect (Year 1):
The product has survived the first release, and the changes and bug fixes have not affected it. People start working very positively on new requirements and on changes coming from the field users.

Second release of Mr Perfect (Year 2):
Lots of changes and bug fixes were done; more and more features are getting pushed into the product. Deadlines are short, managers ask for quicker deliveries, customers ask for quicker fixes. People start doing hard work and ignore smart work to achieve the goals.

Third release of Mr Perfect (Year 3):
The product has grown big, with more than half a million lines of code. People have created a mindset about which features are good and which are bad (a few techies call the latter legacy code 🙂 ). The features labelled legacy create panic in the people associated with them; if a bug is raised there, they start to panic and sometimes get frustrated.

Fourth release of Mr Perfect (Year 4):
Our marketing and requirements guys go to the customer to check how they feel about the product and what else they need. They come back with a big list of new features, some features to scrap, and a list of customer complaints about improper support.
The development team is unable to digest the changes; they start looking for workarounds, hiding some features, implementing new ones.


What are the probable outcomes?? Any guesses??
Check which of the below would be true.

  • A bug fix in one module starts creating an impact on some other module.
  • The code is hard to understand, very fragile and not versatile.
  • Duplicate code is introduced everywhere; any fix at one place must be repeated everywhere.
  • Performance of the product is very low.
  • Loads of memory leaks result in slow performance and frequent crashes.
  • Removing a feature is not easy, since some of its classes are used by many others.
  • The development team asks for much higher estimates for even a small change.
  • Bugs reported in critical features (legacy code) are ignored and delayed as much as possible.
  • The quality team raises non-compliance on the features with more bugs.
  • Managers are worried about their competent team.
  • The development team suggests refactoring the features or building a new product.

Blah Blah Blah..

Guess what would happen to such a Mr Perfect product, or to the customers who rely on it?

Has anyone seen these problems in a product?? What went wrong suddenly, when just last year everything was fine?? Is there any solution for this??

Let me know your comments if you have seen such issues, are living with them, or have overcome them.
Hmm, I bet there has to be a solution for this 🙂

Next >> Sensible Code part-ii What is software quality?

Happy Learning 🙂

Posted in C#, Codingsense, Solutions