We are living in a time where businesses and the people running them often change their minds. I won’t go into the details of why that is so; let’s just take it as a given, and let’s say they are right, because flexibility gives them a competitive edge. It is on us to provide that flexibility. We are witnessing high demand for maintainable software: software that can change easily over time, and where most of the effort, measured in time and people’s work, happens after the initial release…long after the initial release, if you’re lucky. Needless to say, if we want our software to succeed in the long run, we must set our mindset toward providing businesses their much-needed value over that long run.
I know the following is a bold statement, but it cannot be said often enough:
There is no silver bullet in software development.
The only thing that separates good software from bad is the value the software delivers to the business at a particular point in time. To prolong that value over a long period, you have to make the software easier to change.
How you achieve that is a completely different matter; it borders on art and on being able to predict the future. The good news is that there are a few guidelines you can follow to help you out.
Here, I try to shed some light on two of them: cohesion and coupling. Read on…
Continue reading “High Cohesion, Loose Coupling”
Whatever you do in the .NET Framework deals with either value or reference types, yet there seems to be a great deal of confusion, in discussions with fellow developers and on online forums and Q&A sites, about where variables actually reside. It is so basic, yet the cause of so many misconceptions. One of them, for example, is that value types reside on the stack and reference objects reside on the heap. We will try to clear up some of those misunderstandings by carefully examining and explaining what really happens (with the current implementation of the .NET runtime, which at the time of writing is .NET 4.5.1).
Before we dig deeper into this issue, I just want to say that this is by no means a comprehensive guide to how types are handled in the .NET Framework; that would take a whole book. I’m simply trying to paint a clear picture of the general concepts by working through the foundations and showing one possibility of what happens behind the scenes at the deepest level.
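To see why the “value types on the stack, reference types on the heap” rule of thumb breaks down, here is a minimal C# sketch (the types and names are mine, for illustration only, not from the full post). A value type stored as a field of a class lives inside that object on the managed heap, while the copy-versus-reference assignment semantics still differ between the two kinds:

```csharp
using System;

// A value type: instances are copied on assignment.
struct Point
{
    public int X;
    public int Y;
}

// A reference type: variables hold a reference to one shared instance.
class Shape
{
    // This value-type field is stored inline inside the Shape object,
    // which itself lives on the managed heap -- so the "value types live
    // on the stack" rule of thumb does not hold here.
    public Point Origin;
}

class Program
{
    static void Main()
    {
        Point a = new Point { X = 1, Y = 2 };
        Point b = a;        // full copy: b is independent of a
        b.X = 99;
        Console.WriteLine(a.X);            // still 1

        Shape s1 = new Shape { Origin = a };
        Shape s2 = s1;      // copies the reference, not the object
        s2.Origin.X = 42;   // mutates the field inside the shared heap object
        Console.WriteLine(s1.Origin.X);    // 42: both variables see the same object
    }
}
```

The point of the sketch is that the storage location of a value-type instance depends on where it is declared, not on the kind of type alone.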
Continue reading “Value and Reference”
When our projects reach a certain size, it becomes very hard to determine the complexity of our code. It gets harder and harder to see the overall picture, and it becomes very easy to introduce unnecessary complexity into parts of the system where we don’t want it. In simple words, it’s easy to get lost, especially if there are many people working on the project.
On top of many other tools, practices, and principles, like unit tests, integration tests, acceptance tests, and continuous integration, static code analysis tools like NDepend come into play.
Continue reading “Code Analysis with NDepend”
It is pretty hard to write an article on something that so many great authors have written books about. But, as I said, it is my own experience of learning and embracing TDD that I want to share here, in the hope of helping someone out there who can relate to it, and to remind myself of the process I went through while learning it.
The bottom line is that nobody can teach you a programming approach like this by writing or making videos about it. They can only get you started and tell you why you should do it. The real power comes from actually digging into it: the more you do it, the more you master it, and the more you can actually feel its benefits. I didn’t believe people when they said it was addictive; as a matter of fact, I opposed the whole idea. I was one of those people who thought it was a waste of time and that you could achieve more by just writing production code…and boy, was I wrong about that.
We have to start somewhere, so why not at the very core of it: the unit test definition.
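As a small taste of what that definition looks like in practice, here is a minimal C# sketch of a unit test in the common Arrange/Act/Assert shape. The `Calculator` class is a hypothetical example of mine, and the test uses xUnit-style attributes and assertions (assuming the xunit NuGet package):

```csharp
using Xunit;

// The class under test: one small, isolated unit of behavior.
public class Calculator
{
    public int Add(int x, int y) => x + y;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        // Arrange: set up the unit in a known state
        var calculator = new Calculator();

        // Act: exercise exactly one behavior
        var result = calculator.Add(2, 3);

        // Assert: verify the observable outcome
        Assert.Equal(5, result);
    }
}
```

A test this small runs in milliseconds and fails for exactly one reason, which is what makes the red-green-refactor loop of TDD practical.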
Continue reading “Test Driven Development”
Lately I have encountered many questions about the best practices and guidance you can adopt when designing an application.
Whether that’s an ASP.NET application or any other type of application that uses object-oriented principles.
First and foremost, let me begin with probably one of the most important principles in object-oriented development and design: the Separation of Concerns (SoC) principle.
Separation of Concerns (SoC)
SoC is the process of dissecting a piece of software into distinct features that encapsulate unique behavior and data that can be used by other classes. Generally, a concern represents a feature or behavior of a class. The act of separating a program into discrete responsibilities significantly increases code reuse, maintainability, and testability. For example, MVC separates content from presentation, and data processing (the model) from content.
Of course, every programming language has its own ways of incorporating this principle.
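As a rough C# illustration of SoC in the MVC spirit (all class and interface names here are hypothetical, chosen for the example), the data-access, presentation, and coordination concerns can live in separate classes, each with a single reason to change:

```csharp
using System;
using System.Collections.Generic;

// Concern 1: data access. Knows where report data comes from, nothing about display.
interface IReportRepository
{
    IEnumerable<string> GetReportTitles();
}

class InMemoryReportRepository : IReportRepository
{
    public IEnumerable<string> GetReportTitles() =>
        new[] { "Sales Q1", "Sales Q2" };
}

// Concern 2: presentation. Knows how to render titles, nothing about storage.
class ReportView
{
    public void Render(IEnumerable<string> titles)
    {
        foreach (var title in titles)
            Console.WriteLine(title);
    }
}

// Concern 3: coordination (the "controller" in MVC terms). Wires the two together.
class ReportController
{
    private readonly IReportRepository _repository;
    private readonly ReportView _view;

    public ReportController(IReportRepository repository, ReportView view)
    {
        _repository = repository;
        _view = view;
    }

    public void ShowReports() => _view.Render(_repository.GetReportTitles());
}

class Program
{
    static void Main()
    {
        var controller = new ReportController(new InMemoryReportRepository(), new ReportView());
        controller.ShowReports();
    }
}
```

Because the repository sits behind an interface, the storage concern can be swapped (say, for a database-backed implementation) without touching the view or the controller.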
The S.O.L.I.D. Design Principles
S.O.L.I.D. stands for Single responsibility, Open-closed, Liskov substitution, Interface segregation, and Dependency inversion.
The S.O.L.I.D. design principles are a collection of best practices for object-oriented design. All of the well-known Gang of Four design patterns adhere to these principles in one form or another. The term S.O.L.I.D. comes from the initial letter of each of the five principles, which were first collected in the book Agile Principles, Patterns, and Practices in C# by Robert C. Martin, commonly known to us as “Uncle Bob”.
The following sections describe each one of them.
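As a quick taste of the first principle, Single Responsibility, here is a small C# sketch (the invoice example is my own, not from the book). Putting calculation and persistence in one class would give it two reasons to change, so the two responsibilities are split:

```csharp
using System;

// Single Responsibility: each class has exactly one reason to change.
// Invoice owns the business rule (how a total is computed)...
class Invoice
{
    public decimal[] LineAmounts { get; set; } = Array.Empty<decimal>();

    public decimal Total()
    {
        decimal sum = 0;
        foreach (var amount in LineAmounts)
            sum += amount;
        return sum;
    }
}

// ...while InvoiceRepository owns the storage concern. A change in the
// persistence mechanism never forces a change in the calculation rules.
class InvoiceRepository
{
    public void Save(Invoice invoice)
    {
        // persistence logic lives here, isolated from the calculation rules
        Console.WriteLine($"Saving invoice with total {invoice.Total()}");
    }
}

class Program
{
    static void Main()
    {
        var invoice = new Invoice { LineAmounts = new decimal[] { 2m, 3m } };
        new InvoiceRepository().Save(invoice);
    }
}
```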
Continue reading “The S.O.L.I.D. Principles”