My name is Jean-Dominique Nguele and this is my blog. FLVCTVAT NEC MERGITVR
Today I am going to do something I have not done before. A couple of months ago I was contacted by NDepend to play around with their software. I did not check, but there is probably a fair amount of software reviews out there. Hence why I will try a hopefully different approach: a noob approach. I’ll read the promise from the software to review and just dive into it without any sort of guidance. Let’s call it Noob Review. Yep, that’s how you create a series that might or might not live longer than a post.
According to the website homepage, NDepend is the only Visual Studio extension able to tell us developers about the technical debt we create, allowing us to undo that debt before it gets committed. The alleged debt is calculated based on a set of predefined rules written as LINQ queries. We can also add our own queries.
Enough with introductions, let’s just get noobie!
I downloaded the latest version, 2018.1.0, released on Wednesday; you can find the link to the latest version here. Upon download, NDepend presents itself as a ZIP archive containing some executables and a Visual Studio extension installer.
As you can see below, the installer offers to install the NDepend extension for Visual Studio versions all the way back to VS2010.
From there I just installed the extension using the licence key the NDepend team nicely offered me.
From now on I am going full improv. I will have no idea of what I am doing, because that is what most people do when they get a new tool. That approach works when you know how to use a pencil and grab a pen for the first time. It might be a bit more entertaining if I do so with NDepend. Since it is a tool that should allow me to detect technical debt, I will write an OK piece of code, then some less OK code, to see what happens.
First things first, I created a console project running on .NET Core. I did not see anything trigger automatically. Being used to ReSharper, I checked the toolbar and saw an NDepend menu that was not there before.
After attaching the project, I went back to run an analysis on my console app but I kept getting this error:
Turns out the NDepend project did not pick up the Visual Studio project. I closed Visual Studio and reopened it, yet I had the same error after loading the NDepend project and attempting to run the analysis again. Paying more attention to the error message this time, I noticed the error was about a reference to my solution not being loaded in NDepend. I conjectured that these errors occur because I did not create the NDepend project in the same directory as my console app solution. Probably a noob error on my end. So I went on to edit the NDepend project properties.
Above, you can see the NDepend project properties after I added the reference to my solution using the “Add Assemblies from VS solution(s)” button. It seems that it loaded the binary generated by the solution along with the third-party binaries my solution uses, such as System.Console. After that, I ran another analysis and it eventually worked, as you can see below:
Now that I finally set up the static analysis properly I can dive into what it reveals from a basic “Hello World!” console app. After that first successful analysis run, I could see that the NDepend menu changed. A whole new world opened to me. As a first reflex, I opened the “Rules” submenu. From there I could see that a rule was violated.
What rule could Microsoft’s “Hello World!” code possibly have violated? Well, look down.
Class with no descendant should be sealed if possible. It is actually more of a warning. A cool bit I noted is that you even get a more detailed description of the warning's cause, along with why you should consider acting on it.
I always learned that we should have as few warnings as possible, so let’s clean that up and make our Program class sealed. After making the change, when I re-ran the analysis, I got the same result and broken rules as before. There was also a message telling me that my file Program.cs was out of sync. I got a hunch and rebuilt the solution. Then the analysis result views updated.
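For reference, the fix is a one-word edit to the console template's entry point. Here is a sketch of what the generated code looks like after the change (the namespace is an assumption; yours will match your project name):

```csharp
using System;

namespace NoobReviewSample // hypothetical namespace; use your project's own
{
    // Sealing the class satisfies NDepend's
    // "Class with no descendant should be sealed if possible" rule.
    internal sealed class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```

Rebuilding after the change is what brings NDepend's result views back in sync, as noted above.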
Now that the code is green and clean, it is time to try and build some technical debt. If you are not familiar with that term, I will try to sum it up for you. Technical debt is the implied cost of rework needed in the future when choosing a quick and easy solution over one that would be more thorough but would take more time. More often than not, choosing the easy way will hit you back. It will hit you hard.
Let’s say you take a complex subject at school. You could put in place a system to cheat to get good grades. It is easy and does not require extensive preparation work. Yet you can get caught and lose everything. Also, the ink can ruin your cheatsheet. Or, you could learn that subject and try to do your best mastering it class after class, exercise after exercise. You will not necessarily feel the effort was worth it from the start but eventually it will pay off. Learning your subject from the start is hard but you get more confidence to build on top of. Building technical debt on purpose is basically cheating on your Geometry class from high school. Don’t cheat on your Geometry class.
I felt like I did not want to spend months writing the perfect imperfect piece of code, so I just googled “c# bad practices” and opened the first result that came up. From there I just copied the method and adjusted it to be called in our Main(). You can copy the code below if you are trying to reproduce the experiment.
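I won't reproduce the exact snippet from that post here; as a rough stand-in (hypothetical names and logic, not the actual listing), the kind of code involved exhibits the issues discussed below: a public method with a single call site, and an if-else whose branches match.

```csharp
using System;

internal static class BadPractices
{
    // Hypothetical reconstruction, not the actual copied snippet.
    // Public despite having only one caller (Main) in a console app.
    public static decimal Calculate(decimal amount, bool isPremium)
    {
        // Matched if-else: both branches do the same thing.
        if (isPremium)
        {
            return amount * 0.9m;
        }
        else
        {
            return amount * 0.9m;
        }
    }
}
```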
Once the code was ready, I rebuilt the solution and ran a new analysis.
In the post mentioned earlier, a few problems are pointed out, though criticizing some of them here would be unfair. I will keep only the points that I wish had been picked up and were not.
The Calculate() method is public yet accessed by only one method in a console app. I hoped I would see more warnings about the copied code itself and not about how I access it in my Main().
Every if-else is matched (to be fair, it might be valid business logic in some cases, but a warning would be welcome).
Pointing these out can be considered unfair, and maybe it is. I will try to spend some time later to see whether it is possible to create custom rules to spot any of these. That will definitely be a fun exercise. Feel free to try the same at home.
I originally planned on adding a section where I would try to get more warnings and errors, but that would be outside the boundaries of what I want a Noob Review to be. More complex cases and custom queries would be more fitting for a separate follow-up post anyway. Since there are loads of things currently happening in my life, that post might not happen for a while. That being said, let’s wrap up with some pros and cons I noted during this quick take.
While people love to customize things, I do not trust myself to write a rules engine determining my code’s quality. I’m likely to make a mistake in there and not notice it. I may actually change this to a pro after experimenting with it more.
After that first experiment, I do not think I would use NDepend for my personal projects. The cons I pointed out above outweigh the pros, in my opinion. I do believe that spending more time with NDepend could change my vision of it and maybe make me realise that it fits my needs more than I think. I am no evangelist nor influencer, and even if I were, or become one by the time you read this, you should not take this post as absolute truth. It is a Noob Review after all; it cannot be fully right nor fair. My piece of advice is to go and have a look for yourself. If your interest got piqued by this post, you should download NDepend and figure out whether it fits your needs. You get a 14-day trial to play with it. Happy experimentation!
This tutorial is an introduction to .NET Core CLI tools. More precisely, it is about creating a web API using the CLI tools provided for .NET Core. Whether you are a beginner in development or just new to .NET Core, this tutorial is for you. However, you need to be familiar with what an API is, and with unit tests, to fully enjoy it. Today, we will set up a solution grouping an API project and a test project.
For the next steps, you will need to install .NET Core and Visual Studio Code (referred to as VSCode later for the sake of brevity), both of which are supported on Mac, Unix and Windows. If you want to know how that multi-platform support works, have a look here.
First things first, we will open a terminal (or PowerShell for Windows users). Once this is done, we can create our solution, which I will name DotNetCoreSampleApi, as follows:
dotnet new sln -o DotNetCoreSampleApi
This command will create a new folder DotNetCoreSampleApi and a solution file with the surprising name DotNetCoreSampleApi.sln. Next, we will enter that folder:
cd DotNetCoreSampleApi
Now that the solution is here, we can create our API project. Because I am not the most creative mind I will also name it
DotNetCoreSampleApi. Here is the command to create the project.
dotnet new webapi -o DotNetCoreSampleApi
That command will create a subfolder named DotNetCoreSampleApi inside your solution DotNetCoreSampleApi. If you followed all the steps, your solution root should contain the file DotNetCoreSampleApi.sln and the web API folder DotNetCoreSampleApi. The web API folder should contain a few files, but the one we need now is
DotNetCoreSampleApi.csproj. We will add a reference to it in our solution. To do so, run the following command:
dotnet sln add ./DotNetCoreSampleApi/DotNetCoreSampleApi.csproj
After getting a confirmation message we can now start the API by running that command:
dotnet run --project DotNetCoreSampleApi
After a few seconds, it should display a message notifying you that the API is now running locally. You may access it at http://localhost:5000/api/values which is the Values API default endpoint.
You may be aching to see some code by now, but unfortunately you will have to wait a bit more. Back in the days of .NET Framework, there was no such thing as generating projects by command line; you had to use cumbersome windows to pick what you needed to create. Now all of this project generation can be done by command line thanks to the CLI tools. You will like it. And this is merely a suggestion. Back to the terminal: if the API is still running, you may kill it by pressing
Ctrl+C in the window you opened it in.
We are now able to create a test project and add it to the solution. First, let’s create the test project using
dotnet new as follows:
dotnet new mstest -o DotNetCoreSampleApi.Tests
That command creates a new unit test project using MSTest, in a new folder with the name
DotNetCoreSampleApi.Tests. Note that if you are more of a xUnit person you can replace
mstest in the command with
xunit, which will create an xUnit test project. Now, similarly to what we did for our web API project, we will add our test project to the solution:
dotnet sln add ./DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj
Almost instantly you should have a confirmation that the project was added.
Now, open VSCode and open the folder containing the file
DotNetCoreSampleApi.sln. At this point, your folder has the following structure:
If you have never used VSCode before, or at least not for C# development, it will suggest that you install the C# extension:
Select “Show Recommendations” and apply what VSCode suggests. Then, once you have finished installing the C# extension, you will get a warning about adding missing assets to build and debug the project; select “Yes”.
Don’t hesitate to go back a few steps, or even to restart this tutorial, if something does not seem to work as expected. Here is how your test folder should look by now:
And finally, we are getting to the fun code-writing part, the part where we put aside our dear CLI tools. By code writing I mean copy/pasting the code I will show you later. And by fun, I mean code that compiles. There is nothing more frustrating than code that does not compile, especially when you have no idea why. Fortunately, this will not happen here.
Now that you have your code editor ready to use you can go ahead and delete the
UnitTest1.cs file. Once done, create a new file named ValuesControllerTests.cs in your test project. Your VSCode should then look more or less like this:
Using VSCode, the file should be empty, but in case it is not, delete its contents to match the screenshot above. As soon as you get your nice and empty file, copy the code below into it:
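The original listing is not reproduced here, but a minimal MSTest class along these lines would fit; this is a sketch based on the default Values controller described later in the tutorial, not the exact original file:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using DotNetCoreSampleApi.Controllers;

namespace DotNetCoreSampleApi.Tests
{
    [TestClass]
    public class ValuesControllerTests
    {
        [TestMethod]
        public void Get_ReturnsBothDefaultValues()
        {
            var controller = new ValuesController();

            // On 2.1+ templates Get() returns ActionResult<IEnumerable<string>>,
            // hence .Value; older templates return IEnumerable<string> directly.
            var values = controller.Get().Value.ToList();

            Assert.IsTrue(values.Contains("value1"), "value1 is not returned");
            Assert.IsTrue(values.Contains("value2"), "value2 is not returned");
        }
    }
}
```

If your template's Get() returns IEnumerable<string> directly, drop the .Value.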
Now you should get some warnings, which is perfectly fine because they should be here. If you hover over these you will see some referencing related error messages like below:
These appear because we have not referenced the API project in our test project yet. It is time to open your terminal again. However, if you feel like having a bit of an adventure, you can try VSCode’s integrated terminal, which will open in your solution folder. In order to do so, you can press
Ctrl+' while in VSCode to open it. Or
Ctrl+` if you’re using a Mac; either probably works on Unix.
Once the terminal is open, we will reference our API project in the test one with this command:
dotnet add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj reference DotNetCoreSampleApi/DotNetCoreSampleApi.csproj
If you don’t see the full command above, you can still copy it using the copy button present when hovering.
Now that the reference to the API project is here the referencing warnings concerning it should be gone. However, a new one might appear about the
Get call, as below. I am not quite sure why it happens, but it seems to be a bug in VSCode not realising this reference comes through the API project. However, you should not worry about it, because if you build the solution and/or run the tests, it will work.
Now we get to the crispy part, the one we need before getting any further; the part we can use as a basis before delving into more advanced stuff like continuous integration or continuous deployment: running a test that validates our logic. If you had a look at the
ValuesController.cs file inside our API project you will see that the
Get() method is returning an array of strings. This array contains the values “value1” and “value2”. The test class you copied earlier contains a method that verifies that both “value1” and “value2” are returned by this Get() call.
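For reference, the scaffolded method in question looks roughly like this (a sketch of the webapi template; the file generated by your SDK version may differ slightly):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace DotNetCoreSampleApi.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values: returns the two default values the test checks for.
        [HttpGet]
        public ActionResult<IEnumerable<string>> Get()
        {
            return new string[] { "value1", "value2" };
        }
    }
}
```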
So, back to the
ValuesControllerTests.cs file. You may have noticed some links appearing on top of our test method like this:
You can ignore the “0 references” and “debug test” links for now. Press “run test” to execute our test. Actually, it will first build our API project to have the latest version of it before linking it to our test binary. After running the test, you should see something like this:
It’s nice to know that one of your tests failed, however, you know what is better? Knowing which test actually broke and why. Therefore, this is the perfect time to bring up the .NET Core CLI tools again. Now, you can run our test using the .NET Core CLI tools with this command:
dotnet test DotNetCoreSampleApi.Tests
Which will actually provide you with some more details on what broke:
As you can see you get the message “value2 is not returned” that we defined in our test file. Here is a little callback for you:
I won’t say that now you are a fully fledged .NET Core developer, but it’s a good start. You just created your (maybe) first API and test projects. Moreover, the test actually validates some of the API controller logic. So you know, congrats on that. However, if for one reason or another something did not go according to plan, feel free to check the source code here.
I hope you enjoyed this new entry of my future-proof series, and I will see you next time. You should look forward to it, as I will cover how to set up continuous integration for such a project. It should be different from that other post from last year using AppVeyor.
And remember, if you ever need anything from the CLI tools:
Hi everyone, it’s been exactly a month since my last post and I have a good excuse for it. As it turns out, I was pretty busy between a wedding, a holiday and the beginning of a personal project. Yep, another one! From now on I will refer to it as my Greek goddess gamble until I reveal what it is all about.
Phase 1 of that gamble started a few weeks ago; hopefully I’ll make enough progress by December. Time is key here, so it is more than likely that I will post even less until then, which makes it an even bigger gamble. Not posting for a month slowed the growth of my views by 8%. Still, I am lucky enough to see the number of readers slightly increasing week after week and hope it will last until December. Hopefully, the break will allow me to fully focus through my weekends and evenings to deliver on that crazy move.
Before you ask, no, I am not gonna retire to a corn field to raise my chicken anytime soon. Anything chicken related I leave to KFC (not sponsored, but can be :wink wink:). Here I am digressing again because I don’t want to risk revealing too much. Back to the main topic, that Greek goddess gamble does involve a fair amount of coding along with research. I originally wanted to kind of serialize and post every week about it or even vlog my progress. But eventually I realised that it will be more meaningful if there is a clear narrative through the posts. It is much easier to tell a story when you know the end.
To conclude, if the phase 1 of that gamble goes well, I will start to post on a weekly basis and/or vlog through phase 2. In case of failure, well I’ll just present over a couple posts what it was about and what went wrong. Stay tuned!
P.S. If you are craving my personal posts, you can check out my recent poetry or my techier stuff. You may even want to keep an eye on Poq’s blog within the next few weeks; just don’t tell anyone I told you.
Have I ever told you about that annoying Bugs Funny?
Bothered me day and night from his irritating company.
And another day and another night, yet again another one,
Thinking I had nothing better to do, no joy, no life, no plan.
Sneaking in my code when I was all chill and compiling,
His exception traces eyeing at me seem almost smiling.
Even mocking, it doesn’t matter how hard I have studied,
As for next few days he will torture, get my brain crippled.
Burned in its light, blind to its weak spot, feeling hopeless,
I keep browsing StackOverflow with Redbull and stress.
It aches in every bone, my date nights and parties gone,
Bugs look at me, trick me, slap me. Show me mercy? None.
Really unfair you know, not one bug should have all that power,
Imprison, break my mind, haunt from the kitchen to the shower.
Drinking my misery when suddenly I remember, flabbergasted,
I inadvertently turned a comparison into an assignment, damned!
Run my program again as I get closer and closer to the rise of dawn,
I finally got rid of Bugs Funny, indeed now he’s dead and gone.
When I squashed it I wondered why I was so numb, so dumb,
More than ever I was so close to cry, beg, call for my mum.
Rest my head now I will, not ever rest on my success I shall,
Because his brothers are lurking in the shadows, right behind the wall.
Waiting for my vigilance to fall, letting room for them to spawn,
My testing shall betray them and help eradicate them in a yawn.
I will be the watcher on that wall, protector of my software,
None shall corrupt it with uncovered logic, noobs beware!
It will not be easy but a man’s got to do what a man’s got to do,
I shall head to the bar for a few drinks without any further ado.
I got pretty inspired, writing two of these within 24 hours between Monday and Tuesday, whereas the first one was a month ago. Thanks for reading again; if you missed the last one you can also have a look there. I see that “Poetry time!” is quite popular on here, so I’ll definitely write more tech-ish poems in the future. Thanks again for reading, guys!
Tick, tick, tick look at me it’s Mister Ozymandie,
Once more bringing, no, inflicting my opinion upon thee.
Whatever the effort, the time you put in your source,
My remarks, of your good day will disrupt the course.
No matter how close you were to a merge
It is time for me to compare with yours my verge.
I am the biggest, the best, better than the rest
The victim you will be of my self-esteem quest.
Whether right or wrong my assurance won’t fail
Poker facing you, hoping your knowledge frail.
Always trumping around like there is no tomorrow
Still making up shit when my mastery is shallow.
Even if you manage to see through my gambling
All day, every day, I will keep them dices rolling.
Although you call it perversion, it is my perfection
I know you see me as a pain, worse, a diversion.
It doesn’t matter what you think, it is my ship,
None shall question my conduct, like dictatorship.
Become one for all, always know that all is for me,
Your personal judgement here has no place to be.
Line after line, block after block, thought after thought,
I shall erase your experience, everything you brought.
Indeed, I will not stop until all aspects of my glorious vision,
Sink deep in your mind, make the past you aversion.
For that I am the star on the hailed Christmas tree,
For that others forced the same behaviour on me.
Even though this might be to your growth toxic,
Above all, my ego, my satisfaction is what I pick.
Even if you’re right and I am turning value to churn
And someday for my crimes one makes me burn.
I just want one thing, that you dance on my symphony
Myself throning in development pantheons for eternity.
Because it’s me Mister Ozymandie. All! Look at me!
On the humanity commit history, my mark will be.
Thanks for reading; I hope you enjoyed reading it as much as I enjoyed writing it. I guess writing poems on this blog is a thing now. If you haven’t read the previous one, you can check it out by clicking here.
What is the HttpResponseSimulator? Apart from having the least original name possible, it is a tool that allows simulating the behaviour you want from an endpoint, in order to test an HTTP client and/or wrapper. I built it over an afternoon so that I could write a timeout test for an HTTP client wrapper. I had to get familiar with Node.js and Express again, which I previously used to create HappyPostman. Despite the slow start, it took me about a couple of hours to implement and deploy.
Like every small project written with a simplistic goal, the first version was not great. If you follow that link, you will notice a lot of coupling and no tests whatsoever. The first couple of commits are still good enough to deploy and serve the HttpResponseSimulator’s original purpose. However, I wanted to push it further, live up to my whole “future-proof” thing, and make it robust. To make it robust, I needed it to be testable and to cover as much logic as possible. This is where I started googling to figure out how to write tests and get coverage feedback with Node.js.
Due to the high coupling of my code, my only option was to write HTTP assertion tests: the kind of tests where I hit the endpoints directly and validate the output based on the given input. In order to write these tests, I had two options that would later allow me to refactor that code to clean it up. The easy option was to follow my own tutorial on Postman and remain in known territory.
However, I chose to try something new and stumbled upon supertest, which can be used in tests run with Mocha. It seemed like the best option since I could write all my other post-decoupling tests using Mocha too. Also, Mocha can be used along with tools like Istanbul to generate coverage metrics that can be uploaded to Coveralls. In this case, my choices were all driven by what I wanted to achieve, which is very important in software development. Eventually, after a few days of test writing and refactoring, I was finally happy with the result; you can see the test coverage below:
Now that it is robust, I feel like it is time to share it with the world. It is time to make it open-source; it may just die out in a few months, or grow and become something bigger. It currently serves a few more purposes than just waiting a few seconds before responding: you can now get your response from any freely available URL, or from a Pastebin id, among other things. If you have any improvement suggestions, feel free to hit me up through GitHub. Actually, while you’re at it, if you have any notion of coding and want to try your hand at open-source development, you can fork the project and open pull requests to improve it. Also, if you have a better name than HttpResponseSimulator, hit me up as well.
A few weeks ago, I saw a pull request modifying one of our webjobs, whose codebase is pretty old and had no tests. The pull request had no tests either. The thing is that we had decided to make unit testing mandatory for any pull request a couple of weeks before.
I started reviewing the code when I noticed someone else had already posted a review. A pretty laconic “please add tests”. Not a bad nor a mean review, but not a really helpful one. Proof of it is that it was posted about an hour before and the pull request was still blocked. Indeed, we do not want untested logic to enter or remain in our software; yes, that is aligned with our new policy about tests. That being said, the webjob code was tightly coupled and pretty much impossible to test as it was.
This is where I stepped in: I reviewed the code and found a way to make it testable. I then suggested a few minor changes to the existing codebase to that end. Within thirty minutes he modified the code and was pretty happy to have tests for the logic he improved. Eventually, I went on and approved his pull request, and then the first reviewer followed up.
Please add tests
In most cases, “please add tests” is enough to do the trick: the code is designed properly and decoupling is applied wherever possible. “Please add tests” is enough if the tests were not written out of laziness or were simply forgotten. However, in this particular case, the reviewer did not take into consideration the context of the change. Indeed, it was an update to an old project designed at a time when the backend team was a couple of guys trying to launch a company. Delivering the software was prioritised over making it easily maintainable. In order to allow a business to take off, testing and decoupling were left for another day. Taking these factors into consideration, I was able to come up with a few strategic changes that eventually allowed us to add some tests.
You may have noticed the two different approaches and their effects here. On one hand, turning a change of context into a problem; on the other hand, suggesting a solution. The first one had the pull request frozen for an hour, where the latter allowed the pull request to move forward and the code to be merged. As software engineers, we need to help others move forward and propose solutions, not problems. Solving problems is central to what we do, whether it is designing a seamless checkout or helping a colleague make progress on a project.
We have all been that first reviewer at one moment or another, and if you recognized yourself there, here are a few tips for you:
If you comment on a pull request because it will make you feel superior to the submitter, by showing how big your knowledge is relative to theirs or how you are the best developer there is, don’t. Just don’t. Especially if it does not bring any value to what they are trying to accomplish through the pull request. Always leave your ego out of it if you want to be productive.
Close to the previous one, even though one may happen without the other. Please do not assume someone’s coding or design choices are wrong because they do not match what you would do. Ask questions, and if there is a real issue, try to provide comments that drive the submitter towards a solution.
When you request changes, depending on the system you are using, you may be blocking a pull request and preventing someone from working. Make sure you follow up whenever you can: between two of your own pull request submissions, during a coffee break, or any time you come back to your desk. Time is precious, and when you request changes on a pull request you become responsible for the additional time spent on it by every developer involved.
Ask yourself about the impact you have on a project or a colleague. Does your comment make your colleague’s day better or worse? If it makes it worse, does it actually help solve the problem at hand and bring positive value? Because at the end of the day, all that matters is the value you can create: value to a business, value to people. Making a positive impact on your environment will encourage others to do the same. Eventually it will help you and the people around you thrive and yearn for improvement every day.
Special thanks to Joshua Dooms who did make a positive impact on my vision of how reviews should go.
Nowadays, most tools we use exist to save time. In London, to travel on public transport we have tons of options to make our lives easier: contactless debit card, ticket, Oyster card, you name it. However, having the choice between these options may prove troublesome when in a rush. Indeed, on Monday I used my debit card by accident instead of my Oyster card, making myself pay for a right I already have. Also, had I not used it again while leaving the tube, I would have ended up getting charged the maximum amount; I think it is £6.60 instead of £2.40 for a journey in zone 1. Luckily, I realised my mistake on the spot, allowing me to rectify it while leaving at my station.
I made that mistake because I saw the elevator open and jumped in. Yes, I got in the elevator, but instead of losing a couple of minutes I lost money. The amount is as insignificant as the time saved; however, that got me thinking. I started thinking about all the times I made design or coding decisions to save time. The classic “let’s do something quick” that is basically the coder’s “spray and pray”.
In Monday’s instance, the “spray and pray” was to tap my wallet on whichever side was more accessible. I knew the odds of mistakenly using my debit card were 50/50, and I knew how to limit the loss in case of failure. When the failure happened, I paid a price I was ready for. Similarly, on a project, you need to reduce the risks of your decisions as much as possible, or at least figure out a way to turn things around if they go south. Failing to recognise the risks of the choices we make will be as punishing as the risks taken allow.
This might be the key here. Maybe it is not about missing a shot, but about the rebound: about what you will do when the ball bounces back. If you know how to bounce back from a mistake, you will feel empowered to do more and learn from it. Maybe, in the end, being a good developer is not necessarily about making the right choice every time. It can be about evaluating the potential consequences of our choices and ensuring they are worth taking. It can also be about whether we can adapt to the results of our choices.
So next time I take the tube, I will slow down and tap my Oyster instead of my debit card. Coding-wise, I could run into the most MacGuffinest MacGuffin piece of software that might help on a project and still take time to evaluate its pros and cons, so that I can mitigate the risks of using it.
Test: four letters, one meaning, and for some people a struggle. Getting people around you to write tests is easy only when everyone already agrees with you. As often, there are instances where some people resist writing tests. Here is what I hear most from them:
“I don’t have time to write tests.”
“I don’t need to test this.”
“I can’t write a test for this.”
Not writing tests will always lead to hours of tears and blood: tears and blood from debugging something you let slip through, something that broke your super edgy software. I am not saying that writing tests will give you bug-free software, but at least you know exactly how your code behaves and what you can reasonably expect from it. Even with great code coverage, your code will eventually break, and that is perfectly fine. This is where your tests become useful: they help you make sure you don’t break existing code while refactoring or fixing a bug. Then you can simply add a new test to cover the unexpected scenario.
For example, yesterday a colleague hit some weird data mix-up on a development deployment of an API I created a few months ago, which revealed a case I hadn’t thought of. That API had 95% coverage and a bug still showed up, because that is how software works. The bug came from a virtually impossible case, so I replicated it, wrote a test for it, fixed it, and got it through review and released, all within 30 minutes. That project’s coverage is now at 98% (the highest we have, of course I’m going to show off about it) and yet I know that one day or another, another bug will pop up. When that day comes, it may not be fixed as quickly as yesterday’s, but it will be just as easy to refactor parts of the code safely.
Yes, it takes some time to write tests, but in the long run it is more than worth it. For a long time I thought that the only reason not to cover one’s code was laziness. Not the good laziness that makes you write tests to save time, rather than spend hours debugging and manually checking a bunch of uncovered features. Over time, though, I came to learn that no developer walks to their desk every day to write buggy code on purpose. A lot of factors come into play, such as clients and project managers pressuring you with tight deadlines, tighter and tighter, day after day. Then ensues a drop in quality in favour of faster delivery, which in the long run can hurt a business.
In that kind of situation, blaming a developer for not writing tests will not help anyone. What can help is providing tools that let that developer move faster. This is where today’s post comes in: I am going to present three tools that help me deliver code faster every day without sacrificing quality. Nothing is magic, but I hope they will help you in a personal or professional context as they help me every single day.
It doesn’t matter whether you have access to continuous integration or not. What matters is your ability to write decent tests. Even if you only write very simple happy-path tests, as long as you write them properly you will be fine. Here we go!
Moq is awesome for unit testing. What is unit testing? Well, I don’t have a formal definition in mind and there are tons of different versions online. Mine comes mostly from experience, and you are free to disagree. To me, a unit test is a piece of software written to test a component, regardless of its dependencies, to make sure that a defined input produces an expected output. Basically, unit tests allow you to validate your software’s behaviour in a way that prevents you, or a potential collaborator, from breaking it later on.
How does Moq work? The premise is that you can mock any interface, which lets you define how your software behaves based on a dependency’s input. This is great in an inversion-of-control context. It also extends to virtual and abstract class methods, so you can write tests defining how a class behaves based on what a method could return. Another cool feature of Moq is the ability to verify that the methods of a mocked interface or class were called with a specific input. That allows you to make sure the method under test calls its dependencies’ methods with the parameters you expect.
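To illustrate both features, here is a minimal sketch. The IGreetingRepository interface and GreetingService class are made up for the example, and I am assuming xUnit as the test framework:

```csharp
using Moq;
using Xunit;

// Hypothetical dependency and service, purely for illustration
public interface IGreetingRepository
{
    string GetGreeting(string name);
}

public class GreetingService
{
    private readonly IGreetingRepository _repository;

    public GreetingService(IGreetingRepository repository)
    {
        _repository = repository;
    }

    public string Greet(string name) => _repository.GetGreeting(name).ToUpper();
}

public class GreetingServiceTests
{
    [Fact]
    public void Greet_ReturnsUppercasedGreeting()
    {
        // Mock the dependency and define how it behaves for a given input
        var mock = new Mock<IGreetingRepository>();
        mock.Setup(r => r.GetGreeting("Jean")).Returns("Hello Jean");

        var service = new GreetingService(mock.Object);

        var result = service.Greet("Jean");

        // Validate the output, then verify the dependency was called
        // exactly once with the parameter we expected
        Assert.Equal("HELLO JEAN", result);
        mock.Verify(r => r.GetGreeting("Jean"), Times.Once);
    }
}
```

The Verify call at the end is the second feature mentioned above: it fails the test if the mocked method was never called, or called with different parameters.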
For more information on Moq, you can check out their documentation on GitHub.
Let’s now move on to AutoFixture, which I have used pretty much since it came out. AutoFixture is a library that generates dummy data on the fly in any context. This thing made my test writing so much faster. It also works great with Moq to quickly write test cases where the input data does not really matter. You can use it to generate data of any type, from string to bool to your own custom classes. One of my main uses for the library is to create data on the fly without thinking too much about it, and use that generated data to validate my tests.
I have not yet reached the limits of what you can do with the tool. However, you need to be careful with types that have a recursive relationship, which you often get when working with Entity Framework. For example, say you have a Chicken class with a property of type Egg, and that Egg class has a property of type Chicken: you will end up with an exception due to the infinite loop this creates. You can avoid that situation by telling AutoFixture which properties not to set when generating your data.
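Here is a minimal sketch of that Chicken-and-Egg situation and one way out of it, using AutoFixture’s recursion behaviours (the classes are made up for the example, and I am assuming xUnit again):

```csharp
using System.Linq;
using AutoFixture;
using Xunit;

// Hypothetical recursive pair of types, as described above
public class Chicken
{
    public string Name { get; set; }
    public Egg Egg { get; set; }
}

public class Egg
{
    public Chicken Parent { get; set; }
}

public class ChickenFixtureTests
{
    [Fact]
    public void Create_OmitsRecursiveProperties()
    {
        var fixture = new Fixture();

        // By default a recursive graph makes AutoFixture throw; swap the
        // throwing behaviour for one that simply omits the recursive property
        fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
            .ForEach(b => fixture.Behaviors.Remove(b));
        fixture.Behaviors.Add(new OmitOnRecursionBehavior());

        var chicken = fixture.Create<Chicken>();

        Assert.NotNull(chicken.Name);    // populated with dummy data
        Assert.Null(chicken.Egg.Parent); // recursion cut off here
    }
}
```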
This one is a bit different from the others mentioned previously. Indeed, you can use Postman to document how your API works. You can use it for monitoring with a paid account, or build your own monitoring using Newman. I wrote a couple of posts about it over the past months, on getting started and on building a simple CI using Appveyor. What I like about Postman is that it is intuitive and straightforward to use, even for non-technical people. Once you get started, you can do some pretty advanced flow-based testing, which is useful in a microservices architecture. In the end, how and where you use Postman is down to you, and I love that flexibility: it lets you make the tool fit your needs and accelerate your development.
Thanks for reading, you can now go and write a bunch of cool software with loads of tests. Or don’t. I’m not your dad and I won’t punish you, but your code will.
Last month, I posted about Postman enabling you to test your APIs with little effort so that you can build future-proof software. Here we are going to cover setting up continuous integration for a simple project, using Newman to run your Postman collections. You may have heard about continuous integration in the past. Most commonly, continuous integration builds software from one’s changes before or after merging them into the main codebase. Even though there are countless tools for implementing continuous integration, I will focus on Appveyor CI. To keep things simple, I will create a very basic web API project and host it on GitHub.
You can create the repository on GitHub by clicking this link: Create a repository on Github. For more details, please follow the documentation they provide on their website.
Broadly, you should see something like this when you create the repository:
Once you’re all set, if you have not done it yet, you need to clone your repository. Personally, I find the command line easier, as a simple “git clone” will do the job.
git clone <your-repository-address>
Command-line execution will look like this.
Now that your repository is all set, we can create the Web API project. For this step, you will need to install Visual Studio, ideally 2017, which you can download here. Once installed, open it and create a new project by selecting “File”, then “New”, then “Project”.
After the project template selection popup appears, select “ASP.NET Web Application”. As for the project path, select the one where you cloned your repository and press ok.
Now you will have to select what kind of web application you want to create. Select “Empty” and make sure that the “Web API” option is enabled like below. Note that selecting “Add unit tests” is not necessary for this tutorial.
Then press “Ok” and wait for the project creation. Once it’s done, your solution explorer should look like this.
Time to add some code. Yeah!
First, right-click on the “Controllers” folder. Now, select “Add” then “Controller”. Pick “Web API 2 Controller – Empty” and press “Add”.
Next, you get to pick the controller name. Here it will be DivisionController.
Now you should have an empty controller looking like this:
From here it’s time to run your project, either by pressing F5 or by opening the menu and selecting “Debug” then “Start Debugging”. After a few seconds, a browser window will open and you will see a 403 error page.
Chill, it’s perfectly normal: no method in our DivisionController is defined yet, and access to your project directory is limited by default. At this point, we can already open Postman and create our first test.
Now, open Postman and create a new tab. Once the tab is created, copy the URL opened by the Visual Studio debugger in Chrome. In my case it’s “http://localhost:53825”, but yours could be different. Paste that URL in your Postman tab like this:
Next, press “Send” and you shall see the Postman version of the result we observed previously in Chrome.
From here, we can start writing tests that will define our API behavior for the default endpoint that does not exist yet. Here you can notice a couple of things that we will want to change. First, we don’t want that ugly HTML message to be displayed by default but something a little more friendly. I guess a “Hello Maths!” message is friendlier, from a certain point of view. Let’s add a test for that.
If you remember the previous article, you know that you are supposed to go to the “Tests” tab in order to add it. In this case, we will pick the “Response body: Is equal to a string” snippet. You should get some code generated as below:
Next, you will update it to replace “response_body_string” with “Hello Maths!”.
Now that the response test is sorted, let’s add a response code test to validate we should not get that 403 HTTP code. For this, we will use the “Status code: Code is 200” test snippet.
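For reference, once the placeholder is replaced, the two snippets in the “Tests” tab look roughly like this (this is Postman’s sandbox scripting API, so it only runs inside Postman):

```javascript
// "Response body: Is equal to a string" snippet, with the
// response_body_string placeholder replaced by our expected greeting
pm.test("Body is correct", function () {
    pm.response.to.have.body("Hello Maths!");
});

// "Status code: Code is 200" snippet, unchanged
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
```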
After sending the request again you can see that both tests failed.
It is now time to write some code to right this wrong. Go back to Visual Studio to modify the DivisionController. We will add an Index method that will return the message we want to see.
public HttpResponseMessage Index()
{
    var response = Request.CreateResponse(HttpStatusCode.OK);
    response.Content = new StringContent("Hello Maths!", Encoding.UTF8, "text/plain");
    return response;
}
This code creates a new response object with the OK (200) status code we want to get. In this object, we add a StringContent object that contains our “Hello Maths!” message. Let’s run the Visual Studio solution again by pressing “F5”.
As you can see, the horrible HTML error page has gone now and we see the “Hello Maths!” greeting. Now, if you run that same request in Postman you will see that now our tests pass.
Now save the request in a new collection that we will call “CalculatingWebApiAppveyor” as below.
You should see in the right tab the newly created collection along with the request we just saved.
If you got this far, you’ve done great already, although our API doesn’t do much yet. It’s time to make it useful. From here, we will add a Divide action that takes a dividend and a divisor as parameters and returns the quotient. You can copy the code below and add it to your controller.
[Route("divisions/dividends/{dividend}/divisors/{divisor}/_result")]
public IHttpActionResult Divide(int dividend, int divisor)
{
    return Ok(dividend / divisor);
}
You may notice that this code looks simpler than the one for “Hello Maths!”. We could indeed have simply returned Ok(“Hello Maths!”) there, but that would have returned “Hello Maths!” with the quotes, and our test would not have passed. Now, let’s run the project again and add a test for the division endpoint in Postman.
What we want to do is make sure that our division endpoint actually returns the result of a division. What we will test here is that 10 divided by 2 does give 5. From there, you know that the route to be tested will be “divisions/dividends/10/divisors/2/_result”. Now, create a new tab in Postman and copy the URL from your greetings endpoint. Then, append the route to be tested as below.
Next, we are going to use the “Response body: Is equal to string” snippet to validate that 10 divided by 2 should return 5. Also, we will add a status check just because.
If you followed all the steps correctly, you should see both tests pass and the response is indeed 5.
Now, save that last request as “Validate division works” in the CalculatingWebApiAppveyor collection you created.
Finally, you can run your whole collection and you will see all the tests pass green.
Congratulations! You have a fully functional API, as long as divisors are different from zero, with its own Postman collection: a collection you can run whenever you like to make sure your API is fine. The one issue, though, is that you may not be working alone, nor want to run Postman every time you push a change to GitHub.
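If you ever wanted to close that zero-divisor gap, a small guard in the action would do. This is a hypothetical variant, not part of the tutorial:

```csharp
public IHttpActionResult Divide(int dividend, int divisor)
{
    // Hypothetical guard: answer with a 400 instead of letting a
    // DivideByZeroException bubble up as a 500 error
    if (divisor == 0)
    {
        return BadRequest("Divisor must be different from zero.");
    }

    return Ok(dividend / divisor);
}
```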
There is a way to solve this issue and that’s where Appveyor comes into play. But first, let’s commit and push our changes.
If you haven’t done it yet, it’s time to commit your changes and push them to your Github repository. First, create a new file named .gitignore. More information about what that file does here.
I personally used the PowerShell New-Item command, but there are countless ways to do that.
Then, open this .gitignore file, which is the default one to use for Visual Studio projects, and copy its contents into the file you created.
Now you can commit, push your changes and eventually move on to Appveyor thanks to a few commands. Note that you must run these commands from the directory where your solution and .gitignore are.
# This line makes git aware of the files you want to commit
git add .
# This line generates a commit containing your changes
git commit -am "Project ready for CI"
# This line pushes your changes to GitHub
git push
Once these commands have executed, you should see your solution with the created files on GitHub.
This is probably the simplest part of this tutorial. Simply go to the Appveyor login page, yes login. From here you can log in with a variety of source control related accounts but pick GitHub.
Once logged in you should land on an empty projects dashboard.
Simply press “New Project” and you will be prompted with a list of repositories you have on your GitHub account.
Select “CalculatingWebApiAppveyor” and press “Add”. After a few seconds, you should see this:
To see how it works, press “New build”. What happens next is that Appveyor downloads your source code from GitHub, then compiles it, and runs any unit tests found in your solution. But for now, you will see something like this:
Are you surprised? Are you entertained? Because I am. Don’t panic, it’s a benign error caused by the fact that Appveyor does not restore a project’s NuGet packages by default. To get rid of that error, go to the settings tab, then to “Build”.
Scroll down until you see the “Before build script” option and enable it by selecting “PS”. A text box should appear for you to input nuget restore, like below:
Now, press the “Save” button below and go back to your build dashboard and press “New build” again. If everything goes according to plan you should end up with this:
Congratulations again! You now know how to set up a .NET project on Appveyor.
This is more or less where I would have stopped had I gone with my original decision of making this tutorial a two-parter. Since it would not make much sense to stop here considering what’s left, let’s move on to our Postman collection again.
Now that our project, collection, and continuous integration tools are set up, it is time to put our collection to better use: an automated use. To do so, we will update our collection so that it can run both locally and on Appveyor. To achieve that, we will extract the host URLs from our requests and place them in environment files, one to use locally, the other on Appveyor.
First, we will create our localhost and Appveyor environments. I will name mine CalculatingWebApiLocalhost and CalculatingWebApiAppveyor. If you don’t remember how to create environments and modify collections to use their variables, I happen to have written a post about it. At a minimum, you need the request host extracted into a variable.
Your localhost environment should contain the URL you have used so far; your Appveyor one will be “http://localhost”. Once done, you should have two environments, each looking like this:
Now your environments are ready, update your collection requests as below.
From here, you can open the collection runner to make sure your collection still works and tests still pass.
It’s time to introduce you to Postman’s export feature, because you will now need to move your collection and Appveyor environment into your project. First, let’s export the collection: click on your collection’s menu button.
After pressing “Export”, you should see this:
Make sure that “Collection v2” is selected then press “Export” again. Now, save the collection in your solution folder.
Next, we will export the Appveyor environment. Go to the “Manage environments” menu, then click on the “Download environment” icon for CalculatingWebApiAppveyor.
Then, save your environment to your solution folder.
Last step, but not least: commit and push your changes. Here is a reminder:
# This line makes git aware of the files you want to commit
git add .
# This line generates a commit containing your changes
git commit -am "Add Postman collection and environment"
# This line pushes your changes to GitHub
git push
Now our repository is all set! Let’s get back to Appveyor.
First, go to the Tests tab:
Then, enter these lines after selecting “PS” on the “After tests script” textbox:
npm install -s -g --unicode=false newman
newman run --disable-unicode .\CalculatingWebApiAppveyor.postman_collection.json -e .\CalculatingWebApiAppveyor.postman_environment.json
The first line installs Newman on your Appveyor container, suppresses dependency warnings, and adapts the output to Appveyor’s console. The second runs your collection using the environment you created, again adapting the output to Appveyor. If you used different filenames for your collection and environment, update the commands to match them. You should have something like this:
Now, go back to the “Latest build” tab and click on “New build”.
After a few moments, you will see that your build will fail.
Here you can see that Newman actually tells you what went wrong: all your tests failed, and there was a connection error for each of your collection requests. If your build fails for different reasons, you may want to go a few steps back and try again. But if your failed build looks like the capture above, you’re good to go.
Yes, we are very close to finishing our Postman-based continuous integration setup. Now, we need to tell Appveyor to package our solution and deploy it locally so that we can run our collection against it.
First, we will enable IIS locally. IIS is a service that can run any kind of .NET web app or API, though it is not limited to those. To enable IIS, go to the “Environment” settings tab, then click on “Add service” and select “Internet Information Services (IIS)”.
After saving your changes, go to the “Build” tab, enable the “Package Web Applications for Web Deploy” option and save again.
That option will generate a zip package that will have the same name as your Appveyor project. What we need to do next is to configure Appveyor to deploy that package on the local IIS. In order to do so, we will go to the “Deployment” tab.
Click on “Add deployment” and select “Local Build Server”. Afterward, we will need to add some settings to tell Appveyor where and how to deploy. To do so, press “Add setting” three times then fill each setting to match these values:
Now, you should see something like this:
Remember the PowerShell script we added in the “Tests” section of the settings? We will need to move it to the “After deployment script” instead. If we don’t, the build will always fail, since it will try to run our integration tests before our application is deployed locally. I will put the script here again in case you don’t feel like scrolling up.
npm install -s -g --unicode=false newman
newman run --disable-unicode .\CalculatingWebApiAppveyor.postman_collection.json -e .\CalculatingWebApiAppveyor.postman_environment.json
If you followed everything your “Deployment” settings tab should look like this:
Don’t forget to save your changes and to update your “Tests” tab. Your “Tests” settings tab should now look like this again:
After saving it, go back to “Latest build” and press “New Build”. Then, you will see that everything simply works.
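As an aside, most of these UI settings can also live in an appveyor.yml file at the root of the repository. Here is a rough, untested sketch of what the final configuration might look like; the “Local Build Server” deployment settings are left out and can stay in the UI:

```yaml
# Untested sketch of an appveyor.yml mirroring the UI settings above.
version: 1.0.{build}

services:
  - iis                   # enable IIS on the build worker

before_build:
  - ps: nuget restore     # restore NuGet packages before compiling

build:
  publish_wap: true       # "Package Web Applications for Web Deploy"

after_deploy:
  - ps: npm install -s -g --unicode=false newman
  - ps: newman run --disable-unicode .\CalculatingWebApiAppveyor.postman_collection.json -e .\CalculatingWebApiAppveyor.postman_environment.json
```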
Now that you know how to set up Newman-powered API tests on Appveyor using GitHub, you can chill and call it a day. Or you can show off your mastery of CI by adding your project badge to your README file.
Note that Appveyor deploys only when you push commits to your repository, whether it is a direct push or a pull request being merged. Nevertheless, if you have a private Appveyor account, you can enable an option allowing local deployment so that your API tests run even on pull requests.
Thanks for reading, I hope you enjoyed reading this as much as I enjoyed writing it. I would also like to shout out a big thanks to Postman Labs for featuring my previous post in their favourites of March, that was a really nice surprise.
Good luck helping to make this world fuller of future-proof software every day!
NB: If you don’t feel like creating the Web API project and scrolled straight to the end of the post to get the sources, help yourself.