I am Nguele

My name is Jean-Dominique Nguele and this is my blog. FLVCTVAT NEC MERGITVR

Category Archives: Tutorials


Continuous delivery for free using Docker, CircleCI and Heroku

Reading Time: 10 minutes

Continuous what?

Continuous delivery. You may recall that in my previous post I announced that today’s entry would revolve around continuous integration. Technically it still counts as such, since we will cover continuous integration along with the next step: continuous delivery. If you are not familiar with these terms and the concepts behind them, I will sum them up briefly.

Basically, continuous integration lets you verify that your codebase still builds and that its tests still pass whenever you push changes. Add a trigger to deploy your code to production upon success and you pretty much have the idea behind continuous delivery.

These practices mainly help to make sure that you don’t break your codebase when pushing changes. This is good when you work alone but a lifesaver when working in a team. You cannot imagine how many hours I wasted, mostly during my studying years, because of code breaking without us realizing it for days. Using source control was already a miracle in itself at a time when there were limited options for continuous integration, especially for students. If you want more details about source control workflows, the GitHub Flow is a great place to start.

Let’s just jump into it!

Back to today’s topic, continuous delivery. Before I start inundating you with scripts and screen captures, you need to be familiar with a few things:

Retrieving the code

Since you read my previous tutorial, you should know more or less what the code does. It is the classic Values API sample returning an array with two values “value1” and “value2”. From there, the easiest step is to fork the repository created from that previous post which you can reach by clicking here.

fork that repo

Fork that repo, yep that one. Just do it!

Once the fork is completed you will have an exact copy of my repository where you can push changes for the rest of the tutorial. If you have not done so yet, clone your fork to your machine for the next stage.

Getting acquainted with Docker

Docker is going to be key for today’s tutorial. Why? I hear you ask. Because CircleCI does not support C# for continuous integration. Neither does Heroku for deployment, at least not officially but we’ll get back to that later. But do you know what is supported by both that we can use? Docker container images.

Basically, consider a Docker container image as a box where you put everything your software needs to run properly, from code to settings to system tools and libraries. A containerized piece of software will always run the same way regardless of the environment; it is completely isolated from its surroundings. The cool thing about this? Well, it works in any environment, whether you run it on Windows, Mac or Linux, as long as your computer supports VT-d virtualization, which should be the case if your device is no more than a couple of years old. You can then make sure your container behaves as you expect locally before deploying. Also note that if you cannot run Docker on your local machine, you can still commit the Docker files and it will work on CircleCI.

First things first, you will need to install Docker Community Edition, which is free and available at this link. The installation is pretty straightforward, so nothing special to mention here. If Docker is not supported on your machine, you will get a message when trying to install it on Windows. The same should happen if you try to run it on Mac. If that is the case, don’t worry, you can still go through the tutorial and won’t be missing that much.

Making our tests ready to work on CircleCI

As mentioned previously, if we try to build our API straight away on CircleCI it will fail. Not because the code does not work but because C# is not supported. In order to get our tests running, we will have them run in a containerized way. We don’t need to create a container image yet, only to use an existing image that supports running them.

The first thing to do is to create a Docker Compose file that will pull an image supporting .NET Core 2 applications and run our tests inside of that image. You will now copy a file definition that does exactly that when used with the docker-compose command. You need to create a file named docker-compose.unittests.yml at the root of your repository. Once it’s done, copy into it the contents of the gist below:
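The gist itself is not reproduced here; a minimal sketch of such a file, assuming a service named unit-tests and the microsoft/aspnetcore-build:2.0 image, could look like this:

    version: '3'
    services:
      unit-tests:
        # Image shipping the .NET Core 2 SDK so we can restore, build and test
        image: microsoft/aspnetcore-build:2.0
        volumes:
          # Mount the repository inside the container
          - .:/app
        working_dir: /app
        # Delegate the restore/test commands to the script created below
        entrypoint: /bin/bash ./docker-run-unittests.sh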

Now we need to write the script that will allow our continuous integration tool to restore the solution within the container image. After that, the tests will be run. Here is the script to copy into a file named docker-run-unittests.sh, still at the root of your solution:
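Again, the original gist is not shown; a sketch along these lines would do the job (the test project path is an assumption, the set line is the one discussed right after):

    #!/bin/bash
    # Stop immediately and report failure if any command errors out
    set -eu -o pipefail

    # Restore NuGet packages for the solution
    dotnet restore

    # Run the unit tests project
    dotnet test ./DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj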

You may notice a line that is unusual to most people: the command set -eu -o pipefail. A short and simple explanation is that it halts and makes the build process fail if an error occurs. If your build does not compile or tests fail, that command makes the docker-compose command fail, which triggers an error and lets your CI system know the build failed.

Now that we have our tests ready to run within a container we will run them locally to make sure we’re all set. In order to do so, you will need to run the following command with your favourite terminal. This assumes that you are in your solution folder and that you can run Docker commands on your machine.
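Assuming the service is named unit-tests as in the sketch above, the command would be along the lines of:

    # docker-compose run propagates the script's exit code, which is what CI relies on
    docker-compose -f docker-compose.unittests.yml run --rm unit-tests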

Running that command will give you an output similar to this:

Docker test run result

We are now able to run tests on any environment supporting Docker. Let’s now set up our continuous integration tool.

Continuous integration

Configuring for CircleCI

Now that we have all the Docker configuration ready to run tests, we can configure our project to have our continuous integration on CircleCI. The first thing to do here is to create a .circleci folder in your solution folder. Then, you will create a config.yml file inside of it so that its relative path to your solution is .circleci/config.yml. Into that file you will copy these contents:
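The original configuration came from a gist; a minimal CircleCI 2.0 sketch matching the setup above (the machine executor choice is my assumption) would be:

    version: 2
    jobs:
      build:
        # A VM with Docker and docker-compose preinstalled
        machine: true
        steps:
          - checkout
          # Run the containerized unit tests defined earlier
          - run:
              name: Run unit tests in Docker
              command: docker-compose -f docker-compose.unittests.yml run --rm unit-tests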

Commit and push your changes, then move onto the next section.

Setting up CircleCI

CircleCI is a platform used for continuous integration and continuous delivery. I picked it for today’s post because it’s free and can be good if you just want to play around. Also, it can be great if you are creating a new business and want to keep the costs low before scaling up.

The first step here is to create an account. You can reach their signup page by clicking here. Once there you should see this screen:

Now you need to press “Sign Up with GitHub” to create your CircleCI account. This will land you on a page where GitHub will ask you if you want to grant CircleCI various permissions. As you will see below it will require your email address(es) and repository access rights.

Press “Authorize circleci” to move onto the next step. Now you will see a welcome screen as below.

If you noticed the arrow and the red circle, you know where to click next. If not, press “Add projects”. You will see the forked repository name appear. Next to it, you will notice a “Setup project” link; press it.

No red mark to show where to click this time

Now that you have pressed the right link, you should see the project setup screen. You can leave the operating system as Linux and select “Other” as the language.

Once you’ve done that, a feedback box will appear asking what language you intend to use. I suppose it is to prioritise what they should add next to their roadmap. Don’t feel obliged to put C#, as it might make the unit testing part of this post obsolete. Not that I would mind much, because then I could update this post to avoid the build & test magic you were introduced to previously.

Next, scrolling down you should see a set of instructions to get the build to run but we already took care of that.

Now you can press “Start building” which will send you to your first build screen. Your build might be queuing for a few seconds before starting as below:

Your first Docker powered CircleCI build

In our case, there is not much going on apart from the test run so after up to a couple minutes you should get your successful build.

Successful build circleci

Build passing CI on the first try? A win in my book

Now that our CI tool is ready to build and validate our software, it’s time to prepare for deployment.

Time to deploy that API

Deployment over 9000 with Heroku

Heroku is a platform allowing developers to deploy, manage and scale web apps. They support most of the modern technologies and languages such as Node.js, Java, Go and many more. However, they do not officially support .NET Core even though they allow for extensions from Github (or buildpacks) to have some sort of support. But today we are not going to do that.

The first thing you will need to do now is create an account. You can do so by clicking here. Once your account is created, you will see a screen prompting you to create a new app.

heroku create app screen

Easy

Now you can press “Create New App”, and you will be asked to pick a name and region. For this tutorial, the region does not matter and you can pick any name you like.

heroku create app

From here, press “Create app” to create your app and access its dashboard.

heroku new app dashboard

Now that the app is ready to receive our API deployment, you need to get your Heroku API key so that we can deploy our code to Heroku from CircleCI. In order to do so, you will have to access your Heroku settings. To get there, click on your profile icon (top right of the screen), you should see this menu pop up.

Heroku profile menu

Next, click “Account settings”. Once on the settings page scroll down until you see this:

heroku api key

Finally, press “Reveal” to display your API key and save it somewhere close, as we will use it soon.

Creating our own docker image to run the API

Here we are: the time to create our own (maybe your first) Docker image. The first step is to create our Dockerfile in the project folder.
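The Dockerfile from the post is not embedded here; a sketch in the same spirit, assuming it lives in the DotNetCoreSampleApi project folder and uses the same SDK image as before, could be:

    # SDK image able to restore, build and run .NET Core 2 apps
    FROM microsoft/aspnetcore-build:2.0
    WORKDIR /app

    # Restore the project and publish a release build
    COPY . .
    RUN dotnet restore
    RUN dotnet publish -c Release -o out

    # Heroku provides the port to listen on through the PORT environment variable
    CMD ASPNETCORE_URLS=http://*:$PORT dotnet out/DotNetCoreSampleApi.dll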

Our Dockerfile is pretty standard here: it sets up an environment that can build and run .NET Core apps, then restores our project and publishes it locally, to eventually run it using the port number passed at runtime.

Now that our Dockerfile is ready to go, we will add a .dockerignore file that is a list of files/folders we want Docker to ignore. In our case, we want to make our build context as small as possible so we will ignore binaries as you can see below:
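For this project, ignoring the build outputs is enough:

    bin/
    obj/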

Once the file is created, and if you can run Docker locally, you may run the following commands to make sure your setup is valid:
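For instance, from the solution folder (the image tag is only an example):

    # Build the image using the project folder as the build context
    docker build -t dotnetcoresampleapi ./DotNetCoreSampleApi

    # Run it locally, passing a port the same way Heroku will at runtime
    docker run --rm -e PORT=5000 -p 5000:5000 dotnetcoresampleapi

Browsing http://localhost:5000/api/values should then return the two familiar values.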

Yet again, if you cannot run Docker locally, you will see the results on CircleCI later.

Updating the CircleCI config to set up our continuous delivery

We are almost there! It is time to put the delivery in continuous delivery. Now that we have our Docker image configuration ready, we can finalize our CircleCI configuration. Before editing our configuration file, we will need to add our Heroku credentials to the project environment variables. In order to do so, go back to your dashboard. From there, press your build’s settings button, it should look like this:

Then, click “Environment Variables” and add the email address you registered with on Heroku as HEROKU_USERNAME. Afterwards, add your Heroku API key as HEROKU_API_KEY. Finally, add your Heroku app name as HEROKU_APP_NAME.

After adding the variables, we can now update our CircleCI configuration file with the deployment steps.
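The final configuration is in the gist; a sketch of what it amounts to, reusing the environment variables we just defined, might look like this:

    version: 2
    jobs:
      build:
        machine: true
        steps:
          - checkout
          # Same containerized test run as before
          - run:
              name: Run unit tests in Docker
              command: docker-compose -f docker-compose.unittests.yml run --rm unit-tests
          # Build the API image, log in to Heroku's container registry and push
          - run:
              name: Deploy to Heroku
              command: |
                docker build -t registry.heroku.com/$HEROKU_APP_NAME/web ./DotNetCoreSampleApi
                docker login --username=$HEROKU_USERNAME --password=$HEROKU_API_KEY registry.heroku.com
                docker push registry.heroku.com/$HEROKU_APP_NAME/web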

Basically, what we do in that file is build our Docker image, authenticate to Heroku and eventually push our image to Heroku’s container registry. Now it is time to commit and push our changes for the last time. If you go back to CircleCI, you should see that your build was successful.

continuous delivery all green

Continuous delivery in action, simply beautiful

Now, if you go to your Heroku app using https://<your-app-name>.herokuapp.com/api/values, you will see the following result.
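That is, the familiar Values API payload:

    ["value1","value2"]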

Continuous delivery ✓✓✓

Congratulations! You are now smarter than 30 minutes ago! Not only do you know how to set up continuous delivery using CircleCI and Heroku, but you can also build a Docker container image. If you missed anything, don’t hesitate to check the source code there.

What your solution folder structure should look like now.

Note that for the sake of brevity, I chose to put all the commands in the CircleCI build job. Also, I did not put any condition on which branch gets deployed, which is a check that you should always have to avoid publishing a test build to production. In the case of continuous delivery, pushing code to the dev branch should trigger a deployment to the development environment, pushing code to master should trigger a deployment to production, and so on. You can figure out how to do this using condition-based instructions and the deployment job here.
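For instance, keeping everything in the build job as in this post, one hedged way to guard the push is a check on CircleCI’s built-in CIRCLE_BRANCH variable:

    # Fragment of the deploy step, wrapping the push in a branch check
    - run:
        name: Deploy to Heroku (master only)
        command: |
          if [ "$CIRCLE_BRANCH" = "master" ]; then
            docker push registry.heroku.com/$HEROKU_APP_NAME/web
          else
            echo "Skipping deployment for branch $CIRCLE_BRANCH"
          fi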

Based on your feedback I may write a quick guide on setting up CI for multiple environments using this post as a basis. Since I have a few other things in the pipeline for the next few months, it might not happen for a while.

Thanks again for reading; if it was any use to you, don’t hesitate to share and subscribe to get more of these. The next future-proof entry should be about what you can do to keep your continuous delivery from turning into this:

.NET Core CLI Tools: Build a web API in 10 minutes

Reading Time: 7 minutes

This tutorial is an introduction to .NET Core CLI tools. More precisely, it is about creating a web API using the CLI tools provided for .NET Core. Whether you are a beginner in development or just new to .NET Core, this tutorial is for you. However, you need to be familiar with what an API is and what unit tests are to fully enjoy this tutorial. Today, we will set up a solution grouping an API project and a test project.

For the next steps, you will need to install .NET Core and Visual Studio Code (referred to as VSCode later for the sake of brevity), which are supported on Mac, Unix and Windows. If you want to know how that multi-platform framework works, have a look here.

Creating the solution

First things first, we will open a terminal (or PowerShell for Windows users). Once this is done we can create our solution, which I will name DotNetCoreSampleApi, as follows:
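The command is not embedded in this version of the post; it was most likely something like this (the exact flags are an assumption):

    dotnet new sln -n DotNetCoreSampleApi -o DotNetCoreSampleApi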

This command will create a new folder DotNetCoreSampleApi and a solution file with the surprising name DotNetCoreSampleApi.sln. Next, we will enter that folder.

Creating and running the sample web API

Now that the solution is here, we can create our API project. Because I am not the most creative mind I will also name it DotNetCoreSampleApi. Here is the command to create the project.
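Most likely something along these lines, run from inside the solution folder:

    dotnet new webapi -n DotNetCoreSampleApi -o DotNetCoreSampleApi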

That command will create a subfolder named DotNetCoreSampleApi inside your solution folder DotNetCoreSampleApi. If you followed all the steps, your solution root should contain the solution file DotNetCoreSampleApi.sln and the web API folder DotNetCoreSampleApi. The web API folder should contain a few files, but the one we need now is DotNetCoreSampleApi.csproj. We will add a reference to it in our solution. To do so, run the following command:
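That is:

    dotnet sln add DotNetCoreSampleApi/DotNetCoreSampleApi.csproj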

After getting a confirmation message we can now start the API by running that command:
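For example, still from the solution folder:

    dotnet run --project DotNetCoreSampleApi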

After a few seconds, it should display a message notifying you that the API is now running locally. You may access it at http://localhost:5000/api/values which is the Values API default endpoint.

Adding the test project to the solution

You may be aching to see some code by now but unfortunately, you will have to wait a bit more. Back in the days of .NET Framework, there was no such thing as generating projects from the command line. You had to use cumbersome windows to pick what you needed to create. Now all of this project generation can be done from the command line thanks to the CLI tools, and you will like it. That is merely a suggestion, of course. Back to the terminal. If the API is still running, you may kill it by pressing Ctrl+C in the window you opened it in.

We are now able to create a test project and add it to the solution. First, let’s create the test project using dotnet new as follows:
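Most likely:

    dotnet new mstest -n DotNetCoreSampleApi.Tests -o DotNetCoreSampleApi.Tests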

That command creates a new unit test project using MSTest in a new folder named DotNetCoreSampleApi.Tests. Note that if you are more of an xUnit person, you can replace mstest in the command with xunit, which will create an xUnit test project. Now, similarly to what we did for our web API project, we will add our test project to the solution:
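    dotnet sln add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj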

Almost instantly you should have a confirmation that the project was added.

Getting acquainted with VSCode

Now, open VSCode and open the folder containing the file DotNetCoreSampleApi.sln. At this point you should have this structure in the folder:

If you have never used VSCode before, or at least not for C# development, you will be prompted to install the C# extension:

Select “Show Recommendations” and apply what VSCode suggests. Then, once you finished installing the C# extension you will get a warning about adding missing assets to build and debug the project, select “Yes”.

Don’t hesitate to go back a few steps or even to restart this tutorial if something does not seem to work as expected. Here is how your test folder should look by now:

Time to write our test

And finally, we are getting to the fun code-writing part. The part where we put aside our dear CLI tools. By code writing I mean copy/pasting the code I will show you later. And by fun, I mean code that compiles. There is nothing more frustrating than code that does not compile, especially when you have no idea why. Fortunately, this will not happen here.

Now that you have your code editor ready to use, you can go ahead and delete the UnitTest1.cs file. Once done, you will create a new file named ValuesControllerTests.cs in your test project. Then your VSCode window should look more or less like this:

Using VSCode the file should be empty, but in case it is not, delete its contents to match the screenshot above. As soon as you get your nice and empty file copy the code below into it:
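The gist is not embedded here; a minimal MSTest class in that spirit could look like the following. The class and method names are my own, the “value2 is not returned” message is the one referenced later in this post, and it assumes the template’s ValuesController.Get() returns an IEnumerable<string>:

    using System.Linq;
    using DotNetCoreSampleApi.Controllers;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace DotNetCoreSampleApi.Tests
    {
        [TestClass]
        public class ValuesControllerTests
        {
            [TestMethod]
            public void GetReturnsValue1AndValue2()
            {
                // Call the API controller directly, no HTTP involved
                var values = new ValuesController().Get().ToList();

                Assert.IsTrue(values.Contains("value1"), "value1 is not returned");
                Assert.IsTrue(values.Contains("value2"), "value2 is not returned");
            }
        }
    }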

Now you should get some warnings, which is perfectly fine because they should be here. If you hover over these you will see some referencing related error messages like below:

These appear because we did not reference the API project in our test project yet. It is time to open your terminal again. However, if you feel like having a bit of an adventure, you can try VSCode’s terminal, which will open in your solution folder. In order to do so, you can press Ctrl+' while in VSCode to open it, or Ctrl+` if you’re using a Mac; either will probably work on Unix.

Once the terminal is open, we will reference our API project in the test one with this command:
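    dotnet add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj reference DotNetCoreSampleApi/DotNetCoreSampleApi.csproj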

If you don’t see the full command above, you can still copy it using the copy button present when hovering.

Now that the reference to the API project is here, the referencing warnings concerning it should be gone. However, a new one might appear about the Get call, as below. I am not quite sure why it happens, but it seems to be a bug where VSCode does not see that this reference comes in through the API project. You should not worry about it though, because if you build the solution or run the tests it will work.

Understanding and running our test

Now we get to the crispy part, the one we need before getting any further. The part we can use as the basis before delving into more advanced stuff like continuous integration or continuous deployment: running a test that validates our logic. If you have a look at the ValuesController.cs file inside our API project, you will see that the Get() method returns an array of strings. This array contains the values “value1” and “value2”. The test class you copied earlier contains a method that verifies that both “value1” and “value2” are returned by this Get().

So, back to the ValuesControllerTests.cs file. You may have noticed some links appearing on top of our test method like this:

You can ignore the “0 references” and “debug test” links for now. Press “run test” to execute our test. Actually, it will first build our API project to have the latest version of it before linking it to our test binary. After running the test, you should see something like this:

And unsurprisingly our test is passing. Now let’s see what happens if we remove “value2” from the array returned by ValuesController.Get() and run the test again.

Running the test again:

As you can see, this time it failed. In order to have it pass, you may now undo your changes in ValuesController.cs.

 

A little more of .NET Core CLI tools

It’s nice to know that one of your tests failed; however, you know what is better? Knowing which test actually broke and why. Therefore, this is the perfect time to bring up the .NET Core CLI tools again. You can run our tests using the .NET Core CLI tools with this command:
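    dotnet test DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj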

Which will actually provide you with some more details on what broke:

.NET Core CLI tools magic with tests

.NET Core CLI tools magic

As you can see you get the message “value2 is not returned” that we defined in our test file. Here is a little callback for you:

I won’t say that you are now a fully fledged .NET Core developer but it’s a good start. You just created your (maybe) first API and test projects. Moreover, the test actually validates some of the API controller logic. So you know, congrats on that. However, if for one reason or another something did not go according to plan, feel free to check the source code here.

I hope you enjoyed this new entry of my future-proof series and I will see you next time. You should look forward to it, as I will cover how to set up continuous integration for such a project. It should be different from that other post from last year using Appveyor.

And remember, if you ever need anything from the CLI tools:

dotnet new everything

Just dotnet new it!

Simple continuous integration with Appveyor and Newman

Reading Time: 13 minutes

Last month, I posted about Postman enabling you to test your APIs with little effort so that you can build future-proof software. Here we are going to cover setting up continuous integration for a simple project by using Newman to run your Postman collections. You may have heard about continuous integration in the past. Most commonly, continuous integration will build software from one’s changes before or after merging them into the main codebase. Even though there are countless tools that allow implementing continuous integration, I will focus on Appveyor CI. In order to keep things simple, I will create a very basic web API project and host it on GitHub.

Create GitHub repository

You can create the repository on GitHub by clicking this link: Create a repository on Github. For more details, please follow the documentation they provide on their website.

In broad strokes, you should see something like this when you create the repository:

Create a repository on GitHub

Once you’re all set, if you have not done it yet, you need to clone your repository. Personally, the command line feels easier, as a simple “git clone” will do the job.

Command-line execution will look like this.
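For instance (the repository name is an assumption based on the project name used later in this post):

    git clone https://github.com/<your-username>/CalculatingWebApiAppveyor.git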

Create Web API project

Project setup

Now that your repository is all set, we can actually create the Web API project. For this step, you will need to install Visual Studio, ideally 2017 that you can download here. Once installed, open it and create a new project by selecting “File”, then “New” then “Project”.

After the project template selection popup appears, select “ASP.NET Web Application”. As for the project path, select the one where you cloned your repository and press ok.

Now you will have to select what kind of web application you want to create. Select “Empty” and make sure that the “Web API”  option is enabled like below. Note that selecting “Add unit tests” is not necessary for this tutorial.

Then press “Ok” and wait for the project creation. Once it’s done, your solution explorer should look like this.

Time to add some code. Yeah!

Add your Controller

First, right-click on the “Controllers” folder. Now, select “Add” then “Controller”. Pick “Web API 2 Controller – Empty” and press “Add”.

Next, you get to pick the controller name. Here it will be DivisionController.

Now you should have an empty controller looking like this:

The first project run

From here it’s time to run your project, either by pressing F5 or by opening the menu and selecting “Debug” then “Start Debugging”. After a few seconds, a browser window will open and you will see a 403 error page.

Chill, it’s perfectly normal as no method in our DivisionController is defined and access to your project directory is limited by default. At this point, we can already open Postman and create our first test.

It’s Postman time!

The first test

Now, open Postman and create a new tab. Once the tab is created, copy the URL that the Visual Studio debugger opened in Chrome. In my case, it’s “http://localhost:53825” but yours could be different. Paste that URL in your Postman tab like this:

Next, press “Send” and you shall see the Postman version of the result we observed previously in Chrome.

From here, we can start writing tests that will define our API behavior for the default endpoint that does not exist yet. Here you can notice a couple of things that we will want to change. First, we don’t want that ugly HTML message to be displayed by default but something a little more friendly. I guess a “Hello Maths!” message is friendlier, from a certain point of view. Let’s add a test for that.

If you remember the previous article, you know that you are supposed to go to the Tests tab in order to add it. In this case, we will pick the “Response body: Is equal to a string” snippet. You should get some code generated as below:

Next, you will update it to replace “response_body_string” with “Hello Maths!”.

Now that the response test is sorted, let’s add a response code test to validate we should not get that 403 HTTP code. For this, we will use the “Status code: Code is 200” test snippet.
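Once edited, the two snippets boil down to this (legacy Postman test syntax):

    // "Response body: Is equal to a string" snippet, with our greeting
    tests["Body is correct"] = responseBody === "Hello Maths!";

    // "Status code: Code is 200" snippet, unchanged
    tests["Status code is 200"] = responseCode.code === 200;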

After sending the request again you can see that both tests failed.

Fix the API to make the tests pass

It is now time to write some code to right this wrong. Go back to Visual Studio to modify the DivisionController. We will add an Index method that will return the message we want to see.
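The post’s exact code is not embedded here; a sketch in that spirit, assuming attribute routing is enabled (the Web API 2 template calls config.MapHttpAttributeRoutes() by default) and the usual System.Net, System.Net.Http and System.Web.Http usings:

    public class DivisionController : ApiController
    {
        // Routes the application root ("/") to this action
        [HttpGet]
        [Route("")]
        public HttpResponseMessage Index()
        {
            // 200 OK response carrying the greeting as plain string content
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("Hello Maths!")
            };
        }
    }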

This code basically creates a new response object with the OK (200) status code that we want to get. In this object, we add a StringContent object that contains our “Hello Maths!” message. Let’s run the Visual Studio solution again by pressing “F5”.

As you can see, the horrible HTML error page has gone now and we see the “Hello Maths!” greeting. Now, if you run that same request in Postman you will see that now our tests pass.

Now save the request in a new collection that we will call “CalculatingWebApiAppveyor” as below.

You should see in the right tab the newly created collection along with the request we just saved.

Implement the division

If you got this far, you’ve done great already, even though our API doesn’t do much yet. It’s time to make it useful. From here, we will add a Divide action that takes a dividend and a divisor as parameters and returns the quotient. You can copy the code below and add it to your controller.
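Again a sketch rather than the author’s exact code; the route matches the one tested just below:

    // GET divisions/dividends/10/divisors/2/_result returns 5
    [HttpGet]
    [Route("divisions/dividends/{dividend}/divisors/{divisor}/_result")]
    public IHttpActionResult Divide(int dividend, int divisor)
    {
        // Ok() serializes the quotient, so the body is simply "5" for 10 and 2
        return Ok(dividend / divisor);
    }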

You may notice that the code looks simpler than for “Hello Maths!”. Actually, we could have simply used return Ok("Hello Maths!"). However, this would have returned “Hello Maths!” with the quotes, for which our test would not have passed. Now, let’s run the project again and add a test for that division endpoint in Postman.

Test the Division

What we want to do is to make sure that our division endpoint actually returns the result of a division. What we will test here is that for 10 divided by 2 we do get 5. From there, you know that the route to be tested will be “divisions/dividends/10/divisors/2/_result”.  Now, create a new tab in Postman and copy the URL from your greetings endpoint. Then, append the route to be tested as below.

Next, we are going to use the “Response body: Is equal to string” snippet to validate that 10 divided by 2 should return 5. Also, we will add a status check just because.

If you followed all the steps correctly you should see both tests passed and the response is indeed 5.

Now, save that last request as “Validate division works” in the CalculatingWebApiAppveyor collection you created.

Finally, you can run your whole collection and you will see all the tests pass green.

Congratulations! You have a fully functional API (as long as divisors are different from zero) with its own Postman collection. A collection that you can run whenever you like to make sure your API is fine. The one issue, though, is that you may not be working alone, nor want to run Postman whenever you push a change to GitHub.

There is a way to solve this issue and that’s where Appveyor comes into play. But first, let’s commit and push our changes.

Commit and push your code changes

If you haven’t done it yet, it’s time to commit your changes and push them to your Github repository. First, create a new file named .gitignore. More information about what that file does here.

I personally used the PowerShell New-Item command but there are countless ways to do that.
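For example:

    New-Item -Path .gitignore -ItemType File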

Then, open this .gitignore file, which is the default one to use for Visual Studio projects, and copy its contents into the file you created.

Now you can commit, push your changes and eventually move on to Appveyor thanks to a few commands. Note that you must run these commands from the directory where your solution and .gitignore are.
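A minimal sequence would be the usual trio (the commit message is only an example):

    git add .
    git commit -m "Add the calculating web API project"
    git push origin master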

Once these commands have executed, you should see your solution with the files created on GitHub.

Get your continuous integration swag on

Create an Appveyor CI account

This is probably the simplest part of this tutorial. Simply go to the Appveyor login page, yes login. From here you can log in with a variety of source control related accounts but pick GitHub.

Once logged in you should land on an empty projects dashboard.

Connect your repository to Appveyor CI

Simply press “New Project” and you will be prompted with a list of repositories you have on your GitHub account.

Select “CalculatingWebApiAppveyor” and press “Add”. After a few seconds, you should see this:

To see how it works, press “New build”. What happens next is that Appveyor will download your source code from Github. Then, your source will be compiled, and if there are unit tests in your solution they will be run. But for now, you will see something like this:

Are you surprised? Are you entertained? Because I am. Don’t panic, it’s a benign error caused by the fact that Appveyor does not restore a project’s NuGet packages by default. To get rid of that error, go to the settings tab, then to “Build”.

Scroll down until you see the “Before script” option and enable it by selecting “PS”. Now, a text box should appear for you to input nuget restore like below:

Now, press the “Save” button below and go back to your build dashboard and press “New build” again. If everything goes according to plan you should end up with this:

Congratulations again! You now know how to set up a .NET project on Appveyor.

This is more or less where I would have stopped if I had gone with my original decision of making this tutorial a two-parter. Since it would not make much sense to stop here considering what’s left, let’s move on to our Postman collection again.

Setup Newman on Appveyor

Create environments

Now that our project, collection, and continuous integration tools are set up, it is time to put our collection to better use. An automated use. To do so, we will need to update our collection so that it can be run both locally and on Appveyor. In order to achieve that, we will extract the host URLs from our requests and place them in environment files. One we will use locally, the other one on Appveyor.

First, we will create our localhost and Appveyor environments. I will name mine CalculatingWebApiLocalhost and CalculatingWebApiAppveyor. If you don’t remember how to create environments and modify collections to use their variables, I happen to have written a post about it. You need at least the request host to be extracted into a variable in the collection.

Your localhost should contain the URL you used so far. Your Appveyor one will be “http://localhost”.  Once done, you should have two environments that each should look like this:

Localhost environment

Appveyor environment

Now your environments are ready, update your collection requests as below.

Greetings request update

Division request update

 

From here, you can open the collection runner to make sure your collection still works and tests still pass.

Save your collection and environment to your project

It’s time to introduce you to Postman exporting feature because you will now need to move your collection and Appveyor environment to your project. First, let’s export the collection, click on your collection menu button.

After pressing “Export”, you should see this:

Make sure that “Collection v2” is selected then press “Export” again. Now, save the collection in your solution folder.

Next, we will export the Appveyor environment. Go to the “Manage environments” menu, then click on the “Download environment” icon for CalculatingWebApiAppveyor.

Then, save your environment to your solution folder.

The last step, but not the least: commit and push your changes. Here is a reminder:

Now our repository is all set! Let’s get back to Appveyor.

Setup Newman on Appveyor

First, go to the Tests tab:

Then, enter these lines after selecting “PS” on the “After tests script” textbox:
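The gist is not reproduced here; a minimal equivalent would be the two commands below. The exact flags the post used to silence npm warnings and adapt the output to Appveyor are not shown, and the file names assume Postman’s default export names:

    # Install Newman globally on the build worker
    npm install -g newman --silent

    # Run the exported collection against the Appveyor environment
    newman run CalculatingWebApiAppveyor.postman_collection.json -e CalculatingWebApiAppveyor.postman_environment.json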

The first line installs Newman on your Appveyor container, prevents dependency warnings and adapts the execution display to Appveyor. The second executes your collection using the environment you created and also adapts the execution display to Appveyor. If you used different filenames for your collection and environment, please update the command to match them. You should have something like this:

Now, go back to the “Latest build” tab and click on “New build”.

After a few moments, you will see that your build will fail.

Here you can see that Newman actually tells you what went wrong. All your tests failed, and there was a connection error for each of your collection requests. If your build fails for different reasons, you may want to go a few steps back and try again. But if your failed build looks like the capture above, you’re good to go.

Setup local deployment on Appveyor

Yes, we are very close to finishing setting up our Postman based continuous integration system. Now, we need to tell Appveyor that we want to package our solution and deploy it locally so that we can run our collections against it.

First, we will enable IIS locally. IIS is a service that allows running any kind of .NET web apps or APIs, though it is not limited to those. To enable IIS, go to the “Environment” settings tab, then click on “Add service” and select “Internet Information Services (IIS)”.

After saving your changes, you will go to the “Build” tab and enable the “Package Web Applications for Web Deploy” option and save again.

That option will generate a zip package that will have the same name as your Appveyor project. What we need to do next is to configure Appveyor to deploy that package on the local IIS. In order to do so, we will go to the “Deployment” tab.

Click on “Add deployment” and select “Local Build Server”. Afterward, we will need to add some settings to tell Appveyor where and how to deploy. To do so, press “Add setting” three times then fill each setting to match these values:

  • CalculatingWebApiAppveyor.deploy_website: true
  • CalculatingWebApiAppveyor.site_name: Default Web Site
  • CalculatingWebApiAppveyor.port: 80

Now, you should see something like this:

Remember the PowerShell script we added in the “Tests” section of the settings? We will need to put it in the “After deployment script” instead. If we don’t do that, the build will always fail, since it will try to run our integration tests before locally deploying our application. I will put it here again in case you don’t feel like scrolling up a bit.

If you followed everything your “Deployment” settings tab should look like this:

Don’t forget to save your changes and to update your “Tests” tab. Now, your “Tests” settings tab should look like this again:

 

After saving it, go back to “Latest build” and press “New Build”. Then, you will see that everything simply works.

Well done!

What’s next ?

Now that you know how to set up Newman-powered API tests on Appveyor using GitHub, you can chill and call it a day. However, you can also show off your mastery of CI by adding your project badge to your README file.

Note that Appveyor allows you to deploy only when you push commits to your repository, whether it is a direct push or a pull request being merged. Nevertheless, if you have a private Appveyor account you can enable an option to allow local deployment to run your API tests even on pull requests.

Thanks for reading, I hope you enjoyed reading this as much as I enjoyed writing it. Also, I would like to shout out a big thanks to Postman Labs for featuring my previous post in their favorites of March, that was a really nice surprise.

Good luck helping to make this world fuller of future-proof software every day!

NB: If you don’t feel like creating the Web API project and you scrolled straight to the end of the post to get the sources, help yourself.

Postman collections: Making API testing great again!

Reading Time: 8 minutes

Turning shaky code into future-proof software

Over the past years we have moved more and more towards web-oriented architectures, connecting to services in order to provide information. Along with the evolution of testing tools and development methodologies, we can build crazily robust software. However, it happens that sometimes we do not build unit tests because of project constraints. The reasons often range from time pressure on a project to laziness, but I am not here to judge.

Still, when you build a web service there is a way to ensure it works properly after implementation without doing a huge refactoring. I do not endorse skipping unit tests and consider myself a herald of test-driven development. That being said, I am here to offer a solution for those who wrote code far from 100% test coverage. This solution is to build API tests, which is extremely easy using Postman. Once your API tests are built, you can then refactor your code bit by bit so that unit tests can be added at a later stage.

Now you can see this post as the first one of my future-proof series where I will introduce you to Postman collections and how to build flexible tests with them.

Requirements

This document has been written assuming that you have a basic knowledge of Javascript, JSON and web requests. If you do not, please feel free to visit these to be up-to-date:

Using Postman makes it way easier and more pleasant. Download it if it’s not done yet so you can follow through some examples later.

Creating a scenario

Create a request

Let’s start with something simple: create a new tab in Postman. Then, in the text field containing the placeholder Enter request URL, type “http://echo.jsontest.com/ping/pong”. When it’s done, press “Send” and you should get something like the next screenshot.

Save your request in a new collection

Now that your request is created you can save it by pressing “Save” and postman will ask you if you want to create a new collection, enter the collection details and press “Save”. Sounds repetitive but it’s the proof that they remain consistent in terms of UX.

Congratulations, you just created your first Postman collection! If it is not your first, then you just wasted 5 minutes of your life that you will never get back, and even more by reading this whole sentence. And if you did it properly you should see this in your “Collections” tab:

Before moving into the whole Collection Runner thing we will add a test in the first request and create a second request using the response of the first one.

Adding tests

So now you will click on the “Tests” tab, and you can see on the right that there are some test snippets to help you write tests faster. Let’s select two of them: “Status Code: Code is 200” and “Response body: JSON value check”. You should now see this:

As you may notice, Postman tests are simple Javascript with some utility methods and variables that allow you to write simple yet powerful tests. This enables you to write very complex tests verifying every bit of your response. You can add tests around the response time, the response code, the content type you receive, etc.

The status code test does not need to change, as the expected response code is 200 here. You need to replace “Your test name” with “Test ping value is pong”, and jsonData.value === 100 with jsonData.ping === "pong". Now you should get this:
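After those replacements, the two snippets read (legacy Postman test syntax):

    // "Status Code: Code is 200" snippet, unchanged
    tests["Status code is 200"] = responseCode.code === 200;

    // "Response body: JSON value check" snippet, updated for the ping/pong payload
    var jsonData = JSON.parse(responseBody);
    tests["Test ping value is pong"] = jsonData.ping === "pong";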

Now press “Save”, then “Send” and if you followed everything properly you should see the following:

Now you see you got the same response, and you can see “Tests (2/2)” which means that both tests passed. If you click on the Tests tab you will see the labels of the passed tests “Status code is 200” and “Test ping value is pong”:

Congratulations, you ran your first tests on Postman. If not, you wasted again some time of your life, yes it’s truly gone.

Adding a global variable for later use.

Now let’s add another request in our collection, but first we will set a global variable from the Ping pong request. Let’s go back to the “Tests” tab of the Ping pong request. In the snippets list on the right select “Set a global variable”.

From there you need to replace “variable_key” with “pingValue” and “variable_value” with jsonData.ping. Now if you press “Send” again, the request is sent and the global variable is saved; click on the eye button to see it.
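For reference, the edited snippet ends up as:

    // Store the ping value from the response for the next request
    postman.setGlobalVariable("pingValue", jsonData.ping);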

You can see the variable was set so now let’s move on to create the request we will use it in.

Duplicating a request and using global variables

Duplication is quite easy, all you have to do is go in your collection tab, click on the 3-dot button next to your Ping pong request and select duplicate.

Then you can rename your duplicated method “Ting pong”.

Now click on your “Ting pong” request to see it in the builder, and update the URL from “http://echo.jsontest.com/ping/pong” to “http://echo.jsontest.com/ping/{{pingValue}}”. Putting pingValue between those brackets allows you to access any global or environment variable. It works as long as you access these values from the request URL, headers or body. To access a global variable from the pre-request script or the tests, you use globals.variable_name. Here we will also update the test to retrieve pingValue; to do so, you will replace “pong” with globals.pingValue, as below.
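The updated test in the Ting pong request then looks like this:

    var jsonData = JSON.parse(responseBody);
    tests["Test ping value is pong"] = jsonData.ping === globals.pingValue;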

Now if you run your request again all tests will pass again.

Now if you clear your globals and try to run again, the test for the ping value will fail, since the request will send the string literal “%7B%7BpingValue%7D%7D”. This happens because you did not set any global or environment variable this time, so the test compares against a variable that doesn’t exist, which results in it failing as you can see below.

However if you run again your Ping pong request, it will set the pingValue global variable therefore when you run the Ting pong request again, your test will pass again until you clear the variables.

The collection runner, finally

Now it is almost time to play with the collection runner. But first you need to know that collections run requests based on the alphabetical order of request titles, so to ensure they run in the order you prefer, I strongly advise you to use numbers for ordering, as below:

Yes, you did not need this here since the alphabet already places Ping pong before Ting pong. The same goes for collection folders, on which I will not expand as they are really straightforward to use. If you want to have multiple scenarios in your collection that should not rely on each other, you would be wise to group your requests in folders. Not only because it is much cleaner, but also because they can be run individually if needed. On a 2-request collection it will not be an issue, but the last one I created has 49 requests with hundreds of tests.

So now let’s go have a look at that collection runner, on your collection press on the arrow.

You will then see this, obviously the option you will select is “Run”.

The collection runner will then open after a couple seconds. You can simply press “Start Run”:

After the run you will see your collection run results.

Congratulations now you know how to write test scenarios using Postman.

Creating a new environment

Environments are pretty useful to write collections faster by using variables at any level, from URLs to test values and so on. Using multiple environments is frequent for complex systems with multiple deployments and/or gateways.

Now we will create a simple environment  and set the ping service url that we will use in both requests.

First of all, click on the gear icon then “Manage environments” > “Add”. Once you get there you can name your environment and setup the values you need. 

Here we will name it “Postman tutorial environment” and add a key pongServiceUrl set to the value “http://echo.jsontest.com/” then press “Add”.

 

Now that you have created your environment, you need to select it from the dropdown.

Once it’s done you can then update the urls from both your requests to use {{pongServiceUrl}} like this:


Now if you go to the runner again the environment is already set. You can press “Start Run” again.

Now you will see the same results as previously but with updated urls.

Congratulations! You now have all the tools you need to write fully flexible test collections. Your creativity is your only limit.

P.S. Today I turned 26. No I did not write this post on my birthday but did corrections and added screenshots today. Happy birthday to me!

Set tint color on an image in a NSAttributedString

Reading Time: 4 minutes

Introduction

Hi everyone, I have been working for a few days now on a project that requires making an app fully customisable from a configuration file. I was coding and coding and coding, extracting color definitions, applying tints on images, when I ran into an issue. I could not apply a tint over a mutable attributed string, nor a simple attributed string for that matter. So there I was, looking at my NSAttributedString and my NSTextAttachment without any property allowing me to change only the image color.

It turns out that you actually can apply a color to your attributed string. I don’t know about the underlying magic of attributed strings, but it seems that you can apply your text color to an image embedded via a text attachment.

Let’s get started

Setting up the UI

First you create your single view project.

Create single view application

Create single view application

Set project name

Set project name

Then you go on your main storyboard (Main.storyboard) and add a label, that you will link to your view controller code (here as labelWithColoredImage).

Label setup

Label setup

Here is what your code should look like:

And that’s it in terms of UI setup for now. You can run your app to be sure it’s behaving properly and that the label appears.

Simulator capture of a single label

Now the basic NSAttributedString implementation

Now let’s get our hands dirty with the coding part. First, let’s update the text and change its color using only the textColor property. Add the following lines in the viewDidLoad function and run the app again.
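The original snippet is not embedded here; something minimal along these lines does the job (the text itself is only an example):

    // Plain UILabel text, colored through textColor, no attributed string yet
    labelWithColoredImage.text = "Here is some text before the Swift logo"
    labelWithColoredImage.textColor = UIColor.redColor()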

First attempt with red color with no magic of NSAttributedString

First attempt with red color

Then we can observe that the text is red. One minor issue is that it doesn’t fit the screen, so we can add constraints to the label to center it and make it responsive. Let’s add the text attachment magic now. It’s time to add an image to the project; I just added the image “swift_logo.png” to the project. Drag-and-drop works, as does right-clicking in the project and selecting “Add files to <ProjectName>”.

Image added to the project

Image added to the project

Let’s create our attributed string now with the text attachment. I will have the text over two lines to show how the image is now part of it, and due to the image color I change the text color to blue so you can see the difference.
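A sketch of that code, in the Swift 2 syntax the post uses (the exact wording of the text is my own; note the trailing space before the attachment, which matters as explained in the conclusion):

    let stringToDisplay = "Here is some text over two lines\nbefore the Swift logo "
    let image: UIImage = UIImage(named: "swift_logo")!

    // Wrap the image in a text attachment and append it to the text
    let attachment = NSTextAttachment()
    attachment.image = image
    let attributedString = NSMutableAttributedString(string: stringToDisplay)
    attributedString.appendAttributedString(NSAttributedString(attachment: attachment))

    labelWithColoredImage.numberOfLines = 0
    labelWithColoredImage.textColor = UIColor.blueColor()
    labelWithColoredImage.attributedText = attributedString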

Another run!

Blue text, original image, thank you NSAttributedString

Blue text, original image

There you have it! Your image integrated within text. Now let’s check what happens if you render your image as a template. You need to replace the line let image: UIImage = UIImage(named: "swift_logo")! with let image: UIImage = UIImage(named: "swift_logo")!.imageWithRenderingMode(.AlwaysTemplate), then run the app again.

Everything is blue now

Everything is blue now

Giving the image a color of its own different from the source one

Now the image is the same color as the text. For some of you that could be enough, but we can go a little bit further by giving the image a color of its own. You can add the following lines before the assignment of the NSAttributedString to labelWithColoredImage.
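A sketch of those lines, reusing the names above:

    // Color only the attachment character, which sits right after stringToDisplay
    let attachmentRange = NSRange(location: stringToDisplay.characters.count, length: 1)
    attributedString.addAttribute(NSForegroundColorAttributeName,
                                  value: UIColor.greenColor(),
                                  range: attachmentRange)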

Basically, these lines color in green the character at the index right after stringToDisplay, for the length of the text attachment. Now let’s run the app again.

The final step with NSAttributedString

The final step

And there you are, you now know how to apply a different color to an image through an NSAttributedString.

Conclusion

The source code is available on Github

There is one thing to keep in mind though: there must always be a space before the text attachment containing the image in your NSAttributedString, otherwise it will not work. I am not quite sure whether the need to add a space is a bug or if it actually gives access to a hidden feature. Either way, happy coding!

How to install the CLI Amazon tools on a Mac

Reading Time: 5 minutes

Recently I had to connect to an Amazon EC2 instance using SSH to do some stuff and realized that there is no manual for the Mac setup, only for Unix-like OSes. However, since OS X is Unix-based the instructions kinda work, but if you’re unaware of some specifics you may get stuck at some point, and even googling did not help me there. As I managed to make it work quite easily, I thought it was my duty to share it with the world, so that somewhere on the internet there is a reference allowing you to just copy-paste commands to save time. Yes, the good engineer is lazy (works for devs too). Not the lazy laziness of not doing anything, but the one that forces you to think something through long and well enough so you don’t have to restart it or do it again and again.

Note

This guide is partially inspired by Amazon’s documentation on the subject.

A. Download and Install the CLI Tools

First of all you will need the Amazon CLI tools. Download the tools. The CLI tools are available as a .zip file on this site: Amazon EC2 CLI Tools (aws.amazon.com/developertools/351). You can also download them with the curl utility.

You can, optionally, verify that the CLI tools package has not been altered or corrupted after publication. For more information about authenticating the download before unzipping the file, see Verify the Signature of the Tools Download.

Unzip the files into a suitable installation directory, such as /usr/local/ec2. Notice that the .zip file contains a folder ec2-api-tools-x.x.x.x, where x.x.x.x is the version number of the tools (for example, ec2-api-tools-1.6.12.2).
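For example, assuming the archive was saved as ec2-api-tools.zip in the current directory:

    sudo mkdir -p /usr/local/ec2
    sudo unzip ec2-api-tools.zip -d /usr/local/ec2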

B. Tell the Tools Where Java Lives

You can verify whether you have Java installed and where it is located using the following command:
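That command is simply:

    which java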

You should see something like the following example output.

If the previous command does not return a location for the Java binary, you need to install Java. For help installing Java on your platform, go here.

Find the Java home directory on your system. The which java command executed earlier returns Java’s location in the $PATH environment variable, but in most cases this is a symbolic link to the actual program; symbolic links do not work for the JAVA_HOME environment variable, so you need to locate the actual binary. The /usr/libexec/java_home command returns a path suitable for setting the JAVA_HOME variable.

Set JAVA_HOME to the full path of the Java home directory, that is, to $(/usr/libexec/java_home). The following command sets this variable to the output of the java_home command; the benefit of setting the variable this way is that it updates to the correct value if you change the location of your Java installation later.
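In other words:

    export JAVA_HOME=$(/usr/libexec/java_home)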

You can verify your JAVA_HOME setting using this command.
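For example:

    $JAVA_HOME/bin/java -version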

If you’ve set the environment variable correctly, the output looks something like this.

Add this environment variable definition (JAVA_HOME) to your shell startup scripts so that it is set every time you log in or spawn a new shell. The name of this startup file differs across platforms (in Mac OS X, this file is commonly called ~/.bash_profile, and in Linux it is commonly called ~/.profile), but you can find it with the following command:
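One way to locate it (not necessarily the exact command the original post referenced):

    ls -a ~ | grep profile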

If the file does not exist, you can create it. Use your favorite text editor to open the file that is listed by the previous command, or to create a new file with that name. Then edit it to add the variable definition you set in Step 3.

If the file exists, set its permissions so you can edit it, then edit it using TextEdit, and change the permissions back again.

Verify that the variable is set properly for new shells by opening a new terminal window and testing that the variable is set with the following command.

Note: If the following command does not correctly display the Java version, try logging out, logging back in again, and then retrying the command.

C. Tell the CLI Tools Where They Live

The Amazon EC2 CLI tools read the EC2_HOME environment variable to locate supporting libraries. Before using these tools, set EC2_HOME to the directory path where you unzipped them. This directory is named ec2-api-tools-w.x.y.z (where w, x, y, and z are components of the version number). It contains sub-directories named bin and lib.

In addition, to make things a little easier, you can add the bin directory for the CLI tools to your system path. The examples in the Amazon Elastic Compute Cloud User Guide assume that you have done so.

You can set the EC2_HOME and PATH environment variables as follows. Add them to your shell start up scripts so that they’re set every time you log in or spawn a new shell.

To set the EC2_HOME and PATH environment variables on Linux/Unix

Use this command to set the EC2_HOME environment variable. For example, if you unzipped the tools into the /usr/local/ec2 directory created earlier, execute the following command, substituting the correct version number of the tools.
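For example, with the version mentioned earlier:

    export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.6.12.2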

Note

If you are using Cygwin, EC2_HOME must use Linux/Unix paths (for example, /usr/bin instead of C:\usr\bin). Additionally, the value of EC2_HOME cannot contain any spaces, even if the value is quoted or the spaces are escaped.

You can update your PATH as follows.
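That is:

    export PATH=$PATH:$EC2_HOME/bin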

D. Tell the CLI Tools Who You Are

Your access keys identify you to the Amazon EC2 CLI tools. There are two types of access keys: access key IDs and secret access keys. You should have stored your access keys in a safe place when you created them. Although you can retrieve your access key ID from the Your Security Credentials page, you can’t retrieve your secret access key. Therefore, if you can’t find your secret access key, you’ll need to create new access keys before you can use the CLI tools.

Every time you issue a command, you must specify your access keys using the --aws-access-key and --aws-secret-key (or -O and -W) options. Alternatively, you might find it easier to store your access keys using the following environment variables:

If these environment variables are set properly, their values serve as the default values for these required options, so you can omit them from the commands. You can add them to your shell startup scripts so that they’re set every time you log in or spawn a new shell.

You can set these environment variables as follows.
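For the classic EC2 CLI tools these are AWS_ACCESS_KEY and AWS_SECRET_KEY; replace the placeholders with your own keys:

    export AWS_ACCESS_KEY=your-aws-access-key-id
    export AWS_SECRET_KEY=your-aws-secret-key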

E. (Optional) Set the Region

By default, the Amazon EC2 CLI tools use the US East (Northern Virginia) region (us-east-1) with the ec2.us-east-1.amazonaws.com service endpoint URL. To access a different region with the CLI tools, you must set the EC2_URL environment variable to the proper service endpoint URL.

To set the service endpoint URL

To list your available service endpoint URLs, call the ec2-describe-regions command, as shown in the previous section.

Set the EC2_URL environment variable using the service endpoint URL returned from the ec2-describe-regions command as follows.
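For example, for the EU (Ireland) region:

    export EC2_URL=https://ec2.eu-west-1.amazonaws.com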

If you’ve already launched an instance using the console and wish to work with the instance using the CLI, you must specify the endpoint URL for the instance’s region. You can verify the region for the instance by checking the region selector in the console navigation bar.

For more information about the regions and endpoints for Amazon EC2, see Regions and Endpoints in the Amazon Web Services General Reference.