

Continuous delivery for free using Docker, CircleCI and Heroku

Reading Time: 10 minutes

Continuous what?

Continuous delivery. You may recall that in my previous post I announced that today’s entry would revolve around continuous integration. Technically it still counts as such, since we will cover continuous integration along with the next step. That next step is continuous delivery. If you are not familiar with these terms and the concepts behind them, I will sum them up briefly.

Basically, continuous integration allows you to verify that your codebase still builds and that its tests pass whenever you push changes. Add a trigger to deploy your code to production upon success and you pretty much have the idea behind continuous delivery.

These practices mainly help to make sure that you don’t break your codebase when pushing changes. This is good when you work alone but a lifesaver when working in a team. You cannot imagine how many hours I wasted, mostly during my studying years, because of code breaking without us realizing it for days. Using source control was already a miracle in itself at a time when there were limited options for continuous integration, especially for students. If you want more details about source control workflows, the GitHub Flow is a great place to start.

Let’s just jump into it!

Back to today’s topic, continuous delivery. Before I start inundating you with scripts and screen captures, you need to be familiar with a few things:

Retrieving the code

Since you read my previous tutorial, you should know more or less what the code does. It is the classic Values API sample returning an array with two values “value1” and “value2”. From there, the easiest step is to fork the repository created from that previous post which you can reach by clicking here.

fork that repo

Fork that repo, yep that one. Just do it!

Once the fork is completed you will have an exact copy of my repository where you can push changes for the rest of the tutorial. If you have not done so yet, you need to clone your fork to your machine for the next stage.

Getting acquainted with Docker

Docker is going to be key for today’s tutorial. Why? I hear you ask. Because CircleCI does not support C# for continuous integration. Neither does Heroku for deployment, at least not officially but we’ll get back to that later. But do you know what is supported by both that we can use? Docker container images.

Basically, consider a Docker container image as a box where you put everything your software needs to run properly, from code to settings to system tools and libraries. A containerized piece of software will always run the same way regardless of the environment. It will be completely isolated from its surroundings. The cool thing about this? Well, it works in any environment, whether you run it on Windows, Mac or Linux, as long as your computer supports hardware virtualization (e.g. Intel VT-x), which should be the case if your device is no more than a couple of years old. Then, you can make sure your container behaves as you expect locally before deploying. Also note that if you cannot run Docker on your local machine, you can still commit the Docker files and it will work on CircleCI.

First things first, you will need to install Docker Community Edition, which is free and available at this link. The installation is pretty straightforward, so there is nothing special to mention here. If Docker is not supported on your machine you will get a message when trying to install it on Windows. The same should happen if you try to run it on Mac. If that is the case, don’t worry, you can still go through the tutorial and won’t be missing that much.

Making our tests ready to work on CircleCI

As mentioned previously, if we try to build our API straight away on CircleCI it will fail. Not because the code does not work but because C# is not supported. In order to get our tests running, we will have them run in a containerized way. We don’t need to create a container image yet, only to get an existing container image that supports running them.

The first thing to do is to create a Docker Compose file that pulls an image able to run .NET Core 2 applications and runs our tests inside that image. You will now copy a file definition that does exactly that when used with the docker-compose command. You need to create a file named docker-compose.unittests.yml at the root of your repository. Once that is done, copy into it the contents of the gist below:
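The original gist is not reproduced on this page, so here is a minimal sketch of what such a compose file can look like; the service name, SDK image tag and volume layout are assumptions you would adapt to your own solution.

```yaml
# Sketch of docker-compose.unittests.yml -- service name and image tag are assumptions.
version: '3'
services:
  unit-tests:
    image: microsoft/dotnet:2.0-sdk   # any image able to build and run .NET Core 2 apps
    volumes:
      - .:/app                        # mount the repository inside the container
    working_dir: /app
    command: bash ./docker-run-unittests.sh
```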

Now we need to write the script that will allow our continuous integration tool to restore the solution within the container image, after which the tests will be run. Here is the script to copy inside a file named docker-run-unittests.sh, still at the root of your solution:
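Again, the original gist is missing from this page, so the following is only a sketch of such a script; the test project path is a placeholder you would replace with your own.

```sh
#!/usr/bin/env bash
# Sketch of docker-run-unittests.sh -- adjust the test project path to your solution.
set -eu -o pipefail

dotnet restore
dotnet test ./YourApi.Tests/YourApi.Tests.csproj
```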

You may notice a line that is unusual to most people: the command set -eu -o pipefail. A short and simple explanation is that it halts the script and makes the build process fail if an error occurs. If your build does not compile or your tests fail, that command makes the docker-compose command fail, which triggers an error and lets your CI system know the build failed.

Now that we have our tests ready to run within a container we will run them locally to make sure we’re all set. In order to do so, you will need to run the following command with your favourite terminal. This assumes that you are in your solution folder and that you can run Docker commands on your machine.
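The command itself is not reproduced on this page; assuming the compose file and script sketched above, it could be the following (the --exit-code-from flag makes docker-compose return the test container’s exit code, so failures are visible both locally and in CI):

```sh
docker-compose -f docker-compose.unittests.yml up --build --exit-code-from unit-tests
```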

Running that command will give you an output similar to this:

Docker test run result

We are now able to run tests on any environment supporting Docker. Let’s now set up our continuous integration tool.

Continuous integration

Configuring for CircleCI

Now that we have all the Docker configuration ready to run tests, we can configure our project to run continuous integration on CircleCI. The first thing to do here is to create a .circleci folder in your solution folder. Then, you will create a config.yml file inside of it so that its relative path to your solution is .circleci/config.yml. Into that file you will copy these contents:
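The original config gist is not shown here; a minimal sketch using a CircleCI 2.0 machine executor to run the compose file from earlier could look like this (the job layout and step names are assumptions):

```yaml
# Sketch of .circleci/config.yml -- runs the containerized unit tests on every push.
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run:
          name: Run unit tests in Docker
          command: docker-compose -f docker-compose.unittests.yml up --build --exit-code-from unit-tests
```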

Commit and push your changes, then move onto the next section.

Setting up CircleCI

CircleCI is a platform used for continuous integration and continuous delivery. I picked it for today’s post because it’s free and can be good if you just want to play around. Also, it can be great if you are creating a new business and want to keep the costs low before scaling up.

The first step here is to create an account. You can reach their signup page by clicking here. Once there you should see this screen:

Now you need to press “Sign Up with GitHub” to create your CircleCI account. This will land you on a page where GitHub will ask you if you want to grant CircleCI various permissions. As you will see below it will require your email address(es) and repository access rights.

Press “Authorize circleci” to move onto the next step. Now you will see a welcome screen as below.

If you noticed the arrow and the red circle, you know where to click next. If not, press “Add projects”. You will see the forked repository name appear. Next to it, you will notice a “Setup project” link; press it.

No red mark to show where to click this time

Now that you have pressed the right link, you should see the project setup screen. You can leave the operating system as Linux and select “Other” as the language.

Once you’ve done that, a feedback box will appear asking what language you intend to use. I suppose it is there to prioritise what they should add next to their roadmap. Don’t feel obliged to put C#, as it might make the unit testing part of this post obsolete. Which I wouldn’t mind much, because then I could update this post to avoid the build & test magic you were introduced to previously.

Next, scrolling down you should see a set of instructions to get the build to run but we already took care of that.

Now you can press “Start building” which will send you to your first build screen. Your build might be queuing for a few seconds before starting as below:

Your first Docker powered CircleCI build

In our case, there is not much going on apart from the test run, so after up to a couple of minutes you should get your successful build.

Successful build circleci

Build passing CI on the first try? A win in my book

Now that our CI tool is ready to build and validate our software, it’s time to prepare for deployment.

Time to deploy that API

Deployment over 9000 with Heroku

Heroku is a platform allowing developers to deploy, manage and scale web apps. They support most modern technologies and languages such as Node.js, Java, Go and many more. However, they do not officially support .NET Core, even though community extensions from GitHub (buildpacks) provide some sort of support. But today we are not going to do that.

The first thing you will need to do now is create an account. You can do so by clicking here. Once your account is created, you will see a screen prompting you to create a new app.

heroku create app screen

Easy

Now you can press “Create New App”, and you will be asked to pick a name and region. For this tutorial, the region does not matter and you can pick any name you like.

heroku create app

From here, press “Create app” to create your app and access its dashboard.

heroku new app dashboard

Now that the app is ready to receive our API deployment, you need to get your Heroku API key so that we can deploy our code to Heroku from CircleCI. In order to do so, you will have to access your Heroku settings. To get there, click on your profile icon (top right of the screen), you should see this menu pop up.

Heroku profile menu

Next, click “Account settings”. Once on the settings page scroll down until you see this:

heroku api key

Finally, press “Reveal” to display your API key and save it somewhere close, as we will use it soon.

Creating our own docker image to run the API

Here we are, the time where we create our own (maybe your first) Docker image. The first step is to create our Dockerfile in the project folder.

Our Dockerfile is pretty standard here: it sets up an environment that can build and run .NET Core apps, then restores our project and publishes it locally, to eventually run it using the port number passed by Docker.
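The Dockerfile itself is not reproduced on this page, so here is a sketch matching that description; the SDK image tag and the published assembly name (YourApi.dll) are placeholders for your own project.

```dockerfile
# Sketch of the Dockerfile -- image tag and assembly name are placeholders.
FROM microsoft/dotnet:2.0-sdk

WORKDIR /app
COPY . .

# Restore dependencies and publish a release build locally.
RUN dotnet restore
RUN dotnet publish -c Release -o out

# Heroku injects the port to listen on through the PORT environment variable.
WORKDIR /app/out
CMD ASPNETCORE_URLS=http://+:$PORT dotnet YourApi.dll
```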

Now that our Dockerfile is ready to go, we will add a .dockerignore file, which is a list of files/folders we want Docker to ignore. In our case, we want to make our build context as small as possible, so we will ignore binaries as you can see below:
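Since the original file is not shown on this page, a minimal sketch simply excludes the build output folders:

```
# Sketch of .dockerignore -- keep compiled output out of the build context.
**/bin
**/obj
```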

Once the file is created, if you can run Docker locally, you may run the following commands to make sure your setup is valid:
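For example, something like the commands below (the image tag and port are placeholders; Heroku provides its own PORT value at runtime):

```sh
# Build the image and run it locally, faking Heroku's PORT variable.
docker build -t my-api .
docker run --rm -e PORT=5000 -p 5000:5000 my-api
```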

Yet again, if you cannot run Docker locally, you will see the results on CircleCI later.

Updating the CircleCI config to set up our continuous delivery

We are almost there! It is time to put the delivery in continuous delivery. Now that we have our Docker image configuration ready, we can finalize our CircleCI configuration. Before editing our configuration file, we will need to add our Heroku credentials to the project environment variables. In order to do so, go back to your dashboard. From there, press your build’s settings button; it should look like this:

Then, click “Environment Variables” and add the email address you registered with on Heroku as HEROKU_USERNAME. Afterwards, add your Heroku API key as HEROKU_API_KEY. Finally, add your Heroku app name as HEROKU_APP_NAME.

After adding the variables, we can now update our CircleCI configuration file with the deployment steps.
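The updated config gist is not reproduced here, so the following sketch extends the earlier one with the deployment steps described just below; the registry path and step names are assumptions based on Heroku’s container registry conventions.

```yaml
# Sketch of the extended .circleci/config.yml -- build, log in, then push to Heroku.
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run:
          name: Run unit tests in Docker
          command: docker-compose -f docker-compose.unittests.yml up --build --exit-code-from unit-tests
      - run:
          name: Build the API image
          command: docker build -t registry.heroku.com/$HEROKU_APP_NAME/web .
      - run:
          name: Log in to the Heroku container registry
          command: docker login --username=$HEROKU_USERNAME --password=$HEROKU_API_KEY registry.heroku.com
      - run:
          name: Push the image to Heroku
          command: docker push registry.heroku.com/$HEROKU_APP_NAME/web
```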

Basically, what we do in that file is build our Docker image, authenticate to Heroku and eventually push our image to Heroku’s container registry. Now it is time to commit and push our changes for the last time. If you go back to CircleCI, you should see your build was successful.

continuous delivery all green

Continuous delivery in action, simply beautiful

Now, if you go to your Heroku app using https://<your-app-name>.herokuapp.com/api/values, you will see the following result.

Continuous delivery ✓✓✓

Congratulations! You are now smarter than 30 minutes ago! Not only do you know how to set up continuous delivery using CircleCI and Heroku, but you can also build a Docker container image. If you missed anything, don’t hesitate to check the source code there.

What your solution folder structure should look like now.

Note that for the sake of brevity, I chose to put all the commands in the CircleCI build job. Also, I did not put any condition on which branch gets deployed, which is a check you should always have to avoid publishing a test build to production. In the case of continuous delivery, pushing code to the dev branch should trigger a deployment to the development environment, pushing code to master should trigger a deployment to production, and so on. You can figure out how to do this using condition-based instructions and the deployment job here.

Based on your feedback I may write a quick guide on setting up CI for multiple environments using this post as a basis. Since I have a few other things in the pipeline for the next few months, it might not happen for a while.

Thanks again for reading. If it was any use to you, don’t hesitate to share and subscribe to get more of these. The next future-proof entry should be about what you can do to keep your continuous delivery from turning into this:

Trying to provide helpful pull request reviews

Reading Time: 3 minutes

How I unblocked a frozen pull request

A few weeks ago, I saw a pull request to modify one of our webjobs whose codebase is pretty old and had no tests. The pull request had no tests either. The thing is that we had decided to make unit testing mandatory for any pull request a couple of weeks before.

I started reviewing the code when I noticed someone else had already posted a review. A pretty laconic “please add tests”. Not a bad nor a mean review, but not a really helpful one. Proof of it is that it had been posted about an hour before and the pull request was still blocked. Indeed, we do not want untested logic to enter or remain in our software, and yes, it is aligned with our new policy about tests. That being said, the webjob code was tightly coupled and pretty much impossible to test as it was.

This is where I stepped in. I reviewed the code and found a way to make it testable, then suggested a few minor changes to the existing codebase to that end. Within thirty minutes the submitter modified the code and was pretty happy to have tests for the logic he improved. Eventually, I went on and approved his pull request, and the first reviewer followed up.

What went wrong with the first review

Please add tests

In most cases “please add tests” is enough to do the trick. The code is designed properly and decoupling is applied wherever possible. “Please add tests” is enough if the tests were not written out of laziness or just got forgotten. However, in this particular case, the reviewer did not take into consideration the context of the change. Indeed, it was an update to an old project designed at a time when the backend team was a couple of guys trying to launch a company. Delivering the software was prioritised over making it easily maintainable. In order to allow a business to take off, testing and decoupling were left for another day. Taking these factors into consideration, I was able to come up with a few strategic changes that eventually allowed us to add some tests.

You may have noticed the two different approaches and their effects here. On one hand, turning a change of context into a problem; on the other hand, suggesting a solution. The first one had the pull request frozen for an hour, whereas the latter allowed the pull request to move forward and the code to be merged. As software engineers we need to help others move forward and propose solutions, not problems. Solving problems is central to what we do, whether it is designing a seamless checkout or helping a colleague make progress on a project.

Become an enabler

We have all been that first reviewer at one moment or another, and if you recognized yourself there, here are a few tips for you:

Leave your ego out

If you comment on a pull request because it will make you feel superior to the submitter, by showing how much bigger your knowledge is than theirs or how you are the best developer there is, don’t. Just don’t. Especially if it does not bring any value to what they are trying to accomplish through the pull request. Always leave your ego out of anything if you want to be productive.

Ask questions

This one is close to the previous point, even though one may happen without the other. Please do not assume someone’s coding or design choices are wrong because they do not match what you would do. Ask questions, and if there is a real issue, try to provide comments that drive the submitter towards a solution.

Follow-up

When you request changes, depending on the system you are using, you may be blocking a pull request and preventing someone from working. Make sure you follow up whenever you can: between two of your own pull request submissions, during a coffee break or anytime you come back to your desk. Time is precious, and when you request changes on a pull request you become responsible for the additional time spent on it by every developer involved.

Bring a positive value

Ask yourself about the impact you have on a project or a colleague. Does your comment make your colleague’s day better or worse? If it makes it worse, does it actually help solve the problem at hand and bring positive value? Because at the end of the day, all that matters is the value you can create. Value to a business, value to people. Making a positive impact on your environment will encourage others to do the same. Eventually it will help you and the people around you thrive and yearn for improvement every day.

Special thanks to Joshua Dooms who did make a positive impact on my vision of how reviews should go.

Simple continuous integration with Appveyor and Newman

Reading Time: 13 minutes

Last month, I posted about Postman enabling you to test your APIs with little effort so that you can build future-proof software. Here we are going to cover setting up continuous integration for a simple project by using Newman to run your Postman collections. You may have heard about continuous integration in the past. Most commonly, continuous integration will build software from one’s changes before or after merging them to the main codebase. Even though there is an infinity of tools that allow implementing continuous integration, I will focus on Appveyor CI. In order to make things simple, I will create a very basic web API project and will host it on GitHub.

Create GitHub repository

You can create the repository on GitHub by clicking this link: Create a repository on Github. For more details, please follow the documentation they provide on their website.

In broad terms, you should see something like this when you create the repository:

Create a repository on GitHub

Once you’re all set, if you have not done it yet, you need to clone your repository. Personally, the command line feels easier as a simple “git clone” will do the job.
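For instance, the command could look like this (the username is a placeholder for your own GitHub account):

```sh
git clone https://github.com/<your-username>/CalculatingWebApiAppveyor.git
```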

Command-line execution will look like this.

Create Web API project

Project setup

Now that your repository is all set, we can actually create the Web API project. For this step, you will need to install Visual Studio, ideally 2017, which you can download here. Once installed, open it and create a new project by selecting “File”, then “New”, then “Project”.

After the project template selection popup appears, select “ASP.NET Web Application”. As for the project path, select the one where you cloned your repository and press OK.

Now you will have to select what kind of web application you want to create. Select “Empty” and make sure that the “Web API”  option is enabled like below. Note that selecting “Add unit tests” is not necessary for this tutorial.

Then press “Ok” and wait for the project creation. Once it’s done, your solution explorer should look like this.

Time to add some code. Yeah!

Add your Controller

First, right-click on the “Controllers” folder. Now, select “Add” then “Controller”. Pick “Web API 2 Controller – Empty” and press “Add”.

Next, you get to pick the controller name. Here it will be DivisionController.

Now you should have an empty controller looking like this:
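The freshly generated controller is just an empty class deriving from ApiController; a sketch of what the template produces is shown below (the namespace depends on your project name).

```csharp
using System.Web.Http;

namespace CalculatingWebApiAppveyor.Controllers
{
    // Empty Web API 2 controller generated by the template.
    public class DivisionController : ApiController
    {
    }
}
```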

The first project run

From here it’s time to run your project, either by pressing F5 or by opening the menu and selecting “Debug” then “Start Debugging”. After a few seconds, a browser window will open and you will see a 403 error page.

Chill, it’s perfectly normal as no method in our DivisionController is defined and access to your project directory is limited by default. At this point, we can already open Postman and create our first test.

It’s Postman time!

The first test

Now, open Postman and create a new tab. Once the tab is created, copy the URL opened by the Visual Studio debugger in Chrome. In my case, it’s “http://localhost:53825” but yours could be different. Paste that URL in your Postman tab like this:

Next, press “Send” and you shall see the Postman version of the result we observed previously in Chrome.

From here, we can start writing tests that will define our API behavior for the default endpoint that does not exist yet. Here you can notice a couple of things that we will want to change. First, we don’t want that ugly HTML message to be displayed by default but something a little more friendly. I guess a “Hello Maths!” message is friendlier, from a certain point of view. Let’s add a test for that.

If you remember the previous article, you know that you are supposed to go to the Tests tab in order to add it. In this case, we will pick the “Response body: Is equal to a string” snippet. You should get some code generated as below:

Next, you will update it to replace “response_body_string” with “Hello Maths!”.

Now that the response test is sorted, let’s add a response code test to validate we should not get that 403 HTTP code. For this, we will use the “Status code: Code is 200” test snippet.
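With both snippets in place and the placeholder string replaced, the Tests tab should contain something along these lines (this uses Postman’s older tests/responseBody syntax from that era; the exact test labels may differ):

```javascript
// The response body should be exactly the greeting, and the status code should be 200.
tests["Body is correct"] = responseBody === "Hello Maths!";
tests["Status code is 200"] = responseCode.code === 200;
```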

After sending the request again you can see that both tests failed.

Fix the API to make the tests pass

It is now time to write some code to right this wrong. Go back to Visual Studio to modify the DivisionController. We will add an Index method that will return the message we want to see.
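The original code embed is not shown on this page; a sketch of such an Index action, assuming attribute routing is enabled (the default Web API template calls MapHttpAttributeRoutes), could look like this:

```csharp
// Requires the System.Net, System.Net.Http and System.Web.Http namespaces.
[HttpGet, Route("")]
public HttpResponseMessage Index()
{
    // Build a 200 OK response whose body is the plain greeting string.
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("Hello Maths!")
    };
}
```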

This code basically creates a new response object with the OK (200) status code that we want to get. In this object, we add a StringContent object that contains our “Hello Maths!” message. Let’s run the Visual Studio solution again by pressing F5.

As you can see, the horrible HTML error page is gone now and we see the “Hello Maths!” greeting. Now, if you run that same request in Postman, you will see that our tests pass.

Now save the request in a new collection that we will call “CalculatingWebApiAppveyor” as below.

You should see in the right tab the newly created collection along with the request we just saved.

Implement the division

If you got this far, you’ve done great already, even though our API doesn’t do much yet. It’s time to make it useful. From here, we will add a Divide action that takes a dividend and a divisor as parameters and returns the quotient. You can copy the code below and add it to your controller.
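Again, the embedded code is missing from this page; a sketch matching the route tested later ("divisions/dividends/10/divisors/2/_result") could be:

```csharp
// Integer division: 10 / 2 returns 5, serialized without quotes.
[HttpGet, Route("divisions/dividends/{dividend}/divisors/{divisor}/_result")]
public IHttpActionResult Divide(int dividend, int divisor)
{
    return Ok(dividend / divisor);
}
```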

You may notice that the code looks simpler than for “Hello Maths!”. Actually, we could simply have written return Ok(“Hello Maths!”). However, this would have returned “Hello Maths!” with the quotes, and our test would not have passed. Now, let’s run the project again and add a test for that division endpoint in Postman.

Test the Division

What we want to do is to make sure that our division endpoint actually returns the result of a division. What we will test here is that for 10 divided by 2 we do get 5. From there, you know that the route to be tested will be “divisions/dividends/10/divisors/2/_result”.  Now, create a new tab in Postman and copy the URL from your greetings endpoint. Then, append the route to be tested as below.

Next, we are going to use the “Response body: Is equal to string” snippet to validate that 10 divided by 2 should return 5. Also, we will add a status check just because.

If you followed all the steps correctly, you should see both tests pass and the response is indeed 5.

Now, save that last request as “Validate division works” in the CalculatingWebApiAppveyor collection you created.

Finally, you can run your whole collection and you will see all the tests pass green.

Congratulations! You have a fully functional API, as long as divisors are different from zero, with its own Postman collection. A collection that you can run whenever you like to make sure your API is fine. The one issue, though, is that you may not be working alone nor want to run Postman whenever you push a change to GitHub.

There is a way to solve this issue and that’s where Appveyor comes into play. But first, let’s commit and push our changes.

Commit and push your code changes

If you haven’t done it yet, it’s time to commit your changes and push them to your Github repository. First, create a new file named .gitignore. More information about what that file does here.

I personally used the PowerShell New-Item command, but there is an infinity of ways to do that.
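For reference, the PowerShell one-liner would look like this:

```powershell
New-Item -Path .gitignore -ItemType File
```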

Then, open this .gitignore file, which is the default one to use for Visual Studio projects, and copy its contents into the file you created.

Now you can commit, push your changes and eventually move on to Appveyor thanks to a few commands. Note that you must run these commands from the directory where your solution and .gitignore are.
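The commands themselves are not reproduced on this page; a minimal sequence (the commit message is up to you) would be:

```sh
git add .
git commit -m "Add calculating Web API project"
git push origin master
```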

Once these commands have executed, you should see your solution with the files created on GitHub.

Get your continuous integration swag on

Create an Appveyor CI account

This is probably the simplest part of this tutorial. Simply go to the Appveyor login page, yes login. From here you can log in with a variety of source control related accounts but pick GitHub.

Once logged in you should land on an empty projects dashboard.

Connect your repository to Appveyor CI

Simply press “New Project” and you will be prompted with a list of repositories you have on your GitHub account.

Select “CalculatingWebApiAppveyor” and press “Add”. After a few seconds, you should see this:

To see how it works, press “New build”. What happens next is that Appveyor will download your source code from Github. Then, your source will be compiled, and if there are unit tests in your solution they will be run. But for now, you will see something like this:

Are you surprised? Are you entertained? Because I am. Don’t panic, it’s a benign error caused by the fact that Appveyor does not restore a project’s NuGet packages by default. To get rid of that error, go to the settings tab, then to “Build”.

Scroll down until you see the “Before script” option and enable it by selecting “PS”. Now, a text box should appear for you to input nuget restore like below:

Now, press the “Save” button below and go back to your build dashboard and press “New build” again. If everything goes according to plan you should end up with this:

Congratulations again! You now know how to set up a .NET project on Appveyor.

This is more or less where I would have stopped if I had gone with my original decision of making this tutorial a two-parter. Since it would not make much sense to stop here considering what’s left, we can move on to our Postman collection again.

Setup Newman on Appveyor

Create environments

Now that our project, collection, and continuous integration tools are set up, it is time to put our collection to better use. An automated use. To do so, we will need to update our collection so that it can be run both locally and on Appveyor. In order to achieve that, we will extract the host URLs from our requests and place them in environment files. One we will use locally, the other one on Appveyor.

First, we will create our localhost and Appveyor environments. I will name mine CalculatingWebApiLocalhost and CalculatingWebApiAppveyor. If you don’t remember how to create environments and modify collections to use their variables, I happen to have written a post about it. At a minimum, you need the requests’ host to be extracted into a variable.

Your localhost environment should contain the URL you used so far. Your Appveyor one will be “http://localhost”. Once done, you should have two environments that each look like this:

Localhost environment

Appveyor environment

Now that your environments are ready, update your collection requests as below.

Greetings request update

Division request update


From here, you can open the collection runner to make sure your collection still works and tests still pass.

Save your collection and environment to your project

It’s time to introduce you to Postman’s export feature, because you will now need to move your collection and Appveyor environment to your project. First, let’s export the collection: click on your collection menu button.

After pressing “Export”, you should see this:

Make sure that “Collection v2” is selected then press “Export” again. Now, save the collection in your solution folder.

Next, we will export the Appveyor environment. Go to the “Manage environments” menu, then click on the “Download environment” icon for CalculatingWebApiAppveyor.

Then, save your environment to your solution folder.

The last step, but not the least: commit and push your changes. Here is a reminder:

Now our repository is all set! Let’s get back to Appveyor.

Setup Newman on Appveyor

First, go to the Tests tab:

Then, enter these lines after selecting “PS” on the “After tests script” textbox:
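The exact lines are not reproduced on this page; one possible pair of commands (the exported file names are placeholders you should match to your own exports) could be:

```powershell
# Install Newman quietly, then run the collection with the Appveyor environment.
npm install -g newman --silent --no-progress
newman run CalculatingWebApiAppveyor.postman_collection.json -e CalculatingWebApiAppveyor.postman_environment.json --disable-unicode
```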

The first line installs Newman on your Appveyor container, prevents the dependency warnings and adapts the execution display to Appveyor. The second executes your collection using the environment you created, and also adapts the execution display to Appveyor. If you used different filenames for your collection and environment, please update the command to match them. You should have something like this:

Now, go back to the “Latest build” tab and click on “New build”.

After a few moments, you will see that your build will fail.

Here you can see that Newman actually tells you what went wrong. All your tests failed, and there was a connection error for each of your collection requests. If your build fails for different reasons, you may want to go a few steps back and try again. But if your failed build looks like the capture above, you’re good to go.

Setup local deployment on Appveyor

Yes, we are very close to finishing setting up our Postman-based continuous integration system. Now, we need to tell Appveyor that we want to package our solution and deploy it locally so that we can run our collections against it.

First, we will enable IIS locally. IIS is a service that allows running any kind of .NET web app or API, even though it is not limited to that. To enable IIS, go to the “Environment” settings tab, then click on “Add service” and select “Internet Information Services (IIS)”.

After saving your changes, you will go to the “Build” tab and enable the “Package Web Applications for Web Deploy” option and save again.

That option will generate a zip package that will have the same name as your Appveyor project. What we need to do next is to configure Appveyor to deploy that package on the local IIS. In order to do so, we will go to the “Deployment” tab.

Click on “Add deployment” and select “Local Build Server”. Afterward, we will need to add some settings to tell Appveyor where and how to deploy. To do so, press “Add setting” three times then fill each setting to match these values:

  • CalculatingWebApiAppveyor.deploy_website: true
  • CalculatingWebApiAppveyor.site_name: Default Web Site
  • CalculatingWebApiAppveyor.port: 80

Now, you should see something like this:

Remember the PowerShell script we added in the “Tests” section of the settings? We will need to put it in the “After deployment script” box instead. If we don’t do that, the build will always fail since it will try to run our integration tests before locally deploying our application. I will put it here again in case you don’t feel like scrolling up a bit.

If you followed everything your “Deployment” settings tab should look like this:

Don’t forget to save your changes and to update your “Tests” tab. Now, your “Tests” settings tab should look like this again:


After saving it, go back to “Latest build” and press “New Build”. Then, you will see that everything simply works.

Well done!

What’s next?

Now that you know how to set up Newman-powered API tests on Appveyor using GitHub, you can chill and call it a day. However, you can also show off your mastery of CI by adding your project badge to your README file.

Note that Appveyor allows you to deploy only when you push commits to your repository, whether it is a direct push or a pull request being merged. Nevertheless, if you have a private Appveyor account you can enable an option to allow local deployment to run your API tests even on pull requests.

Thanks for reading, I hope you enjoyed reading this as much as I enjoyed writing it. Also, I would like to shout out a big thanks to Postman Labs for featuring my previous post in their favorites of March, that was a really nice surprise.

Good luck helping to make this world fuller of future-proof software every day!

NB: If you don’t feel like creating the Web API project and scrolled straight to the end of the post to get the sources, help yourself.

C# dynamic interface implementation at runtime

Reading Time: 2 minutes

Some context first

How did I come to write a class allowing dynamic interface implementation in the first place? Ever had to work on a huge company project over the weekend? Because it is the weekend, you pick up what should be easy configuration fixes. Then you think it will take you only a couple of hours and then you will be off to the gym. I thought that yesterday, and boy did I mislead myself, much misled indeed. Basically, I had to update a couple of big projects to remove fields that are null from the JSON response. All of that while listening to stuff like the Ding Dong Song, Purple Lamborghini and Slipknot’s Psychosocial. On the first project I had to add a little line to have that working, so the second one should be the same, right? I actually thought I would grab another task before leaving that improvised hackathon.

The thought journey

It was all fun and games until, surprise surprise, the second project used a custom formatter. It was there to do some processing on the response objects and update some values to match our apps’ implementation. Fair enough. But the magical line of configuration to ignore null fields when rendering JSON did not work there. The obvious thing to do was to get rid of that formatter and figure out a way to keep that object value setting logic without touching the project classes. I say obvious because there were hundreds of classes there and I did not feel like changing all of them, even to simply add an interface and its implementation. I had to set properties that may exist on hundreds of objects. This is how I started googling, going through StackOverflow to try and figure out how to achieve that.

The much lower scale Newton moment

During that thinking process I realized I could try to do something with dynamic objects instead of adding a value during the JSON formatting process. Interestingly enough, a few minutes later the StackOverflow ex machina did its thing and I found the post “How to extend class with an extra property”. The answer from unsung hero Mario Stopfer shed light on something I did not know was possible. You guessed it: dynamic interface implementation at runtime. Not really in the form I needed, but it opened a door of possibilities to me and a new perspective on the property setting issue. And I started coding, building, testing, debugging like crazy. After a few hours, I achieved what I did not know was a possibility a few hours before. Dynamic interface implementation was there, working and solving my issue.

Dynamic interface implementation: Epilogue

I had a nice afternoon of coding at the office, lots of laughs and problem solving that provided me with an article I really enjoyed writing and a new class for my in-progress .NET utility project, which should appear on GitHub when mature enough. However, since you have been reading all of this, you will have the code in a preview gist along with sample code. The only issue is that it does not work with the new .NET Core (yet?), so I will update it at a later stage when I find the time and a solution. That, or add another version. Without any further ado, here is what I called the TypeMixer.
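The TypeMixer gist itself is not embedded on this page, and the sketch below is not the author’s class; it is only a minimal illustration of the general idea (implementing an interface at runtime with Reflection.Emit by forwarding calls to a wrapped object), with hypothetical names throughout.

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public interface IGreeter
{
    string Greet();
}

// A class with a matching method that does not declare the interface.
public class PlainGreeter
{
    public string Greet() => "Hello from a plain object";
}

public static class RuntimeInterfaceMixer
{
    // Builds a proxy type at runtime that implements TInterface by forwarding
    // every interface method to a public method with the same signature on the target.
    public static TInterface Wrap<TInterface>(object target) where TInterface : class
    {
        Type interfaceType = typeof(TInterface);
        Type targetType = target.GetType();

        AssemblyBuilder assembly = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("RuntimeProxies"), AssemblyBuilderAccess.Run);
        ModuleBuilder module = assembly.DefineDynamicModule("Main");
        TypeBuilder proxy = module.DefineType(
            targetType.Name + "_" + interfaceType.Name + "_Proxy",
            TypeAttributes.Public | TypeAttributes.Class);
        proxy.AddInterfaceImplementation(interfaceType);

        // Field holding the wrapped instance, set by the generated constructor.
        FieldBuilder targetField = proxy.DefineField("_target", targetType, FieldAttributes.Private);
        ConstructorBuilder ctor = proxy.DefineConstructor(
            MethodAttributes.Public, CallingConventions.Standard, new[] { targetType });
        ILGenerator ctorIl = ctor.GetILGenerator();
        ctorIl.Emit(OpCodes.Ldarg_0);
        ctorIl.Emit(OpCodes.Call, typeof(object).GetConstructor(Type.EmptyTypes));
        ctorIl.Emit(OpCodes.Ldarg_0);
        ctorIl.Emit(OpCodes.Ldarg_1);
        ctorIl.Emit(OpCodes.Stfld, targetField);
        ctorIl.Emit(OpCodes.Ret);

        // Emit one forwarding method per interface method; assumes the target
        // exposes a matching public method for each of them.
        foreach (MethodInfo method in interfaceType.GetMethods())
        {
            Type[] parameters = Array.ConvertAll(method.GetParameters(), p => p.ParameterType);
            MethodInfo targetMethod = targetType.GetMethod(method.Name, parameters);
            MethodBuilder impl = proxy.DefineMethod(
                method.Name,
                MethodAttributes.Public | MethodAttributes.Virtual,
                method.ReturnType,
                parameters);
            ILGenerator il = impl.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);
            il.Emit(OpCodes.Ldfld, targetField);
            for (byte i = 1; i <= parameters.Length; i++)
            {
                il.Emit(OpCodes.Ldarg_S, i);
            }
            il.Emit(OpCodes.Callvirt, targetMethod);
            il.Emit(OpCodes.Ret);
            proxy.DefineMethodOverride(impl, method);
        }

        return (TInterface)Activator.CreateInstance(proxy.CreateType(), target);
    }
}

// Usage:
// IGreeter greeter = RuntimeInterfaceMixer.Wrap<IGreeter>(new PlainGreeter());
// Console.WriteLine(greeter.Greet());
```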

Going full necromancian on old projects

Reading Time: 2 minutes

Old projects often end up in what I call development hell. This odd place where some projects with good potential become stale after a release or die because they came too late to a market. Very often personal projects end up there, even when they are open source. Open source really seems like a tool to help spread and share knowledge worldwide, depending on the kind of project. Before open source became democratized with the likes of GitHub, a huge number of personal projects probably took years to be released, when they were not abandoned.

Today I decided to do the only kind of necromancy one should do: bringing an old project back to life. A project for which I had already written the code for the model and business logic. I still had to create a user interface and build a user experience as sleek as possible. However, I stopped it because of my vision of a market I thought was crowded, along with a lack of time. Now I see clearly that was not as true as I thought.

Here we are. Two months without posting here, eleven months without working on a personal development project, and I am back at it. I spent my past weekends alternating between the gym, parties and sleep. Luckily, I work in a position where I can keep my brain stimulated. Indeed, when not investigating an issue on one of our live apps nor working on our platform features, I am defining development tools and processes to be used at company scale within the next months. Eventually, a lot of cool things will come out of that. I will definitely post a few related tutorials depending on my schedule.

About the blog, I will try to post more regularly than I have, maybe a tutorial. In terms of work, I would like to share with you the video that we recorded last week. It is basically the new company careers video, which I like not just because I am in it. You can definitely check it out below:

If you arrived this far in the post, first I would like to thank you for reading and watching the video. Second, do not abandon your old projects if you are not 100% sure they are dead. Check your old source code, even if it is to mock your old coding style. In the end you could actually have something worth the hassle.

Retail week hackathon 2016 aftermath

Reading Time: 3 minutes

Retail week hackathon 2016 result

At this time last week, I was at home playing League of Legends to break away from the frustration of losing at the Retail Week hackathon 2016. I was frustrated because I was, well I still am, convinced that our idea was good enough to win. Actually, I wanted to write a post immediately after to express the mixed feelings I felt that day. On one hand, I loved the experience and the excitement of suiting up as my school days nerd self. On the other hand, I hated losing in a way that did not feel fair. I discussed the outcome of the hackathon with people around me and they felt like we should have won.

I still cannot believe that what won was a system to book an hour in-store to discuss an item you saw online with an employee. An employee whose job is to sell you the said item, especially in 2016. Nowadays we are only clicks away from user reviews from all around the world. You can even find video reviews, at least on YouTube. I guess it made more sense to the judges, otherwise the Retail Week hackathon 2016 winner trophy would be on Poq’s trophy shelf.

When we first got the idea of the self-checkout, we thought that the hardest challenge was having a working prototype. We were so wrong. We had a working prototype 4 hours before the hackathon ended. From there, we spent the rest of the time testing and fixing bugs to ensure the presentation’s success. The presentation did not go perfectly, but the idea and the product were there. To be fair, I think that pretty much all the teams had a much better presentation for lesser ideas, which could be what cost us the gold. When the judges are involved in retail, during a fashion event, I guess that is key.

The self-checkout idea

We built a self-checkout app that allows customers in a store to find items they want to purchase, with indoor location using Estimote beacons and geolocation to handle both indoor and outdoor app behaviour. The most interesting part is that you can scan your items so they are added to your basket, and when you leave the store you get charged automatically. We even built a mini-backend displaying the last paid basket.

We built a solid proof of concept, even though there are some security flaws that are fixable on the operational side. For the security tags, just add a device connected to the store system against which you scan the QR code generated for your order, allowing you to unlock the tags on the number of items you bought. Going even further, we could use security tags that emit the value from the item barcode to enforce that someone is not unlocking something they did not buy. It was a 24-hour hackathon and we still thought about some corner cases.

We did focus on bringing people back to the store. I think that we did show creativity and innovation using the latest technology. Maybe we did not manage to get the idea across to the judges, but I know that this is the future of retail. Walk into a store, pick what you need and go. No more queuing hassle. Basically shoplifting without the criminal aspect.

Learning and progressing

I may go next year if we put together a team again and learn from our mistakes. Technical advancement is not the focus, presentation is. Coding the whole night to get a working prototype is not the focus, sugar-coating is. Still, it will remain a special moment to me because I did have fun. The self-checkout will be in your hands in a few years, and I will do my best for it. I went, I saw, I learned; that is probably what I do best, learning. I have learned things my whole life, both at school and out of it. Even now that I have worked for a few years, I still try to learn as many things as possible. Learning is key to evolution, it is the key to becoming a better version of oneself.

A great way of learning is to take part in open source development, looking at other people’s code, taking on challenges. For a few days now, I have been helping other developers on community-based websites such as StackOverflow and GitHub. I have had an account on both for some time but did not do much with them. The good part is that on one hand I can learn and sharpen my skills by taking on issues, and at the same time I help others. Well, there is not much downside. On Tuesday I submitted my first (non-professional) pull request and it got approved and merged pretty much instantly. It was not much but it still feels nice, you can check it here. And yesterday I got my first upvotes on a few posts on StackOverflow, showing that giving time is enough sometimes.

That’s where I will end today’s post before I start rambling about random stuff. Thank you for reading.