We recently developed a News Tablet application for the Samsung Knox, and in the process were able to institute various good engineering practices into our workflow. Read on to find out more!


For quite some time now, we have been exposed to good engineering practices: we visited GovTech, who showed us their engineering quarters; we attended conferences; and we read articles advocating these practices. While we agree with and recognise their benefits, adopting them requires much time, resources and proper planning.

It was with the recent development of the News Tablet application for the Samsung Knox that we were able to institute these good engineering practices into our engineering workflow.

As a quick introduction, the News Tablet application is a collaboration between SPH and Samsung to produce a mobile application, exclusive to the Samsung Knox, that allows users to read the various SPH publications’ e-papers on the device.

Our engineering workflow

We have an executable build for every commit in the main branches.

We employ Bitrise (many thanks to our QA engineers, who did various product studies and recommended this tool) as the central orchestrator: it performs code linting, executes the unit tests (with coverage reported to Codecov), performs a test build of the app, initiates a code review session on GitHub, and, after the review is completed, generates a release build for the app store. It essentially serves as the main conveyor belt that carries our code from our hands into the end product.

Quite a number of processes, isn’t it? Let us look at some of the salient ones in detail.

1. Code Linting

Back in our school days, when we worked on group projects, each team member would handle a different part: some would compile and analyse the statistics, while others wrote the introduction, the findings and the conclusion. Nearing the submission date, we would discover various spelling or grammatical errors (which MS Word could not detect, or which a team member blatantly ignored), along with facts or data that were wrong. The team leader would then do the final rounds of checking and correcting these errors. It was a very manual process.

In our context, to make sure our code does not have “grammar” errors, we employ code linters to do the checks. Whenever one of us adds new code or changes existing code, the linters run and analyse it. Should there be any “grammar” errors, the linter alerts the team member and stops the code from being carried into the next step. As such, each team member is responsible for writing “grammatically correct” code right at the beginning of the workflow, instead of deferring it to a later QC/QA stage.

An example of new code that has passed linting.

An example of new code that has failed linting, from the Bitrise dashboard.

The linter points out which parts of the code failed, along with suggestions on how to correct them.
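To give a flavour of what a linter catches, here is a small, purely hypothetical Kotlin snippet (not taken from our code base) showing the kind of spacing and formatting rules a linter such as ktlint enforces:

```kotlin
// Hypothetical illustration of a typical lint rule.
// Before: a linter such as ktlint would flag this line for
// missing spaces around the colon, comma and operator:
//
//     fun add(a:Int ,b:Int):Int{ return a+b }
//
// After fixing the formatting, the linter passes:
fun add(a: Int, b: Int): Int {
    return a + b
}

fun main() {
    println(add(2, 3))
}
```

Note that the behaviour of the code is identical before and after; the linter only enforces a consistent, readable style across the whole team.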

2. Unit Testing

We remember tackling long story problem sums, where each statement entailed drawing a model diagram and doing some calculation: while we generally used the correct steps to solve the problem, we would be waylaid by a careless mistake in one of the steps (e.g. forgetting to include the remainder of a long division), which led to an incorrect final answer. Our teachers would always sigh and remind us to “check our answers again”.

Unit tests, in this sense, help us be sure that our individual problem-solving steps always give us the correct and expected answers. We would also highlight that unit tests zoom in on the smaller (or even smallest) units of our code base, essentially testing individual functions by themselves. Hence, when one of us writes new code, it should come with accompanying unit tests to prove its integrity.

An example of a “happy path” (i.e. default scenario with no exception or error conditions) test case, which verifies that the intended functionality works.

An example of an “unhappy path” test case: very valuable, but sadly quite often not taken into consideration by developers during crunch time.
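As a sketch of what these two kinds of tests look like, here is a hypothetical Kotlin example (the function and its name are invented for illustration, not taken from the actual app) that parses an e-paper edition date, with both a happy-path and an unhappy-path assertion:

```kotlin
import java.time.LocalDate
import java.time.format.DateTimeParseException

// Hypothetical helper: parses an edition date string such as
// "2020-06-15", returning null instead of crashing on bad input.
fun parseEditionDate(raw: String): LocalDate? =
    try {
        LocalDate.parse(raw.trim())
    } catch (e: DateTimeParseException) {
        null // unhappy path: malformed input yields null
    }

fun main() {
    // Happy path: a well-formed date parses successfully.
    check(parseEditionDate("2020-06-15") == LocalDate.of(2020, 6, 15))

    // Unhappy path: garbage input must not throw; it returns null.
    check(parseEditionDate("not-a-date") == null)
}
```

The unhappy-path assertion is the one that pays off in production: it pins down what the function does when the real world hands it bad data.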

Codecov, the tool we use to track our unit test coverage, also helpfully indicates areas of our code where we need to write tests. The coverage report visibly shows which code changes are covered by unit tests: hits and misses are shown on the left-most side, in darker green and red respectively.

codecov details

During the initial phase of the project, we covered only 19% of our code with unit tests. With our workflow in place, our test coverage grew steadily to 41% after 3 months! Seeing the chart line ascending is very encouraging.

codecov progress chart

3. Code Reviewing

Only after new code has successfully gone through the automated steps of the workflow do we step in to conduct the manual activity of code review. It is also in this step that another team member is roped in to look at the new code the original author has written.

We take code reviews as opportunities for each of us to show our team members how we solved a particular problem or implemented a function. From this sharing, we invite constructive feedback, either to affirm our solution methodology, or to point out areas that need improvement. Hence, good coding practices are propagated to the whole team, and the whole team benefits!

At the end, after the other team member has gone through the code review and given his/her “LGTM” (“Looks Good To Me”), the Pull Request (PR) is approved and merged back into the dev or master branch, and the application build is generated.

Next steps

As mentioned earlier, it took a considerable amount of effort and time to get to where we are currently. With the workflow in place, we now need to refine the individual steps within it: we plan to do more refactoring of our code, so as to make it more maintainable and more unit-testable. We are also in the process of putting in a code analysis tool to pick out code smells (potentially bad coding structures or habits), so as to give ourselves feedback on where we can write better, industry-standard code.

We are using Sonarcloud to pick out code smells.
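To illustrate what a code smell looks like, here is a hypothetical Kotlin example (the function names and publication codes are invented for illustration) of the kind of duplicated conditional logic such tools commonly flag, together with a simple refactoring:

```kotlin
// Smell: a growing if/else chain mapping publication codes to names.
// Every new publication means another branch, and tools like
// Sonarcloud flag this kind of repetition.
fun publicationNameSmelly(code: String): String {
    if (code == "ST") return "The Straits Times"
    else if (code == "BT") return "The Business Times"
    else if (code == "ZB") return "Lianhe Zaobao"
    else return "Unknown"
}

// Refactor: a lookup map is shorter, clearer and easier to extend.
private val publicationNames = mapOf(
    "ST" to "The Straits Times",
    "BT" to "The Business Times",
    "ZB" to "Lianhe Zaobao"
)

fun publicationName(code: String): String =
    publicationNames[code] ?: "Unknown"
```

Both functions behave identically; the refactored version simply removes the repetition, which is exactly the kind of improvement the analysis tool nudges us towards.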

We also need to reach out to the other teams in our department, assisting them and building their confidence in adopting this workflow, by setting up the linting and application-building steps for them. Getting these easier-to-achieve steps in place will help our colleagues become familiar with the new workflow. With that new-found confidence, more of us can then get started on writing unit tests, which is the trickier part.

There is definitely more to share, and also more to be done in our efforts to revitalize our engineering. We will cover more details in our future posts, so do watch this space!