Minimum Viable Testing - Get Maximum Stability With Minimum Effort


Even though Test Driven Development (TDD)1 saves time & money in the long run, there are many excuses why developers don't test their software. In this article, let's look at Minimum Viable Testing (aka Risk-Based Testing)2 and how it helps achieve maximum stability with minimum effort.

Minimum Viable Testing

The Pareto principle states that 80% of consequences come from 20% of the causes. In software products, 80% of the users use 20% of the features. A bug in these 20% of features is likely to cause a higher impact than one in the rest. It makes sense to prioritize testing these features over the rest.

Assessing the importance of a feature or the risk of a bug depends on the product we are testing. For example, a paid feature usually gets more importance than a free one.

In TDD, we start by writing tests and then write code. Compared to TDD, MVT consumes less time. When it comes to testing, there are unit tests, integration tests, snapshot tests, UI tests and so on.

When getting started with testing, it is important to have integration tests in place to make sure things are working end to end. The cost of integration tests is also much cheaper compared to unit tests, since a single integration test exercises many units at once.

Most SaaS products have a web/mobile application and an API server to handle requests from the front-end applications. Having UI tests for the applications and integration tests for the APIs covering the most crucial functionality should cover the ground. This makes sure any new code that is pushed doesn't break the core functionality.
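For instance, a single integration-style test over the most crucial flow can be sketched like this (the checkout functions and amounts are hypothetical, just to illustrate the idea):

```python
# One integration-style test that exercises a (hypothetical) checkout
# flow end to end, instead of a unit test per function.

def apply_discount(price, percent):
    """Return price after applying a percentage discount."""
    return price * (1 - percent / 100)

def compute_total(prices, discount_percent=0):
    """Total of all prices after discount, rounded to cents."""
    return round(sum(apply_discount(p, discount_percent) for p in prices), 2)

def test_checkout_total():
    # Crucial path: totals with and without a discount
    assert compute_total([10.0, 20.0]) == 30.0
    assert compute_total([10.0, 20.0], discount_percent=10) == 27.0

test_checkout_total()
```

One such test covers both helper functions together, which is the spirit of MVT: maximum coverage of the crucial path for minimum effort.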


Even though RBT helps with building a test suite quicker than TDD, it shouldn't be seen as a replacement for TDD. We should see RBT as a starting point for testing, from which we can take the next steps towards achieving full stability for the product.

Finding Performance Issues In Python Web Apps with Sentry


Earlier, we have seen a couple of articles here on finding performance issues1 and how to go about optimizing them2. In this article, let's see how to use Sentry Performance to find bottlenecks in Python web applications.

The Pitfalls

A common pitfall while identifying performance issues is profiling in the development environment. Performance in the development environment will be quite different from production due to differences in hardware, database size, network latency, etc.

In some cases, performance issues could be happening only for certain users and in specific scenarios.

Replicating production performance on a development machine is costly. To avoid this, we can use an APM tool to monitor performance in production.

Sentry Performance

Sentry is a widely used open-source error tracking tool. Recently, it introduced Performance to track performance as well. Sentry doesn't need any agent running on the host machine to track performance. Enabling performance monitoring is just a single-line change in the Sentry3 setup.

import sentry_sdk

sentry_sdk.init(
    # Trace half the requests
    traces_sample_rate=0.5,
)

Tracing performance adds overhead4 to the web application's response time. Depending on the traffic, server capacity and acceptable overhead, we can decide what percentage of requests to trace.
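The sampling decision itself is nothing magical — conceptually, tracing half the requests boils down to something like this (a simplified sketch, not Sentry's actual implementation):

```python
import random

def should_trace(sample_rate=0.5):
    # Trace a request with probability `sample_rate`
    return random.random() < sample_rate

# Out of many requests, roughly half get traced
traced = sum(should_trace() for _ in range(10_000))
print(f"{traced} of 10000 requests traced")
```

Raising the rate gives more complete data at the cost of more overhead; lowering it does the opposite.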

Once performance is enabled, we can head over to the Sentry web application and see traces for the transactions along with an operation breakdown as shown below.

At a glance, we can see the percentage of time spent in each component, which pinpoints where the performance problem lies.

If the app server is taking most of the time, we can explore the spans in detail to pinpoint the exact line where most time is spent. If the database is taking most of the time, we can look at the number of queries it is running and the slowest queries to pinpoint the problem.
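The operation breakdown is essentially the share of transaction time each component consumed. A toy computation with made-up span durations shows the idea:

```python
# Hypothetical span durations (in ms) for one transaction, grouped by operation
spans = {'db': 120, 'http': 40, 'app': 240}

total = sum(spans.values())
breakdown = {op: round(100 * ms / total, 1) for op, ms in spans.items()}
print(breakdown)
```

With these numbers, the app server accounts for 60% of the transaction time, so that is where to start digging.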

Sentry also provides an option to set alerts when there are performance issues. For example, when the response time exceeds a limit for a specified duration, Sentry can alert developers via email, Slack or any other integration channel.


There are paid APM tools like New Relic and AppDynamics which require an agent to be installed on the server. As mentioned in earlier articles, there are open-source packages like django-silk to monitor performance. It takes time to set up these tools and pinpoint the issue.

Sentry is the only agentless APM tool5 available for Python applications. Setting up Sentry Performance is quite easy, and performance issues can be pinpointed without much hassle.

Make Python Docker Builds Slim & Fast


When using Docker, if the build takes a long time or the image is huge, it wastes system resources as well as our time. In this article, let's see how to reduce both build time and image size when using Docker for Python projects.


Let us take a hello world application written in Flask.

import flask

app = flask.Flask(__name__)

@app.route('/')
def home():
    return 'hello world - v1.0.0'

Let's create a requirements.txt file to list the Python packages required for the project — here, flask and pandas.


The Pandas binary wheel is ~10 MB. It is included in the requirements to see how Python packages affect the Docker image size.

Here is our Dockerfile to run the flask application.

FROM python:3.7

ADD . /app

WORKDIR /app

RUN python -m pip install -r requirements.txt


ENTRYPOINT [ "python" ]

CMD [ "-m", "flask", "run" ]

Let's use the following commands to measure the image size & build time with/without cache.

$ docker build . -t flask:0.0 --pull --no-cache
[+] Building 45.3s (9/9) FINISHED

$ touch  # modify file

$ docker build . -t flask:0.1
[+] Building 15.3s (9/9) FINISHED

$ docker images | grep flask
flask               0.1     06d3e985f12e    1.01GB

With the current Dockerfile, here are the results.

1. Install requirements first

FROM python:3.7

WORKDIR /app

ADD ./requirements.txt /app/requirements.txt

RUN python -m pip install -r requirements.txt

ADD . /app


ENTRYPOINT [ "python" ]

CMD [ "-m", "flask", "run" ]

Let us modify the Dockerfile to install the requirements first and then add the code to the Docker image.

Now, a build without cache takes almost the same time. With cache, the build completes in a second. Since Docker caches the image layer by layer, the Python package installation layer is reused, thereby reducing the build time.

2. Disable Cache

FROM python:3.7

WORKDIR /app

ADD ./requirements.txt /app/requirements.txt

RUN python -m pip install -r requirements.txt --no-cache-dir

ADD . /app


ENTRYPOINT [ "python" ]

CMD [ "-m", "flask", "run" ]

By default, pip caches the downloaded packages. Since we don't need a cache inside Docker, let's disable it by passing the --no-cache-dir argument.

This reduced the docker image size by ~20MB. In real-world projects, where there are a good number of dependencies, the overall image size will be reduced a lot.

3. Use slim variant

Till now, we have been using the de facto Python variant. It has a large number of common Debian packages. There is a slim variant that doesn't contain all these common packages4. Since we don't need them, let's use the slim variant.

FROM python:3.7-slim


This reduced the docker image size by ~750 MB without affecting the build time.

4. Build from source

Python packages can be installed via wheels (.whl files) for a faster and smoother installation, or built from source code. If we look at the Pandas project files on PyPI1, it provides both wheels as well as tar.gz source files. Pip prefers wheels over source code, so the installation process is much smoother.

To reduce the Docker image size, we can build from source instead of using the wheel. This increases the build time, as the Python package takes some time to compile during the build.

Here the image size is reduced by ~20 MB, but the build time has increased to 15 minutes.

5. Use Alpine

Earlier we used the Python slim variant as the base image. There is also an Alpine variant which is much smaller than slim. One caveat with Alpine is that Python wheels won't work with this image2.

We have to build all packages from source. For example, packages like TensorFlow provide only wheels for installation. To install them on Alpine, we have to build from source, which takes additional effort to figure out dependencies and install them.

Using Alpine reduces the image size by ~70 MB, but it is not recommended since wheels won't work with this image.

All the docker files used in the article are available on github3.


We started with a Docker image of 1.01 GB and reduced it to 0.13 GB. We also optimized build times using the Docker caching mechanism. We can use the appropriate steps to optimize the build for size, speed or both.

How To Deploy Mirth Connect To Kubernetes


NextGen Connect (previously Mirth Connect) is a widely used integration engine for information exchange in the health-care domain. In this article, let us see how to deploy Mirth Connect to a Kubernetes cluster.

Deployment To k8s

From version 3.8, NextGen has started providing official Docker images for Connect1. By default, the Connect image exposes ports 8080 and 8443. We can start a Connect instance locally by running the following command.

$ docker run -p 8080:8080 -p 8443:8443 nextgenhealthcare/connect

We can use this docker image and create a k8s deployment to start a container.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mirth-connect
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mirth-connect
  template:
    metadata:
      labels:
        app: mirth-connect
    spec:
      containers:
      - name: mirth-connect
        image: nextgenhealthcare/connect
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        - name: hl7-test
          containerPort: 9001
        env:
          - name: DATABASE
            value: postgres
          - name: DATABASE_URL
            value: jdbc:postgresql://

This deployment file can be applied on a cluster using kubectl.

$ kubectl apply -f connect-deployment.yaml

To access this container, we can create a service to expose the deployment to the public.

apiVersion: v1
kind: Service
metadata:
  name: mirth-connect
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ap-south-1:foo
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: mirth-connect
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
    - name: hl7-test
      port: 9001
      targetPort: 9001
      protocol: TCP

This will create a load balancer in AWS through which we can access the Mirth Connect instance. If an ingress controller is present in the cluster, we can use it directly instead of a separate load balancer for this service.

Once Mirth Connect is up & running, we might have to create HL7 channels running on various ports. In the above configuration files, we have exposed port 9001 as an HL7 port for testing a channel. Once we configure Mirth channels, we need to expose the appropriate ports in the deployment as well as the service, similar to this.


Earlier, there were no official Docker images for Mirth Connect and it was a bit difficult to dockerize it and deploy. With the release of official Docker images, deploying Mirth Connect to k8s or any other container orchestration platform has become much easier.

Serial Bluetooth Terminal With Python Asyncio


The PySerial package provides a tool called miniterm1, which provides a terminal to interact with any serial port.

However, miniterm sends each and every character as we type instead of sending the entire message at once. In addition, it doesn't provide any timestamps on the messages transferred.

In this article, let's write a simple terminal to address the above issues.

Bluetooth Receiver

The pyserial-asyncio2 package provides an async I/O interface for communicating with serial ports. We can write a simple function to read and print all messages received on a serial port as follows.

import sys
import asyncio
import datetime as dt

import serial_asyncio

async def receive(reader):
    while True:
        data = await reader.readuntil(b'\n')
        now = str(
        print(f'{now} Rx <== {data.strip().decode()}')

async def main(port, baudrate):
    reader, _ = await serial_asyncio.open_serial_connection(url=port, baudrate=baudrate)
    receiver = receive(reader)
    await asyncio.wait([receiver])

port = sys.argv[1]
baudrate = int(sys.argv[2])

loop = asyncio.get_event_loop()
loop.run_until_complete(main(port, baudrate))

Now we can connect a phone's Bluetooth to a laptop's Bluetooth. From the phone, we can send messages to the laptop using a Bluetooth terminal app like Serial Bluetooth Terminal4.

Here is a screenshot of messages being sent from an Android device.

We can listen to these messages on laptop via serial port by running the following command.

$ python /dev/cu.Bluetooth-Incoming-Port 9600
2020-08-31 10:44:50.995281 Rx <== ping from android
2020-08-31 10:44:57.702866 Rx <== test message

Bluetooth Sender

Now let's write a sender to send messages typed in the terminal to the Bluetooth device.

To read input from the terminal, we need to use aioconsole3. It provides an async equivalent of the input function to read input typed in the terminal.

import sys
import asyncio
import datetime as dt

import serial_asyncio
import aioconsole

async def send(writer):
    stdin, stdout = await aioconsole.get_standard_streams()
    async for line in stdin:
        data = line.strip()
        if not data:
            continue
        writer.write(data + b'\n')
        now = str(
        print(f'{now} Tx ==> {data.decode()}')

async def main(port, baudrate):
    _, writer = await serial_asyncio.open_serial_connection(url=port, baudrate=baudrate)
    sender = send(writer)
    await asyncio.wait([sender])

port = sys.argv[1]
baudrate = int(sys.argv[2])

loop = asyncio.get_event_loop()
loop.run_until_complete(main(port, baudrate))

We can run the program with the following command and send messages to phone's bluetooth.

$ python /dev/cu.Bluetooth-Incoming-Port 9600

ping from mac
2020-08-31 10:46:52.222676 Tx ==> ping from mac
2020-08-31 10:46:58.423492 Tx ==> test message

Here is a screenshot of messages received on the Android device.


If we combine the above two programs, we get a simple Bluetooth client to interact with any Bluetooth device via a serial interface. Here is the complete code5 for the client.

In the next article, let's see how to interact with Bluetooth LE devices.

Set Default Date For Date Hierarchy In Django Admin


When we monitor daily events from Django admin, most of the time we are interested in events related to today. Django admin provides a date-based drill-down navigation page via the ModelAdmin.date_hierarchy1 option. With this, we can navigate to any date to filter out events related to that date.

One problem with this drill-down navigation is that we have to navigate to today's date every time we open a model in admin. Since we are interested in today's events most of the time, setting today's date as the default filtered date solves the problem.

Set Default Date For Date Hierarchy

Let us create an admin page to show all users who logged in today. Since the User model is already registered in admin by default, let us create a proxy model to register it again.

from django.contrib.auth.models import User

class DjangoUser(User):
    class Meta:
        proxy = True

Let's register this model in admin to show logged-in users' details along with a date hierarchy.

from django.contrib import admin

@admin.register(DjangoUser)
class MetaUserAdmin(admin.ModelAdmin):
    list_display = ('username', 'is_active', 'last_login')
    date_hierarchy = 'last_login'

If we open the DjangoUser model in the admin page, it will show a drill-down navigation bar like this.

Now, if we drill down to a particular date, Django adds additional query params to the admin URL. For example, if we visit the date 2020-06-26, the corresponding query params are /?last_login__day=26&last_login__month=6&last_login__year=2020.
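That query string can be reconstructed from any date with the standard library alone — a quick standalone sketch (the field name last_login is taken from the example above):

```python
from datetime import date
from urllib.parse import urlencode

def date_hierarchy_params(field, day):
    # Build the query params Django admin uses for a date-hierarchy drill-down
    return {
        f'{field}__day': day.day,
        f'{field}__month': day.month,
        f'{field}__year': day.year,
    }

params = date_hierarchy_params('last_login', date(2020, 6, 26))
print(urlencode(params))
# last_login__day=26&last_login__month=6&last_login__year=2020
```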

We can override the changelist view and set default params to today's date if there are no query params. If there are query params, we render the original response.

from django.contrib import admin
from django.shortcuts import redirect
from django.utils.http import urlencode
from django.utils.timezone import now

class MetaUserAdmin(admin.ModelAdmin):
    list_display = ('username', 'is_active', 'last_login')
    date_hierarchy = 'last_login'

    def changelist_view(self, request, extra_context=None):
        if request.GET:
            return super().changelist_view(request, extra_context=extra_context)

        date = now().date()
        params = ['day', 'month', 'year']
        field_keys = ['{}__{}'.format(self.date_hierarchy, i) for i in params]
        field_values = [getattr(date, i) for i in params]
        query_params = dict(zip(field_keys, field_values))
        url = '{}?{}'.format(request.path, urlencode(query_params))
        return redirect(url)

Now if we open the same admin page, it will redirect to today's date by default.


In this article, we have seen how to set a default date for date_hierarchy in the admin page. We can achieve similar filtering by setting default values for list_filter or search fields to filter items related to any specific date.

How Do Dart & Flutter Stateful Hot Reload Work? - Part 1

This will be a series of articles exploring the internals of Dart & Flutter stateful hot reload. In this first article, let's write a simple Dart program to see stateful hot reload in action, and then delve into the details of what is happening.

Stateful Hot Reload

import 'dart:async';

int total = 0;

void adder(_) {
  int delta = 2;
  total += delta;

  print("Total is $total. Adding $delta");
}

void main() {
  Timer.periodic(Duration(seconds: 2), adder);
}

In the above program1, we use Timer.periodic2 to create a timer which calls the adder function every 2 seconds.

We can run this program from the command line using

$ dart --observe hot_reload.dart
Observatory listening on
Total is 2. Adding 2
Total is 4. Adding 2
Total is 6. Adding 2
Total is 8. Adding 2

This will start executing the program and will provide a link to observatory3, a tool to profile/debug Dart applications.

As the program is executing, let's open it in an editor and change delta from 2 to 3.

  // change this
  // int delta = 2;

  // change to
  int delta = 3;

If we restart the program, it starts executing from the beginning and loses its state.

$ dart --observe hot_reload.dart
Observatory listening on
Total is 3. Adding 3
Total is 6. Adding 3
Total is 9. Adding 3

Instead of restarting, we can open the observatory link in the browser, open the main isolate and click on the Reload Source button.

As we can see from the output below, it did a stateful hot reload and the state of the program is preserved instead of starting from the beginning.

$ dart --observe hot_reload.dart
Observatory listening on
Total is 2. Adding 2
Total is 4. Adding 2
Total is 6. Adding 2
Total is 8. Adding 2
Total is 11. Adding 3 # after hot reload
Total is 14. Adding 3
Total is 17. Adding 3
Total is 20. Adding 3

During a hot reload, the Dart VM applies changes to a live program4. If the source code of a method has changed, the VM replaces the method with the new updated one. The next time the program looks up that method, it finds the updated version and uses it.


In this article, we have seen how hot reload works by writing a simple program in Dart. In the upcoming articles, let's dive into the Dart VM internals, the Flutter architecture and the other nitty-gritties of hot reload.





Tips On Improving kubectl Productivity

kubectl is a CLI tool to control Kubernetes clusters. As we start using kubectl to interact with multiple clusters, we end up running lengthy commands and even multiple commands for simple tasks like running a shell in a container.

In this article, let's learn a few tips to improve our productivity when using kubectl.


Aliases in general improve productivity when using a shell.

kubectl provides shortcuts for commands. For example,

# instead of running full command
$ kubectl get services

# we can use short hand version
$ kubectl get svc

It also provides completion for commands.

# enable completion for zsh
$ source <(kubectl completion zsh)

# type `kubectl ` and hit `<TAB>` will show possible options
$ kubectl
annotate       attach         cluster-info
api-resources  auth           completion
api-versions   autoscale      config
apply          certificate    convert

# type `kubectl g`, and hit `<TAB>` will show possible options
$ kubectl get

However, setting up aliases for the most commonly used commands will save a lot of time.

alias k='kubectl'

alias kdp='kubectl describe pod'
alias kgp='kubectl get pods'
alias kgpa='kubectl get pods --all-namespaces'
alias ket='kubectl exec -it'
alias wkgp='watch -n1 kubectl get pods'

alias kga='kubectl get all'
alias kgaa='kubectl get all --all-namespaces'

alias kaf='kubectl apply -f'

alias kcgc='kubectl config get-contexts'
alias kccc='kubectl config current-context'

If you don't want to write your own aliases, there is kubectl-aliases which provides an exhaustive list of aliases. We can source this file in our rc file and start using them.

Use Functions

Even though aliases help us to run lengthy commands with an alias, there are times where we have to run multiple commands to get things done for a single task.

For example, to view the Kubernetes dashboard, we have to get the token, start the proxy server and then open the URL in the browser. We can write a simple function as shown below to do all of that.

kp() {
    kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') | grep 'token:' | awk '{print $2}' | pbcopy
    open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
    kubectl proxy
}

Now from the shell, when we run kp, the function will copy the token to the clipboard, open the Kubernetes dashboard in the browser and start the proxy server.

Use Labels

To describe a pod or tail logs from a pod, we can use pod names.

$ kubectl get pods
NAME                             READY   STATUS
hello-world-79d794c659-tpfv2     1/1     Running

$ kubectl describe pod hello-world-79d794c659-tpfv2

$ kubectl logs -f pod/hello-world-79d794c659-tpfv2

When the app gets updated, the pod name also changes. So, instead of using the pod name, we can use pod labels.

$ kubectl describe pod -l app=hello-world

$ kubectl logs -f -l app=hello-world

Kubectl Tools

k8s has a good ecosystem, and the following tools aim to make certain k8s tasks easier.

kubectl-debug - Debug pod by a new container with all troubleshooting tools pre-installed.

kube-forwarder - Easy to use port forwarding manager.

stern - Multi pod and container log tailing.

kubectx - Quick way to switch between clusters and namespaces.

kubebox - Terminal and Web console for Kubernetes.

k9s - Interactive terminal UI.

kui - Hybrid CLI/UI tool for k8s.

click - Interactive controller for k8s.

lens - Standalone cross-platform k8s IDE.


In this article we have seen some useful methods as well as tools to improve productivity with kubectl. If you spend a lot of time interacting with Kubernetes clusters, it is important to notice your workflows and find better tools or ways to improve productivity.

Continuous Deployment To Kubernetes With Skaffold

In this article, let us see how to set up a continuous deployment pipeline to Kubernetes in CircleCI using Skaffold.


You should have a Kubernetes cluster in a cloud environment or on your local machine. Check your cluster status with the following commands.

$ kubectl cluster-info
$ kubectl config get-contexts

You should know how to manually deploy your application to kubernetes.

# push latest docker image to container registry
$ docker push chillaranand/library

# deploy latest image to k8s
$ kubectl apply -f app/deployment.yaml
$ kubectl apply -f app/service.yaml


Skaffold is a CLI tool to facilitate continuous development and deployment workflows for Kubernetes applications.

Skaffold binaries are available for all platforms. Download the binary file for your OS and move it to the bin folder.

$ curl -Lo skaffold
$ chmod +x skaffold
$ sudo mv skaffold /usr/local/bin

Inside your project root, run the init command to generate a config file. If your project has k8s manifests, it detects them and includes them in the configuration file.

$ skaffold init
Configuration skaffold.yaml was written

$ cat skaffold.yaml
apiVersion: skaffold/v2beta1
kind: Config
metadata:
  name: library
build:
  artifacts:
  - image:
deploy:
  kubectl:
    manifests:
    - kubernetes/deployment.yaml
    - kubernetes/service.yaml

To deploy latest changes to your cluster, run

$ skaffold run

This will build the Docker image, push it to the registry and apply the manifests in the cluster. Now, k8s will pull the latest image from the registry and create a new deployment.

CircleCI Workflow

version: 2.1

orbs:
  aws-cli: circleci/aws-cli@0.1.19
  kubernetes: circleci/kubernetes@0.11.0

jobs:
  deploy:
    steps:
      - setup_remote_docker

      - aws-cli/setup:
          profile-name: default

      - kubernetes/install-kubectl:
          kubectl-version: v1.15.10

      - checkout

      - run:
          name: container registry log in
          command: |
            sudo $(aws ecr get-login --region ap-south-1 --no-include-email)

      - run:
          name: install skaffold
          command: |
            curl -Lo skaffold
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin

      - run:
          name: update kube config to connect to the required cluster
          command: |
            aws eks --region ap-south-1 update-kubeconfig --name demo-cluster

      - run:
          name: deploy to k8s
          command: |
            skaffold run

CircleCI orbs are shareable packages to speed up CI setup. Here we are using the aws-cli and kubernetes orbs to easily install and set them up inside the CI environment.

Since CircleCI builds run in a Docker container, to run Docker commands inside the container we have to specify the setup_remote_docker key so that a separate environment is created for it.

The remaining steps are self-explanatory.


Here we have seen how to set up CD to Kubernetes in CircleCI. If we want to set this up in another CI like Jenkins or Travis, instead of using orbs we have to use system package managers like apt-get to install the tools. All other steps remain the same.

Work From Home Tips For Non-remote Workers

Remote-first and remote-friendly companies have a different work culture & communication process compared to non-remote companies. Due to the COVID-191 worldwide pandemic, a majority of workers who didn't have prior remote experience are now WFH (working from home). This article provides some helpful tips for such people.

Work Desk

It is important to have a dedicated room or at least a desk for work. This creates a virtual boundary between your office work and personal work. Otherwise, you will end up working from the bed, dining table, kitchen, etc., which will result in body pains due to bad posture.

Get Ready

Image Credit: raywenderlich 2

Start your daily routine as if you are going to the office. It is easy to stop caring about personal grooming and attire when WFH. Your attire can influence your focus and productivity.

If you find getting ready hard, schedule video calls for all meetings with your colleagues. This might give you some additional motivation to get ready early in the morning. When it's work time, go to your work desk and start working.

Self Discipline

Schedule your work time. Whenever possible, try to stick to office working hours. Without a proper schedule, you will either end up under-working or over-working as your personal and office work get mixed up.

Take regular breaks during work hours. Without any distractions, it is easy to get lost in the pixels for long durations. Taking short breaks for a quick walk and getting some fresh air outside will freshen you up.

With unlimited access to the kitchen and snacks, it is hard to avoid binge eating at home. But at least avoid binge eating during office hours.

Exercise. Since WFH involves sitting in a chair throughout the day, staying physically active is challenging, especially during this pandemic. Exercising for a few minutes every morning, helping out in the kitchen by making a meal or doing the dishes, cleaning your house, etc., should help in staying physically active.

Seek Help

WFH can be lonely at times as the social interactions are quite less. Schedule 1 to 1 meetings or virtual coffe meetings with your colleagues to increase social interactions. Discuss WFH problems with your colleagues, friends and remote communities to see how they are tacking those problems.