Timestamp to Relative Time - Kibana Scripted fields

When browsing logs in Kibana, there is a timestamp field on the left for every document. Raw timestamps are difficult to read and comprehend. It would be better if we could convert the timestamp to a human-readable relative time like 5 minutes ago, 1 hour ago, etc.

Kibana Scripted Fields

Kibana provides a feature called scripted fields to create new fields in the index pattern. We can use this feature to convert the timestamp to a relative time.

kibana-relative-time

Go to Stack Management -> Index Patterns -> select the index pattern -> Scripted fields, click on Add scripted field, and add the below script.

// current time and the event time, both in epoch milliseconds
long now = new Date().getTime();
long timestamp = doc['@timestamp'].value.toInstant().toEpochMilli();
long diff = now - timestamp;

// integer division floors the value, which keeps the
// singular and plural branches consistent
if (diff >= 7200000) {
  return (diff / 3600000) + " hours ago";
} else if (diff >= 3600000) {
  return "1 hour ago";
} else if (diff >= 120000) {
  return (diff / 60000) + " minutes ago";
} else if (diff >= 60000) {
  return "1 minute ago";
} else {
  return (diff / 1000) + " seconds ago";
}

Once the field is saved, we can go back to Discover and see the new field in the logs. We can toggle the visibility of the Relative Time field to see the relative time.

kibana-relative-time

Conclusion

Instead of looking at the timestamp and calculating the relative time in our head, we can use the relative time field in Kibana. This makes the logs easier to read and comprehend.

The Strange Case of Dr. Linux and Mr. Mac

A few days back, some of the tests started failing on the CI server. When I tried to run the tests locally, they were passing.

After debugging for a while, I found that the tests were failing because of the case sensitivity of the file system. One of the developers was using Linux and had committed 2 files with the same name but different case (config.json, Config.json).

The Linux file system is case-sensitive, so these 2 files are shown as 2 different files.

linux-file-system

But the Mac/Windows file systems are case-insensitive by default, so out of these 2 files, only one is shown.

mac-file-system

Due to this, the tests were failing on Linux but passing on Mac. Once the case of the file was corrected, the tests started passing on both systems.
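To catch such conflicts early, we can list the tracked files in a repository that differ only by case. This one-liner lowercases every tracked path and prints the duplicates (generic git, nothing project-specific):

$ git ls-files | tr '[:upper:]' '[:lower:]' | sort | uniq -d
config.json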

I have been using a Mac for a long time and never faced this issue. Even though Mac's APFS is case-insensitive by default, we can create a case-sensitive volume using Disk Utility.

case-sensitive-volume

We have to be aware of these differences when working on a project with developers using different operating systems.

Archiving Option Chain Data

BSE & NSE are the prominent exchanges in India, and they provide option chain data for the stocks & indices listed on them.

The option chain data is available only for the current date, not for past dates. This is a problem for traders who want to analyze historical option chain data.

ArchiveBox

ArchiveBox is a tool to archive web pages. It can be used to archive the option chain data for the stocks & indices.

Let's install ArchiveBox.

$ pip install archivebox
$ mkdir option_chain
$ cd option_chain
$ archivebox init
$ archivebox setup

We can start the server (defaults to http://localhost:8000) and add URLs manually to archive them.

$ archivebox server

historical-option-chain

There are 180+ stocks in the FNO segment & 6 indices with weekly expiries. We can write a simple Python script, like the one below, to generate all combinations of URLs for the option chain data and archive them using ArchiveBox.
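Here is a minimal sketch of such a script. The symbol lists are truncated and the URL pattern is an illustrative placeholder, not the exchange's actual endpoint.

# generate_urls.py - sketch; BASE_URL below is an assumed format
symbols = ["RELIANCE", "TCS", "INFY"]  # extend with the full FNO list
indices = ["NIFTY", "BANKNIFTY"]       # and the remaining indices

BASE_URL = "https://www.nseindia.com/option-chain?symbol={}"

with open("urls.txt", "w") as f:
    for name in symbols + indices:
        f.write(BASE_URL.format(name) + "\n")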

Once the URLs are generated, we can use the following command to archive them.

$ archivebox add --input-file urls.txt

These URLs will be archived and stored in the archive directory. Since we want to archive the data regularly, we can schedule the archiving to run daily.

$ archivebox schedule --every=day --depth=0 '{{url}}'

This will archive the option chain data for the stocks & indices on a daily basis.

Conclusion

Browsing the archived data of a single URL is a bit difficult, and the Wayback Machine provides a better interface for it. I have raised an issue regarding this in the ArchiveBox repository. Once the UI issue is resolved, this will serve as the best tool to browse the historical option chain data.

Cross Platform File Explorer in 50 lines of code

In an earlier post, I wrote about why I need a "line count" column in a file explorer and how I wrote a Lua script to see it in the xplr file manager.

xplr has only a terminal interface, which is hard for non-developers to use. I wanted a small team to use this feature, as it would save them several hours of their time. So I decided to write a cross-platform GUI app.

GUI app

Since I am familiar with PySimpleGUI, I decided to write a simple file explorer using it.

Cross Platform File Explorer

As seen in the above screenshot, the file explorer has a "Line Count" column. It is a simple Python script with ~50 lines of code.
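The gist of the script is a table whose rows carry the line count of each file. Below is a condensed sketch of the idea, not the actual LCFileExplorer source.

import os
import PySimpleGUI as sg

def line_count(path):
    # count lines by streaming the file; skip unreadable files
    try:
        with open(path, "rb") as f:
            return sum(1 for _ in f)
    except OSError:
        return "-"

def rows_for(directory):
    # one row per entry: name and its line count ("-" for directories)
    rows = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        rows.append([name, line_count(path) if os.path.isfile(path) else "-"])
    return rows

layout = [[sg.Table(values=rows_for("."), headings=["Name", "Line Count"],
                    expand_x=True, expand_y=True)]]
window = sg.Window("File Explorer", layout, resizable=True)
while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
window.close()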

The project is open source and source code is available at github.com/AvilPage/LCFileExplorer.

Cross Platform

A new user can't directly run this Python script on their machine unless Python is already installed. Even if Python is installed, they have to install the required packages before running it, which needs technical expertise.

To make it easy for non-tech users to run this program, I decided to use PyInstaller to create a single executable file for each platform.
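For a single-file build, the PyInstaller invocation looks roughly like this (the script name is a placeholder; --windowed suppresses the console window for GUI apps):

$ pyinstaller --onefile --windowed lc_file_explorer.py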

I created a GitHub action to build the executable files for Windows, Linux, and macOS. The action is triggered on every push to the master branch and generates an .exe file for Windows, an .AppImage file for Linux, and a .dmg file for macOS, which are uploaded as artifacts.

Conclusion

It is easy to create a cross-platform GUI app using Python and PySimpleGUI, and just as easy to distribute it as a standalone executable using PyInstaller.

Running tests in parallel with pytest & xdist

When tests are taking too long to run, an easy way to speed them up is to run them in parallel.

When using pytest as the test runner, the pytest-xdist & pytest-parallel plugins make it easy to run tests concurrently or in parallel.

pytest-parallel works better if tests are independent of each other. If tests are dependent on each other, pytest-xdist is a better choice.

If there are parameterised tests whose parameters are generated in a non-deterministic order, pytest-xdist will fail because each worker may collect the tests in a different order.

$ pytest -n auto tests/

Different tests were collected between gw0 and gw1. The difference is: ...

To fix this, we have to make sure that the parameterised tests are collected in the same order on all workers. This can be achieved by sorting the parameters so that the generated test IDs are stable, as shown below.
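For example, parameterising over a set can produce a different order in each worker process due to hash randomisation; sorting the parameters first makes collection deterministic. A minimal illustration, not from the original test suite:

import pytest

# A set has no stable iteration order across processes, so parameterising
# over it directly can make xdist workers collect tests in different orders.
# Sorting the parameters keeps the generated test IDs stable everywhere.
@pytest.mark.parametrize("name", sorted({"alpha", "beta", "gamma"}))
def test_name_is_lowercase(name):
    assert name == name.lower()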

Alternatively, we can use the pytest-randomly plugin, which shuffles tests with a shared seed so that every worker sees the same order.

Remap F4 to Raycast, Alfred (cmd + space)

On the Mac keyboard, the F4 key opens Spotlight by default. I use Raycast a lot instead of Spotlight and wanted to remap F4 to Raycast.

There is an app called Karabiner-Elements which can be used to remap keys. After the app is installed, we can use this rule called Map F4 to cmd+space.
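The rule boils down to a Karabiner complex modification like the sketch below. On keyboards where F4 acts as a media key, the from key may need to be the vendor-specific Spotlight key code instead of plain f4.

{
  "title": "Map F4 to cmd+space",
  "rules": [
    {
      "description": "F4 -> cmd+space",
      "manipulators": [
        {
          "type": "basic",
          "from": { "key_code": "f4" },
          "to": [ { "key_code": "spacebar", "modifiers": ["left_command"] } ]
        }
      ]
    }
  ]
}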

You can import the rule from the above URL directly. Once the rule is imported & enabled, F4 will be remapped to cmd + space as shown in the video below.

Add "Line Count" Column in File Manager

While monitoring an ETL pipeline, I browse a lot of files and often need to know how many lines a file has. For that, I have to switch to the file's directory in a terminal and run wc -l.

To avoid the hassle of switching to the directory and running a command in the terminal, I wrote a simple Lua script to show a line count column in the xplr file manager.

Failed Attempts

Initially I set out to write a Finder plugin to show the line count column, but I couldn't find a way to get the line count of a file in a Finder plugin. I explored other GUI file managers, but none of them has a way to show a custom column with a line count.

Finally, I stumbled upon xplr, a TUI file manager, and it was a breeze to write a Lua script to show the line count column.

xplr - line count

xplr can be installed via brew.

$ brew install xplr

$ xplr --version
xplr 0.21.3

xplr reads the default configuration from ~/.config/xplr/init.lua. The following configuration shows the line count column in xplr.

version = '0.21.3'

xplr.fn.custom.fmt_simple_column = function(m)
  return m.prefix .. m.relative_path .. m.suffix
end

xplr.fn.custom.row_count = function(app)
  -- show a placeholder for directories
  if not app.is_file then
    return "---"
  end

  -- count lines by streaming the file; fall back to a
  -- placeholder when the file cannot be opened
  local file = io.open(app.absolute_path, "r")
  if not file then
    return "---"
  end

  local row_count = 0
  for _ in file:lines() do
    row_count = row_count + 1
  end
  file:close()
  return tostring(row_count)
end

xplr.config.general.table.header.cols = {
  { format = "  path" },
  { format = "line_count" },
}

xplr.config.general.table.row.cols = {
  { format = "custom.fmt_simple_column" },
  { format = "custom.row_count" },
}

xplr.config.general.table.col_widths = {
  { Percentage = 30 },
  { Percentage = 20 },
}

With this configuration, xplr shows the line count column on launch.

xplr - line count

Conclusion

xplr is a very powerful file manager, and it is very easy to write Lua scripts to create custom columns. I couldn't find a way to sort items based on a custom column though; that needs more exploration.

Guide to setting up GeoDjango on Mac M1

There are a lot of guides on setting up GeoDjango and PostGIS, but most of them are outdated and don't work on Mac M1. In this article, let us look at how to set up GeoDjango on Mac M1/M2.

Ensure you have already installed Postgres on your Mac.

Install GeoDjango

The default GDAL version available on brew fails to build on Mac M1.

$ brew install gdal
==> cmake --build build
Last 15 lines from /Users/chillaranand/Library/Logs/Homebrew/gdal/02.cmake:
    [javac] Compiling 82 source files to /tmp/gdal-20231029-31808-1wl9085/gdal-3.7.2/build/swig/java/build/classes
    [javac] warning: [options] bootstrap class path not set in conjunction with -source 7
    [javac] error: Source option 7 is no longer supported. Use 8 or later.
    [javac] error: Target option 7 is no longer supported. Use 8 or later.

BUILD FAILED
/tmp/gdal-20231029-31808-1wl9085/gdal-3.7.2/swig/java/build.xml:25: Compile failed; see the compiler error output for details.

Total time: 0 seconds
gmake[2]: *** [swig/java/CMakeFiles/java_binding.dir/build.make:108: swig/java/gdal.jar] Error 1
gmake[2]: Leaving directory '/private/tmp/gdal-20231029-31808-1wl9085/gdal-3.7.2/build'
gmake[1]: *** [CMakeFiles/Makefile2:9108: swig/java/CMakeFiles/java_binding.dir/all] Error 2
gmake[1]: Leaving directory '/private/tmp/gdal-20231029-31808-1wl9085/gdal-3.7.2/build'
gmake: *** [Makefile:139: all] Error 2

We can use conda to install gdal. Create a new environment, activate it, and install gdal in it.

$ conda create -n geodjango python=3.9
$ conda activate geodjango
$ conda install -c conda-forge gdal
$ pip install django
$ pip install psycopg2-binary

Once installed, you can check the version using gdalinfo --version.

Remaining dependencies can be installed via brew.

$ brew install postgresql
$ brew install postgis
$ brew install libgeoip

Let's create a new Django project and configure the spatial backend.

$ django-admin startproject geodjango

Add django.contrib.gis to INSTALLED_APPS in settings.py.

INSTALLED_APPS = [
    ...,
    'django.contrib.gis',
]

Add the following to DATABASES in settings.py.

DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'

Since we used conda to install gdal, we need to point Django to the library in the settings. Run locate libgdal.dylib to find the path to gdal.

GDAL_LIBRARY_PATH = '/opt/homebrew/anaconda3/envs/geodjango/lib/libgdal.dylib'

Similarly, we need to set GEOS_LIBRARY_PATH as well.

GEOS_LIBRARY_PATH = '/opt/homebrew/anaconda3/envs/geodjango/lib/libgeos_c.dylib'

Now, we can create a new app and add a PointField or any other spatial field to our models.

$ python manage.py startapp places

from django.contrib.gis.db import models

class Place(models.Model):
    name = models.CharField(max_length=100)
    location = models.PointField()
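After running python manage.py makemigrations and python manage.py migrate, a quick sanity check in the Django shell would look something like this. A minimal sketch; the place name and coordinates are arbitrary examples.

from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D
from places.models import Place

# GeoDjango points take (longitude, latitude)
Place.objects.create(name="Charminar", location=Point(78.4747, 17.3616))

# find places within 5 km of a point (PostGIS backend)
nearby = Place.objects.filter(
    location__distance_lte=(Point(78.47, 17.36), D(km=5))
)
print(nearby.count())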

Conclusion

In this article, we looked at how to set up GeoDjango on Mac M1. We used conda to install gdal and brew to install other dependencies.

tailscale: Remote SSH Access to Pi or Any Device

I recently started using a Raspberry Pi and wanted to access it when I am outside my home as well. After trying out a few solutions, I stumbled upon Tailscale.

Tailscale is a mesh VPN that makes it easy to connect our devices, wherever they are. It is free for personal use and supports all major platforms like Linux, Windows, Mac, Android, iOS, etc.

Installation

I installed tailscale on Raspberry Pi using the following command.

$ curl -fsSL https://tailscale.com/install.sh | sh

Setup

Once the installation was done, I ran tailscale up to start the service. This opened a browser window and asked me to log in with an email address. After I logged in, I could see all my devices in the Tailscale dashboard.

tailscale dashboard

Tailscale has a CLI tool as well, and the status can be viewed with the following command.

$ tailscale status
100.81.13.75   m1                    avilpage@  macOS   -
100.12.12.92   rpi1.tailscale.ts.net avilpage@  linux   offline

I also set up a cron job to bring tailscale up on boot.

$ crontab -e
@reboot tailscale up

Access

Now I can access the device from anywhere using its Tailscale IP address. For example, since the Raspberry Pi's IP address is 100.12.12.92, I can ssh into it with the following command.

$ ssh pi@100.12.12.92

It also provides a DNS name for each device, so I can ssh into the device with the following command as well.

$ ssh pi@rpi1.tailscale.ts.net
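To avoid typing the full name every time, an entry in ~/.ssh/config helps; the host alias is arbitrary.

Host rpi
    HostName rpi1.tailscale.ts.net
    User pi

After that, ssh rpi is enough.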

Conclusion

Tailscale is a great tool to access devices remotely. It is easy to set up and works well on Raspberry Pi, Mac & Linux.

Create Telegram Bot To Post Messages to Group

Introduction

Recently I had to create a Telegram bot again to post updates to a group based on IoT events. This post is just a reference for the future.

Create a Telegram Bot

First, create a bot using BotFather in the Telegram app and get the API token. Then, create a group and add the bot to the group. This will give the bot access to the group.

Post Messages to the Group

Now, we need to fetch the group id. For this, we can use the following curl API call.

curl is available by default on Mac and Linux terminals. On Windows, we can use curl from the command prompt.

$ curl -X GET https://api.telegram.org/bot<API_TOKEN>/getUpdates

{
  "ok": true,
  "result": [
    {
      "update_id": 733724271,
      "message": {
        "message_id": 9,
        "from": {
          "id": 1122,
          "is_bot": false,
          "username": "ChillarAnand",
          "language_code": "en"
        },
        "chat": {
          "id": -114522,
          "title": "DailyPythonTips",
          "type": "group",
          "all_members_are_administrators": true
        },
        "date": 1694045795,
        "text": "@DailyPythonTipsBot hi",
        "entities": [
          {
            "offset": 0,
            "length": 19,
            "type": "mention"
          }
        ]
      }
    }
  ]
}

This will return a JSON response with the group id. It sends an empty response if there are no recent conversations.

In that case, send a dummy message to the bot in the group and try again. It should return the group id in the response.

We can use this group id to post messages to the group.

$ curl -X POST https://api.telegram.org/bot<API_TOKEN>/sendMessage -d "chat_id=<GROUP_ID>&text=Hello"

{
  "ok": true,
  "result": {
    "message_id": 12,
    "from": {
      "id": 3349238234,
      "is_bot": true,
      "first_name": "DailyPythonTipsBot",
      "username": "DailyPythonTipsBot"
    },
    "chat": {
      "id": -114522,
      "title": "DailyPythonTips",
      "type": "group",
      "all_members_are_administrators": true
    },
    "date": 1694046381,
    "text": "Hello"
  }
}

Here is the message posted by the bot in the group.

Telegram Bot for IoT Updates

Now, we can use this API to post messages to the group from our IoT devices or from any other device where the curl command is available.
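From Python-based devices, the same call can be made with the requests library. A minimal sketch; the token and group id are the placeholders from the steps above.

import requests

API_TOKEN = "<API_TOKEN>"  # from BotFather
GROUP_ID = "<GROUP_ID>"    # from the getUpdates response

def send_message(text):
    # same sendMessage endpoint used in the curl example above
    url = f"https://api.telegram.org/bot{API_TOKEN}/sendMessage"
    response = requests.post(url, data={"chat_id": GROUP_ID, "text": text})
    response.raise_for_status()
    return response.json()

send_message("Hello from an IoT device")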