Headlamp - k8s Lens open source alternative

headlamp - Open source Kubernetes Lens alternative

Since Lens is not open source, I tried out Monokle, Octant, k9s, and Headlamp. Among them, Headlamp's UI & features are closest to Lens.

Headlamp

Headlamp is a CNCF sandbox project that provides a cross-platform desktop application to manage Kubernetes clusters. It auto-detects clusters and shows cluster-wide resource usage by default.

It can also be installed inside the cluster and can be accessed using a web browser. This is useful when we want to access the cluster from a mobile device.

$ helm repo add headlamp https://headlamp-k8s.github.io/headlamp/

$ helm install headlamp headlamp/headlamp

Let's create a token and port-forward the service to access it.

$ kubectl create token headlamp

# we can do this via headlamp UI as well
$ kubectl port-forward service/headlamp 8080:80

Now, we can access the headlamp UI at http://localhost:8080.

headlamp - Open source Kubernetes Lens alternative

Conclusion

If you are looking for an open source alternative to Lens, Headlamp is a good choice. It provides a UI & feature set similar to Lens, and it is accessible from mobile devices as well.

macOS - Log & track historical CPU, RAM usage

macOS - Log CPU & RAM history

On macOS, we can use the built-in Activity Monitor or third-party apps like Stats to check live CPU/RAM usage. But they can't track historical CPU & memory usage. Tools like sar and atop can track historical CPU & memory usage, but they are not available for macOS.

Netdata

Netdata is an open-source observability tool that can monitor CPU, RAM, network, and disk usage. It can also track historical data.

Unfortunately, it is not stable on macOS. I tried installing it on multiple MacBooks, but it didn't work. I raised an issue on their GitHub repository, and the team mentioned that macOS is a low priority for them.

Glances

Glances is a cross-platform monitoring tool that can monitor CPU, RAM, network, and disk usage. It can also track historical data.

We can install it using Homebrew or pip.

$ brew install glances

$ pip install glances

Once it is installed, we can monitor the resource usage using the below command.

$ glances

macOS - Log CPU & RAM history

Glances can log historical data to a file using the below command.

$ glances --export-csv /tmp/glances.csv

In addition to that, it can log data to services like InfluxDB, Prometheus, etc.

Let's install InfluxDB and export stats to it.

$ brew install influxdb
$ brew services start influxdb
$ influx setup

$ python -m pip install influxdb-client

$ cat glances.conf
[influxdb]
host=localhost
port=8086
protocol=http
org=avilpage
bucket=glances
token=secret_token

$ glances --export-influxdb -C glances.conf

We can view the stats in InfluxDB's Data Explorer web UI at http://localhost:8086.

macOS - Log CPU & RAM history

Glances provides a prebuilt Grafana dashboard that we can import to visualize the stats.

From Grafana -> Dashboards -> Import, we can import the dashboard using its URL.

macOS - Log CPU & RAM history

Conclusion

In addition to InfluxDB, Glances can export data to ~20 services. So far, it is the best tool to log, track, and view historical CPU, RAM, network, and disk usage on macOS. The same method works on Linux and Windows as well.

Automating Zscaler Connectivity on macOS

Introduction

Zscaler is a cloud-based security service that provides secure internet access via VPN. Unfortunately, Zscaler does not provide a command-line interface to connect to the VPN, and we can't use AppleScript to automate the connectivity either.

Automating Zscaler Connectivity

Once Zscaler is installed on macOS, if we search the LaunchAgents & LaunchDaemons directories, we can find the Zscaler plist files.

$ sudo find /Library/LaunchAgents -name '*zscaler*'
/Library/LaunchAgents/com.zscaler.tray.plist


$ sudo find /Library/LaunchDaemons -name '*zscaler*'
/Library/LaunchDaemons/com.zscaler.service.plist
/Library/LaunchDaemons/com.zscaler.tunnel.plist
/Library/LaunchDaemons/com.zscaler.UPMServiceController.plist

To connect to Zscaler, we can load these services.

#!/bin/bash

/usr/bin/open -a /Applications/Zscaler/Zscaler.app --hide
sudo find /Library/LaunchAgents -name '*zscaler*' -exec launchctl load {} \;
sudo find /Library/LaunchDaemons -name '*zscaler*' -exec launchctl load {} \;

To disconnect from Zscaler, we can unload all of them.

#!/bin/bash

sudo find /Library/LaunchAgents -name '*zscaler*' -exec launchctl unload {} \;
sudo find /Library/LaunchDaemons -name '*zscaler*' -exec launchctl unload {} \;

To automatically toggle the connectivity, we can create a shell script.

#!/bin/bash

if [[ $(pgrep -x Zscaler) ]]; then
    echo "Disconnecting from Zscaler"
    sudo find /Library/LaunchAgents -name '*zscaler*' -exec launchctl unload {} \;
    sudo find /Library/LaunchDaemons -name '*zscaler*' -exec launchctl unload {} \;
else
    echo "Connecting to Zscaler"
    /usr/bin/open -a /Applications/Zscaler/Zscaler.app --hide
    sudo find /Library/LaunchAgents -name '*zscaler*' -exec launchctl load {} \;
    sudo find /Library/LaunchDaemons -name '*zscaler*' -exec launchctl load {} \;
fi

Raycast is an alternative to the default Spotlight search on macOS. We can create a Raycast script command to toggle Zscaler connectivity.

#!/bin/bash

# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title toggle zscaler
# @raycast.mode silent

# Optional parameters:
# @raycast.icon ☁️

# Documentation:
# @raycast.author chillaranand
# @raycast.authorURL https://avilpage.com/

if [[ $(pgrep -x Zscaler) ]]; then
    echo "Disconnecting from Zscaler"
    sudo find /Library/LaunchAgents -name '*zscaler*' -exec launchctl unload {} \;
    sudo find /Library/LaunchDaemons -name '*zscaler*' -exec launchctl unload {} \;
else
    echo "Connecting to Zscaler"
    /usr/bin/open -a /Applications/Zscaler/Zscaler.app --hide
    sudo find /Library/LaunchAgents -name '*zscaler*' -exec launchctl load {} \;
    sudo find /Library/LaunchDaemons -name '*zscaler*' -exec launchctl load {} \;
fi

Save this script to a folder. From Raycast Settings -> Extensions -> Add Script Directory, we can select this folder, and the script will be available in Raycast.

raycast-connect-toggle

We can assign a shortcut key to the script for quick access.

raycast-connect-toggle

Conclusion

Even though Zscaler does not provide a command-line interface, we can automate the connectivity using the above scripts.

Screen Time Alerts from Activity Watch

Introduction


Activity Watch is a cross-platform, open-source time-tracking tool that helps us track the time spent on applications and websites.

Activity Watch

At the moment, Activity Watch doesn't have any feature to show screen time alerts. In this post, we will see how to show screen time alerts using Activity Watch.

Python Script

Activity Watch provides an API to interact with the Activity Watch server. We can use the API to get the screen time data and show alerts.

import json
import os
from datetime import datetime

import requests


def get_nonafk_events(timeperiods=None):
    headers = {"Content-type": "application/json", "charset": "utf-8"}
    query = """afk_events = query_bucket(find_bucket('aw-watcher-afk_'));
window_events = query_bucket(find_bucket('aw-watcher-window_'));
window_events = filter_period_intersect(window_events, filter_keyvals(afk_events, 'status', ['not-afk']));
RETURN = merge_events_by_keys(window_events, ['app', 'title']);""".split("\n")
    data = {
        "timeperiods": timeperiods,
        "query": query,
    }
    r = requests.post(
        "http://localhost:5600/api/0/query/",
        data=bytes(json.dumps(data), "utf-8"),
        headers=headers,
        params={},
    )
    return json.loads(r.text)[0]


def main():
    now = datetime.now()
    timeperiods = [
        "/".join([now.replace(hour=0, minute=0, second=0).isoformat(), now.isoformat()])
    ]
    events = get_nonafk_events(timeperiods)

    total_time_secs = sum(event["duration"] for event in events)
    total_time_mins = total_time_secs / 60
    print(f"Total time: {total_time_mins:.1f} minutes")
    hours, minutes = divmod(total_time_mins, 60)
    # divmod on a float returns floats, so coerce to readable integers
    hours, minutes = int(hours), round(minutes)
    print(f"Screen Time: {hours} hours {minutes} minutes")

    # show a macOS notification via osascript
    os.system(f"osascript -e 'display notification \"{hours} hours {minutes} minutes\" with title \"Screen Time\"'")


if __name__ == "__main__":
    main()

This script shows screen time alerts using the Activity Watch API. We can run it using the below command.

$ python screen_time_alerts.py

Screen Time Alerts

We can set up a cron job to run this script every hour to show screen time alerts.

$ crontab -e
0 * * * * python screen_time_alerts.py

We can also modify the script to show alerts only when the screen time exceeds a certain limit.
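
For example, a minimal sketch of such a check (the 4-hour limit and the helper name are assumptions; adjust to taste):

LIMIT_MINS = 4 * 60  # assumed daily screen time limit


def alert_if_over_limit(total_time_mins):
    # notify only when usage crosses the limit
    if total_time_mins < LIMIT_MINS:
        return
    hours, minutes = divmod(int(total_time_mins), 60)
    message = f"{hours}h {minutes}m - over the {LIMIT_MINS // 60}h limit"
    os.system(f"osascript -e 'display notification \"{message}\" with title \"Screen Time\"'")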

Conclusion

Since Activity Watch is open-source and provides an API, we can extend its functionality to show screen time alerts. We can also use the API to create custom reports and dashboards.

Setup FTP server on Mac OS X


On Linux & Mac OS X, Python comes pre-installed. On Windows, we can install it from the Microsoft Store or from the https://python.org website.

We can verify the Python version using the below command.

$ python --version
Python 3.11.6

We can use the pyftpdlib library to create an FTP server. We can install the library using the below command.

$ python -m pip install pyftpdlib

Now, we can start the FTP server using the below command.

$ python -m pyftpdlib
[I 11:28:21] concurrency model: async
[I 11:28:21] masquerade (NAT) address: None
[I 11:28:21] passive ports: None
[I 11:28:21] >>> starting FTP server on :::2121, pid=99951 <<<

It will start the FTP server on port 2121. We can connect to the FTP server using the below command.

$ ftp localhost 2121
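
By default, pyftpdlib serves the current directory with anonymous, read-only access. If we need an authenticated user with write access, we can use its Python API instead. Here is a minimal sketch (the username, password, and directory are placeholders):

from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

# placeholder credentials & directory; "elradfmw" grants full read/write permissions
authorizer = DummyAuthorizer()
authorizer.add_user("user", "password", "/tmp", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer

# listen on all interfaces on port 2121, same as the default CLI behaviour
server = FTPServer(("0.0.0.0", 2121), handler)
server.serve_forever()
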

Timestamp to Relative Time - Kibana Scripted fields

When browsing logs in Kibana, there is a timestamp field on the left for all the docs. It is difficult to read & comprehend the raw timestamps in the logs. It would be better if we could convert the timestamp to a human-readable relative time like 5 minutes ago, 1 hour ago, etc.

Kibana Scripted Fields

Kibana provides a feature called scripted fields to create new fields in the index pattern. We can use this feature to convert the timestamp to a relative time.

kibana-relative-time

Go to Stack Management -> Index Patterns -> Create index pattern -> Select the index pattern -> Scripted fields, click on Add scripted field, add the below script.

long now = new Date().getTime();

long timestamp = doc['@timestamp'].value.toInstant().toEpochMilli();
long diff = now - timestamp;
if (diff > 7200000) {
  return Math.round(diff / 3600000) + " hours ago";
} else if (diff > 3600000) {
  return Math.round(diff / 3600000) + " hour ago";
} else if (diff > 120000) {
  return Math.round(diff / 60000) + " minutes ago";
} else if (diff > 60000) {
  return (Math.round(diff / 60000) + " minute ago");
} else {
  return Math.round(diff / 1000) + " seconds ago";
}

Once the field is saved, we can go back to Discover and see the new field in the logs. We can toggle the visibility of the Relative Time field to see the relative time.

kibana-relative-time

Conclusion

Instead of looking at the timestamp and calculating the relative time in our head, we can use the relative time field in Kibana. This will make it easier to read & comprehend the logs.

The Strange Case of Dr. Linux and Mr. Mac

A few days back, some of the tests started failing on the CI server. When I tried to run the tests locally, they were passing.

After debugging for a while, I found that the tests were failing because of the case sensitivity of the file system. One of the developers was using Linux and had committed two files with the same name but different case (config.json, Config.json).

The Linux file system is case-sensitive, so these two files show up as two different files.

linux-file-system

But the default file systems on Mac & Windows are case-insensitive. Out of these two files, only one will be shown.

mac-file-system

Due to this, the tests were failing on Linux but passing on Mac. Once the case of the file was corrected, the tests started passing on both systems.

I have been using a Mac for a long time and never faced this issue. Even though Mac's APFS is case-insensitive by default, we can create a case-sensitive volume using Disk Utility.

case-sensitive-volume
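
The same volume can also be created from the command line. A sketch, assuming the APFS container is disk1 (check with diskutil list):

$ diskutil apfs addVolume disk1 "Case-sensitive APFS" CaseSensitiveVolume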

We have to be aware of these differences when working on a project with developers using different operating systems.

Archiving Option Chain Data

BSE & NSE are the prominent exchanges in India, and they provide option chain data for the stocks & indices listed on their exchanges.

The option chain data is available only for the current date, not for past dates. This is a problem for traders who want to analyze historical option chain data.

ArchiveBox

ArchiveBox is a tool to archive web pages. It can be used to archive the option chain data for the stocks & indices.

Let's install ArchiveBox.

$ pip install archivebox
$ mkdir option_chain
$ cd option_chain
$ archivebox init
$ archivebox setup

We can start the server (it defaults to http://localhost:8000) and add URLs manually to archive them.

$ archivebox server

historical-option-chain

There are 180+ stocks in the F&O segment & 6 indices with weekly expiry. We can write a simple Python script to generate all combinations of URLs for the option chain data and archive them using ArchiveBox.
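
Below is a minimal sketch; the URL pattern and symbol list are assumptions, so replace them with the exchange's actual option chain URLs and the full F&O symbol list.

# hypothetical URL pattern & a few sample symbols
symbols = ["NIFTY", "BANKNIFTY", "RELIANCE"]
base_url = "https://www.nseindia.com/option-chain?symbol={}"

with open("urls.txt", "w") as f:
    for symbol in symbols:
        f.write(base_url.format(symbol) + "\n")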

Once the URLs are generated, we can use the following command to archive them.

$ archivebox add --input-file urls.txt

These URLs will be archived and stored in the archive directory. Since we want to archive the data regularly, we can set up a schedule to archive daily.

$ archivebox schedule --every=day --depth=0 '{{url}}'

This will archive the option chain data for the stocks & indices on a daily basis.

Conclusion

Browsing the archived data of a single URL is a bit difficult. The Wayback Machine provides a better interface to browse archived data. I have raised an issue regarding this in the ArchiveBox repository. Once the UI issue is resolved, this will be the best tool to browse historical option chain data.

Cross Platform File Explorer in 50 lines of code

In an earlier post, I wrote about why I need a "line count" column in a file explorer and how I wrote a Lua script to get it in the xplr file manager.

xplr has only a terminal interface, which is hard for non-developers to use. I wanted a small team to use this feature so that it would save several hours of their time. So I decided to write a cross-platform GUI app.

GUI app

Since I am familiar with PySimpleGUI, I decided to write a simple file explorer using it.

Cross Platform File Explorer

As seen in the above screenshot, the file explorer has a "Line Count" column. It is a simple Python script with ~50 lines of code.

The project is open source, and the source code is available at github.com/AvilPage/LCFileExplorer.
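
The repository has the full code; the core idea looks roughly like the following sketch (the helper names are mine, not the project's):

import os

import PySimpleGUI as sg


def line_count(path):
    # count lines in a file; unreadable files get a blank cell
    try:
        with open(path, "rb") as f:
            return sum(1 for _ in f)
    except OSError:
        return ""


def list_dir(folder):
    # build table rows of [file name, line count]
    rows = []
    for name in sorted(os.listdir(folder)):
        full = os.path.join(folder, name)
        rows.append([name, line_count(full) if os.path.isfile(full) else ""])
    return rows


folder = os.getcwd()
layout = [[sg.Table(values=list_dir(folder),
                    headings=["Name", "Line Count"],
                    expand_x=True, expand_y=True)]]
window = sg.Window("LCFileExplorer", layout, resizable=True)
while window.read()[0] != sg.WIN_CLOSED:
    pass
window.close()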

Cross Platform

A new user can't directly run this Python script on their machine unless Python is already installed. Even if Python is installed, they have to install the required packages and then run it. This requires technical expertise.

To make it easy for non-tech users to run this program, I decided to use PyInstaller to create a single executable file for each platform.
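
For example, a single-file build looks something like this (the entry-point name is a placeholder):

$ pyinstaller --onefile --windowed lc_file_explorer.py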

I created a GitHub Action to build the executable files for Windows, Linux, and macOS. The action is triggered on every push to the master branch. It generates a .exe file for Windows, an .AppImage file for Linux, and a .dmg file for macOS. The executable files are uploaded as build artifacts.

Conclusion

It is easy to create a cross-platform GUI app using Python and PySimpleGUI. It is also easy to distribute apps built with Python using PyInstaller.

Running tests in parallel with pytest & xdist

When tests are taking too long to run, an easy way to speed them up is to run them in parallel.

When using pytest as the test runner, the pytest-xdist & pytest-parallel plugins make it easy to run tests concurrently or in parallel.

pytest-parallel works better if the tests are independent of each other. If the tests are dependent on each other, pytest-xdist is a better choice.

If parameterised tests generate their parameters in a non-deterministic order (e.g., from a set or dict), pytest-xdist will fail because each worker may collect the tests in a different order.

$ pytest -n auto tests/

Different tests were collected between gw0 and gw1. The difference is: ...

To fix this, we have to make sure that the parameterised tests are collected in the same order on all workers. This can be achieved by sorting the parameters, as sketched below.
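
A minimal sketch:

import pytest

# parameters built from a set have no stable iteration order,
# so each xdist worker may collect these tests differently
params = {"b", "a", "c"}


# sorting the parameters makes the collection order deterministic
@pytest.mark.parametrize("value", sorted(params))
def test_value(value):
    assert isinstance(value, str)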

Alternatively, we can use the pytest-randomly plugin to make the collection order consistent across workers.