Having used `m` for many years now, I have encountered two primary problems.
The first is startup performance. Even with lazy module loading (see here), calling `m` without any subcommands still took ~250ms on my laptop.
The second problem is code verbosity. To add a new command, one needs to create a new class in a separate file, import it in main, and finally instantiate the new class.
In this short blog post, I am going to share how I tackled these two issues.
python-fire
In order to reduce startup time, I first had to profile `m`'s startup performance. From timing `python -c 'print("Hi")'`, I know the Python interpreter itself can start fast.
Iteratively removing the modules `m` depends on, one by one, revealed that `python-fire` was the biggest contributor to the long startup time. `python-fire` is great for easily turning any script into a commandline program, but it became clear it is not a great fit for a script one invokes dozens of times a day.
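
As a quick sanity check, a rough per-module timing loop like the sketch below can confirm which imports dominate. The module names here are placeholders rather than `m`'s actual dependency list, and Python's built-in `python -X importtime` flag gives a more detailed breakdown.

```python
# Rough sketch: time each candidate dependency's import in isolation.
# The module names are placeholders, not m's real dependency list.
import importlib
import time

for name in ("fire", "argparse", "subprocess"):
    start = time.perf_counter()
    importlib.import_module(name)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name}: {elapsed_ms:.1f} ms")
```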
Having identified the culprit, I decided to implement my own basic commandline parser. The implementation itself is extremely simple but covers everything `m` needs right now.
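
As a rough illustration of the idea (not `m`'s actual code), a hand-rolled dispatcher can be little more than a dict lookup over `sys.argv`; the `hello` command below is just a stand-in:

```python
import sys


def main(commands):
    """Dispatch sys.argv[1] to a registered command, passing the rest through."""
    if len(sys.argv) < 2 or sys.argv[1] not in commands:
        print("usage: m <command> [args...]")
        print("commands:", ", ".join(sorted(commands)))
        return 1
    return commands[sys.argv[1]](*sys.argv[2:]) or 0


if __name__ == "__main__":
    # Placeholder registry; m's real commands are classes, not lambdas.
    sys.exit(main({"hello": lambda name="world": print(f"hello, {name}")}))
```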
Removing `python-fire` dropped the startup time by almost half, to around 130ms.
Another problem I wanted to tackle was reducing boilerplate in `m`. With a bit of research, I learnt about the `inspect` and `importlib` modules and converted `m` to use them.
Instead of writing `import` statements manually, I leveraged `importlib.import_module` to import the `.py` files on startup and used `inspect.getmembers` and `inspect.isclass` to filter and expose instances of `m_base.Base`, a special class in `m` that every command inherits from.
The change itself was relatively straightforward and is a big win for maintainability - I can remove a command just by deleting a file now!
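
A minimal sketch of what that discovery can look like is below; the `commands` package name and the `pkgutil` walk are assumptions made for illustration, not `m`'s real structure:

```python
import importlib
import inspect
import pkgutil

import m_base  # the base-class module every command inherits from


def discover_commands(package="commands"):
    """Import every module in the package and instantiate its Base subclasses."""
    commands = {}
    pkg = importlib.import_module(package)
    for info in pkgutil.iter_modules(pkg.__path__):
        module = importlib.import_module(f"{package}.{info.name}")
        for _, cls in inspect.getmembers(module, inspect.isclass):
            # Keep classes that inherit from m_base.Base,
            # skipping the base class itself.
            if issubclass(cls, m_base.Base) and cls is not m_base.Base:
                commands[info.name] = cls()
    return commands
```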
Not only did I learn about the `inspect` and `importlib` modules in Python, I also shaved ~100ms off `m`'s startup time and got rid of a bunch of boilerplate.
I use `m` every day, so I am a bit embarrassed that it took me this long to address two such obvious issues. Better late than never, I suppose.