Why create a new log library?
I was using Pino, but I ended up writing my own log library, which is a lot faster in real-world use cases. Why? And how?
What is Pino?
It is a logging library for Node.js/Bun/Deno and the browser. It is popular because it is extremely fast, mainly because log writing can be offloaded to a worker thread (a parallel thread).
Offloading writes to a worker is also what my library, Jopi Logs, does, and it yields a considerable speed gain.
Concretely
It is true that Pino is very fast. But:
- It does not support limiting the size of log files out of the box.
- I wanted something even faster!
- And something that allows me to easily analyze my logs.
Limiting the size of log files is important: otherwise, after a few days or weeks, the log file becomes so large that it causes slowdowns and anomalies. The logging system must be able to rotate automatically, archiving and zipping the log file when it gets too big. It should also delete log files that are more than a few days old. Without these two mechanisms, a production server eventually runs into problems.
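Such a rotation policy can be sketched as follows. This is a minimal illustration of the idea, not Jopi Logs' actual implementation; the size threshold, retention period, and helper names are made up for the example:

```javascript
// Hypothetical rotation helpers (not the real Jopi Logs code).
const MAX_SIZE_BYTES = 10 * 1024 * 1024; // rotate beyond 10 MiB (arbitrary)
const MAX_AGE_DAYS = 7;                  // delete archives older than 7 days (arbitrary)

// Decide whether the current log file must be rotated.
function shouldRotate(currentSizeBytes) {
  return currentSizeBytes >= MAX_SIZE_BYTES;
}

// Name for the zipped archive, e.g. "server.2024-05-01T10-00-00-000Z.log.gz".
function archiveName(baseName, date) {
  const stamp = date.toISOString().replace(/[:.]/g, "-");
  return `${baseName}.${stamp}.log.gz`;
}

// Decide whether an old archive should be deleted.
function isExpired(fileMTimeMs, nowMs) {
  const ageDays = (nowMs - fileMTimeMs) / (24 * 60 * 60 * 1000);
  return ageDays > MAX_AGE_DAYS;
}
```

A real implementation would call these from the write path: before appending, check `shouldRotate`, and if true, rename and gzip the current file, then sweep old archives with `isExpired`.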
The issue with log libraries
With Pino, as with almost all log libraries, we write code that looks like this:
const logger = require('pino')()
const title = calcMyLogTitle();
const data = calcMyLogData();
logger.info(data, title);
The issue here is that if the info log level is not enabled (for example, if we only log errors and warnings), the calls to calcMyLogTitle and calcMyLogData are still executed. To avoid that, you have to write:
if (logger.isLevelEnabled("info")) {
    const title = calcMyLogTitle();
    const data = calcMyLogData();
    logger.info(data, title);
}
This adds a certain heaviness, since we have to check the log level by hand every time.
The answer from Jopi Logs
With Jopi Logs, we write code like this:
logger.info(w => w(calcMyLogTitle(), calcMyLogData()));
This way of doing things seems less efficient, because there is an extra function call (logger.info calls our callback, which in turn calls the w function). In practice it is not, because we take advantage of automatic optimizations by the JavaScript VM: thanks to the way the code is written inside the logger, this extra call ends up costing nothing.
But above all, Jopi Logs takes advantage of automatic dead-code elimination by the JavaScript VM, so that a logger.info call costs nothing at all if the info log level is not enabled.
Technically, I do this by having four classes, one for each log level: LogSpamLevel, LogInfoLevel, LogWarnLevel and LogErrorLevel.
import {LogInfoLevel} from "jopi-logs";
export const logServer = new LogInfoLevel(null, "server");
logServer.spam(w => w("Processing request", {path: "/index.html"}));
Here, nothing is logged: the spam log level is not enabled, because I am using the info log level. In LogInfoLevel, the implementation of the spam function is as follows:
class LogInfoLevel {
    //...
    spam() { return false; }
}
There is nothing in my spam function, and that's why the JavaScript VM can optimize the call away.
// What I wrote.
logServer.spam(w => w("Processing request", {path: "/index.html"}));
// What the VM really uses.
logServer.spam(); // which only does "return false".
In conclusion
Jopi Logs provides a huge gain in real-world cases with many SPAM- and INFO-level log calls. Why? Because when these log levels are not enabled, everything happens as if the logging code did not exist:
- There is no cost of calling the log function.
- There is also no cost of preparing the data / calculating the log message.
Additionally, Jopi Logs uses a WebWorker to process logs, which prevents console.log calls and file writes from (greatly) slowing everything down. Writing to the console really does slow processing a lot.