
29 March 2016

How to install JS-errors Tracker for Free

Logpacker: JS-errors tracker

In contemporary web development, collecting and analyzing JavaScript errors is an important challenge. Fast error collection and analysis improve the quality and stability of software. Today many companies offer services that solve this problem. As a rule, these are subscription services in which the user pays a fee for a certain number of received events.

In this article we will show how to install and set up a fault-tolerant LogPacker Cluster for JS error collection and analysis, absolutely free. The service is designed for any load: it scales easily and can process an unlimited number of events.

The main advantage of LogPacker is that it works with any type of log on any platform, including:

  • Server logs (Linux, macOS and Windows Server).
  • Mobile logs and crashes (iOS, Android and Windows Phone).
  • Custom application logs (any language).
  • JS errors.
  • Third-party applications and databases (full support out of the box).

Let’s walk through the main steps of log collection and analysis, using JavaScript errors as an example:

  • Infrastructure setup.
  • Cluster setup for log collection and analysis.
  • Dashboard setup.
  • Connecting the JS tracker to a website.
  • Notification setup.

We will review each step in detail, starting with configuring the architecture for storing and analyzing log files.

Infrastructure setup

LogPacker Cluster stores and processes JS errors. It can consist of several linked LogPacker servers, or it can run as a standalone application installed on a Linux server. Several nodes in a cluster allow load balancing and concurrent writing of logs to different types of storage.

We provide a free license with a limit of five servers. This means you can build a five-node cluster for free that can withstand any load.

First, let’s see how to run a cluster of two servers. After registration you will be able to download the console daemon application. It needs to be installed on both servers. You can do this, for example, with the rpm/deb package, or install it manually from the tar archive.

Let’s look at each installation method in detail:

Install RPM:

sudo rpm -ihv logpacker-1.0_27_10_2015_03_45_30-1.x86_64.rpm

The daemon will be installed to the folder /opt/logpacker.

Install DEB:

sudo dpkg -i logpacker_1.0-27-10-2015_03-45-30_amd64.deb

Installation from the .tar archive lets you install the LogPacker daemon to any directory and run several daemons on one machine (if necessary):

tar -xvf logpacker-27-10-2015_03-45-30.tar.gz
logpacker-27-10-2015_03-45-30/
logpacker-27-10-2015_03-45-30/configs/
logpacker-27-10-2015_03-45-30/configs/services.json
logpacker-27-10-2015_03-45-30/configs/notify.json
logpacker-27-10-2015_03-45-30/configs/main.json
logpacker-27-10-2015_03-45-30/logpacker_api
logpacker-27-10-2015_03-45-30/logpacker_daemon

Before the first run we need to set up log storage. By default this is Elasticsearch, running on localhost:9200. The LogPacker server supports the following storage types: file, elasticsearch, mysql, postgresql, mongodb, hbase, influxdb, kafka, tarantool, memcached. We will configure the servers to write concurrently to two services: Elasticsearch and Apache Kafka.

Apache Kafka installation is described in detail in DigitalOcean’s tutorial.

Kafka Server is available at 127.0.0.1:9092.

We also need Supervisord to daemonize the LogPacker application. Let’s create the file /etc/supervisord.d/logpacker_daemon.ini with the following content:

[program:logpacker_daemon]
command=/opt/logpacker/logpacker_daemon --server
autostart=true
autorestart=true
startretries=10
stdout_logfile=/tmp/logpacker-out.log
stdout_logfile_maxbytes=250MB
stdout_logfile_backups=10
stderr_logfile=/tmp/logpacker-err.log
stderr_logfile_maxbytes=250MB
stderr_logfile_backups=10

We tell Supervisord about the new program and apply the change (reread alone only re-parses the config; update actually registers the program):

sudo supervisorctl reread
sudo supervisorctl update

In the end we get the following architecture:

[architecture scheme]

The infrastructure is ready, and we can start setting up the LogPacker cluster.

Cluster setup for log collection and analysis

Next we need to join all five servers into a cluster. Each server has a configuration file, configs/server.ini, that we need to edit.

For a full configuration it is enough to list the other cluster nodes on one server only. Here is an example of the first server’s configuration:

host=server1
port=9999
cluster.nodes=server2:9999,server3:9999,server4:9999,server5:9999

[PublicAPI]
host=localhost
port=9997
providers=elasticsearch,kafka

[ElasticsearchStorage]
host=es
port=9200
index=logpacker

[KafkaStorage]
brokers=127.0.0.1:9092
topic=logpacker
verifyssl=

If you are working with ES version 2 or higher, change the provider name to elasticsearch2.
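For example, with Elasticsearch 2.x the [PublicAPI] section above would read as follows (only the provider name changes; the rest of the file stays the same):

```ini
[PublicAPI]
host=localhost
port=9997
providers=elasticsearch2,kafka
```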

In the configuration files of the second and subsequent servers we only need to change the following:

host=server2
port=9999

Let’s start all LogPacker nodes with supervisorctl:

sudo supervisorctl start logpacker_daemon

Now our cluster can receive logs, errors, and any other events. Since messages can arrive from different devices and networks, we need to expose an externally accessible API port. A proxy pass to the local port in Nginx works well for that:

server {
    listen 80;
    server_name logpacker.mywebsite.com;
    location / {
        proxy_set_header    X-Real-IP   $remote_addr;
        proxy_set_header    Host        $http_host;
        proxy_pass          http://localhost:9997;
    }
}

The API can be scaled across all five servers, with a load balancer distributing the traffic.
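As a sketch, the Nginx configuration above can itself act as a simple load balancer by adding an upstream block. The server names below are assumptions (substitute your actual hosts), and this assumes the Public API on each node is bound to an interface reachable from the balancer (e.g. host=0.0.0.0 rather than localhost in the [PublicAPI] section):

```nginx
upstream logpacker_api {
    server server1:9997;
    server server2:9997;
    server server3:9997;
    server server4:9997;
    server server5:9997;
}

server {
    listen 80;
    server_name logpacker.mywebsite.com;
    location / {
        proxy_set_header    X-Real-IP   $remote_addr;
        proxy_set_header    Host        $http_host;
        proxy_pass          http://logpacker_api;
    }
}
```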

Dashboard setup

Now let’s turn to displaying the received data. The cluster offers two standard ways of displaying logs. Let’s consider the first one: setting up Kibana to display logs in real time.

This setup provides the following dashboards:

  • Server Logs.
  • JS-errors.
  • Mobile Logs.

JavaScript errors will be available in the JS-errors dashboard, with full information about each error, including User Agent, IP, etc.

[kibana screen]

JS tracker setup

Now let’s set up the script that collects errors and sends them to the cluster. The JS script is available in your user account on my.logpacker.com.

You need to specify the URL of the running cluster. There are also two optional parameters, User ID and Username, which can contain JS code that returns their values.
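For instance, the User ID could be derived from a session cookie. This is a hypothetical sketch, not part of the official snippet; the cookie name uid is an assumption:

```javascript
// Hypothetical helper: derive the User ID from a session cookie named "uid".
function getUserIDFromCookie(cookieString) {
    var match = /(?:^|;\s*)uid=([^;]+)/.exec(cookieString || "");
    return match ? decodeURIComponent(match[1]) : "";
}

// In the browser, document.cookie holds the current cookies.
var userID = getUserIDFromCookie(
    typeof document !== "undefined" ? document.cookie : ""
);
```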

The finished script looks like this:

<script type="text/javascript">
    var clusterURL = "http://logpacker.mywebsite.com";
    var userID = "";
    var userName = "";
    (function() {
        var lp = document.createElement("script");
        lp.type = "text/javascript";
        lp.async = true;
        lp.src = ("https:" == document.location.protocol ? "https://" : "http://") + "logpacker.com/js/logpacker.js";
        var s = document.getElementsByTagName("script")[0];
        s.parentNode.insertBefore(lp, s);
    })();
</script>

Then add this script to every page of your website (or of several websites).
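Under the hood, trackers like this one typically hook window.onerror, which receives the error message, source file, and line/column numbers. Below is a minimal, hypothetical sketch of that technique; the /event endpoint and payload field names are illustrative assumptions, not LogPacker's actual API:

```javascript
// Build a structured payload from the arguments window.onerror provides.
function buildErrorPayload(message, source, line, column, userID) {
    return {
        message: String(message),
        source: source || "",
        line: line || 0,
        column: column || 0,
        userID: userID || "",
        userAgent: typeof navigator !== "undefined" ? navigator.userAgent : ""
    };
}

if (typeof window !== "undefined") {
    window.onerror = function (message, source, line, column) {
        var payload = buildErrorPayload(message, source, line, column, window.userID);
        // Fire-and-forget delivery; the endpoint path is an assumption.
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "http://logpacker.mywebsite.com/event", true);
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(payload));
    };
}
```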

Notification setup

By default, LogPacker Cluster emails all received and processed fatal errors to your account’s address once an hour. This works only if a local sendmail is installed on the server.

The service also supports the following types of notifications:

  • Sendmail (by default)
  • Slack
  • SMTP
  • Twilio SMS

Message intervals and levels can be set in the configs/notify.ini file:

providers=sendmail
interval=3600
levels=Fatal,Error
tags=*
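As an illustration, using only the keys shown above: to also notify Slack about fatal errors every 15 minutes, the file might look like the fragment below. The slack provider identifier is an assumption, and any Slack-specific settings (such as a webhook URL) would need to be configured separately:

```ini
providers=sendmail,slack
interval=900
levels=Fatal
tags=*
```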

As simple as that, we have set up a system for JS error collection and analysis. It is easy to install and scale, so it can handle any load, while each server consumes minimal resources and the whole system is fault-tolerant thanks to clustering.

With this system in place, adding log collection and analysis for other sources is straightforward. At the moment LogPacker supports all popular platforms.

In a competitive IT market, software quality and stability are among the most important concerns. LogPacker effectively solves the problem of log collection, transfer and analysis.

Register and install your own LogPacker Cluster absolutely free!