
Getting Started with Cloud

This documentation will show you how to install and use LogPacker Cloud.

Each Cloud customer has only one Cloud License under one account. Please keep your License Key secure.

LogPacker Cloud works in the following way:

  • The customer installs LogPacker Agents on their own Linux, OS X or Windows servers. Agents can also be integrated into mobile applications.
  • Agents will find all available logs. By default the Agent finds all popular log sources, and you can also configure your own sources.
  • All logs are sent asynchronously to the LogPacker Cloud cluster.
  • You can then view and filter all logs in one centralized interface.

Download and Install

Once you have a license, you can download your Agents (as rpm, deb, dmg or .tar) from my.logpacker.com and upload them to your servers.

Note: Each Daemon is distributed with a private License to make sure it works with your specific License.

Install LogPacker from RPM file (into /opt/logpacker):

sudo rpm -ihv logpacker-version.rpm

Install LogPacker from DEB file (into /opt/logpacker):

sudo dpkg -i logpacker-version.deb

Install LogPacker from TAR file (will be installed in current working folder):

tar -xvf logpacker-version.tar.gz

Server Requirements

LogPacker Agent is a standalone application and can be installed on Linux, OS X or Windows servers without any additional requirements.

Start Daemon

The LogPacker installation contains config files and the logpacker_daemon binary.

To run LogPacker with the default configuration, type this command:

./logpacker_daemon -a -v

It's better to run it as the root user to get better log file coverage.
To see all Daemon options:
./logpacker_daemon --help

Daemonize

We provide a default Supervisord configuration to run the LogPacker Daemon in the background. Copy this sample config to /etc/supervisord.d/logpacker_daemon.ini (the path may differ in your Supervisord setup).
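
The sample config itself is not reproduced here; below is a minimal sketch of such a Supervisord program block, assuming the default /opt/logpacker install path and the start flags shown above (adjust paths and flags to your setup):

; /etc/supervisord.d/logpacker_daemon.ini (sketch only)
[program:logpacker_daemon]
command=/opt/logpacker/logpacker_daemon -a -v
directory=/opt/logpacker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/logpacker_daemon.log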

Then update config and start the application:

sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl start logpacker_daemon

You can also use your favorite launch manager at any time.

Configuration

Default configuration can be changed if needed. Config files:

  • configs/agent.ini - The Agent starts with this config. The Server doesn't need it.
  • configs/services.ini - Contains the built-in patterns for finding logs. You can also add your own; the file can be extended/edited by a user/admin, and changes are applied without a Daemon restart.

After changing a config file you have to restart LogPacker.

Logs Sources

The Agent fetches logs on the server from syslog and from a large number of services directly. Of course you can enable/disable them in the configs/agent.ini configuration, or you can create a new entry with specific rules in configs/services.ini.

Built-in services:

  • mysql
  • postgresql
  • php
  • apache
  • nginx
  • elasticsearch
  • jenkins
  • redis
  • postfix
  • supervisor
  • sendmail
  • exim
  • yum
  • mariadb
  • dpkg
  • memcached
  • docker
  • puppet
  • unix
  • mongodb
  • jira
  • rabbitmq
  • gitlab
  • zabbix
  • upstart
  • squid
  • teamcity
  • chef
  • munin
  • aerospike
  • sphinx
  • uwsgi
  • newrelic
  • haproxy
  • simpana
  • icinga
  • libvirt
  • solr
  • tomcat
  • oracle
  • cassandra
  • hbase
  • hadoop
  • voltdb
  • stress
  • plesk
  • tarantool
  • splunk
  • cpanel
  • rethinkdb
  • clickhouse
  • prestodb
  • zookeeper
  • grafana
  • kairosdb

Collect JS errors

LogPacker can also collect data from outside your servers, even where the Agent Daemon is not installed.

If you are the owner/developer of a website, you can include our JS script on your page to collect and send all JS errors thrown on the page for all of your clients.

Before that you must set up your Cluster to accept errors from your website.

JS include example:

<script type="text/javascript">
var clusterURL = "https://storageN.logpacker.com";
var cloudKey   = "";

(function() {
var lp = document.createElement("script"); lp.type = "text/javascript"; lp.async = true;
lp.src = ("https:" == document.location.protocol ? "https://" : "http://") + "logpacker.com/js/logpacker.js";
var s = document.getElementsByTagName("script")[0]; s.parentNode.insertBefore(lp, s);
})();
</script>

Collect Mobile errors

We provide a free mobile SDK for Android and iOS that helps you send all of your clients' errors/crashes to the LogPacker Cluster.

You must set the LogPacker Public API URL in your Java/Objective-C/Swift/C# code (see How to configure Public API).

Android

Download the logpackermobilesdk.aar file and import it into Android Studio:

  • File ➤ New ➤ New Module ➤ Import .JAR or .AAR package
  • File ➤ Project Structure ➤ app ➤ Dependencies ➤ Add Module Dependency
  • Add the import: import go.logpackermobilesdk.Logpackermobilesdk;

import go.logpackermobilesdk.Logpackermobilesdk;

// It's possible to catch all app's crashes via Thread.setDefaultUncaughtExceptionHandler and send it to LogPacker
try {
    Client client = Logpackermobilesdk.newClient("https://storage.logpacker.com", "dev", android.os.Build.MODEL);

    Message msg = client.newMessage();
    msg.setMessage("Crash is here!");
    // Use other optional setters on the msg object

    client.send(msg); // Send will return Cluster response
} catch (Exception e) {
    // Cannot connect to Cluster or validation error
}

iOS

Download the Logpackermobilesdk.framework.tar file.

Unpack it and drag the Logpackermobilesdk.framework folder into Xcode's file browser.

#import "ViewController.h"
#import "Logpackermobilesdk/Logpackermobilesdk.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    GoLogpackermobilesdkClient *client;
    NSError *error;
    GoLogpackermobilesdkNewClient(@"https://storage.logpacker.com", @"dev", [[UIDevice currentDevice] systemVersion], &client, &error);

    GoLogpackermobilesdkMessage *msg;
    msg = client.newMessage;
    msg.message = @"Crash is here!";
    // Use other optional setters on the msg object
    GoLogpackermobilesdkResult *result;
    [client send:(msg) ret0_:(&result) error:(&error)];
}

// It's possible to catch all app's crashes via signal(SIGSEGV, SignalHandler) and send it to LogPacker from SignalHandler func

Windows Phone

Download the logpackermobilesdk.dll.tar file and untar it:

  • Project ➤ Edit Preferences
  • .NET Assembly ➤ Browse ➤ Find the logpackermobilesdk.dll file
  • Add the import: using logpackermobilesdk;

using System;
using logpackermobilesdk;

namespace test
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            try {
                Client c = new Client ("https://storage.logpacker.com", "dev", System.Environment.MachineName, "");
                Event e = new Event ("Crash is here!", "modulename", Event.FatalLogLevel, "1000", "John");
                c.Send (e);
            } catch {
                // Handle connection error here
            }
        }
    }
}

// It's possible to catch all app's crashes via global try-catch block and send it to LogPacker

Windows Agent

Requirements

The Windows LogPacker Agent can be installed on any Windows machine with an internet connection.

Install on Windows and Daemonize

  • Download the .zip file from the Profile page and unpack its content to C:\Program Files\LogPacker
  • Open configs\agent.ini and enter your LogPacker Server address
  • Run the following commands to start it as a Windows service:

C:\"Program Files"\LogPacker\srvstart\srvstart.exe install logpackerwin_daemon -c C:\"Program Files"\LogPacker\srvstart\logpackerwin_daemon.ini
net start logpackerwin_daemon

Configuration

There are 2 configuration files.

configs\agent.ini contains Agent settings, LogPacker server address, etc.

configs\services.ini contains all possible Windows services that the LogPacker Agent will track.

Both files can be edited; after that the Agent must be restarted to apply the changes.

Usage

You can edit srvstart\logpackerwin_daemon.ini and provide your custom command line options:

  -v, --verbose       (Default: False) Prints all messages to standard output.

  --version           (Default: False) Prints LogPackerWin version.

  -l, --log           (Default: logpacker-win.log) Path to the log file to save
                      LogPackerWin statistics.

  -c, --configpath    (Default: ) Path to the directory with configuration
                      files.

  --help              Display this help screen.

LogPacker CLI

The LogPacker package contains the logpacker_cli binary. You can use it to send logs to the server from the CLI.

Usage:

./logpacker_cli -help
LogPacker CLI Usage:

-v|-version : Prints LogPacker CLI version
-m|-message : Log message
-t|-tag : Log tag name (optional)

./logpacker_cli -m "Test message"
Sent Message ID: E7DFDC74-81D2-1D76-878D-1B5620E6162C

You can also configure almost any logger to send logs directly to the LogPacker Server/Cloud via TCP (to the Server Daemon) or HTTP (to the Cloud).

curl -XPOST https://storageX.logpacker.com/save -d '{
    "client": {
        "user_id":     "1001",
        "user_name":   "John",
        "environment": "production",
        "agent":       "Nexus 100",
        "platform":    "js",
        "version":     "v0.1.0",
        "os":          "android"
    },
    "messages": [
        {
            "message":   "error message or event message",
            "trace":     "exception trace",
            "source":    "auth",
            "line":      100,
            "column":    20,
            "log_level": 1,
            "tag_name":  "android",
            "unixts":    1481273962
        }
    ],
    "cloud_key": ""
}'

Getting an Enterprise License

Each LogPacker plan provides you with a unique License (even the free plan).

You can have multiple plans under one account, and you can find them on my.logpacker.com.

Please keep your License Key secure.

Download and Install

Once you have a license, you can download your Agents (as rpm, deb, dmg or .tar) from my.logpacker.com and upload them to your servers.

Note: Each Daemon is distributed with a private License to make sure it works with your specific License.

Install LogPacker from RPM file (into /opt/logpacker):

sudo rpm -ihv logpacker-version.rpm

Install LogPacker from DEB file (into /opt/logpacker):

sudo dpkg -i logpacker-version.deb

Install LogPacker from TAR file (will be installed in current working folder):

tar -xvf logpacker-version.tar.gz

Server Requirements

By default LogPacker uses Elasticsearch for server storage. It's easy to install in any environment, and you can set up a web interface via Kibana.

You can change the server storage to another one at any time by editing the configs/server.ini file in your downloaded package. A Daemon restart is necessary in this case.
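
For example, switching to the elasticsearch2 provider comes down to a single line in configs/server.ini (the same providers key is shown in the Kibana and Grafana sections below):

providers=elasticsearch2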

Start Daemon

The LogPacker installation contains config files and three binaries: logpacker_daemon, logpacker_api and logpacker_cli.

To run the LogPacker Daemon with the default configuration, type this command:

./logpacker_daemon -a -s -v

It's better to run it as the root user to get better log file coverage.
To see all Daemon options:
./logpacker_daemon --help

Daemonize

We provide a default Supervisord configuration to run the LogPacker Daemon in the background. Copy this sample config to /etc/supervisord.d/logpacker_daemon.ini (the path may differ in your Supervisord setup).

Then update config and start the application:

sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl start logpacker_daemon

You can also use your favorite launch manager at any time.

Kibana WEB GUI

Kibana works automatically when the LogPacker server writes data into Elasticsearch. To enable this, set the provider in configs/server.ini (note: for ES < 2, please use the elasticsearch provider instead):

providers=elasticsearch2

How to configure Kibana:

Then you will have a list of Dashboards / Visualizations / Searches.

Grafana WEB GUI

Grafana works automatically when the LogPacker server writes data into Elasticsearch. To enable this, set the provider in configs/server.ini:

providers=elasticsearch

Or, for ES > 2.0:

providers=elasticsearch2

How to configure Grafana:

  • Open the side menu by clicking the Grafana icon in the top header.
  • In the side menu under the Dashboards link you should find a link named Data Sources.
  • Click the Add new link in the top header.
  • Select Elasticsearch from the dropdown.
  • Set your ES URL, version, etc. Set index name "logpacker" and time field name "@Time".
  • Click "Save" and use this DataSource in any dashboard.

Possible problems:

  • In some versions of ES, CORS requests are not enabled. In this case you need to add two lines to the ES config: http.cors.enabled: true and http.cors.allow-origin: "*".
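
For reference, those two settings belong in Elasticsearch's elasticsearch.yml (restart Elasticsearch after editing):

# elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"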

Configuration

Default configuration can be changed if needed. Config files:

  • configs/agent.ini - The Agent starts with this config. The Server doesn't need it.
  • configs/server.ini - Contains all information about server storage, users, logging, etc. It is also needed to start the REST API.
  • configs/services.ini - Contains the built-in patterns for finding logs. You can also add your own; the file can be extended/edited by a user/admin, and changes are applied without a Daemon restart.
  • configs/notify.ini - Notification email, intervals, Slack integration.
  • configs/api.ini - LogPacker REST API configuration: data source, bind address.

After changing a config file you have to restart LogPacker.

Storage Providers

A storage provider saves the aggregated logs. Here you can find all supported storage providers with notes on configuration. All settings can be changed in configs/server.ini. The Server can write Messages into different Storages concurrently.

Elasticsearch

By default Elasticsearch runs on localhost:9200, so we use that as the default. If you are running ES version 2 you must set providers=elasticsearch2 in server.ini. The indexes logpacker, logpacker_tag and logpacker_index will be auto-created.

MySQL

The table will be created automatically if it doesn't exist and the current user has permission to do it. Default connection address: root@tcp(localhost:3306)/logpacker?charset=utf8; the logpacker database should be created manually before starting. Default table name: logpacker_event_collector.

Postgresql

The table will be created automatically if it doesn't exist and the current user has permission to do it. Default connection address: host=localhost port=5432 user=postgres password= dbname=postgres sslmode=disable; the logpacker database should be created manually before starting. Default table name: logpacker_event_collector.

File

A permanent symlink to the latest chunk file of logs is kept at /tmp/logpacker-storage.json. Files will be stored like this: /tmp/logpacker-storage.json.00001, /tmp/logpacker-storage.json.00002, etc.
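
A quick way to watch incoming logs with the file provider is to follow the symlink from the shell; tail -F re-opens the path when the symlink is repointed to a newer chunk:

# follow the newest chunk file via the permanent symlink
tail -F /tmp/logpacker-storage.json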

Hbase

Default TCP Thrift address: localhost:9090. The table logpacker_event_collector will be created automatically.

Mongodb

Default connection address: mongodb://localhost:27017. The database logpacker will be created automatically.

Influxdb

Default connection address: http://localhost:8086. Default DB name: logpacker (created automatically if the current user has permission to do it). Default table name: logpacker_event_collector (created automatically).

Apache Kafka

Multiple brokers can be defined in the configuration; 127.0.0.1:9092 is the default. The logpacker topic will be created automatically after the first attempt to write data. The LogPacker API does not work with Kafka storage, because the Kafka Consumer protocol doesn't allow fetching all already-published messages. By default it works with verifyssl=false, but it's possible to provide cert/key files to make the connection secure.

Tarantool

Default connection: localhost:3013. LogPacker will create a Space automatically. By default LogPacker connects to Tarantool as the guest user, but you can change this. The user must have permission to create a space.

Memcached

Default address: 127.0.0.1:11211. LogPacker supports multiple comma-separated addresses, like 127.0.0.1:11211,127.0.0.1:11212. You can also set a prefix to avoid key conflicts in Memcached.

ClickHouse

ClickHouse can store a large amount of logs, up to a few petabytes of data. The database logpacker and table logpacker_event_collector will be created automatically. The default HTTP address is localhost:8123.

PrestoDB

Presto is a distributed system that runs on a cluster of machines. It can work with a large number of catalogs: hive, mysql, etc. The default configuration works with the hive catalog and default schema; this can be changed.

Apache Hive

The default connection address is http://localhost:10000, and the table that will be created on start is logpacker_event_collector.

KairosDB

LogPacker can save metrics to KairosDB; the metric logpacker will be auto-created.

OpentsDB

LogPacker can save metrics to OpentsDB (the default address is localhost:4242); the metric logpacker will be auto-created.

Clusterization

LogPacker server daemons can work together in a Cluster. The Agent will select the best server to store log data, calling the NetworkAPI to retrieve the full list of nodes in the Cluster with statistics.
To specify nodes, edit cluster.nodes in configs/server.ini like this:

cluster.nodes=127.0.0.1:9999,127.0.0.1:10000

Then tell the Agent where to get the Network Info:

networkapi=127.0.0.1:9999

From a single server address the Agent can get all information about the Cluster. To get info about a Node, connect to the server over TCP; the response looks like this:
{
  "Stats":{
    "CPUUser":145987,
    "CPUSys":24572,
    "MemFree":3457952,
    "DiskFree":0,
    "LA":0.47
  },
  "Nodes":[
    {
      "Address":"127.0.0.1:9999",
      "IsAvailable":true,
      "Stats":{
        "CPUUser":145987,
        "CPUSys":24572,
        "MemFree":3457952,
        "DiskFree":0,
        "LA":0.47
      }
    }
  ]
}
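
As a rough illustration, assuming the node simply writes this JSON to any client that connects to its cluster port (the exact request protocol is not documented here), you could inspect the payload with netcat:

# assumption: the node returns the cluster stats JSON to a plain TCP connection
nc 127.0.0.1 9999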

To add a new Node to the Network, insert the new address into cluster.nodes in any node's configs/server.ini configuration.

Logs Sources

The Agent fetches logs on the server from syslog and from a large number of services directly. Of course you can enable/disable them in the configs/agent.ini configuration, or you can create a new entry with specific rules in configs/services.ini.

Built-in services:

  • mysql
  • postgresql
  • php
  • apache
  • nginx
  • elasticsearch
  • jenkins
  • redis
  • postfix
  • supervisor
  • sendmail
  • exim
  • yum
  • mariadb
  • dpkg
  • memcached
  • docker
  • puppet
  • unix
  • mongodb
  • jira
  • rabbitmq
  • gitlab
  • zabbix
  • upstart
  • squid
  • teamcity
  • chef
  • munin
  • aerospike
  • sphinx
  • uwsgi
  • newrelic
  • haproxy
  • simpana
  • icinga
  • libvirt
  • solr
  • tomcat
  • oracle
  • cassandra
  • hbase
  • hadoop
  • voltdb
  • stress
  • plesk
  • tarantool
  • splunk
  • cpanel
  • rethinkdb
  • clickhouse
  • prestodb
  • zookeeper
  • grafana
  • kairosdb

REST API

The LogPacker package is distributed with an included REST API service. To start it, run:

/opt/logpacker/logpacker_api -v

The API can read data from only one provider, but it has a failover feature to switch to another one if the main provider is down. Example config:

provider=elasticsearch
failover.providers=mysql,file

We provide a default Supervisord configuration to run the LogPacker API in the background. Copy this sample config to /etc/supervisord.d/logpacker_api.ini (the path may differ in your Supervisord setup).

Then update config and start the application:

sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl start logpacker_api

You can also use your favorite launch manager at any time.

Endpoints

GET /version - returns REST API version

Response:
{
    "Code": 200,
    "Error": "",
    "Data": "Logpacker API version 0.1.1"
}
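
A hedged example call, assuming the REST API listens on localhost:8080 (the actual bind address comes from configs/api.ini):

# the port is an assumption; use the bind address from configs/api.ini
curl http://localhost:8080/version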

GET /v2/list/FIELD - returns a list of unique FIELD values. Possible FIELD values: env, version, os, tag_name, agent_id

Request:
  • platform - all|js|mobile|server
Response:
{
    "Code": 200,
    "Error": "",
    "Data": ["linux"]
}

GET /v2/logs - returns a filtered list of logs

Request:
  • time_from - unix TS integer
  • time_to - unix TS integer
  • limit - 50 by default
  • page - 1 by default
  • platform - string. possible values: all|js|mobile|server
  • tags - array of strings
  • agents - array of strings
  • levels - array of integers
  • os - array of strings
  • version - array of strings
  • env - array of strings
  • user_id - search string
  • id - message ID
  • search_all - search logs by all text fields
  • search_message - search logs by message
  • search_regexp - regexp search logs by message
Response:
{
    "Code": 200,
    "Error": "",
    "Data": {
        "PagesCount": 100,
        "Logs": [{
            "ID":"cb02ff31-842c-4711-5186-17d5383f6da9",
            "Message":"helo world",
            "Source":"/var/log/supervisor/supervisord.log",
            "Time":"1437994275",
            "ServerTime":"1437994276",
            "AgentID":"server1.aws",
            "TagName":"supervisor",
            "LogLevel":"3",
            "LogLevelStr":"Info",
            "FileSize":"3420",
            "Count":"1",
            "Platform":"server",
            "Trace":"",
            "UserID":"",
            "UserName":"",
            "Env":"",
            "UnixTS":"",
            "Version":"",
            "OS":""
        }]
    }
}
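
A hedged example request using a few of the scalar filters above, again assuming a localhost:8080 bind address (how array parameters such as tags are encoded is not shown here, so they are omitted):

# fetch the first page of server logs for a time window
curl "http://localhost:8080/v2/logs?time_from=1437990000&time_to=1437999999&platform=server&limit=50&page=1"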

GET /v2/timerange - returns the count of logs for a time range, per specified interval, with log level percentages

Request:
  • time_from - unix TS
  • time_to - unix TS
  • interval - in seconds
  • platform
  • tags - array of strings
  • agents - array of strings
  • levels - array of integers
  • os - array of strings
  • version - array of strings
  • env - array of strings
  • user_id - search string
  • id - message ID
  • search_all - search logs by all text fields
  • search_message - search logs by message
  • search_regexp - regexp search logs by message
Response:
{
    "Code": 200,
    "Error": "",
    "Data": [{
        "Object": "LogLevel",
        "From": 1460084943,
        "To": 1460085543,
        "Values": [{
            "Title": "Warning",
            "Percent": 42.15,
            "Count": 100
        }]
    }]
}

GET /v2/multitimerange - returns the count of logs for a time range, per specified interval, for each unique value of the provided Object

Request:
  • time_from - unix TS
  • time_to - unix TS
  • interval - in seconds
  • objects - array of strings. object is a column to group by: LogLevel, AgentID, TagName
  • platform
  • tags - array of strings
  • agents - array of strings
  • levels - array of integers
  • os - array of strings
  • version - array of strings
  • env - array of strings
  • user_id - search string
  • id - message ID
  • search_all - search logs by all text fields
  • search_message - search logs by message
  • search_regexp - regexp search logs by message
Response:
{
    "Code": 200,
    "Error": "",
    "Data": [[{
        "Object": "LogLevel",
        "From": 1460084943,
        "To": 1460085543,
        "Values": [{
            "Title": "Warning",
            "Percent": 42.15,
            "Count": 100
        }]
    }]]
}

GET /v2/percents - returns the dispersion in percent for the selected Object in a time range. By default it returns dispersion by log level. The values always sum to ~100%

Request:
  • time_from - unix TS
  • time_to - unix TS
  • objects - array of strings. object is a column to group by: LogLevel, AgentID, TagName
  • platform
  • tags - array of strings
  • agents - array of strings
  • levels - array of integers
  • os - array of strings
  • version - array of strings
  • env - array of strings
  • user_id - search string
  • id - message ID
  • search_all - search logs by all text fields
  • search_message - search logs by message
  • search_regexp - regexp search logs by message
Response:
{
    "Code": 200,
    "Error": "",
    "Data": [[{
        "Object": "LogLevel",
        "From": 1460084943,
        "To": 1460085543,
        "Values": [{
            "Title": "Warning",
            "Percent": 42.15,
            "Count": 100
        }]
    }]]
}

GET /v2/logs/count - returns count of logs saved in Storage

Request:
  • time_from - unix TS
  • time_to - unix TS
  • platform
  • tags - array of strings
  • agents - array of strings
  • levels - array of integers
  • os - array of strings
  • version - array of strings
  • env - array of strings
  • user_id - search string
  • search_all - search logs by all text fields
  • search_message - search logs by message
  • search_regexp - regexp search logs by message
Response:
{
    "Code": 200,
    "Error": "",
    "Data": {
        "Count": 10130,
        "Values": {"Error": 100, "Fatal": 10}
    }
}

Notifications

By default LogPacker sends urgent notifications to your account email via local sendmail. Fatal errors are included in an hourly report. You can disable this or change the intervals and the included Log Levels.

Edit configs/notify.ini for this purpose.

Available providers are: Sendmail, Slack, SMTP and Twilio SMS. You can set multiple providers; the Daemon will forward notifications to all of them in parallel.

For Slack notifications you must create an API application in your Slack workspace and get an API token, then paste it into the token option. You also need to paste the channel name into the channel option.
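
A minimal sketch of what this might look like in configs/notify.ini; the [slack] section name and overall layout are assumptions, only the token and channel option names come from the text above:

; sketch only -- the section name and layout are assumptions
[slack]
token=xoxp-your-slack-api-token
channel=#alerts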

Collect JS errors

If you are the owner/developer of a website, you can include our JS script on your page to collect and send to the Cluster all JS errors thrown on the page for all of your clients.

JS include example:

<script type="text/javascript">
var clusterURL = "https://storage.logpacker.com";
var userID = "";
var userName = "";

(function() {
var lp = document.createElement("script"); lp.type = "text/javascript"; lp.async = true;
lp.src = ("https:" == document.location.protocol ? "https://" : "http://") + "logpacker.com/js/logpacker.js";
var s = document.getElementsByTagName("script")[0]; s.parentNode.insertBefore(lp, s);
})();
</script>

Collect Mobile errors

We provide a free mobile SDK for Android and iOS that helps you send all of your clients' errors/crashes to the LogPacker Cluster.

You must set the LogPacker Public API URL in your Java/Objective-C/Swift/C# code (see How to configure Public API).

Android

Download the logpackermobilesdk.aar file and import it into Android Studio:

  • File ➤ New ➤ New Module ➤ Import .JAR or .AAR package
  • File ➤ Project Structure ➤ app ➤ Dependencies ➤ Add Module Dependency
  • Add the import: import go.logpackermobilesdk.Logpackermobilesdk;

import go.logpackermobilesdk.Logpackermobilesdk;

// It's possible to catch all app's crashes via Thread.setDefaultUncaughtExceptionHandler and send it to LogPacker
try {
    Client client = Logpackermobilesdk.newClient("https://storage.logpacker.com", "dev", android.os.Build.MODEL);

    Message msg = client.newMessage();
    msg.setMessage("Crash is here!");
    // Use other optional setters on the msg object

    client.send(msg); // Send will return Cluster response
} catch (Exception e) {
    // Cannot connect to Cluster or validation error
}

iOS

Download the Logpackermobilesdk.framework.tar file.

Unpack it and drag the Logpackermobilesdk.framework folder into Xcode's file browser.

#import "ViewController.h"
#import "Logpackermobilesdk/Logpackermobilesdk.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    GoLogpackermobilesdkClient *client;
    NSError *error;
    GoLogpackermobilesdkNewClient(@"https://storage.logpacker.com", @"dev", [[UIDevice currentDevice] systemVersion], &client, &error);

    GoLogpackermobilesdkMessage *msg;
    msg = client.newMessage;
    msg.message = @"Crash is here!";
    // Use other optional setters on the msg object
    GoLogpackermobilesdkResult *result;
    [client send:(msg) ret0_:(&result) error:(&error)];
}

// It's possible to catch all app's crashes via signal(SIGSEGV, SignalHandler) and send it to LogPacker from SignalHandler func

Windows Phone

Download the logpackermobilesdk.dll.tar file and untar it:

  • Project ➤ Edit Preferences
  • .NET Assembly ➤ Browse ➤ Find the logpackermobilesdk.dll file
  • Add the import: using logpackermobilesdk;

using System;
using logpackermobilesdk;

namespace test
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            try {
                Client c = new Client ("https://storage.logpacker.com", "dev", System.Environment.MachineName, "");
                Event e = new Event ("Crash is here!", "modulename", Event.FatalLogLevel, "1000", "John");
                c.Send (e);
            } catch {
                // Handle connection error here
            }
        }
    }
}

// It's possible to catch all app's crashes via global try-catch block and send it to LogPacker

Windows Agent

Requirements

The Windows LogPacker Agent can be installed on any Windows machine with an internet connection.

Install on Windows and Daemonize

  • Download the .zip file from the Profile page and unpack its content to C:\Program Files\LogPacker
  • Open configs\agent.ini and enter your LogPacker Server address
  • Run the following commands to start it as a Windows service:

C:\"Program Files"\LogPacker\srvstart\srvstart.exe install logpackerwin_daemon -c C:\"Program Files"\LogPacker\srvstart\logpackerwin_daemon.ini
net start logpackerwin_daemon

Configuration

There are 2 configuration files.

configs\agent.ini contains Agent settings, LogPacker server address, etc.

configs\services.ini contains all possible Windows services that the LogPacker Agent will track.

Both files can be edited; after that the Agent must be restarted to apply the changes.

Usage

You can edit srvstart\logpackerwin_daemon.ini and provide your custom command line options:

  -v, --verbose       (Default: False) Prints all messages to standard output.

  --version           (Default: False) Prints LogPackerWin version.

  -l, --log           (Default: logpacker-win.log) Path to the log file to save
                      LogPackerWin statistics.

  -c, --configpath    (Default: ) Path to the directory with configuration
                      files.

  --help              Display this help screen.

LogPacker CLI

The LogPacker package contains the logpacker_cli binary. You can use it to send logs to the server from the CLI.

Usage:

./logpacker_cli -help
LogPacker CLI Usage:

-v|-version : Prints LogPacker CLI version
-m|-message : Log message
-t|-tag : Log tag name (optional)

./logpacker_cli -m "Test message"
Sent Message ID: E7DFDC74-81D2-1D76-878D-1B5620E6162C

You can also configure almost any logger to send logs directly to the LogPacker Server/Cloud via TCP (to the Server Daemon) or HTTP (to the Cloud).

curl -XPOST https://storageX.logpacker.com/save -d '{
    "client": {
        "user_id":     "1001",
        "user_name":   "John",
        "environment": "production",
        "agent":       "Nexus 100",
        "platform":    "js",
        "version":     "v0.1.0",
        "os":          "android"
    },
    "messages": [
        {
            "message":   "error message or event message",
            "trace":     "exception trace",
            "source":    "auth",
            "line":      100,
            "column":    20,
            "log_level": 1,
            "tag_name":  "android",
            "unixts":    1481273962
        }
    ],
    "cloud_key": ""
}'