Logstash is a log collection tool that offers a wide range of inputs, outputs, and formatting plugins. It is better supported on Linux; if you want to run it on Windows, you will have to experiment to find out which commands actually work.
OS: Ubuntu 14.04, Windows 8
APP: JDK 8, Elasticsearch 1.6.0, logstash-1.5.2, kibana-4.1.1
Installing Ubuntu 14.04
Install Ubuntu 14.04 in Oracle VM VirtualBox. The annoying part is that the NAT adapter does not accept connections from the host, so an extra host-only adapter has to be added.
Edit the adapter that the host will connect through
sudo vi /etc/network/interfaces
auto eth1
iface eth1 inet dhcp
# bring up the new interface
sudo ifup eth1
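To make sure the host-only adapter actually came up and received an address (eth1 here is just the name it got on my VM; yours may differ), a quick check:
# show the address assigned to the host-only adapter
ifconfig eth1
# or with the newer tooling
ip addr show eth1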
Disable the firewall
sudo ufw disable
Update packages
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get clean
Installing JDK 8
Manual installation
cd /tmp
sudo mkdir -p /usr/lib/jvm
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz"
sudo tar -zxvf jdk-8u45-linux-x64.tar.gz -C /usr/lib/jvm
sudo mv /usr/lib/jvm/jdk1.8.0_45 /usr/lib/jvm/jdk8
Add environment variables
vi ~/.bashrc
# append the following at the end of the file
export JAVA_HOME=/usr/lib/jvm/jdk8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=${JAVA_HOME}/bin:$PATH
Apply immediately
source ~/.bashrc
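A quick way to confirm the JDK is picked up from the new variables:
java -version
echo $JAVA_HOME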
RPM installation
sudo apt-get install rpm
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u51-b16/jdk-8u51-linux-x64.rpm"
sudo rpm -ivh jdk-8u51-linux-x64.rpm
Repository (PPA) installation
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
# accept the Oracle license terms for JDK 8 up front
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections
sudo apt-get install oracle-java8-installer
Installing Elasticsearch
Manual installation
cd /tmp
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.tar.gz
sudo tar -zxvf elasticsearch-1.6.0.tar.gz
sudo mv elasticsearch-1.6.0 /usr/share/elasticsearch
cd /usr/share/elasticsearch
# start in the background (either command works)
./bin/elasticsearch -d
./bin/elasticsearch &
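To confirm Elasticsearch is actually up (assuming the default port 9200), hit its root endpoint; it should answer with a small JSON document containing the cluster name and version:
curl http://localhost:9200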
Create an init script for automatic startup
sudo vi /etc/init.d/elasticsearch
# contents:
#!/bin/sh
export JAVA_HOME=/usr/lib/jvm/jdk8
/usr/share/elasticsearch/bin/elasticsearch -d
# make it executable
sudo chmod 755 /etc/init.d/elasticsearch
# install sysv-rc-conf
sudo apt-get install sysv-rc-conf
# in the UI, tick runlevels 2-5 for the elasticsearch entry, then press q to quit
sudo sysv-rc-conf
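If your build of sysv-rc-conf supports the --list flag (support may vary), you can verify the settings without reopening the UI:
# list the configured runlevels for the elasticsearch script
sudo sysv-rc-conf --list elasticsearch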
DEB installation
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.deb
sudo dpkg -i elasticsearch-1.6.0.deb
# start the service
sudo service elasticsearch start
# enable start on boot
sudo update-rc.d elasticsearch defaults 95 10
The DEB package spreads its files across the standard system directories, so you have to know where to look:
# if you want to remove it:
#sudo dpkg -r elasticsearch
# binaries & plugin
#/usr/share/elasticsearch/bin
# log dir
#/var/log/elasticsearch
# data dir
#/var/lib/elasticsearch
# config dir
#/etc/elasticsearch
If startup fails because a file or directory does not exist, try editing:
sudo vi /etc/default/elasticsearch
#PID_DIR=/var/run/elasticsearch => PID_DIR=/var/run
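If it still refuses to start, the service status and the package's log directory usually reveal the cause (the log file name follows the cluster name, elasticsearch by default):
sudo service elasticsearch status
sudo tail -n 50 /var/log/elasticsearch/elasticsearch.log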
Configuration
Install plugins
cd /usr/share/elasticsearch
bin/plugin -install mobz/elasticsearch-head
# real-time shard overview: http://localhost:9200/_plugin/head/
bin/plugin -install lukas-vlcek/bigdesk
# real-time health monitoring: http://localhost:9200/_plugin/bigdesk/
bin/plugin -i elasticsearch/marvel/latest
# the official (paid) monitoring tool: http://localhost:9200/_plugin/marvel
bin/plugin -install royrusso/elasticsearch-HQ
# a nicer-looking all-in-one real-time management UI: http://localhost:9200/_plugin/HQ/
bin/plugin -install lmenezes/elasticsearch-kopf
# looks promising and can manage many features: http://localhost:9200/_plugin/kopf/#!/cluster
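Besides the web UIs, the cluster can also be checked from the command line; the second command assumes this ES version exposes plugins through the _cat API:
# overall cluster health (green or yellow is fine for a single node)
curl http://localhost:9200/_cluster/health?pretty
# list the installed plugins
curl http://localhost:9200/_cat/plugins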
Switch the application log to Log4j 2
If you don't switch, you will run into all sorts of odd log formats, plus the problem of multi-line logs whenever an exception is thrown, so switching is simply the easier path.
In the dependencies, Jackson has to be pulled in as well so that the JSON layout can produce its output.
apply plugin: 'java'
apply plugin: 'eclipse'
sourceCompatibility = 1.8
version = '1.0'
repositories {
    mavenCentral()
}
dependencies {
    compile 'org.apache.logging.log4j:log4j-core:2.3',
            'org.apache.logging.log4j:log4j-slf4j-impl:2.3',
            'com.fasterxml.jackson.core:jackson-core:2.5.4',
            'com.fasterxml.jackson.core:jackson-databind:2.5.4'
}
Log4j 2 follows convention over configuration: just place log4j2.xml under resources and there is no need to load the configuration file in code; Log4j 2 picks it up by default.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration xmlns="http://logging.apache.org/log4j/2.0/config">
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <!-- <JSONLayout compact="true" eventEol="true" /> -->
            <PatternLayout pattern="%d{yyyy/MM/dd HH:mm:ss.SSS} %-5level - %msg%n" />
        </Console>
        <!-- <Syslog name="syslog" host="localhost" port="514" protocol="UDP" /> -->
        <RollingFile name="RollingFile" fileName="D:/Logserver/testlog/rolling_app.json"
                     filePattern="D:/Logserver/testlog/rolling_app_%d{yyyy-MM-dd}.json">
            <JSONLayout compact="true" eventEol="true" locationInfo="true" />
            <Policies>
                <TimeBasedTriggeringPolicy />
            </Policies>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="console" />
            <AppenderRef ref="RollingFile" />
        </Root>
    </Loggers>
</Configuration>
Test program
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Test1 {
    private static final Logger log = LogManager.getLogger(Test1.class);

    public static void main(String[] args) {
        log.debug("{} {} {}", "SAMTest", "5", "Taiwan");
        log.info("{} {} {}", "SAM", "30", "Taiwan");
        try {
            throw new Exception("exception test");
        } catch (Exception e) {
            log.error("exception", e);
        }
    }
}
Here you can see that version 2 also implements slf4j's handy {} placeholders.
In the log file each entry is written as a single line; it has been pretty-printed here to make it easier to read.
This is an INFO entry:
{
    "timeMillis": 1436842674878,
    "thread": "main",
    "level": "INFO",
    "loggerName": "com.sam.log.Test1",
    "message": "SAM499 499 Taiwan",
    "endOfBatch": false,
    "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
    "source": {
        "class": "com.sam.log.Test1",
        "method": "main",
        "file": "Test1.java",
        "line": 13
    }
}
This is an ERROR entry:
{
    "timeMillis": 1436842674879,
    "thread": "main",
    "level": "ERROR",
    "loggerName": "com.sam.log.Test1",
    "message": "exception",
    "thrown": {
        "commonElementCount": 0,
        "localizedMessage": "exception test",
        "message": "exception test",
        "name": "java.lang.Exception",
        "extendedStackTrace": [
            {
                "class": "com.sam.log.Test1",
                "method": "main",
                "file": "Test1.java",
                "line": 17,
                "exact": true,
                "location": "bin/",
                "version": "?"
            }
        ]
    },
    "endOfBatch": false,
    "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
    "source": {
        "class": "com.sam.log.Test1",
        "method": "main",
        "file": "Test1.java",
        "line": 19
    }
}
No information is lost at all. The next step is to push these logs into Elasticsearch through logstash.
Installing logstash on Windows
Running it is simple: set up the environment and it is ready to go.
SET PATH=D:\Logserver\logstash-1.5.2\bin;%PATH%
SET JAVA_HOME=D:\IDE\Java\jre1.8.0_45
logstash agent -f agent-es.conf
If you want to verify that the conf file is valid, add a flag:
logstash agent -f agent-es.conf --configtest
A walk-through of the configuration file:
input {
    file {
        type => "my-type"
        path => ["D:/Logserver/testlog/rolling_app.json"]
        codec => "json"
    }
}
filter {
    date {
        match => [ "timeMillis", "UNIX_MS" ]
        target => "@timestamp"
        locale => "zh_TW"
        timezone => "Asia/Taipei"
    }
    if [level] == "DEBUG" {
        drop {}
    }
    if [level] == "INFO" {
        grok {
            match => [ "message", "%{GREEDYDATA:username} %{GREEDYDATA:age} %{GREEDYDATA:country}" ]
        }
    }
}
output {
    elasticsearch {
        host => "192.168.56.102"
        port => "9200"
        protocol => "http"
        codec => "json"
    }
    stdout { codec => rubydebug }
}
The configuration file is simple and splits into three sections: input, filter, and output.
input > file > type becomes the ES type name once the documents are indexed.
filter > date handles the "timeMillis": 1436842674879 field in the log,
which ends up in ES as "@timestamp": "2015-07-14T03:12:09.599Z".
filter > if [level] == "DEBUG": the level field already exists in the JSON, so it can be tested without any prior parsing step; this is the most convenient part of logging as JSON. Otherwise you would be stuck with checks like if "|ERROR|" in [message] { # if this is the 1st message in many lines, and matching on message is always less reliable. DEBUG entries are simply drop {}-ped and not collected.
filter > grok then splits message further into fields like the ones below:
"username": "SAM0",
"age": "0",
"country": "Taiwan"
output needs no further explanation.
By the way, the tcp output refused to connect for some unknown reason, so events are sent straight to ES instead.
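The documents below were pulled back out with a plain search request (the index name follows logstash's daily logstash-YYYY.MM.DD pattern, so adjust the date):
curl "http://192.168.56.102:9200/logstash-2015.07.14/_search?pretty"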
The complete data as stored in ES:
{
    "took": 3,
    "timed_out": false,
    "_shards": {
        "total": 5,
        "successful": 5,
        "failed": 0
    },
    "hits": {
        "total": 2,
        "max_score": 1,
        "hits": [
            {
                "_index": "logstash-2015.07.14",
                "_type": "my-type",
                "_id": "AU6Ko_DjHs08_r8b7xS3",
                "_score": 1,
                "_source": {
                    "timeMillis": 1436845074805,
                    "thread": "main",
                    "level": "ERROR",
                    "loggerName": "com.sam.log.Test1",
                    "message": "exception",
                    "thrown": {
                        "commonElementCount": 0,
                        "localizedMessage": "exception test",
                        "message": "exception test",
                        "name": "java.lang.Exception",
                        "extendedStackTrace": [
                            {
                                "class": "com.sam.log.Test1",
                                "method": "main",
                                "file": "Test1.java",
                                "line": 17,
                                "exact": true,
                                "location": "bin/",
                                "version": "?"
                            }
                        ]
                    },
                    "endOfBatch": false,
                    "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
                    "source": {
                        "class": "com.sam.log.Test1",
                        "method": "main",
                        "file": "Test1.java",
                        "line": 19
                    },
                    "@version": "1",
                    "@timestamp": "2015-07-14T03:37:54.805Z",
                    "host": "PCD-57",
                    "path": "D:/Logserver/testlog/rolling_app.json",
                    "type": "my-type"
                }
            },
            {
                "_index": "logstash-2015.07.14",
                "_type": "my-type",
                "_id": "AU6Ko-2lHs08_r8b7xS2",
                "_score": 1,
                "_source": {
                    "timeMillis": 1436845074805,
                    "thread": "main",
                    "level": "INFO",
                    "loggerName": "com.sam.log.Test1",
                    "message": "SAM 30 Taiwan",
                    "endOfBatch": false,
                    "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
                    "source": {
                        "class": "com.sam.log.Test1",
                        "method": "main",
                        "file": "Test1.java",
                        "line": 14
                    },
                    "@version": "1",
                    "@timestamp": "2015-07-14T03:37:54.805Z",
                    "host": "PCD-57",
                    "path": "D:/Logserver/testlog/rolling_app.json",
                    "type": "my-type",
                    "username": "SAM",
                    "age": "30",
                    "country": "Taiwan"
                }
            }
        ]
    }
}
Grok patterns
Matching with multiple patterns
filter {
    grok {
        match => [ "message", "(?<level>(\w+)?) (?<request_time>\d+(?:\.\d+)?) (?<test>(\w+)?)",
                   "message", "(?<level>(\w+)?) (?<request_time>(\w+)?) (?<test>(\w+)?)" ]
    }
}
The format for multiple patterns is shown above: when the first message pattern does not match, the second one is tried automatically.
Kibana
cd /tmp
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xvf kibana-*.tar.gz
# change host so Kibana only listens on localhost, since Nginx will sit in front of it later
vi ./kibana-4*/config/kibana.yml
host: "localhost"
Install to /opt/kibana; the init script fetched below assumes that path by default, so keeping them consistent makes setup easier.
sudo mkdir -p /opt/kibana
sudo cp -R ./kibana-4*/* /opt/kibana/
cd /etc/init.d && sudo wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4
Start the service and enable it at boot
sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start
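Kibana 4 listens on port 5601 by default, so a quick check that it is up:
curl -I http://localhost:5601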
Install Nginx
sudo apt-get install nginx apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
sudo vi /etc/nginx/sites-available/default
server {
    listen 80;
    server_name example.com;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
sudo service nginx restart
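Then test the proxy from the host machine, using the VM's host-only address from earlier and the basic-auth user just created (curl will prompt for the password):
curl -u kibanaadmin -I http://192.168.56.102/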
An alternative configuration: redirect HTTP to HTTPS and serve Kibana under a path prefix
server {
    listen 80;
    return 301 https://example.com;
}
server {
    listen *:443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/all.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    server_name example.com;
    access_log /var/log/nginx/kibana.access.log;
    location /kibana4/ {
        proxy_pass http://localhost:5601/;
        proxy_redirect http://localhost:5601/ /kibana4/;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
    }
}
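The certificate, key, and htpasswd file referenced above have to exist before Nginx will start. For testing, a self-signed pair can be generated like this (paths match the config above; use a real certificate in production):
sudo mkdir -p /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/server.key -out /etc/nginx/ssl/all.crt
sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin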