Log Analysis Language (LAL) in SkyWalking is essentially a Domain-Specific Language (DSL) for analyzing logs. You can use
LAL to parse, extract, and save the logs, as well as correlate the logs with traces (by extracting the trace ID,
segment ID, and span ID) and with metrics (by generating metrics from the logs and sending them to the meter system).
The LAL config files are in YAML format and are located under the directory `lal`. You can
set `log-analyzer/default/lalFiles` in the `application.yml` file or set the environment variable `SW_LOG_LAL_FILES` to
activate specific LAL config files.
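A minimal LAL config file might look like the sketch below; the rule name and the `dsl` body are purely illustrative, and the exact schema may vary between SkyWalking versions, so check the config files shipped with your release.

```yaml
rules:
  - name: example # an illustrative rule name
    dsl: |
      filter {
        // ... parsers, extractors, and sinks
      }
```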
## Filter
A filter is a group of [parser](#parser), [extractor](#extractor) and [sink](#sink). Users can use one or more filters
to organize their processing logic. Every piece of log will be sent to all filters in an LAL rule. The piece of log
sent into the filter is available as the property `log` in the LAL, so you can access the log's service name
via `log.service`. For all available fields of `log`, please refer to [the protocol definition](https://github.com/apache/skywalking-data-collect-protocol/blob/master/logging/Logging.proto#L41).
All components are executed sequentially in the order they are declared.
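As a sketch of how the three kinds of components line up inside a filter (the parser choice, the tag name, and the sampler ID here are illustrative; see the corresponding sections for the real options):

```groovy
filter {
    // 1. parser: structure the raw log content; results are available as `parsed`
    json {
    }
    // 2. extractor: pick fields out of the parsed log, e.g. tag it with its level
    extractor {
        tag level: parsed.level
    }
    // 3. sink: decide whether and how the log is persisted
    sink {
        sampler {
            rateLimit("ExampleSampler") {
                qps 100
            }
        }
    }
}
```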
### Global Functions
There are functions globally available that you can use in all components (i.e. parsers, extractors, and sinks)
when needed.

- `abort`
By default, all declared components are executed no matter what flags (`dropped`, `saved`, etc.) have been set. There
are cases where you may want the filter chain to stop earlier when specified conditions are met. The `abort` function aborts
the remaining filter chain from where it's declared: none of the remaining components will be executed at all.
The `abort` function serves as a fast-fail mechanism in LAL.
```groovy
filter {
    if (log.service == "TestingService") { // Don't waste resources on TestingServices
        abort {} // all remaining components won't be executed at all
    }
    // ... parsers, extractors, sinks
}
```

### Sink

Sinks are the persistent layer of the LAL. By default, all the logs of each filter are persisted into the storage.
However, there are some mechanisms that allow you to selectively save some logs, or even drop all the logs after you've
extracted useful information, such as metrics.
#### Sampler
Sampler allows you to save the logs in a sampling manner. Currently, the sampling strategy `rateLimit` is supported;
contributions of more sampling strategies are welcome. If multiple samplers are specified, the last one determines the
final sampling result; see examples in [Enforcer](#enforcer).
`rateLimit` samples at most `n` logs per second. `rateLimit("SamplerID")` requires an ID for the sampler; sampler
declarations with the same ID share the same sampler instance, and thus share the same `qps` and resetting logic.
Examples:
```groovy
filter {
    // ... parser

    sink {
        sampler {
            if (parsed.service == "ImportantApp") {
                rateLimit("ImportantAppSampler") {
                    qps 30 // samples 30 pieces of logs every second for service "ImportantApp"
                }
            } else {
                rateLimit("OtherSampler") {
                    qps 3 // samples 3 pieces of logs every second for services other than "ImportantApp"
                }
            }
        }
    }
}
```
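Because samplers are shared by ID, declarations in different filters can also draw from a single budget. A sketch (the ID `SharedSampler` is illustrative):

```groovy
filter { // filter A
    // ... parser
    sink {
        sampler {
            rateLimit("SharedSampler") { // same ID as in filter B below,
                qps 10                   // so both filters share one 10-qps budget
            }
        }
    }
}
filter { // filter B
    // ... parser
    sink {
        sampler {
            rateLimit("SharedSampler") {
                qps 10
            }
        }
    }
}
```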
#### Dropper
Dropper is a special sink: all the logs are dropped without exception. This is useful when you want to
drop debugging logs,
```groovy
filter {
    // ... parser

    sink {
        if (parsed.level == "DEBUG") {
            dropper {}
        } else {
            sampler {
                // ... configs
            }
        }
    }
}
```
or when you have multiple filters, some of which are for extracting metrics, and only one of them needs to be persisted.
```groovy
filter { // filter A: this is for persistence
    // ... parser

    sink {
        sampler {
            // ... sampler configs
        }
    }
}
filter { // filter B: this is for extracting metrics
    // ... extractors to generate many metrics
    extractors {
        metrics {
            // ... metrics
        }
    }
    sink {
        dropper {} // drop all logs because they have been saved in "filter A" above.
    }
}
```
#### Enforcer
Enforcer is another special sink that forcibly samples the log. A typical use case of enforcer is when you have
configured a sampler and want to save some logs forcibly, for example, to save error logs even if a sampling mechanism
is configured.
```groovy
filter {
    // ... parser

    sink {
        sampler {
            // ... sampler configs
        }
        if (parsed.level == "ERROR" || parsed.userId == "TestingUserId") { // sample error logs or testing users' logs (userId == "TestingUserId") even if a sampling strategy is configured
            enforcer {
            }
        }
    }
}
```
You can use `enforcer` and `dropper` to simulate a probabilistic sampler like this:

```groovy
filter {
    // ... parser

    sink {
        sampler { // simulate a probabilistic sampler with a sample rate of 30% (not accurate though)
            if (Math.random() > 0.3) { // roughly 70% of the logs are dropped
                dropper {}
            } else { // the remaining ~30% are saved forcibly
                enforcer {}
            }
        }
    }
}
```