# JIT Kernel

JIT (Just In Time) Kernel contains code generated at runtime, together with other implementations of the same logic.
Each implementation has its own condition of use, defined in `CanBeUsed`.
They are combined to get the best performance for a single independent function.
They can be very simple functions such as vector multiplication, or complicated functions such as LSTM.
They can also be composed with other existing jit kernels to build up a complex function.
Currently it is only supported on CPU.
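
For example, an implementation under `more` can gate itself on the hardware capability and the problem size. The sketch below is modeled on the mkl `VMul` kernel; the exact condition in `more/mkl/mkl.cc` is authoritative.

```cpp
// A usage condition (sketch): prefer the MKL-based float VMul kernel
// only when AVX-512F is available and the vector is long enough.
template <>
bool VMulKernel<float>::CanBeUsed(const int& d) const {
  return platform::MayIUse(platform::avx512f) && d > 512;
}
```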

## Contents

```txt
PaddlePaddle/Paddle/paddle/fluid/
├── ...
└── operators/
    ├── .../
    └── jit/
        ├── ...
        ├── gen/
        │   └── ...
        ├── more/
        │   ├── ...
        │   ├── mkl/
        │   │   └── ...
        │   ├── mkldnn/
        │   │   └── ...
        │   ├── mix/
        │   │   └── ...
        │   ├── intrinsic/
        │   │   └── ...
        │   └── openblas/
        │       └── ...
        └── refer/
            └── ...
```

All basic definitions of jit kernels are located in `paddle/fluid/operators/jit`, including the three key folders `refer`, `gen`, and `more`. Each kernel has a single unique name, but may have several implementations with the same functionality.

- `refer`: Each kernel must have a reference implementation on CPU, which should focus only on correctness and must not depend on any third-party libraries.
- `gen`: Generated code is kept here. These implementations are designed for the best performance and depend on Xbyak.
- `more`: All other implementations are kept in this folder, with one directory per library or method kind, such as mkl, mkldnn, openblas, or intrinsic code. Each implementation should have its own advantage.

## How to use

We provide these methods to get the functions:
- `GetAllCandidateFuncs`. Returns all the supported implementations, every one of which produces the same result. You can run a benchmark at runtime to choose which one should actually be used.
- `GetDefaultBestFunc`. Returns a single default function pointer, tuned offline with some general configurations and attributes. This should cover most situations.
- `KernelFuncs::Cache()`. Gets the default function and caches it for the next call with the same attribute.
- `GetReferFunc`. Gets only the reference implementation on CPU; all the other implementations share the same logic as this reference code.

And here are some examples:

Get from cache:

```cpp
    using T = float;
    jit::seq_pool_attr_t attr(width, jit::SeqPoolType::kSum);
    auto seqpool_func = jit::KernelFuncs<jit::SeqPoolTuple<T>, platform::CPUPlace>::Cache().At(attr);
    seqpool_func(src_data, dst_data, &attr);
```

Get all implementations and run once:

```cpp
    using T = float;
    jit::seq_pool_attr_t attr(width, jit::SeqPoolType::kSum);
    auto funcs = jit::GetAllCandidateFuncsWithTypes<jit::SeqPoolTuple<T>, platform::CPUPlace>(attr);
    for (auto f : funcs) {
        LOG(INFO) << "Kernel implementation type: " << f.first;
        f.second(src_data, dst_data, &attr);
    }
```
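
Get the default best function, or the reference implementation for a correctness check. This is a minimal sketch; the template arguments mirror the two examples above, and `paddle/fluid/operators/jit/helper.h` is authoritative for the exact signatures:

```cpp
    using T = float;
    jit::seq_pool_attr_t attr(width, jit::SeqPoolType::kSum);
    // The offline-tuned default choice for this attribute.
    auto best_func = jit::GetDefaultBestFunc<jit::SeqPoolTuple<T>, platform::CPUPlace>(attr);
    best_func(src_data, dst_data, &attr);
    // The CPU reference implementation, handy for verifying results.
    auto refer_func = jit::GetReferFunc<jit::SeqPoolTuple<T>>();
    refer_func(src_data, dst_data, &attr);
```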

All kernels are included in `paddle/fluid/operators/jit/kernels.h`, which is automatically generated at compile time. You only need to include this one header to get all the registered kernels.

## Solid Test

- Unit Test
    All functions should be compared with the corresponding reference functions, covering at least the `float` and `double` data types.
- Benchmark
    All functions should be benchmarked to make sure `jit::GetDefaultBestFunc` obtains the best performance for all attributes.
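
For instance, a unit test can run every candidate implementation against the reference result. The sketch below illustrates the idea in the style of the examples above; `ExpectEQ` stands in for an element-wise comparison helper, and the real tests live in `test.cc`:

```cpp
    using T = float;
    jit::seq_pool_attr_t attr(width, jit::SeqPoolType::kSum);
    std::vector<T> ref_out(width);
    jit::GetReferFunc<jit::SeqPoolTuple<T>>()(src_data, ref_out.data(), &attr);
    for (auto f : jit::GetAllCandidateFuncsWithTypes<jit::SeqPoolTuple<T>,
                                                     platform::CPUPlace>(attr)) {
      std::vector<T> out(width, static_cast<T>(0));
      f.second(src_data, out.data(), &attr);
      ExpectEQ(out.data(), ref_out.data(), width);  // stand-in comparator
    }
```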

# How to add new kernel

## Required

1. Add `your_key` to `KernelType`.
2. Add your new `KernelTuple`, which must include `your_key`. It should be a combination of the data type, attribute type, and function type. You can refer to `SeqPoolTuple`; a hypothetical sketch follows this list.
3. Add the reference function of `your_key`.
Note:
    - It should run on CPU and must not depend on any third-party libraries.
    - Add `USE_JITKERNEL_REFER(your_key)` in `refer/CMakeLists.txt` to make sure this code can be used.
4. Add a unit test in `test.cc`, and verify at least `float` and `double`.
Test more data types for some special functions if necessary, for example `int8`.
5. Add functions in `benchmark.cc` to test all functions of the same `KernelType`. Make sure `GetDefaultBestFunc` always gets the best one.
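
As referenced in step 2, here is a hypothetical sketch of a new `KernelTuple`. The names `kYourKey`, `your_attr_t`, and the function signature are placeholders; `SeqPoolTuple` in `kernel_base.h` shows the real layout:

```cpp
// A hypothetical KernelTuple for your_key.
template <typename T>
struct YourKeyTuple {
  static constexpr KernelType kernel_type = kYourKey;  // the new entry in KernelType
  typedef T data_type;                                 // element type, e.g. float
  typedef your_attr_t attr_type;                       // placeholder attribute type
  typedef void (*func_type)(const T*, T*, const your_attr_t*);  // placeholder signature
};
```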

## Optional

Add more implementations of `your_key` for performance enhancement.

1. Add functions based on generated code in `gen`. They should be derived from `JitCode` and have a corresponding creator derived from `JitCodeCreator`, which will be registered for `your_key`.
2. If a new attribute type is added, you should specialize `JitCodeKey` for this type; a hypothetical sketch follows this list.
3. Add more functions in `more`. You can use any third-party library you wish, such as mkl, mkldnn, or intrinsic code, to reach the best performance.
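
As referenced in step 2, a `JitCodeKey` specialization maps an attribute value to a cache key so that generated code can be cached per attribute. The sketch below is hypothetical: `your_attr_t` and its `w` field are placeholders, and the declaration in `kernel_key.h` is authoritative for the return type:

```cpp
// Hypothetical specialization: key the code cache on the attribute's width.
template <>
int64_t JitCodeKey<your_attr_t>(const your_attr_t& attr) {
  return static_cast<int64_t>(attr.w);
}
```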