PARL is a flexible and highly efficient reinforcement learning framework.
Features
Reproducible. We provide algorithms that stably reproduce the results of many influential reinforcement learning algorithms.
Large Scale. Supports high-performance parallel training with thousands of CPUs and multiple GPUs.
Reusable. Algorithms provided in the repository can be adapted to a new task directly by defining a forward network; the training mechanism is built automatically.
Extensible. Build new algorithms quickly by inheriting the abstract classes in the framework.
Abstractions

Model is abstracted to construct the forward network, which defines a policy network or critic network given the state as input.
Algorithm describes the mechanism to update parameters in Model and often contains at least one model.
Agent, a data bridge between the environment and the algorithm, is responsible for data I/O with the outside environment and describes data preprocessing before feeding data into the training process.
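To make the relationship concrete, here is a minimal skeleton of the three abstractions. The base classes parl.Model, parl.Algorithm and parl.Agent come from the framework, but the method names and constructor signatures below are illustrative assumptions rather than PARL's exact API; see the tutorial and API documentation for the real interfaces.

import parl

class MyModel(parl.Model):
    # The forward network: maps an observation to policy logits or a value.
    def forward(self, obs):
        ...  # e.g. pass obs through a few layers and return the output

class MyAlgorithm(parl.Algorithm):
    # Holds at least one Model and defines how its parameters are updated.
    def __init__(self, model):
        super(MyAlgorithm, self).__init__(model)

    def learn(self, obs, action, reward):
        ...  # compute a loss from self.model's output and apply gradients

class MyAgent(parl.Agent):
    # The data bridge: handles I/O with the environment and preprocesses
    # data before feeding it into the training process.
    def __init__(self, algorithm):
        super(MyAgent, self).__init__(algorithm)

    def sample(self, obs):
        ...  # preprocess obs, query the model for an action, return it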
Note: For more information about base classes, please visit our tutorial and API documentation.
Parallelization
PARL provides a compact API for distributed training, allowing users to turn their code into a parallelized version by simply adding a decorator. For more information about our APIs for parallel training, please visit our documentation.
Here is a Hello World example to demonstrate how easy it is to leverage external computation resources.
#============Agent.py=================
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

# Initialize parallel communication before creating remote objects.
parl.connect('localhost:8037')
agent = Agent()
agent.say_hello()
ans = agent.sum(1, 5)  # runs remotely, without consuming any local computation resources
Two steps to use external computation resources:
- Use the parl.remote_class decorator on a class; it is then transformed into a new class whose instances can run on other CPUs or machines.
- Call parl.connect to initialize parallel communication before creating an object. Calling any function of these objects does not consume local computation resources, since they are executed elsewhere.
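Note that this example assumes a PARL cluster is already listening at localhost:8037. In recent PARL versions a local cluster is started with the xparl command-line tool, e.g. xparl start --port 8037 (treat the exact flags as an assumption and check the documentation of your installed version).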

Users can write code in a simple way, just as they would write multi-threaded code, but with actors consuming remote resources. We have also provided examples of parallelized algorithms such as IMPALA, A2C and GA3C. For more details on usage, please refer to these examples.
Installation
Dependencies
- Python 2.7 or 3.5+ (on Windows, PARL only supports Python 3.6+).
- paddlepaddle>=1.6.1 (optional; not required if you only want to use the parallelization APIs)
pip install parl
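After installation, a quick smoke test confirms the package imports correctly. This sketch assumes parl exposes a __version__ attribute, which is conventional for PyPI packages but not stated above:

import parl

# Smoke test: confirm the package imports and report its version
# (__version__ is assumed here, not guaranteed by this README).
print(parl.__version__)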