Unverified commit 7cc9677c, authored by Bo Zhou, committed by GitHub

add docs for the cluster (#112)

* add docs for the cluster

* mispelling issues

* minor change

* fix comment

* fix comments from yuxiang

* fix comment

* fix the bug of hyperlink

* rename
Parent ee07ec21
@@ -49,7 +49,7 @@ Note that each ``algorithm`` should have two functions implemented:
 - ``learn``
-    updates the model's parameters given trainsition data
+    updates the model's parameters given transition data
 - ``predict``
     predicts an action given current environmental state.
@@ -60,6 +60,13 @@ Abstractions
    getting_started.rst
 
+.. toctree::
+   :maxdepth: 2
+   :caption: Parallel Training
+
+   parallel_training/setup.rst
+   parallel_training/recommended_practice.rst
+
 .. toctree::
    :maxdepth: 1
    :caption: High-quality Implementations
Recommended Practice
---------------------
.. image:: ./poster.png
:width: 200px
:align: center
| This tutorial shows how to use ``@parl.remote_class`` to implement parallel computation with **multi-threading**.
| Python performs poorly in multi-threading because of the `GIL <https://realpython.com/python-gil/>`_, so multi-threaded Python programs usually bring no speed benefit, unlike programs written in other languages such as ``C++`` and ``Java``.
Here we examine the performance of Python threads. First, let's run the following code:
.. code-block:: python
class A(object):
def run(self):
ans = 0
for i in range(100000000):
ans += i
a = A()
for _ in range(5):
a.run()
| This code takes **17.46** seconds to finish counting to 1e8 five times.
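| The tutorial does not show its timing code; as a minimal sketch (the use of ``time.perf_counter`` is our own assumption), the measurement could look like this:

.. code-block:: python

    import time

    class A(object):
        def run(self):
            ans = 0
            for i in range(100000000):
                ans += i

    # Timing sketch (not from the tutorial): measure the serial run.
    start = time.perf_counter()
    a = A()
    for _ in range(5):
        a.run()
    print("elapsed: %.2f seconds" % (time.perf_counter() - start))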
| Now let's implement a thread-based version using the Python standard library ``threading``, as shown below.
.. code-block:: python
import threading
class A(object):
def run(self):
ans = 0
for i in range(100000000):
ans += i
threads = []
for _ in range(5):
a = A()
th = threading.Thread(target=a.run)
th.start()
threads.append(th)
for th in threads:
th.join()
| It takes **41.35** seconds, much slower than the previous code, which finishes the counting serially. Since performance is limited by the GIL, only a single CPU core runs the task at any moment, and it has to spend additional time switching between the threads.
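| Part of this overhead can be made visible (a detail we add here; the tutorial itself does not discuss it): CPython periodically forces the running thread to release the GIL, so CPU-bound threads merely take turns on a single core.

.. code-block:: python

    import sys

    # CPython's switch interval (0.005 s by default): this is how often the
    # interpreter asks the running thread to give up the GIL, so the threaded
    # version pays for switching without gaining any parallelism.
    print(sys.getswitchinterval())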
| Finally, let's try to use PARL:
.. code-block:: python
:emphasize-lines: 4,11
import threading
import parl
@parl.remote_class
class A(object):
def run(self):
ans = 0
for i in range(100000000):
ans += i
threads = []
parl.connect("localhost:6006")
for _ in range(5):
a = A()
th = threading.Thread(target=a.run)
th.start()
threads.append(th)
for th in threads:
th.join()
.. image:: ./elapsed_time.jpg
:width: 400px
:align: center
| Only **3.74** seconds are needed! As you can see from the code above, it is ``@parl.remote_class`` that makes the change happen. By simply adding this decorator, we achieve real multi-threaded computation.
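| If each task also produces a result, a remote call returns its value just like an ordinary method call. The sketch below (``ThreadPoolExecutor`` and the ``return ans`` line are our additions, not part of this tutorial) collects all five results:

.. code-block:: python

    from concurrent.futures import ThreadPoolExecutor

    import parl

    @parl.remote_class
    class A(object):
        def run(self):
            ans = 0
            for i in range(100000000):
                ans += i
            return ans  # the result is sent back to the local process

    parl.connect("localhost:6006")
    actors = [A() for _ in range(5)]
    # Each remote call blocks only the pool thread that issued it,
    # so the five actors compute in parallel.
    with ThreadPoolExecutor(max_workers=5) as pool:
        results = list(pool.map(lambda actor: actor.run(), actors))
    print(results)  # five copies of sum(range(100000000))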
Cluster Setup
=============
Setup Command
###################
This tutorial demonstrates how to set up a cluster.
To start a PARL cluster, we can execute two ``xparl`` commands; the first one, shown below, launches the master node (the second, ``xparl connect``, is covered in the next part):
.. code-block:: bash
xparl start --port 6006
This command starts a master node to manage computation resources and adds the local CPUs to the cluster.
We use port ``6006`` for demonstration; it can be any available port.
Adding More Resources
#####################
.. note::
If you have only one machine, you can ignore this part.
If you would like to add more CPUs (computation resources) to the cluster, run the following command on the other machines.
.. code-block:: bash
xparl connect --address localhost:6006
It starts a worker node that provides the CPUs of the machine to the master. A worker uses all of the machine's CPUs by default. If you wish to specify the number of CPUs to be used, run the command with ``--cpu_num <cpu_num>`` (e.g. ``--cpu_num 10``).
Note that ``xparl connect`` can be run at any time, on any machine, to add more CPUs to the cluster.
Example
###################
Here we give an example demonstrating how to use ``@parl.remote_class`` for parallel computation.
.. code-block:: python
import parl
@parl.remote_class
class Actor(object):
def hello_world(self):
print("Hello world.")
def add(self, a, b):
return a + b
# Connect to the master node.
parl.connect("localhost:6006")
actor = Actor()
actor.hello_world()  # no log in the current terminal, as the computation is placed in the cluster.
actor.add(1, 2) # return 3
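Because ``hello_world`` runs inside the cluster, its output goes to the cluster's logs rather than to the local terminal. If you want the text locally, a simple variant (our sketch, not part of the original example) is to return the string instead of printing it:

.. code-block:: python

    import parl

    @parl.remote_class
    class Actor(object):
        def hello_world(self):
            # Variant sketch: returning the value sends it back to the
            # local process instead of printing inside the cluster.
            return "Hello world."

    parl.connect("localhost:6006")
    actor = Actor()
    print(actor.hello_world())  # prints locally: Hello world.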
Shutdown the Cluster
####################
Run ``xparl stop`` on the machine that runs the master node to stop the cluster processes. Worker nodes on other machines will exit automatically after the master node is stopped.
Further Reading
###################
| Now we know how to set up a cluster and use it by simply adding the ``@parl.remote_class`` decorator.
| In the `next_tutorial`_, we will show how this decorator helps us implement **real** multi-threaded computation in Python, breaking the limitation of the Global Interpreter Lock (GIL).
.. _`next_tutorial`: https://parl.readthedocs.io/parallel_training/recommended_practice.html