Unverified commit b6122aa2, authored by Bo Zhou, committed by GitHub

fix minor problems in the docs (#138)

* fix minor problems in the docs

* typo

* remove pip source

* fix monitor

* add performance of A2C

* Update README.md

* modify logger for GPU detection
Parent 46f59906
@@ -155,8 +155,8 @@ function main() {
         echo Running tests in $env ..
         echo `which pip`
         echo ========================================
-        pip install -i https://pypi.tuna.tsinghua.edu.cn/simple .
-        pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r .teamcity/requirements.txt
+        pip install .
+        pip install -r .teamcity/requirements.txt
         run_test_with_cpu $env
         run_test_with_cpu $env "DIS_TESTING_SERIALLY"
     done
Overview
========
Easy-to-use
@@ -22,10 +22,11 @@ Web UI for computation resources
 to the cluster. Users can view the cluster status at a WEB UI. It shows the
 detailed information for each worker (e.g., memory used) and each task submitted.
-Board compatibility
+Supporting various frameworks
 ###################
-| Our framework for distributed training is compatible with any other
-  frameworks, like tensorflow, pytorch or mxnet. By adding `@parl.remote_class`
+| PARL for distributed training is compatible with any other
+  frameworks, like tensorflow, pytorch and mxnet. By adding `@parl.remote_class`
   decorator to their codes, users can easily convert their codes to distributed
   computation.
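A minimal sketch of the usage described above, assuming a PARL cluster master is already running (for example via `xparl start`) and that `localhost:8010` is its address (a hypothetical value):

```python
import parl


@parl.remote_class
class Actor(object):
    def add(self, a, b):
        # Runs in a remote process on the cluster,
        # not in the local interpreter.
        return a + b


# Hypothetical master address; replace with your cluster's ip:port.
parl.connect('localhost:8010')

actor = Actor()         # the instance is dispatched to a remote worker
print(actor.add(1, 2))  # method calls are forwarded transparently
```

The class body may contain tensorflow, pytorch or mxnet code unchanged, which is why the decorator is framework-agnostic.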
@@ -33,7 +34,7 @@ Why PARL
 ########
 High throughput
-###############
+-------------------------
 | PARL uses a point-to-point connection for network communication in the
   cluster. Unlike other frameworks such as RLlib, which rely on Redis for
   communication, PARL is able to achieve much higher throughput. The results
@@ -41,7 +42,7 @@ High throughput
 achieved an increase of 160% on data throughput over Ray (RLlib).
 Automatic deployment
-####################
+-------------------------
 | Unlike other parallel frameworks which fail to import modules from
   external files, PARL will automatically package all related files and send
   them to remote machines.
@@ -61,4 +61,4 @@ Further Reading
 | Now we know how to set up a cluster and use this cluster by simply adding ``@parl.remote_class``.
 | In `next_tutorial`_, we will show how this decorator helps us implement **real** multi-threaded computation in Python, breaking the limitation of Python's Global Interpreter Lock (GIL).
-.. _`next_tutorial`: https://parl.readthedocs.io/parallel_training/recommended_practice.html
+.. _`next_tutorial`: https://parl.readthedocs.io/en/latest/parallel_training/recommended_practice.html
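As a preview of that tutorial, here is a hedged sketch of why the decorator sidesteps the GIL: each local thread only forwards method calls, while the actual computation runs in separate remote processes. The master address and workload below are illustrative assumptions:

```python
import threading

import parl


@parl.remote_class
class Worker(object):
    def heavy_compute(self, n):
        # CPU-bound work; executed in a remote process, so concurrent
        # calls are not serialized by this interpreter's GIL.
        return sum(i * i for i in range(n))


parl.connect('localhost:8010')  # hypothetical master address

workers = [Worker() for _ in range(5)]
results = [None] * 5


def run(idx, worker):
    results[idx] = worker.heavy_compute(10**6)


threads = [
    threading.Thread(target=run, args=(i, w))
    for i, w in enumerate(workers)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # five results computed in parallel
```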
@@ -8,11 +8,14 @@ Please see [here](https://gym.openai.com/envs/#atari) to know more about Atari games.
 ### Benchmark result
-Mean episode reward in training process after 10 million sample steps.
-![learninng_curve](learning_curve.png)
+Performance of A2C on various environments:
+<p align="center">
+<img src="result.png" alt="result" width="700"/>
+</p>
+
+|              |                |                  |                |                     |
+|--------------|----------------|------------------|----------------|---------------------|
+| Alien (1278) | Amidar (380)   | Assault (4659)   | Asterix (3883) | Atlantis (3040000)  |
+| Pong (20)    | Breakout (405) | Beamrider (3394) | Qbert (14528)  | SpaceInvaders (819) |
 ## How to use
 ### Dependencies
@@ -81,12 +81,12 @@ class TestClusterMonitor(unittest.TestCase):
         for i in range(10):
             workers[i].exit()
-        time.sleep(40)
+        time.sleep(60)
         self.assertEqual(10, len(cluster_monitor.data['workers']))
         for i in range(10, 20):
             workers[i].exit()
-        time.sleep(40)
+        time.sleep(60)
         self.assertEqual(0, len(cluster_monitor.data['workers']))
         master.exit()
@@ -77,7 +77,7 @@ def get_gpu_count():
             logger.info(
                 'CUDA_VISIBLE_DEVICES found gpu count: {}'.format(gpu_count))
         except:
-            logger.warning('Cannot find available GPU devices, using CPU now.')
+            logger.info('Cannot find available GPU devices, using CPU now.')
             gpu_count = 0
     else:
         try:
@@ -85,7 +85,7 @@ def get_gpu_count():
                 "-L"])).count('UUID')
             logger.info('nvidia-smi -L found gpu count: {}'.format(gpu_count))
         except:
-            logger.warning('Cannot find available GPU devices, using CPU now.')
+            logger.info('Cannot find available GPU devices, using CPU now.')
             gpu_count = 0
     return gpu_count
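For orientation, both hunks above live in `get_gpu_count`. The following self-contained sketch reconstructs the whole function from the visible lines; the `CUDA_VISIBLE_DEVICES` parsing at the top is an assumption, since that part of the function is collapsed in the diff:

```python
import os
import subprocess

from parl.utils import logger  # the logging helper used in the hunks above


def get_gpu_count():
    """Return the number of visible GPUs, falling back to 0 (CPU)."""
    env_devices = os.environ.get('CUDA_VISIBLE_DEVICES', None)
    if env_devices is not None:
        # Assumed branch: count the device ids listed in the env var.
        try:
            gpu_count = len(env_devices.split(','))
            logger.info(
                'CUDA_VISIBLE_DEVICES found gpu count: {}'.format(gpu_count))
        except:
            logger.info('Cannot find available GPU devices, using CPU now.')
            gpu_count = 0
    else:
        try:
            # Every GPU line printed by `nvidia-smi -L` contains a UUID.
            gpu_count = str(
                subprocess.check_output(["nvidia-smi", "-L"])).count('UUID')
            logger.info('nvidia-smi -L found gpu count: {}'.format(gpu_count))
        except:
            logger.info('Cannot find available GPU devices, using CPU now.')
            gpu_count = 0
    return gpu_count
```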