diff --git a/README.md b/README.md
index a06e91e2944e8a2b4be348c8b10a193cc66c58e3..3643c5657c8e2f683a96ab102ac4cdd471a4fa68 100644
--- a/README.md
+++ b/README.md
@@ -57,7 +57,7 @@ Two steps to use outer computation resources:
 2. call `parl.connect` to initialize parallel communication before creating an object. Calling any function of the objects **does not** consume local computation resources since they are executed elsewhere.
 
 PARL
-As shown in the above figure, real actors(orange circle) are running at the cpu cluster, while the learner(blue circle) is running at the local gpu with several remote actors(yellow circle with dotted edge).
+As shown in the above figure, real actors (orange circle) are running at the cpu cluster, while the learner (blue circle) is running at the local gpu with several remote actors (yellow circle with dotted edge).
 
 For users, they can write code in a simple way, just like writing multi-thread code, but with actors consuming remote resources. We have also provided examples of parallized algorithms like [IMPALA](examples/IMPALA), [A2C](examples/A2C) and [GA3C](examples/GA3C). For more details in usage please refer to these examples.
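For context, the hunk above sits next to the README's two-step recipe for using outer computation resources (`parl.remote_class` plus `parl.connect`). A minimal sketch of that usage is shown below; the cluster address `localhost:6006` and the `Actor` class are illustrative, assuming a local cluster has been started with `xparl start --port 6006`.

```python
import parl


# Step 1: decorate the class so instances can run on remote CPUs/machines.
@parl.remote_class
class Actor(object):
    def sum(self, a, b):
        # Runs on the cluster, not on the local machine.
        return a + b


# Step 2: connect to the cluster before creating any decorated object.
parl.connect("localhost:6006")

actor = Actor()
print(actor.sum(1, 5))  # executed remotely; no local computation resources consumed
```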