Unverified commit b514ee8b authored by Avi Aryan

added adaptor docs #37

Parent 7b2398ca
@@ -197,13 +197,15 @@ Below is a list of each adaptor and its support of the feature:
Each adaptor has its own README page with details on configuration and capabilities.
* [elasticsearch](./adaptor/elasticsearch)
* [file](./adaptor/file)
* [mongodb](./adaptor/mongodb)
* mssql
* [postgresql](./adaptor/postgres)
* [rabbitmq](./adaptor/rabbitmq)
* [rethinkdb](./adaptor/rethinkdb)
* [csv](docs/adaptors/csv)
* [elasticsearch](docs/adaptors/elasticsearch)
* [file](docs/adaptors/file)
* [mongodb](docs/adaptors/mongodb)
* [mssql](docs/adaptors/mssql)
* [mysql](docs/adaptors/mysql)
* [postgresql](docs/adaptors/postgres)
* [rabbitmq](docs/adaptors/rabbitmq)
* [rethinkdb](docs/adaptors/rethinkdb)
#### Native Functions
# CSV adaptor
The CSV adaptor reads and writes CSV files.
# Elasticsearch adaptor
The [elasticsearch](https://www.elastic.co/) adaptor sends data to the configured endpoints.
Supported versions are listed below.
| Version | Note |
| --- | --- |
| 1.X | This version does not support bulk operations and will thus be much slower. |
| 2.X | Will only receive bug fixes, please consider upgrading. |
| 5.X | Most recent and supported version. |
***IMPORTANT***
Transporter automatically keeps the source `_id` as the Elasticsearch document `_id`. If you
would rather use the auto-generated `_id` field for Elasticsearch but still retain the
originating `_id` from the source, you'll need to include a transform function similar to the
following (assumes a MongoDB source):
```javascript
module.exports = function(msg) {
  // copy the original MongoDB ObjectId into a separate field...
  msg.data["mongo_id"] = msg.data._id["$oid"];
  // ...then drop _id so Elasticsearch auto-generates its own
  msg.data = _.omit(msg.data, ["_id"]);
  return msg;
};
```
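The transform above can be sketched in plain Node.js. Inside Transporter's transform environment the underscore library (`_`) is available; here the equivalent of `_.omit` is inlined so the sketch runs anywhere, and the sample message shape and ObjectId value are illustrative:

```javascript
// Plain-JavaScript sketch of the transform above; _.omit is inlined.
function transform(msg) {
  // Copy the original MongoDB ObjectId into a separate field...
  msg.data["mongo_id"] = msg.data._id["$oid"];
  // ...then drop _id so Elasticsearch can auto-generate its own.
  var data = {};
  for (var k in msg.data) {
    if (k !== "_id") data[k] = msg.data[k];
  }
  msg.data = data;
  return msg;
}

var out = transform({ data: { _id: { "$oid": "58bd0f0c2f3a4e0011223344" }, title: "doc" } });
console.log(JSON.stringify(out.data));
// {"title":"doc","mongo_id":"58bd0f0c2f3a4e0011223344"}
```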
***NOTE***
By using the elasticsearch auto-generated `_id`, it is not currently possible for transporter to
process update/delete operations. Future work is planned in [#39](https://github.com/compose/transporter/issues/39)
to address this problem.
### Configuration:
```javascript
es = elasticsearch({
  "uri": "https://username:password@hostname:port/thisgetsignored",
  "timeout": "10s", // optional, defaults to 30s
  "aws_access_key": "XXX", // optional, used for signing requests to AWS Elasticsearch service
  "aws_access_secret": "XXX" // optional, used for signing requests to AWS Elasticsearch service
})
```
# file adaptor
The file adaptor reads data from and writes data to the configured locations.
***NOTE***
This adaptor is primarily used for testing purposes.
### Configuration:
```javascript
f = file({
  "uri": "stdout://"
})
```
# MongoDB adaptor
The [MongoDB](https://www.mongodb.com/) adaptor is capable of reading/tailing collections and
receiving data for inserts.
`collection_filters` is a JSON string whose top-level keys are collection names and whose values
are queries used when iterating over the corresponding collection. The commented-out example below
would only include documents where the `i` field has a value greater than `10`.
***NOTE*** You may want to check your collections to ensure the proper index(es) are in place or performance may suffer.
### Configuration:
```javascript
m = mongodb({
  "uri": "mongodb://127.0.0.1:27017/test"
  // "timeout": "30s",
  // "tail": false,
  // "ssl": false,
  // "cacerts": ["/path/to/cert.pem"],
  // "wc": 1,
  // "fsync": false,
  // "bulk": false,
  // "collection_filters": "{\"foo\": {\"i\": {\"$gt\": 10}}}"
})
```
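Because `collection_filters` is a JSON string embedded in the config, hand-escaping the nested quotes is error-prone. One way to produce the value (a sketch runnable in any Node.js, not a Transporter API) is with `JSON.stringify`:

```javascript
// Build the collection_filters value programmatically instead of
// hand-escaping it. This filter matches the commented-out example in
// the config: documents in "foo" where `i` is greater than 10.
var filters = { foo: { i: { "$gt": 10 } } };
var value = JSON.stringify(filters);
console.log(value); // {"foo":{"i":{"$gt":10}}}
```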
# MSSQL adaptor
The MSSQL adaptor works with Microsoft SQL Server.
# MySQL adaptor
The MySQL adaptor works with MySQL Server.
# PostgreSQL adaptor
The [PostgreSQL](https://www.postgresql.org/) adaptor is capable of reading/tailing tables
using logical decoding and receiving data for inserts.
### Configuration:
```javascript
pg = postgres({
  "uri": "postgres://127.0.0.1:5432/test"
})
```
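Tailing via logical decoding requires the PostgreSQL server itself to be configured for it. A minimal sketch of the relevant `postgresql.conf` settings (these are standard PostgreSQL parameters, not Transporter options; the server must be restarted after changing them):

```
wal_level = logical
max_replication_slots = 4   # at least one free slot for the adaptor
max_wal_senders = 4
```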
# RabbitMQ adaptor
The [RabbitMQ](http://www.rabbitmq.com/) adaptor is capable of consuming and publishing JSON data.
When publishing data, you need to configure the `routing_key`; the exchange is taken from the
message `namespace` (i.e. the database collection/table). If `key_in_field` is set to true,
Transporter will use the field named by `routing_key` to look up the routing key value from the data.
***NOTE***
`key_in_field` defaults to false, in which case the static `routing_key` is used. If you
set `routing_key` to an empty string, no routing key will be set in the published message.
### Configuration:
```javascript
rmq = rabbitmq({
  "uri": "amqp://127.0.0.1:5672/",
  "routing_key": "test",
  "key_in_field": false
  // "delivery_mode": 1, // non-persistent (1) or persistent (2)
  // "api_port": 15672,
  // "ssl": false,
  // "cacerts": ["/path/to/cert.pem"]
})
```
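The routing-key selection described above can be sketched in plain JavaScript. `resolveRoutingKey` is a hypothetical helper for illustration, not part of Transporter's API:

```javascript
// Sketch of the routing-key logic: with key_in_field set to true,
// routing_key names a field in the message data whose value becomes
// the routing key; otherwise routing_key is used verbatim (and an
// empty string means no routing key is set on the published message).
function resolveRoutingKey(config, data) {
  if (config.key_in_field) {
    return data[config.routing_key];
  }
  return config.routing_key;
}

console.log(resolveRoutingKey({ routing_key: "test", key_in_field: false }, {})); // "test"
console.log(resolveRoutingKey({ routing_key: "kind", key_in_field: true }, { kind: "a" })); // "a"
```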
# RethinkDB adaptor
The [RethinkDB](https://github.com/rethinkdb/rethinkdb) adaptor is capable of reading/tailing tables and
receiving data for inserts.
### Configuration:
```javascript
r = rethink({
  "uri": "rethink://127.0.0.1:28015/"
  // "timeout": "30s",
  // "tail": false,
  // "ssl": false,
  // "cacerts": ["/path/to/cert.pem"]
})
```