---
toc_priority: 21
toc_title: Input and Output Formats
---

# Formats for Input and Output Data {#formats}

ClickHouse can accept and return data in various formats. A format supported for input can be used to parse the data provided to `INSERT`s, to perform `SELECT`s from a file-backed table such as File, URL or HDFS, or to read an external dictionary. A format supported for output can be used to arrange the
results of a `SELECT`, and to perform `INSERT`s into a file-backed table.

The supported formats are:

| Format                                                          | Input | Output |
|-----------------------------------------------------------------|-------|--------|
| [TabSeparated](#tabseparated)                                   | ✔     | ✔      |
| [TabSeparatedRaw](#tabseparatedraw)                             | ✗     | ✔      |
| [TabSeparatedWithNames](#tabseparatedwithnames)                 | ✔     | ✔      |
| [TabSeparatedWithNamesAndTypes](#tabseparatedwithnamesandtypes) | ✔     | ✔      |
| [Template](#format-template)                                    | ✔     | ✔      |
| [TemplateIgnoreSpaces](#templateignorespaces)                   | ✔     | ✗      |
| [CSV](#csv)                                                     | ✔     | ✔      |
| [CSVWithNames](#csvwithnames)                                   | ✔     | ✔      |
| [CustomSeparated](#format-customseparated)                      | ✔     | ✔      |
| [Values](#data-format-values)                                   | ✔     | ✔      |
| [Vertical](#vertical)                                           | ✗     | ✔      |
| [VerticalRaw](#verticalraw)                                     | ✗     | ✔      |
| [JSON](#json)                                                   | ✗     | ✔      |
| [JSONCompact](#jsoncompact)                                     | ✗     | ✔      |
| [JSONEachRow](#jsoneachrow)                                     | ✔     | ✔      |
| [TSKV](#tskv)                                                   | ✔     | ✔      |
| [Pretty](#pretty)                                               | ✗     | ✔      |
| [PrettyCompact](#prettycompact)                                 | ✗     | ✔      |
| [PrettyCompactMonoBlock](#prettycompactmonoblock)               | ✗     | ✔      |
| [PrettyNoEscapes](#prettynoescapes)                             | ✗     | ✔      |
| [PrettySpace](#prettyspace)                                     | ✗     | ✔      |
| [Protobuf](#protobuf)                                           | ✔     | ✔      |
| [Avro](#data-format-avro)                                       | ✔     | ✔      |
| [AvroConfluent](#data-format-avro-confluent)                    | ✔     | ✗      |
| [Parquet](#data-format-parquet)                                 | ✔     | ✔      |
| [ORC](#data-format-orc)                                         | ✔     | ✗      |
| [RowBinary](#rowbinary)                                         | ✔     | ✔      |
| [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes)       | ✔     | ✔      |
| [Native](#native)                                               | ✔     | ✔      |
| [Null](#null)                                                   | ✗     | ✔      |
| [XML](#xml)                                                     | ✗     | ✔      |
| [CapnProto](#capnproto)                                         | ✔     | ✗      |

You can control some format processing parameters with the ClickHouse settings. For more information read the [Settings](../operations/settings/settings.md) section.

## TabSeparated {#tabseparated}

In TabSeparated format, data is written by row. Each row contains values separated by tabs. Each value is followed by a tab, except the last value in the row, which is followed by a line feed. Strictly Unix line feeds are assumed everywhere. The last row also must contain a line feed at the end. Values are written in text format, without enclosing quotation marks, and with special characters escaped.

This format is also available under the name `TSV`.

The `TabSeparated` format is convenient for processing data using custom programs and scripts. It is used by default in the HTTP interface, and in the command-line client’s batch mode. This format also allows transferring data between different DBMSs. For example, you can get a dump from MySQL and upload it to ClickHouse, or vice versa.

The `TabSeparated` format supports outputting total values (when using WITH TOTALS) and extreme values (when ‘extremes’ is set to 1). In these cases, the total values and extremes are output after the main data. The main result, total values, and extremes are separated from each other by an empty line. Example:

``` sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
```

``` text
2014-03-17      1406958
2014-03-18      1383658
2014-03-19      1405797
2014-03-20      1353623
2014-03-21      1245779
2014-03-22      1031592
2014-03-23      1046491

0000-00-00      8873898

2014-03-17      1031592
2014-03-23      1406958
```

### Data Formatting {#data-formatting}

Integer numbers are written in decimal form. Numbers can contain an extra “+” character at the beginning (ignored when parsing, and not recorded when formatting). Non-negative numbers can’t contain the negative sign. When reading, it is allowed to parse an empty string as a zero, or (for signed types) a string consisting of just a minus sign as a zero. Numbers that do not fit into the corresponding data type may be parsed as a different number, without an error message.

Floating-point numbers are written in decimal form. The dot is used as the decimal separator. Exponential entries are supported, as are ‘inf’, ‘+inf’, ‘-inf’, and ‘nan’. An entry of floating-point numbers may begin or end with a decimal point.
During formatting, accuracy may be lost on floating-point numbers.
During parsing, it is not strictly required to read the nearest machine-representable number.

Dates are written in YYYY-MM-DD format and parsed in the same format, but with any characters as separators.
Dates with times are written in the format `YYYY-MM-DD hh:mm:ss` and parsed in the same format, but with any characters as separators.
This all occurs in the system time zone at the time the client or server starts (depending on which of them formats data). For dates with times, daylight saving time is not specified. So if a dump has times during daylight saving time, the dump does not unequivocally match the data, and parsing will select one of the two times.
During a read operation, incorrect dates and dates with times can be parsed with natural overflow or as null dates and times, without an error message.

As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats YYYY-MM-DD hh:mm:ss and NNNNNNNNNN are differentiated automatically.
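
For example, both of the following input rows would be accepted for a hypothetical table `t` with a single `DateTime` column (a sketch, not one of the original examples):

``` sql
-- Hypothetical table and data, shown for illustration only: the first row uses the
-- YYYY-MM-DD hh:mm:ss form, the second a 10-digit Unix timestamp.
INSERT INTO t FORMAT TabSeparated
2014-03-17 12:00:00
1395060000
```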

Strings are output with backslash-escaped special characters. The following escape sequences are used for output: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. Parsing also supports the sequences `\a`, `\v`, and `\xHH` (hex escape sequences) and any `\c` sequences, where `c` is any character (these sequences are converted to `c`). Thus, reading data supports formats where a line feed can be written as `\n` or `\`, or as a line feed. For example, the string `Hello world` with a line feed between the words instead of space can be parsed in any of the following variations:

``` text
Hello\nworld

Hello\
world
```

The second variant is supported because MySQL uses it when writing tab-separated dumps.

The minimum set of characters that you need to escape when passing data in TabSeparated format: tab, line feed (LF) and backslash.

Only a small set of symbols are escaped. You can easily stumble onto a string value that your terminal will ruin in output.

Arrays are written as a list of comma-separated values in square brackets. Numbers in the array are formatted as usual. `Date` and `DateTime` types are written in single quotes. Strings are written in single quotes with the same escaping rules as above.
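
A minimal sketch (a hypothetical query, not one of the original examples) showing how arrays of numbers and dates are serialized:

``` sql
-- Hypothetical query: the numeric array is printed as [1,2,3], and the dates
-- inside the second array are enclosed in single quotes.
SELECT [1, 2, 3] AS numbers, [toDate('2014-03-17'), toDate('2014-03-18')] AS dates
FORMAT TSV
```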

[NULL](../sql_reference/syntax.md) is formatted as `\N`.

Each element of [Nested](../sql_reference/data_types/nested_data_structures/nested.md) structures is represented as an array.

For example:

``` sql
CREATE TABLE nestedt
(
    `id` UInt8,
    `aux` Nested(
        a UInt8,
        b String
    )
)
ENGINE = TinyLog
```

``` sql
INSERT INTO nestedt Values ( 1, [1], ['a'])
```

``` sql
SELECT * FROM nestedt FORMAT TSV
```

``` text
1  [1]    ['a']
```

## TabSeparatedRaw {#tabseparatedraw}

Differs from `TabSeparated` format in that the rows are written without escaping.
This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

This format is also available under the name `TSVRaw`.
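
A minimal sketch (hypothetical query): the tab inside the string below is written to the output as a real tab character rather than as `\t`:

``` sql
-- Hypothetical query: with TSVRaw, the output value is written without any escaping.
SELECT 'a string with a\ttab inside' AS s
FORMAT TSVRaw
```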

## TabSeparatedWithNames {#tabseparatedwithnames}

Differs from the `TabSeparated` format in that the column names are written in the first row.
During parsing, the first row is completely ignored. You can’t use column names to determine their position or to check their correctness.
(Support for parsing the header row may be added in the future.)

This format is also available under the name `TSVWithNames`.
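
For example (a hypothetical query), the first line of the output contains the column names `EventDate` and `c`, followed by the data row:

``` sql
-- Hypothetical query: a header row with the column names is written before the data.
SELECT toDate('2014-03-17') AS EventDate, 1406958 AS c
FORMAT TabSeparatedWithNames
```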

## TabSeparatedWithNamesAndTypes {#tabseparatedwithnamesandtypes}

Differs from the `TabSeparated` format in that the column names are written to the first row, while the column types are in the second row.
During parsing, the first and second rows are completely ignored.

This format is also available under the name `TSVWithNamesAndTypes`.
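
For example (a hypothetical query), the output starts with a row of column names, then a row of column types (`Date`, `UInt64`), then the data:

``` sql
-- Hypothetical query: two header rows are written, the column names first and the column types second.
SELECT toDate('2014-03-17') AS EventDate, toUInt64(1406958) AS c
FORMAT TabSeparatedWithNamesAndTypes
```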

## Template {#format-template}

This format allows specifying a custom format string with placeholders for values with a specified escaping rule.

It uses the settings `format_template_resultset`, `format_template_row`, `format_template_rows_between_delimiter` and some settings of other formats (e.g. `output_format_json_quote_64bit_integers` when using `JSON` escaping; see below).

The `format_template_row` setting specifies the path to a file that contains the format string for rows, with the following syntax:

`delimiter_1${column_1:serializeAs_1}delimiter_2${column_2:serializeAs_2} ... delimiter_N`,

where `delimiter_i` is a delimiter between values (`$` symbol can be escaped as `$$`),
`column_i` is a name or index of a column whose values are to be selected or inserted (if empty, the column is skipped),
`serializeAs_i` is an escaping rule for the column values. The following escaping rules are supported:

-   `CSV`, `JSON`, `XML` (similarly to the formats of the same names)
-   `Escaped` (similarly to `TSV`)
-   `Quoted` (similarly to `Values`)
-   `Raw` (without escaping, similarly to `TSVRaw`)
-   `None` (no escaping rule, see further)

If an escaping rule is omitted, then `None` will be used. `XML` and `Raw` are suitable only for output.

So, for the following format string:

      `Search phrase: ${SearchPhrase:Quoted}, count: ${c:Escaped}, ad price: $$${price:JSON};`

the values of `SearchPhrase`, `c` and `price` columns, which are escaped as `Quoted`, `Escaped` and `JSON` will be printed (for select) or will be expected (for insert) between `Search phrase:`, `, count:`, `, ad price: $` and `;` delimiters respectively. For example:

`Search phrase: 'bathroom interior design', count: 2166, ad price: $3;`

The `format_template_rows_between_delimiter` setting specifies the delimiter between rows, which is printed (or expected) after every row except the last one (`\n` by default).

The `format_template_resultset` setting specifies the path to a file that contains the format string for the result set. The result set format string has the same syntax as the row format string and allows specifying a prefix, a suffix, and a way to print some additional information. It contains the following placeholders instead of column names:

-   `data` is the rows with data in `format_template_row` format, separated by `format_template_rows_between_delimiter`. This placeholder must be the first placeholder in the format string.
-   `totals` is the row with total values in `format_template_row` format (when using WITH TOTALS)
-   `min` is the row with minimum values in `format_template_row` format (when extremes are set to 1)
-   `max` is the row with maximum values in `format_template_row` format (when extremes are set to 1)
-   `rows` is the total number of output rows
-   `rows_before_limit` is the minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT. If the query contains GROUP BY, rows\_before\_limit\_at\_least is the exact number of rows there would have been without a LIMIT.
-   `time` is the request execution time in seconds
-   `rows_read` is the number of rows that have been read
-   `bytes_read` is the number of (uncompressed) bytes that have been read

The placeholders `data`, `totals`, `min` and `max` must not have an escaping rule specified (or `None` must be specified explicitly). The remaining placeholders may have any escaping rule specified.
If the `format_template_resultset` setting is an empty string, `${data}` is used as the default value.
For insert queries, the format allows skipping some columns or fields if a prefix or suffix is specified (see the example below).

Select example:

``` sql
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase ORDER BY c DESC LIMIT 5 FORMAT Template SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = '\n    '
```

`/some/path/resultset.format`:

``` text
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    ${data}
  </table>
  <table border="1"> <caption>Max</caption>
    ${max}
  </table>
  <b>Processed ${rows_read:XML} rows in ${time:XML} sec</b>
 </body>
</html>
```

`/some/path/row.format`:

``` text
<tr> <td>${0:XML}</td> <td>${1:XML}</td> </tr>
```

Result:

``` html
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    <tr> <td></td> <td>8267016</td> </tr>
    <tr> <td>bathroom interior design</td> <td>2166</td> </tr>
    <tr> <td>yandex</td> <td>1655</td> </tr>
    <tr> <td>spring 2014 fashion</td> <td>1549</td> </tr>
    <tr> <td>freeform photos</td> <td>1480</td> </tr>
  </table>
  <table border="1"> <caption>Max</caption>
    <tr> <td></td> <td>8873898</td> </tr>
  </table>
  <b>Processed 3095973 rows in 0.1569913 sec</b>
 </body>
</html>
```

Insert example:

``` text
Some header
Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1
Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1
Total rows: 2
```

``` sql
INSERT INTO UserActivity FORMAT Template SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format'
```

`/some/path/resultset.format`:

``` text
Some header\n${data}\nTotal rows: ${:CSV}\n
```

`/some/path/row.format`:

``` text
Page views: ${PageViews:CSV}, User id: ${UserID:CSV}, Useless field: ${:CSV}, Duration: ${Duration:CSV}, Sign: ${Sign:CSV}
```

`PageViews`, `UserID`, `Duration` and `Sign` inside placeholders are names of columns in the table. Values after `Useless field` in rows and after `\nTotal rows:` in suffix will be ignored.
All delimiters in the input data must be strictly equal to delimiters in specified format strings.

## TemplateIgnoreSpaces {#templateignorespaces}

This format is suitable only for input.
Similar to `Template`, but skips whitespace characters between delimiters and values in the input stream. However, if format strings contain whitespace characters, these characters are expected in the input stream. It also allows specifying empty placeholders (`${}` or `${:None}`) to split a delimiter into separate parts in order to ignore the spaces between them. Such placeholders are used only for skipping whitespace characters.
It’s possible to read `JSON` using this format if the values of the columns have the same order in all rows. For example, the following request can be used for inserting data from the output example of the [JSON](#json) format:

``` sql
INSERT INTO table_name FORMAT TemplateIgnoreSpaces SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = ','
```

`/some/path/resultset.format`:

``` text
{${}"meta"${}:${:JSON},${}"data"${}:${}[${data}]${},${}"totals"${}:${:JSON},${}"extremes"${}:${:JSON},${}"rows"${}:${:JSON},${}"rows_before_limit_at_least"${}:${:JSON}${}}
```

`/some/path/row.format`:

``` text
{${}"SearchPhrase"${}:${}${phrase:JSON}${},${}"c"${}:${}${cnt:JSON}${}}
```

## TSKV {#tskv}

Similar to TabSeparated, but outputs a value in name=value format. Names are escaped the same way as in TabSeparated format, and the = symbol is also escaped.

``` text
SearchPhrase=   count()=8267016
SearchPhrase=bathroom interior design    count()=2166
SearchPhrase=yandex     count()=1655
SearchPhrase=2014 spring fashion    count()=1549
SearchPhrase=freeform photos       count()=1480
SearchPhrase=angelina jolie    count()=1245
SearchPhrase=omsk       count()=1112
SearchPhrase=photos of dog breeds    count()=1091
SearchPhrase=curtain designs        count()=1064
SearchPhrase=baku       count()=1000
```

[NULL](../sql_reference/syntax.md) is formatted as `\N`.

``` sql
SELECT * FROM t_null FORMAT TSKV
```

``` text
x=1    y=\N
```

When there is a large number of small columns, this format is ineffective, and there is generally no reason to use it. Nevertheless, it is no worse than JSONEachRow in terms of efficiency.

Both data output and parsing are supported in this format. For parsing, any order is supported for the values of different columns. It is acceptable for some values to be omitted – they are treated as equal to their default values. In this case, zeros and blank rows are used as default values. Complex values that could be specified in the table are not supported as defaults.

Parsing allows the presence of the additional field `tskv` without the equal sign or a value. This field is ignored.

## CSV {#csv}

Comma Separated Values format ([RFC](https://tools.ietf.org/html/rfc4180)).

When formatting, rows are enclosed in double-quotes. A double quote inside a string is output as two double quotes in a row. There are no other rules for escaping characters. Date and date-time are enclosed in double-quotes. Numbers are output without quotes. Values are separated by a delimiter character, which is `,` by default. The delimiter character is defined in the setting [format\_csv\_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter). Rows are separated using the Unix line feed (LF). Arrays are serialized in CSV as follows: first, the array is serialized to a string as in TabSeparated format, and then the resulting string is output to CSV in double-quotes. Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost).

``` bash
$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
```

\*By default, the delimiter is `,`. See the [format\_csv\_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter) setting for more information.
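
A minimal sketch of the array serialization described above (hypothetical query): the array is first rendered as in TabSeparated, and the resulting string is then written in double quotes, e.g. `"[1,2,3]"`:

``` sql
-- Hypothetical query: the Array column is serialized as a double-quoted string in CSV output.
SELECT 1 AS id, [1, 2, 3] AS arr
FORMAT CSV
```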

When parsing, all values can be parsed either with or without quotes. Both double and single quotes are supported. Rows can also be arranged without quotes. In this case, they are parsed up to the delimiter character or line feed (CR or LF). In violation of the RFC, when parsing rows without quotes, the leading and trailing spaces and tabs are ignored. For the line feed, Unix (LF), Windows (CR LF) and Mac OS Classic (CR) types are all supported.

Empty unquoted input values are replaced with default values for the respective columns, if
[input\_format\_defaults\_for\_omitted\_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields)
is enabled.

`NULL` is formatted as `\N` or `NULL` or an empty unquoted string (see settings [input\_format\_csv\_unquoted\_null\_literal\_as\_null](../operations/settings/settings.md#settings-input_format_csv_unquoted_null_literal_as_null) and [input\_format\_defaults\_for\_omitted\_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields)).

The CSV format supports the output of totals and extremes the same way as `TabSeparated`.

## CSVWithNames {#csvwithnames}

Also prints the header row, similar to `TabSeparatedWithNames`.

## CustomSeparated {#format-customseparated}

Similar to [Template](#format-template), but it prints or reads all columns and uses escaping rule from setting `format_custom_escaping_rule` and delimiters from settings `format_custom_field_delimiter`, `format_custom_row_before_delimiter`, `format_custom_row_after_delimiter`, `format_custom_row_between_delimiter`, `format_custom_result_before_delimiter` and `format_custom_result_after_delimiter`, not from format strings.
There is also `CustomSeparatedIgnoreSpaces` format, which is similar to `TemplateIgnoreSpaces`.
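
A minimal sketch (hypothetical query and setting values) that writes semicolon-separated fields with CSV escaping:

``` sql
-- Hypothetical example: field values are escaped as in CSV and separated by ';'.
SELECT 1 AS id, 'hello;world' AS s
FORMAT CustomSeparated
SETTINGS format_custom_escaping_rule = 'CSV', format_custom_field_delimiter = ';'
```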

## JSON {#json}

Outputs data in JSON format. Besides data tables, it also outputs column names and types, along with some additional information: the total number of output rows, and the number of rows that could have been output if there weren’t a LIMIT. Example:

``` sql
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTALS ORDER BY c DESC LIMIT 5 FORMAT JSON
```

``` json
{
        "meta":
        [
                {
                        "name": "SearchPhrase",
                        "type": "String"
                },
                {
                        "name": "c",
                        "type": "UInt64"
                }
        ],

        "data":
        [
                {
                        "SearchPhrase": "",
                        "c": "8267016"
                },
                {
                        "SearchPhrase": "bathroom interior design",
                        "c": "2166"
                },
                {
                        "SearchPhrase": "yandex",
                        "c": "1655"
                },
                {
                        "SearchPhrase": "spring 2014 fashion",
                        "c": "1549"
                },
                {
                        "SearchPhrase": "freeform photos",
                        "c": "1480"
                }
        ],

        "totals":
        {
                "SearchPhrase": "",
                "c": "8873898"
        },

        "extremes":
        {
                "min":
                {
                        "SearchPhrase": "",
                        "c": "1480"
                },
                "max":
                {
                        "SearchPhrase": "",
                        "c": "8267016"
                }
        },

        "rows": 5,

        "rows_before_limit_at_least": 141137
}
```

The JSON is compatible with JavaScript. To ensure this, some characters are additionally escaped: the slash `/` is escaped as `\/`; alternative line breaks `U+2028` and `U+2029`, which break some browsers, are escaped as `\uXXXX`. ASCII control characters are escaped: backspace, form feed, line feed, carriage return, and horizontal tab are replaced with `\b`, `\f`, `\n`, `\r`, `\t` , as well as the remaining bytes in the 00-1F range using `\uXXXX` sequences. Invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences. For compatibility with JavaScript, Int64 and UInt64 integers are enclosed in double-quotes by default. To remove the quotes, you can set the configuration parameter [output\_format\_json\_quote\_64bit\_integers](../operations/settings/settings.md#session_settings-output_format_json_quote_64bit_integers) to 0.

`rows` – The total number of output rows.

`rows_before_limit_at_least` – The minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT.
If the query contains GROUP BY, rows\_before\_limit\_at\_least is the exact number of rows there would have been without a LIMIT.

`totals` – Total values (when using WITH TOTALS).

`extremes` – Extreme values (when extremes are set to 1).

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

ClickHouse supports [NULL](../sql_reference/syntax.md), which is displayed as `null` in the JSON output.

See also the [JSONEachRow](#jsoneachrow) format.

## JSONCompact {#jsoncompact}

Differs from JSON only in that data rows are output in arrays, not in objects.

Example:

``` json
{
        "meta":
        [
                {
                        "name": "SearchPhrase",
                        "type": "String"
                },
                {
                        "name": "c",
                        "type": "UInt64"
                }
        ],

        "data":
        [
                ["", "8267016"],
                ["bathroom interior design", "2166"],
                ["yandex", "1655"],
                ["fashion trends spring 2014", "1549"],
                ["freeform photo", "1480"]
        ],

        "totals": ["","8873898"],

        "extremes":
        {
                "min": ["","1480"],
                "max": ["","8267016"]
        },

        "rows": 5,

        "rows_before_limit_at_least": 141137
}
```

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).
See also the `JSONEachRow` format.

## JSONEachRow {#jsoneachrow}

When using this format, ClickHouse outputs rows as separated, newline-delimited JSON objects, but the data as a whole is not valid JSON.

``` json
{"SearchPhrase":"curtain designs","count()":"1064"}
{"SearchPhrase":"baku","count()":"1000"}
{"SearchPhrase":"","count()":"8267016"}
```

When inserting the data, you should provide a separate JSON object for each row.

### Inserting Data {#inserting-data}

``` sql
INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146,"Sign":-1} {"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```

ClickHouse allows:

-   Any order of key-value pairs in the object.
-   Omitting some values.

ClickHouse ignores spaces between elements and commas after the objects. You can pass all the objects in one line. You don’t have to separate them with line breaks.

**Omitted values processing**

ClickHouse substitutes omitted values with the default values for the corresponding [data types](../sql_reference/data_types/index.md).

If `DEFAULT expr` is specified, ClickHouse uses different substitution rules depending on the [input\_format\_defaults\_for\_omitted\_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields) setting.

Consider the following table:

``` sql
CREATE TABLE IF NOT EXISTS example_table
(
    x UInt32,
    a DEFAULT x * 2
) ENGINE = Memory;
```

-   If `input_format_defaults_for_omitted_fields = 0`, then the default value for `x` and `a` equals `0` (as the default value for the `UInt32` data type).
-   If `input_format_defaults_for_omitted_fields = 1`, then the default value for `x` equals `0`, but the default value of `a` equals `x * 2`.
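
For example, with the table above and `input_format_defaults_for_omitted_fields = 1`, the omitted column `a` is calculated from the inserted value of `x` (a hypothetical insert, shown as a sketch):

``` sql
-- Hypothetical example: "a" is omitted in the JSON object and is computed as x * 2 = 20.
INSERT INTO example_table FORMAT JSONEachRow {"x":10}
```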

!!! note "Warning"
    When inserting data with `input_format_defaults_for_omitted_fields = 1`, ClickHouse consumes more computational resources compared to insertion with `input_format_defaults_for_omitted_fields = 0`.

### Selecting Data {#selecting-data}

Consider the `UserActivity` table as an example:

``` text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │         5 │      146 │   -1 │
│ 4324182021466249494 │         6 │      185 │    1 │
└─────────────────────┴───────────┴──────────┴──────┘
```

The query `SELECT * FROM UserActivity FORMAT JSONEachRow` returns:

``` text
{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1}
{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```

Unlike the [JSON](#json) format, there is no substitution of invalid UTF-8 sequences. Values are escaped in the same way as for `JSON`.

!!! note "Note"
    Any set of bytes can be output in the strings. Use the `JSONEachRow` format if you are sure that the data in the table can be formatted as JSON without losing any information.

### Usage of Nested Structures {#jsoneachrow-nested}

If you have a table with [Nested](../sql_reference/data_types/nested_data_structures/nested.md) data type columns, you can insert JSON data with the same structure. Enable this feature with the [input\_format\_import\_nested\_json](../operations/settings/settings.md#settings-input_format_import_nested_json) setting.

For example, consider the following table:

``` sql
CREATE TABLE json_each_row_nested (n Nested (s String, i Int32) ) ENGINE = Memory
```

As you can see in the `Nested` data type description, ClickHouse treats each component of the nested structure as a separate column (`n.s` and `n.i` for our table). You can insert data in the following way:

``` sql
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n.s": ["abc", "def"], "n.i": [1, 23]}
```

To insert data as a hierarchical JSON object, set [input\_format\_import\_nested\_json=1](../operations/settings/settings.md#settings-input_format_import_nested_json).

``` json
{
    "n": {
        "s": ["abc", "def"],
        "i": [1, 23]
    }
}
```

Without this setting, ClickHouse throws an exception.

``` sql
SELECT name, value FROM system.settings WHERE name = 'input_format_import_nested_json'
```

``` text
┌─name────────────────────────────┬─value─┐
│ input_format_import_nested_json │ 0     │
└─────────────────────────────────┴───────┘
```

``` sql
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
```

``` text
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: n: (at row 1)
```

``` sql
SET input_format_import_nested_json=1
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
SELECT * FROM json_each_row_nested
```

``` text
┌─n.s───────────┬─n.i────┐
│ ['abc','def'] │ [1,23] │
└───────────────┴────────┘
```

## Native {#native}

The most efficient format. Data is written and read by blocks in binary format. For each block, the number of rows, number of columns, column names and types, and parts of columns in this block are recorded one after another. In other words, this format is “columnar” – it doesn’t convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients.

You can use this format to quickly generate dumps that can only be read by the ClickHouse DBMS. It doesn’t make sense to work with this format yourself.

## Null {#null}

Nothing is output. However, the query is processed, and when using the command-line client, data is transmitted to the client. This is used for tests, including performance testing.
Obviously, this format is only appropriate for output, not for parsing.
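
For example (a hypothetical query), this can be used to run a query for benchmarking without printing its result:

``` sql
-- Hypothetical example: the query is processed in full, but nothing is output.
SELECT * FROM system.numbers LIMIT 10000000 FORMAT Null
```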

## Pretty {#pretty}

Outputs data as Unicode-art tables, also using ANSI-escape sequences for setting colours in the terminal.
A full grid of the table is drawn, and each row occupies two lines in the terminal.
Each result block is output as a separate table. This is necessary so that blocks can be output without buffering results (buffering would be necessary in order to pre-calculate the visible width of all the values).

[NULL](../sql_reference/syntax.md) is output as `ᴺᵁᴸᴸ`.

Example (shown for the [PrettyCompact](#prettycompact) format):

``` sql
SELECT * FROM t_null
```

``` text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
```

Rows are not escaped in Pretty\* formats. Example is shown for the [PrettyCompact](#prettycompact) format:

``` sql
SELECT 'String with \'quotes\' and \t character' AS Escaping_test
```

``` text
┌─Escaping_test────────────────────────┐
│ String with 'quotes' and      character │
└──────────────────────────────────────┘
```

To avoid dumping too much data to the terminal, only the first 10,000 rows are printed. If the number of rows is greater than or equal to 10,000, the message “Showed first 10 000” is printed.
This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

The Pretty format supports outputting total values (when using WITH TOTALS) and extremes (when ‘extremes’ is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. Example (shown for the [PrettyCompact](#prettycompact) format):

``` sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
```

``` text
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
│ 2014-03-19 │ 1405797 │
│ 2014-03-20 │ 1353623 │
│ 2014-03-21 │ 1245779 │
│ 2014-03-22 │ 1031592 │
│ 2014-03-23 │ 1046491 │
└────────────┴─────────┘

Totals:
┌──EventDate─┬───────c─┐
│ 0000-00-00 │ 8873898 │
└────────────┴─────────┘

Extremes:
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1031592 │
│ 2014-03-23 │ 1406958 │
└────────────┴─────────┘
```

## PrettyCompact {#prettycompact}

Differs from [Pretty](#pretty) in that the grid is not drawn between rows, which makes the result more compact.
This format is used by default in the command-line client in interactive mode.

## PrettyCompactMonoBlock {#prettycompactmonoblock}

Differs from [PrettyCompact](#prettycompact) in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

## PrettyNoEscapes {#prettynoescapes}

Differs from Pretty in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.

Example:

``` bash
$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
```

You can use the HTTP interface for displaying in the browser.

### PrettyCompactNoEscapes {#prettycompactnoescapes}

The same as the previous setting.

### PrettySpaceNoEscapes {#prettyspacenoescapes}

The same as the previous setting.

## PrettySpace {#prettyspace}

Differs from [PrettyCompact](#prettycompact) in that whitespace (space characters) is used instead of the grid.

## RowBinary {#rowbinary}

Formats and parses data by row in binary format. Rows and values are listed consecutively, without separators.
This format is less efficient than the Native format since it is row-based.

Integers use fixed-length little-endian representation. For example, UInt64 uses 8 bytes.
DateTime is represented as UInt32 containing the Unix timestamp as the value.
Date is represented as a UInt16 object that contains the number of days since 1970-01-01 as the value.
String is represented as a varint length (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by the bytes of the string.
FixedString is represented simply as a sequence of bytes.

Array is represented as a varint length (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by successive elements of the array.

For [NULL](../sql_reference/syntax.md#null-literal) support, an additional byte containing 1 or 0 is added before each [Nullable](../sql_reference/data_types/nullable.md) value. If 1, then the value is `NULL` and this byte is interpreted as a separate value. If 0, the value after the byte is not `NULL`.

## RowBinaryWithNamesAndTypes {#rowbinarywithnamesandtypes}

Similar to [RowBinary](#rowbinary), but with added header:

-   [LEB128](https://en.wikipedia.org/wiki/LEB128)-encoded number of columns (N)
-   N `String`s specifying column names
-   N `String`s specifying column types

## Values {#data-format-values}

Prints every row in brackets. Rows are separated by commas. There is no comma after the last row. The values inside the brackets are also comma-separated. Numbers are output in a decimal format without quotes. Arrays are output in square brackets. Strings, dates, and dates with times are output in quotes. Escaping rules and parsing are similar to the [TabSeparated](#tabseparated) format. During formatting, extra spaces aren’t inserted, but during parsing, they are allowed and skipped (except for spaces inside array values, which are not allowed). [NULL](../sql_reference/syntax.md) is represented as `NULL`.

The minimum set of characters that you need to escape when passing data in Values format: single quotes and backslashes.

This is the format that is used in `INSERT INTO t VALUES ...`, but you can also use it for formatting query results.
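
A minimal sketch (hypothetical query): the row below is printed as `(1,'a',[1,2])`:

``` sql
-- Hypothetical example of Values output; strings and arrays follow the escaping rules described above.
SELECT 1 AS x, 'a' AS s, [1, 2] AS arr
FORMAT Values
```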

See also: [input\_format\_values\_interpret\_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions) and [input\_format\_values\_deduce\_templates\_of\_expressions](../operations/settings/settings.md#settings-input_format_values_deduce_templates_of_expressions) settings.

## Vertical {#vertical}

Prints each value on a separate line with the column name specified. This format is convenient for printing just one or a few rows if each row consists of a large number of columns.

[NULL](../sql_reference/syntax.md) is output as `ᴺᵁᴸᴸ`.

Example:

``` sql
SELECT * FROM t_null FORMAT Vertical
```

``` text
Row 1:
──────
x: 1
y: ᴺᵁᴸᴸ
```

Rows are not escaped in Vertical format:

``` sql
SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical
```

``` text
Row 1:
──────
test: string with 'quotes' and      with some special
 characters
```

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

## VerticalRaw {#verticalraw}

Similar to [Vertical](#vertical), but with escaping disabled. This format is only suitable for outputting query results, not for parsing (receiving data and inserting it in the table).

## XML {#xml}

XML format is suitable only for output, not for parsing. Example:

``` xml
<?xml version='1.0' encoding='UTF-8' ?>
<result>
        <meta>
                <columns>
                        <column>
                                <name>SearchPhrase</name>
                                <type>String</type>
                        </column>
                        <column>
                                <name>count()</name>
                                <type>UInt64</type>
                        </column>
                </columns>
        </meta>
        <data>
                <row>
                        <SearchPhrase></SearchPhrase>
                        <field>8267016</field>
                </row>
                <row>
                        <SearchPhrase>bathroom interior design</SearchPhrase>
                        <field>2166</field>
                </row>
                <row>
                        <SearchPhrase>yandex</SearchPhrase>
                        <field>1655</field>
                </row>
                <row>
                        <SearchPhrase>2014 spring fashion</SearchPhrase>
                        <field>1549</field>
                </row>
                <row>
                        <SearchPhrase>freeform photos</SearchPhrase>
                        <field>1480</field>
                </row>
                <row>
                        <SearchPhrase>angelina jolie</SearchPhrase>
                        <field>1245</field>
                </row>
                <row>
                        <SearchPhrase>omsk</SearchPhrase>
                        <field>1112</field>
                </row>
                <row>
                        <SearchPhrase>photos of dog breeds</SearchPhrase>
                        <field>1091</field>
                </row>
                <row>
                        <SearchPhrase>curtain designs</SearchPhrase>
                        <field>1064</field>
                </row>
                <row>
                        <SearchPhrase>baku</SearchPhrase>
                        <field>1000</field>
                </row>
        </data>
        <rows>10</rows>
        <rows_before_limit_at_least>141137</rows_before_limit_at_least>
</result>
```

If the column name does not have an acceptable format, just ‘field’ is used as the element name. In general, the XML structure follows the JSON structure.
Just as for JSON, invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences.

In string values, the characters `<` and `&` are escaped as `&lt;` and `&amp;`.

Arrays are output as `<array><elem>Hello</elem><elem>World</elem>...</array>`, and tuples as `<tuple><elem>Hello</elem><elem>World</elem>...</tuple>`.

## CapnProto {#capnproto}

Cap’n Proto is a binary message format similar to Protocol Buffers and Thrift, but not like JSON or MessagePack.

Cap’n Proto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query.

``` bash
$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'"
```

Where `schema.capnp` looks like this:

``` capnp
struct Message {
  SearchPhrase @0 :Text;
  c @1 :UInt64;
}
```

Deserialization is efficient and usually doesn’t increase the system load.

See also [Format Schema](#formatschema).

## Protobuf {#protobuf}

Protobuf is the [Protocol Buffers](https://developers.google.com/protocol-buffers/) format.

This format requires an external format schema. The schema is cached between queries.
ClickHouse supports both `proto2` and `proto3` syntaxes. Repeated/optional/required fields are supported.

Usage examples:

``` sql
SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'
```

``` bash
cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table FORMAT Protobuf SETTINGS format_schema='schemafile:MessageType'"
```

where the file `schemafile.proto` looks like this:

``` protobuf
syntax = "proto3";

message MessageType {
  string name = 1;
  string surname = 2;
  uint32 birthDate = 3;
  repeated string phoneNumbers = 4;
};
```

To find the correspondence between table columns and fields of Protocol Buffers’ message type ClickHouse compares their names.
This comparison is case-insensitive and the characters `_` (underscore) and `.` (dot) are considered as equal.
If the types of a column and a field of the Protocol Buffers’ message differ, the necessary conversion is applied.

Nested messages are supported. For example, for the field `z` in the following message type

``` protobuf
message MessageType {
  message XType {
    message YType {
      int32 z = 1;
    };
    repeated YType y = 1;
  };
  XType x = 1;
};
```

ClickHouse tries to find a column named `x.y.z` (or `x_y_z` or `X.y_Z` and so on).
Nested messages are suitable for input or output of [nested data structures](../sql_reference/data_types/nested_data_structures/nested.md).

Default values defined in a protobuf schema like this

``` protobuf
syntax = "proto2";

message MessageType {
  optional int32 result_per_page = 3 [default = 10];
}
```

are not applied; the [table defaults](../sql_reference/statements/create.md#create-default-values) are used instead of them.

ClickHouse inputs and outputs protobuf messages in the `length-delimited` format.
This means that before every message, its length is written as a [varint](https://developers.google.com/protocol-buffers/docs/encoding#varints).
See also [how to read/write length-delimited protobuf messages in popular languages](https://cwiki.apache.org/confluence/display/GEODE/Delimiting+Protobuf+Messages).

## Avro {#data-format-avro}

[Apache Avro](http://avro.apache.org/) is a row-oriented data serialization framework developed within Apache’s Hadoop project.

ClickHouse Avro format supports reading and writing [Avro data files](http://avro.apache.org/docs/current/spec.html#Object+Container+Files).

### Data Types Matching {#data_types-matching}

The table below shows supported data types and how they match ClickHouse [data types](../sql_reference/data_types/index.md) in `INSERT` and `SELECT` queries.

| Avro data type `INSERT`                     | ClickHouse data type                                                                                                  | Avro data type `SELECT`      |
|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|------------------------------|
| `boolean`, `int`, `long`, `float`, `double` | [Int(8\|16\|32)](../sql_reference/data_types/int_uint.md), [UInt(8\|16\|32)](../sql_reference/data_types/int_uint.md) | `int`                        |
| `boolean`, `int`, `long`, `float`, `double` | [Int64](../sql_reference/data_types/int_uint.md), [UInt64](../sql_reference/data_types/int_uint.md)                   | `long`                       |
| `boolean`, `int`, `long`, `float`, `double` | [Float32](../sql_reference/data_types/float.md)                                                                       | `float`                      |
| `boolean`, `int`, `long`, `float`, `double` | [Float64](../sql_reference/data_types/float.md)                                                                       | `double`                     |
| `bytes`, `string`, `fixed`, `enum`          | [String](../sql_reference/data_types/string.md)                                                                       | `bytes`                      |
| `bytes`, `string`, `fixed`                  | [FixedString(N)](../sql_reference/data_types/fixedstring.md)                                                          | `fixed(N)`                   |
| `enum`                                      | [Enum(8\|16)](../sql_reference/data_types/enum.md)                                                                    | `enum`                       |
| `array(T)`                                  | [Array(T)](../sql_reference/data_types/array.md)                                                                      | `array(T)`                   |
| `union(null, T)`, `union(T, null)`          | [Nullable(T)](../sql_reference/data_types/nullable.md)                                                                | `union(null, T)`             |
| `null`                                      | [Nullable(Nothing)](../sql_reference/data_types/special_data_types/nothing.md)                                        | `null`                       |
| `int (date)` \*                             | [Date](../sql_reference/data_types/date.md)                                                                           | `int (date)` \*              |
| `long (timestamp-millis)` \*                | [DateTime64(3)](../sql_reference/data_types/datetime.md)                                                              | `long (timestamp-millis)` \* |
| `long (timestamp-micros)` \*                | [DateTime64(6)](../sql_reference/data_types/datetime.md)                                                              | `long (timestamp-micros)` \* |

\* [Avro logical types](http://avro.apache.org/docs/current/spec.html#Logical+Types)

Unsupported Avro data types: `record` (non-root), `map`

Unsupported Avro logical data types: `uuid`, `time-millis`, `time-micros`, `duration`

### Inserting Data {#inserting-data-1}

To insert data from an Avro file into ClickHouse table:

``` bash
$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"
```

The root schema of input Avro file must be of `record` type.

To find the correspondence between table columns and fields of Avro schema ClickHouse compares their names. This comparison is case-sensitive.
Unused fields are skipped.

Data types of ClickHouse table columns can differ from the corresponding fields of the Avro data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql_reference/functions/type_conversion_functions.md#type_conversion_function-cast) the data to corresponding column type.

### Selecting Data {#selecting-data-1}

To select data from ClickHouse table into an Avro file:

``` bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro
```

Column names must:

-   start with `[A-Za-z_]`
-   subsequently contain only `[A-Za-z0-9_]`

Output Avro file compression and sync interval can be configured with [output\_format\_avro\_codec](../operations/settings/settings.md#settings-output_format_avro_codec) and [output\_format\_avro\_sync\_interval](../operations/settings/settings.md#settings-output_format_avro_sync_interval) respectively.

## AvroConfluent {#data-format-avro-confluent}

AvroConfluent supports decoding single-object Avro messages commonly used with [Kafka](https://kafka.apache.org/) and [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html).

Each Avro message embeds a schema id that can be resolved to the actual schema with help of the Schema Registry.

Schemas are cached once resolved.

The Schema Registry URL is configured with [format\_avro\_schema\_registry\_url](../operations/settings/settings.md#settings-format_avro_schema_registry_url).

### Data Types Matching {#data_types-matching-1}

Same as [Avro](#data-format-avro).

### Usage {#usage}

To quickly verify schema resolution, you can use [kafkacat](https://github.com/edenhill/kafkacat) with [clickhouse-local](../operations/utilities/clickhouse-local.md):

``` bash
$ kafkacat -b kafka-broker -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String" -q 'select * from table'
1 a
2 b
3 c
```

To use `AvroConfluent` with [Kafka](../engines/table_engines/integrations/kafka.md):

``` sql
CREATE TABLE topic1_stream
(
    field1 String,
    field2 String
)
ENGINE = Kafka()
SETTINGS
kafka_broker_list = 'kafka-broker',
kafka_topic_list = 'topic1',
kafka_group_name = 'group1',
kafka_format = 'AvroConfluent';

SET format_avro_schema_registry_url = 'http://schema-registry';

SELECT * FROM topic1_stream;
```

!!! note "Warning"
    The `format_avro_schema_registry_url` setting needs to be configured in `users.xml` to maintain its value after a restart.
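
For example, the setting can be persisted in a configuration override file (a sketch; the file path, root tag and registry URL depend on your installation):

``` bash
# Writes a hypothetical users.d override file containing the setting.
$ cat > /etc/clickhouse-server/users.d/avro_schema_registry.xml <<'EOF'
<yandex>
    <profiles>
        <default>
            <format_avro_schema_registry_url>http://schema-registry</format_avro_schema_registry_url>
        </default>
    </profiles>
</yandex>
EOF
```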

## Parquet {#data-format-parquet}

[Apache Parquet](http://parquet.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.

### Data Types Matching {#data_types-matching-2}

The table below shows supported data types and how they match ClickHouse [data types](../sql_reference/data_types/index.md) in `INSERT` and `SELECT` queries.

| Parquet data type (`INSERT`) | ClickHouse data type                                      | Parquet data type (`SELECT`) |
|------------------------------|-----------------------------------------------------------|------------------------------|
| `UINT8`, `BOOL`              | [UInt8](../sql_reference/data_types/int_uint.md)          | `UINT8`                      |
| `INT8`                       | [Int8](../sql_reference/data_types/int_uint.md)           | `INT8`                       |
| `UINT16`                     | [UInt16](../sql_reference/data_types/int_uint.md)         | `UINT16`                     |
| `INT16`                      | [Int16](../sql_reference/data_types/int_uint.md)          | `INT16`                      |
| `UINT32`                     | [UInt32](../sql_reference/data_types/int_uint.md)         | `UINT32`                     |
| `INT32`                      | [Int32](../sql_reference/data_types/int_uint.md)          | `INT32`                      |
| `UINT64`                     | [UInt64](../sql_reference/data_types/int_uint.md)         | `UINT64`                     |
| `INT64`                      | [Int64](../sql_reference/data_types/int_uint.md)          | `INT64`                      |
| `FLOAT`, `HALF_FLOAT`        | [Float32](../sql_reference/data_types/float.md)           | `FLOAT`                      |
| `DOUBLE`                     | [Float64](../sql_reference/data_types/float.md)           | `DOUBLE`                     |
| `DATE32`                     | [Date](../sql_reference/data_types/date.md)               | `UINT16`                     |
| `DATE64`, `TIMESTAMP`        | [DateTime](../sql_reference/data_types/datetime.md)       | `UINT32`                     |
| `STRING`, `BINARY`           | [String](../sql_reference/data_types/string.md)           | `STRING`                     |
| —                            | [FixedString](../sql_reference/data_types/fixedstring.md) | `STRING`                     |
| `DECIMAL`                    | [Decimal](../sql_reference/data_types/decimal.md)         | `DECIMAL`                    |

ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the Parquet `DECIMAL` type as the ClickHouse `Decimal128` type.

Unsupported Parquet data types: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.

Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql_reference/functions/type_conversion_functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column.

### Inserting and Selecting Data {#inserting-and-selecting-data}

You can insert Parquet data from a file into a ClickHouse table with the following command:

``` bash
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
```

You can select data from a ClickHouse table and save it to a file in the Parquet format with the following command:

``` bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
```

To exchange data with Hadoop, you can use the [HDFS table engine](../engines/table_engines/integrations/hdfs.md).
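
For example, a hypothetical HDFS-backed table that reads Parquet files (the namenode address, path and columns are placeholders):

``` bash
# ENGINE = HDFS(URI, format); the URI and table definition are illustrative.
$ clickhouse-client --query="CREATE TABLE hdfs_parquet (id UInt32, name String) ENGINE = HDFS('hdfs://namenode:9000/dir/file.parquet', 'Parquet')"
```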

## ORC {#data-format-orc}

[Apache ORC](https://orc.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. You can only insert data in this format into ClickHouse.

### Data Types Matching {#data_types-matching-3}

The table below shows supported data types and how they match ClickHouse [data types](../sql_reference/data_types/index.md) in `INSERT` queries.

| ORC data type (`INSERT`) | ClickHouse data type                                |
|--------------------------|-----------------------------------------------------|
| `UINT8`, `BOOL`          | [UInt8](../sql_reference/data_types/int_uint.md)    |
| `INT8`                   | [Int8](../sql_reference/data_types/int_uint.md)     |
| `UINT16`                 | [UInt16](../sql_reference/data_types/int_uint.md)   |
| `INT16`                  | [Int16](../sql_reference/data_types/int_uint.md)    |
| `UINT32`                 | [UInt32](../sql_reference/data_types/int_uint.md)   |
| `INT32`                  | [Int32](../sql_reference/data_types/int_uint.md)    |
| `UINT64`                 | [UInt64](../sql_reference/data_types/int_uint.md)   |
| `INT64`                  | [Int64](../sql_reference/data_types/int_uint.md)    |
| `FLOAT`, `HALF_FLOAT`    | [Float32](../sql_reference/data_types/float.md)     |
| `DOUBLE`                 | [Float64](../sql_reference/data_types/float.md)     |
| `DATE32`                 | [Date](../sql_reference/data_types/date.md)         |
| `DATE64`, `TIMESTAMP`    | [DateTime](../sql_reference/data_types/datetime.md) |
| `STRING`, `BINARY`       | [String](../sql_reference/data_types/string.md)     |
| `DECIMAL`                | [Decimal](../sql_reference/data_types/decimal.md)   |

ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the ORC `DECIMAL` type as the ClickHouse `Decimal128` type.

Unsupported ORC data types: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.

The data types of ClickHouse table columns don’t have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql_reference/functions/type_conversion_functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column.

### Inserting Data {#inserting-data-2}

You can insert ORC data from a file into a ClickHouse table with the following command:

``` bash
$ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"
```

To exchange data with Hadoop, you can use the [HDFS table engine](../engines/table_engines/integrations/hdfs.md).

## Format Schema {#formatschema}

The name of the file containing the format schema is set by the setting `format_schema`.
This setting is required when using the `Cap'n Proto` or `Protobuf` format.
The format schema is a combination of a file name and the name of a message type in this file, delimited by a colon,
e.g. `schemafile.proto:MessageType`.
If the file has the standard extension for the format (for example, `.proto` for `Protobuf`),
it can be omitted; in this case, the format schema looks like `schemafile:MessageType`.
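
For example, a sketch that assumes a hypothetical `schemafile.proto` defining `MessageType` with fields matching the target table:

``` bash
# format_schema is passed as a clickhouse-client option; file and table names are illustrative.
$ cat messages.bin | clickhouse-client --query="INSERT INTO {some_table} FORMAT Protobuf" --format_schema='schemafile:MessageType'
```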

If you input or output data via the [client](../interfaces/cli.md) in the [interactive mode](../interfaces/cli.md#cli_usage), the file name specified in the format schema
can contain an absolute path or a path relative to the current directory on the client.
If you use the client in the [batch mode](../interfaces/cli.md#cli_usage), the path to the schema must be relative for security reasons.

If you input or output data via the [HTTP interface](../interfaces/http.md), the file name specified in the format schema
should be located in the directory specified in [format\_schema\_path](../operations/server_configuration_parameters/settings.md#server_configuration_parameters-format_schema_path)
in the server configuration.
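
For example, over HTTP the schema is referenced by name and resolved on the server (a sketch; the host, table and file names are illustrative, and `schemafile.proto` is assumed to reside in the server's `format_schema_path` directory):

``` bash
# Settings such as format_schema can be passed as URL parameters of the HTTP interface.
$ cat messages.bin | curl -sS "http://localhost:8123/?query=INSERT%20INTO%20some_table%20FORMAT%20Protobuf&format_schema=schemafile:MessageType" --data-binary @-
```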

## Skipping Errors {#skippingerrors}

Some formats such as `CSV`, `TabSeparated`, `TSKV`, `JSONEachRow`, `Template`, `CustomSeparated` and `Protobuf` can skip a broken row if a parsing error occurs and continue parsing from the beginning of the next row. See [input\_format\_allow\_errors\_num](../operations/settings/settings.md#settings-input_format_allow_errors_num) and
[input\_format\_allow\_errors\_ratio](../operations/settings/settings.md#settings-input_format_allow_errors_ratio) settings.
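
For example, a sketch that tolerates a limited number of malformed rows when loading a CSV file (the table and file names are illustrative):

``` bash
# Allow up to 10 broken rows, or up to 1% of all rows, before aborting the INSERT.
$ cat data.csv | clickhouse-client --query="INSERT INTO {some_table} FORMAT CSV" \
    --input_format_allow_errors_num=10 \
    --input_format_allow_errors_ratio=0.01
```
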
Limitations:
- In case of a parsing error, `JSONEachRow` skips all data until the new line (or EOF), so rows must be delimited by `\n` for errors to be counted correctly.
- `Template` and `CustomSeparated` use the delimiter after the last column and the delimiter between rows to find the beginning of the next row, so skipping errors works only if at least one of them is not empty.

[Original article](https://clickhouse.tech/docs/en/interfaces/formats/) <!--hide-->