---
toc_priority: 21
toc_title: Input and Output Formats
---

# Formats for Input and Output Data {#formats}

ClickHouse can accept and return data in various formats. A format supported for input can be used to parse the data provided to `INSERT`s, to perform `SELECT`s from a file-backed table such as File, URL or HDFS, or to read an external dictionary. A format supported for output can be used to arrange the results of a `SELECT`, and to perform `INSERT`s into a file-backed table.

The supported formats are:

| Format                                                          | Input | Output |
|-----------------------------------------------------------------|-------|--------|
| [TabSeparated](#tabseparated)                                   | ✔     | ✔      |
| [TabSeparatedRaw](#tabseparatedraw)                             | ✔     | ✔      |
| [TabSeparatedWithNames](#tabseparatedwithnames)                 | ✔     | ✔      |
| [TabSeparatedWithNamesAndTypes](#tabseparatedwithnamesandtypes) | ✔     | ✔      |
| [Template](#format-template)                                    | ✔     | ✔      |
| [TemplateIgnoreSpaces](#templateignorespaces)                   | ✔     | ✗      |
| [CSV](#csv)                                                     | ✔     | ✔      |
| [CSVWithNames](#csvwithnames)                                   | ✔     | ✔      |
| [CustomSeparated](#format-customseparated)                      | ✔     | ✔      |
| [Values](#data-format-values)                                   | ✔     | ✔      |
| [Vertical](#vertical)                                           | ✗     | ✔      |
| [VerticalRaw](#verticalraw)                                     | ✗     | ✔      |
| [JSON](#json)                                                   | ✗     | ✔      |
| [JSONCompact](#jsoncompact)                                     | ✗     | ✔      |
| [JSONEachRow](#jsoneachrow)                                     | ✔     | ✔      |
| [TSKV](#tskv)                                                   | ✔     | ✔      |
| [Pretty](#pretty)                                               | ✗     | ✔      |
| [PrettyCompact](#prettycompact)                                 | ✗     | ✔      |
| [PrettyCompactMonoBlock](#prettycompactmonoblock)               | ✗     | ✔      |
| [PrettyNoEscapes](#prettynoescapes)                             | ✗     | ✔      |
| [PrettySpace](#prettyspace)                                     | ✗     | ✔      |
| [Protobuf](#protobuf)                                           | ✔     | ✔      |
| [Avro](#data-format-avro)                                       | ✔     | ✔      |
| [AvroConfluent](#data-format-avro-confluent)                    | ✔     | ✗      |
| [Parquet](#data-format-parquet)                                 | ✔     | ✔      |
| [Arrow](#data-format-arrow)                                     | ✔     | ✔      |
| [ArrowStream](#data-format-arrow-stream)                        | ✔     | ✔      |
| [ORC](#data-format-orc)                                         | ✔     | ✗      |
| [RowBinary](#rowbinary)                                         | ✔     | ✔      |
| [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes)       | ✔     | ✔      |
| [Native](#native)                                               | ✔     | ✔      |
| [Null](#null)                                                   | ✗     | ✔      |
| [XML](#xml)                                                     | ✗     | ✔      |
| [CapnProto](#capnproto)                                         | ✔     | ✗      |

You can control some format processing parameters with the ClickHouse settings. For more information read the [Settings](../operations/settings/settings.md) section.

## TabSeparated {#tabseparated}

In TabSeparated format, data is written by row. Each row contains values separated by tabs. Each value is followed by a tab, except the last value in the row, which is followed by a line feed. Strictly Unix line feeds are assumed everywhere. The last row also must contain a line feed at the end. Values are written in text format, without enclosing quotation marks, and with special characters escaped.

This format is also available under the name `TSV`.

The `TabSeparated` format is convenient for processing data using custom programs and scripts. It is used by default in the HTTP interface, and in the command-line client’s batch mode. This format also allows transferring data between different DBMSs. For example, you can get a dump from MySQL and upload it to ClickHouse, or vice versa.

The `TabSeparated` format supports outputting total values (when using WITH TOTALS) and extreme values (when ‘extremes’ is set to 1). In these cases, the total values and extremes are output after the main data. The main result, total values, and extremes are separated from each other by an empty line. Example:

``` sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
```

``` text
2014-03-17      1406958
2014-03-18      1383658
2014-03-19      1405797
2014-03-20      1353623
2014-03-21      1245779
2014-03-22      1031592
2014-03-23      1046491

0000-00-00      8873898

2014-03-17      1031592
2014-03-23      1406958
```
### Data Formatting {#data-formatting}

Integer numbers are written in decimal form. Numbers can contain an extra “+” character at the beginning (ignored when parsing, and not recorded when formatting). Non-negative numbers can’t contain the negative sign. When reading, it is allowed to parse an empty string as a zero, or (for signed types) a string consisting of just a minus sign as a zero. Numbers that do not fit into the corresponding data type may be parsed as a different number, without an error message.

Floating-point numbers are written in decimal form. The dot is used as the decimal separator. Exponential entries are supported, as are ‘inf’, ‘+inf’, ‘-inf’, and ‘nan’. An entry of floating-point numbers may begin or end with a decimal point.
During formatting, accuracy may be lost on floating-point numbers.
During parsing, it is not strictly required to read the nearest machine-representable number.

Dates are written in YYYY-MM-DD format and parsed in the same format, but with any characters as separators.
Dates with times are written in the format `YYYY-MM-DD hh:mm:ss` and parsed in the same format, but with any characters as separators.
This all occurs in the system time zone at the time the client or server starts (depending on which of them formats data). For dates with times, daylight saving time is not specified. So if a dump has times during daylight saving time, the dump does not unequivocally match the data, and parsing will select one of the two times.
During a read operation, incorrect dates and dates with times can be parsed with natural overflow or as null dates and times, without an error message.

As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats YYYY-MM-DD hh:mm:ss and NNNNNNNNNN are differentiated automatically.
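
The automatic differentiation described above can be sketched in a few lines (a simplified illustration, not ClickHouse’s actual parser — it assumes the usual `-` and `:` separators rather than arbitrary ones):

``` python
# Sketch: exactly 10 decimal digits is read as a Unix timestamp (time zone-
# independent); anything else is parsed as "YYYY-MM-DD hh:mm:ss".
from datetime import datetime, timezone

def parse_datetime_field(text: str) -> datetime:
    if len(text) == 10 and text.isdigit():
        # Unix timestamp form NNNNNNNNNN
        return datetime.fromtimestamp(int(text), tz=timezone.utc)
    # Date-with-time form; real parsing accepts any separator characters.
    return datetime.strptime(text, "%Y-%m-%d %H:%M:%S")

print(parse_datetime_field("1546300800"))           # read as a timestamp
print(parse_datetime_field("2019-01-01 00:00:00"))  # read as a date with time
```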

Strings are output with backslash-escaped special characters. The following escape sequences are used for output: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. Parsing also supports the sequences `\a`, `\v`, and `\xHH` (hex escape sequences) and any `\c` sequences, where `c` is any character (these sequences are converted to `c`). Thus, reading data supports formats where a line feed can be written as `\n` or `\`, or as a line feed. For example, the string `Hello world` with a line feed between the words instead of space can be parsed in any of the following variations:

``` text
Hello\nworld

Hello\
world
```

The second variant is supported because MySQL uses it when writing tab-separated dumps.

The minimum set of characters that you need to escape when passing data in TabSeparated format: tab, line feed (LF) and backslash.

Only a small set of symbols are escaped. You can easily stumble onto a string value that your terminal will ruin in output.
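
The escaping rules above can be sketched as follows (an illustration only; ClickHouse’s real parser additionally handles `\xHH` hex escapes and the backslash-before-line-feed form):

``` python
# Sketch of TabSeparated escaping for String values: escape the listed
# sequences on output, and on input convert any unknown \c back to c.
TSV_ESCAPES = {"\b": "\\b", "\f": "\\f", "\r": "\\r", "\n": "\\n",
               "\t": "\\t", "\0": "\\0", "'": "\\'", "\\": "\\\\"}
REVERSE = {"b": "\b", "f": "\f", "r": "\r", "n": "\n",
           "t": "\t", "0": "\0", "a": "\a", "v": "\v"}

def tsv_escape(value: str) -> str:
    return "".join(TSV_ESCAPES.get(ch, ch) for ch in value)

def tsv_unescape(value: str) -> str:
    out, i = [], 0
    while i < len(value):
        if value[i] == "\\" and i + 1 < len(value):
            out.append(REVERSE.get(value[i + 1], value[i + 1]))  # \c -> c
            i += 2
        else:
            out.append(value[i])
            i += 1
    return "".join(out)

print(tsv_escape("Hello\nworld"))  # the line feed becomes a literal \n
```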

Arrays are written as a list of comma-separated values in square brackets. Number items in the array are formatted as usual. `Date` and `DateTime` types are written in single quotes. Strings are written in single quotes with the same escaping rules as above.

[NULL](../sql-reference/syntax.md) is formatted as `\N`.

Each element of [Nested](../sql-reference/data-types/nested-data-structures/nested.md) structures is represented as an array.

For example:

``` sql
CREATE TABLE nestedt
(
    `id` UInt8,
    `aux` Nested(
        a UInt8,
        b String
    )
)
ENGINE = TinyLog
```

``` sql
INSERT INTO nestedt Values ( 1, [1], ['a'])
```

``` sql
SELECT * FROM nestedt FORMAT TSV
```

``` text
1  [1]    ['a']
```

## TabSeparatedRaw {#tabseparatedraw}

Differs from `TabSeparated` format in that the rows are written without escaping.
When parsing with this format, tabs or line feeds are not allowed in any field.

This format is also available under the name `TSVRaw`.

## TabSeparatedWithNames {#tabseparatedwithnames}

Differs from the `TabSeparated` format in that the column names are written in the first row.
During parsing, the first row is completely ignored. You can’t use column names to determine their position or to check their correctness.
(Support for parsing the header row may be added in the future.)

This format is also available under the name `TSVWithNames`.

## TabSeparatedWithNamesAndTypes {#tabseparatedwithnamesandtypes}

Differs from the `TabSeparated` format in that the column names are written to the first row, while the column types are in the second row.
During parsing, the first and second rows are completely ignored.

This format is also available under the name `TSVWithNamesAndTypes`.

## Template {#format-template}

This format allows specifying a custom format string with placeholders for values with a specified escaping rule.

It uses the settings `format_template_resultset`, `format_template_row`, `format_template_rows_between_delimiter` and some settings of other formats (e.g. `output_format_json_quote_64bit_integers` when using `JSON` escaping, see further).

The `format_template_row` setting specifies the path to a file containing the format string for rows, with the following syntax:

`delimiter_1${column_1:serializeAs_1}delimiter_2${column_2:serializeAs_2} ... delimiter_N`,

where `delimiter_i` is a delimiter between values (the `$` symbol can be escaped as `$$`),
`column_i` is the name or index of a column whose values are to be selected or inserted (if empty, then the column will be skipped),
`serializeAs_i` is an escaping rule for the column values. The following escaping rules are supported:

-   `CSV`, `JSON`, `XML` (similarly to the formats of the same names)
-   `Escaped` (similarly to `TSV`)
-   `Quoted` (similarly to `Values`)
-   `Raw` (without escaping, similarly to `TSVRaw`)
-   `None` (no escaping rule, see further)

If an escaping rule is omitted, then `None` will be used. `XML` and `Raw` are suitable only for output.

So, for the following format string:

`Search phrase: ${SearchPhrase:Quoted}, count: ${c:Escaped}, ad price: $$${price:JSON};`

the values of the `SearchPhrase`, `c` and `price` columns, which are escaped as `Quoted`, `Escaped` and `JSON`, will be printed (for select) or will be expected (for insert) between the `Search phrase:`, `, count:`, `, ad price: $` and `;` delimiters respectively. For example:

`Search phrase: 'bathroom interior design', count: 2166, ad price: $3;`

The `format_template_rows_between_delimiter` setting specifies the delimiter between rows, which is printed (or expected) after every row except the last one (`\n` by default).

The `format_template_resultset` setting specifies the path to a file containing the format string for the result set. It has the same syntax as the format string for rows and allows specifying a prefix, a suffix and a way to print some additional information. It contains the following placeholders instead of column names:

-   `data` is the rows with data in `format_template_row` format, separated by `format_template_rows_between_delimiter`. This placeholder must be the first placeholder in the format string.
-   `totals` is the row with total values in `format_template_row` format (when using WITH TOTALS)
-   `min` is the row with minimum values in `format_template_row` format (when extremes are set to 1)
-   `max` is the row with maximum values in `format_template_row` format (when extremes are set to 1)
-   `rows` is the total number of output rows
-   `rows_before_limit` is the minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT. If the query contains GROUP BY, `rows_before_limit_at_least` is the exact number of rows there would have been without a LIMIT.
-   `time` is the request execution time in seconds
-   `rows_read` is the number of rows that have been read
-   `bytes_read` is the number of bytes (uncompressed) that have been read

The placeholders `data`, `totals`, `min` and `max` must not have an escaping rule specified (or `None` must be specified explicitly). The remaining placeholders may have any escaping rule specified.
If the `format_template_resultset` setting is an empty string, `${data}` is used as the default value.
For insert queries, the format allows skipping some columns or fields if a prefix or suffix is specified (see example).
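
As a rough illustration of how a row format string is applied to one row of data, here is a hypothetical sketch covering only the `Escaped`, `Quoted` and `JSON` rules (the `render_row` helper is not part of ClickHouse):

``` python
# Sketch: substitute ${column:rule} placeholders, then unescape $$ to $.
import json
import re

def render_row(template: str, row: dict) -> str:
    def escape(value, rule):
        if rule == "JSON":
            return json.dumps(value)
        s = str(value)
        if rule == "Escaped":   # TSV-style escaping
            return s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
        if rule == "Quoted":    # Values-style quoting
            return "'" + s.replace("\\", "\\\\").replace("'", "\\'") + "'"
        return s                # None: no escaping

    def substitute(match):
        column, _, rule = match.group(1).partition(":")
        return escape(row[column], rule or "None")

    return re.sub(r"\$\{([^}]*)\}", substitute, template).replace("$$", "$")

template = "Search phrase: ${SearchPhrase:Quoted}, count: ${c:Escaped}, ad price: $$${price:JSON};"
print(render_row(template, {"SearchPhrase": "bathroom interior design", "c": 2166, "price": 3}))
```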

Select example:

``` sql
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase ORDER BY c DESC LIMIT 5 FORMAT Template SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = '\n    '
```

`/some/path/resultset.format`:

``` text
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    ${data}
  </table>
  <table border="1"> <caption>Max</caption>
    ${max}
  </table>
  <b>Processed ${rows_read:XML} rows in ${time:XML} sec</b>
 </body>
</html>
```

`/some/path/row.format`:

``` text
<tr> <td>${0:XML}</td> <td>${1:XML}</td> </tr>
```

Result:

``` html
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    <tr> <td></td> <td>8267016</td> </tr>
    <tr> <td>bathroom interior design</td> <td>2166</td> </tr>
    <tr> <td>yandex</td> <td>1655</td> </tr>
    <tr> <td>spring 2014 fashion</td> <td>1549</td> </tr>
    <tr> <td>freeform photos</td> <td>1480</td> </tr>
  </table>
  <table border="1"> <caption>Max</caption>
    <tr> <td></td> <td>8873898</td> </tr>
  </table>
  <b>Processed 3095973 rows in 0.1569913 sec</b>
 </body>
</html>
```

Insert example:

``` text
Some header
Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1
Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1
Total rows: 2
```

``` sql
INSERT INTO UserActivity FORMAT Template SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format'
```

`/some/path/resultset.format`:

``` text
Some header\n${data}\nTotal rows: ${:CSV}\n
```

`/some/path/row.format`:

``` text
Page views: ${PageViews:CSV}, User id: ${UserID:CSV}, Useless field: ${:CSV}, Duration: ${Duration:CSV}, Sign: ${Sign:CSV}
```

`PageViews`, `UserID`, `Duration` and `Sign` inside placeholders are names of columns in the table. Values after `Useless field` in rows and after `\nTotal rows:` in suffix will be ignored.
All delimiters in the input data must be strictly equal to the delimiters in the specified format strings.

## TemplateIgnoreSpaces {#templateignorespaces}

This format is suitable only for input.
Similar to `Template`, but skips whitespace characters between delimiters and values in the input stream. However, if format strings contain whitespace characters, these characters will be expected in the input stream. It also allows specifying empty placeholders (`${}` or `${:None}`) to split a delimiter into separate parts in order to ignore spaces between them. Such placeholders are used only for skipping whitespace characters.
It’s possible to read `JSON` using this format, if values of columns have the same order in all rows. For example, the following request can be used for inserting data from output example of format [JSON](#json):

``` sql
INSERT INTO table_name FORMAT TemplateIgnoreSpaces SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = ','
```

`/some/path/resultset.format`:

``` text
{${}"meta"${}:${:JSON},${}"data"${}:${}[${data}]${},${}"totals"${}:${:JSON},${}"extremes"${}:${:JSON},${}"rows"${}:${:JSON},${}"rows_before_limit_at_least"${}:${:JSON}${}}
```

`/some/path/row.format`:

``` text
{${}"SearchPhrase"${}:${}${phrase:JSON}${},${}"c"${}:${}${cnt:JSON}${}}
```

## TSKV {#tskv}

Similar to TabSeparated, but outputs a value in name=value format. Names are escaped the same way as in TabSeparated format, and the = symbol is also escaped.

``` text
SearchPhrase=   count()=8267016
SearchPhrase=bathroom interior design    count()=2166
SearchPhrase=yandex     count()=1655
SearchPhrase=2014 spring fashion    count()=1549
SearchPhrase=freeform photos       count()=1480
SearchPhrase=angelina jolie    count()=1245
SearchPhrase=omsk       count()=1112
SearchPhrase=photos of dog breeds    count()=1091
SearchPhrase=curtain designs        count()=1064
SearchPhrase=baku       count()=1000
```

[NULL](../sql-reference/syntax.md) is formatted as `\N`.

``` sql
SELECT * FROM t_null FORMAT TSKV
```

``` text
x=1    y=\N
```

When there is a large number of small columns, this format is inefficient, and there is generally no reason to use it. Nevertheless, it is no worse than JSONEachRow in terms of efficiency.

Both data output and parsing are supported in this format. For parsing, any order is supported for the values of different columns. It is acceptable for some values to be omitted – they are treated as equal to their default values. In this case, zeros and blank rows are used as default values. Complex values that could be specified in the table are not supported as defaults.

Parsing allows the presence of the additional field `tskv` without the equal sign or a value. This field is ignored.
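
A minimal sketch of producing one TSKV row (illustrative only — it covers the common escape sequences and the escaped `=` in names, not every rule of the real serializer):

``` python
# Sketch: names and values use TabSeparated-style escaping; "=" is also
# escaped in names so the separator stays unambiguous.
def tskv_escape(s: str, escape_eq: bool = False) -> str:
    s = (s.replace("\\", "\\\\").replace("\t", "\\t")
          .replace("\n", "\\n").replace("\r", "\\r"))
    if escape_eq:
        s = s.replace("=", "\\=")
    return s

def to_tskv(row: dict) -> str:
    return "\t".join(f"{tskv_escape(k, escape_eq=True)}={tskv_escape(str(v))}"
                     for k, v in row.items())

print(to_tskv({"x": 1, "y": "a\tb"}))  # pairs joined by tabs; the value's tab is escaped
```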

## CSV {#csv}

Comma Separated Values format ([RFC](https://tools.ietf.org/html/rfc4180)).

When formatting, rows are enclosed in double-quotes. A double quote inside a string is output as two double quotes in a row. There are no other rules for escaping characters. Date and date-time are enclosed in double-quotes. Numbers are output without quotes. Values are separated by a delimiter character, which is `,` by default. The delimiter character is defined in the setting [format\_csv\_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter). Rows are separated using the Unix line feed (LF). Arrays are serialized in CSV as follows: first, the array is serialized to a string as in TabSeparated format, and then the resulting string is output to CSV in double-quotes. Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost).

``` bash
$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
```

By default, the delimiter is `,`. See the [format\_csv\_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter) setting for more information.

When parsing, all values can be parsed either with or without quotes. Both double and single quotes are supported. Rows can also be arranged without quotes. In this case, they are parsed up to the delimiter character or line feed (CR or LF). In violation of the RFC, when parsing rows without quotes, the leading and trailing spaces and tabs are ignored. For the line feed, Unix (LF), Windows (CR LF) and Mac OS Classic (CR) types are all supported.

Empty unquoted input values are replaced with default values for the respective columns, if [input\_format\_defaults\_for\_omitted\_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields) is enabled.

`NULL` is formatted as `\N` or `NULL` or an empty unquoted string (see settings [input\_format\_csv\_unquoted\_null\_literal\_as\_null](../operations/settings/settings.md#settings-input_format_csv_unquoted_null_literal_as_null) and [input\_format\_defaults\_for\_omitted\_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields)).

The CSV format supports the output of totals and extremes the same way as `TabSeparated`.
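
The array rule described above (serialize as in TabSeparated, then quote the whole string for CSV) can be sketched as follows — a simplified illustration for integer and string arrays only, not the real serializer:

``` python
# Sketch: a double quote inside a CSV value is doubled; array strings are
# single-quoted with TabSeparated-style escaping before CSV quoting.
def csv_quote(s: str) -> str:
    return '"' + s.replace('"', '""') + '"'

def array_to_csv(values) -> str:
    inner = "[" + ",".join(
        "'" + v.replace("\\", "\\\\").replace("'", "\\'") + "'" if isinstance(v, str)
        else str(v)
        for v in values) + "]"
    return csv_quote(inner)

print(array_to_csv([1, 2, 3]))      # "[1,2,3]"
print(array_to_csv(["a", 'b"c']))   # "['a','b""c']"
```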

## CSVWithNames {#csvwithnames}

Also prints the header row, similar to `TabSeparatedWithNames`.

## CustomSeparated {#format-customseparated}

Similar to [Template](#format-template), but it prints or reads all columns and uses the escaping rule from the `format_custom_escaping_rule` setting and delimiters from the `format_custom_field_delimiter`, `format_custom_row_before_delimiter`, `format_custom_row_after_delimiter`, `format_custom_row_between_delimiter`, `format_custom_result_before_delimiter` and `format_custom_result_after_delimiter` settings, not from format strings.
There is also `CustomSeparatedIgnoreSpaces` format, which is similar to `TemplateIgnoreSpaces`.

## JSON {#json}

Outputs data in JSON format. Besides data tables, it also outputs column names and types, along with some additional information: the total number of output rows, and the number of rows that could have been output if there weren’t a LIMIT. Example:

``` sql
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTALS ORDER BY c DESC LIMIT 5 FORMAT JSON
```

``` json
{
        "meta":
        [
                {
                        "name": "SearchPhrase",
                        "type": "String"
                },
                {
                        "name": "c",
                        "type": "UInt64"
                }
        ],

        "data":
        [
                {
                        "SearchPhrase": "",
                        "c": "8267016"
                },
                {
                        "SearchPhrase": "bathroom interior design",
                        "c": "2166"
                },
                {
                        "SearchPhrase": "yandex",
                        "c": "1655"
                },
                {
                        "SearchPhrase": "spring 2014 fashion",
                        "c": "1549"
                },
                {
                        "SearchPhrase": "freeform photos",
                        "c": "1480"
                }
        ],

        "totals":
        {
                "SearchPhrase": "",
                "c": "8873898"
        },

        "extremes":
        {
                "min":
                {
                        "SearchPhrase": "",
                        "c": "1480"
                },
                "max":
                {
                        "SearchPhrase": "",
                        "c": "8267016"
                }
        },

        "rows": 5,

        "rows_before_limit_at_least": 141137
}
```

The JSON is compatible with JavaScript. To ensure this, some characters are additionally escaped: the slash `/` is escaped as `\/`; alternative line breaks `U+2028` and `U+2029`, which break some browsers, are escaped as `\uXXXX`. ASCII control characters are escaped: backspace, form feed, line feed, carriage return, and horizontal tab are replaced with `\b`, `\f`, `\n`, `\r`, `\t` , as well as the remaining bytes in the 00-1F range using `\uXXXX` sequences. Invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences. For compatibility with JavaScript, Int64 and UInt64 integers are enclosed in double-quotes by default. To remove the quotes, you can set the configuration parameter [output\_format\_json\_quote\_64bit\_integers](../operations/settings/settings.md#session_settings-output_format_json_quote_64bit_integers) to 0.
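
A sketch of these JavaScript-compatibility tweaks (illustrative; it models only the 64-bit integer quoting and the `U+2028`/`U+2029` escapes, and quotes every integer for simplicity):

``` python
# Sketch: emit Int64/UInt64 values as quoted strings by default, and escape
# the line-separator characters that break some browsers.
import json

def to_ch_json(row: dict, quote_64bit: bool = True) -> str:
    def conv(v):
        if quote_64bit and isinstance(v, int):
            return str(v)  # quoted for JavaScript compatibility
        return v
    s = json.dumps({k: conv(v) for k, v in row.items()}, ensure_ascii=False)
    return s.replace("\u2028", "\\u2028").replace("\u2029", "\\u2029")

print(to_ch_json({"SearchPhrase": "", "c": 8267016}))
# {"SearchPhrase": "", "c": "8267016"}
```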

`rows` – The total number of output rows.

`rows_before_limit_at_least` – The minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT.
If the query contains GROUP BY, `rows_before_limit_at_least` is the exact number of rows there would have been without a LIMIT.

`totals` – Total values (when using WITH TOTALS).

`extremes` – Extreme values (when extremes are set to 1).

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

ClickHouse supports [NULL](../sql-reference/syntax.md), which is displayed as `null` in the JSON output.

See also the [JSONEachRow](#jsoneachrow) format.

## JSONCompact {#jsoncompact}

Differs from JSON only in that data rows are output in arrays, not in objects.

Example:

``` json
{
        "meta":
        [
                {
                        "name": "SearchPhrase",
                        "type": "String"
                },
                {
                        "name": "c",
                        "type": "UInt64"
                }
        ],

        "data":
        [
                ["", "8267016"],
                ["bathroom interior design", "2166"],
                ["yandex", "1655"],
                ["fashion trends spring 2014", "1549"],
                ["freeform photo", "1480"]
        ],

        "totals": ["","8873898"],

        "extremes":
        {
                "min": ["","1480"],
                "max": ["","8267016"]
        },

        "rows": 5,

        "rows_before_limit_at_least": 141137
}
```

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).
See also the `JSONEachRow` format.

## JSONEachRow {#jsoneachrow}

When using this format, ClickHouse outputs rows as separated, newline-delimited JSON objects, but the data as a whole is not valid JSON.

``` json
{"SearchPhrase":"curtain designs","count()":"1064"}
{"SearchPhrase":"baku","count()":"1000"}
{"SearchPhrase":"","count()":"8267016"}
```

When inserting the data, you should provide a separate JSON object for each row.

### Inserting Data {#inserting-data}

``` sql
INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146,"Sign":-1} {"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```

ClickHouse allows:

-   Any order of key-value pairs in the object.
-   Omitting some values.

ClickHouse ignores spaces between elements and commas after the objects. You can pass all the objects in one line. You don’t have to separate them with line breaks.

**Omitted values processing**

ClickHouse substitutes omitted values with the default values for the corresponding [data types](../sql-reference/data-types/index.md).

If `DEFAULT expr` is specified, ClickHouse uses different substitution rules depending on the [input\_format\_defaults\_for\_omitted\_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields) setting.

Consider the following table:

``` sql
CREATE TABLE IF NOT EXISTS example_table
(
    x UInt32,
    a DEFAULT x * 2
) ENGINE = Memory;
```

-   If `input_format_defaults_for_omitted_fields = 0`, then the default value for `x` and `a` equals `0` (as the default value for the `UInt32` data type).
-   If `input_format_defaults_for_omitted_fields = 1`, then the default value for `x` equals `0`, but the default value of `a` equals `x * 2`.
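
The `input_format_defaults_for_omitted_fields = 0` behavior can be sketched like this (the `COLUMNS` mapping and the parsing helper are hypothetical; `DEFAULT` expressions, as in the `= 1` case, are not modeled):

``` python
# Sketch: keys may come in any order, and omitted columns fall back to the
# type's default value (UInt32 -> 0).
import json

COLUMNS = {"x": 0, "a": 0}  # column name -> default for its type

def parse_json_each_row(text: str):
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue
        obj = json.loads(line)
        rows.append({col: obj.get(col, default) for col, default in COLUMNS.items()})
    return rows

print(parse_json_each_row('{"x": 5}\n{"a": 7, "x": 1}'))
# [{'x': 5, 'a': 0}, {'x': 1, 'a': 7}]
```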
560

561 562
!!! note "Warning"
    When inserting data with `insert_sample_with_metadata = 1`, ClickHouse consumes more computational resources, compared to insertion with `insert_sample_with_metadata = 0`.

### Selecting Data {#selecting-data}

Consider the `UserActivity` table as an example:

``` text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │         5 │      146 │   -1 │
│ 4324182021466249494 │         6 │      185 │    1 │
└─────────────────────┴───────────┴──────────┴──────┘
```

The query `SELECT * FROM UserActivity FORMAT JSONEachRow` returns:

``` text
{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1}
{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```

Unlike the [JSON](#json) format, there is no substitution of invalid UTF-8 sequences. Values are escaped in the same way as for `JSON`.

!!! note "Note"
    Any set of bytes can be output in the strings. Use the `JSONEachRow` format if you are sure that the data in the table can be formatted as JSON without losing any information.

### Usage of Nested Structures {#jsoneachrow-nested}

If you have a table with [Nested](../sql-reference/data-types/nested-data-structures/nested.md) data type columns, you can insert JSON data with the same structure. Enable this feature with the [input\_format\_import\_nested\_json](../operations/settings/settings.md#settings-input_format_import_nested_json) setting.

For example, consider the following table:

``` sql
CREATE TABLE json_each_row_nested (n Nested (s String, i Int32) ) ENGINE = Memory
```

As you can see in the `Nested` data type description, ClickHouse treats each component of the nested structure as a separate column (`n.s` and `n.i` for our table). You can insert data in the following way:

``` sql
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n.s": ["abc", "def"], "n.i": [1, 23]}
```

To insert data as a hierarchical JSON object, set [input\_format\_import\_nested\_json=1](../operations/settings/settings.md#settings-input_format_import_nested_json).

``` json
{
    "n": {
        "s": ["abc", "def"],
        "i": [1, 23]
    }
}
```

Without this setting, ClickHouse throws an exception.

``` sql
SELECT name, value FROM system.settings WHERE name = 'input_format_import_nested_json'
```

``` text
┌─name────────────────────────────┬─value─┐
│ input_format_import_nested_json │ 0     │
└─────────────────────────────────┴───────┘
```

``` sql
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
```

``` text
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: n: (at row 1)
```

``` sql
SET input_format_import_nested_json=1
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
SELECT * FROM json_each_row_nested
```

``` text
┌─n.s───────────┬─n.i────┐
│ ['abc','def'] │ [1,23] │
└───────────────┴────────┘
```
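The flattening that `input_format_import_nested_json` performs can be illustrated with a small Python sketch (an illustration of the idea only, not ClickHouse code): a one-level hierarchical object is rewritten into the dotted column form used by the `Nested` type:

``` python
def flatten_nested(obj):
    """Rewrite {"n": {"s": [...], "i": [...]}} into {"n.s": [...], "n.i": [...]}."""
    flat = {}
    for key, value in obj.items():
        if isinstance(value, dict):
            # Each sub-field of a nested object becomes a separate dotted column.
            for subkey, subvalue in value.items():
                flat[f"{key}.{subkey}"] = subvalue
        else:
            flat[key] = value
    return flat

print(flatten_nested({"n": {"s": ["abc", "def"], "i": [1, 23]}}))
```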

## Native {#native}

The most efficient format. Data is written and read by blocks in binary format. For each block, the number of rows, number of columns, column names and types, and parts of columns in this block are recorded one after another. In other words, this format is “columnar” – it doesn’t convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients.

You can use this format to quickly generate dumps that can only be read by the ClickHouse DBMS. It doesn’t make sense to work with this format yourself.

## Null {#null}

Nothing is output. However, the query is processed, and when using the command-line client, data is transmitted to the client. This is used for tests, including performance testing.
Obviously, this format is only appropriate for output, not for parsing.

## Pretty {#pretty}

Outputs data as Unicode-art tables, also using ANSI-escape sequences for setting colours in the terminal.
A full grid of the table is drawn, and each row occupies two lines in the terminal.
Each result block is output as a separate table. This is necessary so that blocks can be output without buffering results (buffering would be necessary in order to pre-calculate the visible width of all the values).

[NULL](../sql-reference/syntax.md) is output as `ᴺᵁᴸᴸ`.

Example (shown for the [PrettyCompact](#prettycompact) format):

``` sql
SELECT * FROM t_null
```

``` text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
```

Rows are not escaped in Pretty\* formats. Example is shown for the [PrettyCompact](#prettycompact) format:

``` sql
SELECT 'String with \'quotes\' and \t character' AS Escaping_test
```

``` text
┌─Escaping_test────────────────────────┐
│ String with 'quotes' and      character │
└──────────────────────────────────────┘
```

To avoid dumping too much data to the terminal, only the first 10,000 rows are printed. If the number of rows is greater than or equal to 10,000, the message “Showed first 10 000” is printed.
This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

The Pretty format supports outputting total values (when using WITH TOTALS) and extremes (when ‘extremes’ is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. Example (shown for the [PrettyCompact](#prettycompact) format):

``` sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
```

``` text
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
│ 2014-03-19 │ 1405797 │
│ 2014-03-20 │ 1353623 │
│ 2014-03-21 │ 1245779 │
│ 2014-03-22 │ 1031592 │
│ 2014-03-23 │ 1046491 │
└────────────┴─────────┘

Totals:
┌──EventDate─┬───────c─┐
│ 0000-00-00 │ 8873898 │
└────────────┴─────────┘

Extremes:
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1031592 │
│ 2014-03-23 │ 1406958 │
└────────────┴─────────┘
```

## PrettyCompact {#prettycompact}

Differs from [Pretty](#pretty) in that the grid is drawn between rows and the result is more compact.
This format is used by default in the command-line client in interactive mode.

## PrettyCompactMonoBlock {#prettycompactmonoblock}

Differs from [PrettyCompact](#prettycompact) in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

## PrettyNoEscapes {#prettynoescapes}

Differs from Pretty in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.

Example:

``` bash
$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
```

You can use the HTTP interface for displaying in the browser.

### PrettyCompactNoEscapes {#prettycompactnoescapes}

The same as [PrettyCompact](#prettycompact), but without using ANSI escape sequences.

### PrettySpaceNoEscapes {#prettyspacenoescapes}

The same as [PrettySpace](#prettyspace), but without using ANSI escape sequences.

## PrettySpace {#prettyspace}

Differs from [PrettyCompact](#prettycompact) in that whitespace (space characters) is used instead of the grid.

## RowBinary {#rowbinary}

Formats and parses data by row in binary format. Rows and values are listed consecutively, without separators.
This format is less efficient than the Native format since it is row-based.

Integers use fixed-length little-endian representation. For example, UInt64 uses 8 bytes.
DateTime is represented as UInt32 containing the Unix timestamp as the value.
Date is represented as a UInt16 object that contains the number of days since 1970-01-01 as the value.
String is represented as a varint length (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by the bytes of the string.
FixedString is represented simply as a sequence of bytes.

Array is represented as a varint length (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by successive elements of the array.

For [NULL](../sql-reference/syntax.md#null-literal) support, an additional byte containing 1 or 0 is added before each [Nullable](../sql-reference/data-types/nullable.md) value. If 1, then the value is `NULL` and this byte is interpreted as a separate value. If 0, the value after the byte is not `NULL`.
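The encoding rules above can be sketched in Python (an illustration only, not ClickHouse code; only a few of the types are shown):

``` python
import struct
from datetime import date

def write_varint(n: int) -> bytes:
    # Unsigned LEB128: 7 bits per byte, the high bit marks continuation.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_uint64(v: int) -> bytes:
    return struct.pack("<Q", v)               # fixed-length little-endian

def encode_string(s: str) -> bytes:
    data = s.encode("utf-8")
    return write_varint(len(data)) + data     # varint length, then the bytes

def encode_date(d: date) -> bytes:
    days = (d - date(1970, 1, 1)).days
    return struct.pack("<H", days)            # UInt16: days since 1970-01-01

def encode_nullable_string(s) -> bytes:
    # Nullable(T): a leading byte, 1 for NULL, or 0 followed by the value.
    return b"\x01" if s is None else b"\x00" + encode_string(s)

row = encode_uint64(4324182021466249494) + encode_string("hello")
print(row.hex())
```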

## RowBinaryWithNamesAndTypes {#rowbinarywithnamesandtypes}

Similar to [RowBinary](#rowbinary), but with added header:

-   [LEB128](https://en.wikipedia.org/wiki/LEB128)-encoded number of columns (N)
-   N `String`s specifying column names
-   N `String`s specifying column types
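A sketch of how such a header could be assembled (illustrative Python with hypothetical column names, not ClickHouse code; strings use the same varint-prefixed encoding as in RowBinary):

``` python
def varint(n: int) -> bytes:
    # Unsigned LEB128.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def binstr(s: str) -> bytes:
    data = s.encode("utf-8")
    return varint(len(data)) + data

def header(columns):
    # varint column count, then N names, then N types.
    names = b"".join(binstr(name) for name, _ in columns)
    types = b"".join(binstr(typ) for _, typ in columns)
    return varint(len(columns)) + names + types

print(header([("UserID", "UInt64"), ("Duration", "UInt32")]).hex())
```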

## Values {#data-format-values}

Prints every row in brackets. Rows are separated by commas. There is no comma after the last row. The values inside the brackets are also comma-separated. Numbers are output in a decimal format without quotes. Arrays are output in square brackets. Strings, dates, and dates with times are output in quotes. Escaping rules and parsing are similar to the [TabSeparated](#tabseparated) format. During formatting, extra spaces aren’t inserted, but during parsing, they are allowed and skipped (except for spaces inside array values, which are not allowed). [NULL](../sql-reference/syntax.md) is represented as `NULL`.

The minimum set of characters that you need to escape when passing data in Values format: single quotes and backslashes.

This is the format that is used in `INSERT INTO t VALUES ...`, but you can also use it for formatting query results.
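A minimal sketch of that escaping rule in Python (illustrative, not ClickHouse code):

``` python
def escape_values(s: str) -> str:
    # Backslashes first, then single quotes, so existing backslashes
    # are not escaped twice.
    return s.replace("\\", "\\\\").replace("'", "\\'")

value = "it's a backslash: \\"
print(f"('{escape_values(value)}')")
```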

See also: [input\_format\_values\_interpret\_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions) and [input\_format\_values\_deduce\_templates\_of\_expressions](../operations/settings/settings.md#settings-input_format_values_deduce_templates_of_expressions) settings.

## Vertical {#vertical}

Prints each value on a separate line with the column name specified. This format is convenient for printing just one or a few rows if each row consists of a large number of columns.

[NULL](../sql-reference/syntax.md) is output as `ᴺᵁᴸᴸ`.

Example:

``` sql
SELECT * FROM t_null FORMAT Vertical
```

``` text
Row 1:
──────
x: 1
y: ᴺᵁᴸᴸ
```

Rows are not escaped in Vertical format:

``` sql
SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical
```

``` text
Row 1:
──────
test: string with 'quotes' and      with some special
 characters
```

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

## VerticalRaw {#verticalraw}

Similar to [Vertical](#vertical), but with escaping disabled. This format is only suitable for outputting query results, not for parsing (receiving data and inserting it in the table).

## XML {#xml}

XML format is suitable only for output, not for parsing. Example:

``` xml
<?xml version='1.0' encoding='UTF-8' ?>
<result>
        <meta>
                <columns>
                        <column>
                                <name>SearchPhrase</name>
                                <type>String</type>
                        </column>
                        <column>
                                <name>count()</name>
                                <type>UInt64</type>
                        </column>
                </columns>
        </meta>
        <data>
                <row>
                        <SearchPhrase></SearchPhrase>
                        <field>8267016</field>
                </row>
                <row>
                        <SearchPhrase>bathroom interior design</SearchPhrase>
                        <field>2166</field>
                </row>
                <row>
                        <SearchPhrase>yandex</SearchPhrase>
                        <field>1655</field>
                </row>
                <row>
                        <SearchPhrase>2014 spring fashion</SearchPhrase>
                        <field>1549</field>
                </row>
                <row>
                        <SearchPhrase>freeform photos</SearchPhrase>
                        <field>1480</field>
                </row>
                <row>
                        <SearchPhrase>angelina jolie</SearchPhrase>
                        <field>1245</field>
                </row>
                <row>
                        <SearchPhrase>omsk</SearchPhrase>
                        <field>1112</field>
                </row>
                <row>
                        <SearchPhrase>photos of dog breeds</SearchPhrase>
                        <field>1091</field>
                </row>
                <row>
                        <SearchPhrase>curtain designs</SearchPhrase>
                        <field>1064</field>
                </row>
                <row>
                        <SearchPhrase>baku</SearchPhrase>
                        <field>1000</field>
                </row>
        </data>
        <rows>10</rows>
        <rows_before_limit_at_least>141137</rows_before_limit_at_least>
</result>
```

If the column name does not have an acceptable format, just ‘field’ is used as the element name. In general, the XML structure follows the JSON structure.
Just as for JSON, invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences.

In string values, the characters `<` and `&` are escaped as `&lt;` and `&amp;`.
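Standard-library escaping helpers behave similarly; for example, Python's `xml.sax.saxutils.escape` (which additionally escapes `>`):

``` python
from xml.sax.saxutils import escape

# escape() replaces '&', '<' and '>' with their XML entities.
print(escape("curtains & <drapes>"))
```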

Arrays are output as `<array><elem>Hello</elem><elem>World</elem>...</array>`, and tuples as `<tuple><elem>Hello</elem><elem>World</elem>...</tuple>`.

## CapnProto {#capnproto}

Cap’n Proto is a binary message format similar to Protocol Buffers and Thrift, but not like JSON or MessagePack.

Cap’n Proto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query.

``` bash
$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'"
```

Where `schema.capnp` looks like this:

``` capnp
struct Message {
  SearchPhrase @0 :Text;
  c @1 :UInt64;
}
```

Deserialization is efficient and usually doesn’t increase the system load.

See also [Format Schema](#formatschema).

## Protobuf {#protobuf}

Protobuf is a [Protocol Buffers](https://developers.google.com/protocol-buffers/) format.

This format requires an external format schema. The schema is cached between queries.
ClickHouse supports both `proto2` and `proto3` syntaxes. Repeated/optional/required fields are supported.

Usage examples:

``` sql
SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'
```

``` bash
cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table FORMAT Protobuf SETTINGS format_schema='schemafile:MessageType'"
```

where the file `schemafile.proto` looks like this:

``` protobuf
syntax = "proto3";

message MessageType {
  string name = 1;
  string surname = 2;
  uint32 birthDate = 3;
  repeated string phoneNumbers = 4;
};
```

To find the correspondence between table columns and fields of a Protocol Buffers message type, ClickHouse compares their names.
This comparison is case-insensitive, and the characters `_` (underscore) and `.` (dot) are considered equal.
If the types of a column and a field of the Protocol Buffers message differ, the necessary conversion is applied.
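This matching rule can be sketched in Python (an illustration of the rule only, not ClickHouse code):

``` python
def normalize(name: str) -> str:
    # Case-insensitive; '_' (underscore) and '.' (dot) are interchangeable.
    return name.lower().replace(".", "_")

def names_match(column: str, field_path: str) -> bool:
    return normalize(column) == normalize(field_path)

print(names_match("x.y.z", "X_y_Z"))
```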

Nested messages are supported. For example, for the field `z` in the following message type

``` protobuf
message MessageType {
  message XType {
    message YType {
      int32 z;
    };
    repeated YType y;
  };
  XType x;
};
```

ClickHouse tries to find a column named `x.y.z` (or `x_y_z` or `X.y_Z` and so on).
Nested messages are suitable for inputting or outputting [nested data structures](../sql-reference/data-types/nested-data-structures/nested.md).

Default values defined in a protobuf schema like this

``` protobuf
syntax = "proto2";

message MessageType {
  optional int32 result_per_page = 3 [default = 10];
}
```

are not applied; the [table defaults](../sql-reference/statements/create/table.md#create-default-values) are used instead of them.

ClickHouse inputs and outputs protobuf messages in the `length-delimited` format.
This means that every message is preceded by its length, written as a [varint](https://developers.google.com/protocol-buffers/docs/encoding#varints).
See also [how to read/write length-delimited protobuf messages in popular languages](https://cwiki.apache.org/confluence/display/GEODE/Delimiting+Protobuf+Messages).
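The framing itself can be sketched in Python (illustrative; real applications would normally use a protobuf library's delimited I/O helpers):

``` python
def write_delimited(messages):
    # Each serialized message is preceded by its length as an unsigned varint.
    out = bytearray()
    for msg in messages:
        n = len(msg)
        while True:
            byte = n & 0x7F
            n >>= 7
            out.append(byte | (0x80 if n else 0))
            if not n:
                break
        out += msg
    return bytes(out)

def read_delimited(buf):
    messages, i = [], 0
    while i < len(buf):
        length, shift = 0, 0
        while True:                      # decode the varint length prefix
            byte = buf[i]
            i += 1
            length |= (byte & 0x7F) << shift
            shift += 7
            if not byte & 0x80:
                break
        messages.append(bytes(buf[i:i + length]))
        i += length
    return messages
```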

## Avro {#data-format-avro}

[Apache Avro](https://avro.apache.org/) is a row-oriented data serialization framework developed within Apache’s Hadoop project.

ClickHouse Avro format supports reading and writing [Avro data files](https://avro.apache.org/docs/current/spec.html#Object+Container+Files).

### Data Types Matching {#data_types-matching}

The table below shows supported data types and how they match ClickHouse [data types](../sql-reference/data-types/index.md) in `INSERT` and `SELECT` queries.

| Avro data type `INSERT`                     | ClickHouse data type                                                                                                  | Avro data type `SELECT`      |
|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|------------------------------|
| `boolean`, `int`, `long`, `float`, `double` | [Int(8\|16\|32)](../sql-reference/data-types/int-uint.md), [UInt(8\|16\|32)](../sql-reference/data-types/int-uint.md) | `int`                        |
| `boolean`, `int`, `long`, `float`, `double` | [Int64](../sql-reference/data-types/int-uint.md), [UInt64](../sql-reference/data-types/int-uint.md)                   | `long`                       |
| `boolean`, `int`, `long`, `float`, `double` | [Float32](../sql-reference/data-types/float.md)                                                                       | `float`                      |
| `boolean`, `int`, `long`, `float`, `double` | [Float64](../sql-reference/data-types/float.md)                                                                       | `double`                     |
| `bytes`, `string`, `fixed`, `enum`          | [String](../sql-reference/data-types/string.md)                                                                       | `bytes`                      |
| `bytes`, `string`, `fixed`                  | [FixedString(N)](../sql-reference/data-types/fixedstring.md)                                                          | `fixed(N)`                   |
| `enum`                                      | [Enum(8\|16)](../sql-reference/data-types/enum.md)                                                                    | `enum`                       |
| `array(T)`                                  | [Array(T)](../sql-reference/data-types/array.md)                                                                      | `array(T)`                   |
| `union(null, T)`, `union(T, null)`          | [Nullable(T)](../sql-reference/data-types/nullable.md)                                                                | `union(null, T)`             |
| `null`                                      | [Nullable(Nothing)](../sql-reference/data-types/special-data-types/nothing.md)                                        | `null`                       |
| `int (date)` \*                             | [Date](../sql-reference/data-types/date.md)                                                                           | `int (date)` \*              |
| `long (timestamp-millis)` \*                | [DateTime64(3)](../sql-reference/data-types/datetime.md)                                                              | `long (timestamp-millis)` \* |
| `long (timestamp-micros)` \*                | [DateTime64(6)](../sql-reference/data-types/datetime.md)                                                              | `long (timestamp-micros)` \* |

\* [Avro logical types](https://avro.apache.org/docs/current/spec.html#Logical+Types)

Unsupported Avro data types: `record` (non-root), `map`

Unsupported Avro logical data types: `time-millis`, `time-micros`, `duration`

### Inserting Data {#inserting-data-1}

To insert data from an Avro file into ClickHouse table:

``` bash
$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"
```

The root schema of input Avro file must be of `record` type.

To find the correspondence between table columns and fields of Avro schema ClickHouse compares their names. This comparison is case-sensitive.
Unused fields are skipped.

Data types of ClickHouse table columns can differ from the corresponding fields of the Avro data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) the data to corresponding column type.

### Selecting Data {#selecting-data-1}

To select data from ClickHouse table into an Avro file:

``` bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro
```

Column names must:

-   start with `[A-Za-z_]`
-   subsequently contain only `[A-Za-z0-9_]`
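These rules amount to the regular expression `[A-Za-z_][A-Za-z0-9_]*`; a quick Python check (illustrative):

``` python
import re

AVRO_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def is_valid_avro_name(name: str) -> bool:
    return AVRO_NAME.fullmatch(name) is not None

print([n for n in ["UserID", "_tmp", "2fast", "a-b"] if is_valid_avro_name(n)])
```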

Output Avro file compression and sync interval can be configured with [output\_format\_avro\_codec](../operations/settings/settings.md#settings-output_format_avro_codec) and [output\_format\_avro\_sync\_interval](../operations/settings/settings.md#settings-output_format_avro_sync_interval) respectively.

## AvroConfluent {#data-format-avro-confluent}

AvroConfluent supports decoding single-object Avro messages commonly used with [Kafka](https://kafka.apache.org/) and [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html).

Each Avro message embeds a schema id that can be resolved to the actual schema with help of the Schema Registry.

Schemas are cached once resolved.
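The Confluent framing itself is small: a zero magic byte, a 4-byte big-endian schema id, then the Avro-encoded payload. A Python sketch of extracting the id (illustrative, not ClickHouse code):

``` python
import struct

def parse_confluent_frame(raw: bytes):
    # 1-byte magic (must be 0) + 4-byte big-endian schema id + Avro payload.
    magic, schema_id = struct.unpack(">BI", raw[:5])
    if magic != 0:
        raise ValueError("not a Confluent-framed message")
    return schema_id, raw[5:]

schema_id, payload = parse_confluent_frame(b"\x00\x00\x00\x00\x2a" + b"avro-bytes")
print(schema_id)  # 42
```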

Schema Registry URL is configured with [format\_avro\_schema\_registry\_url](../operations/settings/settings.md#format_avro_schema_registry_url).

### Data Types Matching {#data_types-matching-1}

Same as [Avro](#data-format-avro).

### Usage {#usage}

To quickly verify schema resolution you can use [kafkacat](https://github.com/edenhill/kafkacat) with [clickhouse-local](../operations/utilities/clickhouse-local.md):

``` bash
$ kafkacat -b kafka-broker  -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local   --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String"  -q 'select *  from table'
1 a
2 b
3 c
```

To use `AvroConfluent` with [Kafka](../engines/table-engines/integrations/kafka.md):

``` sql
CREATE TABLE topic1_stream
(
    field1 String,
    field2 String
)
ENGINE = Kafka()
SETTINGS
kafka_broker_list = 'kafka-broker',
kafka_topic_list = 'topic1',
kafka_group_name = 'group1',
kafka_format = 'AvroConfluent';

SET format_avro_schema_registry_url = 'http://schema-registry';

SELECT * FROM topic1_stream;
```

!!! note "Warning"
    The `format_avro_schema_registry_url` setting needs to be configured in `users.xml` to preserve its value after a restart. You can also use the `format_avro_schema_registry_url` setting of the `Kafka` table engine.

## Parquet {#data-format-parquet}

[Apache Parquet](https://parquet.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.

### Data Types Matching {#data_types-matching-2}

The table below shows supported data types and how they match ClickHouse [data types](../sql-reference/data-types/index.md) in `INSERT` and `SELECT` queries.

| Parquet data type (`INSERT`) | ClickHouse data type                                      | Parquet data type (`SELECT`) |
|------------------------------|-----------------------------------------------------------|------------------------------|
| `UINT8`, `BOOL`              | [UInt8](../sql-reference/data-types/int-uint.md)          | `UINT8`                      |
| `INT8`                       | [Int8](../sql-reference/data-types/int-uint.md)           | `INT8`                       |
| `UINT16`                     | [UInt16](../sql-reference/data-types/int-uint.md)         | `UINT16`                     |
| `INT16`                      | [Int16](../sql-reference/data-types/int-uint.md)          | `INT16`                      |
| `UINT32`                     | [UInt32](../sql-reference/data-types/int-uint.md)         | `UINT32`                     |
| `INT32`                      | [Int32](../sql-reference/data-types/int-uint.md)          | `INT32`                      |
| `UINT64`                     | [UInt64](../sql-reference/data-types/int-uint.md)         | `UINT64`                     |
| `INT64`                      | [Int64](../sql-reference/data-types/int-uint.md)          | `INT64`                      |
| `FLOAT`, `HALF_FLOAT`        | [Float32](../sql-reference/data-types/float.md)           | `FLOAT`                      |
| `DOUBLE`                     | [Float64](../sql-reference/data-types/float.md)           | `DOUBLE`                     |
| `DATE32`                     | [Date](../sql-reference/data-types/date.md)               | `UINT16`                     |
| `DATE64`, `TIMESTAMP`        | [DateTime](../sql-reference/data-types/datetime.md)       | `UINT32`                     |
| `STRING`, `BINARY`           | [String](../sql-reference/data-types/string.md)           | `STRING`                     |
| —                            | [FixedString](../sql-reference/data-types/fixedstring.md) | `STRING`                     |
| `DECIMAL`                    | [Decimal](../sql-reference/data-types/decimal.md)         | `DECIMAL`                    |

ClickHouse supports configurable precision of `Decimal` type. The `INSERT` query treats the Parquet `DECIMAL` type as the ClickHouse `Decimal128` type.

Unsupported Parquet data types: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.

Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column.

### Inserting and Selecting Data {#inserting-and-selecting-data}

You can insert Parquet data from a file into ClickHouse table by the following command:

``` bash
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
```

You can select data from a ClickHouse table and save it into some file in the Parquet format by the following command:

``` bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
```

To exchange data with Hadoop, you can use [HDFS table engine](../engines/table-engines/integrations/hdfs.md).

## Arrow {#data-format-arrow}

[Apache Arrow](https://arrow.apache.org/) comes with two built-in columnar storage formats. ClickHouse supports read and write operations for these formats.

`Arrow` is Apache Arrow’s “file mode” format. It is designed for in-memory random access.

## ArrowStream {#data-format-arrow-stream}

`ArrowStream` is Apache Arrow’s “stream mode” format. It is designed for in-memory stream processing.

## ORC {#data-format-orc}

[Apache ORC](https://orc.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. You can only insert data in this format to ClickHouse.

### Data Types Matching {#data_types-matching-3}

The table below shows supported data types and how they match ClickHouse [data types](../sql-reference/data-types/index.md) in `INSERT` queries.

| ORC data type (`INSERT`) | ClickHouse data type                                |
|--------------------------|-----------------------------------------------------|
| `UINT8`, `BOOL`          | [UInt8](../sql-reference/data-types/int-uint.md)    |
| `INT8`                   | [Int8](../sql-reference/data-types/int-uint.md)     |
| `UINT16`                 | [UInt16](../sql-reference/data-types/int-uint.md)   |
| `INT16`                  | [Int16](../sql-reference/data-types/int-uint.md)    |
| `UINT32`                 | [UInt32](../sql-reference/data-types/int-uint.md)   |
| `INT32`                  | [Int32](../sql-reference/data-types/int-uint.md)    |
| `UINT64`                 | [UInt64](../sql-reference/data-types/int-uint.md)   |
| `INT64`                  | [Int64](../sql-reference/data-types/int-uint.md)    |
| `FLOAT`, `HALF_FLOAT`    | [Float32](../sql-reference/data-types/float.md)     |
| `DOUBLE`                 | [Float64](../sql-reference/data-types/float.md)     |
| `DATE32`                 | [Date](../sql-reference/data-types/date.md)         |
| `DATE64`, `TIMESTAMP`    | [DateTime](../sql-reference/data-types/datetime.md) |
| `STRING`, `BINARY`       | [String](../sql-reference/data-types/string.md)     |
| `DECIMAL`                | [Decimal](../sql-reference/data-types/decimal.md)   |

ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the ORC `DECIMAL` type as the ClickHouse `Decimal128` type.

Unsupported ORC data types: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.

The data types of ClickHouse table columns don’t have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column.
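
For example (a sketch with hypothetical table and file names), if an ORC file stores `id` as `INT32` but the target column is declared `Int16`, ClickHouse reads the value as `Int32` according to the table above and then casts it to `Int16` on insert:

``` bash
$ clickhouse-client --query="CREATE TABLE orc_target (id Int16, name String) ENGINE = MergeTree ORDER BY id"
$ cat {data.orc} | clickhouse-client --query="INSERT INTO orc_target FORMAT ORC"
```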

### Inserting Data {#inserting-data-2}

You can insert ORC data from a file into a ClickHouse table with the following command:

``` bash
$ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"
```

To exchange data with Hadoop, you can use [HDFS table engine](../engines/table-engines/integrations/hdfs.md).
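
Since ORC is input-only, one way to read an ORC file stored in HDFS is the `hdfs` table function; a sketch (the cluster address, path, and column structure are placeholders):

``` bash
$ clickhouse-client --query="SELECT * FROM hdfs('hdfs://{namenode}:9000/{path}/data.orc', 'ORC', 'id Int32, name String')"
```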

## Format Schema {#formatschema}

The file name containing the format schema is set by the setting `format_schema`.
Setting it is required when one of the formats `Cap'n Proto` or `Protobuf` is used.
The format schema is a combination of a file name and the name of a message type in this file, delimited by a colon,
e.g. `schemafile.proto:MessageType`.
If the file has the standard extension for the format (for example, `.proto` for `Protobuf`),
it can be omitted and in this case, the format schema looks like `schemafile:MessageType`.

If you input or output data via the [client](../interfaces/cli.md) in the [interactive mode](../interfaces/cli.md#cli_usage), the file name specified in the format schema
can contain an absolute path or a path relative to the current directory on the client.
If you use the client in the [batch mode](../interfaces/cli.md#cli_usage), the path to the schema must be relative for security reasons.

If you input or output data via the [HTTP interface](../interfaces/http.md) the file name specified in the format schema
should be located in the directory specified in [format\_schema\_path](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-format_schema_path)
in the server configuration.
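
Putting this together, a sketch of inserting Protobuf-encoded messages with a format schema (the file, table, and message type names are placeholders) might look like:

``` bash
$ cat {messages.bin} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'"
```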

## Skipping Errors {#skippingerrors}

Some formats such as `CSV`, `TabSeparated`, `TSKV`, `JSONEachRow`, `Template`, `CustomSeparated` and `Protobuf` can skip a broken row if a parsing error occurs and continue parsing from the beginning of the next row. See the [input\_format\_allow\_errors\_num](../operations/settings/settings.md#settings-input_format_allow_errors_num) and
[input\_format\_allow\_errors\_ratio](../operations/settings/settings.md#settings-input_format_allow_errors_ratio) settings.

Limitations:
- In case of a parsing error, `JSONEachRow` skips all data until the new line (or EOF), so rows must be delimited by `\n` to count errors correctly.
- `Template` and `CustomSeparated` use the delimiter after the last column and the delimiter between rows to find the beginning of the next row, so skipping errors works only if at least one of them is not empty.
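
For instance, to tolerate up to ten broken rows in a CSV import (a sketch; the file and table names are placeholders), the error-tolerance settings can be passed directly to the client:

``` bash
$ cat {broken_data.csv} | clickhouse-client --input_format_allow_errors_num=10 --query="INSERT INTO {some_table} FORMAT CSV"
```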

[Original article](https://clickhouse.tech/docs/en/interfaces/formats/) <!--hide-->