Constructor
new DataFrameWriter()
Note: Do not use directly (see above).
- Since: 1.4.0
Methods
insertInto(cb, tableName)
Inserts the content of the DataFrame into the specified table. Requires that the schema of the DataFrame is the same as the schema of the table.
Because it inserts data into an existing table, format or options will be ignored.
Parameters:
| Name | Type | Description |
|---|---|---|
| cb | function | Node-style callback function (error-first). |
| tableName | string | Name of the table to insert into. |
- Since: 1.4.0
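For illustration, a minimal sketch of the documented call shape; the DataFrame `df`, the table name `"people"`, and the callback body are hypothetical:

```javascript
// Hypothetical sketch: insert df's rows into the existing table "people".
// The table's schema must match df's schema; format/options are ignored.
df.write().insertInto(function(err) {
  if (err) return console.error("insert failed:", err);
  console.log("insert complete");
}, "people");
```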
insertIntoSync(tableName)
The synchronous version of DataFrameWriter#insertInto.
Parameters:
| Name | Type | Description |
|---|---|---|
| tableName | string | Name of the table to insert into. |
- Since: 1.4.0
json(cb)
Saves the content of the DataFrame in JSON format at the specified path.
Parameters:
| Name | Type | Description |
|---|---|---|
| cb | function | Node-style callback function (error-first). |
- Since: 1.4.0
jsonSync()
The synchronous version of DataFrameWriter#json.
- Since: 1.4.0
mode(saveMode)
Specifies the behavior when data or a table already exists. Options include:
- "overwrite": overwrite the existing data.
- "append": append the data.
- "ignore": ignore the operation (i.e., no-op).
- "error": throw an exception at runtime (the default).
Parameters:
| Name | Type | Description |
|---|---|---|
| saveMode | string | One of "overwrite", "append", "ignore", or "error". |
- Since: 1.4.0
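As a hedged illustration, mode can be chained ahead of a write call. Here `df` and the output path are assumed, and the argument order follows the text() example later on this page:

```javascript
// Hypothetical sketch: overwrite existing output instead of raising.
df.write().mode("overwrite").text("/tmp/output", function(err) {
  if (err) console.error("write failed:", err);
});
```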
option(key, value)
Adds an output option for the underlying data source.
Parameters:
| Name | Type | Description |
|---|---|---|
| key | string | Option name. |
| value | string | Option value. |
- Since: 1.4.0
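A sketch of chaining option before a write; the "compression"/"gzip" pair is illustrative only and not taken from this page, and `df` is an assumed DataFrame:

```javascript
// Hypothetical sketch: pass a key/value option through to the data source.
var writer = df.write().option("compression", "gzip").mode("append");
writer.text("/tmp/output", function(err) {
  if (err) console.error("write failed:", err);
});
```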
saveAsTable(tableName)
Saves the content of the DataFrame as the specified table.
If the table already exists, the behavior of this function depends on the save mode specified by the mode function (which defaults to throwing an exception). When mode is Overwrite, the schema of the DataFrame does not need to be the same as that of the existing table. When mode is Append, the schema of the DataFrame must be the same as that of the existing table, and format or options will be ignored.
When the DataFrame is created from a non-partitioned HadoopFsRelation with a single input path, and the data source provider can be mapped to an existing Hive builtin SerDe (i.e. ORC and Parquet), the table is persisted in a Hive compatible format, which means other systems like Hive will be able to read this table. Otherwise, the table is persisted in a Spark SQL specific format.
Parameters:
| Name | Type | Description |
|---|---|---|
| tableName | string | Name of the table to save as. |
- Since: 1.4.0
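A brief sketch combining mode and saveAsTable; the DataFrame `df` and the table name "events" are hypothetical, and the documented signature takes only the table name:

```javascript
// Hypothetical sketch: append to the "events" table.
// With mode("append") the DataFrame schema must match the existing table.
df.write().mode("append").saveAsTable("events");
```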
saveAsTableSync(tableName)
The synchronous version of DataFrameWriter#saveAsTable.
Parameters:
| Name | Type | Description |
|---|---|---|
| tableName | string | Name of the table to save as. |
- Since: 1.4.0
text(cb, path)
Saves the content of the DataFrame in a text file at the specified path. The DataFrame must have only one column that is of string type. Each row becomes a new line in the output file.
Parameters:
| Name | Type | Description |
|---|---|---|
| cb | function | Node-style callback function (error-first). |
| path | string | Output path. |
- Since: 1.6.0
Example
df.write().text("/path/to/output", cb)
textSync(path)
The synchronous version of DataFrameWriter#text.
Parameters:
| Name | Type | Description |
|---|---|---|
| path | string | Output path. |
- Since: 1.6.0