For HDFS and S3 file sources, this value defines the path to the source.
For JDBC sources, this value is not specified.
For uploaded sources, this value specifies the location on the default backend storage layer where the dataset has been uploaded.
|container||(Azure only) If the dataset is stored on ADLS, this value specifies the container on the blob host where the source is stored.|
Identifies the type of storage where the source is located. Values:
|blobHost||(Azure only) If the dataset is stored on ADLS, this value specifies the blob host where the source is stored.|
|isDynamicOrConverted||Property is |
|Internal identifier of the imported dataset|
|dynamicPath||(Dataset with parameters only) Specifies the path without the parameters inserted into it. Full path is defined based on this value and the data in the |
|isSchematized||(If source file is avro, or |
|createdAt||Timestamp for when the dataset was imported|
|updatedAt||Timestamp for when the dataset was last updated|
|runParameters||If runtime parameters have been applied to the dataset, they are listed here. See below for more information.|
|size||Size of the file in bytes (if applicable)|
|name||Internal name of the imported dataset|
|description||User-friendly description for the imported dataset|
|creator.id||Internal identifier of the user who created the imported dataset|
|updater.id||Internal identifier of the user who last updated the imported dataset|
|workspace.id||Internal identifier of the workspace into which the dataset has been imported.|
|parsingRecipe.id||If initial parsing is applied, this value contains the internal identifier of the recipe that performs the parsing.|
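As a sketch of how the fields above fit together, the following assembles an illustrative imported-dataset object and reads a few of its properties. All values here are hypothetical sample data, not actual API output, and the helper function is not part of the API.

```python
# Illustrative imported-dataset object built from the fields documented above.
# Every value is a hypothetical placeholder, not real API output.
sample_dataset = {
    "id": 12,
    "name": "my-dataset",
    "description": "Example imported dataset",
    "size": "67",
    "createdAt": "2019-03-01T12:00:00.000Z",
    "updatedAt": "2019-03-05T09:30:00.000Z",
    "creator": {"id": 1},
    "updater": {"id": 1},
    "workspace": {"id": 1},
}

def summarize(dataset: dict) -> str:
    """Return a one-line summary of an imported-dataset object."""
    size = int(dataset.get("size", 0))  # size is reported in bytes
    return (f"{dataset['name']} (id={dataset['id']}): "
            f"{size} bytes, created {dataset['createdAt']}")

print(summarize(sample_dataset))
```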
Internal identifier of the connection to the server hosting the dataset.
If this value is
To acquire the entire connection for this dataset, you can use either of the following endpoints:
For more information, see API Connections Get v4.
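A minimal sketch of retrieving the full connection object from the dataset's connection identifier. The base URL and token are placeholders for your own deployment, and the `/v4/connections/{id}` path is an assumption based on the v4 endpoint referenced above; consult the Connections Get v4 reference for the exact route.

```python
import json
import urllib.request

def connection_url(base_url: str, connection_id: int) -> str:
    """Build the Connections Get v4 endpoint URL (path is an assumption)."""
    return f"{base_url}/v4/connections/{connection_id}"

def get_connection(base_url: str, token: str, connection_id: int) -> dict:
    """Fetch the connection object referenced by a dataset's connection id."""
    req = urllib.request.Request(
        connection_url(base_url, connection_id),
        headers={"Authorization": f"Bearer {token}"},  # placeholder auth scheme
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage: pass the internal connection identifier from the dataset response, e.g. `get_connection("https://example.com", token, 4)`.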