...

Property | Description
path

For HDFS and S3 file sources, this value defines the path to the source.

For JDBC sources, this value is not specified.

For uploaded sources, this value specifies the location on the default backend storage layer where the dataset has been uploaded.

bucket
(If type=s3) Bucket on S3 where the source is stored.

container
(Azure only) If the dataset is stored on ADLS, this value specifies the container on the blob host where the source is stored.

type

Identifies the type of storage where the source is located. Values:

  • hdfs
  • s3
  • jdbc

blobHost
(Azure only) If the dataset is stored on ADLS, this value specifies the blob host where the source is stored.

isDynamicOrConverted
This property is true if the dataset is either a dynamic or a converted dataset.

id

Internal identifier of the imported dataset
dynamicPath
(Dataset with parameters only) Specifies the path without the parameters inserted into it. The full path is defined based on this value and the data in the runParameters area.

isSchematized
(If the source file is Avro, or type=jdbc) If true, schema information is available for the source.

isDynamic
If true, the imported dataset is a dynamic dataset (dataset with parameters). For more information, see Overview of Parameterization.

isConverted
If true, the imported dataset has been converted to CSV format for storage.

disableTypeInference

If true, the initial type inference performed on schematized sources by the platform is disabled for this source. For more information, see Configure Type Inference.

hasStructuring

If true, initial parsing steps have been applied to the dataset.

Tip: Setting detectStructure to true when creating the imported dataset applies the initial parsing steps.


createdAt
Timestamp for when the dataset was imported.

updatedAt
Timestamp for when the dataset was last updated.

runParameters
If runtime parameters have been applied to the dataset, they are listed here. See below for more information.

size
Size of the file in bytes (if applicable).

name
Internal name of the imported dataset.

description
User-friendly description for the imported dataset.

creator.id
Internal identifier of the user who created the imported dataset.

updater.id
Internal identifier of the user who last updated the imported dataset.

workspace.id
Internal identifier of the workspace into which the dataset has been imported.

parsingRecipe.id
If initial parsing is applied, this value contains the internal identifier of the recipe that performs the parsing.
connection.id

Internal identifier of the connection to the server hosting the dataset.

If this value is null, the file was uploaded from a local file system.

To acquire the entire connection for this dataset, you can use either of the following endpoints:

Code Block
/v4/importedDatasets?embed=connection
/v4/importedDatasets/:id?embed=connection

For more information, see API Connections Get v4.
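For example, a request against the second endpoint might look something like the following. This is an illustrative sketch only: the dataset identifier and all field values are hypothetical, the response is abbreviated to the properties described above, and the dotted names (creator.id, workspace.id, and so on) are assumed to appear as nested objects.

Code Block
GET /v4/importedDatasets/12345?embed=connection

{
  "id": 12345,
  "name": "orders.csv",
  "description": "Orders data imported from S3",
  "path": "/landing/orders/orders.csv",
  "type": "s3",
  "bucket": "example-bucket",
  "size": 524288,
  "isSchematized": false,
  "isDynamic": false,
  "isConverted": false,
  "disableTypeInference": false,
  "hasStructuring": true,
  "createdAt": "2023-01-15T10:00:00Z",
  "updatedAt": "2023-01-15T10:05:00Z",
  "creator": { "id": 7 },
  "updater": { "id": 7 },
  "workspace": { "id": 1 },
  "connection": { "id": 5 }
}

Because embed=connection is specified, the connection reference is expanded into the full connection object in the actual response; only its identifier is shown here. If the file had been uploaded from a local file system, connection would be null.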

...