Release 6.0.2

Before you import a packaged flow into a Production environment, you may need to apply import rules to remap objects and locations from the source instance to the new instance. Import mapping rules are not required when importing into the same environment, although they may be helpful in some cases.

Tip: If you are importing a flow that references file-based sources and wish to use the original files in your imported flow, you may find it easier to configure the importing user's permissions to access the appropriate source directories and then to swap datasets as needed after you complete the import. This method is simpler when you are working with a small number of flows.

NOTE: Import mapping rules apply to deployments in a Production instance under deployment management. You cannot apply import mapping rules between two Dev instances.

NOTE: Import mapping rules require the use of the APIs made available from the Designer Cloud Powered by Trifacta® platform. API usage is considered a developer-level skill.

  • For more information on creating an export package, see Export Flow.
  • For more information on how to import, see Import Flow.

You can apply the following types of remappings:

Value

For value remappings, you can specify rules to match on specific values or patterns of values in the import package and remap those values for use in the new instance.

NOTE: In this release, value remapping is supported only for S3 bucket names and paths to imported datasets and output locations. Examples are provided below.

Object

For object remappings, you can specify rules to match a value listed in the import package and remap that value to a defined object in the new instance.

NOTE: In this release, object remapping is supported only for connections. An example is provided below.

Import Rules

When a flow is imported, references in the flow definition that apply in the source instance may not apply in the target instance. For example, the location paths to the source datasets may need to be rewritten to point to a different location in the target instance. 

Before you import your flow definition, you need to define rules for any value or object remapping that must be done in the target environment. 

Notes on import rules

  1. Value and object remapping rules should be completed before you import the flow. The flow may be non-functional until the rules are applied.

    Tip: After you create your import rules, you can perform a dry run of the import via API. Any errors are reported in the response. Details are provided below.


  2. Value and object remapping rules are applied at the time of import.  If you add new rules, they are not retroactively applied to release packages that have already been imported.
  3. When changing rules:
    1. Any rules previously applied to the same import object are deleted.
    2. You can apply multiple rules in the same change.
    3. Rules are applied in the order in which they are listed in the request. Rules listed later must be compatible with the changes applied by earlier rules.
  4. Value and object remapping must be completed via API. API usage is considered a developer-level skill. Examples are provided below.

NOTE: Import mapping rules do not work for parameterized datasets. If the imported dataset with parameters is still accessible, you should be able to run jobs from it.
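The ordering behavior described in the notes above can be illustrated with a small, hypothetical simulation. This is not the platform's implementation; it only shows why a rule listed later must match the output of an earlier rule:

```python
import re

# Hypothetical two-rule sequence: the second rule runs against the
# output of the first, so its pattern must match /prod/, not /dev/.
path = "hdfs://data/dev/input.csv"
path = re.sub(r"/dev/", "/prod/", path)             # rule 1
path = re.sub(r"/prod/input", "/prod/final", path)  # rule 2 sees rule 1's result
print(path)  # hdfs://data/prod/final.csv
```

If the second rule had been written against the original /dev/ path, it would never match, and the import package would be left with a broken reference.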

Import Rule Requirements

  • If you are importing into the same instance from which you exported (Dev/Test/Prod on the same instance):
    • Import rules are not required.
    • If you want to use a different source of data in your Prod flow, you must create import rules.
  • If you are importing into a different instance from the one from which you exported (Dev and Prod on different instances):
    • Import rules are required, except in unusual cases.

Examples

The following are some example import rules to address specific uses.

Example - Replace a connection

In the following example, you must remap the connection from the source instance of the platform to the corresponding connection in the instance where you are importing. 

First, you must be able to uniquely identify the connection from the source that you wish to remap. 

  • While the connection Id may work in a limited scope, that identifier is not guaranteed to be unique across instances.
  • If you already know the uuid of the source connection, you can skip the first step below.

In the API response in a connection definition, you can acquire the uuid value for the connection, which is a unique identifier for the connection object across all instances of the platform:

From the source instance:

API Endpoint: /v4/connections
Method: GET
Request Body: None.
Response Body:
{
    "data": [
        {
            "connectParams": {
                "vendor": "redshift",
                "vendorName": "redshift",
                "host": "redshift.example.com",
                "port": "5439",
                "extraLoadParams": "BLANKSASNULL EMPTYASNULL TRIMBLANKS TRUNCATECOLUMNS",
                "defaultDatabase": "test"
            },
            "id": 2,
            "host": "redshift.example.com",
            "port": 5439,
            "vendor": "redshift",
            "params": {
                "extraLoadParams": "BLANKSASNULL EMPTYASNULL TRIMBLANKS TRUNCATECOLUMNS",
                "defaultDatabase": "test"
            },
            "ssl": false,
            "vendorName": "redshift",
            "name": "redshift",
            "description": null,
            "type": "jdbc",
            "isGlobal": true,
            "credentialType": "custom",
            "credentialsShared": true,
            "uuid": "097c2300-2f6a-11e9-a585-57562e0d9cd6",
            "disableTypeInference": false,
            "createdAt": "2019-02-13T08:33:28.368Z",
            "updatedAt": "2019-02-13T08:33:28.381Z",
            "credentials": [
                {
                    "iamRoleArn": "arn:aws:iam:something",
                    "username": "UserName"
                }
            ],
            "creator": {
                "id": 1
            },
            "updater": {
                "id": 1
            },
            "workspace": {
                "id": 1
            }
        },
        {
            "connectParams": {
                "vendor": "hive",
                "vendorName": "hive",
                "host": "hadoop",
                "port": "10000",
                "jdbc": "hive2",
                "defaultDatabase": "default"
            },
            "id": 1,
            "host": "hadoop",
            "port": 10000,
            "vendor": "hive",
            "params": {
                "jdbc": "hive2",
                "connectStringOptions": "",
                "defaultDatabase": "default"
            },
            "ssl": false,
            "vendorName": "hive",
            "name": "hive",
            "description": null,
            "type": "jdbc",
            "isGlobal": true,
            "credentialType": "conf",
            "credentialsShared": true,
            "uuid": "08a1a180-2f6a-11e9-b2b2-85d2b0b67f5e",
            "disableTypeInference": false,
            "createdAt": "2019-02-13T08:33:26.936Z",
            "updatedAt": "2019-02-13T08:33:26.952Z",
            "credentials": [],
            "creator": {
                "id": 1
            },
            "updater": {
                "id": 1
            },
            "workspace": {
                "id": 1
            }
        }
    ],
    "count": 2
}
 
Documentation: See API Connections Get v4.

In the above, you identify that the connection used for the exported flow is the Redshift one. This object has the following unique identifier:

"uuid": "097c2300-2f6a-11e9-a585-57562e0d9cd6"
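If you are scripting this lookup, a minimal sketch like the following can pull the uuid out of the GET /v4/connections response. The dictionary shape mirrors the example response above; the lookup-by-name helper is illustrative and assumes connection names are meaningful in your environment:

```python
# Abbreviated copy of the example response above; only the fields
# needed for the lookup are shown.
connections_response = {
    "data": [
        {"id": 2, "name": "redshift",
         "uuid": "097c2300-2f6a-11e9-a585-57562e0d9cd6"},
        {"id": 1, "name": "hive",
         "uuid": "08a1a180-2f6a-11e9-b2b2-85d2b0b67f5e"},
    ],
    "count": 2,
}

def find_connection_uuid(response, name):
    """Return the uuid of the first connection with the given name."""
    for conn in response["data"]:
        if conn["name"] == name:
            return conn["uuid"]
    raise KeyError(f"no connection named {name!r}")

print(find_connection_uuid(connections_response, "redshift"))
# 097c2300-2f6a-11e9-a585-57562e0d9cd6
```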

In the target system, you must now create a rule in the deployment into which you are importing that searches for this unique value. In the following example:

  • The deploymentId is known to be 4.
  • The connectionId for the equivalent Redshift connection in the target system is 1.

The uuid field in the import package is searched for the matching string. If it is found, the connection in the import package is replaced with the connection in the target system with an Id of 1:

API Endpoint: /v4/deployments/4/objectImportRules
Method: PATCH
Request Body:
[
  {
    "tableName": "connections",
    "onCondition": {
      "uuid": "097c2300-2f6a-11e9-a585-57562e0d9cd6"
    },
    "withCondition": {
      "id": 1
    }
  }
]
Status Code (Success): 200 - OK
Response Body:

When the new rules are applied, all previously existing rules for the object in the deployment are deleted. The response body contains any rules that have been deleted as part of this request.

In the following example, there were no rules, so nothing was deleted:

{
    "deleted": []
}
Documentation: See API Deployments Object Import Rules Patch v4.

To test your rule, perform a dry run of the import. See below.

Example - Remap an HDFS location

In this example, your import rule must remap the path to the source from your Dev paths to your Prod paths. Suppose the pattern looks like this:

Dev Path: hdfs://datasets/dev/1/164e0bca-8c91-4e3c-9d0a-2a85eedec817/myData.csv
Prod Path: hdfs://datasets/prod/1/164e0bca-8c91-4e3c-9d0a-2a85eedec817/myData-Prod.csv

Note the differences:

  • The /dev/ part of the path has been replaced by /prod/.
  • The filename is different.

You can use the following value import rules to change the path values. In the following example, both rules are submitted in a single request and applied in order.

NOTE: You can specify multiple rules in a single request. Rules are applied in the order in which they are listed. Later rules must account for the changes applied by earlier rules.

Request:

API Endpoint: /v4/deployments/4/valueImportRules
Method: PATCH
Request Body:

[
  {"type":"fileLocation","on":"/\/dev\//","with":"/prod/"},
  {"type":"fileLocation","on":"/\/([a-zA-Z0-9_]*).csv/","with":"$1-Prod.csv"}
]

 

Response:

Status Code (Success): 200 - OK
Response Body:

When the new rules are applied, all previously existing rules for the object in the deployment are deleted. The response body contains any rules that have been deleted as part of this request.

In the following example, there were no rules, so nothing was deleted:

{
    "deleted": []
}
Documentation: See API Deployments Value Import Rules Patch v4.

 

To test your rule, perform a dry run of the import. See below.
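You can also sanity-check the effect of the two rules locally before submitting them. The sketch below uses Python's re module as a stand-in for the platform's regex engine (an assumption; engines can differ on edge cases), with the $1 backreference written in Python's \1 form and the leading slash consumed by the match re-emitted in the replacement:

```python
import re

dev_path = "hdfs://datasets/dev/1/164e0bca-8c91-4e3c-9d0a-2a85eedec817/myData.csv"

# Rule 1: swap the /dev/ path segment for /prod/.
path = re.sub(r"/dev/", "/prod/", dev_path)

# Rule 2: append -Prod to the filename before the extension.
path = re.sub(r"/([a-zA-Z0-9_]*)\.csv", r"/\1-Prod.csv", path)

print(path)
# hdfs://datasets/prod/1/164e0bca-8c91-4e3c-9d0a-2a85eedec817/myData-Prod.csv
```

Note that the filename pattern matches only the final path segment here, because the dataset directory name contains hyphens, which fall outside the [a-zA-Z0-9_] character class.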

Example - Remap an S3 location

For S3 sources, you can apply remapping rules, including changing to a new S3 bucket. 

In this example, your import rule must remap the path to the source from your Dev paths to your Prod paths. Suppose the pattern looks like this:

Dev S3 Bucket Name: wrangle-dev
Dev Path: /projs/tweets/v04/tweets_month.csv
Prod S3 Bucket Name: wrangle-prod
Prod Path: /tweets/tweets_month.csv

You can use the following value import rules to change the bucket name and path values.

NOTE: You can specify multiple rules in a single request. Rules are applied in the order in which they are listed. Later rules must account for the changes applied by earlier rules.

s3Bucket rule: This rule replaces the name of the S3 bucket with the new one: wrangle-prod.

fileLocation rule: This rule uses regular expressions to match each segment of the paths in the source bucket. It assumes the following:

  • Files are located at a consistent depth in the source bucket.
  • Path segments and the filename use only alphanumeric characters and underscores (_).
  • The replacement path is shortened to contain only the second path segment ($2) and the filename ($4).

Request:

API Endpoint: /v4/deployments/4/valueImportRules
Method: PATCH
Request Body:

[
  {"type":"s3Bucket","on":"wrangle-dev","with":"wrangle-prod"},
  {"type":"fileLocation","on":"/\/([a-zA-Z0-9_]*)\/([a-zA-Z0-9_]*)\/([a-zA-Z0-9_]*)\/([a-zA-Z0-9_]*).csv/","with":"/$2/$4.csv"}
]

 

Response:

Status Code (Success): 200 - OK
Response Body:

When the new rules are applied, all previously existing rules for the object in the deployment are deleted. The response body contains any rules that have been deleted as part of this request.

In the following example, there were no rules, so nothing was deleted:

{
    "deleted": []
}
Documentation: See API Deployments Value Import Rules Patch v4.

To test your rule, perform a dry run of the import. See below.
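As with the HDFS example, you can simulate these rules locally. The sketch below again uses Python's re module as a stand-in for the platform's regex engine (an assumption), with the $n backreferences written in Python's \n form:

```python
import re

dev_bucket = "wrangle-dev"
dev_path = "/projs/tweets/v04/tweets_month.csv"

# s3Bucket rule: a plain substitution of the bucket name.
prod_bucket = dev_bucket.replace("wrangle-dev", "wrangle-prod")

# fileLocation rule: match all four path segments, then keep only the
# second segment ($2 -> \2) and the filename ($4 -> \4).
prod_path = re.sub(
    r"/([a-zA-Z0-9_]*)/([a-zA-Z0-9_]*)/([a-zA-Z0-9_]*)/([a-zA-Z0-9_]*)\.csv",
    r"/\2/\4.csv",
    dev_path,
)

print(prod_bucket, prod_path)
# wrangle-prod /tweets/tweets_month.csv
```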

Import Dry-Run

After you have specified a set of import rules, you can perform a dry run of importing an import package. The dry run does not perform the actual import, but it reports any permissions errors or other issues in the response. 

In this example, the flow2import.zip file contains the package to import into deployment 4.

Request:

API Endpoint: /v4/deployments/4/releases/dryRun
Method: POST
Request Body:

In form data submitted with the request, you must include the following key-value pair:

Key: data
Value: "@flow2import.zip"

 

Response:

Status Code (Success): 200 - OK
Response Body:

The response body contains any import remapping rules that have been applied during the import process.

Documentation: See API Releases Create DryRun v4.

After the dry run has been executed, the import package can be imported, and its references are automatically remapped to the appropriate connection. See API Releases Create v4.
