For many recipes, the first step is to split data from a single column into multiple columns. This section describes the various methods that can be used for splitting a single column into one or more columns, based on character- or pattern-matching or position within the column's values.
When data is initially imported, the data in each row may be split on a single delimiter. In the following example, the tab character serves as a single, clear delimiter:
<IMSI^MSIDN^IMEI> DATETIME/TIMEZONE OFFSET/DURATION MSC:BSC^BTS CALL_TYPE/CORRESP_IDN/DISCONNECT REASON
<310170097665881^13011330554^011808005351311> 2014-12-12T00:06:13/-5/1.55 MSC001:BSC002^BTS783 MOT/00000000000:11
<310170097665881^13011330554^011808005351311> 2014-12-12T02:27:26/-5/0.00 MSC001:BSC002^BTS783 SMS/00000000000:
<310-170-097665881^13011330554^011808005351311> 2014-12-12T03:24:20/-5/0 MSC001:BSC001^BTS783 SMS/00000000000:
However, when this data is imported, it may be rendered in the data grid in the following structure:
When the data is first imported, all of it is contained in a single column named column1. The application automatically splits the columns on the tab character for you and removes the original column1.
Tip: This auto-split does not appear in your recipe by default. For most formats, a set of initial parsing steps is automatically applied to the dataset. To review and modify these steps as part of your recipe, you must deselect the Detect Structure option during import. See Initial Parsing Steps.
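As an illustration, the automatic tab split can be sketched in Python. This is not the application's actual implementation, just a sketch of the behavior; the sample row is taken from the data above.

```python
# One raw row of the sample data, with tab characters between fields.
raw_row = (
    "<310170097665881^13011330554^011808005351311>\t"
    "2014-12-12T00:06:13/-5/1.55\t"
    "MSC001:BSC002^BTS783\t"
    "MOT/00000000000:11"
)

# The auto-split breaks each row into columns on the tab character.
columns = raw_row.split("\t")
# columns[1] -> "2014-12-12T00:06:13/-5/1.55"
```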
Note that the final column of the first data row ends with the value 11, while the other two data rows do not have this value.
For column2, you can split the column into separate columns based on the caret delimiter:
NOTE: The Number of columns to create value reflects the total number of new columns to generate.
Below is how the data in column2 is transformed:
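This caret split can be sketched in Python. The sketch assumes that the surrounding angle brackets have already been trimmed from the value; the field names are taken from the header row above.

```python
# column2 holds three values delimited by carets: IMSI^MSIDN^IMEI.
column2 = "310170097665881^13011330554^011808005351311"

# Splitting on the caret yields the three new columns
# (Number of columns to create: 3).
imsi, msidn, imei = column2.split("^")
```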
For column3, suppose that you want to keep the DATETIME and TIMEZONE OFFSET values in the same column, preserving the forward slash to demarcate these two values. The DURATION values are to be split into a separate column:
In this case, the expression is the following:
After splitting column3, the data resembles the following:
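The split just described can be sketched in Python: the key idea is to split on the last forward slash only, so that DATETIME and TIMEZONE OFFSET stay together while DURATION moves to its own column.

```python
# column3 holds DATETIME/TIMEZONE OFFSET/DURATION.
column3 = "2014-12-12T00:06:13/-5/1.55"

# Split on the LAST slash only, preserving the first slash
# as the demarcation between DATETIME and TIMEZONE OFFSET.
datetime_tz, duration = column3.rsplit("/", 1)
# datetime_tz -> "2014-12-12T00:06:13/-5"
# duration    -> "1.55"
```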
Suppose you want to break down the components of this date-time data into separate columns for year, month, day, hour, minute, second, and offset. The following could be used to do so:
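A hypothetical Python equivalent of this breakdown is shown below. The regular expression is an assumption derived from the sample values, not the application's own expression.

```python
import re

# Break "2014-12-12T00:06:13/-5" into its seven components.
value = "2014-12-12T00:06:13/-5"
pattern = r"(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})/(-?\d+)"

year, month, day, hour, minute, second, offset = re.match(pattern, value).groups()
```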
Suppose that for column4, you want to split the column such that the middle section is removed. You could use the previous transformation and then delete the middle column. Alternatively, you can use the following transformation, which identifies the starting and ending delimiters that demarcate the separator between fields, effectively removing the middle column:
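A sketch of this approach in Python: the colon that starts the middle (BSC) section and the caret that ends it are matched as a single separator, so only the outer fields survive. The sample value and field names come from the data above.

```python
import re

# column4 holds MSC:BSC^BTS, e.g. "MSC001:BSC002^BTS783".
column4 = "MSC001:BSC002^BTS783"

# Treat everything from the starting ":" through the ending "^"
# as the separator, which drops the middle (BSC) value entirely.
msc, bts = re.split(r":[^^]*\^", column4)
# msc -> "MSC001", bts -> "BTS783"
```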
You can also perform column splits based on numerical positions in column values. These splitting options are useful for highly regular data that is of consistent length.
Suppose you have the following coordinate information in three dimensions (x, y, and z). Note that the data is very regular, with leading zeroes for values that are less than 1000.
The above data could be split based on positions within a column's value:
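A position-based split can be sketched in Python as fixed-offset slicing. The 12-character, zero-padded layout (four digits each for x, y, and z) is an assumption about the sample data, which is not reproduced here.

```python
# A fixed-width value: four digits each for x, y, and z (assumed layout).
value = "004700150236"

# Split at character positions 4 and 8.
x, y, z = value[0:4], value[4:8], value[8:12]
# x -> "0047", y -> "0015", z -> "0236"
```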
Suppose that you wish to split the above source data such that the middle column is removed:
The above transformation could be simplified even further, since the splits happen at regular intervals:
The results would be the same as the first example.
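Both variants can be sketched in Python. The first drops the middle (y) field by keeping only the outer slices; the second exploits the regular interval to split every four characters. As above, the fixed-width layout is an assumption.

```python
value = "004700150236"

# Variant 1: remove the middle column by keeping only the outer slices.
x, z = value[0:4], value[8:12]

# Variant 2: split at a regular 4-character interval.
fields = [value[i:i + 4] for i in range(0, len(value), 4)]
# fields -> ["0047", "0015", "0236"]
```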
If you are attempting to split columns based on non-ASCII characters that appear in the dataset, your transformations may fail.
In these cases, you should change the encoding that is applied to the dataset.
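As a rough illustration of why encoding matters, the Python sketch below decodes raw bytes with the encoding they were written in before splitting on a non-ASCII delimiter. The delimiter and encoding here are hypothetical.

```python
# Raw bytes as they might arrive from a file written in Latin-1,
# using the non-ASCII section sign as a delimiter (hypothetical example).
raw = "colA§colB§colC".encode("latin-1")

# Decode with the encoding the data was actually written in;
# decoding with the wrong encoding would garble the delimiter
# and cause the split to fail.
text = raw.decode("latin-1")
columns = text.split("§")
```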
When a dataset is imported, the application attempts to split the data into individual rows based on any available end-of-line delimiters. This transformation is performed automatically and is not included in your initial set of steps.
If the data is not consistently formatted, the rows may not be properly split. If so, you can disable the automatic splitting of rows.
The steps used to detect structure are listed as the first steps of your recipe, which allows you to modify them as needed. For more information, see Initial Parsing Steps.
See Import Data Page.