Returns the row number of the current row as it appeared in the original source dataset before any steps had been applied. The following transforms might make original row information invalid or otherwise unavailable:
NOTE: This function does not apply to relational database inputs, such as Hive or JDBC sources.
NOTE: If the dataset is sourced from multiple files, a predictable original source row number cannot be guaranteed. This function may return unexpected results.
Tip: If the source row information is still available, you can hover over the left side of a row in the data grid to see the source row number in the original source data.
Output: Generates a new OriginalRowNums column containing the row numbers for each row as it appeared in the original data.
Output: Rows in the dataset are re-sorted according to the original order in the dataset.

Delete Example:
Output: Deletes the rows in the dataset that were after row #101 in the original source data.
Syntax and Arguments
There are no arguments for this function.
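To illustrate the semantics, the behavior of the function can be sketched in Python. This is a hypothetical model, not the product's implementation: the source row number is captured once at import time and survives later re-sorting.

```python
# Hypothetical sketch: model how a 1-based source row number could be
# captured at import and still be reported after the data is re-sorted.
rows = ["alpha", "bravo", "charlie", "delta"]

# Tag each row with its position in the original source data.
tagged = [{"value": v, "source_row": i} for i, v in enumerate(rows, start=1)]

# A later sort changes the display order, but each row still carries its
# original position, which is what SOURCEROWNUMBER() reports.
resorted = sorted(tagged, key=lambda r: r["value"], reverse=True)
```

After the sort, the first displayed row is "delta", yet its source row number is still 4.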
Tip: For additional examples, see Common Tasks.
Example - Header from row that is not the first one
You have imported the following racer data on heat times from a CSV file. When loaded in the Transformer page, it looks like the following:
| 1 | Racer | Heat 1 | Heat 2 | Heat 3 |
In the above, the (rowId) column references the row numbers displayed in the data grid; it is not part of the dataset. This information is available when you hover over the black dot on the left side of the screen.
After sorting to examine the best performance in each heat according to the sample, you notice that the data still contains its header row, but you no longer remember the original sort order. The data now looks like the following:
| 2 | Racer | Heat 1 | Heat 2 | Heat 3 |
While you can undo your sort steps to return to the original sort order, this approach works best if you did not include other steps in between that are based on the sort order.
If you have steps that require retaining your sort steps, you can revert to the original sort order by adding a transform step that sorts on the SOURCEROWNUMBER function. Then, you can create the header with a simple header step. If you need to retain the sort order and not revert to the original, you can do the following to the previous example data:
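The revert-to-original-order approach can be sketched in Python. This is a hedged stand-in for the Wrangle sort step, assuming each row carries its source row number; the racer values are invented:

```python
# Sketch: sorting on the original source row number restores the import
# order, which puts the header row (source row 1) back on top.
data = [
    {"Racer": "Racer 3", "source_row": 4},
    {"Racer": "Racer 1", "source_row": 2},
    {"Racer": "Racer", "source_row": 1},   # the original header row
    {"Racer": "Racer 2", "source_row": 3},
]
original_order = sorted(data, key=lambda r: r["source_row"])
```

With the header row back in the first position, a header step can promote it as before.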
After you have applied the last header transform, your data should look like the following:
You can sort by the Racer column in ascending order to return to the original sort order.
Example - Using sourcerownumber to create unique row identifiers
The following example demonstrates how to unpack nested data. As part of this example, the SOURCEROWNUMBER function is used to help create unique row identifiers.
You have the following data on student test scores. Individual test scores are stored in the Scores array, and you need to be able to track each test on a uniquely identifiable row. This example has two goals:
- One row for each student test
- Unique identifier for each student-score combination
When the data is imported from CSV format, you must add a header transform and remove the quotes from the Scores column. To validate the data, you can compare the expected number of values in the Scores array (4) to the actual number. When the transform is previewed, you can see in the sample dataset that all tests are included. You might or might not want to include this column in the final dataset, as you might identify missing tests when the recipe is run at scale.
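The validation idea can be sketched in Python as a stand-in for the Wrangle step. The student names and scores here are invented; the check simply compares each Scores array's length against the expected count of 4:

```python
# Hedged sketch: flag rows whose Scores array does not contain the
# expected number of tests (4), so missing tests can be spotted.
students = [
    {"LastName": "Adams", "Scores": [81, 87, 83, 79]},
    {"LastName": "Wilson", "Scores": [92, 88, 90]},   # one test missing
]
EXPECTED_TESTS = 4
for s in students:
    s["AllTests"] = (len(s["Scores"]) == EXPECTED_TESTS)
```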
Unique row identifier: The Scores array must be broken out into individual rows for each test. However, there is no unique identifier for the row to track individual tests. In theory, you could use the combination of LastName-FirstName-Scores values to do so, but if a student recorded the same score twice, your dataset would have duplicate rows. In the following transform, you create a parallel array called Tests, which contains an index array with one entry for each value in the Scores column. Each original row is then tagged using the SOURCEROWNUMBER function.

One row for each student test: Your data should look like the following:
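The parallel index array and the per-row tag can be sketched in Python. This is a hedged stand-in for the Wrangle transforms, assuming a 1-based test index; the data values are invented:

```python
# Sketch: for each row, build a Tests array of indexes [1..n] parallel
# to the Scores array, plus a row identifier standing in for
# SOURCEROWNUMBER().
rows = [
    {"LastName": "Adams", "Scores": [81, 87, 83, 79]},
    {"LastName": "Wilson", "Scores": [92, 88, 90, 94]},
]
for i, r in enumerate(rows, start=1):
    r["Tests"] = list(range(1, len(r["Scores"]) + 1))
    r["RowId"] = i  # stand-in for SOURCEROWNUMBER()
```

Because Tests runs in parallel with Scores, each score can later be paired with its test number when the arrays are broken out into rows.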
Now, you want to bring together the Tests and Scores arrays into a single nested array. Using the flatten transform, you can then unpack the nested array into one row per value:
unnest: After you drop column1, which is no longer needed, you should rename the two generated columns.

Unique row identifier: You can do one more step to create unique test identifiers, which identify the specific test for each student. The following uses the original row identifier OrderIndex as an identifier for the student and the TestNumber value to create the TestId column value. These are integer values. To make your identifiers look prettier, you might add the following:
Extending: You might want to generate some summary statistical information on this dataset. For example, you might be interested in calculating each student's average test score. This step requires figuring out how to properly group the test values. In this case, you cannot group by the LastName value alone, as there might be collisions between last names when this recipe is run at scale. So, you might need to create a kind of primary key using the following: You can now use this as a grouping parameter for your calculation:
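The composite-key grouping can be sketched in Python as a stand-in for the Wrangle steps. The key format and the sample records are assumptions for illustration:

```python
# Sketch: build a "primary key" from LastName and FirstName, then group
# scores by that key and compute each student's average.
from collections import defaultdict

records = [
    {"FirstName": "Ann", "LastName": "Adams", "Score": 81},
    {"FirstName": "Ann", "LastName": "Adams", "Score": 87},
    {"FirstName": "Bo", "LastName": "Adams", "Score": 90},  # same last name
]
totals = defaultdict(list)
for r in records:
    key = f'{r["LastName"]}-{r["FirstName"]}'  # composite grouping key
    totals[key].append(r["Score"])
averages = {k: sum(v) / len(v) for k, v in totals.items()}
```

Note that grouping on LastName alone would have merged the two Adams students; the composite key keeps them separate.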
After you drop unnecessary columns and move your columns around, the dataset should look like the following:
Example - Delete rows based on source row numbers
Your dataset is the following set of orders.
Initially, you want to review your list of orders by last name.
During your review, you notice that two customer orders are no longer valid and need to be removed. They are:
- LastName: Hall
- LastName: Jones
You might hover over the left side of the screen to reveal the row numbers. You select the row numbers for each of these rows, and a delete suggestion is provided for you. When you click Modify, you see the following transform:
The above checks the results of the SOURCEROWNUMBER function, which returns the original row order for the selected rows. If a selected row matches a value in the [2,7] array of row numbers, then the row is deleted.
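This delete logic can be sketched in Python, with the source row number modeled as a field carried on each row; the names here echo the example but the surviving rows are invented:

```python
# Sketch: remove rows whose original source row number is in [2, 7],
# regardless of the current sort order of the dataset.
rows = [
    {"LastName": "Hall", "source_row": 2},
    {"LastName": "Smith", "source_row": 4},
    {"LastName": "Jones", "source_row": 7},
]
kept = [r for r in rows if r["source_row"] not in (2, 7)]
```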
When the preceding transform is added, your dataset looks like the following, and your sort order is maintained: