Best practices for migrating golden IDs when setting up a new xDM environment
Anna
started a topic
5 months ago
I'm working with a customer who is migrating xDM from Oracle to PostgreSQL. They have many years of historical data, including manual merges performed by data stewards as well as golden IDs (UUIDs) that downstream systems consume.
The customer wants to keep the historical manual merges from the data stewards and the automated merges from xDM's match engine.
What are the best practices and tips we should be aware of (or a step-by-step procedure would be excellent) for a successful migration?
I have in mind:
The golden ID is available in the MI table along with the PubID and SourceID (see the sketch after this list).
A pre-consolidation enricher to capture the golden ID from the source record, if it exists.
A post-consolidation enricher to capture the golden ID from a historical record, if available.
Survivorship rules to ensure the historical golden ID flows to the golden record, with tie-breaking preferences (such as favoring certain publishers). Skip Nulls is important here.
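To make the first point concrete, here is a rough sketch of the mapping I would extract from the legacy hub. The table and column names (MI_CUSTOMER, B_PUBID, B_SOURCEID, CUSTOMER_ID) follow the usual xDM physical naming pattern but are assumptions to be verified against the actual model:

```sql
-- Sketch only: extract the legacy golden-ID mapping from the Oracle hub.
-- MI_CUSTOMER, B_PUBID, B_SOURCEID and CUSTOMER_ID are illustrative names.
SELECT
    mi.B_PUBID     AS pub_id,     -- publisher of the master record
    mi.B_SOURCEID  AS source_id,  -- primary key in the source system
    mi.CUSTOMER_ID AS golden_id   -- golden record UUID consumed downstream
FROM MI_CUSTOMER mi;
```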
What else am I missing? Which articles would you recommend that document best practices on this topic?
Best Answer
Alexia SIMOND
said
5 months ago
Hi Anna,
Even though we already exchanged on the ticket, I wanted to provide some elements here for anyone who might be in the same situation.
Multiple solutions are available to address this requirement, each with its own merits and drawbacks, so depending on the context one may be a better fit than another.
But to begin with, since the context requires using the result of a post-consolidation enricher (present in GI, step 5) to calculate a field in MI (step 2), it is important to note that the certification process cannot go back to an earlier step (the certification path diagram in the documentation is useful to visualize this).
So, to meet this requirement, the data needs to be reprocessed, which leaves us with the following options:
Option 1: Use an integration tool (such as xDI) with the dedicated components to migrate the database (reusing the existing data location).
Option 2: If populating this attribute before the migration is not a requirement, populate the technical field directly in the target instance during the migration, treating it as any other attribute and loading it with the primary key from the source instance (see the first sketch after this list).
Option 3: SQL hook: execute a hook to capture the information once the data is loaded (see the attached PDF and the second sketch after this list).
Option 4: Enricher with a custom PL/SQL function: implement an enricher that retrieves the ID from the GI table via a PL/SQL function, and reprocess the associated data (see this article and the third sketch after this list).
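To make option 2 concrete, here is a minimal sketch of the initial load, assuming a technical attribute LEGACY_GOLDEN_ID was added to the entity and the legacy MI table is reachable from the migration tool (all object names are illustrative, not product-defined):

```sql
-- Sketch only: populate the technical field during the initial load of the
-- target instance, treating the legacy golden ID as any other attribute.
-- SD_CUSTOMER, LEGACY_GOLDEN_ID and LEGACY_MI_CUSTOMER are assumed names.
INSERT INTO SD_CUSTOMER (B_LOADID, B_CLASSNAME, B_PUBID, B_SOURCEID,
                         LEGACY_GOLDEN_ID /*, ...other attributes... */)
SELECT :load_id,            -- load ID obtained from the platform
       'Customer',          -- entity name (illustrative)
       mi.B_PUBID,
       mi.B_SOURCEID,
       mi.CUSTOMER_ID       -- golden UUID from the legacy hub
FROM LEGACY_MI_CUSTOMER mi;
```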
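For option 3, the hook's statement could look like the following back-fill, assuming the legacy mapping was migrated into a helper table LEGACY_GOLDEN_MAP(PUB_ID, SOURCE_ID, GOLDEN_ID); the exact table the hook targets and the batch restriction depend on where the hook is placed in the integration job:

```sql
-- Sketch only: back-fill the technical field once the data is loaded.
-- LEGACY_GOLDEN_MAP and LEGACY_GOLDEN_ID are assumed, illustrative names.
UPDATE mi_customer mi
SET legacy_golden_id = map.golden_id
FROM legacy_golden_map map
WHERE map.pub_id    = mi.b_pubid
  AND map.source_id = mi.b_sourceid;  -- in practice, also restrict to the current batch
```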
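For option 4, since the target is PostgreSQL, the PL/SQL function from the article translates to a plain SQL (or PL/pgSQL) function that the enricher can call. A minimal sketch, again with illustrative names:

```sql
-- Sketch only: lookup function for an enricher. Assumes the legacy golden-ID
-- mapping was migrated into LEGACY_GOLDEN_MAP(PUB_ID, SOURCE_ID, GOLDEN_ID).
CREATE OR REPLACE FUNCTION get_legacy_golden_id(
    p_pub_id    VARCHAR,
    p_source_id VARCHAR
) RETURNS VARCHAR AS $$
    SELECT golden_id
    FROM legacy_golden_map
    WHERE pub_id = p_pub_id
      AND source_id = p_source_id;
$$ LANGUAGE sql STABLE;
```

The enricher expression would then call this function with the record's PubID and SourceID; returning NULL when no mapping exists pairs well with the Skip Nulls behavior Anna mentioned.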