Spec proposal for duplicate-proof incremental replication
As part of the Singer spec, and also as a best practice to avoid records being skipped, taps are implemented with a greater-than-or-equal-to comparison against the replication key value.
The "or equal to" part of the comparison is confusing to new users, but the reason for it is important: it ensures that record "ties" are never omitted from the final target.
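As a minimal illustration (hypothetical records, with updated_at as the replication key), compare strict and inclusive comparisons when resuming from a saved bookmark:

```python
# Why ">=" matters when resuming from a bookmark.
records = [
    {"id": 1, "updated_at": "2023-01-01"},
    {"id": 2, "updated_at": "2023-01-02"},  # tie with id=3
    {"id": 3, "updated_at": "2023-01-02"},  # tie with id=2
]

bookmark = "2023-01-02"  # STATE was saved mid-tie, after emitting only id=2

# Strict ">" would skip BOTH tied records, losing id=3 forever:
strictly_greater = [r for r in records if r["updated_at"] > bookmark]
assert strictly_greater == []

# ">=" re-selects the ties, so id=3 is not lost. id=2 is re-sent too,
# which gives "at least once" delivery and hence possible duplicates:
greater_or_equal = [r for r in records if r["updated_at"] >= bookmark]
assert [r["id"] for r in greater_or_equal] == [2, 3]
```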
Existing mitigations
The current solutions to this problem are to (1) use primary-key upserts on the target side, which naturally resolves the duplication, or (2) use a tool like dbt to remove duplicates downstream.
Proposed improvement
- Continue, as today, to use "greater than or equal to" logic.
- Add to the STATE dictionary a new record_hashes_seen property (or similar): a list of hashes of the records whose value equals the max replication key value.
- As we parse each record during a stream, if its value equals the max replication key value (which will be true for every record if the stream is sorted), store a hash of the record into STATE alongside the max replication key value.
- On the next sync, if the hash of a record exactly matches a hash in record_hashes_seen, we can omit that record and not send it downstream.
- Any other ties by replication key value will be emitted downstream to the target, assuming their hash has not yet been seen/sent.
- When writing out STATE messages, only the latest "ties" need to be included in record_hashes_seen. This is not a cumulative list of all records, only the latest ties by replication key.
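The steps above can be sketched roughly as follows. This is illustrative only: record_hash, sync_sorted_stream, and the STATE keys shown (replication_key_value, record_hashes_seen) are assumed names for this sketch, not the SDK's actual API.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Stable hash of the full record; sort_keys makes it key-order-independent.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True, default=str).encode()
    ).hexdigest()

def sync_sorted_stream(records, state, replication_key):
    """Sketch of dedupe for a sorted incremental stream queried with ">=" logic."""
    max_value = state.get("replication_key_value")
    seen = set(state.get("record_hashes_seen", []))
    for record in records:
        value = record[replication_key]
        h = record_hash(record)
        if value == max_value and h in seen:
            continue  # tie already emitted on a previous run: skip the duplicate
        if value != max_value:
            # A new (greater) key value: reset the tie set to only the latest ties.
            max_value = value
            seen = set()
        seen.add(h)
        yield record  # i.e., write the RECORD message downstream
    state["replication_key_value"] = max_value
    state["record_hashes_seen"] = sorted(seen)
```

For example, if a prior run bookmarked "2023-01-02" after emitting only one of two tied records, a re-run over the ">=" result set would re-skip the already-seen tie, emit the unseen tie, and then roll the state forward to the new max value with a fresh (non-cumulative) tie set.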
Since the SDK entirely handles the STATE implementation for SDK-based taps, we have an opportunity to build this as a more robust solution across all taps built on the SDK.
Best reasons not to build
The reasons not to build this are (1) performance, (2) complexity, and (3) scalability.
The complexity concern is mitigated by the fact that these aspects are managed entirely within the SDK, so developers and users can generally ignore these internal state treatments altogether.
Regarding performance, hashing a record should be very fast, and beyond that it is a matter of tuning the caching and comparison logic. This should be tunable to satisfactory performance, but if not, we could also mitigate by making the behavior an optional setting, such as dedupe_incremental_streams (bool) or similar.
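As a rough back-of-envelope check (a sketch, not a benchmark of the SDK itself), hashing a serialized record with the standard library is typically on the order of microseconds per record:

```python
import hashlib
import json
import timeit

# A hypothetical record of moderate size.
record = {"id": 123, "updated_at": "2023-01-02T03:04:05", "payload": "x" * 500}

def record_hash(record: dict) -> str:
    # Same stable-hash approach proposed above: serialize, then SHA-256.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True, default=str).encode()
    ).hexdigest()

# Average time per hash over 10,000 calls; exact numbers vary by machine.
per_call = timeit.timeit(lambda: record_hash(record), number=10_000) / 10_000
print(f"~{per_call * 1e6:.1f} microseconds per record hash")
```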
Regarding scalability, this solution should scale fine assuming a small number of "ties" by replication key value. If the number of ties is in the thousands or larger, however, this could have adverse effects on the stability of STATE messages. An option to disable the behavior via settings (as described in the previous paragraph) could prove useful for this as well. That said, presumably the larger the number of ties (within reason), the higher the value of this deduplication capability.
Regarding adherence to the Singer Spec
To my knowledge, this implementation would still adhere to the spec, since (1) STATE behavior is entirely up to the tap to control, (2) we still apply >= logic for replication keys, and (3) we still guarantee that every record will be sent at least once.