Elasticsearch queries should retry on failure
What does this MR do?
Reconfigures our Elasticsearch integration so that failing requests are retried, possibly against a different node, before giving up.
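As a sketch of the kind of change involved (host names are hypothetical), the elasticsearch-transport gem exposes a `retry_on_failure` option that rotates failed requests through the configured hosts before raising:

```ruby
require 'elasticsearch'

# Hypothetical configuration: with retry_on_failure set, a request that
# fails against one node is retried against the next node in the pool,
# up to the given number of attempts, before an exception is raised.
client = Elasticsearch::Client.new(
  hosts: ['es1.example.com:9200', 'es2.example.com:9200', 'es3.example.com:9200'],
  retry_on_failure: 3 # retry up to 3 times across the host pool
)
```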
Are there points in the code the reviewer needs to double check?
Should we set reload_connections as well, per http://www.rubydoc.info/gems/elasticsearch-transport#Reloading_Hosts ? It shouldn't be needed for unicorn, since those processes are short-lived, but maybe it's worth it for sidekiq processes?
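For reference, the option in question would look roughly like this (a sketch only, with hypothetical hosts; whether to enable it for long-lived Sidekiq processes is the open question):

```ruby
require 'elasticsearch'

# Hypothetical sketch: reload_connections asks the transport to re-fetch
# the cluster's node list, so long-running processes pick up nodes that
# join or leave the cluster.
client = Elasticsearch::Client.new(
  hosts: ['es1.example.com:9200'],
  reload_connections: 1_000, # refresh the host list every 1,000 requests
  reload_on_failure: true    # also refresh immediately after a failure
)
```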
Why was this MR needed?
GitLab should keep working when some, but not all, Elasticsearch nodes are available.
Screenshots (if relevant)
Does this MR meet the acceptance criteria?
- Changelog entry added, if necessary
- Documentation created/updated
- API support added
- Tests
  - Added for this feature/bug
  - All builds are passing
- Conform by the merge request performance guides
- Conform by the style guides
- Branch has no merge conflicts with master (if it does - rebase it please)
- Squashed related commits together
What are the relevant issue numbers?
Closes #2681
Edited by Nick Thomas