Switch Datacenter
See also the "2016 Failover", "2017 Failover", and "2021 Switchover" blog posts on the Wikimedia Blog.
Datacenter switchovers are a standard response to certain types of disasters. Technology companies regularly practice them to make sure that everything will work properly when the process is needed. Switching between datacenters also makes it easier to do some maintenance work on non-active servers (e.g. database upgrades/changes): while we're serving traffic from datacenter A, we can do all that work at datacenter B.
A Wikimedia datacenter switchover (from eqiad to codfw, or vice-versa) comprises switching over multiple different components, some of which can happen independently and many of which need to happen in lockstep. This page documents all the steps needed to switch over from a master datacenter to another one, broken up by component. SRE Service Operations maintains the process and software necessary to run the switchover.
Weeks in advance preparation
Overall switchover flow
In a controlled switchover we first deactivate services in the primary datacenter, then deactivate caching (user-facing traffic) there. The next step is to switch MediaWiki itself. About a week later we reactivate caching in that datacenter, as a week of running without it is considered sufficient to test the scenario.
Typical scheduling looks like:
  • The following week: reactivate caching in the old primary datacenter.
  • 6+ weeks later: switch MediaWiki back.
Per-service switchover instructions
We divide the process into logical phases that should be executed sequentially. Within any phase, top-level tasks can be executed in parallel with each other, while subtasks must be executed sequentially. The phase number is referred to in the names of the tasks in the operations/cookbooks repository, under the cookbooks/sre/switchdc/mediawiki/ path.
Days in advance preparation
  1. OPTIONAL: SKIP IN AN EMERGENCY: Make sure the databases are in a good state. Normally this requires no action, as the passive datacenter's databases are always prepared to receive traffic, so there are no actionables. Sanity checks that DBAs should normally perform to ensure the most optimal state possible:
    • There is no ongoing long-running maintenance that affects database availability or lag (schema changes, upgrades, hardware issues, etc.). Depool those servers that are not ready.
    • Replication is flowing from eqiad -> codfw and from codfw -> eqiad (replication is usually stopped in the passive -> active direction to facilitate maintenance, so it may need to be re-enabled)
    • All database servers have their buffer pools filled up. This is taken care of automatically by the buffer pool warmup functionality. As a sanity check, some sample load can be sent to the MediaWiki application servers to verify that requests complete as quickly as in the active datacenter.
    • These were the things we prepared/checked for the 2018 switch
  2. Make absolutely sure that parsercache replication is working from the active to the passive DC, and verify that the parsercache servers are set to read-write in the passive DC. This is important.
  3. Check appserver weights on servers in the passive DC; make sure that newer hardware is weighted higher (usually 30) and older hardware lower (usually 25). A spot-check sketch for these database and appserver checks follows this list.
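A minimal spot-check sketch for the checks above. The confctl selector and the idea of running these by hand are assumptions to adapt; the cookbooks and DBA tooling remain the authoritative path.
# On a passive-DC core master: confirm replication is running and not lagged.
$ sudo mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
# From a cluster-management host: review pooled state and weights of the passive-DC appservers.
$ sudo confctl select 'dc=codfw,cluster=appserver' get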
Phase 0 - preparation
  1. Disable puppet on maintenance hosts in both eqiad and codfw: 00-disable-puppet.py
  2. Reduce the TTL on appservers-ro, appservers-rw, api-ro, api-rw, jobrunner, videoscaler, parsoid-php to 10 seconds: 00-reduce-ttl.py. Make sure that at least 5 minutes (the old TTL) have passed before moving to Phase 1; the cookbook should force you to wait. (A verification sketch follows this list.)
  3. Warm up APC by running the mediawiki-cache-warmup against the new site's clusters. The warmup queries will repeat automatically until the response times stabilize: 00-warmup-caches.py
    • The global "urls-cluster" warmup against the appservers cluster
    • The "urls-server" warmup against all hosts in the appservers cluster.
    • The "urls-server" warmup against all hosts in the api-appservers cluster.
  4. Set downtime for the read-only checks on the MariaDB masters that change in Phase 3, so they don't page. This is not covered by the switchdc script.
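A minimal verification sketch for this phase, using appservers-rw as the example record (repeat for the other records from step 2); the conftool discovery object type is assumed to be queryable this way:
# The TTL column of the answer should now read 10:
$ dig +noall +answer appservers-rw.discovery.wmnet
# Review the current pooled state of the discovery record:
$ sudo confctl --object-type discovery select 'dnsdisc=appservers-rw' get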
Phase 1 - stop maintenance
Stop maintenance jobs in both datacenters and kill all the periodic jobs (systemd timers) on maintenance hosts in both datacenters: 01-stop-maintenance.py
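To spot-check that the periodic jobs are really gone, list the remaining MediaWiki timers on the maintenance hosts; the mediawiki_job_* timer naming and the host names are assumptions to adapt:
# On mwmaint1002.eqiad.wmnet and mwmaint2002.codfw.wmnet:
$ systemctl list-timers 'mediawiki_job_*'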
Phase 2 - read-only mode
Go to read-only mode by changing the ReadOnly conftool value: 02-set-readonly.py
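The ReadOnly flag is a conftool value; assuming it lives under the mwconfig object type, the freshly set value can be confirmed with:
$ sudo confctl --object-type mwconfig select 'name=ReadOnly' get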
Phase 3 - lock down database masters
Put old-site core DB masters (shards: s1-s8, x1, es4-es5) in read-only mode and wait for the new site's databases to catch up replication: 03-set-db-readonly.py
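For context, a hand-rolled version of this step for a single shard would look roughly like the sketch below; the cookbook does this for every listed shard, against each shard's master in the two DCs:
# On the old site's master: stop accepting writes.
$ sudo mysql -e "SET GLOBAL read_only = 1"
# On the new site's master: wait until replication has fully caught up (Seconds_Behind_Master = 0).
$ sudo mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master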
Phase 4 - switch active datacenter configuration
Switch the discovery records and MediaWiki active datacenter: 04-switch-mediawiki.py
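Roughly, the cookbook repoints the MediaWiki discovery records from Phase 0 and the active-datacenter setting. An eqiad -> codfw illustration, assuming conftool's discovery and mwconfig object types:
$ sudo confctl --object-type discovery select 'dnsdisc=appservers-rw,name=codfw' set/pooled=true
$ sudo confctl --object-type discovery select 'dnsdisc=appservers-rw,name=eqiad' set/pooled=false
# ...repeated for the other records; the master-datacenter value can then be inspected with:
$ sudo confctl --object-type mwconfig select 'name=WMFMasterDatacenter' get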
Phase 5 - Invert Redis replication for MediaWiki sessions
Invert the Redis replication for the sessions cluster: 05-invert-redis-sessions.py
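Conceptually, the inversion amounts to the following generic Redis commands; the hosts are placeholders and the cookbook handles the real host list, ports and authentication:
# On each sessions Redis instance in the new primary DC: stop replicating and accept writes.
$ redis-cli -h <new-dc-redis-host> REPLICAOF NO ONE
# On each instance in the old primary DC: replicate from its counterpart in the new DC.
$ redis-cli -h <old-dc-redis-host> REPLICAOF <new-dc-redis-host> 6379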
Phase 6 - Set new site's databases to read-write
Set new-site's core DB masters (shards: s1-s8, x1, es4-es5) in read-write mode: 06-set-db-readwrite.py
Phase 7 - Set MediaWiki to read-write
Go to read-write mode by changing the ReadOnly conftool value: 07-set-readwrite.py
Phase 8 - Restore rest of MediaWiki
  1. Restart Envoy on the jobrunners that are now inactive, to trigger changeprop to re-resolve the DNS name and connect to the new DC: 08-restart-envoy-on-jobrunners.py
    A steady rate of 500s is expected until this step is completed, because changeprop will still be sending edits to jobrunners in the old DC, where the database master will reject them.
  2. Start maintenance in the new DC: 08-start-maintenance.py
    • Run Puppet on the maintenance hosts in both datacenters; this reactivates the systemd timers in the new primary DC (a by-hand sketch follows this list)
    • Most Wikidata-editing bots will restart once this is done and the "dispatch lag" has recovered. This should bring us back to 100% of editing traffic.
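A by-hand equivalent of the two steps above, as a sketch only; the cumin aliases and the envoyproxy unit name are assumptions, not verified selectors:
# Restart Envoy on the now-inactive jobrunners (old DC) so changeprop re-resolves the DNS name:
$ sudo cumin 'A:mw-jobrunner and A:eqiad' 'systemctl restart envoyproxy.service'
# Run Puppet on the maintenance hosts in both DCs to re-enable timers in the new primary DC:
$ sudo cumin 'A:mw-maintenance' 'run-puppet-agent'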
Phase 9 - Post read-only
  1. Update tendril for new database masters: 09-update-tendril.py
    • Pure cosmetic change, no effect on production. No changes required for database zarcillo (which has a different master for eqiad and codfw).
    • The parsercache hosts and x2 will need to be updated manually in tendril; see T266723. This is not covered by the switchdc script.
  2. Set the TTL for the DNS records back to 300 seconds: 09-restore-ttl.py (a verification sketch follows this list).
  3. Update DNS records for the new database masters, deploying both eqiad -> codfw and codfw -> eqiad. This is not covered by the switchdc script. SAL-log it using: !log Phase 8.5 Update DNS records for new database masters
  4. Run Puppet on the database masters in both DCs, to update the expected read-only state: 09-run-puppet-on-db-masters.py
  5. Make sure the CentralNotice banner informing users of read-only mode is removed. Keep in mind there is some minor HTTP caching involved (~5 minutes).
  6. Remove the downtime added in phase 0.
  7. Update disc_desired_state.py to reflect which services are pooled in which DCs. See T286231 as an example. This is not covered by the switchdc script.
  8. Re-order noc.wm.o's debug.json to have the primary servers listed first; see T289745. This is not covered by the switchdc script.
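To verify step 2, query one of the discovery records again; the TTL column of the answer should be back at 300 (appservers-rw is used as the example):
$ dig +noall +answer appservers-rw.discovery.wmnet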
Phase 10 - verification and troubleshooting
This is not covered by the switchdc script
  1. Make sure reading & editing works! :)
  2. Make sure recent changes are flowing (see Special:RecentChanges, EventStreams, and the IRC feeds)
  3. Make sure email works (exim4 -bp on mx1001/mx2001, test an email). Quick commands for these checks are sketched below.
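Quick spot-checks for items 2 and 3 above; stream.wikimedia.org is the public EventStreams endpoint:
# Recent changes should keep flowing through EventStreams:
$ curl -s https://stream.wikimedia.org/v2/stream/recentchange | head -n 5
# On mx1001 / mx2001: the mail queue should not be piling up.
$ sudo exim4 -bp | head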
General context on how to switch over
CirrusSearch talks by default to the local datacenter ($wmfDatacenter). If MediaWiki switches datacenter, Elasticsearch will automatically follow.
Manually switching CirrusSearch to a specific datacenter can always be done. Point CirrusSearch to codfw by editing wmgCirrusSearchDefaultCluster in InitialiseSettings.php.
To ensure coherence in case of lost updates, a reindex of the pages modified during the switch can be done by following Recovering from an Elasticsearch outage / interruption in updates.
Preserving more_like query cache performance
CirrusSearch has a caching layer that caches the result of Elasticsearch queries such as "more like this" queries (which are used, among other things, to generate "Related Articles" at the bottom of mobile Wikipedia pages).
Switching datacenters will result in degraded performance while the cache fills back up.
In order to avoid the aforementioned performance degradation, a mitigation should be deployed that will hardcode more_like queries to keep routing to the "old" datacenter for 24 hours following the switchover.
Hardcoding the cirrus cluster will allow the stampede of cache misses to be sent to the secondary search cluster which has enough capacity, once typical traffic has migrated to the new datacenter, to serve the increased load.
This hardcoding should be deployed in advance of the switchover. Since it is effectively a no-op until the actual cutover, it can be deployed as far in advance as desired.
For example, if we are switching over from eqiad to codfw, more_like queries should be hardcoded to route to eqiad; this change should be deployed before the actual cutover.
Then, 24 hours after the cutover, the hardcoding can be removed, allowing more_like queries to route to the new cirrus dc - in this example, codfw.
Days in advance preparation
Deploy a patch to hardcode more_like query routing to the currently active DC (i.e. the datacenter we are switching over from).
Example Patch: [mediawiki-config] 635411 cirrus: Temporarily hardcode more_like query routing
This mitigation should be left in place for 24 hours following the switchover (equivalent to the cache length), at which point there is no longer any performance penalty to removing the hardcoding.
One day after datacenter switch
Revert the earlier patch to hardcode more_like query routing; this will allow these queries to route to the newly active DC, and there will not be any performance degradation since the caches have been fully populated by this point.
ElasticSearch Percentiles
It is relatively straightforward for us to depool an entire datacenter at the traffic level, and this is regularly done during maintenance or outages. For that reason, we tend to keep the datacenter depooled for only about a week, which (in theory) allows us to test a full traffic cycle.
General information on generic procedures
See Global traffic routing.
GeoDNS (User-facing) Routing:
  1. gerrit: C+2 and Submit commit https://gerrit.wikimedia.org/r/#/c/operations/dns/+/458806
  2. <any authdns node>: authdns-update
  3. SAL Log the action: !log Traffic: depool eqiad from user traffic
(Running authdns-update from any authdns node will update all nameservers; a rough resolution check is sketched below.)
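After the depool, a rough sanity check is to compare what user-facing names resolve to. This only proves the point from a vantage point that would normally map to eqiad, so treat it as a sketch rather than a definitive test:
# en.wikipedia.org should no longer resolve to the eqiad edge for clients that normally map there:
$ dig +short en.wikipedia.org
# Compare against the per-site LVS names:
$ dig +short text-lb.eqiad.wikimedia.org
$ dig +short text-lb.codfw.wikimedia.org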
Repooling: same procedure as above, with reversion of the specified GeoDNS commit.
All services are active-active in DNS discovery, apart from restbase, which needs special treatment. The procedure to fail over to a single site is the same for every one of them:
  1. reduce the TTL of the DNS discovery records to 10 seconds
  2. depool the datacenter we're moving away from in confctl / discovery
  3. restore the original TTL
All of the above is done using the sre.switchdc.services cookbooks:
# Switch the service "parsoid" to codfw-only
$ cookbook sre.switchdc.services --services parsoid -- eqiad codfw
# Switch all active-active services to codfw, excluding parsoid and cxserver
$ cookbook sre.switchdc.services --exclude parsoid cxserver -- eqiad codfw
Restbase is a bit of a special case and needs an additional step if we're just switching active traffic over and not simulating a complete failover:
Pool restbase-async everywhere, then depool restbase-async in the newly active DC, so that async traffic is kept separate from real-user traffic as much as possible (sketched below).
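A sketch of that restbase-async step with conftool, assuming the discovery object type and an eqiad -> codfw switchover (codfw becomes the newly active DC):
$ sudo confctl --object-type discovery select 'dnsdisc=restbase-async' set/pooled=true
$ sudo confctl --object-type discovery select 'dnsdisc=restbase-async,name=codfw' set/pooled=false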
Manual steps
Update WDQS lag reporting to point to the new primary DC, see gerrit:701927 as an example and T285710 for more details. This is not covered by the switchdc script.
Other miscellaneous
Upcoming and past switches
2021 switches
Tracked in Phabricator: T281515
Switching back:
Tracked in Phabricator: T287539
Datacenter switchover recap on wikitech-l (2m42s of read-only time)
2020 switches
Tracked in Phabricator: T243314
  • Services: Monday, August 31st, 2020 14:00 UTC
  • Traffic: Monday, August 31st, 2020 15:00 UTC
  • MediaWiki: Tuesday, September 1st, 2020 14:00 UTC
Incident documentation/2020-09-01 data-center-switchover (2m49s of read-only time)
Switching back:
  • Traffic: Thursday, September 17th, 2020 17:00 UTC
  • MediaWiki: Tuesday, October 27th, 2020 14:00 UTC
  • Services: Wednesday, October 28th, 2020 14:00 UTC
2018 switches
Tracked in Phabricator: T199073
  • Services: Tuesday, September 11th 2018 14:30 UTC
  • Media storage/Swift: Tuesday, September 11th 2018 15:00 UTC
  • Traffic: Tuesday, September 11th 2018 19:00 UTC
  • MediaWiki: Wednesday, September 12th 2018: 14:00 UTC
Datacenter Switchover recap (7m34s of read-only time)
Switching back:
  • Traffic: Wednesday, October 10th 2018 09:00 UTC
  • MediaWiki: Wednesday, October 10th 2018: 14:00 UTC
  • Services: Thursday, October 11th 2018 14:30 UTC
  • Media storage/Swift: Thursday, October 11th 2018 15:00 UTC
Datacenter Switchback recap (4m41s of read-only time)
2017 switches
Tracked in Phabricator: T138810
  • Elasticsearch: automatically follows the MediaWiki switch
  • Services: Tuesday, April 18th 2017 14:30 UTC
  • Media storage/Swift: Tuesday, April 18th 2017 15:00 UTC
  • Traffic: Tuesday, April 18th 2017 19:00 UTC
  • MediaWiki: Wednesday, April 19th 2017 14:00 UTC (user visible, requires read-only mode)
  • Deployment server: Wednesday, April 19th 2017 16:00 UTC
Editing pause for failover test on Wikimedia Blog
Switching back:
  • Traffic: Pre-switchback in two phases: Mon May 1 and Tue May 2 (to avoid cold-cache issues Weds)
  • MediaWiki: Wednesday, May 3rd 2017 14:00 UTC (user visible, requires read-only mode)
  • Elasticsearch: automatically follows the MediaWiki switch
  • Services: Thursday, May 4th 2017 14:30 UTC
  • Swift: Thursday, May 4th 2017 15:30 UTC
  • Deployment server: Thursday, May 4th 2017 16:00 UTC
2016 switches
  • Deployment server: Wednesday, January 20th 2016
  • Traffic: Thursday, March 10th 2016
  • MediaWiki 5-minute read-only test: Tuesday, March 15th 2016, 07:00 UTC
  • Elasticsearch: Thursday, April 7th 2016, 12:00 UTC
  • Media storage/Swift: Thursday, April 14th 2016, 17:00 UTC
  • Services: Monday, April 18th 2016, 10:00 UTC
  • MediaWiki: Tuesday, April 19th 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
Wikimedia failover test on Wikimedia Blog
Switching back:
  • MediaWiki: Thursday, April 21st 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
  • Services, Elasticsearch, Traffic, Swift, Deployment server: Thursday, April 21st 2016, after the above is done
Monitoring Dashboards
Aggregated list of interesting dashboards