
stub for enwiki broken, attempt to load content for bad rev during sha1 retrieval
Closed, Resolved · Public · 0 Estimated Story Points

Description

Stubs should never load content. Apparently the revision here is bad, which causes the content load to throw a fatal exception.

/usr/bin/php7.2 /srv/mediawiki/multiversion/MWScript.php dumpBackup.php --wiki=enwiki --full --stub --report=1 --output=file:/mnt/dumpsdata/temp/dumpsgen/stubs-history.xml  --start=1240149 --end=1240150


MediaWiki\Revision\RevisionAccessException from line 1466 of /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStore.php: Failed to load data blob from tt:10595714: Failed to load blob from address tt:10595714
#0 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStore.php(1673): MediaWiki\Revision\RevisionStore->loadSlotContent(Object(MediaWiki\Revision\SlotRecord), NULL, NULL, NULL, 0)
#1 [internal function]: MediaWiki\Revision\RevisionStore->MediaWiki\Revision\{closure}(Object(MediaWiki\Revision\SlotRecord))
#2 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/SlotRecord.php(307): call_user_func(Object(Closure), Object(MediaWiki\Revision\SlotRecord))
#3 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/SlotRecord.php(551): MediaWiki\Revision\SlotRecord->getContent()
#4 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionSlots.php(200): MediaWiki\Revision\SlotRecord->getSha1()
#5 [internal function]: MediaWiki\Revision\RevisionSlots->MediaWiki\Revision\{closure}(NULL, Object(MediaWiki\Revision\SlotRecord))
#6 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionSlots.php(202): array_reduce(Array, Object(Closure), NULL)
#7 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStoreRecord.php(174): MediaWiki\Revision\RevisionSlots->computeSha1()
#8 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/XmlDumpWriter.php(357): MediaWiki\Revision\RevisionStoreRecord->getSha1()
#9 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(530): XmlDumpWriter->writeRevision(Object(stdClass), Array)
#10 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(473): WikiExporter->outputPageStreamBatch(Object(Wikimedia\Rdbms\ResultWrapper), Object(stdClass))
#11 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(287): WikiExporter->dumpPages('page_id >= 1240...', false)
#12 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(172): WikiExporter->dumpFrom('page_id >= 1240...', false)
#13 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/includes/BackupDumper.php(289): WikiExporter->pagesByRange(1240149, 1240150, false)
#14 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/dumpBackup.php(82): BackupDumper->dump(1, 1)
#15 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/doMaintenance.php(99): DumpBackup->execute()
#16 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/dumpBackup.php(144): require_once('/srv/mediawiki/...')
#17 /srv/mediawiki/multiversion/MWScript.php(101): require_once('/srv/mediawiki/...')
#18 {main}
MediaWiki\Storage\BlobAccessException from line 292 of /srv/mediawiki/php-1.34.0-wmf.14/includes/Storage/SqlBlobStore.php: Failed to load blob from address tt:10595714
#0 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStore.php(1464): MediaWiki\Storage\SqlBlobStore->getBlob('tt:10595714', 0)
#1 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStore.php(1673): MediaWiki\Revision\RevisionStore->loadSlotContent(Object(MediaWiki\Revision\SlotRecord), NULL, NULL, NULL, 0)
#2 [internal function]: MediaWiki\Revision\RevisionStore->MediaWiki\Revision\{closure}(Object(MediaWiki\Revision\SlotRecord))
#3 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/SlotRecord.php(307): call_user_func(Object(Closure), Object(MediaWiki\Revision\SlotRecord))
#4 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/SlotRecord.php(551): MediaWiki\Revision\SlotRecord->getContent()
#5 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionSlots.php(200): MediaWiki\Revision\SlotRecord->getSha1()
#6 [internal function]: MediaWiki\Revision\RevisionSlots->MediaWiki\Revision\{closure}(NULL, Object(MediaWiki\Revision\SlotRecord))
#7 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionSlots.php(202): array_reduce(Array, Object(Closure), NULL)
#8 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStoreRecord.php(174): MediaWiki\Revision\RevisionSlots->computeSha1()
#9 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/XmlDumpWriter.php(357): MediaWiki\Revision\RevisionStoreRecord->getSha1()
#10 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(530): XmlDumpWriter->writeRevision(Object(stdClass), Array)
#11 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(473): WikiExporter->outputPageStreamBatch(Object(Wikimedia\Rdbms\ResultWrapper), Object(stdClass))
#12 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(287): WikiExporter->dumpPages('page_id >= 1240...', false)
#13 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(172): WikiExporter->dumpFrom('page_id >= 1240...', false)
#14 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/includes/BackupDumper.php(289): WikiExporter->pagesByRange(1240149, 1240150, false)
#15 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/dumpBackup.php(82): BackupDumper->dump(1, 1)
#16 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/doMaintenance.php(99): DumpBackup->execute()
#17 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/dumpBackup.php(144): require_once('/srv/mediawiki/...')
#18 /srv/mediawiki/multiversion/MWScript.php(101): require_once('/srv/mediawiki/...')
#19 {main}

Event Timeline

ArielGlenn created this task.

This looks like it's caused by https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/464768/ at line 355 of the new XmlDumpWriter.php, which calls $rev->getSha1(). If the stored sha1 is NULL, this method will try to load the revision's content and compute the sha1 directly. We need a way to override that behavior, probably within the method itself, perhaps via a load_content argument that defaults to true. All those class attributes are private, so we can't just grab the field value directly the way the old XmlDumpWriter code used to.

@daniel since you know these internals better, what fix do you recommend? Note that if the sha1 value is NULL, we should write an empty tag rather than a tag with an empty value.
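For context, the call chain in the trace bottoms out in a pattern roughly like the following (a paraphrased sketch of SlotRecord::getSha1(), not the exact MediaWiki source; field and helper names are approximate):

public function getSha1() {
	// If no sha1 was stored with the slot row, fall back to loading the
	// full content blob and hashing it. With a broken blob address such
	// as tt:10595714, getContent() throws RevisionAccessException, so
	// even a metadata-only stubs dump dies here.
	if ( $this->sha1 === null || $this->sha1 === '' ) {
		$this->sha1 = self::base36Sha1( $this->getContent()->serialize() );
	}
	return $this->sha1;
}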

Change 524966 had a related patch set uploaded (by ArielGlenn; owner: ArielGlenn):
[mediawiki/core@master] don't load content for stubs dump when getting size/sha1 of revisions or slots

https://gerrit.wikimedia.org/r/524966

@daniel Relying on Cunningham's Law (https://meta.wikimedia.org/wiki/Cunningham%27s_Law), here is a patch. It fiddles with getSize() as well, because that can also load content when it should not. I think getSize() and getSha1() are the only such methods, but if you know differently, please add them to the mix.

According to the database schema, size and sha1 in the slots table *cannot* be null. But I suppose they can still be empty strings. That's data corruption, though - this should never happen. I suppose here it does *because* the content couldn't be loaded to calculate the hash.

Your patch adds the ability to ignore this situation and presumably include an empty hash in the dump. That's contrary to the contract of the storage layer.

Instead, we should make the dump scripts more robust against such failures, so they don't just die when they can't read a revision. And we should definitely fix the database.

What we could also do is tell the SlotRecord that it shouldn't try to auto-calculate, at the time it is being constructed. Calculating the sha1 on the fly is only needed when reading from an old pre-MCR database, and when constructing a revision programmatically. When reading from the MCR schema, this really doesn't make much sense.

When the revision content is nonexistent or unreachable, what will the sha1 be? We have plenty of these in the db already from old bugs. Reading revision metadata should not load the content if we don't want it to. Note that if we caught all exceptions in XmlDumpWriter we would have missed yesterday's issue. I agree that we should catch them, but first I want to make sure we're not loading content where we aren't asking for it.

Fixing the existing data would be nice; someone needs to manage that project though, and no one has stepped up yet.

I updated my post while you responded. Sorry about that!

> When the revision content is nonexistent or unreachable, what will the sha1 be? We have plenty of these in the db already from old bugs. Reading revision metadata should not load the content if we don't want it to. Note that if we caught all exceptions in XmlDumpWriter we would have missed yesterday's issue.

Only if we catch and ignore. We should catch and report.

> I agree that we should catch them, but first I want to make sure we're not loading content where we aren't asking for it.

We can do that, but that would still have to trigger an exception. The SlotRecord would notice that it doesn't have a sha1, and can't calculate it, so it can't return it, and has to throw.

> Fixing the existing data would be nice; someone needs to manage that project though, and no one has stepped up yet.

Yea... I'm not sure how we would fix broken revisions. We could only remove them...

> I updated my post while you responded. Sorry about that!

>> When the revision content is nonexistent or unreachable, what will the sha1 be? We have plenty of these in the db already from old bugs. Reading revision metadata should not load the content if we don't want it to. Note that if we caught all exceptions in XmlDumpWriter we would have missed yesterday's issue.

> Only if we catch and ignore. We should catch and report.

In an ideal world we would watch logstash output closely and see things like this right away. Unfortunately, we're living in this crappy one...

>> I agree that we should catch them, but first I want to make sure we're not loading content where we aren't asking for it.

> We can do that, but that would still have to trigger an exception. The SlotRecord would notice that it doesn't have a sha1, and can't calculate it, so it can't return it, and has to throw.

Is there somewhere besides getSha1 that the SlotRecord class checks whether that attribute has a value and would raise an exception? Same question for getSize() too.

...

> What we could also do is tell the SlotRecord that it shouldn't try to auto-calculate, at the time it is being constructed. Calculating the sha1 on the fly is only needed when reading from an old pre-MCR database, and when constructing a revision programmatically. When reading from the MCR schema, this really doesn't make much sense.

This is good (same for getSize()). How hard would that be to do?

Change 525064 had a related patch set uploaded (by Daniel Kinzler; owner: Daniel Kinzler):
[mediawiki/core@master] Make XmlDumpwriter resilient to blob store corruption.

https://gerrit.wikimedia.org/r/525064

Change 524966 abandoned by Daniel Kinzler:
don't load content for stubs dump when getting size/sha1 of revisions or slots

Reason:
let's do Iaadad44eb5b5fe5a4f2e60da406ffc11f39c735b instead

https://gerrit.wikimedia.org/r/524966

The above patch should fix the issue. Remaining questions:

Should SlotRecord::getSha1 treat missing content as empty? It could return sha1("") for content_sha1 = "" AND content_size = 0. getContent() would still throw if the content is inaccessible, but we would avoid errors when listing hashes as part of the meta-data, e.g. in an API call. This indirectly also affects Revision::getSha1 and RevisionRecord::getSha1.
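For reference, MediaWiki stores content hashes base-36 encoded, so "treat missing content as empty" would amount to returning the well-known hash of the empty string. A minimal sketch, assuming the wikimedia/base-convert helper that core's base36Sha1() already uses:

// Base-36 encoded SHA-1, zero-padded to 31 characters, as stored in
// content_sha1 / rev_sha1.
$emptySha1 = \Wikimedia\base_convert( sha1( '' ), 16, 36, 31 );
echo $emptySha1; // phoiac9h4m842xq45sp7s6u21eteeks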

The background is that SlotRecord::getSha1() will try to calculate the hash on the fly if it's not known. But in production, it's only unknown when the migration script failed to load the content to calculate it. So we should perhaps avoid trying to load it again.

Another approach would be to disable on-the-fly calculation of the hash (and the size) when loading slots from the new MCR schema. On-the-fly calculation is still needed when constructing a new revision for saving, though. The on-the-fly calculation could be moved to a subclass, or gated by a flag set by the constructor of SlotRecord.
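The flag approach might look roughly like this (a hypothetical, abbreviated sketch, not the merged implementation; the flag name $derivesSha1OnDemand is invented for illustration, getContent() and base36Sha1() are as in core, and core would throw RevisionAccessException rather than a plain RuntimeException):

class SlotRecord {
	private $sha1;
	private $derivesSha1OnDemand;

	public function __construct( /* ... */ $derivesSha1OnDemand = true ) {
		// Code loading slots from the MCR schema would pass false here,
		// since a missing sha1 there means the stored row is corrupt.
		$this->derivesSha1OnDemand = $derivesSha1OnDemand;
	}

	public function getSha1() {
		if ( ( $this->sha1 === null || $this->sha1 === '' )
			&& $this->derivesSha1OnDemand
		) {
			// Pre-MCR rows and freshly constructed revisions land here.
			$this->sha1 = self::base36Sha1( $this->getContent()->serialize() );
		}
		if ( $this->sha1 === null || $this->sha1 === '' ) {
			throw new RuntimeException( 'sha1 unknown and not derivable' );
		}
		return $this->sha1;
	}
}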

We might want both of the above. Seen through the dumps lens:

For retrieval of full content, in the case where content is genuinely missing, we want to write what we know about the sha1 and the size, which is not much; empty string and 0 seem fine to me as future dumps behavior. I'd have to check what the current behavior is, though.

For retrieval of metadata only, again for dumps, we don't want to load the content, so being able to skip these on-the-fly calculations, which necessitate attempts to load the content, is a good idea too.
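As an illustration of the desired stub output (hypothetical; the exact element set and attributes depend on the export schema version), a revision whose content is unreadable might be written with empty size/sha1 information like this:

<revision>
  <id>…</id>
  <timestamp>…</timestamp>
  …
  <text bytes="0" />
  <sha1 />
</revision>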

mark raised the priority of this task from High to Unbreak Now!. Jul 23 2019, 1:40 PM
mark subscribed.

Because this means that stub dump generation for (at least) enwiki, dewiki, and several other wikis is currently broken, we have only a few days to fix this before the dumps need to be done at the end of the month. Setting UBN...

With the above patch, the problematic command in the task description ran to completion, and the output was (almost) identical to an earlier run without the bug. Besides the issue raised in T228763, which is completely separate, the empty sha1 tag is now written '<sha1 />' with a space before the closing slash, unlike the old code's output; we can ignore that difference.

I ran a tiny test run in deployment-prep on enwikinews too. I checked that output from yesterday's test run vs today's test run was identical for stubs, content and abstracts.

Hopefully this will get us through the month's second run ok.

Change 525064 merged by jenkins-bot:
[mediawiki/core@master] Make XmlDumpwriter resilient to blob store corruption.

https://gerrit.wikimedia.org/r/525064

Change 525132 had a related patch set uploaded (by ArielGlenn; owner: Daniel Kinzler):
[mediawiki/core@wmf/1.34.0-wmf.15] Make XmlDumpwriter resilient to blob store corruption.

https://gerrit.wikimedia.org/r/525132

Change 525133 had a related patch set uploaded (by ArielGlenn; owner: Daniel Kinzler):
[mediawiki/core@wmf/1.34.0-wmf.14] Make XmlDumpwriter resilient to blob store corruption.

https://gerrit.wikimedia.org/r/525133

Change 525133 merged by jenkins-bot:
[mediawiki/core@wmf/1.34.0-wmf.14] Make XmlDumpwriter resilient to blob store corruption.

https://gerrit.wikimedia.org/r/525133

Change 525132 merged by jenkins-bot:
[mediawiki/core@wmf/1.34.0-wmf.15] Make XmlDumpwriter resilient to blob store corruption.

https://gerrit.wikimedia.org/r/525132

Mentioned in SAL (#wikimedia-operations) [2019-07-23T18:05:59Z] <jforrester@deploy1001> Synchronized php-1.34.0-wmf.14/includes/export/XmlDumpWriter.php: T228720 Make XmlDumpwriter resilient to blob store corruption (duration: 00m 57s)

Mentioned in SAL (#wikimedia-operations) [2019-07-23T18:08:07Z] <jforrester@deploy1001> Synchronized php-1.34.0-wmf.15/includes/export/XmlDumpWriter.php: T228720 Make XmlDumpwriter resilient to blob store corruption (duration: 00m 54s)

So very close. There are some instances of InvalidArgumentException thrown when going down the getSha1() -> getContent() rabbit hole, for text addresses like DB://cluster16/54423 where cluster16 leads nowhere. Seen today for testwiki, slwiki, and frwiki. Sample stack trace:

InvalidArgumentException from line 226 of /srv/mediawiki/php-1.34.0-wmf.14/includes/libs/rdbms/lbfactory/LBFactoryMulti.php: Wikimedia\Rdbms\LBFactoryMulti::newExternalLB: Unknown cluster "cluster16"
#0 /srv/mediawiki/php-1.34.0-wmf.14/includes/libs/rdbms/lbfactory/LBFactoryMulti.php(246): Wikimedia\Rdbms\LBFactoryMulti->newExternalLB('cluster16')
#1 /srv/mediawiki/php-1.34.0-wmf.14/includes/externalstore/ExternalStoreDB.php(146): Wikimedia\Rdbms\LBFactoryMulti->getExternalLB('cluster16')
#2 /srv/mediawiki/php-1.34.0-wmf.14/includes/externalstore/ExternalStoreDB.php(156): ExternalStoreDB->getLoadBalancer('cluster16')
#3 /srv/mediawiki/php-1.34.0-wmf.14/includes/externalstore/ExternalStoreDB.php(259): ExternalStoreDB->getSlave('cluster16')
#4 /srv/mediawiki/php-1.34.0-wmf.14/includes/externalstore/ExternalStoreDB.php(65): ExternalStoreDB->fetchBlob('cluster16', '54423', false)
#5 /srv/mediawiki/php-1.34.0-wmf.14/includes/externalstore/ExternalStoreAccess.php(52): ExternalStoreDB->fetchFromURL('DB://cluster16/...')
#6 /srv/mediawiki/php-1.34.0-wmf.14/includes/Storage/SqlBlobStore.php(427): ExternalStoreAccess->fetchFromURL('DB://cluster16/...', Array)
#7 /srv/mediawiki/php-1.34.0-wmf.14/includes/libs/objectcache/WANObjectCache.php(1412): MediaWiki\Storage\SqlBlobStore->MediaWiki\Storage\{closure}(false, 604800, Array, NULL)
#8 /srv/mediawiki/php-1.34.0-wmf.14/includes/libs/objectcache/WANObjectCache.php(1258): WANObjectCache->fetchOrRegenerate('global:BlobStor...', 604800, Object(Closure), Array)
#9 /srv/mediawiki/php-1.34.0-wmf.14/includes/Storage/SqlBlobStore.php(431): WANObjectCache->getWithSetCallback('global:BlobStor...', 604800, Object(Closure), Array)
#10 /srv/mediawiki/php-1.34.0-wmf.14/includes/Storage/SqlBlobStore.php(358): MediaWiki\Storage\SqlBlobStore->expandBlob('DB://cluster16/...', Array, 'tt:1236282')
#11 /srv/mediawiki/php-1.34.0-wmf.14/includes/Storage/SqlBlobStore.php(286): MediaWiki\Storage\SqlBlobStore->fetchBlob('tt:1236282', 0)
#12 /srv/mediawiki/php-1.34.0-wmf.14/includes/libs/objectcache/WANObjectCache.php(1412): MediaWiki\Storage\SqlBlobStore->MediaWiki\Storage\{closure}(false, 604800, Array, NULL)
#13 /srv/mediawiki/php-1.34.0-wmf.14/includes/libs/objectcache/WANObjectCache.php(1258): WANObjectCache->fetchOrRegenerate('global:BlobStor...', 604800, Object(Closure), Array)
#14 /srv/mediawiki/php-1.34.0-wmf.14/includes/Storage/SqlBlobStore.php(288): WANObjectCache->getWithSetCallback('global:BlobStor...', 604800, Object(Closure), Array)
#15 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStore.php(1464): MediaWiki\Storage\SqlBlobStore->getBlob('tt:1236282', 0)
#16 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStore.php(1673): MediaWiki\Revision\RevisionStore->loadSlotContent(Object(MediaWiki\Revision\SlotRecord), NULL, NULL, NULL, 0)
#17 [internal function]: MediaWiki\Revision\RevisionStore->MediaWiki\Revision\{closure}(Object(MediaWiki\Revision\SlotRecord))
#18 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/SlotRecord.php(307): call_user_func(Object(Closure), Object(MediaWiki\Revision\SlotRecord))
#19 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/SlotRecord.php(551): MediaWiki\Revision\SlotRecord->getContent()
#20 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionSlots.php(200): MediaWiki\Revision\SlotRecord->getSha1()
#21 [internal function]: MediaWiki\Revision\RevisionSlots->MediaWiki\Revision\{closure}(NULL, Object(MediaWiki\Revision\SlotRecord))
#22 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionSlots.php(202): array_reduce(Array, Object(Closure), NULL)
#23 /srv/mediawiki/php-1.34.0-wmf.14/includes/Revision/RevisionStoreRecord.php(174): MediaWiki\Revision\RevisionSlots->computeSha1()
#24 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/XmlDumpWriter.php(309): MediaWiki\Revision\RevisionStoreRecord->getSha1()
#25 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/XmlDumpWriter.php(389): XmlDumpWriter->invokeLenient(Object(MediaWiki\Revision\RevisionStoreRecord), 'getSha1', Array, 'failed to deter...')
#26 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(530): XmlDumpWriter->writeRevision(Object(stdClass), Array)
#27 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(473): WikiExporter->outputPageStreamBatch(Object(Wikimedia\Rdbms\ResultWrapper), Object(stdClass))
#28 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(287): WikiExporter->dumpPages('page_id >= 1200...', false)
#29 /srv/mediawiki/php-1.34.0-wmf.14/includes/export/WikiExporter.php(172): WikiExporter->dumpFrom('page_id >= 1200...', false)
#30 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/includes/BackupDumper.php(289): WikiExporter->pagesByRange(12002, 12003, false)
#31 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/dumpBackup.php(82): BackupDumper->dump(1, 1)
#32 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/doMaintenance.php(99): DumpBackup->execute()
#33 /srv/mediawiki/php-1.34.0-wmf.14/maintenance/dumpBackup.php(144): require_once('/srv/mediawiki/...')
#34 /srv/mediawiki/multiversion/MWScript.php(101): require_once('/srv/mediawiki/...')
#35 {main}

So InvalidArgumentException ought to be added to the exceptions that invokeLenient() in XmlDumpWriter.php already catches (MWException and RuntimeException), so that we log it and move on.
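A hedged paraphrase of the idea (not the exact patch; the real method's signature and logging calls may differ):

/**
 * Invoke $method on $obj, logging and swallowing storage-level failures
 * so a single broken revision does not kill the whole dump.
 */
private function invokeLenient( $obj, $method, $args = [], $warning = '' ) {
	try {
		return call_user_func_array( [ $obj, $method ], $args );
	} catch ( MWException | RuntimeException | InvalidArgumentException $ex ) {
		// InvalidArgumentException newly added for unknown external store
		// clusters like "cluster16" (see the trace above).
		MWDebug::warning( $warning . ': ' . $ex->getMessage() );
		return null;
	}
}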

Change 525205 had a related patch set uploaded (by ArielGlenn; owner: ArielGlenn):
[mediawiki/core@master] make XmlDumpwriter more resilient to blob store corruption

https://gerrit.wikimedia.org/r/525205

The above has been tested with the problematic frwiki and slwiki pages, and processes the bad revisions appropriately.

Change 525205 merged by jenkins-bot:
[mediawiki/core@master] make XmlDumpwriter more resilient to blob store corruption

https://gerrit.wikimedia.org/r/525205

Change 525269 had a related patch set uploaded (by ArielGlenn; owner: ArielGlenn):
[mediawiki/core@wmf/1.34.0-wmf.14] make XmlDumpwriter more resilient to blob store corruption

https://gerrit.wikimedia.org/r/525269

Change 525270 had a related patch set uploaded (by ArielGlenn; owner: ArielGlenn):
[mediawiki/core@wmf/1.34.0-wmf.15] make XmlDumpwriter more resilient to blob store corruption

https://gerrit.wikimedia.org/r/525270

Change 525269 merged by jenkins-bot:
[mediawiki/core@wmf/1.34.0-wmf.14] make XmlDumpwriter more resilient to blob store corruption

https://gerrit.wikimedia.org/r/525269

Change 525270 merged by jenkins-bot:
[mediawiki/core@wmf/1.34.0-wmf.15] make XmlDumpwriter more resilient to blob store corruption

https://gerrit.wikimedia.org/r/525270

Mentioned in SAL (#wikimedia-operations) [2019-07-24T16:06:43Z] <jforrester@deploy1001> Synchronized php-1.34.0-wmf.14/includes/export/XmlDumpWriter.php: T228720 make XmlDumpwriter more resilient to blob store corruption (duration: 00m 55s)

Mentioned in SAL (#wikimedia-operations) [2019-07-24T16:07:46Z] <jforrester@deploy1001> Synchronized php-1.34.0-wmf.15/includes/export/XmlDumpWriter.php: T228720 make XmlDumpwriter more resilient to blob store corruption (duration: 00m 55s)

Yes for enwiki and dewiki, but now there is T228921 just to keep the fun coming.

It's just, this is an UBN task, so… :-)

One of the wikis named on the ticket, frwiki, is still running stubs; I'll close this when it completes.

ArielGlenn claimed this task.

Stubs for frwiki finished a little bit ago. Closing!