Software developer on the Wikidata team at Wikimedia Germany (he/him, Berlin timezone). Private account: @LucasWerkmeister.
User Details
- User Since: Apr 3 2017, 2:45 PM (377 w, 6 d)
- Availability: Available
- IRC Nick: Lucas_WMDE
- LDAP User: Lucas Werkmeister (WMDE)
- MediaWiki User: Lucas Werkmeister (WMDE) [ Global Accounts ]
Fri, Jun 28
Can confirm this is fixed \o/
I can’t reproduce this either, but it sounds like someone else is having the same problem: https://www.wikidata.org/wiki/Wikidata:Report_a_technical_problem#Can't_escape_English
I’ve updated the documentation to hopefully make this clearer.
Somehow I managed to miss that this problem has, for most extensions, actually had a solution since 2016 (Gerrit change by @Legoktm, backported to REL1_27): You simply register the namespace in extension.json, but with "conditional": true. This will unconditionally set all the namespace-related globals ($wgContentNamespaces, $wgNamespaceContentModels, etc.), just as I already concluded should be done (T288819#7831800), except for registering the namespace itself – you do that, and only that, in the CanonicalNamespaces hook handler (based on whatever condition you want). This way, all the other globals will be set early enough that NamespaceInfo sees them, and you even get to use the nice and convenient syntax in extension.json. (This "conditional" flag has been documented since 2017, so I don’t know how I managed to miss it two years ago.)
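To make that concrete, a minimal extension.json sketch of the pattern (using EntitySchema's namespace ID 640 from this task as the example; the exact attribute set shown is illustrative):

```json
{
	"namespaces": [
		{
			"id": 640,
			"constant": "NS_ENTITYSCHEMA",
			"name": "EntitySchema",
			"conditional": true,
			"content": true
		}
	]
}
```

With `"conditional": true`, the namespace itself is only registered if the CanonicalNamespaces hook handler adds it, but the other settings (like `"content": true` feeding $wgContentNamespaces) still take effect unconditionally and early enough for NamespaceInfo to see them.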
Thu, Jun 27
I don’t think anything significant has happened to unstall this… it’s now clearer that Vuex’s successor, Pinia, should be used for new projects, and I think the assumption is that we’ll eventually replace all Vuex usage with Pinia, but upgrading existing projects is not recommended yet. More to the point, I don’t think we’ve done anything relevant in the Lexeme backend (moving more components into some store, whether Vuex or Pinia).
If this is causing major disruption I can mess with the index by hand, but I’d rather not do that unless strictly required – sorry for the inconvenience!
Wed, Jun 26
I’m not really sure how this code could ever have worked:
Hm, though the search links in the task description still don’t yield the expected results :/
Alright, EntitySchema is a content namespace again. @dcausse, I guess we’ll have to reindex some recently touched EntitySchemas?
Assuming I’m not doing the search wrong, it looks like the namespace alias is unused on the wiki:
(Note that an alternative search using the localized namespace name yields plenty of results, so I think the search is at least not totally broken in principle.)
Pfft, and I just realized I duplicated @Pppery’s work there 🤦
So this is fun. I tried to check how Lexeme solves the issue of declaring its dynamically registered namespace as content, and it just doesn’t. We add 120 (Property) and 146 (Lexeme) to $wgContentNamespaces in the production config, which is why they’re content namespaces there; other / third-party wikis apparently get to pound sand. (On my local wiki, the Lexeme namespace is not considered a content namespace.) IMHO we should fix this, but also in the meantime, let’s just add 640 to that production config block to make it content again.
I think the task I remembered was this one (slightly different but still feels similar): T288724: defaultcontentmodel missing from most namespaces in Wikidata namespaces siteinfo (breaks pywikibot)
Well, the patch you found looks like it’s supposed to still register EntitySchema as a content namespace… but I think I vaguely remember a similar issue from before, and it’s that SomeMediaWikiComponent™ has already finished reading $wgContentNamespaces by the time our hook handler runs and adds 640 to it, and so the assignment is a no-op?
Logstash link for non-termstore deadlocks (I think they’re roughly evenly split between addUsages and removeUsages): https://logstash.wikimedia.org/goto/20ade51dc3a72b8b8234467babe021cf
Tue, Jun 25
Add cloudflare to the list of seemingly affected upstreams (build):
Seen in another build:
That’s a huge number of queued changes that are all going to fail, one by one, because the fix needed a second patch set… :blobfoxnotlikethisgoogly:
BTW, I also remember occasionally getting this failure… maybe ForeignResourceManager should retry the download once or twice if it fails? (AFAICT it’s never called during normal requests, so the potential extra runtime shouldn’t be a production concern, I think.)
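The retry idea itself is simple; a hedged sketch in JavaScript for illustration (the real ForeignResourceManager is PHP, and the `operation` callback here is a stand-in for the actual download call):

```javascript
// Retry an async operation a limited number of times before giving up.
// Transient failures (e.g. a flaky upstream CDN) are swallowed and the
// operation is attempted again; the last error is rethrown if all
// attempts fail.
async function withRetries(operation, attempts = 3) {
	let lastError;
	for (let i = 0; i < attempts; i++) {
		try {
			return await operation();
		} catch (e) {
			lastError = e; // remember the failure and try again
		}
	}
	throw lastError;
}
```

So something like `withRetries(() => download(url), 3)` would absorb up to two transient failures before surfacing an error.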
I guess the expected output just needs to be updated after red-link-title was changed on TranslateWiki.net? https://gerrit.wikimedia.org/r/c/mediawiki/core/+/1049387/1/languages/i18n/ar.json
I just filed the jQuery version at T368385 as well; not sure if it makes sense to track separately or should be considered a duplicate, TBH.
If I’m not mistaken, the difference is between غير (expected) and مو (actual) in both lines.
Tagging 3D after all – I doubt the relevant WikibaseQualityConstraints or WikibaseMediaInfo code has changed much recently, and at least the mediainfoview error is a known long-standing issue: T321532 – IMHO this error is more likely to be due to some recent refactorings in 3D (e.g. the ES6 changes).
Mon, Jun 24
I think this task would become obsolete with the completion of T343020: Converting MediaWiki Metrics to StatsLib – statslib doesn’t use this library AFAICT (it directly uses the sockets extension in \Wikimedia\IPUtils\UDPTransport::emit()).
Fri, Jun 21
We will need to inject the entity type as a message parameter (or create separate messages per entity type).
Thu, Jun 20
TBH, I think I’d like to start the statslib migration in Wikibase with a task other than this one… this task carries the additional complication that the tracking happens from Lua (mw.wikibase.lua and mw.wikibase.entity.lua call incrementStatsdKey() with hard-coded, but [as far as PHP is concerned] arbitrary string metric keys), and I think it would be less confusing for us to learn statslib on an easier conversion first. @Arian_Bozorg, is it okay if we put one of the other Wikibase subtasks of T350592 ahead of this one? (E.g. T359251, though that one is probably big enough that it could be broken down into several subtasks. T359248 is probably an even better candidate.)
Wed, Jun 19
Reopening for this CentralAuth failure seen in several of the changes attached to T365676 (e.g. Wikibase and FlaggedRevs):
Nothing left to do for Wikibase here, I think.
With the above Wikibase change applied (I rebased the CI check onto it), AFAICT the only remaining failures in CI are the same MediaWiki core api-testing failures that were also seen in e.g. FlaggedRevs’ CI check. (Are those tracked in another task?)
I think the above changes cover all the npm dependencies (other than vue, vuex, and grunt-eslint, all of which are blocked per the README), and all the direct composer dependencies are up to date as far as I can tell.
Tue, Jun 18
Bleh, Vue reads document as soon as it initializes:
Vue might need a few more globals set (we’re already having to set SVGElement) but hopefully not too many… I think I’ll try out this approach.
I wonder if it wouldn’t be easier to ditch jsdom-global, and instead have the tests create a new JSDom document and assign global.window = document.window, and make all the non-test code access e.g. window.document instead of document. That way we only have one global to worry about, and in the browser it should work the same way (since window is the global there).
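A rough sketch of that single-global setup, with a plain stub standing in for the real JSDOM window (the actual tests would use `new JSDOM()` from the jsdom package; `createGreeting` is a hypothetical piece of non-test code):

```javascript
// Test setup: expose exactly one global, `window`, and make the code
// under test reach everything else through it (window.document, etc.).
// `stubWindow` stands in for `new JSDOM('<!doctype html>').window`.
function setupDom(stubWindow) {
	global.window = stubWindow;
}

function teardownDom() {
	delete global.window;
}

// Non-test code accesses the document via window.document instead of
// relying on a bare `document` global – in the browser this behaves
// identically, since `window` is the global object there.
function createGreeting() {
	const el = window.document.createElement('div');
	el.textContent = 'hello';
	return el;
}
```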
Well, applying this patch to jsdom-global (and still including the above hack to run the cleanup at all) fixes the issue:
Bah – jsdom-global provides a “cleanup” function, but even if you use that, you just get a different error:
It’s probably due to this line in node_modules/stylelint/lib/utils/FileCache.cjs:
Mon, Jun 17
I believe @WMDE-leszek fixed the password, so next Monday we can check again whether this worked.
Looks like it’s not working:
Currently, getting 500 history lines from Q42 makes 2816 DatabaseMysqli::doQuery() calls.
Can confirm this is working on https://gerrit.wikimedia.org/r/c/mediawiki/extensions/EntitySchema/+/1046598/1/src/MediaWiki/Hooks/LoadExtensionSchemaUpdatesHookHandler.php \o/
Fri, Jun 14
Alright, I think we can close this task then?
I guess that would be an option if “no parentheses” is deemed too risky, yeah – we could just stick the regexes with, say, 100+ uses into a config variable and check those directly. (Which becomes even more effective if someone™ converts those regexes that have ^ at the start and $ at the end, which isn’t actually needed, so that the currently 86 uses of ^\d+$ are checked as \d+ instead. I might do that from my volunteer account later.)
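The point about the anchors being redundant is that the checker already requires a full match; a small sketch of that idea (the wrapping shown here is my assumption about how such a check works, not the actual implementation):

```javascript
// Full-match check: wrap the pattern so it must match the entire value.
// Because of the outer ^…$, a leading ^ or trailing $ inside the
// pattern itself is redundant – '^\d+$' and '\d+' behave identically.
function fullMatch(pattern, value) {
	return new RegExp('^(?:' + pattern + ')$').test(value);
}
```

Under this check, `fullMatch('\\d+', '123')` and `fullMatch('^\\d+$', '123')` give the same result, which is why the anchors can be stripped when converting the high-use regexes.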
Amazing, thank you!
Thu, Jun 13
Alright, then I’m going to declare this… DONE.
The slowdown still seems to be happening – earlier this morning, each 100-page batch seemed to take about four minutes.
I just deployed a config change that relies on two changes in the EntitySchema extension that are only in wmf.9; backporting them to wmf.8 seems to be impractical (T367334#9885619, second part). If the train has to be rolled back, so that group1 and/or group0 are on wmf.8 again, I suggest that you first deploy this config change (a partial revert of the other one) to avoid errors on Test Wikidata client wikis (wikidataclient-test.dblist, i.e. testwiki, test2wiki, testwikidatawiki, testcommonswiki – note that test2wiki is in group1).