Attending: Jens (chair+mins), Vip, Wenlong, Teng, RobC, Luke, Dan, Winnie, Sam, Matt, Patrick, Pete

0. Operational blog posts

Vip's problem of dpm-listspaces confusingly preventing disk servers from being drained was resolved by adding space to the token, but arguably the solution is a bit unsatisfactory.

1. Upgrades

The DPM DOME upgrade should now be Boring™. CentOS upgrades? Bristol? Monitoring? EOL for CentOS 6 is November 2020, although sites should check - maintenance updates tend to finish earlier than security updates.

Bristol SE: Sam had advised increasing internal limits, which seems to have done the trick for now. The next step would be a migration - it is thought that because DPM keeps data in HDFS laid out identically to the DPNS layout, only the hostname (and protocol) needs to change, and the hostname change could be done by aliasing etc. We should know whether this works by the end of the quarter. This may also be of interest to other T2s that might migrate away from DPM in the future as part of the T2 evolution, although they would not have HDFS, which (for WLCG purposes) tends to be used mostly in the US.

Teng had provided a container image for XCache monitoring, similar to how the SLATE-CI-installed cache works (it deploys an accounting container in each pod). However, Mark did not wish to deploy this, preferring instead to manage the script manually.

2. How do we best share skills/experiences?

We have talked about sharing ZFS experience, puppet configuration, etc., and there is also the GridPP GitHub. The problem, of course, is finding out that things are there when we do not have a unique "start here" page. The main page of the wiki could be the start page. However, people sometimes use information from elsewhere - SE implementations have their own documentation and puppet recipes. RobC noted that their scripts needed some customisation to support CentOS 6, so sharing knowledge about these things would still be useful, particularly when scripts assume a CERN-centric or OSG-centric setup. When JohnH retired the Cambridge SE about six months ago, he used the EGI instructions rather than those from the GridPP wiki. The dteam tech talks are also an option, as are our blog posts - we do not have many posts, but at least the content is great.

As an aside, new starters in GridPP also need instructions. Traditionally they have started with GAS (Grid Acronym Soup) and progressed from there. Patrick reported problems with the CERN account needed to join ATLAS. Several people had had problems with CERN accounts, and it should not be necessary to have more than a lightweight account - most VOs trust the CAs, and modern federated identity management relies on tracking the LoA (level of assurance) of each authentication: Patrick's Sussex account (say) has a higher LoA than his account at CERN, so it (or his certificate) should be preferred. It's a problem with ATLAS.

3. There is a GDB today (https://indico.cern.ch/event/813757/); a DOMA pre-GDB had been planned for yesterday but was postponed. There isn't much storage-related on the GDB agenda, but there are a few things networking-related.

4. AOB

Some discussion about storage accounting: as far as we know, sites should be generating the WLCG-specified JSON files and placing them in known locations in their SEs, for each of the LHC experiments (well, not ALICE). This could also usefully be documented... This information is probably what is being aggregated by the Grafana service at CERN (see link in chat); a rough sketch of such a file is given below.
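To make the accounting point concrete, below is a minimal sketch of the kind of space-usage JSON a site might publish. The field names, sizes, hostname and output path are illustrative assumptions only, not the authoritative WLCG storage resource reporting schema - sites should follow the actual WLCG specification and the experiments' instructions on where the file goes.

```python
#!/usr/bin/env python
# Illustrative sketch only: the field names and output location are assumptions,
# not the authoritative WLCG storage resource reporting schema. Check the WLCG
# specification before deploying anything like this.
import json
import time

def build_storage_summary():
    """Assemble a minimal space-usage summary for the SE (all values hypothetical)."""
    return {
        "storageservice": {
            "name": "se01.example.ac.uk",       # hypothetical SE hostname
            "implementation": "DPM",            # storage implementation
            "latestupdate": int(time.time()),   # epoch timestamp of this report
            "storageshares": [
                {
                    "name": "ATLASDATADISK",    # space token / share name
                    "totalsize": 400 * 10**12,  # bytes
                    "usedsize": 250 * 10**12,   # bytes
                    "vos": ["atlas"],
                },
            ],
        }
    }

if __name__ == "__main__":
    # Written locally here; the real "known location" on the SE is defined
    # by the WLCG/experiment instructions, not by this sketch.
    with open("storagesummary.json", "w") as f:
        json.dump(build_storage_summary(), f, indent=2)
```

Presumably the real file would be regenerated periodically and exposed wherever the experiments (and the CERN Grafana aggregation) expect to pick it up.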
Twenty+ years ago, SEs were ~200GB. How things change - and not: CERN still runs /afs on lxplus. There was talk of migrating to EOS.

Chat log:
- kreczko (10:08 AM): November 2020
- rob-currie (10:25 AM): These puppet modules still need work. We have a sensible wrapper for them.
- Me (10:29 AM): https://github.com/gridpp
- Vip (Oxford) (10:31 AM): https://www.gridpp.ac.uk/wiki/Main_Page
- Me (10:44 AM): https://indico.cern.ch/event/813757/
- gronbech (10:49 AM): https://monit-grafana.cern.ch/d/mHqFLAbik/wlcg-storage-space-accounting?orgId=20