Attending: Ste, Raja, Dan, Sam, Winnie, Brian, Jens (chair+mins), Teng, RobC, Matt, Vip

9. OBP NOI (but see the Oxford discussion later); NBE. We currently have zero blog posts for this quarter...

8. Update(s) on testing: what happened to EOS at Glasgow? SLATE? dCache non-GridFTP TPC (see also Brian's links sent to the list, on TPC and the timescales).
   Glasgow: decided to test CEPH instead of EOS...
   SLATE: Brian had tested the RAL xroot proxy cache configured with SLATE but got a permissions error.
   dCache TPC: part of the wider TPC discussion.

7. Other deadlines - DOME migration for DPM sites, IPv6 for large T2s, baseline(s).
   Oxford (Kashif) have quietly upgraded to DOME, with the result that the site is currently not working, although it does not appear to be far from a working state - see the chat log below.
   Based on the Manchester and Oxford experiences, we cannot yet impose a deadline for DOME migration.
   One specific problem is GridFTP redirect, which can cause problems for dual-stacked sites, as an initial IPv6 call may be redirected to IPv4 (see the quick check sketched at the end of these minutes).
   Matt regards the DOME upgrade and turning off legacy DPM as two distinct steps. Lancaster will try to turn off SRM.

6. Apropos DOMA, at least CERN are updating their wiki pages... Only one GridPP storage-related wiki page has been updated in the past month, the site storage status page.

5. I read that the EHT data (famously "too much to transfer over the Internet") was only 5 PB. Although I guess if one of your SEs is in Antarctica, you might well have bandwidth issues. LOFAR and SKA are/will be doing the same kind of baseline interferometry and will be moving data over the Internet.

4. AOB
   UK representation in the DPM community (cf. last week).
   Brian: https://indico.cern.ch/category/10830/

Chat log:

Vip: https://lcgfts09.gridpp.rl.ac.uk:8449/var/log/fts3/2019-06-04/se2.ppgrid1.rhul.ac.uk__t2se01.physics.ox.ac.uk/2019-06-04-1505__se2.ppgrid1.rhul.ac.uk__t2se01.physics.ox.ac.uk__1340425659__634b0e99-e59c-5627-a6cc-2e784a027516
Vip: [root@t2se01 log]# uptime
     10:20:17 up 26 days, 18:26, 2 users, load average: 0.21, 0.07, 0.06
Vip: https://lcgfts3.gridpp.rl.ac.uk:8449/fts3/ftsmon/#/job/634b0e99-e59c-5627-a6cc-2e784a027516
Matt: @Vip: It looks like srm is trying to pull data through the headnode rather than redirect to a pool node:
      gsiftp://t2se44.physics.ox.ac.uk/t2se44.physics.ox.ac.uk:/dpm/pool1/atlas/2019-06-04/user.openc.18274302._000001.signalMinitree.root.219118848.0
Matt: Wait, I read that wrong. t2se44 is a pool node after all.
Brian: I'm back (VPN dropped out).
Daniel: Also, they had to use helium-sealed drives, as normal drives are open to the air and the air is too thin for them to work.
Matt: @Vip - it looks like just your srm service is mucked up - all other transfers work. (10:31 AM)
Matt: Even a gfal-ls srm://t2se01.physics.ox.ac.uk/dpm/physics.ox.ac.uk/home/dteam/ewan_testing_a_thing doesn't work. (10:35 AM)
Matt: Wait - it does, but it takes a looong time. (10:36 AM)
Matt: @Vip: You've probably checked it already, but what does your srmv2.2 log look like? (10:44 AM)
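
Follow-up note on the slow SRM response: Matt's check from the chat log can be reproduced from any UI with a valid grid proxy by timing a gfal-ls against the Oxford headnode. A minimal sketch, assuming the gfal2 utilities are installed and the dteam test path quoted in the chat is readable:

    # Time an SRM directory listing against the headnode; a healthy SRM
    # endpoint normally answers in a few seconds, whereas in the chat log
    # the same call took much longer.
    time gfal-ls srm://t2se01.physics.ox.ac.uk/dpm/physics.ox.ac.uk/home/dteam/
    # Listing the same namespace over another protocol (e.g. gsiftp on the
    # headnode) would help confirm the slowness is specific to the SRM service.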
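
Follow-up note on the GridFTP redirect issue under item 7: the problem only arises when a host is dual-stacked, so a quick first check is whether both A and AAAA records exist for the head and pool nodes. A minimal sketch, using the Oxford headnode from the chat log purely as an example host:

    # Does the host publish both IPv4 (A) and IPv6 (AAAA) addresses?
    host -t A    t2se01.physics.ox.ac.uk
    host -t AAAA t2se01.physics.ox.ac.uk
    # If both resolve, an initial IPv6 GridFTP connection may be redirected
    # to a target that is then reached over IPv4, as noted in item 7.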