Attending: Jens (chair + minutes), Brian, John H, Winnie, Sam, Luke, John B, Marcus, Robert, Gareth, Matt, Pete, David, Ewan

0. Operational blog posts

No news. Everyone should be able to post to the blog.

1. Progress on tasks

- Great ATLAS Space Token Arkleseizure (Brian)

QMUL had a space token reintroduced during a migration. Apart from this, the cleanup _is_ progressing; sites are being ticketed and the tickets are being updated. One open question is where the data is taken from: the SRM or the BDII (if they differ)? People are not using the BDII information at the moment, although it may be relevant to the non-LHC VOs. The move to GLUE2 is considered important, partly for "political" reasons and partly as an opportunity to provide useful information (no longer bound by the Installed Capacity document). There is a GLUE2 task force/working group of which Alessandra is a member.

- Uploading file catalogues to your own SE (and rollout to other sites)

Still awaiting feedback from the VOs - ATLAS specifically - but the current system seems to work. Recipes are available for DPM and dCache, and we should check StoRM. Most of the remaining issues are niggly ones, such as whether you need to create the catalogue as a privileged user but upload it as non-privileged, or whether the relative filenames (as required by ATLAS) are relative to the right path. Documentation exists in an ATLAS wiki which is allegedly readable even if you are not in ATLAS.
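As an illustration only (not the agreed recipe), here is a minimal Python sketch of the upload step using the gfal2 bindings, assuming the dump file has already been produced locally. The SE endpoint, paths and filename are hypothetical, and at some sites the copy may need to run as a non-privileged user even though the dump itself is generated with elevated rights.

    # Minimal sketch: upload a locally generated catalogue dump to the site SE
    # via the gfal2 Python bindings. Endpoint, paths and filename are examples
    # only; follow the DPM/dCache/StoRM recipe appropriate to your storage type.
    import gfal2

    LOCAL_DUMP = "file:///var/tmp/se_dump_20160113.txt"            # hypothetical dump location
    REMOTE_DUMP = ("srm://se01.example.ac.uk/dpm/example.ac.uk/"   # hypothetical SE endpoint
                   "home/atlas/dumps/dump_20160113.txt")

    ctx = gfal2.creat_context()          # note gfal2's spelling: creat_context
    params = ctx.transfer_parameters()
    params.overwrite = True              # replace an earlier dump if one is present

    ctx.filecopy(params, LOCAL_DUMP, REMOTE_DUMP)
    print("Uploaded %s to %s" % (LOCAL_DUMP, REMOTE_DUMP))

The same copy could equally be done with the gfal-copy command-line tool; the point is simply that the dump ends up under the path the VO expects, so that the relative filenames resolve correctly.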
2. Continuing discussion from last week

- T2C testing (doesn't help if I haven't written up the notes...)

Apparently the test can go ahead at Oxford anyway, just with less site intervention/support: it mostly needs action from the VO (in this case ATLAS, as personified by Alastair Dewhurst). The test will involve (1) HammerCloud, which doesn't stage to disk, so giving the "worst case" statistics; and (2) running jobs at Oxford while staging data from a remote location. Jobs could perhaps be instrumented to provide performance information, rather than relying on the admins at Oxford to provide it. Bristol might be another candidate for this type of testing but is thought to be less well connected than Oxford (in other words, bandwidth limited). The CMS scaling tests at Bristol were done by local users.

3. Update on the "small" VOs, if any - data stuff only!

LSST: copying data to Edinburgh.
LIGO: Paul has returned to this task, talking also to Andrew Lahiff at the T1. Using the Globus catalogue, which is usable by both ends.
DiRAC: progressing - a very, very long thread on DIRAC-USERS; expecting Leicester to start Soon(tm).

4. AOB

NOB.

Chat log:

Brian Davies @RAL-LCG2: (13/01/2016 09:57:22)
AFK

Samuel Cadellin Skipsey: (10:16 AM)
https://twiki.cern.ch/twiki/bin/view/AtlasComputing/DDMOperationsScripts#Dark_data_and_lost_files_detecti
Yeah, there's a "posix filesystems" script, apparently.

Ewan Mac Mahon: (10:23 AM)
Why is Bristol bandwidth limited? I remember Dave Newbold having plans (and IIRC spending money on hardware to support) upgrading the link to ~10Gbit/s. What happened to that?
Perhaps we should consider a rebrand - maybe 'European Data Grid' might get the point across?

Lukasz Kreczko: (10:27 AM)
@Ewan: the link is 10 Gbit/s, but that's for the whole university (we are throttled to 5 Gbit/s). We have to show that we consistently use the bandwidth in order to qualify for more. That's why I am keen on testing: we need to use 5 Gbit/s non-stop (ideally), so we can upgrade the total bandwidth.
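For context on Lukasz's target (a back-of-the-envelope note, not from the meeting), sustaining 5 Gbit/s works out roughly as follows:

    # Rough figures for a link saturated at the 5 Gbit/s cap quoted above.
    # Illustrative only; decimal (SI) units throughout.
    link_gbit_per_s = 5.0

    bytes_per_s = link_gbit_per_s * 1e9 / 8       # 625 MB/s
    per_day_tb = bytes_per_s * 86400 / 1e12       # ~54 TB/day
    per_month_pb = per_day_tb * 30 / 1000         # ~1.6 PB/month

    print("%.0f MB/s, %.1f TB/day, %.2f PB/month"
          % (bytes_per_s / 1e6, per_day_tb, per_month_pb))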