Attending: Jens (c+m), Brian, Daniel, John B, Lukasz, Marcus, Winnie, Steve, Elena, Sam, Pete
Apologies: Sam for being late; his train was cancelled.

0. Operational blog posts

Everyone reminded to post, except Brian and Jens, who are not allowed to until after the referendum because of purdah.
Operational issues raised: Brian reported failed deletions of empty directories.

1. Pre-GDB coming up on IPv6 - did we (Ewan) ever get the Oxford DPM working on IPv6?

http://indico.cern.ch/event/394830/

Oxford's IPv6 DPM may not exist any more, but given that two GridPP people - namely Duncan Rand and Dave Kelsey - are speaking at the pre-GDB next week, there might be an opportunity to wave the GridPP flag a bit if we have something interesting to say.

ATLAS IPv6 test site: ATLAS thinks both Oxford and Glasgow have IPv6; there is an IPv6 "island" at RAL for perfsonar, now published in production in GOCDB and being tested. Sam could turn IPv6 back on if it would be useful for Duncan's demo next week. DPM "should be fully IPv6 compliant" - and indeed Ewan's one at Oxford seemed to work.

3. The LSST document - postponed from last time (18/05), I think. Any other future communities we should be working with?

The LSST document is encouraging in the sense that a new community could be brought into GridPP with "only" 25 days of effort... but it also highlights that some amount of testing and hacking is always needed.

The document is not very specific on data. The original plan was to scp input files onto a local node and from there globus them into DIRAC; NERSC, on the other hand, already had Globus and could transfer them directly. In the UK we ended up using DIRAC tools directly to copy files in and register them with the DFC (a rough sketch of this step is appended after the chat log below). However, LSST tracks which sites receive which data (and, in general, which sites support LSST).

The original design and intention of "the grid" was that users should not need to worry about specifically where their data goes - much as people use the cloud these days: when you set up cloud storage or a VM you can specify which continent (or large region) you want it in, but not which country, and you shouldn't care. Similarly with the grid: data should go somewhere appropriate, and the job should go to where the data is. This used to be supported; only in the past few years has WLCG moved to a post-location model where data can be accessed remotely, or copied to/from wherever the job happens to be placed, thanks to the quality of the networks. If location-aware job submission has disappeared as a feature, it would be worth knowing about. Maybe time for some more testing.

As regards output data, the document describes how these are collected from the CE sandbox. There is a case for doing further work with this data by calling out to an external (non-grid) database, since we have no grid databases deployed - at least not in GridPP - other than the FCs. It is a long time since people talked about restricting outbound connectivity on WNs, and it never happened, so connecting to the database should be perfectly possible :-) (see the second sketch at the end of these minutes).

4. Summary of CHEP papers/abstracts submitted?

Brian - DiRAC paper
Brian - proposed FTS for data transfer but decided to roll it into Alastair's CEPH-at-T1 abstract
Alastair - CEPH at T1 (submitted?)
Sam - T2C data paper
Sam - local reconstruction and distributed error correction
Marcus - ZFS
Daniel - Lustre at QMUL

So 6-7 abstracts submitted for GridPP. Not bad. Now all we have to do is write the papers.
Authors should feel free to circulate their papers for comments.

5. Progress on future T2 stuff?

Probably need the update from Alastair for this one. There will be more things to test/look at.

6. AOB

There will be a CVMFS meeting at RAL next week (Mon+Tue). Brian Bockelman will be there and may be available for an audience at next week's storage meeting.
Steve points out that the referendum is the same day as hepsysman, so people should remember to arrange a postal vote.

Chat log:

Lukasz Kreczko: (01/06/2016 10:05:18) is anyone talking? I cannot hear anything
Jens Jensen: (10:05 AM) Yes we have people speaking
Lukasz Kreczko: (10:05 AM) OK, I will restart vidyo then
Jens Jensen: (10:05 AM) good luck :-) http://indico.cern.ch/event/394830/
Brian @RAL-LCG2: (10:21 AM) https://www.gridpp.ac.uk/wiki/GridPP_VO_Incubator

    lcg-infosites --vo lsst space | grep ac.uk
         0  17266      0      0      0      0  -                    bohr3226.tier2.hep.manchester.ac.uk
     15941   5056      0      0      0      0  -                    bohr3226.tier2.hep.manchester.ac.uk
         0      0      0      0      0      0  -                    bohr3226.tier2.hep.manchester.ac.uk
     32191  21036      0      0      0      0  -                    gfe02.grid.hep.ph.ic.ac.uk
     61609   2500      0      0      0      0  -                    gridpp09.ecdf.ed.ac.uk
     41781      0  41781      0      0      0  T2KRESERVE           hepgrid11.ph.liv.ac.uk
      1689   5295      0      0      0      0  -                    hepgrid11.ph.liv.ac.uk
     52129  46826  98956      0      0      0  ATLASLOCALGROUPDISK  srm.glite.ecdf.ed.ac.uk
    118560  69201      0      0      0      0  -                    srm.glite.ecdf.ed.ac.uk

From the incubator page:
VOI-LSST-021 - Copy new data from NERSC to one of the storage elements and register them in the DFC (JZ, AF) - Closed 2016-04-30 - Data have been copied to Liverpool and registered.

Daniel Peter Traynor: (10:32 AM) A lustre one from QM
Marcus Ebert: (10:33 AM) I put a ZFS one
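
Appendix 1 (added for reference): a minimal sketch of the "copy files in and register them with the DFC" step mentioned under item 3 and in VOI-LSST-021, assuming a configured DIRAC client and a valid proxy. The DIRAC group, LFN, local file name and SE name below are made-up examples, not the values actually used for the Liverpool copy.

    # Initialise a proxy in the relevant DIRAC group (group name is a guess):
    dirac-proxy-init -g lsst_user

    # Upload a local file to a storage element and register it in the DFC in one step
    # (LFN, local path and SE name are illustrative only):
    dirac-dms-add-file /lsst/user/t/testuser/run42/input.fits ./input.fits UKI-NORTHGRID-LIV-HEP-disk

    # Confirm the replica is registered in the catalogue:
    dirac-dms-lfn-replicas /lsst/user/t/testuser/run42/input.fits

For scripted bulk copies the same operation should also be reachable through the DIRAC Python API (the DataManager's putAndRegister call), rather than looping over the command-line tool.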
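
Appendix 2 (added for reference, purely illustrative): the "call out to an external (non-grid) database" idea from item 3 could be as simple as a final job step on the WN that pushes output-file metadata to a service outside the grid. The host name, endpoint and fields here are all hypothetical; the only real assumption is outbound HTTPS from the WN, which, as noted above, has never actually been restricted.

    # Hypothetical bookkeeping call made from the worker node at the end of a job:
    curl -s -X POST https://lsst-bookkeeping.example.ac.uk/api/files \
         -H 'Content-Type: application/json' \
         -d '{"lfn": "/lsst/user/t/testuser/run42/output.fits",
              "se": "UKI-NORTHGRID-LIV-HEP-disk",
              "size_bytes": 123456789,
              "checksum": "ad:12345678"}'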