Attending: Jens (chair+mins), Dan, Luke, Matt, JohnH, Sam, Brian, Winnie, Ste

The DPM workshop is tomorrow, running over two half days. We thought Raul was going, but apparently he isn't? Matt has been volunteered to give a presentation on behalf of the UK and is inviting feedback from DPM sites. For example, Bristol needs HDFS support, which is not available in DOME, and Liverpool plan to migrate to DOME "soon". Since Alessandra is also speaking at the workshop (on token-based access), the UK is at least fairly well represented on the agenda, and others will/should be attending remotely.

We have also fed back a great many suggestions for improving the documentation. However, theirs is a closed wiki, so we cannot update it ourselves, in which case it would make sense to update the GridPP wiki instead. In fact, it makes sense to update our wiki anyway: the documentation is more detailed now, but it assumes you use Puppet.

In particular, moving the LHC experiments off SRM is not so hard; centrally, ATLAS just need to set the site to non-SRM. Moving non-LHC users off SRM is slightly more tricky. Is there such a classification as "large T2D", "medium T2C", "medium/small T2C"? Probably each site is sufficiently different that we cannot make a broad recommendation for each category. Also, the future of storage at a given site will depend on the state of its current hardware and on future funding. A cross-site support model is still hard to implement, because university IT teams tend not to like giving the keys to the (virtual) house to someone who is not a member of staff.

Workshop agenda: https://indico.cern.ch/event/776832/

Chat log:

Matt (MD): Site-by-site:
Birmingham - decommissioned in favour of EOS (at ALICE's request).
Sheffield, Cambridge - not planning on running an SE for much longer (I could be wrong for Cambridge); may skip DOME.
Bristol - outlier, ???
Brunel, Lancaster, Manchester - DOME'd, and no plan to migrate away from DPM.
Glasgow - Sam can phrase this one.
Oxford - DOME'd, but unsure of long-term plans for the SE (they have no storage in warranty, iirc).
Liverpool, ECDF, RHUL - middle-sized sites, planning on moving soon, when things look safe.
(Looking at it this way makes me feel quite good about the UK DOME situation.)
Other SE flavours - 2 dCache, 2 StoRM, 1 EOS, some XCache.
It might be worth mentioning our users: mainly ATLAS and CMS, but LHCb do have storage at some sites. New user groups are coming - a lot of astronomers (SKA, LSST, CTA). Increased access via DIRAC and the associated toolsets. Potential for user groups to want to move beyond X.509, and away from gfal!

Luke (L): Critical for Bristol: will HDFS be supported with DMLite in the future?

John (JH): You're right for Cambridge - I hope to start ATLAS moving next week.

Daniel (DP): RHUL have an issue: they would like to use HDFS (it's what they use for the T3), but I understand this is not supported by DOME. Same issue for Bristol, as they use HDFS now.

Matt (MD): Thanks for the feedback, John and Dan.

Brian (BD): I have another meeting in 5 mins, sorry I have to leave.

Luke (L): Yes, Kubernetes is nice, but it needs some work to set up.

Daniel (DP): Each site is different. We need to sit down with each site, work out what they want to / can do, and match that with what we (GridPP) want them to do, given the available software/service options.
We need a catalogue of options.

Luke (L): An XRootD SE ticks all of our SE boxes (since users want to be able to transfer CMS data to our site): https://opensciencegrid.org/docs/data/xrootd/install-storage-element/

Samuel (SC): Luke: yes, I think an xrootd SE (which is basically just an xrootd server, really) is a good match for Bristol.

Daniel (DP): With HDFS? It might also work for RHUL.

Luke (L): ^ Yes, time is always an issue. Yes, with HDFS - OSG has had a plugin for years.

Samuel (SC): (The OSG plugin is also sort of what inspired the Ceph plugin we're using here / at RAL, so it's solid.)

Luke (L): yum install xrootd-hdfs, then add

  xrootd.fslib /usr/lib64/libXrdOfs.so
  ofs.osslib   /usr/lib64/libXrdHdfs.so

to /etc/xrootd/xrootd-clustered.cfg - done.
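A minimal sketch of what Luke's recipe might look like end to end, assuming the OSG xrootd-hdfs plugin and the standard packaged XRootD systemd units; the exported path is illustrative and not something discussed in the meeting:

  # Install XRootD and the OSG HDFS plugin
  yum install xrootd xrootd-hdfs

  # /etc/xrootd/xrootd-clustered.cfg (excerpt)
  # Load the standard filesystem layer, then route storage calls to HDFS
  xrootd.fslib /usr/lib64/libXrdOfs.so
  ofs.osslib   /usr/lib64/libXrdHdfs.so

  # Export the namespace to be served (illustrative path - match the site's HDFS layout)
  all.export /store

  # Restart the instance that reads xrootd-clustered.cfg
  systemctl restart xrootd@clustered

Authentication/authorisation for experiment transfers (e.g. X.509/GSI for CMS) would still need to be layered on top, as described in the OSG storage element guide linked above.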