Attendees: Liverpool: John; Glasgow: Sam, Gareth, David; Manchester: Alessandra; Lancaster: Jeremy; Cambridge: John; QMUL: Chris; RAL: Jens, Brian; Oxford: Ewan; RHUL: Govind

Agenda for the storage meeting

1. Review of DPM planning/status
- What can we realistically do with the DPM source code?
- Should we (realistically) try to do something?
- DPM support loose ends/testing
- Review of the roadmap for decisions

Who is working on this? We are defining the tasks, then looking for volunteers. Sam is planning on using HDFS, and may work with the DMlite plugins for Hadoop - a little simple for now? Between dCache, StoRM/Lustre, StoRM/GPFS, DMlite/Lustre, RAL-EOS - or DPM - we should have recommendations for sites. "Classic SEs" are also possibilities. Input to CERN by mid-September; discussion at the EGI TF. Is CERN interested in the solution? Do we need to move towards industry-recognised "solutions"? HDFS is used, but is a "minority" product.

GridFTP: a single endpoint on top of Lustre, where the SRM just touches the file to give you permission to do something to it. Or xroot.

dpm-listspaces: can we solve the publishing of negative spaces? It also does not correctly publish xroot - could we add things to it? This script is not supportable by anyone who is not Michel. Can solving problems like this (in the core of DPM) give us an idea of whether we can support it in the future?

2. Update on FTS 3.0 progress
Brian reports that we have a test FTS 3.0 server at RAL; Andrew Lahiff has been setting it up. FTS 2 commands should work, but they are looking at setting up one with FTS 3 commands. It is supposed to have dynamic optimisation, though it is not clear what the limits of the optimisation are. Starting with PPD and ECDF, as discussed in yesterday's dteam meeting, and redirecting CMS load-test transfers rather than adding to sites' load. How do we find out whether it can transfer to a "Classic SE"? Does it depend on the information system? The current FTS automatically picks up SEs from the information systems.
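On the dpm-listspaces point under item 1: the negative-space problem arises wherever published free space is computed as reserved (or total) minus used, which goes negative once a pool is overcommitted. A minimal sketch of one possible fix - clamping at zero before publishing - using a hypothetical helper, not actual dpm-listspaces code:

```python
# Hypothetical illustration of the dpm-listspaces negative-space issue.
# If free space is published as reserved minus used, an overcommitted
# pool yields a negative number; information-system consumers expect
# sizes >= 0, so one simple fix is to clamp before publishing.

def published_free_space(reserved_bytes: int, used_bytes: int) -> int:
    """Free space to publish, clamped so it is never negative."""
    return max(0, reserved_bytes - used_bytes)

# Overcommitted space token: 10 TB reserved, 12 TB actually used.
print(published_free_space(10 * 1024**4, 12 * 1024**4))  # prints 0
```

Whether clamping is actually the right behaviour (rather than publishing the overcommit so monitoring can flag it) is exactly the sort of decision that needs someone who understands the DPM core, which is the supportability question raised above.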
CMS is already publishing GridFTP endpoints. Maybe we can add static information. Is customisation required to support EOS? It would use GridFTP alone rather than SRM. Of course SRM hands over to GridFTP, so it is clearly slower, even when synchronous. Which suggests that a

3. Filesystems testing - OrangeFS, CEPH, ... (ongoing)
The T1 is looking at OrangeFS and CEPH again; if others are looking at them, it would make sense to talk together. Maybe some of the clever things earlier on the to-do list have vanished (and not through being implemented). HDFS, on the other hand, should have RAID-style redundancy rather than full replication, unlike CEPH. CEPH presents a POSIX filesystem but can also do S3. Related to that: Hadoop - if HDFS is useful, what use can we make of Hadoop? HDFS RAID works by using Hadoop to checksum. What else can we do other than checksums? Could we do syncat? CMS were looking at using Hadoop? Rob Appleyard (T1 CASTOR team) should turn up here and present CEPH.

4. Preparations for:
- HEPiX in Beijing(!) - too far away? Needs justification. And visas. Two requests from the T1; none.
- "Federation" workshop at Lyon? Shaun is going.
- EGI TF? Jeremy will probably be going; lots of people. No need for input. Federated identity management.
- Storage (protocols) meeting attached to the GDB - now takes place in October. A not-closed but not widely advertised meeting on the 11th, which Wahid cannot go to as it clashes with the pre-GDB; Chris could go instead.
- ATLAS xrootd "federation" - FAX
- OGF in Chicago?
- GridPP in Oxford?

Re "federations": a site may need to have multiple xroot instances if it supports multiple VOs, because of their different usage patterns.

5. What happened to that stress-testing discussion?
Wahid went on holiday? Wahid's idea was to have stress-testing-in-a-box.

6. AOB
Any other sites who want to try testing job recovery for ATLAS? (Brian) The T1 reports improvements in overall completion rates - copying files to the SE; back to the blog. Ewan is interested.
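On the "could we do syncat?" question in item 3: a catalogue-versus-storage consistency check reduces to comparing two maps of filename to checksum. A toy sketch (all names and data invented) of what the comparison step could look like:

```python
# Toy sketch of a syncat-style consistency check: compare checksums
# recorded in a catalogue dump against checksums recomputed from
# storage (e.g. by a Hadoop job walking the files on HDFS).

def compare_catalogues(catalogue, storage):
    """Return (missing from storage, checksum mismatches, dark data)."""
    missing = [f for f in catalogue if f not in storage]
    mismatched = [f for f in catalogue
                  if f in storage and catalogue[f] != storage[f]]
    dark = [f for f in storage if f not in catalogue]  # files on disk only
    return missing, mismatched, dark

catalogue = {"/grid/f1": "ad0234829", "/grid/f2": "57c0531e"}
storage = {"/grid/f1": "ffffffff", "/grid/f3": "0badcafe"}
print(compare_catalogues(catalogue, storage))
# prints (['/grid/f2'], ['/grid/f1'], ['/grid/f3'])
```

The expensive part in practice is producing the storage-side map, which is where Hadoop's ability to checksum files in parallel would pay off.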
[10:00:50] Matt Doidge Heyup
[10:02:04] Jeremy Coles joined
[10:02:22] John Hill joined
[10:02:54] David Crooks joined
[10:04:31] Jens Jensen https://savannah.cern.ch/task/?group=srmsupportuk
[10:04:32] Christopher Walker joined
[10:06:29] Brian Davies joined
[10:08:43] Ewan Mac Mahon joined
[10:09:37] Jeremy Coles Sam - you mentioned HDFS. Oliver mentioned a potential test collaboration implementation with Lustre but perhaps it does not matter.
[10:12:08] Ewan Mac Mahon Also, don't OSG use HDFS with bestman? Which has been put out to pasture.
[10:15:14] Ewan Mac Mahon Is Jens talking about the SRM-free 'classic SE' approach though?
[10:15:28] Sam Skipsey Potentially.
[10:16:01] Ewan Mac Mahon The permissions situation is different though - storm does its "own the file and set acls" thing.
[10:16:17] Ewan Mac Mahon Frankly, AIUI the classic SE approach is actually much cleaner.
[10:16:21] Sam Skipsey Right.
[10:18:10] Govind Songara joined
[10:18:32] Jeremy Coles left
[10:18:32] Govind Songara sorry for being late
[10:20:22] Jeremy Coles joined
[10:20:52] Ewan Mac Mahon Well, if we rewrite the whole thing in python it would be.....
[10:34:33] Ewan Mac Mahon Though to an extent, anything posix-y can presumably be persuaded to present an S3 interface by running a separate S3 service on top of it.
[10:35:33] Sam Skipsey Sure, but ceph's is built into it.
[10:41:33] Jens Jensen https://www.gridpp.ac.uk/gridpp29/programme.html
[10:41:52] Jeremy Coles Dave Colling's session will have discussion of storage.
[10:42:01] Ewan Mac Mahon AIUI from Pete there's still scope for agenda tweaking if people feel particularly strongly that they'd like to talk about something.
[10:44:01] Ewan Mac Mahon Yes, we're interested.
[10:44:11] Ewan Mac Mahon But we've not actually done it yet.
[10:44:48] Ewan Mac Mahon We don't expect it to make much difference though - our SE is fantastic
[10:45:28] Alessandra Forti left
[10:45:29] Govind Songara left
[10:45:29] Gareth Roy left
[10:45:30] John Bland left
[10:45:31] David Crooks left