Attending: Jens (chair+mins), RobC, David, Teng, Sam, Matt, Pete, Dan, JohnH, Govind

0. Operational blog posts

No operational issues. Should we use the blog mostly for technical things (a la ZFS configuration), or also for "fluffy" or "managerial" things like musings on interoperation or the state of the art? The suggestion is that as long as the entries are tagged appropriately, it should be OK.

1. Quarterly stuff

Particularly if you have something I don't know about that I should report, such as publications, meetings with CERN or with other users, submissions to CHEP/HEPiX or similar, etc. The main question is the person accounting; since the Edinburgh post is half LSST, Teng would be 50% GridPP storage and RobC the other 50%.

2. Open issues

Such as xcache, docs (in the wiki), remaining todos from GridPP and the storage pre-GDB in September, storage accounting and information, etc.

Accounting - seems to be very expensive in DPM, at least in recent 1.8.x releases; it takes 20 hours at Glasgow, so it won't be run daily despite the recommendation that sites do so. Could this be better in 1.9.0? ATLAS have requirements to run 1.9.0, so we should get sites upgraded anyway. There is also a bug in the portal, to which Brian sent a link: http://accounting-devel.egi.eu/storage.php?SubRegion=1.65&query=logical_tb_used_avg&startYear=2016&startMonth=10&endYear=2017&endMonth=9&yrange=SITE&xrange=VO&groupVO=lhc&chart=GRBAR&scale=LIN&localJobs=onlygridjobs

Related to that is the GLUE 2.1 work, which seems to be ticking along, but we have received no firm conclusion on whether it is suitable for ATLAS (et al.). Another suggestion was to use SwordFish to automate the publishing, although SwordFish would only give you information about the StorageShare, and it may still need to be supplemented with information such as the Path for the share, the retention policy, and the Tag (GlueVOInfoTag is used to assign a space token description in GLUE 1). However, since these attributes are single-valued (0..1), a shared space would have to be published as multiple shares, each with its own path and tag (a sketch of what this could look like follows after item 3).

xcache - some questions about the hardware requirements; probably just a disk server or two would do initially. However, it may be worth considering whether the cache would be used by all (ATLAS) jobs, or whether one could configure AGIS with two sites, one using the cache and the other not, and have the jobs split between the two until the site has enough experience/confidence with the xcache performance to move all jobs over. Note that the cache will be used only by local jobs, not for cross-site "federation". Network capacity might also be an issue, as data is fetched from elsewhere rather than stored locally, although presumably caching is not hugely different, from a network perspective, from writing a file to the site's SE and reading it locally. And the cache is easier to run. It looks like Govind has (been) volunteered to be the first to try this; he should be ably assisted by the Power Rangers. Oxford may be next on the list, once Kashif finishes his upgrades and has time for storage-ier things.

Brian mentions FTS talking to S3 endpoints, although surely in talking to Echo it should go through the GridFTP layer to S3, in order to benefit from third-party transfers?

3. Other data projects

I was wondering whether I should relate some activities in other data projects. Not at the top of our priorities, so we won't spend too much time on it.
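To illustrate the point under item 2 about Path and Tag being single-valued on a StorageShare: a space shared between two VOs would end up published as two share objects, each carrying its own path and tag. A minimal sketch in Python, assuming GLUE 2.0-style LDAP attribute names (GLUE2StorageSharePath, GLUE2StorageShareTag, etc.); the share IDs, paths, tags and base DN are invented for illustration and not taken from any real site:

    # Render one logically shared space as two GLUE 2 StorageShare LDIF
    # entries, one per VO, since Path and Tag are single-valued (0..1).
    # All IDs, paths, tags and the base DN below are made up.

    SHARED_SPACE = [
        {"id": "example-se/share/atlas", "path": "/dpm/example.org/home/atlas",
         "tag": "ATLASDATADISK", "retention": "replica"},
        {"id": "example-se/share/cms", "path": "/dpm/example.org/home/cms",
         "tag": "CMSDATADISK", "retention": "replica"},
    ]

    BASE_DN = "GLUE2GroupID=resource,o=glue"  # hypothetical publishing branch

    def share_to_ldif(share):
        """Return one LDIF entry for a single StorageShare."""
        return "\n".join([
            "dn: GLUE2ShareID=%s,%s" % (share["id"], BASE_DN),
            "objectClass: GLUE2Share",
            "objectClass: GLUE2StorageShare",
            "GLUE2ShareID: %s" % share["id"],
            "GLUE2StorageSharePath: %s" % share["path"],      # single-valued
            "GLUE2StorageShareTag: %s" % share["tag"],        # single-valued
            "GLUE2StorageShareRetentionPolicy: %s" % share["retention"],
            "GLUE2StorageShareServingState: production",
        ])

    if __name__ == "__main__":
        print("\n\n".join(share_to_ldif(s) for s in SHARED_SPACE))

The point is simply that one physical space ends up as one published entry per (path, tag) pair, whatever generates the records (Swordfish-derived or otherwise).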
4. AOB (from the chat)

brian (11/10/2017 10:13:16): b0c5c62e-ae5c-11e7-bf66-001dd8b71d42 http://accounting-devel.egi.eu/storage.php?SubRegion=1.65&query=logical_tb_used_avg&startYear=2016&startMonth=10&endYear=2017&endMonth=9&yrange=SITE&xrange=VO&groupVO=lhc&chart=GRBAR&scale=LIN&localJobs=onlygridjobs

Daniel Traynor (10:16 AM): It takes about a day to run a du at QMUL. https://github.com/cea-hpc/robinhood/wiki - the Robinhood Policy Engine is a versatile tool to manage the contents of large file systems.

John Hill (10:17 AM): For us (a very small site) it takes 30 seconds.

Daniel Traynor (10:17 AM): I'm going to look into this for Lustre.

brian (10:18 AM): Different topic to discuss: FTS transfer from Castor (SRM) to an S3 endpoint via a dynafed endpoint, using FTS to submit: https://fts3-test.gridpp.rl.ac.uk:8449/fts3/ftsmon/#/?vo=&source_se=&dest_se=davs:%2F%2Fdynafed.stfc.ac.uk&time_window=24
http://www.stfc.ac.uk/news-events-and-publications/events/general-interest-events/computing-insight-uk/

David Crooks (10:47 AM): https://indico.cern.ch/event/670330/

brian (10:47 AM): https://indico.cern.ch/event/670207/ - tape matters from the WLCG Ops meeting last week.
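As a footnote to Brian's FTS/dynafed test above: such a Castor (SRM) to dynafed/S3 copy can also be submitted programmatically rather than via the CLI. A minimal sketch using the fts-rest "easy" Python bindings, where the REST port (8446) and both file URLs are assumptions for illustration, not the ones used in the actual test:

    # Sketch: submit a Castor (SRM) -> dynafed (davs, backed by S3) copy
    # through FTS using the fts-rest "easy" bindings. The endpoint port and
    # both file URLs are illustrative assumptions, not the real test job.
    import fts3.rest.client.easy as fts3

    FTS_ENDPOINT = "https://fts3-test.gridpp.rl.ac.uk:8446"  # REST port assumed

    # The Context picks up the user's grid proxy for authentication by default.
    context = fts3.Context(FTS_ENDPOINT)

    transfer = fts3.new_transfer(
        "srm://srm-example.gridpp.rl.ac.uk/castor/example.org/some/file",  # made up
        "davs://dynafed.stfc.ac.uk/some/area/some/file",                   # made-up path
    )
    job = fts3.new_job([transfer], verify_checksum=False, retry=0)

    job_id = fts3.submit(context, job)
    print("Submitted FTS job %s" % job_id)

Progress of a job submitted this way would then show up in the ftsmon view Brian linked above.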