Attending: Dan, Jens (chair+mins), Wenlong, Ste, Sam, Teng, Duncan, Winnie, Vip, Matt, Patrick

0. OBP

No new blog post. Perhaps we need new metrics for GridPP6? Sam suggests combining publications, such as blog posts and docs. We're not terribly good at updating docs either, but perhaps this could improve. Things tend to work better when someone drives them forward - someone who cares about them. Notably, no other GridPP blog is being updated, and the operations bulletin that Jeremy advocated is not updated either. Is there a risk that someone would come to GridPP, see our stale blogs and out-of-date wikis, and conclude that we're not doing very much? Sam suggests adding a note to the blog saying that it gets infrequent updates.

1. CEPH and ZFS (separately, not together)

A note seems to have come around recently from Linus warning people not to use ZFS with Linux. But why now? This may need a little more looking into - we were under the impression that ZFS was at least forked into an open source part (out of Sun), although that would potentially be quite old; OpenZFS pool versions should be >=5000. There's nothing new in the discussions about licensing and the Linux kernel, so what has prompted this recent discussion?

CEPH - as it's now an option for a large T2, is any large T2 interested? Sam points out you need enough servers to have server-level redundancy, whether you do erasure coding (EC) or replication. Glasgow's setup will go into production after another round of procurement. Glasgow's setup is quite RALish; others may wish to have different layouts - but at this stage, the effort to set up CEPH should become less heroic and more routine.

2. GridPP attendees in DOMA meetings

The main part of this item is a request that people who attend DOMA meetings report back that they have had a meeting, to keep everyone else in the group abreast of ongoing developments.
Unhelpfully, most of the representatives have missed most of the meetings recently (with the possible exception of the ubiquitous Alessandra) - some meetings are at slightly inconvenient times. Also, if meetings are fortnightly (TPC), they might take up a fair bit of time in our calls, but let's see how it works out. Going forward, we should look at having more people representing the relevant DOMA groups, so there's less risk of missing a meeting. Of course, to have sufficient continuity they should talk to each other, but they should report to the group anyway. It was noted that some work, TPC specifically, can get exceedingly hairy and technical, and discussions would not be followable by anyone not intimately familiar with the protocols. As a summary of the current state: Alessandra is on all groups?; Sam is on the main group, TPC, and possibly QoS; Matt is on TPC; and Teng is on access.

3. Storage accounting - and CERN's monit thing

The background to this goes back to the WLCG workshop in Manchester, where Oliver et al. presented proposed work on updates to the information system: a site would be required to generate an information file and upload it to a defined location in its SE. Experiments would then read this file and update their information. The data format was JSON - at the time it was discussed whether it should use the GLUE JSON rendering directly, or something merely inspired by GLUE. For practical reasons it was decided to do the latter. Summarising the current state: SEs generate this file - which should be the same format, or possibly the same file, at least for CMS, LHCb, and ATLAS. All SEs should be doing this, hence Pete's request to sites to check their storage in CERN's monit report (the only action item from this, for now). Taking a step back, accounting was always an important topic for IRIS, and before that, to some people in GridPP.
Fermilab originally proposed a Storage Accounting Record (aka StAR, see link in chat) which could be resurrected, if it is supported by APEL. At least the DiRAC side of IRIS only does CPU accounting, but it is clear that the proposed AAAI could - and possibly should - include storage accounting. Possibly worth investigating.

4. T2 updates

Any other (voluntary) updates from T2s?
* Glasgow - also decommissioning stuff
* Lancs - "boring" SL6 upgrades [hopefully]; was planning to run ZFS on new kit.
* Liv - also running ZFS, so interested in the discussion from 1.

5. AOB

How do people like to have their minutes served? A separate mail to the list with an attachment, an announcement that they've gone up, simply a note in next week's mail, or nothing (people check the web site without being prompted)? We agreed to try announcing last week's minutes in each week's agenda. (As an aside, minutes were also linked from the operations bulletin, but this is now not being updated as much, or at all.)

From Daniel Traynor to Everyone: 10:27 AM
for storm: https://italiangrid.github.io/storm/documentation/how-to/how-to-publish-json-report/
But if you have SRM they can still use that

From Matt Doidge to Everyone: 10:27 AM
We've got it running at Lancaster: https://fal-pygrid-30.lancs.ac.uk/dpm/lancs.ac.uk/home/atlas/storagesummary.json

From Daniel Traynor to Everyone: 10:27 AM
Json is for webdav

From Me to Everyone: 10:29 AM
https://monit-grafana.cern.ch/d/000000425/default?orgId=20

From Ste to Everyone: 10:35 AM
https://twiki.cern.ch/twiki/bin/view/EGEE/WLCGISEvolution

From Me to Everyone: 10:38 AM
http://www.ogf.org/documents/GFD.201.pdf

From Daniel Traynor to Everyone: 10:48 AM
Our new storage is delayed due to shortages of Intel CPUs! They have had to upgrade our CPU option (Silver 4210 -> 4214) to get a sensible delivery date. We went with hardware RAID for our storage this time. Performance is quoted to be better with hardware RAID than ZFS for Linux Lustre.
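(Appendix.) To illustrate the storage summary discussed in item 3 - a GLUE-inspired JSON file that a site publishes on its SE and that experiments read back - here is a minimal sketch of reading one and totalling up space. The exact schema was not recorded in these minutes, so the field names below ("storageservice", "storageshares", "totalsize", "usedsize") are assumptions for illustration only, not the agreed format; check the StoRM how-to or a live example (e.g. the Lancaster URL in the chat) for the real layout.

```python
import json

# Illustrative storage summary; field names are assumptions for this sketch,
# not the agreed GLUE-inspired schema.
SAMPLE = """
{
  "storageservice": {
    "name": "se.example.ac.uk",
    "implementation": "DPM",
    "storageshares": [
      {"name": "atlas", "totalsize": 500000000000, "usedsize": 320000000000},
      {"name": "lhcb",  "totalsize": 100000000000, "usedsize":  40000000000}
    ]
  }
}
"""

def summarise(report_text):
    """Return (total, used) bytes summed over all storage shares."""
    service = json.loads(report_text)["storageservice"]
    shares = service.get("storageshares", [])
    total = sum(s["totalsize"] for s in shares)
    used = sum(s["usedsize"] for s in shares)
    return total, used

total, used = summarise(SAMPLE)
print(f"total={total} used={used} ({100 * used / total:.0f}% full)")
```

In practice a site would serve this file over WebDAV at an agreed path (as in Dan's note that "JSON is for WebDAV"), and the experiments or CERN's monit pipeline would fetch and aggregate it rather than reading a local string.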