Attending: Sam, Jens (chair+mins), Wenlong, Teng, Vip, Winnie, Matt

0. OBP NOP NBE

1. Summary of hepsysman storage discussion?

Matt's slides: https://indico.cern.ch/event/859095/contributions/3623681/attachments/1937351/3212098/FancyFilesystems.pdf

- People often find themselves with "big storage boxes". Glasgow's most ancient server, now being decommissioned, had 20 TB, which was a lot of space back in the day but is now practically a single drive.
- Some healthy scepticism towards XCache. ChrisB noted his use of dCache, as it can comfortably serve both grid and non-grid use. Supporting non-grid users is a recurring theme which may also be relevant for IRIS.
- ZFS in Lustre: good idea? There is a use case for sharing ZFS skills, for example setting it up to minimise reboot times. Matt has JBODs with 1 GB of RAM per 1 TB of disk; Sam points out the need to spec RAM and cores per ZFS OSD, particularly with erasure coding and its recovery/rebuild from errors (see the sketch after this list). Machines with those specs could be repurposed if needed.
- Use of tokens. Stuff comes out of the US in what one might diplomatically describe as a devops anti-pattern.
- Implementing and communicating data resilience: "custodial" is not necessarily just for tape, and user expectations matter. Object stores can also cause confusion for some users.
- More work may also be needed on CVMFS for data. DLS did not originally pick it up due to the (then) lack of security, and it is not clear whether anyone is pursuing it with DLS at the moment in the post-Catalinian era.
- Need for managing input nodes/doors, and output: access for clusters etc. DPM generally expects every node to have a publicly accessible IP address.
- Also some general questions about BeeGFS (in its various spellings): how it is set up and how well it scales. Similarly Lustre skills, such as using DKMS to rebuild kernel modules when required.
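As an illustration of the sizing arithmetic above, here is a minimal sketch of the 1 GB RAM per 1 TB disk rule of thumb. The per-OSD core count and the rebuild/recovery headroom figure are illustrative assumptions, not numbers agreed in the meeting.

    # Rough ZFS server sizing from the rule of thumb discussed above.
    # Assumptions (not from the meeting): 2 cores per OSD and a 25%
    # RAM margin for rebuild/recovery are illustrative placeholders.

    def zfs_server_spec(n_disks, disk_tb, gb_ram_per_tb=1.0,
                        cores_per_osd=2, n_osds=1, rebuild_margin=0.25):
        """Return (ram_gb, cores) for a JBOD of n_disks * disk_tb."""
        raw_tb = n_disks * disk_tb
        ram_gb = raw_tb * gb_ram_per_tb * (1 + rebuild_margin)
        cores = cores_per_osd * n_osds
        return ram_gb, cores

    # Example: a 24 x 12 TB JBOD needs roughly 288 GB RAM before margin.
    if __name__ == "__main__":
        ram, cores = zfs_server_spec(n_disks=24, disk_tb=12)
        print(f"RAM: {ram:.0f} GB, cores: {cores}")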
2. I wanted to pick up on a possibly related thing which dates back to Oliver's overview given to the PMB. The main topic is essentially how GridPP is represented internationally, specifically on the DOMA activity groups. Sam's suggestion is that not only should we have more people involved in DOMA, ideally actively involved, but there should also be more frequent reporting on DOMA activities in the Storage and Data Management group. Alessandra is involved with one group (access) and Teng with another (QoS).

As a related issue, there has been a samizdat call for participation in "TPC smoke tests", going out to DPM sites. As Sam points out, this is not necessarily the best approach, as most DPM sites would behave similarly and it would be more fruitful to smoke-test a diverse range of sites. It would also be better to circulate the call to arms on the storage list, so others could contribute if available/capable/willing.

On a not unrelated note, DPM sites should consider contributing to the DPM whitepaper. Sam has already contributed, but points out that his opinions may not reflect those of the project as a whole, so people should feel encouraged to weigh in, particularly if they disagree with Sam ;-P

3. (DPM sites) any DOME issues?

DOME upgrades: Edinburgh eventually upgraded to DOME, albeit on CentOS 6, as doing both in one hop is too much of a mouthful [mixing my metaphors]. They are considering upgrading to CentOS 8 next year, bypassing 7, if possible. Oxford's status is much the same: currently on 1.12 and considering upgrading to 1.13. This step (as reported by Brunel and Lancs) should be easy, merely a question of yum upgrading and restarting services (see the sketch at the end of these minutes). Vip will take advantage of downtime associated with power supply work in the coming weeks.

4. Docs... (and next quarter, i.e. 2020)

Postponed, as we'd run out of time.
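For reference, a minimal sketch of the "yum upgrade and restart services" step mentioned under item 3, written as a small Python wrapper since the exact commands were not minuted. The package globs and service names below are assumptions and should be checked against the DPM documentation and the node's own configuration before use.

    # Sketch of the DPM 1.12 -> 1.13 step reported by Brunel and Lancs:
    # yum-upgrade the DPM packages, then restart the relevant services.
    # Package globs and service names are assumptions; systemctl assumes
    # CentOS 7+ (on CentOS 6 it would be "service <name> restart").

    import subprocess

    PACKAGE_GLOBS = ["dmlite*", "dpm*"]                    # assumed package name patterns
    SERVICES = ["httpd", "dpm-gsiftp", "xrootd@dpmredir"]  # assumed example services

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def upgrade_dpm():
        run(["yum", "-y", "update"] + PACKAGE_GLOBS)
        for service in SERVICES:
            run(["systemctl", "restart", service])

    if __name__ == "__main__":
        upgrade_dpm()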