Attending: Dan, RobC, Wenlong, Jens (chair+mins), Teng, Vip, Sam, Winnie, Matt, Patrick, Duncan, Ste

OK, so your organiser messed up the agenda, having forgotten not just about the EOS talk but also the Crick workshop next week.

Most of the meeting today was spent going through Wenlong's and RobC's presentations of EOS @Ed. Things to note:

* It is a container-based deployment, considered easy to set up. It can, however, be configured incorrectly, which would make it run slowly.
* For optimal configuration, experience could be built up in the community, as we have with DPM.
* EOS essentially writes at disk speed; however, the 6,4 erasure-coding layout (six data blocks plus four parity blocks per stripe) is unusual, and it may be worthwhile testing other parities.
* Also worth testing: rebuilding parity blocks vs rebuilding data blocks.
* EOS is believed to use the jErasure library for its erasure coding.
* Different redundancies can be configured (and reconfigured) and are selected by path. However, changing the redundancy does not re-encode existing data.
* If/when considering EOS as an alternative, note that there are also other alternatives, such as HDFS or Lustre.

The DOMA Access meeting is on Tuesday next week. Teng asks for input from sites running xcache, particularly Sheffield, Cambridge, and RALPP (if RALPP still has an xcache setup). The draft for DOMA Access is now fleshed out a bit more; we will not put the link in the minutes because it is publicly writeable, but contact Jens or Teng for the link if you haven't got it.

Note also the annual cloud workshop at Crick next week; Dan and Duncan will be attending and can hopefully give us a summary (presentations are usually not uploaded very quickly).

Chat log:

10:04:59 From Jensen, Jens (STFC,RAL,SC) : [link]
10:05:40 From Vip (Oxford) : https://cern.service-now.com/service-portal/article.do?n=KB0003846
10:10:47 From Sam Skipsey : So, my understanding is that the "filesystems are flexible" just because all EOS is doing is using xrootd to write to posix-compatible storage volumes, yes?
10:11:41 From Sam Skipsey : It's an external library, I think, Jens.
10:13:43 From rob-currie : @Sam yes, that's pretty much the case
10:14:02 From Sam Skipsey : Did you experiment with different EC configurations?
10:14:45 From Sam Skipsey : 6,4 is a considerably larger parity overhead than we use currently in T2 storage; and longer stripes are usually significantly more effort to reconstruct.
10:15:53 From Sam Skipsey : Is this *write* performance or *read* performance? It's not clear on this slide.
10:16:20 From rob-currie : Write performance from these to disk
10:17:05 From Vip (Oxford) : how slow is it compared to others?
10:24:59 From rob-currie : @Vip We managed to reach similar speeds to ZFS when reading from a plain EOS, so not using RAIN/RAID, but using RAIN/RAID seemed to have some overhead which we've not fully understood. The number of iops was higher than expected, but this could be due to metadata or reading the parity in parallel to the data being read…
10:53:17 From Duncan Rand : https://cloud.ac.uk/
10:53:30 From Jensen, Jens (STFC,RAL,SC) : https://cloud.ac.uk/ukri-cloud-workshop-2020-call-for-participation/
10:54:26 From Jensen, Jens (STFC,RAL,SC) : [link]
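As a footnote to the parity discussion: the trade-off Sam raises (6,4 vs smaller parity overheads) comes down to simple arithmetic on the stripe layout. The sketch below is illustrative only; the comparison layouts other than 6,4 are assumptions, not configurations anyone reported testing.

```python
# Sketch: storage overhead and failure tolerance of a k,m erasure-coding
# layout (k data blocks + m parity blocks per stripe), as discussed for
# EOS RAIN. The 6,4 layout from the meeting is compared with two smaller
# parity overheads chosen for illustration (assumed, not from the talk).

def ec_overhead(k: int, m: int) -> tuple[float, int]:
    """Return (raw-to-usable storage ratio, tolerable block losses)."""
    return (k + m) / k, m

for k, m in [(6, 4), (10, 2), (4, 2)]:
    ratio, losses = ec_overhead(k, m)
    print(f"{k},{m}: {ratio:.2f}x raw storage, survives {losses} losses per stripe")
    # e.g. 6,4: 1.67x raw storage, survives 4 losses per stripe
```

This makes Sam's point concrete: 6,4 costs 1.67x raw storage against 1.2x for a RAID-6-like 10,2, and reconstruction work also grows with the stripe width.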