Attending: Brian, Daniel, Duncan, Jens (chair+mins), John B, John H, Marcus, Winnie, Rob, Steve, Raja, Lukasz, Matt, David, Sam, Govind

Apologies: Tom

0. Operational blog posts

No operational issues reported. For once we seem to have enough blog posts? Certainly some very interesting ones with Marcus's ZFS investigations.

1. End-of-quarter things: blog posts, milestones, furlong pebbles, experiments support, the Great ATLAS Arkleseizure

As reporting potentially changes in the coming quarter, perhaps we should revisit the reporting metrics? Suggestions for future metrics, milestones, etc. should be sent to Jens. You may want to think about these if/when you are attending GridPP next week.

2. Items for next week's GridPP

In particular, Duncan suggests we compare the GridPP sites' firewall configurations with the ESnet Science DMZ model - maybe there is a role for looking ahead here. Also, if Duncan attends tomorrow, we can give him feedback on the topics for his presentation, should he so wish, as we did with Sam's and Tom's last week.

Also on the experiments' data: we previously discussed whether there were particular things we would want the experiments to report on next week - as success stories, of course! - although it doesn't look like a huge amount has happened since that discussion. Jeremy has also asked the experiments' GridPP contacts for slides with experiment feedback.

A Science DMZ (see link in the chat log) requires a suitable network architecture, dedicated data transfer nodes, performance measurement, and security policies. In other words, GridPP already provides a Science DMZ. There are questions about end user access - the security policy currently requires X.509 certificates, but some end users (outside of HEP :-) do not like X.509 certificates, so other services like GlobusOnline may be relevant.

3. AOB

Duncan Rand: (06/04/2016 10:05:28)
https://fasterdata.es.net/science-dmz/

Samuel Cadellin Skipsey: (10:09 AM)
It's important to note that "X509 certificates" does not necessarily mean "the way we use X509 certificates" (which was really the tacit undertone of the actual disagreement between Ewan and Steve)

Lukasz Kreczko: (10:11 AM)
like OSG in the US?

Matt Doidge: (10:14 AM)
Lancaster is a "faux DMZ" - we go through the firewall but nothing happens to our traffic. We're thinking of moving to a proper DMZ as we still stress the routers when we're busy.

Jens Jensen: (10:15 AM)
... and Jodrell Bank's e-MERLIN

John Bland: (10:16 AM)
we also have a faux DMZ. But that's not by choice; it's the most we can get from the university

Matt Doidge: (10:17 AM)
The main motivation for us to move is from the network guys - it's their kit melting!

Jens Jensen: (10:17 AM)
http://www.e-merlin.ac.uk/

John Bland: (10:18 AM)
even if there was a dedicated SDMZ link to the uni, I would bet good money it would be commandeered by our CSD and we'd have to go through their systems

Jens Jensen: (10:34 AM)
Unless your old stuff becomes a corner of the new stuff (because the new stuff will have more capacity) ... and JASMIN is something like 2.5 Tb/s from storage to cluster

John Bland: (10:39 AM)
There's always a bottleneck; local disk, local NICs, local switches, local storage. WAN is just another area that may or may not be a limit. Capping is always an issue unless you have a small cluster.

Marcus Ebert: (10:46 AM)
BTW: for the ZFS blog posts, there was an error in there: when "raid1" was mentioned, it should read "raid0" (fixed now).
By the end of the week I will have ZFS running on at least two different DPM client machines in different configurations. If anyone is interested in this, we could have a look at it at the meeting next week, during the breaks for example. [See the note at the end of these minutes.]

John Bland: (10:50 AM)
liberating, surely
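
[Minutes note, re Marcus's "raid0"/"raid1" correction above: in ZFS terms the distinction is between a striped pool and a mirrored pool. A minimal sketch, assuming a placeholder pool name (tank) and placeholder device names (sdb, sdc):

    zpool create tank sdb sdc          # striped vdevs ("raid0"-like): combined capacity, no redundancy
    zpool create tank mirror sdb sdc   # mirror vdev ("raid1"-like): one disk's capacity, survives a disk failure
    zpool status tank                  # show the resulting pool layout

The two pools differ only in whether the devices become independent top-level vdevs, across which ZFS stripes writes, or a single mirror vdev.]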