Attending: Sam, Jens (chair+mins), Daniel, Marcus, Luke, Elena, John H, Winnie, Steve, Matt, Govind, Brian

0. Operational blog posts

We're nearly 1/3 through the quarter and the number of blog posts is 0...

No operational issues. Storage systems should not be vulnerable to the very popular CVE.

1. Report from WLCG/CHEP (Marcus? David?)

Marcus gave a report from CHEP and HEPiX (didn't attend the WLCG workshop) - discussion/comment points in []:

* Need to combine resources, e.g. traditional grid/compute infrastructure with other HPC, other infrastructures, or public or private clouds. [And even if we don't need it, it gives the users more flexibility, as in BYOC.]
* Big data tools used for HEP.
* Ian Fisk on managing large storage - can we learn from the large-scale commercial providers? (Even if they have ~10-15 EB, they are still two or three orders of magnitude beyond us.)
* Also need to keep control of data. [I guess it is a problem if it gets replicated a lot...]
* Data analysis - optimise workflows for data:
  - data reduction facilities [like a T1 preprocessing?] to ensure user analysis has reduced data access needs [and presumably so it can do common preprocessing] [similar to the model LHCb is moving away from...?];
  - decouple storage and processing [as in T2C and T2D] [although WAN data access will only work up to a point, e.g. a CSP would not decouple their storage from compute - it's a feature that they are "close" within the given datacentre];
  - future T1-3s to "disappear", or the distinction to become blurred, and become more like an "infrastructure" [as in the original Foster/Kesselman vision of the grid - either we are finally proposing to implement it or, more likely, we have come full circle] [also, the success of WLCG lies partly in having very regimented data models; one could continue those in a decentralised way, but it may become more difficult to keep track of - let's not throw the baby out with the bathwater].
* Reports on use of CEPH - only used for replicas; easy for users but admin-intensive.
* SRM and AFS (expected to be) phased out.
* ML: aiming to predict data access patterns, e.g. popular files. The proof will be in the pudding.
* xrootd metalink support - redirectors.
* xrootd "extreme copy", where a copy is striped from different sources; a fast pipeline may send more stripes if available. (A toy illustration of the striping idea is sketched after the minutes.)
* xrootdfs - Unix or GSI access.
* Network transfer tests got ~200 Gb/s with TCP; the bottleneck is in storage, where a parallel filesystem on SSDs "only" got 80-90 Gb/s.
* Posters: interest in erasure coding (EC); DESY will redo ZFS tests. ALICE-supporting site to test random reads on ZFS? Also, ALICE data get good compression performance. (A quick EC-vs-replication overhead comparison is sketched after the minutes.)

HEPiX covered much the same topics but at a more technical level.

DPM 1.9-as-a-cache: script-based staging; will work remotely with e.g. GridFTP, redirecting.

2. Report from the data intensive workshop (Brian?)

Whatever we were supposed to hear here is postponed until next week.

3. Coming workshops (incl. the DPM one)

Cloud workshop at the Crick, cf. the grid/cloud hybrid discussed above: see link in chat.

DPM workshop: Sam has had feedback and would still welcome more.

4. AOB

NOB
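To make the "extreme copy" point above a bit more concrete, here is a toy sketch of the striping idea. It is not xrootd code: it mimics the mechanism with plain HTTP range requests in Python, and the replica URLs, stripe size, and round-robin scheduling are all made-up assumptions (a real client would feed more stripes to whichever source is currently fastest).

    # Toy illustration of striping one copy across several sources.
    # Not xrootd; uses HTTP range requests and hypothetical replica URLs.
    import concurrent.futures
    import urllib.request

    SOURCES = [
        "https://replica1.example.org/data/file.root",  # hypothetical replicas
        "https://replica2.example.org/data/file.root",
        "https://replica3.example.org/data/file.root",
    ]
    STRIPE = 64 * 1024 * 1024  # 64 MiB per stripe (arbitrary choice)

    def file_size(url):
        """Ask one source for the total file size via a HEAD request."""
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            return int(resp.headers["Content-Length"])

    def fetch_stripe(job):
        """Fetch bytes [start, end] of the file from the given source."""
        url, start, end = job
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    def extreme_copy(dest):
        size = file_size(SOURCES[0])
        # Round-robin stripes across the sources; a real client would instead
        # keep sending more stripes to whichever source is fastest.
        jobs = [(SOURCES[i % len(SOURCES)], off, min(off + STRIPE, size) - 1)
                for i, off in enumerate(range(0, size, STRIPE))]
        with open(dest, "wb") as out, \
             concurrent.futures.ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
            for start, data in pool.map(fetch_stripe, jobs):
                out.seek(start)
                out.write(data)

    if __name__ == "__main__":
        extreme_copy("file.root")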
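Prompted by the CEPH (replica-only) and EC poster points above, a back-of-envelope comparison of raw-to-usable storage overhead for replication versus a k+m erasure code; the 8+3 and 10+4 layouts are just illustrative choices, not figures reported at CHEP.

    # Raw bytes stored per usable byte: replication vs k+m erasure coding.
    def replication_overhead(copies):
        """n full copies cost n bytes of raw storage per usable byte."""
        return copies

    def ec_overhead(k, m):
        """k data + m parity chunks cost (k+m)/k raw bytes per usable byte."""
        return (k + m) / k

    if __name__ == "__main__":
        print("3x replication:", replication_overhead(3), "x raw per usable byte")
        print("EC 8+3:        ", round(ec_overhead(8, 3), 2), "x raw per usable byte")
        print("EC 10+4:       ", round(ec_overhead(10, 4), 2), "x raw per usable byte")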
Chat log:

Daniel Peter Traynor: (26/10/2016 09:59:05)
hi
except we are moving to fewer storage sites.
https://indico.cern.ch/event/505613/contributions/2227410/attachments/1345151/2039452/Oral_74.pdf
talk from BNL: 70% of the cost is irreducible hardware cost

Jens Jensen: (10:27 AM)
See also https://www.eventbrite.com/e/rcuk-cloud-working-group-workshop-tickets-27722389413

Matt Doidge: (10:31 AM)
Late to the last point - if we do need "cheap node type Y", a lot of our institutions can provide that on their mini-cloud/VM infrastructures.

Jens Jensen: (10:35 AM)
We do have use cases for cheap nodes with not much load - e.g. development, or responding to occasional web queries.

Daniel Peter Traynor: (10:37 AM)
networks are cheap

Lukasz Kreczko: (10:38 AM)
Can I have 100 Gbit/s then?

Daniel Peter Traynor: (10:38 AM)
As long as you are not buying Cisco, yes. We were looking at optics yesterday; it's affordable. But since our university uses Cisco, the module is 35K list price.