Attending: Jens (chair + minutes), John B, Matt, Winnie, Steve, Marcus, Luke, Rob, Sam, Brian, John H, Raja, David, Govind, Tom
Apologies: Daniel

0. Operational blog posts

Good posts from Marcus on ZFS. [Is STFC in purdah, which would mean STFC staff can't blog?]

1. Storage-related CHEP abstracts submitted by people from the group? (It's a reporting metric, but may be of interest to others.)

Sam has two; Daniel reports two, Marcus one.

2. Summary of GridPP36 (everyone who went)?

Obviously Sam's talk [1], Tom's talk [2], Duncan's talk [3].

[1] https://indico.cern.ch/event/477023/contributions/1155267/attachments/1258435/1858772/GridPP36StorageGroup.pdf
[2] https://indico.cern.ch/event/477023/contributions/1155264/attachments/1257011/1856075/twhyntie_GridPP36-new-user-case-study_v1-1.pdf
[3] not available at the time of writing

AAA CMS T2C test: 1000 slots need a 10 Gb/s connection (a back-of-envelope version of this arithmetic is sketched at the end of these minutes). ATLAS no longer benefits so much from direct I/O, as whole-file access is needed.

T2C testing at Oxford: arriving at a "less distributed" model, with access to a single remote SE, cf. UCL. Manchester-Glasgow-Queen Mary-Lancaster: Lancaster is "further north than Glasgow", since they route via Glasgow(!)

What about disks at T2Cs? They could obviously continue to run as cache, so the need to recover from disk failures is presumably reduced.

Speaking of cache, we are testing ARC caches at Durham: the idea is to make Rucio aware of the contents of the cache so it can optimise accordingly. The ARC cache provides a POSIX interface plus an interface for Rucio to see what's in it (an illustrative cache-listing sketch is appended at the end of these minutes). Used for ATLAS; testing with David Cameron. Compare to non-cache usage at Durham (and document it!)

3. Summary of DataCentreWorld (and cloud expo)

Jens attended the expo (apparently about a third the size of SuperComputing), which is Data Centre World with some cloud attached, and this year also cloud security and IoT.

* Spoke to our friends at Boston; maybe get Dave Power to talk to us about technology.
* Optical fibre interconnects, apparently denser switching than ever before! And the fibres are pink! Yay! Wires for physical data centre hosting - we tend to do the allocation in more virtual ways.
* FitSM as an ITIL-lite, "focusing on the useful bits", so it can be done without shelffuls of books, but it still has training and certification. It is actually open source - CC-BY-SA.
* IPMI replacement: Redfish (DMTF). (An illustrative Redfish query is appended at the end of these minutes.)
* Do SDN and NFV relate to Science DMZ? And if so, how?
* ZFS "alive and well", thank you very much. The company can provide hardware, software only, or support only.
* 480 TB in a disk server... you know you want one. Cannot find the price list; maybe I didn't get one to take away.
* Our friends from SixSq (namely Cal and MEB et al.) have a cloud-in-a-box, fanless, with heatsink and wireless antennae. Like others, they specialise in adapting applications to clouds in a vendor-neutral way, so there is no lock-in.
* Intel NVMe glossy, www.intel.com/ssd
* Also from Intel, SDS "solutions", www.intelserveredge.com/intel-cloud-block-vsan
* AAEON (an ASUS company) switches, www.aaeon.com
* Loads of sexy little IoT things...! Which we can pick up later/elsewhere if people are interested.

4. AOB

NOB

Chat excerpts:

Matt Doidge: (20/04/2016 10:14:34) Almost all my hardware interventions are replacing disks or soothing RAID cards.
Tom Whyntie: (10:16 AM) Sorry I'm late - half an ear as I'm babysitting from home.
Steve Jones: (10:22 AM) BTW: in case of doubt, I think a good shakedown cruise with ARC caching would be a really great thing, esp. if the report compares the results (in a big sense) with the current baseline, e.g. at CHEP!
Govind: (10:24 AM) by Dell
Samuel Cadellin Skipsey: (10:24 AM) by Dell, surely?
Lukasz Kreczko: (10:29 AM) nice cabling
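
Appendix A: back-of-envelope T2C bandwidth sketch. This is a minimal illustration of the figure quoted under item 2 (1000 slots needing 10 Gb/s), which implies roughly 10 Mbit/s of sustained input per running job slot; the per-slot rate is derived from those two numbers rather than stated independently, and the other slot counts shown are purely illustrative.

```python
# Back-of-envelope WAN bandwidth estimate for a diskless (T2C) site running
# remote-I/O jobs. The per-slot rate of ~10 Mbit/s is inferred from the
# meeting's figure of 1000 slots -> 10 Gb/s; it is an assumption, not a
# measured value.

def required_wan_bandwidth_gbps(job_slots, mbit_per_slot=10.0):
    """Sustained WAN bandwidth (Gbit/s) for `job_slots` concurrent jobs,
    each streaming `mbit_per_slot` Mbit/s of input."""
    return job_slots * mbit_per_slot / 1000.0

if __name__ == "__main__":
    for slots in (500, 1000, 2000):   # illustrative farm sizes
        print(f"{slots:5d} slots -> {required_wan_bandwidth_gbps(slots):.1f} Gb/s")
```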
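
Appendix B: illustrative cache-contents listing. The Durham ARC cache work (item 2) relies on exposing what is currently cached so that Rucio can take it into account. The sketch below only illustrates the general idea of publishing a POSIX-visible cache as a flat JSON listing; the cache path, directory layout, and output format are assumptions for illustration and are not the actual ARC cache index or Rucio interfaces.

```python
# Illustrative only: walk a POSIX-visible cache directory and dump a JSON
# listing that an external data-management system could consume.
# CACHE_ROOT and the record format are hypothetical.

import json
import os

CACHE_ROOT = "/var/spool/arc/cache"   # hypothetical mount point

def list_cached_files(root=CACHE_ROOT):
    """Return one record per cached file: relative path, size, mtime."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            records.append({
                "path": os.path.relpath(path, root),
                "size_bytes": st.st_size,
                "mtime": int(st.st_mtime),
            })
    return records

if __name__ == "__main__":
    print(json.dumps(list_cached_files(), indent=2))
```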
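
Appendix C: illustrative Redfish query. Redfish (DMTF), mentioned under item 3 as the IPMI replacement, is a REST/JSON interface served by the BMC; the service root and Systems collection paths below are the standard ones, while the host, credentials, and disabled TLS verification are assumptions for the example only.

```python
# Minimal sketch of reading basic system information from a Redfish endpoint.
# BMC address and credentials are hypothetical; verify=False is only for the
# sake of a self-contained example.

import requests

BMC = "https://bmc.example.org"   # hypothetical BMC address
AUTH = ("admin", "password")      # hypothetical credentials

def get(path):
    """GET a Redfish resource and return the parsed JSON body."""
    r = requests.get(BMC + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    systems = get("/redfish/v1/Systems")          # standard Redfish collection
    for member in systems.get("Members", []):
        system = get(member["@odata.id"])         # follow the OData link
        print(system.get("Model"), system.get("PowerState"))
```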