Attending: Winnie, Jens (chairing and minuting), Matt D, Elena, John H,
John B, Robert, Gareth, Sam, David, Ewan, Brian

Apologies: Tom

Jens also apologised for the late agenda, having arrived back home late
last night from travel (a train to Exeter had broken down near Reading,
among other delays).

========================================================================

0. Purdah is lifted (presumably), so Brian and Jens can blog again.

No operational issues reported.

========================================================================

1. Round table of Interesting Things(tm): what you would do if you had
time (stop laughing).

Winnie - DMLite.

Sam - "modern stuff", like distributed and EC storage, object stores,
protocols.

Matt - trying Ceph; integrating it into the university's local clusters.

Elena - updating servers from SL5 to SL6. Mostly needs to spend time on
current stuff.

John H - look at Lustre?; get rid of SL5.

John B - latest DPM version, DMLite. Has many NFS systems and would like
to consolidate them into a single storage system out of which quotas
could be carved. Has looked at Lustre.

Robert - Ceph. There was a talk at HEPiX about HDST drives [meaning
drives you can network to directly?].

David - investigate Ceph and alternatives.

Gareth - content-addressable storage: ipfs and camlistore [see links in
chat].

Ewan - modernise: storage on SL6, puppet. How about ditching it all and
providing Ceph+xroot? Would that make for a "courageous" T2 site?

Brian - monitoring, TCP settings, jumbo frames. Particularly where
features are "lost" because large VOs don't need them (they have their
own stuff) but they are still needed by "small" VOs. Could we have a
single Ceph cluster across all T2s? [there was a similar discussion
somewhere in the ancient days of GridPP, at least within a T2 - and of
course a good number of projects did it, though sometimes with the
benefit of dedicated network resources, like DEISA, or the NDGF dCache
installation]

Jens - back to CDMI again; the SAGE project (H2020) will be looking at
it too, and of course it is required in EGI. Maybe more EC, maybe
accounting for GLUE2. Also integration with other projects such as
EUDAT and SAGE, or DiRAC of course.

------------------------------------------------------------------------

In a quick attempt to pivot this table (or at least identify common
themes), we get:

Ceph (install or use of):     Sam, MattD, Robert, David, Ewan, Brian
Single storage provisioning:  MattD, JohnB
Lustre:                       JohnH, JohnB
SL6 upgrade:                  Elena, JohnH, Ewan
Explore new tech:             Sam, Gareth, David, Robert

So in *theory* people who are interested in the same stuff could work
together and maybe get more done than if they were tinkering with it by
themselves?

========================================================================

2. There is a GDB today, and there seems to be storage and data stuff in
at least Alessandra's presentation and the NDGF one.

http://indico.cern.ch/event/319747/

========================================================================

3. AOB

Brian raised the question of jumbo frames; at least one problem was
caused by a T1 router breaking up jumbo frames. Are sites allowing
jumbo frames? This could be tested as part of perfSONAR; we may need a
table of MTUs per site (a rough sketch of such a probe follows).
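[As a minimal sketch of what such a probe might look like - Linux-only
(it uses IP_MTU_DISCOVER on a connected UDP socket to set the Don't
Fragment bit), and the target hostname is a placeholder, not a real
perfSONAR endpoint:]

    #!/usr/bin/env python
    # pmtu_probe.py - rough sketch of a path MTU probe that could feed
    # a per-site MTU table. Linux-specific.
    import errno
    import socket

    # Values from <linux/in.h>; Python's socket module does not always
    # expose these constants.
    IP_MTU_DISCOVER = 10
    IP_PMTUDISC_DO = 2    # always set the Don't Fragment bit
    IP_MTU = 14           # read the kernel's current path MTU estimate

    def probe_pmtu(host, port=33434, ceiling=9000):
        """Probe towards host with up to jumbo-frame-sized datagrams
        and return the kernel's path MTU estimate for that route."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect((host, port))
        payload = ceiling - 28          # 20 bytes IP + 8 bytes UDP
        while payload > 0:
            try:
                s.send(b'\x00' * payload)
                break                   # accepted at this size
            except socket.error as e:
                if e.errno != errno.EMSGSIZE:
                    raise
                # Too big for the route: retry at the kernel's
                # current estimate.
                payload = s.getsockopt(socket.IPPROTO_IP, IP_MTU) - 28
        mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        s.close()
        return mtu

    if __name__ == '__main__':
        print(probe_pmtu('ps.example.ac.uk'))  # placeholder hostname

[Note this only reflects ICMP "fragmentation needed" replies that have
already updated the kernel's route cache; a production test, e.g. inside
perfSONAR, would resend and wait for those, and would repeat the
exercise over IPv6.]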
========================================================================

Chat log:

Jens Jensen: (13/05/2015 09:54:55)
Good morning Winnie!

Paige Winslowe Lacesso: (09:55 AM)
Good morning Doctor!
Is there anything / any talk in the GDB about storage?

Jens Jensen: (09:59 AM)
Don't know: there is an NDGF one, and a middleware one

Paige Winslowe Lacesso: (09:59 AM)
Can anyone hear me? Does my microphone work? Empirical data suggests
not!
Guess not :(
You sound fine :)
Sorry, I'm still laughing at "if you had time"
Just learn about dmlite in a more organized fashion than in the bits &
scraps in scavenged time
It's supposed to be prod late June & I know so little about it

Ewan Mac Mahon: (10:12 AM)
Ceph as a SAN is an interesting approach - i.e. rather than having it
necessarily directly accessed by ceph-aware clients, you front it with
intermediate layers; so for example you might use cephfs and re-export
it over NFS, or rados block devices and export them over iSCSI, etc.

Samuel Cadellin Skipsey: (10:13 AM)
Well, that's explicitly what Inktank seem to expect you to do. I'm just
interested in the low-level access modes, as they approximate what
Tier2 Grid Storage seems to look like. [a librados sketch is appended
below]

Gareth Douglas Roy: (10:13 AM)
https://camlistore.org/
ipfs.io

Samuel Cadellin Skipsey: (10:21 AM)
(I note, having just looked at Gareth's links, that he should probably
look at Tahoe-LAFS too!)
(and that all of this stuff reminds me of Fossil, which just goes to
show that Bell Labs invented all concepts in computing.)

Ewan Mac Mahon: (10:26 AM)
Breaking PMTU discovery causes weird stuff to happen, and with IPv6 it
causes catastrophically weird stuff to happen. If this is a problem
it's one we need to fix.
(also, add this to the general list of reasons why firewalls are evil)
I wonder if an explicit test for this could be built into perfsonar
somehow
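[Editorial postscript: since several people round the table mentioned
Ceph, and Sam mentioned the low-level access modes, here is a minimal
sketch of flat object put/get via the python-rados bindings. It assumes
a reachable cluster and the usual conffile location; the pool and
object names are made up:]

    #!/usr/bin/env python
    # Sketch of "low-level" object access against a Ceph pool via
    # librados: whole-object put/get plus an xattr - roughly the access
    # pattern of an SE put/get, minus the catalogue and auth layers.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('gridpp-test')   # hypothetical pool
        try:
            ioctx.write_full('lfn-0001', b'hello, object store')
            print(ioctx.read('lfn-0001'))
            # per-object metadata can live in xattrs
            ioctx.set_xattr('lfn-0001', 'adler32', b'12345678')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()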