Present (dropping in and out of the attendance list): Adam, Alessandra, Brian, Chris, David, Ewan, Gareth, Jens (chair+notes), John B, John H, Mark N, Matt, Pete, Rob, Robert, Sam, Steve

Apologies: Wahid

It seems to have been a very quiet week, so it may just be a quiet (or quick) meeting today? (Ha!)

1. Review of last week's GDB: http://indico.cern.ch/conferenceDisplay.py?confId=197803

Three potentially interesting things for us:

EGI after EMI (but we've covered this before?) - note the new "URT", the UMD release team - is there anyone from the UK on this team?

SL6 migration (see Alessandra's presentation) - mostly concerns WNs, not so much the storage side, although you would probably want to move the whole site eventually. There is an SL6 DPM which is working.

xroot stuff (and monitoring) - there is FAX for ATLAS and AAA for CMS, and to monitor how they are working there is an implementation which captures the logs in great detail. There was some discussion of whether VOs ought to be able to see each other's transfers - probably they shouldn't, but there is less need for privacy within a VO. Could we not just filter the data before it is handed over to the VO? That probably depends on whether the experiments watch the data live or have it handed over later. Every xrootd server needs to be configured to report its monitoring data (a commented copy of the directive quoted in the chat log is appended at the end of these notes). Sam, Wahid and Chris are following the discussion on the xroot-fax mailing list.

Technology review - tapes, disks, networks. Interesting stuff: tapes are still progressing and so is disk capacity. Will they trail off now that consumers (home users) increasingly put things in the cloud? SSDs are also advancing (as expected). We have the usual cost/resilience tradeoff: either you advertise your storage as cheap-ish-and-cheerful, or you provide resilience with multiple copies internally, or with RAID/ECC. A CEPH implementation is also being "thought about". ZFS does Reed-Solomon in "extended". There is current research in ECC which looks at the speed/bandwidth required for rebuild (a rough illustration is appended at the end of these notes). It also depends on how the datastore is being used, e.g. as a working repository or as a long-term archive.

2. dCache re-evaluation - quick discussion of the current plans. The T1 D1T0 evaluation will (probably) look at dCache again. Also StoRM status: QMUL is at EMI1 StoRM; EMI3 StoRM is not currently ready for deployment. An updated release is expected "later this month" and will then be tested. Meanwhile, the EMI1 release flags as "error" in the dashboard, even though EMI1 is currently still supported. We are more likely to have deployed EMI3 than to wait for a fix to the broken test. The new developers need to get to know the code base.

3. Coming events: hepsysman, the "big data" workshop in London, updates. Anything else we should aim for?

4. AOB

(Note clock skew by 15 mins in the chat log)

[10:14:15] Alessandra Forti hurray for the fish!
[10:15:16] Sam Skipsey We can hear you though
[10:15:19] Brian Davies we can here you jens
[10:15:21] Ewan Mac Mahon Why does EVO think it's 10:15?
[10:15:29] Sam Skipsey A good question, Ewan
[10:15:37] Sam Skipsey I think because it's basically useless.
[10:16:39] Ewan Mac Mahon First I've heard of it.
[10:16:56] Jens Jensen http://indico.cern.ch/conferenceDisplay.py?confId=197803
[10:18:39] Ewan Mac Mahon I can't help but think this sounds like the sort of pending disaster that we should actually stand well back from, not go and poke.
[10:18:45] Ewan Mac Mahon Like a lit firework.
[10:22:21] Ewan Mac Mahon And on DPM it doesn't separate local from WAN either. Which was funny.
[10:28:34] Ewan Mac Mahon Prthrp!
[10:28:58] Ewan Mac Mahon It's not a system with much reasonable expectation of privacy. Certainly not within a VO.
[10:29:58] Brian Davies is this not enough http://dashb-wlcg-transfers.cern.ch/ui/#date.interval=720&tab=transfer_plots&technology=%28xrootd%29&xrootd.access_mode=%280,1%29&xrootd.access_type=%280,1%29
[10:31:00] Ewan Mac Mahon On ours the line in question says this: xrootd.monitor all rbuff 32k auth flush 30s window 5s dest files info user io redir atl-prod05.slac.stanford.edu:9930
[10:33:04] Ewan Mac Mahon Indeed - that config line comes from a pool server, not the head node.
[10:33:19] Ewan Mac Mahon They all just send everything to Stanford.
[10:34:26] Ewan Mac Mahon Er, not sure I am.
[10:40:46] Steve Jones Ewan - haven't they got 100 gig of movies, like me?!?
[10:41:05] Ewan Mac Mahon No, mostly they stream them from lovefilm/netflix
[10:42:02] Ewan Mac Mahon Or iplayer.
[10:44:36] Ewan Mac Mahon Indeed - Hadoop isn't about storing anything, it's about processing it.
[10:49:29] Ewan Mac Mahon Redundant Array of Inexpensive Grid Sites.
[10:49:49] Ewan Mac Mahon Solomon?
[10:50:27] Jens Jensen Reed-Solomon
[10:50:33] Ewan Mac Mahon There's a talk title in here for someone "The Wisdom of Reed-Solomon"
[10:51:33] Ewan Mac Mahon You can do md raid of network block devices.
[10:51:50] Ewan Mac Mahon You could do that across the WAN if you want. It sounds like fun.
[10:52:10] Ewan Mac Mahon We could build one UK wide lustre and run StoRM on it.
[10:52:37] Ewan Mac Mahon That must count as 'cloud'.
[10:53:48] Christopher Walker Ewan, we should test this.
[10:54:22] Ewan Mac Mahon We'll be sending everyone grid jobs that have iSCSI servers in them.
[10:54:27] Ewan Mac Mahon Please don't kill them.
[10:55:57] Christopher Walker Google Lustre+wan
[10:58:02] Ewan Mac Mahon Yup.
[10:58:09] Ewan Mac Mahon It's t2storm1.physics.ox.ac.uk
[10:58:23] Ewan Mac Mahon I don't have an IPv6 UI yet; that's next on the list.
[10:58:51] Ewan Mac Mahon And that's why I don't know whether the storm is actually accessible yet because I don't have anything to test it from.
[10:59:34] Ewan Mac Mahon Oh, and my storm is EMI2, so it might break for non IPv6 reasons.
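
For reference, here is the xrootd.monitor line Ewan quoted from an Oxford pool server, with comments added giving our reading of the options; this is an annotation by the note-taker, not from the meeting itself, so check it against the xrootd monitoring documentation before reusing it elsewhere (and note it comes from a pool server, not the head node):

    # Pool-server monitoring directive as quoted in the chat; interpretations
    # below are our own reading of the options.
    #   all       - report activity for all paths and users
    #   rbuff 32k - 32 kB buffer for redirect monitoring records
    #   auth      - include authentication/identity information in the user stream
    #   flush 30s - flush buffered monitoring records every 30 seconds
    #   window 5s - aggregate I/O statistics into 5-second windows
    #   dest files info user io redir host:port
    #             - send the file, info, user, I/O and redirection streams to the
    #               collector, here presumably the ATLAS one at SLAC ("they all
    #               just send everything to Stanford")
    xrootd.monitor all rbuff 32k auth flush 30s window 5s dest files info user io redir atl-prod05.slac.stanford.edu:9930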
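
Also for reference, on the cost/resilience point in the technology review: a back-of-the-envelope sketch (plain Python, with made-up block size and code parameters, not numbers from the GDB talk) comparing N-way replication with a classic (k+m) Reed-Solomon layout. Rebuilding one lost block under a classic code means reading k surviving blocks, which is the rebuild speed/bandwidth cost the ECC research mentioned at the GDB is trying to reduce.

    # Illustrative comparison of replication vs classic Reed-Solomon erasure
    # coding; the block size and (k, m) choices are assumptions for illustration.

    def replication(copies, block_mb=128):
        """N-way replication: raw overhead = copies; rebuild reads one replica."""
        return {
            "raw_bytes_per_logical_byte": copies,
            "rebuild_read_mb": block_mb,          # copy back one surviving replica
            "failures_tolerated": copies - 1,
        }

    def reed_solomon(k, m, block_mb=128):
        """Classic (k+m) MDS code: overhead = (k+m)/k; rebuild reads k blocks."""
        return {
            "raw_bytes_per_logical_byte": (k + m) / k,
            "rebuild_read_mb": k * block_mb,      # k survivors needed to rebuild one block
            "failures_tolerated": m,
        }

    if __name__ == "__main__":
        print("3x replication:", replication(3))
        print("RS(10,4):      ", reed_solomon(10, 4))
        # RS(10,4) stores 1.4x raw per logical byte and survives 4 failures,
        # versus 3x raw and 2 failures for triple replication, but rebuilding
        # a single block reads roughly 10x more data off the surviving servers.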