Attending: Jens (chair+mins), Winnie, Marcus, Gareth, Tom, Duncan, John B, Daniel, Sam, Steve, Ewan, Pete, Raja, Elena
Special guest star: Matthew Mottram (QMUL/SNO+)
Apologies: Brian is on leave

0. Operational blog posts

It would be interesting to have a blog post on gfalFS... Daniel has his installation on a VM, not stress tested.

What happened with the Oxford climate change work? They were meant to write files to DIRAC storage. We need to work on this, also for LIGO. There is an email thread with the discussion; Sam has also informed Janusz and Daniela. In the case of LIGO, they had their own file catalogue, so we need to figure out how to make use of it.

1. Suggestions for things that we want the users to achieve and report on in Pitlochry? (cf. discussion last week)

- LIGO - mostly jobs (Andrew Lahiff), not so much data
- LSST (Marcus) - not sure what to aim for
- DiRAC (Lydia) - maybe finish Durham?!

2. SNO+ data model and GridPP support discussion (Matt joined for this item)

The primary (30 TB) and backup servers are underground at SNOLAB in Sudbury, ON. The aim is to move data to McGill and from there out to US and European sites, to avoid overloading the Sudbury networks, using FTS at RAL. There will be data production and processing at eight GridPP sites. Should SNOLAB itself have an SE? McGill and Alberta are SEs, as is Chicago, but the latter is not visible yet.

Expecting 50-100 TB/yr, but this has not started yet; the current allocation at RAL is 50 TB of tape, which needs to be transferred back to disk for processing. Expecting about 100 members of the VO, with one production account. Skimmed file production at T2s, ideally transferred by each institution (~20) rather than by each user (~100). ...so it's similar to DiRAC? We should send the document around.

In general, it's best to have the GridFTP server outside the site firewall, but it *should* work inside it, too. If the allocated bandwidth isn't too high, maybe it doesn't matter. There is a planned network upgrade. Scripts transfer and register files into an LFC (a minimal sketch is appended after the chat log below).

How to get grid certificates to SNOLAB? Both Matt and one of the locals have root access and certificates from, respectively, the UK and OSG, so either one could get a host certificate. Ewan also suggests testing on a machine local to QMUL first.

As regards security, Adler32 checksums are widely used but do not provide "cryptographic" security (i.e. against a malicious adversary); see the note appended at the end. Ewan suggests that if they need *confidentiality* then it's best to encrypt the data at rest and transfer it in encrypted form. There is in fact a precedent for this: the biomed VO used to do it, and the key to unlock an encrypted medical image was kept in the metadata server (with suitable access control).

3. AOB

- Successful usage of the DFC? Maybe we only hear about the problems? Are some SEs not getting stuff?
- NAT HOWTO - storage system behind a NAT
- Unavailable SEs should not be visible in the current version of DIRAC - check this
- Congratulations to Paul M

Chat log:

Daniel Peter Traynor: (24/02/2016 10:01:09)
have we blogged about gfalFS in the past?

Ewan Mac Mahon: (10:04 AM)
They've got a comments section. Anyone can leave a peer review if they'd like.

Jens Jensen: (10:04 AM)
...nobody's ever commented on a post... in our blog

Ewan Mac Mahon: (10:05 AM)
I haven't heard anything outside the mailing list. Get them to report via the medium of blog posts?

Tom Whyntie: (10:14 AM)
@Sam: with respect, I've just checked the email thread and the last I heard it was working.
@Sam: I didn't receive any updates after that
@Sam: please don't say I knew something when I didn't - thanks

Samuel Cadellin Skipsey: (10:16 AM)
@Tom: huh, I apologise - he did (although only to a subset of storage - which I don't think we understood).

Tom Whyntie: (10:16 AM)
@Sam: Thanks.

Samuel Cadellin Skipsey: (10:16 AM)
@Tom Anyway, I apologise for the mis-remembering. It would be interesting to know why it never worked for Paul for some endpoints (which is, I think, what I misremembered as it "not working" still).

Tom Whyntie: (10:17 AM)
@Sam: There's also no update on the Incubator VO wiki: https://www.gridpp.ac.uk/wiki/GridPP_VO_Incubator#LIGO

Samuel Cadellin Skipsey: (10:17 AM)
To be fair, I think that wiki post-dates the conversation, doesn't it?

Tom Whyntie: (10:18 AM)
@Sam: Indeed - I can send a catch-up email to check the status, and inform Paul of the latest support stuff (e.g. GRIDPP-SUPPORT etc.)
@Sam: I'm not sure how much that wiki page is updated ;-)

Samuel Cadellin Skipsey: (10:19 AM)
I think it would be good, in general, to get Paul to post on GRIDPP-SUPPORT (current LIGO conversations seem to be in an email chain between me, Paul and Andrew Lahiff, which Paul started...)

Tom Whyntie: (10:19 AM)
@Sam: Absolutely

Samuel Cadellin Skipsey: (10:19 AM)
We should probably at least let Catalin know, given he's the GridPP rep for LIGO ;)

Duncan Rand: (10:20 AM)
Sounds like Sudbury should be upgraded.

Peter Gronbech: (10:22 AM)
Current GridPP sites running SNO+ are RAL, QM, RHUL, Lanc, Liv, Sheff, Ox and Sussex

Duncan Rand: (10:22 AM)
Is there a computing model document?

Ewan Mac Mahon: (10:23 AM)
My understanding of Snolab is that it's an icy hole in the middle of nowhere, so upgrades might be tricky.

Samuel Cadellin Skipsey: (10:23 AM)
@Tom but that LIGO thread does remind me that Paul's problems were, amongst other things, NATing v bridging on VMs. So it's worth looking at for our current problem with a user and DFC transfers...

Ewan Mac Mahon: (10:24 AM)
Clearly, if the network can't cope with the data rates at all, there's no alternative, but if it can, then experience suggests that FTS is quite good at efficiently exploiting what bandwidth is available.

Daniel Peter Traynor: (10:28 AM)
e.g. https://fasterdata.es.net/science-dmz/

Peter Gronbech: (10:28 AM)
Duncan, you are very quiet

Tom Whyntie: (10:31 AM)
@Sam: Yes - and thank you for the reminder - it was all the way back in June, after all...
@Sam: writing email now

Samuel Cadellin Skipsey: (10:31 AM)
@Tom: I replied to Mamun with a suggestion based on this.

Tom Whyntie: (10:31 AM)
@Sam: On GRIDPP-SUPPORT?

Samuel Cadellin Skipsey: (10:32 AM)
@Tom: I hope so, let me check what my reply-all hit. Yes, GridPP-Support, but literally a minute ago, so it might not have filtered through yet.

Tom Whyntie: (10:33 AM)
Ah, right
@Sam: OK, but not on the previous thread. Cool.
@Sam: Got it - just received now (I hadn't heard anything since, so I'd assumed he was liaising directly with lcg-admin@imperial.ac.uk to fix it or had got it working... ;-)

Samuel Cadellin Skipsey: (10:35 AM)
Yeah, one problem is that people often just Stop Talking
(Which did also happen with Paul, to be fair)

Daniel Peter Traynor: (10:36 AM)
can we do it?
Yes we can (probably)

Duncan Rand: (10:42 AM)
http://dashb-fts-transfers.cern.ch/ui/#date.interval=10080&m.content=%28efficiency,successes,throughput%29&server=%28bnl,cmsfts3.fnal.gov,fts.hep.pnnl.gov,fts3-pilot.cern.ch,fts3.cern.ch,lcgfts3.gridpp.rl.ac.uk%29&vo=%28snoplus.snolab.ca%29
There don't appear to be any snoplus transfers at the moment.

Ewan Mac Mahon: (10:54 AM)
Aww. It'll be the best monitored baby ever.

Daniel Peter Traynor: (10:55 AM)
IPv6

Ewan Mac Mahon: (10:55 AM)
Well quite. But I wouldn't put it past a lot of university network admins to try NATting that too.
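
Appended note (item 2, transfer scripts): a minimal sketch of the kind of transfer-and-register step discussed, assuming the gfal2 Python bindings and the lcg-utils client are installed and that LFC_HOST points at the VO's catalogue. The local path, SE hostname, SURL and LFN below are made-up placeholders, not the actual SNO+ names.

    #!/usr/bin/env python
    # Illustrative sketch only: copy one raw file to a grid SE with the gfal2
    # Python bindings, then register the resulting replica in the LFC via
    # lcg-rf. Hostnames, paths and the LFN are hypothetical placeholders.
    import subprocess
    import gfal2

    SRC = "file:///data/snoplus/run_000001.zdab"  # local file (placeholder)
    DST = ("srm://se.example.ac.uk/dpm/example.ac.uk/home/"
           "snoplus.snolab.ca/raw/run_000001.zdab")  # placeholder SURL
    LFN = "lfn:/grid/snoplus.snolab.ca/raw/run_000001.zdab"  # placeholder LFN

    ctx = gfal2.creat_context()
    params = ctx.transfer_parameters()
    params.overwrite = False
    params.checksum_check = True   # compare source/destination checksums (Adler32 by default)
    ctx.filecopy(params, SRC, DST)

    # Register the new replica under an LFN in the LFC (assumes LFC_HOST is set
    # to the VO's LFC).
    subprocess.check_call(["lcg-rf", "--vo", "snoplus.snolab.ca", "-l", LFN, DST])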
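
On the Adler32 point under item 2: a small Python illustration of the difference between the Adler32 checksum used for transfer verification and a cryptographic digest such as SHA-256. Neither gives confidentiality - for that the data would have to be encrypted before upload, as the biomed VO used to do.

    # Adler32 (as used for grid transfer verification) catches accidental
    # corruption; SHA-256 is the sort of digest you would want against
    # deliberate tampering. Neither hides the data itself.
    import hashlib
    import zlib

    def adler32_hex(path, chunk=1024 * 1024):
        """Rolling Adler32 of a file, as an 8-digit hex string."""
        value = 1  # Adler32 seed
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                value = zlib.adler32(block, value)
        return "%08x" % (value & 0xFFFFFFFF)

    def sha256_hex(path, chunk=1024 * 1024):
        """SHA-256 of a file, hex-encoded."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                digest.update(block)
        return digest.hexdigest()

    if __name__ == "__main__":
        import sys
        for p in sys.argv[1:]:
            print(p, "adler32:", adler32_hex(p), "sha256:", sha256_hex(p))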