Present: Tom, David, Gang, Gareth, John B, John H, Raja, Rob, Steve, Wahid-with-a-cold, Jens (chair+mins), Chris, Ewan, Robert, Brian, Jeremy, Raul

0. Tom had joined us to talk about his work as dissemination officer (50%) and CERN@School (50%). 25-35 schools have radioactive sources and are participating in experiments. There is also the LUCID experiment, which will launch on a satellite and gather data in low Earth orbit, and a MEDAL experiment, no less. A web app for schools is planned, along the lines of Galaxy Zoo. Tom will be working on calibration data based on images which are 20 GB each, so the storage requirements are not massive. C@S is currently supported mainly at QMUL, GLA and BHAM; is there a need to ask other sites? Sam points out C@S has been going for a while but not many sites support it. In terms of resources, not many sites are needed, but C@S are ramping up now, and there may be a wow-factor in running something at sites throughout the UK, or, perhaps more importantly, in having something running at a "local" T2 site (eg Lancaster for Preston). Most universities have outreach activities anyway, but there may be a benefit to doing C@S and then visiting the T2, in terms of impact, outreach and engagement, if less so in terms of science and capacity. Tom has also worked with Catalin from the T1 (CVMFS). Chris points out that C@S have so far used only a few hours of CPU, and that there are disadvantages to having multiple sites: sites are not always able to be equally responsive when they have to support many non-local VOs, and making sure that things work at many sites can be time consuming. However, this is the generic resources argument for "small" VOs, and not based on the wow-factor. Sites run masterclasses from time to time; it might be good to engage with one or more of these. One option is a Galaxy Zoo style classification of particle tracks - not just to amuse people, but because there aren't yet algorithms to process them.
Tom needs to know more about metadata and file management. Do we have a tutorial somewhere which is not out of date? Not so much what's in the files, but more whether there are datasets and such. We did have guidance for non-LHC VOs, to be updated from time to time. The conclusions were:
a. Sites should volunteer to support C@S (and really volunteer, not the usual "volunteer").
b. We should check the docs again, for the "small" VOs.
c. Tom should try it out (again), and also see whether new C@S requirements can be documented.
d. There's a discussion to be had about identity management - eg using social media identities to authenticate to portals with restricted capabilities, as in Galaxy Zoo, where you build a reputation as a Galaxy Zoologist rather than signing up with cast-iron credentials initially.
We can then see what we can do, or alternatively get together and sketch some stuff.

1. Regarding DiRAC, there is a requirement/interest "from above" to have the national e-infrastructures interoperating, or at least working together. The ones in question are DiRAC, GridPP, and JASMIN (climate). DiRAC has been talked about before, and Sam had met them a number of times (where the number was 1 in practice). They may be using scp for transfers, and for credentials they use SAFE (https://www.hector.ac.uk/safe/); they have previously looked at SARoNGS - this is one of the things to be discussed in next week's dteam meeting. Could the local DiRACers talk to the local GridPPers? Ie Glasgow and Cambridge (and Durham?) (and UCL - but there don't seem to be resources at UCL) - we need to talk at the level of people doing Real Work(tm) as well as managers and coordinators. Wahid notes that EPCC also have people in EUDAT and RDA (as does STFC) whom it might be worth talking to as well.

2. As regards small VOs, we've talked about CERN@School at some length... Jens, however, asked about LCLS - Diamond have expressed an interest in building on GridPP's expertise in transferring data from Stanford to RAL CASTOR.
Brian will investigate current transfer numbers - SLAC is a T2 or T3 - and Wahid will have a quick look at whether he knows some of the right people; alternatively we will just contact the Usual Suspect. Tom points out it would make a good case for GridPP publicity.

3. Skipped the milestones for now; either we talk about them next week or Jens will make some up.

$-1. Ewan mentioned an IPv6-only DPM at Oxford, t2dpm1-v6 (see link in chat), available for testing - more functional than stress testing at this stage, as the machine is an everything-in-one-box DPM and there may be bandwidth problems with IPv6 in general. Chris is talking to IPv6ers tomorrow. There is a HEPiX IPv6 VO - and a task force led by Dave Kelsey - but it is too early for Oxford to get involved.

$. Brian mentioned a rolled-back FTS3 upgrade - the rollback was done when we hit problems with short proxies (512 bits), which could be fixed by rolling back OpenSSL, which in turn required rolling back FTS. However, CERN appear to have it rolled out (and not back). Chris had also seen failures when both ends had upgraded but a short proxy was used. Jens will look at the GGUS tickets again.

[09:58:35] Wahid Bhimji morning
[09:58:40] Rob Fay good morning
[10:00:38] David Crooks morning
[10:12:05] Wahid Bhimji which schools are involved
[10:12:25] Christopher Walker QMUL, Bham and Glasgow support CERN@school
[10:12:29] Wahid Bhimji ah even in scotland !
[10:16:10] Wahid Bhimji we are doing one soon
[10:16:14] Wahid Bhimji masterclass
[10:24:57] Ewan Mac Mahon I think the thing with data management is that 'use the LFC' is roughly equivalent to 'use a filesystem' - you need to have a scheme based on an understanding of the structure of the data that you may (or may not) then implement on top of an LFC
[10:26:01] Sam Skipsey Indeed, LFCs are basically a filesystem - the problem that we can't fix with tech is knowing how you want to organise stuff.
[10:26:14] Sam Skipsey (Although you *can* add some metadata in LFC entries.)
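Ewan and Sam's point above - that an LFC is essentially a filesystem, and the organisation scheme has to come from an understanding of the data - can be sketched as a simple naming convention implemented on top of the catalogue. The VO root, school name and field layout below are illustrative assumptions, not an agreed C@S scheme:

```python
def lfn_for(school: str, run: int, frame: int) -> str:
    """Build a logical file name for a C@S detector frame.

    The catalogue (LFC or otherwise) just stores the hierarchy;
    the structure itself is a VO-level convention. All names here
    (VO root, 'run'/'frame' fields) are hypothetical.
    """
    return (f"/grid/cernatschool.org/{school}/"
            f"run{run:05d}/frame{frame:06d}.dat")

print(lfn_for("langton", 12, 3))
# -> /grid/cernatschool.org/langton/run00012/frame000003.dat
```

The zero-padded run and frame numbers keep a plain alphabetical listing in chronological order, which is the kind of decision "tech can't fix" - it has to be agreed by the VO before files land in the catalogue.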
[10:27:49] Jens Jensen And - with my other hat - how people authenticate to the infrastructure...
[10:28:29] Christopher Walker Portals was another thought...
[10:29:15] Christopher Walker Stuart Purdie would have been my immediate thought - but he's no longer working for GridPP.
[10:29:41] Sam Skipsey Tom Doherty would have been mine, Chris, but he's also not working for us any more, really.
[10:30:10] Sam Skipsey I assume Janusz is far too oversubscribed?
[10:30:16] Christopher Walker Ah, but he's still sort of involved.
[10:30:49] Ewan Mac Mahon It might be worth talking to Bristol to see what's going on with landslides - they had a portally thing at one point, but the person that wrote it left.
[10:30:57] Ewan Mac Mahon I'm not sure if they're still using it at all.
[10:37:50] Tom Whyntie http://www.jasmin.ac.uk/
[10:38:09] Jens Jensen Ta
[10:38:11] Ewan Mac Mahon We need to talk to someone who's not Jeremy. He's good at what he does, but we need to talk to the DiRAC equivalent of us. Jeremy's more the DiRAC equivalent of Dave Britton.
[10:38:29] Jens Jensen Agreed...
[10:39:17] Wahid Bhimji if there is someone in epcc I can pop down
[10:39:27] Ewan Mac Mahon And we need a mailing list, and a wiki. Obvs.
[10:39:50] Sam Skipsey I'm just looking back through emails to find names
[10:40:42] Sam Skipsey I have an A Turner from EPCC on the last emails I got
[10:41:27] Wahid Bhimji ok - yeah I know who that is - never talked to him
[10:42:32] Ewan Mac Mahon The X509 is pretty much non-negotiable IMO; it's a good technical solution, and all our stuff absolutely needs it, and will do for ages.
[10:42:51] Ewan Mac Mahon We need to get them at least somewhat able to use it as a first step.
[10:43:21] Sam Skipsey X509 is easy, it's stuff like VOMS and things on top of it that make it look hard.
[10:43:34] Sam Skipsey (I might have been ranting on that topic recently ; ) )
[10:45:29] Ewan Mac Mahon It's not too hard if you can give users something that's already set up.
The principle is easy enough to explain.
[10:45:48] Sam Skipsey Well, anyone who uses SSH keys already knows how it works.
[10:46:23] Ewan Mac Mahon Hmm. I'm never too sure how much people really understand SSH keys either.
[10:49:43] Ewan Mac Mahon So it's: t2dpm1-v6.physics.ox.ac.uk
[10:50:30] Jens Jensen There is this famous hepix wg on IPv6 - led by Dave Kelsey
[10:51:08] Raul Lopes btw, dc2-grid-23.brunel.ac.uk is also IPv6. been online (not production) since December. Just 2TB available
[10:53:05] Christopher Walker Thanks Raul - remind me in a couple of weeks.
[10:56:45] Ewan Mac Mahon Oh, and we need to poke the gocdb people about supporting IPv6 addresses in gocdb entries.
[10:56:53] Ewan Mac Mahon Because it doesn't at all at the moment.
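As a first sanity check for IPv6 endpoints like the two mentioned above (and for auditing GOCDB entries once IPv6 addresses are supported there), one can test whether a hostname resolves to an IPv6 address at all. A minimal standard-library sketch; this checks DNS resolution only, not that the service behind it actually works over IPv6:

```python
import socket

def has_ipv6_address(host: str) -> bool:
    """Return True if `host` resolves to at least one IPv6 address."""
    try:
        # Restricting the family to AF_INET6 means a plain IPv4-only
        # host (or literal) raises gaierror rather than returning entries.
        return len(socket.getaddrinfo(host, None, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

# Network-dependent example: the test endpoints mentioned in the meeting.
for host in ("t2dpm1-v6.physics.ox.ac.uk", "dc2-grid-23.brunel.ac.uk"):
    print(host, has_ipv6_address(host))
```

A fuller functional test would then attempt an actual transfer against the endpoint, which is what the "more functional than stress" testing above is about.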