Attending: David, John B, John H, Robert, Tom, Jens (chair + minutes), Henry, Sam, Ewan, Elena. Brian is on leave.

0. Operational blog posts

There are currently two posts, one from Sam and one from Brian. Please consider blogging about it when you do something Interesting(tm).

1. "Small" (i.e. non-LHC) VOs

Henry from MICE was present and gave an update on MICE. They are "close to taking data"; some problems with the physical instrument meant a slow ramp-up. MICE is meant to be doing reprocessing at T2s - IC and Brunel, later Glasgow.

They are using a hardware token to manage the keys for their robot certificate. It works, but was not easy to get working - it is supported on SL6, but on SL7 the version of the OpenSC libraries used was precisely the version that had a bug for precisely the hardware token that MICE are using. They are currently using Fedora Core. The workflow uses a proxy which gets vomsified and then re-delegated. Would others in GridPP be interested in using tokens for managing robot certificate keys? If so, it might make sense to bulk-buy them and have consistent instructions for managing them.

SNO+ were expected to attend the T1 liaison meeting today and agree something [but didn't].

2. Issues from last week

* How do we do long-term data transfers if we need to include VOMS proxies? (Relevant especially to the DiRAC VO immediately, but also to others.) DiRAC can work without them, as they GridFTP from GridFTP to CASTOR and neither end depends on VOMS extensions. Lydia had a model where she tried to re-bless the proxy every so often in order to make sure there was always a "healthy" one. In fact the choice of "healthy" is somewhat arbitrary - FTS3 will reject transfers where the proxy lives less than one hour, and will only redelegate when the server's copy of the proxy lives less than four - there seem to be lots of "magic" numbers in FTS3. Lydia tried to time the redelegation to fit within this magic window - less than four hours but more than one - although one could perhaps just try once an hour and have the redelegation fail if it is not needed.
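The "try once an hour" approach above can be sketched as follows. This is a minimal sketch of the timing logic only; the one-hour and four-hour thresholds are the "magic" numbers quoted in the discussion and may differ in a given FTS3 deployment.

```python
from datetime import timedelta

# Thresholds as quoted in the discussion (assumed FTS3 "magic" values):
# transfers are rejected if the delegated proxy lives less than one hour,
# and a re-delegation is only accepted once the server's copy lives less
# than four hours.
REJECT_BELOW = timedelta(hours=1)
REDELEGATE_BELOW = timedelta(hours=4)

def transfer_allowed(remaining: timedelta) -> bool:
    """Would transfers still run with this much proxy lifetime left?"""
    return remaining >= REJECT_BELOW

def should_redelegate(remaining: timedelta) -> bool:
    """Would a re-delegation attempt be accepted right now?

    Attempts made while the server's proxy still lives four hours or
    more simply do nothing, so an hourly cron job is safe: it only
    "lands" once the lifetime drops inside the magic window.
    """
    return remaining < REDELEGATE_BELOW
```

With these thresholds, a proxy with three hours left is both still usable for transfers and eligible for re-delegation, which is exactly the window Lydia was aiming for.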
This kind of shenanigans is only needed if we need the VOMS proxy; otherwise we can use the long-lifetime proxies as we currently do.

* ATLAS syncat/dump experiences and compatibility. Investigations are ongoing - it didn't work on SL5. Ewan wanted to try it too, but wanted to sanitise the database first. There is a dpm_db_check script which, perhaps unsurprisingly, checks the database of a DPM. Running the fix needs a downtime, as it has to check consistency against the disk servers. The consistency check doesn't seem to have many options - it's -n or not (i.e. no-op or everything). The consistency check script is written in Perl; Jens offered to have a read to see if special perly things happen.

* T2C/D model testing - testing Oxford as a T2C? The proposal needs a final update - expecting a PMB on Wednesday next week (prior to GridPP). Discussion previously suggested ATLAS might get confused and that we would test instead with CMS; however, we now suggest both.

3. Moreover, next week's GDB has a few items which might be of interest to us: http://indico.cern.ch/event/319751/ - namely the httpd, MW readiness, and information system updates, in that order. The information system item seems to be an information systems task force? Will it revisit the old WLCG document on publishing in GLUE 1.3 - the one publishing dynamic data which differs from what the experiments need, because it is following the document? This may be useful, or it may look at GLUE 2... (or one could do GLUE without BDIIs). The httpd item is probably the HTTP task force.

4. Due to GridPP next week, we do not expect to have a storage meeting next week.

Chat log:

Ewan Mac Mahon: (02/09/2015 10:20:57)
I haven't touched this yet. But my head node is also SL5

Jens Jensen: (10:24 AM)
a bit like fsck

Ewan Mac Mahon: (10:25 AM)
It is a lot like an fsck.
Samuel Cadellin Skipsey: (10:25 AM)
https://svnweb.cern.ch/trac/lcgdm/browser/contrib/lcgdm/admin-tools/gridpp-dpm-tools/trunk/src/dpm-dbck

Ewan Mac Mahon: (10:25 AM)
Which is partly why just running it in fully automatic mode is a bit scary.

John Bland: (10:38 AM)
Poke Steve, he's in charge.