Attending: Jens (chair + minutes), Winnie, Matt, John H, John B, Marcus, Steve, Daniel, Raja, Sam, Ewan, George V, Elena, Pete
Apologies: none

0. Operational blog posts

We have two blog posts for this quarter; halfway through, we should have had more. It would be nice to have some more - don't be shy!

1. Summary of Monday's "secret" ATLAS meeting: dynafed, single endpoints, and related topics

Sam and George V both attended the meeting (remotely) and gave reports. The meeting was on the future of ATLAS data management. Dynafed has already been trialled at DESY (as a demonstrator), and there should be support for S3 and Azure. Tests are needed for production and monitoring. xroot can also speak HTTP. Some ATLAS people thought dynafed was "interesting, but": RUCIO can also do redirects, and has perhaps a more sophisticated metadata model. However, dynafed would be experiment-agnostic, whereas RUCIO is quite specific to ATLAS.

As a general point, there was discussion about the integration of object stores into DDM; these can generally only be used via S3 endpoints, and are used for log files rather than events. On authenticating to S3 with X.509: Alastair Dewhurst had proposed some work on this, but people are currently using pre-signed URLs, issued for a specific file and valid in a specific time window - essentially like an OAuth bearer token. This needs integrating into Panda. More generally, one could have a front end which then manages access to S3 via a "normal" S3 credential. RAL, BNL, and ATLAS are collaborating on CEPH.

RUCIO redirector and cache: there is interest in volunteer test sites for the cache function. The basic idea is to have an ARC CE cache the files needed by the jobs, with scheduling taking this into account, and then to integrate this with RUCIO. So it would not apply directly to pilot jobs; however, there is a pilot-job-like mechanism which takes caching into account. The "control tower" needs to be run, but can be shared between sites.
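The pre-signed-URL mechanism mentioned under item 1 can be sketched in miniature. Real S3 pre-signed URLs use AWS Signature Version 4, but the core idea - an HMAC over the object path plus an expiry time, so the URL grants access to one specific file for a limited window - looks roughly like this (the secret, paths, and signing scheme here are simplified illustrations, not the actual S3 protocol):

```python
import hmac
import hashlib
import time
from urllib.parse import urlencode

# Hypothetical shared secret held by the storage front end (illustration only).
SECRET = b"demo-secret"

def presign(path, expires_in=3600, now=None):
    """Return a URL granting time-limited access to one specific object."""
    exp = (now if now is not None else int(time.time())) + expires_in
    msg = "{}\n{}".format(path, exp).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "{}?{}".format(path, urlencode({"expires": exp, "signature": sig}))

def verify(path, exp, sig, now=None):
    """Check signature and expiry; anyone holding a valid URL can fetch the object."""
    if (now if now is not None else int(time.time())) > int(exp):
        return False  # URL has expired
    msg = "{}\n{}".format(path, int(exp)).encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig)  # constant-time comparison
```

Note that a front end issuing such URLs never hands its long-lived S3 credential to the client - which is the bearer-token-like property discussed in the minutes.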
What would be required to run a cache - would it have to be a fast clustered filesystem, or would something less do? ECDF could be volunteered but doesn't currently have an ARC CE; Lancaster might also be able to do something.

On another note, how would we migrate to a single endpoint, or even to someone else's single endpoint? The single-endpoint policy is not set in stone, but this problem will need some thought.

LHCb was also looking at dynafed. Raja didn't know the details but will investigate.

2. GridPP in Pitlochry

Are there things we want to achieve with the VOs before then that we would like them to report (as successes)? Like "finish DiRAC Durham" or "get all DiRAC sites going". Also for LSST, LIGO, etc. - or for CMS/LHCb/ATLAS, for that matter. Obviously, anything we aim to do that would look good in a presentation would have to be agreed with the VO in question; we can nudge and prod, but generally the work needs to be done by the VO. Sam will prod LIGO, Jens DiRAC, and Marcus LSST. Anything achieved here could also be good for the MSDC activity. We need a new name for this... see suggestions at the end of the chat log (and be very afraid). Any updates: could people please cc the GridPP Support list, or summarise to it.

3. AOB

Paige Winslowe Lacesso: (17/02/2016 10:00:07) It's POURING here! :(
Matt Doidge: (10:01 AM) Hopefully not in your actual office...
Matt Doidge III: (10:06 AM) Blame Vidyo!
Jens Jensen: (10:07 AM) Hi george. Sam was also in the Monday meeting so feel free to drop out or stay to tell us your view
George V @ RAL: (10:08 AM) I'll stick around and see if I have something to add.
Jens Jensen: (10:08 AM) Thanks, George. Like a TURL...
Paige Winslowe Lacesso: (10:17 AM) It just went v quiet
Ewan Mac Mahon: (10:24 AM) Surely the hard bit is having the cache. Fast cluster shared filesystems are rare; QMUL has one. Er. And that's about it.
Matt Doidge III: (10:26 AM) We could possibly try something at Lancaster with our NFS mount on our soon-to-be-enabled arc ce. But there's a lot of ifs there.
Jens Jensen: (10:29 AM) Thanks, Matt. Even with the ifs...
Ewan Mac Mahon: (10:29 AM) Of course, given our strategic priorities at the moment, we should probably refuse to countenance any solution that isn't open to the novel user communities.
It's an interesting way to focus minds on getting things working - if someone's got a talk slot scheduled they need to have made sure they've done the work to be able to talk about it.
The AwesomeTron(TM)
The Epic-O-Matic
Matt Doidge III: (10:40 AM) The Thing that Does That Thing You Need It To Do.
John Bland: (10:40 AM) DiRaC (because you can never have enough diracs)
Jens Jensen: (10:40 AM) :-)
Ewan Mac Mahon: (10:42 AM) Jeffrey
Thereby making the UK's large computing resources 'Jeffrey/Archer' :-)
Daniel Peter Traynor: (10:46 AM) something with hyper converged in it
John Bland: (10:46 AM) hyperdonkey
Ewan Mac Mahon: (10:46 AM) HyperDirac
Winner.