Attending: Jens (chair + minutes), RobC, Teng, JohnH, Sam, Brian, Matt, Chris, Dan, Govind, Winnie

0. Operational blog posts

We have a new blog post from Brian.

1. Chris Brew might join and tell us about his testing of xcache on dCache? We talked about it last week but I forgot to remind him...!?

Chris did join. Chris has two 20TB disk servers with RAID6 and is experimenting with CMS AAA access through xcache, the idea being that CMS AAA traffic goes through this cache rather than through the RAL site firewall. The client certificate used to access data is managed on a VO box; there is no official CentOS 7 release, but the repositories have the relevant packages, at least for proxy management. (A config sketch follows these minutes.)

The nodes would go into "extreme swap": they were initially set up with 8GB of RAM, which was too little, so they were upgraded to 64GB each. The cache software is "RAM hungry"; maybe there are memory leaks. Chris is currently running 4.6.1. There is a 4.7.0, but it has problems accessing dCache; those should be fixed in 4.7.1, but that release introduces another problem. 4.6.1, however, is unstable (as it almost certainly would be if it is leaking memory), so a restarter keeps it going; fortunately xroot is pretty tolerant of having its endpoints restarted.

The disk thrashing seemed to get better as the cache filled up. Originally the system was so busy the RAID did not rebuild. The block size was reduced to 64K; Chris may look into reducing it further to 32K or 16K, or even running a memory-only cache (no disk). Certainly some advantages arise from bypassing the firewall. The ScotGrid proxy caches didn't do VO work but were set up for testing; maybe there is some value in limited testing here too before opening it up to the great unwashed masses of VOs. (Also maybe worth documenting the technical details in the wiki and blogging about the results - hint!)

CMS has all data readable by any of its members; the scheme may not work with more fine-grained authorisation. The (xroot) proxy could delegate a proxy certificate to itself and use that, but once it has a file cached, how will it remember who was authorised to read that file? It could maybe do something with local unixy permissions, but that is pretty much equivalent to running one xcache instance per VO anyway. Alternatively, it could redirect rather than cache sensitive files, so authorisation is enforced every time - which is a bit sub-optimal for a cache. It might work to have a single redirector. Some VOs manage access at a higher level, like ATLAS with RUCIO. AAA basically manages two redirectors, a primary and a fallback; if you want access to go to the local dCache first, the local cache next, and then out, some experimentation may be required - hierarchically managed, perhaps?

2. There's a GDB today with a "storage accounting update": https://indico.cern.ch/event/578992/

Also, our very own David Crooks is on the agenda. Did anyone attend the pre-GDB yesterday? It is presumably relevant to our earlier discussion of non-certificate access to storage. -> Postponed till next week.

3. Loose ends. What happened to CERN@School's request? Update on xcache testing? DPM 1.9 upgrade? Is accounting quicker in 1.9? If not, could we paint go-faster stripes on it? -> Postponed till next week.

4. AOB

Brian points out that FTS now supports IPv6. So what will it do against dual-stacked sites? Using IPv6 preferentially may be the Right Thing(tm) but not necessarily the Best Thing(tm).
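A technical footnote to item 1: the minutes do not record Chris's actual configuration, but the knobs discussed (upstream origin, cache location, block size, RAM use) map onto the directives of the xrootd disk caching proxy ("xcache"). Below is a minimal sketch assuming xrootd 4.x; the hostname, paths and sizes are illustrative, not Chris's values.

    # Minimal xcache (xrootd disk caching proxy) config - illustrative values only
    all.export /store
    ofs.osslib    libXrdPss.so               # run this xrootd as a proxy server
    pss.cachelib  libXrdFileCache.so         # enable the proxy file cache ("xcache")
    pss.origin    cms-xrd-global.cern.ch:1094  # upstream AAA redirector (assumed)
    oss.localroot /data/xcache               # cache store on the RAID6 disk servers
    pfc.blocksize 64k                        # the 64K block size mentioned above
    pfc.ram       32g                        # cap cache RAM use; the software is "RAM hungry"
    pfc.diskusage 0.90 0.95                  # start/stop purging at these usage fractions

For the dCache-first hierarchy discussed above, pss.origin would instead point at a local redirector that tries the site dCache before falling back outwards; that is the part that may need experimenting.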
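Similarly, the minutes do not say how the restarter is implemented. One common approach on CentOS 7 - sketched here as an assumption, not necessarily Chris's setup - is a systemd drop-in that restarts the cache service whenever it dies:

    # /etc/systemd/system/xrootd@cache.service.d/restart.conf (instance name assumed)
    [Service]
    Restart=always       # restart xrootd whenever it exits, e.g. after running out of memory
    RestartSec=30        # pause between restarts; xroot clients tolerate the brief gap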
Chat log:

jens: (08/11/2017 10:39:47) https://gridpp-storage.blogspot.co.uk/
https://indico.cern.ch/event/578992/
Chris Brew: (10:40 AM) Jens, I'm cajbrew@gmail.com if you could add me.
jens: (10:40 AM) Sure