Attending: Duncan, Elena, Gareth, Gordon, Henry, Jens (C+M), Jeremy, John H, Sam, Steve, Robert, Chris W, David, Raul, John B, Wahid, Brian, Ewan
Apologies: none

1. Jon (T2K) could not attend as planned, but Henry (MICE), luckily, could. MICE have a problem:
   * Certain users in the US are unable to get certificates, so they will need unauthenticated access: in this case, HTTP.
   * Yum yum goes the web crawler, fetching not just web pages in DPM but also tarballs with data in them: the ~10GB tarballs seem to use 40% of the link capacity (a new download is started every five minutes or so).
   * MICE have been unable to stave off the crawlers (maybe there is a halloweeny parallel with vampires and garlic).
   * This would also apply to other HTTP-based SEs: dCache is thought to have a configurable robots.txt file, but others don't seem to have one.

   Possible workarounds and/or solutions:
   1. (Jens) Set up a proxy in front of the SE which intercepts requests for robots.txt and serves a local file.
   2. (Henry/Sam) Use HTML meta tags to tell the crawler not to grab the tarballs.
   3. (Chris) Check whether requests for robots.txt are being logged and 404ed, and see whether it's possible to add the file at the expected location.
   4. (Ewan) DPM uses a normal Apache instance, so add a rewrite rule to direct /robots.txt to a local file, or to a file in DPM with the right content.

0. Other operational issues (and blog posts)
   * GFAL2 failures at RAL: RAL is failing a GFAL copy, although lcg-cp does work. The ticket has now been assigned to RAL. Possibly some CASTOR-related weirdness? https://ggus.eu/?mode=ticket_info&ticket_id=109503
   * ATLAS/CMS xroot monitoring: an upgrade is needed in DPM, not just on the head nodes but also on the pool nodes. It is documented, but the documentation uses a different order; Wahid thinks the order doesn't matter.

2. We'll skip item 2 in the interest of time.

3. AOB
   Sam, Alistair and others are working on CEPH access as an object store via xroot, based on work from CERN (Andreas Peters).
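To see what a robots.txt of the kind discussed in item 1 would actually do to a well-behaved crawler, here is a minimal sketch using Python's standard `urllib.robotparser`; the `/dpm/` path is an illustrative assumption, not MICE's actual namespace:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking everything under /dpm/ while leaving
# ordinary web pages crawlable (paths are illustrative assumptions).
lines = [
    "User-agent: *",
    "Disallow: /dpm/",
]

rp = RobotFileParser()
rp.parse(lines)

# A compliant crawler would skip the data tarballs...
print(rp.can_fetch("Googlebot", "/dpm/mice/data.tar.gz"))  # False
# ...but could still fetch normal pages.
print(rp.can_fetch("Googlebot", "/index.html"))            # True
```

Note this only helps against crawlers that honour the robots exclusion protocol; it does nothing to throttle a client that ignores it.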
Brian asks about questions for the ATLAS Jamboree, coming up at the beginning of December. Brian also mentioned the currently ongoing ATLAS "technical interchange" meeting, where some interesting stuff may happen...

Chat log:

Jeremy Coles: (29/10/2014 10:01:29) Gordon/Gareth: GRNET rebooted the dteam VOMS so the problem mentioned yesterday should be resolved.
Gareth Douglas Roy: (10:02 AM) Great, thanks Jeremy!
Samuel Cadellin Skipsey: (10:04 AM) robots.txt ?
Duncan Rand: (10:06 AM) What bandwidth have Google been using up?
Samuel Cadellin Skipsey: (10:19 AM) https://ggus.eu/?mode=ticket_info&ticket_id=109503 for the room
wahid: (10:21 AM) how come that ticket is marked 'solved'
Ewan Mac Mahon: (10:21 AM) basically it's mod_dpm, iyswim.
Duncan Rand: (10:22 AM) One is allowed to reopen a solved ticket. Get more bandwidth and forget about it.
Jens Jensen: (10:30 AM) https://ggus.eu/index.php?mode=ticket_info&ticket_id=109694
wahid: (10:31 AM) I havent actually indeed - exactly
John Hill: (10:31 AM) I've done it twice now, due to the changing instructions
wahid: (10:31 AM) CMS is also in the EGI advisory
Ewan Mac Mahon: (10:33 AM) I'm not sure I have any CMS specific reporting config anyway.
wahid: (10:33 AM) Not just DPM BTW.. instructions sent https://twiki.cern.ch/twiki/bin/view/AtlasComputing/FAXposixStorageNew
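On the robots.txt intercept idea (workaround 4 in the minutes), a simple sketch of what that could look like in the DPM head node's Apache configuration, using mod_alias rather than a mod_rewrite rule since it achieves the same interception more directly; the local file path is an illustrative assumption and this has not been tested against mod_dpm:

```apache
# Serve a static robots.txt before the request ever reaches DPM's
# Apache module; /var/www/html/robots.txt is an assumed local path.
Alias /robots.txt /var/www/html/robots.txt
```

The local file itself could then contain, for example, "User-agent: *" followed by "Disallow: /" to turn compliant crawlers away from the whole namespace.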