Adding dCache to the grid information system

The information systems gather information from grid services such as dCache and are queried by services such as job schedulers and data management tools.

Very little work is needed to add dCache to the information system: just add the user edginfo and touch a file to trigger an information upgrade.

# Create the edginfo system account used by the information provider
useradd -r -s /bin/false -d /var/lib/edginfo edginfo
# Touching this file triggers an information system upgrade
touch /etc/sysconfig/edg

srm-storage-element-info command

"owen maroney"

srm-storage-element-info command

Hi all,

I've been trying to understand why the correct storage information is not being reported in MDS, and it seems to be about the behaviour of the command srm-storage-element-info.

When running the dynamic information provider we get the error:

> [edginfo@gfe02 libexec]$ ./lcg-info-dynamic-se
> Bad /opt/d-cache/srm/bin/srm-storage-element-info -x509_user_proxy=/opt/lcg/hostproxy: status 256

Digging around a bit: if, as a user on a UI with a dteam user proxy, I do:

> [maroney@gfe03 maroney]$ /opt/d-cache/srm/bin/srm-storage-element-info https://gfe02.hep.ph.ic.ac.uk:8443/srm/infoProvider1_0.wsdl

I get some nice output ending with:

> StorageElementInfo :
>                      totalSpace     =2541546897408 (2481979392 KB)
>                      usedSpace      =19174916 (18725 KB)
>                      availableSpace =2541536551097 (2481969288 KB)

Hurrah! Presumably this is the command that is supposed to generate the storage space.

But alas, when I log onto the dCache admin node itself, do "su - edginfo" and try

> [edginfo@gfe02 libexec]$ /opt/d-cache/srm/bin/srm-storage-element-info https://gfe02.hep.ph.ic.ac.uk:8443/srm/infoProvider1_0.wsdl

this generates an enormous java error starting with:

> AxisFault
>  faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server
>  faultSubcode:
>  faultString: org.dcache.srm.SRMAuthorizationException: can not determine username from GlobusId=/C=UK/O=eScience/OU=Imperial/L=Physics/CN=gfe02.hep.ph.ic.ac.uk/E=lcg-site-admin@imperial.ac.uk
>  faultActor:
>  faultNode:
>  faultDetail:
>         {}stacktrace:java.lang.RuntimeException: org.dcache.srm.SRMAuthorizationException: can not determine username from GlobusId=/C=UK/O=eScience/OU=Imperial/L=Physics/CN=gfe02.hep.ph.ic.ac.uk/E=lcg-site-admin@imperial.ac.uk
>         at diskCacheV111.srm.server.SRMServerV1.getStorageElementInfo(SRMServerV1.java:615)
....

Is this normal and/or expected, and if not, what can be done about it?

Things I have checked:

a) the host certificate DN is in the dcache.kpwd file

b) the original .srmconfig/config.xml file for the user edginfo has user certs etc. pointing to /home/edginfo/k5-ca-proxy.pem by default. However, this file doesn't exist. I created a symbolic link to /opt/lcg/hostproxy, but this didn't seem to help...

However, the client tools did manage to identify the host certificate DN.

cheers, Owen.

Re: srm-storage-element-info command

"Alessandra Forti"

Hi Owen,

that's the problem I found when I tried to make it work. It is part of the bug I submitted about the IS configuration. I don't know if they have changed it in the new release because I never received a reply for that bug.

Re: srm-storage-element-info command

"Steve Traylen"

On Wed, Jun 22, 2005 at 05:20:09PM +0100 or thereabouts, owen maroney wrote:

> Things I have checked:
> a) the host certificate DN is in the dcache.kpwd file
Check carefully; note in particular the emailAddress= vs. E= vs. Email= forms in the DN.

  Steve

--
Steve Traylen
s.traylen@rl.ac.uk
http://www.gridpp.ac.uk/

Advertising SRM information

"Greig A Cowan"

Hi everyone,

I'm still trying to get my head round all of the LCG middleware, so I was hoping that someone on this list might be able to help. The question I have is this: once you have an SRM up and running, how do you go about advertising that you have an SRM available, and the amount of storage space? Is this done through the BDII? If so, how can I interface it with my dCache setup?

If I run the command:

[gcowan@srm bin]$ ./srm-storage-element-info https://srm.epcc.ed.ac.uk:8443/srm/infoProvider1_0.wsdl
StorageElementInfo :
                     totalSpace     =5733781340160 (5599395840 KB)
                     usedSpace      =1121973950 (1095677 KB)
                     availableSpace =5732659366210 (5598300162 KB)

So something knows how much storage we have available. Can we advertise the presence of our SRM so that it appears in an ldap search?

Thanks in advance,

Greig

Re: Advertising SRM information

"Mona Aggarwal"

Hi Greig,

I have added the following lines to the site-info.def of the CE to advertise the SRM through the BDII:

# CE - site-info.def

SRM_HOST=gfe02.$MY_DOMAIN
BDII_REGIONS="CE SE1 SE2"       # list of the services provided by the site
BDII_SE2_URL="ldap://$SRM_HOST:2135/mds-vo-name=local,o=grid"
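
Once the site BDII has picked this up, a quick check that the SE is being published is to query the GRIS referenced by BDII_SE2_URL (a sketch; substitute your own SRM host for gfe02):

ldapsearch -x -h gfe02.hep.ph.ic.ac.uk -p 2135 -b "mds-vo-name=local,o=grid" | grep GlueSE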

I hope this helps.

Cheers,

Mona

dCache information system

"Greig A Cowan"

Hi everyone,

Hope you have all recovered from GridPP13 ;-)

Have RAL got a workaround for the problem of publishing the correct available and used space in your dCache? We are having problems doing this in Edinburgh (and I think other sites are the same, namely IC and Manchester).

ldapsearch -x -H ldap://site-bdii.gridpp.rl.ac.uk:2170 -b mds-vo-name=RAL-LCG2,o=grid

seems to report reasonable-looking information, unlike the corresponding commands for the other sites. The bugs that Alessandra and Owen submitted to Savannah regarding the dCache information system are still open. Is there anything more that we can do to address this issue?

Greig

Re: dCache information system

"Steve Traylen"

On Thu, Jul 07, 2005 at 04:27:27PM +0100 or thereabouts, Greig A Cowan wrote:
> Hi everyone,
>
> Hope you have all recovered from GridPP13 ;-)
>
> Have RAL got a workaround for the problem of publishing the correct
> available and used space in your dCache? We are having problems doing
> this in Edinburgh (and I think other sites are the same, namely IC and
> Manchester).

I don't think we have a workaround, it just works?

I expect you have mentioned it before but what is the problem?

 Steve

Re: dCache information system

"Alessandra Forti"

The srm command doesn't accept the host certificate. If I have to repeat it another time I'll scream ;-)

cheers

alessandra

Re: dCache information system

"Philip Clark"

> I don't think we have a workaround, it just works? I expect you have
> mentioned it before but what is the problem?

http://savannah.cern.ch/bugs/?func=detailitem&item_id=8777

We need to understand why you are not seeing this bug. IC, Manchester and Edinburgh all seem to have it. If we try to monitor your storage through the lcg information system then I expect it will show up too.

-Phil

Re: dCache information system

"Steve Thorn"

Steve

First we had the proxy problem. There was a discrepancy between /opt/d-cache/etc/dcache.kpwd and the host's DN. Running /opt/d-cache/bin/grid-mapfile2dcache-kpwd as root fixes it but something that I've yet to identify changes it back in a period of approximately 1 hour.
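
One way to catch the reversion in the act is to grep the host DN out of the kpwd file before and after (a sketch; use your own host's CN rather than this one, which is Edinburgh's):

grep 'CN=host/srm.epcc.ed.ac.uk' /opt/d-cache/etc/dcache.kpwd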

With the above fixed, the following commands all give sensible space output:

# /opt/lcg/libexec/lcg-info-wrapper
# /opt/lcg/libexec/lcg-info-dynamic-se
# /opt/lcg/libexec/lcg-info-dynamic-dcache /opt/lcg/var/gip/lcg-info-generic.conf

but the GRIS still reports incorrectly even after restarting globus-mds, killing any remaining slapd processes etc.

ldapsearch -x -H ldap://srm.epcc.ed.ac.uk:2135 -b mds-vo-name=local,o=grid | grep Space
GlueSAStateAvailableSpace: 1
GlueSAStateUsedSpace: 1
...

Steve

Re: dCache information system

"Jamie Kelvin Ferguson"

What's the name of the srm/SE machine at RAL? If I use dcache.gridpp.rl.ac.uk in a standard query that I use successfully for all Tier-2 sites, I get the following:

$ ldapsearch -LLL -x -h dcache.gridpp.rl.ac.uk -p 2135 -b "mds-vo-name=local,o=grid" "GlueSAAccessControlBaseRule=dteam" GlueSAStateAvailableSpace GlueSAStateUsedSpace

ldap_bind: Can't contact LDAP server

But that's after it has been hanging for ages.

However if I try,

$ ldapsearch -x -H ldap://site-bdii.gridpp.rl.ac.uk:2170 -b mds-vo-name=RAL-LCG2,o=grid "GlueSAAccessControlBaseRule=dteam" GlueSAStateAvailableSpace GlueSAStateUsedSpace

I get the storage information returned.

The second one clearly queries a different machine but I thought you could query an SE/srm directly? Why doesn't the first one work?

Jamie

Re: dCache information system

"Alessandra Forti"

> but something that I've yet to identify changes it back in a period of
> approximately 1 hour.

that's the cron job.

/etc/cron.d/edg-mkgridmap

cheers
alessandra

Re: dCache information system

"Steve Thorn"

Alessandra

That's what you'd think but running the cron by hand doesn't change it.

Steve

Re: dCache information system

"Alessandra Forti"

Things are published publicly through the local BDII, which normally runs on the CE. So IMO it is reasonable, and perhaps we (the other sites) should do the same in order to reduce the number of publicly available ports.

cheers

alessandra

Because port 2135 on our SRM isn't open at our site firewall. I'd have to defer to someone more knowledgeable about the information system than me to tell you if that's reasonable or not, but yours is the first complaint I've heard of. Derek

RAL dcache information system

"Ross, D \(Derek\)"

Okay, I've had a look at the scripts in the information system setup, and it looks as though we're not using srm-storage-element-info. Instead our lcg-dynamic-dcache script is this Perl script:

#!/usr/bin/perl -w

use strict ;
use File::Basename ;

my $used  = '/var/lib/edginfo/used-space.dat' ;
my $total = '/var/lib/edginfo/available-space.dat' ;

my %space ;

# Each data file holds "<kilobytes> <path>" per line; key the hash on the
# last path component (the VO name).
open(USED,$used) or die "Could not open $used: $!\n" ;
while(<USED>) {
   if (/^(\d+)\s+(\S+)\s*/) {
       my $kb    = $1  ;
       my $path  = basename($2)  ;
       $space{$path}{'used'} =  $kb ;
   }
}
close(USED) ;

open(TOTAL,$total) or die "Could not open $total: $!\n" ;
while(<TOTAL>) {
   if (/^(\d+)\s+(\S+)\s*/) {
       my $kb    = $1  ;
       my $path  = basename($2)  ;
       $space{$path}{'total'} =  $kb ;
   }
}
close(TOTAL) ;

# Emit one GLUE storage-area record per VO in LDIF form.
foreach( qw/cms dteam atlas lhcb/ ){
  print "dn: GlueSARoot=$_:/pnfs/gridpp.rl.ac.uk/data/$_,GlueSEUniqueID=dcache.gridpp.rl.ac.uk,Mds-Vo-name=local,o=grid\n" ;
  print "GlueSAStateAvailableSpace: ".$space{$_}{'total'}."\n" ;
  print "GlueSAStateUsedSpace: ".$space{$_}{'used'}."\n\n" ;
}

We also have a cron job in /etc/cron.d/:

50 3 * * *  edginfo /usr/bin/du -s  /pnfs/gridpp.rl.ac.uk/data/* > /var/lib/edginfo/used-space.dat

And the /var/lib/edginfo/available-space.dat is a file consisting of

14252613123     /pnfs/gridpp.rl.ac.uk/data/atlas
14252613123     /pnfs/gridpp.rl.ac.uk/data/cms
14252613123     /pnfs/gridpp.rl.ac.uk/data/dteam
14252613123     /pnfs/gridpp.rl.ac.uk/data/lhcb
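
A quick sanity check of this setup is to run the provider by hand as the edginfo user and confirm that it prints the GLUE space attributes (assuming the script above is installed as /opt/lcg/libexec/lcg-info-dynamic-dcache, as at the other sites):

/opt/lcg/libexec/lcg-info-dynamic-dcache | grep Space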

Derek

Re: dCache information system

"Philip Clark"

Alessandra Forti writes:

> Things are published publicly through the local BDII, which normally
> runs on the CE. So IMO it is reasonable, and perhaps we (the other
> sites) should do the same in order to reduce the number of publicly
> available ports.

Hi Alessandra,

If this happens would there be anyway for an external user/job to find out what storage you have available? In the long run we might be able to reserve storage via srm, but this is not in place yet, so this information is quite important.

-Phil

Re: dCache information system

"Alessandra Forti"

Hi Phil,

you just have to do the same query to the site CE on port 2170.

For example to query RAL:

  ldapsearch -LLL -x -h lcgce01.gridpp.rl.ac.uk -p 2170 -b "mds-vo-name=RAL-LCG2, o=grid" "GlueSAAccessControlBaseRule=dteam" GlueSAStateAvailableSpace GlueSAStateUsedSpace

gives

dn: GlueSARoot=dteam:/pnfs/gridpp.rl.ac.uk/data/dteam,GlueSEUniqueID=dcache.gridpp.rl.ac.uk,mds-vo-name=RAL-LCG2,o=grid
GlueSAStateAvailableSpace: 14252613123
GlueSAStateUsedSpace: 79981449

cheers

alessandra

Re: dCache information system

"Steve Traylen"

On Thu, Jul 07, 2005 at 04:54:33PM +0100 or thereabouts, Philip Clark wrote:
>
> >
> > I don't think we have a workaround, it just works?
> >
> > I expect you have mentioned it before but what is the problem?
>
> http://savannah.cern.ch/bugs/?func=detailitem&item_id=8777
>
> We need to understand why you are not seeing this bug. IC, Manchester
> and Edinburgh all seem to have it. If we try to monitor your storage
> through the lcg information system then I expect it will show up too.

Does your dcache.kpwd contain

/C=UK/O=eScience/OU=Manchester/L=HEP/CN=bohr0013.tier2.hep.man.ac.uk/E=alessandra.forti@manchester.ac.uk

i.e. are you seeing this one:

https://savannah.cern.ch/bugs/?func=detailitem&item_id=5295

but I'm sure I have already asked this question twice so feel free to
scream if we are going through the same loop?

 Steve
--
Steve Traylen
s.traylen@rl.ac.uk
http://www.gridpp.ac.uk/

Re: dCache information system

"Steve Traylen"

On Thu, Jul 07, 2005 at 05:05:05PM +0100 or thereabouts, Jamie Kelvin Ferguson wrote:

> The second one clearly queries a different machine but I thought you
> could query an SE/srm directly? Why doesn't the first one work?

It is behind a firewall on purpose, to stop people writing monitoring
scripts against it :)

Please use the site-bdii. 

 Steve

Re: RAL dcache information system

"Steve Traylen"

> Okay, I've had a look at the scripts in the information system setup,
> and it looks as though we're not using srm-storage-element-info. Instead
> our lcg-dynamic-dcache script is this perl script:

This is true; I have completely replaced the script with something less intensive, but it was working before I changed it.

Steve

Re: dCache information system

"Greig A Cowan"

Hi Steve,

> Does your dcache.kpwd contain
>
> /C=UK/O=eScience/OU=Manchester/L=HEP/CN=bohr0013.tier2.hep.man.ac.uk/E=alessandra.forti@manchester.ac.uk

No, dcache.kpwd does not contain this line referring to Manchester. It does contain the following lines that refer to our dCache host:

mapping "/C=UK/O=eScience/OU=Edinburgh/L=NeSC/CN=host/srm.epcc.ed.ac.uk/emailAddress=g.cowan@ed.ac.uk" edginfo

/C=UK/O=eScience/OU=Edinburgh/L=NeSC/CN=host/srm.epcc.ed.ac.uk/emailAddress=g.cowan@ed.ac.uk

I do not have Email= or E=, but emailAddress= in the DN. Is this what everyone has?

Greig

Re: dCache information system

"Jensen, J \(Jens\)"

emailAddress is correct.

OpenSSL, as of version 0.9.7 (a long time ago), uses emailAddress instead of Email when it *displays* the DN, so GT, being built on top of OpenSSL, does the same.
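
A quick way to see which openssl build is being picked up, and how it renders the host DN (a sketch; the certificate path is the conventional one, adjust to your setup):

which openssl
openssl x509 -noout -subject -in /etc/grid-security/hostcert.pem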

We shouldn't really have the email address in the DN at all, but that's another story.

-j

Re: RAL dcache information system

"Alessandra Forti"

Hi Steve, maybe there is something in the format of the RAL host DN that makes it different from the other sites.

Just as an attempt, it would perhaps be useful to compare the host lines in dcache.kpwd. Can you send the RAL ones?

thanks

cheers alessandra

Re: dCache information system

"Alessandra Forti"

no I have

mapping "/C=UK/O=eScience/OU=Manchester/L=HEP/CN=bohr0013.tier2.hep.man.ac.uk/emailAddress=alessandra.forti@manchester.ac.uk" edginfo

login edginfo read-write 18948 18948 / / /

/C=UK/O=eScience/OU=Manchester/L=HEP/CN=bohr0013.tier2.hep.man.ac.uk/emailAddress=alessandra.forti@manchester.ac.uk

cheers
alessandra

Re: dCache information system

"owen maroney"

And for IC we have:

mapping "/C=UK/O=eScience/OU=Imperial/L=Physics/CN=gfe02.hep.ph.ic.ac.uk/emailAddress=lcg-site-admin@imperial.ac.uk" edginfo

login edginfo read-write 19491 19491 / / /

/C=UK/O=eScience/OU=Imperial/L=Physics/CN=gfe02.hep.ph.ic.ac.uk/emailAddress=lcg-site-admin@imperial.ac.uk

So, I try replacing this with:

> mapping "/C=UK/O=eScience/OU=Imperial/L=Physics/CN=gfe02.hep.ph.ic.ac.uk/E=lcg-site-admin@imperial.ac.uk" edginfo
>
> login edginfo read-write 19491 19491 / / /
>         /C=UK/O=eScience/OU=Imperial/L=Physics/CN=gfe02.hep.ph.ic.ac.uk/E=lcg-site-admin@imperial.ac.uk

then

su - edginfo
> [edginfo@gfe02 edginfo]$ /opt/d-cache/srm/bin/srm-storage-element-info https://gfe02.hep.ph.ic.ac.uk:8443/srm/infoProvider1_0.wsdl

produces stuff ending with:

> StorageElementInfo :
>                      totalSpace     =2541546897408 (2481979392 KB)
>                      usedSpace      =30397658395 (29685213 KB)
>                      availableSpace =2502704826621 (2444047682 KB)

Hurrah! (although this will get overwritten at the next edg-mkgridmap update...)

However, now when I run, as edginfo:

> [edginfo@gfe02 edginfo]$ /opt/lcg/libexec/lcg-info-dynamic-se

I get a 3 second pause and output like:

> dn: GlueSARoot=lhcb:/pnfs/hep.ph.ic.ac.uk/data/lhcb,GlueSEUniqueID=gfe02.hep.ph.ic.ac.uk,Mds-Vo-name=local,o=grid
> GlueSAStateAvailableSpace: 2444047682
> GlueSAStateUsedSpace: 37931710

when I run, as edginfo, /opt/lcg/libexec/lcg-info-wrapper, it takes less than a second and produces output including:

> GlueSAStateAvailableSpace: 00
> GlueSAStateUsedSpace: 00

in the output. I checked /opt/lcg/var/gip/tmp: the file lcg-info-dynamic-dcache.ldif.7010 is being updated but is zero-sized.

So the output of the dynamic-se script does not seem to be getting incorporated into the output of the wrapper script.

cheers,

Owen.

Re: dCache information system

"Greig A Cowan"

I essentially find the same as Owen just reported.

Should we be making another bug report?

Greig

Re: dCache information system

"Alessandra Forti"

Yes.

On Fri, 8 Jul 2005, Greig A Cowan wrote:

> I essentially find the same as Owen just reported.
>
> Should we be making another bug report?
>
> Greig

ldif cache file

"Alessandra Forti"

Hi,

I finally got it working. This line

$ENV{PATH} = "/opt/d-cache/srm/bin";

needs to be added to /opt/lcg/libexec/lcg-info-dynamic-dcache

I put it after

$ENV{HOME}         = "/var/tmp";
$ENV{SRM_PATH}     = "/opt/d-cache/srm";

for housekeeping.
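
With that change in place, re-running the provider by hand (as in Steve Thorn's earlier test) should report real numbers rather than the placeholder values:

/opt/lcg/libexec/lcg-info-dynamic-dcache /opt/lcg/var/gip/lcg-info-generic.conf | grep Space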

cheers

alessandra

Information system

"Greig A Cowan"

Hi everyone,

Just to let you all know that, in order to get Edinburgh publishing the correct storage, I had to add an extra step in addition to what Alessandra previously mentioned. Even after making Alessandra's changes, I was still finding that the wrong version of openssl (i.e. the non-Globus one) was being used in the /opt/d-cache/bin/grid-mapfile2dcache-kpwd script. To rectify this, I added /opt/globus/bin to the PATH variable in /etc/crontab (this was in addition to adding /opt/globus/bin to PATH in /etc/cron.d/edg-mkgridmap).
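
For reference, the PATH line in /etc/crontab (and similarly in /etc/cron.d/edg-mkgridmap) ends up with the Globus directory first, as quoted again further down the thread:

PATH=/opt/globus/bin:/sbin:/bin:/usr/sbin:/usr/bin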

The correct version of openssl is now being used, meaning that there are no more references to emailAddress= in the /opt/d-cache/etc/dcache.kpwd file. You can see that our storage is now being correctly reported at:

http://www.ph.ed.ac.uk/~jfergus7/gridppDiscStatus.html

Mona: if you need a hand with Imperial's information publishing, let me know.

Thanks,

Greig

information system

"Greig A Cowan"

Hi everyone,

I am not sure if this list is the correct place to post this question, but it is related to SRM, so I will give it a go.

The GStat tests at Edinburgh are currently giving us a status of WARN.

http://goc.grid.sinica.edu.tw/gstat/ScotGRID-Edinburgh/

This is due to the following:

Missing DN and Attributes:

DN: 'dn: GlueServiceURI=httpg://srm.epcc.ed.ac.uk'

Owen S has previously submitted a bug to savannah:

https://savannah.cern.ch/bugs/?func=detailitem&item_id=8721

regarding this matter. Using the information provided in this bug, I added the following lines to /opt/lcg/var/gip/lcg-info-generic.conf:

dn: GlueServiceURI=httpg://srm.epcc.ed.ac.uk:8443/srm/managerv1,Mds-Vo-name=local,o=grid
GlueServiceURI: httpg://srm.epcc.ed.ac.uk:8443/srm/managerv1
GlueServiceAccessPointURL: httpg://srm.epcc.ed.ac.uk:8443/srm/managerv1
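
To check whether the new GlueService entry is actually being served, the same sort of query used earlier for the space attributes works (a sketch against the local GRIS):

ldapsearch -x -H ldap://srm.epcc.ed.ac.uk:2135 -b mds-vo-name=local,o=grid | grep GlueService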

I re-ran the appropriate scripts after changing the .conf file, but our GStat status has not changed. Has anyone else seen and/or resolved this problem?

Thanks in advance,

Greig

Re: Information system

"Alessandra Forti"

Hi Greig,

> (this was in addition to adding /opt/globus/bin to PATH in
> /etc/cron.d/edg-mkgridmap).

Sorry if it is a stupid question, but when you did this, did you add /opt/globus/bin at the beginning of the PATH or at the end?

cheers

alessandra

Re: information system

"Alessandra Forti"

Hi Greig,

it is always the same problem.

http://savannah.cern.ch/bugs/?func=detailitem&item_id=8777

download the config_gip and run it on your system.

cheers

alessandra

Re: Information system

"Greig A Cowan"

Hi Alessandra,

> > (this was in addition to adding /opt/globus/bin to PATH in
> > /etc/cron.d/edg-mkgridmap).
>
> sorry it is a stupid question, but when you did this did you add
> /opt/globus/bin at the beginning of the PATH or at the end?

I added it at the start of the PATH:

PATH=/opt/globus/bin:/sbin:/bin:/usr/sbin:/usr/bin

Greig

Re: information system

"Greig A Cowan"

Hi Alessandra,

> it is always the same problem.
>
> http://savannah.cern.ch/bugs/?func=detailitem&item_id=8777
>
> download the config_gip and run it on your system.

Everything is now working as it should and our site test status has
returned to OK.

Thanks very much for your help.
Greig