Chapter 9. Improving your dCache System

Table of Contents

Quotas
Introduction
Adding a VO to an existing admin node setup
Adding to the PNFS Database
Adding a Directory for the PNFS Database
Pool groups and directory affinities
Updating the authorised user list
Adding dCache to the grid information system
srm-storage-element-info command
information system
logrotate

Quotas

Introduction

dCache does not explicitly support quotas. It is, however, possible to provide a form of quota indirectly, based upon Virtual Organisation (VO). VOs can be bound to pools, and since pools have a fixed size, it is possible to provide per-VO quotas at the granularity of pool sizes. This unfortunately does not allow quota sizes to be set dynamically.

This is difficult to do after files have been added to dCache, so it is best to set the pool sizes and create the PNFS databases before running dCache in production.

Adding a VO to an existing admin node setup

Virtual Organisations (abbreviated as VOs) are similar to groups under Linux. All users must be part of a VO. The file "/opt/edg/etc/edg-mkgridmap.conf" sets which VO servers are integrated into the system. Please check this file to ensure that the appropriate members are added to the Gridmap file, which will then be loaded into dCache. The following configuration is set up for CMS and dteam users. Your SE may support more than these two virtual organisations, in which case you must add the further VOs to the configuration.

# Map VO members  cms
group ldap://grid-vo.nikhef.nl/ou=lcg1,o=cms,dc=eu-datagrid,dc=org .cms
 
# Map VO members  dteam
group ldap://lcg-vo.cern.ch/ou=lcg1,o=dteam,dc=lcg,dc=org .dteam

# A list of authorised users.
auth ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org

gmf_local /opt/edg/etc/grid-mapfile-local

The success of this step can be tested by viewing the content of "/etc/grid-security/grid-mapfile".
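The Gridmap file is regenerated from this configuration by the edg-mkgridmap command, which is normally also run periodically from cron. As a quick check, the following sketch regenerates the file and counts the entries for one of the expected VOs; the path to edg-mkgridmap is assumed from a standard LCG install and may differ on your system.

/opt/edg/sbin/edg-mkgridmap --output=/etc/grid-security/grid-mapfile
grep -c '\.dteam$' /etc/grid-security/grid-mapfile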

Adding to the PNFS Database

Once the Gridmap file is being updated, a database also needs to be added to the PNFS service for each VO. Please substitute ${vo} with the name of each virtual organisation in the Gridmap file.

. /usr/etc/pnfsSetup
PATH=$pnfs/tools:$PATH
mdb create ${vo} /opt/pnfsdb/pnfs/databases/${vo}
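For example, with the cms and dteam VOs from the Gridmap configuration above, the databases could be created in a single pass:

. /usr/etc/pnfsSetup
PATH=$pnfs/tools:$PATH
for vo in cms dteam; do
    mdb create $vo /opt/pnfsdb/pnfs/databases/$vo
done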

To enable PNFS to read the new databases, it must be informed that the set of databases has changed. The following command updates a running copy of PNFS.

mdb update

To confirm that a VO has successfully been added to the PNFS system, the following command can be used to establish the database ID of each virtual organisation (VO).

mdb show
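The output contains one row per database, including its ID. Purely as a hypothetical illustration (the exact columns and formatting may differ between PNFS versions):

  ID  Name    Type  Status       Path
   1  admin   r     enabled (r)  /opt/pnfsdb/pnfs/databases/admin
   4  dteam   r     enabled (r)  /opt/pnfsdb/pnfs/databases/dteam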

Adding a Directory for the PNFS Database

To map a VO to a pool, you first have to tag the directory in the PNFS filesystem that the VO will use. The tags will be inherited by any directory created under the tagged directory after it has been tagged. First create the directory with the command below, substituting ${ID} with the ID returned by "mdb show"; the special .(${ID})(${vo}) name attaches the new directory to the VO's database. It is worth noting that mkdir may exit with an error, but if the directory exists afterwards, the command was successful.

mkdir -p "/pnfs/$(hostname -d)/data/.(${ID})(${vo})"
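For example, if "mdb show" reported ID 4 for the dteam database (a hypothetical value), this would be:

mkdir -p "/pnfs/$(hostname -d)/data/.(4)(dteam)"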

Next, change into the data directory (/pnfs/$(hostname -d)/data) and set the storage group tags for each VO directory. (There are four spaces between StoreName and the VO name; it is not known whether this is significant.)

cd ${vo}
echo "StoreName    ${vo}" > ".(tag)(OSMTemplate)"
echo "${vo}" > ".(tag)(sGroup)"
cd ..
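The tags can be read back to confirm they were set correctly; in PNFS each tag appears as a magic file within the tagged directory, and ".(tags)()" lists all tags:

cd ${vo}
cat ".(tag)(OSMTemplate)"
cat ".(tag)(sGroup)"
cat ".(tags)()"
cd ..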

Note that although we use the same name for both tags here, it is not necessary to do so: for instance, the Tier 1 has a dteam directory whose .(tag)(sGroup) contains the word tape, and this is used to map to a separate set of pools for access to the Atlas DataStore.

Pool groups and directory affinities

The second part of configuring mappings between VOs and pools involves the PoolManager. If your dCache instance is halted, the following commands can be added to /opt/d-cache/config/PoolManager.conf on the admin node; otherwise they should be entered into the PoolManager module of the admin interface, remembering to finish with save to write the configuration to disk.
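If the instance is running, the PoolManager module can be reached as sketched below; port 22223 is the usual default for the admin interface, and the node name is hypothetical. The psu commands that follow are then typed at its prompt.

ssh -c blowfish -p 22223 admin@admin.example.org
cd PoolManager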

psu create pgroup ${vo}-pgroup
psu create unit -store ${vo}:${vo}@osm
psu create ugroup ${vo}
psu addto ugroup ${vo} ${vo}:${vo}@osm
psu create link ${vo}-link world-net ${vo}
psu add link ${vo}-link ${vo}-pgroup
psu set link ${vo}-link -readpref=10 -writepref=10 -cachepref=10

Note that most of the names used in the above commands are merely conventional; there is no requirement to follow this scheme. The commands are explained step by step in the following text.

psu create pgroup ${vo}-pgroup

This creates a pool group, which is exactly what it sounds like: a group of pools.

psu create unit -store ${vo}:${vo}@osm

This command defines a unit: something that matches against a property of the incoming request, in this case the storage information describing where the file should be written. The names in this command do matter; they must match those used to tag the directory earlier, with the name from .(tag)(OSMTemplate) coming first and the name from .(tag)(sGroup) second.

psu create ugroup ${vo}

This creates a unit group, which is just a group of units. The group is empty when created; the next command populates it.

psu addto ugroup ${vo} ${vo}:${vo}@osm

The fourth command adds the unit created above to the new unit group.

psu create link ${vo}-link world-net ${vo}

The fifth command creates a link, which is the mapping between incoming requests and destination pools, and adds two unit groups to it: world-net is an existing unit group that matches requests coming from any IP address, and the second unit group is the one just created.

psu add link ${vo}-link ${vo}-pgroup

The sixth command adds the pool group created earlier to the new link.

psu set link ${vo}-link -readpref=10 -writepref=10 -cachepref=10

The seventh command sets the read, write, and cache preferences of the link; a preference of 0 would exclude the link from that type of operation, while higher values make it preferred over links with lower values.

Once all those commands are done, psu addto pgroup ${vo}-pgroup <poolname> will add a pool to the pool group. If this pool is not meant to be accessible to all VOs, you may wish to remove it from the default pool group with psu removefrom pgroup default <poolname>, to ensure that files from other VOs cannot be written to that pool. Note that a pool can belong to more than one pool group, so it is perfectly possible to have two VOs writing to the same pool; however, there is no way to stop one VO using all of the space in the pool.
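As a concrete illustration, the full sequence for the dteam VO, attaching a hypothetical pool named pool1 and reserving it for dteam alone, would read:

psu create pgroup dteam-pgroup
psu create unit -store dteam:dteam@osm
psu create ugroup dteam
psu addto ugroup dteam dteam:dteam@osm
psu create link dteam-link world-net dteam
psu add link dteam-link dteam-pgroup
psu set link dteam-link -readpref=10 -writepref=10 -cachepref=10
psu addto pgroup dteam-pgroup pool1
psu removefrom pgroup default pool1
save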