Western Digital ActiveScale Connector
Last Updated: April 19, 2018
The Globus Western Digital ActiveScale Connector provides access to, and sharing of, data on the ActiveScale Storage System. The connector is available as an add-on subscription to organizations with a Globus Standard subscription - please contact us for pricing.
This document describes the steps needed to install a Globus endpoint and the ActiveScale Connector used to access the storage system. The installation should be performed by a system administrator. Once it is complete, users can access the ActiveScale storage via Globus to transfer, share, and publish data on the system.
Prerequisites
A functional Globus Connect Server installation is required for installation and use of the ActiveScale Connector. The server can be hosted on any machine that can connect to the ActiveScale system. The Globus Connect Server Installation Guide provides detailed documentation on the steps for installing and configuring a server endpoint.
The ActiveScale Connector is available for all distributions supported by Globus Connect Server.
Installation
Install the package globus-gridftp-server-s3 from the Globus repository.
For Red Hat-based systems:
$ yum install globus-gridftp-server-s3
For Debian-based systems:
$ apt-get install globus-gridftp-server-s3
For SLES 11-based systems:
$ zypper install globus-gridftp-server-s3
Configuration
The ActiveScale Connector requires the following steps for configuration:
- Configure the ActiveScale Connector
- Create a gridmap to S3 credentials
- Restart the GridFTP server
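The final step in the list above, restarting the GridFTP server so that new configuration in /etc/gridftp.d takes effect, can typically be done as follows (assuming a systemd-based system and the standard globus-gridftp-server service name; adjust for your init system):
```shell
$ systemctl restart globus-gridftp-server
```
On SysV-style systems, the equivalent is service globus-gridftp-server restart.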
Configure the ActiveScale Connector
Create the file /etc/gridftp.d/gridftp-s3 containing these lines:
threads 2
load_dsi_module s3
Edit the file /etc/globus/globus-gridftp-server-s3.conf and set the host_name option to the hostname of the ActiveScale storage system you want to access. For example, to configure the ActiveScale Connector to use the activescale.example.local storage system:
host_name = activescale.example.local
Create a file for each user containing their ActiveScale credentials
Each user will need to have a special file created which specifies the S3 credentials associated with their local user account. The default s3_map_filename configuration for the ActiveScale Connector looks in $HOME/.globus/s3 for a file mapping the current user's ID to S3 access keys. Each user who will be using the ActiveScale Connector must create such a file with their credentials. This file can be created and populated by the user with the following commands:
$ mkdir -m 0700 -p ~/.globus
$ (umask 077; echo "$(id -un);$S3_ACCESS_KEY_ID;$S3_SECRET_ACCESS_KEY" \
> ~/.globus/s3)
The S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY values are the Access Key ID and Secret Access Key of the user's S3 credentials, which must have been granted access to the S3 buckets the user intends to use.
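As a sketch of the resulting file format, each line holds three semicolon-separated fields: local username, access key ID, and secret access key. The key values below are hypothetical placeholders, not real credentials:
```shell
# Hypothetical placeholder credentials -- substitute your own values.
S3_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
S3_SECRET_ACCESS_KEY="exampleSecretAccessKey"

# Build the mapping line exactly as the command above writes it.
line="$(id -un);$S3_ACCESS_KEY_ID;$S3_SECRET_ACCESS_KEY"
echo "$line"

# The connector expects exactly three semicolon-separated fields per line.
fields=$(echo "$line" | awk -F';' '{print NF}')
echo "$fields"
```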
Optionally create a service user account
Since ActiveScale users need not have user accounts on the local endpoint, ActiveScale transfers can be configured to run under a local service user account. Create a user named globus-s3, and add the following line to the /etc/gridftp.d/gridftp-s3 configuration file:
process_user globus-s3
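The globus-s3 account mentioned above can be created, for example, with useradd (assuming a typical Linux system; flags may differ by distribution). The --create-home flag matters because the credential file lives under this account's home directory:
```shell
$ useradd --system --create-home globus-s3
```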
The s3_map_filename credential file configuration, and any Globus Connect Server configuration that refers to $HOME, such as SharingStateDir, will use the home directory of this account. Ensure that these files are accessible only by the globus-s3 account.
All credentials must be stored in a single s3_map_filename file in the format above, by default ~globus-s3/.globus/s3. The username in each line must correspond to the username determined by the Globus Connect Server AuthorizationMethod, e.g. CILogon or Gridmap.
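A sketch of such a combined mapping file, one line per mapped user, with hypothetical usernames and placeholder keys (written to /tmp here for illustration; a real deployment would use ~globus-s3/.globus/s3, mode 0600, owned by globus-s3):
```shell
# Write a sample combined mapping file with hypothetical users and
# placeholder keys -- substitute real mapped usernames and credentials.
cat > /tmp/s3-map-example <<'EOF'
alice;AKIAALICEEXAMPLEKEY;aliceExampleSecret
bob;AKIABOBEXAMPLEKEY;bobExampleSecret
EOF

# One credential line per user.
wc -l < /tmp/s3-map-example
```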
Debugging Tips
To enable a highly verbose debugging log for the ActiveScale Connector, set the environment variable GLOBUS_S3_DEBUG to "1023,/tmp/s3.log". For a GridFTP configuration, this is easily done by creating a file /etc/gridftp.d/s3-debug with the contents:
$GLOBUS_S3_DEBUG "1023,/tmp/s3.log"
Basic Endpoint Functionality Test
After completing the installation, you should do some basic transfer tests with your endpoint to ensure that it is working. We document a process for basic endpoint functionality testing here.