
S3cmd is a free, open-source command-line tool and client for uploading, retrieving and managing data in S3-compatible object storage. It is a powerful tool for advanced users who are familiar with command-line programs, yet simple enough for beginners to learn quickly. It is also well suited to automation using, for example, shell scripts or cron.

Object Storage Access Details

VPS Cloud+ Object Storage is fully S3-compliant, meaning any existing S3 client should be able to connect and access your Object Storage. To do so, you will need to configure your S3 client to authenticate with the storage.

You can find the access details for your Object Storage in your VPS Cloud+ Control Panel.

                               Fig. 1: Object Storage Access Details

Make note of your Object Storage’s Endpoint, Access Key and Secret Key, as these are needed to connect using an S3 client.

Installing S3cmd

Thanks to its ease of automation, S3cmd is a good option for managing your Object Storage from your cloud server.

Linux

You can install S3cmd on most Linux distributions using pip, the package installer for the Python programming language.

First, check that Python is installed and available.

  # Ubuntu and Debian
  sudo apt-get install python3 python3-distutils -y

  # CentOS
  sudo yum install python3 -y

Next, download the pip install script.

   wget https://bootstrap.pypa.io/get-pip.py -O ~/get-pip.py

Then install pip using the following command.

   sudo python3 ~/get-pip.py

Finally, install S3cmd using pip.

   sudo pip install s3cmd

Mac

S3cmd can be easily installed on Mac via brew:

   brew install s3cmd

Windows

S3cmd requires Python on Windows. Download and install Python 2.7 or newer; Python 3.x is supported with S3cmd version 2.x. After installation, ensure that the Python directory is added to your global PATH variable.

Once Python is installed and working, download the S3cmd source code from s3tools/s3cmd and navigate to the downloaded directory. Run the following command to install S3cmd:

     python setup.py install

After S3cmd is installed, all commands on Windows take the following format:

     python s3cmd <command> <options>


That’s it! Now that S3cmd is installed, you’ll need to configure it to connect to your Object Storage. Continue below with the steps on how to accomplish this.

Configuring S3cmd

S3cmd connects to a single Object Storage at a time using a configuration file, which contains all the keys and details needed to manage buckets and files on that Object Storage.

Run the following command to use the configuration script.

  s3cmd --configure

Then enter the required details in the order they appear. The values highlighted below are examples; use your own keys and endpoint URL as described in the first section of this guide. Fields left empty in the example below can be skipped by pressing the Enter key to accept the default value.

Then at the end, confirm to save settings.

Access Key: LUZ4VODIY10KSMXDNOLQ
Secret Key: wm7YMOVRj93x1fRqDC6ahvs141hiv8Hw4YMCj4Wa
Default Region [US]: 
S3 Endpoint [s3.amazonaws.com]: external.object.gb.thghosting.cloud
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: external.object.gb.thghosting.cloud
Encryption password:
Path to GPG program [/usr/bin/gpg]:
Use HTTPS protocol [Yes]: Yes
HTTP Proxy server name: 
...
Test access with supplied credentials? [Y/n] 
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
...
Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'


Once configured, you are ready to start working on your Object Storage!

If you want to be able to access multiple Object Storage devices from the same system, you can create .s3cfg configuration files for each and run S3cmd with the -c parameter.

  s3cmd -c /path/to/.s3cfg <command>
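
For example, assuming a second configuration has been saved as ~/.s3cfg-second (a hypothetical file name), you could list the buckets of that Object Storage with:

  s3cmd -c ~/.s3cfg-second ls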


Managing Buckets

Now that you have S3cmd configured, test out a few commands to see how it works.

All data in Object Storage is organized in “buckets” of objects.

Create Bucket

To start with, create a new bucket with the following command. Replace the example-bucket with whatever you want to name your bucket.

  s3cmd mb s3://example-bucket

    Bucket 's3://example-bucket/' created


List Buckets

Once you’ve created a bucket, you can confirm it by listing the buckets in your Object Storage.

  s3cmd ls

    2021-04-13 19:38  s3://example-bucket


Remove Bucket

If you want to remove a certain bucket, you can delete it by using the following command.

  s3cmd rb s3://example-bucket

    Bucket 's3://example-bucket/' removed


Note that all objects must be placed within a bucket. If you followed the above and deleted the example-bucket, create a new bucket as shown below, then continue to the next section to learn how to transfer files to and from the Object Storage.
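
For example, to recreate the bucket used in the remaining examples:

  s3cmd mb s3://example-bucket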


Managing Files

S3cmd follows common file repository terminology in the object operations. It refers to copying files into the Object Storage as “put” and downloading files from the Object Storage as “get”. You can test out the various commands by using the example commands as described in this section.

Put Object

Once you’ve created your bucket, put a file or files into it with the following command.

  s3cmd put FILE [FILE...] s3://example-bucket[/PREFIX]

You can transfer any number of files at the same time by simply listing them separated by a space. You can also further organize the files into groups by assigning an optional prefix.
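
For example, to upload two local files (hypothetical file names) and group them under an optional backup/ prefix, you could run:

  s3cmd put file1.txt file2.txt s3://example-bucket/backup/

The example below creates an empty test file and uploads it to the bucket: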

  touch example.txt
  s3cmd put example.txt s3://example-bucket

    upload: 'example.txt' -> 's3://example-bucket/example.txt'  [1 of 1]
     0 of 0     0% in    0s     0.00 B/s  done

List Objects

Once copied into the Object Storage, the file can be found by listing the contents of the target bucket.

  s3cmd ls s3://example-bucket

    2021-04-13 19:42  0  s3://example-bucket/example.txt

Get Object

Getting objects from buckets follows the same logic as put operations but in reverse order.

  s3cmd get s3://BUCKET/OBJECT LOCAL_FILE

Get the same example.txt but rename it during the download using the following command.

  s3cmd get s3://example-bucket/example.txt example2.txt

    download: 's3://example-bucket/example.txt' -> 'example2.txt' [1 of 1] 

                        0 of 0 0% in 0s 0.00 B/s done

Copy Object

You can also copy files between buckets within the same Object Storage without having to save the object elsewhere.

  s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2/OBJECT2

First, create a new bucket, then copy the example.txt into it.

  s3cmd mb s3://new-bucket

    s3cmd cp s3://example-bucket/example.txt s3://new-bucket/

    remote copy: 's3://example-bucket/example.txt' -> 's3://new-bucket/example.txt'

Move Object

There’s also an option to move objects between buckets which works much like the copy operation above.

  s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]

The following command will move the example.txt to the new-bucket and rename it to example2.txt.

  s3cmd mv s3://example-bucket/example.txt s3://new-bucket/example2.txt

  move: 's3://example-bucket/example.txt' -> 's3://new-bucket/example2.txt'

List All Objects

Afterward, you can check the files in all buckets by using the list-all command as shown below.

  s3cmd la

    2021-04-13 19:45  0  s3://new-bucket/example.txt
    2021-04-13 19:47  0  s3://new-bucket/example2.txt

Delete Object

Naturally, it’s also possible to delete files when they are no longer needed.

  s3cmd rm s3://BUCKET/OBJECT

Once done, delete the example files from the new-bucket with the following command.

  s3cmd rm s3://new-bucket/example.txt s3://new-bucket/example2.txt

    delete: 's3://new-bucket/example.txt'

   delete: 's3://new-bucket/example2.txt'

Afterward, both of your test buckets should be empty and can be deleted as well.
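
For example, assuming both test buckets are now empty, they can be removed with:

  s3cmd rb s3://example-bucket
  s3cmd rb s3://new-bucket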

Managing Bucket and File Access

Using S3cmd you can manage access permissions of buckets and files allowing you to make them “public” or “private”. You can make a bucket or file public and share the link allowing another user to access your data. Or, you can make a bucket or file private to ensure that only you have access to your data. By default, buckets and files are private.

Making a bucket or file public

The following commands can be used to set a bucket or file as public, allowing access to anyone with the link.

  s3cmd setacl s3://example-bucket --acl-public

    s3://example-bucket/: ACL set to Public

    s3cmd setacl s3://example-bucket/example.txt --acl-public

    s3://example-bucket/example.txt: ACL set to Public [1 of 1]

A public bucket or file can then be accessed at a URL of the following format:


<Object Storage Endpoint>/<Tenant ID>:<Bucket Name>/<File Name>

https://external.object.gb.thghosting.cloud/1027142d942a4526a9d2acf6cbec2920_VPS_net:example-bucket

https://external.object.gb.thghosting.cloud/1027142d942a4526a9d2acf6cbec2920_VPS_net:example-bucket/example.txt

Both the Object Storage Endpoint and Tenant ID can be retrieved from the UI as seen in Fig. 1.
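
To quickly verify that public access works, you can request the file from the command line, for example with curl (using the example URL above); a 200 OK response means the object is publicly readable:

  curl -I https://external.object.gb.thghosting.cloud/1027142d942a4526a9d2acf6cbec2920_VPS_net:example-bucket/example.txt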

Making a bucket or file private

The following commands can be used to set a bucket or file back to private, allowing access only to the owner.

  s3cmd setacl s3://example-bucket --acl-private

    s3://example-bucket/: ACL set to Private

    s3cmd setacl s3://example-bucket/example.txt --acl-private

    s3://example-bucket/example.txt: ACL set to Private [1 of 1]

Other operations

Disk Usage

Provides information about the storage used by a bucket.

  s3cmd du [s3://BUCKET[/PREFIX]]

    s3cmd du s3://example-bucket

                 0 1 objects s3://example-bucket/

Bucket and Object Details

Used to get information about buckets or objects.

  s3cmd info s3://BUCKET[/OBJECT]

  s3cmd info s3://example-bucket

    s3://example-bucket/ (bucket):
       Location:  default
       Payer:     BucketOwner
       Expiration Rule: none
       Policy:    none
       CORS:      none
       ACL:       679c6a91893e4eb1adb3a440523333fe_VPS_net$admin: FULL_CONTROL

  s3cmd info s3://example-bucket/example.txt

    s3://example-bucket/example.txt (object):
       File size: 0
       Last mod:  Tue, 13 Apr 2021 19:54:39 GMT
       MIME type: text/plain
       Storage:   STANDARD
       MD5 sum:   d41d8cd98f00b204e9800998ecf8427e
       SSE:       none
       Policy:    none
       CORS:      none
       ACL:       679c6a91893e4eb1adb3a440523333fe_VPS_net$admin: FULL_CONTROL
       x-amz-meta-s3cmd-attrs: atime:1618343673/ctime:1618343673/gid:20/gname:staff/md5:d41d8cd98f00b204e9800998ecf8427e/mode:33188/mtime:1618343673/uid:502/uname:admin

Modify Metadata

Can be used to change the object metadata.

  s3cmd modify s3://BUCKET1/OBJECT
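
For example, assuming the example.txt object still exists, the following would add a Cache-Control header to it (--add-header is a standard S3cmd option; see s3cmd --help for the full list):

  s3cmd modify --add-header="Cache-Control: max-age=86400" s3://example-bucket/example.txt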

Help

Check the full list of supported Object Storage operations by using the help command.

  s3cmd --help


Other Tools

Cyberduck (a UI tool for Windows and Mac users)


Cyberduck is a libre server and cloud storage browser for Mac and Windows with support for FTP, SFTP, WebDAV, Amazon S3, OpenStack Swift, Backblaze B2, Microsoft Azure & OneDrive, Google Drive and Dropbox. You can install Cyberduck from https://cyberduck.io/download/.


Configuring Cyberduck to access your Cloud+ Object Storage is simple. Open Cyberduck and click on “Open Connection”. Select Amazon S3 from the drop-down list and enter the access details (as seen in Fig. 1). Refer to Fig. 2 for entering the access details correctly. Click on Connect when done.

                               Fig. 2: Cyberduck Configuration Screen


Cyberduck connects to your Cloud+ Object Storage and loads a tree view of your buckets and files as seen below.

                               Fig. 3: Cyberduck Tree View

