Installation:

sudo apt update
sudo apt install s3cmd -y

OR

sudo apt install python3-pip
sudo pip install s3cmd

Commands:

  • s3cmd --configure
    • follow the steps to configure.
  • s3cmd ls
    • list existing buckets.
  • s3cmd ls s3://bucket_name/
    • list all objects in a bucket.
  • s3cmd ls s3://bucket_name/dir/
    • list all objects in a directory of a bucket.
  • s3cmd la
    • list all objects inside all buckets.
  • s3cmd put file.zip s3://bucket_name/
    • upload a file.
  • s3cmd put --multipart-chunk-size-mb=100 file.zip s3://bucket_name/
    • upload a file, breaking it into 100MB chunks. The final file will be shown as one.
  • s3cmd get s3://bucket_name/file.zip
    • download a file.
  • s3cmd del s3://bucket_name/file.zip
    • delete a file.
  • s3cmd put -r /directory s3://bucket_name/
    • upload a directory recursively.
  • s3cmd mv s3://bucket_name/source/file.zip s3://bucket_name/destination/file.zip
    • move a file.
  • s3cmd mv -r s3://bucket_name/source/ s3://bucket_name/destination/
    • move a directory recursively.
  • s3cmd sync . -r s3://bucket_name/
    • sync the current local directory to the bucket recursively.
  • s3cmd sync . --delete-removed s3://bucket_name/destination/
    • sync the current local directory to a directory in the bucket, deleting from the destination any files that were removed locally.
  • s3cmd signurl s3://bucket_name/file.zip +600
    • generates a public URL for download that will be available for 10 minutes (600 seconds).
  • s3cmd signurl s3://bucket_name/file.zip $(date "+%s" -d "12/31/2030 23:59:59")
    • generates a public URL for download that will be available until the specified date and time.
  • mysqldump --all-databases | gzip | s3cmd put - s3://bucketName/fileName.sql.gz
    • pipe command output directly to the S3 bucket (see the restore example after this list).
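
To restore such a dump later, the pipeline can be reversed. This is a sketch, assuming your s3cmd version supports streaming a downloaded object to stdout with "-":

s3cmd get s3://bucketName/fileName.sql.gz - | gunzip | mysql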

To throttle the upload speed (to 1 MB/s in this example) on Linux, use the following command:

cat mybigfile | throttle -k 1024 | s3cmd put - s3://bucket_name/mybigfile

Each user's configuration is saved in ~/.s3cfg.
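
For reference, here is a minimal sketch of the ~/.s3cfg that s3cmd --configure generates for AWS (the keys and region are placeholders):

[default]
access_key = <ACCESS_KEY>
secret_key = <SECRET_KEY>
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
bucket_location = us-east-1
use_https = True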


BONUS

It is also possible to mount an S3 bucket on Ubuntu as a file system:

sudo apt install s3fs -y
echo ABGTKAHSICAKIAWDUVUL:JUllFbRAmydkgEz089/UZgm/VR1ShAZejOFiPan0 | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs
sudo mkdir /bucket-name
sudo s3fs bucket-name /bucket-name

Note: use the file /etc/passwd-s3fs for a system-wide credential or ~/.passwd-s3fs for a specific user.
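
A sketch of the per-user variant (the mount point under the home directory is just an example):

echo <ACCESS_KEY>:<SECRET_KEY> > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
mkdir ~/bucket-name
s3fs bucket-name ~/bucket-name -o passwd_file=~/.passwd-s3fs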

To enable debug mode, use:

s3fs bucket-name /bucket-name -o dbglevel=info -f -o curldbg

And to auto-mount the bucket on startup, add the following line to /etc/fstab:

s3fs#bucket-name /bucket-name fuse _netdev,allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0
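
The entry can be tested without rebooting by running:

sudo mount -a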

To use it with Linode's S3-compatible Object Storage service, use the following command:

s3fs bucket-name /bucket-name -o url="https://us-east-1.linodeobjects.com" -o endpoint=us-east-1 -o passwd_file=/etc/passwd-s3fs

For /etc/fstab (replace userid and groupid with the numeric user and group IDs that should own the mount):

s3fs#bucket-name /bucket-name fuse _netdev,allow_other,use_path_request_style,url=https://us-east-1.linodeobjects.com/,uid=userid,gid=groupid 0 0

Back up a whole directory (recursively), encrypt it, and push it to AWS S3:

tar -cvzf - directoryName | gpg -r "[email protected]" --yes --encrypt - | aws s3 cp - s3://bucketName/fileName.tar.gpg
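To restore, the stream can be reversed. This is a sketch, assuming the private key for [email protected] is available on the machine doing the restore:

aws s3 cp s3://bucketName/fileName.tar.gpg - | gpg --decrypt | tar -xvzf -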

BONUS

Cloudflare offers an S3-compatible object storage service called R2. Here is what the configuration file ~/.s3cfg should look like:

[default]
host_base = <ACCOUNT_ID>.r2.cloudflarestorage.com
access_key = <ACCESS_KEY>
secret_key = <SECRET_KEY>
host_bucket = 
bucket_location = auto
enable_multipart = False

The AWS CLI tool can be used with other cloud providers (not only AWS). Here is an example configuration for connecting to Cloudflare R2, to be added to the configuration file ~/.aws/credentials:

[default]
endpoint_url = https://<ACCOUNT_ID>.r2.cloudflarestorage.com
aws_access_key_id = <ACCESS_KEY>
aws_secret_access_key = <SECRET_KEY>
region = auto
output = json
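
With that profile in place, the endpoint can also be passed explicitly per command, which is useful on older CLI versions that do not read endpoint_url from the profile (the bucket name below is just a placeholder):

aws s3 ls s3://bucket_name --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com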