
S3cmd: FAQ and Knowledge Base

Tips, Tricks and More
 About the s3cmd configuration file
 CloudFront support in s3cmd
 Enforcing server-side encryption for all objects in a bucket
 How can I remove a bucket that is not empty?
 How to configure s3cmd for alternative S3 compatible services
 How to restrict access to a bucket to specific IP addresses
 How to throttle bandwidth in s3cmd
 Why doesn't 's3cmd sync' support PGP / GPG encryption for files?




About the s3cmd configuration file

The s3cmd configuration file is named .s3cfg and is located in the user's home directory ($HOME), e.g. /home/username/.

On Windows the configuration file is called s3cmd.ini and is located in the Application Data folder under %USERPROFILE%, usually C:\Users\username\AppData\Roaming\s3cmd.ini.
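
Note that s3cmd can also be pointed at a configuration file in a non-standard location with the -c/--config option. A minimal sketch (the path here is just an example):

s3cmd -c /path/to/alternate.s3cfg ls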


The s3cmd configuration file contains all s3cmd settings. This includes the Amazon access key and secret key for s3cmd to use to connect to Amazon S3.

A basic configuration file is created automatically when you first issue the s3cmd --configure command after installation. You will be asked a few questions about your Amazon access key and secret key and other settings you wish to use, and then s3cmd will save that information in a new config file.

Other advanced settings can be changed (if needed) by editing the config file manually. Some of the settings contain the default values for s3cmd to use. For instance, you could change the multipart_chunk_size_mb default value from 15 to 5, and that would become the new default value for the s3cmd option --multipart-chunk-size-mb.
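
Most such settings can also be overridden per invocation on the command line, without editing the config file. A minimal sketch (the bucket name is hypothetical):

s3cmd put --multipart-chunk-size-mb=5 bigfile.bin s3://mybucket/bigfile.bin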


The following is an example of an s3cmd config file:

[default]
access_key = TUOWAAA99023990001
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
cache_file =
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
ignore_failed_copy = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
list_md5 = False
log_target_prefix =
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 4096
reduced_redundancy = False
restore_days = 1
secret_key = sd/ceP_vbb#eDDDK
send_chunk = 4096
server_side_encryption = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html



CloudFront support in s3cmd

CloudFront is Amazon's content delivery network (CDN): a large fleet of webservers distributed across datacentres around the globe that provides fast access to public files stored in your buckets. The idea of a CDN is to bring content as close to the user as possible. For instance, when a European user browses your website they are served by Europe-based webservers, while the same content accessed by clients from Japan is served by CDN servers in Asia. In this case the web content is stored in Amazon S3 and the CDN in use is Amazon CloudFront. See more details at Amazon's CloudFront page.

Since these two services are very closely related, it makes sense to have CloudFront support directly in s3cmd. CloudFront support has been available since version 0.9.9.

How is CloudFront related to Amazon S3?

  • About buckets — As you know, the files uploaded to Amazon S3 are organised in buckets. A bucket can have a name of your choice, but it pays off to name it in a DNS-compatible way. In general that means only lowercase characters from the following groups: a-z, 0-9, - (dash) and . (dot). Buckets with DNS-incompatible names are not usable with CloudFront. A DNS-compatible bucket name is, for instance, s3tools-test; in s3cmd URI syntax that is s3://s3tools-test
  • About publicly accessible files — A file uploaded to S3 with a public ACL is accessible to anyone over standard HTTP. For example, upload a file logo.png to the above-named bucket:
    s3cmd put --acl-public logo.png s3://s3tools-test/example/logo.png

    The HTTP hostname is always bucketname.s3.amazonaws.com, so in our case the file would be accessible as http://s3tools-test.s3.amazonaws.com/example/logo.png
  • About virtual hosts — If you don't like the public URL above, check out Amazon S3 virtual hosts: if your bucket name is a fully qualified domain name and your DNS is set up properly, you can refer to the bucket directly by its name. For instance, let's have a bucket called s3://public.s3tools.org and upload the above-mentioned logo.png in there:
    s3cmd put --acl-public logo.png s3://public.s3tools.org/example/logo.png
    Create a DNS record for public.s3tools.org to have a CNAME of public.s3tools.org.s3.amazonaws.com:
    public.s3tools.org.   IN   CNAME   public.s3tools.org.s3.amazonaws.com.
    From now on everybody can access the logo as http://public.s3tools.org/example/logo.png – this way you can offload all the static images, PDF documents, etc. from your web server to Amazon S3.
  • About CloudFront on the scene — The disadvantage of the above is that your content sits in a datacentre either in the US or in Europe. If it's in the EU and your visitor lives in the South Pacific, they'll experience poor access performance, and even if they live in the US it still won't be optimal. Wouldn't it be nice to bring your content closer to them? Let Amazon copy it to the CloudFront datacentres in many places around the world and let it do the magic of selecting the closest datacentre for each client. Simply create, for example, a DNS record cdn.s3tools.org pointing to a special CNAME that we'll find out in a later example, and have all your static content at http://cdn.s3tools.org/.... This cdn.s3tools.org name will resolve to different IP addresses in different parts of the world, always pointing to the closest available CloudFront datacentre. The aforementioned logo.png accessed through the CDN now has the URL http://cdn.s3tools.org/example/logo.png

How to manage CloudFront using s3cmd

  • CloudFront is set up at a bucket level — you can publish one or more of your buckets through CloudFront, creating a CloudFront distribution (CFD) for each bucket in question. To publish our public.s3tools.org bucket let’s do:
    s3cmd cfcreate s3://public.s3tools.org
  • Each CFD has a unique Distribution ID (DistId) in the form of a URI, e.g. cf://123456ABCDEF. It's printed in the output of s3cmd cfcreate:
    Distribution created:
    Origin:         s3://public.s3tools.org/
    DistId:         cf://E3RPA4Z4ALGTGO
    DomainName:     d11jv2ffak0j4h.cloudfront.net
    CNAMEs:
    Comment:        http://public.s3tools.org.s3.amazonaws.com/
    Status:         InProgress
    Enabled:        True
    Etag:           E3JGOIONPT9834
  • Each CFD has a unique "canonical" hostname automatically assigned by Amazon at the time the CFD is created, for instance d11jv2ffak0j4h.cloudfront.net. It can be found in the cfcreate output, or later on with cfinfo:
    ~$ s3cmd cfinfo
    Origin:         s3://public.s3tools.org/
    DistId:         cf://E3RPA4Z4ALGTGO
    DomainName:     d11jv2ffak0j4h.cloudfront.net
    Status:         Deployed
    Enabled:        True
  • Apart from the canonical name you can assign up to 10 DNS aliases to each CFD. For example, the above canonical name can have an alias of cdn.s3tools.org. Either add the CNAMEs at the time of CFD creation or later with the cfmodify command:
    ~$ s3cmd cfmodify cf://E3RPA4Z4ALGTGO --cf-add-cname cdn.s3tools.org
    Distribution modified:
    Origin:         s3://public.s3tools.org/
    DistId:         cf://E3RPA4Z4ALGTGO
    DomainName:     d11jv2ffak0j4h.cloudfront.net
    Status:         InProgress
    CNAMEs:         cdn.s3tools.org
    Comment:        http://public.s3tools.org.s3.amazonaws.com/
    Enabled:        True
    Etag:           E19WWJ5059E2W3

    At this moment you should update your DNS again:
    cdn.s3tools.org.   IN   CNAME   d11jv2ffak0j4h.cloudfront.net.
  • Run cfinfo to confirm that your change has been deployed. Look for the Status: and Enabled: fields:
    ~$ s3cmd cfinfo cf://E3RPA4Z4ALGTGO
    Origin:         s3://public.s3tools.org/
    DistId:         cf://E3RPA4Z4ALGTGO
    DomainName:     d11jv2ffak0j4h.cloudfront.net
    Status:         Deployed
    CNAMEs:         cdn.s3tools.org
    Comment:        http://public.s3tools.org.s3.amazonaws.com/
    Enabled:        True
    Etag:           E19WWJ5059E2W3
  • Congratulations, you're set up. Now you should be able to access CloudFront using the host name of your choice, http://cdn.s3tools.org/example/logo.png, and serve your visitors faster than ever ;-)
  • You may well want to remove your CloudFront distribution later. Simply run s3cmd cfremove cf://E3RPA4Z4ALGTGO to achieve that. Be aware that it will take a couple of minutes to finish, because the CFD must be disabled first and that change must be propagated ("deployed") before the distribution is actually removed. It's perhaps easier to disable it manually using s3cmd cfmodify --disable cf://E3RPA4Z4ALGTGO, go get a coffee, and once you're back check that cfinfo says Enabled: False and Status: Deployed. At that moment s3cmd cfremove should succeed immediately; the whole sequence is sketched below.
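
Put together, the removal steps from the last bullet might look like this (same example DistId as above):

    ~$ s3cmd cfmodify --disable cf://E3RPA4Z4ALGTGO
    ~$ s3cmd cfinfo cf://E3RPA4Z4ALGTGO        # repeat until Enabled: False and Status: Deployed
    ~$ s3cmd cfremove cf://E3RPA4Z4ALGTGO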



Enforcing server-side encryption for all objects in a bucket

Amazon S3 supports bucket policies that you can use to require server-side encryption for all objects stored in your bucket. For example, the following bucket policy denies the upload permission (s3:PutObject) to everyone if the request does not include the x-amz-server-side-encryption header requesting server-side encryption.

{
   "Version":"2012-10-17",
   "Id":"PutObjPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":{
            "AWS":"*"
         },
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::YourBucket/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"AES256"
            }
         }
      }
   ]
}

In S3cmd, the --server-side-encryption option adds the x-amz-server-side-encryption header to uploaded objects.
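
Assuming you save the policy above into a local file (the name policy.json is just an example) and your s3cmd version provides the setpolicy command, a minimal sketch of putting the two together:

s3cmd setpolicy policy.json s3://YourBucket
s3cmd put --server-side-encryption somefile s3://YourBucket/somefile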



How can I remove a bucket that is not empty?

You have to empty it first, sorry :-) There are two ways:

  1. The convenient one is available in s3cmd 0.9.9 and newer and is as simple as s3cmd del --recursive s3://bucket-to-delete
  2. The less convenient one, available prior to s3cmd 0.9.9, involves creating an empty directory, say /tmp/empty, and synchronizing its content (i.e. nothing) to the bucket: s3cmd sync --delete /tmp/empty s3://bucket-to-delete

Once the bucket is empty it can then be removed with s3cmd rb s3://bucket-to-delete.
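
For example, the complete removal with s3cmd 0.9.9 or newer:

s3cmd del --recursive s3://bucket-to-delete
s3cmd rb s3://bucket-to-delete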



How to configure s3cmd for alternative S3 compatible services

Please see the s3cmd wiki on GitHub.



How to restrict access to a bucket to specific IP addresses

To secure our files on Amazon S3, we can restrict access to an S3 bucket to specific IP addresses.

The following bucket policy grants permission to any user to perform any S3 action on objects in the specified bucket. However, the request must originate from the range of IP addresses specified in the condition. The condition in this statement identifies the 192.168.143.0/24 range of allowed IP addresses, with one exception: 192.168.143.188.

{
    "Version": "2012-10-17",
    "Id": "S3PolicyIPRestrict",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket/*",
            "Condition" : {
                "IpAddress" : {
                    "aws:SourceIp": "192.168.143.0/24"
                },
                "NotIpAddress" : {
                    "aws:SourceIp": "192.168.143.188/32"
                }
            }
        }
    ]
}

The IpAddress and NotIpAddress values specified in the condition use CIDR notation, as described in RFC 4632. For more information, go to http://www.rfc-editor.org/rfc/rfc4632.txt
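
As with the encryption policy earlier, if your s3cmd version provides the setpolicy command you can apply this policy from a local file (ip-policy.json is a hypothetical name):

s3cmd setpolicy ip-policy.json s3://bucket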



How to throttle bandwidth in s3cmd

On Linux you can throttle bandwidth using the throttle command line program, e.g.

cat mybigfile | throttle -k 512 | s3cmd put - s3://mybucket/mybigfile

which would limit reads from mybigfile to 512 kbps (kilobits per second).

Throttle is available via apt-get or yum in all the major Linux distributions.


Alternatively, the utility trickle can be used:

trickle -d 250 s3cmd get... would limit the download rate of s3cmd to 250 kilobytes per second.

trickle -u 250 s3cmd put... would limit the upload rate of s3cmd to 250 kilobytes per second.

Trickle can be installed with yum or apt-get if you're on a Fedora or Debian/Ubuntu machine. You must have the libevent library (Trickle's only dependency) installed before you install Trickle. Most modern distributions will already have it installed.

On Windows, the throttle and trickle utilities are not available, but if you are using S3Express instead of s3cmd, you can limit the maximum bandwidth simply by using the -maxb flag of the PUT command.
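
Note also that recent s3cmd releases include a built-in --limit-rate option (check s3cmd --help for your version), which may make the external tools unnecessary. A minimal sketch with a hypothetical bucket:

s3cmd put --limit-rate=250k mybigfile s3://mybucket/mybigfile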



Why doesn't 's3cmd sync' support PGP / GPG encryption for files?

What the s3cmd sync command does is:

  1. Walk the filesystem to generate a list of local files
  2. Retrieve a list of remote files uploaded to Amazon S3
  3. Compare these two lists to find which local files need to be uploaded and which remote files should be deleted

The information about remote files that we get from Amazon S3 is limited to the names, sizes and MD5 checksums of the stored files. If a stored file is GPG-encrypted we only get the size and MD5 of the encrypted file, not of the original one, and therefore we can't compare the local and remote lists against each other.
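
Encryption does work for one-off transfers, though: s3cmd put accepts the -e/--encrypt option, which uses the gpg_* settings from the configuration file shown earlier. A minimal sketch with a hypothetical bucket:

s3cmd put --encrypt secret.txt s3://mybucket/secret.txt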

