The following is s3cmd's usage summary (as shown when you run s3cmd -h). To access all the options and commands listed below, you'll need version 1.5 or newer. Brief examples with placeholder bucket and file names follow the option list and several of the commands.
Usage: s3cmd [options] COMMAND [parameters]

S3cmd is a tool for managing objects in Amazon S3 storage. It allows for
making and removing "buckets" and uploading, downloading and removing
"objects" from these buckets.
Options:
-h, --help show this help message and exit
--configure Invoke interactive (re)configuration tool. Optionally
use as '--configure s3://some-bucket' to test access
to a specific bucket instead of attempting to list
them all.
-c FILE, --config=FILE
Config file name. Defaults to $HOME/.s3cfg
--dump-config Dump current configuration after parsing config files
and command line options and exit.
--access_key=ACCESS_KEY
AWS Access Key
--secret_key=SECRET_KEY
AWS Secret Key
-n, --dry-run Only show what should be uploaded or downloaded but
don't actually do it. May still perform S3 requests to
get bucket listings and other information though (only
for file transfer commands)
-e, --encrypt Encrypt files before uploading to S3.
--no-encrypt Don't encrypt files.
-f, --force Force overwrite and other dangerous operations.
--continue Continue getting a partially downloaded file (only for
[get] command).
--continue-put Continue uploading partially uploaded files or
multipart upload parts. Restarts parts/files that
don't have matching size and md5. Skips files/parts
that do. Note: md5sum checks are not always
sufficient to check (part) file equality. Enable this
at your own risk.
--upload-id=UPLOAD_ID
UploadId for Multipart Upload, in case you want to
continue an existing upload (equivalent to
--continue-put) and there are multiple partial
uploads. Use s3cmd multipart [URI] to see what
UploadIds are associated with the given URI.
--skip-existing Skip over files that exist at the destination (only
for [get] and [sync] commands).
-r, --recursive Recursive upload, download or removal.
--check-md5 Check MD5 sums when comparing files for [sync].
--no-check-md5 Do not check MD5 sums when comparing files for [sync].
Only size will be compared. May significantly speed up
transfer but may also miss some changed files.
-P, --acl-public Store objects with ACL allowing read for anyone.
--acl-private Store objects with default ACL allowing access for
you only
--acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
Grant stated permission to a given Amazon user.
Permission is one of: read, write, read_acp,
write_acp, full_control, all
--acl-revoke=PERMISSION:USER_CANONICAL_ID
Revoke stated permission for a given Amazon user.
Permission is one of: read, write, read_acp,
write_acp, full_control, all
-D NUM, --restore-days=NUM
Number of days to keep restored file available (only
for 'restore' command).
--delete-removed Delete remote objects with no corresponding local
file [sync]
--no-delete-removed Don't delete remote objects.
--delete-after Perform deletes after new uploads [sync]
--delay-updates Put all updated files into place at end [sync]
--max-delete=NUM Do not delete more than NUM files. [del] and [sync]
--add-destination=ADDITIONAL_DESTINATIONS
Additional destination for parallel uploads, in
addition to last arg. May be repeated.
--delete-after-fetch Delete remote objects after fetching to local file
(only for [get] and [sync] commands).
-p, --preserve Preserve filesystem attributes (mode, ownership,
timestamps). Default for [sync] command.
--no-preserve Don't store FS attributes
--exclude=GLOB Filenames and paths matching GLOB will be excluded
--exclude-from=FILE Read --exclude GLOBs from FILE
--rexclude=REGEXP Filenames and paths matching REGEXP (regular
expression) will be excluded from sync
--rexclude-from=FILE Read --rexclude REGEXPs from FILE
--include=GLOB Filenames and paths matching GLOB will be included
even if previously excluded by one of
--(r)exclude(-from) patterns
--include-from=FILE Read --include GLOBs from FILE
--rinclude=REGEXP Same as --include but uses REGEXP (regular expression)
instead of GLOB
--rinclude-from=FILE Read --rinclude REGEXPs from FILE
--ignore-failed-copy Don't exit unsuccessfully because of missing keys
--files-from=FILE Read list of source-file names from FILE. Use - to
read from stdin.
--bucket-location=BUCKET_LOCATION
Datacenter to create bucket in. As of now the
datacenters are: US (default), EU, ap-northeast-1,
ap-southeast-1, sa-east-1, us-west-1 and us-west-2
--reduced-redundancy, --rr
Store object with 'Reduced redundancy'. Lower per-GB
price. [put, cp, mv]
--access-logging-target-prefix=LOG_TARGET_PREFIX
Target prefix for access logs (S3 URI) (for [cfmodify]
and [accesslog] commands)
--no-access-logging Disable access logging (for [cfmodify] and [accesslog]
commands)
--default-mime-type=DEFAULT_MIME_TYPE
Default MIME-type for stored objects. Application
default is binary/octet-stream.
-M, --guess-mime-type
Guess MIME-type of files by their extension or mime
magic. Fall back to default MIME-type as specified by
the --default-mime-type option
--no-guess-mime-type Don't guess MIME-type and use the default type
instead.
--no-mime-magic Don't use mime magic when guessing MIME-type.
-m MIME/TYPE, --mime-type=MIME/TYPE
Force MIME-type. Override both --default-mime-type
and --guess-mime-type options.
--add-header=NAME:VALUE
Add a given HTTP header to the upload request. Can be
used multiple times. For instance set 'Expires' or
'Cache-Control' headers (or both) using this option.
--server-side-encryption
Specifies that server-side encryption will be used
when putting objects.
--encoding=ENCODING Override autodetected terminal and filesystem encoding
(character set). Autodetected: UTF-8
--add-encoding-exts=EXTENSIONs
Add encoding to these comma-delimited extensions,
e.g. (css,js,html), when uploading to S3
--verbatim Use the S3 name as given on the command line. No pre-
processing, encoding, etc. Use with caution!
--disable-multipart Disable multipart upload on files bigger than
--multipart-chunk-size-mb
--multipart-chunk-size-mb=SIZE
Size of each chunk of a multipart upload. Files bigger
than SIZE are automatically uploaded as multithreaded-
multipart, smaller files are uploaded using the
traditional method. SIZE is in Mega-Bytes, default
chunk size is 15MB, minimum allowed chunk size is
5MB, maximum is 5GB.
--list-md5 Include MD5 sums in bucket listings (only for 'ls'
command).
-H, --human-readable-sizes
Print sizes in human readable form (eg 1kB instead of
1234).
--ws-index=WEBSITE_INDEX
Name of index-document (only for [ws-create] command)
--ws-error=WEBSITE_ERROR
Name of error-document (only for [ws-create] command)
--progress Display progress meter (default on TTY).
--no-progress Don't display progress meter (default on non-TTY).
--enable Enable given CloudFront distribution (only for
[cfmodify] command)
--disable Disable given CloudFront distribution (only for
[cfmodify] command)
--cf-invalidate Invalidate the uploaded files in CloudFront. Also see
[cfinval] command.
--cf-invalidate-default-index
When using Custom Origin and S3 static website,
invalidate the default index file.
--cf-no-invalidate-default-index-root
When using Custom Origin and S3 static website, don't
invalidate the path to the default index file.
--cf-add-cname=CNAME Add given CNAME to a CloudFront distribution (only for
[cfcreate] and [cfmodify] commands)
--cf-remove-cname=CNAME
Remove given CNAME from a CloudFront distribution
(only for [cfmodify] command)
--cf-comment=COMMENT Set COMMENT for a given CloudFront distribution (only
for [cfcreate] and [cfmodify] commands)
--cf-default-root-object=DEFAULT_ROOT_OBJECT
Set the default root object to return when no object
is specified in the URL. Use a relative path, i.e.
default/index.html instead of /default/index.html or
s3://bucket/default/index.html (only for [cfcreate]
and [cfmodify] commands)
-v, --verbose Enable verbose output.
-d, --debug Enable debug output.
--version Show s3cmd version (1.5.0-beta1) and exit.
-F, --follow-symlinks
Follow symbolic links as if they are regular files
--cache-file=FILE Cache FILE containing local source MD5 values
-q, --quiet Silence output on stdout
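
Most of these options combine freely with the commands listed below. As a quick sketch, with my-bucket standing in for a real bucket name, first-time setup and a recursive public upload might look like:

s3cmd --configure
s3cmd put --recursive --acl-public ./public/ s3://my-bucket/public/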
Commands:
Make bucket
s3cmd mb s3://BUCKET
Remove bucket
s3cmd rb s3://BUCKET
List objects or buckets
s3cmd ls [s3://BUCKET[/PREFIX]]
List all objects in all buckets
s3cmd la
Put file into bucket
s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
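For instance, a one-off upload that forces a caching header and a public-read ACL (my-bucket and image.png are placeholders):

s3cmd put -P --add-header="Cache-Control: max-age=86400" image.png s3://my-bucket/img/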
Get file from bucket
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
Delete file from bucket
s3cmd del s3://BUCKET/OBJECT
Restore file from Glacier storage
s3cmd restore s3://BUCKET/OBJECT
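For example, to bring an archived object back from Glacier and keep the restored copy available for a week (the object name is a placeholder):

s3cmd restore --restore-days=7 s3://my-bucket/archive/backup.tar.gz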
Synchronize a directory tree to S3
s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
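A cautious sync usually starts as a dry run, with excludes and remote-delete enabled, so the plan can be inspected before dropping --dry-run (paths and bucket are placeholders):

s3cmd sync --dry-run --exclude '*.log' --delete-removed ./website/ s3://my-bucket/website/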
Disk usage by buckets
s3cmd du [s3://BUCKET[/PREFIX]]
Get various information about Buckets or Files
s3cmd info s3://BUCKET[/OBJECT]
Copy object
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Move object
s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Modify Access control list for Bucket or Files
s3cmd setacl s3://BUCKET[/OBJECT]
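For example, to make everything under a prefix publicly readable (placeholder names):

s3cmd setacl --acl-public --recursive s3://my-bucket/public/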
Modify Bucket Policy
s3cmd setpolicy FILE s3://BUCKET
Delete Bucket Policy
s3cmd delpolicy s3://BUCKET
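A minimal sketch of a policy that allows anonymous reads, written as a here-document and applied with setpolicy; the JSON follows AWS's documented bucket-policy format, and my-bucket is a placeholder:

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}
EOF
s3cmd setpolicy policy.json s3://my-bucket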
Show multipart uploads
s3cmd multipart s3://BUCKET [Id]
Abort a multipart upload
s3cmd abortmp s3://BUCKET/OBJECT Id
List parts of a multipart upload
s3cmd listmp s3://BUCKET/OBJECT Id
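A sketch of cleaning up a stale multipart upload: list the pending uploads, note the UploadId printed, then abort it (UPLOAD_ID stands in for an id taken from that listing):

s3cmd multipart s3://my-bucket
s3cmd abortmp s3://my-bucket/big-file.iso UPLOAD_ID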
Enable/disable bucket access logging
s3cmd accesslog s3://BUCKET
Sign arbitrary string using the secret key
s3cmd sign STRING-TO-SIGN
Sign an S3 URL to provide limited public access with expiry
s3cmd signurl s3://BUCKET/OBJECT expiry_epoch
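expiry_epoch is an absolute Unix timestamp. With GNU date it can be computed relative to now, e.g. a link valid for seven days (the object name is a placeholder):

s3cmd signurl s3://my-bucket/private/report.pdf $(date -d '+7 days' +%s)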
Fix invalid file names in a bucket
s3cmd fixbucket s3://BUCKET[/PREFIX]
Create Website from bucket
s3cmd ws-create s3://BUCKET
Delete Website
s3cmd ws-delete s3://BUCKET
Info about Website
s3cmd ws-info s3://BUCKET
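For example, creating a website endpoint with explicit index and error documents (file names are placeholders):

s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://my-bucket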
List CloudFront distribution points
s3cmd cflist
Display CloudFront distribution point parameters
s3cmd cfinfo [cf://DIST_ID]
Create CloudFront distribution point
s3cmd cfcreate s3://BUCKET
Delete CloudFront distribution point
s3cmd cfdelete cf://DIST_ID
Change CloudFront distribution point parameters
s3cmd cfmodify cf://DIST_ID
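For example, a distribution might be created from a bucket and then given a default root object, with DIST_ID standing in for the id printed by cfcreate or cflist:

s3cmd cfcreate s3://my-bucket
s3cmd cfmodify --cf-default-root-object=index.html cf://DIST_ID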
Display CloudFront invalidation request(s) status
s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]