S3cmd is alive! We are pulling patches together for the upcoming s3cmd 1.0.0 release. The current development snapshot has just been released as 1.0.0-rc1.
Support for Reduced Redundancy Storage
s3cmd put, sync, cp and mv commands now accept a new --reduced-redundancy (or --rr for short) parameter that tells Amazon to store the files in a little cheaper but slightly less reliable way. See the Amazon S3 RRS page for more details.
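For example, an upload to Reduced Redundancy Storage might look like this (the bucket name and file paths here are hypothetical):

```shell
# Upload a single file using Reduced Redundancy Storage
s3cmd put --rr backup.tar.gz s3://my-bucket/backups/backup.tar.gz

# The same flag works for sync as well
s3cmd sync --rr ./photos/ s3://my-bucket/photos/
```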
Access logging for S3 Buckets and for CloudFront
Turn access logging on and off for S3 buckets with the s3cmd accesslog command, or for a CloudFront distribution with the s3cmd cfmodify command. In either case use
--access-logging-target-prefix=s3://some-other-bucket/logs/blah/ to enable access logging and
--no-access-logging to disable it again. The access logs will be stored under s3://some-other-bucket/logs/blah/… and can be listed with s3cmd ls and downloaded for further processing with s3cmd get or s3cmd sync.
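A quick sketch of the bucket case (bucket names are hypothetical):

```shell
# Enable access logging for a bucket, delivering logs to another bucket
s3cmd accesslog --access-logging-target-prefix=s3://some-other-bucket/logs/blah/ s3://my-bucket

# Disable access logging again
s3cmd accesslog --no-access-logging s3://my-bucket

# Fetch the accumulated logs for processing
s3cmd sync s3://some-other-bucket/logs/blah/ ./logs/
```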
--acl-grant and --acl-revoke supporting email addresses and other kinds of grantees
Earlier releases of s3cmd recognised only two kinds of ACL (access control list): public and private. Amazon S3 can, however, do much more – it can grant access based on email address, username and, in some special cases, even URL. From now on s3cmd setacl can set these permissions too. For example, to give someone from example.com read-only access to all files in a bucket do:
Note that email@example.com must have an Amazon AWS account registered with this address. This feature has been contributed by Timothee Groleau.
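Something along these lines, assuming a hypothetical bucket name and the --acl-grant=permission:grantee syntax:

```shell
# Grant read-only access to the holder of the AWS account
# registered under email@example.com, for every object in the bucket
s3cmd setacl --acl-grant=read:email@example.com --recursive s3://my-bucket
```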
Support for creating buckets in locations outside the original US and EU.
It is now possible to create buckets in any S3 location with s3cmd mb
--bucket-location=.... As of now the bucket locations are US (the default if no explicit bucket location is set), EU, us-west-1 and ap-southeast-1. These names are no longer pre-set in the code, which means that as soon as Amazon opens a new datacentre somewhere s3cmd will be able to create buckets there.
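For instance (bucket names are hypothetical):

```shell
# Create a bucket in the EU region
s3cmd mb --bucket-location=EU s3://my-eu-bucket

# Create a bucket in the Asia-Pacific (Singapore) region
s3cmd mb --bucket-location=ap-southeast-1 s3://my-apac-bucket
```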
Follow local symlinks with --follow-symlinks
When
--follow-symlinks is used, s3cmd put and sync will now follow symbolic links, i.e. upload the file the symlink points to instead of ignoring it. This feature has been contributed by Aaron Maxwell.
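A minimal sketch, with hypothetical paths and bucket name:

```shell
# Without --follow-symlinks symlinked files are skipped;
# with it, the files they point to are uploaded instead
s3cmd sync --follow-symlinks /var/www/ s3://my-bucket/www/
```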
Support for Default Root Object in CloudFront
CloudFront distributions now support specifying a default root object which is returned when the user requests a URL without an explicit object name, for instance http://cdn.example.com/
Using s3cmd cfmodify --default-root-object=index.html s3://cdn.example.com the above URL will behave exactly the same as if it were http://cdn.example.com/index.html
This feature was contributed by Luke Andrew
CloudFront commands now accept bucket name (s3://bucket) as well as CloudFront distribution id (cf://A1B2C3D4E5) for convenience
This is a minor convenience improvement. Most s3cmd cf*-commands used to require the cryptic CloudFront distribution ID (e.g. cf://A1B2C3D4E5) that users usually don’t remember (at least I don’t). From now on it’s possible to use the S3 URI with most cf*-commands, e.g. s3cmd cfinfo s3://my.bucket.com
Naturally a number of annoying bugs have been fixed too. The most important ones are:
- Don’t crash when files disappear during upload
- Don’t crash on the infamous “Error 21 – Is a directory” condition
- Don’t crash when listing buckets with 1000+ directories. Contributed by Timothee Groleau.
- Don’t give up too easily on failed requests; retry uploads in more cases than before.
- Improved Python 2.7 compatibility
- Lots of other minor bugfixes
One major thing holding us back from releasing the final 1.0.0 is indeed the documentation. Any volunteers to bring the manpage up to date with the current code? You, there! Will you? No? Oh well… :(
Pretty please test test test the current s3cmd 1.0.0-rc1 and let me know if you experience any problems:
Any questions or problems? Please send an email to the mailing list: firstname.lastname@example.org
Looking forward to your feedback!