s3cmd 0.9.9-pre5 is now available for download on SourceForge.
Highlights of this release:
- potentially incompatible change in how put and sync work
- added --dry-run parameter for testing sync
- added recursive setacl command (development of this feature has been sponsored by Joseph Denne from Airlock.com, thanks!)
Details of the above:
Non-recursive put changes:
In earlier versions, when you ran:
s3cmd put blah/file1.txt s3://bucket/somewhere/
you ended up with s3://bucket/somewhere/blah/file1.txt, i.e. the whole local filename (blah/file1.txt) was appended to the S3 URI given. That's not what a Unix admin expects: when you run
cp blah/file1.txt xyz/
you get the file copied to xyz/file1.txt, not xyz/blah/file1.txt. From now on s3cmd follows similar logic. The rules are:
- if only one local file is specified (e.g. blah/file1.txt) and the remote URI doesn't end with '/' (e.g. s3://bkt/backup/whatever), then the local file is copied to exactly the given URI, i.e. to s3://bkt/backup/whatever. For example:
s3cmd put blah/file1.txt s3://bkt/backup/whatever
results in:
blah/file1.txt -> s3://bkt/backup/whatever
- if one or more local files are specified (e.g. foo/file2.jpg) and the remote URI ends with '/' (e.g. s3://bkt/backup/), then only the basenames of the local files are appended to the remote URI:
s3cmd put blah/file1.txt foo/file2.jpg s3://bkt/backup/
results in:
blah/file1.txt -> s3://bkt/backup/file1.txt
foo/file2.jpg -> s3://bkt/backup/file2.jpg
This also works with wildcards, i.e.
s3cmd put *.jpg s3://bkt/backup/
is possible.
- when multiple local files are specified and the remote URI doesn't end with '/', you'll get an error.
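The three rules above can be sketched as a small path-composition function. This is only an illustration of the rules as stated, not s3cmd's actual code:

```python
import os

def put_destinations(local_files, remote_uri):
    """Map each local file to its destination URI, following the
    non-recursive 'put' rules described above (illustration only)."""
    if remote_uri.endswith("/"):
        # Rule 2: append only the basename of each local file.
        return {f: remote_uri + os.path.basename(f) for f in local_files}
    if len(local_files) == 1:
        # Rule 1: a single file is copied to exactly the given URI.
        return {local_files[0]: remote_uri}
    # Rule 3: multiple files with no trailing '/' is an error.
    raise ValueError("remote URI must end with '/' for multiple files")
```

For instance, put_destinations(["blah/file1.txt"], "s3://bkt/backup/whatever") gives {"blah/file1.txt": "s3://bkt/backup/whatever"}, matching the first example above.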
Recursive 'put' and 'sync'
For put --recursive and sync the rules are:
- when the local path ends with '/', only its contents are appended to the S3 URI given. For instance
s3cmd put --recursive /path/blah/ s3://bkt/backup/
leads to:
/path/blah/file1.txt -> s3://bkt/backup/file1.txt
/path/blah/dir2/file2.jpg -> s3://bkt/backup/dir2/file2.jpg
The same holds for
s3cmd sync /path/blah/ s3://bkt/backup/
except that sync first fetches a list of remote files and uploads only what's needed, based on size and MD5 comparisons.
- however, when the local path does not end with '/', the last component of the path is used remotely as well:
s3cmd put --recursive /path/blah s3://bkt/backup/
does:
/path/blah/file1.txt -> s3://bkt/backup/blah/file1.txt
/path/blah/dir2/file2.jpg -> s3://bkt/backup/blah/dir2/file2.jpg
Why does it behave like this? To make it possible to upload multiple local dirs at once into their respective remote folders:
s3cmd put --recursive dir1 dir2 s3://bkt/backup/
will create both the s3://bkt/backup/dir1/ and s3://bkt/backup/dir2/ trees remotely.
On the other hand, put dir1/ behaves the same way as put dir1/*, except that the wildcard '*' won't include 'hidden' Unix files and dirs whose names start with a dot (e.g. .profile). That's not an s3cmd bug; it's how the Unix shell works.
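This is easy to verify without touching S3 at all. Python's glob module, for example, follows the same shell convention of skipping dot-files by default:

```python
import glob
import os
import tempfile

# Create a scratch directory holding one regular and one hidden file.
with tempfile.TemporaryDirectory() as scratch:
    for name in ("file1.txt", ".profile"):
        open(os.path.join(scratch, name), "w").close()
    # '*' matches only the non-hidden file, just as the shell would.
    matches = [os.path.basename(p) for p in glob.glob(os.path.join(scratch, "*"))]

print(matches)  # ['file1.txt']
```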
Essentially, the last component of the path given on the command line is always appended to the remote URI ("base"). When the local path is a directory and ends with '/' on the command line, the last component is "empty" and only the contents of that directory are appended to the remote URI "base".
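In code, that rule for a single recursive source could look like this (a hypothetical sketch of the rule, not s3cmd internals):

```python
import os

def recursive_prefix(local_path, remote_base):
    """Return the remote prefix a recursive upload of local_path would
    use, per the rule above (illustration only)."""
    if local_path.endswith("/"):
        # Trailing '/': the last path component is "empty", so only the
        # directory's contents land directly under the remote base.
        return remote_base
    # No trailing '/': the directory name itself is appended.
    return remote_base + os.path.basename(local_path) + "/"
```

So recursive_prefix("/path/blah/", "s3://bkt/backup/") returns "s3://bkt/backup/", while recursive_prefix("/path/blah", "s3://bkt/backup/") returns "s3://bkt/backup/blah/".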
I hope it’s clear ;-)
sync now supports --dry-run
The --dry-run parameter prevents s3cmd from actually transferring any files to or from S3. It reads the remote and local file lists, applies --exclude patterns, compiles the upload/download lists, prints them out and exits. No file is uploaded, downloaded or removed.
I suggest you run sync with --dry-run first to check whether it really does what you meant and whether the paths are composed the way you wanted. It's also great for debugging --exclude and --rexclude patterns. For now it only works with 'sync'; 'put' and 'get' will gain --dry-run support before 0.9.9 final as well.
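Conceptually, --dry-run just separates planning from execution. A toy sketch of that idea (the names and structure here are mine, not s3cmd's; file lists are stood in for by name-to-md5 dicts):

```python
import fnmatch

def plan_uploads(local, remote, exclude=()):
    """Return the files a sync would upload: not matched by any exclude
    pattern, and either missing remotely or differing in checksum
    (toy illustration of the --dry-run planning step)."""
    return [name for name, md5 in local.items()
            if not any(fnmatch.fnmatch(name, pat) for pat in exclude)
            and remote.get(name) != md5]

# With --dry-run, the tool would print this plan and stop;
# without it, the same plan would drive the actual transfers.
```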
New command ‘setacl’
With setacl you can change the ACL of existing S3 objects from private to public and back. It works recursively when requested.
~$ s3cmd setacl --acl-public --recursive s3://bkt/backup
s3://bkt/backup/file1.txt: ACL set to Public [1 of 3]
s3://bkt/backup/dir2/file2.jpg: ACL set to Public [2 of 3]
[...]
Please let me know if you experience any issues, especially with the new semantics of put, get and sync. I believe the new path handling is better and closer to what a Unix person expects, but let me know your thoughts.
I’d like to get 0.9.9 out pretty soon and want to be reasonably sure that it works for everyone.
Download s3cmd 0.9.9-pre5 from here
Follow ups and general discussion should go here: firstname.lastname@example.org
Report bugs here: email@example.com