As far as I know it's not directly possible to change the storage class of existing objects. You can, however, copy the objects elsewhere and then move them back to their original location with `--rr`, effectively overwriting the non-RR objects with RR ones under the same names.
I don't know of any other way to achieve that; it's not a limitation of s3cmd but of Amazon S3 itself.
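The copy-out-and-move-back approach above can be sketched with s3cmd as follows; the bucket and prefix names are placeholders, and passing `--rr` on both steps ensures the objects land back under Reduced Redundancy:

```shell
# Copy the objects to a temporary prefix with Reduced Redundancy...
s3cmd cp --recursive --rr s3://my-bucket/data/ s3://my-bucket/data-rr-tmp/

# ...then move them back over the originals, again with --rr,
# overwriting the Standard-class objects under the same names.
s3cmd mv --recursive --rr s3://my-bucket/data-rr-tmp/ s3://my-bucket/data/
```

Note that both commands transfer data server-side within S3, so nothing is re-uploaded from your machine, but you do briefly pay for two copies of the data while the temporary prefix exists.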
1. Is there an .s3cfg setting for Reduced Redundancy (I could not locate documentation or a man page describing those options)?
2. Could the "Patches for updating headers without re-uploading" be used to mark an entire bucket for RR, and if so, how?
Would I be better off using Bryce Boe’s fix?
The add-RR-header feature is now available through AWS's console for both objects and folders via Properties, without having to re-PUT the object. It's simple to use, but selection is per object or folder, which, depending on your structure, could be time-consuming.
I also discovered that CloudBerryLab.com's free tool has a nice facility for updating headers of any type. Set the HTTP header "x-amz-storage-class" to the value "REDUCED_REDUNDANCY". It too is easy to use, and it has the added benefit of letting you select large numbers of files and folders at once.
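If you would rather stay on the command line than use either GUI, the same storage-class change can be made with an in-place server-side copy. This sketch uses the AWS CLI (a different tool from those discussed above), and the bucket and key are placeholders:

```shell
# In-place copy that rewrites the object with the RRS storage class;
# S3 performs the copy server-side, so nothing is re-uploaded.
aws s3 cp s3://my-bucket/path/file.dat s3://my-bucket/path/file.dat \
    --storage-class REDUCED_REDUNDANCY
```

Add `--recursive` to apply it to a whole prefix, which covers the "entire bucket" case without clicking through objects one at a time.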
If you tried `--reduced-redundancy` and the S3 console still shows "Standard": try retyping `--reduced-redundancy` from scratch on the command line instead of copy-pasting it from this site. I copy-pasted the flag from this site, and it didn't work because this site converts two consecutive hyphens into a special Unicode dash character, which the command line ignores (I made the mistake of keeping that dash and adding a second normal hyphen, which looks identical but isn't).
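A quick way to check whether a pasted flag picked up a typographic dash is to scan it for non-ASCII bytes; a minimal sketch:

```shell
# Print a warning if non-ASCII characters are hiding in the pasted flag.
# A clean ASCII "--reduced-redundancy" passes; a pasted en dash is caught.
flag='--reduced-redundancy'
if printf '%s' "$flag" | LC_ALL=C grep -q '[^ -~]'; then
    echo "flag contains non-ASCII characters"
else
    echo "flag is plain ASCII"
fi
```

The character class `[^ -~]` matches any byte outside the printable ASCII range, which is exactly what a Unicode dash decomposes into.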