I have written some small scripts and programs that are probably too trivial to be packaged properly.
This Bourne shell script converts all file names in a directory tree from ISO 8859-1 (ISO Latin 1) to UTF-8, the encoding of the Universal Character Set (Unicode).
The script requires the GNU versions of xargs and find. The script also converts the contents of all files named 00INDEX from ISO 8859-1 to UTF-8. Unlike many shell scripts I have encountered, this script should not have problems with file names containing shell metacharacters.
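The NUL-delimited idiom that makes such scripts metacharacter-safe can be sketched as follows. This is a minimal illustration, not the script itself: it assumes GNU find and xargs plus iconv, and the rename loop and its flags are my own stand-in.

```shell
#!/bin/sh
# Illustrative sketch only (the real script differs): rename a tree from
# ISO 8859-1 to UTF-8 file names. GNU find's -print0 and xargs's -0 pass
# names NUL-separated, so spaces, newlines, and glob characters survive.
# -depth handles a directory's contents before the directory itself.
find "${1:-.}" -depth -print0 | xargs -0 -n1 sh -c '
    old=$1
    new=$(printf "%s" "$old" | iconv -f ISO-8859-1 -t UTF-8)
    [ "$old" = "$new" ] || mv -- "$old" "$new"
' sh
```

Note that this naive version would double-encode names that are already valid UTF-8; the point here is only the NUL-delimited plumbing.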
This script could be adapted and used with jpegcom
from the pHoToMoLo package in order to convert the character set of
picture galleries (image file names and embedded comments).
This Perl script removes any Quoted-Printable encoding from a UTF-8 encoded text document.
The script depends on the MIME::EncWords and MIME::Charset packages for doing the actual decoding and character set conversions. This is my first Perl script written to deal with UTF-8 data.
The reason I wrote this script is that the PIM backup of my Android phone (Sony Ericsson Xperia™ active) Quoted-Printable-encodes all non-ASCII data, which makes the file hard to read and edit. The phone itself imports UTF-8 encoded text files without problems.
This program adjusts the time stamps of files by a given offset in seconds. I have found it useful when combining digital photographs of an event taken with multiple cameras whose clocks were inaccurate. For instance, to advance the time stamps by one hour, you can use the following command:
find /path/to/images -type f -name \*.jpg -print0 | xargs -0 toucher 3600
Note that the same (and more) can be achieved with the -r and --date options of the touch command in GNU coreutils:
find /path/to/images -type f -name \*.jpg -exec touch -r {} --date '+3600 seconds' {} \;
This program writes to standard output a DV stream corresponding to simple SMIL files, such as those produced by Kino. I had some timing problems when trying to export video to my camera, and figured this would be a simple and efficient solution.