I care about three concepts:
Output Console Encoding
Command-line internal encoding (the one changed with chcp)
.bat Text Encoding
The easiest scenario for me: the first two are in the same encoding, say CP850, and I store my .bat in that same encoding (in Notepad++, menu Encoding → Character sets → Western European → OEM 850).
But suppose someone hands me a .bat in another encoding, say CP1252 (in Notepad++, menu Encoding → Character sets → Western European → Windows-1252).
Then I would change the command line internal encoding, with chcp 1252.
This changes the encoding the command line uses to talk to other processes, but not the encoding of the input device or the output console.
So my command-line instance will effectively send characters in 1252 through its STDOUT file descriptor, but garbled text appears when the console decodes them as 850 (é shows as Ú).
Then I modify the file as follows:
@echo off
perl -e "use Encode qw/encode decode/;" -e "print encode('cp850', decode('cp1252', \"ren -hélice hélice\n\"));"
ren -hélice hélice
First I turn echo off so the commands don't print themselves; output only happens when I explicitly do echo ... or perl -e "print ...".
Then I put this boilerplate each time I need to output something
perl -e "use Encode qw/encode decode/;" -e "print encode('cp850', decode('cp1252', \"ren -hélice hélice\n\"));"
Into it I substitute the actual text I want to show, here: ren -hélice hélice.
I may also need to substitute my console encoding for cp850 and the file's encoding for cp1252.
And just below I put the desired command.
I broke the problematic line into an output half and a real-command half.
The first half makes sure the "é" is displayed as an "é", by transcoding. This is necessary for every output statement, since the console and the file are in different encodings.
For the second half, the real command (muted by @echo off), knowing that chcp and the .bat text share the same encoding is enough to ensure proper character interpretation.