I recommend using the latest Oracle Database Migration Assistant for Unicode (DMU). Here is a link:
http://www.oracle.com/technetwork/database/database-technologies/globalization/dmu/overview/index.html
But it's also good to have a basic understanding of Unicode (which isn't that complicated). The Wikipedia article on Unicode is a good starting point.
I agree with those who have commented that it makes the most sense to use a Unicode-based default character set, so that you can use VARCHAR2 everywhere and won't need NVARCHAR2.
One issue with conversion is size. Suppose you have a declaration of VARCHAR2(3). By default this means you can store up to 3 bytes, which in a single-byte character set equates to 3 characters because each character takes exactly one byte. But in Unicode a character can take up more than one byte. You could change the declaration to VARCHAR2(3 CHAR) and then it would hold 3 characters regardless of how many bytes each one needs. You could also change the database default for VARCHAR2 from BYTE to CHAR length semantics.

In either case you still have the maximum-bytes limit to contend with. The maximum size of a VARCHAR2 is 4000 bytes (unless you upgrade to Oracle 12c, where you can raise the limit to 32767 bytes by setting MAX_STRING_SIZE = EXTENDED). So with a declaration of VARCHAR2(4000 CHAR) you may or may not be able to insert 4000 Unicode characters: you can only do so if every character is represented with one byte in your Unicode encoding (for example, if they are all ASCII characters). In AL32UTF8 each character takes 1-4 bytes, so a declaration of VARCHAR2(1000 CHAR) guarantees room for 1000 characters even if every one of them requires a 4-byte encoding.
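To illustrate the difference between BYTE and CHAR semantics, here is a sketch in Oracle SQL (table and column names are made up for the example; the behavior assumes an AL32UTF8 database character set):

```sql
-- BYTE semantics: the column holds at most 3 BYTES.
CREATE TABLE t_byte (name VARCHAR2(3 BYTE));

-- CHAR semantics: the column holds at most 3 CHARACTERS,
-- whatever their byte length (subject to the overall 4000-byte limit).
CREATE TABLE t_char (name VARCHAR2(3 CHAR));

-- 'äöü' is 3 characters but 6 bytes in AL32UTF8:
INSERT INTO t_char VALUES ('äöü');  -- succeeds
INSERT INTO t_byte VALUES ('äöü');  -- fails with ORA-12899: value too large
```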
I also recommend changing the default NLS_LENGTH_SEMANTICS to CHAR. That way VARCHAR2(n) will mean that up to n characters can be stored, regardless of the number of bytes those n characters require (as long as the total stays under the 4000-byte limit).
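Assuming you have the necessary privileges, the default can be changed at the session or system level like this (a sketch; note that it only affects columns declared after the change, not existing ones):

```sql
-- For the current session only:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;

-- As a system-wide default for new sessions:
ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = CHAR SCOPE = BOTH;
```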