As Stephen ODonnell pointed out in the comments, internal/external is really more about the location of the data and what manages it.
I would say there are other important performance factors to consider, for example the table format and whether compression is used.
(The following is from an HDP perspective; for Cloudera the general concept is the same, but the specifics would probably differ.)
For example, you could define the table as being in ORC format, which offers many optimizations, such as predicate pushdown: rows are filtered out at the storage layer before they ever reach the SQL processing layer. More details are in the Apache ORC documentation.
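As a minimal sketch, here is what an ORC-backed managed table might look like; the table and column names are hypothetical:

```sql
-- Hypothetical managed (internal) table stored as ORC.
-- ORC keeps min/max statistics per stripe, which is what
-- lets predicate pushdown skip data at read time.
CREATE TABLE web_logs (
  log_ts  TIMESTAMP,
  user_id BIGINT,
  url     STRING
)
STORED AS ORC;
```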
Another option is whether or not you want to specify compression, such as Snappy, an algorithm that balances speed and compression ratio (see the ORC documentation mentioned above for more info).
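Building on the hypothetical table above, ORC compression can be set per table through the orc.compress table property (ORC defaults to ZLIB if you don't specify one):

```sql
-- Same hypothetical table, with Snappy compression enabled
-- via the ORC table property.
CREATE TABLE web_logs (
  log_ts  TIMESTAMP,
  user_id BIGINT,
  url     STRING
)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY");
```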
Generally speaking, I treat the HDFS data as a source and sqoop it into a managed (internal) Hive table with ORC format and Snappy compression enabled (a sketch of that step is below). I find that provides good performance, with the added benefit that any ETL can be done on this data without regard for the original source data in HDFS, since it was copied into Hive during the sqoop.
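A hedged sketch of that sqoop step: Sqoop itself imports from a relational database, so this assumes the upstream source is an RDBMS; the connection string, credentials, and table names are all placeholders. Sqoop's HCatalog integration can create and load the ORC table in one pass:

```sh
# Hypothetical import into a managed Hive ORC table with Snappy
# compression, using Sqoop's HCatalog integration. The storage
# stanza is appended to the CREATE TABLE statement Sqoop generates.
sqoop import \
  --connect jdbc:mysql://dbhost/source_db \
  --username etl_user -P \
  --table web_logs \
  --hcatalog-database default \
  --hcatalog-table web_logs \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "STORED AS ORC TBLPROPERTIES ('orc.compress'='SNAPPY')" \
  -m 4
```

Note that with `--create-hcatalog-table`, Sqoop creates the table for you, so you would run either this or the DDL sketched earlier, not both.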
This does of course require extra space, which may be a consideration depending on your environment and/or specific use case.