I am reading a Hive table and writing it to a Teradata table (column to column, no transformations):
try {
  val df = spark.table("Hive Table")
  df.write.mode(SaveMode.Append).jdbc(jdbcURL, "TD Table", properties)
} catch {
  case ex: java.sql.SQLException =>
    // print the whole error chain by calling getNextException repeatedly
    var e: java.sql.SQLException = ex
    while (e != null) { println(e.getMessage); e = e.getNextException }
}
It runs for a while and then fails with: [Teradata Database] [TeraJDBC 16.20.00.06] [Error 6706] [SQLState HY000] The string contains an untranslatable character
If I insert only the date/numeric columns, it works fine.
I have tried defining the Teradata table columns as CHARACTER SET UNICODE, with no success.
The question is: how do I identify the errant record/column? There are hundreds of millions of rows and hundreds of columns, so running them one at a time is not a viable solution. I need to either
a) identify the offending record/column, or
b) force a translation, substituting whatever (junk) replacement characters are needed.
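For what it's worth, here is a sketch of how both options might be attempted in Spark itself, before the JDBC write. It assumes the failing Teradata columns are effectively CHARACTER SET LATIN, so "untranslatable" roughly means any character outside ISO-8859-1 (code point > 0xFF); the `df` variable and the exact regex are assumptions you would need to adapt to your actual charset.

```scala
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StringType

// All string columns in the DataFrame (numeric/date columns already load fine).
val stringCols = df.schema.fields.filter(_.dataType == StringType).map(_.name)

// (a) Identify the errant column(s): count values containing any character
// outside the Latin-1 range. [^\x00-\xFF] matches exactly those characters.
stringCols.foreach { c =>
  val n = df.filter(col(c).rlike("[^\\x00-\\xFF]")).count()
  if (n > 0) println(s"$c: $n suspect values")
}

// ...and pull a sample of the offending rows for inspection.
val badRowFilter = stringCols.map(col(_).rlike("[^\\x00-\\xFF]")).reduce(_ || _)
df.filter(badRowFilter).show(20, truncate = false)

// (b) Force a translation: replace every out-of-range character with a
// placeholder (here '?') in all string columns, then write the cleaned frame.
val cleaned = stringCols.foldLeft(df) { (d, c) =>
  d.withColumn(c, regexp_replace(col(c), "[^\\x00-\\xFF]", "?"))
}
```

The per-column counts are a full scan each, so on hundreds of millions of rows it may be cheaper to build one pass that computes all counts at once, but the filter above at least narrows the search to specific columns.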