Using the JDBC data source in Spark SQL, we try to run the query below:
select nvl(columnName, 1.0) from tablename
which fails with:
cannot resolve 'nvl(tablename.`columnname`, 1.0BD)' due to data type mismatch: input to function coalesce should all be the same type, but it's [decimal(38,10), decimal(2,1)]
I know we can solve this with
select nvl(columnname, CAST(1.0 AS decimal(38,10))) from tablename
but it looks like I would need to find the data type of each and every column and cast the literal to match it.
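One way to avoid hand-writing the CAST per column is to read the column's type from the DataFrame's schema at runtime and cast the default value to it before coalescing. A minimal Scala sketch, with a hypothetical helper name `nvlTyped` (not from the question):

```scala
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.{coalesce, col, lit}

// Look up the column's type in the DataFrame schema and cast the default
// value to it, so the literal always matches the column (e.g. decimal(38,10)).
def nvlTyped(df: DataFrame, columnName: String, default: Any): Column = {
  val dt = df.schema(columnName).dataType
  coalesce(col(columnName), lit(default).cast(dt))
}

// Usage: df.select(nvlTyped(df, "columnname", 1.0))
```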
- Is there any other way to handle this?
- Can I give a schema definition upfront while loading the DataFrame, as the CSV format allows? (See [https://issues.apache.org/jira/browse/SPARK-16848].) See the first sketch after this list.
- How can I convert the data types of each column of the loaded DataFrame? See the second sketch after this list.
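On the second question: the CSV reader honors a user-specified schema, but per SPARK-16848 the JDBC reader does not accept one via `.schema(...)`. On more recent Spark versions (2.3+, if I recall correctly) the `customSchema` JDBC read option can override the types inferred from the database instead. A sketch, assuming an active SparkSession named `spark` and an assumed connection string `jdbcUrl`:

```scala
import org.apache.spark.sql.types.{DecimalType, StructField, StructType}

// CSV: a user-specified schema is applied directly at load time.
val csvSchema = StructType(Seq(
  StructField("columnname", DecimalType(38, 10), nullable = true)
))
val csvDf = spark.read.schema(csvSchema).csv("/path/to/data.csv") // path assumed

// JDBC: .schema(...) is not honored (SPARK-16848). On Spark 2.3+ the
// customSchema option can override the types inferred from the database:
val jdbcDf = spark.read
  .format("jdbc")
  .option("url", jdbcUrl)            // jdbcUrl is an assumed connection string
  .option("dbtable", "tablename")
  .option("customSchema", "columnname DECIMAL(38, 10)")
  .load()
```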
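On the third question: after loading, each column can be re-cast with `Column.cast`. A sketch that casts one column, plus a loop over the whole schema (the target types here are only an illustration):

```scala
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{DecimalType, DoubleType}

// Re-cast a single column in place:
val df2 = df.withColumn("columnname", col("columnname").cast(DecimalType(38, 10)))

// Or walk the schema and convert, say, every decimal column to double:
val converted = df.schema.fields.foldLeft(df) { (acc, field) =>
  field.dataType match {
    case _: DecimalType => acc.withColumn(field.name, col(field.name).cast(DoubleType))
    case _              => acc
  }
}
```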