I'm trying to insert a hex string into a Cassandra table that has a blob column. The table structure is as follows:
CREATE TABLE mob.sample ( id text PRIMARY KEY, data blob );
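For context, a blob column accepts a plain hex literal in CQL, so something like the following (with 'key0' just a made-up key) is the kind of write I want to reproduce from Spark:
INSERT INTO mob.sample (id, data) VALUES ('key0', 0x48656c6c6f);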
Here is my code:
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.sql.functions import udf
def hexstrtohexnum(hexstr):
    ani = int(hexstr[2:], 16)
    return ani
# Create a DataFrame using SparkSession
spark = (SparkSession.builder
         .appName('SampleLoader')
         .appName('SparkCassandraApp')
         .getOrCreate())
schema = StructType([StructField("id", StringType(), True),
                     StructField("data", StringType(), True)])
# Create a DataFrame
df = spark.createDataFrame([("key1", '0x546869732069732061206669727374207265636f7264'),
                            ("key2", '0x546865207365636f6e64207265636f7264'),
                            ("key3", '0x546865207468697264207265636f7264')], schema)
hexstr2hexnum = udf(lambda z: hexstrtohexnum(z),IntegerType())
spark.udf.register("hexstr2hexnum", hexstr2hexnum)
df.withColumn("data",hexstr2hexnum("data"))
df.write.format("org.apache.spark.sql.cassandra").options(keyspace='mob',table='sample').save(mode="append")
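The hexstr2hexnum UDF is only meant to strip the '0x' prefix and parse the rest as a base-16 integer; for a short input it behaves like this:
>>> hexstrtohexnum('0x54')
84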
When I run the code above, I get the following error:
WARN 2020-09-03 19:41:57,902 org.apache.spark.scheduler.TaskSetManager: Lost task 3.0 in stage 17.0 (TID 441, 10.37.122.156, executor 2): com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 0x546869732069732061206669727374207265636f7264 of type class java.lang.String to java.nio.ByteBuffer.
at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:44)
at com.datastax.spark.connector.types.TypeConverter$ByteBufferConverter$$anonfun$convertPF$11.applyOrElse(TypeConverter.scala:258)
at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:42)
at com.datastax.spark.connector.types.TypeConverter$ByteBufferConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:255)
Here are the contents of the dataframe:
>>> df.show(3)
+----+--------------------+
| id| data|
+----+--------------------+
|key1|0x546869732069732...|
|key2|0x546865207365636...|
|key3|0x546865207468697...|
+----+--------------------+
Can someone help me figure out what's wrong with my code? Is there something I'm missing?