
In my Spark DataFrame, this is the schema:

root
 |-- locations: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- address_line_2: string (nullable = true)
 |    |    |-- continent: string (nullable = true)
 |    |    |-- country: string (nullable = true)
 |    |    |-- geo: string (nullable = true)
 |    |    |-- is_primary: boolean (nullable = true)
 |    |    |-- last_updated: string (nullable = true)
 |    |    |-- locality: string (nullable = true)
 |    |    |-- most_recent: boolean (nullable = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- postal_code: string (nullable = true)
 |    |    |-- region: string (nullable = true)
 |    |    |-- street_address: string (nullable = true)
 |    |    |-- subregion: string (nullable = true)
 |    |    |-- type: string (nullable = true)
 |    |    |-- zip_plus_4: string (nullable = true)

Here is a sample of the locations field:

[Row(locations=[Row(address_line_2=None, continent='north america', country='united states', geo='40.41,-74.36', is_primary=True, last_updated=None, locality='old bridge', most_recent=True, name='old bridge, new jersey, united states', postal_code=None, region='new jersey', street_address=None, subregion=None, type=None, zip_plus_4=None)])]

As you can see there is a field called is_primary. Based on that I want to select the geo field. Here is the function I wrote:


def geoLambda(locations):
    """
    Pre-process geo locations.
    :param locations: array of location structs
    :return: dict
    """
    try:
        for x in locations:
            if x.get("is_primary") == "True" or x.get("is_primary") == True:
                data = x
                data = data.get("geo", None)
                if data is None:
                    lat,lon = -83, 135
                else:
                    lat,lon = data.split(",")
                Payload = {"lat":float(lat), "lon":float(lon)}
                return Payload
            else:
                pass
    except Exception as e:
        print("EXCEPTION: {} ".format(e))
        lat,lon = -83, 135
        Payload = {"lat":float(lat), "lon":float(lon)}
        return Payload
udfValueToCategoryGeo = udf(geoLambda, StructType())
df = df.withColumn("myloc", udfValueToCategoryGeo("locations"))

output

 |-- myloc: struct (nullable = true)

+-----+
|myloc|
+-----+
|   {}|
|   {}|
|   {}|
|   {}|
|   {}|
|   {}|
|   {}|
+-----+

If I set the return type to string instead:

udfValueToCategoryGeo = udf(geoLambda, StringType())
df = df.withColumn("myloc", udfValueToCategoryGeo("locations"))
+--------------------+
|               myloc|
+--------------------+
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
|{lon=135.0, lat=-...|
+--------------------+

I get the same constant value for every row, and I am not sure why.

The same function works fine in pandas, but I don't want to use pandas here. Any help would be great.
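The pandas version works because each element there is a plain dict, which has a .get method. A Spark Row is tuple-like: fields are read by attribute or by ["name"] indexing, and there is no .get method, so x.get("is_primary") raises AttributeError on every row, the except branch fires, and the constant fallback payload is returned for everything. A minimal sketch of that failure mode, using a namedtuple as a stand-in for Row (assumption: Row behaves like a named tuple here, which it does for attribute access):

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row, which is also tuple-like and has no .get()
Location = namedtuple("Location", ["is_primary", "geo"])
loc = Location(is_primary=True, geo="40.41,-74.36")

try:
    loc.get("is_primary")   # AttributeError: no .get on a tuple-like row
except AttributeError as e:
    print("EXCEPTION: {} ".format(e))   # mirrors the UDF's except branch

print(loc.is_primary)       # attribute access works
print(loc[1])               # positional access works
```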

This is how a single location row looks:

[{'name': 'princeton, new jersey, united states',
  'locality': 'princeton',
  'region': 'new jersey',
  'subregion': None,
  'country': 'united states',
  'continent': 'north america',
  'type': None,
  'geo': '40.34,-74.65',
  'postal_code': None,
  'zip_plus_4': None,
  'street_address': None,
  'address_line_2': None,
  'most_recent': True,
  'is_primary': True,
  'last_updated': '2021-03-01'}]


1 Answer


This is how I solved it. Two changes were needed: access Row fields with x["is_primary"] instead of .get() (a Row has no .get method, so every call was raising AttributeError and hitting the except branch), and give the UDF a StructType that actually declares the lat and lon fields — an empty StructType() has no fields, which is why the struct column showed {}.


from pyspark.sql.functions import udf
from pyspark.sql.types import StructType, StructField, FloatType


def geoLambda(locations):
    for x in locations:
        if x["is_primary"] == True:
            data = x["geo"]
            if data is None:
                lat, lon = -83, 135
            else:
                lat, lon = data.split(",")
            return {"lat": float(lat), "lon": float(lon)}
    # No primary location found: return None (the result column is nullable)


udfValueToCategoryGeo = udf(geoLambda, StructType([
    StructField('lat', FloatType(), nullable=True),
    StructField('lon', FloatType(), nullable=True),
]))
df = df.withColumn("myloc", udfValueToCategoryGeo("locations"))
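The fixed function can be sanity-checked without a SparkSession: a Row supports the same x["field"] indexing as a plain dict, so plain dicts exercise the identical code path. A quick local check (the sample dicts are made up for illustration):

```python
# geoLambda as in the answer above, exercised on plain dicts
# (Row supports the same ["field"] indexing, so the logic is identical)
def geoLambda(locations):
    for x in locations:
        if x["is_primary"] == True:
            data = x["geo"]
            if data is None:
                lat, lon = -83, 135
            else:
                lat, lon = data.split(",")
            return {"lat": float(lat), "lon": float(lon)}

primary = {"is_primary": True, "geo": "40.34,-74.65"}
secondary = {"is_primary": False, "geo": "0.0,0.0"}

print(geoLambda([secondary, primary]))  # {'lat': 40.34, 'lon': -74.65}
print(geoLambda([secondary]))           # None: no primary location
```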