I want to persist aggregated data in memory, but I am getting an error. Any suggestions?
from pyspark import StorageLevel

orders = spark.read.json("/user/order_items_json")
df_2 = orders.where("order_item_order_id == 2").groupby("order_item_order_id")
df_2.persist(StorageLevel.MEMORY_ONLY)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'GroupedData' object has no attribute 'persist'
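
From the error it looks like groupby() returns a GroupedData object, which has no persist() method, so I probably need to apply an aggregation first to get a DataFrame back before persisting. Below is a minimal sketch of what I am planning to try; the count() aggregation is only an assumption for illustration, since I have not decided which aggregate I actually need:

from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.appName("persist-aggregates").getOrCreate()

orders = spark.read.json("/user/order_items_json")

# groupby() alone returns a GroupedData object; calling an aggregation
# (count() here, assumed for illustration) turns it back into a DataFrame.
df_2 = (orders
        .where("order_item_order_id == 2")
        .groupby("order_item_order_id")
        .count())

# DataFrame does have persist(), so this should no longer raise AttributeError.
df_2.persist(StorageLevel.MEMORY_ONLY)
df_2.show()

Is this the right approach, or is there a better way to cache aggregated results in memory?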