Spark's programming model for data engineering/transformation is fundamentally more flexible and extensible than U-SQL's.
For small, simple projects you wouldn't notice the difference, and I'd recommend you go with whatever you are familiar with. For complex projects, and/or ones where you expect significant flux in requirements, I would strongly recommend Spark using one of the supported languages (Scala, Java, Python or R) rather than SparkSQL. The reason is that Spark's domain-specific language (DSL) for data transformations makes the equivalent of SQL code generation, the trick all BI/analytics/warehousing tools use under the covers to manage complexity, very easy. It allows logic, configuration and customization to be organized and managed in ways that are impossible or impractical with SQL which, we should not forget, is a 40+ year old language.
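To make that concrete, here is a minimal sketch (the column names, cleansing rules and paths are invented for illustration) of why the DataFrame DSL is much easier to "generate" than SQL text: transformations are ordinary values, so you can build them from configuration and compose them with transform() instead of concatenating SQL strings.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object ConfigDrivenCleansing {
  // Hypothetical configuration: column name -> cleansing rule
  val cleansingRules: Map[String, String] = Map(
    "email"   -> "lowercase",
    "country" -> "trim"
  )

  // Turn one configured rule into an ordinary DataFrame => DataFrame function
  def ruleToTransform(column: String, rule: String): DataFrame => DataFrame = rule match {
    case "lowercase" => _.withColumn(column, lower(col(column)))
    case "trim"      => _.withColumn(column, trim(col(column)))
    case _           => (df: DataFrame) => df  // unknown rules are skipped in this sketch
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("config-driven-cleansing").getOrCreate()
    val raw   = spark.read.parquet("/data/customers")  // hypothetical input

    // Compose all configured rules into a single pipeline
    val cleaned = cleansingRules.foldLeft(raw) {
      case (df, (column, rule)) => df.transform(ruleToTransform(column, rule))
    }
    cleaned.write.mode("overwrite").parquet("/data/customers_clean")
  }
}
```

Adding or changing a rule becomes a configuration change rather than a hand-edit of a large SQL statement, which is exactly the kind of flexibility that matters when requirements are in flux.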
For an extreme example of the level of abstraction that's possible with Spark, you might enjoy https://databricks.com/session/the-smart-data-warehouse-goal-based-data-production
I would also recommend Spark if you are dealing with dirty/untrusted data (the JSON in your case) where you'd like a highly controlled/custom ingestion process. In that case, you might benefit from some of the ideas in the spark-records library for bulletproof data processing: https://databricks.com/session/bulletproof-jobs-patterns-for-large-scale-spark-processing
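Even without spark-records, Spark's JSON reader gives you a basic form of that control. Below is a minimal sketch, assuming a hypothetical events feed (the schema and paths are made up), that quarantines malformed rows instead of failing the job or silently dropping them.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object ControlledJsonIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("controlled-ingest").getOrCreate()

    // Declare the schema explicitly instead of relying on inference, and add a
    // column Spark will populate with the raw text of any row it cannot parse.
    val schema = new StructType()
      .add("id", LongType)
      .add("event", StringType)
      .add("ts", TimestampType)
      .add("_corrupt_record", StringType)

    val parsed = spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("/landing/events/*.json")
      .cache()  // cache so the corrupt-record column can be referenced (a known Spark restriction on raw JSON reads)

    // Route bad rows to a quarantine area for inspection; load only the clean rows.
    parsed.filter("_corrupt_record IS NOT NULL")
      .write.mode("append").json("/quarantine/events")
    parsed.filter("_corrupt_record IS NULL")
      .drop("_corrupt_record")
      .write.mode("append").parquet("/curated/events")
  }
}
```

spark-records takes this idea much further (per-record error envelopes, metrics, root-cause debugging), but even this built-in mechanism is hard to replicate cleanly in a pure SQL pipeline.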
When it comes to using Spark, especially for new users, Databricks provides the best managed environment. We've been a customer for years, managing petabytes of very complex data. People on our team who come from SQL backgrounds and are not software developers use SparkSQL in Databricks notebooks, but they benefit from the tooling and abstractions the data engineering and data science teams create for them.
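As a rough illustration of what those abstractions can look like (all names here are invented), anything the engineering team registers as a view or UDF from Scala becomes directly callable from a plain SQL cell in a notebook:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object SqlFacade {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("sql-facade").getOrCreate()

    // A curated dataset prepared by the data engineering team (hypothetical path).
    spark.read.parquet("/curated/events").createOrReplaceTempView("events_clean")

    // A reusable piece of business logic wrapped as a SQL-callable function.
    val maskEmail = udf((email: String) =>
      if (email == null) null else email.replaceAll("(?<=.).(?=.*@)", "*")
    )
    spark.udf.register("mask_email", maskEmail)

    // Analysts can now write, in a SQL cell:
    //   SELECT mask_email(email), count(*) FROM events_clean GROUP BY 1
  }
}
```

The SQL-only users never have to know how the curation or masking logic is implemented; they just get tables and functions that behave consistently.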
Good luck with your project!