Hi @Hanni
You didn't mention what distribution or version of Hadoop you're using, but in general you have a few widely available options. If you prefer a low-to-no-code data ingestion engine with a rich graphical user interface, you should investigate Apache NiFi, which powers Cloudera Flow Management (CFM). Its processors provide connectivity, transformation, and content routing, including for JSON. I'd say that's your all-around best option, because NiFi has a very complete set of readers and writers for working with JSON.
If you have programming ability, another option is to write a script using Spark SQL to ingest the data into HDFS. Like Sqoop, this requires JDBC connectivity to the PostgreSQL DBMS. If you don't already know Spark, expect a bit of a learning curve.
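Here's a minimal sketch of what that could look like in PySpark. The host, database, credentials, table, and output path are all placeholders for your environment, and you'd need the PostgreSQL JDBC driver on the Spark classpath (e.g. spark-submit --jars postgresql-42.x.x.jar):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pg-to-hdfs-json").getOrCreate()

# Read the source table over JDBC (the same connectivity Sqoop uses).
# All connection values below are placeholders.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://dbhost:5432/mydb")
      .option("dbtable", "public.mytable")
      .option("user", "myuser")
      .option("password", "mypassword")
      .option("driver", "org.postgresql.Driver")
      .load())

# Write each row out as a JSON document under the target HDFS directory.
df.write.mode("overwrite").json("hdfs:///user/hanni/mytable_json")

spark.stop()
```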
Lastly, if the source PostgreSQL table allows it, you could try importing the data using Sqoop's HCatalog import support, an approach that was discussed here quite a while ago (with Oracle as the source DBMS) in this thread: Sqoop to write JSON in HDFS. I'm not aware of anyone getting that approach to actually work in practice with PostgreSQL.
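For reference, the general shape of that approach is to have Sqoop create an HCatalog/Hive table backed by the JSON SerDe and import into it. Everything below is a placeholder, the storage stanza in particular is an assumption on my part, and, as noted, I can't vouch for this working against PostgreSQL:

```
sqoop import \
  --connect jdbc:postgresql://dbhost:5432/mydb \
  --username myuser -P \
  --table mytable \
  --hcatalog-database default \
  --hcatalog-table mytable_json \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' STORED AS TEXTFILE"
```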
Bill Brooks, Community Moderator
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.