Member since: 09-16-2021
Posts: 333
Kudos Received: 52
Solutions: 27
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 144 | 11-22-2024 05:29 AM
 | 105 | 11-15-2024 06:38 AM
 | 231 | 11-13-2024 07:12 AM
 | 283 | 11-10-2024 11:19 PM
 | 414 | 10-25-2024 05:02 AM
06-13-2023
12:12 AM
Please share the output of the commands below to identify the exact output record details:
explain formatted <query>
explain extended <query>
explain analyze <query>
06-09-2023
11:59 AM
@Josh2023 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
06-06-2023
09:37 PM
Once the data has been read from the database, you don't need to write it out to a file (e.g. CSV) first. Instead, you can write directly into a Hive table using the DataFrame API and then query it from Hive:
df.write.mode(SaveMode.Overwrite).saveAsTable("hive_records")
Ref - https://spark.apache.org/docs/2.4.7/sql-data-sources-hive-tables.html
Sample code snippet:
df = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql://<server name>:5432/<DBNAME>") \
.option("dbtable", "\"<SourceTableName>\"") \
.option("user", "<Username>") \
.option("password", "<Password>") \
.option("driver", "org.postgresql.Driver") \
.load()
df.write.mode('overwrite').saveAsTable("<TargetTableName>")
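The row count can also be verified from the same Spark session (a quick check, reusing the placeholder table name from the snippet above):
spark.sql("select count(*) from <TargetTableName>").show()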
From Hive:
INFO : Compiling command(queryId=hive_20230607042851_fa703b79-d6e0-4a4c-936c-efa21ec00a10): select count(*) from TBLS_POSTGRES
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20230607042851_fa703b79-d6e0-4a4c-936c-efa21ec00a10); Time taken: 0.591 seconds
INFO : Executing command(queryId=hive_20230607042851_fa703b79-d6e0-4a4c-936c-efa21ec00a10): select count(*) from TBLS_POSTGRES
...
+------+
| _c0 |
+------+
| 122 |
+------+
04-20-2023
10:47 PM
It's working as expected. Please find the code snippet below:
>>> columns = ["language","users_count"]
>>> data = [("Java", "20000"), ("Python", "100000"), ("Scala", "3000")]
>>> df = spark.createDataFrame(data).toDF(*columns)
>>> df.write.csv("/tmp/test")
>>> df2=spark.read.csv("/tmp/test/*.csv")
>>> df2.show()
+------+------+
| _c0| _c1|
+------+------+
|Python|100000|
| Scala| 3000|
| Java| 20000|
+------+------+
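Note that the files were written without a header, so the columns come back as _c0/_c1. If you need the original column names preserved, one option (a sketch, not part of the original reply) is to round-trip with the header option:
>>> df.write.option("header", True).mode("overwrite").csv("/tmp/test")
>>> df2 = spark.read.option("header", True).csv("/tmp/test/*.csv")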
04-20-2023
05:31 AM
From the error, we can see the query failed in MoveTask. MoveTask may also be loading the partitions, since the load statement targets a partitioned table. Along with the HS2 logs, the HMS logs for the corresponding time period give a better idea of the root cause of the failure. If it's just a timeout issue, increase the client socket timeout value.
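For example, the setting commonly adjusted for this is hive.metastore.client.socket.timeout (the value below is illustrative, and depending on your deployment it may need to be set in hive-site.xml rather than per session):
set hive.metastore.client.socket.timeout=1800s;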
10-13-2022
02:46 AM
@Sunil1359 Compilation time might be higher if the table has a large number of partitions or if the HMS process is slow when the query runs. Please check the following for the corresponding time period to find the root cause:
- HS2 log
- HMS log
- HMS jstack (a capture sketch follows below)
With the Tez engine, queries run in the form of a DAG. In the compilation phase, once semantic analysis completes, a plan is generated for the query you submitted; explain <your query> shows that plan. Once the plan is generated, the DAG is submitted to YARN and runs according to the plan. As part of the DAG, split generation, input file reads, shuffle fetches, etc. are handled, and the end result is transferred to the client.
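For instance, a thread dump of the HMS process can be captured with the JDK's jstack tool (the pid placeholder stands for the actual HiveMetaStore process id):
jstack -l <HMS pid> > hms_jstack.txt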
04-29-2022
03:32 AM
Hi @ggangadharan, I followed the way you suggested, but now I need help with the code so that I can execute that HDFS shell script from a Python script. I did it as:
import subprocess
subprocess.call('./home/test.sh/', shell=True)
file = open('toDelete', 'r')
for each in file:
    subprocess.call(["hadoop", "fs", "-rm", "-f", each])
But now my shell script is not executing and not showing any output. Please suggest what I should do. Thanks.
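A minimal corrected sketch of the above (assuming the script actually lives at /home/test.sh and is executable, and that toDelete lists one HDFS path per line; the trailing-slash path in the original would not resolve, and each line read from the file needs its newline stripped):
import subprocess

# Run the shell script from its absolute path
subprocess.call(['bash', '/home/test.sh'])

# Delete each HDFS path listed in the file, one per line
with open('toDelete', 'r') as f:
    for line in f:
        path = line.strip()  # drop the trailing newline
        if path:
            subprocess.call(['hadoop', 'fs', '-rm', '-f', path])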
03-09-2022
04:25 AM
Hello, sorry for the late answer. I can't paste all the information in the chat. Are there some specific values that I can share?
03-08-2022
01:34 AM
Hello @nikrahu, we believe the post by @ggangadharan answers your queries, so we shall mark the post as resolved. If you have any concerns, feel free to engage the Cloudera Community via a post. Thanks @ggangadharan for the detailed examples! Regards, Smarak