Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3296 | 06-11-2020 02:45 PM |
 | 5014 | 05-01-2020 12:23 AM |
 | 2815 | 04-21-2020 03:38 PM |
 | 2620 | 04-14-2020 12:26 AM |
 | 2321 | 02-27-2020 05:51 PM |
01-17-2020
08:43 AM
Hi @EricL, can you please share your comments? The strange thing is that I cleaned up everything and freshly installed and set up a new cluster, but we are still facing the same issue: the jobs run in local mode, not in YARN mode. I ran the job on the active Resource Manager server and on the edge node, but no luck. Can someone please let us know where I can debug and fix this issue? Is this an OS-level issue, a YARN issue, or something else? When I ran Host Inspector, I didn't find any problems such as firewall or SELinux issues. Please help us. Best Regards, Vinod
01-17-2020
04:36 AM
Hi Sudhnidra,

Please take a look at:
https://blog.cloudera.com/yarn-fairscheduler-preemption-deep-dive/
https://blog.cloudera.com/untangling-apache-hadoop-yarn-part-3-scheduler-concepts/
https://clouderatemp.wpengine.com/blog/2016/06/untangling-apache-hadoop-yarn-part-4-fair-scheduler-queue-basics/

Which FairShare value are you looking at: Steady FairShare or Instantaneous FairShare?

What is the weight of the default queue you are submitting your apps to?

Best, Lyubomir
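For context, queue weights live in the Fair Scheduler allocation file (fair-scheduler.xml). A minimal hedged sketch follows; the queue name matches the "default" queue discussed above, and the weight value is purely illustrative:

```xml
<?xml version="1.0"?>
<allocations>
  <queue name="default">
    <!-- Relative share of cluster resources for this queue;
         1.0 is the default weight when none is specified. -->
    <weight>1.0</weight>
  </queue>
</allocations>
```

Raising the weight gives the queue a proportionally larger FairShare relative to its sibling queues.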
01-16-2020
03:00 PM
@DPez, Yeah, please share the results of your research. Cheers Eric
01-15-2020
10:02 AM
@Shelton @EricL Thank you both. The correct ACL spec is group::r-x. Now the following command works: sudo -u zeppelin hadoop fs -ls /warehouse/tablespace/managed/hive/test1. From what I just ran into, I feel that, by design, Hive takes extra effort to prevent users from accessing managed table files directly. I will follow that design and access Hive managed tables only through Hive.
01-14-2020
02:05 PM
@Nekkanti, The whole {application_id} is a placeholder, so you should run the following instead: sudo -u yarn yarn logs -applicationId application_1578980062850_0002
01-09-2020
03:15 AM
@legendarier, Please go to CM > Cloudera Management Services > Instances > Service Monitor > Charts Library > Service Monitor Storage, check the impala-query-monitoring chart, and see how old the data is (and whether there is any data at all). See my screenshot below. This helps confirm whether the data is being stored correctly.

In detail, the CM agent collects two types of Impala queries and sends them to Service Monitor, which in turn displays them in the CM interface:

1. In-flight queries
The CM agent fetches in-flight query IDs via the API call: http(s)://{host}:{impalad-port}/inflight_query_ids
The CM agent fetches in-flight query details via the API call: http(s)://{host}:{impalad-port}/query_profile_encoded?query_id={query_id}
The above details are combined and sent to the Service Monitor service.

2. Finished queries
The CM agent parses the query profiles under /var/log/impalad/profiles on the Impala daemon host and sends the data to Service Monitor.

So to troubleshoot the issue, you can check:
a. the CM agent log on the Impala coordinator host for any Impala-related errors
b. the Service Monitor server log for any Impala-related errors

Hope the above helps. Cheers Eric
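The two debug-endpoint calls described above can be sketched in Python. This is a hedged illustration only: `inflight_ids_url` and `query_profile_url` are hypothetical helper names, and the host, port, and query ID below are placeholders (25000 is the usual impalad web UI port).

```python
# Hedged sketch: the two impalad debug-web-UI URLs the CM agent polls.
# Helper names are hypothetical; host/port/query_id values are placeholders.

def inflight_ids_url(host, port, scheme="http"):
    """URL listing the IDs of queries currently in flight on an impalad."""
    return f"{scheme}://{host}:{port}/inflight_query_ids"

def query_profile_url(host, port, query_id, scheme="http"):
    """URL returning the encoded runtime profile for a single query."""
    return f"{scheme}://{host}:{port}/query_profile_encoded?query_id={query_id}"

print(inflight_ids_url("impalad-1.example.com", 25000))
print(query_profile_url("impalad-1.example.com", 25000, "a14b2c3d00000000:1"))
```

Fetching each URL (e.g. with curl) on the coordinator host is a quick way to verify the endpoints the agent depends on are reachable.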
01-07-2020
01:58 AM
Thanks @EricL for chiming in. @manjj That's correct. I found it later today as well: this resides in desktop_document2 nowadays 🙂
01-06-2020
11:42 PM
Hi @EricL. Thank you very much... but the use case is as follows. This is the output data structure (JSON) we require:

{
  "name": [
    {
      "use": "official",   // "tab1.use" is the column and value
      "family": "family",  // "tab1.family" is the column and value
      "given": [           // this column we need to create and populate from "tab1.fn&ln"
        "first1",          // "first1" comes from tab1.fname
        "last1"            // "last1" comes from tab1.lname
      ]
    },
    {
      "use": "usual",      // "tab2.use" is the column and value
      "given": [           // here we need to create the column from fn&ln
        "first1 last1"     // "first1 last1" comes from tab1.fname & tab1.lname
      ]
    }
  ]
}

Here we want to create a column (name) from the above columns. The above data is a JSON structure, but I want it in Hive with table columns; then we can convert it back into JSON in my use case.
Note: the structure matters here.
Thanks,
HadoopHelp
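As a rough illustration of the target shape (not Hive itself), here is a Python sketch that assembles the structure from flat column values. The fname/lname values and the "use" strings are placeholders standing in for the tab1/tab2 columns:

```python
import json

# Flat "table" values standing in for tab1 columns (placeholders).
fname, lname = "first1", "last1"

record = {
    "name": [
        {
            "use": "official",
            "family": "family",
            "given": [fname, lname],        # two separate array elements
        },
        {
            "use": "usual",
            "given": [f"{fname} {lname}"],  # one concatenated element
        },
    ]
}

print(json.dumps(record, indent=2))
```

The key point the sketch captures is that "given" is an array in both entries: two elements in the first, a single concatenated element in the second.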
01-05-2020
05:24 PM
@Cl0ck Glad that it is all resolved. cheers
01-03-2020
10:03 PM
@Shaneg For Sqoop export, the parameter "--export-dir" is required; please refer to the doc below: https://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html#_syntax_4 Export is designed to export HDFS data to an RDBMS, not Hive tables to an RDBMS. Hope that helps. Cheers Eric