Member since
07-06-2018
59
Posts
1
Kudos Received
4
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3757 | 03-05-2019 07:20 AM |
| | 3927 | 01-16-2019 09:15 AM |
08-09-2019
08:45 AM
Thanks for confirming that. We'll enable it for Impala as well, but only after a week or so; in the meantime I wanted to know whether it would still work or not.
08-09-2019
07:52 AM
Hi Team, We are in the process of enabling TLS on HS2, and I wanted to clarify whether Impala will be affected when that happens. Impala depends on HMS, and TLS is limited to HiveServer2 only; given that, should having TLS enabled on Hive while running Impala without it be OK, or do you see a scenario where this combination may not work? Regards
Labels:
- Apache Hive
- Apache Impala
07-30-2019
01:59 PM
Hi everyone, I'm using the command in the subject to fetch all YARN applications, then grep the output to filter for a specific application. I wanted to find out what the limit of this command is, i.e. how far back in history it goes to get these states. Is this bounded by the JobHistory Server retention limit, or does it return something less than that, such as the last 1000 jobs? Regards
Labels:
- Apache YARN
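A minimal sketch of the filtering pattern described above. The listing is faked here with `printf` so the snippet runs without a cluster; on a real cluster the first command would be `yarn application -list -appStates ALL` (application IDs and names below are placeholders):

```shell
# Simulated output of `yarn application -list -appStates ALL`,
# piped through grep to pick out one application ID.
printf '%s\n' \
  'application_1546300800000_0001  wordcount  MAPREDUCE  FINISHED' \
  'application_1546300800000_0002  etl-load   SPARK      FAILED' |
grep 'application_1546300800000_0002'
```

Note that `-appStates ALL` asks the ResourceManager for every state it still knows about, which is why the retention question above matters.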
06-05-2019
07:15 AM
Hey Network, Has anyone had this issue? Or perhaps the Cloudera team in this community can share whether this is a known bug. Regards
06-03-2019
01:55 PM
I have a scenario where I'm trying to create a table that points to an HDFS location whose path contains a directory name starting with an "_". Table creation goes through, but if I try to read data out of the table it throws an error. Below is what I get:

```sql
create external table `ingest.workgroup__views2`
row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
location 'hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views'
tblproperties ('avro.schema.url'='hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views/_gen/_views.avsc');
```

```
No rows affected (0.232 seconds)
0: jdbc:hive2://t-hive.sys.cigna.com:25006/de> select * from ingest.workgroup__views2;
Error: java.io.IOException: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views (state=,code=0)
0: jdbc:hive2://t-hive.sys.cigna.com:25006/de> drop table ingest.workgroup__views2;
```

So I escape the special character "_" in the location; the table gets created and I'm able to run a select, as below:

```sql
create external table `ingest.workgroup__views2`
row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
location 'hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/\_views'
tblproperties ('avro.schema.url'='hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views/_gen/_views.avsc');
```

```
No rows affected (0.19 seconds)
0: jdbc:hive2://t-hive.sys.cigna.com:25006/de> select * from ingest.workgroup__views2;
(empty result set with columns: id, name, view_url, created_at, owner_id,
 owner_name, workbook_id, index, title, caption, site_id)
No rows selected (0.139 seconds)
```

Now the weird part is that it's only the location clause that has this issue: parsing of the URI under tblproperties goes through, as you can see above, and if I explicitly try to escape "_" in tblproperties it doesn't work. Any comments or suggestions on the above observation would be helpful. Regards
Labels:
- Apache Hive
- HDFS
03-05-2019
07:20 AM
Ways to change the pools via the API today: use a PUT call to http://$HOSTNAME:7180/api/v19/clusters/&lt;cluster&gt;/services/&lt;yarn&gt;/config to change yarn_fs_scheduled_allocations, followed by a POST to refresh the pools (http://$HOSTNAME:7180/api/v19/clusters/&lt;cluster&gt;/commands/poolsRefresh).

Pros:
- It updates the pools, as desired.
- It does NOT affect the web UI.

Cons:
- The JSON is complex and prone to typos.
- A typo could mess up all pools and cause issues on the cluster.
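The two calls above can be sketched as a small shell helper. This is a hedged illustration, not a verified script: the host, cluster name, YARN service name, credentials, and the `allocations.json` file are all placeholders you would substitute for your environment.

```shell
# Build a Cloudera Manager API v19 URL for a given host and path.
cm_url() {
  echo "http://$1:7180/api/v19$2"
}

# Push a new yarn_fs_scheduled_allocations JSON, then refresh the pools.
# Usage: push_pools <cm-host> <cluster> <yarn-service> <allocations.json>
push_pools() {
  host=$1; cluster=$2; yarn_service=$3; json_file=$4
  # 1) PUT the updated yarn_fs_scheduled_allocations value
  curl -u admin:admin -X PUT -H 'Content-Type: application/json' \
    -d @"$json_file" \
    "$(cm_url "$host" "/clusters/$cluster/services/$yarn_service/config")"
  # 2) POST to refresh the dynamic resource pools
  curl -u admin:admin -X POST \
    "$(cm_url "$host" "/clusters/$cluster/commands/poolsRefresh")"
}
```

Given the "typo" risk noted above, it's worth GETting the current config first and diffing your edited JSON against it before the PUT.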
01-16-2019
09:15 AM
Turns out it's a limitation as of now. Updates made to resource pools using the CM API are known to break the Dynamic Resource Pools (DRP) UI. There are internal improvement tickets open with Cloudera to address that.
01-09-2019
08:20 AM
Specifically looking for: Is this doable through the CM API? Can I adjust the weights, memory, and CPU of a YARN resource pool? Can I create and delete pools? Can I make the same allocations for Admission Control in Impala?
01-08-2019
02:19 PM
1 Kudo
Hi Team,
Can anyone share whether they have updated resource pool configurations in YARN using the CM API, and if so, which endpoints were used? Thanks in advance.
Labels:
- Apache Impala
- Apache YARN
- Cloudera Manager