Member since: 07-06-2018
Posts: 59
Kudos Received: 1
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2694 | 03-05-2019 07:20 AM
 | 2864 | 01-16-2019 09:15 AM
 | 1524 | 10-25-2018 01:46 PM
 | 1709 | 08-02-2018 12:34 PM
06-05-2019
07:15 AM
Hey network, has anyone run into this issue? Or perhaps someone from the Cloudera team in this community can share whether this is a known bug. Regards
06-03-2019
01:55 PM
I have a scenario where I'm trying to create a table that points to an HDFS location containing a directory whose name starts with an "_". The table creation goes through, but if I try to read data out of the table it throws an error. Below is what I get:

create external table `ingest.workgroup__views2`
row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
location 'hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views'
tblproperties ('avro.schema.url'='hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views/_gen/_views.avsc');
No rows affected (0.232 seconds)

0: jdbc:hive2://t-hive.sys.cigna.com:25006/de> select * from ingest.workgroup__views2;
Error: java.io.IOException: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views (state=,code=0)

0: jdbc:hive2://t-hive.sys.cigna.com:25006/de> drop table ingest.workgroup__views2;

So I escaped the special character "_" in the location; the table gets created and I'm able to run a select and see data, as below:

create external table `ingest.workgroup__views2`
row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
location 'hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/\_views'
tblproperties ('avro.schema.url'='hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup/_views/_gen/_views.avsc');
No rows affected (0.19 seconds)

0: jdbc:hive2://t-hive.sys.cigna.com:25006/de> select * from ingest.workgroup__views2;
(the query now returns the table schema: workgroup__views2.id, name, view_url, created_at, owner_id, owner_name, workbook_id, index, title, caption, site_id)
No rows selected (0.139 seconds)

Now the weird part is that only the location clause has this issue. Parsing of the URI mentioned under tblproperties goes through, as you can see above, and if I explicitly try to escape "_" in tblproperties it doesn't work. Any comments or suggestions on the above observation will be helpful. Regards
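In case it helps with reproducing this, here is a minimal sketch that replays both variants through beeline from Python, so the unescaped and escaped "_" locations can be compared side by side. The JDBC URL is a placeholder (not our actual HiveServer2 address), and it assumes beeline is on the PATH:

# Minimal repro sketch: runs the CREATE/SELECT pair once with the raw "_views"
# location and once with the escaped "\_views" location.
import subprocess

JDBC_URL = "jdbc:hive2://hive-host.example.com:10000/ingest"  # placeholder URL

def run_hql(statement):
    """Run a single HiveQL statement through beeline and return its combined output."""
    result = subprocess.run(
        ["beeline", "-u", JDBC_URL, "-e", statement],
        capture_output=True, text=True,
    )
    return result.stdout + result.stderr

base = "hdfs://nameservice1/user/data/ingest/mdm/workgroup_i/workgroup"
for location in (base + "/_views", base + "/\\_views"):  # unescaped vs. escaped "_"
    run_hql("drop table if exists ingest.workgroup__views2")
    run_hql(
        "create external table `ingest.workgroup__views2` "
        "row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' "
        "stored as inputformat 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' "
        "outputformat 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' "
        "location '%s' "
        "tblproperties ('avro.schema.url'='%s/_views/_gen/_views.avsc')" % (location, base)
    )
    print(location)
    print(run_hql("select * from ingest.workgroup__views2"))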
Labels:
- Apache Hive
- HDFS
03-05-2019
07:20 AM
Ways to change the pools via the API today: use a PUT call to http://$HOSTNAME:7180/api/v19/clusters/<cluster>/services/<yarn>/config to change yarn_fs_scheduled_allocations, followed by a POST to refresh the pools (http://$HOSTNAME:7180/api/v19/clusters/<cluster>/commands/poolsRefresh). Pros: it does update the pools as desired, and it does NOT affect the web UI. Cons: the JSON is complex and prone to typos, and a typo could mess up all the pools and cause issues on the cluster.
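For anyone who wants to script it, below is a hedged sketch of that PUT-then-refresh flow using Python requests. The host, credentials, and cluster/service names are placeholders, and the "items" payload shape assumes the standard CM API config list; treat it as an illustration, not a supported tool:

import json
import requests

CM = "http://cm-host.example.com:7180/api/v19"   # placeholder CM host
AUTH = ("admin", "admin")                        # placeholder credentials
CLUSTER = "cluster"                              # placeholder cluster name
YARN = "yarn"                                    # placeholder YARN service name
CONFIG_URL = "%s/clusters/%s/services/%s/config" % (CM, CLUSTER, YARN)

# Read the current pool definition instead of hand-writing the whole JSON document,
# since (as noted above) a typo in it can break every pool.
current = requests.get(CONFIG_URL + "?view=full", auth=AUTH).json()
item = next(i for i in current["items"] if i["name"] == "yarn_fs_scheduled_allocations")
pools = json.loads(item.get("value") or item.get("default") or "{}")

# ... adjust pools here (weights, memory/vcore limits, add or remove queues) ...

# PUT the edited definition back, then POST the pools refresh command.
requests.put(
    CONFIG_URL,
    auth=AUTH,
    json={"items": [{"name": "yarn_fs_scheduled_allocations",
                     "value": json.dumps(pools)}]},
).raise_for_status()
requests.post(
    "%s/clusters/%s/commands/poolsRefresh" % (CM, CLUSTER), auth=AUTH
).raise_for_status()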
01-16-2019
09:15 AM
Turns out it's a limitation for now: updates made to resource pools through the CM API are known to break the Dynamic Resource Pools (DRP) UI. There are improvement tickets open internally with Cloudera to address that.
01-09-2019
08:20 AM
Specifically, I'm looking for: Is this doable through the CM API? Can I adjust the weights, memory, and CPU of a YARN resource pool? Can I create and delete pools? Can I make the same allocations for Admission Control in Impala?
01-08-2019
02:19 PM
1 Kudo
Hi Team,
Can anyone share whether they have updated resource pool configurations in YARN using the CM API, and if so, which endpoints were used? Thanks in advance.
Labels:
- Apache Impala
- Apache YARN
- Cloudera Manager
12-07-2018
12:11 PM
Hi, can anyone confirm whether the post from HDP support can be applied in a CDH environment as well? dbms.py is available under /opt/cloudera/parcels/CDH/lib/hue/apps/beeswax/src/beeswax/server and has:

def get_indexes(self, db_name, table_name):
    hql = 'SHOW FORMATTED INDEXES ON `%(table)s` IN `%(database)s`' % {'table': table_name, 'database': db_name}
    query = hql_query(hql)
    handle = self.execute_and_wait(query, timeout_sec=15.0)
    if handle:
        result = self.fetch(handle, rows=5000)
        self.close(handle)
        return result

def get_functions(self, prefix=None):
    filter = '"%s.*"' % prefix if prefix else '".*"'
    hql = 'SHOW FUNCTIONS %s' % filter
    query = hql_query(hql)
    handle = self.execute_and_wait(query, timeout_sec=15.0)
    if handle:
        result = self.fetch(handle, rows=5000)
        self.close(handle)
        return result
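To make the question concrete: if the fix in the HDP article is simply raising that hard-coded rows=5000 cap, the edit would look something like the sketch below. The path is the CDH parcel location quoted above, the new limit is arbitrary, and this would be an unsupported manual patch that a parcel upgrade would overwrite, which is essentially what I'm asking Cloudera to confirm or correct:

# Hedged helper sketch: backs up dbms.py and bumps every hard-coded rows=5000 fetch
# cap to a larger value. The new limit is an assumption, not a documented setting.
import re
import shutil

DBMS_PY = "/opt/cloudera/parcels/CDH/lib/hue/apps/beeswax/src/beeswax/server/dbms.py"
NEW_LIMIT = 10000  # hypothetical new ceiling

shutil.copy(DBMS_PY, DBMS_PY + ".bak")  # keep a backup of the original file
with open(DBMS_PY) as f:
    source = f.read()
patched = re.sub(r"rows=5000", "rows=%d" % NEW_LIMIT, source)
with open(DBMS_PY, "w") as f:
    f.write(patched)
print("replaced %d occurrence(s)" % len(re.findall(r"rows=5000", source)))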
11-29-2018
09:48 AM
Hi all, is there a way to list more than 5000 tables in a database? By default Hue shows the first 5000 tables from a database; is there a configuration change, supplied through a snippet, by which we can override this? I found a related article but am not sure whether it can be applied to CDH 5.14 as well: https://community.hortonworks.com/articles/75938/hue-does-not-list-the-tables-post-5000-in-number.html Regards
Labels:
- Cloudera Hue
10-25-2018
01:46 PM
Found a solution to this: I had to get the configuration at the role level, which prints everything set in CM: https://hostname:7183/api/v19/clusters/cluster/services/sentry/roles/role-name/config?view=full
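In case someone wants to script this across all roles, here is a hedged sketch with Python requests that lists the roles under the Sentry service and dumps each role's full config. The host, credentials, and cluster/service names are placeholders, and verify=False assumes a self-signed CM certificate:

import requests

CM = "https://cm-host.example.com:7183/api/v19"   # placeholder CM host
AUTH = ("admin", "admin")                         # placeholder credentials
CLUSTER = "cluster"                               # placeholder cluster name
SERVICE = "sentry"                                # placeholder service name

# List all roles belonging to the service.
roles = requests.get(
    "%s/clusters/%s/services/%s/roles" % (CM, CLUSTER, SERVICE),
    auth=AUTH, verify=False,
).json()["items"]

# Pull the full config for each role; this is where items like the Java heap show up.
for role in roles:
    config = requests.get(
        "%s/clusters/%s/services/%s/roles/%s/config?view=full"
        % (CM, CLUSTER, SERVICE, role["name"]),
        auth=AUTH, verify=False,
    ).json()
    for item in config["items"]:
        print(role["name"], item["name"], item.get("value", item.get("default")))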
10-25-2018
12:49 PM
Hi Team, I was looking for a way to change some cluster configs (CDH 5.14) and realized that the CM API calls for viewing the configuration of a particular service don't return all of the configurable items. For example, I ran: https://hostname:7183/api/v19/clusters/cluster/services/sentry/config?view=full but this didn't capture the Java heap set for the Sentry process. What's the best way to see the entire set of configuration items for a particular role/service? Thanks in advance.
Labels:
- Cloudera Manager