Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 19264 | 03-03-2020 08:12 AM |
| | 10393 | 02-28-2020 10:43 AM |
| | 3120 | 12-16-2019 12:59 PM |
| | 2431 | 11-12-2019 03:28 PM |
| | 4232 | 11-01-2019 09:01 AM |
07-18-2018
02:31 PM
@yassine24, Basic information about how to query and update service configuration via Python is here: http://cloudera.github.io/cm_api/docs/python-client/#configuring-services-and-roles

I also pulled this from the Community; it shows how to update an HDFS safety valve via the REST API:

```
curl -iv -X PUT -H "Content-Type:application/json" -H "Accept:application/json" -d '{"items":[{ "name": "core_site_safety_valve","value": "<property><name>hadoop.proxyuser.ztsps.users</name><value>*</value></property><property><name>hadoop.proxyuser.ztsps.groups</name><value>*</value></property>"}]}' http://admin:admin@10.1.0.1:7180/api/v12/clusters/cluster/services/hdfs/config
```

You can use the same approach to update the safety valve for hbase_service_config_safety_valve. NOTE that when you update a safety valve, the new value replaces whatever was there. If you want to "add" a property, you must include all the properties you want as the end result.
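The JSON body in that curl command can also be built programmatically, which makes it easier to include every property you need in the final value. A minimal sketch (the proxyuser property names are the ones from the curl example above; nothing is sent here, only the payload is built):

```python
import json

# Properties we want in the safety valve *after* the update. Remember:
# a PUT replaces the whole safety valve, so list every property you
# want to keep, not just the new one.
properties = {
    "hadoop.proxyuser.ztsps.users": "*",
    "hadoop.proxyuser.ztsps.groups": "*",
}

# Render the properties as the XML snippet Cloudera Manager expects.
xml_value = "".join(
    "<property><name>%s</name><value>%s</value></property>" % (name, value)
    for name, value in properties.items()
)

# Wrap it in the config-update body for PUT .../services/hdfs/config
payload = json.dumps(
    {"items": [{"name": "core_site_safety_valve", "value": xml_value}]}
)
print(payload)
```

The same payload shape works for other safety-valve attributes; only the "name" field changes.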
07-18-2018
02:21 PM
@Manindar, Indeed, if you have a replication factor of 3 and only one DataNode is alive, then there is nowhere to replicate to. More generally, 3 nodes with a replication factor of 3 means every block already has a replica on each node, so there is nothing to replicate or move.
07-18-2018
02:15 PM
@alexmc6, "*" is not a valid regex; ".*" may be what you were going for. I am not quite clear on your business requirement, but I think you are saying you want to create 10 replication schedules that each replicate a chunk of 10 of your area databases, akin to this: area([0-9]|1[0])_.*db
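To see the difference between "*" and ".*", and what that pattern matches, here is a quick check in Python (the database names are made up for illustration):

```python
import re

# The pattern from the post: "area", a number 0-10, an underscore,
# anything, then "db".
pattern = re.compile(r"area([0-9]|1[0])_.*db")

print(bool(pattern.fullmatch("area7_sales_db")))   # in range 0-10
print(bool(pattern.fullmatch("area10_hr_db")))     # 10 matches via 1[0]
print(bool(pattern.fullmatch("area11_hr_db")))     # 11 is outside 0-10

# A bare "*" is not a valid regex: the quantifier has nothing to repeat.
try:
    re.compile("*")
except re.error:
    print("'*' alone does not compile; '.*' (any char, repeated) does")
```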
07-18-2018
11:01 AM
@alexmc6, Cloudera Manager does some basic checks to find out whether other Hive replication commands involving the same databases and tables are running. The fact that your error says "The remote command failed with error message" indicates that the Hive Export command failed on the source Cloudera Manager server. I would open the peer (source) Cloudera Manager and check what commands are running; there may be one or more Hive Export commands in progress. If there are, you can abort them if you want to continue testing. After doing that, try running Hive replication from your destination cluster's Cloudera Manager again. If no other Hive replication commands are running, you should not see this failure.
07-17-2018
03:27 PM
1 Kudo
@yassine24, I'm still not quite sure what you are trying to do exactly. There is no function that updates specific configuration parameters directly. Rather, you update attributes for particular services and roles. For example, "hbase_service_config_safety_valve" is the attribute for the service-level safety valve for HBase. You can see it and the other service-level attributes with the following REST API call: cm_host:cm_port/api/v19/clusters/Cluster 1/services/HBASE-1/config?view=full The above assumes your cluster is named "Cluster 1" (URL-encode the space as Cluster%201 if your client requires it) and your HBase service is HBASE-1. Adjust for your configuration as necessary.
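Once you have that response, you can pick out the safety-valve attribute client-side. A small sketch, assuming the usual items-list shape of a /config?view=full response (the sample response below is abbreviated and illustrative, not real output):

```python
import json

# Abbreviated, made-up example of a /config?view=full response body.
sample_response = json.loads("""
{
  "items": [
    {"name": "hbase_service_config_safety_valve",
     "value": "<property><name>x</name><value>y</value></property>"},
    {"name": "some_other_attribute", "value": "123"}
  ]
}
""")

def get_attribute(config, name):
    """Return the config item with the given name, or None if absent."""
    for item in config["items"]:
        if item["name"] == name:
            return item
    return None

valve = get_attribute(sample_response, "hbase_service_config_safety_valve")
print(valve["value"])
```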
07-17-2018
12:30 PM
1 Kudo
@yassine24, Yes, you can use the API. Try searching the community for "safety valve api" and you will get some hits; I don't have specific examples off-hand. The Cloudera Manager API docs are here: https://cloudera.github.io/cm_api/ For Python and Java, there are examples included with the API download.
07-17-2018
11:43 AM
@tjford, Yup, if you have already installed the "cloudera-manager-agent" and "cloudera-manager-daemons" packages, configured the agent's config.ini with the server_host as your CM host, and started it (so that it is heartbeating to Cloudera Manager) then you can probably just add it to the cluster using this: https://cloudera.github.io/cm_api/apidocs/v5.15.0/path__clusters_-clusterName-_hosts.html
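That add-host call is a POST to the clusters/{clusterName}/hosts endpoint with a list of host references. A sketch of building such a request in Python; the CM host, cluster name, and host ID below are placeholders, and nothing is actually sent (the request object is only constructed):

```python
import json
import urllib.request

# Placeholder values -- substitute your CM host, cluster name, and host ID.
cm_base = "http://cm-host.example.com:7180/api/v19"
cluster = "Cluster%201"  # cluster names with spaces must be URL-encoded

# Body shape for POST /clusters/{clusterName}/hosts: a list of host refs.
body = json.dumps({"items": [{"hostId": "new-worker.example.com"}]}).encode()

req = urllib.request.Request(
    url="%s/clusters/%s/hosts" % (cm_base, cluster),
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it (with auth added); omitted here.
print(req.get_method(), req.full_url)
```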
07-17-2018
08:35 AM
@t5, I am pretty sure we are still in the closed beta stage so you would have to enter into the beta program. You can try contacting the folks responsible for that via our website: https://www.cloudera.com/products/cloudera-enterprise-6.html I would say there is a fair chance that your application works OK with CDH 5.15, though, so I'll stay optimistic!
07-17-2018
08:28 AM
@prabhat10, The logs you pasted in came as one line; I added some hard returns to make them easier to read. We can see that the HBase Thrift calls appear to be succeeding, but slowly:

```
[16/Jul/2018 20:10:18 -0700] thrift_util WARNING SLOW: 8.33 - Thrift call: hbased.Hbase.Client.getTableNames(args=(), kwargs={'doas': u'root'}) returned in 8331ms: ['beta1', 'beta99', 'emp', 'qwe1', 'sample']
[16/Jul/2018 20:10:21 -0700] thrift_util INFO SLOW: 2.87 - Thrift call: hbased.Hbase.Client.isTableEnabled(args=('beta1',), kwargs={'doas': u'root'}) returned in 2870ms: True
[16/Jul/2018 20:10:22 -0700] thrift_util INFO SLOW: 1.65 - Thrift call: hbased.Hbase.Client.isTableEnabled(args=('beta99',), kwargs={'doas': u'root'}) returned in 1646ms: True
[16/Jul/2018 20:10:25 -0700] thrift_util INFO SLOW: 2.36 - Thrift call: hbased.Hbase.Client.isTableEnabled(args=('emp',), kwargs={'doas': u'root'}) returned in 2356ms: True
[16/Jul/2018 20:10:28 -0700] thrift_util INFO SLOW: 3.96 - Thrift call: hbased.Hbase.Client.isTableEnabled(args=('qwe1',), kwargs={'doas': u'root'}) returned in 3960ms: True
[16/Jul/2018 20:10:30 -0700] thrift_util INFO SLOW: 1.58 - Thrift call: hbased.Hbase.Client.isTableEnabled(args=('sample',), kwargs={'doas': u'root'}) returned in 1575ms: True
[16/Jul/2018 20:11:15 -0700] thrift_util WARNING SLOW: 8.33 - Thrift call: hbased.Hbase.Client.scannerOpenWithScan(args=(u'beta99', TScan(stopRow=None, filterString='(ColumnPaginationFilter(500,0) AND PageFilter(500))', timestamp=None, batchSize=None, startRow='', caching=None, columns=[]), None), kwargs={'doas': u'root'}) returned in 8331ms: 0
[16/Jul/2018 20:11:23 -0700] thrift_util WARNING SLOW: 7.69 - Thrift call: hbased.Hbase.Client.scannerGetList(args=(0, 10), kwargs={'doas': u'root'}) returned in 7687ms: [TRowResult(columns={'info:patient_name': TCell(timestamp=1531376997284, value='santosh10'), 'info:registration_id': TCell(timestamp=1531376960851, value='10009')}, row='10')]
[16/Jul/2018 20:11:23 -0700] access INFO 182.70.14.175 root - "POST /hbase/api/getRowQuerySet/HBase/beta99/[]/[{"row_key":"null","scan_length":10,"columns":[],"prefix":"false","filter":null,"editing":true}] HTTP/1.1" returned in 16037ms
```

This indicates that the HBase Thrift Server is receiving the requests from Hue and returning results. I also noted in your log that Hive requests are slow to return:

```
[16/Jul/2018 20:09:58 -0700] hive_server2_lib INFO Opening beeswax thrift session for user root
[16/Jul/2018 20:10:26 -0700] thrift_util WARNING SLOW: 27.75 - Thrift call: <class 'TCLIService.TCLIService.Client'>.OpenSession(args=(TOpenSessionReq(username='hue', password=None, client_protocol=6, configuration={'hive.server2.proxy.user': u'root'}),), kwargs={}) returned in 27748ms: TOpenSessionResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0), sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret=fc4174a460dbffc2:a70e91db1c9f7c84, guid=3d46ed646f3340b1:5dbe2e16127091b6)), configuration={}, serverProtocolVersion=6)
[16/Jul/2018 20:10:26 -0700] hive_server2_lib INFO Session '\xb1@3od\xedF=\xb6\x91p\x12\x16.\xbe]' opened
```

This may indicate that one or more of your services is not performing as quickly as expected. I recommend looking more closely at the HBase Thrift Server itself: review its logs for clues about what is taking longer than one would expect. From what I see in the Hue log you provided, there is no indication that the HBase queries are failing on the Hue --> HBase Thrift connection side.
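The "returned in NNNNms" timings in those Hue thrift_util lines can be extracted mechanically to spot the slow calls. A small sketch; the two sample lines are shortened versions of the log above, and the regex assumes Hue's "Thrift call: ... returned in NNNNms" wording:

```python
import re

# Hue's thrift_util logs slow calls as "Thrift call: <name>(...) returned in <N>ms".
TIMING_RE = re.compile(r"Thrift call: ([\w.]+)\(.*returned in (\d+)ms")

sample_lines = [
    "... Thrift call: hbased.Hbase.Client.getTableNames(args=(), kwargs={}) returned in 8331ms: [...]",
    "... Thrift call: hbased.Hbase.Client.isTableEnabled(args=('beta1',), kwargs={}) returned in 2870ms: True",
]

def slow_calls(lines, threshold_ms=1000):
    """Yield (call_name, milliseconds) for calls slower than the threshold."""
    for line in lines:
        m = TIMING_RE.search(line)
        if m and int(m.group(2)) >= threshold_ms:
            yield m.group(1), int(m.group(2))

for call, ms in slow_calls(sample_lines):
    print(call, ms)
```

Running this over the full Hue log (with a higher threshold) quickly shows which backend calls dominate the 16-second page load.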
07-16-2018
12:17 PM
1 Kudo
@willschlemmel, Please have a look at this thread, where we discuss how the API works with Impala queries: http://community.cloudera.com/t5/Cloudera-Manager-Installation/CM-API-Maximum-number-of-requests/m-p/67802/highlight/true#M13943 Check whether "warnings" are returned in your result and whether they contain a timestamp. Also, please tell us more about your problem. You said "I cannot obtain any queries from dates earlier...." How do you know this? What exactly do you observe?
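For reference, the Impala queries endpoint takes its time window as query parameters; a sketch of assembling such a request URL (the CM host, cluster name, service name, and timestamps below are placeholders, and I am assuming the from/to parameter names the endpoint documents):

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own CM host, cluster, and service.
cm_base = "http://cm-host.example.com:7180/api/v19"
endpoint = "%s/clusters/Cluster%%201/services/IMPALA-1/impalaQueries" % cm_base

# The time window is passed as ISO 8601 "from"/"to" timestamps.
params = urlencode({
    "from": "2018-07-01T00:00:00",
    "to": "2018-07-16T00:00:00",
})

url = "%s?%s" % (endpoint, params)
print(url)
```

Comparing the window you request against the timestamps in any returned warnings should help narrow down why earlier dates come back empty.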