Member since: 10-15-2014
Posts: 126
Kudos Received: 2
Solutions: 6
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 1026 | 10-29-2018 12:45 PM
 | 2134 | 10-29-2018 11:42 AM
 | 1182 | 04-05-2017 06:57 PM
 | 3829 | 02-05-2016 05:13 PM
 | 21613 | 10-31-2014 11:25 AM
10-29-2018
12:45 PM
I had to customize "Hue Server Advanced Configuration Snippet (Safety Valve) for hue_safety_valve_server.ini" to add back the base functionality:

[notebook]
  [[interpreters]]
    [[[hive]]]
      name=Hive
      interface=hiveserver2
    [[[impala]]]
      name=Impala
      interface=hiveserver2
    [[[presto]]]
      name=Presto
      interface=jdbc
      options='{"url": "jdbc:presto://ta-pci-prod-presto-master01.tripactions.local:8889/ta", "driver": "com.facebook.presto.jdbc.PrestoDriver"}'
10-29-2018
11:42 AM
I had to customize "Hue Server Advanced Configuration Snippet (Safety Valve) for hue_safety_valve_server.ini" to add back the base functionality:

[notebook]
  [[interpreters]]
    [[[hive]]]
      name=Hive
      interface=hiveserver2
    [[[impala]]]
      name=Impala
      interface=hiveserver2
    [[[presto]]]
      name=Presto
      interface=jdbc
      options='{"url": "jdbc:presto://ta-pci-prod-presto-master01.tripactions.local:8889/ta", "driver": "com.facebook.presto.jdbc.PrestoDriver"}'
10-26-2018
03:19 PM
Sure. I should be able to see both Hive and Impala in this drop-down (see attached), and yes, it is activated.
10-26-2018
01:25 PM
I added Impala as a service after the installation of CDH6. Impala works via the command line on the box, but it doesn't work in Hue. I followed the instructions per the documents and from here: https://community.cloudera.com/t5/Web-UI-Hue-Beeswax/Do-not-see-impala-app-in-hue-after-installation-of-impala/m-p/63267/highlight/true#M2560 According to @Romainr I need to add the following to hue.ini:

[[[impala]]]
name=Impala
interface=hiveserver2

Every time I add something to "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini" and try to restart the service, the service refuses to start. Am I not putting the stanza in the right place, or in the right format? It is supposed to be a simple change from None to Impala, and it should work.
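A hedged note, grounded in the configuration that ultimately worked (see the accepted solutions above): the interpreter block is only valid when nested under its parent [notebook] and [[interpreters]] sections, so a bare [[[impala]]] fragment in the safety valve can keep Hue from starting. A minimal sketch of the full nesting:

[notebook]
  [[interpreters]]
    [[[impala]]]
      name=Impala
      interface=hiveserver2

After a restart, the rendered config under the agent's process directory (path pattern assumed; the numeric prefix varies per restart) shows what Hue actually received:

grep -r -A2 'interpreters' /run/cloudera-scm-agent/process/*-hue-HUE_SERVER/*.ini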
09-18-2018
03:38 PM
On CM 6 (playing around with it) I configured Hue, but Hive is not available. The only option for the Hive Service setting, under Hue (Service-Wide), is Hive. Am I missing something?
07-18-2018
11:11 AM
Is there a chart, or some other way, to see all the tables used by Impala for the last 6 months? I know that in CM the Queries tab shows only 30 days of prior queries. Is there a chart I can build out?
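One hedged approach, not a confirmed solution: Cloudera Manager exposes completed Impala queries over its REST API, so a periodic export could be aggregated by table yourself; retention is still bounded by CM's query-monitoring storage, and the host, credentials, API version, and cluster/service names below are placeholders:

curl -u admin:admin \
  "http://cm-host:7180/api/v11/clusters/cluster/services/impala/impalaQueries?from=2018-01-18T00:00:00&limit=1000"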
03-05-2018
03:16 PM
I have this same issue. I cannot use Sqoop or any ETL tool; SSIS/SSRS is not installed, so I cannot create a package. Hoping to reverse this: https://blogs.msdn.microsoft.com/igorpag/2013/11/12/how-to-create-a-sql-server-linked-server-to-hdinsight-hive-using-microsoft-hive-odbc-driver/
02-06-2018
10:32 AM
We are on Impala Shell v2.8.0-cdh5.11.1, community edition. SSL is enabled but no Sentry. Executed a refresh schema.table command:

17:15:14 Query: refresh hbasestage.raw_transactions
17:15:14 Query submitted at: 2018-02-06 01:15:14 (Coordinator: http://hadoop4-private.wdc01.infra.ripple.com:25000)
03:21:40 Query progress can be monitored at: http://hadoop4-private.wdc01.infra.ripple.com:25000/query_plan?query_id=984250bf5d880d33:fddc42d100000000
03:21:40
03:21:40 Fetched 0 row(s) in 36385.13s

There has to be a better way. The table is normally populated by Hive, so a refresh is required for Impala to recognize new partitions. The table has 1861 partitions, 1.28 TB of data in total, and each partition is no bigger than 3 GB (partitioned by date). The files are Avro, but that shouldn't impact it (should it?). YARN does NOT manage resources.
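A hedged sketch, assuming Impala 2.7+ syntax and a hypothetical partition column name: REFRESH can target a single partition, so after each Hive load only the newly added partition is scanned instead of all 1861:

# load_date is a placeholder for the actual date partition column
impala-shell --ssl -q "REFRESH hbasestage.raw_transactions PARTITION (load_date='2018-02-06')"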
08-17-2017
09:20 AM
Another update: I left the master alone to see if it can resolve itself. Starting namespace manager (since 11hrs, 55mins, 3sec ago). Unfortunately, I cannot find any logs on what exactly it's trying to do.
08-16-2017
04:23 PM
Add-on: https://issues.apache.org/jira/browse/HBASE-16488. The master is still starting: Starting namespace manager (since 1hrs, 55mins, 57sec ago)
08-16-2017
03:54 PM
My primary master died due to scheduled maintenance, and my backup master failed to kick in. The CM Agent has tried to start it up but could not initialize the namespace table. After several manual efforts, where I followed https://community.cloudera.com/t5/Storage-Random-Access-HDFS/HBase-Master-Failed-to-become-active-master/m-p/27186#M1225 I did only the following:

rmr /hbase/meta-region-server
rmr /hbase/rs
rmr /hbase/splitWAL
rmr /hbase/backup-masters
rmr /hbase/table-lock
rmr /hbase/flush-table-proc
rmr /hbase/region-in-transition
rmr /hbase/running
rmr /hbase/balancer
rmr /hbase/recovering-regions
rmr /hbase/draining
rmr /hbase/namespace
rmr /hbase/hbaseid
rmr /hbase/table

I got a master to come up after setting hbase.master.namespace.init.timeout to some absurd value. I see the master registering dead region servers (though I cannot find where it picks them up; not in the WAL, archive, or data), and I see the master reporting the following: Starting namespace manager (since 1hrs, 20mins, 5sec ago). Even though Cloudera Manager shows healthy, listing the catalog in the hbase shell gives me the following error:

hbase(main):004:0> list
TABLE
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2373)
at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:907)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55650)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2182)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)

fsck /hbase -files -blocks shows healthy, and hbck shows zero inconsistencies. I am on versions Hadoop 2.6.0+cdh5.11.1+2400 and HBase 1.2.0+cdh5.11.1+319. I did have a master colocated with a region server, and I was wondering if I ran into https://issues.apache.org/jira/browse/HBASE-14861 and then https://issues.apache.org/jira/browse/HBASE-14664 as the cause of the backup failing to kick in. But I cannot determine why the namespace manager would take so long to initialize.
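For reference, two hedged notes on the steps above (values are examples, not recommendations). The rmr commands run inside the ZooKeeper CLI that ships with HBase:

hbase zkcli
# inside the CLI, against the znode parent from hbase-site.xml (default /hbase):
# rmr /hbase/namespace

And the timeout goes into the HBase Master safety valve for hbase-site.xml (the default is typically 300000 ms):

<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>3600000</value>
</property>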
08-10-2017
11:39 AM
Recently upgraded Cloudera from 5.4 to 5.12, but running 5.11 components, so Impala 2.8.0+cdh5.11.1+0. Also recently moved the Hive Metastore database from Postgres to MySQL. Currently I have found that Impala is slow to execute queries, or slow to execute them at all. I have a 7-node cluster and only 5 real users using Impala. In searching the logs I am getting the following error:

E0810 18:01:54.606694 25271 ShortCircuitCache.java:215] ShortCircuitCache(0xba3834e): failed to release short-circuit shared memory slot Slot(slotIdx=17, shm=DfsClientShm(5db0b20a1ec13fd9af815db2d22cd894)) by sending ReleaseShortCircuitAccessRequestProto to /var/run/hdfs-sockets/dn. Closing shared memory segment.
Java exception follows:
java.io.IOException: ERROR_INVALID: there is no shared memory segment registered with shmId 5db0b20a1ec13fd9af815db2d22cd894
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:208)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I also see the following as well (cc:278):
channel send status:
Sender timed out waiting for receiver fragment instance: 3d4e8ea794a486e7:a74d612000000010
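A hedged diagnostic sketch (the socket path is taken from the error above): verifying that the DataNode domain socket exists and matches dfs.domain.socket.path on the HDFS side rules out a stale path left over from the upgrade:

ls -l /var/run/hdfs-sockets/dn
grep -A1 'dfs.domain.socket.path' /etc/hadoop/conf/hdfs-site.xml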
08-10-2017
11:31 AM
I recently migrated the Hue database to MySQL, as well as the Hive metastore database, and then upgraded Cloudera Manager to 5.12 but using 5.11 components (so Hue 3.9.0+cdh5.11.1+5073). My users have noticed that Hue takes a tremendous amount of time to render its pages. Note: cross-posted on the hue-users Google group. I extended the timeout settings, and under an incognito window (to ensure nothing is cached) I was able to get this message:

Lock wait timeout exceeded

Traceback:
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py" in get_response
  112. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/transaction.py" in inner
  371. return func(*args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/django_axes-1.5.0-py2.7.egg/axes/decorators.py" in decorated_login
  304. response = func(request, *args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/desktop/core/src/desktop/auth/views.py" in dt_login
  121. login(request, user)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/contrib/auth/__init__.py" in login
  89. user_logged_in.send(sender=user.__class__, request=request, user=user)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/dispatch/dispatcher.py" in send
  185. response = receiver(signal=self, sender=sender, **named)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/contrib/auth/models.py" in update_last_login
  30. user.save(update_fields=['last_login'])
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py" in save
  545. force_update=force_update, update_fields=update_fields)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py" in save_base
  573. updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py" in _save_table
  635. forced_update)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py" in _do_update
  679. return filtered._update(values) > 0
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/query.py" in _update
  510. return query.get_compiler(self.db).execute_sql(None)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/sql/compiler.py" in execute_sql
  980. cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/sql/compiler.py" in execute_sql
  786. cursor.execute(sql, params)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/util.py" in execute
  53. return self.cursor.execute(sql, params)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/utils.py" in __exit__
  99. six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/util.py" in execute
  53. return self.cursor.execute(sql, params)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/mysql/base.py" in execute
  124. return self.cursor.execute(query, args)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/MySQL_python-1.2.5-py2.7-linux-x86_64.egg/MySQLdb/cursors.py" in execute
  205. self.errorhandler(self, exc, value)
File "/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hue/build/env/lib/python2.7/site-packages/MySQL_python-1.2.5-py2.7-linux-x86_64.egg/MySQLdb/connections.py" in defaulterrorhandler
  36. raise errorclass, errorvalue

Exception Type: OperationalError at /accounts/login/
Exception Value: (1205, 'Lock wait timeout exceeded; try restarting transaction')

In the runcpserver log I get:

[04/Aug/2017 07:53:13 -0700] wsgiserver ERROR WSGI (<WorkerThread(CP WSGIServer Thread-33, started 140223207950080)>) error: [('SSL routines', 'SSL23_WRITE', 'ssl handshake failure')]
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1304, in run
    conn.communicate()
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1214, in communicate
    req.simple_response("408 Request Timeout")
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 618, in simple_response
    self.wfile.sendall("".join(buf))
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1139, in sendall
    return self._safe_call(False, super(SSL_fileobject, self).sendall, *args, **kwargs)
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1091, in _safe_call
    return call(*args, **kwargs)
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 753, in sendall
    bytes_sent = self.send(data)
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1142, in send
    return self._safe_call(False, super(SSL_fileobject, self).send, *args, **kwargs)
  File "/opt/cloudera/parcels/CDH-5.4.11-1.cdh5.4.11.p0.5/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1121, in _safe_call
    raise FatalSSLAlert(*e.args)

But I am using self-signed certs. Any thoughts or help would be appreciated.
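A hedged diagnostic sketch for the 1205 error (plain MySQL, nothing Hue-specific assumed): the update of auth_user.last_login is blocked by another transaction holding the row lock, and InnoDB's transaction view from the mysql client identifies the blocker:

-- run in the mysql client against the Hue database server
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
  FROM information_schema.innodb_trx;
SHOW ENGINE INNODB STATUS\G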
07-29-2017
12:42 PM
I am getting very strange behavior that requires me to restart the cluster each time to clear it. For context, I am on Cloudera version 5.8.5:

Hadoop 2.6.0-cdh5.8.5
Subversion http://github.com/cloudera/hadoop -r 47218bf3433a3c3e52036f79d99a597fed09f261
Compiled by jenkins on 2017-05-11T21:06Z
Compiled with protoc 2.5.0
From source with checksum 197c50392cb0362b8f23f945ae5aca42
This command was run using /opt/cloudera/parcels/CDH-5.8.5-1.cdh5.8.5.p0.5/jars/hadoop-common-2.6.0-cdh5.8.5.jar

HBase 1.2.0-cdh5.8.5
Source code repository file:///data/jenkins/workspace/generic-package-ubuntu64-14-04/CDH5.8.5-Packaging-HBase-2017-05-11_13-49-18/hbase-1.2.0+cdh5.8.5+263-1.cdh5.8.5.p0.10~trusty revision=Unknown
Compiled by jenkins on Thu May 11 14:14:09 PDT 2017
From source with checksum 888872fd1ae945e40fea73d87f264b23

java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

After a while I see the following warnings. Master logs are reporting:

2017-07-29 19:33:51,775 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=351, exceptions:
Sat Jul 29 19:33:51 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=78531: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hadoop1-private.sjc03.infra.ripple.com,60020,1501335437100, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:183)
at org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:135)
at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:236)
at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:117)
at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:185)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=78531: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hadoop1-private.sjc03.infra.ripple.com,60020,1501335437100, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
... 3 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /0.0.0.0:60020, too many items queued ?
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1268)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more

Region server logs every once in a while get:

2017-07-29 19:35:44,515 ERROR org.apache.hadoop.hbase.replication.regionserver.ReplicationSink: Unable to accept edit because:
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 actions: RemoteWithExtrasException: 3 times,
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:258)
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2000(AsyncProcess.java:238)
at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1682)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:982)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:996)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:256)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:163)
at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:198)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1820)
at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22253)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)

I also see the following errors in the datanode logs:

2017-07-29 14:45:39,705 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hadoop5-private.sjc03.infra.ripple.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.160.22.70:54803 dst: /10.160.22.113:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:500)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:896)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:808)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
at java.lang.Thread.run(Thread.java:748)

Performing the following:

netstat -nap | grep CLOSE_WAIT -c
68699

Digging around, I think I am facing https://issues.apache.org/jira/browse/HBASE-9393. Checking the release notes, it doesn't look like this patch was backported. Will this be scheduled?
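A small hedged refinement of the netstat check above: grouping the CLOSE_WAIT sockets by owning process shows whether the leak sits in the region server, as HBASE-9393 would imply, or elsewhere:

netstat -nap | grep CLOSE_WAIT | awk '{print $7}' | sort | uniq -c | sort -rn | head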
04-05-2017
06:57 PM
Fixed. Romain posted on the Hue user group that I had a conflict in Python libraries, and indeed I did. I had Impyla and Airflow installed as part of the main lib/distro instead of a controlled "virtualized" environment; this caused the conflict. Unfortunately, neither Impyla nor Airflow could be uninstalled with a simple pip uninstall command. I had to manually compare directories against my staging environment and remove several directories that could potentially conflict. These include:

hive_metastore
hive_thrift_py
pyhive
hive_serde
impala
PyHive
TCLIService
hive_service
impyla
thrift

Once the directories were removed from /usr/local/lib/python2.7/dist-packages and Hue was restarted, it was querying again. Many kudos to Romain.
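A hedged sketch of the cleanup described above (directory names are taken from the post itself; double-check each one before deleting, since this permanently removes the packages system-wide):

cd /usr/local/lib/python2.7/dist-packages
# remove the conflicting packages that pip could not uninstall cleanly
rm -rf hive_metastore hive_thrift_py pyhive hive_serde impala PyHive TCLIService hive_service impyla thrift
# then restart Hue (via Cloudera Manager on a managed cluster)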
04-03-2017
06:39 AM
I had Hive and Impala working in Hue before. Versions: Hue 3.7, Cloudera Manager 5.4.11. I recently restarted Hue, and Hue no longer sees Impala or Hive. Configuration files are located in /run/cloudera-scm-agent/process/7850-hue-HUE_SERVER.

Potential misconfiguration detected. Fix and restart Hue.
Hive Editor: The application won't work without a running HiveServer2.
Impala Editor: No available Impalad to send queries to.

When I run a query I get: cannot import name TGetLogReq. The error log shows the following:

[03/Apr/2017 00:16:53 -0700] wsgiserver ERROR WSGI (<WorkerThread(CP WSGIServer Thread-19, started 140018618201856)>) error: [('SSL routines', 'SSL23_WRITE', 'ssl handshake failure')]
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1304, in run
conn.communicate()
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1214, in communicate
req.simple_response("408 Request Timeout")
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 618, in simple_response
self.wfile.sendall("".join(buf))
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1139, in sendall
return self._safe_call(False, super(SSL_fileobject, self).sendall, *args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1091, in _safe_call
return call(*args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 753, in sendall
bytes_sent = self.send(data)
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1142, in send
return self._safe_call(False, super(SSL_fileobject, self).send, *args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py", line 1121, in _safe_call
raise FatalSSLAlert(*e.args)
FatalSSLAlert: [('SSL routines', 'SSL23_WRITE', 'ssl handshake failure')]
03-30-2017
06:18 PM
Additional information: if at all possible, I would like to use only the users in Hue, and not create the users on the local server or activate LDAP, unless absolutely required.
03-30-2017
05:03 PM
1 Kudo
Do you mind expanding on the answer a bit? I have the same problem. Is the group setting in CM, on HDFS, or on the OS file system?
03-30-2017
04:15 PM
So after 5 days of following https://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_sentry_service.html and http://gethue.com/apache-sentry-made-easy-with-the-new-hue-security-app/#howto, testing with and without policy files and with HDFS ownership, and finally after following http://www.yourtechchick.com/hadoop/no-databases-available-permissions-missing-error-hive-sentry/ I can create roles in Hue. I can see them and manage them, and I see the Sentry logs updating something. Admin users can see, query, and work with everything, and error messages are no longer coming in on Hue; small victories. After applying roles to groups to limit access to certain databases (the only thing I needed Sentry to do), the users belonging to the limited set cannot see any database, and Hive is throwing the following:

2017-03-30 22:48:09,619 ERROR org.apache.hadoop.hive.ql.Driver: FAILED: SemanticException No valid privileges
Required privileges for this query: Server=server1->Db=*->Table=+->action=insert;Server=server1->Db=*->Table=+->action=select;
org.apache.hadoop.hive.ql.parse.SemanticException: No valid privileges
Required privileges for this query: Server=server1->Db=*->Table=+->action=insert;Server=server1->Db=*->Table=+->action=select;
at org.apache.sentry.binding.hive.HiveAuthzBindingHook.postAnalyze(HiveAuthzBindingHook.java:356)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:436)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:306)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1120)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1113)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:99)
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:170)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:398)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatement(HiveSessionImpl.java:379)
at org.apache.hive.service.cli.CLIService.executeStatement(CLIService.java:245)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:487)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.AuthorizationException: User vchetty does not have privileges for SWITCHDATABASE
at org.apache.sentry.binding.hive.authz.HiveAuthzBinding.authorize(HiveAuthzBinding.java:320)
at org.apache.sentry.binding.hive.HiveAuthzBindingHook.authorizeWithHiveBindings(HiveAuthzBindingHook.java:540)
at org.apache.sentry.binding.hive.HiveAuthzBindingHook.postAnalyze(HiveAuthzBindingHook.java:346)
... 20 more

Technically my goal is to have Sentry manage what users in Hue can see via Impala and Beeswax (Hive).

Versions: Cloudera Express 5.4.7, Hue 3.7.0. Sentry is installed correctly and running on the same server as Hue. LDAP and Kerberos are not enabled.

On Hive: Hive Sentry is enabled; sentry-site.xml has
<property>
  <name>sentry.hive.testing.mode</name>
  <value>true</value>
</property>
hive.server2.enable.impersonation / hive.server2.enable.doAs is off.

On Hue: Hue Sentry is enabled; user_augmentor is desktop.auth.backend.DefaultUserAugmentor; the authentication backend is desktop.auth.backend.AllowFirstUserDjangoBackend; sentry-site.xml has the same sentry.hive.testing.mode property.

The beeline CLI shows the roles, and I can show which databases a specific role can use; the user is mapped to that specific role in Hue. I haven't tried any settings on Impala; I hope it just adopts the Hive settings once enabled. Any ideas on what I may be missing? LDAP and Kerberos are not something I am willing to deal with at this time.
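A hedged sketch of the beeline checks mentioned above (group, role, and database names are placeholders): confirming that the role is granted to the user's OS group and that the role carries a privilege on the database covers the two usual causes of "No valid privileges":

SHOW ROLE GRANT GROUP analysts;
SHOW GRANT ROLE analyst_role;
GRANT SELECT ON DATABASE limited_db TO ROLE analyst_role;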
03-30-2017
01:24 PM
Two things to update:
1 - Everything under /user/hive/warehouse is set to hive (probably not the problem).
2 - The user is case-sensitive. I had Hue in the portal, CM has HUE, but Linux has hue in groups. Correcting the username in the portal to lowercase allowed me to create roles.
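A quick hedged check for the case-sensitivity trap described above (the account is assumed to be named hue): comparing the OS view of the user and group with the exact casing typed into Hue and CM makes the mismatch obvious:

id -Gn hue
getent group hue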
03-29-2017
11:01 AM
My goal is to enable Sentry on Hue only, to protect some databases via Hive and Impala, but the Impala/Hive CLIs should not be impacted. The CLI tools are isolated to an edge node and can only be directly accessed via SSH sessions by a select set of users. LDAP and Kerberos are not enabled on Hue (maybe later?).

Versions: Cloudera Express 5.4.7, Hue 3.7.0. Sentry is installed correctly and running on the same server as Hue. Hive and Impala do NOT have Sentry enabled. Admin groups and allowed connecting users: hive, impala, hue, hdfs, and 1 custom service account. All other settings are default. Hue is configured through CM: Sentry service checked, no snippet invoked, Authentication Backend = desktop.auth.backend.AllowFirstUserDjangoBackend, create_users_on_login checked, no LDAP settings, not kerberized. Synced users, so I now have the Hue user, and I promoted the account to be admin. When I try to add a policy I get the following in the Sentry log:

2017-03-29 17:37:25,030 ERROR org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor: Access denied to Hue
org.apache.sentry.provider.db.SentryAccessDeniedException: Access denied to Hue
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.list_sentry_roles_by_group(SentryPolicyStoreProcessor.java:450)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$list_sentry_roles_by_group.getResult(SentryPolicyService.java:953)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$list_sentry_roles_by_group.getResult(SentryPolicyService.java:938)
at sentry.org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at sentry.org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.sentry.provider.db.service.thrift.SentryProcessorWrapper.process(SentryProcessorWrapper.java:48)
at sentry.org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
at sentry.org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2017-03-29 17:38:56,565 ERROR org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor: Access denied to Hue
org.apache.sentry.provider.db.SentryAccessDeniedException: Access denied to Hue
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.authorize(SentryPolicyStoreProcessor.java:205)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.create_sentry_role(SentryPolicyStoreProcessor.java:215)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$create_sentry_role.getResult(SentryPolicyService.java:833)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$create_sentry_role.getResult(SentryPolicyService.java:818)
at sentry.org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at sentry.org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.sentry.provider.db.service.thrift.SentryProcessorWrapper.process(SentryProcessorWrapper.java:48)
at sentry.org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
at sentry.org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Hue error log:

ERROR could not retrieve roles
[29/Mar/2017 10:38:56 -0700] hive ERROR could not create role
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/apps/security/src/security/api/hive.py", line 156, in create_role
api.create_sentry_role(role['name'])
File "/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hue/desktop/libs/libsentry/src/libsentry/api.py", line 49, in decorator
raise e
SentryException: Access denied to Hue
06-21-2016
10:21 AM
Thanks, very helpful, but the ascii function only translates the first character, and the decode function (in Hive) conflicts with Impala's interpretation of the decode UDF as being similar to case (a poor decision IMHO, since case is much more readable than decode). Anyway, opinions aside: how do I convert a large hex value into a large number?
select ascii(unhex('130F9C6D6A2972')) largenumber
     , ascii(unhex('13')) firstpart

Both yield the value of 19, when the first one is supposed to yield 5365189082491250.
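A hedged sketch using Impala's conv() built-in (listed among the same math functions): it converts a digit string between number bases, avoiding ascii()'s first-character-only behavior:

select conv('130F9C6D6A2972', 16, 10);                  -- returns the string '5365189082491250'
select cast(conv('130F9C6D6A2972', 16, 10) as bigint);  -- as a numeric value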
06-20-2016
01:32 PM
Running in Impala, I try the following: select unhex('4B') and I get a string result of K. If I go to a hex-to-decimal converter to convert the hex value '4B' I get the number 75, which is what I expect. Is unhex not a hexadecimal-to-decimal converter? According to http://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala_math_functions.html : unhex(string a) Purpose: Returns a string of characters with ASCII values corresponding to pairs of hexadecimal digits in the argument. Return type: string. I don't even know where to begin to troubleshoot this.
06-20-2016
01:25 PM
OK, makes sense, I think. Can you point me to documentation on how to get my system to see the 5.7 parcels?
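A hedged pointer rather than official documentation: in Cloudera Manager, under Hosts > Parcels > Configuration, the Remote Parcel Repository URLs list controls which versions the parcel search can see; adding the 5.7 repository (URL pattern assumed from Cloudera's public archive) should make those parcels visible:

https://archive.cloudera.com/cdh5/parcels/5.7.0/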
06-15-2016
04:52 PM
Is there a place where I can find the release schedule for the community edition? The parcel search only allows me to go to 5.4.10, but 5.7 is already out as of this post.
04-13-2016
05:03 PM
My HBase replication has stopped on version 1.0.0-cdh5.4.8, rUnknown, Thu Oct 15 08:57:42 PDT 2015. I have 2 clusters in 2 different datacenters; one is master, the other is slave. I see the following errors in the log:

2016-04-13 22:32:50,217 WARN org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of a local or network error:
java.io.IOException: Call to hadoop2-private.sjc03.infra.ripple.com/10.160.22.99:60020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1014, waitTime=1200001, operationTimeout=1200000 expired.
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1255)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1223)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:21783)
at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:65)
at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:161)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:696)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:410)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1014, waitTime=1200001, operationTimeout=1200000 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1197)
... 7 more

Which in turn fills the queue, and I get:

2016-04-13 22:35:19,555 WARN org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of an error on the remote cluster:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.RpcServer$CallQueueTooBigException): Call queue is full on /0.0.0.0:60020, is hbase.ipc.server.max.callqueue.size too small?
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1219)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:21783)
at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:65)
at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:161)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:696)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:410)

My peers look good, and this was working until Mar 27. We did have an inadvertent outage, but I was able to restore all cluster services.

status 'replication'
version 1.0.0-cdh5.4.8
5 live servers
hadoop5-private.wdc01.infra.ripple.com:
SOURCE: PeerID=1, AgeOfLastShippedOp=1538240180, SizeOfLogQueue=2135, TimeStampsOfLastShippedOp=Sun Mar 27 04:00:42 GMT+00:00 2016, Replication Lag=1539342209
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Tue Mar 22 10:09:39 GMT+00:00 2016
hadoop2-private.wdc01.infra.ripple.com:
SOURCE: PeerID=1, AgeOfLastShippedOp=810222876, SizeOfLogQueue=1302, TimeStampsOfLastShippedOp=Mon Apr 04 14:31:37 GMT+00:00 2016, Replication Lag=810287122
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Fri Mar 25 21:20:59 GMT+00:00 2016
hadoop4-private.wdc01.infra.ripple.com:
SOURCE: PeerID=1, AgeOfLastShippedOp=602417946, SizeOfLogQueue=190, TimeStampsOfLastShippedOp=Thu Apr 07 00:06:38 GMT+00:00 2016, Replication Lag=602983605
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Mon Apr 04 14:35:56 GMT+00:00 2016
hadoop1-private.wdc01.infra.ripple.com:
SOURCE: PeerID=1, AgeOfLastShippedOp=602574285, SizeOfLogQueue=183, TimeStampsOfLastShippedOp=Thu Apr 07 00:10:29 GMT+00:00 2016, Replication Lag=602753383
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Thu Apr 07 00:10:23 GMT+00:00 2016
hadoop3-private.wdc01.infra.ripple.com:
SOURCE: PeerID=1, AgeOfLastShippedOp=602002192, SizeOfLogQueue=1148, TimeStampsOfLastShippedOp=Thu Apr 07 00:06:52 GMT+00:00 2016, Replication Lag=602971172
SINK : AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Thu Apr 07 00:06:50 GMT+00:00 2016

I can curl the quorum I set, so I don't think it's the network. What can I do to troubleshoot? I tried to run the following:

hbase org.apache.hadoop.hbase.replication.regionserver.ReplicationSyncUp 100000

and got the following response:

16/04/13 23:37:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.125.122.237:50784, server: hadoop2-private.sjc03.infra.ripple.com/10.160.22.99:2181
16/04/13 23:37:17 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop2-private.sjc03.infra.ripple.com/10.160.22.99:2181, sessionid = 0x252f1a90269f5d6, negotiated timeout = 150000
16/04/13 23:37:17 INFO regionserver.ReplicationSource: Replicating de6643f5-2a36-413e-b55f-8840b26395b1 -> 06a68811-0e50-4802-a478-d199df96bf85
16/04/13 23:37:27 INFO regionserver.ReplicationSource: Closing source 1 because: Region server is closing
16/04/13 23:37:27 WARN regionserver.ReplicationSource: Interrupted while reading edits
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
at java.util.concurrent.PriorityBlockingQueue.poll(PriorityBlockingQueue.java:553)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.getNextPath(ReplicationSource.java:489)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:308)
16/04/13 23:37:27 INFO zookeeper.ZooKeeper: Session: 0x252f1a90269f5d6 closed
16/04/13 23:37:27 INFO zookeeper.ClientCnxn: EventThread shut down
16/04/13 23:37:27 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x152f1a8ff4ef600
16/04/13 23:37:27 INFO zookeeper.ZooKeeper: Session: 0x152f1a8ff4ef600 closed
16/04/13 23:37:27 INFO zookeeper.ClientCnxn: EventThread shut down
16/04/13 23:37:31 INFO zookeeper.ZooKeeper: Session: 0x153ee0d274c3c6a closed
16/04/13 23:37:31 INFO zookeeper.ClientCnxn: EventThread shut down

I am willing to lose the queue if there is a way to flush and reset the sync process. I can do distcp of various data and manually load my tables to play catch-up, as long as I can flush the queue.
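A hedged sketch of one way to flush the queue (the peer id is taken from the status output above; the ZooKeeper quorum string is a placeholder): removing a peer discards its queued WALs, so the distcp catch-up described above would be required afterwards:

hbase shell
> disable_peer '1'
> remove_peer '1'
> add_peer '1', 'zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase'
> enable_peer '1'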
04-07-2016
10:48 AM
My cluster version is 5.4.9-1.cdh5.4.9.p0.19, community edition. My problem stems from the verbose logging that Impala lineage creates for Navigator. After 30 days, moderate usage has created over 500 million files on each node. Logrotate can't keep up to compress and collapse these files; logrotate's efforts to "manage" that directory (with that many files) eat up 50% of CPU and memory, which causes intensive swapping. The bigger problem is that I left the default setting of "Enable collection of lineage from the service's roles" on but forgot to disable it before my trial expired, so I can no longer disable it. How do I disable it, since I do not have the license for it anymore and Cloudera Manager does not allow me to disable it? The short-term solution is to rm that directory, but I am wasting resources overall.
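A hedged sketch of the short-term cleanup (the directory and file-name pattern are assumptions based on CM defaults; confirm the lineage log directory in the Impala configuration first): with this many entries, a shell glob will fail, while find streams the deletes:

# assumed default lineage log dir and file-name pattern; verify in CM before running
find /var/log/impalad/lineage -type f -name 'impala_lineage_log_*' -delete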
04-06-2016
05:36 PM
My cluster version is 5.4.9-1.cdh5.4.9.p0.19, community edition. I had "Enable collection of lineage from the service's roles" on but forgot to disable it before my trial expired, so I can no longer disable it. How do I disable it? This process is eating away at disk, and logrotate cannot keep up in purging. I don't believe I need it, since I am on the community edition and have no plans to use Navigator.
02-17-2016
04:25 PM
I am on CDH 5.4.8. I am seeing the following:

This host is in contact with Cloudera Manager. The host's Cloudera Manager Agent's software version can not be determined.

In looking at cloudera-scm-agent.log I do see the following:
[18/Feb/2016 00:15:57 +0000] 4289 MonitorDaemon-Reporter throttling_logger ERROR (9 skipped) Error sending messages to firehose: mgmt-SERVICEMONITOR-2a120f74c4dcd1ce29d389348e5952a5
Traceback (most recent call last):
File "/usr/lib/cmf/agent/src/cmf/monitor/firehose.py", line 70, in _send
self._port)
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/avro-1.6.3-py2.7.egg/avro/ipc.py", line 464, in __init__
self.conn.connect()
File "/usr/lib/python2.7/httplib.py", line 778, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 111] Connection refused
I can ping
ping -c 3 -s 1800 cloudera-manager-host:
PING hidden (hidden) 1800(1828) bytes of data.
1808 bytes from hidden: icmp_seq=1 ttl=54 time=71.4 ms
1808 bytes from hidden: icmp_seq=2 ttl=54 time=71.4 ms
1808 bytes from hidden: icmp_seq=3 ttl=54 time=71.4 ms
--- hidden ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 71.429/71.440/71.451/0.009 ms
node2 :/var/log/cloudera-scm-agent# curl cloudera-manager-host:9000
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"></meta>
<title>403 Forbidden</title>
<style type="text/css">
#powered_by {
margin-top: 20px;
border-top: 2px solid black;
font-style: italic;
}
#traceback {
color: red;
}
</style>
</head>
<body>
<h2>403 Forbidden</h2>
<p>Missing or invalid token.</p>
<pre id="traceback"></pre>
<div id="powered_by">
<span>Powered by <a href="http://www.cherrypy.org">CherryPy 3.2.2</a></span>
</div>
</body>
</html>
I have tried restarting services and redeploying configs, but Cloudera Manager still shows that it wants to deploy new configs:
... ...
@@ -1,1 +1,1 @@
1 -
1 +CDH-5.4.9-1.cdh5.4.9.p0.19
Any ideas would be appreciated
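A hedged connectivity check (assuming the default Service Monitor firehose port, 9997; verify it in the Service Monitor configuration): the traceback above is a refused TCP connection from the agent to the Service Monitor, which an ICMP ping would not catch:

nc -vz cloudera-manager-host 9997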