Member since: 05-02-2017
Posts: 88
Kudos Received: 173
Solutions: 15
My Accepted Solutions
Views | Posted
---|---
7003 | 09-27-2017 04:21 PM
3176 | 08-17-2017 06:20 PM
2879 | 08-17-2017 05:18 PM
3333 | 08-11-2017 04:12 PM
4723 | 08-08-2017 12:43 AM
07-21-2017
11:55 AM
@Jay SenSharma I even tried setting this to 300, but no luck. I will set the ambari-agent to debug mode and check the stack trace.
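For reference, a minimal sketch of turning on agent debug logging (this assumes the standard config path /etc/ambari-agent/conf/ambari-agent.ini and an existing loglevel entry under its [agent] section; verify both on your version):
# sed -i 's/^loglevel=.*/loglevel=DEBUG/' /etc/ambari-agent/conf/ambari-agent.ini
# ambari-agent restart
# tail -f /var/log/ambari-agent/ambari-agent.log
The tail shows the registration attempt as it happens, which is where the stack should appear.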
07-21-2017
01:12 AM
2 Kudos
Getting the below trace in ambari-agent.log:
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 165, in registerWithServer
ret = self.sendRequest(self.registerUrl, data)
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 496, in sendRequest
raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://lntpmn01.snapdot.com:8441/agent/v1/register/lntpdn03.snapdot.com failed due to Error occured during connecting to the server: ('The read operation timed out',)
ERROR 2017-07-19 16:10:19,383 Controller.py:213 - Error:Request to https://lntpmn01.snapdot.com:8441/agent/v1/register/lntpdn03.snapdot.com failed due to Error occured during connecting to the server: ('The read operation timed out',)

I have tried increasing the timeout in the security.py script to 180, but still no luck. This is an SSL-enabled Ambari setup, there is no firewall on any of the nodes, and the nodes can ping each other.

# telnet <ambari-server> 8441
successful !!!
# openssl s_client -connect <ambari-server>:8441
successful !!!
Please help me out.
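One more check that may help narrow this down: the telnet and openssl tests only prove the TCP and TLS layers, so it can be worth timing a full HTTPS request against the same port (a minimal sketch; the host is taken from the log above, and curl's -k simply skips certificate verification for the test):
# curl -k -sS -m 30 -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' https://lntpmn01.snapdot.com:8441/
If this also stalls past the agent's timeout, the delay is likely on the server side rather than in the agent's SSL handling.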
Labels:
- Apache Ambari
07-12-2017
08:55 PM
3 Kudos
There is a JIRA for HBase bulk load cross-cluster replication: https://issues.apache.org/jira/browse/HBASE-13153 It mentions that this is fixed in HBase 1.3.0 and 2.0.0. The HDP 2.5.0 release notes, however, state that this JIRA is fixed in the HBase 1.1.2 that ships with it: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/new_features.html Could someone please point me to the documentation for HBase bulk load cross-cluster replication so I can implement it in my cluster?
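For anyone who lands here later, the properties HBASE-13153 introduces are listed in the JIRA itself; a rough sketch of what the source-cluster side looks like (whether the backported HBase 1.1.2 in HDP 2.5.0 honours these names is exactly what I am trying to confirm, so treat them as assumptions):
Add to hbase-site.xml (via Ambari) on the source cluster:
hbase.replication.bulkload.enabled = true
hbase.replication.cluster.id = <a unique id for the source cluster>
Then confirm the values reached the RegionServers:
# grep -A1 'hbase.replication.bulkload' /etc/hbase/conf/hbase-site.xml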
07-11-2017
07:08 PM
3 Kudos
@arjun more You can try the following on the "xxx" host:

# ambari-agent stop
# yum remove ambari-agent -y
# rm -rf /etc/ambari-agent/*
# rm -rf /var/lib/ambari-agent/*
# rm -f /usr/sbin/ambari-agent
# rm -rf /usr/lib/python2.6/site-packages/ambari*
# rm -rf /usr/lib/python2.6/site-packages/resource_management
And now install the ambari-agent:
# wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
# yum install -y ambari-agent
# ambari-agent start

This might solve your problem.
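If the freshly installed agent does not register, it may also need to be pointed back at the Ambari server before starting; a minimal sketch (the config path and the [server] hostname key are the standard agent settings, and the FQDN placeholder is yours to fill in):
# sed -i 's/^hostname=.*/hostname=<ambari-server-fqdn>/' /etc/ambari-agent/conf/ambari-agent.ini
# ambari-agent restart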
06-28-2017
07:33 PM
4 Kudos
On loading Hive parquet data in Pig using HCatalog, I am facing an issue: Internal error. org.xerial.snappy.SnappyNative.uncompressedLength. My script is:

abc_flow = LOAD 'TEST.abc_flow' using org.apache.hive.hcatalog.pig.HCatLoader();
table1 = filter abc_flow by year in ('2011') and month in ('2') and day in ('1');
table10 = limit table1 10;
dump table10;
I get the following error logs:

ERROR 2998: Unhandled internal error. org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
at org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
at org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:561)
at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)
at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:204)
at org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:591)
at org.apache.parquet.column.impl.ColumnReaderImpl.access$300(ColumnReaderImpl.java:60)
at org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:540)
at org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:537)
at org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:96)
at org.apache.parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:537)
at org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:529)
at org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:641)
at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:357)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:82)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:77)
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:270)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:135)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:101)
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:154)
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:101)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:140)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
at org.apache.parquet.pig.ParquetLoader.getNext(ParquetLoader.java:230)
at org.apache.pig.impl.io.ReadToEndLoader.getNextHelper(ReadToEndLoader.java:251)
at org.apache.pig.impl.io.ReadToEndLoader.getNext(ReadToEndLoader.java:231)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLoad.getNextTuple(POLoad.java:137)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLimit.getNextTuple(POLimit.java:122)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POStore.getNextTuple(POStore.java:159)
at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.runPipeline(FetchLauncher.java:157)
at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.launchPig(FetchLauncher.java:81)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:302)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1431)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1416)
at org.apache.pig.PigServer.storeEx(PigServer.java:1075)
at org.apache.pig.PigServer.store(PigServer.java:1038)
at org.apache.pig.PigServer.openIterator(PigServer.java:951)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:631)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
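From the trace, the dump runs through Pig's local fetch path, so the snappy-java native library has to load inside the grunt JVM itself. A hedged sketch of two things worth trying before launching grunt (the noexec-/tmp theory and the HDP native-library path are assumptions about this cluster, not a confirmed diagnosis):
# export PIG_OPTS="$PIG_OPTS -Dorg.xerial.snappy.tempdir=/var/tmp"
# export PIG_OPTS="$PIG_OPTS -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native"
# pig -useHCatalog
The first property moves snappy-java's native library extraction off /tmp in case it is mounted noexec; the second makes the Hadoop native libraries visible to the local JVM.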
Labels:
- Apache HCatalog
- Apache Hive
- Apache Pig
06-13-2017
06:38 AM
4 Kudos
@Deven Chauhan You can follow the instructions in this article: https://community.hortonworks.com/articles/107398/how-to-change-ranger-admin-usersync-tagsync-log-di.html This should help you solve your issue.
06-13-2017
02:38 AM
7 Kudos
ISSUE:

/var/log/ambari-server/ambari-server.log shows:

18 May 2017 07:56:33,754 WARN [ambari-client-thread-26] PermissionHelper:78 - Error occurred when cluster or view is searched based on resource id
java.lang.NullPointerException
at org.apache.ambari.server.security.authorization.PermissionHelper.getPermissionLabels(PermissionHelper.java:74)
18 May 2017 07:56:33,975 ERROR [ambari-client-thread-26] ContainerResponse:419 - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
com.google.common.cache.CacheLoader$InvalidCacheLoadException: CacheLoader returned null for key 58.
at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2348)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2318)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
18 May 2017 07:56:33,977 WARN [ambari-client-thread-26] ServletHandler:561 - Error Processing URI: /api/v1/users/admin - (com.google.common.cache.CacheLoader$InvalidCacheLoadException) CacheLoader returned null for key 58.

ROOT CAUSE:

The Zeppelin View was removed in Ambari 2.5, but it looks like a reference to it stays behind in the Ambari database.

SOLUTION:

Check this by running the following queries on the Ambari DB:

# psql -U ambari ambari
Password for user ambari: bigdata
Here "psql" is the command, "-U ambari" is the username, and the final "ambari" is the DB name. It will ask for a password; try Ambari's default password "bigdata". If the password has been changed, we can look it up:

# grep 'server.jdbc.user.passwd' /etc/ambari-server/conf/ambari.properties
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
# cat /etc/ambari-server/conf/password.dat
bigdata

Now get the "resource_id" of the Zeppelin View, if present (in my case it is 58):

# select * from adminresourcetype;
# select * from adminresourcetype where resource_type_id IN (select resource_type_id from adminresource where resource_id = 58);

If the Zeppelin View reference is indeed present, take an Ambari DB dump (a sketch of that is shown after the restart step below) and then run the following commands to clean up the reference:

# psql -U ambari ambari
Password for user ambari: bigdata
# DELETE FROM adminprivilege where resource_id in(58);
# DELETE FROM adminresource where resource_id in(58);
Then restart the Ambari Server:

# ambari-server restart
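As mentioned above, back up the Ambari database before running the DELETE statements. A minimal sketch for a PostgreSQL-backed Ambari (the dump file name and location are arbitrary choices):
# pg_dump -U ambari ambari > /tmp/ambari_db_backup.sql
Keep the dump somewhere safe until the cleanup has been verified.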
06-13-2017
02:18 AM
7 Kudos
Change Log directory for Ranger Admin logs

On Ranger Admin nodes:

ADMIN LOGS (change the symbolic link in the ews directory):

mkdir /opt/ranger/admin
chmod -R 775 /opt/ranger/admin
chown -R ranger:ranger /opt/ranger/admin
cd /usr/hdp/current/ranger-admin/ews
unlink logs
ln -s /opt/ranger/admin logs

Change log directory for USERSYNC LOGS

On Ranger Usersync node:

mkdir /opt/ranger/usersync
chmod -R 775 /opt/ranger/usersync
chown -R ranger:ranger /opt/ranger/usersync
cd /usr/hdp/current/ranger-usersync/
unlink logs
ln -s /opt/ranger/usersync logs
cd /usr/hdp/current/ranger-usersync/
cp ranger-usersync-services.sh{,.backup01122015}
vim ranger-usersync-services.sh
#logdir=/var/log/ranger/usersync    (comment out the existing logdir line if it is present)
logdir=/opt/ranger/usersync

Change log directory for TAGSYNC LOGS

On Ranger Tagsync nodes:

mkdir /opt/ranger/tagsync
chmod -R 775 /opt/ranger/tagsync
chown -R ranger:ranger /opt/ranger/tagsync
cd /usr/hdp/current/ranger-tagsync/
unlink logs
ln -s /opt/ranger/tagsync logs
cp ranger-tagsync-services.sh{,.backup01122015}
vim ranger-tagsync-services.sh
#logdir=/var/log/ranger/tagsync    (comment out the existing logdir line if it is present)
logdir=/opt/ranger/tagsync

From the Ambari Web UI, change the log directory for Ranger Admin, Ranger Usersync and Ranger Tagsync, then restart the Ranger service.

Note: If you are not able to change the directories in the Ambari UI, use the method below:
1. Go to Ranger Service -> Configs -> Advanced -> "Advanced ranger-env".
2. Right-click the value of the property "ranger_admin_log_dir" -> Inspect element -> change the value in the element section (in my case value="/opt/ranger/admin") -> hit Enter.
3. In the same tab, right-click the value of the property "ranger_usersync_log_dir" -> Inspect element -> change the value in the element section (in my case value="/opt/ranger/usersync") -> hit Enter.
4. Go to "Advanced ranger-tagsync-site" -> right-click the value of the property "ranger_tagsync_logdir" -> Inspect element -> change the value in the element section (in my case value="/opt/ranger/tagsync") -> hit Enter.
5. Click Save changes, restart the whole Ranger service, and check those directories at the backend (in my case under /opt/ranger/).
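An alternative to editing values through the browser's inspector is Ambari's bundled config script; a rough sketch, assuming the script ships at its usual Ambari 2.x location and that ranger_admin_log_dir lives in the ranger-env config type on your stack version (both assumptions to verify before relying on this):
# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p <admin-password> set <ambari-host> <cluster-name> ranger-env ranger_admin_log_dir /opt/ranger/admin
Run the equivalent command for the usersync and tagsync properties, then restart the Ranger service.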
06-06-2017
07:21 AM
3 Kudos
@white wartih I see some errors in the ambari-metrics-collector logs like:

Unable to connect to HBase store using Phoenix.
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG

Can you check the below link for these errors:
https://community.hortonworks.com/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html
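In short, that article's fix is to clear the Metrics Collector's corrupted embedded HBase data and let it recreate its schema. A rough sketch for embedded mode only (the paths below are the usual defaults; confirm them against hbase.rootdir and hbase.tmp.dir in ams-hbase-site before moving anything):
Stop Ambari Metrics from the Ambari UI, then on the collector host:
# mv /var/lib/ambari-metrics-collector/hbase /tmp/ams-hbase-backup
# mv /var/lib/ambari-metrics-collector/hbase-tmp /tmp/ams-hbase-tmp-backup
Start Ambari Metrics again from the UI and watch the collector log for the schema being recreated.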
06-06-2017
06:02 AM
@white wartih Can you attach your Metrics Collector logs?