Member since: 12-15-2015
Posts: 66
Kudos Received: 32
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 903 | 07-22-2016 08:15 PM |
11-20-2018
10:05 PM
@julien laurenceau What do you mean by changing the SFTP server configuration? I am having the same issue.
04-24-2017
04:40 PM
Can we include more than one RegionServer in the rolling restart? If not, why is the default 1, and why does the text box let us enter more than 1?
Labels:
- Apache HBase
12-15-2016
04:56 PM
The client OS is RHEL 6.7, and our cluster is running HDP 2.4.2. The cluster is configured with an HA NameNode, and we're using Kerberos for authentication.
scala> val parquetFile = sqlContext.read.parquet("hdfs://clustername/folder/file")
Any idea what the issue is? Error:
java.lang.IllegalArgumentException: java.net.UnknownHostException: clustername
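A quick way to narrow this down is to check whether the client's Hadoop configuration actually defines the HA nameservice used in the path; a minimal diagnostic sketch, assuming the nameservice is named clustername as in the URI above:

```bash
# Hedged diagnostic sketch: an UnknownHostException on the nameservice usually
# means the client's hdfs-site.xml lacks the HA entries for it. These commands
# only read configuration; "clustername" is taken from the URI above.
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.clustername
hdfs getconf -confKey dfs.client.failover.proxy.provider.clustername
```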
Labels:
- Apache Spark
07-22-2016
08:17 PM
I disabled the HBase plugin, and Ambari prompted with the above classes and restarted; it didn't work. Please see below the steps I did. Thanks, Yadav, for your reply.
07-22-2016
08:15 PM
Looks like after you disable the HBase plugin for Ranger, the owner of /apps/hbase/data in HDFS is not changed back to hbase, so ACL fallback does not work. I changed the owner to hbase and it worked.
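For reference, a minimal sketch of the ownership change described above, assuming the HBase root directory shown in the error in the post below (run it as the HDFS superuser and adjust the path/group to your layout):

```bash
# Hedged sketch (not an official procedure): give the hbase user back ownership
# of its HDFS data root so ACL fallback works after disabling the Ranger plugin.
sudo -u hdfs hdfs dfs -chown -R hbase:hdfs /apps/hbase/data
sudo -u hdfs hdfs dfs -ls /apps/hbase          # verify the new owner
```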
07-22-2016
07:58 PM
We disabled the HBase plugin for Ranger and restarted HBase, but the HBase Master failed to restart. We tried the solution mentioned in one of the posts to restart HDFS, YARN, and HBase... nothing works :( Error message:
2016-07-22 15:36:58,585 FATAL [RBCDHADA112:60000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=READ, inode="/apps/hbase/data/data/hbase/meta/.tabledesc/.tableinfo.0000000002":hdfs:hdfs:-rw-r-----
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1729)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1823)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1792)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1705)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:588)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(Cl
Labels:
- Apache HBase
- Apache Ranger
06-21-2016
04:17 PM
HDP 2.3.2
06-16-2016
03:36 PM
Screenshot attached.
06-16-2016
01:32 PM
Can someone please help me with this error? Please see the attachment; the job runs in the default queue, and the queue looks fine in terms of other jobs. hive-error.png
Labels:
- Apache Hive
04-11-2016
05:53 PM
Which is the fastest way of copying files?
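One common answer (not from this thread) is DistCp, which copies in parallel using MapReduce; a minimal sketch with hypothetical paths:

```bash
# Hedged sketch: parallel copy with DistCp; source and destination paths are
# placeholders, and -m controls the number of parallel map tasks.
hadoop distcp -m 20 hdfs://source-nn:8020/data/src hdfs://dest-nn:8020/data/dst
# add -update to copy only files that changed since the last run
```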
Labels:
- Apache Hadoop
04-01-2016
01:50 AM
@Josh Elser Is this related to bug https://issues.apache.org/jira/browse/ACCUMULO-4069? This is pulled from another environment where we have the same issue. It looks like the master was unable to receive a tablet status report from the TServer three times; before that, it fails to find any Kerberos ticket on the TServer:
2016-03-29 22:48:53,052 [tserver.TabletServer] [server.TThreadPoolServer] ERROR: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
...skipping...
2016-03-30 21:56:49,881 [tserver.TabletServer] INFO : Master requested tablet server halt
From Master server: unable to get tablet server status XXXYYYY XXX.com:9997[352d68b0c3801b6] org.apache.thrift.transport.TTransportException: GSS initiate failed
2016-03-30 21:56:17,937 [master.Master] ERROR: master:XXXYYYY.XXX.com unable to get tablet server status
From Monitor log: XXXYYYY1213.fg.XXX.com:9997[152d68b041401b8] org.apache.thrift.transport.TTransportException: GSS initiate failed
2016-03-30 21:56:17,938 [master.Master] ERROR: master:XXXYYYY1 unable to get tablet server status
2016-03-30 21:56:47,403 [transport.TSaslTransport] ERROR: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.accumulo.core.rpc.UGIAssumingTransport$1.run(UGIAssumingTransport.java:53)
at org.apache.accumulo.core.rpc.UGIAssumingTransport$1.run(UGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.accumulo.core.rpc.UGIAssumingTransport.open(UGIAssumingTransport.java:49)
at org.apache.accumulo.core.rpc.ThriftUtil.createClientTransport(ThriftUtil.java:298)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.createNewTransport(ThriftTransportPool.java:478)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.getTransport(ThriftTransportPool.java:410)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.getTransport(ThriftTransportPool.java:388)
at org.apache.accumulo.core.rpc.ThriftUtil.getClient(ThriftUtil.java:135)
at org.apache.accumulo.core.rpc.ThriftUtil.getClientNoTimeout(ThriftUtil.java:102)
at org.apache.accumulo.core.client.impl.MasterClient.getConnection(MasterClient.java:69)
at org.apache.accumulo.monitor.Monitor.fetchData(Monitor.java:252)
at org.apache.accumulo.monitor.Monitor$1.run(Monitor.java:486)
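Since the failure is "Failed to find any Kerberos tgt", a hedged first check on the affected TServer host is whether the Accumulo service principal can still obtain a ticket from its keytab (the principal and keytab path below are assumptions, not values from this cluster):

```bash
# Hedged diagnostic sketch: keytab path and principal follow typical HDP
# defaults, not values from this cluster; adjust to your environment.
sudo -u accumulo kinit -kt /etc/security/keytabs/accumulo.service.keytab \
  accumulo/$(hostname -f)
sudo -u accumulo klist   # confirm a valid TGT is present and not expired
```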
03-31-2016
06:57 PM
@Josh Elser 22 days back, the errors below got logged on all TServers, and after 22 days all the TServers went down.
ERROR: Lost tablet server lock (reason = LOCK_DELETED), exiting
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:190)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 11 more
2016-03-09 20:35:36,971 [tserver.TabletServer] INFO : Master requested tablet server halt
03-31-2016
06:29 PM
@Josh Elser @Artem Ervits Is there any timeline for when this bug https://issues.apache.org/jira/browse/ACCUMULO-4059 will be fixed? We are also seeing the same error (TServers crashing often):
[server.TThreadPoolServer] ERROR: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:190)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 11 more
03-11-2016
07:19 PM
1 Kudo
Thanks @Andrew Watson @andrew watson @Vperiasamy. Do we know in which version of HDP saving Ranger audit to DB is going to be unsupported? https://community.hortonworks.com/questions/2202/ranger-audit-options-is-db-audit-still-supported-i.html
03-11-2016
03:25 PM
2 Kudos
What are the differences, pros, and cons of Ranger audit to DB vs. Ranger audit to HDFS?
Labels:
- Apache Ranger
03-09-2016
01:59 AM
1 Kudo
Thanks, Artem.
03-09-2016
01:41 AM
3 Kudos
Can you please let me know whether there is any size limit on what is sent to trash after a delete? Please advise. We are trying to delete a file of around 6 GB, but it is not getting deleted.
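For context, HDFS trash is bounded by time (fs.trash.interval) rather than by file size, so a 6 GB file should not be rejected on size grounds; a hedged sketch of the usual commands, with a placeholder path:

```bash
# Hedged sketch: /data/bigfile is a placeholder path.
hdfs dfs -rm -r /data/bigfile                # moves the file into the user's .Trash
hdfs dfs -rm -r -skipTrash /data/bigfile     # deletes immediately, bypassing trash
hdfs dfs -du -s -h /user/$USER/.Trash        # see how much space trash currently holds
```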
Labels:
- Apache Hadoop
03-07-2016
07:21 PM
@Neeraj Sabharwal @Jonas Straub @vperiasamy The Ranger UI configuration input box for fs.defaultFS defaults to the “clustername”, but the documentation specifies entering the “NameNode”. (Our current assumption is that “NameNode” is the correct parameter; it works when configured according to the Hortonworks HDP 2.3.2 doc, Chapter 9: “Special Requirements for High Availability Environments”.) Question: is there a solution or fix that would allow the Ranger HDFS repository configuration to use fs.defaultFS instead of the active NameNode?
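To make the distinction concrete, a hedged sketch of the two values involved: fs.defaultFS resolves to the HA nameservice, while the individual NameNode RPC addresses are what the doc's HA instructions refer to (the nameservice name and NameNode IDs below are placeholders):

```bash
# Hedged sketch; "clustername", "nn1", and "nn2" are placeholders for your
# dfs.nameservices value and NameNode IDs.
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.namenode.rpc-address.clustername.nn1
hdfs getconf -confKey dfs.namenode.rpc-address.clustername.nn2
```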
03-06-2016
10:47 PM
Thanks @Jonas Straub @bdurai for your comments. For user.name, what value should I pass? Is it the Kerberos principal of the user who is executing the command? This is what I got after I ran the above command:
~> curl --negotiate -u : -X GET 'http://WebhcatserverDNS:50111/templeton/v1/hive?user.name=ekoifman'
{"error":null}
03-03-2016
06:03 PM
1 Kudo
How can I run curl with a Hive command in a secured cluster?
curl -s -d execute="select+*+from+pokes;" \
-d statusdir="pokes.output" \
'http://localhost:50111/templeton/v1/hive?user.name=ekoifman'
For user.name, I tried to pass the Hive principal and keytab, which didn't work, and I also tried the user principal, which didn't work. Can you please provide an example?
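For what it's worth, a hedged sketch of how this is usually run against a Kerberized WebHCat: authenticate as the end user with kinit, let curl do SPNEGO with --negotiate, and pass that same short user name in user.name (the user name and host below are placeholders):

```bash
# Hedged sketch: "myuser" and the WebHCat host are placeholders; user.name is
# the short name of the already-authenticated end user, not a principal/keytab.
kinit myuser
curl --negotiate -u : -s -d execute="select+*+from+pokes;" \
  -d statusdir="pokes.output" \
  'http://webhcat-host:50111/templeton/v1/hive?user.name=myuser'
```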
Labels:
- Apache Hive
03-02-2016
02:48 PM
Where should I specify the time interval for refreshing users from the Unix source into Ranger?
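A hedged pointer, from memory rather than from this thread: on HDP the Unix usersync interval is controlled by a sleep-time property in the usersync configuration (ranger.usersync.sleeptimeinmillisbetweensynccycle in ranger-ugsync-site on Ranger 0.5+; the property name and file path below are my assumption, so verify in Ambari under the Ranger configs):

```bash
# Hedged sketch: file path and property name are assumptions; confirm them in
# Ambari > Ranger > Configs before relying on this.
grep -i sleeptime /etc/ranger/usersync/conf/ranger-ugsync-site.xml
```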
Labels:
- Apache Ranger
02-27-2016
04:18 PM
1 Kudo
@Xi Sanderson @Artem Ervits Thanks for sharing this useful information. How can I download the patch from the JIRA and install it, rather than applying the changes manually? This is the first time I am installing an Ambari patch 😞
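In case it helps, a generic, hedged sketch of pulling a patch file from a JIRA attachment and applying it on top of the installed files (the attachment URL, target directory, and strip level are placeholders; the JIRA's own instructions take precedence):

```bash
# Hedged, generic sketch: ATTACHMENT_ID, the target directory, and -p1 are placeholders.
wget -O /tmp/fix.patch https://issues.apache.org/jira/secure/attachment/ATTACHMENT_ID/fix.patch
cd /usr/lib/ambari-server                  # assumption: wherever the affected files live
patch -p1 --dry-run < /tmp/fix.patch       # preview the changes first
patch -p1 < /tmp/fix.patch
```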
02-23-2016
05:10 PM
@xingxing di @bhagan @Neeraj Sabharwal Can you please post step-by-step instructions for starting Kylin on HDP 2.3.2, along with the prerequisites? Thanks in advance.
01-29-2016
07:34 PM
@Neeraj Sabharwal Thank you for your response. But what if we need to allow users to view the logs (read-only) through Ambari only, without even edge-node access? Just access to this URL: http://domainame:50070/logs/
01-29-2016
06:33 PM
1 Kudo
How can I restrict access to the NameNode logs to particular users/groups?
Labels:
- Apache Hadoop
01-28-2016
09:02 PM
Instead of all users, can we restrict this to a confined set of users, with the list pulled from the KDC?