Member since: 10-12-2016
Posts: 37
Kudos Received: 3
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 982 | 03-13-2017 06:16 PM |
08-13-2018 02:02 PM
@subash sharma How did you solve this?
05-10-2018 06:51 PM
@Olivér Szabó It's exactly the same error again:
Exception in thread "main" org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hive
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
at org.apache.solr.common.cloud.SolrZkClient.printLayout(SolrZkClient.java:583)
at org.apache.solr.common.cloud.SolrZkClient.printLayout(SolrZkClient.java:608)
at org.apache.solr.common.cloud.SolrZkClient.printLayoutToStdOut(SolrZkClient.java:624)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:244)
05-10-2018 04:30 PM
I was using the FQDN; I just replaced it with localhost because I didn't want to post the hostname here. I have tried both the FQDN and localhost.
05-10-2018 03:39 PM
1 Kudo
If not a keytab, you should have Hive's truststore and pass it along so Knox can connect to Hive. So if you are making a DB connection, the URL should look something like knoxhost.com:8443;ssl=true;sslTrustStore=/path/to/keystore.jks;trustStorePassword=passwordofkeystore;transportMode=http;httpPath=hive/gateway/path
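For illustration, a minimal JDBC sketch of such a connection in Java, assuming the Hive JDBC driver is on the classpath; the host, gateway path, credentials, and truststore values are placeholders carried over from the example URL above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KnoxHiveConnect {
    public static void main(String[] args) throws Exception {
        // All host, path, and password values below are placeholders; substitute your own.
        String url = "jdbc:hive2://knoxhost.com:8443/;ssl=true;"
                + "sslTrustStore=/path/to/keystore.jks;trustStorePassword=passwordofkeystore;"
                + "transportMode=http;httpPath=hive/gateway/path";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```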
05-10-2018 02:41 PM
Can you try using the hostname instead of the IP in the HDFS path? Also make sure the user has permissions on that HDFS folder to create new folders and write data to them.
05-10-2018 02:21 PM
Trying to access ZooKeeper from the client throws an auth error. Executing ./zkcli.sh -zkhost localhost:2181 -cmd list from the folder /usr/lib/ambari-infra-solr/server/scripts/cloud-scripts gives the exception:
Exception in thread "main" org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hive
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
at org.apache.solr.common.cloud.SolrZkClient.printLayout(SolrZkClient.java:583)
at org.apache.solr.common.cloud.SolrZkClient.printLayout(SolrZkClient.java:608)
at org.apache.solr.common.cloud.SolrZkClient.printLayoutToStdOut(SolrZkClient.java:624)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:244)
The Kerberos ticket is initialized using kinit and has the permissions to access ZK. Any pointers? Can someone let me know what auth is needed here?
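For reference, a minimal sketch of what a SASL-authenticated ZooKeeper client looks like in Java, assuming a client JAAS config exists; the JAAS path and connect string below are placeholders:

```java
import org.apache.zookeeper.ZooKeeper;

public class ZkAuthCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder path: point this at a JAAS file with a Client section
        // that uses the Kerberos login module and your ticket cache or keytab.
        System.setProperty("java.security.auth.login.config",
                "/etc/zookeeper/conf/zookeeper_client_jaas.conf");
        // Placeholder connect string; use the actual ZooKeeper quorum.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        // Reading /hive succeeds only if the SASL identity passes its ACLs.
        byte[] data = zk.getData("/hive", false, null);
        System.out.println(data == null ? "(no data)" : new String(data));
        zk.close();
    }
}
```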
03-27-2018 06:06 PM
Do you have an updated version of this program with the latest libraries, and one that works against a secured HBase?
02-02-2018 01:13 PM
If your VM's drive can deliver the required throughput, it should be fine.
01-26-2018 06:22 PM
If you are using files or directories starting with . or _, they are considered hidden in HDFS. Check from the Hive console whether you can see the path properly for the table that got created.
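As an illustration (a hedged sketch; the warehouse path is a placeholder), this is the hidden-name convention that Hadoop's default input-format filter applies when listing a table directory:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListTableFiles {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Placeholder table location; substitute the path Hive reports for your table.
        for (FileStatus status : fs.listStatus(new Path("/apps/hive/warehouse/mytable"))) {
            String name = status.getPath().getName();
            // Names starting with "." or "_" are skipped by the default hidden-file filter.
            boolean hidden = name.startsWith(".") || name.startsWith("_");
            System.out.println((hidden ? "[hidden]  " : "[visible] ") + name);
        }
    }
}
```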
01-24-2018 08:36 PM
Since it is trying to overwrite the file again, it could be stale or corrupt. Clean up the file and try again.
01-24-2018 02:08 PM
Check that the URL http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7 is accessible and that you have write permissions to /etc/yum.repos.d/ambari-hdp-52.repo.
01-16-2018 01:20 PM
OK, when you said "yum install command it works as expected", I assumed the proxy was working. Without that, even yum wouldn't work directly.
01-13-2018 12:54 AM
The process hangs randomly, which I too have observed. Did you try restarting ambari-server?
01-13-2018 12:52 AM
Check the permissions on /var/lib/ambari-agent/cache/common-services/NIFI/ and make sure the nifi user has execute permissions.
11-29-2017 02:07 PM
1 Kudo
Maybe the UNIX ulimit parameter needs to be set accordingly. Set it to unlimited or to a desired number: ulimit -u unlimited
11-16-2017 03:33 PM
Verify the spacing, format, and characters in the cron entry; maybe some special characters are spoiling the format.
11-15-2017 06:25 PM
It also depends on how much data is being processed by each job and how intensive your processing or transformation is. Going by an average system, you could look at 128 GB.
11-14-2017 06:56 PM
It totally depends on the data. Though Hortonworks has recommendations, finally it comes down to how much data you have and how many jobs are running.
11-14-2017 06:53 PM
Which user are you running the shell as? Are you able to write a file to that dir? If yes, then it could be that the tmp cache is getting cleared and the process is getting killed abnormally.
11-14-2017 05:14 PM
Check that the user running the job has write permissions. Make sure your tmp dir is not being cleaned up while the job runs and is not conflicting with another job.
11-14-2017 04:20 PM
https://nifi.apache.org/docs/nifi-docs/rest-api/index.html
11-14-2017 03:41 PM
Not an answer, but wouldn't that be a security issue where the local HDFS gets used up? What is the use case for it? WebHDFS is a REST-based app; you can try exploring its API.
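To illustrate WebHDFS being plain REST (a hedged sketch; the NameNode host, port, and path below are placeholders), a directory listing is just an HTTP GET:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebHdfsList {
    public static void main(String[] args) throws Exception {
        // Placeholder NameNode host/port and path; adjust for your cluster.
        URI uri = URI.create("http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS");
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        // The response body is JSON describing the directory contents.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```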
11-14-2017 03:38 PM
@Michael DeGuzis Any pointers?
11-08-2017 04:50 PM
You can load the file into Spark -> apply filters -> write the resulting DataFrame to Avro.
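A minimal sketch of that pipeline in Java, assuming the spark-avro module is on the classpath; the input path, filter column, and output path are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class FilterToAvro {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("FilterToAvro").getOrCreate();
        // Placeholder input path and format; adjust to your source file.
        Dataset<Row> df = spark.read().option("header", "true").csv("/data/input.csv");
        // Placeholder filter; keep only the rows you want in the output.
        Dataset<Row> filtered = df.filter(col("status").equalTo("ACTIVE"));
        // Needs the spark-avro module on the classpath ("avro" format since Spark 2.4).
        filtered.write().format("avro").save("/data/output-avro");
        spark.stop();
    }
}
```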
03-14-2017 01:29 PM
During what operation? Share what you tried to do, what you have done, and the entire transcript log. Just a line out of nowhere will not help in any way.
03-13-2017 07:06 PM
As I mentioned, --schema is not supported by import-all-tables. It looks like it takes the DB name as the schema name by default. Alternatively, if you have write access to DB2, create a schema with the same name and create aliases/synonyms for all tables in that schema. That should work.
03-13-2017 07:01 PM
That is something related to DB2 that you need to resolve: -206 object-name IS NOT VALID IN THE CONTEXT WHERE IT IS USED (https://www.ibm.com/support/knowledgecenter/en/SSEPEK_10.0.0/codes/src/tpc/n206.html). To isolate the problem, just import that single table and see how it goes.
03-13-2017 06:49 PM
Can you check what the contents of PROJECT_ID are? It looks like the issue is with that column. What type of field is it?