Member since: 07-11-2017 | Posts: 44 | Kudos Received: 1 | Solutions: 0
07-18-2019
10:15 AM
Hi @Jay Kumar SenSharma, thanks for the answer. Yes, we see the same behavior in Incognito mode and in other browsers. No, we are not using any frontend browser. We are observing the following error in ambari-server.log:
ERROR [ambari-client-thread-2910804] ContainerResponse:537 - Mapped exception to response: 500 (Internal Server Error) org.apache.ambari.view.hive20.utils.ServiceFormattedException
Ambari version: 2.6.2.2. Thanks in advance!
07-18-2019
07:22 AM
@Jay Kumar SenSharma Can you please help with this? Thanks in advance.
07-18-2019
07:21 AM
The Hive 2.0 view's query page is slow to load. All service checks are green, but the query page never finishes loading and does not open.
07-17-2019
02:17 PM
We have configured group sync using LDAP for Ranger. At the OS level we have configured SSSD to sync with AD. We are able to see the AD groups in the Groups tab of Ranger, and on running id <username> we can see the groups the user is part of. But the output of hdfs groups is empty, and Ranger authorization using AD groups is not working.
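A few hedged checks that may narrow this down. HDFS resolves group membership on the NameNode host, not on the client, so `id` working on an edge node does not guarantee `hdfs groups` will work; the commands below are a diagnostic sketch (replace `<username>` with a real user):

```shell
# 1. Check which group mapping HDFS uses. ShellBasedUnixGroupsMapping
#    delegates to the OS (i.e. to SSSD); LdapGroupsMapping queries AD directly.
hdfs getconf -confKey hadoop.security.group.mapping

# 2. Group resolution happens on the NameNode host. Run this ON the
#    NameNode to verify SSSD is configured and working there too:
id <username>

# 3. Clear any stale (including negative) cached mappings, then re-test:
hdfs dfsadmin -refreshUserToGroupsMappings
hdfs groups <username>
```

If `id` fails on the NameNode host itself, SSSD is most likely not set up (or not joined to AD) on that node, which would explain both the empty `hdfs groups` output and the Ranger authorization failures.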
06-07-2019
06:11 AM
Hi @Jay Kumar SenSharma, yes, we have installed Ambari and Ranger on the same node, and we are using HDP 2.6.5 in our cluster. Now I have a clear picture of why we are getting this error. Thank you so much for the answer and the detailed explanation.
06-07-2019
05:06 AM
Hi @Jay Kumar SenSharma, can you please explain more about these properties?
06-06-2019
12:57 PM
@Geoffrey Shelton Okot Can you please help us with the issue?
06-06-2019
12:45 PM
Hi, we have HDP 2.6.5 in our cluster. The property you mentioned is already disabled. The link it is being redirected to is "https://<hostname>:6080".
03-21-2018
10:57 AM
@Krishna Pandey Thank you so much for the reply. Will look into this.
03-21-2018
08:42 AM
@Krishna Pandey Hi, thanks for the reply. The main concern is whether this implementation would take the credentials from the system rather than requiring them while logging in to Ambari.
03-21-2018
04:59 AM
Hi @Krishna Pandey, @Jay Kumar SenSharma, thanks for the answer, that is a huge help. I have one small query. The requirement is: "Whenever a user logs in to his system (laptop/desktop), the credentials entered have to be stored and then used to validate his/her login to Ambari, so he/she need not enter the credentials again." Is this possible using Knox or PAM-based authentication? Thanks in advance.
03-20-2018
05:11 AM
The requirement is to open the Ambari UI without entering any credentials, using only the credentials used to log in to the system. Can this be done using Knox? Thanks in advance.
03-20-2018
05:08 AM
Hi @Saumil Mayani, thank you so much for the answer, it did work. I just wanted to know: can all the regions be moved collectively instead of moving every region individually?
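For moving everything off a RegionServer in one operation, a hedged sketch follows. The paths are typical for HDP 2.x and the hostname is a placeholder; verify the script locations in your installation:

```shell
# Option A: the bundled region_mover.rb script unloads every region from
# the named RegionServer and distributes them across the rest of the cluster.
hbase org.jruby.Main /usr/hdp/current/hbase-client/bin/region_mover.rb \
  unload <regionserver-hostname>

# Option B: graceful_stop.sh drains the server (it calls region_mover
# internally) and then stops it, which is useful for maintenance windows.
/usr/hdp/current/hbase-client/bin/graceful_stop.sh <regionserver-hostname>
```

Note that `unload` spreads the regions across all remaining servers; if every region must land on one specific target server, per-region `move` commands in the hbase shell are still the direct way.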
03-07-2018
10:01 AM
Need to move all the regions of a region server to another region server manually
01-08-2018
10:14 AM
Hi, I wanted to know the type of file created by the TestDFSIO write command for HDFS benchmarking. For example, /benchmarks/TestDFSIO/io_data/test_io_0 is created, but what is its file type? Can it be changed?
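As far as I can tell, the data files under io_data are plain HDFS files filled with a repeating byte pattern (the control files under io_control are SequenceFiles); the format is fixed by the benchmark, but the number and size of files are configurable. A sketch for inspecting and re-running (the test jar name varies by version, hence the wildcard):

```shell
# Look at what TestDFSIO wrote: the data files are plain byte streams.
hdfs dfs -ls /benchmarks/TestDFSIO/io_data
hdfs dfs -cat /benchmarks/TestDFSIO/io_data/test_io_0 | head -c 64 | od -c

# The file type itself cannot be changed, but the count and size can:
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-*-tests.jar \
  TestDFSIO -write -nrFiles 10 -fileSize 1GB
```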
11-15-2017
08:29 AM
How can the production and DR clusters be kept in sync? How can the configurations be synced? Thanks in advance.
11-14-2017
04:58 AM
Suppose Cluster B is the backup cluster for Cluster A, and replicas of Cluster A's tables are stored in Cluster B. How do I change the configurations so that whenever Cluster A is down, requests are sent to Cluster B? How can this be done?
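For the table-copy half of this, HBase's built-in replication is the usual mechanism; the failover half is not automatic in HBase itself. A hedged sketch, assuming the tables are HBase tables and Cluster B's ZooKeeper quorum is `zkb1,zkb2,zkb3` (placeholder hostnames), run inside `hbase shell` on Cluster A:

```shell
# Inside 'hbase shell' on Cluster A:
#
#   add_peer '1', 'zkb1,zkb2,zkb3:2181:/hbase-unsecure'
#
# Then enable replication per table, either with (HBase 1.1+):
#
#   enable_table_replication 'my_table'
#
# or by setting the column family's scope directly:
#
#   alter 'my_table', {NAME => 'cf', REPLICATION_SCOPE => 1}
```

Failover itself has to be handled outside HBase: when Cluster A is down, clients must be repointed at Cluster B, for example by switching the ZooKeeper quorum in the clients' hbase-site.xml or via a DNS/VIP switch in front of the clusters.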
11-01-2017
08:51 AM
@Aditya Sirna Hi Aditya, thanks for the answer. Both Maven and gcc are installed. Is there any way I can build this and generate data without an internet connection, since the cluster I am using has no access to the internet? Thanks in advance.
11-01-2017
07:00 AM
@pankaj singh Can you please help me in this regard? Thanks in advance.
11-01-2017
06:52 AM
I am executing tpcds-build.sh before running tpcds-setup.sh to generate data for Hive benchmarking. The output of ./tpcds-build.sh is:
Building TPC-DS Data Generator
curl http://dev.hortonworks.com.s3.amazonaws.com/hive-testbench/tpcds/README
curl: (7) couldn't connect to host
make: *** [tpcds_kit.zip] Error 7
TPC-DS Data Generator built, you can now use tpcds-setup.sh to generate data.
Note that the script still prints the success message above even though the download failed. Please help in this regard. Thanks in advance.
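The `curl: (7)` error means the build host cannot reach the S3 mirror, and the failing make target is `tpcds_kit.zip`. A hedged workaround sketch, assuming the hive-testbench Makefile only needs the kit archive pre-staged (the exact file name and directory should be checked against the Makefile of your hive-testbench version, and note that the Maven build also needs either internet access or a populated local repository):

```shell
# On a machine WITH internet access, fetch the kit the Makefile tries to
# download (URL base taken from the error output above; the archive name
# is an assumption based on the failing make target):
curl -O http://dev.hortonworks.com.s3.amazonaws.com/hive-testbench/tpcds/tpcds_kit.zip

# Copy tpcds_kit.zip to the offline cluster node, into the directory the
# Makefile expects (typically tpcds-gen/ inside hive-testbench), then:
./tpcds-build.sh
```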
10-12-2017
07:01 AM
@vshukla Please help in this regard.
10-12-2017
06:59 AM
While running the spark-bench command ./examples/multi-submit-sparkpi/multi-submit-example.sh from the spark-bench distribution, I get this error:
Exception in thread "main" java.lang.Exception: spark-submit failed to complete properly given these arguments:
--class com.ibm.sparktc.sparkbench.cli.CLIKickoff --master yarn-client /home/dialdev/spark-bench_2.1.1_0.2.2-RELEASE/lib/spark-bench-2.1.1_0.2.2-RELEASE.jar /tmp/spark-bench-3184271760086441425.conf
at com.ibm.sparktc.sparkbench.sparklaunch.SparkLaunch$.launch(SparkLaunch.scala:47)
at com.ibm.sparktc.sparkbench.sparklaunch.SparkLaunch$$anonfun$run$2.apply(SparkLaunch.scala:39)
at com.ibm.sparktc.sparkbench.sparklaunch.SparkLaunch$$anonfun$run$2.apply(SparkLaunch.scala:39)
at scala.collection.immutable.List.foreach(List.scala:381)
at com.ibm.sparktc.sparkbench.sparklaunch.SparkLaunch$.run(SparkLaunch.scala:39)
at com.ibm.sparktc.sparkbench.sparklaunch.SparkLaunch$.main(SparkLaunch.scala:16)
at com.ibm.sparktc.sparkbench.sparklaunch.SparkLaunch.main(SparkLaunch.scala)
In spark-bench-env.sh, the environment variables are set as follows:
export SPARK_HOME=/usr/hdp/2.5.3.0-37/spark
export SPARK_MASTER_HOST=yarn-client (also tried with yarn)
Thanks in advance
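Two things in that env file look suspect, though this is a guess without the full spark-submit stderr. First, spark-bench 2.1.1_0.2.2 targets Spark 2.x, while `/usr/hdp/2.5.3.0-37/spark` on HDP 2.5.3 is Spark 1.6 (Spark 2 usually lives under `spark2` when installed). Second, `yarn-client` is no longer a valid master URL in Spark 2.x; the master is plain `yarn`, with client deploy mode as the default. A sketch of the adjusted spark-bench-env.sh:

```shell
# spark-bench-env.sh -- hedged sketch, not verified against this cluster.
# spark-bench 2.1.1_0.2.2 needs Spark 2.x; point SPARK_HOME at the
# Spark 2 install (path assumes the HDP spark2 package is installed):
export SPARK_HOME=/usr/hdp/2.5.3.0-37/spark2

# In Spark 2.x 'yarn-client' was split into master 'yarn' plus
# deploy-mode 'client' (client is the default deploy mode):
export SPARK_MASTER_HOST=yarn
```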
10-05-2017
05:46 AM
Running the command: hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=20000 --table='w1' --valueSize=10485 --columns=100 filterScan 5
The table size is approximately 100 GB. It fails with this error message:
Timed out after 300 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
What can be done? Thanks in advance.
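That "Timed out after 300 secs" plus exit code 143 combination is the MapReduce task timeout killing long-running scan mappers, not an HBase-level failure. PerformanceEvaluation runs as a MapReduce job and accepts generic `-D` options, so one plausible fix is raising the timeout for the run (value in milliseconds; 0 disables it, which is riskier):

```shell
# Raise the MR task timeout to 30 minutes for this full-scan benchmark:
hbase org.apache.hadoop.hbase.PerformanceEvaluation \
  -Dmapreduce.task.timeout=1800000 \
  --rows=20000 --table='w1' --valueSize=10485 --columns=100 filterScan 5
```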
09-01-2017
09:04 AM
@Jay SenSharma Hi, Can you please help me in this regard?
09-01-2017
09:01 AM
HBase provides the pe tool for performance benchmarking. It has an option to specify the number of clients. Is that the number of HBase region servers we have in the cluster?
Labels:
09-01-2017
03:36 AM
1 Kudo
Hi, I wanted to know the details of the different options and commands available with the pe tool, specifically the number of rows, the number of clients, and columns. How do they work?
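A hedged sketch of common invocations. To the best of my understanding, the trailing `nclients` number is how many parallel clients the tool spawns (MapReduce mappers by default, or in-process threads with `--nomapred`), not the number of RegionServers; `--rows` is the row count each client handles in its own key range:

```shell
# 3 parallel clients, each writing 100000 rows to its own key range
# (runs as a MapReduce job with 3 mappers by default):
hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=100000 randomWrite 3

# Same workload, but as local threads instead of a MapReduce job:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=100000 sequentialWrite 3

# Running the class with no arguments prints the full usage text,
# listing every option (--valueSize, --columns, the scan/read modes, etc.):
hbase org.apache.hadoop.hbase.PerformanceEvaluation
```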
08-31-2017
05:40 AM
Hi @Rajesh, thanks for the answer. This seems feasible to implement in the cluster I am using. I need a few clarifications though:
1. What would be the content of the jceks file? Can I have an example jceks file for better understanding?
2. hadoop credential create contextFactory.systemPassword -provider jceks:///etc/zeppelin/conf/credentials.jceks — would this line be included in the [main] part of the shiro.ini file?
It would be really helpful if you could guide me through this. Thanks in advance.
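On both questions, a hedged sketch: the jceks file is a small encrypted Java keystore mapping an alias to a secret, so it has no human-readable content to show, and `hadoop credential create` is a command run in the OS shell, not a line inside shiro.ini (shiro.ini only references the alias). Note the provider URI usually takes the `jceks://file/...` form:

```shell
# Create the credential store and put the Zeppelin LDAP bind password in
# it under the alias 'contextFactory.systemPassword' (prompts for the
# secret interactively, so it never lands in shell history):
hadoop credential create contextFactory.systemPassword \
  -provider jceks://file/etc/zeppelin/conf/credentials.jceks

# List what the store contains -- aliases only, never the secrets:
hadoop credential list \
  -provider jceks://file/etc/zeppelin/conf/credentials.jceks
```

Remember to restrict the file's permissions (e.g. readable only by the zeppelin user), since possession of the file plus Hadoop libraries is enough to read the secret back.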
08-30-2017
03:59 AM
Hi all, currently the passwords of all Zeppelin users are stored in the shiro.ini file in plain text. I want to encrypt these passwords and store them in a MySQL database. I have been stuck on this for many days, as I am new to the technology. We don't have LDAP or AD, and Kerberos is not implemented. The Ambari UI is being used. I am looking to achieve this through configuration changes rather than changing the existing functionality completely. Can anyone please guide me? Thanks in advance.
08-09-2017
06:55 AM
Hi @Geoffrey Shelton Okot, thank you for the reply. The requirements I am working on are as follows:
1. I have multiple Zeppelin users and have to store the password of each user in the database.
2. The passwords have to be hashed before being stored in the database.
I also found this link about encrypting passwords in Shiro: http://shiro.apache.org/configuration.html#Configuration-INISections
Under the section on encrypting passwords, the hashed passwords are again stored in the shiro.ini file, which is again visible to the user. Hence I want to store them in the database. Please help me out with this. Thanks in advance.
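For keeping hashed passwords out of shiro.ini entirely, Shiro's built-in JdbcRealm combined with a HashedCredentialsMatcher looks like a fit for both requirements: the hashes live in a MySQL table and shiro.ini only holds the wiring. A config-fragment sketch (table/column names, database name, and DB user are illustrative assumptions; the datasource class name depends on the MySQL connector version on the classpath):

```ini
[main]
# Passwords in the DB are stored as SHA-256 hashes, never plain text.
passwordMatcher = org.apache.shiro.authc.credential.HashedCredentialsMatcher
passwordMatcher.hashAlgorithmName = SHA-256

# MySQL datasource (hypothetical database/user; the DB password here can
# itself be externalized, e.g. via a credential store).
ds = com.mysql.jdbc.jdbc2.optional.MysqlDataSource
ds.serverName = localhost
ds.databaseName = zeppelin
ds.user = zeppelin_app

# JdbcRealm's default lookup is:
#   SELECT password FROM users WHERE username = ?
# so a 'users' table with 'username' and 'password' columns is assumed.
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.dataSource = $ds
jdbcRealm.credentialsMatcher = $passwordMatcher
```

The hashes themselves would be generated once (for instance with Shiro's command-line hasher) and inserted into the table, so nothing secret remains in shiro.ini beyond the DB connection details.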