Member since: 03-22-2016
Posts: 27
Kudos Received: 9
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 975 | 09-02-2016 05:00 PM
 | 653 | 08-16-2016 06:58 AM
 | 661 | 06-08-2016 12:19 PM
06-29-2017
12:00 PM
@Johannes Peter The Kerberos plugin uses SPNEGO to negotiate authentication. The HTTP part of the service principal indicates the type of requests this principal will be used to authenticate, so the HTTP/ prefix is a must for SPNEGO to work with requests to Solr over HTTP.
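For example, the Solr service principal would look something like the following, and could be created on an MIT KDC roughly like this (the hostname and realm are placeholders for your environment):
HTTP/solr01.example.com@EXAMPLE.COM
kadmin.local -q "addprinc -randkey HTTP/solr01.example.com@EXAMPLE.COM"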
06-28-2017
09:50 PM
Try disabling the firewall: service iptables stop. Please let me know if that helps.
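If that fixes it, you can also keep the firewall from coming back on reboot. A minimal sketch for RHEL/CentOS 6, where iptables is the firewall service (systemd-based systems use a different service name):
service iptables stop      # stop the firewall now
chkconfig iptables off     # optional: keep it off across reboots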
06-28-2017
08:54 PM
@Weidong Ding Port 8080 might already be in use; Tomcat, for example, usually runs on port 8080. You can optionally change the Ambari port: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-reference/content/optional_changing_the_default_ambari_server_port.html
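To confirm the conflict and then move Ambari off the port, something like this should work (client.api.port is the property the linked doc describes; 8081 is just an example value):
netstat -tnlp | grep 8080    # see which process is holding the port
echo "client.api.port=8081" >> /etc/ambari-server/conf/ambari.properties
ambari-server restart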
06-28-2017
04:11 PM
@Bill Schwanitz There is no single property, such as server.user, that can be modified. Setting up the ambari-server user involves a number of modifications and permission changes, so as far as I know this is the only available option.
06-28-2017
02:44 PM
@Bill Schwanitz I assume you want to run ambari-server under a non-root user. If so, you can run the ambari-server setup command; it can be executed even after Ambari is already installed. Answer "y" to "Customize user account for ambari-server daemon" and enter the new user name. You will also need to take care of the sudoer configuration: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-security/content/sudoer_configuration_server.html
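The relevant part of the interactive session looks roughly like this (the exact prompt wording may vary between Ambari versions, and "ambari" is a placeholder user name):
# ambari-server setup
Customize user account for ambari-server daemon [y/n] (n)? y
Enter user account for ambari-server daemon (root): ambari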
06-28-2017
02:33 PM
@Sami Ahmad For some tables Hive just looks at the table metadata and fetches stored statistics, which might not have been updated. There are two ways to approach this. 1. Run ANALYZE TABLE pa_lane_txn COMPUTE STATISTICS and then run the select count(*) statement; this will give you the correct value. 2. Force Hive to run a MapReduce job that counts the rows by setting fetch task conversion to none: hive> set hive.fetch.task.conversion=none;
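Putting both options together in one session (pa_lane_txn is the table from your question):
hive> ANALYZE TABLE pa_lane_txn COMPUTE STATISTICS;   -- refresh the stored row-count statistics
hive> SELECT COUNT(*) FROM pa_lane_txn;               -- now returns the updated value
hive> SET hive.fetch.task.conversion=none;            -- or: force an actual MapReduce count
hive> SELECT COUNT(*) FROM pa_lane_txn;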
06-28-2017
04:12 AM
2 Kudos
@Anishkumar Valsalam You can run the command desc database <db_name>. There you can find the location of the HDFS directory where the database exists. From HDFS you can then run hdfs dfs -stat /apps/hive/warehouse/<db_name>.db to find the creation date.
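For example (substitute your own database name; note that hdfs dfs -stat prints the directory's modification time, which matches the creation time as long as the directory itself hasn't changed):
hive> DESC DATABASE <db_name>;                          -- shows the HDFS location of the database
hdfs dfs -stat "%y" /apps/hive/warehouse/<db_name>.db   -- prints the directory timestamp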
06-28-2017
04:05 AM
1 Kudo
@Wilson Blanco You can set these variables in the /etc/hadoop/conf/hadoop-env.sh file. Typical values of these variables look like:
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec
06-27-2017
02:27 PM
1 Kudo
@PJ Even after setting the replication factor to 1, the data is still split into blocks that are distributed across different datanodes. So, in case of a datanode failure you will only be able to partially retrieve the data. Another advantage of setting the replication factor > 1 is parallel processing: with multiple copies of the data in multiple places, all the machines can process the data simultaneously.
06-27-2017
12:35 PM
@Vishal Gupta You might not have added principals for kadmin/fqdn@DOMAIN as well as the legacy fallback kadmin/admin@DOMAIN. You can add them using kadmin.local: https://web.mit.edu/kerberos/krb5-1.13/doc/admin/admin_commands/kadmin_local.html
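On the KDC host that would look roughly like this (the hostname and realm are placeholders for your environment):
kadmin.local -q "addprinc -randkey kadmin/kdc01.example.com@EXAMPLE.COM"
kadmin.local -q "addprinc -randkey kadmin/admin@EXAMPLE.COM"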
06-27-2017
12:28 PM
@Leenurs Quadras You can install multiple Hive Servers in the case of multiple workloads or applications, in which case each HiveServer2 instance can have its own settings for Hive and Tez. You can refer to this document: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_performance_tuning/content/section_hive_setup_for_multiple_queues.html
P.S. If the answers help, please accept and upvote them. Thanks!
06-27-2017
12:08 PM
1 Kudo
@Phoncy Joseph Old versions of Ambari used to display such health messages; usually they are harmless. This was fixed in Ambari 2.3.0: https://issues.apache.org/jira/browse/AMBARI-12420. You can check whether you are able to access and create files in DFS. You can also run the hadoop fsck command to check the health status.
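For example, a quick health check from any cluster node (run as a user with HDFS access; / checks the whole filesystem):
hdfs dfs -touchz /tmp/health_check_test    # confirm you can create a file in DFS
hadoop fsck /                              # report on blocks, replication, and corruption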
06-27-2017
06:02 AM
1 Kudo
@Facundo Bianco For the PROD queue you can set yarn.scheduler.capacity.<prod_queue_path>.capacity to 50%. For DEV and LABS you can set yarn.scheduler.capacity.<dev/labs_queue_path>.capacity to 25% each and yarn.scheduler.capacity.<dev/labs_queue_path>.maximum-capacity to 50% each. This provides the required elasticity, i.e. when PROD is not being used, the other two can each use up to 50%.
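In capacity-scheduler.xml terms it would look something like this (assuming the three queues sit directly under root; adjust the paths if your hierarchy differs):
yarn.scheduler.capacity.root.queues=PROD,DEV,LABS
yarn.scheduler.capacity.root.PROD.capacity=50
yarn.scheduler.capacity.root.DEV.capacity=25
yarn.scheduler.capacity.root.DEV.maximum-capacity=50
yarn.scheduler.capacity.root.LABS.capacity=25
yarn.scheduler.capacity.root.LABS.maximum-capacity=50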
06-27-2017
05:51 AM
@Leenurs Quadras Hive installation is independent of the NameNode/Secondary NameNode location. In the configuration file you just need to specify where Hadoop is installed so that Hive can access the job tracker for submitting MapReduce jobs. Theoretically you can set up HiveServer2, the Metastore server, Hive clients, etc. all on the master node. However, in a production scenario, placing them on a master or slave node is not a good idea. You should set up HiveServer2 on a dedicated node so it doesn't compete for resources with the NameNode processes (it gets quite busy in a multi-user scenario). The Metastore is a separate daemon that can either be embedded in HiveServer2 (in which case it uses Derby and isn't ideal for production use) or be set up as a dedicated database service (the recommended way). Beeline (the Hive client) can run in embedded mode as well as remotely. Remote HiveServer2 mode is recommended for production use, as it is more secure and doesn't require direct HDFS/metastore access to be granted to users. Hope this answers all your questions.
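To make the remote setup concrete: point HiveServer2 at the standalone metastore in hive-site.xml and have users connect through Beeline. A sketch, where the hostnames are placeholders and the ports are the usual defaults (9083 for the metastore Thrift service, 10000 for HiveServer2):
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>
beeline -u "jdbc:hive2://hiveserver2-host.example.com:10000/default"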
12-02-2016
02:01 PM
@Tristan Fily, were you able to resolve this issue? If yes, was there some misconfiguration?
09-20-2016
03:10 PM
Please check whether the ResourceManager at http://<resource_manager_host>:8088 is showing the correct StartTime/FinishTime. If not, that would explain why the duration is being calculated from the epoch. If it is fine, please check/correct the value of time_zone in hue.ini. To find the available time zones:
ls /usr/share/zoneinfo ==> gives you X: the available zones (for example Europe)
ls /usr/share/zoneinfo/{your zone} ==> gives you Y: the time zone for specific cities (if available, for example Paris)
time_zone should be equal to X/Y (e.g. Europe/Paris). After making changes in hue.ini, restart Hue: service hue restart
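The setting lives in the [desktop] section of hue.ini, so the change would look like this (Europe/Paris is just the example zone from above):
[desktop]
time_zone=Europe/Paris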
09-19-2016
04:02 PM
1 Kudo
There could be a misconfiguration in your hue.ini. Please open /etc/hue/conf/hue.ini and ensure the value of history_server_api_url is correct, i.e. the host name (most of the time the same as the ResourceManager host) and the port number. Looking at the logs in /var/log/hue/runcpserver.log should also be helpful.
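For reference, the setting sits in the YARN cluster block of hue.ini and would look something like this (19888 is the usual JobHistory Server web port; substitute your own host):
[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      history_server_api_url=http://historyserver-host.example.com:19888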
09-02-2016
05:00 PM
1 Kudo
This is a known issue. Please refer to this link: http://gethue.com/hadoop-tutorial-oozie-workflow-credentials-with-a-hive-action-with-kerberos/
08-16-2016
08:55 PM
You can change the ownership of that file by logging in as the user 'hdfs'.
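A sketch of what that looks like (the path, user, and group are placeholders, since the file wasn't named here; the hdfs user is the HDFS superuser, so it is allowed to change ownership):
su - hdfs
hdfs dfs -chown <user>:<group> /path/to/that/file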
08-16-2016
06:58 AM
Try this: /usr/lib/hue/build/env/bin/hue passwd <username>. Run this command as root.
06-16-2016
07:40 PM
Not sure if this is what you are looking for, but in the Job Browser view you can click on 'Failed' to view the failed jobs. See the screenshot: screen-shot-2016-06-16-at-33907-pm.png
06-08-2016
12:19 PM
1 Kudo
You can use your VM's bridged adapter (Settings->Network->Adapter 2->Enable Network Adapter and set Attached To: Bridged Adapter). You can then run ifconfig and connect to this network (in my case it is 192.168.xxx.xxx). You can also use the CLI to submit an Oozie job, e.g.: oozie job -oozie http://localhost:8080/oozie -config job.properties -run
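A minimal job.properties to go with that command might look like the following (the hostname, ports, and application path are placeholders for a typical sandbox setup):
nameNode=hdfs://sandbox.hortonworks.com:8020
jobTracker=sandbox.hortonworks.com:8050
oozie.wf.application.path=${nameNode}/user/${user.name}/my-workflow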
06-03-2016
03:58 PM
This is a known limitation in non-secure clusters, whereby the containers run as the yarn user rather than the logged-in user. Try setting this: <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
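In context, that element goes inside the action definition in your workflow.xml. A sketch for a shell action (the action and script names are placeholders):
<action name="my-shell-action">
  <shell xmlns="uri:oozie:shell-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <exec>my-script.sh</exec>
    <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
    <file>my-script.sh</file>
  </shell>
  <ok to="end"/>
  <error to="fail"/>
</action>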
06-02-2016
12:04 PM
You can change the http_port in hue.ini and restart Hue to see if it works. Also, your database might be corrupt. You can start a test server (/usr/lib/hue/build/env/bin/hue testserver), which creates a fresh database, then replace the corrupt database with this fresh one (cp /usr/lib/hue/desktop/desktop-test.db /var/lib/hue/desktop.db) and then restart Hue (/etc/init.d/hue restart).
05-31-2016
02:11 PM
You can try restarting Hue after commenting out line 70: ssl_cipher_list="DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2" (the default)
05-31-2016
01:51 PM
You can start Hue with:
[root@sandbox ~]# /etc/init.d/hue start
If you are unable to access the Hue web UI and are using HDP 2.4 or higher, the steps below might help:
1. Shut down your VM.
2. Go to your VM Settings->Network->Adapter 2 and check 'Enable Network Adapter' & Attached to: Bridged Adapter, Name: <wireless adapter>.
3. Click OK and start your VM.
4. Start Hue as stated above and check your ethernet address using the 'ifconfig' command. It should be something like 192.168.xxx.xxx.
5. The Hue interface should be accessible at <ethernet_address>:8000.