Member since: 03-01-2017
Posts: 33
Kudos Received: 2
Solutions: 0
12-04-2019
12:12 PM
If this issue is fixed, can you share the sample code? I am looking to use 1.6.3 and need some Java code samples for extracting data from Hive.
10-03-2019
10:51 AM
Can you check the value of hive.llap.io.enabled? It should be set to true. hive.llap.io.enabled and hive.llap.io.memory.mode are the only two settings that determine whether the cache is used or not. If both of these are set, you should make sure that the queries you are running are actually hitting data and that pruning isn't skipping everything (e.g. if the query is select col from tbl where date="2019-01-01" and there isn't any data for 2019-01-01, then you won't ever get anything in your cache). James
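One quick way to verify both values is to echo them back from a beeline session (the JDBC URL here is a placeholder for your own HiveServer2 Interactive endpoint):
   beeline -u "jdbc:hive2://llap-host.example.com:10500/" \
     -e "SET hive.llap.io.enabled; SET hive.llap.io.memory.mode;"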
11-03-2018
03:56 AM
Following up on this. All services are up and running. Is there another tool I can use besides DBeaver to connect to HiveServer2?
10-24-2017
08:17 AM
Thanks all for your help. I resolved the issue by deleting the entries for the new server from the ambari.hoststate table and retrying. It worked. 🙂
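For anyone hitting the same thing, a purely hypothetical sketch of that cleanup (stop ambari-server and back up the Ambari database first; the hoststate schema varies by Ambari version, keyed by host_name in older releases and host_id in newer ones):
   mysql -u ambari -p ambari
   mysql> SELECT * FROM hoststate;    -- find the row(s) belonging to the new server
   mysql> DELETE FROM hoststate WHERE host_id = <id-of-the-new-server>;
Then restart ambari-server and retry the registration.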
10-10-2017
07:16 AM
1: Check if hbase-master is running:
   sudo /etc/init.d/hbase-master status
   If not, then start it:
   sudo /etc/init.d/hbase-master start
2: Check if hbase-regionserver is running:
   sudo /etc/init.d/hbase-regionserver status
   If not, then start it:
   sudo /etc/init.d/hbase-regionserver start
3: Check if zookeeper-server is running:
   sudo /etc/init.d/zookeeper-server status
   If not, then start it:
   sudo /etc/init.d/zookeeper-server start
   (a combined check-and-start loop for items 1-3 is sketched after this list)
4: Grep for an open port:
   netstat -apn | grep <port to look for>
5: Process memory usage:
   ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
   or
   ps -A --sort -rss -o comm,pmem | head -n 11
6: Free memory on the system (CentOS):
   free -g
7: Check YARN application logs:
   yarn logs -applicationId <application ID>
   Dig deeper by using the container ID along with the application ID:
   yarn logs -applicationId <application ID> -containerId <container id>
8: Start a zkCli connection to a ZooKeeper server:
   cd /grid/0/hdp/current/zookeeper-client/bin
   ./zkCli.sh -server <zookeeper server fqdn>:2181
9: Start the NameNode from the CLI:
   su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
10: Start the Spark history server when its HDFS directory is missing:
   hdfs dfs -mkdir /spark-history
   hdfs dfs -chown -R spark:hadoop /spark-history
   hdfs dfs -chmod -R 777 /spark-history
   su - spark -c "/usr/hdp/current/spark-historyserver/sbin/start-history-server.sh"
11: Start the MapReduce history server from the CLI:
   su -l mapred -c "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh start historyserver"
12: Start the App Timeline Server (YARN) from the CLI:
   su - yarn
   /grid/0/hdp/2.5.3.0-37/hadoop-yarn/sbin/yarn-daemon.sh start timelineserver
13: Start and stop the Oozie server from the CLI (on the machine where the Oozie server is installed):
   su oozie
   /usr/hdp/current/oozie-server/bin/oozied.sh start
   /usr/hdp/current/oozie-server/bin/oozied.sh stop
14: Start the HBase master (HBase) from the CLI. Go to the cluster node where the HBase master is installed, then:
   su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"
15: Start HiveServer2 (Hive) from the CLI. Go to the cluster node where HiveServer2 is installed, then:
   su hive
   nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " > /tmp/hiveserver2HD.out 2> /tmp/hiveserver2HD.log &
   Or:
   su - hive -l -c 'HIVE_CONF_DIR=/etc/hive/conf /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris="" -hiveconf hive.log.dir=/var/log/hive -hiveconf hive.log.file=hiveserver2.log 1>/var/log/hive/hiveserver2.log 2>/var/log/hive/hiveserver2.log &'
   Or:
   sudo su - -c "export HIVE_CONF_DIR=/tmp/hiveConf;nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=' ' -hiveconf hive.log.file=hiveServer2.log -hiveconf hive.log.dir=/var/log/hive > /var/log/hive/hiveServer2.out 2>> /var/log/hive/hiveServer2.log &" hive
   Then record the HiveServer2 PID (28627 in this example session) in the PID files:
   [root@stlrx2540m1-109 ~]# ps -ef | grep HiveSer
   [root@stlrx2540m1-109 ~]# echo 28627 > /var/run/hive/hive-server.pid
   [root@stlrx2540m1-109 ~]# echo 28627 > /var/run/hive/hive.pid
   [root@stlrx2540m1-109 ~]# chmod 644 /var/run/hive/hive-server.pid
   [root@stlrx2540m1-109 ~]# chmod 644 /var/run/hive/hive.pid
   [root@stlrx2540m1-109 ~]# chown hive:hadoop /var/run/hive/hive-server.pid
   [root@stlrx2540m1-109 ~]# chown hive:hadoop /var/run/hive/hive.pid
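The promised loop for items 1-3, a small convenience sketch assuming the same init.d layout (status returns non-zero when a service is down, which triggers the start):
   for svc in hbase-master hbase-regionserver zookeeper-server; do
     sudo /etc/init.d/$svc status || sudo /etc/init.d/$svc start
   done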
10-10-2017
06:50 AM
1 Kudo
Manual cluster install:
Create VMs: OS = RHEL6/RHEL7 (depending on your HDP version), Java = OpenJDK 8. Set JAVA_HOME on all the nodes of the cluster (make sure you have the correct Java path before setting it; the path value might differ based on which Java subversion gets installed):
   export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk.x86_64
Set up password-less SSH: pick one node as the Ambari host, create a public/private key pair using ssh-keygen, then copy the public key into the "/home/root/.ssh/authorized_keys" file on all the nodes. Test that password-less SSH works (by doing ssh from the Ambari node to all other nodes); if this fails, you need to resolve it first, as the entire installation depends on it.
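A minimal sketch of that key setup, assuming the root user and a hypothetical node name (repeat the copy and test steps for every node):
   ssh-keygen -t rsa                        # accept the defaults; empty passphrase
   ssh-copy-id root@node2.example.com       # appends the public key to authorized_keys on the target
   ssh root@node2.example.com hostname      # should print the hostname without asking for a password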
NTP set-up (on all nodes):
   yum install -y ntp
Start NTP: "/etc/init.d/ntpd start" (RHEL6); "systemctl enable ntpd" and "systemctl start ntpd" (RHEL7).
Check the "/etc/hosts" file on all hosts; it should have entries for all nodes in it:
   vi /etc/hosts
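For example (hypothetical addresses and hostnames; each line maps an IP to the node's FQDN and a short alias):
   192.168.1.11   node1.example.com   node1
   192.168.1.12   node2.example.com   node2
   192.168.1.13   node3.example.com   node3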
Edit the network file (on all nodes):
   vi /etc/sysconfig/network
   NETWORKING=yes
   HOSTNAME=<fully.qualified.domain.name>
(the FQDN of the particular node where you are editing the /etc/sysconfig/network file)
Stop iptables (sometimes it might throw an error like "permission denied"; you can ignore it):
   service iptables stop (RHEL6)
   systemctl disable firewalld and service firewalld stop (RHEL7)
Disable SELinux (all nodes):
   setenforce 0
Set umask to 0022 (all nodes):
   umask 0022
   echo umask 0022 >> /etc/profile
Get the Ambari repo file (on the node where Ambari will be installed):
   wget -nv http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/2.x/BUILDS/2.4.3.0-35/ambaribn.repo -O /etc/yum.repos.d/ambari.repo (RHEL6, HDP 2.6)
   wget -nv http://public-repo-1.hortonworks.com/ambari/centos7-ppc/2.x/updates/2.5.0.1/ambari.repo -O /etc/yum.repos.d/ambari.repo (RHEL7, HDP 2.6)
Install the Ambari server (Ambari server node only):
   yum install ambari-server
If you want to use MySQL for Ambari, set up MySQL. Check this page: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_ambari_reference_guide/content/_using_ambari_with_mysql.html
   yum install mysql-connector-java
   yum install mysql-server
Start the service (once MySQL is installed):
   service mysqld start
Do the steps to set up the DAT file required by the Ambari install. When you do 'mysql -u root -p' it will ask for a password; just hit 'enter' (i.e. blank password).
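A hedged sketch of the usual MySQL prep from that docs page ('ambari'/'bigdata' are the documented defaults; substitute your own user and password):
   mysql -u root -p
   mysql> CREATE USER 'ambari'@'%' IDENTIFIED BY 'bigdata';
   mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
   mysql> FLUSH PRIVILEGES;
   mysql> CREATE DATABASE ambari;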
Configure and start the Ambari server:
   ambari-server setup -j $JAVA_HOME
(assuming JAVA_HOME is already set; if not, set it)
Accept y to temporarily disable SELinux. Accept n for customizing the user account for ambari-server (assuming Ambari runs under the "root" user). Accept y to temporarily disable iptables. Select n for advanced database configuration (select y if you want to set up Ambari with MySQL or any other DB, which should already be installed on the same node). At "Proceed with configuring remote database connection properties [y/n]" choose y. This completes the set-up.
Start and check the Ambari server:
   ambari-server start
   ambari-server status
If you want to stop the Ambari server:
   ambari-server stop
When successful, you should be able to reach the Ambari UI and work from there on. Happy installing....
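One quick way to confirm the UI is answering, assuming the default port 8080 and a placeholder hostname:
   curl -s -o /dev/null -w "%{http_code}\n" http://ambari-host.example.com:8080/
   # 200 (or a 3xx redirect) means the server is up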
07-20-2017
07:15 AM
@dnyanesh kulkarni The issue seems to be due to a lack of permission to execute the query as the end user on the HiveServer2 side. Please change 'Run as end user instead of Hive user' to 'true' in hive-site.xml (hive.server2.enable.doAs=true).
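If you are editing hive-site.xml directly rather than flipping the switch in Ambari, the property looks like this:
   <property>
     <name>hive.server2.enable.doAs</name>
     <value>true</value>
   </property>
Restart HiveServer2 after the change so it takes effect.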