Member since: 06-24-2016
Posts: 111
Kudos Received: 8
Solutions: 0
09-24-2017
03:18 AM
Several factors affect the performance of a Sqoop import: 1. Network bandwidth between the source server (Oracle, MySQL, PostgreSQL) and the destination servers (DataNodes). 2. The source server's connection policy for clients. 3. CPU, RAM, and disk performance of the servers involved.
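Parallelism is another factor worth checking alongside the three above. A minimal sketch of tuning it, where the connection string, table, column, and mapper count are all placeholder assumptions:

```shell
# Hypothetical example: raise parallelism with --num-mappers, splitting
# the work on an indexed numeric column so mappers get even ranges.
sqoop import \
  --connect jdbc:mysql://source-db.example.com/sales \
  --username sqoop_user -P \
  --table orders \
  --split-by order_id \
  --num-mappers 8 \
  --target-dir /user/sqoop/orders
```

More mappers only help if the source server's connection policy and bandwidth (points 1 and 2 above) can sustain the extra parallel connections.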
09-24-2017
02:47 AM
What is application_1504517816511_0001~4? I'm not sure, but if you ran application_1504517816511 before enabling ResourceManager HA, kill that application first.
09-22-2017
12:40 AM
Did you install the HDP client on that server?
09-19-2017
04:25 AM
As far as I know, the sqoop import "--append" option is not compatible with the "--hcatalog-*" options. Just remove "--append".
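A sketch of what the import might look like with the HCatalog options and without --append; the connection string, table, and HCatalog names are placeholder assumptions:

```shell
# Hypothetical example: an HCatalog-based import. The --hcatalog-* options
# manage the target table themselves, which is why --append must be dropped.
sqoop import \
  --connect jdbc:mysql://source-db.example.com/sales \
  --username sqoop_user -P \
  --table orders \
  --hcatalog-database default \
  --hcatalog-table orders
```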
08-04-2017
07:55 AM
That's weird. Where are the partitions "/grid/data2, /grid/data3" mounted on slave1?
08-04-2017
03:47 AM
Then check the hostname of the node where HiveServer2 is installed.
08-04-2017
01:21 AM
That's a malformed JDBC URI. Try "beeline -u jdbc:hive2://hostname:10000/default" instead.
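For reference, a fuller invocation might look like this; the hostname, username, and query are placeholder assumptions:

```shell
# Hypothetical example: connect to HiveServer2 with beeline.
# -u gives the JDBC URL, -n the username, -e runs one statement and exits.
beeline -u "jdbc:hive2://hostname:10000/default" -n hive_user -e "SHOW DATABASES;"
```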
08-04-2017
12:53 AM
I have some questions. Q1. Did you install the DataNode service on slave1? Q2. Could you share the value of the DataNode directories under "Ambari > HDFS > Configs > Settings > DataNode"? Q3. Did you check the disk mount list on slave1?
07-28-2017
02:23 PM
I'm just curious: why are these options set in the Hive config from the Ambari web UI? Well, actually, what that means is: if I want to use the Hive ORC file format with advanced TBLPROPERTIES such as "orc.compress, orc.compress.size, orc.stripe.size, orc.create.index, ...etc", I have to specify these TBLPROPERTIES options every time I create a Hive table in ORC file format.
07-28-2017
06:28 AM
I'm using HDP 2.5.3, with these Hive settings: ACID Transactions ON, Execution Engine TEZ, CBO ON, Fetch column stats at compiler ON, Default ORC Stripe Size 64MB, ORC Compression Algorithm ZLIB, ORC Storage Strategy SPEED. Here's my question. If I create a Hive table like this:

CREATE TABLE test01 (no int, id string, code string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS ORC;

then what are the default TBLPROPERTIES for the test01 table's ORC options?

TBLPROPERTIES ('orc.compress'='?', 'orc.create.index'='?', 'orc.stripe.size'='?', 'orc.row.index.stride'='?')

For example:

TBLPROPERTIES ('orc.compress'='ZLIB', 'orc.create.index'='true', 'orc.stripe.size'='67108864', 'orc.row.index.stride'='50000')
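One way to see what Hive actually recorded is to ask it directly; the sketch below assumes a reachable HiveServer2 (the hostname is a placeholder). ORC properties you never set in TBLPROPERTIES typically fall back to the hive.exec.orc.default.* configuration values at write time, which is what those Ambari options configure.

```shell
# Hypothetical check: SHOW CREATE TABLE prints the DDL Hive stored, and
# DESCRIBE FORMATTED lists the table parameters, including ORC properties.
beeline -u "jdbc:hive2://hostname:10000/default" \
  -e "SHOW CREATE TABLE test01; DESCRIBE FORMATTED test01;"
```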
Labels:
- Apache Hive
- Apache Tez
07-13-2017
11:35 PM
As far as I know, you don't need to specify the Oracle JDBC driver parameter on the command line.
07-10-2017
02:46 PM
The jar path you typed uses the wrong path pattern. "ADD JAR hdfs:////user/test/lib/my-custom-format-0.0.1-SNAPSHOT.jar" won't work properly. Try again as below. To add a jar from a local path, use file:///home/username/some_lib/...jar, or just /home/username/some_lib/...jar. To add a jar from an HDFS path, use hdfs://namenode_fqdn:port(8020)/user/username/lib/...jar, or if you have set up NameNode HA, just use the nameservice ID, such as hdfs://nameserviceid/user/username/lib/...jar.
07-10-2017
06:40 AM
1st: if you executed the Spark command with master(local), then check the connection host and port on that local server. 2nd: check your firewall and iptables status, whether it is on or off.
07-08-2017
04:50 AM
I think the Hive Metastore hasn't loaded the Hadoop & Hive environment properly. Check this conf file and the Hive libraries on the server where the Hive packages are installed.

1. /etc/hive/conf/hive-env.sh

# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
export HIVE_HOME=${HIVE_HOME:-/usr/hdp/current/hive-client}
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=${HIVE_CONF_DIR:-/usr/hdp/current/hive-client/conf}

2. Check the library files under /usr/hdp/2.4.2.0-258/hive-metastore/lib and /usr/hdp/2.4.2.0-258/hive/lib.

/usr/hdp/2.4.2.0-258/hive-metastore/lib
hive-jdbc-1.2.1000.2.5.3.0-37-standalone.jar
hive-jdbc-1.2.1000.2.5.3.0-37.jar
hive-jdbc.jar -> hive-jdbc-1.2.1000.2.5.3.0-37-standalone.jar
hive-metastore-1.2.1000.2.5.3.0-37.jar
hive-metastore.jar -> hive-metastore-1.2.1000.2.5.3.0-37.jar
hive-serde-1.2.1000.2.5.3.0-37.jar
hive-serde.jar -> hive-serde-1.2.1000.2.5.3.0-37.jar
hive-service-1.2.1000.2.5.3.0-37.jar
hive-service.jar -> hive-service-1.2.1000.2.5.3.0-37.jar

/usr/hdp/2.4.2.0-258/hive/lib
libthrift-0.9.3.jar
07-05-2017
11:40 PM
It's normal, because you don't have ownership of or permission on that path. The path /tmp/ambari-qa has ownership and permissions ambari-qa:hdfs 700. That means only the ambari-qa user can access the path, including its sub-paths (staging). So to access it as another user such as nifi, you would need ambari-qa or hdfs ownership (group membership is not enough). I don't recommend changing the ownership or permissions under /tmp/..., but if you really must use it with only the nifi user, then change the permissions of "/tmp/ambari-qa" and its sub-paths to "755" as the hdfs or ambari-qa user.
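A minimal sketch of that permission change, run as the hdfs superuser (the path is the one from this thread; the commands are a sketch, not a recommendation):

```shell
# Hypothetical commands: inspect the directory's current mode, then
# recursively open it up to 755 so other users can traverse and read it.
hdfs dfs -ls -d /tmp/ambari-qa
hdfs dfs -chmod -R 755 /tmp/ambari-qa
```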
06-30-2017
06:41 AM
If you're using a Hadoop cluster managed with Hortonworks Ambari, then you don't have to use the --master yarn parameter, because the Spark service on an HDP cluster is installed in YARN mode by default.
06-29-2017
08:21 AM
Try a command like this: spark-shell --jars /app/spark/a.jar,/app/spark/b.jar
06-29-2017
06:41 AM
Is the Hadoop distribution version the same between the CentOS and Ubuntu machines?
06-22-2017
02:22 AM
Try changing the baseurl in hdp-utils.repo to "http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6".
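For reference, the repo file stanza might look roughly like this; only the baseurl comes from this thread, the other keys are typical yum-repo assumptions:

```
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
enabled=1
gpgcheck=0
```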
06-22-2017
02:07 AM
The best way to connect to a Hadoop cluster as a client server is to register the client server with ambari-server. If the OS versions of the ambari-server host and the client server differ, then you should set up the same version of the Hadoop libraries and config files on the client server. To handle HA services like NameNode, ResourceManager, Hive, ...etc. easily, I'd recommend using the ZooKeeper Curator framework.
06-21-2017
07:16 AM
jdbc:hive2//fqdn/10000 .... You got 'invalid URL' because that URI is malformed: it should be jdbc:hive2://fqdn:10000 (a colon after hive2, and a colon rather than a slash before the port). Also check the metastore URL setting in Ambari > Hive.
06-21-2017
07:08 AM
Thanks, prsingh. I think the curl command is the correct way to cleanly delete the old FQDN list in HST. And it works cleanly.
06-21-2017
06:47 AM
Hmm, it's not working. I ran those procedures and got the same issue: the duplicated uppercase and lowercase FQDN names still exist.
1. Back up the Ambari DB server.
2. Stop all services.
3. Stop ambari-server.
4. Stop all ambari-agents.
5. Run ambari-server update-host-names new_hosts.json.
After update-host-names completed successfully, I got the same return values from "hst list-agents".
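For reference, the mapping file passed to update-host-names is, as far as I recall, a JSON object keyed by cluster name, mapping each old hostname to its new one; the cluster and host names below are placeholders:

```json
{
  "mycluster": {
    "MASTER1.hadoop.com": "master1.hadoop.com",
    "SLAVE1.hadoop.com": "slave1.hadoop.com"
  }
}
```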
06-21-2017
02:05 AM
Is there a clean way to remove only the old FQDNs from HST? Because originally, as far as I know, the Hadoop architecture handles uppercase and lowercase FQDN names well, without any issues.
06-21-2017
01:03 AM
HDP 2.4.3, SmartSense 1.3.1.0-136. For example, I have 5 nodes and initially set up the cluster like this:

/etc/hosts
10.10.x.x MASTER1.hadoop.com master1
10.10.x.x MASTER2.hadoop.com master2
10.10.x.x SLAVE1.hadoop.com slave1
10.10.x.x SLAVE2.hadoop.com slave2
10.10.x.x SLAVE3.hadoop.com slave3

/etc/sysconfig/network
hostname=FQDN for all servers

I installed Ambari and HDP with SmartSense, and then changed every node's FQDN in /etc/hosts and /etc/sysconfig/network from uppercase to lowercase, e.g. MASTER1.hadoop.com -> master1.hadoop.com. But I got this issue in the SmartSense view:

Hosts registered in Ambari and SmartSense do not match ........ command hostname -f ........

Running "hst list-agents" also returned both forms of each name:

MASTER1.hadoop.com / master1.hadoop.com
MASTER2.hadoop.com / master2.hadoop.com
SLAVE1.hadoop.com / slave1.hadoop.com
SLAVE2.hadoop.com / slave2.hadoop.com
SLAVE3.hadoop.com / slave3.hadoop.com

How can I delete all the uppercase FQDN entries?
Labels:
- Hortonworks SmartSense
06-20-2017
12:52 AM
Did you install the Hive service, including the metastore, cleanly? Could you show me the Hive Summary page in Ambari?
06-19-2017
08:01 AM
Could you attach your hivemetastore.log from after starting the Hive service in Ambari?
06-19-2017
05:08 AM
1 Kudo
1. Connect to MySQL as root. 2. Execute these queries:

mysql> use mysql;
mysql> select User, Host, Password from user;

Normally, that returns two or three rows for the user 'hive':

hive | localhost | *encodedPassword
hive | adrien.cluster | *encodedPassword
hive | % | *encodedPassword

If you don't get results like that, then execute these queries:

mysql> create user 'hive'@'adrien.cluster' identified by 'PASSWORD';
mysql> create user 'hive'@'%' identified by 'PASSWORD';
mysql> grant all privileges on *.* to 'hive'@'adrien.cluster';
mysql> grant all privileges on *.* to 'hive'@'%';
mysql> flush privileges;

3. Run the connection test from the Hive menu in the Ambari web UI.
06-12-2017
04:38 AM
Did you check the option "dfs.namenode.acls.enabled=true"?
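If that option is enabled, HDFS ACL commands become available; a minimal sketch, where the user and path are placeholder assumptions:

```shell
# Hypothetical example: show the current ACL on a path, then grant an
# extra user rwx access (requires dfs.namenode.acls.enabled=true).
hdfs dfs -getfacl /data/shared
hdfs dfs -setfacl -m user:nifi:rwx /data/shared
```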