Member since: 09-10-2015
Posts: 32
Kudos Received: 29
Solutions: 3

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3596 | 10-04-2015 10:36 PM
 | 1333 | 09-30-2015 04:59 PM
 | 7942 | 09-26-2015 05:24 PM
03-29-2017 09:10 PM
https://hortonworks.com/hadoop-tutorial/manage-security-policy-hive-hbase-knox-ranger/
Labels:
- Hortonworks Data Platform (HDP)
01-23-2017 10:00 PM
I did, but it looks like the jar is not being automatically copied to the other nodes when Hive is restarted.
01-23-2017 09:09 PM
I am setting the Hive metastore DB driver using:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/hdp/current/hive-server2/lib/mysql-connector-java-5.1.38.jar
The setup output looks fine:
Using python /usr/bin/python
Setup ambari-server
Copying /usr/hdp/current/hive-server2/lib/mysql-connector-java-5.1.38.jar to /var/lib/ambari-server/resources
If you are updating existing jdbc driver jar for mysql with mysql-connector-java-5.1.38.jar. Please remove the old driver jar, from all hosts. Restarting services that need the driver, will automatically copy the new jar to the hosts.
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.
I have checked that the jar is at that path, but when I try to restart HiveServer2 I get the following error:
2017-01-23 20:59:37,414 - Error! Sorry, but we can't find jdbc driver with default name mysql-connector-java.jar in hive lib dir. So, db connection check can fail. Please run 'ambari-server setup --jdbc-db={db_name} --jdbc-driver={path_to_jdbc} on server host.'
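The restart check apparently looks for the driver under its default name, mysql-connector-java.jar, in the Hive lib dir. A minimal workaround sketch, assuming the versioned jar from the command above is present on each HiveServer2/Metastore host (the symlink approach is an assumption, not a confirmed fix):
# Give the connector the default name the restart check expects; run on
# every Hive host. Paths are taken from the post above (assumed layout).
sudo ln -sf /usr/hdp/current/hive-server2/lib/mysql-connector-java-5.1.38.jar \
  /usr/hdp/current/hive-server2/lib/mysql-connector-java.jar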
Labels:
- Apache Ambari
- Apache Hive
10-08-2016 04:17 PM
1 Kudo
Great step-by-step instructions. You can skip Step 2 and replace Step 3 with:
docker load < HDP_2.5_docker.tar.gz
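As a hedged verification step (not part of the original tip), the loaded image should then show up locally:
# The sandbox image should appear in the local image list after the load;
# its exact name/tag is an assumption and may differ.
docker images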
12-28-2015 02:16 AM
I am trying to upgrade HDP from 2.3.2 to 2.3.4 using Ambari 2.2. I am running into an issue in the Ambari UI where clicking Install Packages does nothing, as illustrated in the video below: https://www.dropbox.com/s/f9vbhp1vmy7gmg4/hdp-upgrade.mp4?dl=0
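Not from the original post, but a hedged first diagnostic while reproducing the click; the log path is the Ambari default and an assumption about this install:
# Watch for errors/stack traces while clicking "Install Packages" in the UI.
tail -f /var/log/ambari-server/ambari-server.log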
10-15-2015 02:49 PM
1 Kudo
I tried installing the ODBC driver on Mac OS X El Capitan and got "Installation failed": https://www.dropbox.com/s/9hpim6rjl5qr21m/Screenshot%202015-10-15%2007.47.06.png?dl=0 Any idea what I am doing wrong?
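A hedged diagnostic, not from the original post: on OS X the installer records each step, so the tail of the system install log usually names the failing package:
# Standard OS X installer log location.
tail -n 50 /var/log/install.log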
10-14-2015 02:47 PM
2 Kudos
Has anyone tried manually upgrading to Spark 1.5.1 on the Hortonworks Sandbox and run into any issues?
Labels:
- Apache Spark
10-05-2015 03:04 PM
Hi, I am new to HDP and Hadoop. I managed to install the HDP 2.3 sandbox on VirtualBox, tried a few sample programs, and they work fine from the sandbox. I have installed Eclipse with Scala on my Windows machine. At present I use SBT to package my application and deploy the jar to the HDP sandbox for execution. I would like to run programs from Eclipse against the HDP sandbox directly instead of packaging them every time. A sample of the code I am trying to modify:
val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]").set("spark.executor.memory","1g")
I guess I have to change local[2] to the master node / YARN cluster URL. How do I get the URL from the sandbox? Are there any other configurations that have to be done on VirtualBox or in my code?
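A hedged sketch of how to look up the cluster address from inside the sandbox; the config path is the typical HDP location and an assumption. Note that for YARN, Spark 1.x expects setMaster("yarn-client") together with HADOOP_CONF_DIR pointing at the cluster's Hadoop configs, rather than a host:port URL:
# Run inside the sandbox: print the ResourceManager address from the
# client configuration (typical HDP path; an assumption).
grep -A1 'yarn.resourcemanager.address' /etc/hadoop/conf/yarn-site.xml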
10-04-2015 11:04 PM
While upgrading HDP 2.0 to HDP 2.1 and the metastore schema from 0.12 to 0.13, I got the error: Duplicate column name 'OWNER_NAME' (state=42S21,code=1060). The metastore version in the VERSION table is 0.12, yet the 'OWNER_NAME' column already exists in the 'DBS' table. Here is the detailed error:
+---------------------------------------------+
| < HIVE-6386: Add owner filed to database >  |
+---------------------------------------------+
1 row selected (0.001 seconds)
0: jdbc:mysql://hadoop.domain> ALTER TABLE DBS ADD OWNER_NAME varchar(128)
Error: Duplicate column name 'OWNER_NAME' (state=42S21,code=1060)
Closing: 0: jdbc:mysql://hadoop.domain/hive?createDatabaseIfNotExist=true
org.apache.hadoop.hive.metastore.HiveMetaException: Upgrade FAILED! Metastore state would be inconsistent !!
org.apache.hadoop.hive.metastore.HiveMetaException: Upgrade FAILED! Metastore state would be inconsistent !!
at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:242)
at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:211)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:489)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: Schema script failed, errorcode 2
at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:377)
at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:350)
at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:237)
... 7 more
*** schemaTool failed ***
Has anyone run into the same issue? Any idea what the source of this problem is?
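Not an answer from the thread, but a hedged way to confirm the half-applied state before re-running schematool; the database name and user are assumptions:
# Check the recorded schema version and whether the column already exists.
mysql -u hive -p hive -e "SELECT SCHEMA_VERSION FROM VERSION;"
mysql -u hive -p hive -e "SHOW COLUMNS FROM DBS LIKE 'OWNER_NAME';"
If the column exists while SCHEMA_VERSION still reads 0.12, a previous upgrade attempt likely stopped partway; dropping the duplicate column (or commenting the ALTER out of the 0.12-to-0.13 script) before re-running the upgrade is one hedged remedy.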
Labels:
- Apache Hive
10-04-2015 10:57 PM
I'm not a Kerberos wizard, so I'm on a bit of a learning curve. I've followed all of the Kerberos instructions in the HDP 2.1 documentation and run into an issue where my datanodes won't start (3-node cluster). If I roll back all of the XML files to the non-Kerberos versions, I can start everything from the command line. When I shut down the cluster and roll in the Kerberos versions of the XML files, I'm able to start the namenode, but all of the datanodes refuse to start, and the only clue I have is the following:
2014-07-24 11:04:22,181 INFO datanode.DataNode (SignalLogger.java:register(91)) - registered UNIX signal handlers for [TERM, HUP, INT]
2014-07-24 11:04:22,399 WARN common.Util (Util.java:stringAsURI(56)) - Path /opt/hadoop/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration.
2014-07-24 11:04:23,055 INFO security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(894)) - Login successful for user dn/abc0123.xy.local@XYZ.COM using keytab file /etc/security/keytabs/dn.service.keytab
2014-07-24 11:04:23,210 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) - loaded properties from hadoop-metrics2.properties
2014-07-24 11:04:23,274 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(344)) - Scheduled snapshot period at 60 second(s).
2014-07-24 11:04:23,274 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - DataNode metrics system started
2014-07-24 11:04:23,279 INFO datanode.DataNode (DataNode.java:<init>(269)) - File descriptor passing is enabled.
2014-07-24 11:04:23,283 INFO datanode.DataNode (DataNode.java:<init>(280)) - Configured hostname is cvm0932.dg.local
2014-07-24 11:04:23,284 FATAL datanode.DataNode (DataNode.java:secureMain(2002)) - Exception in secureMain
java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:700)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:281)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1885)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1772)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1819)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1995)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2019)
2014-07-24 11:04:23,287 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2014-07-24 11:04:23,289 INFO datanode.DataNode (StringUtils.java:run(640)) - SHUTDOWN_MSG:
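That RuntimeException is the datanode refusing to run in secure mode without privileged resources. A minimal sketch of the settings this usually involves, assuming an HDP 2.1-era layout (paths and user names are assumptions, not taken from the post): dfs.datanode.address and dfs.datanode.http.address in hdfs-site.xml must use ports below 1024, and the datanode must be started as root via jsvc with something like the following in hadoop-env.sh:
# hadoop-env.sh -- hedged sketch for a secure (Kerberized) datanode.
# The process drops privileges to this user after binding its
# privileged ports; 'hdfs' is the typical service user (assumption).
export HADOOP_SECURE_DN_USER=hdfs
# jsvc lets the datanode start as root to bind ports < 1024;
# this location is a common HDP default (assumption).
export JSVC_HOME=/usr/lib/bigtop-utils
# PID/log directories for the secure datanode (assumptions).
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop/hdfs
export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop/hdfs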