Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2452 | 04-27-2020 03:48 AM |
| | 4890 | 04-26-2020 06:18 PM |
| | 3977 | 04-26-2020 06:05 PM |
| | 3222 | 04-13-2020 08:53 PM |
| | 4928 | 03-31-2020 02:10 AM |
03-15-2017
11:56 AM
@Claudia Volpetti Can you please share some more of the stack trace from ambari-server.log, especially after the following lines: ERROR [main] AmbariServer:929 - Failed to run the Ambari Server
com.google.inject.ProvisionException: Guice provision errors:
If you can share the complete "ambari-server.log", that would be even better. The "/var/log/ambari-server/ambari-server.out" file will also be useful, since it looks like the server is failing immediately, so we may find some clues inside the .out file as well.
03-15-2017
12:57 AM
1 Kudo
You will need to register a new version first. Are you defining the "Name" properly in the Ambari UI while registering a new HDP 2.5 version?
Ambari UI --> Admin (tab) --> "Stack and Versions" --> "Versions" (tab) --> "Manage Versions" (button) --> "Register Version" --> from the drop-down choose "HDP 2.5 Default Version Definition".
Then on the right side of this page you will see "Name: HDP-2.5"; there you should define the version as 3.0, as shown in the screenshot. Now you should be able to install a cluster on this newly registered version.
03-15-2017
12:32 AM
Sometimes the ORC input files have their columns typed as VARCHAR instead of STRING. This can be identified easily by running the ORC file dump utility on the input files (hive --orcfiledump <location-of-orc-file>): https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-ORCFileDumpUtility
It often happens that the input files are generated by a MapReduce job; in that case it is recommended to check the MapReduce program so that it consistently generates files with STRING-typed columns.
03-14-2017
05:16 PM
@Sai Deepthi Can you please check if you have done the following before executing the job? ADD JAR /usr/hdp/2.5.0.0-1245/hive2/lib/json-serde-1.3.8-SNAPSHOT-jar-with-dependencies.jar;
Also make sure that the JAR exists and has the correct read permission: ls -lart /usr/hdp/2.5.0.0-1245/hive2/lib/json-serde-1.3.8-SNAPSHOT-jar-with-dependencies.jar
Also, I noticed that you are using "org.openx.data.jsonserde.JsonSerDe"; you could try "org.apache.hive.hcatalog.data.JsonSerDe" instead, in case it fits your use. The error was: java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe not found
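A ClassNotFoundException like the one above just means the SerDe class is not visible on the classpath. As a rough sketch (the two SerDe class names come from the error above; everything else is illustrative), you can check from plain Java whether a class resolves:

```java
public class SerdeCheck {
    // Returns true if the named class can be loaded from the current classpath.
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] serdes = {
            "org.openx.data.jsonserde.JsonSerDe",
            "org.apache.hive.hcatalog.data.JsonSerDe"
        };
        for (String name : serdes) {
            System.out.println(name + " -> " + (isOnClasspath(name) ? "FOUND" : "MISSING"));
        }
    }
}
```

Run it with the SerDe JAR on the CLASSPATH; any class that prints MISSING here will fail the same way in Hive until the JAR is added via ADD JAR.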
03-14-2017
03:49 PM
@Juan Manuel Nieto
Also, can you specify "--config" to make sure that it is picking up the right configuration files? Example: # hive --config /etc/hive/conf.server --service metatool -listFSRoot
03-14-2017
03:33 PM
@Juan Manuel Nieto
The result is good, and it indicates that the username you passed to the Java code above is correct. So it looks like when you run the command "hive --service metatool -listFSRoot", it is taking the password from somewhere else, or perhaps getting an incorrect password. Can you please check the "javax.jdo.option.ConnectionPassword" and "javax.jdo.option.ConnectionUserName" properties in your hive-site.xml?
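To double-check those two properties, here is a minimal sketch using only the JDK's built-in XML parser. The inline XML fragment below is a hypothetical sample; in practice you would parse your actual /etc/hive/conf/hive-site.xml instead:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HiveSiteCheck {
    // Parse a hive-site.xml style document and return its name -> value pairs.
    public static Map<String, String> readProps(String xml) {
        Map<String, String> props = new HashMap<>();
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = doc.getElementsByTagName("property");
            for (int i = 0; i < nodes.getLength(); i++) {
                Element p = (Element) nodes.item(i);
                props.put(p.getElementsByTagName("name").item(0).getTextContent().trim(),
                          p.getElementsByTagName("value").item(0).getTextContent().trim());
            }
        } catch (Exception e) {
            throw new RuntimeException("failed to parse hive-site.xml content", e);
        }
        return props;
    }

    public static void main(String[] args) {
        // Hypothetical sample fragment; values here are placeholders.
        String sample =
            "<configuration>" +
            "<property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>" +
            "<property><name>javax.jdo.option.ConnectionPassword</name><value>secret</value></property>" +
            "</configuration>";
        Map<String, String> props = readProps(sample);
        System.out.println("user = " + props.get("javax.jdo.option.ConnectionUserName"));
        // prints: user = hive
    }
}
```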
03-14-2017
02:35 PM
@Juan Manuel Nieto
As this is basically a JDBC call that is failing, it will be really useful to test the same call using the Oracle JDBC driver that you are using. Hence I have modified the code "JDBCVersion.java" and attached it here:
import java.sql.*;
import oracle.jdbc.driver.*;
public class JDBCVersion {
public static void main (String args []) throws SQLException {
String connection_URL = "jdbc:oracle:thin:@(DESCRIPTION_LIST=(FAILOVER=ON)(LOAD_BALANCE=OFF)(DESCRIPTION=(RETRY_COUNT=3)(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=exa02-1.int)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=exa02-2.int)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=exa02-3.int)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=SD0MISC)))(DESCRIPTION=(RETRY_COUNT=3)(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=exa01-1.int)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=exa01-2.int)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=exa01-2.int)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=DEV01))))";
String dbUsername = "ambari";
String dbPassword = "bigdata";
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
Connection conn = DriverManager.getConnection(connection_URL, dbUsername, dbPassword);
DatabaseMetaData meta = conn.getMetaData();
System.out.println("JDBC driver version is " + meta.getDriverVersion());
}
}
The only changes you will need to make in the above code are dbUsername and dbPassword; I have already put in the same "connection_URL" that you mentioned above.
In order to test it, please do the following:
$ mkdir /tmp/TestOracle
$ cp -f ~/Downloads/ojdbc7.jar /tmp/TestOracle
$ cp -f ~/Desktop/JDBCVersion.java /tmp/TestOracle
$ cd /tmp/TestOracle
$ export CLASSPATH=/tmp/TestOracle/ojdbc7.jar:.:
$ javac JDBCVersion.java
$ java JDBCVersion
Attachment: jdbcversion.zip
03-14-2017
02:02 PM
@rama By default you should see the default File View instance. If not, check the "/var/log/ambari-server/ambari-server.log" file to see whether your default File View instance got deployed. You should see some entries like the following:
INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/files-2.4.1.0.22.jar
INFO [main] ViewRegistry:1747 - setting up logging for view FILES{1.0.0} as per property file view.log4j.properties
INFO [main] ViewRegistry:1811 - Auto creating instance of view FILES for cluster ClusterDemo.
INFO [main] ViewRegistry:1689 - View deployed: FILES{1.0.0}.
If you do not see those entries, it means the File View is not deployed, so you need to find out whether the view JAR is present in your Ambari installation. Please check whether the File View JAR is in the following directory (the version might be different in your case):
# ls -l /var/lib/ambari-server/resources/views/files-2.2.2.xxx.jar
-rwxrwxrwx 1 root root 42287295 Dec 2 11:50 /var/lib/ambari-server/resources/views/files-2.2.2.xxx.jar
Also check that the user who runs the Ambari Server has read permission on this JAR. Once Ambari deploys the views properly, you should see directories created in the "work" folder, as follows:
# ls -l /var/lib/ambari-server/resources/views/work/ | grep -i FILES
drwxr-xr-x 7 root root 4096 Dec 2 11:52 FILES{1.0.0}
You can try deleting the "/var/lib/ambari-server/resources/views/work" directory and then restarting the Ambari server, so that it recreates the "work" directory and extracts the view JARs there.
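Those existence and permission checks can also be scripted. Here is a minimal sketch; the JAR path below is the hypothetical one from the listing above, so adjust the version suffix to match your installation:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ViewJarCheck {
    // Returns a short status for the given path: MISSING, UNREADABLE, or OK.
    public static String status(Path jar) {
        if (!Files.exists(jar)) return "MISSING";
        if (!Files.isReadable(jar)) return "UNREADABLE";
        return "OK";
    }

    public static void main(String[] args) {
        // Hypothetical path; the actual version suffix differs per Ambari release.
        Path jar = Paths.get("/var/lib/ambari-server/resources/views/files-2.2.2.xxx.jar");
        System.out.println(jar + " -> " + status(jar));
    }
}
```

Anything other than OK for the File View JAR would explain why the view never shows up in the Ambari UI.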
03-14-2017
11:30 AM
@Juan Manuel Nieto As you are getting the error from the Oracle side, your JDBC driver is passing the connection string and credentials to the DB, but the DB is rejecting them due to an incorrect username/password, with error code ORA-01017:
java.sql.SQLException: ORA-01017: invalid username/password; logon denied
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
So it will be best to try some standalone Java program (or a native client like the sqlplus client) to validate whether your DB username and password are correct, or whether they have expired. Note that the credentials are case sensitive. We should first isolate whether you are entering valid credentials. The following link provides some generic tips from the Oracle side: http://www.dba-oracle.com/t_ora_01017.htm
You can try the following Java code to validate that you are able to connect to the Oracle DB. You only need to change the DB connection URL, dbUsername, and dbPassword in the line "jdbc:oracle:thin:@host:port:sid","scott","tiger":
vi /tmp/JDBCVersion.java
import java.sql.*;
import oracle.jdbc.driver.*;
public class JDBCVersion {
public static void main (String args []) throws SQLException {
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@host:port:sid","scott","tiger");
DatabaseMetaData meta = conn.getMetaData();
System.out.println("JDBC driver version is " + meta.getDriverVersion());
}
}
How to compile and run:
cd /tmp
ls -l /tmp/JDBCVersion.java
export CLASSPATH=/PATH/TO/ojdbc7.jar:.:
javac JDBCVersion.java
java JDBCVersion
03-13-2017
08:33 AM
@Harold Allen Badilla
Place the Microsoft MSSQL JDBC driver inside "/usr/hdp/current/sqoop-client/lib/" and then try again: cp -f sqljdbc4.jar /usr/hdp/current/sqoop-client/lib/
You can download the MSSQL Server JDBC driver from the following link, based on your SQL Server database version: https://www.microsoft.com/en-in/download/details.aspx?id=11774