Member since: 10-20-2017
Posts: 59
Kudos Received: 0
Solutions: 0
01-06-2019
06:59 AM
Adding the MySQL connector to the CLASSPATH solved the problem. Previously, SAM was unable to locate the driver.
01-02-2019
11:04 PM
After trying an extensive set of things, what worked was adding the path to the mysql-connector jar to the CLASSPATH in the .env file for streamline.
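As a sketch of that fix (both paths below are assumptions; substitute the real location of your mysql-connector jar and of streamline's .env file):

```shell
# Sketch of the fix above. Both paths are assumptions -- adjust them to
# your installation before using this for real.
ENV_FILE="streamline.env"                                  # stand-in for streamline's .env file
CONNECTOR_JAR="/usr/share/java/mysql-connector-java.jar"   # hypothetical jar location
echo "export CLASSPATH=\$CLASSPATH:${CONNECTOR_JAR}" >> "${ENV_FILE}"
tail -n 1 "${ENV_FILE}"    # confirm the export line landed
```

After appending the line, restart streamline so the new CLASSPATH is picked up.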
12-15-2018
06:26 AM
Executing the below command to start streamline gives an error.
./streamline-server-start.sh /etc/streamline/conf/streamline.yaml
streamline.yaml:
# Jar storage configuration
fileStorageConfiguration:
  className: com.hortonworks.registries.common.util.LocalFileSystemStorage
  properties:
    directory: /hdf/streamline/jars
dashboardConfiguration:
  url: "http://<streamline_host>:9088"
storageProviderConfiguration:
  properties:
    db.properties:
      dataSource.password: "mysql_streamline_password"
      dataSource.url: "jdbc:mysql://<mysql_host>:3306/streamline"
      dataSource.user: "streamline"
      dataSourceClassName: "com.mysql.jdbc.jdbc2.optional.MysqlDataSource"
      #dataSourceClassName: "com.mysql.jdbc.Driver"
    db.type: mysql
    queryTimeoutInSecs: 30
  providerClass: "com.hortonworks.registries.storage.impl.jdbc.JdbcStorageManager"
Error:
Exception in thread "main" com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Communication link failure: Bad handshake
at com.zaxxer.hikari.pool.HikariPool.throwPoolInitializationException(HikariPool.java:544)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:536)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:112)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:72)
at com.hortonworks.registries.storage.impl.jdbc.connection.HikariCPConnectionBuilder.prepare(HikariCPConnectionBuilder.java:49)
at com.hortonworks.registries.storage.impl.jdbc.connection.HikariCPConnectionBuilder.<init>(HikariCPConnectionBuilder.java:38)
at com.hortonworks.registries.storage.impl.jdbc.provider.QueryExecutorFactory.getHikariCPConnnectionBuilder(QueryExecutorFactory.java:79)
at com.hortonworks.registries.storage.impl.jdbc.provider.QueryExecutorFactory.get(QueryExecutorFactory.java:45)
at com.hortonworks.registries.storage.impl.jdbc.JdbcStorageManager.init(JdbcStorageManager.java:240)
at com.hortonworks.streamline.webservice.StreamlineApplication.getStorageManager(StreamlineApplication.java:194)
at com.hortonworks.streamline.webservice.StreamlineApplication.getDao(StreamlineApplication.java:183)
at com.hortonworks.streamline.webservice.StreamlineApplication.registerResources(StreamlineApplication.java:215)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:101)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:75)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:43)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
at io.dropwizard.cli.Cli.run(Cli.java:75)
at io.dropwizard.Application.run(Application.java:79)
at com.hortonworks.streamline.webservice.StreamlineApplication.main(StreamlineApplication.java:79)
Caused by: java.sql.SQLException: Communication link failure: Bad handshake
at com.mysql.jdbc.MysqlIO.init(Unknown Source)
at com.mysql.jdbc.Connection.connectionInit(Unknown Source)
at com.mysql.jdbc.jdbc2.Connection.connectionInit(Unknown Source)
at com.mysql.jdbc.Driver.connect(Unknown Source)
at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(Unknown Source)
at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(Unknown Source)
at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(Unknown Source)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:356)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:199)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:444)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:515)
... 17 more
12-12-2018
12:34 AM
When using MySQL for Streaming Analytics Manager (SAM), it fails to start with the below error.
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mysql.jdbc.jdbc2.optional.MysqlDataSource
at com.zaxxer.hikari.util.UtilityElf.createInstance(UtilityElf.java:93)
at com.zaxxer.hikari.pool.PoolBase.initializeDataSource(PoolBase.java:319)
at com.zaxxer.hikari.pool.PoolBase.<init>(PoolBase.java:114)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:105)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:72)
at com.hortonworks.registries.storage.impl.jdbc.connection.HikariCPConnectionBuilder.prepare(HikariCPConnectionBuilder.java:49)
at com.hortonworks.registries.storage.impl.jdbc.connection.HikariCPConnectionBuilder.<init>(HikariCPConnectionBuilder.java:38)
at com.hortonworks.registries.storage.impl.jdbc.provider.QueryExecutorFactory.getHikariCPConnnectionBuilder(QueryExecutorFactory.java:79)
at com.hortonworks.registries.storage.impl.jdbc.provider.QueryExecutorFactory.get(QueryExecutorFactory.java:45)
at com.hortonworks.registries.storage.impl.jdbc.JdbcStorageManager.init(JdbcStorageManager.java:240)
at com.hortonworks.streamline.webservice.StreamlineApplication.getStorageManager(StreamlineApplication.java:194)
at com.hortonworks.streamline.webservice.StreamlineApplication.getDao(StreamlineApplication.java:183)
at com.hortonworks.streamline.webservice.StreamlineApplication.registerResources(StreamlineApplication.java:215)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:101)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:75)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:43)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
at io.dropwizard.cli.Cli.run(Cli.java:75)
at io.dropwizard.Application.run(Application.java:79)
at com.hortonworks.streamline.webservice.StreamlineApplication.main(StreamlineApplication.java:79)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.jdbc2.optional.MysqlDataSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.zaxxer.hikari.util.UtilityElf.createInstance(UtilityElf.java:80)
... 19 more
12-11-2018
11:57 PM
I found that root is trying to execute the below command. Why is root executing it, and how do I make it run successfully? My service account for Schema Registry is 'registry', which is already in the sudoers file.
Execute['export JAVA_HOME=/SCRATCH/jdk1.8.0_91 ; source /usr/hdf/current/registry/conf/registry-env.sh ; /usr/hdf/current/registry/bootstrap/bootstrap-storage.sh migrate'] {'user': 'root'}
12-11-2018
10:31 PM
I installed my HDP cluster as a non-root user, with the non-root sudoers access set up as specified by Ambari (ambari-sudoer-config.txt). I then installed the HDF management pack to install Schema Registry, but it gives me the below error. Is there anything I'm missing in the sudoers config?
'/usr/hdf/current/registry/bootstrap/bootstrap-storage.sh migrate' returned 1.
sudo: no tty present and no askpass program specified
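For context, "no tty present and no askpass program specified" usually means sudo either requires a TTY or is prompting for a password for that command. A hypothetical sudoers fragment addressing both (the user name and path are assumptions; Ambari's non-root installation docs list the exact commands to allow):

```
# Hypothetical /etc/sudoers.d/ambari entries -- adjust the user and paths
Defaults:ambari !requiretty
ambari ALL=(ALL) NOPASSWD:SETENV: /usr/hdf/current/registry/bootstrap/bootstrap-storage.sh *
```

This is only a sketch; validate any sudoers change with visudo -c before relying on it.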
Labels: Schema Registry
09-18-2018
10:11 PM
I have existing HBase tables for which I build Phoenix views:
CREATE VIEW "MDS" (pk VARCHAR PRIMARY KEY, "auth_group"."id" VARCHAR, "auth_group"."name" CHAR(10));
where MDS is my HBase table and auth_group is one of the column families, with columns id and name. I have the below questions:
1. Can I use one of the columns in my column family as a PRIMARY KEY, or should it be something outside the columns in the column family? How do I build a PRIMARY KEY in such a scenario?
2. When I ran the below query it gave me the below error. How do I choose the length of CHAR?
select * from MDS;
Error: ERROR 201 (22000): Illegal data. Expected length of at least 12 bytes, but had 10 (state=22000,code=201)
3. If I set the CHAR length to some minimum number, I get the below result, where the characters are cut off (based on the number I gave) and the id column is blank. (It doesn't reflect the same values as in HBase.)
PK  id  name
1       resource_u
2       adminid_id
3       systems_id
4       usersystem
5       uaccess_id
6       numofusers
7       idofusersac
Labels: Apache HBase, Apache Phoenix
04-23-2018
04:39 AM
When data is accidentally removed from HDFS, is there a way to recover it from Trash? The trash interval is set to 360, but I don't see the files in the user's /.Trash.
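For reference, under the standard HDFS trash layout, deleted files first land in /user/<user>/.Trash/Current and later move into timestamped checkpoint directories, which are purged once older than fs.trash.interval (360 minutes here), so files deleted more than about six hours ago may already be gone. A tiny sketch (the helper function is hypothetical):

```shell
# Hypothetical helper: the "Current" trash directory for a given user,
# per the standard HDFS layout (/user/<user>/.Trash/Current).
trash_current() { echo "/user/$1/.Trash/Current"; }
trash_current hdfs    # prints /user/hdfs/.Trash/Current
```

Checkpoint directories sit next to Current/ (named by timestamp), so listing the whole .Trash directory, e.g. with hdfs dfs -ls -R /user/<user>/.Trash, is worth trying before concluding the files are gone.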
Labels: Apache Hadoop
04-04-2018
07:26 PM
@Geoffrey Shelton Okot I have downloaded the jar already. The question I have is: since I've already set up the Ambari server using a MySQL DB and the MySQL JDBC driver, would running setup again with BDB and its jar cause a conflict? Or can I set up Ambari with any number of databases, e.g. MySQL for Ambari, Derby for Oozie, BDB for Falcon, etc.?
04-04-2018
06:16 PM
Currently my Ambari server is set up with a MySQL DB and the corresponding JDBC driver. If I run setup again to specify a different DB for Falcon, will that cause inconsistency? Or can I just set up MySQL for Ambari and BDB for Falcon?
03-23-2018
12:50 AM
@Rahul Soni I tried to run the balancer as hdfs balancer -source <overloadedhost>. It ran for 3 iterations, saying it needed to transfer around 100 GB, then ended. There were no errors, but it didn't fix the imbalance.
03-21-2018
06:21 PM
I had a datanode failure due to the Java heap size, which caused a huge number of under-replicated blocks since writes happened while the node was down. I fixed the Java heap size and got the node back up. When I try to re-replicate the blocks as mentioned here, the number doesn't seem to come down even with the setrep operation running. The other thing I observed is that the data looks skewed across the datanodes:
Capacity    Blocks    DFS Used             Version
237.95 GB   2567977   222.77 GB (93.62%)   2.7.3.2.6.4.0-91
775.82 GB   2657650   244.16 GB (31.47%)   2.7.3.2.6.4.0-91
776.17 GB   2657711   244.17 GB (31.46%)   2.7.3.2.6.4.0-91
Is the skewed data interfering with the setrep operation? Is there a way I can deal with the skew and the under-replicated blocks?
03-16-2018
07:13 AM
I'm trying to submit a pyspark job from a spark2 client on HDP-2.6.4-91 like:
./bin/spark-submit script.py
But this gives me an error: NameError: global name "callable" not defined.
Labels: Apache Spark
02-28-2018
12:31 AM
I think I found the answer. The dfs.datanode.data.dir was reported inconsistent in the logs. I added a healthy datanode, balanced the cluster, then deleted the data directories from the other inconsistent nodes after taking a backup in /tmp. Restarting after that works fine now.
02-27-2018
09:12 PM
@Jay Kumar SenSharma The reason I felt my JDK was corrupt is that replacing it with a fresh copy from my home directory (the same version, and the same copy I used before) fixed the issue. That means some pieces of the existing JDK were lost, since it's the same copy I use each time. I didn't find any JDK crash report (hs_err_pid*) or JDK-level errors. And the initial issue isn't completely resolved yet: once I started my namenode after replacing the JDK as mentioned above and set the env to the right location, the datanode service on all the data nodes came down. Trying to start the service doesn't even show an error this time; it starts fine but comes down immediately without any trace of an error.
02-26-2018
11:36 PM
@Jay Kumar SenSharma Thanks for the prompt response. You are right; somehow my JDK got corrupted. I've set JAVA_HOME again and I see the services are starting. I also see my JDK getting corrupted all the time. Do you have any suggestions on the JDK permissions for the ambari user and the other services in the hadoop group?
02-26-2018
09:15 PM
My namenode service suddenly stopped. I don't see any error messages in the log either. When I tried starting it from Ambari, it says:
ulimit -c unlimited ; /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode'' returned 1
When I tried to start the namenode from the command line, it gives me the error below.
Can't find HmacSHA1 algorithm
Labels: Apache Hadoop
02-02-2018
11:01 PM
I still face the issue. I'm doing a non-root installation. I built psutil as root, which went fine, but when I try to restart the Metrics Monitor it fails.
02-02-2018
08:52 PM
I'm having an issue starting the Metrics Monitor on one of my nodes. I get the below error:
ImportError: cannot import name _common
Labels: Apache Spark
01-19-2018
01:04 AM
Is there any difference in the way high availability is configured between HDP-2.4 and 2.6 using blueprints? I tried setting up an HA cluster with the exact same JSON files as in the blog "Automate HDP installation using Ambari Blueprints – Part 3" for HDP-2.6 and 2.4, just by changing the stack version. It worked for 2.4 but seems to have issues for 2.6 during the zookeeper install. Can anyone point me to the right place for setting up an HA cluster on HDP-2.6 using blueprints?
01-18-2018
09:33 PM
With HDP-2.6, I'm facing an issue with the zookeeper-server and client install with the above config. I tried removing and re-installing, but that didn't work either.
mkdir: cannot create directory `/usr/hdp/current/zookeeper-client': File exists
01-18-2018
05:57 PM
@Jay Kumar SenSharma The unlink and re-install doesn't seem to work. It gives me an error saying "The directory already exists and it's not a symlink". As I mentioned earlier, it's being created when I hit install, but ahead of time. Is there any reason why this is happening? I'm using HDP-2.6.4 and Ambari-2.5.2.
01-18-2018
04:30 AM
@Jay Kumar SenSharma This symlink dates to the current installation, not a remnant of the previous install. Just curious why this is causing a conflict.
01-18-2018
03:32 AM
@Jay Kumar SenSharma It's a symlink. How do I handle a symlink?
01-18-2018
02:33 AM
The install part is failing at the zookeeper client install using ambari-blueprints. It says:
mkdir: cannot create directory `/usr/hdp/current/zookeeper-client': File exists
Here's the error log from the Ambari UI: zookeeper-err.txt.
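A minimal sketch of one way to clear such a conflict, assuming the path in the error is a stale symlink from an earlier install attempt (inspect before removing anything):

```shell
# If the conflicting path is a symlink left behind by an earlier install
# attempt, show where it points and remove just the link (not its target).
TARGET="${TARGET:-/usr/hdp/current/zookeeper-client}"
if [ -L "${TARGET}" ]; then
  ls -l "${TARGET}"     # show where the link points before touching it
  unlink "${TARGET}"    # removes only the symlink itself
fi
```

If the path turns out to be a real directory rather than a symlink, this does nothing, and the cause is likely different.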
Labels: Apache Ambari
01-18-2018
02:05 AM
yum -y erase hdp-select did the trick, since I had done a previous install, uninstalled, and installed again.
01-17-2018
05:33 PM
@Jay Kumar SenSharma I removed the duplicate, but the HDFS_CLIENT installation is failing with the error below, and all the installs after it are failing. I used HDP-2.6, where blueprints installs 2.6.4. Here's my host mapping: hostmap.txt. Here's the error log from the Ambari UI: errorlog.txt. And here's the complete UI output log: var-lib-ambari-agent-data-output-46.txt.
parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
01-17-2018
12:52 AM
HDP component installation is failing after it installed a few. Is there any order in which I need to specify the host groups or the component names in an HA cluster? cluster-config.txt (config.json) This is the error I see on the UI:
parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
Labels: Apache Ambari
01-15-2018
06:47 PM
To specify a specific version of HDP, I have my repo.json created as below, but I get an error saying it cannot create repositories. I'm running the curl command as the 'ambari' user (a non-root user).
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server.com>:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat6/repositories/HDP-2.6 -d @repo.json
{
  "Repositories": {
    "base_url": "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0",
    "verify_base_url": true
  }
}
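One thing worth ruling out first is a malformed repo.json. A quick sketch (assumes python3 is available on the host) that writes the body above to a file and checks it parses before the PUT:

```shell
# Write the repo.json body from above and confirm it is valid JSON.
cat > repo.json <<'EOF'
{
  "Repositories": {
    "base_url": "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0",
    "verify_base_url": true
  }
}
EOF
python3 -m json.tool repo.json > /dev/null && echo "repo.json parses"
```

If the JSON parses, the next thing to check is that the stack version and OS in the PUT URL match an existing stack on the Ambari server.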
11-14-2017
07:53 PM
Is there a difference between the kafka connector in the python module and Confluent's one? This is the github link for the one mentioned in the python module: