Member since: 09-11-2015
Posts: 269
Kudos Received: 281
Solutions: 55
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2599 | 03-15-2017 07:12 AM |
| | 1381 | 03-14-2017 07:08 PM |
| | 1665 | 03-14-2017 03:36 PM |
| | 1405 | 02-28-2017 04:32 PM |
| | 1036 | 02-28-2017 10:02 AM |
02-23-2017
03:04 AM
1 Kudo
@Poorvi Sachar What does the Atlas application log say when this request is made? Which version of HDP is this? Is it possible to attach the application log here for debugging?
02-23-2017
02:44 AM
@Bilal Arshad From the logs, it looks like the port Atlas is trying to bind to is already in use:

Exception in thread "main" MultiException[java.lang.ExceptionInInitializerError, java.net.BindException: Address already in use]

So I would recommend the following steps:

1. Stop Atlas by running the atlas_stop.py script and make sure Atlas is stopped.
2. Using the netstat command, check whether some other process is already using the Atlas port (21000), and kill that process [kill -9 <pid>].
3. Restart Atlas using atlas_start.py and check whether that helps.

If the above steps do not help, I would recommend changing the default Atlas port from 21000 to some other port (e.g. 31000) and then restarting. Let me know how it goes.
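The steps above can be sketched as a short shell session. This is only an illustration: the `atlas_stop.py`/`atlas_start.py` paths are an assumption based on a typical HDP layout (adjust for your install), and the PID extraction assumes the usual `netstat -p` output where the last column is `PID/program`.

```shell
# 1) Stop Atlas and confirm it is down (path is an assumption -- adjust
#    to wherever atlas_stop.py lives on your cluster):
#      /usr/hdp/current/atlas-server/bin/atlas_stop.py

# 2) Find the process already holding the Atlas port (21000 by default).
#    The last netstat column is "PID/program", e.g. "4321/java":
line=$(netstat -planet 2>/dev/null | grep ':21000 ' | head -n 1)
pid=${line##* }   # last whitespace-separated field, e.g. "4321/java"
pid=${pid%%/*}    # strip the program name, leaving just the PID

# 3) Kill the conflicting process (only if one was found), then restart:
#      [ -n "$pid" ] && kill -9 "$pid"
#      /usr/hdp/current/atlas-server/bin/atlas_start.py
```

The stop/kill/start commands are left commented out since they are destructive; run them one at a time and re-check the port with `netstat` after each step.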
02-21-2017
10:13 AM
@Bilal Arshad Is this still an issue?
02-21-2017
10:12 AM
@Bilal Arshad Anything on the above? Were you able to resolve this?
02-21-2017
10:12 AM
@Bilal Arshad Is this resolved now?
02-20-2017
05:06 PM
1 Kudo
The class-file major version for Java 1.8 is 52. The MySQL connector jar you are using was compiled for Java 8, so it is incompatible with Java 1.7; that is why you are seeing this issue. https://blogs.oracle.com/darcy/entry/source_target_class_file_version If you cannot upgrade to Java 1.8, you can instead get a Java 1.7-compatible MySQL connector jar. That should also work.
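A quick way to confirm which Java version a jar was built for is to read the class-file major version directly: bytes 6-7 of every `.class` file hold it (52 = Java 8, 51 = Java 7, 50 = Java 6). This is only a sketch: `mysql-connector.jar` and the class entry path are placeholders for your actual jar and one of its classes (list entries with `unzip -l`).

```shell
# Read the class-file major version of a class inside the jar.
# Bytes 6-7 (big-endian) of a .class file are the major version.
# "mysql-connector.jar" and the entry path are placeholders -- substitute
# your actual jar and a class it actually contains.
unzip -p mysql-connector.jar com/mysql/jdbc/Driver.class \
  | od -An -j 6 -N 2 -t u1 \
  | awk '{print $1 * 256 + $2}'
```

If this prints 52, the jar needs a Java 8 (or newer) runtime, which matches the error you are seeing on Java 1.7.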
02-20-2017
04:52 PM
@DIVYASREE TAMILMANI (In the context of the HDP 2.5 release.) As part of the Storm data model, Kafka topics and HBase tables are modeled as datasets. This means that whenever a Storm topology is submitted, the StormAtlasHook captures all the metadata of its spouts, bolts, and endpoints, including datasets (such as kafka_topic and hbase_table), and posts it to Atlas. Please note that there is no hook/plugin specifically for Kafka that would ingest Kafka topic metadata into Atlas. For more detailed information, please take a look at the documentation below. http://atlas.incubator.apache.org/StormAtlasHook.html http://atlas.incubator.apache.org/Bridge-Falcon.html
02-20-2017
04:40 PM
3 Kudos
@Bilal Arshad What is the Java version on your system? This looks like a compatibility issue between your local JDK and the JDK with which the Java binaries were compiled. Upgrade your JDK version and this should work.
02-20-2017
11:03 AM
What do you see in the Atlas application log? Is Atlas running? You can check whether it is listening on its port by executing "netstat -planet | grep 21000".
02-20-2017
10:21 AM
1 Kudo
@DIVYASREE TAMILMANI Currently, Atlas supports ingesting and managing metadata from the following sources:

- Hive
- Sqoop
- Storm/Kafka (limited support)
- Falcon (limited support)

For more details, please refer to this documentation: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-governance/content/ch_hdp_data_governance_overview.html