Member since 04-08-2016 | 38 Posts | 5 Kudos Received | 0 Solutions
03-28-2019
10:06 AM
Hi all, By changing the access path, we get a timeout error from HBase. The actual rowkey is composed of 4 parts, let's say: A_B_C_D. In some cases we need to access the data with a rowkey based on the following schema: A_B_D_C. What is the best practice for handling these access-path changes with respect to the rowkey? Regards
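For reference, one common way to serve a second access path like this (not necessarily the right answer for this case) is to keep a second table keyed by the alternate layout and write both keys at ingest time, so prefix scans on A_B_D stay cheap instead of forcing a filtered full scan of the main table. A minimal sketch with the standard HBase Java client; the table names events / events_by_d and the column family cf are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DualRowkeyWriter {
    // Hypothetical table names and column family, for illustration only.
    private static final TableName MAIN  = TableName.valueOf("events");       // rowkey A_B_C_D
    private static final TableName INDEX = TableName.valueOf("events_by_d");  // rowkey A_B_D_C
    private static final byte[] CF = Bytes.toBytes("cf");

    public static void main(String[] args) throws Exception {
        String a = "A", b = "B", c = "C", d = "D";
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table main = conn.getTable(MAIN);
             Table index = conn.getTable(INDEX)) {
            byte[] value = Bytes.toBytes("payload");
            // Primary layout: A_B_C_D
            Put p1 = new Put(Bytes.toBytes(String.join("_", a, b, c, d)));
            p1.addColumn(CF, Bytes.toBytes("v"), value);
            main.put(p1);
            // Secondary layout: A_B_D_C, written at the same time so prefix
            // scans on A_B_D are served directly by the index table.
            Put p2 = new Put(Bytes.toBytes(String.join("_", a, b, d, c)));
            p2.addColumn(CF, Bytes.toBytes("v"), value);
            index.put(p2);
        }
    }
}

The trade-off is double writes and keeping the two tables consistent, versus running server-side filtered scans over the A_B_C_D layout.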
11-21-2017
10:48 AM
Hi All, If we use a Docker container to deliver our Spark ML code on a multi-node Hadoop cluster, does this impact the parallel execution of the Spark jobs? How will the driver then communicate with the executors and the ResourceManager? Does everything remain the same as if we don't use Docker? Overall, what would be the impact, if any, of delivering Spark code within a Docker container? Thanks
09-21-2017
11:56 AM
Hi, Thanks for your response, but this is not exactly what I'm asking. I know that we can search based on taxonomy. For instance, we can associate "Customer" with a Hive table and then look for all objects related to "Customer", and we will get back the Hive table name. I'm not looking for the Hive table name, but rather for the content of the related Hive table.
09-20-2017
12:29 PM
Hi all, I'm wondering if we can tag using a business taxonomy at the data level. Let's say we create a business taxonomy, then we tag a Hive column with this taxonomy. Can we then search the data using the business tag associated with the Hive column? Thanks, Regards
07-13-2017
08:11 AM
Thank you for your response. I think I already did all of the above, but I will verify. One point that I don't understand is that the import makes an ANONYMOUS connection to the Kafka topic. I'm running the import as the root user; might this explain the anonymous connection?
07-11-2017
09:24 AM
I can see an error in Ranger:
-- 07/11/2017 11:19:34 AM ANONYMOUS hws_cluster_kafka kafka ATLAS_HOOK topic describe Denied ranger-acl
-- 07/11/2017 11:19:34 AM ANONYMOUS hws_cluster_kafka kafka ATLAS_ENTITIES topic describe Denied ranger-acl
Might this be the source of the error?
06-28-2017
02:17 PM
Hi, I'm on HDP 2.6 (a fresh install). All the services are up and running. Kafka is running and the topics are there:

[root@ambariserver bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
ATLAS_ENTITIES
ATLAS_HOOK
__consumer_offsets
…

As I can't see in Atlas the new tables that I created in Hive, I used import-hive.sh to synchronize Hive and Atlas. The script ended in an error due to a timeout. Can someone explain what the synchronization steps are?

[root@ambariserver hook-bin]# ./import-hive.sh
Atlas Log Dir = /usr/hdp/current/atlas-server/logs
Using Hive configuration directory [/etc/hive/conf]
Log file for import is /usr/hdp/current/atlas-server/logs/import-hive.log
log4j:WARN Continuable parsing error 88 and column 23
log4j:WARN The content of element type "log4j:configuration" must match "(renderer*,throwableRenderer?,appender*,plugin*,(category|logger)*,root?,(categoryFactory|loggerFactory)?)".
2017-06-28 10:02:28,825 INFO - [main:] ~ Looking for atlas-application.properties in classpath (ApplicationProperties:78)
2017-06-28 10:02:28,831 INFO - [main:] ~ Loading atlas-application.properties from file:/etc/hive/2.6.0.3-8/0/atlas-application.properties (ApplicationProperties:91)
2017-06-28 10:02:28,898 DEBUG - [main:] ~ Configuration loaded: (ApplicationProperties:104)
2017-06-28 10:02:28,898 DEBUG - [main:] ~ atlas.authentication.method.kerberos = False (ApplicationProperties:107)
2017-06-28 10:02:28,902 DEBUG - [main:] ~ atlas.cluster.name = hws_cluster (ApplicationProperties:107)
2017-06-28 10:02:28,902 DEBUG - [main:] ~ atlas.hook.hive.keepAliveTime = 10 (ApplicationProperties:107)
2017-06-28 10:02:28,902 DEBUG - [main:] ~ atlas.hook.hive.maxThreads = 5 (ApplicationProperties:107)
2017-06-28 10:02:28,902 DEBUG - [main:] ~ atlas.hook.hive.minThreads = 5 (ApplicationProperties:107)
2017-06-28 10:02:28,902 DEBUG - [main:] ~ atlas.hook.hive.numRetries = 3 (ApplicationProperties:107)
2017-06-28 10:02:28,902 DEBUG - [main:] ~ atlas.hook.hive.queueSize = 1000 (ApplicationProperties:107)
2017-06-28 10:02:28,903 DEBUG - [main:] ~ atlas.hook.hive.synchronous = false (ApplicationProperties:107)
2017-06-28 10:02:28,903 DEBUG - [main:] ~ atlas.kafka.bootstrap.servers = ambariserver.com:6667 (ApplicationProperties:107)
2017-06-28 10:02:28,903 DEBUG - [main:] ~ atlas.kafka.hook.group.id = atlas (ApplicationProperties:107)
2017-06-28 10:02:28,903 DEBUG - [main:] ~ atlas.kafka.zookeeper.connect = ambariserver.com:2181 (ApplicationProperties:107)
2017-06-28 10:02:28,903 DEBUG - [main:] ~ atlas.kafka.zookeeper.connection.timeout.ms = 30000 (ApplicationProperties:107)
2017-06-28 10:02:28,903 DEBUG - [main:] ~ atlas.kafka.zookeeper.session.timeout.ms = 60000 (ApplicationProperties:107)
2017-06-28 10:02:28,906 DEBUG - [main:] ~ atlas.kafka.zookeeper.sync.time.ms = 20 (ApplicationProperties:107)
2017-06-28 10:02:28,906 DEBUG - [main:] ~ atlas.notification.create.topics = True (ApplicationProperties:107)
2017-06-28 10:02:28,906 DEBUG - [main:] ~ atlas.notification.replicas = 1 (ApplicationProperties:107)
2017-06-28 10:02:28,906 DEBUG - [main:] ~ atlas.notification.topics = [ATLAS_HOOK, ATLAS_ENTITIES] (ApplicationProperties:107)
2017-06-28 10:02:28,906 DEBUG - [main:] ~ atlas.rest.address = http://ambariserver.com:21000 (ApplicationProperties:107)
2017-06-28 10:02:28,908 DEBUG - [main:] ~ ==> InMemoryJAASConfiguration.init() (InMemoryJAASConfiguration:173)
2017-06-28 10:02:28,910 DEBUG - [main:] ~ ==> InMemoryJAASConfiguration.init() (InMemoryJAASConfiguration:186)
2017-06-28 10:02:28,919 DEBUG - [main:] ~ ==> InMemoryJAASConfiguration.initialize() (InMemoryJAASConfiguration:243)
2017-06-28 10:02:28,919 DEBUG - [main:] ~ <== InMemoryJAASConfiguration.initialize({}) (InMemoryJAASConfiguration:370)
2017-06-28 10:02:28,920 DEBUG - [main:] ~ <== InMemoryJAASConfiguration.init() (InMemoryJAASConfiguration:195)
2017-06-28 10:02:28,920 DEBUG - [main:] ~ <== InMemoryJAASConfiguration.init() (InMemoryJAASConfiguration:182)
Enter username for atlas :- admin
Enter password for atlas :- admin
2017-06-28 10:02:42,769 INFO - [main:] ~ Client has only one service URL, will use that for all actions: http://ambariserver.com:21000 (AtlasBaseClient:201)
2017-06-28 10:02:43,624 WARN - [main:] ~ Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (NativeCodeLoader:62)
2017-06-28 10:02:43,842 INFO - [main:] ~ Importing hive metadata (HiveMetaStoreBridge:133)
2017-06-28 10:02:43,845 DEBUG - [main:] ~ Getting reference for database default (HiveMetaStoreBridge:227)
2017-06-28 10:02:43,847 DEBUG - [main:] ~ Using resource http://ambariserver.com:21000/api/atlas/entities?type=hive_db&property=qualifiedName&value=default@hws_cluster for 0 times (AtlasBaseClient:413)
2017-06-28 10:02:43,848 DEBUG - [main:] ~ Calling API [ GET : api/atlas/entities ] (AtlasBaseClient:295)
2017-06-28 10:02:43,894 DEBUG - [main:] ~ API http://ambariserver.com:21000/api/atlas/entities?type=hive_db&property=qualifiedName&value=default@hws_cluster returned status 200 (AtlasBaseClient:303)
2017-06-28 10:02:43,902 INFO - [main:] ~ Response = {"requestId":"pool-2-thread-5 - 498099ab-61ec-4383-8464-150adedd27bf","definition":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"bd119d9a-d111-4755-b3d5-b7d0e6329f99","version":0,"typeName":"hive_db","state":"ACTIVE"},"typeName":"hive_db","values":{"name":"default","location":"hdfs:\/\/ambariserver.com:8020\/apps\/hive\/warehouse","description":"Default Hive database","ownerType":{"value":"ROLE","ordinal":2},"qualifiedName":"default@hws_cluster","owner":"public","clusterName":"hws_cluster","parameters":null},"traitNames":[],"traits":{},"systemAttributes":{"createdBy":"ambari-qa","modifiedBy":"admin","createdTime":"2017-05-29T15:28:51.528Z","modifiedTime":"2017-06-28T13:59:34.905Z"}}} (AtlasBaseClient:315)
2017-06-28 10:02:44,482 INFO - [main:] ~ Database default is already registered with id bd119d9a-d111-4755-b3d5-b7d0e6329f99. Updating it. (HiveMetaStoreBridge:173)
2017-06-28 10:02:44,482 INFO - [main:] ~ Importing objects from databaseName : default (HiveMetaStoreBridge:182)
2017-06-28 10:02:44,482 DEBUG - [main:] ~ updating instance of type hive_db (HiveMetaStoreBridge:521)
2017-06-28 10:02:44,494 DEBUG - [main:] ~ Updating entity hive_db = { "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference", "id":{ "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id", "id":"bd119d9a-d111-4755-b3d5-b7d0e6329f99", "version":0, "typeName":"hive_db", "state":"ACTIVE" }, "typeName":"hive_db", "values":{ "name":"default", "location":"hdfs://ambariserver.com:8020/apps/hive/warehouse", "description":"Default Hive database", "ownerType":2, "qualifiedName":"default@hws_cluster", "owner":"public", "clusterName":"hws_cluster", "parameters":{ } }, "traitNames":[ ], "traits":{ }, "systemAttributes":{ "createdBy":"ambari-qa", "modifiedBy":"admin", "createdTime":"2017-05-29T15:28:51.528Z", "modifiedTime":"2017-06-28T13:59:34.905Z" } } (HiveMetaStoreBridge:524)
2017-06-28 10:02:44,496 DEBUG - [main:] ~ Updating entity id bd119d9a-d111-4755-b3d5-b7d0e6329f99 with { "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference", "id":{ "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id", "id":"bd119d9a-d111-4755-b3d5-b7d0e6329f99", "version":0, "typeName":"hive_db", "state":"ACTIVE" }, "typeName":"hive_db", "values":{ "name":"default", "location":"hdfs://ambariserver.com:8020/apps/hive/warehouse", "description":"Default Hive database", "ownerType":2, "qualifiedName":"default@hws_cluster", "owner":"public", "clusterName":"hws_cluster", "parameters":{ } }, "traitNames":[ ], "traits":{ }, "systemAttributes":{ "createdBy":"ambari-qa", "modifiedBy":"admin", "createdTime":"2017-05-29T15:28:51.528Z", "modifiedTime":"2017-06-28T13:59:34.905Z" } } (AtlasClient:582)
2017-06-28 10:02:44,496 DEBUG - [main:] ~ Calling API [ POST : api/atlas/entities ] <== { "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference", "id":{ "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id", "id":"bd119d9a-d111-4755-b3d5-b7d0e6329f99", "version":0, "typeName":"hive_db", "state":"ACTIVE" }, "typeName":"hive_db", "values":{ "name":"default", "location":"hdfs://ambariserver.com:8020/apps/hive/warehouse", "description":"Default Hive database", "ownerType":2, "qualifiedName":"default@hws_cluster", "owner":"public", "clusterName":"hws_cluster", "parameters":{ } }, "traitNames":[ ], "traits":{ }, "systemAttributes":{ "createdBy":"ambari-qa", "modifiedBy":"admin", "createdTime":"2017-05-29T15:28:51.528Z", "modifiedTime":"2017-06-28T13:59:34.905Z" } } (AtlasBaseClient:295)
Exception in thread "main" org.apache.atlas.hook.AtlasHookException: HiveMetaStoreBridge.main() failed.
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:650)
Caused by: com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:81)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:623)
    at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:297)
    at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:287)
    at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:429)
    at org.apache.atlas.AtlasClient.callAPIWithBodyAndParams(AtlasClient.java:1006)
    at org.apache.atlas.AtlasClient.updateEntity(AtlasClient.java:583)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.updateInstance(HiveMetaStoreBridge.java:526)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerDatabase(HiveMetaStoreBridge.java:175)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:140)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:134)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:647)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 15 more
Failed to import Hive Data Model!!!
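For context on the failure itself: the stack trace is a plain client-side HTTP read timeout on the POST to api/atlas/entities, i.e. the Atlas server did not answer within the client's read-timeout window. A minimal JDK-only sketch, using HttpURLConnection against the same URL purely for illustration (this is not the actual Atlas client code), shows where such a timeout comes from:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://ambariserver.com:21000/api/atlas/entities");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5_000);  // time allowed to establish the TCP connection
        conn.setReadTimeout(60_000);    // time allowed to wait for the response; expiring here
                                        // raises java.net.SocketTimeoutException: Read timed out
        System.out.println("HTTP " + conn.getResponseCode());
    }
}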
06-18-2017
02:32 PM
Hi, I get an error during this step of the tutorial: /usr/maven/bin/mvn clean package -DskipTests. The error:
[storm@ambariserver iot-truck-streaming]$ sudo /usr/maven/bin/mvn -e clean package -DskipTests
[INFO] Error stacktraces are turned on.
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] transport-domain
[INFO] stream-simulator
[INFO] storm-streaming
[INFO] storm-demo-webapp
[INFO] storm-kafka-0.8-plus
[INFO] Storm Demo Parent Project
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building transport-domain 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ transport-domain ---
[INFO] Deleting /home/storm/iot-truck-streaming/transport-domain/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ transport-domain ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /home/storm/iot-truck-streaming/transport-domain/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ transport-domain ---
[INFO] Changes detected - recompiling the module!
[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 3 source files to /home/storm/iot-truck-streaming/transport-domain/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK?
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] transport-domain ................................... FAILURE [ 0.602 s]
[INFO] stream-simulator ................................... SKIPPED
[INFO] storm-streaming .................................... SKIPPED
[INFO] storm-demo-webapp .................................. SKIPPED
[INFO] storm-kafka-0.8-plus ............................... SKIPPED
[INFO] Storm Demo Parent Project .......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.723 s
[INFO] Finished at: 2017-06-18T10:26:01-04:00
[INFO] Final Memory: 8M/236M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project transport-domain: Compilation failure
[ERROR] No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK?
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project transport-domain: Compilation failure
No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK?
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.compiler.CompilationFailureException: Compilation failure
No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK?
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:858)
at org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:129)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
... 20 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

My setup:

[storm@ambariserver iot-truck-streaming]$ echo $JAVA_HOME
/opt/jdk1.8.0_131
[storm@ambariserver iot-truck-streaming]$ mvn -version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /usr/maven
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /opt/jdk1.8.0_131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-514.21.1.el7.x86_64", arch: "amd64", family: "unix"

Per my understanding, Maven's ${java.home} points to the JRE rather than to JAVA_HOME: ${java.home} specifies the path to the current JRE, so relative paths are needed to reach the JDK tools, for example ${java.home}/../bin/java.exe. Do you have any suggestion please?
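A quick way to confirm the "running on a JRE rather than a JDK" diagnosis is to ask the JVM that Maven actually launches whether it ships a compiler. This is a minimal JDK-only sketch for checking, not part of the tutorial:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
    public static void main(String[] args) {
        // Prints the directory the running JVM comes from; this is what Maven's ${java.home}
        // resolves to, and under a JDK it typically ends in /jre (e.g. /opt/jdk1.8.0_131/jre).
        System.out.println("java.home = " + System.getProperty("java.home"));
        // Returns a compiler instance under a full JDK and null under a plain JRE,
        // which is exactly the condition behind "No compiler is provided in this environment".
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        System.out.println("system compiler = " + compiler);
    }
}

Also worth noting: running mvn through sudo typically drops the caller's JAVA_HOME, so the sudo'd build may end up on a different JVM than the one reported by mvn -version above.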
06-13-2017
04:09 PM
Yes exactly. When I did "hdfs dfs -ls /" it gave an error pointing to the bad host. So I found the error and fixed it by modifying fs.defaultFS.
06-13-2017
01:33 PM
Hi, I'm on HDP 2.6. When I try to start the NameNode I get the error below:
java.io.IOException: No FileSystem for scheme: http
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2786)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
and when I try an ls:
[hdfs@ambariserver ambari-agent]$ hdfs dfs -ls /
ls: No FileSystem for scheme: http
Any suggestion? Thanks
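The error usually means fs.defaultFS in core-site.xml carries an http:// URL rather than an hdfs:// one, which matches the fix mentioned in the follow-up above (modifying fs.defaultFS). A minimal sketch that surfaces the offending value, assuming hadoop-common on the classpath and the usual /etc/hadoop/conf location:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DefaultFsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml")); // assumed config location
        // A value like http://... here reproduces "No FileSystem for scheme: http";
        // it should look like hdfs://<namenode-host>:8020.
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        FileSystem fs = FileSystem.get(conf); // throws the same IOException if the scheme has no implementation
        System.out.println("Connected to " + fs.getUri());
    }
}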
06-12-2017
08:29 AM
Hi, Has anyone installed NiFi in standalone mode, without Ambari, on an HDP cluster? Thanks, Regards
06-12-2017
08:12 AM
I found a workaround by adding a mapping between the public IP and the hostname in /etc/hosts. But this is not a good solution, as the public IP address changes at each reboot!
06-09-2017
01:28 PM
Hi all, After installing HDP 2.6 on AWS, I can connect to the Ambari server and start all the services, but when I try to connect to any UI, for example the NameNode UI on 50070 using the public IP address, I get a connection refused (ERR_CONNECTION_REFUSED). I opened all the ports on my VM to be sure this is not an issue related to the VM's accessibility. On the VM, netstat -anp | grep 50070 shows:

tcp 0 0 127.0.0.1:60294 127.0.0.1:50070 TIME_WAIT -
tcp 0 0 127.0.0.1:60310 127.0.0.1:50070 TIME_WAIT -
tcp 0 0 127.0.0.1:50070 127.0.0.1:60613 ESTABLISHED 5369/java
tcp 0 0 127.0.0.1:50070 127.0.0.1:60612 ESTABLISHED 5369/java
tcp 0 0 127.0.0.1:50070 127.0.0.1:60646 TIME_WAIT -
tcp 0 0 127.0.0.1:60292 127.0.0.1:50070 TIME_WAIT -

My /etc/hosts:
127.0.0.1 ambariserver.com localhost localhost.localdomain localhost4 localhost4.localdomain4

And in the HDFS NameNode log:
STARTUP_MSG: host = ambariserver.com/127.0.0.1

Thanks for your help
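The netstat output (everything on 127.0.0.1:50070) and the startup line host = ambariserver.com/127.0.0.1 suggest the hostname resolves to loopback because of the 127.0.0.1 entry in /etc/hosts, so the NameNode web UI only listens on localhost. A small JDK-only sketch to confirm the resolution, run on the VM:

import java.net.InetAddress;

public class HostResolutionCheck {
    public static void main(String[] args) throws Exception {
        // With "127.0.0.1 ambariserver.com ..." in /etc/hosts this prints 127.0.0.1,
        // which would explain why the UI is reachable only on loopback; mapping the
        // hostname to the VM's private IP instead would let the HTTP server bind
        // on an address reachable from outside.
        InetAddress addr = InetAddress.getByName("ambariserver.com");
        System.out.println("ambariserver.com resolves to " + addr.getHostAddress());
    }
}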
06-01-2017
12:30 PM
Hi, I have a question regarding the setup of a development environment. I'm using AWS, where I installed the latest version of Hortonworks, and I want to do some POCs around Kafka and Spark Streaming. As access to AWS is not free, I want to create a dev environment on my local laptop based on PyCharm and Spark 1.6. Usually I use a Maven repository, but I'm looking for a simpler solution. Can this work:
1. Install locally the exact Spark version used on AWS
2. Set up PyCharm to point to the local Spark
3. Do the development locally, then deploy and run the code on AWS
Thanks
05-28-2017
02:34 PM
Hi, I have an issue configuring SSH for my installation on AWS. I created a user, ambari:
useradd ambari
echo "ambari ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ambari
and generated a key pair:
ssh-keygen -t rsa
When I try to SSH with my public IP address (ssh ambari@52.31.236.46) I get an error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic). With debug:
[ambari@ip-172-31-21-232 ~]$ ssh -v ambari@52.31.236.46
OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: Connecting to 52.31.236.46 [52.31.236.46] port 22.
debug1: Connection established.
debug1: identity file /home/ambari/.ssh/id_rsa type 1
debug1: identity file /home/ambari/.ssh/id_rsa-cert type -1
debug1: identity file /home/ambari/.ssh/id_dsa type -1
debug1: identity file /home/ambari/.ssh/id_dsa-cert type -1
debug1: identity file /home/ambari/.ssh/id_ecdsa type -1
debug1: identity file /home/ambari/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/ambari/.ssh/id_ed25519 type -1
debug1: identity file /home/ambari/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1
debug1: match: OpenSSH_6.6.1 pat OpenSSH_6.6.1* compat 0x04000000
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5-etm@openssh.com none
debug1: kex: client->server aes128-ctr hmac-md5-etm@openssh.com none
debug1: kex: curve25519-sha256@libssh.org need=16 dh_need=16
debug1: kex: curve25519-sha256@libssh.org need=16 dh_need=16
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 3b:a5:43:31:83:2d:57:52:cb:c0:0a:b4:a9:91:f1:9c
debug1: Host '52.31.236.46' is known and matches the ECDSA host key.
debug1: Found key in /home/ambari/.ssh/known_hosts:1
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure. Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:1001)
debug1: Unspecified GSS failure. Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:1001)
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/ambari/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
debug1: Trying private key: /home/ambari/.ssh/id_dsa
debug1: Trying private key: /home/ambari/.ssh/id_ecdsa
debug1: Trying private key: /home/ambari/.ssh/id_ed25519
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

Any suggestion on how to fix this issue? Thanks
05-27-2017
02:25 PM
Hi All, I'm installing Hortonworks 2.5 and Ambari 2.5.0.3 on AWS, RedHat 7. During the Ambari setup, I get an error about a missing common module:
/usr/sbin/ambari-server.py setup --databasehost=localhost --databasename=ambari --databaseusername=ambari --postgresschema=ambari --databasepassword=ambari --databaseport=5432 --database=postgres -s
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 29, in <module>
from ambari_commons.exceptions import FatalException, NonFatalException
ImportError: No module named ambari_commons.exceptions
I have installed Hortonworks several times on CentOS 6 and never got such an error. Any help will be appreciated. Thanks
01-25-2017
03:51 PM
Hi, Is there any way to keep track of all Ambari alerts? Does Ambari keep the alerts? Thanks, Regards, Farhad
09-09-2016
12:33 PM
Hi, After failing over to the standby NameNode, my Solr doesn't work anymore. In solrconfig.xml we give the NameNode hostname in <str name="solr.hdfs.home">. I replaced the hostname with the cluster name, but it didn't work. I also tried a comma- or colon-separated list of hostnames; that didn't work either. Any suggestion will be welcome. Thanks, Regards, Farhad
08-31-2016
10:38 AM
1 Kudo
Hi All, I'm interested in any papers or experience reports regarding Hadoop cluster failover design. From my understanding, the two main components concerned are the NameNode and the ResourceManager. Thanks, Regards, Farhad
08-19-2016
08:38 AM
Thank you for your response. There is something I don't understand: our replication factor is the default value, 3. So how can we lose the same blocks on 3 nodes? We have a replication factor of 2 for HBase. Let me dig deeper into this part.
08-19-2016
07:59 AM
Hi all, I have an alert in Ambari saying 50 blocks have been missing since 5 August. Using some existing posts I found the command to identify the missing blocks: "sudo -u hdfs hdfs fsck /". I have 2 questions: 1) Does "missing blocks" mean that these blocks are missing on one node or on all nodes? 2) Some of these blocks belong to HBase, so the solution of deleting them cannot work for me. On the other hand, in the HDFS log I can see synchronization for these missing HBase blocks dated 1 August. Is there any way to restore these blocks? Thanks
07-08-2016
05:44 AM
Hello, Thank you for your response. I have already installed Ranger. Is this related to the Ambari version or the HDP version?
07-07-2016
04:31 PM
Hello, I'm using HDP 2.3. I want to add a new service, Ranger KMS, to my existing installation, but in the Add Service Wizard I cannot see Ranger KMS. How can I install Ranger KMS? Thanks
04-13-2016
02:43 PM
2 Kudos
The problem was coming from capacity-scheduler.xml; there are 2 properties set to -1:
<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels.default.capacity</name>
  <value>-1</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels.default.maximum-capacity</name>
  <value>-1</value>
</property>
I changed -1 to 1 and could bring up YARN. Thanks for your help. Regards, Farhad
04-13-2016
02:27 PM
In yarn-yarn-resourcemanager-localhost.localdomain.log I can see a Java error:
2016-04-09 20:26:30,277 INFO resourcemanager.ResourceManager (ResourceManager.java:transitionToStandby(1088)) - Transitioning to standby state
2016-04-09 20:26:30,277 INFO resourcemanager.ResourceManager (ResourceManager.java:transitionToStandby(1095)) - Transitioned to standby state
2016-04-09 20:26:30,277 FATAL resourcemanager.ResourceManager (ResourceManager.java:main(1241)) - Error starting ResourceManager
java.lang.IllegalArgumentException: Illegal capacity of -1.0 for node-label=default in queue=root, valid capacity should in range of [0, 100].
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.internalGetLabeledQueueCapacity(CapacitySchedulerConfiguration.java:465)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getLabeledQueueCapacity(CapacitySchedulerConfiguration.java:477)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.loadCapacitiesByLabelsFromConf(CSQueueUtils.java:143)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.loadUpdateAndCheckCapacities(CSQueueUtils.java:122)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setupConfigurableCapacities(AbstractCSQueue.java:99)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setupQueueConfigs(AbstractCSQueue.java:242)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.setupQueueConfigs(ParentQueue.java:109)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.<init>(ParentQueue.java:100)
04-13-2016
02:14 PM
I'm wondering if there is any dependency between YARN and the other modules. I'm running HDFS, MapReduce, and ZooKeeper, but no other Hadoop components.