Member since: 04-16-2020
Posts: 25
Kudos Received: 4
Solutions: 0
06-18-2020
03:02 AM
@Scharan Hello, I don't know how to check that because I am new to this. Can you help me check it? This is the output I got:

udhav@ip-10-70-14-10:~$ cat /etc/ambari-server/conf/ambari.properties | grep -i jdbc
custom.mysql.jdbc.name=mysql-connector-java.jar
previous.custom.mysql.jdbc.name=mysql-connector-java.jar
server.jdbc.connection-pool=c3p0
server.jdbc.connection-pool.acquisition-size=5
server.jdbc.connection-pool.idle-test-interval=7200
server.jdbc.connection-pool.max-age=0
server.jdbc.connection-pool.max-idle-time=14400
server.jdbc.connection-pool.max-idle-time-excess=0
server.jdbc.database=mysql
server.jdbc.database_name=ambari
server.jdbc.driver=com.mysql.jdbc.Driver
server.jdbc.hostname=127.0.0.1
server.jdbc.port=3306
server.jdbc.rca.driver=com.mysql.jdbc.Driver
server.jdbc.rca.url=jdbc:mysql://127.0.0.1:3306/ambari
server.jdbc.rca.user.name=ambari
server.jdbc.rca.user.passwd=/etc/ambari-server/conf/password.dat
server.jdbc.url=jdbc:mysql://127.0.0.1:3306/ambari
server.jdbc.user.name=ambari
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat

Thank you!
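Given those settings, one quick way to confirm the database behind Ambari is reachable is to test the MySQL connection directly. This is only a hedged sketch: it assumes the mysql client is installed and that /etc/ambari-server/conf/password.dat holds the plain-text password referenced by server.jdbc.user.passwd above.

# Check that something is listening on the configured host/port
ss -tlnp | grep 3306

# Try connecting as the 'ambari' user to the 'ambari' database
mysql -h 127.0.0.1 -P 3306 -u ambari -p"$(cat /etc/ambari-server/conf/password.dat)" ambari -e "SELECT 1;"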
06-17-2020
10:29 PM
Hello, I rebooted/restarted my EC2 cluster, and now the Ambari server always fails when I try to start it. This is the screenshot, and this is the log info:

18 Jun 2020 05:13:37,523 ERROR [main] DBAccessorImpl:119 - Error while creating database accessor
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

I also tried to restart my server (I know that it is not already running). Can anyone help me resolve this issue? Thank you in advance.
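A CommunicationsException on startup usually means the Ambari server cannot reach its MySQL database, and after a reboot the database service often simply has not come back up. A hedged first check (the service unit may be named mysql or mysqld depending on the distribution):

# See whether MySQL survived the reboot
sudo systemctl status mysql

# If it is down, start it now and enable it for future reboots
sudo systemctl start mysql
sudo systemctl enable mysql

# Then retry the Ambari server
sudo ambari-server start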
Labels:
- Apache Ambari
05-31-2020
11:27 PM
1 Kudo
Hey @stevenmatison, it's working now. Thanks for the help!
05-28-2020
05:34 AM
@stevenmatison I already tried doing that a few times, but I am still getting the same exception. Actually, I have two users (A and B): my source code runs under user A, while Hadoop and Hive run under user B. I guess that may be why I am getting this issue, but I am not sure. I also tried copying my IntelliJ project to the other user, but then I am not even able to run my code. Thanks.
05-28-2020
02:04 AM
I am using Spark 2.4.5, Hive 3.1.2, and Hadoop 3.2.1. While running Hive from Spark I got the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxrwxr-x;

This is my source code:

package com.spark.hiveconnect

import java.io.File
import org.apache.spark.sql.{Row, SaveMode, SparkSession}

object sourceToHIve {
  case class Record(key: Int, value: String)

  def main(args: Array[String]) {
    val warehouseLocation = new File("spark-warehouse").getAbsolutePath
    val spark = SparkSession
      .builder()
      .appName("Spark Hive Example").master("local")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._
    import spark.sql

    sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
    sql("LOAD DATA LOCAL INPATH '/usr/local/spark3/examples/src/main/resources/kv1.txt' INTO TABLE src")
    sql("SELECT * FROM src").show()

    spark.stop()
  }
}

This is my sbt file:

name := "SparkHive"
version := "0.1"
scalaVersion := "2.12.10"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.5"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.5"
libraryDependencies += "mysql" % "mysql-connector-java" % "8.0.19"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.4.5"

How can I solve this issue? While watching the console I also saw the statement below; is that the reason I am getting this error?

20/05/28 14:03:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(UDHAV.MAHATA); groups with view permissions: Set(); users with modify permissions: Set(UDHAV.MAHATA); groups with modify permissions: Set()

Can anyone help me? Thank you!
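For reference, the usual fix for this particular error is to open up the permissions on the Hive scratch directory. A hedged sketch follows; whether /tmp/hive is resolved on HDFS or on the local filesystem depends on fs.defaultFS in your environment, so checking both is cheap:

# If the scratch dir lives on HDFS
hdfs dfs -chmod 777 /tmp/hive

# If Spark resolves it on the local filesystem (common with master("local"))
chmod 777 /tmp/hive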
Labels:
- Apache Hive
- Apache Spark
05-27-2020
02:59 AM
1 Kudo
Hello @ronics, I had the same error a few days back. This was the exception:

Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8 at [row,col,system-id]: [3215,96,"file:/home/hadoop/hive/conf/hive-site.xml"]

Open the hive-site.xml file and go to row 3215. It throws that error because there is a special (non-printable) character between the words "for" and "transactional". Either delete that character or replace the description with the next two lines:

Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for transactional tables. This ensures that inserts (w/o overwrite) running concurrently are not hidden by the INSERT OVERWRITE.

This should solve your problem.
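If you want to locate the offending byte yourself instead of scrolling, something like the following should work (a hedged sketch assuming GNU sed/grep):

# Print row 3215 with non-printing characters made visible (0x8 shows as ^H)
sed -n '3215p' /home/hadoop/hive/conf/hive-site.xml | cat -A

# Or list every line containing control characters that are illegal in XML
grep -nP '[\x00-\x08\x0B\x0C\x0E-\x1F]' /home/hadoop/hive/conf/hive-site.xml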
05-23-2020
12:26 AM
I tried going to Help -> Show Logs in File, but there are many log files. Which one should I share?
05-22-2020
12:58 AM
Hello, I am new to Spark. When I try creating a new project, the sbt task fails. This is my IntelliJ screen, and the sbt version I am using is 0.13.17. My questions are: which Scala version and sbt version should I use? The official documentation shows dependencies built against Scala 2.12. Thank you.
Labels:
- Apache Spark
05-06-2020
12:44 PM
Hello, whenever I add the Hive service to my cluster it always fails to start. Following are the stderr and stdout. Can someone help me resolve this issue? Thank you in advance.

stderr:

Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 995, in restart
    self.status(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 87, in status
    check_process_status(status_params.hive_metastore_pid)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status
    raise ComponentIsNotRunning()
ComponentIsNotRunning

The above exception was the cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 201, in <module>
    HiveMetastore().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1006, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 61, in start
    create_metastore_schema() # execute without config lock
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py", line 487, in create_metastore_schema
    user = params.hive_user
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Initializing the schema to: 3.1.1000 Metastore connection URL: jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true Metastore Connection Driver : com.mysql.jdbc.Driver Metastore connection User: hive Thu May 07 01:06:19 IST 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. Failed to get schema version. Underlying cause: java.sql.SQLException : Access denied for user 'hive'@'localhost' (using password: YES) SQL Error code: 1045 org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version. at org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:94) at org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.getConnectionToMetastore(MetastoreSchemaTool.java:250) at org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.testConnectionToMetastore(MetastoreSchemaTool.java:333) at org.apache.hadoop.hive.metastore.tools.SchemaToolTaskInit.execute(SchemaToolTaskInit.java:53) at org.apache.hadoop.hive.metastore.tools.MetastoreSchemaTool.run(MetastoreSchemaTool.java:446) at org.apache.hive.beeline.schematool.HiveSchemaTool.main(HiveSchemaTool.java:138) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:318) at org.apache.hadoop.util.RunJar.main(RunJar.java:232) Caused by: java.sql.SQLException: Access denied for user 'hive'@'localhost' (using password: YES) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:873) at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1710) at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1226) at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2188) at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014) at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:776) at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386) at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330) at java.sql.DriverManager.getConnection(DriverManager.java:664) at 
java.sql.DriverManager.getConnection(DriverManager.java:247) at org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:88) ... 11 more *** schemaTool failed *** stdout: 2020-05-07 01:06:04,904 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2020-05-07 01:06:04,917 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-05-07 01:06:05,089 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2020-05-07 01:06:05,092 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-05-07 01:06:05,093 - Group['livy'] {} 2020-05-07 01:06:05,094 - Group['spark'] {} 2020-05-07 01:06:05,094 - Group['hdfs'] {} 2020-05-07 01:06:05,094 - Group['hadoop'] {} 2020-05-07 01:06:05,094 - Group['users'] {} 2020-05-07 01:06:05,095 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,095 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,096 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,096 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,097 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2020-05-07 01:06:05,097 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2020-05-07 01:06:05,098 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2020-05-07 01:06:05,098 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2020-05-07 01:06:05,099 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,100 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2020-05-07 01:06:05,100 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,101 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,102 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,102 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-05-07 01:06:05,103 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2020-05-07 01:06:05,104 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2020-05-07 01:06:05,108 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2020-05-07 01:06:05,108 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2020-05-07 01:06:05,109 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2020-05-07 01:06:05,110 - 
File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2020-05-07 01:06:05,110 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2020-05-07 01:06:05,116 - call returned (0, '1017') 2020-05-07 01:06:05,117 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2020-05-07 01:06:05,123 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if 2020-05-07 01:06:05,124 - Group['hdfs'] {} 2020-05-07 01:06:05,124 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2020-05-07 01:06:05,125 - FS Type: HDFS 2020-05-07 01:06:05,125 - Directory['/etc/hadoop'] {'mode': 0755} 2020-05-07 01:06:05,137 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2020-05-07 01:06:05,137 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2020-05-07 01:06:05,155 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2020-05-07 01:06:05,158 - Skipping Execute[('setenforce', '0')] due to not_if 2020-05-07 01:06:05,159 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2020-05-07 01:06:05,160 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2020-05-07 01:06:05,161 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2020-05-07 01:06:05,161 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2020-05-07 01:06:05,163 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2020-05-07 01:06:05,164 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2020-05-07 01:06:05,170 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:05,178 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2020-05-07 01:06:05,179 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2020-05-07 01:06:05,179 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2020-05-07 01:06:05,182 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:05,186 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2020-05-07 01:06:05,190 - Skipping unlimited key JCE policy check and setup since it is not required 2020-05-07 01:06:05,534 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-05-07 01:06:05,542 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20} 2020-05-07 
01:06:05,559 - call returned (0, 'hive-server2 - 3.1.4.0-315') 2020-05-07 01:06:05,560 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2020-05-07 01:06:05,574 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://localhost:8080/resources/CredentialUtil.jar'), 'mode': 0755} 2020-05-07 01:06:05,576 - Not downloading the file from http://localhost:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists 2020-05-07 01:06:06,220 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmptbeEn2 2>/tmp/tmp7HHFD1''] {'quiet': False} 2020-05-07 01:06:06,233 - call returned (1, '') 2020-05-07 01:06:06,233 - Execution of 'cat /var/run/hive/hive.pid 1>/tmp/tmptbeEn2 2>/tmp/tmp7HHFD1' returned 1. cat: /var/run/hive/hive.pid: No such file or directory 2020-05-07 01:06:06,233 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive.pid: No such file or directory') 2020-05-07 01:06:06,234 - Execute['ambari-sudo.sh kill '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1)'} 2020-05-07 01:06:06,239 - Skipping Execute['ambari-sudo.sh kill '] due to not_if 2020-05-07 01:06:06,240 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1) )', 'ignore_failures': True} 2020-05-07 01:06:06,243 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if 2020-05-07 01:06:06,244 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3} 2020-05-07 01:06:06,247 - File['/var/run/hive/hive.pid'] {'action': ['delete']} 2020-05-07 01:06:06,248 - Pid file /var/run/hive/hive.pid is empty or does not exist 2020-05-07 01:06:06,251 - Yarn already refreshed 2020-05-07 01:06:06,251 - HdfsResource['/user/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0755} 2020-05-07 01:06:06,254 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://localhost:50070/webhdfs/v1/user/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpT6HbFH 2>/tmp/tmpdfPhxt''] {'logoutput': None, 'quiet': False} 2020-05-07 01:06:06,281 - call returned (0, '') 2020-05-07 01:06:06,281 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":19006,"group":"hdfs","length":0,"modificationTime":1588792770732,"owner":"hive","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2020-05-07 01:06:06,282 - HdfsResource['/warehouse/tablespace/external/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 
'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777} 2020-05-07 01:06:06,283 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://localhost:50070/webhdfs/v1/warehouse/tablespace/external/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmprlGA1e 2>/tmp/tmpxKIV4S''] {'logoutput': None, 'quiet': False} 2020-05-07 01:06:06,311 - call returned (0, '') 2020-05-07 01:06:06,312 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":19010,"group":"hadoop","length":0,"modificationTime":1588792770904,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2020-05-07 01:06:06,312 - Skipping the operation for not managed DFS directory /warehouse/tablespace/external/hive since immutable_paths contains it. 2020-05-07 01:06:06,313 - HdfsResource['/warehouse/tablespace/managed/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700} 2020-05-07 01:06:06,314 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://localhost:50070/webhdfs/v1/warehouse/tablespace/managed/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmph9qvm4 2>/tmp/tmpZjXRID''] {'logoutput': None, 'quiet': False} 2020-05-07 01:06:06,343 - call returned (0, '') 2020-05-07 01:06:06,343 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":0,"fileId":19012,"group":"hadoop","length":0,"modificationTime":1588792771129,"owner":"hive","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2020-05-07 01:06:06,343 - Skipping the operation for not managed DFS directory /warehouse/tablespace/managed/hive since immutable_paths contains it. 
2020-05-07 01:06:06,344 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hdfs getconf -confKey dfs.namenode.acls.enabled 1>/tmp/tmp3D8soo 2>/tmp/tmpNLPv95''] {'quiet': False} 2020-05-07 01:06:07,279 - call returned (0, '') 2020-05-07 01:06:07,280 - get_user_call_output returned (0, u'true', u'') 2020-05-07 01:06:07,280 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hdfs getconf -confKey dfs.namenode.posix.acl.inheritance.enabled 1>/tmp/tmpbQX4bK 2>/tmp/tmpiOs0xI''] {'quiet': False} 2020-05-07 01:06:08,209 - call returned (0, '') 2020-05-07 01:06:08,210 - get_user_call_output returned (0, u'true', u'') 2020-05-07 01:06:08,210 - Execute['hdfs dfs -setfacl -m default:user:hive:rwx /warehouse/tablespace/external/hive'] {'user': 'hdfs'} 2020-05-07 01:06:10,250 - Execute['hdfs dfs -setfacl -m default:user:hive:rwx /warehouse/tablespace/managed/hive'] {'user': 'hdfs'} 2020-05-07 01:06:12,237 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2020-05-07 01:06:12,239 - Directories to fill with configs: [u'/usr/hdp/current/hive-metastore/conf', u'/usr/hdp/current/hive-metastore/conf/'] 2020-05-07 01:06:12,240 - Directory['/etc/hive/3.1.4.0-315/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755} 2020-05-07 01:06:12,240 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...} 2020-05-07 01:06:12,249 - Generating config: /etc/hive/3.1.4.0-315/0/mapred-site.xml 2020-05-07 01:06:12,249 - File['/etc/hive/3.1.4.0-315/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-05-07 01:06:12,288 - File['/etc/hive/3.1.4.0-315/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,288 - File['/etc/hive/3.1.4.0-315/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755} 2020-05-07 01:06:12,291 - File['/etc/hive/3.1.4.0-315/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,293 - File['/etc/hive/3.1.4.0-315/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,295 - File['/etc/hive/3.1.4.0-315/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,296 - File['/etc/hive/3.1.4.0-315/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,299 - File['/etc/hive/3.1.4.0-315/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,300 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'configurations': {'beeline.hs2.jdbc.url.container': 
u'jdbc:hive2://localhost:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}} 2020-05-07 01:06:12,308 - Generating config: /etc/hive/3.1.4.0-315/0/beeline-site.xml 2020-05-07 01:06:12,308 - File['/etc/hive/3.1.4.0-315/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-05-07 01:06:12,309 - File['/etc/hive/3.1.4.0-315/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,309 - Directory['/etc/hive/3.1.4.0-315/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755} 2020-05-07 01:06:12,310 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...} 2020-05-07 01:06:12,315 - Generating config: /etc/hive/3.1.4.0-315/0/mapred-site.xml 2020-05-07 01:06:12,316 - File['/etc/hive/3.1.4.0-315/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-05-07 01:06:12,353 - File['/etc/hive/3.1.4.0-315/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,353 - File['/etc/hive/3.1.4.0-315/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755} 2020-05-07 01:06:12,356 - File['/etc/hive/3.1.4.0-315/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,358 - File['/etc/hive/3.1.4.0-315/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,360 - File['/etc/hive/3.1.4.0-315/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,361 - File['/etc/hive/3.1.4.0-315/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,363 - File['/etc/hive/3.1.4.0-315/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,363 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.4.0-315/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://localhost:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}} 2020-05-07 01:06:12,373 - Generating config: /etc/hive/3.1.4.0-315/0/beeline-site.xml 2020-05-07 01:06:12,373 - File['/etc/hive/3.1.4.0-315/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-05-07 01:06:12,375 - File['/etc/hive/3.1.4.0-315/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644} 2020-05-07 01:06:12,375 - File['/usr/hdp/current/hive-metastore/conf/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_metastore/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640} 2020-05-07 01:06:12,376 - Writing File['/usr/hdp/current/hive-metastore/conf/hive-site.jceks'] because contents don't match 2020-05-07 01:06:12,376 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/', 'mode': 0644, 'configuration_attributes': {u'hidden': 
{u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...} 2020-05-07 01:06:12,382 - Generating config: /usr/hdp/current/hive-metastore/conf/hive-site.xml 2020-05-07 01:06:12,382 - File['/usr/hdp/current/hive-metastore/conf/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-05-07 01:06:12,506 - Writing File['/usr/hdp/current/hive-metastore/conf/hive-site.xml'] because contents don't match 2020-05-07 01:06:12,509 - File['/usr/hdp/current/hive-metastore/conf//hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0755} 2020-05-07 01:06:12,510 - Writing File['/usr/hdp/current/hive-metastore/conf//hive-env.sh'] because contents don't match 2020-05-07 01:06:12,510 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'} 2020-05-07 01:06:12,512 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644} 2020-05-07 01:06:12,512 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://localhost:8080/resources/DBConnectionVerification.jar'), 'mode': 0644} 2020-05-07 01:06:12,512 - Not downloading the file from http://localhost:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists 2020-05-07 01:06:12,512 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'} 2020-05-07 01:06:12,513 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'} 2020-05-07 01:06:12,514 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'} 2020-05-07 01:06:12,514 - XmlConfig['hivemetastore-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...} 2020-05-07 01:06:12,523 - Generating config: /usr/hdp/current/hive-metastore/conf/hivemetastore-site.xml 2020-05-07 01:06:12,524 - File['/usr/hdp/current/hive-metastore/conf/hivemetastore-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'} 2020-05-07 01:06:12,537 - File['/usr/hdp/current/hive-metastore/conf/hadoop-metrics2-hivemetastore.properties'] {'content': Template('hadoop-metrics2-hivemetastore.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600} 2020-05-07 01:06:12,539 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755} 2020-05-07 01:06:12,540 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.4.0-315/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://localhost:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2020-05-07 01:06:12,542 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 
2020-05-07 01:06:12,542 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json 2020-05-07 01:06:12,543 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json'] {'content': Template('input.config-hive.json.j2'), 'mode': 0644} 2020-05-07 01:06:12,543 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED] -verbose'", 'user': 'hive'} Command failed after 1 tries
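The root cause is near the top of the stderr: "Access denied for user 'hive'@'localhost' (using password: YES)". The MySQL user the schematool connects as either does not exist or has the wrong password/grants. A hedged sketch of how that user is typically created (assumes MySQL 5.7+, administrative access, and a placeholder password that must match what you configured for the Hive database in Ambari):

# (Re)create the 'hive' user and grant it rights on the 'hive' metastore DB
mysql -u root -p <<'SQL'
CREATE USER IF NOT EXISTS 'hive'@'localhost' IDENTIFIED BY 'placeholder-password';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
SQL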
Labels:
- Apache Ambari
- Apache Hive
04-22-2020
05:56 AM
I wanted to create a cluster in Ambari, but at the "Confirm Hosts" step I could not register my nodes due to the following exception:

Exception: Registration failed due to: Cannot register host with not supported os type, hostname=ip6-localhost, serverOsType=ubuntu18, agentOsType=ubuntu18

Is it because I am using Ubuntu 18.04? Thank you.
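Ubuntu 18.04 itself is not necessarily the problem; the telling part is hostname=ip6-localhost, which suggests the agent resolved its own hostname through the IPv6 loopback entry in /etc/hosts. A hedged sketch of what to check (the address and names in the example are placeholders):

# What does the machine think its FQDN is?
hostname -f

# Inspect /etc/hosts: your host should resolve to a real FQDN,
# not to the '::1 ... ip6-localhost' line
cat /etc/hosts

# Example of a sane entry (placeholder IP and names):
# 10.0.0.5   myhost.example.com   myhost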
Tags:
- Ambari
Labels:
- Apache Ambari
04-22-2020
05:14 AM
@stevenmatison Thank you very much!
04-22-2020
04:36 AM
@stevenmatison After this, will I be able to use the hostname as a target host while creating a cluster in Ambari? Also, I am using Ubuntu 18.04. Thank you.
04-21-2020
11:35 PM
Hello, I want to set up passwordless SSH login to my localhost. I tried the following steps many times, but it still asks for a password when I try "ssh localhost":
1. ssh-keygen
2. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3. chmod 700 ~/.ssh
4. chmod 600 ~/.ssh/authorized_keys
This is a screenshot of my terminal. I tried the same steps to log in from localhost to udhav.fqdn.com, but it still prompts me for a password. For the second case I followed the steps provided in this link: https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ I guess I am making a small mistake somewhere. Can anyone help me with this? Thank you.
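Two checks worth adding to those steps, as a hedged sketch: sshd also rejects keys when the home directory itself is group- or world-writable, and ssh -v prints exactly why publickey authentication was skipped. ssh-copy-id bundles steps 2-4 into one command:

# sshd ignores authorized_keys if the home directory is writable by others
chmod go-w ~

# Copies the public key and fixes remote permissions in one go
ssh-copy-id localhost
ssh-copy-id udhav.fqdn.com

# Verbose output shows which keys are offered and why they are refused
ssh -v localhost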
Labels:
- Apache Ambari
04-16-2020
06:01 AM
Hello, I know my question is basic, but I am a newbie to big data and Ambari. I was creating a cluster, and in the Install Options step I am having trouble naming the target host. I am running the Ambari server (localhost:8080) on my local machine (laptop). My question is: what should I name the target host if I want it to run on my local machine itself? If I give "localhost" as the target host name, it always fails at the next "Confirm Hosts" step. Can anyone guide me on naming the target host so that I can run everything on a single machine? Thank you!
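Ambari's host registration generally wants a fully qualified domain name rather than "localhost". A hedged sketch of giving a single machine a resolvable FQDN (the name and address are placeholders):

# Set a fully qualified hostname
sudo hostnamectl set-hostname myhost.example.com

# Map it in /etc/hosts so it resolves locally
echo "127.0.1.1  myhost.example.com  myhost" | sudo tee -a /etc/hosts

# Verify, then use this name as the target host in the cluster wizard
hostname -f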
Labels:
- Apache Ambari