Member since: 07-28-2016
Posts: 37
Kudos Received: 2
Solutions: 0
09-10-2018
02:26 PM
Thanks for the answer, @Vinicius Higa Murakami. Can we use NiFi?
09-10-2018
01:42 PM
What options are available, other than DistCp, to copy data between two HDP clusters, including Hive metadata/Hive tables and HDFS data?
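For context, the baseline being compared against is a plain DistCp between the two clusters' NameNodes; a minimal sketch (hostnames and paths are placeholders):

hadoop distcp hdfs://source-nn:8020/apps/hive/warehouse \
              hdfs://target-nn:8020/apps/hive/warehouse

Note that this copies only the HDFS files; the Hive metastore contents would still need a separate export/import.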
Labels:
- Apache Hadoop
09-07-2018
01:41 PM
@James.jones: I am facing a similar scenario. Did you find out which R version is compatible with HDP 2.6?
09-07-2018
01:38 PM
Hello, which version of R is compatible with Spark 2.3.0 and Spark 1.6.3 in HDP 2.6.5.0-292? We plan to use SparkR for Spark and Spark2 on the CLI as well as in RStudio. I earlier had the SparkR package at 2.3.1 and was getting the warning below:

Version mismatch between Spark JVM and SparkR package. JVM version was 2.3.0.2.6.5.0-292, while R package version was 2.3.1
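A quick way to see the two versions the warning compares, before starting a session (a hedged sketch; assumes R and the HDP Spark client are on the PATH):

Rscript -e 'packageVersion("SparkR")'    # version of the installed SparkR package
spark-submit --version                   # version of the cluster's Spark JVM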
Labels:
08-22-2018
03:16 PM
@Jonathan Sneep: Thanks! I didn't have the Tez client on the ATS server. After installing it, the ATS came up.
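For anyone hitting the same error: the missing class ships with the Tez client, so a quick check on the ATS host is to search the Tez client jars for it (a hedged sketch; paths assume a standard HDP layout):

for j in /usr/hdp/current/tez-client/*.jar /usr/hdp/current/tez-client/lib/*.jar; do
  # an empty result means the plugin class (and likely the Tez client) is missing
  unzip -l "$j" 2>/dev/null | grep -q TimelineCachePluginImpl && echo "found in $j"
done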
08-22-2018
12:57 PM
1 Kudo
Hello all, I have upgraded HDP from 2.5.0.0 to 2.6.5.0 and installed Apache Slider on HDP 2.6.5.0. Since installing Apache Slider, the App Timeline Server does not start. The error below is displayed in the logs:

2018-08-22 13:39:08,699 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service EntityGroupFSTimelineStore failed in state INITED; cause: java.lang.RuntimeException: No class defined for org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl
java.lang.RuntimeException: No class defined for org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl
at org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:256)
at org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.serviceInit(EntityGroupFSTimelineStore.java:196)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:111)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:174)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:184)
Caused by: java.lang.ClassNotFoundException: org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:197)
at org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:165)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.loadPlugIns(EntityGroupFSTimelineStore.java:243)
... 7 more
2018-08-22 13:39:08,700 INFO timeline.EntityGroupFSTimelineStore (EntityGroupFSTimelineStore.java:serviceStop(330)) - Stopping EntityGroupFSTimelineStore
2018-08-22 13:39:08,702 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer failed in state INITED; cause: java.lang.RuntimeException: No class defined for org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl
java.lang.RuntimeException: No class defined for org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl
(stack trace identical to the one above)
Labels:
08-22-2018
08:13 AM
@Vivek Somani: Can you please let me know whether Hive LLAP is supported by OneFS in the new release? If yes, please share the version. Thanks in advance for your help.
08-15-2018
03:57 PM
@Ashish Kumar, @Misbah Rehman: Were you able to resolve the issue?
07-31-2018
11:04 AM
@jfuentes: Thanks for the response; I will try it out.
07-31-2018
11:00 AM
@schhabra: Thanks for the response. The service check is being fired from the same host where the RM is installed.

18/07/31 11:11:34 INFO impl.TimelineClientImpl: Timeline service address: http://RM-host:8188/ws/v1/timeline/
18/07/31 11:11:34 INFO client.RMProxy: Connecting to ResourceManager at RM-host/RM-ip:8050
18/07/31 11:11:35 INFO client.AHSProxy: Connecting to Application History server at RM-host/RM-ip:10200
18/07/31 11:12:39 INFO ipc.Client: Retrying connect to server: RM-host/RM-ip:8050. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
18/07/31 11:13:43 INFO ipc.Client: Retrying connect to server: RM-host/RM-ip:8050. Already tried 1 time(s); retry policy is
07-30-2018
02:11 PM
Can we disable the service check steps during an HDP upgrade? We currently have HDP 2.5.0 and are planning to upgrade to 2.6. We have some custom implementation because of which some service checks fail. Is there a way to disable service checks during the upgrade? When I tested it and skipped the service checks, the upgrade got stuck at the final stage due to a service check failure after the upgrade.
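One hedged possibility, from memory of the Ambari 2.4/2.5 upgrade REST API (verify the field names against your version's documentation before relying on them): upgrades started via the API accept skip flags for failures and service-check failures.

curl -u admin -H 'X-Requested-By: ambari' -X POST \
  -d '{"Upgrade": {"repository_version": "2.6.0.3",
       "upgrade_type": "ROLLING_UPGRADE",
       "skip_failures": "true",
       "skip_service_check_failures": "true"}}' \
  http://ambari-host:8080/api/v1/clusters/CLUSTER_NAME/upgrades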
Labels:
- Hortonworks Data Platform (HDP)
07-30-2018
02:05 PM
MapReduce service check fails with an ipc.Client connection timed out error:

2018-07-30 14:39:43,127 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'logoutput': True, 'try_sleep': 5, 'environment': {}, 'tries': 1, 'user': 'ambari-qa', 'path': ['/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hadoop-yarn-client/bin']}
18/07/30 14:39:45 INFO impl.TimelineClientImpl: Timeline service address: http://hostname:8188/ws/v1/timeline/
18/07/30 14:39:45 INFO client.RMProxy: Connecting to ResourceManager at hostname/ip:8050
18/07/30 14:39:45 INFO client.AHSProxy: Connecting to Application History server at hostname/ip:10200
18/07/30 14:40:49 INFO ipc.Client: Retrying connect to server: hostname/ip:8050. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
18/07/30 14:41:53 INFO ipc.Client: Retrying connect to server: hostname/ip:8050. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
18/07/30 14:42:57 INFO ipc.Client: Retrying connect to server: hostname/ip:8050. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
18/07/30 14:44:01 INFO ipc.Client: Retrying connect to server: hostname/ip:8050. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)

Logs:

WARN ipc.Client (Client.java:handleConnectionFailure(886)) - Failed to connect to server: ResourceManager-Hostname/ResourceManager-ip-address:8050: retries get failed due to exceeded maximum allowed retries number: 50
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy77.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:191)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy78.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService$LogDeletionTask.isApplicationTerminated(AggregatedLogDeletionService.java:155)
at org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService$LogDeletionTask.deleteOldLogDirsFrom(AggregatedLogDeletionService.java:101)
at org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService$LogDeletionTask.run(AggregatedLogDeletionService.java:85)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)

The port 8050 is open and listening:

[root@bhwx24hwxworker2 yarn]# netstat --listen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:8188 *:* LISTEN
tcp 0 0 *:8030 *:* LISTEN
tcp 0 0 *:8670 *:* LISTEN
tcp 0 0 *:8191 *:* LISTEN
tcp 0 0 *:sqlexec *:* LISTEN
tcp 0 0 *:10020 *:* LISTEN
tcp 0 0 *:eforward *:* LISTEN
tcp 0 0 *:40070 *:* LISTEN
tcp 0 0 localhost:40071 *:* LISTEN
tcp 0 0 *:8040 *:* LISTEN
tcp 0 0 *:40072 *:* LISTEN
tcp 0 0 *:7337 *:* LISTEN
tcp 0 0 *:fs-agent *:* LISTEN
tcp 0 0 *:8141 *:* LISTEN
tcp 0 0 *:45454 *:* LISTEN
tcp 0 0 *:19888 *:* LISTEN
tcp 0 0 bhwx24hwxworke:ciphire-serv *:* LISTEN
tcp 0 0 *:10033 *:* LISTEN
tcp 0 0 *:8050 *:* LISTEN
tcp 0 0 *:39987 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 *:7447 *:* LISTEN
tcp 0 0 *:trisoap *:* LISTEN
tcp 0 0 *:radan-http *:* LISTEN
tcp 0 0 *:irisa *:* LISTEN
tcp 0 0 *:ca-audit-da *:* LISTEN
tcp 0 0 localhost:8089 *:* LISTEN
tcp 0 0 localhost:metasys *:* LISTEN
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 0 *:13562 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
udp 0 0 bhwx24hwxworker2.cse-int:ntp *:*
udp 0 0 localhost:ntp *:*
udp 0 0 *:ntp *:*
udp 0 0 *:bootpc *:*
udp 0 0 *:ntp *:*
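Since the port is listening locally but connections from the client time out, a few quick checks from the host running the service check may help (hostnames below are the redacted placeholders used in the log):

nc -vz hostname 8050    # raw TCP reachability to the RM client port
ping -c 2 hostname      # basic reachability and DNS resolution
iptables -L -n          # look for firewall rules dropping the traffic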
Labels:
- Apache Hadoop
- Cloudera Manager
07-27-2018
10:21 AM
@Junfeng Chen: I am facing a similar problem. Can you please share the steps you performed to resolve it?
07-25-2018
03:53 PM
The Save button is disabled when registering a new version using a local repo base URL in Ambari 2.5.0.3 (HDP 2.6.0.3). I have tried the solution below, provided for Ambari 2.6, but it does not work for Ambari 2.5.0.3:

# ambari-server setup --enable-lzo-under-gpl-license
Using python /usr/bin/python
Setup ambari-server
Usage: ambari-server.py [options] action [stack_id os]
ambari-server.py: error: no such option: --enable-lzo-under-gpl-license
Labels:
07-25-2018
11:12 AM
After an Ambari upgrade from 2.4 to 2.5.0.3, the Oozie server does not start, stop, or restart from the Ambari UI. I am able to do the same from the command line.

stderr: /var/lib/ambari-agent/data/errors-566.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 222, in <module>
OozieServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 727, in restart
self.stop(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 96, in stop
oozie_service(action='stop', upgrade_type=upgrade_type)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_service.py", line 181, in oozie_service
user = params.oozie_user)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozied.sh stop 60 -force' returned 1.

Setting OOZIE_HOME: /usr/hdp/2.5.0.0-1245/oozie
Sourcing: /usr/hdp/2.5.0.0-1245/oozie/bin/oozie-env.sh
setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
setting JAVA_HOME=/usr/java/default
setting JRE_HOME=${JAVA_HOME}
setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m -XX:MaxPermSize=256m"
setting OOZIE_LOG=/var/log/oozie
setting CATALINA_PID=/var/run/oozie/oozie.pid
setting OOZIE_DATA=/hadoop/oozie/data
setting OOZIE_HTTP_PORT=11000
setting OOZIE_ADMIN_PORT=11001
setting JAVA_LIBRARY_PATH=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "
Using OOZIE_CONFIG: /usr/hdp/current/oozie-server/conf
Sourcing: /usr/hdp/current/oozie-server/conf/oozie-env.sh
setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
setting JAVA_HOME=/usr/java/default
setting JRE_HOME=${JAVA_HOME}
setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m -XX:MaxPermSize=256m"
setting OOZIE_LOG=/var/log/oozie
setting CATALINA_PID=/var/run/oozie/oozie.pid
setting OOZIE_DATA=/hadoop/oozie/data
setting OOZIE_HTTP_PORT=11000
setting OOZIE_ADMIN_PORT=11001
setting JAVA_LIBRARY_PATH=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "
Setting OOZIE_CONFIG_FILE: oozie-site.xml
Using OOZIE_DATA: /hadoop/oozie/data
Using OOZIE_LOG: /var/log/oozie
Setting OOZIE_LOG4J_FILE: oozie-log4j.properties
Setting OOZIE_LOG4J_RELOAD: 10
Setting OOZIE_HTTP_HOSTNAME: bhwx22hwxworker2.cse-int-06.local
Using OOZIE_HTTP_PORT: 11000
Using OOZIE_ADMIN_PORT: 11001
Setting OOZIE_HTTPS_PORT: 11443
Setting OOZIE_BASE_URL: http://bhwx22hwxworker2.cse-int-06.local:11000/oozie
Using CATALINA_BASE: /usr/hdp/current/oozie-client/oozie-server
Setting OOZIE_HTTPS_KEYSTORE_FILE: /home/oozie/.keystore
Setting OOZIE_HTTPS_KEYSTORE_PASS: password
Setting OOZIE_INSTANCE_ID: bhwx22hwxworker2.cse-int-06.local
Setting CATALINA_OUT: /var/log/oozie/catalina.out
Using CATALINA_PID: /var/run/oozie/oozie.pid
Using CATALINA_OPTS: -Dhdp.version=2.5.0.0-1245 -Xmx2048m -XX:MaxPermSize=256m -Xmx2048m -XX:MaxPermSize=256m -Dderby.stream.error.file=/var/log/oozie/derby.log
Adding to CATALINA_OPTS: -Doozie.home.dir=/usr/hdp/2.5.0.0-1245/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=bhwx22hwxworker2.cse-int-06.local -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=bhwx22hwxworker2.cse-int-06.local -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://bhwx22hwxworker2.cse-int-06.local:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
PID file found but no matching process was found. Stop aborted.

stdout: /var/lib/ambari-agent/data/output-566.txt

2018-07-25 10:50:05,194 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-07-25 10:50:05,300 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2018-07-25 10:50:05,301 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2018-07-25 10:50:05,302 - Group['hadoop'] {}
2018-07-25 10:50:05,304 - Group['users'] {}
2018-07-25 10:50:05,304 - Group['spark'] {}
2018-07-25 10:50:05,304 - Group['livy'] {}
2018-07-25 10:50:05,304 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,306 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-07-25 10:50:05,308 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-07-25 10:50:05,309 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,311 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,312 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,313 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,314 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-07-25 10:50:05,315 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,315 - User['gpadmin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,317 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,317 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,318 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,319 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-07-25 10:50:05,320 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-07-25 10:50:05,321 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-07-25 10:50:05,330 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2018-07-25 10:50:05,330 - Group['hdfs'] {}
2018-07-25 10:50:05,331 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2018-07-25 10:50:05,331 - FS Type:
2018-07-25 10:50:05,332 - Directory['/etc/hadoop'] {'mode': 0755}
2018-07-25 10:50:05,346 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-07-25 10:50:05,346 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-07-25 10:50:05,360 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-07-25 10:50:05,370 - Skipping Execute[('setenforce', '0')] due to not_if
2018-07-25 10:50:05,370 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-07-25 10:50:05,372 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-07-25 10:50:05,373 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-07-25 10:50:05,373 - File['/var/lib/ambari-agent/lib/fast-hdfs-resource.jar'] {'content': StaticFile('fast-hdfs-resource.jar'), 'mode': 0644}
2018-07-25 10:50:05,427 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-07-25 10:50:05,429 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-07-25 10:50:05,433 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-07-25 10:50:05,441 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2018-07-25 10:50:05,442 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-07-25 10:50:05,443 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-07-25 10:50:05,446 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2018-07-25 10:50:05,454 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-07-25 10:50:05,671 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2018-07-25 10:50:05,672 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-07-25 10:50:05,678 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2018-07-25 10:50:05,714 - checked_call returned (0, '2.5.0.0-1245', '')
2018-07-25 10:50:05,717 - Directory['/var/tmp/oozie'] {'owner': 'oozie', 'create_parents': True}
2018-07-25 10:50:05,718 - Execute['cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozied.sh stop 60 -force'] {'environment': {'OOZIE_CONFIG': '/usr/hdp/current/oozie-server/conf'}, 'only_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 'user': 'oozie'}
2018-07-25 10:50:05,838 - Execute['find /var/log/oozie -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'oozie'}
==> /var/log/oozie/oozie.log-2018-07-24-02 <==
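The key line above is "PID file found but no matching process was found. Stop aborted." A hedged cleanup sketch using the paths from the log (confirm the PID really is stale before deleting the file):

cat /var/run/oozie/oozie.pid                 # PID recorded by the last start
ps -p "$(cat /var/run/oozie/oozie.pid)"      # should report no such process
rm -f /var/run/oozie/oozie.pid               # clear the stale PID file, then retry from Ambari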
Labels:
- Apache Ambari
- Apache Oozie
07-19-2018
11:47 AM
@sunile.manjee: The thing is we cannot enable Knox either, due to the customized architecture. We have PAM-based authentication for Hive and it works as expected. We want to enable the same for Spark SQL, but I cannot find an option for that. The main security issue is observed when a user connects to Spark SQL through beeline with just a username and not even a password; they are still able to connect:

beeline -u jdbc:hive2://localhost:10015/default -n bob
Connecting to jdbc:hive2://localhost:10015/default
Connected to: Spark SQL (version 1.6.2)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-1245)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.5.0.0-1245 by Apache Hive
07-18-2018
04:04 PM
When a user connects to Spark SQL using JDBC, no password is prompted. How can PAM-based authentication be enabled to authenticate users connecting to Spark SQL through JDBC (beeline)? We do not have Kerberos, ACLs, or Ranger enabled and cannot use them due to custom software.

beeline -u jdbc:hive2://localhost:10015/default -n hive
Connecting to jdbc:hive2://localhost:10015/default
Connected to: Spark SQL (version 1.6.2)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-1245)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.5.0.0-1245 by Apache Hive
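For reference, these are the standard HiveServer2 PAM properties; whether the Spark 1.6 Thrift Server honours them needs verifying on your build, and the conf path below is an assumption for a typical HDP layout:

grep -B1 -A2 'hive.server2.authentication' /usr/hdp/current/spark-client/conf/hive-site.xml
# the relevant properties would be:
#   hive.server2.authentication              = PAM
#   hive.server2.authentication.pam.services = login,sshd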
Labels:
- Apache Hive
- Apache Spark
07-10-2018
02:03 PM
@Scott Shaw, @Jay Kumar SenSharma: the main problem is that we use a customized environment and cannot upgrade to HDP 2.6 because of third-party dependencies. Hence I asked whether LLAP is stable enough in HDP 2.5 to be used in a production environment, considering that an upgrade to HDP 2.6 is out of scope.
07-10-2018
01:27 PM
We have HDP 2.5 in a production environment and cannot upgrade due to business reasons. We want to enable LLAP and want to confirm whether it is stable enough in HDP 2.5.
Labels:
06-29-2018
12:00 PM
@Geoffrey Shelton Okot, @ARUN, @Aravindan Vijayan, @Abhishek Reddy Chamakura, @Karan Alang: Did you find a permanent solution for this issue? We are getting the error "KeeperErrorCode = NodeExists for /ams-hbase-secure/namespace/hbase" and are facing the same issue. Email address: tauqeerkhan@outlook.com
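In case it helps others, the commonly suggested cleanup (a hedged sketch; verify what you are deleting before running rmr) is to stop Ambari Metrics and remove the stale znode with the ZooKeeper CLI shipped with AMS HBase; the path below is the HDP default:

/usr/lib/ams-hbase/bin/hbase zkcli    # opens the ZooKeeper CLI
# inside the CLI:
#   ls /ams-hbase-secure              # inspect the stale tree first
#   rmr /ams-hbase-secure             # remove it, then restart AMS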
03-21-2017
11:11 AM
HDP 2.4, Ambari 2.4.2
03-21-2017
11:09 AM
Pig service check is failing with a "Can't get Master Kerberos principal for use as renewer" error. Logs follow:
Can't get Master Kerberos principal for use as renewer
at org.apache.pig.newplan.logical.visitor.InputOutputFileValidatorVisitor.visit(InputOutputFileValidatorVisitor.java:95)
at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.walk(DepthFirstWalker.java:53)
at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
at org.apache.pig.newplan.logical.relational.LogicalPlan.validate(LogicalPlan.java:212)
at org.apache.pig.PigServer$Graph.compile(PigServer.java:1808)
at org.apache.pig.PigServer$Graph.access$300(PigServer.java:1484)
at org.apache.pig.PigServer.execute(PigServer.java:1397)
at org.apache.pig.PigServer.executeBatch(PigServer.java:456)
at org.apache.pig.PigServer.executeBatch(PigServer.java:439)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:631)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:142)
at org.apache.pig.newplan.logical.visitor.InputOutputFileValidatorVisitor.visit(InputOutputFileValidatorVisitor.java:69)
... 24 more
========================================================================
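This error usually indicates that the client configuration on the failing host is missing the ResourceManager principal; two hedged checks:

grep -A1 'yarn.resourcemanager.principal' /etc/hadoop/conf/yarn-site.xml
klist    # confirm the service-check user holds a valid Kerberos ticket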
Labels:
- Apache Pig
01-23-2017
08:15 PM
Hello Sami, can you please share the solution? I am also facing the same issue.
10-16-2016
07:05 AM
How do we reset the admin password that was set while enabling Kerberos? Kerberos was previously enabled, but we disabled it. When trying to enable Kerberos again, the admin username and password are required. How can this password be reset? Or is there any way to enable Kerberos without knowing the password? HDP 2.4.
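For anyone else stuck here: the password Ambari asks for belongs to the KDC admin principal (often admin/admin), not to Ambari itself, so it can be reset on the KDC host. A hedged sketch for an MIT KDC; the realm is a placeholder:

kadmin.local -q "change_password admin/admin@EXAMPLE.COM"    # prompts for the new password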
Labels:
- Apache Ambari
09-23-2016
07:49 AM
OS: RHEL 6.8. The user doesn't have yum privileges. Is there any other way to install Ambari (with no internet access) without using yum?
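One hedged avenue, not a supported install path: RPMs fetched from a local Ambari repository mirror can be unpacked without yum, though this skips dependency resolution and the packages' install scripts (filename is a placeholder):

rpm2cpio ambari-server-2.4.2.0.x86_64.rpm | cpio -idmv    # extracts the payload under the current directory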
Labels: