Member since: 08-02-2018
Posts: 14
Kudos Received: 2
Solutions: 0
08-27-2018
10:14 PM
2 Kudos
Hi @Jay Kumar SenSharma, these instructions didn't work for me when I tried them on my cluster with HDP 3.0 and Zeppelin 0.8.0. I noticed that in my initial "Advanced zeppelin-shiro-ini" the passwords are stored as hashed strings, for example:
user3 = $shiro1$SHA-256$500000$nf0GzH10GbYVoxa7DOlOSw==$ov/IA5W8mRWPwvAoBjNYxg3udJK0EmrVMvFCwcr9eAs=, role2
I then tried adding a new user either like this:
newuser = newuserpassword
or like this:
newuser = newuserpassword, newrole
Neither worked. Am I missing something in the settings? To clarify, my goal is to add a new Zeppelin user named `newuser`. Thanks!
=== Update ===
I found two more lines in the `[main]` section of my "Advanced zeppelin-shiro-ini", under the comment "## To be commented out when not using [user] block / paintext":
passwordMatcher = org.apache.shiro.authc.credential.PasswordMatcher
iniRealm.credentialsMatcher = $passwordMatcher
Per the Apache Shiro configuration documentation, a string starting with `$shiro` is a hash of the password. I commented out those two lines, and passwords stored in plain text in "Advanced zeppelin-shiro-ini" work now.
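For reference, the relevant part of my "Advanced zeppelin-shiro-ini" now looks roughly like this (the newuser entry is just the example from above, not a required account):
[main]
## To be commented out when not using [user] block / paintext
# passwordMatcher = org.apache.shiro.authc.credential.PasswordMatcher
# iniRealm.credentialsMatcher = $passwordMatcher

[users]
newuser = newuserpassword, newrole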
08-23-2018
05:16 PM
Hey, thanks Felix. I figured out it was actually neither Spark nor the firewall; it was an extra network adapter created by VirtualBox.
08-22-2018
06:36 PM
I figured out what went wrong... It actually had nothing to do with Spark or Windows Firewall, but with VirtualBox. My Windows machine has VirtualBox installed and hosts a guest VM. VirtualBox creates a network adapter called something like "VirtualBox Host-Only Network", which has a different IP address than the actual network adapter. In my case, the actual network adapter is on a LAN with IP address 10.100.1.61, while the VirtualBox Host-Only Network has the IP address 192.168.56.1. I solved the issue by disabling the VirtualBox Host-Only Network in Control Panel >> Network and Internet >> Network Connections.
I found this by first running `pyspark` in PowerShell, then running `netstat -an | Select-String 50000`, and seeing something listening on 192.168.56.1:50000:
PS > netstat -an | sls 50000
TCP 192.168.56.1:50000 0.0.0.0:0 LISTENING
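For reference, the same check and fix can also be done from PowerShell (this assumes the culprit really is the VirtualBox host-only adapter and that the prompt is elevated):
# list IPv4 addresses per adapter; the host-only adapter is the one showing 192.168.56.1
Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress
# disable the VirtualBox host-only adapter, matching it by its interface description
Get-NetAdapter | Where-Object { $_.InterfaceDescription -like "*VirtualBox Host-Only*" } | Disable-NetAdapter -Confirm:$false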
08-22-2018
02:39 AM
I have an HDP cluster, version HDP 3.0.0.0. All machines in the cluster run Ubuntu 16.04.
I want to enable a Windows machine to connect to the cluster and run Spark on it.
So far I've managed to make Spark submit jobs to the cluster via `spark-submit --deploy-mode cluster --master yarn`.
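For reference, a command along the following lines works fine (the application path and name here are just placeholders):
spark-submit --master yarn --deploy-mode cluster --name clusterModeTest C:\path\to\example_app.py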
I'm having trouble running the `pyspark` interactive shell with `--deploy-mode client`, which, to my understanding, creates a driver process on the Windows machine. Right now, when I run `pyspark` in a Windows command-line console (specifically PowerShell), it always fails with the following output:
PS > pyspark --name pysparkTest8
Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2018-08-21 18:27:10 WARN DomainSocketFactory:117 - The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
2018-08-21 18:40:48 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
2018-08-21 18:40:48 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2018-08-21 18:40:48 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-08-21 18:40:48 WARN SparkContext:66 - Another SparkContext is being constructed (or threw an exception in its constructor).
This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:238)
py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
py4j.GatewayConnection.run(GatewayConnection.java:238)
java.lang.Thread.run(Thread.java:748)
2018-08-21 18:54:07 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
2018-08-21 18:54:07 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2018-08-21 18:54:07 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
Traceback (most recent call last):
File "C:\\python\pyspark\shell.py", line 54, in
spark = SparkSession.builder.getOrCreate()
File "C:\\python\pyspark\sql\session.py", line 173, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "C:\\python\pyspark\context.py", line 343, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "C:\\python\pyspark\context.py", line 118, in __init__
conf, jsc, profiler_cls)
File "C:\\python\pyspark\context.py", line 180, in _do_init
self._jsc = jsc or self._initialize_context(self._conf._jconf)
File "C:\\python\pyspark\context.py", line 282, in _initialize_context
return self._jvm.JavaSparkContext(jconf)
File "C:\\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1525, in _
_call__
File "C:\\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_re
turn_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
When I look at the YARN application logs, there's something worth noting in stderr:
Log Type: stderr
Log Upload Time: Tue Aug 21 18:50:14 -0700 2018
Log Length: 3774
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/11/spark2-hdp-yarn-archive.tar.gz/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/08/21 18:36:41 INFO util.SignalUtils: Registered signal handler for TERM
18/08/21 18:36:41 INFO util.SignalUtils: Registered signal handler for HUP
18/08/21 18:36:41 INFO util.SignalUtils: Registered signal handler for INT
18/08/21 18:36:41 INFO spark.SecurityManager: Changing view acls to: yarn,myusername
18/08/21 18:36:41 INFO spark.SecurityManager: Changing modify acls to: yarn,myusername
18/08/21 18:36:41 INFO spark.SecurityManager: Changing view acls groups to:
18/08/21 18:36:41 INFO spark.SecurityManager: Changing modify acls groups to:
18/08/21 18:36:41 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, myusername); groups with view permissions: Set(); users with modify permissions: Set(yarn, myusername); groups with modify permissions: Set()
18/08/21 18:36:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/21 18:36:42 INFO yarn.ApplicationMaster: Preparing Local resources
18/08/21 18:36:43 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
18/08/21 18:36:43 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1534303777268_0044_000001
18/08/21 18:36:44 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
18/08/21 18:38:51 ERROR yarn.ApplicationMaster: Failed to connect to driver at Windows-client-hostname:50000, retrying ...
18/08/21 18:38:51 ERROR yarn.ApplicationMaster: Uncaught exception:
org.apache.spark.SparkException: Failed to connect to driver!
at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:672)
at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:532)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:347)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply$mcV$sp(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$5.run(ApplicationMaster.scala:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:814)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:259)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:839)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:869)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
18/08/21 18:38:51 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
18/08/21 18:38:51 INFO util.ShutdownHookManager: Shutdown hook called
My suspicion is that the Windows client machine's firewall is blocking port 50000, because when I run telnet from one of the Ubuntu machines, I get "Connection timed out":
telnet windows-client-hostname 50000
Trying 10.100.1.61...
telnet: Unable to connect to remote host: Connection timed out
But I have specifically allowed ports 1025-65535 in Inbound Rules in Windows Firewall with Advanced Security (my Windows is Windows Server 2012 R2).
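In case it helps, the PowerShell equivalent of that rule, and a quick way to inspect it, would be roughly the following (the display name "Spark driver ports" is just an example label):
# confirm the rule exists and see which ports it covers
Get-NetFirewallRule -DisplayName "Spark driver ports" | Get-NetFirewallPortFilter
# creating such a rule from an elevated prompt would look like this
New-NetFirewallRule -DisplayName "Spark driver ports" -Direction Inbound -Protocol TCP -LocalPort "1025-65535" -Action Allow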
I have configured `spark.port.maxRetries` as suggested in this post, but it didn't change anything. My `spark-defaults.conf` on the Windows client machine looks like this:
spark.master yarn
spark.yarn.am.memory 4g
spark.executor.memory 5g
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.driver.maxResultSize 10g
spark.driver.memory 5g
spark.yarn.archive hdfs:///hdp/apps/3.0.0.0-1634/spark2/spark2-hdp-yarn-archive.tar.gz
spark.port.maxRetries 100
spark.driver.port 50000
At this point I am totally confused. Can someone give some hints on how to tackle this?
Thank you very much!
Labels: Apache Spark
08-07-2018
05:06 AM
Thanks @Akhil S Naik.
Unfortunately I don't see an `errors-2610.txt` or `output-2610` file in the `/var/lib/ambari-agent/data` directory on the Ambari server machine. There are many other errors-xxxx.txt files, but none for `-2610`...
But `/var/log/ambari-server/ambari-server.log` has something related to 2610:
cat ambari-server.log | grep -A 5 2610
2018-08-06 15:48:25,876 ERROR [ambari-action-scheduler] ActionScheduler:817 - Execution command has no timeout parameter{"clusterName":"citilabs_test_cluster","requestId":192,"stageId":-1,"taskId":2610,"commandId":"192--1","hostname":"_internal_ambari","role":"AMBARI_SERVER_ACTION","hostLevelParams":{},"roleParams":{"ACTION_USER_NAME":"ambari","ACTION_NAME":"org.apache.ambari.server.serveraction.users.PostUserCreationHookServerAction"},"roleCommand":"EXECUTE","clusterHostInfo":{},"configurations":{},"configurationAttributes":{},"configurationTags":{},"forceRefreshConfigTagsBeforeExecution":false,"commandParams":{"cmd-hdfs-principal":"NA","cmd-input-file":"/var/lib/ambari-server/data/tmp/user_hook_input_1533595705841.csv","cluster-security-type":"NONE","cmd-hdfs-user":"hdfs","cmd-payload":"{\"guozhen\":[]}","cmd-hdfs-keytab":"NA","hook-script":"/var/lib/ambari-server/resources/sripts/post-user-creation-hook.sh","cluster-name":"citilabs_test_cluster","cluster-id":"2"},"serviceName":"","kerberosCommandParams":[],"localComponents":[],"availableServices":{},"componentVersionMap":{"HIVE":{"HIVE_SERVER":"3.0.0.0-1634","HIVE_SERVER_INTERACTIVE":"3.0.0.0-1634","HIVE_METASTORE":"3.0.0.0-1634","HIVE_CLIENT":"3.0.0.0-1634"},"ZEPPELIN":{"ZEPPELIN_MASTER":"3.0.0.0-1634"},"SQOOP":{"SQOOP":"3.0.0.0-1634"},"HDFS":{"SECONDARY_NAMENODE":"3.0.0.0-1634","HDFS_CLIENT":"3.0.0.0-1634","ZKFC":"3.0.0.0-1634","NFS_GATEWAY":"3.0.0.0-1634","DATANODE":"3.0.0.0-1634","JOURNALNODE":"3.0.0.0-1634","NAMENODE":"3.0.0.0-1634"},"MAPREDUCE2":{"MAPREDUCE2_CLIENT":"3.0.0.0-1634","HISTORYSERVER":"3.0.0.0-1634"},"OOZIE":{"OOZIE_CLIENT":"3.0.0.0-1634","OOZIE_SERVER":"3.0.0.0-1634"},"TEZ":{"TEZ_CLIENT":"3.0.0.0-1634"},"ZOOKEEPER":{"ZOOKEEPER_SERVER":"3.0.0.0-1634","ZOOKEEPER_CLIENT":"3.0.0.0-1634"},"SPARK2":{"SPARK2_CLIENT":"3.0.0.0-1634","SPARK2_THRIFTSERVER":"3.0.0.0-1634","LIVY2_SERVER":"3.0.0.0-1634","SPARK2_JOBHISTORYSERVER":"3.0.0.0-1634"},"YARN":{"TIMELINE_READER":"3.0.0.0-1634","NODEMANAGER":"3.0.0.0-1634","YARN_CLIENT":"3.0.0.0-1634","APP_TIMELINE_SERVER":"3.0.0.0-1634","YARN_REGISTRY_DNS":"3.0.0.0-1634","RESOURCEMANAGER":"3.0.0.0-1634"}},"commandType":"EXECUTION_COMMAND"}
2018-08-06 15:48:25,917 INFO [Server Action Executor Worker 2610] PostUserCreationHookServerAction:131 - Validating command parameters ...
2018-08-06 15:48:25,917 INFO [Server Action Executor Worker 2610] PostUserCreationHookServerAction:158 - Command parameter validation passed.
2018-08-06 15:48:25,919 INFO [Server Action Executor Worker 2610] CsvFilePersisterService:106 - Persisting map data to csv file
2018-08-06 15:48:25,919 INFO [Server Action Executor Worker 2610] CsvFilePersisterService:82 - Persisting collection to csv file
2018-08-06 15:48:25,919 INFO [Server Action Executor Worker 2610] CsvFilePersisterService:86 - Collection successfully persisted to csv file.
2018-08-06 15:48:25,919 INFO [Server Action Executor Worker 2610] ShellCommandUtilityWrapper:48 - Running command: /var/lib/ambari-server/resources/sripts/post-user-creation-hook.sh
2018-08-06 15:48:25,923 ERROR [Server Action Executor Worker 2610] PostUserCreationHookServerAction:93 - Server action is about to quit due to an exception.
java.io.IOException: Cannot run program "/var/lib/ambari-server/resources/sripts/post-user-creation-hook.sh": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.ambari.server.utils.ShellCommandUtil.runCommand(ShellCommandUtil.java:457)
at org.apache.ambari.server.utils.ShellCommandUtil.runCommand(ShellCommandUtil.java:513)
at org.apache.ambari.server.utils.ShellCommandUtil.runCommand(ShellCommandUtil.java:526)
--
2018-08-06 15:48:25,924 WARN [Server Action Executor Worker 2610] ServerActionExecutor:471 - Task #2610 failed to complete execution due to thrown exception: org.apache.ambari.server.AmbariException:Server action execution failed to complete!
org.apache.ambari.server.AmbariException: Server action execution failed to complete!
at org.apache.ambari.server.serveraction.users.PostUserCreationHookServerAction.execute(PostUserCreationHookServerAction.java:94)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466)
at java.lang.Thread.run(Thread.java:745)
It's quite obvious now... the `ERROR` line says that the file `/var/lib/ambari-server/resources/sripts/post-user-creation-hook.sh` doesn't exist. I had missed a 'c' in 'scripts' in the path I configured. I corrected it, and user home directory creation worked!
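For anyone who hits the same thing, the quick sanity check is simply to confirm that the hook script actually exists at whatever path the hook is configured to use, e.g.:
ls -l /var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh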
Thanks for helping me out!
My HDFS config file has these settings:
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=vm-097
where `vm-097` is the Ambari server hostname. Should I be worried about this?
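For comparison, I assume the fully open variant of these proxyuser settings (which I have not applied) would be:
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*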
Finally, would you mind giving some advice on how to know which logs to check when problems occur? I would have had no idea where to locate the error message if it weren't for your help. (Thanks again!)
08-06-2018
11:19 PM
The issue:
I followed "Administering Ambari: Enable user home directory creation" to enable automatic creation of an HDFS home directory for each user added via Ambari. However, every time I create a user, Ambari's task list shows a failed task titled "Post user creation hook for [1] users", with the following content:
stderr: errors-2610.txt
Server action execution failed to complete!
stdout: output-2610.txt
Server action failed
My understanding is that this is because the `admin` user has no permission to modify the contents of the HDFS directory `/user`.
`hdfs dfs -ls` commands show the following:
$ hdfs dfs -ls /
Found 13 items
drwxrwxrwt - yarn hadoop 0 2018-08-06 15:04 /app-logs
drwxr-xr-x - hdfs hdfs 0 2018-08-05 21:05 /apps
drwxr-xr-x - yarn hadoop 0 2018-08-02 15:34 /ats
drwxr-xr-x - hdfs hdfs 0 2018-08-02 15:34 /atsv2
drwxr-xr-x - hdfs hdfs 0 2018-08-02 15:34 /hdp
drwx------ - livy hdfs 0 2018-08-02 15:50 /livy2-recovery
drwxr-xr-x - mapred hdfs 0 2018-08-02 15:34 /mapred
drwxrwxrwx - mapred hadoop 0 2018-08-02 15:35 /mr-history
drwxr-xr-x - hdfs hdfs 0 2018-08-02 15:34 /services
drwxrwxrwx - spark hadoop 0 2018-08-06 16:07 /spark2-history
drwxrwxrwx - hdfs hdfs 0 2018-08-05 20:53 /tmp
drwxr-xr-x - hdfs hdfs 0 2018-08-06 15:46 /user
drwxr-xr-x - hdfs hdfs 0 2018-08-03 00:25 /warehouse
$ hdfs dfs -ls /user
Found 8 items
drwxr-xr-x - admin hdfs 0 2018-08-06 15:46 /user/admin
drwxrwx--- - ambari-qa hdfs 0 2018-08-05 21:00 /user/ambari-qa
drwxr-xr-x - hive hdfs 0 2018-08-03 19:51 /user/hive
drwxrwxr-x - livy hdfs 0 2018-08-02 15:50 /user/livy
drwxrwxr-x - oozie hdfs 0 2018-08-05 20:54 /user/oozie
drwxrwxr-x - spark hdfs 0 2018-08-02 15:50 /user/spark
drwxrwx--- - yarn-ats hadoop 0 2018-08-03 19:46 /user/yarn-ats
drwxr-xr-x - zeppelin hdfs 0 2018-08-05 21:06 /user/zeppelin
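For context, this is the manual fallback I would otherwise run by hand on a cluster node (a sketch; `newuser` stands in for the newly added account):
sudo -u hdfs hdfs dfs -mkdir /user/newuser
sudo -u hdfs hdfs dfs -chown newuser:hdfs /user/newuser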
My questions are:
Is the permission problem the reason Ambari failed to run `post-user-creation-hook.sh`?
If yes, how do I grant the `admin` user sufficient permissions?
If not, what might be causing the failure?
Thanks!
Labels: Apache Ambari, Apache Hadoop