Member since: 12-28-2016
10 Posts
1 Kudos Received
0 Solutions
11-16-2017
10:57 AM
Guys, please let me know when you will fix this issue. I was using HDP 2.5 and I need Zeppelin. HDP 2.5 requires installing Spark 1.x for Zeppelin, but I need Spark 2 for my project. To avoid that, I planned to install HDP 2.6 with Zeppelin 0.7, and now I am stuck with this issue, which prevents me from creating the cluster.
01-17-2017
09:59 AM
I am trying to execute a Spark job from my remote machine. As described in http://theckang.com/2015/remote-spark-jobs-on-yarn/, I downloaded the yarn-site.xml and core-site.xml files, exported their directory as HADOOP_CONF_DIR, and installed Spark as mentioned in the URL. When executing

spark-2.0.2-bin-hadoop2.7/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn /home/hanna/hadoop/spark-2.0.2-bin-hadoop2.7/examples/jars/spark-examples* 1

I get the exception below:

Container id: container_e02_1484635534666_0011_01_000003
Exit code: 1
Exception message: /hadoopfs/fs1/yarn/nodemanager/usercache/hanna/appcache/application_1484635534666_0011/container_e02_1484635534666_0011_01_000003/launch_container.sh: line 19: $PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
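The unexpanded ${hdp.version} placeholder near the end of that classpath is what triggers the "bad substitution": bash cannot resolve it inside launch_container.sh. A commonly reported workaround (an assumption that it applies to this setup; the version string below is a placeholder for the cluster's real HDP version) is to pin the version in the client-side Spark config:

# spark-2.0.2-bin-hadoop2.7/conf/spark-defaults.conf
spark.driver.extraJavaOptions     -Dhdp.version=2.5.x.x-xxxx
spark.yarn.am.extraJavaOptions    -Dhdp.version=2.5.x.x-xxxx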
01-17-2017
09:42 AM
Hi, have you succeeded? I think I am facing the same error: launching a Spark job from a remote machine, also with the downloaded yarn-site and core-site XMLs. I installed the Spark client and used spark-submit against the YARN cluster, but it fails with the exception message:

/hadoopfs/fs1/yarn/nodemanager/usercache/hanna/appcache/application_1484635534666_0010/container_e02_1484635534666_0010_02_000005/launch_container.sh: line 19: $PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
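One quick check (a sketch; it assumes the unexpanded placeholder comes in through the downloaded site XMLs) is to look for the literal ${hdp.version} string in the client-side configs:

grep -n 'hdp.version' $HADOOP_CONF_DIR/*.xml   # any hits are handed to the container unexpanded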
01-11-2017
12:01 PM
Made it possible by editing pg_hba.conf under /var/lib/pgsql9/data: I widened the access that was granted only to postgres to all, and changed the authentication method from peer to trust.
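In pg_hba.conf terms, the edit looks roughly like this (a sketch; the exact pre-existing line varies by install, and note that trust disables authentication for local connections entirely, so it is only reasonable on a locked-down node):

# /var/lib/pgsql9/data/pg_hba.conf
# before: local connections limited to the postgres OS user via peer auth
#   local   all   postgres   peer
# after: any local OS user may connect as any role, with no password check
local   all   all   trust

PostgreSQL must be reloaded (for example, pg_ctl reload) for the change to take effect.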
01-11-2017
09:08 AM
@emaxwell The need is not to create a user within Postgres, but to connect to Postgres and run psql as a machine user (for example, the cloudbreak user). I am able to run psql as the postgres user, but not as others. Can you tell me how I can let non-postgres users run psql commands? Also, can you share the postgres user's password, so that I can work around this by granting the postgres user's rights to other users?
01-10-2017
02:49 PM
I want to access the Postgres DB server from a newly created user, say X. I could create databases and users within Postgres by switching to the postgres user, but when my component is deployed it needs to run as my newly created user X and execute a few Postgres commands. However, from user X, even just typing psql gives the following error:

psql: FATAL: no pg_hba.conf entry for host "[local]", user "jenkins", database "jenkins", SSL off

How can I give my custom user the same permissions as the postgres user? All of this happens on the master node of a cluster created via the Spark+Hive CloudController template.
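What works versus what fails looks roughly like this (a sketch; the jenkins role and database names are taken from the error message above):

# works: create a role and database while running as the postgres OS user
sudo su - postgres
createuser jenkins
createdb -O jenkins jenkins
exit

# fails: connecting as any other OS user is rejected before authentication
psql   # -> FATAL: no pg_hba.conf entry for host "[local]" ...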
01-10-2017
01:16 PM
I am running Cloudbreak with a master node and 2 worker nodes. I have installed my WAR on the master node, listening on port 50031. When I try to access publicip:port, I get a 503 error. I modified the security group of the master node to allow all traffic from anywhere, but still no success. However, I am able to connect via telnet (telnet publicip port). Am I missing something?
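The two checks side by side (a sketch; substitute the real public IP):

telnet <publicip> 50031            # TCP connect succeeds, so the port itself is reachable
curl -v http://<publicip>:50031/   # shows which server returns the 503 and with what headers

Since a 503 means something is answering on that port, inspecting the response headers should reveal whether it is the WAR's own container or an intermediate proxy refusing the request.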
01-09-2017
01:20 PM
1 Kudo
I am using the Hortonworks Data Cloud solution, which installs the cluster via the Cloud Controller and brings in Postgres 9 by default. I am also trying to install our custom components on the master node, and they demand Postgres 8. How can I achieve this, if it is possible at all?
01-07-2017
08:09 PM
With the hdfs user I can create directories, but when I scp files from my local machine to the cloudbreak user on the master node and then want to put those files into HDFS, the hdfs user does not have permission to look inside /home/cloudbreak/. Please suggest how to move files from my local machine into HDFS.
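One way around the permission mismatch (a sketch; the file name and HDFS destination are placeholders, and it assumes the cloudbreak user can sudo to hdfs) is to stage files in a world-readable location such as /tmp instead of the home directory:

# from the local machine: stage the file into /tmp on the master node
scp myfile.csv cloudbreak@<master-public-ip>:/tmp/

# on the master node: load it into HDFS as the hdfs superuser
sudo -u hdfs hadoop fs -put /tmp/myfile.csv /user/hdfs/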
01-07-2017
11:38 AM
http://hortonworks.github.io/hdp-aws/using/index.html#switching-to-the-admin-user

I used the CloudController to install the cluster, and creation went fine. I was then trying to write to HDFS. As mentioned in the URL, I tried to switch to the admin user from the cloudbreak user, but encountered the error below.

[cloudbreak@ip-10-0-1-253 ~]$ sudo su - admin
su: user admin does not exist

Read access:

[cloudbreak@ip-10-0-1-253 ~]$ hadoop fs -ls /user
Found 6 items
drwxr-xr-x   - admin     hdfs   0 2017-01-07 10:37 /user/admin
drwxrwx---   - ambari-qa hdfs   0 2017-01-07 10:32 /user/ambari-qa
drwxr-xr-x   - hcat      hdfs   0 2017-01-07 10:34 /user/hcat
drwxr-xr-x   - hive      hdfs   0 2017-01-07 10:36 /user/hive
drwxrwxr-x   - spark     hdfs   0 2017-01-07 10:34 /user/spark
drwxr-xr-x   - yarn      hdfs   0 2017-01-07 10:37 /user/yarn

Write access:

hadoop fs -mkdir /user/ariya
17/01/07 11:34:45 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.mkdirs over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=cloudbreak, access=WRITE, inode="/user/ariya":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
    at org.apache.hadoop.ipc.Client.call(Client.java:1496)
    at org.apache.hadoop.ipc.Client.call(Client.java:1396)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:603)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3061)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3031)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1162)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1158)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1158)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1150)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913)
    at org.apache.hadoop.fs.shell.Mkdir.processNonexistentPath(Mkdir.java:76)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:273)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:350)
mkdir: Permission denied: user=cloudbreak, access=WRITE, inode="/user/ariya":hdfs:hdfs:drwxr-xr-x
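The listing shows each /user subdirectory owned by its service user, while the parent is owned by hdfs and not writable by others, which is why the mkdir as cloudbreak is denied. A common way to provision a writable home directory (a sketch; it assumes passwordless sudo to the hdfs superuser is available on the master node):

sudo -u hdfs hadoop fs -mkdir -p /user/cloudbreak
sudo -u hdfs hadoop fs -chown cloudbreak:cloudbreak /user/cloudbreak
hadoop fs -mkdir /user/cloudbreak/data   # now succeeds when run as cloudbreak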