Member since: 11-22-2019
Posts: 7
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3133 | 12-10-2019 07:17 PM |
12-20-2019
07:57 PM
I am logged into Ambari as the 'admin' user.
I navigated to Files View >> 'user' directory >> Create a New Folder, entered the folder name, and clicked the 'Add' button.
But I get this server message:
Permission denied: user=admin, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
How can I get write permission on the 'user' directory so that I can add a folder or write something to it? I would appreciate the community's help. Thanks.
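For reference, the usual fix for this error is to have the HDFS superuser create a home directory for 'admin' and hand over ownership, rather than loosening permissions on /user itself. A sketch, assuming the 'admin' user name and /user path from the error message above, and 'hdfs' as the superuser (the HDP default):

```shell
# Run as the HDFS superuser (typically 'hdfs' on HDP):
# create a home directory for 'admin' and make admin its owner.
sudo -u hdfs hdfs dfs -mkdir -p /user/admin
sudo -u hdfs hdfs dfs -chown admin:hdfs /user/admin
# Verify the new directory and its ownership
sudo -u hdfs hdfs dfs -ls /user
```

After this, creating folders under /user/admin from the Files View should succeed, because 'admin' owns that directory.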
Labels:
- Apache Ambari
12-16-2019
07:27 PM
@Shelton, you are awesome and are doing your best to help me, but I am just a beginner, so I am missing the finer details. I followed the steps you provided, but the issue persists; please see the screenshot. I hope to gain even a pinch of what you have mastered. Thanks.
12-15-2019
06:55 PM
@Shelton, many thanks for your detailed response. I have attached a screenshot of the Ambari 'Files View'; I want to know whether it shows the local file system or HDFS. I ran the commands you suggested, but when I tried to run "hdfs dfs -copyFromLocal /tmp/data/riskfactor1.csv /tmp/data", I got the message "copyFromLocal: `/tmp/data/riskfactor1.csv': No such file or directory". I am not sure what I am doing wrong. Thanks again for your help; eagerly awaiting your response.
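For what it's worth, `copyFromLocal` reads its source from the local filesystem, so this error usually means the file is missing on the local disk, not in HDFS. A sketch of how to check, using the path and filename from the command above:

```shell
# 1. Confirm the file exists on the LOCAL filesystem first;
#    copyFromLocal takes a local source and an HDFS destination.
ls -l /tmp/data/riskfactor1.csv
# 2. Make sure the HDFS target directory exists
hdfs dfs -mkdir -p /tmp/data
# 3. Copy local -> HDFS, then verify the upload
hdfs dfs -copyFromLocal /tmp/data/riskfactor1.csv /tmp/data
hdfs dfs -ls /tmp/data
```

If step 1 fails, the file needs to be downloaded or created locally (or copied into the sandbox VM) before the HDFS copy can work.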
12-10-2019
07:17 PM
I restarted all the interpreters in the settings section (spark2, angular, jdbc, livy2, md) and saved the changes. Then I re-ran the code and it was successful. Thanks.
12-10-2019
07:09 PM
I am running the following commands in Zeppelin. First I created a Hive context:
val hiveContext = new org.apache.spark.sql.SparkSession.Builder().getOrCreate()
Then I tried to load a file from HDFS:
val riskFactorDataFrame = spark.read.format("csv").option("header", "true").load("hdfs:///tmp/data/riskfactor1.csv")
But I get the following error message:
"org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://sandbox-hdp.hortonworks.com:8020/tmp/data/riskfactor1.csv;"
I am quite new to Hadoop. Please help me figure out what I am doing wrong.
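As a first check, the "Path does not exist" error can be confirmed from a shell before involving Spark at all. A sketch, using the host and path taken from the error message above:

```shell
# List the directory Spark is trying to read; if this fails or is empty,
# the file was never uploaded to HDFS at that path.
hdfs dfs -ls hdfs://sandbox-hdp.hortonworks.com:8020/tmp/data/
# Equivalent short form, using the cluster's default filesystem:
hdfs dfs -ls /tmp/data/
```

If the file is not listed, it needs to be copied into HDFS first (for example with `hdfs dfs -copyFromLocal`); the Spark load call itself is fine.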
Labels:
- Apache Spark
- Apache Zeppelin
12-07-2019
03:21 AM
I installed HDP 3.0, which includes Spark 2.3.1. I was running SQL commands successfully in a Zeppelin notebook until a Windows update suddenly restarted the machine. When I opened the Zeppelin notebook again and ran the first command
%spark2
val hiveContext = new org.apache.spark.sql.SparkSession.Builder().getOrCreate()
I got the following error log:
java.lang.NullPointerException
    at org.apache.thrift.transport.TSocket.open(TSocket.java:209)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
    at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:62)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:133)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:407)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
I am clueless; please help me out of this issue. Thanks.
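A NullPointerException in TSocket.open at this point generally means Zeppelin's remote interpreter process died with the machine, so restarting the interpreters (as in the 12-10-2019 post above) or the whole Zeppelin service clears it. A sketch of restarting Zeppelin through Ambari's REST API; the host, cluster name "Sandbox", and admin:admin credentials are HDP-sandbox assumptions, not from the thread:

```shell
# Stop the Zeppelin service (Ambari represents "stopped" as state INSTALLED)...
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Zeppelin"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://sandbox-hdp.hortonworks.com:8080/api/v1/clusters/Sandbox/services/ZEPPELIN
# ...then start it again.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Zeppelin"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://sandbox-hdp.hortonworks.com:8080/api/v1/clusters/Sandbox/services/ZEPPELIN
```

The same restart can also be done from the Ambari UI (Services >> Zeppelin Notebook >> Actions >> Restart All).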
11-22-2019
10:56 PM
Hi @mqureshi, you have explained this beautifully. But how does block replication affect this calculation? Please explain. Regards.
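To make the replication question concrete: each HDFS block is stored `replication` times, so the raw cluster space consumed is the file size multiplied by the replication factor, while the logical block count of the file is unchanged. A back-of-the-envelope sketch with illustrative numbers (a 1 GB file, the 128 MB HDFS default block size, and the default replication factor of 3; none of these figures are from the thread):

```shell
# Illustrative numbers: 1 GB file, 128 MB blocks, replication factor 3
file_mb=1024
block_mb=128
repl=3
# Number of logical blocks = ceil(file size / block size)
blocks=$(( (file_mb + block_mb - 1) / block_mb ))
# Each block is stored $repl times, so raw storage = file size * replication
echo "$blocks blocks, $(( blocks * repl )) block replicas, $(( file_mb * repl )) MB raw storage"
# prints: 8 blocks, 24 block replicas, 3072 MB raw storage
```

So replication changes how much disk the cluster spends (3x here), not how many blocks the file is split into.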