Member since: 11-12-2018
Posts: 218
Kudos Received: 179
Solutions: 35

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 357 | 08-08-2025 04:22 PM
 | 440 | 07-11-2025 08:48 PM
 | 658 | 07-09-2025 09:33 PM
 | 1135 | 04-26-2024 02:20 AM
 | 1504 | 04-18-2024 12:35 PM
07-20-2020
12:27 AM
1 Kudo
Can you verify whether you followed all the steps listed in the documentation? https://docs.cloudera.com/runtime/7.1.1/ozone-storing-data/topics/ozone-setting-up-ozonefs.html
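Once the steps from that doc are in place, a quick sanity check like the sketch below can confirm the Ozone file system is reachable. The volume, bucket, Ozone Manager host, and test file names here are placeholders, not taken from your setup:

```
# Assumes fs.o3fs.impl is mapped to org.apache.hadoop.fs.ozone.OzoneFileSystem in core-site.xml
# and that volume "vol1" with bucket "bucket1" already exists (placeholder names)
hdfs dfs -put /tmp/testfile o3fs://bucket1.vol1.<om-host>/testfile
hdfs dfs -ls o3fs://bucket1.vol1.<om-host>/
hdfs dfs -cat o3fs://bucket1.vol1.<om-host>/testfile
```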
06-06-2020
09:38 PM
Hi @Ettery Can you try adding those properties in nifi.properties? The Docker configuration has been updated to allow proxy whitelisting from the run command, and host header protection is only enforced on "secured" NiFi instances. This should make it much easier to quickly deploy sandbox environments like you are doing in this case. You can also try passing -e NIFI_WEB_HTTP_HOST=<host> in the docker run command:

docker run --name nifi -p 9090:9090 -d -e NIFI_WEB_HTTP_PORT='9090' -e NIFI_WEB_HTTP_HOST=<host> apache/nifi:latest

There is also example configuration and documentation on GitHub for running NiFi behind a reverse proxy that you may be interested in. For more detail, refer to stackoverflow1 and stackoverflow2.
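For reference, a minimal sketch of the nifi.properties entries those environment variables correspond to; <host> is a placeholder for your hostname and 9090 simply mirrors the port used above:

```
# nifi.properties (unsecured sandbox sketch; <host> is a placeholder)
nifi.web.http.host=<host>
nifi.web.http.port=9090
```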
06-06-2020
09:15 PM
Glad to hear that you have finally found the root cause of this issue. Thanks for sharing @Heri
06-05-2020
07:58 PM
1 Kudo
You can try with spark-shell --conf spark.hadoop.hive.exec.max.dynamic.partitions=xxxxx:

$ spark-shell --conf spark.hadoop.hive.exec.max.dynamic.partitions=30000
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hostname:port
Spark context available as 'sc' (master = yarn, app id = application_xxxxxxxxxxxx_xxxx).
Spark session available as 'spark'.
Welcome to Spark version 2.x.x.x.x.x.x-xx
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.

scala> spark.sqlContext.getAllConfs.get("spark.hadoop.hive.exec.max.dynamic.partitions")
res0: Option[String] = Some(30000)

Ref: SPARK-21574
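The same configuration can be passed to a batch job as well; a minimal sketch, assuming a hypothetical application script and the same partition limit:

```
# spark-submit equivalent (app.py and the limit 30000 are placeholders)
spark-submit \
  --conf spark.hadoop.hive.exec.max.dynamic.partitions=30000 \
  app.py
```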
05-28-2020
09:07 PM
2 Kudos
Hi @Karan1211, User 'admin' does not have access to create a directory under /user because the /user/ directory is owned by "hdfs" with 755 permissions. As a result, only hdfs can write to that directory. If you want to create a home directory for admin so you can store files in it, do:

sudo -u hdfs hdfs dfs -mkdir /user/admin
sudo -u hdfs hdfs dfs -chown admin /user/admin

Then as admin you can do:

hdfs dfs -put file /user/admin/

NOTE: If you get the authentication error below, your user account does not have enough permission to run the above commands; try with sudo, or first switch to the hdfs user and then execute the mkdir/chown commands as hdfs.

su: authentication failure

I hope this helps.
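A quick way to verify the ownership change took effect (just a sketch; the exact listing output varies by cluster):

```
# The owner column for /user/admin should now show "admin"
hdfs dfs -ls /user | grep admin
```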
05-28-2020
08:46 PM
1 Kudo
Hi @Heri, I just want to add a few points here. You can use the PURGE option to delete the data files along with the partition metadata, but it works only on internal/managed tables:

ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec PURGE;

For external tables it is a two-step process: drop the partition, then remove the files:

ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec;
hdfs dfs -rm -r <partition file path>

I hope this gives some insight here. cc @aakulov
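A concrete sketch of both paths, run from the shell via beeline; the table names, partition value, JDBC URL, and warehouse path below are hypothetical placeholders:

```
# Managed table: PURGE drops the partition metadata and its data files in one step
beeline -u "jdbc:hive2://<hs2-host>:10000" -e "ALTER TABLE sales DROP IF EXISTS PARTITION (dt='2020-05-01') PURGE;"

# External table: drop the partition metadata, then remove the files yourself
beeline -u "jdbc:hive2://<hs2-host>:10000" -e "ALTER TABLE sales_ext DROP IF EXISTS PARTITION (dt='2020-05-01');"
hdfs dfs -rm -r /data/external/sales_ext/dt=2020-05-01
```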
04-23-2020
08:03 PM
1 Kudo
Can you please check with your internal Linux/network team for further support? It seems you have an internal connectivity issue when connecting to the node from the IntelliJ IDEA machine. Once you resolve the connection issue we can check further.
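As a starting point for that conversation, some basic reachability checks from the IntelliJ IDEA machine; the host and port below are placeholders for your environment:

```
# Basic connectivity checks (placeholder host/port)
ping <cluster-node-host>
telnet <cluster-node-host> <service-port>
# or, if telnet is unavailable:
nc -vz <cluster-node-host> <service-port>
```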
04-23-2020
07:57 PM
1 Kudo
Can you add the below property, hive.exec.max.dynamic.partitions=2000, to <spark_home>/conf/hive-site.xml and <hive-home>/conf/hive-site.xml?

<property>
  <name>hive.exec.max.dynamic.partitions</name>
  <value>2000</value>
  <description></description>
</property>

Hope this helps. Please accept the answer and vote up if it did.
Note: If it doesn't take effect, restart HiveServer2 and the Spark History Server.
-JD
04-21-2020
12:28 PM
1 Kudo
Can you try the below article? https://saagie.zendesk.com/hc/en-us/articles/360021384151-Read-Write-files-from-HDFS
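Independent of that article, a minimal shell sketch of the basic HDFS read/write round trip; the user name and file paths are placeholders:

```
# Write a local file into HDFS and read it back (placeholder paths)
hdfs dfs -mkdir -p /user/<your-user>/demo
hdfs dfs -put /tmp/localfile.txt /user/<your-user>/demo/
hdfs dfs -cat /user/<your-user>/demo/localfile.txt
hdfs dfs -get /user/<your-user>/demo/localfile.txt /tmp/copy.txt
```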
04-21-2020
10:31 AM
1 Kudo
Hi @w12q12, as per the below error in the log trace:

20/04/21 18:20:50 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1067413441-127.0.0.1-1508775264580:blk_1073743149_2345 file=/data/ratings.csv
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:930)
....
Caused by: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1067413441-127.0.0.1-1508775264580:blk_1073743149_2345 file=/data/ratings.csv
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:930)

it seems the DataNode holding that block could not be reached when you ran the command. Can you please try to ping and telnet from the NameNode to the DataNode and vice versa, and also check whether you have any corrupt blocks or files in the cluster?
~JD
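A quick way to check for missing or corrupt blocks is hdfs fsck; a sketch (run it as the HDFS superuser if your account lacks permission):

```
# Check the specific file's block locations and health
hdfs fsck /data/ratings.csv -files -blocks -locations
# List any corrupt file blocks across the cluster
hdfs fsck / -list-corruptfileblocks
```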