Member since: 11-12-2018
Posts: 218
Kudos Received: 179
Solutions: 35

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 349 | 08-08-2025 04:22 PM
 | 423 | 07-11-2025 08:48 PM
 | 649 | 07-09-2025 09:33 PM
 | 1132 | 04-26-2024 02:20 AM
 | 1490 | 04-18-2024 12:35 PM
06-05-2020
07:58 PM
1 Kudo
You can try with spark-shell --conf spark.hadoop.hive.exec.max.dynamic.partitions=xxxxx. For example:

$ spark-shell --conf spark.hadoop.hive.exec.max.dynamic.partitions=30000
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hostname:port
Spark context available as 'sc' (master = yarn, app id = application_xxxxxxxxxxxx_xxxx).
Spark session available as 'spark'.
Welcome to Spark version 2.x.x.x.x.x.x-xx
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.

scala> spark.sqlContext.getAllConfs.get("spark.hadoop.hive.exec.max.dynamic.partitions")
res0: Option[String] = Some(30000)

Ref: SPARK-21574
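If the job runs through spark-submit rather than an interactive shell, the same --conf flag can be passed there as well. A minimal sketch follows; the class name, jar path, and partition limit are placeholders, not values from this thread:

```
# Sketch only: substitute your own application class, jar path, and an
# appropriate partition limit for your workload.
spark-submit \
  --master yarn \
  --conf spark.hadoop.hive.exec.max.dynamic.partitions=30000 \
  --class com.example.MySparkJob \
  /path/to/my-spark-job.jar
```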
05-28-2020
09:07 PM
2 Kudos
Hi @Karan1211,

User 'admin' does not have access to create a directory under /user because the /user directory is owned by "hdfs" with 755 permissions, so only the hdfs user can write to it. If you want to create a home directory for admin so you can store files there, do:

sudo -u hdfs hdfs dfs -mkdir /user/admin
sudo -u hdfs hdfs dfs -chown admin /user/admin

Then as admin you can do:

hdfs dfs -put file /user/admin/

NOTE: If you get the authentication error below, your user account does not have enough permission to run the above commands; try them with sudo, or switch to the hdfs user first and then run the chown command as hdfs.

su: authentication failure

I hope this helps.
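To confirm the change took effect, listing /user should now show a directory owned by admin. The output below is illustrative only and will differ in your cluster:

```
hdfs dfs -ls /user
# Illustrative output -- owner/group/timestamp will vary:
# drwxr-xr-x   - admin hdfs          0 2020-05-28 21:00 /user/admin
```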
04-23-2020
08:03 PM
1 Kudo
Can you please check with your internal Linux/network team for further support? It seems you have an internal connectivity issue when connecting to the node from the IntelliJ IDEA machine. Once you resolve the connection issue, we can check further.
04-23-2020
10:35 AM
WebHDFS is disabled for our cluster. Are there any other options?
04-09-2020
01:56 AM
1 Kudo
Hi @drgenious,

Are you getting an error similar to the one reported in KUDU-2633? That appears to be an open JIRA reported in the community:

ERROR core.JobRunShell: Job DEFAULT.EventKpisConsumer threw an unhandled Exception:
org.apache.spark.SparkException: Job aborted due to stage failure: Aborting TaskSet 109.0 because task 3 (partition 3) cannot run anywhere due to node and executor blacklist. Blacklisting behavior can be configured via spark.blacklist.*.

If you have the data in HDFS in csv/avro/parquet format, you can use the command below to import the files into a Kudu table.

Prerequisite: a Kudu jar with a compatible version (1.6 or higher).

For reference:

spark2-submit --master yarn/local \
  --class org.apache.kudu.spark.tools.ImportExportFiles \
  <path of kudu jar>/kudu-spark2-tools_2.11-1.6.0.jar \
  --operation=import \
  --format=<parquet/avro/csv> \
  --master-addrs=<kudu master host>:<port number> \
  --path=<hdfs path for data> \
  --table-name=impala::<table name>

Hope this helps. Please accept the answer and vote up if it did.
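As a purely illustrative example of filling in the placeholders (assuming Parquet data), it might look like the sketch below; the jar path, Kudu master host, HDFS path, and table name are hypothetical and must be replaced with your own values:

```
# Hypothetical values for illustration only -- substitute your own jar path,
# Kudu master host/port (7051 is the default master RPC port), HDFS path,
# and Impala-managed table name.
spark2-submit --master yarn \
  --class org.apache.kudu.spark.tools.ImportExportFiles \
  /opt/kudu/kudu-spark2-tools_2.11-1.6.0.jar \
  --operation=import \
  --format=parquet \
  --master-addrs=kudu-master-1.example.com:7051 \
  --path=/data/landing/events_parquet \
  --table-name=impala::default.events
```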
04-05-2020
07:13 PM
Hi @jagadeesan, just a quick question: if I originally installed the CDH cluster using parcels, is it OK to upgrade using packages instead of parcels, or does that not have any effect?
03-31-2020
01:35 AM
Hi @sppandita85BLR,

Currently, there is no documented procedure to migrate from HDP. In these cases, it's best to engage your local Cloudera account rep and professional services; they can help you with a runbook for the migration or assess other feasible options. Hope this helps. Please accept the answer and vote up if it did.

Regards,
03-30-2020
12:11 AM
Hi @jagadeesan, I want the result to show the workflow name, status, and start and end time. We have 200 jobs and currently collect this manually every day, so we need it updated automatically. For example:

workflow name: Ingestion Daily | status: Completed | start: 9.00 PM | end: 7.36 AM
03-28-2020
03:51 AM
Nice article @jagadeesan
03-27-2020
01:52 AM
Hi @JasmineD,

We might need to consider backing up the following:

- flow.xml.gz
- users.xml
- authorizations.xml
- All config files in the NiFi conf directory
- NiFi local state from each node
- NiFi cluster state stored in ZooKeeper

Please make sure that you have stored the configuration passwords safely. NiFi relies on the sensitive.props.key password to decrypt sensitive property values from the flow.xml.gz file. If you do not know the sensitive props key, you would need to manually clear all encoded values from flow.xml.gz; this clears all passwords in all components on the canvas, and they would all have to be re-entered once NiFi is recovered. Also, any local files required by the DataFlows should be backed up as well (i.e., custom processor jars, user-built scripts, externally referenced config/jar files used by some processors, etc.).

Note: All the repositories in NiFi are backed up by default. Here is a good article on how backup works in NiFi: https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418

Hope this helps. Please accept the answer and vote up if it did.
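As a rough per-node illustration of backing up those files, a script might look like the sketch below. The install path (/opt/nifi/nifi-current), the local state directory (state/local), and the backup destination are assumptions and will differ per environment:

```
#!/usr/bin/env bash
# Minimal sketch of a per-node NiFi config backup, assuming an install under
# /opt/nifi/nifi-current with default conf/ and state/local locations.
set -euo pipefail

NIFI_HOME=/opt/nifi/nifi-current                       # assumed install location
BACKUP_DIR=/backup/nifi/$(hostname)-$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"

# Flow definition and user/policy files (users.xml/authorizations.xml may not
# exist if a file-based authorizer is not in use).
cp "$NIFI_HOME/conf/flow.xml.gz"        "$BACKUP_DIR/"
cp "$NIFI_HOME/conf/users.xml"          "$BACKUP_DIR/" 2>/dev/null || true
cp "$NIFI_HOME/conf/authorizations.xml" "$BACKUP_DIR/" 2>/dev/null || true

# All config files in the conf directory, plus the node's local state
tar -czf "$BACKUP_DIR/conf.tar.gz"  -C "$NIFI_HOME" conf
tar -czf "$BACKUP_DIR/state.tar.gz" -C "$NIFI_HOME" state/local   # assumed local state path

echo "Backup written to $BACKUP_DIR"
```

Cluster state held in ZooKeeper is not captured by a script like this and would need to be backed up separately, for example through your existing ZooKeeper backup process.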