Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 866 | 06-04-2025 11:36 PM |
| | 1438 | 03-23-2025 05:23 AM |
| | 718 | 03-17-2025 10:18 AM |
| | 2587 | 03-05-2025 01:34 PM |
| | 1710 | 03-03-2025 01:09 PM |
04-02-2018 02:33 PM
@Praveen Atmakuri If you changed the configuration through Ambari, did you restart the stale services?

You can use the -skipTrash option, which bypasses the trash entirely; very handy for releasing disk space in an emergency:

$ hdfs dfs -rm -R -skipTrash /xxxx/22/feb/*

If you set fs.trash.interval=60, files you delete are moved to the .Trash directory and cleared after roughly one hour: fs.trash.interval is the number of minutes after which the trash checkpoint gets deleted. It's advisable NOT to set it to 0, because that disables the trash feature altogether.

Hope that helps
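For illustration, here is a minimal sketch of the trash behaviour end to end (the file path is a placeholder; the trash location assumes a default HDFS setup):

# A normal delete moves the file into the current trash checkpoint
$ hdfs dfs -rm /tmp/old_report.csv
# Inspect what is sitting in trash for the current user
$ hdfs dfs -ls /user/$USER/.Trash/Current
# Force a new checkpoint and delete expired ones, freeing space early
$ hdfs dfs -expunge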
04-02-2018 12:14 PM
@Nilesh
1. What is log.message.format.version in Kafka?
It translates to your current Kafka version, or to the value of inter.broker.protocol.version.
2. What is the use of log.message.format.version in Kafka?
It is used during a rolling restart for an upgrade; it also specifies the format version the broker will use to append messages to the logs. Setting it incorrectly will cause consumers on older versions to break, as they will receive messages in a format they don't understand.
3. What could be the version of log.message.format.version?
It should be (or translate to) your current Kafka version, e.g. 0.10.0, 0.10.1, 0.10.2, or 0.11.0. These two parameters should be the same:

inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION

Hope that helps
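As a concrete (hypothetical) sketch of that rolling-upgrade step, pinning both values on a broker; the config path and the 0.10.2 version are assumptions for illustration only:

# Append the pinned versions to each broker's config, then restart brokers one at a time
$ cat >> /etc/kafka/conf/server.properties <<'EOF'
inter.broker.protocol.version=0.10.2
log.message.format.version=0.10.2
EOF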
04-02-2018 11:42 AM
1 Kudo
@Nikhil R I think it should be controller-services instead of controller-service; you could be missing the "s": http://localhost:8080/nifi-api/process-groups/{process-group-id}/controller-services Hope that helps
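In case you want to verify from the command line: for listing a group's controller services, NiFi also exposes an endpoint under /flow; a quick sketch, assuming an unsecured NiFi on localhost:8080 and a real id in place of the placeholder:

$ curl -s http://localhost:8080/nifi-api/flow/process-groups/{process-group-id}/controller-services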
04-02-2018 10:53 AM
@Praveen Atmakuri The dfsadmin commands are run like below.

Get a status report from each NameNode, showing whether the NameNodes are in safe mode or not:

$ hdfs dfsadmin -D 'fs.default.name=hdfs://mycluster/' -safemode get

A report that shows the details of the HDFS state:

$ hdfs dfsadmin -D 'fs.default.name=hdfs://mycluster/' -report

Get HDFS out of safe mode:

$ hdfs dfsadmin -D 'fs.default.name=hdfs://mycluster/' -safemode leave

What is your HDInsight cluster name?
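Side note: the same thing can usually be written with the generic -fs option instead of -D; a minimal sketch, assuming the same mycluster nameservice:

$ hdfs dfsadmin -fs hdfs://mycluster/ -safemode get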
04-02-2018 09:30 AM
@Praveen Atmakuri So I can see you have only the user paplcloudamin home directory set. Can you run the following:

$ hdfs dfs -D "fs.default.name=hdfs://mycluster/" -mkdir /test

Check that the directory was created; you should be able to see the test directory:

$ hdfs dfs -D "fs.default.name=hdfs://mycluster/" -ls /

This command should run successfully; can you just copy and paste it, replacing mycluster with your cluster name:

$ hdfs dfsadmin -D "fs.default.name=hdfs://mycluster/" -report

Please revert
04-02-2018 08:21 AM
@Praveen Atmakuri Can you give the output of this command?
hadoop fs -ls wasb://yourcontainer@youraccount.blob.core.windows.net/
The following is a list of configurations that should be modified to configure WASB; I assume you have done the below:

fs.defaultFS = wasb://<containername>@<accountname>.blob.core.windows.net
fs.AbstractFileSystem.wasb.impl = org.apache.hadoop.fs.azure.Wasb
fs.azure.account.key.<accountname>.blob.core.windows.net = storage_access_key

Please revert
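To check that those settings are actually being picked up on the node, a quick sketch (same placeholders as above; avoid printing the account key itself):

$ hdfs getconf -confKey fs.defaultFS
$ hdfs getconf -confKey fs.AbstractFileSystem.wasb.impl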
04-01-2018 05:15 PM
@Aishwarya Sudhakar Could you clarify which username you are running Spark under? Because of its distributed nature, you should copy dataset.csv to an HDFS directory that is accessible to the user running the Spark job.

According to your output above, your file is in the HDFS directory /demo/demo/dataset.csv, so your load should look like this:

load "hdfs:///demo/demo/dataset.csv"

This is what you said: "The demo is the directory that is inside hadoop. And dataset.csv is the file that contains data." Did you mean in HDFS? Does this command print anything?

$ hdfs dfs -cat /demo/demo/dataset.csv

Please revert!
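As a quick sanity check outside your application, a minimal sketch from the shell (assuming Spark 2.x with spark-shell on the PATH):

$ echo 'spark.read.csv("hdfs:///demo/demo/dataset.csv").show(5)' | spark-shell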
04-01-2018 10:32 AM
@Aishwarya Sudhakar Yes, to validate that the file you copied has the desired data. You forgot the / before demo:

$ hdfs dfs -cat /demo/dataset.csv

Hope that helps
04-01-2018 10:07 AM
@Aishwarya Sudhakar Your demo directory in HDFS is empty; you will need to copy dataset.csv to HDFS under /demo. These are the steps (in this example dataset.csv is in /tmp on the local node); a consolidated sketch follows below.

As user hdfs, create the directory:

$ hdfs dfs -mkdir /demo

Copy dataset.csv to HDFS:

$ hdfs dfs -put /tmp/dataset.csv /demo

Make sure the user running Spark has the correct permissions, else change the owner, where xxx is the user running Spark:

$ hdfs dfs -chown xxx:hdfs /demo

Now run your Spark job. Hope that helps
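Putting the steps together, an end-to-end sketch you can paste as the hdfs user (the /tmp source path and the xxx user are the same placeholders as above):

# -p makes mkdir idempotent if /demo already exists
$ hdfs dfs -mkdir -p /demo
$ hdfs dfs -put /tmp/dataset.csv /demo/
$ hdfs dfs -chown xxx:hdfs /demo
# Verify the file landed and who owns it
$ hdfs dfs -ls /demo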
04-01-2018 07:26 AM
@Aishwarya Sudhakar The load should point to the HDFS location:

load "hdfs:///demo/dataset.csv"

Hope that helps