Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 923 | 06-04-2025 11:36 PM |
| | 1524 | 03-23-2025 05:23 AM |
| | 754 | 03-17-2025 10:18 AM |
| | 2701 | 03-05-2025 01:34 PM |
| | 1801 | 03-03-2025 01:09 PM |
07-19-2018
10:40 AM
@wei mou YES, of course you need to connect as root, but what was missing were the privileges for the oozie user on the oozie database! Now that it has been solved, can you explain or share how you are adding the SMTP mail server?
07-19-2018
07:46 AM
@wei mou If you created the MySQL database manually, you should have run commands similar to the ones below. Assumptions: the root password is welcome1, the Oozie user is oozie, and the MySQL host is MySQL_host.
mysql -u root -pwelcome1
create database oozie;
create user oozie identified by 'oozie';
grant all on oozie.* to oozie;
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'<MySQL_host>' IDENTIFIED BY 'oozie' WITH GRANT OPTION;
flush privileges;
quit;
To test that the oozie user can connect successfully:
mysql -u oozie -poozie
show databases;
quit;
It seems your problem is the privileges: the oozie database belongs to root, not the oozie user.
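For completeness, a minimal sketch of pointing Oozie at this database and initializing its schema; the JPAService property names are standard Oozie settings, but the HDP-style path and the /tmp SQL file below are assumptions for illustration.
# Assumed oozie-site.xml properties (e.g. set via Ambari > Oozie > Configs):
#   oozie.service.JPAService.jdbc.driver   = com.mysql.jdbc.Driver
#   oozie.service.JPAService.jdbc.url      = jdbc:mysql://<MySQL_host>:3306/oozie
#   oozie.service.JPAService.jdbc.username = oozie
#   oozie.service.JPAService.jdbc.password = oozie
# Initialize the Oozie schema once the grants above are in place (HDP-style path assumed)
su - oozie -c "/usr/hdp/current/oozie-server/bin/ooziedb.sh create -sqlfile /tmp/oozie.sql -run"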
07-16-2018
09:49 AM
1 Kudo
@Satish Anjaneyappa I think when you run the Hive query it generates some temporary files in HDFS. Can you check the remaining space in HDFS?
$ hdfs dfsadmin -report
When you delete files in Hadoop, they are moved to .Trash but not actually removed. If you are sure of the files you want to delete, it's always a good idea to use the -skipTrash option:
$ hdfs dfs -rm -skipTrash /path/to/file
This will skip the trash. However, if you do not add the -skipTrash flag, files are stored in a trash folder, which by default is /user/hdfs/.Trash. Empty the HDFS trash by running:
$ hdfs dfs -expunge
HTH
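As a quick check of how much space the trash itself is holding, a minimal sketch (the path assumes the default trash location for the hdfs user):
# Total size of the hdfs user's trash, human-readable
$ hdfs dfs -du -s -h /user/hdfs/.Trash
# Sizes of the top-level HDFS directories, for comparison
$ hdfs dfs -du -h /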
07-15-2018
12:16 PM
@Michael Bronson Do you still have the below parameter set to true? Remove it completely, retry, and please paste the error:
dfs.client.block.write.replace-datanode-on-failure.enable
Regards
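To confirm what value the client actually sees for that flag, a minimal check (run it on the node where the write fails; assumes the standard HDFS client configuration is in place there):
# Print the effective client-side value of the property
$ hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.enable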
07-15-2018
10:39 AM
@Michael Bronson The ONLY valid options for dfs.client.block.write.replace-datanode-on-failure.policy, as you can see from my initial posting, are DISABLE, NEVER, DEFAULT, and ALWAYS. When using DEFAULT or ALWAYS, if only one DataNode succeeds in the pipeline, the recovery will never succeed and the client will not be able to perform the write. This problem is addressed with the configuration property dfs.client.block.write.replace-datanode-on-failure.best-effort, which defaults to false. With the default setting, the client will keep trying until the specified policy is satisfied. When this property is set to true, even if the specified policy can't be satisfied (for example, there is only one DataNode that succeeds in the pipeline, which is less than the policy requirement), the client will still be allowed to continue to write. So I suggest you make the following modification, restart the stale configs, and revert:
dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS
dfs.client.block.write.replace-datanode-on-failure.best-effort=true
Please let me know
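After the restart, a quick smoke test of the write path is a reasonable sanity check; a hedged sketch using hypothetical paths (adjust to a directory your user can write to):
# Write, append to, and clean up a small test file to exercise the write pipeline
$ hdfs dfs -put -f /etc/hosts /tmp/pipeline_test
$ hdfs dfs -appendToFile /etc/hosts /tmp/pipeline_test
$ hdfs dfs -rm -skipTrash /tmp/pipeline_test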
07-15-2018
10:09 AM
1 Kudo
@Erkan ŞİRİN Did you enable your plugin as shown below? I would advise you to restart the cluster; it happened to me once that the audits ONLY popped up after the cluster restart, for reasons I don't know yet! Make sure you see the entry in ZooKeeper too.
07-15-2018
09:38 AM
@Michael Bronson What's the value of the below parameter?
dfs.client.block.write.replace-datanode-on-failure.policy
If it's NEVER, that means a new DataNode is never added, and that is the cause of the error stack!! HTH
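A minimal way to check the effective value on the client node, plus how many DataNodes are alive, which is what the policy is weighed against (assumes the standard HDFS client is installed there):
# Print the effective client-side policy
$ hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.policy
# Number of live DataNodes in the cluster
$ hdfs dfsadmin -report | grep 'Live datanodes'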
07-15-2018
09:05 AM
1 Kudo
@Bal P
I tried replicating your error on my kerberized cluster; the only difference is that I used an HDFS path for the hive user (/user/hive) rather than a local FS path as in your case ('/home/abc/Sample'), and the database was created:
hive> create database if not exists practice comment "This database was created for practice purpose" location '/user/hive/test' with dbproperties('Date'='2018-07-15','created by'='Developer','Email'='developer@dev.com');
Output of the created database:
$ hdfs dfs -ls /user/hive
Found 3 items
drwx------ - hive hdfs 0 2018-07-14 20:00 /user/hive/.Trash
drwxr-xr-x - hive hdfs 0 2018-07-12 16:37 /user/hive/.hiveJars
drwxr-xr-x - hive hdfs 0 2018-07-15 10:25 /user/hive/test
And when I ran describe, everything looked perfect!
Hive works on top of Hadoop, meaning it uses HDFS for storage; by default it stores all databases in /apps/hive/warehouse/* unless the create statement uses the external keyword (or an explicit location clause) to point to an alternative path. Can you confirm that you have this path in HDFS? /abcdhdfs/apps/hive/warehouse HTH
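For reference, a minimal way to inspect what was created; the practice database name comes from the statement above, and the warehouse path assumes an HDP-style default layout:
# Show the database location and dbproperties
$ hive -e "describe database extended practice;"
# Confirm the default warehouse path exists in HDFS
$ hdfs dfs -ls /apps/hive/warehouse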
07-13-2018
10:18 PM
@Michael Bronson The response is YES. To disable using any of the policies, you can set the following configuration property to false (the default is true):
dfs.client.block.write.replace-datanode-on-failure.enable
When enabled, the default policy is DEFAULT. The following config property changes the policy:
dfs.client.block.write.replace-datanode-on-failure.policy
When using DEFAULT or ALWAYS, if only one DataNode succeeds in the pipeline, the recovery will never succeed and the client will not be able to perform the write. This problem is addressed with this configuration property:
dfs.client.block.write.replace-datanode-on-failure.best-effort
which defaults to false. With the default setting, the client will keep trying until the specified policy is satisfied. When this property is set to true, even if the specified policy can't be satisfied (for example, there is only one DataNode that succeeds in the pipeline, which is less than the policy requirement), the client will still be allowed to continue to write. Please revert! HTH
07-13-2018
05:46 AM
1 Kudo
@Michael Bronson Below is the DataNode replacement policy; the NEVER option is, as stated, not desirable. To eliminate that error, look at the other options.
Recovery from Pipeline Failure
There are four configurable policies regarding whether to add additional DataNodes to replace the bad ones when setting up a pipeline for recovery with the remaining DataNodes:
- DISABLE: Disables DataNode replacement and throws an error (at the server); this acts like NEVER at the client.
- NEVER: Never replace a DataNode when a pipeline fails (generally not a desirable action).
- DEFAULT: Replace based on the following conditions. Let r be the configured replication number and n the number of existing replica DataNodes. Add a new DataNode only if r >= 3 and EITHER floor(r/2) >= n, OR r > n and the block is hflushed/appended. For example, with r = 3 and only n = 1 surviving DataNode, floor(3/2) = 1 >= 1, so a replacement is requested.
- ALWAYS: Always add a new DataNode when an existing DataNode fails. This fails if a DataNode can't be replaced.
To disable using any of these policies, you can set the following configuration property to false (the default is true):
dfs.client.block.write.replace-datanode-on-failure.enable
When enabled, the default policy is DEFAULT. The following config property changes the policy:
dfs.client.block.write.replace-datanode-on-failure.policy
When using DEFAULT or ALWAYS, if only one DataNode succeeds in the pipeline, the recovery will never succeed and the client will not be able to perform the write. This problem is addressed with this configuration property:
dfs.client.block.write.replace-datanode-on-failure.best-effort
which defaults to false. With the default setting, the client will keep trying until the specified policy is satisfied. When this property is set to true, even if the specified policy can't be satisfied (for example, there is only one DataNode that succeeds in the pipeline, which is less than the policy requirement), the client will still be allowed to continue to write. HTH