128 Posts
15 Kudos Received
8 Solutions
My Accepted Solutions
Views | Posted
---|---
2627 | 01-13-2015 09:09 AM
4867 | 05-28-2014 09:28 AM
1970 | 04-22-2014 01:24 PM
1929 | 03-31-2014 09:07 AM
62803 | 02-07-2014 08:40 AM
01-06-2021
10:04 AM
hdfs, yarn, hive, etc. are system users; they will not have any passwords by default, but you can su to them from root. If you really want to set passwords anyway, the $ passwd hdfs command will prompt you to set a new password, but I don't see a reason why anyone would want to do that for system users.
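For example, a quick session from a root shell (assuming the standard service accounts exist with a login shell):

su - hdfs      # root can switch to a system user directly, no password prompt
whoami         # prints: hdfs
exit           # back to root
passwd hdfs    # only if you really must set a password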
05-12-2020
09:37 AM
You may have to write one using the Ambari API: http://<ambari-server-host>:<port>/api/v1/<resource-path> Ref: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md You can use plain cURL or Python to dump the JSON and then convert it to whichever format you like.
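A rough sketch with cURL (the port 8080, admin:admin credentials, and cluster name are placeholders for your cluster's values):

# list clusters, then drill into a resource path; the output is JSON
curl -u admin:admin 'http://<ambari-server-host>:8080/api/v1/clusters'
curl -u admin:admin 'http://<ambari-server-host>:8080/api/v1/clusters/<cluster-name>/hosts' | python -m json.tool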
01-16-2018
07:04 AM
1 Kudo
tier1.sources.source1.zookeeperConnect is deprecated; use tier1.sources.kafkasource1.kafka.bootstrap.servers = bda03:9092,bda04:9092,bda05:9092 instead.
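For context, a minimal Kafka source definition using the newer property might look like this (the topic name and channel wiring are placeholders; the agent/source names follow the property above):

tier1.sources = kafkasource1
tier1.channels = channel1
tier1.sources.kafkasource1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.kafkasource1.channels = channel1
tier1.sources.kafkasource1.kafka.bootstrap.servers = bda03:9092,bda04:9092,bda05:9092
tier1.sources.kafkasource1.kafka.topics = <your-topic>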
01-12-2018
09:18 AM
2 Kudos
You can use the PURGE option to delete the data files along with the partition metadata, but it works only on INTERNAL/MANAGED tables:

ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec PURGE;

External tables have a two-step process: alter table drop partition, then remove the files:

ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec;
hadoop fs -rm -r <partition file path>
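A concrete example, with hypothetical table names (sales managed, sales_ext external), partition column dt, and an example HDFS path:

-- managed table: partition metadata and data files are removed in one step
ALTER TABLE sales DROP IF EXISTS PARTITION (dt='2018-01-01') PURGE;

-- external table: dropping the partition removes metadata only
ALTER TABLE sales_ext DROP IF EXISTS PARTITION (dt='2018-01-01');
hadoop fs -rm -r /data/sales_ext/dt=2018-01-01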
01-21-2016
08:16 AM
Change the download path when you go through the initial setup of CDH, at the step where it asks you to specify the parcel repository.
01-13-2015
09:09 AM
Log in to CM > go to the Hosts tab > select the server name > go to the Components tab; this is where you can see which services and versions are installed on that host.
10-20-2014
01:51 PM
There are different ways you can avoid this problem, depending on your data blocks (commands for the first three are sketched below):
1) If you have under-replicated data, HDFS should automatically re-replicate the blocks to other DataNodes to match the replication factor.
2) If it is not replicating on its own, run the balancer.
3) You can also set the replication factor on a specific file that is under-replicated.
4) If it is just a temp file created while running the job because your speculative execution task count is high, bring the speculative execution task count close to the replication factor so that it won't complain about the temp files after the job run.
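For 1) through 3), the standard HDFS commands (the paths and the replication factor of 3 are examples):

hdfs fsck / -blocks -locations | grep -i 'under replicated'   # find under-replicated blocks
hdfs dfs -setrep -w 3 /path/to/file                           # raise replication on one file
hdfs balancer -threshold 10                                   # rebalance blocks across DataNodes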
09-29-2014
01:22 PM
Are you trying to write and read the file at the same time?
06-30-2014
08:52 AM
/etc/hadoop/conf is the default path that Hadoop uses; /run/cloudera....... is the one that the Cloudera services export their configs to and run from. Either way it works: if you make any changes in CM, they take effect in /run/cloudera...., and when you deploy client configs they get reflected in /etc/hadoop.... It's just the CM way of doing/running things, no big difference. If you want to add a property to the cluster configuration, there is an option in Cloudera Manager that allows you to add new configurations; I am not sure exactly where to find it, but there is a way to do it as far as I know. Thanks
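To check which config directory a client host is actually using, you can inspect the symlink chain (a quick check, assuming the standard CDH alternatives setup):

ls -l /etc/hadoop/conf                        # usually a symlink managed by alternatives
update-alternatives --display hadoop-conf     # shows which conf directory is active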
06-04-2014
07:05 AM
1 Kudo
After you apply the changes in CM, you will need to re-deploy the client configs to get those changes reflected on the client machines.