Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2557 | 11-01-2016 05:43 PM |
| | 8475 | 11-01-2016 05:36 PM |
| | 4848 | 07-01-2016 03:20 PM |
| | 8167 | 05-25-2016 11:36 AM |
| | 4311 | 05-24-2016 05:27 PM |
06-16-2016
10:29 AM
1 Kudo
I got it working on Ambari 2.2.1.

1. Create the mount points:

   # mkdir /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3
   # chown hdfs:hadoop /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3

   (We are using this configuration for test purposes only, so no disks are mounted.)

2. Log in to Ambari > HDFS > Settings.

3. Add the DataNode directories as shown below:

   DataNode > DataNode directories:
   [DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/data1,[RAMDISK]/hadoop/hdfs/data2,[ARCHIVE]/hadoop/hdfs/data3

4. Restart the HDFS service, then restart all other affected services. As the hdfs user, create a directory /cold and set the COLD storage policy on it:

   # su hdfs
   [hdfs@hdp-qa2-n1 ~]$ hadoop fs -mkdir /cold
   [hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -setStoragePolicy -path /cold -policy COLD
   Set storage policy COLD on /cold

5. Run getStoragePolicy to verify:

   [hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -getStoragePolicy -path /cold
   The storage policy of /cold:
   BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}
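To confirm that data written under /cold actually lands on ARCHIVE storage, a quick check like the sketch below can help (the test file path is illustrative, and fsck output details vary across HDFS versions):

```bash
# Write a small test file under the COLD-policy directory (file name is made up)
hadoop fs -put /etc/hosts /cold/testfile

# Print the block locations; with the COLD policy in effect, the replicas
# should be reported on DataNode storages of type [ARCHIVE]
hdfs fsck /cold/testfile -files -blocks -locations
```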
06-06-2016
06:40 PM
Please refer to http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_views_guide/content/_reverse_proxy_views.html
10-27-2015
05:47 PM
Hi Neeraj. I am mainly looking at the log4j stuff.
10-26-2015
11:56 PM
@rgarcia@hortonworks.com Remove "use_fully_qualified_names=True" and it should fix the issue.
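For reference, a minimal sketch of the fix, assuming this refers to the SSSD option in the domain section of /etc/sssd/sssd.conf, as is typical for AD-integrated clusters (the test user below is a placeholder):

```bash
# Back up sssd.conf, then comment out the offending option
cp /etc/sssd/sssd.conf /etc/sssd/sssd.conf.bak
sed -i 's/^[[:space:]]*use_fully_qualified_names[[:space:]]*=.*/# use_fully_qualified_names = True/' /etc/sssd/sssd.conf

# Invalidate all cached SSSD entries and restart the daemon
sss_cache -E
systemctl restart sssd

# The user should now resolve by short name, without the @domain suffix
id testuser
```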
10-27-2015
05:38 AM
Refer to the following documentation for Host Config Groups: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/_using_host_config_groups.html
11-26-2015
01:18 PM
2 Kudos
You can run the smoke test script via the command line to see whether the timeout value is causing the service check to fail (one possible cause). Run the command below as the ambari-qa user:

source /usr/hdp/current/oozie-client/conf/oozie-env.sh ; /usr/hdp/current/oozie-client/bin/oozie -Doozie.auth.token.cache=false job -oozie http://localhost:11000/oozie -config /usr/hdp/current/oozie-client/doc/examples/apps/map-reduce/job.properties -run
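If the submission works, the command prints a workflow job ID, which you can then poll (the job ID below is only a placeholder for whatever -run returns):

```bash
# Check the status of the submitted example workflow
/usr/hdp/current/oozie-client/bin/oozie job -oozie http://localhost:11000/oozie \
  -info 0000000-000000000000000-oozie-oozi-W
```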
02-02-2016
04:35 PM
@Andrew Watson has this been resolved? Can you accept the best answer or provide your own solution?
10-25-2015
10:15 AM
The Zeppelin Ambari service has been updated to install the updated TP Zeppelin bits for Spark 1.4.1 and 1.3.1. The update for 1.5.1 will be made this week, after the TP is out. The Magellan notebook has also been updated with documentation and to enable it to run standalone on 1.4.1.
09-12-2016
03:57 AM
1 Kudo
I ran into a similar issue, did what Miraj said, and it works!
10-27-2015
04:06 PM
@terry@hortonworks.com @Madhan Neethiraj This restriction existed prior to HDP 2.2.4.