Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
03-13-2017
07:41 AM
@gopi seelam Is the "WordCount" program written with a java "package" statement? If yes, then you should pass the fully qualified classname. To verify, can you please share the output of the following command? It will help us confirm whether you are passing the correct classname there: jar -tvf WordCount.JAR
Also, please check whether your jar file name is uppercase or lowercase, e.g. "WordCount.JAR" or "WordCount.jar". If the jar file name is lowercase then you should pass the file name in the same case: $ hadoop jar WordCount.jar WordCount siva/file1.txt
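For example, if the jar listing shows an entry such as com/example/WordCount.class (the "com.example" package here is purely hypothetical), then the class lives inside a package and the fully qualified name must be passed:
$ jar -tvf WordCount.jar
  ... com/example/WordCount.class
$ hadoop jar WordCount.jar com.example.WordCount siva/file1.txt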
03-12-2017
05:04 AM
@nbalaji-elangovan As you mentioned, it is a secured (kerberized) environment, so in your "spark-submit" command line arguments you should pass the keytab and principal information. Example: --keytab /etc/security/keytabs/spark.headless.keytab --principal spark-XYZ@ABC.COM
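For reference, a minimal sketch of a full spark-submit invocation on a kerberized cluster (the application class and jar below are placeholders, not values from your job):
spark-submit --master yarn --deploy-mode cluster --keytab /etc/security/keytabs/spark.headless.keytab --principal spark-XYZ@ABC.COM --class com.example.MyApp /path/to/my-app.jar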
03-12-2017
03:00 AM
2 Kudos
@Sachin Ambardekar - The NameNode heap size depends on many factors, such as the number of files, the number of blocks, and the load on the system. You can refer to the following link to see how much heap is usually needed for the NameNode based on the number of files; the same sizing applies to the Standby NameNode, so you can plan the Xmx heap and the RAM on that NameNode host accordingly. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html - Similarly, for YARN (e.g. the ResourceManager), the HDP utility script is the recommended method for calculating HDP memory configuration settings; information about manually calculating YARN and MapReduce memory configuration settings is also provided for reference. See the link below. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/determine-hdp-memory-config.html Example: # python yarn-utils.py -c 16 -m 64 -d 4 -k True
OUTPUT
======
Using cores=16 memory=64GB disks=4 hbase=True
Profile: cores=16 memory=49152MB reserved=16GB usableMem=48GB disks=4
Num Container=8
Container Ram=6144MB
Used Ram=48GB
Unused Ram=16GB
yarn.scheduler.minimum-allocation-mb=6144
yarn.scheduler.maximum-allocation-mb=49152
yarn.nodemanager.resource.memory-mb=49152
mapreduce.map.memory.mb=6144
mapreduce.map.java.opts=-Xmx4096m
mapreduce.reduce.memory.mb=6144
mapreduce.reduce.java.opts=-Xmx4096m
yarn.app.mapreduce.am.resource.mb=6144
yarn.app.mapreduce.am.command-opts=-Xmx4096m
mapreduce.task.io.sort.mb=1792
tez.am.resource.memory.mb=6144
tez.am.launch.cmd-opts=-Xmx4096m
hive.tez.container.size=6144
hive.tez.java.opts=-Xmx4096m
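As a side note on the NameNode part: once you have picked a heap size from the sizing table, it is usually applied via HADOOP_NAMENODE_OPTS in hadoop-env.sh (in an Ambari-managed cluster, change it under the HDFS configs rather than editing the file directly). A minimal sketch, assuming the table recommended a 4 GB heap:
export HADOOP_NAMENODE_OPTS="-Xms4096m -Xmx4096m ${HADOOP_NAMENODE_OPTS}"
Setting -Xms equal to -Xmx avoids heap-resizing pauses on a long-running NameNode.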
03-12-2017
03:00 AM
@Sachin Ambardekar
Yes, there is no harm in deleting the default instance and recreating a new one, because the File View / Capacity Scheduler (also known as "YARN Queue Manager") views do not contain any user-specific data.
03-12-2017
01:59 AM
1 Kudo
@Sachin Ambardekar Sometimes this kind of error occurs with the default existing view instances, causing the error you are getting: Caused by: org.apache.ambari.server.ClusterNotFoundException: Cluster not found, clusterId=2
To fix it, try creating a new "File View" instance by clicking the "Create Instance" button on the File View. You can choose the default options to create the view instance (if the cluster is not kerberized).
03-10-2017
03:28 PM
1 Kudo
@samarth srivastava NOTE: the previously shared API will stop all the services in the cluster. But as you mentioned that you want to stop all the components on a particular host, in that case you can try the following:
- Stop all components on host "sandbox.hortonworks.com":
curl -i -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop All Host Components","operation_level":{"level":"HOST","cluster_name":"Sandbox","host_names":"sandbox.hortonworks.com"},"query":"HostRoles/component_name.in(APP_TIMELINE_SERVER,DATANODE,HISTORYSERVER,METRICS_COLLECTOR,METRICS_GRAFANA,METRICS_MONITOR,NAMENODE,NFS_GATEWAY,NODEMANAGER,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,RESOURCEMANAGER,SECONDARY_NAMENODE,ZOOKEEPER_SERVER)"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/hosts/sandbox.hortonworks.com/host_components
- Start all components on host "sandbox.hortonworks.com":
curl -i -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Start All Host Components","operation_level":{"level":"HOST","cluster_name":"Sandbox","host_names":"sandbox.hortonworks.com"},"query":"HostRoles/component_name.in(APP_TIMELINE_SERVER,DATANODE,HISTORYSERVER,METRICS_COLLECTOR,METRICS_GRAFANA,METRICS_MONITOR,NAMENODE,NFS_GATEWAY,NODEMANAGER,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,RESOURCEMANAGER,SECONDARY_NAMENODE,ZOOKEEPER_SERVER)"},"Body":{"HostRoles":{"state":"STARTED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/hosts/sandbox.hortonworks.com/host_components
You can get the list of components installed on host "sandbox.hortonworks.com" using the following API:
curl -i -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/Sandbox/hosts/sandbox.hortonworks.com/host_components
Once you get the list of components installed on the host, you can stop them using the command mentioned above.
Running all service checks using the Ambari API: https://gist.github.com/mr-jstraub/0b55de318eeae6695c3f#payload-to-run-all-service-checks
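Note that these PUT calls are asynchronous: Ambari responds with a request id, which you can poll to see when the stop/start has actually finished. A sketch, assuming the response contained "id" : 42:
curl -i -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/Sandbox/requests/42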
03-10-2017
03:25 PM
2 Kudos
@samarth srivastava In order to stop all services using the Ambari API (on the whole cluster) you can do the following:
curl -i -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"_PARSE_.STOP.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"Sandbox"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/services
In order to start all services you can do the following:
curl -i -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"_PARSE_.START.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"Sandbox"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/services
03-10-2017
11:38 AM
@Pradeep kumar
You will need to extract the tar.gz file and then pass the extracted path. Example: # ambari-server setup --java-home=/usr/jdk/jdk1.8.0_121
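For example (the archive name below is a placeholder for whichever JDK build you downloaded):
# mkdir -p /usr/jdk
# tar -xzf jdk-8u121-linux-x64.tar.gz -C /usr/jdk
This produces the /usr/jdk/jdk1.8.0_121 directory used in the setup command above.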
03-10-2017
10:27 AM
@Pradeep kumar You should try the following approach:
1). Download your desired JDK 1.8 and place it in the same location on all your cluster hosts, including the Ambari host. Example: /usr/jdk64/jdk1.8.0_121
2). Now run the ambari-server setup command as follows: # ambari-server setup --java-home=/usr/jdk64/jdk1.8.0_121
In the output you will notice:
OK to continue [y/n] (y)?
Checking JDK...
WARNING: JAVA_HOME /usr/jdk64/jdk1.8.0_121 must be valid on ALL hosts
Hence you will need to make sure that the JDK path (/usr/jdk64/jdk1.8.0_121) present on the Ambari server is identical throughout the cluster nodes.
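One simple way to keep the path identical is to copy the extracted JDK directory from the Ambari host to every other node, e.g. with an scp loop (the host names here are placeholders):
# for h in node1.example.com node2.example.com; do ssh root@$h "mkdir -p /usr/jdk64"; scp -r /usr/jdk64/jdk1.8.0_121 root@$h:/usr/jdk64/; done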
03-10-2017
09:53 AM
@Pradeep kumar
Please make sure that you do not have any HTTP proxy set up at your end. If one is set up, then you should use the following approach:
1. Edit /var/lib/ambari-server/ambari-env.sh.
2. Add "-Dhttp.proxyHost=myproxyhost -Dhttp.proxyPort=4444" to the AMBARI_JVM_ARGS (the port 4444 may be different in your case; see the sketch after this list).
3. Restart the Ambari Server. If you have any internet connectivity issues, you can also try using the "--jdk-location" option after manually downloading the desired JDK and copying (scp) it to the Ambari server host. This option can be used to specify a JDK file on the local filesystem instead of downloading it.
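For reference, a sketch of what step 2 typically looks like inside /var/lib/ambari-server/ambari-env.sh (the proxy host and port are placeholders):
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=myproxyhost -Dhttp.proxyPort=4444"
And a sketch of the --jdk-location option, assuming the downloaded JDK archive was copied to /tmp on the Ambari host:
# ambari-server setup --jdk-location=/tmp/jdk-8u121-linux-x64.tar.gz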